Similar Documents
20 similar documents retrieved (search time: 62 ms)
12.
This paper investigates a distributed optimization problem over multi-agent networks subject to both local and coupled constraints in a non-stationary environment, where a set of agents aim to cooperatively minimize the sum of locally time-varying cost functions while the communication graphs are time-varying, connected, and unbalanced. Based on dual decomposition, we propose a distributed online dual push-sum learning algorithm by incorporating the push-sum protocol into the dual gradient method. We then show that the regret bound grows sublinearly as O(T^p) and that the constraint violation is also sublinear, of order O(T^{1-p/2}), where T is the time horizon and 0 < p ≤ 1/2. Finally, simulation experiments on a plug-in electric vehicle charging problem are used to verify the performance of the proposed algorithm. The proposed algorithm is adaptive and does not require the total number of iterations T to be known in advance. The convergence results are established on more general unbalanced graphs without a boundedness assumption on the dual variables. In addition, stronger privacy guarantees are obtained, since only the dual variables associated with the coupled constraints are exchanged among agents.
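To make the two ingredients named in the abstract concrete, the following is a minimal, hypothetical sketch of one iteration that combines push-sum mixing over a column-stochastic matrix with a projected dual gradient step on the multipliers of the coupled constraints. It is not the paper's exact recursion; all names (push_sum_dual_step, coupled_grad, etc.) are illustrative assumptions.

```python
import numpy as np

def push_sum_dual_step(A, lam, w, coupled_grad, step):
    """One illustrative round: push-sum mixing followed by a projected
    dual gradient ascent step on the coupled-constraint multipliers.

    A            -- (n, n) column-stochastic mixing matrix of the current digraph
    lam          -- (n, m) each agent's local copy of the m dual variables
    w            -- (n,)   push-sum weights (initialized to ones)
    coupled_grad -- (n, m) local dual gradients, i.e. each agent's evaluation of
                    its own contribution to the coupled constraints
    step         -- scalar dual step-size
    """
    lam_mix = A @ lam                 # mix dual variables along the directed graph
    w_mix = A @ w                     # mix push-sum weights
    z = lam_mix / w_mix[:, None]      # de-biased (push-sum) estimate of the average
    # Projected dual ascent: multipliers of inequality constraints stay non-negative.
    lam_new = np.maximum(0.0, z + step * coupled_grad)
    return lam_new, w_mix
```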

15.
This paper considers solving a class of optimization problems over a network of agents, in which the cost function is expressed as the sum of the individual objectives of the agents. The underlying communication graph is assumed to be undirected and connected. A distributed algorithm in which agents employ time-varying and heterogeneous step-sizes is proposed by combining consensus of multi-agent systems with a gradient tracking technique. The algorithm not only drives the agents' iterates to a global and consensual minimizer but also finds the optimal value of the cost function. When the individual objectives are convex and smooth, we prove that the algorithm converges at a rate of O(1/√t) if the homogeneous step-size does not exceed some upper bound, and it accelerates to O(1/t) if the homogeneous step-size is sufficiently small. When at least one of the individual objectives is strongly convex and all are smooth, we prove that the algorithm converges at a linear rate of O(λ^t) with 0 < λ < 1, even though the step-sizes are time-varying and heterogeneous. Two numerical examples are provided to demonstrate the efficiency of the proposed algorithm and to validate the theoretical findings.
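As a rough illustration of the consensus-plus-gradient-tracking recursion described above, here is a minimal sketch under assumed names and interfaces (gradient_tracking, grads, alphas are illustrative, not the paper's notation). It implements the textbook gradient-tracking update x <- W x - diag(α) y, y <- W y + ∇f(x_new) - ∇f(x_old) with per-agent step-sizes:

```python
import numpy as np

def gradient_tracking(W, grads, x0, alphas, iters=100):
    """Distributed gradient tracking with heterogeneous step-sizes (sketch).

    W      -- (n, n) doubly-stochastic weight matrix of the undirected graph
    grads  -- list of n callables; grads[i](x) returns the gradient of f_i at x
    x0     -- (n, d) initial iterates, one row per agent
    alphas -- (n,) per-agent (heterogeneous) step-sizes
    """
    n, d = x0.shape
    x = x0.copy()
    # The tracker y_i estimates the network-average gradient of the global cost.
    g = np.stack([grads[i](x[i]) for i in range(n)])
    y = g.copy()
    for _ in range(iters):
        # Consensus on the iterates plus a step along the tracked gradient.
        x_new = W @ x - alphas[:, None] * y
        g_new = np.stack([grads[i](x_new[i]) for i in range(n)])
        # Dynamic average consensus keeps y tracking the average gradient.
        y = W @ y + g_new - g
        x, g = x_new, g_new
    return x

# Toy usage: each agent holds f_i(x) = 0.5 * ||x - b_i||^2, so the global
# minimizer is the average of the b_i.
n, d = 4, 2
rng = np.random.default_rng(0)
b = rng.normal(size=(n, d))
W = np.full((n, n), 1.0 / n)                      # complete graph, uniform weights
grads = [(lambda x, bi=bi: x - bi) for bi in b]
x_final = gradient_tracking(W, grads, np.zeros((n, d)), 0.3 * np.ones(n), iters=200)
print(np.allclose(x_final, b.mean(axis=0), atol=1e-6))  # expected: True
```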

18.
Support vector machine regression (SVMR) is an important tool in many machine learning applications. In this paper, we focus on the theoretical understanding of SVMR based on the ε-insensitive loss. For fixed ε ≥ 0 and general data-generating distributions, we show that the minimizer of the expected risk for the ε-insensitive loss used in SVMR is a set-valued function called the conditional ε-median. We then establish a calibration inequality for the ε-insensitive loss under a noise condition on the conditional distributions. This inequality also enables us to present a nontrivial variance-expectation bound for the ε-insensitive loss, which is known to be important in the statistical analysis of regularized learning algorithms. With the help of the calibration inequality and the variance-expectation bound, we finally derive an explicit learning rate for SVMR in some L_r-space.
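For reference, the ε-insensitive loss that the analysis is built on, L_ε(y, f(x)) = max(0, |y - f(x)| - ε), penalizes only residuals that fall outside the ε-tube. The snippet below is a self-contained illustration, not code from the paper.

```python
import numpy as np

def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    """Epsilon-insensitive loss used in SVM regression:
    L(y, f(x)) = max(0, |y - f(x)| - eps).
    Residuals inside the eps-tube incur zero loss."""
    return np.maximum(0.0, np.abs(y_true - y_pred) - eps)

# Example: the first prediction lies inside the tube and contributes nothing.
y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.05, 2.5, 2.0])
print(eps_insensitive_loss(y_true, y_pred, eps=0.1))  # [0.  0.4 0.9]
```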
