A comparison between upper bounds on performance of two consensus-based distributed optimization algorithms
Date: September 14 - September 15, 2012
In this paper we address the problem of multi-agent optimization of a convex function expressible as a sum of convex functions. Each agent has access to only one function in the sum and can use only local information to update its current estimate of the optimal solution. We consider two consensus-based iterative algorithms, each combining a consensus step with a subgradient descent update; the main difference between the two algorithms is the order in which the consensus step and the subgradient descent update are performed. We obtain upper bounds on performance metrics for both algorithms and show that first updating the current estimate in the direction of a subgradient and then executing the consensus step yields a tighter upper bound than executing the steps in the reverse order. In support of our analytical results, we also present numerical simulations of the algorithms.
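The two orderings described above can be sketched as follows. This is an illustrative toy example, not the paper's exact formulation: the local functions f_i(x) = (x - a_i)^2, the uniform averaging matrix W, the step size, and the iteration count are all assumptions made for the sake of a runnable demonstration.

```python
# Toy sketch of two orderings of a consensus step and a subgradient update
# for minimizing sum_i f_i(x), with f_i(x) = (x - a_i)^2 (an assumption;
# the paper treats general convex functions). Subgradient: 2 * (x - a_i).

def subgrad(x, a):
    return 2.0 * (x - a)

def consensus(x, W):
    # Each agent averages its neighbors' estimates with weights W[i][j].
    n = len(x)
    return [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]

def run(order, a, W, steps=200, alpha=0.05):
    x = [0.0] * len(a)  # all agents start from the same initial estimate
    for _ in range(steps):
        if order == "consensus-first":
            # Average first, then take a local subgradient step.
            v = consensus(x, W)
            x = [v[i] - alpha * subgrad(v[i], a[i]) for i in range(len(a))]
        else:
            # Take a local subgradient step first, then average.
            y = [x[i] - alpha * subgrad(x[i], a[i]) for i in range(len(a))]
            x = consensus(y, W)
    return x

a = [1.0, 2.0, 6.0]  # optimum of the sum is mean(a) = 3.0
n = len(a)
W = [[1.0 / n] * n for _ in range(n)]  # doubly stochastic averaging matrix

for order in ("consensus-first", "subgradient-first"):
    print(order, [round(v, 3) for v in run(order, a, W)])
```

With a complete averaging matrix and constant step size, both orderings drive the agents near the optimum of the sum; in this toy setting the subgradient-first variant also leaves every agent in exact agreement after each iteration, since the consensus step is executed last.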