Convergence Rate of Distributed Optimization Algorithms Based on Gradient Tracking

05/07/2019
by Ying Sun, et al.

We study distributed, strongly convex and nonconvex, multiagent optimization over (directed, time-varying) graphs. We consider the minimization of the sum of a smooth (possibly nonconvex) function, the agents' sum-utility, plus a nonsmooth convex one, subject to convex constraints. In a companion paper, we introduced SONATA, the first algorithmic framework applicable to such a general class of composite minimization problems, and we studied its convergence when the smooth part of the objective function is nonconvex. The algorithm combines successive convex approximation techniques with a perturbed push-sum consensus mechanism that aims to track locally the gradient of the (smooth part of the) sum-utility. This paper studies the convergence rate of SONATA. When the smooth part of the objective function is strongly convex, SONATA is proved to converge at a linear rate, whereas a sublinear rate is proved when the objective function is nonconvex. To our knowledge, this is the first work proving a convergence rate (in particular, a linear rate) for distributed algorithms applicable to such a general class of composite, constrained optimization problems over graphs.
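As a rough illustration of the gradient-tracking idea the abstract refers to, the sketch below runs a basic gradient-tracking iteration on a toy unconstrained least-squares problem. It is not SONATA itself: it assumes a static undirected ring graph with a doubly stochastic mixing matrix and plain gradient steps, omitting the surrogate (successive convex approximation) minimization and the perturbed push-sum mechanism that SONATA uses to handle directed, time-varying graphs; all problem data and names are illustrative.

```python
import numpy as np

# Toy problem: n agents, each owning a local quadratic f_i(x) = 0.5*||A_i x - b_i||^2.
rng = np.random.default_rng(0)
n, d = 5, 3
A = [rng.standard_normal((10, d)) for _ in range(n)]
b = [rng.standard_normal(10) for _ in range(n)]

def local_grad(i, x):
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a ring graph (uniform weights on self and neighbors).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n] = 1 / 3
    W[i, (i + 1) % n] = 1 / 3

alpha = 0.005                      # step size (illustrative choice)
x = np.zeros((n, d))               # local iterates x_i, one per row
y = np.array([local_grad(i, x[i]) for i in range(n)])  # gradient trackers y_i

for _ in range(2000):
    # Consensus step plus a descent step along the tracked gradient of the sum-utility.
    x_new = W @ x - alpha * y
    # Tracker update: mix neighbors' trackers, then add the change in the local gradient.
    grads_new = np.array([local_grad(i, x_new[i]) for i in range(n)])
    grads_old = np.array([local_grad(i, x[i]) for i in range(n)])
    y = W @ y + grads_new - grads_old
    x = x_new

# All agents should agree and (approximately) solve min_x sum_i f_i(x).
x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print("consensus spread:", np.max(np.abs(x - x.mean(axis=0))))
print("distance to optimum:", np.linalg.norm(x.mean(axis=0) - x_star))
```

In this simplified setting the tracker y_i converges to the average of the local gradients, which is what allows each agent to descend along the gradient of the global sum-utility while only exchanging information with its neighbors; SONATA achieves the same tracking effect on directed, time-varying graphs via a perturbed push-sum correction.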
