Convergence Rate of Distributed Optimization Algorithms Based on Gradient Tracking

05/07/2019
by Ying Sun, et al.

We study distributed, strongly convex and nonconvex, multiagent optimization over (directed, time-varying) graphs. We consider the minimization of the sum of a smooth (possibly nonconvex) function, the agents' sum-utility, and a nonsmooth convex one, subject to convex constraints. In a companion paper, we introduced SONATA, the first algorithmic framework applicable to such a general class of composite minimization problems, and we studied its convergence when the smooth part of the objective function is nonconvex. The algorithm combines successive convex approximation techniques with a perturbed push-sum consensus mechanism that aims to track locally the gradient of the (smooth part of the) sum-utility. This paper studies the convergence rate of SONATA. When the smooth part of the objective function is strongly convex, SONATA is proved to converge at a linear rate, whereas a sublinear rate is established when the objective function is nonconvex. To our knowledge, this is the first work proving a convergence rate (in particular, a linear rate) for distributed algorithms applicable to such a general class of composite, constrained optimization problems over graphs.
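For intuition, the gradient-tracking idea underlying SONATA can be illustrated by the following generic update, written here for an undirected graph with a doubly stochastic weight matrix W, a single smooth objective, and a fixed step size alpha. This is a simplified sketch only: the actual SONATA iteration replaces the gradient step with a surrogate (successive convex approximation) subproblem, uses a perturbed push-sum protocol to handle directed, time-varying graphs, and accounts for the nonsmooth term and the constraints.

\[
\begin{aligned}
x_i^{k+1} &= \sum_{j=1}^{N} w_{ij}\, x_j^{k} \;-\; \alpha\, y_i^{k},\\
y_i^{k+1} &= \sum_{j=1}^{N} w_{ij}\, y_j^{k} \;+\; \nabla f_i\!\left(x_i^{k+1}\right) - \nabla f_i\!\left(x_i^{k}\right),
\qquad y_i^{0} = \nabla f_i\!\left(x_i^{0}\right),
\end{aligned}
\]

where each auxiliary variable \(y_i^{k}\) asymptotically tracks the average gradient \(\tfrac{1}{N}\sum_{j=1}^{N}\nabla f_j(x_j^{k})\); this tracking mechanism is what makes a linear rate attainable when the smooth part of the objective is strongly convex.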

