Distributed Algorithms for Composite Optimization: Unified and Tight Convergence Analysis
We study distributed composite optimization over networks: agents minimize a sum of smooth (strongly) convex functions, the agents' sum-utility, plus a nonsmooth (extended-valued) convex one. We propose a general unified algorithmic framework for this class of problems and provide a unified convergence analysis leveraging the theory of operator splitting. Distinguishing features of our scheme are: (i) when the agents' functions are strongly convex, the algorithm converges at a linear rate whose dependence on the agents' functions and the network topology is decoupled, matching the typical rates of centralized optimization; the rate expression is sharp and improves on existing results; (ii) when the objective function is convex (but not strongly convex), a similar separation to that in (i) is established for the coefficient of the proved sublinear rate; (iii) the algorithm can adjust the ratio between the number of communications and computations to achieve a rate (in terms of computations) independent of the network connectivity; and (iv) a byproduct of our analysis is a tuning recommendation for several existing (non-accelerated) distributed algorithms, yielding the fastest provable (worst-case) convergence rate. This is the first time that a general distributed algorithmic framework applicable to composite optimization enjoys all these properties.
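To give a concrete feel for the class of methods analyzed here, below is a minimal sketch of a gradient-tracking distributed scheme (a DIGing-style update), which is one well-known instance of this algorithmic family; it is not the paper's exact scheme. The toy problem, the ring network, the step size, and all variable names are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (NOT the paper's exact algorithm): gradient tracking
# over a ring network. Each agent i minimizes the smooth local cost
# f_i(x) = 0.5 * (x - b_i)^2, so the network-wide optimum is mean(b).

n = 5                                   # number of agents (assumed)
rng = np.random.default_rng(0)
b = rng.normal(size=n)                  # local data defining f_i

# Doubly stochastic mixing matrix for a ring: average with both neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

grad = lambda x: x - b                  # stacked local gradients of f_i

x = np.zeros(n)                         # local estimates, one per agent
y = grad(x)                             # trackers, initialized at local gradients
alpha = 0.3                             # step size (small enough for this toy case)

for _ in range(200):
    x_new = W @ x - alpha * y           # consensus step + tracked-gradient step
    y = W @ y + grad(x_new) - grad(x)   # track the network-average gradient
    x = x_new

print(np.allclose(x, b.mean(), atol=1e-6))  # → True: all agents reach the optimum
```

The decoupling highlighted in (i) is visible here: the consensus contraction depends only on the spectral gap of `W`, while the optimization contraction depends only on the step size relative to the local functions' curvature.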