Distributed Algorithms for Composite Optimization: Unified and Tight Convergence Analysis

02/25/2020
by   Jinming Xu, et al.

We study distributed composite optimization over networks: agents minimize a sum of smooth (strongly) convex functions, the agents' sum-utility, plus a nonsmooth (extended-valued) convex one. We propose a general unified algorithmic framework for this class of problems and provide a unified convergence analysis leveraging the theory of operator splitting. Distinguishing features of our scheme are: (i) when the agents' functions are strongly convex, the algorithm converges at a linear rate whose dependence on the agents' functions and on the network topology is decoupled, matching the typical rates of centralized optimization; the rate expression is sharp and improves on existing results; (ii) when the objective function is convex (but not strongly convex), a similar separation as in (i) is established for the coefficient of the proved sublinear rate; (iii) the algorithm can adjust the ratio between the number of communications and computations to achieve a rate (in terms of computations) independent of the network connectivity; and (iv) a by-product of our analysis is a tuning recommendation for several existing (non-accelerated) distributed algorithms, yielding the fastest provable (worst-case) convergence rate. This is the first time that a general distributed algorithmic framework applicable to composite optimization enjoys all such properties.

