
On linear convergence of two decentralized algorithms

by   Yao Li, et al.
Michigan State University

Decentralized algorithms solve multi-agent optimization problems over a connected network in which information can be exchanged only with accessible neighbors. Although many decentralized optimization algorithms exist, gaps remain between the convergence conditions and rates of decentralized algorithms and those of their centralized counterparts. In this paper, we close some of these gaps by studying two decentralized consensus algorithms, EXTRA and NIDS, both of which converge linearly for strongly convex functions. We answer two questions about each algorithm: What is the optimal upper bound on its stepsize? Does a decentralized algorithm require stronger assumptions on the objective functions for linear convergence than a centralized one? Specifically, we relax the conditions required for the linear convergence of both algorithms. For EXTRA, we show that the stepsize is of order O(1/L), where L is the Lipschitz constant of the gradients of the functions; this is comparable to centralized algorithms, although the upper bound remains smaller than in the centralized setting. For NIDS, we show that the upper bound on the stepsize is the same as that of centralized algorithms and does not depend on the network. In addition, we relax the assumptions on the functions and on the mixing matrix, which encodes the topology of the network. To the best of our knowledge, these are the weakest conditions under which linear convergence has been established for either algorithm.
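For concreteness, the two iterations can be sketched as follows. This is a minimal NumPy illustration on a toy consensus problem; the quadratic objectives, the 4-agent ring network, and the particular stepsize values are illustrative assumptions, not taken from the paper. Both methods use the mixing matrix W~ = (I + W)/2 and converge to the minimizer of the sum of the local functions.

```python
import numpy as np

def extra(grad, W, x0, alpha, iters):
    """EXTRA: x^{k+2} = (I+W) x^{k+1} - W~ x^k - alpha (grad(x^{k+1}) - grad(x^k)),
    initialized with x^1 = W x^0 - alpha grad(x^0)."""
    n = len(x0)
    Wt = (np.eye(n) + W) / 2          # W~ = (I + W)/2
    x_prev, g_prev = x0, grad(x0)
    x = W @ x0 - alpha * g_prev       # first step
    for _ in range(iters):
        g = grad(x)
        x, x_prev = (np.eye(n) + W) @ x - Wt @ x_prev - alpha * (g - g_prev), x
        g_prev = g
    return x

def nids(grad, W, x0, alpha, iters):
    """NIDS: x^{k+1} = W~ (2 x^k - x^{k-1} - alpha (grad(x^k) - grad(x^{k-1}))),
    initialized with a plain gradient step x^1 = x^0 - alpha grad(x^0)."""
    n = len(x0)
    Wt = (np.eye(n) + W) / 2
    x_prev, g_prev = x0, grad(x0)
    x = x0 - alpha * g_prev           # first step
    for _ in range(iters):
        g = grad(x)
        x, x_prev = Wt @ (2 * x - x_prev - alpha * (g - g_prev)), x
        g_prev = g
    return x

# Toy problem on a 4-agent ring: agent i holds f_i(x) = (x - b_i)^2 / 2,
# so L = 1 and the global minimizer is the average of b.
W = np.array([[.50, .25, .00, .25],
              [.25, .50, .25, .00],
              [.00, .25, .50, .25],
              [.25, .00, .25, .50]])   # symmetric, doubly stochastic
b = np.array([1.0, 2.0, 3.0, 4.0])
grad = lambda x: x - b                 # stacked local gradients
x0 = np.zeros(4)

x_extra = extra(grad, W, x0, alpha=0.2, iters=1000)  # conservative O(1/L) stepsize
x_nids = nids(grad, W, x0, alpha=1.0, iters=1000)    # network-independent stepsize
```

Both runs drive every agent to the consensus minimizer, the average of the b_i; note that the NIDS stepsize here is chosen without reference to W, reflecting its network-independent bound.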



