Optimal and Practical Algorithms for Smooth and Strongly Convex Decentralized Optimization

06/21/2020
by Dmitry Kovalev, et al.

We consider the task of decentralized minimization of the sum of smooth, strongly convex functions stored across the nodes of a network. For this problem, lower bounds on the number of gradient computations and the number of communication rounds required to achieve ε-accuracy have recently been proven. We propose two new algorithms for this decentralized optimization problem and equip them with complexity guarantees. We show that our first method is optimal both in terms of the number of communication rounds and in terms of the number of gradient computations. Unlike existing optimal algorithms, our algorithm does not rely on the expensive evaluation of dual gradients. Our second algorithm is optimal in terms of the number of communication rounds, without a logarithmic factor. Our approach relies on viewing the two proposed algorithms as accelerated variants of the Forward-Backward algorithm to solve monotone inclusions associated with the decentralized optimization problem. We also verify the efficacy of our methods against state-of-the-art algorithms through numerical experiments.
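For background, the classical Forward-Backward splitting referenced in the abstract solves the monotone inclusion 0 ∈ A(x) + B(x) via the iteration x_{k+1} = (I + γB)^{-1}(x_k − γA(x_k)), where A is single-valued (e.g. a gradient) and the resolvent of B is a proximal operator. The sketch below shows the plain, unaccelerated iteration on a toy centralized problem (ℓ1-regularized least squares), purely to illustrate the splitting pattern; the problem data and the helper names grad_f and prox_g are assumptions made for this example, not the paper's algorithm.

```python
import numpy as np

# Forward-Backward splitting for 0 ∈ A(x) + B(x), with A = ∇f (the
# "forward", explicit step) and B = ∂g (the "backward", implicit step,
# taken via the resolvent (I + γB)^{-1}, i.e. the proximal operator).
# Toy instance (illustrative only): f(x) = 0.5*||Mx - b||², g(x) = lam*||x||₁.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 10))   # illustrative problem data
b = rng.standard_normal(20)
lam = 0.1

def grad_f(x):
    return M.T @ (M @ x - b)        # forward step: gradient of f

def prox_g(x, gamma):
    # backward step: resolvent of γ∂g, i.e. soft-thresholding for the ℓ1 norm
    return np.sign(x) * np.maximum(np.abs(x) - gamma * lam, 0.0)

L = np.linalg.norm(M, 2) ** 2       # Lipschitz constant of ∇f
gamma = 1.0 / L                     # any step size in (0, 2/L) converges

x = np.zeros(10)
for _ in range(500):
    x = prox_g(x - gamma * grad_f(x), gamma)

print(0.5 * np.linalg.norm(M @ x - b) ** 2 + lam * np.abs(x).sum())
```

In the decentralized setting of the paper, the operators additionally encode the network's communication structure, and acceleration is what yields the optimal round and gradient complexities; the sketch above deliberately omits both and shows only the splitting template.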


Related research

06/08/2021
Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks
We consider the task of minimizing the sum of smooth and strongly convex...

02/28/2017
Optimal algorithms for smooth and strongly convex distributed optimization in networks
In this paper, we determine the optimal convergence rates for strongly c...

11/15/2017
Random gradient extrapolation for distributed and stochastic optimization
In this paper, we consider a class of finite-sum convex optimization pro...

08/10/2021
Decentralized Composite Optimization with Compression
Decentralized optimization and communication compression have exhibited ...

05/25/2018
Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication
Recently, the decentralized optimization problem is attracting growing a...

06/25/2021
Optimal Checkpointing for Adjoint Multistage Time-Stepping Schemes
We consider checkpointing strategies that minimize the number of recompu...

02/18/2021
ADOM: Accelerated Decentralized Optimization Method for Time-Varying Networks
We propose ADOM - an accelerated method for smooth and strongly convex d...