ProxSkip: Yes! Local Gradient Steps Provably Lead to Communication Acceleration! Finally!

02/18/2022
by Konstantin Mishchenko, et al.

We introduce ProxSkip – a surprisingly simple and provably efficient method for minimizing the sum of a smooth (f) and an expensive nonsmooth proximable (ψ) function. The canonical approach to solving such problems is via the proximal gradient descent (ProxGD) algorithm, which is based on the evaluation of the gradient of f and the prox operator of ψ in each iteration. In this work we are specifically interested in the regime in which the evaluation of prox is costly relative to the evaluation of the gradient, which is the case in many applications. ProxSkip allows for the expensive prox operator to be skipped in most iterations: while its iteration complexity is O(κ log(1/ε)), where κ is the condition number of f, the number of prox evaluations is only O(√κ log(1/ε)). Our main motivation comes from federated learning, where evaluation of the gradient operator corresponds to taking a local GD step independently on all devices, and evaluation of prox corresponds to (expensive) communication in the form of gradient averaging. In this context, ProxSkip offers an effective acceleration of communication complexity. Unlike other local gradient-type methods, such as FedAvg, SCAFFOLD, S-Local-GD and FedLin, whose theoretical communication complexity is worse than, or at best matching, that of vanilla GD in the heterogeneous data regime, we obtain a provable and large improvement without any heterogeneity-bounding assumptions.
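
To make the "skip the prox" idea concrete, here is a minimal Python sketch of a ProxSkip-style loop for minimizing f(x) + ψ(x). The abstract only states that the expensive prox can be skipped in most iterations; the specific ingredients shown below (a control variate h, a coin flip with probability p deciding whether the prox is actually evaluated, and the enlarged prox stepsize γ/p) reflect one reading of the method and should be taken as an illustrative assumption rather than the paper's exact algorithm.

```python
import numpy as np

def prox_skip(grad_f, prox_psi, x0, gamma, p, num_iters, rng=None):
    """Sketch of a ProxSkip-style loop for min_x f(x) + psi(x).

    grad_f:   callable returning the gradient of the smooth part f
    prox_psi: callable (v, step) -> prox_{step * psi}(v), the expensive operator
    gamma:    stepsize for the gradient steps
    p:        probability of evaluating the prox in a given iteration
              (the abstract suggests only O(sqrt(kappa) log(1/eps)) prox calls
               are needed, which corresponds to a small p)
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    h = np.zeros_like(x)  # control variate; assumed form, not stated in the abstract

    for _ in range(num_iters):
        # cheap step: gradient of f, shifted by the control variate
        x_hat = x - gamma * (grad_f(x) - h)

        if rng.random() < p:
            # rare, expensive step: evaluate the prox with an enlarged stepsize gamma / p
            x = prox_psi(x_hat - (gamma / p) * h, gamma / p)
        else:
            # most iterations: skip the prox entirely
            x = x_hat

        # update the control variate so that skipped prox steps are compensated for
        h = h + (p / gamma) * (x - x_hat)

    return x
```

In the federated setting described in the abstract, grad_f would correspond to local GD steps run independently on the devices and prox_psi to a communication round that averages the local models, so p directly controls how often communication happens.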

Related research

09/10/2019 · First Analysis of Local GD on Heterogeneous Data
We provide the first convergence analysis of local gradient descent for ...

07/08/2022 · Communication Acceleration of Local Gradient Methods via an Accelerated Primal-Dual Algorithm with Inexact Prox
Inspired by a recent breakthrough of Mishchenko et al (2022), who for th...

02/12/2021 · Proximal and Federated Random Reshuffling
Random Reshuffling (RR), also known as Stochastic Gradient Descent (SGD)...

02/20/2023 · TAMUNA: Accelerated Federated Learning with Local Training and Partial Participation
In federated learning, a large number of users are involved in a global ...

12/29/2022 · Can 5th Generation Local Training Methods Support Client Sampling? Yes!
The celebrated FedAvg algorithm of McMahan et al. (2017) is based on thr...

09/19/2022 · Heterogeneous Federated Learning on a Graph
Federated learning, where algorithms are trained across multiple decentr...
