Compressed and distributed least-squares regression: convergence rates with applications to Federated Learning

08/02/2023
by Constantin Philippenko, et al.

In this paper, we investigate the impact of compression on stochastic gradient algorithms for machine learning, a technique widely used in distributed and federated learning. We underline differences, in terms of convergence rates, between several unbiased compression operators that all satisfy the same variance condition, thus going beyond the classical worst-case analysis. To do so, we focus on the case of least-squares regression (LSR) and analyze a general stochastic approximation algorithm for minimizing quadratic functions that relies on a random field. We consider weak assumptions on the random field, tailored to the analysis (specifically, expected Hölder regularity), and on the noise covariance, enabling the analysis of various randomizing mechanisms, including compression. We then extend our results to the case of federated learning. More formally, we highlight the impact on convergence of the covariance ℭ_ania of the additive noise induced by the algorithm. We demonstrate that, despite the non-regularity of the stochastic field, the limit variance term scales as Tr(ℭ_ania H^-1)/K (where H is the Hessian of the optimization problem and K the number of iterations), generalizing the rate for the vanilla LSR case, where it is σ^2 Tr(H H^-1)/K = σ^2 d/K (Bach and Moulines, 2013). We then analyze how ℭ_ania depends on the compression strategy and, ultimately, its impact on convergence, first in the centralized case and then in two heterogeneous FL frameworks.
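To make the setting concrete, here is a minimal sketch (not the authors' code; all function names, the test problem, and the step size are illustrative assumptions) of two unbiased compression operators that both satisfy a variance condition of the form E‖C(x) − x‖² ≤ ω‖x‖², plugged into compressed SGD on a synthetic least-squares problem with Polyak-Ruppert averaging. The point of the abstract is precisely that such operators, despite sharing a comparable ω, can induce different noise covariances ℭ_ania and hence different limit variances Tr(ℭ_ania H^-1)/K.

```python
# Hypothetical sketch: compressed SGD for least-squares regression.
# Illustrates two unbiased compressors with comparable variance constants.
import numpy as np

rng = np.random.default_rng(0)

def rand_k(x, k):
    """Unbiased rand-k sparsification: keep k random coordinates,
    rescaled by d/k so that E[C(x)] = x (variance constant w = d/k - 1)."""
    d = x.size
    scale = np.zeros(d)
    scale[rng.choice(d, size=k, replace=False)] = d / k
    return scale * x

def quantize_1level(x):
    """Unbiased 1-level stochastic quantization (QSGD-style):
    C(x)_i = ||x|| * sign(x_i) with probability |x_i| / ||x||, else 0."""
    norm = np.linalg.norm(x)
    if norm == 0.0:
        return x.copy()
    keep = rng.random(x.size) < np.abs(x) / norm
    return norm * np.sign(x) * keep

def compressed_lsr_sgd(H, w_star, sigma, K, lr, compress):
    """SGD on least-squares with Gaussian features (E[a a^T] = H),
    compressed stochastic gradients, and Polyak-Ruppert averaging."""
    d = w_star.size
    L = np.linalg.cholesky(H)
    w = np.zeros(d)
    w_bar = np.zeros(d)
    for k in range(1, K + 1):
        a = L @ rng.standard_normal(d)        # feature with covariance H
        y = a @ w_star + sigma * rng.standard_normal()
        g = (a @ w - y) * a                   # stochastic LSR gradient
        w -= lr * compress(g)                 # unbiased compressed update
        w_bar += (w - w_bar) / k              # running average of iterates
    return w_bar

d, K = 20, 50_000
H = np.diag(1.0 / np.arange(1, d + 1))        # ill-conditioned Hessian
w_star = rng.standard_normal(d)
for name, C in [("rand-k, k=4 (w = 4)", lambda g: rand_k(g, 4)),
                ("1-level quantization", quantize_1level)]:
    w_bar = compressed_lsr_sgd(H, w_star, sigma=1.0, K=K, lr=0.02, compress=C)
    excess_risk = 0.5 * (w_bar - w_star) @ H @ (w_bar - w_star)
    print(f"{name}: excess risk {excess_risk:.2e}")
```

With d = 20, the two compressors have comparable variance constants (ω = d/k − 1 = 4 for rand-k with k = 4, and ω ≈ √d ≈ 4.5 for 1-level quantization), so a worst-case analysis treats them essentially identically; any gap in measured excess risk instead reflects the differing structure of the induced noise covariance, which is what the refined Tr(ℭ_ania H^-1)/K rate is meant to distinguish.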


Related research

06/25/2020 · Artemis: tight convergence guarantees for bidirectional compression in Federated Learning
We introduce a new algorithm - Artemis - tackling the problem of learnin...

10/31/2019 · On the Convergence of Local Descent Methods in Federated Learning
In federated distributed learning, the goal is to optimize a global trai...

10/18/2022 · FLECS-CGD: A Federated Learning Second-Order Framework via Compression and Sketching with Compressed Gradient Differences
In the recent paper FLECS (Agafonov et al, FLECS: A Federated Learning S...

07/02/2020 · Federated Learning with Compression: Unified Analysis and Sharp Guarantees
In federated learning, communication cost is often a critical bottleneck...

02/06/2023 · z-SignFedAvg: A Unified Stochastic Sign-based Compression for Federated Learning
Federated Learning (FL) is a promising privacy-preserving distributed le...

02/02/2019 · Learning Linear Dynamical Systems with Semi-Parametric Least Squares
We analyze a simple prefiltered variation of the least squares estimator...

10/25/2022 · Federated Learning Using Variance Reduced Stochastic Gradient for Probabilistically Activated Agents
This paper proposes an algorithm for Federated Learning (FL) with a two-...
