Preconditioned Federated Learning

09/20/2023
by   Zeyi Tao, et al.

Federated Learning (FL) is a distributed machine learning approach that enables model training in a communication-efficient and privacy-preserving manner. The standard optimization method in FL is Federated Averaging (FedAvg), which performs multiple local SGD steps between communication rounds. FedAvg is considered to lack algorithmic adaptivity compared to modern first-order adaptive optimizers. In this paper, we propose new communication-efficient FL algorithms based on two adaptive frameworks: local adaptivity (PreFed) and server-side adaptivity (PreFedOp). The proposed methods achieve adaptivity through a novel covariance matrix preconditioner. Theoretically, we provide convergence guarantees for our algorithms. Empirical experiments show that our methods achieve state-of-the-art performance in both i.i.d. and non-i.i.d. settings.
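The FedAvg structure described above (several local SGD steps per client, then server-side averaging) can be sketched as follows. Note that the abstract does not specify the paper's covariance matrix preconditioner, so this sketch substitutes an illustrative diagonal second-moment preconditioner (Adam-style) for the local steps; the objective, client data, and all function names are hypothetical.

```python
import numpy as np


def local_train(w, X, y, lr=0.1, steps=5, eps=1e-8):
    """One client's local update: several preconditioned SGD steps on a
    least-squares objective. The diagonal second-moment preconditioner is
    an illustrative stand-in, not the paper's covariance preconditioner."""
    v = np.zeros_like(w)  # running second moment of the gradient
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        v = 0.9 * v + 0.1 * grad ** 2
        w = w - lr * grad / (np.sqrt(v) + eps)  # preconditioned step
    return w


def fedavg_round(w_global, clients):
    """One communication round: each client trains locally from the global
    model; the server averages the results, weighted by sample count."""
    updates, counts = [], []
    for X, y in clients:
        updates.append(local_train(w_global.copy(), X, y))
        counts.append(len(y))
    return np.average(updates, axis=0, weights=np.asarray(counts, float))


# Synthetic federation: three clients sharing the same linear ground truth.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(20):  # communication rounds
    w = fedavg_round(w, clients)
```

Because the clients here draw from the same distribution (the i.i.d. setting), the averaged model approaches `w_true`; the non-i.i.d. setting, where client objectives diverge, is where the adaptivity the paper studies matters most.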


research
09/06/2021

On Second-order Optimization Methods for Federated Learning

We consider federated learning (FL), where the training data is distribu...
research
12/02/2022

Faster Adaptive Federated Learning

Federated learning has attracted increasing attention with the emergence...
research
10/07/2021

Neural Tangent Kernel Empowered Federated Learning

Federated learning (FL) is a privacy-preserving paradigm where multiple ...
research
06/01/2023

CRS-FL: Conditional Random Sampling for Communication-Efficient and Privacy-Preserving Federated Learning

Federated Learning (FL), a privacy-oriented distributed ML paradigm, is ...
research
11/22/2021

FLIX: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning

Federated Learning (FL) is an increasingly popular machine learning para...
research
02/13/2023

FedDA: Faster Framework of Local Adaptive Gradient Methods via Restarted Dual Averaging

Federated learning (FL) is an emerging learning paradigm to tackle massi...
research
02/27/2023

Communication-efficient Federated Learning with Single-Step Synthetic Features Compressor for Faster Convergence

Reducing communication overhead in federated learning (FL) is challengin...
