Convergence and Accuracy Trade-Offs in Federated Learning and Meta-Learning

03/08/2021
by Zachary Charles, et al.

We study a family of algorithms, which we refer to as local update methods, generalizing many federated and meta-learning algorithms. We prove that for quadratic models, local update methods are equivalent to first-order optimization on a surrogate loss we exactly characterize. Moreover, fundamental algorithmic choices (such as learning rates) explicitly govern a trade-off between the condition number of the surrogate loss and its alignment with the true loss. We derive novel convergence rates showcasing these trade-offs and highlight their importance in communication-limited settings. Using these insights, we are able to compare local update methods based on their convergence/accuracy trade-off, not just their convergence to critical points of the empirical loss. Our results shed new light on a broad range of phenomena, including the efficacy of server momentum in federated learning and the impact of proximal client updates.
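To make the abstract's central objects concrete, here is a minimal sketch of a local update method (FedAvg-style local gradient descent) on per-client quadratic losses. This is not the paper's code; the function name `fedavg_quadratic` and all parameter values are illustrative. It shows the phenomenon the abstract describes: with one local step per round the method converges to the minimizer of the true average loss, while more local steps implicitly optimize a surrogate loss whose minimizer drifts away from the true one.

```python
import numpy as np

# Each client i holds a quadratic loss f_i(w) = 0.5 * (w - c_i)^T A_i (w - c_i).
# K local gradient steps of rate eta map w -> c_i + (I - eta*A_i)^K (w - c_i),
# so server averaging performs first-order optimization on a surrogate
# quadratic that only matches the true average loss when K = 1.

def fedavg_quadratic(A_list, c_list, eta=0.1, local_steps=5, rounds=200):
    """Run FedAvg on quadratic client losses; return the server iterate."""
    d = c_list[0].shape[0]
    w = np.zeros(d)
    for _ in range(rounds):
        client_iterates = []
        for A, c in zip(A_list, c_list):
            w_i = w.copy()
            for _ in range(local_steps):
                w_i -= eta * A @ (w_i - c)  # local gradient step
            client_iterates.append(w_i)
        w = np.mean(client_iterates, axis=0)  # server averaging
    return w

# Two heterogeneous clients: different curvatures A_i and minimizers c_i.
A_list = [np.diag([1.0, 10.0]), np.diag([10.0, 1.0])]
c_list = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

# True minimizer of the average loss solves (sum_i A_i) w = sum_i A_i c_i.
w_star = np.linalg.solve(sum(A_list),
                         sum(A @ c for A, c in zip(A_list, c_list)))

w_1 = fedavg_quadratic(A_list, c_list, local_steps=1)    # tracks the true loss
w_10 = fedavg_quadratic(A_list, c_list, local_steps=10)  # surrogate drifts
print(np.linalg.norm(w_1 - w_star))   # close to zero
print(np.linalg.norm(w_10 - w_star))  # noticeably larger
```

Varying `eta` and `local_steps` here traces out the convergence/accuracy trade-off the abstract refers to: more local steps reduce communication rounds but move the surrogate's minimizer further from the true one under client heterogeneity.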


Related research

- On the Outsized Importance of Learning Rates in Local Update Methods (07/02/2020)
- Achieving Linear Convergence in Federated Learning under Objective and Systems Heterogeneity (02/25/2021)
- Communication Trade-offs in Federated Learning of Spiking Neural Networks (02/27/2023)
- On Data Efficiency of Meta-learning (01/30/2021)
- Elastically-Constrained Meta-Learner for Federated Learning (06/29/2023)
- FedSplit: An algorithmic framework for fast federated optimization (05/11/2020)
- Memory-based Optimization Methods for Model-Agnostic Meta-Learning (06/09/2021)
