EPISODE: Episodic Gradient Clipping with Periodic Resampled Corrections for Federated Learning with Heterogeneous Data

02/14/2023
by Michael Crawshaw, et al.

Gradient clipping is an important technique for deep neural networks with exploding gradients, such as recurrent neural networks. Recent studies have shown that the loss functions of these networks do not satisfy the conventional smoothness condition, but instead satisfy a relaxed smoothness condition, i.e., the Lipschitz constant of the gradient scales linearly with the gradient norm. Motivated by this observation, several gradient clipping algorithms have been developed for nonconvex and relaxed-smooth functions. However, the existing algorithms only apply to the single-machine setting or to the multi-machine setting with homogeneous data across machines. It remains unclear how to design provably efficient gradient clipping algorithms in the general Federated Learning (FL) setting with heterogeneous data and limited communication rounds. In this paper, we design EPISODE, the first algorithm to solve FL problems with heterogeneous data in the nonconvex and relaxed-smooth setting. The key ingredients of the algorithm are two new techniques: episodic gradient clipping and periodic resampled corrections. At the beginning of each round, EPISODE resamples stochastic gradients from each client and averages them into a global gradient, which is used to (1) determine whether to apply gradient clipping for the entire round and (2) construct local gradient corrections for each client. Notably, our algorithm and analysis provide a unified framework for both homogeneous and heterogeneous data under any noise level of the stochastic gradient, and they achieve state-of-the-art complexity results. In particular, we prove that EPISODE achieves linear speedup in the number of machines while requiring significantly fewer communication rounds. Experiments on several heterogeneous datasets show the superior performance of EPISODE over strong FL baselines.
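For concreteness, the relaxed smoothness condition referenced above is typically formalized in this line of work as $(L_0, L_1)$-smoothness: for all $x, y$,

    $\|\nabla f(x) - \nabla f(y)\| \le \left( L_0 + L_1 \|\nabla f(x)\| \right) \|x - y\|$,

so the effective smoothness constant grows linearly with the gradient norm rather than being uniformly bounded.

Below is a minimal sketch of one communication round of an EPISODE-style method, based only on the description in the abstract: fresh stochastic gradients are resampled from every client at the start of the round, and their average fixes a single clip/no-clip decision for the whole round (episodic clipping) and supplies control-variate-style corrections for the local updates (periodic resampled corrections). The client interface (`resample_gradient`), the threshold form `gamma / eta`, the exact shape of the corrections, and the step sizes are illustrative assumptions, not the paper's precise algorithm.

```python
import numpy as np

def clip_update(direction, eta, gamma):
    """Clipped step: scale the step so its length never exceeds roughly gamma."""
    norm = np.linalg.norm(direction)
    scale = min(eta, gamma / (norm + 1e-12))
    return scale * direction

def episode_round(x, clients, eta=0.1, gamma=1.0, local_steps=10):
    """One EPISODE-style communication round (illustrative sketch).

    clients: objects exposing resample_gradient(x), returning a stochastic
             gradient of that client's local loss at x (assumed API).
    """
    # (1) Periodic resampling: fresh stochastic gradients from every client,
    #     averaged into a global gradient estimate for this round.
    local_grads = [c.resample_gradient(x) for c in clients]
    global_grad = np.mean(local_grads, axis=0)

    # (2) Episodic clipping decision: made once per round from the global
    #     gradient norm and reused by every client for the entire round.
    clip_this_round = np.linalg.norm(global_grad) > gamma / eta

    # (3) Local updates with per-client gradient corrections built from the
    #     resampled gradients (global average minus the client's own sample).
    new_iterates = []
    for c, g_i in zip(clients, local_grads):
        correction = global_grad - g_i
        y = x.copy()
        for _ in range(local_steps):
            direction = c.resample_gradient(y) + correction
            if clip_this_round:
                y = y - clip_update(direction, eta, gamma)
            else:
                y = y - eta * direction
        new_iterates.append(y)

    # Server averages the clients' final local iterates.
    return np.mean(new_iterates, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)

    class QuadClient:
        """Toy client: noisy gradients of f_i(x) = 0.5 * ||x - b_i||^2."""
        def __init__(self, b):
            self.b = b
        def resample_gradient(self, x):
            return (x - self.b) + 0.01 * rng.standard_normal(x.shape)

    clients = [QuadClient(rng.standard_normal(5)) for _ in range(4)]
    x = np.zeros(5)
    for _ in range(50):
        x = episode_round(x, clients)
    print("final iterate:", x)  # approaches the average of the clients' b_i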


