Toward Communication Efficient Adaptive Gradient Method

09/10/2021
by Xiangyi Chen, et al.

In recent years, distributed optimization has proven to be an effective approach to accelerating the training of large-scale machine learning models such as deep neural networks. With the increasing computational power of GPUs, the bottleneck of training speed in distributed training is gradually shifting from computation to communication. Meanwhile, in the hope of training machine learning models on mobile devices, a new distributed training paradigm called "federated learning" has become popular. Communication time is especially important in federated learning due to the low bandwidth of mobile devices. While various approaches to improving communication efficiency have been proposed for federated learning, most of them are designed with SGD as the prototype training algorithm. Although adaptive gradient methods have proven effective for training neural networks, their study in the federated setting remains scarce. In this paper, we propose an adaptive gradient method that guarantees both convergence and communication efficiency for federated learning.
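The abstract does not specify the algorithm itself, so the following is a minimal sketch of one common design in this space, assuming a local-update scheme: each client runs several AMSGrad steps on its own data, and the server averages the resulting models once per round, so communication happens once every few iterations instead of every iteration. All function names, the toy least-squares data, and the hyperparameters below are illustrative assumptions, not the authors' method.

    import numpy as np

    # Hypothetical toy objective: least squares on client-local data.
    def local_grad(w, X, y):
        return 2.0 * X.T @ (X @ w - y) / len(y)

    def local_amsgrad_round(w, m, v, vhat, X, y, steps=5,
                            lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
        """Run `steps` AMSGrad updates on one client, then return the
        updated model and optimizer state. The server communicates with
        this client only once per round, after all local steps."""
        for _ in range(steps):
            g = local_grad(w, X, y)
            m = b1 * m + (1 - b1) * g            # first moment
            v = b2 * v + (1 - b2) * g * g        # second moment
            vhat = np.maximum(vhat, v)           # AMSGrad max-tracking
            w = w - lr * m / (np.sqrt(vhat) + eps)
        return w, m, v, vhat

    rng = np.random.default_rng(0)
    d, n_clients = 5, 4
    w_true = rng.normal(size=d)
    clients = []
    for _ in range(n_clients):
        X = rng.normal(size=(50, d))
        clients.append((X, X @ w_true + 0.01 * rng.normal(size=50)))

    w = np.zeros(d)
    states = [(np.zeros(d), np.zeros(d), np.zeros(d)) for _ in range(n_clients)]
    for rnd in range(20):
        results = [local_amsgrad_round(w.copy(), m, v, vh, X, y)
                   for (X, y), (m, v, vh) in zip(clients, states)]
        # One communication per round: average the client models;
        # optimizer state stays local in this particular sketch.
        w = np.mean([r[0] for r in results], axis=0)
        states = [(r[1], r[2], r[3]) for r in results]
    print("final error:", np.linalg.norm(w - w_true))

The design choice in this sketch is to keep the adaptive second-moment state local and communicate only model parameters; communication-efficient adaptive methods in the literature differ in which optimizer state they synchronize, how often, and whether they compress it, and plain parameter averaging alone does not provide the convergence guarantee the paper is concerned with.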


Related research

11/07/2022  FedGrad: Optimisation in Decentralised Machine Learning
Federated Learning is a machine learning paradigm where we aim to train ...

10/06/2021  Efficient and Private Federated Learning with Partially Trainable Networks
Federated learning is used for decentralized training of machine learnin...

02/06/2020  Faster On-Device Training Using New Federated Momentum Algorithm
Mobile crowdsensing has gained significant attention in recent years and...

10/11/2021  ProgFed: Effective, Communication, and Computation Efficient Federated Learning by Progressive Training
Federated learning is a powerful distributed learning scheme that allows...

08/02/2021  Communication-Efficient Federated Learning via Predictive Coding
Federated learning can enable remote workers to collaboratively train a ...

03/04/2021  Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices
Training deep neural networks on large datasets can often be accelerated...

07/26/2020  Fast-Convergent Federated Learning
Federated learning has emerged recently as a promising solution for dist...
