Related research:
- Federated Learning with Nesterov Accelerated Gradient Momentum Method
  Federated learning (FL) is a fast-developing technique that allows multi...
- Federated Learning's Blessing: FedAvg has Linear Speedup
  Federated learning (FL) learns a model jointly from a set of participati...
- An Efficient Framework for Clustered Federated Learning
  We address the problem of Federated Learning (FL) where users are distri...
- Adaptive Gradient Sparsification for Efficient Federated Learning: An Online Learning Approach
  Federated learning (FL) is an emerging technique for training machine le...
- Federated Generalized Bayesian Learning via Distributed Stein Variational Gradient Descent
  This paper introduces Distributed Stein Variational Gradient Descent (DS...
- Federated Uncertainty-Aware Learning for Distributed Hospital EHR Data
  Recent works have shown that applying Machine Learning to Electronic Hea...
- When Edge Meets Learning: Adaptive Control for Resource-Constrained Distributed Machine Learning
  Emerging technologies and applications including Internet of Things (IoT...
Accelerating Federated Learning via Momentum Gradient Descent
Federated learning (FL) provides a communication-efficient approach to solving machine learning problems over distributed data, without sending raw data to a central server. However, existing work on FL uses only first-order gradient descent (GD) and does not incorporate information from preceding iterations into the gradient update, which could potentially accelerate convergence. In this paper, we introduce a momentum term that depends on the previous iteration. The proposed momentum federated learning (MFL) scheme uses momentum gradient descent (MGD) in the local update step of the FL system. We establish global convergence properties of MFL and derive an upper bound on its convergence rate. By comparing the upper bounds on the convergence rates of MFL and FL, we give conditions under which MFL accelerates convergence. The convergence performance of MFL is evaluated on different machine learning models in experiments with the MNIST dataset. Simulation results confirm that MFL is globally convergent and reveal a significant convergence improvement over FL.
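To make the local update concrete: heavy-ball MGD maintains a velocity v alongside the parameters w, iterating v <- gamma * v + grad F(w) and w <- w - lr * v. Below is a minimal sketch of one MFL communication round under that assumption, where every client runs a few local MGD steps and the server takes a weighted average of both the parameters and the momentum terms. The helper names (local_mgd, mfl_round, lsq_grad) and the aggregation details are illustrative assumptions for this sketch, not the paper's reference implementation.

import numpy as np

def local_mgd(w, v, grad_fn, data, lr=0.01, gamma=0.9, tau=5):
    # tau steps of heavy-ball momentum GD on one client's local data:
    #   v <- gamma * v + grad F(w),  w <- w - lr * v
    for _ in range(tau):
        g = grad_fn(w, data)
        v = gamma * v + g
        w = w - lr * v
    return w, v

def mfl_round(w_global, v_global, client_data, grad_fn, weights, **kwargs):
    # One MFL communication round: each client starts from the global
    # (w, v), runs local MGD, and the server averages both the parameters
    # and the momentum terms with the given client weights (an assumption
    # of this sketch; the paper's exact aggregation rule may differ).
    results = [local_mgd(w_global.copy(), v_global.copy(), grad_fn, d, **kwargs)
               for d in client_data]
    w_new = sum(p * w for p, (w, _) in zip(weights, results))
    v_new = sum(p * v for p, (_, v) in zip(weights, results))
    return w_new, v_new

# Example: linear regression, gradient of (1/2n) * ||X w - y||^2
def lsq_grad(w, data):
    X, y = data
    return X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]
w, v = np.zeros(3), np.zeros(3)
for _ in range(20):  # 20 communication rounds
    w, v = mfl_round(w, v, clients, lsq_grad, weights=[0.25] * 4)

Setting gamma = 0 recovers plain FedAvg-style local GD, which is the baseline the convergence comparison in the paper is made against.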