Network Gradient Descent Algorithm for Decentralized Federated Learning

05/06/2022
by Shuyuan Wu, et al.

We study a fully decentralized federated learning algorithm, which is a novel gradient descent algorithm executed on a communication-based network. For convenience, we refer to it as the network gradient descent (NGD) method. In the NGD method, only statistics (e.g., parameter estimates) need to be communicated, minimizing the risk of privacy leakage. Meanwhile, different clients communicate with each other directly according to a carefully designed network structure without a central master, which greatly enhances the reliability of the entire algorithm. These attractive properties motivate us to study the NGD method carefully, both theoretically and numerically. Theoretically, we start with a classical linear regression model. We find that both the learning rate and the network structure play significant roles in determining the NGD estimator's statistical efficiency. The resulting NGD estimator can be statistically as efficient as the global estimator if the learning rate is sufficiently small and the network structure is well balanced, even when the data are distributed heterogeneously. These findings are then extended to general models and loss functions. Extensive numerical studies are presented to corroborate our theoretical findings. Classical deep learning models are also considered for illustration purposes.
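To make the communication and computation structure concrete, below is a minimal illustrative sketch (not the authors' implementation) of a network gradient descent iteration for linear regression: in each round, every client averages the parameter estimates received from its network neighbors, encoded here by an assumed row-stochastic weight matrix `W`, and then takes one local gradient step with a small learning rate. All function and variable names are hypothetical and chosen only for illustration.

```python
import numpy as np

def network_gradient_descent(X_list, y_list, W, lr=0.01, n_iter=500):
    """Illustrative NGD sketch for linear regression (assumed setup, not the paper's code).

    X_list, y_list : per-client design matrices and response vectors.
    W              : row-stochastic weight matrix; W[m, k] > 0 means
                     client m receives the estimate of client k.
    lr             : learning rate, kept small as the theory suggests.
    """
    M = len(X_list)                       # number of clients
    p = X_list[0].shape[1]                # parameter dimension
    theta = np.zeros((M, p))              # one running estimate per client

    for _ in range(n_iter):
        # Communication step: each client averages its neighbors' estimates.
        theta_avg = W @ theta
        # Local step: one gradient update on the local least-squares loss.
        for m in range(M):
            resid = X_list[m] @ theta_avg[m] - y_list[m]
            grad = X_list[m].T @ resid / len(y_list[m])
            theta[m] = theta_avg[m] - lr * grad
    return theta

# Toy heterogeneous data on a balanced ring network of 4 clients (illustrative only).
rng = np.random.default_rng(0)
theta_true = np.array([1.0, -2.0])
X_list = [rng.normal(m, 1.0, size=(100, 2)) for m in range(4)]   # client-specific feature means
y_list = [X @ theta_true + rng.normal(0, 0.5, size=100) for X in X_list]
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])  # doubly stochastic, i.e., well balanced
estimates = network_gradient_descent(X_list, y_list, W, lr=0.05, n_iter=2000)
print(estimates.mean(axis=0))             # should approach theta_true as lr shrinks
```

In this sketch the doubly stochastic `W` plays the role of the "well-balanced" network structure discussed in the abstract, and shrinking `lr` is what drives the local estimates toward the efficiency of the global estimator.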

