Depersonalized Federated Learning: Tackling Statistical Heterogeneity by Alternating Stochastic Gradient Descent

10/07/2022
by   Yujie Zhou, et al.

Federated learning (FL) has attracted increasing attention recently, as it enables distributed devices to cooperatively train a common machine learning (ML) model for intelligent inference without sharing raw data. However, the data held by the participating devices are typically non-independent and identically distributed (non-i.i.d.), which slows the convergence of the FL training process. To address this issue, we propose a new FL method that significantly mitigates statistical heterogeneity through a depersonalized mechanism. In particular, we decouple the global and local objectives and optimize them by performing stochastic gradient descent alternately, which reduces the variance accumulated in the global model during the local update phases and hence accelerates FL convergence. We then analyze the proposed method in detail and show that it converges at a sublinear rate in the general non-convex setting. Finally, extensive experiments on public datasets verify the effectiveness of the proposed method.
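The abstract only sketches the idea of alternating between local and global objectives to curb client drift under non-i.i.d. data. The toy below is a minimal illustrative sketch of that alternating-SGD pattern, not the paper's actual algorithm: each client holds a simple quadratic objective with a distinct optimum (standing in for non-i.i.d. data), and every local SGD step on the client objective is interleaved with a "depersonalizing" step that pulls the iterate back toward the shared model. All names and the quadratic losses are hypothetical choices for the demo.

```python
import numpy as np

# Illustrative sketch only (not the paper's exact method). Client k holds
# f_k(w) = 0.5 * ||w - c_k||^2, minimized at its own c_k; the global
# objective, the average of the f_k, is minimized at mean(c_k).
rng = np.random.default_rng(0)
num_clients, dim = 5, 3
targets = rng.normal(size=(num_clients, dim))   # per-client optima c_k

def local_grad(w, k):
    return w - targets[k]                       # gradient of f_k

w_global = np.zeros(dim)
lr, rounds, local_steps = 0.1, 200, 5
for _ in range(rounds):
    updates = []
    for k in range(num_clients):
        w = w_global.copy()
        for _ in range(local_steps):
            # Alternate: one SGD step on the client's local objective ...
            w -= lr * local_grad(w, k)
            # ... then a depersonalizing step toward the shared model,
            # limiting the drift (variance) each client contributes.
            w -= lr * (w - w_global)
        updates.append(w)
    w_global = np.mean(updates, axis=0)         # server-side averaging

# The shared model settles near the global optimum mean(c_k) rather than
# drifting toward any single client's optimum.
print(np.allclose(w_global, targets.mean(axis=0), atol=1e-2))  # True
```

Without the depersonalizing step, more aggressive local training would let each client overfit its own `c_k`, and plain averaging would converge more slowly under heterogeneity; the interleaved pull-back step is the alternating mechanism the abstract alludes to.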


