Federated Learning Using Variance Reduced Stochastic Gradient for Probabilistically Activated Agents

10/25/2022
by M. R. Rostami, et al.

This paper proposes an algorithm for Federated Learning (FL) with a two-layer structure that achieves both variance reduction and a faster convergence rate to an optimal solution in the setting where each agent has an arbitrary probability of selection in each iteration. In distributed machine learning, FL is a practical tool when privacy matters. When FL is deployed in an environment with irregular connectivity among agents (devices), obtaining a trained model both economically and quickly can be challenging. The first layer of our algorithm corresponds to the propagation of the model parameters across agents, carried out by the server. In the second layer, each agent performs its local update with a stochastic, variance-reduced technique called Stochastic Variance Reduced Gradient (SVRG). We leverage the concept of variance reduction from stochastic optimization in the agents' local update step to reduce the variance introduced by stochastic gradient descent (SGD). We provide a convergence bound for our algorithm which improves the rate from O(1/√K) to O(1/K) by using a constant step size. We demonstrate the performance of our algorithm with numerical examples.
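
To make the two-layer structure concrete, below is a minimal sketch (not the authors' implementation) of federated SVRG with probabilistically activated agents, run on a synthetic least-squares problem. The agent count, the activation probabilities p, the step size, and helper names such as local_svrg_update are illustrative assumptions rather than details from the paper.

    # Minimal sketch of federated SVRG with probabilistically activated agents,
    # illustrated on a synthetic least-squares problem.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic least-squares data split across agents (assumed setup).
    n_agents, n_samples, dim = 5, 40, 10
    w_true = rng.normal(size=dim)
    data = []
    for _ in range(n_agents):
        A = rng.normal(size=(n_samples, dim))
        b = A @ w_true + 0.01 * rng.normal(size=n_samples)
        data.append((A, b))

    def full_grad(A, b, w):
        # Gradient of (1/2n)||Aw - b||^2 over the agent's local data.
        return A.T @ (A @ w - b) / len(b)

    def sample_grad(A, b, w, idx):
        # Gradient of the single-sample loss at index idx.
        a_i, b_i = A[idx], b[idx]
        return a_i * (a_i @ w - b_i)

    def local_svrg_update(A, b, w_server, inner_steps=20, eta=0.05):
        # SVRG local routine: anchor a snapshot at the received server model,
        # then take variance-reduced stochastic steps with a constant step size.
        w_snap = w_server.copy()
        mu = full_grad(A, b, w_snap)          # full gradient at the snapshot
        w = w_server.copy()
        for _ in range(inner_steps):
            idx = rng.integers(len(b))
            g = sample_grad(A, b, w, idx) - sample_grad(A, b, w_snap, idx) + mu
            w -= eta * g
        return w

    # Server loop: agent i participates in a round with probability p[i]
    # (assumed values; the paper allows arbitrary per-agent probabilities).
    p = rng.uniform(0.3, 0.9, size=n_agents)
    w_global = np.zeros(dim)
    for rnd in range(100):
        active = [i for i in range(n_agents) if rng.random() < p[i]]
        if not active:
            continue  # no agent woke up this round; keep the current model
        updates = [local_svrg_update(*data[i], w_global) for i in active]
        w_global = np.mean(updates, axis=0)   # simple average of returned models

    print("distance to w_true:", np.linalg.norm(w_global - w_true))

The inner loop is the standard SVRG correction (stochastic gradient at the current iterate, minus the stochastic gradient at the snapshot, plus the full gradient at the snapshot), which is what allows a constant step size instead of the decaying one SGD would need.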

Related research

10/08/2019  Accelerating Federated Learning via Momentum Gradient Descent
Federated learning (FL) provides a communication-efficient approach to s...

12/05/2022  Distributed Stochastic Gradient Descent with Cost-Sensitive and Strategic Agents
This study considers a federated learning setup where cost-sensitive and...

10/07/2022  Depersonalized Federated Learning: Tackling Statistical Heterogeneity by Alternating Stochastic Gradient Descent
Federated learning (FL) has gained increasing attention recently, which ...

09/18/2020  Federated Learning with Nesterov Accelerated Gradient Momentum Method
Federated learning (FL) is a fast-developing technique that allows multi...

08/06/2023  Serverless Federated AUPRC Optimization for Multi-Party Collaborative Imbalanced Data Mining
Multi-party collaborative training, such as distributed learning and fed...

04/24/2023  More Communication Does Not Result in Smaller Generalization Error in Federated Learning
We study the generalization error of statistical learning models in a Fe...

08/02/2023  Compressed and distributed least-squares regression: convergence rates with applications to Federated Learning
In this paper, we investigate the impact of compression on stochastic gr...
