SAGDA: Achieving 𝒪(ε^-2) Communication Complexity in Federated Min-Max Learning

10/02/2022
by   Haibo Yang, et al.

To lower the communication complexity of federated min-max learning, a natural approach is to use infrequent communication (through multiple local updates), just as in conventional federated learning. However, due to the more complicated inner-outer problem structure of federated min-max learning, the theoretical understanding of its communication complexity under infrequent communication remains very limited in the literature. This is particularly true for settings with non-i.i.d. datasets and partial client participation. To address this challenge, in this paper we propose a new algorithmic framework called stochastic sampling averaging gradient descent ascent (SAGDA), which i) assembles stochastic gradient estimators from randomly sampled clients as control variates and ii) leverages separate learning rates on the server and client sides. We show that SAGDA achieves a linear speedup in terms of both the number of clients and the number of local update steps, which yields an 𝒪(ϵ^-2) communication complexity that is orders of magnitude lower than the state of the art. Interestingly, by noting that the standard federated stochastic gradient descent ascent (FSGDA) is in fact a control-variate-free special case of SAGDA, we immediately obtain an 𝒪(ϵ^-2) communication complexity result for FSGDA. Therefore, through the lens of SAGDA, we also advance the current understanding of the communication complexity of the standard FSGDA method for federated min-max learning.
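The abstract only describes SAGDA at a high level, so the sketch below illustrates the ingredients it names: local gradient descent ascent steps, partial client participation, separate client- and server-side learning rates, and control variates built from sampled clients' stochastic gradients. All function names, the particular control-variate rule, and the hyperparameters are illustrative assumptions for exposition, not the paper's pseudocode.

```python
import numpy as np


def local_gda(x, y, stoch_grad, corr_x, corr_y, eta_c, num_local_steps):
    """Run local gradient descent (on x) / ascent (on y) steps on one client.

    stoch_grad(x, y) returns a stochastic estimate (g_x, g_y) of the client's
    local gradients; (corr_x, corr_y) is a control-variate correction
    (zero vectors recover plain FSGDA-style local updates).
    """
    for _ in range(num_local_steps):
        g_x, g_y = stoch_grad(x, y)
        x = x - eta_c * (g_x + corr_x)  # descent step on the min variable
        y = y + eta_c * (g_y + corr_y)  # ascent step on the max variable
    return x, y


def sagda_style_round(x, y, client_grads, eta_c, eta_s,
                      num_local_steps, num_sampled):
    """One communication round with partial client participation.

    client_grads is a list of callables, one per client, each returning a
    stochastic gradient pair (g_x, g_y) at the queried point.
    """
    n = len(client_grads)
    sampled = np.random.choice(n, size=num_sampled, replace=False)

    # Control variates assembled from the sampled clients' stochastic gradients
    # at the current server point (an illustrative choice, not the paper's rule).
    grads = [client_grads[i](x, y) for i in sampled]
    avg_gx = sum(g[0] for g in grads) / num_sampled
    avg_gy = sum(g[1] for g in grads) / num_sampled

    dx = np.zeros_like(x)
    dy = np.zeros_like(y)
    for i, (g_x, g_y) in zip(sampled, grads):
        # Each client corrects its local drift toward the sampled-average gradient.
        xi, yi = local_gda(
            x, y, client_grads[i],
            corr_x=avg_gx - g_x, corr_y=avg_gy - g_y,
            eta_c=eta_c, num_local_steps=num_local_steps,
        )
        dx += (xi - x) / num_sampled
        dy += (yi - y) / num_sampled

    # Server-side learning rate applied to the averaged local progress.
    return x + eta_s * dx, y + eta_s * dy
```

Setting the correction terms to zero in local_gda recovers the control-variate-free special case that the abstract identifies with standard FSGDA.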


research
08/10/2021

FedPAGE: A Fast Local Stochastic Gradient Method for Communication-Efficient Federated Learning

Federated Averaging (FedAvg, also known as Local-SGD) (McMahan et al., 2...
research
12/15/2020

CosSGD: Nonlinear Quantization for Communication-efficient Federated Learning

Federated learning facilitates learning across clients without transferr...
research
02/17/2022

Federated Stochastic Gradient Descent Begets Self-Induced Momentum

Federated learning (FL) is an emerging machine learning method that can ...
research
02/20/2022

Personalized Federated Learning with Exact Stochastic Gradient Descent

In Federated Learning (FL), datasets across clients tend to be heterogen...
research
08/07/2022

Federated Adversarial Learning: A Framework with Convergence Analysis

Federated learning (FL) is a trending training paradigm to utilize decen...
research
02/20/2023

TAMUNA: Accelerated Federated Learning with Local Training and Partial Participation

In federated learning, a large number of users are involved in a global ...
research
02/09/2021

Federated Deep AUC Maximization for Heterogeneous Data with a Constant Communication Complexity

Deep AUC (area under the ROC curve) Maximization (DAM) has attracted much a...
