Distributionally Robust Federated Averaging

02/25/2021
by Yuyang Deng, et al.

In this paper, we study communication-efficient distributed algorithms for distributionally robust federated learning via periodic averaging with adaptive sampling. In contrast to standard empirical risk minimization, the minimax structure of the underlying optimization problem poses a key difficulty: the global parameter that controls the mixture of local losses can only be updated infrequently, at communication rounds. To compensate for this, we propose a Distributionally Robust Federated Averaging (DRFA) algorithm that employs a novel snapshotting scheme to approximate the accumulated history of gradients with respect to the mixing parameter. We analyze the convergence rate of DRFA in both convex-linear and nonconvex-linear settings. We also generalize the proposed idea to objectives with regularization on the mixture parameter and propose a proximal variant, dubbed DRFA-Prox, with provable convergence rates. We further analyze an alternative optimization method for regularized cases in strongly-convex-strongly-concave and nonconvex (under the PL condition)-strongly-concave settings. To the best of our knowledge, this paper is the first to solve distributionally robust federated learning with reduced communication, and to analyze the efficiency of local descent methods on distributed minimax problems. We provide corroborating experimental evidence for our theoretical results in federated learning settings.
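To make the periodic-averaging and snapshotting ideas concrete, the sketch below illustrates one communication round of a DRFA-style update in Python. It is a minimal illustration under stated assumptions, not the paper's reference implementation: the `clients` objects with `grad` and `loss` methods, the client and sample counts, and all step sizes are hypothetical placeholders. Each sampled client runs tau local SGD steps and records its model at a randomly chosen snapshot step; the server averages the final models (periodic averaging) and uses losses evaluated at the averaged snapshot, scaled by tau, as a surrogate for the mixing parameter's accumulated gradient.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of a vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    idx = np.arange(1, len(v) + 1)
    rho = idx[u - css / idx > 0][-1]
    return np.maximum(v - css[rho - 1] / rho, 0.0)

def drfa_round(clients, w, lam, tau=10, m=5, eta=0.05, gamma=0.5):
    """One communication round of a DRFA-style update (illustrative sketch).

    `clients` is a hypothetical list of objects exposing grad(w), a
    stochastic gradient of the local loss, and loss(w); tau is the number
    of local steps, m the number of sampled clients, and eta / gamma the
    step sizes for the model and the mixing parameter, respectively.
    """
    n = len(clients)
    # Sample clients for local training according to the current mixture lam.
    workers = np.random.choice(n, size=m, replace=True, p=lam)
    t_snap = np.random.randint(1, tau + 1)  # random snapshot step
    finals, snapshots = [], []
    for i in workers:
        wi = w.copy()
        for t in range(1, tau + 1):
            wi = wi - eta * clients[i].grad(wi)  # local SGD step on client i
            if t == t_snap:
                snapshots.append(wi.copy())  # model snapshot at the random step
        finals.append(wi)
    w_new = np.mean(finals, axis=0)      # periodic (federated) averaging
    w_bar = np.mean(snapshots, axis=0)   # averaged random snapshot
    # For the linear mixture objective sum_i lam_i f_i(w), the gradient with
    # respect to lam is the vector of local losses. Estimate it on a
    # uniformly sampled subset and scale by tau to stand in for the tau
    # updates lam could not take between communication rounds.
    probe = np.random.choice(n, size=m, replace=False)
    g_lam = np.zeros(n)
    for i in probe:
        g_lam[i] = (n / m) * clients[i].loss(w_bar)
    lam_new = project_to_simplex(lam + gamma * tau * g_lam)
    return w_new, lam_new
```

For the regularized objectives handled by DRFA-Prox, the final projected ascent step on lam would be replaced by a proximal step that also accounts for the regularizer on the mixture parameter; the rest of the round would proceed the same way.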

Related research

10/31/2019 · On the Convergence of Local Descent Methods in Federated Learning
In federated distributed learning, the goal is to optimize a global trai...

06/14/2021 · Decentralized Personalized Federated Min-Max Problems
Personalized Federated Learning has recently seen tremendous progress, a...

05/11/2020 · FedSplit: An algorithmic framework for fast federated optimization
Motivated by federated learning, we consider the hub-and-spoke model of ...

07/02/2020 · Federated Learning with Compression: Unified Analysis and Sharp Guarantees
In federated learning, communication cost is often a critical bottleneck...

09/28/2019 · FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization
Federated learning is a new distributed machine learning approach, where...

11/01/2019 · Robust Federated Learning with Noisy Communication
Federated learning is a communication-efficient training process that al...

06/02/2022 · A Communication-efficient Algorithm with Linear Convergence for Federated Minimax Learning
In this paper, we study a large-scale multi-agent minimax optimization p...
