A Communication-efficient Algorithm with Linear Convergence for Federated Minimax Learning

06/02/2022
by Zhenyu Sun, et al.

In this paper, we study a large-scale multi-agent minimax optimization problem, which models many interesting applications in statistical learning and game theory, including Generative Adversarial Networks (GANs). The overall objective is a sum of agents' private local objective functions. We first analyze an important special case, the empirical minimax problem, where the overall objective approximates the true population minimax risk using statistical samples. We provide generalization bounds for learning with this objective through Rademacher complexity analysis. We then focus on the federated setting, where agents can perform local computation and communicate with a central server. Most existing federated minimax algorithms either require communication at every iteration or lack performance guarantees, with the exception of Local Stochastic Gradient Descent Ascent (SGDA), a multiple-local-update descent-ascent algorithm that guarantees convergence under a diminishing stepsize. By analyzing Local SGDA under the ideal condition of no gradient noise, we show that it generally cannot guarantee exact convergence with constant stepsizes and thus suffers from slow convergence rates. To address this issue, we propose FedGDA-GT, an improved Federated (Fed) Gradient Descent Ascent (GDA) method based on Gradient Tracking (GT). When the local objectives are Lipschitz smooth and strongly-convex-strongly-concave, we prove that FedGDA-GT converges linearly with a constant stepsize to a global ϵ-approximate solution within 𝒪(log(1/ϵ)) rounds of communication, matching the time complexity of the centralized GDA method. Finally, we numerically show that FedGDA-GT outperforms Local SGDA.
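To make the setting concrete, below is a minimal Python/NumPy sketch of a federated descent-ascent loop with a gradient-tracking-style correction on a toy strongly-convex-strongly-concave quadratic minimax problem. The abstract does not spell out the update rule, so the problem data, the stepsize eta, the local step count K, and the exact form of the correction are illustrative assumptions for this sketch, not the paper's precise FedGDA-GT specification.

# Illustrative sketch only: each agent runs K local gradient descent-ascent steps,
# correcting its local gradients toward the global gradient from the start of the
# round (a gradient-tracking-style correction). The toy quadratic objectives and
# all constants are assumptions for this sketch, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_agents, d, K, T, eta = 5, 3, 10, 50, 0.05   # agents, dimension, local steps, rounds, stepsize

def rand_spd():
    """Random symmetric positive-definite matrix (well-conditioned)."""
    M = np.eye(d) + 0.1 * rng.standard_normal((d, d))
    return 0.5 * (M + M.T) + d * np.eye(d)

# Local objectives f_i(x, y) = 0.5 x'A_i x + x'C_i y - 0.5 y'B_i y:
# smooth, strongly convex in x and strongly concave in y, saddle point at the origin.
A = [rand_spd() for _ in range(n_agents)]
B = [rand_spd() for _ in range(n_agents)]
C = [rng.standard_normal((d, d)) for _ in range(n_agents)]

def grads_i(i, x, y):
    """Local gradients (grad_x f_i, grad_y f_i)."""
    return A[i] @ x + C[i] @ y, C[i].T @ x - B[i] @ y

def grads_global(x, y):
    """Gradients of the averaged objective f = (1/n) sum_i f_i."""
    gx = sum(grads_i(i, x, y)[0] for i in range(n_agents)) / n_agents
    gy = sum(grads_i(i, x, y)[1] for i in range(n_agents)) / n_agents
    return gx, gy

x, y = np.ones(d), np.ones(d)
for t in range(T):                                # T communication rounds
    gx_glob, gy_glob = grads_global(x, y)         # server aggregates, broadcasts with (x, y)
    xs, ys = [], []
    for i in range(n_agents):
        gx_anchor, gy_anchor = grads_i(i, x, y)   # local gradient at the round's starting point
        xi, yi = x.copy(), y.copy()
        for _ in range(K):                        # K local GDA steps with a constant stepsize
            gx, gy = grads_i(i, xi, yi)
            xi = xi - eta * (gx - gx_anchor + gx_glob)   # corrected descent on x
            yi = yi + eta * (gy - gy_anchor + gy_glob)   # corrected ascent on y
        xs.append(xi)
        ys.append(yi)
    x, y = np.mean(xs, axis=0), np.mean(ys, axis=0)      # server averages local iterates

print("distance to the saddle point:", np.linalg.norm(np.concatenate([x, y])))

The intuition this sketch tries to capture: the correction makes each agent's first local step an exact global GDA step and keeps later local steps close to it, which is one way to counteract the client drift that prevents Local SGDA from converging exactly under constant stepsizes.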


