Federated Minimax Optimization: Improved Convergence Analyses and Algorithms

03/09/2022
by   Pranay Sharma, et al.

In this paper, we consider nonconvex minimax optimization, which is gaining prominence in many modern machine learning applications, such as GANs. Large-scale, edge-based collection of training data in these applications calls for communication-efficient distributed optimization algorithms, such as those used in federated learning, to process the data. We analyze Local Stochastic Gradient Descent Ascent (Local SGDA), the local-update version of the SGDA algorithm. SGDA is the core algorithm used in minimax optimization, but its behavior in a distributed setting is not well understood. We prove that Local SGDA achieves order-optimal sample complexity for several classes of nonconvex-concave and nonconvex-nonconcave minimax problems, and that it enjoys linear speedup with respect to the number of clients. We provide a novel and tighter analysis, which improves the convergence and communication guarantees in the existing literature. For nonconvex-PL and nonconvex-one-point-concave functions, we improve the existing complexity results for centralized minimax problems. Furthermore, we propose a momentum-based local-update algorithm that has the same convergence guarantees but, as demonstrated in our experiments, outperforms Local SGDA.
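To make the local-update structure concrete, here is a minimal sketch of Local SGDA: each client runs several simultaneous descent (on x) / ascent (on y) steps with stochastic gradients, and a server periodically averages the local iterates. Everything below is illustrative, not from the paper: the per-client objective is a simple convex-concave quadratic toy, f_i(x, y) = 0.5||x - a_i||^2 + <x, y> - 0.5||y - b_i||^2, and the names (grad_x, grad_y, TAU, etc.) are assumptions; the paper itself targets nonconvex settings with gradients computed from client data.

```python
# Minimal sketch of Local SGDA on a convex-concave quadratic toy problem.
# Toy objective and all hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

NUM_CLIENTS = 8
DIM = 5
TAU = 10           # local steps between communication rounds
ROUNDS = 200
LR_X, LR_Y = 0.05, 0.05
NOISE = 0.1        # std of stochastic-gradient noise

# Client-specific data, so that client objectives are heterogeneous.
a = rng.normal(size=(NUM_CLIENTS, DIM))
b = rng.normal(size=(NUM_CLIENTS, DIM))

def grad_x(i, x, y):
    # Noisy gradient of f_i w.r.t. the min variable x.
    return (x - a[i]) + y + NOISE * rng.normal(size=DIM)

def grad_y(i, x, y):
    # Noisy gradient of f_i w.r.t. the max variable y.
    return x - (y - b[i]) + NOISE * rng.normal(size=DIM)

x_server = np.zeros(DIM)
y_server = np.zeros(DIM)

for r in range(ROUNDS):
    xs, ys = [], []
    for i in range(NUM_CLIENTS):
        # Each client starts from the current server model.
        x, y = x_server.copy(), y_server.copy()
        for _ in range(TAU):
            gx = grad_x(i, x, y)
            gy = grad_y(i, x, y)
            x -= LR_X * gx    # descent on the min variable
            y += LR_Y * gy    # ascent on the max variable
        xs.append(x)
        ys.append(y)
    # Communication round: the server averages the local iterates.
    x_server = np.mean(xs, axis=0)
    y_server = np.mean(ys, axis=0)

print("x:", x_server)
print("y:", y_server)
```

In this sketch, the momentum-based variant the paper proposes would correspond, roughly, to each client maintaining local momentum buffers and updating with them instead of the raw stochastic gradients (e.g., m_x = beta * m_x + (1 - beta) * gx); see the paper for the exact algorithm and its guarantees.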



Related research

01/22/2020 · Zeroth-Order Algorithms for Nonconvex Minimax Problems with Improved Complexities
In this paper, we study zeroth-order algorithms for minimax optimization...

06/02/2022 · A Communication-efficient Algorithm with Linear Convergence for Federated Minimax Learning
In this paper, we study a large-scale multi-agent minimax optimization p...

05/29/2021 · A Federated Learning Framework for Nonconvex-PL Minimax Problems
We consider a general class of nonconvex-PL minimax problems in the cros...

06/15/2020 · The Landscape of Nonconvex-Nonconcave Minimax Optimization
Minimax optimization has become a central tool for modern machine learni...

12/31/2020 · Federated Nonconvex Sparse Learning
Nonconvex sparse learning plays an essential role in many areas, such as...

06/08/2023 · Communication-Efficient Gradient Descent-Accent Methods for Distributed Variational Inequalities: Unified Analysis and Local Updates
Distributed and federated learning algorithms and techniques associated ...

03/31/2020 · Second-Order Guarantees in Centralized, Federated and Decentralized Nonconvex Optimization
Rapid advances in data collection and processing capabilities have allow...
