Achieving Linear Speedup in Decentralized Stochastic Compositional Minimax Optimization

07/25/2023
by Hongchang Gao, et al.

The stochastic compositional minimax problem has attracted a surge of attention in recent years because it covers many emerging machine learning models. Meanwhile, with the growth of distributed data, optimizing this kind of problem in the decentralized setting has become a pressing need. However, the compositional structure of the loss function poses unique challenges for designing efficient decentralized optimization algorithms. In particular, our study shows that the standard gossip communication strategy cannot achieve linear speedup for decentralized compositional minimax problems because of the large consensus error in the inner-level function estimate. To address this issue, we develop a novel decentralized stochastic compositional gradient descent ascent with momentum algorithm that reduces the consensus error of the inner-level function estimate. Our theoretical results demonstrate that it achieves linear speedup with respect to the number of workers. We believe this algorithmic design could benefit the development of decentralized compositional optimization more broadly. Finally, we apply our method to the imbalanced classification problem, and extensive experimental results provide evidence for the effectiveness of our algorithm.
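To make the general scheme concrete, below is a minimal NumPy sketch of decentralized stochastic compositional gradient descent ascent with momentum over a gossip (ring) topology. It is not the paper's exact algorithm: the toy compositional objective, the step sizes, and all variable names (`W`, `u`, `mx`, `my`, etc.) are illustrative assumptions. Each worker keeps local copies of the primal variable x, the dual variable y, and a moving-average estimate u of the inner-level function, takes a momentum-based descent/ascent step, and then gossip-averages with its neighbors.

```python
# Illustrative sketch only (assumed toy problem, not the paper's method):
# decentralized stochastic compositional gradient descent ascent with momentum.
# Toy objective per worker: min_x max_y  y * g(x) - 0.5 * y^2,
# where g(x) = E_i[(a_i^T x - b_i)^2] is an inner expectation (compositional part).
import numpy as np

rng = np.random.default_rng(0)
n_workers, n_local, d = 8, 50, 5
A = rng.normal(size=(n_workers, n_local, d))   # local features on each worker
b = rng.normal(size=(n_workers, n_local))      # local targets on each worker

# Doubly stochastic ring mixing (gossip) matrix: each worker averages
# itself and its two neighbours with equal weight.
W = np.zeros((n_workers, n_workers))
for k in range(n_workers):
    for j in (k - 1, k, k + 1):
        W[k, j % n_workers] = 1.0 / 3.0

x = np.zeros((n_workers, d))      # primal variable, one local copy per worker
y = np.zeros(n_workers)           # dual (max) variable, one per worker
u = np.zeros(n_workers)           # running estimate of the inner function g(x)
mx = np.zeros_like(x)             # momentum buffer for x
my = np.zeros_like(y)             # momentum buffer for y
lr_x, lr_y, alpha, beta = 0.02, 0.02, 0.2, 0.2   # assumed hyperparameters

for t in range(300):
    i = rng.integers(0, n_local, size=n_workers)   # one stochastic sample per worker
    gx = np.zeros_like(x)
    gy = np.zeros_like(y)
    for k in range(n_workers):
        a, tgt = A[k, i[k]], b[k, i[k]]
        r = a @ x[k] - tgt
        g_val = r ** 2                 # stochastic value of the inner function
        g_jac = 2.0 * r * a            # stochastic Jacobian of the inner function
        u[k] = (1 - beta) * u[k] + beta * g_val   # moving-average inner estimator
        # Outer function f(u, y) = y*u - 0.5*y^2  =>  df/du = y,  df/dy = u - y
        gx[k] = g_jac * y[k]           # chain rule through the inner estimate
        gy[k] = u[k] - y[k]
    # Momentum updates followed by local descent (x) / ascent (y) steps.
    mx = (1 - alpha) * mx + alpha * gx
    my = (1 - alpha) * my + alpha * gy
    x -= lr_x * mx
    y += lr_y * my
    # Gossip step: mix local copies with neighbours. Averaging the inner
    # estimates u as well (not only x and y) is one way to keep their
    # consensus error small, in the spirit of the paper's discussion.
    x, y, u = W @ x, W @ y, W @ u

print("consensus error of x:", np.linalg.norm(x - x.mean(axis=0)))
```

The key structural point the sketch tries to convey is that the inner-level quantity u is itself a stochastic estimate held separately on every worker, so its disagreement across workers contributes its own consensus error on top of that of x and y; how that error is controlled is exactly what the paper's algorithm and analysis address.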

