Adaptive Federated Minimax Optimization with Lower Complexities

11/14/2022
by Feihu Huang, et al.

Federated learning is a popular distributed and privacy-preserving machine learning paradigm. Meanwhile, minimax optimization is an effective hierarchical model widely used in machine learning. Recently, some federated learning methods have been proposed to solve distributed minimax optimization problems. However, these federated minimax methods still suffer from high gradient and communication complexities. To fill this gap, in this paper we study Nonconvex-Strongly-Concave (NSC) minimax optimization and propose a class of accelerated federated minimax optimization methods (i.e., FGDA and AdaFGDA) to solve distributed minimax problems. Specifically, our methods build on momentum-based variance-reduced and local-SGD techniques, and our adaptive algorithm (i.e., AdaFGDA) can flexibly incorporate various adaptive learning rates through a unified adaptive matrix. Theoretically, we provide a solid convergence analysis framework for our algorithms under the non-i.i.d. setting. Moreover, we prove that our algorithms achieve a lower gradient (i.e., SFO) complexity of Õ(ϵ^-3) with a lower communication complexity of Õ(ϵ^-2) in finding an ϵ-stationary point of the NSC minimax problem. Experimentally, we conduct distributed fair learning and robust federated learning tasks to verify the efficiency of our methods.
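
As a rough illustration of the ideas summarized above, the following minimal sketch (not the authors' implementation) shows a momentum-based variance-reduced gradient descent ascent update on a toy nonconvex-strongly-concave objective for a single client, with an RMSProp-like diagonal matrix standing in for the unified adaptive matrix. The objective, the noise model, the step sizes, and names such as local_vr_momentum_gda are assumptions made purely for illustration.

```python
import numpy as np

# Toy nonconvex-strongly-concave objective on a single client:
#   f(x, y) = 0.5 * x^T A x + x^T B y - 0.5 * mu * ||y||^2
# A is symmetric but generally indefinite (nonconvex in x); the -mu/2 ||y||^2
# term makes f strongly concave in y.
rng = np.random.default_rng(0)
d = 5
A = rng.normal(size=(d, d))
A = 0.1 * (A + A.T)
B = rng.normal(size=(d, d))
mu = 1.0

def grad_x(x, y, xi):
    """Stochastic gradient w.r.t. x; xi models minibatch noise."""
    return A @ x + B @ y + xi

def grad_y(x, y, xi):
    """Stochastic gradient w.r.t. y; xi models minibatch noise."""
    return B.T @ x - mu * y + xi

def local_vr_momentum_gda(x, y, steps=50, eta=0.05, lam=0.05,
                          alpha=0.1, beta=0.9, eps=1e-3, sigma=0.1):
    """Momentum-based variance-reduced GDA with a diagonal adaptive matrix.

    v, w are STORM-style gradient estimators; a_diag plays the role of the
    unified adaptive matrix (here an RMSProp-like diagonal, one of the many
    choices such a framework can accommodate).
    """
    v = grad_x(x, y, sigma * rng.normal(size=d))
    w = grad_y(x, y, sigma * rng.normal(size=d))
    a_diag = np.ones(d)
    for _ in range(steps):
        x_new = x - eta * v / (np.sqrt(a_diag) + eps)  # adaptive descent on x
        y_new = y + lam * w                            # ascent on y
        # Draw one fresh sample and evaluate it at both the new and old
        # iterates, as the variance-reduced momentum estimator requires.
        xi_x = sigma * rng.normal(size=d)
        xi_y = sigma * rng.normal(size=d)
        v = grad_x(x_new, y_new, xi_x) + (1 - alpha) * (v - grad_x(x, y, xi_x))
        w = grad_y(x_new, y_new, xi_y) + (1 - alpha) * (w - grad_y(x, y, xi_y))
        a_diag = beta * a_diag + (1 - beta) * v ** 2   # adaptive-matrix statistics
        x, y = x_new, y_new
    return x, y

# In a federated round, each client would run such local updates between
# communications, with the server averaging iterates (and estimator states).
x_T, y_T = local_vr_momentum_gda(np.zeros(d), np.zeros(d))
print(np.linalg.norm(grad_x(x_T, y_T, np.zeros(d))))
```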
