Variance-reduced accelerated methods for decentralized stochastic double-regularized nonconvex strongly-concave minimax problems
In this paper, we consider the decentralized, stochastic nonconvex strongly-concave (NCSC) minimax problem with nonsmooth regularization terms on both primal and dual variables, wherein a network of m computing agents collaborates via peer-to-peer communications. We consider the setting where the coupling function has an expectation or finite-sum form and the two regularizers are convex functions applied separately to the primal and dual variables. Our algorithmic framework introduces a Lagrangian multiplier to eliminate the consensus constraint on the dual variable. Combined with variance-reduction (VR) techniques, the proposed method, named VRLM, requires only a single neighbor communication per iteration and achieves an 𝒪(κ^3 ε^-3) sample complexity in the general stochastic setting, with either a big-batch or small-batch VR option, where κ is the condition number of the problem and ε is the desired solution accuracy. With big-batch VR, it additionally achieves an 𝒪(κ^2 ε^-2) communication complexity. In the special finite-sum setting, our method with big-batch VR achieves an 𝒪(n + √(n) κ^2 ε^-2) sample complexity and 𝒪(κ^2 ε^-2) communication complexity, where n is the number of components in the finite sum. All complexity results match the best-known results achieved by existing methods for special cases of the problem we consider. To the best of our knowledge, this is the first work that provides convergence guarantees for NCSC minimax problems with general convex nonsmooth regularizers applied to both the primal and dual variables in the decentralized stochastic setting. Numerical experiments are conducted on two machine learning problems. Our code is available at https://github.com/RPI-OPT/VRLM.
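The abstract names several ingredients (decentralized agents mixing iterates with neighbors, a small-batch recursive VR gradient estimator, and proximal steps for the nonsmooth regularizers) without algorithmic detail. The sketch below is a minimal illustration of how those ingredients fit together; it is not the paper's VRLM algorithm. In particular, it uses gossip averaging on both primal and dual variables rather than the paper's Lagrangian-multiplier treatment of the dual consensus constraint, and the toy objective, step sizes, and STORM-style estimator are all assumptions made for illustration.

```python
import numpy as np

def soft_threshold(v, tau):
    # Prox operator of tau * ||.||_1 (the primal regularizer in this toy example).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

rng = np.random.default_rng(0)
m, d, p, N = 4, 10, 5, 20      # agents, primal dim, dual dim, samples per agent
mu, lam = 1.0, 0.01            # strong-concavity and l1-regularization weights
eta, gamma, beta = 0.05, 0.05, 0.5  # step sizes and VR momentum (assumed values)

# Doubly stochastic mixing matrix for a ring network of m agents.
W = np.zeros((m, m))
for i in range(m):
    W[i, i] = 0.5
    W[i, (i - 1) % m] = 0.25
    W[i, (i + 1) % m] = 0.25

A = rng.normal(size=(m, N, p, d))   # local data: N samples per agent
X = rng.normal(size=(m, d))          # primal iterates, one row per agent
Y = rng.normal(size=(m, p))          # dual iterates, one row per agent
Vx = np.zeros((m, d)); Vy = np.zeros((m, p))  # VR gradient estimates

def stoch_grads(i, x, y, idx):
    # Minibatch gradients of the toy NCSC coupling
    # f_i(x, y) = mean_j [ y^T A_ij x ] - (mu/2)||y||^2 (strongly concave in y).
    Ab = A[i, idx].mean(axis=0)      # (p, d) minibatch average
    return Ab.T @ y, Ab @ x - mu * y

for t in range(100):
    Xmix, Ymix = W @ X, W @ Y        # one round of neighbor communication
    X_new, Y_new = np.empty_like(X), np.empty_like(Y)
    Vx_new, Vy_new = np.empty_like(Vx), np.empty_like(Vy)
    for i in range(m):
        idx = rng.choice(N, size=4, replace=False)  # small batch
        gx, gy = stoch_grads(i, X[i], Y[i], idx)
        if t == 0:
            vx, vy = gx, gy
        else:
            # STORM-style recursive variance-reduced estimates: correct the
            # previous estimate by the gradient difference on the same batch.
            gx_prev, gy_prev = stoch_grads(i, X_prev[i], Y_prev[i], idx)
            vx = gx + (1 - beta) * (Vx[i] - gx_prev)
            vy = gy + (1 - beta) * (Vy[i] - gy_prev)
        # Prox-gradient descent on x, gradient ascent on y (dual regularizer
        # taken as zero here; its prox operator would wrap this update).
        X_new[i] = soft_threshold(Xmix[i] - eta * vx, eta * lam)
        Y_new[i] = Ymix[i] + gamma * vy
        Vx_new[i], Vy_new[i] = vx, vy
    X_prev, Y_prev = X.copy(), Y.copy()
    X, Y, Vx, Vy = X_new, Y_new, Vx_new, Vy_new
```

Under the assumed step sizes, each agent touches only a 4-sample minibatch and one round of neighbor mixing per iteration, which mirrors the single-communication, small-batch regime described in the abstract.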