Nonsmooth Composite Nonconvex-Concave Minimax Optimization

by Jiajin Li, et al.

Nonconvex-concave minimax optimization has received intense interest in machine learning, with applications that include distributionally robust learning, learning with non-decomposable losses, and adversarial learning. Nevertheless, most existing works focus on gradient-descent-ascent (GDA) variants, which apply only in smooth settings. In this paper, we consider a family of minimax problems whose objective function has a nonsmooth composite structure in the minimization variable and is concave in the maximization variables. By fully exploiting this composite structure, we propose a smoothed proximal linear descent-ascent (smoothed PLDA) algorithm and establish its 𝒪(ϵ^-4) iteration complexity, which matches that of smoothed GDA <cit.> in the smooth setting. Moreover, under the mild assumption that the objective function satisfies the one-sided Kurdyka-Łojasiewicz condition with exponent θ ∈ (0,1), we further improve the iteration complexity to 𝒪(ϵ^-2max{2θ,1}). To the best of our knowledge, this is the first provably efficient algorithm for nonsmooth nonconvex-concave problems that achieves the optimal iteration complexity 𝒪(ϵ^-2) when θ ∈ (0,1/2]. As a byproduct, we discuss several stationarity concepts and clarify their relationships quantitatively, which could be of independent interest. Empirically, we illustrate the effectiveness of smoothed PLDA on variation-regularized Wasserstein distributionally robust optimization problems.
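The smoothing idea behind both smoothed GDA and smoothed PLDA is to augment the primal update with a proximal term (p/2)‖x − z‖² centered at a slowly moving auxiliary variable z, which stabilizes the primal iterates in the nonconvex direction. The sketch below illustrates this single-loop mechanism on a toy smooth objective; the objective f(x, y) = ½x² + xy − y², the step sizes, and the variable names are illustrative assumptions for exposition, not taken from the paper (whose x-step is a prox-linear step on the nonsmooth composite term rather than a plain gradient step).

```python
# Minimal sketch of the single-loop smoothing mechanism shared by
# smoothed GDA / smoothed PLDA, under assumed toy data and step sizes.
# Toy objective (NOT from the paper): f(x, y) = 0.5*x**2 + x*y - y**2,
# which is strongly concave in y; its primal function is minimized at x = 0.

def smoothed_gda_sketch(x0=2.0, y0=0.0, alpha=0.05, beta=0.1,
                        p=1.0, mu=0.5, iters=2000):
    x, y, z = x0, y0, x0
    for _ in range(iters):
        # descent step on f(x, y) + (p/2)*(x - z)**2 in x
        gx = x + y + p * (x - z)
        x = x - alpha * gx
        # ascent step on f in y
        gy = x - 2.0 * y
        y = y + beta * gy
        # slow exponential averaging of the proximal center z
        z = z + mu * (x - z)
    return x, y, z
```

With these (assumed) step sizes the iterates contract toward the stationary point (x, y) = (0, 0) of the toy problem; the averaging rate mu controls how quickly the proximal center z tracks x.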




