
A first-order augmented Lagrangian method for constrained minimax optimization

by Zhaosong Lu et al.

In this paper we study a class of constrained minimax problems. In particular, we propose a first-order augmented Lagrangian method for solving them, whose subproblems turn out to be much simpler structured minimax problems that can be suitably solved by a first-order method recently developed by the authors in [26]. Under some suitable assumptions, an operation complexity of O(ε^-4 log ε^-1), measured by its fundamental operations, is established for the first-order augmented Lagrangian method for finding an ε-KKT solution of the constrained minimax problems.
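The paper's precise algorithm and its analysis appear in the full text; purely as an illustration of the general template it builds on, the sketch below applies an augmented Lagrangian outer loop to a toy inequality-constrained minimax problem, min_x max_y (x-2)^2 + xy - y^2 subject to x <= 1. The inner minimax subproblem is solved here by plain gradient descent-ascent as a stand-in for the specialized first-order solver of [26]; the problem data, step sizes, and iteration counts are invented for this example.

```python
def solve_constrained_minimax():
    """Augmented Lagrangian sketch for:
        min_x max_y  (x - 2)^2 + x*y - y^2   s.t.  c(x) = x - 1 <= 0.
    The exact solution is x* = 1, y* = x*/2 = 0.5, with multiplier lam* = 1.5.
    """
    rho = 10.0        # penalty parameter (kept fixed for simplicity)
    lam = 0.0         # multiplier estimate for the constraint c(x) <= 0
    x, y = 0.0, 0.0   # primal minimax variables
    eta = 0.02        # step size for the inner gradient descent-ascent

    for _ in range(10):            # outer augmented Lagrangian loop
        for _ in range(5000):      # inner loop: solve the AL minimax subproblem
            # Gradient of the AL term (1/(2*rho)) * (max(0, lam + rho*c(x))^2 - lam^2)
            pen = max(0.0, lam + rho * (x - 1.0))
            gx = 2.0 * (x - 2.0) + y + pen   # d/dx of the AL function
            gy = x - 2.0 * y                 # d/dy of the AL function
            x -= eta * gx                    # descent step in x
            y += eta * gy                    # ascent step in y
        # Standard multiplier update for an inequality constraint
        lam = max(0.0, lam + rho * (x - 1.0))

    return x, y, lam
```

Each outer iteration here contracts the multiplier error geometrically, so a handful of outer loops already recovers (x, y, lam) close to (1, 0.5, 1.5); the paper's contribution is precisely in replacing the naive inner solver with one whose total operation count can be bounded as stated above.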


