A Single-Loop Gradient Descent and Perturbed Ascent Algorithm for Nonconvex Functional Constrained Optimization

07/12/2022
by Songtao Lu, et al.

Nonconvex constrained optimization problems can be used to model a number of machine learning problems, such as multi-class Neyman-Pearson classification and constrained Markov decision processes. However, such problems are challenging because both the objective and the constraints may be nonconvex, making it difficult to balance reducing the loss value against reducing the constraint violation. Although a few methods exist for solving this class of problems, all of them are double-loop or triple-loop algorithms, and they require oracles that solve subproblems to a certain accuracy, with multiple hyperparameters tuned at each iteration. In this paper, we propose a novel gradient descent and perturbed ascent (GDPA) algorithm to solve a class of smooth nonconvex inequality-constrained problems. GDPA is a primal-dual algorithm that exploits only first-order information about the objective and constraint functions to update the primal and dual variables in an alternating fashion. The key feature of the proposed algorithm is that it is single-loop, so only two step-sizes need to be tuned. We show that, under a mild regularity condition, GDPA finds Karush-Kuhn-Tucker (KKT) points of nonconvex functionally constrained problems with convergence-rate guarantees. To the best of our knowledge, this is the first single-loop algorithm that can solve general smooth nonconvex problems with nonconvex inequality constraints. Numerical results also showcase the superiority of GDPA over the best-known algorithms in terms of both the stationarity measure and the feasibility of the obtained solutions.
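To make the alternating primal-dual update concrete, here is a minimal Python sketch of a single-loop iteration in the spirit of GDPA, for minimizing f(x) subject to g(x) <= 0. The specific form of the dual perturbation (the (1 - gamma*beta) shrinkage), the step-size values, and the toy problem below are illustrative assumptions, not the paper's exact update rules or experiments.

```python
import numpy as np

def gdpa(grad_f, g, grad_g, x0, lam0, alpha=1e-2, beta=1e-2,
         gamma=1e-3, num_iters=5000):
    """Single-loop gradient descent / perturbed ascent (GDPA-style) sketch.

    Minimizes f(x) subject to g(x) <= 0 (componentwise). Each iteration
    takes one primal gradient step on the Lagrangian and one perturbed,
    projected dual ascent step. The (1 - gamma*beta) dual shrinkage is an
    assumed form of the perturbation, not necessarily the paper's rule.
    """
    x, lam = x0.astype(float), lam0.astype(float)
    for _ in range(num_iters):
        # Primal: gradient descent on L(x, lam) = f(x) + lam^T g(x).
        x = x - alpha * (grad_f(x) + grad_g(x).T @ lam)
        # Dual: perturbed ascent, projected onto the nonnegative orthant.
        lam = np.maximum(0.0, (1.0 - gamma * beta) * lam + beta * g(x))
    return x, lam

# Toy problem (illustrative): minimize ||x - (2, 2)||^2
# subject to the single constraint g(x) = ||x||^2 - 1 <= 0.
grad_f = lambda x: 2.0 * (x - np.array([2.0, 2.0]))
g = lambda x: np.array([x @ x - 1.0])
grad_g = lambda x: (2.0 * x).reshape(1, -1)   # Jacobian, shape (1, 2)

x_hat, lam_hat = gdpa(grad_f, g, grad_g, x0=np.zeros(2), lam0=np.zeros(1))
print(x_hat, g(x_hat))   # expect x_hat near (0.707, 0.707), g(x_hat) near 0
```

The perturbation gently shrinks the multipliers toward zero at every step, which is what allows a single loop to remain stable without an inner solver for the dual subproblem; consistent with the abstract, the primal step-size alpha and dual step-size beta are the only quantities that need careful tuning in this sketch.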
