First-order methods for problems with O(1) functional constraints can have almost the same convergence rate as for unconstrained problems

10/05/2020
by Yangyang Xu, et al.

First-order methods (FOMs) have recently been applied and analyzed for solving problems with complicated functional constraints. Existing works show that FOMs for functionally constrained problems have lower-order convergence rates than those for unconstrained problems. In particular, an FOM for a smooth strongly convex problem can converge linearly, while it can only converge sublinearly for a constrained problem if projection onto the constraint set is prohibited. In this paper, we point out that the slower convergence is caused by the large number of functional constraints, not by the constraints themselves. When there are only m = O(1) functional constraints, we show that an FOM can have almost the same convergence rate as for an unconstrained problem, even without projection onto the feasible set. In addition, given an ε > 0, we show that a complexity result better than an existing lower bound can be obtained if there are only m = o(ε^(-1/2)) functional constraints. This result is surprising but does not contradict the existing lower complexity bound, because we focus on a specific subclass of problems. Experimental results on quadratically constrained quadratic programs demonstrate our theory.
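To make the setting concrete, below is a minimal sketch of a classical augmented Lagrangian first-order scheme applied to a toy QCQP with a single (m = 1) convex quadratic constraint. The problem data, step sizes, and iteration counts are illustrative assumptions, not the paper's experiments or its exact algorithm.

```python
# Minimal sketch (not the paper's exact method): classical augmented
# Lagrangian with inner gradient steps on a toy QCQP, m = 1 constraint.
# All problem data and parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Strongly convex objective: f(x) = 0.5 * x^T Q x + c^T x
G = rng.standard_normal((n, n))
Q = G @ G.T + np.eye(n)                  # positive definite
c = rng.standard_normal(n)

# One convex quadratic constraint: g(x) = 0.5 * x^T A x + b^T x - r <= 0
H = rng.standard_normal((n, n))
A = H @ H.T + np.eye(n)
b = rng.standard_normal(n)
r = 1.0

f_grad = lambda x: Q @ x + c
g = lambda x: 0.5 * x @ A @ x + b @ x - r
g_grad = lambda x: A @ x + b

x = np.zeros(n)                          # feasible start: g(0) = -r < 0
lam = 0.0                                # multiplier of the single constraint
rho = 1.0                                # penalty parameter
eta = 5e-3                               # inner gradient step size

for t in range(100):                     # outer multiplier updates
    for k in range(500):                 # inner: roughly minimize the AL in x
        # gradient of L_rho(x, lam) =
        #   f(x) + (max(0, lam + rho*g(x))^2 - lam^2) / (2*rho)
        slack = max(0.0, lam + rho * g(x))
        x = x - eta * (f_grad(x) + slack * g_grad(x))
    lam = max(0.0, lam + rho * g(x))     # multiplier (dual) update

print("g(x) at the final iterate:", g(x))     # approximately <= 0
print("final multiplier estimate:", lam)
```

Note that with m = 1 the dual update touches a single scalar multiplier, so each iteration costs essentially the same as a gradient step on the unconstrained objective; this is the regime in which the paper argues an FOM can nearly match unconstrained convergence rates.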


