A Richer Theory of Convex Constrained Optimization with Reduced Projections and Improved Rates

08/11/2016
by Tianbao Yang, et al.

This paper focuses on convex constrained optimization problems in which the solution is subject to a convex inequality constraint. In particular, we target challenging problems for which both projection onto the constrained domain and linear optimization under the inequality constraint are time-consuming, rendering both projected gradient methods and conditional gradient methods (a.k.a. the Frank-Wolfe algorithm) expensive. We develop projection-reduced optimization algorithms for both smooth and non-smooth optimization with improved convergence rates under a certain regularity condition on the constraint function. We first present a general theory of optimization with only one projection. Applied to smooth optimization, it yields an O(1/ϵ) iteration complexity, which improves over the O(1/ϵ^2) iteration complexity established previously for non-smooth optimization and can be further reduced under strong convexity. We then introduce a local error bound condition and develop faster algorithms for non-strongly convex optimization at the price of a logarithmic number of projections. In particular, we achieve an iteration complexity of O(1/ϵ^{2(1-θ)}) for non-smooth optimization and O(1/ϵ^{1-θ}) for smooth optimization, where θ∈(0,1], appearing in the local error bound condition, characterizes the local growth rate of the objective around the optimal solutions. Novel applications to the constrained ℓ_1 minimization problem and a positive semi-definite constrained distance metric learning problem demonstrate that the proposed algorithms achieve significant speed-ups over previous algorithms.
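The "only one projection" idea can be illustrated with a minimal sketch. The Python code below is not the paper's algorithm (which exploits the regularity condition on the constraint to set the penalty and to control the distance of iterates to the feasible set); it only shows the structure such schemes share: run penalized (sub)gradient steps without projecting, then apply the expensive projection onto {x : g(x) ≤ 0} exactly once at the end. The function names, the penalty weight `lam`, the step-size schedule, and the toy problem are all illustrative assumptions.

```python
import numpy as np

def one_projection_subgradient(grad_f, g, subgrad_g, project, x0,
                               n_iters=2000, step0=0.1, lam=10.0):
    """Penalized subgradient descent with a single final projection.

    Iterates minimize F(x) = f(x) + lam * max(0, g(x)) without ever
    projecting; the expensive projection onto {x : g(x) <= 0} is
    applied exactly once, to the averaged iterate.
    """
    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)
    for t in range(n_iters):
        d = grad_f(x)
        if g(x) > 0:                 # penalty subgradient only when infeasible
            d = d + lam * subgrad_g(x)
        x = x - (step0 / np.sqrt(t + 1)) * d   # no projection in the loop
        avg += x
    return project(avg / n_iters)    # the single projection

# Toy usage: minimize ||x - c||^2 subject to ||x||_2 <= 1
# (solution: c / ||c||). A cheap Euclidean-ball projection stands in
# for an expensive one purely for illustration.
c = np.array([2.0, 0.0])
x_hat = one_projection_subgradient(
    grad_f=lambda x: 2.0 * (x - c),
    g=lambda x: float(x @ x) - 1.0,
    subgrad_g=lambda x: 2.0 * x,
    project=lambda x: x / max(1.0, np.linalg.norm(x)),
    x0=np.zeros(2),
)
print(x_hat)  # approximately [1.0, 0.0]
```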
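For reference, a local error bound condition of the kind invoked above is typically stated as follows (a hedged sketch; the constant c and the neighborhood S are placeholders for the paper's exact definitions):

```latex
% Local error bound / Holderian growth condition (sketch):
% there exist c > 0 and \theta \in (0, 1] such that
\mathrm{dist}(x, \mathcal{X}_*) \;\le\; c\,\bigl(f(x) - f_*\bigr)^{\theta}
\qquad \text{for all } x \in S,
```

where 𝒳_* is the optimal set, f_* the optimal value, and S a neighborhood of 𝒳_*. Larger θ means faster local growth of f around 𝒳_*; for instance, θ = 1/2 corresponds to quadratic growth (as under strong convexity), and plugging it into O(1/ϵ^{2(1-θ)}) recovers the familiar O(1/ϵ) rate for non-smooth problems.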

