Line Search for Convex Minimization

07/31/2023
by Laurent Orseau et al.

Golden-section search and bisection search are the two main principled algorithms for 1D minimization of quasiconvex (unimodal) functions. The first uses only function queries, while the second also uses gradient queries. Other algorithms, such as Newton's method, exist but require much stronger assumptions. However, to the best of our knowledge, there is no principled exact line search algorithm for general convex functions, including piecewise-linear functions and max-compositions of convex functions, that takes advantage of convexity. We propose two such algorithms: Δ-Bisection is a variant of bisection search that uses (sub)gradient information and convexity to speed up convergence, while Δ-Secant is a variant of golden-section search that uses only function queries. While bisection search reduces the x interval by a factor of 2 at every iteration, Δ-Bisection reduces the (sometimes much) smaller x^*-gap Δ^x (the x coordinates of Δ) by at least a factor of 2 at every iteration. Similarly, Δ-Secant reduces the x^*-gap by at least a factor of 2 every second function query. Moreover, the y^*-gap Δ^y (the y coordinates of Δ) provides a refined stopping criterion, which can also be used with other algorithms. Experiments on a few convex functions confirm that our algorithms are always faster than their quasiconvex counterparts, often by more than a factor of 2. We further design a quasi-exact line search algorithm based on Δ-Secant. It can be used with gradient descent as a replacement for backtracking line search, whose parameters can be finicky to tune; we provide examples to this effect on strongly convex and smooth functions. We provide convergence guarantees and confirm the efficiency of quasi-exact line search on a few univariate and multivariate convex functions.
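To make the idea of a certified y^*-gap stopping criterion concrete, below is a minimal Python sketch of a plain bisection line search (the baseline the abstract compares against), augmented with a tangent-line lower bound on the minimum that convexity makes valid. This is not the paper's Δ-Bisection algorithm; the function names, tolerances, and example objective are illustrative assumptions only.

```python
# Minimal sketch: bisection on the sign of the (sub)derivative of a 1D convex
# function, with a tangent-line lower bound on the minimum used as a y-gap
# stopping criterion. Not the paper's Delta-Bisection; illustrative only.

def bisection_line_search(f, df, lo, hi, tol_y=1e-8, max_iter=100):
    """Minimize a convex f on [lo, hi] given a (sub)derivative df.

    Assumes df(lo) <= 0 <= df(hi), i.e. the minimizer lies in [lo, hi].
    Returns an approximate minimizer.
    """
    f_lo, f_hi = f(lo), f(hi)
    g_lo, g_hi = df(lo), df(hi)
    for _ in range(max_iter):
        # Convexity: tangent lines at lo and hi lie below f. With
        # g_lo <= 0 <= g_hi, their intersection is a lower bound on the
        # minimum value over [lo, hi].
        if g_hi > g_lo:  # skip when both slopes coincide (flat piece)
            x_cross = (f_lo - f_hi + g_hi * hi - g_lo * lo) / (g_hi - g_lo)
            y_lower = f_lo + g_lo * (x_cross - lo)
            y_gap = min(f_lo, f_hi) - y_lower  # certified suboptimality
            if y_gap <= tol_y:
                break
        mid = 0.5 * (lo + hi)
        g_mid = df(mid)
        if g_mid > 0:   # minimizer lies to the left of mid
            hi, f_hi, g_hi = mid, f(mid), g_mid
        else:           # minimizer lies at or to the right of mid
            lo, f_lo, g_lo = mid, f(mid), g_mid
    return lo if f_lo <= f_hi else hi


if __name__ == "__main__":
    # Piecewise-linear convex example: f(x) = max(|x - 1|, 0.5*x + 2),
    # minimized at x = -2/3 with value 5/3.
    f = lambda x: max(abs(x - 1), 0.5 * x + 2)

    def df(x):
        # Any valid subderivative works for the bisection step.
        if abs(x - 1) > 0.5 * x + 2:
            return -1.0 if x < 1 else 1.0
        return 0.5

    print(bisection_line_search(f, df, -10.0, 10.0))
```

The y-gap computed here stops the search as soon as the best value seen is provably within `tol_y` of the true minimum on the bracket, which is the kind of refined stopping criterion the abstract attributes to Δ^y; the paper's algorithms additionally use such bounds to shrink the bracket faster than plain halving.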
