How to trap a gradient flow

01/09/2020
by Sébastien Bubeck, et al.

We consider the problem of finding an ε-approximate stationary point of a smooth function on a compact domain of R^d. In contrast with dimension-free approaches such as gradient descent, we focus here on the case where d is finite, and potentially small. This viewpoint was explored in 1993 by Vavasis, who proposed an algorithm which, for any fixed finite dimension d, improves upon the O(1/ε^2) oracle complexity of gradient descent. For example, for d=2, Vavasis' approach obtains the complexity O(1/ε). Moreover, for d=2, he also proved a lower bound of Ω(1/√(ε)) for deterministic algorithms (we extend this result to randomized algorithms). Our main contribution is an algorithm, which we call gradient flow trapping (GFT), and the analysis of its oracle complexity. In dimension d=2, GFT closes the gap with Vavasis' lower bound (up to a logarithmic factor), as we show that it has complexity O(√(log(1/ε)/ε)). In dimension d=3, we show a complexity of O(log(1/ε)/ε), improving upon Vavasis' O(1/ε^1.2). In higher dimensions, GFT has the remarkable property of being a logarithmic parallel depth strategy, in stark contrast with the polynomial depth of gradient descent or Vavasis' algorithm. In this higher dimensional regime, the total work of GFT improves quadratically upon the only other known polylogarithmic depth strategy for this problem, namely naive grid search.
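For context, the dimension-free baseline referenced above can be made concrete: for an L-smooth function, gradient descent with step size 1/L reaches a point whose gradient norm is at most ε after O(L(f(x0) − f*)/ε^2) gradient evaluations, independently of the dimension d. The sketch below illustrates this baseline only (it is not the GFT algorithm from the paper), and the quadratic objective used in the example is a hypothetical choice for demonstration.

```python
import numpy as np

def gradient_descent_to_stationary(grad_f, x0, L, eps, max_iters=100_000):
    """Run gradient descent with step 1/L until an eps-approximate
    stationary point (||grad f(x)|| <= eps) is found.

    For an L-smooth f, this needs O(L * (f(x0) - f*) / eps**2) oracle
    calls, regardless of the dimension d.
    """
    x = np.asarray(x0, dtype=float)
    for t in range(max_iters):
        g = grad_f(x)
        if np.linalg.norm(g) <= eps:      # eps-approximate stationarity reached
            return x, t
        x = x - g / L                     # standard 1/L step for an L-smooth function
    return x, max_iters

if __name__ == "__main__":
    # Hypothetical example: f(x) = 0.5 * ||x||^2 on R^2, which is 1-smooth.
    grad_f = lambda x: x
    x_final, n_oracle_calls = gradient_descent_to_stationary(
        grad_f, x0=np.array([1.0, 1.0]), L=1.0, eps=1e-3
    )
    print(n_oracle_calls, np.linalg.norm(grad_f(x_final)))
```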


