No quantum speedup over gradient descent for non-smooth convex optimization

10/05/2020
by Ankit Garg, et al.

We study the first-order convex optimization problem, in which we have black-box access to a (not necessarily smooth) function f : ℝ^n → ℝ and its (sub)gradient. Our goal is to find an ϵ-approximate minimum of f starting from a point at distance at most R from the true minimum. If f is G-Lipschitz, then the classic gradient descent algorithm solves this problem with O((GR/ϵ)^2) queries. Importantly, the number of queries is independent of the dimension n, and gradient descent is optimal in this regard: no deterministic or randomized algorithm can achieve a better query complexity that is still independent of the dimension n. In this paper we reprove the randomized lower bound of Ω((GR/ϵ)^2) using a simpler argument than previous lower bounds. We then show that although the function family used in this lower bound is hard for randomized algorithms, it can be solved using only O(GR/ϵ) quantum queries. Using a different family of instances, we establish our main result: an improved lower bound showing that, in general, even quantum algorithms need Ω((GR/ϵ)^2) queries to solve the problem. Hence there is no quantum speedup over gradient descent for black-box first-order convex optimization without further assumptions on the function family.
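To make the O((GR/ϵ)^2) query bound concrete, below is a minimal Python sketch of the classic projected subgradient method the abstract refers to: for a G-Lipschitz convex f whose minimizer lies within distance R of the start, T = ⌈(GR/ϵ)^2⌉ subgradient steps with step size R/(G√T), averaged, give an ϵ-approximate minimum. The function name and the ℓ1-norm test problem are illustrative assumptions, not taken from the paper.

    import math
    import numpy as np

    def subgradient_descent(subgrad, x0, G, R, eps):
        """Return an eps-approximate minimizer of a G-Lipschitz convex f
        whose minimizer is within distance R of x0.

        Uses T = ceil((G*R/eps)**2) subgradient queries -- note the count
        depends only on G, R, eps, never on the dimension n.
        """
        T = math.ceil((G * R / eps) ** 2)   # query budget from the abstract
        eta = R / (G * math.sqrt(T))        # fixed step size
        x0 = np.asarray(x0, dtype=float)
        x = x0.copy()
        avg = np.zeros_like(x)
        for _ in range(T):
            avg += x / T                    # average of the iterates is returned
            g = subgrad(x)                  # one (sub)gradient query
            x = x - eta * g
            # project back onto the ball of radius R around x0,
            # which contains the true minimizer by assumption
            d = np.linalg.norm(x - x0)
            if d > R:
                x = x0 + (x - x0) * (R / d)
        return avg

    # Illustrative instance: f(x) = ||x||_1, minimized at 0, with
    # subgradient sign(x); it is sqrt(n)-Lipschitz in the l2 norm.
    n = 5
    x0 = np.ones(n)
    G = math.sqrt(n)
    R = float(np.linalg.norm(x0))           # distance from x0 to the minimizer 0
    x_hat = subgradient_descent(np.sign, x0, G, R, eps=0.1)
    print(np.abs(x_hat).sum())              # within eps of the minimum value 0

The point the abstract emphasizes is visible in the sketch: the loop length T involves only G, R, and ϵ, and the paper's main result says no quantum algorithm can improve this T beyond constant factors without further assumptions on f.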


