Optimizing quantum optimization algorithms via faster quantum gradient computation

11/01/2017
by András Gilyén, et al.

We consider a generic framework of optimization algorithms based on gradient descent. We develop a quantum algorithm that computes the gradient of a multivariate real-valued function f: ℝ^d → ℝ by evaluating it at only a logarithmic number of points in superposition. Our algorithm is an improved version of Jordan's gradient computation algorithm, providing an approximation of the gradient ∇f with quadratically better dependence on the evaluation accuracy of f for an important class of smooth functions. Furthermore, we show that most objective functions arising from quantum optimization procedures satisfy the necessary smoothness conditions, hence our algorithm provides a quadratic improvement in the complexity of computing their gradient. We also show that in a continuous phase-query model, our gradient computation algorithm has optimal query complexity up to poly-logarithmic factors for a particular class of smooth functions. Moreover, we show that for low-degree multivariate polynomials our algorithm can provide exponential speedups compared to Jordan's algorithm in terms of the dimension d. One of the technical challenges in applying our gradient computation procedure to quantum optimization problems is the need to convert between a probability oracle (which is common in quantum optimization procedures) and a phase oracle (which is common in quantum algorithms) of the objective function f. We provide efficient subroutines to perform this delicate interconversion between the two types of oracles, incurring only a logarithmic overhead, which might be of independent interest. Finally, using these tools we improve the runtime of prior approaches for training quantum auto-encoders, variational quantum eigensolvers, and quantum approximate optimization algorithms (QAOA).
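To make the two oracle types and the gradient step concrete, here is a short sketch in LaTeX. The notation (N grid points per coordinate, rescaling factor S, workspace states |ψ_0⟩ and |ψ_1⟩) is chosen for illustration and follows standard presentations of Jordan's algorithm; the paper's exact normalizations and grid conventions may differ.

```latex
% Phase oracle (common in quantum algorithms): f is encoded as a phase.
\[
  \mathrm{O}_f : |x\rangle \mapsto e^{\,i f(x)}\,|x\rangle
\]
% Probability oracle (common in quantum optimization procedures):
% p(x) \in [0,1] is encoded as the amplitude of a flagged branch.
\[
  \mathrm{U}_p : |x\rangle|0\rangle \mapsto
  |x\rangle \left( \sqrt{p(x)}\,|\psi_1\rangle|1\rangle
  + \sqrt{1 - p(x)}\,|\psi_0\rangle|0\rangle \right)
\]
% Core of a Jordan-type gradient computation: on a small grid around x_0,
% f is nearly linear, f(x_0+\delta) \approx f(x_0) + \delta \cdot \nabla f(x_0),
% so querying a rescaled phase oracle on a uniform superposition of the
% N^d grid points \delta imprints a phase that is linear in \delta:
\[
  \frac{1}{\sqrt{N^d}} \sum_{\delta} |\delta\rangle
  \;\xrightarrow{\;\mathrm{O}_{Sf}\;}\;
  \frac{1}{\sqrt{N^d}} \sum_{\delta}
    e^{\,i S \left( f(x_0) + \delta \cdot \nabla f(x_0) \right)} |\delta\rangle
\]
% An inverse quantum Fourier transform on each of the d coordinate registers
% then reads out an approximation of \nabla f(x_0). The improvement over
% Jordan's original scheme uses higher-degree central-difference formulas to
% suppress the nonlinear error terms for smooth f, giving the quadratically
% better accuracy dependence; the probability-to-phase conversion lets this
% run on objectives given only as probability oracles, at logarithmic cost.
```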

