1 Introduction
Quantum computers are designed to use quantum mechanics to outperform their classical counterparts. As well as the remarkable exponential speedups that are known for specialised problems such as integer factorisation and simulation of quantum-mechanical systems, there are also quantum algorithms which speed up general-purpose classical algorithms in the domains of combinatorial search and optimisation. These algorithms may achieve relatively modest speedups, but make up for this by having very broad applications. The most famous example is Grover’s algorithm [26], which achieves a quadratic speedup of classical unstructured search, and can be used to accelerate classical algorithms for solving hard constraint satisfaction problems such as Boolean satisfiability.
Here our focus is on quantum algorithms that accelerate classical numerical optimisation algorithms: that is, algorithms that attempt to solve the problem of finding $\mathbf{x}$ such that $f(\mathbf{x})$ is minimised, for some function $f : \mathbb{R}^n \to \mathbb{R}$. (We use boldface throughout for elements of $\mathbb{R}^n$.) A vast number of optimisation algorithms are known. Some algorithms seek to find (or approximate) a global minimum of $f$, given some constraints on $\mathbf{x}$; others only attempt to find a local minimum. Some algorithms have provable correctness and/or performance bounds, while the performance of others must be verified experimentally. Whether or not an algorithm has good theoretical properties, its performance on a given problem often can only be determined by running it. These factors have led to the development and use of many numerical optimisation algorithms based on varied techniques.
Here we consider some prominent general-purpose numerical optimisation techniques, and investigate the extent to which they can be accelerated by quantum algorithms. We stress that our goal is not to develop new quantum optimisation techniques (that perhaps would not have rigorous performance bounds), but rather to find quantum algorithms that speed up existing classical techniques, while retaining the same performance guarantees. That is, if the classical algorithm performs well in terms of solution quality or execution time on a given problem instance, the quantum algorithm should also perform well. We assume throughout that the quantum algorithm has access to an oracle that computes $f(\mathbf{x})$ exactly on particular inputs $\mathbf{x}$, implemented as a quantum circuit. (As we would like to store $\mathbf{x}$ in a register of qubits, technically this is only possible if we consider inputs within a bounded region and discretised up to a certain level of precision, and assume that $f(\mathbf{x})$ is also bounded. However, this is also the case for the corresponding classical algorithms that we accelerate.) That is, we assume we have access to the map $|\mathbf{x}\rangle|z\rangle \mapsto |\mathbf{x}\rangle|z \oplus f(\mathbf{x})\rangle$. This contrasts with a model sometimes used elsewhere in the literature, where $\mathbf{x}$ is assumed to be provided to the quantum algorithm as a quantum state of $O(\log n)$ qubits [34, 47] stored in a quantum RAM, and the goal is to produce a quantum state corresponding to the solution.
Our results can be summarised as follows, where we use the notation $T_f$ (as in the rest of the paper) for an upper bound on the time required to evaluate the function $f$. See Table 1 for a summary of the speedups we obtain.

Section 2: We show that a number of techniques for global optimisation under a Lipschitz constraint can be accelerated near-quadratically, and also discuss some challenges associated with speeding up the related and well-known classical algorithm DIRECT [31]. In Lipschitzian optimisation, one assumes that $|f(\mathbf{x}) - f(\mathbf{y})| \le L\|\mathbf{x} - \mathbf{y}\|$ for some $L$ that is known in advance (the Lipschitz constant of $f$), where $\|\cdot\|$ is the Euclidean norm. Many techniques for Lipschitzian optimisation can be understood in the framework of branch-and-bound algorithms [28]. These algorithms are based on dividing $f$'s domain into subsets, and using a lower-bounding procedure to rule out certain subsets from consideration. This enables the use of a quantum algorithm for speeding up branch-and-bound algorithms [43]. The complexity of branch-and-bound algorithms is controlled by a parameter $T$ discussed below; the quantum algorithm achieves a quadratic reduction in complexity in terms of this parameter. A simple representative example of an algorithm fitting into this framework is Galperin’s cubic algorithm [21]. In this case, the quantum algorithm’s complexity is then $\widetilde{O}(\sqrt{T}\,d^{3/2}\,2^n\,T_f)$, where $d$ is the depth of the branch-and-bound tree, whereas the classical complexity is $O(T\,2^n\,T_f)$.

Section 3: We show that backtracking line search [45, Algorithm 3.1], a subroutine used in many quasi-Newton optimisation algorithms such as the BFGS algorithm, can be accelerated using a quantum algorithm which is a variant of Grover search [39]. Backtracking line search is based on choosing a direction $\mathbf{d}$ and searching along that direction. If the overall algorithm makes $T$ iterations, the complexity of choosing $\mathbf{d}$ is $C$, and the number of search steps taken by the classical algorithm in an iteration is $M$, then the complexity of one iteration of this classical routine is $O(C + M\,T_f)$, while the complexity of the quantum algorithm is $\widetilde{O}(C + \sqrt{M}\,T_f)$, where the $\widetilde{O}$ hides factors logarithmic in $T$.

Section 4: We show that the Nelder-Mead algorithm [44], a widely-used derivative-free numerical optimisation algorithm, can be accelerated using quantum minimum-finding [17]. The algorithm is an iterative procedure based on maintaining a simplex of $n+1$ points. Assume that $T_f = \Omega(n^{3/2})$, and that the algorithm performs $T$ iterations, of which $S$ are “shrink” steps (q.v.). Then the complexity of the quantum algorithm is $\widetilde{O}((T + S\sqrt{n})T_f)$, as compared with the classical complexity, $O((T + Sn)T_f)$. So if the number of shrink steps is large with respect to $T$, and $n$ is large, the quantum speedup can be relatively substantial (up to a $\sqrt{n}$ factor).

Section 5: Approximate computation of a gradient is a key subroutine in many optimisation algorithms, including the very widely-used gradient descent algorithm [8]. We show that the gradient of functions of the form $f(\mathbf{x}) = \frac{1}{m}\sum_{i=1}^m f_i(\mathbf{x})$ can be computed more efficiently using a quantum algorithm of Gilyén, Arunachalam and Wiebe [22]. Given that each individual function $f_i$ is bounded and can be computed in time $T_f$ (and satisfies some technical constraints on its partial derivatives), the quantum algorithm outputs an approximation of the gradient $\nabla f$ that is accurate up to $\epsilon$ in the $\ell_\infty$ norm, in time $\widetilde{O}(\sqrt{n}\,T_f/\epsilon)$, as compared with the classical complexity $\widetilde{O}(n\,T_f/\epsilon^2)$. (The $\widetilde{O}$ notation hides polylogarithmic factors in $n$, $m$ and $1/\epsilon$.) However, as we will discuss, it is not clear whether this notion of approximation is sufficient to accelerate classical stochastic gradient descent algorithms.
§ | Algorithm | Classical | Quantum | Technique
2 | Global opt. w/ Lipschitz constraint | $O(T\,2^n\,T_f)$ (e.g.) | $\widetilde{O}(\sqrt{T}\,d^{3/2}\,2^n\,T_f)$ (e.g.) | Branch-and-bound [43]
3 | Backtracking line search | $O(C + M\,T_f)$ | $\widetilde{O}(C + \sqrt{M}\,T_f)$ | Variant of Grover’s algorithm [39]
4 | Nelder-Mead | $O((T + Sn)T_f)$ | $\widetilde{O}((T + S\sqrt{n})T_f)$ | Quantum minimum-finding [17]
5 | Gradients of averaged functions | $\widetilde{O}(n\,T_f/\epsilon^2)$ | $\widetilde{O}(\sqrt{n}\,T_f/\epsilon)$ | Quantum gradient computation [22]
In each case, the quantum speedups we find are based on the use of existing quantum algorithms, rather than the development of new algorithmic techniques. We believe that there are many more quantum speedups of numerical optimisation algorithms to be discovered. We remark that, in many of the cases we consider, the extent of the quantum speedup achieved depends on the interplay of various parameters governing the optimisation algorithm’s runtime, so not every problem instance will yield a speedup.
Prior work on quantum speedups of numerical optimisation algorithms (as opposed to the analysis of new quantum algorithms such as the adiabatic algorithm [20] or quantum approximate optimisation algorithm [30, 19]) has been relatively limited. Dürr and Høyer [17] gave a quantum algorithm to find a global minimum of a function defined on a discrete space of size $N$, which is based on the use of Grover’s algorithm and uses $O(\sqrt{N})$ evaluations of the function. Arunachalam [5] applied Dürr and Høyer’s algorithm to improve the generalised pattern search and mesh-adaptive direct search optimisation algorithms. A sequence of papers has found quantum speedups of linear programming and semidefinite programming algorithms [10, 3, 2, 35, 9]; quantum speedups of more general convex optimisation algorithms are also known [51, 14]. Quantum speedups are known for computing gradients [32, 22, 15], an important subroutine in many optimisation algorithms; larger (exponential) speedups could be available in gradient descent-type algorithms if the inputs to the optimisation algorithm are available in a quantum RAM (qRAM) [34, 47]. Recently, it was shown that classical algorithms based on the general technique known as branch-and-bound can be accelerated near-quadratically [43].

2 Branch-and-bound algorithms for global optimisation with a Lipschitz constraint
Finding a global minimum of an arbitrary function $f : \mathbb{R}^n \to \mathbb{R}$ can be a very challenging (or indeed impossible) task. One way to make this problem more tractable is to assume that $f$ satisfies a Lipschitz condition: $|f(\mathbf{x}) - f(\mathbf{y})| \le L\|\mathbf{x} - \mathbf{y}\|$ for some $L$ that is known in advance, where $\|\cdot\|$ is the Euclidean norm. Finding a global minimum of $f$ under this condition is known as Lipschitzian optimisation. Lipschitzian optimisation is very general and hence can be applied in many contexts. Hansen and Jaumard [28] describe a selection of applications of Lipschitzian optimisation, including solution of nonlinear equations and inequalities; parametrisation of statistical models; black box system optimisation; and location problems.
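Concretely, the Lipschitz condition turns a single function evaluation into a lower bound over a whole region, which is what the branch-and-bound approach below exploits; in the notation above:

```latex
% From |f(x) - f(y)| <= L ||x - y||, one evaluation f(x_0) bounds f
% from below over any region S containing x_0:
f(\mathbf{y}) \;\ge\; f(\mathbf{x}_0) - L\,\lVert \mathbf{y} - \mathbf{x}_0 \rVert
\quad \text{for all } \mathbf{y} \in S,
\qquad\text{so}\qquad
\min_{\mathbf{y}\in S} f(\mathbf{y}) \;\ge\;
f(\mathbf{x}_0) - L \max_{\mathbf{y}\in S}\lVert \mathbf{y} - \mathbf{x}_0 \rVert .
```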
It is natural to restrict the domain of $f$ to $[0,1]^n$, and to assume that $f$ is bounded such that $|f(\mathbf{x})| \le 1$ for all $\mathbf{x}$. Finally, we can relax to solving the approximate optimisation problem of finding $\mathbf{x}$ such that $f(\mathbf{x}) \le \min_{\mathbf{y}} f(\mathbf{y}) + \epsilon$, for some accuracy parameter $\epsilon$ that is determined in advance. Even in the case $n = 1$ and with these restrictions, this problem is far from trivial. One class of algorithms that can solve Lipschitzian optimisation problems are branch-and-bound algorithms. Generically, a branch-and-bound algorithm solves a minimisation problem using the following procedures:

A branching procedure which, given a subset $S$ of possible solutions, divides $S$ into two or more smaller subsets, or returns that $S$ should not be divided further.

A bounding procedure which, when given a subset $S$ produced during the branching process, returns a lower bound $\ell(S)$ such that $\ell(S) \le \min_{\mathbf{x} \in S} f(\mathbf{x})$.
Branch-and-bound algorithms can be seen as exploring a tree, whose vertices correspond to subsets $S$. The children of a subset correspond to the subsets into which it was divided, and leaves are subsets that should not be divided further. For a leaf $S$, one should additionally have that $\ell(S) = \min_{\mathbf{x} \in S} f(\mathbf{x})$. Branch-and-bound algorithms use the additional information provided by the branch and bound procedures to explore the most promising sets early on, and to avoid exploring subsets $S$ such that $\ell(S)$ is larger than the best solution found so far. One can show that the complexity of an optimal classical branch-and-bound algorithm based on these generic procedures is controlled by the size of the branch-and-bound tree, truncated by deleting all vertices whose corresponding lower bounds are greater than the optimal cost $f^* = \min_{\mathbf{x}} f(\mathbf{x})$: if the size of this tree is $T$, the optimal classical algorithm makes $O(T)$ calls to the branch and bound procedures [33]. It is not required to know $T$ in advance in order to apply this bound.
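The generic scheme just described can be sketched classically as follows; the `branch`, `lower_bound`, `value` and `is_leaf` procedures are hypothetical stand-ins for the abstract branching and bounding procedures above, and the best-first (smallest lower bound) exploration order anticipates the selection rule discussed later.

```python
import heapq

def branch_and_bound(root, branch, lower_bound, value, is_leaf):
    """Generic best-first branch-and-bound minimisation.

    branch(S)      -> child subsets of S
    lower_bound(S) -> a value <= min of f over S
    value(S)       -> cost of a feasible point in S (an upper bound)
    is_leaf(S)     -> True if S should not be divided further
    """
    best = value(root)                      # incumbent: best solution so far
    heap = [(lower_bound(root), 0, root)]   # explore smallest lower bound first
    counter = 1                             # unique tie-breaker for the heap
    while heap:
        lb, _, S = heapq.heappop(heap)
        if lb >= best:                      # bound: S cannot beat the incumbent
            continue
        best = min(best, value(S))
        if not is_leaf(S):
            for child in branch(S):         # branch: split S further
                heapq.heappush(heap, (lower_bound(child), counter, child))
                counter += 1
    return best

# Toy usage: minimise f(x) = (x - 0.3)^2 on [0, 1] with Lipschitz constant
# L = 2, representing subsets as intervals and bounding via midpoints.
f = lambda x: (x - 0.3) ** 2
L = 2.0
mid = lambda S: (S[0] + S[1]) / 2
best = branch_and_bound(
    (0.0, 1.0),
    branch=lambda S: [(S[0], mid(S)), (mid(S), S[1])],
    lower_bound=lambda S: f(mid(S)) - L * (S[1] - S[0]) / 2,
    value=lambda S: f(mid(S)),
    is_leaf=lambda S: S[1] - S[0] < 1e-4,
)
```

Note that pruning happens at pop time (`lb >= best`), so subsets enqueued before a better incumbent was found are still discarded cheaply.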
A generic framework for branch-and-bound algorithms in the context of Lipschitzian optimisation was given by Hansen and Jaumard [28, Section 3.3], and we describe it as Algorithm 1. The algorithm splits the domain into hyperrectangles, each of which is recursively split again. Each hyperrectangle has an associated upper bound (obtained by evaluating $f$ at a discrete set of points in that hyperrectangle) and lower bound (obtained via a separate lower-bounding function), and the algorithm terminates when it finds a hyperrectangle whose upper bound is sufficiently close to its lower bound. Convergence is guaranteed if some simple criteria are satisfied, discussed in [28] (for example, the upper bound and lower bound should converge as the interval size tends to 0). Hansen and Jaumard show that many previously known algorithms for Lipschitzian optimisation can be understood as particular cases of Algorithm 1. These include Galperin’s cubic algorithm [21], which proceeds by dividing the search space into hypercubes, and algorithms of Pijavskii [46], Shubert [48] and Mladineo [40].
The branching procedure of Algorithm 1 fits into the standard branch-and-bound framework. Given a subset $S$, an upper bound is obtained by evaluating $f$ at a discrete set of positions in $S$, and a lower bound is obtained using the bounding function. If the two are within $\epsilon$, $S$ should not be expanded further. Otherwise, $S$ is split into smaller subsets. Algorithm 1 has a notion of selecting the next subset to examine using a selection rule, but it is shown in [33] that the best possible selection rule in branch-and-bound procedures (in a query complexity sense) is to expand the subset whose bounding function is smallest. (The proof of this is based on the intuition that the algorithm cannot rule out subsets whose lower bound is smaller than the cost of the optimal solution. In the setting of Lipschitzian optimisation, this only holds if the lower bounding rule is tight, in the sense that given a lower bound on $\min_{\mathbf{x} \in S} f(\mathbf{x})$ for a subset $S$, there exists a Lipschitz function for which this lower bound is achieved.)
There is a quantum algorithm that can achieve a near-quadratic speedup of classical branch-and-bound algorithms [43]. The algorithm is based on the use of quantum procedures for estimating the size of an unknown tree [1], and searching within such a tree [6, 7, 42]. The algorithm achieves a complexity of $\widetilde{O}(\sqrt{T}\,d^{3/2})$ uses of the branch and bound procedures for finding the minimum of $f$ up to accuracy $\epsilon$. In this bound $d$ is the maximal depth of the branch-and-bound tree and the $\widetilde{O}$ notation hides polylogarithmic factors in $T$, $d$ and $1/\delta$, where $\delta$ is the probability of failure. (We remark that the algorithm as presented in [43] assumes knowledge of an upper bound on the depth $d$ in advance, but such a bound can be found efficiently by applying the quantum tree search algorithms of [42, 6, 7] to the branch-and-bound tree obtained by truncating at depth $D$, with exponentially increasing choices of $D$, until a value of $D$ is found where the corresponding tree does not contain any internal vertices that have not been expanded.)

The quantum branch-and-bound algorithm can immediately be applied to Algorithm 1. If the time complexity of the branching and bounding rules is upper-bounded by $B$, the cost of the quantum algorithm is $\widetilde{O}(\sqrt{T}\,d^{3/2}\,B)$, as compared with the classical complexity, which is $O(T\,B)$. If $d$ is small compared with $\sqrt{T}$, the speedup of the quantum algorithm over its classical counterpart in terms of the number of uses of the branching and bounding rules is near-quadratic. If these rules in turn are relatively simple to compute compared with $f$ (as is likely to be the case for challenging optimisation problems that occur in practice), this translates into a near-quadratic runtime speedup.
To illustrate how this approach could be applied in practice, a simple example of an algorithm fitting into this framework is Galperin’s cubic algorithm [21]. The branch and bound procedures are defined as follows, recalling that $L$ is the Lipschitz constant of $f$:

Branch: the subproblem corresponding to a hypercube $C$ is divided into $s^n$ equal hypercubes, for some integer $s \ge 2$, by dividing each side into $s$ equal parts.

Lower bounding rule: Let $\mathbf{v}$ be an extreme point of $C$, which has side length $s^{-k}$ for some integer $k$. Then a lower bound is $f(\mathbf{v}) - L\sqrt{n}\,s^{-k}$, maximised over the extreme points $\mathbf{v}$ of $C$.

Upper bounding rule: Evaluate $f$ on the extreme points of $C$ and return the minimum value found.
Galperin’s algorithm is illustrated in Figure 2 for the one-dimensional case. The complexity of the branch and bounding steps is dominated by the cost of evaluating $f$ at the $2^n$ extreme points of each hypercube, which is $O(2^n\,T_f)$. The quantum complexity is then $\widetilde{O}(\sqrt{T}\,d^{3/2}\,2^n\,T_f)$, whereas the classical complexity is $O(T\,2^n\,T_f)$; so we see that the speedup is largest for small $n$, e.g. constant $n$.
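As a concrete illustration, here is a minimal classical sketch of a Galperin-style subdivision scheme; the function and parameter names are ours, not from [21], and the best-first exploration order follows the selection rule discussed above. Each hypercube is bounded via $f$ at its $2^n$ corners, the dominant cost noted in the text.

```python
import heapq
import itertools
import math

def galperin_style_min(f, L, n, eps, s=2):
    """Best-first subdivision of [0,1]^n in the style of Galperin's
    algorithm: each hypercube is bounded via f at its 2^n corners and
    the Lipschitz constant L, and split into s^n equal sub-cubes.
    Returns a value within eps of the global minimum of f."""
    def bounds(corner, side):
        # Evaluate f at the 2^n extreme points of the cube.
        vals = [f([corner[i] + b[i] * side for i in range(n)])
                for b in itertools.product((0, 1), repeat=n)]
        diam = L * math.sqrt(n) * side           # max distance from any corner
        return max(v - diam for v in vals), min(vals)   # (lower, upper)

    lo, best = bounds([0.0] * n, 1.0)
    heap = [(lo, 1.0, [0.0] * n)]
    while heap and heap[0][0] < best - eps:
        _, side, corner = heapq.heappop(heap)
        sub = side / s
        for shift in itertools.product(range(s), repeat=n):  # s^n children
            child = [corner[i] + shift[i] * sub for i in range(n)]
            clo, cup = bounds(child, sub)
            best = min(best, cup)
            if clo < best - eps:                 # keep only promising cubes
                heapq.heappush(heap, (clo, sub, child))
    return best

# Toy usage in n = 2 dimensions: minimum value 0 at (0.25, 0.5).
quad2 = lambda v: (v[0] - 0.25) ** 2 + (v[1] - 0.5) ** 2
best_val = galperin_style_min(quad2, L=2.0, n=2, eps=1e-2)
```

When the loop exits, every unexplored cube has a lower bound at least `best - eps`, so the returned value is within `eps` of the true minimum.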
2.1 The DIRECT algorithm
A prominent algorithm proposed to handle Lipschitzian optimisation for multivariate functions where one does not know the Lipschitz constant in advance is known as DIRECT [31] (for “dividing rectangles”). The basic concept is to divide the domain into (hyper)rectangles, and at each step of the algorithm to produce a list of potentially optimal rectangles, which are those that should be expanded further; see Appendix A for more details. This is similar to the branch-and-bound algorithms of the previous section, but with the additional complication of generating the list of potentially optimal rectangles, which involves interaction across several nodes of the branch-and-bound tree. This creates a difficulty for the quantum branch-and-bound algorithm, as it can only use branch and bound procedures based on local information from the tree. Therefore it is unclear whether a similar quadratic speedup can be obtained.
To identify the potentially optimal vertices, the DIRECT algorithm uses a 2d convex hull algorithm. It is a natural idea to speed this up via a quantum convex hull algorithm. Lanzagorta and Uhlmann [38] have described a quantum algorithm based on Grover’s algorithm for computing a convex hull of $N$ points in 2d with complexity $O(\sqrt{N}\,h)$, where $h$ is the number of points in the convex hull; they also give an algorithm based on a heuristic whose runtime may be significantly lower for practically relevant problems. However, the special case of the convex hull problem that is relevant to DIRECT can be solved in time $O(N)$ [31], so this does not lead to an overall quantum speedup.

3 Backtracking line search
Backtracking line search (not to be confused with the combinatorial optimisation technique known as backtracking) [45] is a line search optimisation algorithm devised by Armijo in 1966 [4]. The goal of a line search method is, given a starting point $\mathbf{x}$ and a direction $\mathbf{d}$, to move to a new point $\mathbf{x} + \eta\mathbf{d}$ for some $\eta > 0$, in order to minimise a function $f$. Backtracking line search is a particular line search technique based on the use of an exponentially decreasing step size parameter $\eta$. A generic optimisation method based on backtracking line search is described as Algorithm 3. In this section we describe a quantum speedup of this algorithm.
Different approaches can be used to choose the direction $\mathbf{d}$. These include:

Steepest descent: $\mathbf{d} = -\nabla f(\mathbf{x})$.

Newton’s method: $\mathbf{d} = -(\nabla^2 f(\mathbf{x}))^{-1}\nabla f(\mathbf{x})$, where $\nabla^2 f$ is the Hessian of $f$.

Quasi-Newton methods (such as BFGS): $\mathbf{d} = -B^{-1}\nabla f(\mathbf{x})$, where $B$ is some approximation of $\nabla^2 f(\mathbf{x})$.
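For concreteness, the first two choices can be written out explicitly in two dimensions; this is a hypothetical helper of our own (the quantum algorithm below does not depend on this particular form), with the Newton direction obtained via an explicit $2\times 2$ inverse.

```python
def directions(grad, hess):
    """Candidate search directions for a 2-D problem.

    Steepest descent: d = -grad.
    Newton:           d = -H^{-1} grad, using the explicit 2x2 inverse
                      H^{-1} = (1/det) [[d, -b], [-c, a]].
    """
    g1, g2 = grad
    (a, b), (c, d) = hess
    det = a * d - b * c                      # assume the Hessian is invertible
    newton = [-(d * g1 - b * g2) / det,
              -(-c * g1 + a * g2) / det]
    return {"steepest": [-g1, -g2], "newton": newton}

# For f(x, y) = x^2 + 10 y^2 at (1, 1): grad = (2, 20), Hessian = diag(2, 20).
dirs = directions([2.0, 20.0], [[2.0, 0.0], [0.0, 20.0]])
```

On this quadratic the Newton direction `[-1.0, -1.0]` points exactly at the minimiser $(0,0)$, whereas steepest descent `[-2.0, -20.0]` does not, illustrating why the more expensive direction choices can pay off.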
Let $C$ denote the complexity of choosing the direction $\mathbf{d}$; note that $C = \Omega(n)$, because just writing down $\mathbf{d}$ requires time $\Omega(n)$. Then the overall complexity of one iteration of Algorithm 3 is $O(C + M\,T_f)$, where $M$ is the number of search steps used in step 3. We can reduce this complexity using the following result of Lin and Lin [39] (see also [36]):
Theorem 1 (Lin and Lin [39]).
Consider a function $g : \{0, \dots, N-1\} \to \{0, 1\}$. Let $m = \min\{i : g(i) = 1\}$, if this set is nonempty, or $m = N$ otherwise. Then there is a quantum algorithm that succeeds with probability at least 0.99 and outputs $m$ using $O(\sqrt{m})$ evaluations of $g$ if $m < N$, and otherwise outputs that no such $i$ exists in $O(\sqrt{N})$ steps.
We apply this result to step 3 of the classical algorithm to achieve a square-root reduction in the dependence on the number of search steps $M$. To achieve a final probability of failure bounded by a small constant, by a union bound over the iterations, it is sufficient to repeat the algorithm of Theorem 1 $O(\log T)$ times to achieve failure probability $O(1/T)$ at each iteration, where $T$ is the total number of iterations. This gives an overall complexity of the quantum algorithm which is $O(C + \sqrt{M}\,T_f \log T)$ per iteration. If the overall algorithm makes $T$ iterations, and $W$ is the largest value of $M$ for any iteration, we have an overall complexity of $O(T(C + \sqrt{W}\,T_f \log T))$. In cases where $C = O(n)$ (such as the steepest descent method), $T_f = \Omega(n)$, and $W$ is not exponentially large in $n$, the dominant term in this complexity bound is the second one, and we always achieve a quantum speedup. The assumption $T_f = \Omega(n)$ is natural if $f$ depends on all $n$ variables.
The condition $f(\mathbf{x} + \eta\mathbf{d}) \le f(\mathbf{x}) + c\,\eta\,\nabla f(\mathbf{x})^T\mathbf{d}$ that is used in step 3 is called the Armijo condition. If $\nabla f$ is Lipschitz at $\mathbf{x}$ with Lipschitz constant $L$ (that is, $\|\nabla f(\mathbf{y}) - \nabla f(\mathbf{x})\| \le L\|\mathbf{y} - \mathbf{x}\|$), any
$$\eta \le \frac{2(1-c)\,|\nabla f(\mathbf{x})^T\mathbf{d}|}{L\,\|\mathbf{d}\|^2}$$
satisfies the Armijo condition [25, Theorem 2.1]. Since $\eta$ decreases geometrically from its initial value, the number of search steps needed to reach such an $\eta$ is logarithmic in the ratio between the initial value and this bound. Therefore, the speedup achieved by the quantum algorithm (based on this worst-case bound) will be greatest when $L$ is large (representing that $\nabla f$ could change rapidly), yet $|\nabla f(\mathbf{x})^T\mathbf{d}|$ is small (representing that $f$ does not change rapidly in direction $\mathbf{d}$).
Another way in which one might hope to speed up Algorithm 3 is computing the direction $\mathbf{d}$ more efficiently. For example, a quantum algorithm was presented by Gilyén, Arunachalam and Wiebe [23], based on a detailed analysis of and modifications to an earlier algorithm of Jordan [32], that approximately computes the gradient $\nabla f$ for smooth functions quadratically more efficiently than classical methods (that are based e.g. on finite differences). However, it seems challenging to prove that such an approximation can be inserted into the backtracking line search framework without affecting the performance of the overall algorithm in the worst case. This is because even a small change in the direction $\mathbf{d}$ can significantly change the behaviour of the algorithm, as the definition of step 3 of Algorithm 3 is such that an arbitrarily small change to the values taken by $f$ along the direction $\mathbf{d}$ can change the accepted step size $\eta$ substantially. See Section 5 below for a further discussion of this algorithm.
Finally, we remark that one simple way to find a direction $\mathbf{d}$ such that $\nabla f(\mathbf{x}) \cdot \mathbf{d}$ is nonzero, as required for the line search procedure, is to choose a coordinate direction $\mathbf{e}_i$ such that $\partial f/\partial x_i$ is nonzero. Although a valid choice, in practice this could be less efficient than (for example) moving in the direction of steepest descent. The use of Grover’s algorithm would reduce the complexity of this step to $O(\sqrt{n}\,T_f)$, as compared with the classical $O(n\,T_f)$.
4 NelderMead algorithm
The Nelder-Mead algorithm is a direct search optimisation algorithm; that is, one which does not require information about the gradient of the objective function. It is commonly used and implemented within many computer algebra packages. However, little convergence theory exists and in practice it can be ineffective in higher dimensions [37, 27]. (Indeed, according to Lagarias et al. [37], “given all the known inefficiencies and failures of the Nelder-Mead algorithm… one might wonder why it is used at all, let alone why it is so extraordinarily popular.”) The Nelder-Mead algorithm uses expansion, reflection, contraction and shrink steps to update a simplex of $n+1$ points in $\mathbb{R}^n$. A number of variants of the algorithm have been proposed. The variant we will use was analysed by Lagarias et al. [37], and is presented as Algorithm 4. Algorithm 4 does not specify a termination criterion. Termination criteria that could be used include the function values at the simplex points becoming sufficiently close; the simplex points themselves becoming sufficiently close; or an iteration limit being reached.
In this section we describe a quantum speedup of the Nelder-Mead algorithm. We first determine the classical complexity of the algorithm, drawing on the analysis of [49]. The complexity of step 1 is $O(n^2)$, to write down the $n+1$ points. To analyse step 2, observe that a complete ordering of the points is never required; the only information about the ordering needed is the worst vertex $\mathbf{x}_{n+1}$, the next-worst vertex $\mathbf{x}_n$, and the best vertex $\mathbf{x}_1$. Knowledge of the identities of these points is sufficient to compute the centroid $\bar{\mathbf{x}}$ of the best $n$ points, and to carry out all the updates required, including the shrink step. So the first time that step 2 is executed, its complexity is $O(n\,T_f + n\log n + n^2)$, where the $n^2$ comes from computing the centroid. Each time step 2 is executed subsequently, except following a shrink step, the required updates can be made in time $O(n)$. The complexity of each of steps 3 to 6 is $O(T_f + n)$; step 7 is $O(n\,T_f + n^2)$. So the complexity of performing $T$ iterations, of which $S$ include a shrink step, is $O(n^2 + T(T_f + n) + S(n\,T_f + n^2))$. If $T_f = \Omega(n)$, this simplifies to $O(n^2 + (T + Sn)T_f)$.
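A compressed classical sketch of the iteration just analysed is given below; this simplified variant omits the outside-contraction case of the Lagarias et al. formulation, and the coefficient choices (reflection 1, expansion 2, contraction 0.5, shrink 0.5) are the standard ones. It makes visible that only the best/next-worst/worst ordering and the centroid of the best $n$ points are needed per iteration, which are exactly the quantities the quantum version maintains with minimum-finding.

```python
def nelder_mead(f, simplex, iters=300):
    """Simplified Nelder-Mead sketch (inside contraction only)."""
    n = len(simplex[0])
    move = lambda frm, to, t: [a + t * (b - a) for a, b in zip(frm, to)]
    for _ in range(iters):
        simplex.sort(key=f)                      # best first, worst last
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(p[j] for p in simplex[:-1]) / n for j in range(n)]
        xr = move(centroid, worst, -1.0)         # reflect worst through centroid
        if f(xr) < f(best):
            xe = move(centroid, worst, -2.0)     # try expanding further
            simplex[-1] = xe if f(xe) < f(xr) else xr
        elif f(xr) < f(simplex[-2]):             # better than the next-worst
            simplex[-1] = xr
        else:
            xc = move(centroid, worst, 0.5)      # contract toward the worst
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:                                # shrink all points toward best
                simplex = [best] + [move(best, p, 0.5) for p in simplex[1:]]
    simplex.sort(key=f)
    return simplex[0]

# Toy usage: minimise (x - 1)^2 + (y - 2)^2 from a unit simplex.
quad = lambda v: (v[0] - 1.0) ** 2 + (v[1] - 2.0) ** 2
opt = nelder_mead(quad, [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
```

Note that the shrink branch touches all $n+1$ points, which is the source of the $O(n\,T_f + n^2)$ term in the analysis above.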
The complexity of step 2, when executed for the first time or following a shrink step, can be improved using quantum minimumfinding:
Theorem 2 (Dürr and Høyer [17]).
Given a function $g : \{1, \dots, N\} \to \mathbb{R}$ and $\delta > 0$, there is a quantum algorithm that outputs $\operatorname{argmin}_i g(i)$ with probability at least $1 - \delta$ using $O(\sqrt{N}\log(1/\delta))$ evaluations of $g$.
Thus a quantum algorithm using Theorem 2 can find the worst, next-worst and best vertices with failure probability $\delta$ at each iteration in time $O(\sqrt{n}\log(1/\delta)\,T_f)$ in total. This choice of failure probability is made so that, by a union bound, the total probability of failure can be bounded by an arbitrarily small constant. Further, observe that the centroid can be updated in time $O(n)$ following a shrink step: if $\bar{\mathbf{x}}'$ denotes the updated centroid, then $\bar{\mathbf{x}}' = \frac{1}{2}(\bar{\mathbf{x}} + \mathbf{x}_1)$. This does not give a quantum speedup of step 2 in all cases; the first time that step 2 is executed, if $T_f = O(n)$, its complexity is dominated by the cost of computing the centroid. There also remains an $O(n^2)$ cost for updating the points at each shrink step. (There may be a more efficient way of keeping track of these shrink steps; however, we do not pursue this further here.) Then the overall complexity of the quantum algorithm is $\widetilde{O}(n^2 + T(T_f + n) + S(\sqrt{n}\,T_f + n^2))$, and using a union bound over the steps, the algorithm’s failure probability is bounded above by an arbitrarily small constant. If $T_f = \Omega(n^{3/2})$, this simplifies to $\widetilde{O}(n^2 + (T + S\sqrt{n})T_f)$. Comparing with the classical complexity, we see that the quantum speedup is largest when $Sn$ is large compared with $T$.
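The cheap incremental centroid maintenance used in the analysis above can be illustrated classically; when a single vertex is replaced, the centroid shifts by the (scaled) difference of the old and new points, costing $O(n)$ rather than the $O(n^2)$ of recomputation from scratch. Function names here are illustrative.

```python
def centroid(points):
    """Centroid of a list of n points in R^d, computed from scratch."""
    n = len(points)
    return [sum(p[j] for p in points) / n for j in range(len(points[0]))]

def centroid_after_swap(cbar, old, new, n):
    """O(d) incremental update when one of n points is replaced:
    cbar' = cbar + (new - old) / n."""
    return [c + (b - a) / n for c, a, b in zip(cbar, old, new)]

pts = [[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]]
cbar = centroid(pts)                              # [2/3, 2/3]
updated = centroid_after_swap(cbar, pts[2], [4.0, 6.0], len(pts))
```

Replacing the vertex `(0, 2)` by `(4, 6)` shifts the centroid to `(2, 2)`, matching a full recomputation.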
However, in practice shrink steps appear to be rare; in one set of experiments, only 33 shrink steps were observed in 2.9M iterations [50], and shrink steps never occur when Nelder-Mead is applied to a strictly convex function [37]. If there are no shrink steps and $T_f = \Omega(n)$, the complexity of the quantum algorithm is $\widetilde{O}(\sqrt{n}\,T_f + T\,T_f)$, while the complexity of the classical algorithm is $O(n\,T_f + T\,T_f)$. This is still a quantum speedup if $T = o(n)$; on the other hand, if $T = \Omega(n)$, the complexity is dominated by evaluating $f$ once at each iteration, and it is difficult to see how a quantum speedup could be achieved.
To be able to use quantum minimum-finding, we have assumed the ability to construct superpositions of the form $\sum_{i=1}^{n+1}|i\rangle|\mathbf{x}_i\rangle$, which enables us to evaluate $f$ on the simplex points in superposition. This is a quantum RAM [24], and quantum RAMs are often assumed to be difficult to construct; however, our requirements are very weak, because we only need the addressing to be performed in time $O(n)$, rather than $O(\log n)$, which can be achieved using an explicit quantum circuit.
Finally, we consider the possibility of accelerating calculation of the centroid using a quantum algorithm. If each component of each vector $\mathbf{x}_i$ is suitably bounded (e.g. $|(\mathbf{x}_i)_j| \le 1$) we could use quantum mean estimation [29, 11, 41] to estimate each component of the centroid up to accuracy $\epsilon$ in time $O(\log(n)/\epsilon)$ per component with failure probability bounded by a small constant overall, where the $\log n$ term comes from reducing the failure probability for each component to $O(1/n)$. Classical mean estimation could be used instead with an overhead of an additional $O(1/\epsilon)$ factor. This would give an overall time complexity similar to that derived above, but it is not obvious what the effect of replacing the centroid with an approximate centroid would be on the overall algorithm. For example, it is argued in [18] that random perturbations to the centroid throughout the algorithm can be beneficial.

5 Stochastic gradient descent
One of the most widely-used, effective and simple methods for finding a local minimum of a function is gradient descent. Given a function $f : \mathbb{R}^n \to \mathbb{R}$ and an initial point $\mathbf{x}_0$, the algorithm moves to the point $\mathbf{x}_{t+1} = \mathbf{x}_t - \eta\nabla f(\mathbf{x}_t)$, where $\eta$ is a step size parameter. In application areas such as machine learning [8], one often encounters functions of the form

(1)  $f(\mathbf{x}) = \frac{1}{m}\sum_{i=1}^m f_i(\mathbf{x})$

for some “simple” functions $f_i$, where $m$ is large. (For example, $f_i(\mathbf{x})$ could be the error of a neural network parametrised by $\mathbf{x}$ on the $i$’th item of training data, and we might seek to minimise the average error.) Rather than computing the exact gradient $\nabla f$ by summing over all choices for $i$, it is natural to approximate it by sampling $k$ random indices $i_1, \dots, i_k$ with replacement and outputting $\frac{1}{k}\sum_{j=1}^k \nabla f_{i_j}(\mathbf{x})$. (The case $k=1$ is known as stochastic gradient descent; the sample is sometimes known as a minibatch.) If each $f_i$ satisfies a Lipschitz condition uniformly bounding its partial derivatives, to approximate $\nabla f$ up to additive error $\epsilon$ in the $\ell_\infty$ norm with failure probability $\delta$ it is sufficient to take $k = O(\log(n/\delta)/\epsilon^2)$ by a Chernoff bound argument. Let $T_f$ denote an upper bound on the time required to compute $f_i(\mathbf{x})$ for all $i$. If we approximate each $\nabla f_{i_j}$ using the finite difference method, then each approximation to a partial derivative can be computed in time $O(T_f)$, giving a total complexity of $O(k\,n\,T_f) = \widetilde{O}(n\,T_f/\epsilon^2)$.

The use of quantum amplitude estimation [12] would improve the dependence on $\epsilon$ quadratically. Here we observe that the dependence on $n$ can also be improved quadratically, using a result of Gilyén, Arunachalam and Wiebe [22]. We will impose the restriction (for technical reasons) that the range of each function $f_i$ is within an interval $[c_0, c_1] \subset (0,1)$, where these numbers could be replaced with any constants between 0 and 1. Given the more typical constraint that $f_i(\mathbf{x}) \in [0,1]$ (e.g. if the output of $f_i$ represents a probability), $f_i$ can easily be modified to satisfy this constraint by a simple linear transformation, which does not change the positions of minima of $f$.

The results of [22] use two somewhat nonstandard oracle models which we now define. First we will consider probability access, and define what a probability oracle is.
Definition 1 (Probability oracle).
Let $f : X \to [0,1]$, where $\{|x\rangle : x \in X\}$ forms an orthonormal basis of the Hilbert space $\mathcal{H}$, and let $\mathcal{H}_{\mathrm{anc}}$ be an ancilla register on $a$ qubits. Then an operator $U_f$ acting on $\mathcal{H} \otimes \mathcal{H}_{\mathrm{anc}}$ is called a probability oracle for $f$ if
$$U_f : |x\rangle|0^a\rangle \mapsto |x\rangle\left(\sqrt{f(x)}\,|\psi^1_x\rangle|1\rangle + \sqrt{1 - f(x)}\,|\psi^0_x\rangle|0\rangle\right)$$
for some arbitrary $(a-1)$-qubit states $|\psi^0_x\rangle$, $|\psi^1_x\rangle$.
Essentially, within this model our objective function corresponds precisely to the probability of a certain outcome being observed upon measurement (in particular, the probability of seeing 1 when measuring the final qubit). Indeed, given a classical description of the function $f$, an oracle of this form can be constructed without a significant overhead [13]. The next access model we consider is access via a phase oracle.
Definition 2 (Phase oracle).
Given a function $f : X \to \mathbb{R}$, and given that $\{|x\rangle : x \in X\}$ forms an orthonormal basis of the Hilbert space $\mathcal{H}$, then the corresponding phase oracle $O_f$ allows queries of the form
$$O_f : |x\rangle \mapsto e^{i f(x)}|x\rangle.$$
The authors of [22] showed that a probability oracle is capable of simulating a phase oracle, and vice versa, with only logarithmic overhead:
Theorem 3 (Converting between probability and phase oracles [22]).
Suppose $f$ is given by access to a probability oracle $U_f$ which makes use of $a$ auxiliary qubits. Then we can simulate an $\epsilon$-approximate phase oracle $O_f$ using $O(\log(1/\epsilon))$ queries to $U_f$ and its inverse; the gate complexity is the same up to a factor of $O(a\log(1/\epsilon))$. Similarly, suppose $f$ is given by access to a phase oracle $O_f$. Then we can construct an $\epsilon$-approximate probability oracle for $f$ using $O(\log(1/\epsilon))$ queries to $O_f$ and its inverse. The gate complexity is the same up to a factor of $O(\log(1/\epsilon))$.
What this shows is that the two access models are more-or-less equivalent in power. Now we have defined probability oracles, we can show that access to probability oracles for the individual functions $f_i$ immediately gives such access for $f$ itself.
Lemma 4.
Assume we have access to each function $f_i$ via a probability oracle $U_i$. Then we can construct a probability oracle for $f = \frac{1}{m}\sum_{i=1}^m f_i$ with a single use of controlled-$U_i$ operations (in superposition) and $O(\log m)$ additional operations.
Proof.
We start with the superposition $\frac{1}{\sqrt{m}}\sum_{i=1}^m |i\rangle|\mathbf{x}\rangle$, where $|\mathbf{x}\rangle$ denotes a description of the real vector $\mathbf{x}$ in terms of binary, up to some number of digits of precision, leading to an orthonormal basis. If $m$ is a power of 2, this state can be constructed easily by applying Hadamard gates to each qubit in a register of $\log_2 m$ qubits. If not, the state can be constructed in circuit complexity $O(\log m)$ as follows: attach a register of $\lceil \log_2 m \rceil$ qubits; apply Hadamard gates to produce a uniform superposition over $\{1, \dots, 2^{\lceil \log_2 m \rceil}\}$; compute the function “$i \le m$” into an ancilla qubit using an efficient comparison circuit (e.g. [16]); measure the ancilla qubit; and proceed only if the answer is 1. If not (which occurs with probability at most $1/2$), repeat this step. We then apply the controlled operation $\sum_{i=1}^m |i\rangle\langle i| \otimes U_i$. This produces
$$\frac{1}{\sqrt{m}}\sum_{i=1}^m |i\rangle|\mathbf{x}\rangle\left(\sqrt{f_i(\mathbf{x})}\,|\psi^{i,1}_{\mathbf{x}}\rangle|1\rangle + \sqrt{1 - f_i(\mathbf{x})}\,|\psi^{i,0}_{\mathbf{x}}\rangle|0\rangle\right)$$
for some sequences of normalised states $|\psi^{i,0}_{\mathbf{x}}\rangle$, $|\psi^{i,1}_{\mathbf{x}}\rangle$. Rearranging subsystems, we can write this as
$$|\mathbf{x}\rangle\left(|\Phi^1_{\mathbf{x}}\rangle|1\rangle + |\Phi^0_{\mathbf{x}}\rangle|0\rangle\right)$$
for some unnormalised states $|\Phi^0_{\mathbf{x}}\rangle$, $|\Phi^1_{\mathbf{x}}\rangle$, where $\| |\Phi^1_{\mathbf{x}}\rangle \|^2 = \frac{1}{m}\sum_{i=1}^m f_i(\mathbf{x}) = f(\mathbf{x})$ as required by the definition of a probability oracle for $f$. ∎
We will use this probability oracle within the framework of the fast quantum algorithm of [22] for computing gradients. This algorithm is applicable to functions that satisfy a certain smoothness condition. Given some analytic function $g : \mathbb{R}^n \to \mathbb{R}$, let $\partial_j$ denote the partial derivative in the $j$’th coordinate, and for any $k \in \mathbb{N}$, $\alpha \in [n]^k$, let $\partial_\alpha = \partial_{\alpha_1}\partial_{\alpha_2}\cdots\partial_{\alpha_k}$.
The following result shows that if each function $f_i$ satisfies the required smoothness condition [22], then the overall function $f$ also satisfies the same condition.
Claim 5.
Let $c$ be a real constant, and fix some $\mathbf{x} \in \mathbb{R}^n$. Suppose that for all $i$ the function $f_i$ is analytic, and that for every natural number $k$, and every $\alpha \in [n]^k$, we have that
$$|\partial_\alpha f_i(\mathbf{x})| \le c^k k^{k/2};$$
then $f = \frac{1}{m}\sum_{i=1}^m f_i$ also satisfies the same condition.
Proof.
We apply the linearity of $\partial_\alpha$. Observe that
$$|\partial_\alpha f(\mathbf{x})| = \left|\frac{1}{m}\sum_{i=1}^m \partial_\alpha f_i(\mathbf{x})\right| \le \frac{1}{m}\sum_{i=1}^m |\partial_\alpha f_i(\mathbf{x})| \le c^k k^{k/2},$$
and we are done. ∎
In fact it is not too hard to see that this claim generalises to more-or-less any bound on the partial derivatives. We can now state the result we will need from [22].
Theorem 6 (Gilyén, Arunachalam and Wiebe [22, Theorem 25]).
Suppose that is an analytic function such that, for all and , . Assume access to is given by a phase oracle . Then there exists an algorithm that outputs a vector such that with 99% probability, using queries to the oracle and additional time .
Note that, if the time complexity of evaluating is , this dominates the overall runtime bound. We can encapsulate the combination of these results in the following theorem.
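For comparison, the textbook classical approach estimates a gradient in d dimensions using d+1 function evaluations via forward differences; a minimal sketch (the function name and step size are our own choices, not from the paper):

```python
import numpy as np

def forward_difference_gradient(f, x, h=1e-6):
    """Estimate the gradient of f at x with d+1 evaluations of f:
    one at x itself and one per coordinate direction."""
    x = np.asarray(x, dtype=float)
    fx = f(x)
    grad = np.empty_like(x)
    for j in range(x.size):
        step = np.zeros_like(x)
        step[j] = h                      # perturb only coordinate j
        grad[j] = (f(x + step) - fx) / h
    return grad

# Gradient of ||v||^2 at (1, -2, 0.5) is 2v = (2, -4, 1).
g = forward_difference_gradient(lambda v: float(v @ v), np.array([1.0, -2.0, 0.5]))
print(np.allclose(g, [2.0, -4.0, 1.0], atol=1e-4))  # prints True
```

The quantum algorithm of Theorem 6 replaces this linear-in-d query count with a roughly square-root dependence on the dimension, at the cost of producing only an approximate gradient.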
Theorem 7.
Proof.
Given the ability to compute each function in time , we can produce a phase oracle computing in time . By Theorem 3, and using that , we can then obtain an operation approximating a probability oracle for up to error in time . By Lemma 4, this gives a probability oracle for , at additional cost . By Theorem 3, we then obtain a phase oracle for at additional cost . This finally allows us to apply Theorem 6 to achieve the stated complexity. ∎
Despite Theorem 7 giving a more efficient quantum algorithm for approximately computing the gradient, it is not clear whether this translates into a more efficient quantum algorithm for stochastic gradient descent, or a quantum speedup of other algorithms that make use of gradients. This is because the algorithm of [22] only outputs an approximate gradient, and one which may not be an unbiased estimate of the true gradient. To prove approximate convergence of stochastic gradient descent, it is not essential for the gradient estimates to be unbiased [8], and it is plausible that an approximate estimate of the gradient should lead to an approximate minimiser being found. However, the technique used in [8] to show approximate convergence in this scenario requires the 2-norm of the approximate gradient to be close to that of the true gradient. The algorithm of [22] provides accuracy in the ∞-norm, and since the 2-norm of a d-dimensional vector can exceed its ∞-norm by a factor of √d, this only gives correspondingly weaker accuracy in the 2-norm. Further, it was shown by Cornelissen [15] that if the function is picked from a certain class of smooth functions, achieving a given 2-norm accuracy requires correspondingly more uses of a phase oracle in the worst case, so this is not merely a technical restriction. Nevertheless, it is possible that quantum gradient estimation may be more efficient than stochastic gradient descent in practice.

Acknowledgements
We would like to thank Srinivasan Arunachalam for helpful explanations of the results of [22]. We acknowledge support from the QuantERA ERA-NET Cofund in Quantum Technologies implemented within the European Union’s Horizon 2020 Programme (QuantAlgo project) and EPSRC grants EP/R043957/1 and EP/T001062/1. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 817581). No new data were created during this study.
Appendix A The DIRECT algorithm
In this appendix we briefly describe the DIRECT (“dividing rectangles”) algorithm [31] for global optimisation of functions , which is presented as Algorithm 5. The algorithm is based on maintaining a partition of the hypercube into hyperrectangles using the concept of “potentially optimal” hyperrectangles:
Definition 3.
Let ε > 0, let f_min be the current best function value found, and let m be the current number of hyperrectangles in the partition of the hypercube. Let c_i denote the centre of the i-th hyperrectangle, and let d_i denote the distance from the centre to the vertices. Hyperrectangle j is said to be potentially optimal if there exists K > 0 such that, for all i ∈ {1, …, m},

(2)  f(c_j) − K d_j ≤ f(c_i) − K d_i

and

(3)  f(c_j) − K d_j ≤ f_min − ε |f_min|.
We think of K in Definition 3 as a surrogate for the Lipschitz constant of the objective function (which is not assumed to be known in advance). An example of the first couple of steps of dividing into rectangles is shown in Figure 5(a). The set of potentially optimal hyperrectangles can be determined in time , where is the number of distinct interval lengths, using a convex hull technique described in [31] and illustrated in Figure 5(b). The conditions (2) and (3) are satisfied by the points that lie on the lower convex hull when f(c_i) is plotted against d_i for each hyperrectangle, where we also include the point (0, f_min − ε |f_min|). In Figure 5(b), the red dots represent potentially optimal hyperrectangles, whereas the black dots represent hyperrectangles that are not potentially optimal.
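The convex-hull selection step can be sketched in a few lines. The following is an illustrative implementation, not the authors' code: the function name and input conventions (lists of centre-to-vertex distances and centre values, the current best value, and the parameter ε from Definition 3) are our own, and ties on the hull are resolved by dropping collinear points.

```python
def potentially_optimal(dists, centre_vals, f_min, eps=1e-4):
    """Select potentially optimal hyperrectangles via the lower convex hull
    of the points (d_i, f(c_i)), together with an anchor point encoding
    condition (3).  Returns the sorted indices of selected rectangles."""
    pts = sorted(zip(dists, centre_vals, range(len(dists))))
    # Artificial anchor (index -1): any selected rectangle must beat the
    # current best value by a relative margin of eps.
    pts.insert(0, (0.0, f_min - eps * abs(f_min), -1))
    hull = []  # lower convex hull, built by Andrew's monotone chain
    for p in pts:
        while len(hull) >= 2:
            (x1, y1, _), (x2, y2, _) = hull[-2], hull[-1]
            # Pop the last hull point if it lies on or above the segment
            # from hull[-2] to p (non-left turn).
            if (x2 - x1) * (p[1] - y1) - (p[0] - x1) * (y2 - y1) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return sorted(i for _, _, i in hull if i >= 0)

# Two small rectangles (one holding the best centre value) and one large one:
print(potentially_optimal([0.25, 0.25, 0.5], [1.0, 2.0, 1.5], f_min=1.0))
# prints [0, 2]
```

Rectangle 1 is dominated (same size as rectangle 0 but a worse centre value), while the largest rectangle is always selected, matching the behaviour of DIRECT described above.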
References

[1] A. Ambainis and M. Kokainis. Quantum algorithm for tree size estimation, with applications to backtracking and 2-player games. In Proc. 49^{th} Annual ACM Symp. Theory of Computing, pages 989–1002, 2017.
 [2] J. van Apeldoorn and A. Gilyén. Improvements in quantum SDP-solving with applications, 2018. arXiv:1804.05058.
 [3] J. van Apeldoorn, A. Gilyén, S. Gribling, and R. de Wolf. Quantum SDP-solvers: better upper and lower bounds. In Proc. 58^{th} Annual Symp. Foundations of Computer Science, pages 403–414, 2017. arXiv:1705.01843.
 [4] L. Armijo. Minimization of functions having Lipschitz continuous first partial derivatives. Pacific Journal of Mathematics, 16(1):1–3, 1966.
 [5] S. Arunachalam. Quantum speedups for Boolean satisfiability and derivative-free optimization. Master’s thesis, University of Waterloo, 2014.
 [6] A. Belovs. Quantum walks and electric networks, 2013. arXiv:1302.3143.
 [7] A. Belovs, A. Childs, S. Jeffery, R. Kothari, and F. Magniez. Time-efficient quantum walks for 3-distinctness. In Proc. 40^{th} International Conference on Automata, Languages and Programming (ICALP’13), pages 105–122, 2013.
 [8] L. Bottou, F. Curtis, and J. Nocedal. Optimization methods for large-scale machine learning. SIAM Review, 2018. arXiv:1606.04838.
 [9] F. Brandão, A. Kalev, T. Li, C. Y.Y. Lin, K. Svore, and X. Wu. Quantum SDP solvers: Large speedups, optimality, and applications to quantum learning. In Proc. 46^{th} International Conference on Automata, Languages and Programming (ICALP’19), page to appear, 2019. arXiv:1710.02581.
 [10] F. Brandão and K. Svore. Quantum speedups for semidefinite programming. In Proc. 58^{th} Annual Symp. Foundations of Computer Science, pages 415–426, 2017. arXiv:1609.05537.
 [11] G. Brassard, F. Dupuis, S. Gambs, and A. Tapp. An optimal quantum algorithm to approximate the mean and its application for approximating the median of a set of points over an arbitrary distance, 2011. arXiv:1106.4267.
 [12] G. Brassard, P. Høyer, M. Mosca, and A. Tapp. Quantum amplitude amplification and estimation. Quantum Computation and Quantum Information: A Millennium Volume, pages 53–74, 2002. quant-ph/0005055.
 [13] Y. Cao, A. Papageorgiou, I. Petras, J. Traub, and S. Kais. Quantum algorithm and circuit design solving the Poisson equation. New J. Phys., 15:013021, 2013. arXiv:1207.2485.
 [14] S. Chakrabarti, A. Childs, T. Li, and X. Wu. Quantum algorithms and lower bounds for convex optimization, 2018. arXiv:1809.01731.
 [15] A. Cornelissen. Quantum gradient estimation of Gevrey functions, 2019. arXiv:1909.13528.
 [16] T. Draper, S. Kutin, E. Rains, and K. Svore. A logarithmic-depth quantum carry-lookahead adder. Quantum Inf. Comput., 6(4–5):351–369, 2006. quant-ph/0406142.
 [17] C. Dürr and P. Høyer. A quantum algorithm for finding the minimum, 1996. quant-ph/9607014.
 [18] I. Fajfar, A. Bűrmen, and J. Puhan. The Nelder-Mead simplex algorithm with perturbed centroid for high-dimensional function optimization. Optimization Letters, 13, 2018.
 [19] E. Farhi, J. Goldstone, and S. Gutmann. A quantum approximate optimization algorithm, 2014. arXiv:1411.4028.
 [20] E. Farhi, J. Goldstone, S. Gutmann, and M. Sipser. Quantum computation by adiabatic evolution. Technical Report MIT-CTP-2936, MIT, 2000. quant-ph/0001106.
 [21] E. Galperin. The cubic algorithm. Journal of Mathematical Analysis and Applications, 112(2):635–640, 1985.
 [22] A. Gilyén, S. Arunachalam, and N. Wiebe. Optimizing quantum optimization algorithms via faster quantum gradient computation. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1425–1444. Society for Industrial and Applied Mathematics, 2019. arXiv:1711.00465.
 [23] A. Gilyén, Y. Su, G. Low, and N. Wiebe. Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics. In Proc. 51^{st} Annual ACM Symp. Theory of Computing, pages 193–204, 2019. arXiv:1806.01838.
 [24] V. Giovannetti, S. Lloyd, and L. Maccone. Quantum random access memory. Phys. Rev. Lett., 100:160501, 2008. arXiv:0708.1879.
 [25] N. Gould and S. Leyffer. An introduction to algorithms for nonlinear optimization. In Frontiers in numerical analysis, pages 109–197. Springer, 2003.
 [26] L. Grover. Quantum mechanics helps in searching for a needle in a haystack. Phys. Rev. Lett., 79(2):325–328, 1997. quant-ph/9706033.
 [27] L. Han and M. Neumann. Effect of dimensionality on the Nelder–Mead simplex method. Optimization Methods and Software, 21(1):1–16, 2006.
 [28] P. Hansen and B. Jaumard. Lipschitz optimization. In Handbook of Global Optimization, pages 407–493. Springer, 1995.
 [29] S. Heinrich. Quantum summation with an application to integration. Journal of Complexity, 18(1):1–50, 2001. quant-ph/0105116.
 [30] T. Hogg and D. Portnov. Quantum optimization. Information Sciences, 128:181–197, 2000. quant-ph/0006090.
 [31] D. Jones, C. Perttunen, and B. Stuckman. Lipschitzian optimization without the Lipschitz constant. Journal of Optimization Theory and Applications, 79(1):157–181, 1993.
 [32] S. Jordan. Fast quantum algorithm for numerical gradient estimation. Phys. Rev. Lett., 95:050501, 2005. quant-ph/0405146.
 [33] R. Karp and Y. Zhang. Randomized parallel algorithms for backtrack search and branch-and-bound computation. Journal of the ACM, 40(3):765–789, 1993.
 [34] I. Kerenidis and A. Prakash. Quantum gradient descent for linear systems and least squares, 2017. arXiv:1704.04992.
 [35] I. Kerenidis and A. Prakash. A quantum interior point method for LPs and SDPs, 2018. arXiv:1808.09266.
 [36] R. Kothari. An optimal quantum algorithm for the oracle identification problem. In Proc. 31^{st} Annual Symp. Theoretical Aspects of Computer Science, pages 482–493, 2014. arXiv:1311.7685.
 [37] J. Lagarias, J. Reeds, H. Wright, and P. Wright. Convergence properties of the Nelder–Mead simplex method in low dimensions. SIAM Journal on Optimization, 9(1):112–147, 1998.
 [38] M. Lanzagorta and J. Uhlmann. Quantum algorithmic methods for computational geometry. Math. Struct. in Comp. Science, 20:1117–1125, 2010.
 [39] C. Lin and H. Lin. Upper bounds on quantum query complexity inspired by the Elitzur-Vaidman bomb tester. In Proc. 30^{th} Annual IEEE Conf. Computational Complexity, pages 537–566, 2015. arXiv:1410.0932.
 [40] R. Mladineo. An algorithm for finding the global maximum of a multimodal, multivariate function. In 12th International Symposium on Mathematical Programming, 1985.
 [41] A. Montanaro. Quantum speedup of Monte Carlo methods. Proc. Roy. Soc. Ser. A, 471(2181):20150301, 2015. arXiv:1504.06987.
 [42] A. Montanaro. Quantum-walk speedup of backtracking algorithms. Theory of Computing, 14(15):1–24, 2018. arXiv:1509.02374.
 [43] A. Montanaro. Quantum speedup of branch-and-bound algorithms. Phys. Rev. Research, 2(1):013056, 2020. arXiv:1906.10375.
 [44] J. Nelder and R. Mead. A simplex method for function minimization. Comput. J., 7:308–313, 1965.
 [45] J. Nocedal and S. Wright. Numerical Optimization. Springer, 2006.
 [46] S. Pijavskii. An algorithm for finding the absolute extremum of a function. USSR Computational Mathematics and Mathematical Physics, 12:57–67, 1972.
 [47] P. Rebentrost, M. Schuld, L. Wossnig, F. Petruccione, and S. Lloyd. Quantum gradient descent and Newton’s method for constrained polynomial optimization. New J. Phys., 21(7):073023, 2019. arXiv:1612.01789.
 [48] B. Shubert. A sequential method seeking the global maximum of a function. SIAM Journal on Numerical Analysis, 9:379–388, 1972.
 [49] S. Singer and S. Singer. Efficient implementation of the Nelder-Mead search algorithm. Appl. Num. Anal. Comp. Math., 1(2):524–534, 2004.
 [50] V. Torczon. Multidirectional Search: A Direct Search Algorithm for Parallel Machines. PhD thesis, Rice University, 1989.
 [51] J. van Apeldoorn, A. Gilyén, S. Gribling, and R. de Wolf. Convex optimization using quantum oracles, 2018. arXiv:1809.00643.