Quantum speedup of branch-and-bound algorithms

06/25/2019 ∙ by Ashley Montanaro, et al. ∙ University of Bristol

Branch-and-bound is a widely used technique for solving combinatorial optimisation problems where one has access to two procedures: a branching procedure that splits a set of potential solutions into subsets, and a cost procedure that determines a lower bound on the cost of any solution in a given subset. Here we describe a quantum algorithm that can accelerate classical branch-and-bound algorithms near-quadratically in a very general setting. We show that the quantum algorithm can find exact ground states for most instances of the Sherrington-Kirkpatrick model in time O(2^0.226n), which is substantially more efficient than Grover's algorithm.



Appendix A Example: Integer Linear Programming

To gain some intuition for how the results presented here could be applied, in this appendix we describe one simple and well-known application of branch-and-bound techniques: integer linear programming. An integer linear program (ILP) is a problem of the form:

minimise $c^T x$
subject to $Ax \le b$, $x \in \mathbb{Z}^n$,

where $c \in \mathbb{R}^n$ and $b \in \mathbb{R}^m$ are vectors, $A$ is an $m \times n$ matrix, and inequalities are interpreted componentwise. Integer linear programming problems have many applications, including production planning, scheduling, capital budgeting and depot location [14].

We can solve ILPs using branch-and-bound as follows. We begin by finding a lower bound on the optimal solution to the ILP by relaxing it to a standard linear program (LP) and solving the LP; that is, removing the constraint $x \in \mathbb{Z}^n$. This corresponds to the cost function. If the solution is integer-valued, we are done, as it corresponds to a valid solution to the ILP. Otherwise, consider an index $i$ such that the found solution value $x_i$ is not an integer. To implement branching, we consider the two LPs formed by introducing the constraints $x_i \le \lfloor x_i \rfloor$ and $x_i \ge \lceil x_i \rceil$, respectively. At least one of these must have the same optimal solution as the original ILP. We then repeat with these new LPs. An appealing aspect of this method is that the solution to the relaxation simultaneously tells us both a lower bound on the cost and a good variable to branch on.

The sequences of additional constraints specify subsets of potential solutions to the overall ILP. The branching and cost functions take such a sequence as input and solve the resulting LP, in order to decide which variable to branch on next and to compute a lower bound on the cost, respectively. The complexity of the LP-solving step is polynomial in the input size, so the primary contribution to the overall runtime will in general be the exponential scaling in the number of branching steps. For the LP-solving step, a standard classical method could be used (e.g. the simplex algorithm), or one of the recently developed quantum algorithms for linear programming [11, 4, 3, 26, 10]. A classical sketch of this procedure is given below.
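To make the branching and cost procedures concrete, the following is a minimal classical sketch of LP-relaxation-based branch-and-bound for min $c^T x$ subject to $Ax \le b$ and integrality, using SciPy's linprog as the LP solver. The solver choice, the depth-first exploration order and the rule of branching on the first fractional variable are illustrative assumptions for this sketch, not part of the algorithm analysed in this paper.

# Minimal sketch of LP-relaxation-based branch-and-bound for an ILP:
#   minimise c^T x  subject to  A x <= b,  x integer (within the given bounds).
import math
import numpy as np
from scipy.optimize import linprog

def solve_ilp(c, A, b, bounds, eps=1e-6):
    best_cost, best_x = math.inf, None
    stack = [bounds]                      # each node = list of (lo, hi) variable bounds
    while stack:
        node_bounds = stack.pop()
        # Cost procedure: solve the LP relaxation to lower-bound this subtree.
        res = linprog(c, A_ub=A, b_ub=b, bounds=node_bounds, method="highs")
        if not res.success or res.fun >= best_cost:
            continue                      # infeasible, or cannot beat incumbent: prune
        frac = [i for i, xi in enumerate(res.x) if abs(xi - round(xi)) > eps]
        if not frac:                      # integer-valued: a feasible ILP solution
            best_cost, best_x = res.fun, np.round(res.x)
            continue
        # Branching procedure: split on a fractional variable x_i.
        i, xi = frac[0], res.x[frac[0]]
        lo, hi = node_bounds[i]
        left, right = list(node_bounds), list(node_bounds)
        left[i] = (lo, math.floor(xi))    # branch 1: x_i <= floor(x_i)
        right[i] = (math.ceil(xi), hi)    # branch 2: x_i >= ceil(x_i)
        stack.extend([left, right])
    return best_cost, best_x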

A particularly simple and elegant special case of this approach is the knapsack problem. Here we are given a list of $n$ items, each with weight $w_i$ and value $v_i$, and an overall weight upper bound $W$. We seek to find a subset $S \subseteq \{1, \dots, n\}$ of the items that maximises $\sum_{i \in S} v_i$, given that $\sum_{i \in S} w_i \le W$. We can write this as an integer linear program as follows:

maximise $\sum_{i=1}^n v_i x_i$
subject to $\sum_{i=1}^n w_i x_i \le W$, $x_i \in \{0, 1\}$ for all $i$.

Each variable $x_i$ corresponds to whether the $i$'th item is included in the knapsack. Then the LP relaxation is simply to replace the constraint $x_i \in \{0, 1\}$ with the constraint $0 \le x_i \le 1$ for all $i$. This is equivalent to allowing fractional amounts of each item to be included.
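As a hypothetical usage example of the solve_ilp sketch above (the item values and weights below are made up for illustration), maximisation is handled by negating the objective:

# 3-item knapsack: values (60, 100, 120), weights (10, 20, 30), capacity 50.
import numpy as np

v = np.array([60.0, 100.0, 120.0])
w = np.array([10.0, 20.0, 30.0])
W = 50.0

cost, x = solve_ilp(c=-v, A=w.reshape(1, -1), b=np.array([W]),
                    bounds=[(0, 1)] * len(v))
print(x, -cost)   # expected: items 2 and 3 chosen, total value 220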

The branch-and-bound approach to solving ILPs can immediately be applied to the generalisation to Mixed Integer Linear Programming, where only some of the variables are constrained to take integer values; we then branch only on those variables which are forced to be integers. One can also apply it within “branch and cut” algorithms. In this approach, when the LP relaxation returns a non-integer-valued solution, one may also add a new constraint (a cutting hyperplane) which separates that solution from all integer-valued feasible solutions.
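As a small self-contained illustration of the mixed-integer case, branching candidates are restricted to the integer-constrained indices (the name integer_vars and the tolerance below are assumptions for this sketch):

def fractional_integer_vars(x, integer_vars, eps=1e-6):
    """Indices in integer_vars whose relaxation value x[i] is still fractional."""
    return [i for i in integer_vars if abs(x[i] - round(x[i])) > eps]

Replacing the computation of frac in the solve_ilp sketch above with fractional_integer_vars(res.x, integer_vars) turns it into a basic mixed integer solver; continuous variables are simply left fractional.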

Appendix B Analysis of Algorithm 2

In this appendix we prove the correctness and the claimed runtime bound of Algorithm 2.

Theorem 1.

Let $c_{\min}$ be the minimal cost of a valid solution, and let $T_{\min}$ be the size of the truncated tree with cost bound $c_{\min}$. (If there is no solution, $c_{\min} = \infty$, and $T_{\min}$ is the size of the whole tree.) Algorithm 2 uses

oracle calls and, except with failure probability at most $\delta$, returns a solution with minimal cost if one exists, and otherwise “no solution”.

Proof.

We first show that the algorithm succeeds with probability at least $1 - \delta$. The loop executes at most times, so each of Count and Search is used at most times. By a union bound, it is sufficient to pick the failure probability of each individual use of Count and Search small enough that all of these uses succeed, except with total probability at most $\delta$. So we henceforth assume that Count and Search do always succeed.

If this is the case, we first observe that the algorithm always correctly outputs a minimal-cost solution, if one exists, or otherwise “no solution”. This is because at the final iteration (when ), if no solution has previously been found then Search will explore the entire tree and find a solution if one exists. To see that it outputs a minimal-cost solution, note that the binary search on using Search is over the range , and is no larger than the largest value of previously computed, so any solution with cost smaller than would have been found in a previous iteration.

It remains to prove the runtime bound. Let $T(c)$ denote the size of the truncated tree with cost bound $c$ (so $T_{\min} = T(c_{\min})$). The first binary search (in part 2a) executes Count times, each iteration using queries; and the second binary search executes Search times, where each iteration uses queries. At each iteration of the loop, after the binary search using Count, by correctness of the quantum tree size estimation algorithm. Further, at the first iteration when (if such an iteration occurs), for all , Count does not return “contains more than nodes”. This implies that , because as the binary search terminated at cost , Count must have returned “contains more than nodes”. Note that this holds even though Count can return an arbitrary outcome when .

Therefore, at this iteration the tree truncated at cost contains a minimal-cost solution, which will be found by the binary search on using Search, and the algorithm will terminate. On the other hand, if there is no iteration such that , we must have . Combining these two claims, we have throughout the algorithm. The loop over exponentially increasing values of does not affect the overall complexity bound, so the overall complexity is

queries, as claimed. ∎

We remark that it seems that, in general, Algorithm 2 could not be replaced by simply running the Search subroutine with exponentially increasing values of the cost parameter (an approach taken in [34] for the special case of accelerating backtracking algorithms for the travelling salesman problem). This is because increasing the cost at which the tree is truncated by a constant factor could increase the size of the truncated tree substantially beyond $T_{\min}$.
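For readers who find it helpful to see the interplay between the Count and Search subroutines in one place, the following classical control-flow sketch shows one way the structure referenced in the proof could be organised. It is a schematic reconstruction only: the function names, the cost grid, the doubling schedule and the stopping rule are assumptions for illustration, not the precise specification of Algorithm 2 from the main text.

def minimal_cost_solution(count, search, costs, max_budget):
    """Schematic classical control flow.  count(c, N): estimated number of nodes
    in the tree truncated at cost c (assumed reliable when that number is at
    most N).  search(c): a solution of cost <= c, or None.  costs: a sorted
    list of candidate cost values."""
    budget = 1
    while budget <= max_budget:
        if count(costs[0], budget) > budget:
            budget *= 2                      # even the smallest truncated tree is too big
            continue
        # (a) Binary search with Count for the largest cost whose truncated
        #     tree fits within the current node budget.
        lo, hi = 0, len(costs) - 1
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if count(costs[mid], budget) <= budget:
                lo = mid
            else:
                hi = mid - 1
        # (b) If a solution exists within that truncated tree, binary search
        #     with Search for the smallest cost at which one is found.
        if search(costs[lo]) is not None:
            left, right = 0, lo
            while left < right:
                mid = (left + right) // 2
                if search(costs[mid]) is not None:
                    right = mid
                else:
                    left = mid + 1
            return search(costs[left])       # a minimal-cost solution
        budget *= 2                          # no solution found yet: double the budget
    return None                              # interpreted as "no solution"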

Appendix C Truncated tree size bound for Sherrington-Kirkpatrick model

In this appendix, let $T$ denote the size of the tree corresponding to the classical branch-and-bound algorithm applied to find the ground-state energy of an Ising Hamiltonian corresponding to an $n \times n$ matrix $A$, using the bounding function described in the main text, where the tree is truncated at the optimal value.

We will prove the following result:

Theorem 2.

Let $A$ be an $n \times n$ matrix corresponding to a Sherrington-Kirkpatrick model instance on $n$ spins. For all sufficiently large $n$,

The dominant term in the quantum complexity is the square root of the classical complexity, which is determined by $T$, so Theorem 2 implies that the quantum branch-and-bound algorithm has an $O(2^{0.226n})$ running time on 99% of Sherrington-Kirkpatrick model instances.

In order to prove Theorem 2, we will need two technical lemmas, proven in Appendix D.

Lemma 3.

Let $X_1, \dots, X_m$ be i.i.d. standard normal random variables. Let $f : \mathbb{R}^m \to \mathbb{R}$ be continuous and 1-Lipschitz in each coordinate separately, i.e. $|f(x) - f(y)| \le |x_i - y_i|$ whenever $x$ and $y$ differ only in the $i$'th coordinate, for all $i$. Then

Lemma 3 was shown in [29], but with an incorrect constant. We will also need a bound on the expectation of the optimal (ground-state) value. A precise value for this is known as $n \to \infty$, but we will need a bound that holds for arbitrary $n$:

Lemma 4.

Let $A$ be an $n \times n$ matrix corresponding to a Sherrington-Kirkpatrick model instance on $n$ spins. For all $n$, .

We are now able to prove Theorem 2. The basic strategy is to upper-bound the expected value of $T$, using that (by linearity of expectation) this can be expressed as a sum, over all bit-strings $x$ of length at most $n$, of the probability that the node corresponding to $x$ is contained within the truncated branch-and-bound tree. These bit-strings are precisely those such that Bound$(x)$ is at most the optimal value, and a tail bound can be used to upper-bound the probability that this event occurs.
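Spelling out this first step, and writing $c^*$ as a stand-in symbol for the optimal value (a notation introduced here for readability, not taken from the paper):
\[
\mathbb{E}[T] \;=\; \sum_{x} \Pr\bigl[x \text{ lies in the truncated tree}\bigr] \;=\; \sum_{x} \Pr\bigl[\mathrm{Bound}(x) \le c^*\bigr],
\]
where the sums run over all bit-strings $x$ of length at most $n$.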

There are two technical difficulties which need to be handled. First, this approach does not give a good upper bound in the case where is high, which can occur with non-negligible probability, leading to becoming large. We therefore handle this case separately and show that it occurs with low probability. Next, to find a tail bound on Bound, we need to compute expressions of the form ; although a limiting form for this is known [37, 42, 39], we will additionally need relatively tight bounds in the case . We therefore split into cases (where and we can use the precise limiting result) and (where we use Lemma 4).

Proof of Theorem 2.

Write , and let be an arbitrary value to be determined. We will upper-bound the probability that for some as follows, where we use the notation $\mathbb{1}[P]$ for the indicator random variable which evaluates to 1 if $P$ is true, and 0 if $P$ is false:

where the second inequality is Markov’s inequality and we use linearity of expectation in the second equality.

To upper-bound the last term, we use Lemma 3. We first observe that is 1-Lipschitz in each variable, as if we modify to produce by changing to for some pair ,

(2)

and by a similar argument . So Lemma 3 implies that

For this to be upper-bounded by a small constant (e.g. 0.005) we can take .

We next upper-bound the first term by bounding . We only need to consider in the maximisation, because when , trivially upper-bounding this probability by 1 already gives a sufficiently strong bound. Recall that

The function is 1-Lipschitz in each variable by a similar argument to (2). Thus, for any ,

(3)

First assume that , so as . For any , we have

(4)
(5)
(6)

where we use linearity of expectation to obtain the first expression, that , and the known limiting result  [37, 42, 39].

Writing , we have that

On the other hand, for , we follow a similar argument but apply the nonasymptotic result of Lemma 4 to bound , which implies that

(7)
(8)
(9)

In either case, we have

By (3), using and observing (see Figure 3) that for sufficiently large , so the right-hand side is negative as required, we have

So

observing that . It remains to determine upper bounds on the functions

(10)
(11)

This can easily be achieved numerically, giving (see Figure 4) the result that for and for . Hence

and to upper-bound the first term by 0.005, for sufficiently large one can take . This completes the proof. ∎

Figure 3: The functions , defined in (6), (9). for all , while for all .
Figure 4: The functions , defined in (10), (11). for all , while for all .

Appendix D Proofs of technical lemmas

In this appendix we prove Lemmas 3 and 4. We say that $f : \mathcal{X}_1 \times \dots \times \mathcal{X}_n \to \mathbb{R}$ satisfies the bounded differences condition with constants $c_1, \dots, c_n$ if $|f(x) - f(x')| \le c_i$ whenever $x$ and $x'$ differ only in the $i$'th coordinate.

Lemma 5 (McDiarmid’s inequality or method of bounded differences [16, Corollary 5.2]).

If $f$ satisfies the bounded differences condition with constants $c_1, \dots, c_n$, and $X_1, \dots, X_n$ are independent random variables, then
\[
\Pr\bigl[f(X_1, \dots, X_n) \ge \mathbb{E}[f(X_1, \dots, X_n)] + t\bigr] \le e^{-2t^2/C} \quad \text{for all } t \ge 0,
\]
where $C = \sum_{i=1}^n c_i^2$.

Lemma 3 (restated).

Let $X_1, \dots, X_m$ be i.i.d. standard normal random variables. Let $f : \mathbb{R}^m \to \mathbb{R}$ be continuous and 1-Lipschitz in each coordinate separately, i.e. $|f(x) - f(y)| \le |x_i - y_i|$ whenever $x$ and $y$ differ only in the $i$'th coordinate, for all $i$. Then

Proof.

For $1 \le i \le m$, $1 \le j \le k$, let $\epsilon_{ij}$ be a Rademacher random variable, taking values $\pm 1$ with equal probability. Then define the sequence $X_i^{(k)}$ by $X_i^{(k)} = \frac{1}{\sqrt{k}} \sum_{j=1}^k \epsilon_{ij}$. Let $F$ be defined by setting $F(\epsilon) = f(X_1^{(k)}, \dots, X_m^{(k)})$. Then changing one entry of $\epsilon$ can change $F$ by at most $2/\sqrt{k}$, so we can apply Lemma 5 with $c_{ij} = 2/\sqrt{k}$ to obtain

As $k \to \infty$, the distribution of $X_i^{(k)}$ approaches a standard normal distribution for all $i$. The lemma follows. ∎
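For concreteness, under the notation reconstructed above (this is a worked step, not a quotation of the paper's own display), the bounded-differences constants $c_{ij} = 2/\sqrt{k}$ combine as
\[
C \;=\; \sum_{i=1}^{m} \sum_{j=1}^{k} \left(\frac{2}{\sqrt{k}}\right)^2 \;=\; 4m,
\qquad\text{so}\qquad
\Pr\bigl[F \ge \mathbb{E}[F] + t\bigr] \;\le\; e^{-2t^2/C} \;=\; e^{-t^2/(2m)},
\]
independently of $k$.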

Lemma 4 (restated).

Let $A$ be an $n \times n$ matrix corresponding to a Sherrington-Kirkpatrick model instance on $n$ spins. For all $n$, .

Proof of Lemma 4.

We start with

valid as is non-positive. Next, for any we have

using a union bound over and symmetry of the distribution of . By a tail bound on the normal distribution, we have

for all . So

where we use the bound for any in the second inequality. ∎

Appendix E Classical numerical branch-and-bound results

We implemented the classical branch-and-bound algorithm described in the main text, with cost function Bound, using a simple depth-first search procedure within the branch-and-bound tree which backtracks at nodes corresponding to partial solutions whose energy bound is worse than the lowest energy seen so far. For an S-K model instance described by a matrix $A$, this gives an upper bound on the size of an optimally truncated branch-and-bound tree (equivalently, on the runtime of the best-first search algorithm applied to find the ground-state energy with cost function Bound).
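For concreteness, the following is a minimal classical sketch of such a depth-first branch-and-bound search for the ground-state energy of $H(z) = \sum_{i<j} a_{ij} z_i z_j$, $z_i \in \{\pm 1\}$. The bounding function simple_bound below is an illustrative valid lower bound chosen for this sketch; it is not necessarily the Bound function defined in the main text, and the instance-generation details are placeholders.

import numpy as np

def simple_bound(a, z):
    """Lower bound on H over all completions of the partial assignment z."""
    k, n = len(z), a.shape[0]
    fixed = sum(a[i, j] * z[i] * z[j] for i in range(k) for j in range(i + 1, k))
    # Each free spin j can at best cancel its coupling to the assigned spins;
    # each free-free pair contributes at least -|a_ij|.
    cross = -sum(abs(sum(a[i, j] * z[i] for i in range(k))) for j in range(k, n))
    free = -sum(abs(a[i, j]) for i in range(k, n) for j in range(i + 1, n))
    return fixed + cross + free

def ground_state_energy(a):
    n = a.shape[0]
    best, explored = float("inf"), 0
    def dfs(z):
        nonlocal best, explored
        explored += 1
        if len(z) == n:
            best = min(best, simple_bound(a, z))   # the bound is exact at a leaf
            return
        for s in (+1, -1):
            child = z + [s]
            if simple_bound(a, child) < best:      # prune if bound >= incumbent
                dfs(child)
    dfs([1])                                       # fix z_1 = +1, since H(z) = H(-z)
    return best, explored

# Example: a random S-K instance with couplings a_ij ~ N(0, 1).
rng = np.random.default_rng(0)
n = 12
upper = np.triu(rng.standard_normal((n, n)), 1)
a = upper + upper.T
print(ground_state_energy(a))   # (ground-state energy, number of tree nodes explored)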

This algorithm enabled instances on more than 50 spins to be solved within minutes on a standard laptop computer. We then carried out a least-squares fit on the log of the number of nodes explored, omitting small $n$, to estimate the scaling of the algorithm with $n$. Note that, due to finite-size effects, this may not be accurate for large $n$; however, it gives an indication of how the tree size scales. The median normalised ground-state energy found for the larger values of $n$ (e.g.  for ) seems to approach the limiting value relatively slowly. These results are consistent with heuristic finite-size results reported in [9] and were validated using exhaustive search for small $n$.
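A scaling fit of this kind can be reproduced along the following lines (the node counts below are synthetic placeholders, not the data behind Figure 5):

# Estimate the exponent c in (median tree size) ~ 2^{c n} by a least-squares
# fit of log2(node count) against n, omitting small n.
import numpy as np

ns = np.arange(20, 55, 5)
median_nodes = 3.0 * 2 ** (0.4 * ns)          # synthetic stand-in data
mask = ns >= 30                               # omit small n (finite-size effects)
c, intercept = np.polyfit(ns[mask], np.log2(median_nodes[mask]), 1)
print(f"fitted scaling ~ 2^({c:.3f} n)")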

Figure 5: Median tree size explored by the classical branch-and-bound algorithm with the depth-first strategy. 99 random instances generated for each $n$. Fit is line .
Figure 6: Normalised ground-state energy of instances of the S-K model. 99 random instances generated for each $n$.