A Hybrid Quantum-Classical Paradigm to Mitigate Embedding Costs in Quantum Annealing---Abridged Version

07/30/2018
by   Alastair A. Abbott, et al.

Quantum annealing has shown significant potential as an approach to near-term quantum computing. Despite promising progress towards obtaining a quantum speedup, quantum annealers are limited by the need to embed problem instances within the (often highly restricted) connectivity graph of the annealer. This embedding can be costly to perform and may destroy any computational speedup. Here we present a hybrid quantum-classical paradigm to help mitigate this limitation, and show how a raw speedup that is negated by the embedding time can nonetheless be exploited in certain circumstances. We illustrate this approach with initial results on a proof-of-concept implementation of an algorithm for the dynamically weighted maximum independent set problem.


1 Introduction

Quantum computation has the potential to revolutionise computer science, and as a consequence has received a great deal of attention from theorists and experimentalists alike. Although much progress has been made through the concerted efforts of the community, we are still some distance from being able to build sufficiently large-scale universal quantum computers to realise this potential [22].

More recently, however, significant progress has been made in the development of special-purpose quantum computers. This has been driven by the realisation that, by dropping the requirement of efficiently simulating arbitrary computations and relaxing some of the constraints that make large-scale universal quantum computing difficult, such devices can be more easily engineered and scaled. With this approach it may be possible to exploit some of the capabilities of quantum computation to obtain lesser, but nevertheless practical, advantages in real-world applications. Quantum annealers, which solve particular optimisation problems, exemplify this approach, and significant progress has been made in recent years towards engineering moderately large-scale devices of this kind [18]. This approach has been pursued particularly zealously by D-Wave, whose quantum annealers now have upwards of 2000 qubits [12] and are thus of sufficient size to tackle problems for which their performance can meaningfully be compared to classical computational approaches.

In this paradigm, however, it is a subtle problem to compare the performance of quantum solutions to classical ones, since the focus is on obtaining real-world gains in domains where heuristics tend to be at the core of the best classical approaches. Indeed, this issue is at the heart of recent debate surrounding the performance of D-Wave machines [7, 28]. In particular, instead of focusing on asymptotic analyses, one must compare the performance of classical and quantum devices empirically. But performing such benchmarking fairly is difficult, especially when there is often debate as to which classical algorithm should be taken for comparison [19, 21, 27].

In this paper, motivated by the need to take into account the cost of classical processing in benchmarking quantum annealers, we propose a hybrid quantum-classical approach for developing algorithms that mitigates the cost of this processing. We focus on D-Wave's quantum annealers, where the process involves a costly classical "embedding" stage which maps an arbitrary problem instance into one compatible with D-Wave's limited connectivity constraints. We then formulate a generic hybrid approach that mitigates this cost, allowing any advantage present to be accessed more directly [5]. The embedding problem is time-consuming, and experimental studies indicate that its quality can have strong effects on performance [31, 33].

To illustrate the generic framework for hybrid computing that we propose, we present a hybrid algorithm based around a D-Wave solution to the maximum-weight independent set (MWIS) problem. We present an overview of the results of an initial proof-of-principle implementation of this algorithm, showing a large improvement of the hybrid algorithm over a more standard quantum annealing approach, as well as comparing it to a classical algorithm.

2 D-Wave’s quantum annealing framework

2.1 Quantum annealing and quadratic unconstrained Boolean optimisation

Quantum annealing is a finite-temperature implementation of adiabatic quantum computing [14], in which the optimisation problem to be solved is encoded into a Hamiltonian $H_P$ (the quantum operator corresponding to the system's energy) such that the ground state of $H_P$ corresponds precisely to the solution to the problem (or one of them, if many exist). The computer is initially prepared in the ground state of a Hamiltonian $H_0$, which is then slowly evolved into the target Hamiltonian $H_P$. This computation can be described by the time-dependent Hamiltonian $H(t) = A(t/t_a)H_0 + B(t/t_a)H_P$ for $t \in [0, t_a]$, where $A(0) = B(1) = 1$ and $A(1) = B(0) = 0$. Here $t_a$ is called the annealing time and, for D-Wave machines, the functions $A$ and $B$ give a close-to-linear transition from $H_0$ to $H_P$ [18]. If the computation is performed sufficiently slowly, the Adiabatic Theorem guarantees that the system will remain in a ground state of $H(t)$ throughout the computation, and the final state will thus correspond to an optimal solution to the problem at hand [14].
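As a concrete illustration of this interpolation (our own toy example, not taken from the paper), the following Python sketch builds $H(s) = (1-s)H_0 + sH_P$ for a two-qubit diagonal problem Hamiltonian and tracks the spectral gap along the schedule; the instance and the exactly linear schedule are illustrative assumptions.

    import numpy as np

    X = np.array([[0.0, 1.0], [1.0, 0.0]])  # Pauli-X
    I2 = np.eye(2)

    def kron_all(ops):
        """Tensor product of a list of single-qubit operators."""
        out = np.array([[1.0]])
        for op in ops:
            out = np.kron(out, op)
        return out

    n = 2  # a two-qubit toy instance
    # Initial Hamiltonian H_0 = -sum_i X_i (transverse field)
    H0 = -sum(kron_all([X if j == i else I2 for j in range(n)]) for i in range(n))
    # Toy diagonal problem Hamiltonian H_P; unique ground state |10>
    HP = np.diag([0.0, -1.0, -1.5, 1.0])

    for s in np.linspace(0.0, 1.0, 5):
        H = (1 - s) * H0 + s * HP  # linear schedule A(s) = 1-s, B(s) = s
        evals = np.linalg.eigvalsh(H)
        print(f"s = {s:.2f}  spectral gap = {evals[1] - evals[0]:.4f}")

The gap stays bounded away from zero here; by the Adiabatic Theorem, an annealing time large compared to the inverse square of the minimum gap keeps the system in the instantaneous ground state.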

Quantum annealers implement specific, simple classes of Hamiltonians, such as the two-dimensional Ising spin Hamiltonians realised by D-Wave devices. This restriction means that D-Wave annealers are capable of natively solving the Quadratic Unconstrained Boolean Optimisation (QUBO) problem [8]. The QUBO problem is the task of finding the input that minimises a quadratic objective function of the form $f(x) = x^T Q x$, where $x = (x_1, \dots, x_n)^T$ is a vector of $n$ binary variables and $Q$ is an $n \times n$ upper-triangular matrix of real numbers:

$$f(x) = x^T Q x = \sum_{i \le j} Q_{ij}\, x_i x_j.$$

In the quantum annealing model of the QUBO problem, each $x_i$ corresponds to a qubit while $Q$ defines the problem Hamiltonian $H_P$. Crucially, the nonzero terms $Q_{ij}$ (for $i \ne j$) correspond to couplings between qubits $x_i$ and $x_j$ and induce a graph $G_L = (V_L, E_L)$, called the logical graph, representing interactions between qubits; the qubits over which the QUBO problem is represented are called the logical qubits.
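For concreteness, a minimal brute-force sketch of the QUBO objective (the 3-variable matrix is an arbitrary illustration, not an instance from the paper):

    import itertools
    import numpy as np

    # An arbitrary upper-triangular QUBO matrix (illustrative values)
    Q = np.array([[-1.0,  2.0,  0.0],
                  [ 0.0, -1.0,  2.0],
                  [ 0.0,  0.0, -1.0]])

    best_x, best_f = None, float("inf")
    for bits in itertools.product([0, 1], repeat=Q.shape[0]):
        x = np.array(bits, dtype=float)
        f = x @ Q @ x  # the quadratic objective x^T Q x
        if f < best_f:
            best_x, best_f = bits, f

    print(f"minimiser {best_x} with objective {best_f}")  # -> (1, 0, 1), -2.0

Here the off-diagonal entries $Q_{01}$ and $Q_{12}$ penalise selecting coupled variables together, which is exactly the mechanism the MWIS formulation of Section 4.2 exploits.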

2.2 Hardware constraints and embeddings

The comparative ease of engineering devices which naturally solve the QUBO problem has been crucial for the recent experimental success of quantum annealing. Still, it remains exceedingly difficult to control interactions between qubits that are not physically near one another, and as a result it is not possible to directly implement an arbitrary instance of the QUBO problem. Instead, the couplings possible on a quantum annealer are specified by a physical graph $G_P = (V_P, E_P)$, where $V_P$ is the set of physical qubits on the device, and an edge $(u, v) \in E_P$ signifies that qubits $u$ and $v$ can be physically coupled [8].

The physical graphs implemented on D-Wave's devices are Chimera graphs $\chi_c$, which are $c \times c$ grids of $K_{4,4}$ blocks [6]. Specifically, each qubit is coupled with four other qubits in the same block and two qubits in adjacent blocks (except for qubits in blocks on the edge of the grid, which are coupled to a single other block). The Chimera graph is, crucially, relatively sparse and near-to-planar, with qubits separated by paths of length up to order $\sqrt{|V_P|}$. Since the logical graph $G_L$ for a QUBO problem instance will not, in general, be a subgraph of the physical graph $G_P$, the problem instance on $G_L$ must be mapped to an equivalent one on $G_P$. This process involves two steps: first, $G_L$ must be embedded in $G_P$, and second, the weights of the QUBO problem (i.e., the non-zero entries in $Q$) must be adjusted so that valid solutions on $G_P$ are mapped to valid solutions on $G_L$.

The embedding stage amounts to finding a (minor) embedding of $G_L$ into $G_P$ [8], i.e., a function $\phi: V_L \to 2^{V_P}$ such that: i) the sets of vertices $\phi(u)$ and $\phi(v)$ are disjoint for $u \ne v$; ii) for all $v \in V_L$, there is a subset of edges of $E_P$ between the vertices of $\phi(v)$ such that $\phi(v)$ is connected; and iii) if $(u, v) \in E_L$, then there exist $u' \in \phi(u)$ and $v' \in \phi(v)$ such that $(u', v')$ is an edge in $E_P$. The problem of finding a minor embedding is itself computationally difficult [8]. The embedding process may thus, in light of its computational difficulty, contribute significantly to the time required to solve a problem in practice. Currently, the standard approach to finding such an embedding is to use heuristic algorithms.
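One widely used heuristic of this kind is available as D-Wave's open-source minorminer package; a minimal sketch, assuming the minorminer and dwave_networkx packages (our illustration, not the exact tooling of the experiments in Section 4):

    import networkx as nx
    import dwave_networkx as dnx
    import minorminer

    # Logical graph G_L: a complete graph on 6 vertices
    G_L = nx.complete_graph(6)

    # Physical graph G_P: a 4x4 Chimera graph (128 qubits)
    G_P = dnx.chimera_graph(4)

    # Heuristically find a minor embedding phi: V_L -> 2^{V_P};
    # each logical qubit is mapped to a connected "chain" of physical qubits.
    embedding = minorminer.find_embedding(G_L.edges, G_P.edges)
    for v, chain in embedding.items():
        print(f"logical qubit {v} -> physical chain {chain}")

Because the heuristic is randomised, repeated calls can return embeddings of different sizes and quality, which is one source of the timing variation discussed below.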

2.3 Benchmarking quantum annealers

It is not generally believed that an exponential speedup is possible for NP-hard problems such as the QUBO problem [2], and there has been much debate over whether or not quantum annealing provides any such speedup in practice [5, 27]. Indeed, there is disagreement over what exactly constitutes a quantum "speedup" and how to determine if there is one. In this paper we will focus primarily on the empirical run-time performance in investigating whether a quantum speedup is present, rather than the (empirically estimated) scaling performance of quantum algorithms.

Good benchmarking needs to make use of fair and comprehensive metrics to determine the running time of both classical and quantum algorithms for a problem. In particular, these need to properly take into account not only the "wall-time" of different stages of the quantum algorithm, but also its probabilistic nature. To understand how this can be done, we first need to outline the different stages of the quantum annealing process [20]:

  1. Programming: The problem instance is loaded onto the annealing chip (QPU), which takes time $t_{\mathrm{prog}}$.

  2. Annealing: The quantum annealing process is performed and then the physical qubits are measured to obtain a solution; this takes time $t_a$.

  3. Repetition: Step 2 is performed $k$ times to obtain $k$ potential solutions.

The quantum processing time (QPT) is thus $t_{\mathrm{QPT}} = t_{\mathrm{prog}} + k\, t_a$. With these considerations at hand, a relatively fair and robust way to measure the quantum processing time is the "time to solution" (TTS) metric [4, 27], which is based on the expected number of repetitions $R$ needed to obtain a valid solution with probability $p_d$ (one typically takes $p_d = 0.99$). If the probability per annealing sample of obtaining a solution is $p_s$ (which can be estimated empirically), then this is calculated as $R = \left\lceil \ln(1 - p_d)/\ln(1 - p_s) \right\rceil$, and the quantum processing time is calculated with $k = R$. Existing investigations have primarily focused on comparing directly the QPT with the processing time of a classical algorithm in order to look for what we call a "raw quantum speedup". However, it is essential to realise that the time used by the QPU and measured by the QPT refers only to a subset of the processing required to solve a given problem instance using a quantum annealer. Specifically, a complete quantum algorithm for a problem instance involves, as a minimum requirement, the following steps:

  1. Conversion: The problem instance must be converted into a QUBO instance, typically via a polynomial-time reduction taking time $t_{\mathrm{conv}}$.

  2. Embedding: The QUBO problem must be embedded into the Chimera hardware graph, taking time $t_{\mathrm{embed}}$.

  3. Pre-processing: The embedded problem is pre-processed, which involves calculating (appropriately scaled) weights for the embedded QUBO problem, taking time $t_{\mathrm{pre}}$.

  4. Quantum processing: The annealing process is performed on the QPU, taking time $t_{\mathrm{QPT}}$.

  5. Post-processing: The samples are post-processed to choose the best candidate solution, check its validity, and perform any other post-processing methods to improve the solution quality [20, 26], taking time $t_{\mathrm{post}}$. The QUBO solution must finally be converted back to a solution for the original problem instance.

The total processing time is thus

$$T = t_{\mathrm{conv}} + t_{\mathrm{embed}} + t_{\mathrm{pre}} + t_{\mathrm{QPT}} + t_{\mathrm{post}}. \qquad (1)$$
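These definitions translate directly into code; a minimal sketch (with purely illustrative timing values, not measured data) of the TTS repetition count $R$ and the total time (1):

    import math

    def repetitions(p_s, p_d=0.99):
        """Repetitions needed to see a valid solution with probability p_d,
        given per-sample success probability p_s (the TTS formula)."""
        return math.ceil(math.log(1 - p_d) / math.log(1 - p_s))

    def total_time(t_conv, t_embed, t_pre, t_prog, t_a, p_s, t_post, p_d=0.99):
        k = repetitions(p_s, p_d)
        t_qpt = t_prog + k * t_a       # quantum processing time (QPT)
        return t_conv + t_embed + t_pre + t_qpt + t_post  # Eq. (1)

    # Illustrative values in ms: the embedding dominates the annealing itself.
    print(repetitions(p_s=0.1))                              # -> 44 samples
    print(total_time(1.0, 1000.0, 1.0, 10.0, 0.02, 0.1, 5.0))  # -> ~1017.9 ms

With these (assumed) numbers the QPT is about 11 ms while the embedding contributes 1000 ms, which is precisely the imbalance the hybrid paradigm of Section 3 targets.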

The realisation that these other steps must be included in the analysis is emphasised by the fact that in practical problems the embedding time often dominates the time used by the annealer itself. Previous investigations have largely avoided this by focusing on artificial problems “planted” in the Chimera graph so that no embedding is necessary [4, 13, 17, 27, 28]. Although finding a raw speedup in such situations is clearly a necessary condition for a quantum speedup, it is not sufficient for it to be present in practical problems.

To properly benchmark quantum annealing it is necessary to also compare fairly the quantum annealer to a suitable classical algorithm. Indeed, much of the controversy regarding potential speedups with quantum annealing has been due to the fact that quantum annealers have been compared against simulated annealing or simulated quantum annealing. Although such studies certainly have merit, and such a speedup is a necessary condition for a real quantum speedup, it has repeatedly been pointed out that classical annealing techniques are generally far from optimal, and observed speedups have disappeared when better classical algorithms were used [13]. In [27], this type of quantum speedup has been termed a "limited speedup". Ideally, one should instead compare annealing against the best available classical algorithm for the problem to find a "potential quantum speedup".

3 Hybrid quantum-classical computing

Much of the previous effort towards determining whether or not quantum annealing can, in practice, provide a computational speedup has focused on determining the existence of a raw quantum speedup, which does not take into account the associated classical processing that is inseparable from a quantum annealer. Such a raw speedup is certainly a necessary condition for practical quantum computational gains, and its study is therefore well justified. However, even if there is a raw speedup there are many reasons why this might not translate into a practical quantum speedup.

A practical speedup is possible for a problem if we are able to give a quantum algorithm such that $T < T_{\mathrm{cl}}$, where $T_{\mathrm{cl}}$ is the processing time of the best available classical algorithm for the problem. From the definition of $T$ in (1), it is clear that, even if $t_{\mathrm{QPT}} < T_{\mathrm{cl}}$, the conversion, embedding and pre/post-processing may provide obstructions to obtaining a practical speedup. In practical terms, the pre- and post-processing tend to add relatively minor (or controllable) overheads, but the conversion and embedding costs pose more fundamental problems.

These difficulties in turning a raw quantum speedup into a practical advantage have led to significant interest in "hybrid classical-quantum" approaches (also called "quassical" computations by Allen [5]). The hope is that combining quantum annealing with classical algorithms may allow otherwise inaccessible speedups to be exploited. Several such hybrid approaches have aimed to overcome the resource limitation arising from the fact that practical problems typically require more qubits than are available on existing devices (as a result of the expansion in the number of variables during the conversion stage discussed above) [25, 30].

3.1 Hybrid computing to negate embedding costs

Although hybrid approaches have also looked at improving the robustness and quality of embeddings [32], to the best of our knowledge such approaches have not been used to try to mitigate the cost of performing the embedding itself, which, we recall, is often prohibitive to any speedup. In this paper we propose a general hybrid approach to tackle precisely this problem. In particular, we aim to show how a raw speedup that is negated by the embedding time (i.e., when $t_{\mathrm{QPT}} < T_{\mathrm{cl}}$ but $T > T_{\mathrm{cl}}$) can nonetheless be exploited to give a practical speedup for certain computational problems.

Our approach is motivated by an earlier hybrid quantum-classical algorithmic proposal which predates the rise of quantum annealing and was introduced with the aim of exploiting Grover’s algorithm—the well-known black-box algorithm for quantum unordered database search [16]—in practical applications [23]. The crucial condition for a problem to be amenable to our hybrid approach is that the repeated calls to the quantum annealer should be made with the same logical-graph embedding, or at least permit an efficient method to construct the embedding for one call from the previous ones. If this condition is satisfied, the cost of the embedding, $t_{\mathrm{embed}}$, can be spread out over the several calls, allowing a raw quantum speedup to be exploited.

In order to see how this hybrid approach can help exploit a quantum speedup, we will consider the following general description of a quantum annealing algorithm based on the hybrid approach described above (a more precise analysis would necessarily depend in part on the algorithm in question): some initial classical processing is performed, the embedding of a logical graph into the physical graph is computed, $n$ instances of a QUBO problem are solved on a quantum annealer, with some classical pre- and post-processing occurring between instances, and some final classical computation is optionally performed. More formally, let us call the overall problem the hybrid algorithm solves $P$, and the problem instances that must be solved to do so $P_1, \dots, P_n$. Recall that the time to solve a single instance $P_i$ on an annealer is $T(P_i)$. As we noted earlier, this is, in practical situations, generally dominated by the cost of the embedding and the quantum processing, so it can be approximated, for simplicity, as

$$T(P_i) \approx t_{\mathrm{embed}}(P_i) + t_{\mathrm{QPT}}(P_i), \qquad (2)$$

where we have explicitly included the dependence on the problem instance. The hybrid algorithm will thus take time

$$T_{\mathrm{hyb}} \approx T_0 + t_{\mathrm{embed}}(P_1) + \sum_{i=1}^{n} \left( t_{\mathrm{QPT}}(P_i) + t_i \right),$$

where $T_0$ encapsulates any initial and final classical processing associated with combining the solutions of the $P_i$, and $t_i$ is the time of the classical calculation associated with each iteration, which we have assumed to be small compared to $t_{\mathrm{QPT}}(P_i)$ since this should simply encompass minor pre- and post-processing between annealing runs, and thus be negligible if the problem is amenable to the hybrid approach. Note that we have made use of the assumption that $t_{\mathrm{embed}}(P_i) \approx t_{\mathrm{embed}}(P_1)$ for all $i$, which is a criterion for the suitability of a problem for this hybrid approach.

We note immediately that a standard approach with a quantum annealer, performing the embedding for each instance $P_i$, would take time

$$T_{\mathrm{std}} \approx T_0 + \sum_{i=1}^{n} \left( t_{\mathrm{embed}}(P_i) + t_{\mathrm{QPT}}(P_i) + t_i \right).$$

Thus, since in practice $t_{\mathrm{embed}}$ is comparable to, if not larger than, $t_{\mathrm{QPT}}$, we already have $T_{\mathrm{hyb}} \ll T_{\mathrm{std}}$. Although this conclusion may seem somewhat trivial, it is important in that it shows already how annealing can provide much larger practical gains for such complex algorithmic problems. More importantly, it may allow a raw quantum speedup to be exploited practically.

It is, of course, possible that for certain problems a much more efficient classical algorithm exists for solving $P$ when $n$ is large enough (e.g., there might be an efficient way to map solutions of one instance $P_i$ to another). Such problems are thus not suitable for such a hybrid approach, and so are not of particular interest to us. Nonetheless, a classical algorithm for $P$ may generally be more intelligent than the standard approach of solving each $P_i$ independently, as certain, presumably minor, parts of the computation are likely to be common to solving several $P_i$. Specifically, we can thus write $T_{\mathrm{cl}}(P_i) = t_{\mathrm{com}} + t_{\mathrm{cl}}(P_i)$, where the common part $t_{\mathrm{com}}$ is small compared to $t_{\mathrm{cl}}(P_i)$. The best classical algorithm can then, rather generally, be considered to take time

$$T_{\mathrm{cl}} \approx t_{\mathrm{com}} + \sum_{i=1}^{n} t_{\mathrm{cl}}(P_i),$$

where $t_{\mathrm{cl}}(P_i)$ is the per-instance classical time. Crucially, unless the raw quantum speedup is small, we will also have $t_{\mathrm{QPT}}(P_i) \ll t_{\mathrm{cl}}(P_i)$. It is thus easy to see that, for large enough $n$ (i.e., number of $P_i$ to be solved), we will have $T_{\mathrm{hyb}} < T_{\mathrm{cl}}$, and thus the raw quantum speedup will translate into an absolute speedup for the hybrid algorithm.
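A back-of-the-envelope numerical model makes this crossover explicit; all timing parameters below are assumptions chosen only to illustrate the regime where $t_{\mathrm{QPT}} < t_{\mathrm{cl}}(P_i)$ but $t_{\mathrm{embed}} \gg t_{\mathrm{QPT}}$:

    def t_hybrid(n, t_embed=1000.0, t_qpt=15.0):
        """Hybrid time (ms): one embedding plus n anneals; the per-iteration
        classical work t_i is assumed negligible."""
        return t_embed + n * t_qpt

    def t_classical(n, t_com=5.0, t_cl=40.0):
        """Classical time (ms): a small shared part plus n per-instance solves."""
        return t_com + n * t_cl

    # Find the smallest n for which the hybrid algorithm wins, under a raw
    # speedup (t_qpt < t_cl) that is initially negated by the embedding cost.
    n = 1
    while t_hybrid(n) >= t_classical(n):
        n += 1
    print(f"hybrid faster for n >= {n}")  # with these values: n >= 40

A single instance is far slower on the annealer (1015 ms versus 45 ms), yet the per-instance raw advantage accumulates until the one-off embedding cost is amortised.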

4 Case study: Dynamically weighted maximum-weight independent set

To illustrate the proposed hybrid approach, we discuss in detail a concrete example both from a theoretical and experimental viewpoint.

4.1 (Dynamically weighted) Maximum-weight independent set

The core of the problem is the maximum-weight independent set (MWIS) problem. Recall that an independent set of a graph $G = (V, E)$ is a set $S \subseteq V$ such that for all $u, v \in S$ we have $(u, v) \notin E$.

Maximum-Weight Independent Set (MWIS) Problem:
Input: A graph $G = (V, E)$ with positive vertex weights $w: V \to \mathbb{R}^{+}$. Task: Find an independent set $S \subseteq V$ that maximises $w(S) = \sum_{v \in S} w(v)$ over all independent sets of $G$.

The general MWIS problem is NP-hard since it encompasses, by restriction, the well-studied non-weighted version [15]. One should note, however, that for graphs of bounded tree-width, the MWIS problem is polynomial-time solvable using standard dynamic programming techniques (see [24]).

Although the MWIS problem can be readily transformed into a QUBO problem (as we show below), by itself it is not directly suitable for the hybrid approach we have proposed. However, a simple variation that we introduce here is indeed suitable.

Dynamically Weighted Maximum-Weight Independent Set (DWMWIS) Problem:
Input: A graph $G = (V, E)$ with a set of weight functions $W = \{w_1, \dots, w_m\}$, where $w_i: V \to \mathbb{R}^{+}$ for $1 \le i \le m$. Task: Find independent sets $S_1, \dots, S_m \subseteq V$ such that $S_i$ maximises $w_i(S_i)$ for each $1 \le i \le m$.

This problem is to solve the MWIS problem on $G$ for each of the weight assignments $w_i$. For $m = 1$ we obtain again the MWIS problem, but for larger $m$ the problem is suitable for our hybrid approach.

4.2 Quantum solution

We now provide a QUBO formulation for the MWIS problem. Fix an input graph $G = (V, E)$ with positive vertex weights $w: V \to \mathbb{R}^{+}$. Let $n = |V|$ and let $P > \max_{v \in V} w(v)$ be a "penalty weight". We build a QUBO matrix $Q$ of dimension $n \times n$ such that:

$$Q_{ij} = \begin{cases} -w(v_i) & \text{if } i = j, \\ P & \text{if } (v_i, v_j) \in E \text{ and } i < j, \\ 0 & \text{otherwise.} \end{cases} \qquad (3)$$
Theorem 1 ([3]).

The QUBO formulation given in (3) solves the MWIS Problem.
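A minimal sketch of this construction (our own illustration on a 4-vertex path graph, verified by brute force; the penalty choice $P = 2\max_v w(v)$ is one sufficient assumption):

    import itertools
    import numpy as np

    def mwis_qubo(edges, weights, penalty):
        """QUBO matrix per (3): Q_ii = -w(v_i), Q_ij = penalty on edges."""
        n = len(weights)
        Q = np.zeros((n, n))
        for i, w in enumerate(weights):
            Q[i, i] = -w
        for u, v in edges:
            Q[min(u, v), max(u, v)] = penalty
        return Q

    # Path graph 0-1-2-3 with vertex weights; the MWIS is {0, 2}, of weight 7
    edges = [(0, 1), (1, 2), (2, 3)]
    weights = [3.0, 2.0, 4.0, 1.0]
    Q = mwis_qubo(edges, weights, penalty=2 * max(weights))

    best = min(itertools.product([0, 1], repeat=4),
               key=lambda x: np.array(x) @ Q @ np.array(x))
    print("MWIS:", [v for v, b in enumerate(best) if b])  # -> [0, 2]

Any assignment selecting both endpoints of an edge incurs the penalty $P$, which exceeds the weight it could gain, so minimisers of (3) are exactly maximum-weight independent sets.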

In order to adapt the MWIS solution above to the DWMWIS problem, note that the positions of the non-zero entries of the QUBO formulation (3) depend only on the structure of the graph $G$ and not on the weight function $w$. Thus, in order to solve the DWMWIS problem, the same embedding of the graph $G$ into the D-Wave physical graph can be used for each weight assignment, meaning that a hybrid algorithm based around the MWIS solution above can readily be implemented. More specifically, following the hybrid algorithm described in Section 3.1 for instances $P_1, \dots, P_m$ (where each $P_i$ uses weight function $w_i$), we perform the embedding once (entailing a time $t_{\mathrm{embed}}$) and then solve the MWIS problem for each weight assignment (taking times $t_{\mathrm{QPT}}(P_i)$) using the QUBO solution outlined above.
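Sketched in code, assuming current D-Wave tooling (minorminer and the dwave.embedding module of the dwave-system package, neither of which is prescribed by the paper), the hybrid loop computes the embedding once and re-uses it for every weight assignment:

    import networkx as nx
    import dwave_networkx as dnx
    import minorminer
    from dwave.embedding import embed_qubo

    G = nx.cycle_graph(5)                    # illustrative input graph
    target = dnx.chimera_graph(4)            # stand-in physical graph
    adjacency = {v: set(target[v]) for v in target}

    # The embedding depends only on the structure of G: compute it once.
    embedding = minorminer.find_embedding(G.edges, target.edges)

    # Illustrative weight functions w_1, ..., w_m
    weight_functions = [{v: 1.0 + 0.1 * i * v for v in G} for i in range(3)]
    for w in weight_functions:
        penalty = 2 * max(w.values())
        Q = {(v, v): -w[v] for v in G}                    # diagonal of (3)
        Q.update({(u, v): penalty for u, v in G.edges})   # edge penalties
        # Re-use the same embedding for every weight assignment w_i.
        Q_embedded = embed_qubo(Q, embedding, adjacency, chain_strength=penalty)
        # ... submit Q_embedded to the annealer and unembed the samples ...

Only the embedded weights are recomputed inside the loop, so the per-iteration classical cost stays small, as the analysis in Section 3.1 requires.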

4.3 Classical baseline

The main objective of studying the DWMWIS example in detail is to exhibit experimentally the advantage that the hybrid approach can provide over a standard annealing-based approach. Nonetheless, it is helpful to also compare its performance to that of a classical baseline algorithm, to help highlight this advantage, even if we do not necessarily expect to see an absolute quantum speedup from the hybrid algorithm.

To this end, for a given input graph $G = (V, E)$ with positive vertex weights $w: V \to \mathbb{R}^{+}$, we construct a Binary Integer Programming (BIP) instance with $|V|$ binary variables as follows. To each vertex $v \in V$ we associate the binary variable $x_v$, and for notational simplicity we will denote the collection of variables by a binary vector $x = (x_v)_{v \in V}$. We thus have the BIP problem instance:

$$\max \sum_{v \in V} w(v)\, x_v \quad \text{subject to} \quad x_u + x_v \le 1 \text{ for all } (u, v) \in E, \quad x_v \in \{0, 1\}. \qquad (4)$$

Each constraint in (4) enforces the property that no two adjacent vertices are chosen in the independent set, while the objective function ensures that an independent set of maximum weight is chosen. Given the binary vector $x$ which yields the optimal value of the objective function in (4), we take $S = \{v \in V : x_v = 1\}$ to be the maximum-weight independent set.

Theorem 2 ([3]).

The BIP formulation given in (4) solves the MWIS problem.

The classical baseline used in the analysis is based on an implementation of the BIP formulation in SageMath [29], which has a well-developed and optimised Mixed Integer Programming library. To ensure a fair comparison with the hybrid algorithm, we formulate the classical algorithm for the overall DWMWIS problem such that the set of constraints in the BIP formulation is only computed once.
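A sketch of this formulation inside a SageMath session (the variable names and toy instance are ours; the constraint set is built once and only the objective changes per weight assignment):

    # Run inside a SageMath session.
    edges = [(0, 1), (1, 2), (2, 3)]               # illustrative path graph
    vertices = [0, 1, 2, 3]
    weight_functions = [{0: 3.0, 1: 2.0, 2: 4.0, 3: 1.0},
                        {0: 1.0, 1: 5.0, 2: 1.0, 3: 2.0}]

    p = MixedIntegerLinearProgram(maximization=True)
    x = p.new_variable(binary=True)
    for u, v in edges:                              # constraints built once
        p.add_constraint(x[u] + x[v] <= 1)

    for w in weight_functions:                      # only the objective changes
        p.set_objective(sum(w[v] * x[v] for v in vertices))
        p.solve()
        vals = p.get_values(x)
        print([v for v in vertices if vals[v] > 0.5])  # -> [0, 2], then [1, 3]

Sharing the constraint set across weight assignments mirrors the shared term $t_{\mathrm{com}}$ in the classical cost model of Section 3.1.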

4.4 Experimental definition and procedure

To study the performance of the hybrid DWMWIS algorithm in a practical setting, we made use of a D-Wave 2X quantum annealer [9] to compare the performance of three algorithms on a selection of DWMWIS problem instances: the "standard" quantum algorithm, in which the embedding is re-performed for each weight assignment; the hybrid DWMWIS algorithm; and the classical BIP-based solution described above. We present here a summary of the experimental procedure and results; a more detailed presentation and analysis is available in the extended version of the paper [3].

To this end we analysed the algorithms on a range of different graphs, choosing graphs from a variety of common graph families with between 2 and 126 vertices. Each graph was used to generate a single DWMWIS problem instance with $m$ weight assignments, each randomly generated as floating-point numbers rounded to 2 decimal places within a fixed range. The problem instances were generated as standard adjacency-list representations using SageMath [29]. The same procedure was used for the "standard" quantum algorithm, except that the cost of the embedding was incurred for each weight assignment.

Since we are primarily interested in negating the impact of the embedding process in general applications, we made use of D-Wave's heuristic embedding algorithm [11] to embed each logical graph in the physical graph. Each graph was embedded 10 times to estimate $t_{\mathrm{embed}}$ for each problem instance. Finally, our tests were run with D-Wave's post-processing optimisation enabled. While this adds a small overhead in time, it is well within the spirit of hybrid quantum-classical computing, and allowed us to solve more problems. This post-processing method processes small batches of samples while the next batch is being produced [10]. This ensures that it contributes only a constant overhead in time for each MWIS problem instance, independent of the number of samples (and thus of $k$) [10].

4.5 Results and analysis

For each DWMWIS problem instance (i.e., for each graph $G$) the times $T_{\mathrm{hyb}}$ and $T_{\mathrm{std}}$ were calculated, following the approach described in Section 3.1, as

$$T_{\mathrm{hyb}} = t_{\mathrm{embed}} + \sum_{i=1}^{m} t_{\mathrm{QPT}}(P_i), \qquad T_{\mathrm{std}} = \sum_{i=1}^{m} \left( t_{\mathrm{embed}} + t_{\mathrm{QPT}}(P_i) \right),$$

where $t_{\mathrm{QPT}}(P_i)$ is the TTS value for weight assignment $w_i$. Both $t_{\mathrm{prog}}$ and $t_{\mathrm{post}}$ are of the order of milliseconds. Note that the processing time $T$ defined earlier is, for this approach to the DWMWIS problem, given by $T = t_{\mathrm{conv}} + t_{\mathrm{pre}} + T_{\mathrm{hyb}} + t_{\mathrm{post}}$. The classical time $T_{\mathrm{cl}}$ was taken as the processor time for the classical algorithm described above. The results are summarised in Figures 1(a) and 1(b), which show how the hybrid times $T_{\mathrm{hyb}}$ compare to both $T_{\mathrm{std}}$ and $T_{\mathrm{cl}}$. Error bars are calculated from the observed variation in $t_{\mathrm{embed}}$, the number of optimal solutions found, and the post-processing time $t_{\mathrm{post}}$. Of these, the error in $t_{\mathrm{post}}$ is the dominant factor, and largely arises from the uncontrollability of the post-processing environment, which is performed remotely within the D-Wave processing pipeline. However, this variation did not result in any significant variation in the success probability of the annealing, so it appears the amount of post-processing performed was constant.

Figure 1: Plots of (a) an upper bound for $T_{\mathrm{std}}$ against $T_{\mathrm{hyb}}$; and (b) $T_{\mathrm{cl}}$ against $T_{\mathrm{hyb}}$ for each DWMWIS problem instance. All times are in ms.

First and foremost, from the results shown in Figure 1(a) the extent of the advantage of the hybrid approach over the standard approach is evident. Indeed, this is to be expected given that, for a given DWMWIS problem, the two differ (by definition) by $(m-1)\, t_{\mathrm{embed}}$. Although this might seem a trivial confirmation of this fact, the results help illustrate the extent of the advantage that the hybrid approach can have for such problems, a consequence of the absolute cost of the embedding. This is visible in Figure 2, which shows $t_{\mathrm{embed}}$ as a function of the number of vertices in a graph.

Figure 2: Plot of graph order $|V|$ against the embedding time $t_{\mathrm{embed}}$. Note the logarithmic scale in time.

From Figure 1(b) it is also evident that no absolute quantum speedup was observed using the hybrid algorithm, and indeed there is a vast difference in scale between $T_{\mathrm{hyb}}$ and $T_{\mathrm{cl}}$: the "hardest" problem was solved classically in less than 200 ms, whereas the hybrid algorithm required almost 60 times as much time to solve it correctly. The inability to observe any raw speedup is hardly surprising when one notes that, even if $p_s = 1$ and $k = 1$, the fact that $t_{\mathrm{prog}}$ is of the order of milliseconds means that $t_{\mathrm{QPT}}$ would be of at least that order for every weight assignment. This programming time thus adds an essentially constant overhead, which will have less of an impact as larger problems (for which $t_{\mathrm{QPT}}$ is much larger) become solvable.

Despite the absence of an overall speedup, it is interesting to examine the scaling behaviour of the hybrid approach, for which it will be useful to consider the "classical speedup ratio" $R = T_{\mathrm{cl}}/T_{\mathrm{hyb}}$. In Figure 3 we show the scaling behaviour of $R$ against two reasonable proxies of problem difficulty: the graph order $|V|$, which is proportional to the problem size, and the classical time $T_{\mathrm{cl}}$. While there is much uncertainty in the exact nature of the scaling, these results indicate that the hybrid algorithm has better scaling behaviour than the classical algorithm. This is more evident in Figure 4, where $R$ is plotted for specific graph families. Thus, although no quantum speedup was found, the results leave open the possibility that such a speedup will be attainable in the future on larger devices with better control of the qubits, although many unknowns may plausibly alter the scaling behaviour.

Figure 3: Logarithmic plots of the scaling behaviour of the classical speedup ratio $R$ for the DWMWIS problem instances: (a) graph order $|V|$ against $R$; and (b) classical time $T_{\mathrm{cl}}$ against $R$.

Figure 4: Plots of the classical speedup ratio $R$ against the size parameter $n$ for three families of graphs parameterised by $n$.

Nevertheless, the experiment was a successful proof of concept for the hybrid paradigm we have presented. In particular, the hybrid algorithm we implemented provided large absolute gains over the standard quantum approach and showed good scaling behaviour. As larger and more efficient devices become available and more problems of practical interest are studied, it will become clearer if and when a quantum speedup might be obtainable in practice.

5 Conclusion

In this paper we presented a hybrid quantum-classical paradigm for quantum annealing algorithms aimed at countering the significant cost of the embedding process. This approach is not only a hybrid paradigm but serves equally as a guide to identifying problems that may be amenable to quantum annealing. In particular, we identified that problems requiring the solution of a large number of related subproblems, each of which can be directly solved via annealing, may permit a hybrid approach, obtained by reusing and modifying embeddings across the related subproblems.

Our hybrid approach, along with its successful proof-of-principle implementation, sets the groundwork for addressing more complex problems of practical interest. Correctly choosing suitable problems is a major step in finding practical uses for quantum computers in the near term, and with deft choices, quantum speedups from hybrid approaches might soon be realisable.

Acknowledgements

We thank N. Allen, C. McGeoch, K. Pudenz and S. Reinhardt for fruitful discussions and critical comments. This work has been supported in part by the Quantum Computing Research Initiatives at Lockheed Martin.

References