Solving Quadratic Unconstrained Binary Optimization with divide-and-conquer and quantum algorithms

01/19/2021 · Gian Giacomo Guerreschi, et al. · Intel

Quadratic Unconstrained Binary Optimization (QUBO) is a broad class of optimization problems with many practical applications. Known classical algorithms require exponential time to solve its hard instances exactly, and several approximate methods have been devised to reduce this cost. With the growing maturity of quantum computing, quantum algorithms have been proposed to speed up the solution by using either quantum annealers or universal quantum computers. Here we apply the divide-and-conquer approach to reduce the original problem to a collection of smaller problems whose solutions can be assembled to form a single Polynomial Unconstrained Binary Optimization instance with fewer variables. This technique can be applied to any QUBO instance and leads to either an all-classical or a hybrid quantum-classical approach. When quantum heuristics like the Quantum Approximate Optimization Algorithm (QAOA) are used, our proposal leads to a double advantage: a substantial reduction of quantum resources, specifically an average 42% reduction in the number of qubits for random 3-regular graphs, together with an improvement in the quality of the approximate solutions reached.


I Introduction

Many problems of practical interest are phrased in terms of the optimization of binary-variable models with a quadratic cost function, a class of problems usually referred to as Quadratic Unconstrained Binary Optimization (QUBO). Its applications span tasks as diverse as resource allocation, clustering, set partitioning, facility location, various forms of assignment problems, and sequencing and ordering problems Kochenberger2014 ; Glover2018a . In physics, QUBO corresponds to the Ising model describing spins with 2-body interactions Baxter1982 . In addition, a few NP-hard problems are naturally expressed as QUBO and are central to the theory of computational complexity Feige1995 ; Lucas2014 . Owing to its relevance, a vast number of algorithms have been developed to solve QUBO instances, ranging from exact solvers, to approximate solvers, to heuristics without performance guarantees Dunning2018 . Recently, quantum algorithms have joined the competition.

Quantum computing is a new paradigm to manipulate information based on quantum mechanics. It allows updating an exponentially large amount of information with a single operation, in ways that provide speedup for specific applications. Notable algorithms focus on integer factorization Shor1999 , database search Grover1997 , and simulation of physical Abrams1999 and chemical systems Cao2019 . The application to QUBO problems became central to quantum computing when adiabatic quantum optimization was proposed around the turn of the century Kadowaki1998 ; Farhi2001a and quantum annealers became the first large-scale quantum devices a dozen years later Boixo2014 ; Santra2014 ; Venturelli2015 . With the accelerating development of universal quantum computers, the Quantum Approximate Optimization Algorithm (QAOA) has been proposed to solve binary optimization problems by encoding the approximate solution as a variationally-optimized quantum state Farhi14_qaoa_orig ; Wecker16_training ; Guerreschi17_qaoa_opt ; Zhou2018 ; Shaydulin2019a ; Streif2019a .

QAOA is a leading candidate to achieve quantum advantage by solving a practical problem faster or with higher accuracy than classical alternatives. This expectation is based on the fact that QAOA requires relatively shallow circuits suitable for non-error-corrected devices, and its variational nature mitigates the effect of systematic errors. Increasingly larger and more sophisticated experiments have been performed in the last few years Otterbach2017a ; Mueller2018 ; Pichler2018a ; Pagano2020 ; google2020b . However, the number of qubits composing near-term quantum computers is expected to be one of the most severe limitations for running algorithms, arguably the most severe together with coherence time. Early estimates suggest that several hundred qubits are required for QAOA to compete with classical solvers Guerreschi2019 or to generate outcome distributions beyond classical simulability Dalzell2020 . To the best of our knowledge, no approach has been considered so far to reduce the number of qubits required by QAOA. An interesting approach named Recursive QAOA Bravyi2020 reduces the number of qubits during the optimization of the circuit by identifying the strongest-correlated spins, but with this technique the size of the initial circuit remains the limiting factor in terms of qubit count.

In this work we explore a divide-and-conquer approach with the goal of reducing the number of variables of QUBO instances. Since the number of variables is typically a key factor in determining the cost of solving a binary optimization instance, the reduced instance may also be easier to solve than the original one. While this additional benefit would be expected when comparing instances drawn from the same complexity class, we observe that the modified instances belong to a broader class of optimization problems called Polynomial Unconstrained Binary Optimization (PUBO). As the name suggests, this class includes instances whose cost function has polynomial degree beyond quadratic, and it is not a priori clear whether they are harder to solve than QUBO instances of the same size.

The technique we propose is based on community detection algorithms together with a novel improvement specialized for variable reduction. Our proposal can be used as the initial step of an all-classical approach, or as part of a mixed classical-quantum one. The latter takes advantage of the natural flexibility of QAOA, which can be readily adapted from the solution of QUBO to that of PUBO. We observe an interesting situation when we compare the benefits provided by the divide-and-conquer step to an exact classical solver and to QAOA: our numerical experiments suggest that only the quantum heuristic algorithm takes advantage of the variable elimination. Specifically, we consider the graph partition problem called Max-Cut and two sets of instances corresponding to random 3- and 4-regular graphs. We show that the reduced instances require substantially fewer variables, with a reduction of more than 40% for random 3-regular graphs. However, the more compact formulation does not correspond to a faster solution when the exact solver akmaxsat Kugel2012 is used. On the contrary, it translates into better approximate solutions when QAOA is considered.

The paper is organized as follows. Section II covers the technical background: we introduce QUBO and its generalization to PUBO, then we explicitly rephrase Max-Cut as QUBO to set the stage for later results. We also discuss QAOA and how it can be used to approximately solve PUBO problems. All of these are known results. In Section III, we explain our proposal of using community detection algorithms to reduce the original problem to one with fewer variables. In Section IV, we quantify the effect of the divide-and-conquer approach in terms of variable elimination for the reduced instance. We also compute the cost of an all-classical implementation and compare it with the situation in which QAOA is applied to the reduced instance. This Section demonstrates a double advantage of using divide-and-conquer together with QAOA: fewer qubits are required and a higher approximation ratio is reached. At the end, we draw conclusions and present some open questions.

II Background and definitions

II.1 QUBO and its generalization to PUBO

As the name suggests, QUBO represents optimization problems in which a quadratic function of binary variables has to be minimized over all possible assignments of its variables. We refer to the function to minimize as the cost function or energy, and it can be written as:

E(z) = c_0 + \sum_i c_i z_i + \sum_{(i,j)} c_{ij} z_i z_j    (1)

where z = (z_1, z_2, \dots, z_N) represents the assignment of the N spins and (i,j) indicates a pair of spins, those with indices i < j. Spins have values z_i \in \{+1, -1\}, and the coefficients c_0, c_i, c_{ij} are real. Here and in the following we will use the terms variable and spin interchangeably.

It is useful to introduce a generalization of QUBO to Polynomial Unconstrained Binary Optimization (PUBO). This larger class relaxes the constraint of having terms involving at most two spins. Any PUBO cost function can be written as:

E(z) = \sum_{S \subseteq \{1,\dots,N\}} c_S \prod_{i \in S} z_i    (2)

where S indicates a group of spins (those with indices in S) and the coefficients c_S are real. While the general formulation has 2^N terms, PUBOs of practical interest usually have terms involving only k spins for k up to a small constant, and therefore only a polynomial number of coefficients are non-zero.
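As a concrete illustration of the cost functions in Eqs. (1) and (2), the following Python sketch evaluates a PUBO energy stored as a map from index tuples to coefficients, and minimizes it by brute force. The representation and function names are ours, chosen for illustration, not part of any reference implementation:

```python
from itertools import product

def pubo_energy(coeffs, z):
    """Evaluate E(z) = sum_S c_S * prod_{i in S} z_i.

    `coeffs` maps tuples of spin indices (the sets S) to real coefficients;
    the empty tuple () holds the constant term.  `z` gives spin values in
    {+1, -1}, indexed by spin."""
    total = 0.0
    for spins, c in coeffs.items():
        term = c
        for i in spins:
            term *= z[i]
        total += term
    return total

def brute_force_minimum(coeffs, n):
    """Exhaustive minimization over all 2^n assignments (exponential cost)."""
    return min(pubo_energy(coeffs, z) for z in product((1, -1), repeat=n))
```

A QUBO is recovered as the special case in which every key of `coeffs` has length at most two.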

II.2 Max-Cut: a graph partition problem expressed as QUBO

Given a graph, consider the problem of assigning one of two colors to each of the vertices. This creates a partition of the graph into two sets of vertices. Every edge that connects vertices of different colors is considered “cut” by the partition. The Max-Cut problem asks for the maximum number of edges that can be cut by any of the possible partitions. Despite its apparent simplicity, Max-Cut is an NP-complete problem and no known algorithm, either classical or quantum, can solve its worst case in polynomial time Karp1972 ; Dunning2018 .

Max-Cut instances are naturally described in terms of QUBO. Assign to vertex i the spin z_i and consider the partition defined by the set of spins up, those with z_i = +1, and spins down, with z_i = -1. The quantity (1 - z_i z_j)/2 is null when z_i = z_j and equal to 1 when z_i \neq z_j, effectively determining if the edge between vertex i and vertex j is cut. The cost function can be expressed as “minus” the number of cut edges:

E(z) = - \sum_{(i,j)} \frac{1 - z_i z_j}{2} = \frac{1}{2} \sum_{(i,j)} z_i z_j - \frac{m}{2}    (3)

with the summation being on the edges of the graph to partition and m being the total number of edges. It is clear by comparison with Eq. (1) that E(z) is of QUBO form.

By associating a weight to every edge, any QUBO without linear terms can be formulated as a weighted Max-Cut in which the goal is maximizing the total weight of cut edges instead of their number. For ease of description and visualization, we will present our divide-and-conquer approach in the context of Max-Cut by using terms and concepts from graph theory. The extension to QUBO with linear terms is straightforward and does not present any computational subtlety.
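A quick sanity check of Eq. (3): the energy of a partition equals minus the number of cut edges. The snippet below (illustrative only; names are ours) verifies this on a 4-cycle, where the alternating coloring cuts all four edges:

```python
def maxcut_energy(edges, z):
    """E(z) = -sum_{(i,j) in edges} (1 - z_i z_j) / 2, i.e. minus the cut size."""
    return -sum((1 - z[i] * z[j]) / 2 for i, j in edges)

# 4-cycle: with alternating spins every edge connects opposite colors.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
z = [1, -1, 1, -1]
```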

II.3 Quantum Approximate Optimization Algorithm

The Quantum Approximate Optimization Algorithm (QAOA) is a hybrid quantum-classical algorithm designed to find approximate solutions of combinatorial optimization problems by variationally improving quantum circuits Farhi14_qaoa_orig ; Wecker16_training ; Guerreschi17_qaoa_opt ; Zhou2018 ; Shaydulin2019a ; Streif2019a . QAOA is regarded as a strong candidate for near-term applications since it uses relatively shallow quantum circuits, its variational nature provides robustness to systematic errors, and the classical alternatives are costly even for the solution of relatively small instances of combinatorial problems. QAOA can be applied to any binary optimization problem and thus, in the language of this work, to any PUBO.

Central to QAOA are two quantum Hamiltonians, or quantum energy functions. The first is a direct translation of the classical function to minimize, E(z), obtained by substituting each spin z_i with the quantum operator \sigma^z_i (i.e. the Pauli Z matrix acting on qubit i). We denote this cost Hamiltonian by H_C. The second is a driver Hamiltonian H_D, required to be non-commuting with H_C and typically chosen to correspond to the homogeneous field in the x direction. Formally:

H_C = E(\sigma^z_1, \sigma^z_2, \dots, \sigma^z_N)    (4)
H_D = \sum_{i=1}^{N} \sigma^x_i    (5)

where \sigma^z_i and \sigma^x_i are the Pauli matrices Z and X of qubit i. The form of H_C is completely analogous to Eq. (2).

The above Hamiltonians are used to characterize the quantum circuit of QAOA. Specifically, the circuit is formed by p alternating applications of e^{-i\gamma_k H_C} and e^{-i\beta_k H_D}. Parameters \gamma_k and \beta_k may differ for each application k = 1, \dots, p. The QAOA quantum circuit can be seen as a preparation of the parametric state |\gamma, \beta\rangle:

|\gamma, \beta\rangle = e^{-i\beta_p H_D} e^{-i\gamma_p H_C} \cdots e^{-i\beta_1 H_D} e^{-i\gamma_1 H_C} |+\rangle^{\otimes N}    (6)

where \gamma = (\gamma_1, \dots, \gamma_p), \beta = (\beta_1, \dots, \beta_p), and |+\rangle^{\otimes N} is the uniform superposition over all computational basis states. When measured in the computational basis \{|z\rangle\}, the state returns an assignment z of the original N spins of the PUBO instance. The assignment is not unique, but determined by the probability distribution p(z) = |\langle z | \gamma, \beta \rangle|^2. By varying the parameters (\gamma, \beta) using a classical optimizer, the distribution can be changed to increase the probability of measuring assignments corresponding to low values of E(z). The figure of merit of the parameters’ optimization is usually chosen to be the expectation value of the energy over the distribution p(z), namely \langle \gamma, \beta | H_C | \gamma, \beta \rangle. Other choices are possible Barkoutsos2020 ; Larkin2020a . When the exact solution is known, results are typically reported in terms of the approximation ratio:

r = \frac{\langle \gamma, \beta | H_C | \gamma, \beta \rangle}{E_{min}}    (7)

where E_{min} is the exact ground-state energy.
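To make Eqs. (6) and (7) concrete, here is a minimal statevector simulation of QAOA for Max-Cut written with NumPy. It is an illustrative sketch under our own conventions, not the simulator used in this work, and it omits the classical parameter optimization discussed later:

```python
import numpy as np

def maxcut_energies(n, edges):
    # Diagonal of the cost Hamiltonian H_C in the computational basis.
    E = np.zeros(2 ** n)
    for idx in range(2 ** n):
        z = [1 - 2 * ((idx >> q) & 1) for q in range(n)]
        E[idx] = -sum((1 - z[i] * z[j]) / 2 for i, j in edges)
    return E

def qaoa_state(n, energies, gammas, betas):
    # Start from |+>^n, then alternate e^{-i gamma H_C} and e^{-i beta H_D}.
    psi = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)
    for g, b in zip(gammas, betas):
        psi = psi * np.exp(-1j * g * energies)   # diagonal cost layer
        psi = psi.reshape([2] * n)
        for q in range(n):                       # e^{-i beta X} on every qubit
            psi = np.moveaxis(psi, q, 0)
            a, c = psi[0].copy(), psi[1].copy()
            psi[0] = np.cos(b) * a - 1j * np.sin(b) * c
            psi[1] = np.cos(b) * c - 1j * np.sin(b) * a
            psi = np.moveaxis(psi, 0, q)
        psi = psi.reshape(-1)
    return psi

def approximation_ratio(energies, psi):
    # <H_C> over the measurement distribution, divided by the ground energy.
    probs = np.abs(psi) ** 2
    return float(probs @ energies) / float(energies.min())
```

For p = 0 (no layers) the state is the uniform superposition, so the ratio is simply the mean energy over the minimum; deeper circuits with optimized parameters push it towards 1.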

One of the major limitations of quantum hardware at this stage of development is the number of qubits composing the system. In the next Section, we introduce techniques to reduce the number of qubits needed by QAOA well below the number of original variables of Max-Cut or QUBO.

III Divide-and-Conquer applied to QUBO

III.1 Community detection

Given a QUBO instance, consider its quadratic terms only and neglect for the moment the constant and linear terms. The interaction pattern among variables can be visualized as a graph in which each vertex corresponds to a spin and each edge to a quadratic term. The hardness of solving the QUBO instance is related to this interaction graph, and typical benchmarks require solving non-planar graphs with random edges. A powerful method to analyze the structure of a graph is to divide its vertices into communities. A community is a subset of vertices that are strongly connected among themselves while exhibiting a relatively small number of across-community edges. The communities are disjoint and their union includes all vertices of the original graph. This suggests a way to divide the original problem into a set of subproblems by considering the original QUBO instance restricted to each community. Finally, one needs to account for the original inter-community edges while patching together the partial solutions.

There is not a single way to define the most desirable division into communities, but standard approaches tend to group vertices in such a way that the number of inter-community edges is minimized, as quantified by the modularity Fortunato2016 . In practice, all metrics include terms that oppose the tendency of collecting all vertices into a single community, and most methods return a small number of communities. In our case, we want to maximize the benefit of using the quantum heuristic QAOA to solve the reduced PUBO and, specifically, we want to minimize the number of qubits needed to encode the original problem. As we will explain in the next paragraphs, this is obtained not when the number of inter-community edges is minimized, but rather when the number of vertices with inter-community edges is minimized. While the two quantities are related, we will show how an ad-hoc modification of standard community detection algorithms leads to a further, substantial qubit reduction.

Figure 1: Comparison between two different groupings of a 3-regular graph with 20 vertices into three communities. On the left, the original graph. In the center, the grouping provided by a standard community detection algorithm used as baseline; specifically, the algorithm community_multilevel from iGraph igraph , which has the goal of maximizing the graph modularity (strongly related to minimizing the number of across-community edges). On the right, the grouping after our ad-hoc algorithm aiming at reducing the number of vertices with across-community edges, which we call boundary vertices. In fact, it is the latter number that determines how many qubits are needed when encoding the Max-Cut problem of the original graph into a many-body Hamiltonian. The number of qubits for this graph instance is reduced from 20 to 12 to 11.

The community detection algorithm that we adopt as baseline is community_multilevel as implemented in the iGraph library igraph . It is a bottom-up algorithm that starts from a fine-grained view of the graph and moves towards a coarse-grained view. Initially every vertex is its own community, then vertices are moved between communities to maximize the overall modularity score. When no single vertex movement can increase the modularity, every community is shrunk to a single vertex and the process restarts. The algorithm stops when neither shrinking nor vertex movement further improves the modularity. The final result assigns a membership value m_i to every vertex i, indicating the community it belongs to. We denote the set of communities by \mathcal{C}.

Our ad-hoc improvement starts with the communities identified by community_multilevel. It uses a different score based on the concept of “boundary” vertices. Denoting the set of vertices by V (a reminder that they play the role of spins in the QUBO formulation of Max-Cut), divide it into subsets according to the community membership. We define two subsets per community, B_c and C_c, with c being the community index:

B_c = \{ i \in V : m_i = c \text{ and } \exists j \text{ with } (i,j) \text{ an edge and } m_j \neq c \}    (8)
C_c = \{ i \in V : m_i = c \} \setminus B_c    (9)

These sets are pairwise disjoint and their union is V. Intuitively, B_c is the set of boundary vertices of community c, i.e. those vertices that have at least one edge connecting them to vertices of different communities. C_c is the complementary set restricted to community c, and represents the “core” vertices of the community. One can think of the boundary vertices from all communities as forming the set B = \cup_c B_c.
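In code, the sets of Eqs. (8) and (9) can be computed in a single pass over the edges. The sketch below is our own illustrative helper: it takes an edge list and a membership vector (such as the one returned by iGraph's community_multilevel) and returns the boundary and core of each community:

```python
def boundary_and_core(edges, membership):
    """Split each community c into boundary B_c and core C_c.

    A vertex is in the boundary of its community if at least one of its
    edges crosses to a different community."""
    communities = set(membership)
    boundary = {c: set() for c in communities}
    core = {c: set() for c in communities}
    crossing = set()
    for i, j in edges:
        if membership[i] != membership[j]:
            crossing.add(i)
            crossing.add(j)
    for v, c in enumerate(membership):
        (boundary if v in crossing else core)[c].add(v)
    return boundary, core
```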

Our ad-hoc improvement moves vertices between communities with the goal of minimizing a score S. No multi-level strategy is used, so the algorithm stops when updating any single membership does not decrease S. The score we use is S = max(|B|, max_c |B_c \cup C_c|), the maximum between the size of the overall boundary and the size of every community. As we will see in the next Section, |B| corresponds to the number of qubits required to solve the original QUBO instance following the divide-and-conquer approach. The second term counteracts the tendency to form very large communities and avoids the risk that the computational bottleneck becomes solving the subproblem for the largest community. We further elaborate on the reasons for this choice in Section VI.1. To help visualization, Fig. 1 shows a 3-regular graph with 20 vertices and two ways of dividing it into three communities.
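The ad-hoc post-process can be sketched as a greedy local search over single-vertex moves. The following is a minimal illustration under our own assumptions about sweep order and tie-breaking; the actual implementation may differ:

```python
def score(edges, membership):
    """S = max(|B|, max_c |community c|): overall boundary size vs
    largest community size."""
    boundary = set()
    for i, j in edges:
        if membership[i] != membership[j]:
            boundary.update((i, j))
    sizes = {}
    for c in membership:
        sizes[c] = sizes.get(c, 0) + 1
    return max(len(boundary), max(sizes.values()))

def refine(edges, membership):
    """Greedy single-vertex moves; stop when no move decreases the score."""
    membership = list(membership)
    communities = sorted(set(membership))
    improved = True
    while improved:
        improved = False
        for v in range(len(membership)):
            best_c, best_s = membership[v], score(edges, membership)
            for c in communities:
                membership[v] = c
                s = score(edges, membership)
                if s < best_s:
                    best_c, best_s = c, s
                    improved = True
            membership[v] = best_c
    return membership
```

Every accepted move strictly decreases S, so the loop terminates; the result is a local minimum with respect to single-vertex moves.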

III.2 Divide-and-conquer

Using insight from community detection, we can write the energy function from Eq. (1) as:

E(z) = \sum_{c \in \mathcal{C}} \Big( \sum_i \delta_{m_i, c} \, c_i z_i + \sum_{(i,j)} \delta_{m_i, c} \delta_{m_j, c} \, c_{ij} z_i z_j \Big) + \Big( c_0 + \sum_{(i,j)} (1 - \delta_{m_i, m_j}) \, c_{ij} z_i z_j \Big)    (10)

where \mathcal{C} represents the set of communities, m_i the membership of vertex i, and \delta the Kronecker symbol. The first term is the sum of intra-community interactions, while the second term represents the contribution from across-community interactions. By convention, we include all linear terms as part of the intra-community term and the constant term as part of the across-community term.

Recall that B_c corresponds to the boundary spins of community c, i.e. those that have connections with vertices in other communities, while C_c are the remaining non-boundary spins of the community, also called its core. The energy can therefore be written as:

E(z) = \sum_{c \in \mathcal{C}} E_c(z_{B_c}, z_{C_c}) + E_{across}(z_B)    (11)

where both E_c and E_{across} have at most quadratic terms. We use the notation z_{B_c} and z_{C_c} to represent the assignment restricted to spins in B_c and C_c respectively.

The ultimate goal is finding or approximating the ground state energy of E(z). To this end we notice that the lowest part of the spectrum is reproduced by an energy function on boundary spins alone. This can be achieved by eliminating the explicit dependency on the core variables of each community c. Consider substituting the contribution E_c(z_{B_c}, z_{C_c}) with:

\tilde{E}_c(z_{B_c}) = \min_{z_{C_c}} E_c(z_{B_c}, z_{C_c})    (12)

The original QUBO energy is then written as a function of the boundary variables alone:

\tilde{E}(z_B) = \sum_{c \in \mathcal{C}} \tilde{E}_c(z_{B_c}) + E_{across}(z_B)    (13)

Our proposal is to solve for the ground state energy of \tilde{E} using classical solvers or quantum algorithms. A few considerations:

  • the ground state energy of \tilde{E} exactly corresponds to that of E;

  • the optimization procedure to compute \tilde{E}_c typically takes a negligible effort w.r.t. the solution of the full problem, since it requires solving similar, much smaller problems with |C_c| spins instead of N and the cost is exponential in the number of spins. However, one has to solve one of these smaller problems for each of the 2^{|B_c|} assignments of the spins in B_c;

  • \tilde{E} is an energy function with polynomial degree at most max_c |B_c|;

  • when considering Max-Cut, due to the spin-flip symmetry z \to -z of the original energy E, every term of the energy function \tilde{E} involves only an even number of variables;

  • we provide a constructive way to build \tilde{E}_c in Section VI.1, following the approach presented in Sawaya2020 with a more efficient implementation.
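The construction of \tilde{E}_c by exhaustive enumeration, Eq. (12), can be illustrated by tabulating the quenched minimum for every boundary assignment. The brute-force sketch below is our own illustration, not the efficient construction of Section VI.1; it treats the community energy as a black-box function of a spin dictionary:

```python
from itertools import product

def quenched_minimum(energy, boundary, core, zb):
    """min over core assignments of the community energy, with the boundary
    spins quenched to the assignment `zb` (Eq. 12)."""
    best = None
    for zc in product((1, -1), repeat=len(core)):
        z = dict(zip(boundary, zb))
        z.update(zip(core, zc))
        e = energy(z)
        if best is None or e < best:
            best = e
    return best

def reduced_tables(energy, boundary, core):
    """One table per community: boundary assignment -> quenched energy.
    Cost: 2^{|B_c|} boundary assignments times 2^{|C_c|} core assignments."""
    return {zb: quenched_minimum(energy, boundary, core, zb)
            for zb in product((1, -1), repeat=len(boundary))}
```

Summing one such table per community with the across-community term E_{across} yields \tilde{E}(z_B) of Eq. (13), defined on the boundary spins alone.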

IV Results

IV.1 Community detection and variable elimination

Figure 2: (Left) Number of communities as a function of the number of vertices of the original graph. (Right) Average number of vertices in a community. In both panels, each datapoint corresponds to the average of 100 instances and vertical bars represent one standard deviation. Three classes of graphs are shown: in blue random 3-regular graphs, in red random 4-regular graphs, and in yellow Erdős–Rényi graphs with fixed edge probability p.

We begin by quantifying the outcome of the community detection step in terms of the number and size of the subproblems it creates. We consider interaction graphs belonging to two important classes of random graphs: d-regular graphs, in which each vertex has exactly d edges, and Erdős–Rényi graphs, in which an edge between any pair of vertices is present with probability p. For this study we consider regular graphs with d = 3, 4 and Erdős–Rényi graphs with fixed edge probability p. In particular, Max-Cut on random 3-regular graphs has been widely studied both classically Goemans1995 ; Halperin2004 and as a benchmark for QAOA Farhi14_qaoa_orig ; Zhou2018 ; Guerreschi2019 .

Fig. 2 shows the slow increase in the number of communities identified by the community detection algorithm community_multilevel from iGraph igraph . The average size of the communities is simply the number of vertices of the original graph divided by the number of communities. We observe a markedly different behavior between regular graphs and Erdős–Rényi graphs, which we associate with the number of edges: while d-regular graphs with n vertices have exactly dn/2 edges, Erdős–Rényi graphs have an expected number of edges equal to p n(n-1)/2. Erdős–Rényi graphs are therefore much denser than d-regular ones. For each graph class, we report a single dataset corresponding to the result of a standard community detection algorithm improved by our ad-hoc post-process. The post-process does not change the number of communities, apart from small-size effects for graphs with a few tens of vertices. However, it leads to measurable changes in the number of boundary vertices, as we analyze next.

Figure 3: (Left) Number of qubits needed to run QAOA for Max-Cut instances following the standard QAOA approach and following our proposal. Two classes of graphs are shown: in blue random 3-regular graphs, and in red random 4-regular graphs. The baseline is obtained by performing community detection with a modularity-based algorithm, community_multilevel from iGraph igraph (dashed lines). The improved results are obtained using our specialized post-process (full lines). (Right) Qubit reduction quantified as a percentage of the original number of qubits. It demonstrates a qubit reduction of more than 40% with respect to the standard approach for random 3-regular graphs. In both panels, each datapoint corresponds to the average of 100 instances and vertical bars represent one standard deviation. Results for Erdős–Rényi graphs show a limited impact of the divide-and-conquer approach, with no qubit reduction when the original graph has 40 or more vertices.

We compute the total number of boundary variables, i.e. |B| = \sum_c |B_c|, and relate it to the number of qubits required to apply QAOA to the original problem following the divide-and-conquer approach we are proposing. The number of available qubits is expected to be one of the major limitations of near-term devices. Reducing the qubit requirements of an algorithm would provide significant benefits both in terms of when the algorithm can be realized in practice (devices with fewer qubits are expected to be developed before devices with more qubits) and of the effective decoherence rate (smaller for a register with fewer qubits of given quality). Remarkably, in Section IV.3 we also show that QAOA returns solutions with a higher approximation ratio when adopting the divide-and-conquer approach.

The reduction in the number of required qubits is quantified in Fig. 3, in which the original approach of one qubit per graph vertex (equivalently, per binary variable) is compared with the divide-and-conquer approach based on standard community detection algorithms and on our ad-hoc improvement. For random 3-regular graphs, the required number of qubits is reduced by more than 40% over a large range of graph sizes once the ad-hoc improvement is included; for random 4-regular graphs the reduction is smaller but still substantial.

IV.2 Fully-classical exact solver

The divide-and-conquer approach reduces the size of the QUBO instance to be solved at the cost of solving a large number of partially quenched instances. These subinstances correspond to the restriction of the original QUBO to single communities of vertices whose boundary is quenched to specific assignments. An important question is whether this approach provides any advantage in terms of a faster solution of the original instance. To address this point, we identify the total cost of the divide-and-conquer strategy as the sum of four terms:

  1. Divide the original graph into communities using standard algorithms for community detection. Optionally, update the division into communities by minimizing the number of vertices with inter-community connections instead of the number of edges between communities.

  2. Solve the partially quenched QUBO subinstances for every community and every assignment of the boundary variables.

  3. Combine the partial solutions to create the reduced PUBO instance whose solution exactly corresponds to the solution of the original instance.

  4. Solve the reduced PUBO instance.

In this work, we adopt akmaxsat Kugel2012 as the exact solver to be used in steps (2) and (4). akmaxsat is a state-of-the-art solver of the Max-SAT problem to which both QUBO and PUBO can be reduced. It has been previously used to benchmark both quantum annealers Santra2014 and QAOA algorithms Guerreschi2019 ; Larkin2020a .

Figure 4: Comparison between the time taken to solve the original QUBO problem directly with akmaxsat (red) and the time to perform the steps of the divide-and-conquer approach. In blue, the time needed to detect the communities using community_multilevel from iGraph and our ad-hoc improvement. In orange, the time taken to solve all partially-quenched subinstances (exhaustively, one per boundary assignment of each community) together with the time needed to construct the reduced PUBO instance. In green, the time taken by akmaxsat to solve the reduced PUBO. According to the list of the 4 terms characterizing the divide-and-conquer approach discussed in the main text, the orange line corresponds to the sum of terms (2) and (3), while the green line corresponds to term (4). The x-axis reports the number of vertices of the graph to partition, here corresponding to the number of spins of the original QUBO instance. The y-axis reports the base-10 logarithm of the computational time in seconds. The datapoints are averages over 20 instances of Max-Cut for random 3-regular graphs, and the vertical bars correspond to one standard deviation.

A few considerations on the steps listed above. Related to (1), the cost of running the community detection algorithm scales favorably when compared to the rest of the protocol, even including the ad-hoc improvement. We confirm this with quantitative estimates below, but focus on the other contributions. Concerning (2), the procedure detailed in Section VI.1 requires the solution of an exponential number of subinstances: for each community c, there are 2^{|B_c|} ways of constraining the boundary variables, and for each choice we have to solve a QUBO instance on the |C_c| core variables. This may be unnecessary when we are interested in finding good approximations of the solution to the original QUBO and not its global solution. Other, computationally less demanding, approaches are possible, as we will discuss below. As the starting point, here we solve each subinstance using akmaxsat, an exact solver for SAT optimization.

About (3), the creation of the reduced PUBO instance also requires exponential time; the approach we consider to construct the contribution \tilde{E}_c of community c scales exponentially with the size of the community. In our study, we also need to express the PUBO in terms of the Conjunctive Normal Form (CNF) used by SAT optimizers. The method we follow to reduce PUBO to SAT instances is presented in Section VI.3 and requires a number of clauses exponential in k, each involving k variables, to represent a PUBO k-body term. When an approximate solution of (2) is sufficient, both the derivation cost and the number of clauses in the CNF can be significantly reduced. Concerning (4), this is a natural place where classical solvers can be substituted by hybrid quantum-classical algorithms like QAOA. We explore this scenario in the next Section.
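As an aside on why the CNF representation of a k-body term is expensive: a standard way to encode a parity constraint on k Boolean variables uses 2^(k-1) clauses of k literals each. The sketch below enumerates such clauses and illustrates the exponential clause count; the actual reduction used in Section VI.3 may differ in detail:

```python
from itertools import product

def parity_clauses(k, forbidden_parity):
    """Clauses (tuples of signed 1-based literals) that are violated exactly
    by the assignments of k Boolean variables whose parity equals
    `forbidden_parity`.  Standard XOR-to-CNF encoding: 2^(k-1) clauses of
    k literals each."""
    clauses = []
    for bits in product((0, 1), repeat=k):
        if sum(bits) % 2 == forbidden_parity:
            # The unique clause falsified only by this assignment.
            clauses.append(tuple(-(i + 1) if b else (i + 1)
                                 for i, b in enumerate(bits)))
    return clauses
```

Since a k-body product of spins z_i = 1 - 2 x_i is a parity of the corresponding Boolean variables x_i, penalizing one parity value of a k-body term requires 2^(k-1) weighted clauses in this encoding.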

First of all, we consider whether an exhaustive implementation of step (2) leaves any possibility for the divide-and-conquer approach to be faster than the straightforward solution of the original instance. Fig. 4 reports (the base-10 logarithm of) the time to perform a few computations: in orange, the combined cost of steps (2) and (3); in green, the cost of step (4); in red, the cost of solving the original instance directly. While the red and green lines rely on a state-of-the-art SAT solver, the orange line is based on our own implementation and may thus be regarded as an upper bound. The difference between the red and green lines is somewhat surprising since, for the same number of spins of the original instance (given by the horizontal axis), the reduced instance actually has fewer variables. However, the reduced instance has more clauses per variable, and we observed in our numerical experiments that akmaxsat performs better for small densities of clauses per variable. A different solver may alleviate the problem.

Figure 5: (Left) Computational time spent on the various steps of the divide-and-conquer protocol described in the main text. According to the division into 4 steps, we have (1) in blue, (2)+(3) in orange, and (4) in green. The total is reported in red and, at large sizes, substantially corresponds to the time required to solve the reduced instance. Each datapoint corresponds to the average over 20 instances of Max-Cut for random 3-regular graphs. Error bars are avoided for readability, but can be found in Fig. 4. (Right) Stacked plot showing the time spent in the different steps of the protocol as a fraction of the total.

We then estimate the cost of the complete divide-and-conquer protocol in Fig. 5. For small instances, the total cost is dominated by the solution of all subinstances. For larger instances, the cost is dominated by the solution of the reduced instance. When compared with the cost of solving the original instance directly (see Fig. 4), it is clear that the exhaustive application of the divide-and-conquer approach does not provide a computational advantage.

To increase the competitiveness of the divide-and-conquer approach it is fundamental to address the cost of steps (2)-(4). One approach is solving the unconstrained subproblem once for each community, and fixing the core variables to the corresponding best assignment. This reduces the cost of both (2) and (3): one needs to solve a single QUBO per community (albeit on both boundary and core variables, instead of on the core variables alone) and the reduced instance is obtained by fixing the assignment of the core variables. An additional advantage of this approximation is that the degree of the PUBO is not increased by the reduction and its derivation is straightforward. Therefore the reduced instance is also a QUBO and the divide-and-conquer procedure can be applied again. The main drawback is that the procedure does not guarantee that the global solution of the original instance is faithfully reproduced by the reduced instance. To mitigate this effect, once the assignment of the boundary variables is determined by the solution of the reduced instance, the core variables can be determined by solving the quenched subproblems rather than assuming the assignment found in (2).
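The cheaper variant described above, solving each community once without quenching and freezing its core spins, can be sketched as follows (an illustration under our own assumptions; brute force stands in for an exact solver, and the boundary spins remain free variables of the reduced instance):

```python
from itertools import product

def freeze_cores(energy, boundary, core):
    """Solve the community once, unconstrained over all its spins, and
    return the core assignment of the best solution found."""
    spins = list(boundary) + list(core)
    best_z, best_e = None, None
    for values in product((1, -1), repeat=len(spins)):
        z = dict(zip(spins, values))
        e = energy(z)
        if best_e is None or e < best_e:
            best_z, best_e = z, e
    return {i: best_z[i] for i in core}
```

Substituting the frozen core values into the community energy leaves at most quadratic terms in the boundary spins, so the reduced instance is again a QUBO and the procedure can be iterated, at the price of no longer guaranteeing the exact ground state.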

Concerning the steep rise of the cost of (4), we believe that this is due to the way we translate a PUBO instance into a SAT instance. In fact, the approach described in Section VI.3 requires 2^{k-1} clauses per k-body term. In addition, numerical experiments on different graph types (specifically fully connected and Erdős–Rényi random graphs) suggest that the performance of akmaxsat degrades with increasing density of clauses per variable. It would be important to reassess the computational cost of step (4) using a solver specialized for PUBO instances.

IV.3 Quantum heuristics enhanced by classical divide-and-conquer

One of the main advantages of the divide-and-conquer approach is that at every step of the protocol one has to solve PUBO instances with fewer spins than the original one. This is particularly suitable for quantum heuristics, like QAOA, whose application is currently limited by the number of qubits available in NISQ devices. In addition, smaller instances often translate into shorter quantum circuits, and this may benefit the overall fidelity of the quantum computation. The qubit reduction can be evaluated from the results of Section IV.1, and it reaches an average of 42% for random 3-regular graphs.

Without including the effect of noise in our study, we are interested in whether our proposal improves the quality of the QAOA solution. We consider three cases: solving the original QUBO instance with QAOA; applying divide-and-conquer using an exhaustive approach to solve all partially-quenched subinstances (as described in the previous Section) and then using QAOA for the reduced PUBO instance; or fixing the core spins of each community to their value in the best assignment of the subinstance (see the discussion at the end of Section IV.2) and then solving the reduced PUBO instance with QAOA.

The results are reported in Fig. 6 as the blue, orange and green lines respectively. They suggest that optimizing the reduced PUBO problem via QAOA leads to a better quality of the approximate solution compared to the optimization of the original QUBO. The quality of the solution is quantified in terms of the approximation ratio in Eq. (7). For these experiments, we considered 20 instances of Max-Cut on random 3- and 4-regular graphs. Each instance (either the original or the reduced one) was solved by optimizing the parameters with a global approach known as APOSMM (Asynchronously Parallel Optimization Solver for Finding Multiple Minima) Larson2018, which coordinates multiple runs of the local optimizer COBYLA from the SciPy package 2020SciPy-NMeth, starting from random initial parameter values.
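As a rough sketch of this multistart strategy (a toy stand-in: APOSMM itself coordinates the runs asynchronously and in parallel, and the cost function below is not a QAOA expectation value), one can launch several COBYLA runs from random initial parameters and keep the best result:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(seed=7)

def expected_energy(params):
    """Stand-in for the QAOA expectation value over (gamma, beta);
    a toy non-convex landscape with several local minima."""
    g, b = params
    return np.sin(3 * g) * np.cos(2 * b) + 0.1 * (g**2 + b**2)

# multistart: several local COBYLA runs from random initial parameters,
# mimicking (sequentially) the global strategy coordinated by APOSMM
best = None
for _ in range(20):
    x0 = rng.uniform(-np.pi, np.pi, size=2)
    res = minimize(expected_energy, x0, method="COBYLA",
                   options={"maxiter": 500, "rhobeg": 0.5})
    if best is None or res.fun < best.fun:
        best = res

print(best.x, best.fun)
```

The derivative-free COBYLA optimizer is a natural fit here because QAOA energy estimates from a sampled quantum circuit carry shot noise, which makes finite-difference gradients unreliable.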

Figure 6: Performance of QAOA as quantified by the approximation ratio at the end of a global parameter optimization. Three situations are considered: solving the original QUBO (blue line), the reduced PUBO after exhaustive solution of all partially-quenched subinstances (orange), and the reduced PUBO obtained by fixing the core spins of each community separately (green). Each datapoint corresponds to the average over 20 instances of Max-Cut for random 3-regular (left) and 4-regular (right) graphs. The vertical bars correspond to one standard deviation.

Irrespective of the situation, the global optimization has a total budget of 10,000 function evaluations per instance and each QAOA circuit has fixed depth. As noted above, Fig. 6 suggests that QAOA works more effectively on the reduced PUBO instances than on the original QUBO ones. By contrast, Fig. 4 shows that akmaxsat takes longer to solve the reduced instances than the original ones. It would be interesting to observe whether this behavior is reproduced when other classical solvers are considered.

In addition, we observe the good performance of QAOA after the non-exact reduction for random 3-regular graphs. By fixing the spins of each community core, we give up the exact correspondence between the original and reduced ground-state energy, with the latter being an upper bound of the former. This is most probably the case in our study since the cost function has a spin-flip symmetry (see Eq. (10) and (11) and consider that the original QUBO has no linear terms), and fixing the core spins implies an arbitrary selection of one of the degenerate ground states of the subinstances. Since there are at least two equivalent ground states per community, the probability of fixing all cores according to the global solution is low. Indeed, it is not even guaranteed that the global solution corresponds to a minimum of the original instance when restricted to a single community. We remark that the approximation ratio is computed with respect to the original minimum energy, indicating that QAOA's absolute performance is enhanced by the approximate reduction. Finally, as suggested at the end of the previous Section, even better results may be obtained by optimizing over the core spins while keeping all boundary spins fixed to the best assignment found by QAOA for the reduced instance.

V Discussion

We have demonstrated the application of divide-and-conquer techniques to solve arbitrary Quadratic Unconstrained Binary Optimization (QUBO) instances by solving smaller Polynomial Unconstrained Binary Optimization (PUBO) instances with fewer variables. To quantify the advantage of our proposal, we consider the Max-Cut problem on random 3-regular and 4-regular graphs. While a fully-classical solution based on the exact Max-SAT solver akmaxsat seems not to benefit from the variable elimination, quantum heuristic algorithms do. In particular, we applied the Quantum Approximate Optimization Algorithm (QAOA) to both the original and reduced instances and find that the latter not only require fewer qubits, on average 42% fewer for random 3-regular graphs (and likewise fewer for 4-regular ones), but also return a considerably improved approximation ratio.

We believe that reaching the scientific and technological goal of demonstrating quantum advantage for practical applications requires considering quantum and classical processors not as competitors, but as complementary. This view is intrinsic in the formulation of variational quantum algorithms, which require several iterations of a classical optimizer, but it can be pushed even further. Our proposal uses classical pre-processing to manipulate the problem instance and make it more suitable for the quantum algorithm. While classical solvers may also benefit from this pre-processing, and this would be a very desirable outcome towards the end goal of solving problems of practical interest, we observe a situation in which the reduced instances were not easier to solve with a certain classical method than the original ones. If confirmed, the different impact of the pre-processing step may contribute to reducing the current performance gap between quantum and classical solvers (although one cannot rule out the opposite situation). Finally, we expect that post-processing may unlock further benefits for quantum algorithms.

In the context of this study, several questions are still open. Just to name a few: Is there a classical solver for the PUBO problem which takes less time on the reduced instances than on the original ones? The time cost strongly depends on the implementation details of the algorithms; can a better implementation change the fractional cost reported in Fig. 5 (right)? QAOA may be used to solve the partially-quenched instances too; what would the performance be in that case? Quantum algorithms for binary optimization are a rich topic, and we expect that the stream of exciting results will continue in the next several years.

VI Methods

VI.1 Derivation of the PUBO energy function of a single community

Here we discuss a constructive method to derive the community energy function defined in Eq. (12). The method we chose is based on the framework introduced in reference Sawaya2020, which has the more ambitious goal of converting Hamiltonians (i.e. quantum energy functions) of arbitrary quantum multi-level systems into Hamiltonians of quantum spins (or qubits). We rephrase the method in the context of classical energy functions and spins.

Consider an assignment y for the boundary spins of community c. While keeping the boundary spins fixed, we vary the n_c core spins of the community and determine the minimum energy value. Such value is the energy associated with the assignment y, but can also be seen as the real number d_y:

d_y = \min_{z \in \{-1,+1\}^{n_c}} E_c(y, z)    (14)

We repeat a similar minimization for all 2^{n_b} assignments of the n_b boundary spins and determine all corresponding values d_y. In an explicit way, it is clear that:

E_c(y) = \sum_{y'} d_{y'} \, \delta_{y, y'}    (15)
       = \sum_{y'} d_{y'} \prod_{j=1}^{n_b} \frac{1 + y_j y'_j}{2}    (16)

where the Kronecker delta notation has been generalized to vectors and then expressed as a product of quadratic factors. By carrying out the products and summations, the standard PUBO form of the community energy function, as a polynomial in the boundary spins, is derived.

It is important to comment on the computational cost of this derivation. The above expression has 2^{n_b} terms, each corresponding to a product of n_b 2-term factors (i.e. factors like (1 + y_j y'_j)/2). If expanded by exhaustive enumeration, we have to sum O(4^{n_b}) contributions. In general this cost is much less than that of solving the original problem exhaustively, since typically n_b is much smaller than the total number of spins. In addition, most of the coefficients typically cancel out, returning a PUBO with a relatively low degree. In the next Section we present a connection to the Walsh-Hadamard transform that allows one to reduce the cost from O(4^{n_b}) to O(n_b 2^{n_b}). This corresponds to a logarithmic overhead with respect to the task of finding all the d_y values, since we have 2^{n_b} of them.
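The enumeration just described can be sketched as follows (a toy illustration with our own function names; E stands for any community energy function over boundary spins y and core spins z), making the 2^{n_b} x 2^{n_c} cost explicit:

```python
from itertools import product

def boundary_values(E, n_boundary, n_core):
    """Compute d_y = min_z E(y, z) for every boundary assignment y by
    exhaustive minimization over the core spins.  Total cost is
    2**n_boundary * 2**n_core energy evaluations."""
    d = {}
    for y in product([-1, +1], repeat=n_boundary):
        d[y] = min(E(y, z) for z in product([-1, +1], repeat=n_core))
    return d

# toy community energy: one core spin coupled to two boundary spins
def E(y, z):
    return y[0] * z[0] + y[1] * z[0]

d = boundary_values(E, n_boundary=2, n_core=1)
```

For this toy energy, the core spin can always anti-align with the boundary majority, so aligned boundary assignments reach energy -2 while mixed ones reach 0.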

VI.2 Encoding diagonal operators using Walsh-Hadamard transform

An encoding procedure is required to express a diagonal Hermitian operator as a linear combination of products of single-qubit operators. We follow the approach described in Sawaya2020, but provide a more efficient way to compute the coefficients of the linear combination. In the context of this paper, refer to Eq. (15) of the main text and notice that we are looking for a quantum Hamiltonian whose eigenstates are trivially given by the computational basis states of the qubits, and whose eigenvalues are explicitly provided as the real numbers d_x. In this Section we use the standard notation adopted by the quantum information community and rephrase the desired Hamiltonian as an arbitrary diagonal operator H on n qubits:

H = \sum_{x \in \{0,1\}^n} d_x \, |x\rangle\langle x|    (17)

The projector |x\rangle\langle x| can be rewritten as:

|x\rangle\langle x| = \prod_{j=1}^{n} \frac{I + (-1)^{x_j} Z_j}{2}    (18)
= \frac{1}{2^n} \prod_{j=1}^{n} \sum_{b_j \in \{0,1\}} (-1)^{x_j b_j} \, Z_j^{b_j}    (19)
= \frac{1}{2^n} \sum_{b \in \{0,1\}^n} (-1)^{\sum_j x_j b_j} \, Z^b    (20)
= \frac{1}{2^n} \sum_{b \in \{0,1\}^n} (-1)^{b \cdot x} \, Z^b    (21)

where the first line has been obtained using:

|x_j\rangle\langle x_j| = \frac{I + (-1)^{x_j} Z_j}{2}    (22)

the third line by adopting the shorthand notation Z^b = \prod_j Z_j^{b_j} (and similarly for the multi-index b), and the fourth by explicit computation and the standard definition of the scalar product b \cdot x = \sum_j b_j x_j.

Substituting this inside the expression of H, one obtains the coefficients c_b of the expansion of H as a linear combination of products of Z Pauli matrices:

c_b = \frac{1}{2^n} \sum_{x \in \{0,1\}^n} (-1)^{b \cdot x} \, d_x    (23)

The definition of c_b makes it clear that the coefficients of the linear combination are given by the Walsh-Hadamard transform of the diagonal entries of H, namely the values d_x. One can then take advantage of the fast Walsh-Hadamard transform to compute all coefficients in O(N log N) time, with N being the dimensionality of H, here N = 2^n.
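A compact sketch of this computation (our own implementation of the standard in-place fast Walsh-Hadamard transform; not code from the paper):

```python
import numpy as np

def pauli_z_coefficients(diag):
    """Given the 2^n diagonal entries d_x of a diagonal operator H,
    return the coefficients c_b of H = sum_b c_b Z^b via the fast
    Walsh-Hadamard transform, in O(N log N) time with N = 2^n."""
    d = np.array(diag, dtype=float)
    n = d.size
    assert n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:                      # log2(N) passes over the data
        for i in range(0, n, 2 * h):  # in-place butterfly updates
            for j in range(i, i + h):
                d[j], d[j + h] = d[j] + d[j + h], d[j] - d[j + h]
        h *= 2
    return d / n                      # c_b = (1/2^n) sum_x (-1)^{b.x} d_x
```

One can verify on small cases that \sum_b c_b (-1)^{b \cdot x} reproduces the diagonal entry d_x for every x, and that c_0 (the identity coefficient) is the mean of the diagonal.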

VI.3 Reducing PUBO instances to SAT instances

Polynomial Unconstrained Binary Optimization (PUBO) and Maximum Satisfiability (SAT) instances are important classes of optimization problems involving binary variables. Both classes are NP-hard, and it is possible to express equivalent instances in either of the two formulations. Here we describe the reduction method we used to convert the minimization of a PUBO into the maximization of a SAT instance. This can be seen as a generalization of the reduction of Max-Cut to Max-2-SAT described in reference Gramm2003.

The reduction requires a SAT variable for each PUBO spin, and each k-body PUBO interaction is translated into 2^{k-1} SAT clauses of k variables each.

The transformation from PUBO to SAT can be obtained for each term separately. Algorithm VI.3 describes what SAT clauses need to be added to take into account a k-body term. It also updates the quantity offset, corresponding to a constant value added to the SAT value to faithfully reproduce the PUBO values.

Transform PUBO terms to SAT clauses

1: k ← order of the PUBO term
2: offset ← value after the transformation of the previous term (0 for the first term)
3: for all assignments s ∈ {−1,+1}^k of the term's spins do
4:     sign ← True
5:     clause ← empty
6:     for j = 1, …, k do
7:         if s_j = +1 then
8:             append the literal x_j to clause
9:         else
10:            append the literal ¬x_j to clause
11:            sign ← not sign
12:        end if
13:    end for
14:    if sign = False then
15:        add clause to the SAT instance with weight 2
16:    end if
17: end for
18: offset ← offset + (1 − 2^k)
The algorithm starts with a few initializations: k is the order of the spin term, offset is either 0 or the value after the previous translation. The main loop explores all 2^k assignments of the spins sequentially (line 3). For each, a tentative clause is constructed as the negative of the spin assignment (lines 5:13). The utility variable sign (initialized in line 4) keeps track of whether the spin term returns a positive or negative contribution to the PUBO value (line 11). If the contribution is negative, the clause is added to the SAT instance with weight 2 (lines 14:16). After line 17, the clauses added to the SAT have a cumulative value of 2^k for all variable assignments apart from those for which the spin term is negative; in that case, the cumulative value is 2^k − 2. The offset moves these two SAT values to the desired +1, respectively −1, value.

Apart from a constant offset, the cases of small k can be checked by direct inspection.

A few observations. 1) If the PUBO term has a non-unit coefficient c, all weights for the SAT clauses and the offset are multiplied by |c|. 2) Several SAT solvers search for the maximum value of the SAT instance, while PUBO solvers search for the minimum value of the PUBO instance; inverting the sign of all weights and of the offset accounts for this difference. 3) Certain SAT solvers have constraints on the weight values; for example, akmaxsat requires the weights to be positive integers. These constraints can be accounted for by a suitable rescaling of the weights and offset. If the coefficient of the PUBO term is negative, the condition in line 14 is inverted, while the factor |c| still multiplies the weights and the offset contribution.
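The whole per-term translation, including the coefficient and offset bookkeeping, can be sketched as follows (our own implementation following the construction described above; the clause encoding and function names are our choice, not the paper's):

```python
from itertools import product

def term_to_clauses(coeff, var_idx):
    """Translate one PUBO spin term coeff * prod_j s_j, with spins
    s_j = +/-1 and the boolean convention s_j = 1 - 2*x_j, into weighted
    SAT clauses plus a constant offset such that, for every assignment x,
        (sum of satisfied clause weights) + offset == coeff * prod_j s_j.
    A maximizing Max-SAT solver is used after inverting all signs."""
    k = len(var_idx)
    w = 2 * abs(coeff)
    clauses = []  # clause = (weight, [(variable, boolean value satisfying the literal)])
    for bits in product([0, 1], repeat=k):
        spins = [1 - 2 * b for b in bits]
        val = coeff
        for s in spins:
            val *= s
        if val < 0:  # negative contribution: this assignment must score lower
            # clause falsified exactly by `bits`: each literal asserts
            # the opposite of the assigned value
            clauses.append((w, [(v, 1 - b) for v, b in zip(var_idx, bits)]))
    offset = abs(coeff) * (1 - 2 ** k)
    return clauses, offset
```

Exactly half of the 2^k assignments give a negative spin product, which recovers the 2^{k-1} clauses per k-body term quoted earlier.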

Acknowledgements.
The author thanks Jesmin Jahan Tithi for discussion on community detection algorithms and their implementation.

References