I Introduction
Many problems of practical interest are phrased in terms of the optimization of binary-variable models with a quadratic cost function, a class of problems usually referred to as Quadratic Unconstrained Binary Optimization (QUBO). Its applications span tasks as diverse as resource allocation, clustering, set partitioning, facility location, various forms of assignment problems, and sequencing and ordering problems
Kochenberger2014 ; Glover2018a . In physics, QUBO corresponds to the Ising model describing spins with 2-body interactions Baxter1982 . In addition, a few NP-hard problems are naturally expressed as QUBO and are central to the theory of computational complexity
Feige1995 ; Lucas2014 . Given its relevance, a vast number of algorithms have been developed to solve QUBO instances, ranging from exact solvers, to approximate solvers, to heuristics without performance guarantees Dunning2018 . Recently, quantum algorithms have joined the competition. Quantum computing is a new paradigm to manipulate information based on quantum mechanics. It allows updating an exponentially large amount of information with a single operation and in ways that provide speedup for specific applications. Notable algorithms focus on integer factorization Shor1999 , database search Grover1997 , and simulation of physical Abrams1999 and chemical systems Cao2019 . The application to QUBO problems became central to quantum computing when adiabatic quantum optimization was proposed around the turn of the century Kadowaki1998 ; Farhi2001a and quantum annealers became the first large-scale quantum devices a dozen years later Boixo2014 ; Santra2014 ; Venturelli2015 . With the accelerating development of universal quantum computers, the Quantum Approximate Optimization Algorithm (QAOA) has been proposed to solve binary optimization by encoding the approximate solution as a variationally-optimized quantum state Farhi14_qaoa_orig ; Wecker16_training ; Guerreschi17_qaoa_opt ; Zhou2018 ; Shaydulin2019a ; Streif2019a .
QAOA is a leading candidate to achieve quantum advantage by solving a practical problem faster or with higher accuracy than classical alternatives. This expectation is based on the fact that QAOA requires relatively shallow circuits suitable for non-error-corrected devices, and its variational nature mitigates the effect of systematic errors. Increasingly larger and more sophisticated experiments have been performed in the last few years Otterbach2017a ; Mueller2018 ; Pichler2018a ; Pagano2020 ; google2020b . However, the number of qubits composing near-term quantum computers is expected to be one of the most severe limitations for running algorithms, arguably the most severe together with coherence time. Earlier estimates suggest that several hundred qubits are required for QAOA to compete with classical solvers
Guerreschi2019 or to generate outcome distributions beyond classical simulability Dalzell2020 . To the best of our knowledge, no approach has been considered so far to reduce the number of qubits required by QAOA. An interesting approach named Recursive QAOA Bravyi2020 reduces the number of qubits during the optimization of the circuit by identifying the strongest-correlated spins, but with this technique it is the size of the initial circuit that is the limiting factor in terms of number of qubits. In this work we explore a divide-and-conquer approach with the goal of reducing the number of variables of QUBO instances. Since the number of variables is typically a key factor in determining the cost of solving a binary optimization instance, the reduced instance may also be easier to solve than the original one. While this additional benefit would be expected when comparing instances drawn from the same complexity class, we observe that the modified instances belong to a broader class of optimization problems called Polynomial Unconstrained Binary Optimization (PUBO). As the name suggests, this class includes instances whose cost function has polynomial degree beyond quadratic, and it is not a priori clear whether they are harder to solve than QUBO instances of the same size.
The technique we propose is based on community detection algorithms together with a novel improvement specialized for variable reduction. Our proposal can be used as the initial step of an all-classical approach, or as part of a mixed classical-quantum one. The latter takes advantage of the natural flexibility of QAOA, which can be readily adapted from the solution of QUBO to that of PUBO. We observe an interesting situation if we compare the benefits provided by the divide-and-conquer step to either an exact classical solver or QAOA. Our numerical experiments suggest that only the quantum heuristic algorithm takes advantage of the variable elimination. Specifically, we consider the graph partition problem called MaxCut and two sets of instances corresponding to random 3- and 4-regular graphs. We show that the reduced instances require , respectively, fewer variables. However, the more compact formulation does not correspond to a faster solution when the exact solver akmaxsat Kugel2012 is used. On the contrary, it translates to better approximate solutions when QAOA is considered.
The paper is organized as follows. Section II covers the technical background: we introduce QUBO and its generalization to PUBO, then we explicitly rephrase MaxCut as QUBO to set the stage for later results. We also discuss QAOA and how it can be used to approximately solve PUBO problems. All of these are known results. In Section III, we explain our proposal of using community detection algorithms to reduce the original problem to one with fewer variables. In Section IV, we quantify the effect of the divide-and-conquer approach in terms of variable elimination for the reduced instance. We also compute the cost of an all-classical implementation and compare it with the situation in which QAOA is applied to the reduced instance. This Section demonstrates a double advantage of using divide-and-conquer together with QAOA: fewer qubits are required and a higher approximation ratio is reached. At the end, we draw conclusions and present some open questions.
II Background and definitions
II.1 QUBO and its generalization to PUBO
As the name suggests, QUBO represents optimization problems in which a quadratic function of binary variables has to be minimized over all possible assignments of its variables. We refer to the function to minimize as the cost function or energy, and it can be written as:
(1) $E(\mathbf{s}) = J_0 + \sum_i h_i\, s_i + \sum_{(i,j)} J_{ij}\, s_i s_j$

where $\mathbf{s} = (s_1, \dots, s_N)$ represents the assignment of the $N$ spins and $(i,j)$ indicates a pair of spins, those with indices $i$ and $j$. Spins have values $s_i = \pm 1$, and the coefficients $J_0$, $h_i$ and $J_{ij}$ are real. Here and in the following we will use the terms variable and spin interchangeably.
It is useful to introduce a generalization of QUBO to Polynomial Unconstrained Binary Optimization (PUBO). This larger class relaxes the constraint of having terms involving at most two spins. Any PUBO cost function can be written as:
(2) $E(\mathbf{s}) = \sum_{G} J_G \prod_{i \in G} s_i$

where $G$ indicates a group of spins (those with indices in $G$) and the coefficients $J_G$ are real. While the general formulation has $2^N$ terms, PUBOs of practical interest usually have terms involving only a small number of spins each, and therefore only a polynomial number of coefficients are nonzero.
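To make the two cost functions concrete, the following sketch evaluates QUBO and PUBO energies for a given spin assignment. This is a minimal illustration of Eqs. (1) and (2); the dictionary-based representation and the function name are our own, not part of the paper's tooling:

```python
# A PUBO is stored as a dictionary mapping a tuple of spin indices
# (a "group" G) to its real coefficient J_G; the empty tuple holds the
# constant term and single-index tuples hold the linear terms.

def pubo_energy(terms, spins):
    """Return E(s) = sum over groups G of J_G * prod_{i in G} s_i."""
    energy = 0.0
    for group, coeff in terms.items():
        prod = 1
        for i in group:
            prod *= spins[i]
        energy += coeff * prod
    return energy

# A QUBO is just a PUBO whose groups have at most two indices.
qubo = {(): 1.0, (0,): -0.5, (0, 1): 2.0, (1, 2): -1.0}
spins = [+1, -1, -1]
# E = 1.0 - 0.5*(+1) + 2.0*(+1)*(-1) - 1.0*(-1)*(-1) = -2.5
```

The same function also evaluates higher-degree PUBO terms, e.g. a cubic group `(0, 1, 2)`.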
II.2 MaxCut: a graph partition problem expressed as QUBO
Given a graph, consider the problem of assigning one of two colors to each of the vertices. This creates a partition of the graph into two sets of vertices. Every edge that connects vertices of different colors is considered “cut” by the partition. The MaxCut problem asks for the maximum number of edges that can be cut by any of the possible partitions. Despite its apparent simplicity, MaxCut is an NP-complete problem and no known algorithm, either classical or quantum, can solve its worst case in polynomial time Karp1972 ; Dunning2018 .
MaxCut instances are naturally described in terms of QUBO. Assign to vertex $i$ the spin $s_i$ and consider the partition defined by the set of spins up, those with $s_i = +1$, and spins down, with $s_i = -1$. The quantity $(1 - s_i s_j)/2$ is null when $s_i = s_j$ and equal to 1 when $s_i \neq s_j$, effectively determining whether an edge between vertices $i$ and $j$ is cut. The cost function can be expressed as “minus” the number of cut edges:
(3) $E(\mathbf{s}) = -\sum_{(i,j)} \frac{1 - s_i s_j}{2} = -\frac{m}{2} + \frac{1}{2} \sum_{(i,j)} s_i s_j$

with the summation running over the edges of the graph to partition and $m$ being the total number of edges. It is clear by comparison with Eq. (1) that $E(\mathbf{s})$ is of QUBO form.
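As a sanity check of this correspondence, Eq. (3) can be brute-forced on a tiny graph. The sketch below is our own illustration, not part of the paper's experiments:

```python
from itertools import product

def maxcut_energy(edges, spins):
    """E(s) of Eq. (3): minus the number of edges cut by the assignment."""
    return -sum((1 - spins[i] * spins[j]) // 2 for i, j in edges)

# Triangle graph: being an odd cycle, at most 2 of its 3 edges can be cut.
edges = [(0, 1), (1, 2), (0, 2)]
best = min(maxcut_energy(edges, s) for s in product((-1, +1), repeat=3))
# best == -2, i.e. the maximum cut contains 2 edges
```

Minimizing the energy over all $2^3$ spin assignments recovers the maximum cut, as expected.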
By associating a weight to every edge, any QUBO without linear terms can be formulated as weighted MaxCut in which the goal is maximizing the weight of cut edges instead of their number. For ease of description and visualization, we will present our divideandconquer approach in the context of MaxCut by using terms and concepts from graph theory. The extension to QUBO with linear terms is straightforward and does not present any computational subtlety.
II.3 Quantum Approximate Optimization Algorithm
The Quantum Approximate Optimization Algorithm (QAOA) is a hybrid quantum-classical algorithm designed to find approximate solutions of combinatorial optimization problems by variationally improving quantum circuits Farhi14_qaoa_orig ; Wecker16_training ; Guerreschi17_qaoa_opt ; Zhou2018 ; Shaydulin2019a ; Streif2019a . QAOA is regarded as a strong candidate for near-term applications since it uses relatively shallow quantum circuits, its variational nature provides robustness to systematic errors, and the classical alternatives are costly even for the solution of relatively small instances of combinatorial problems. QAOA can be applied to any binary optimization and thus, in the language of this work, to any PUBO.
Central to QAOA are two quantum Hamiltonians, or quantum energy functions. The first is a direct translation of the classical function to minimize, $E(\mathbf{s})$, obtained by substituting each spin $s_i$ with the quantum operator $\sigma^z_i$ (i.e. the Pauli $Z$ matrix on qubit $i$). We denote this cost Hamiltonian by $H_C$. The second is a driver Hamiltonian $H_D$, required to be non-commuting with $H_C$ and typically chosen to correspond to the homogeneous field in the $x$ direction. Formally:
(4) $H_C = E(\sigma^z_1, \sigma^z_2, \dots, \sigma^z_N)$

(5) $H_D = \sum_{i=1}^{N} \sigma^x_i$

where $\sigma^z_i$ and $\sigma^x_i$ are the Pauli matrices $Z$ and $X$ of qubit $i$. The form of $H_C$ is completely analogous to Eq. (2).
The above Hamiltonians are used to characterize the quantum circuit of QAOA. Specifically, the circuit is formed by alternating applications of $e^{-i \gamma_k H_C}$ and $e^{-i \beta_k H_D}$. Parameters $\gamma_k$ and $\beta_k$ may differ for each application $k = 1, \dots, p$. The QAOA quantum circuit can be seen as a preparation of the parametric state $|\gamma, \beta\rangle$:
(6) $|\gamma, \beta\rangle = e^{-i \beta_p H_D}\, e^{-i \gamma_p H_C} \cdots e^{-i \beta_1 H_D}\, e^{-i \gamma_1 H_C}\, |+\rangle^{\otimes N}$

where $\gamma = (\gamma_1, \dots, \gamma_p)$ and $\beta = (\beta_1, \dots, \beta_p)$. When measured in the computational basis, the state returns an assignment $\mathbf{s}$ of the original spins of the PUBO instance. The assignment is not unique, but determined by the probability distribution $P(\mathbf{s}) = |\langle \mathbf{s} | \gamma, \beta \rangle|^2$. By varying the parameters using a classical optimizer, the distribution can be changed to increase the probability of measuring assignments corresponding to low values of $E$. The figure of merit of the parameters’ optimization is usually chosen to be the expectation value of the energy over the distribution $P(\mathbf{s})$, namely $\langle \gamma, \beta | H_C | \gamma, \beta \rangle$. Other choices are possible Barkoutsos2020 ; Larkin2020a . When the exact solution is known, results are typically reported in terms of the approximation ratio:

(7) $r = \frac{\langle \gamma, \beta | H_C | \gamma, \beta \rangle}{E_{\min}}$
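For small instances, the state preparation of Eq. (6) and the approximation ratio of Eq. (7) can be simulated directly on a statevector. The sketch below (our own illustration, assuming numpy and using arbitrarily chosen, not optimized, parameters) runs depth-1 QAOA for MaxCut on a triangle:

```python
import numpy as np

# Depth-1 QAOA for MaxCut on a triangle graph, simulated on a statevector.
edges = [(0, 1), (1, 2), (0, 2)]
n = 3
# cost[z] = minus the number of edges cut by bitstring z, as in Eq. (3)
cost = np.array([
    -sum((z >> i & 1) != (z >> j & 1) for i, j in edges)
    for z in range(2 ** n)
], dtype=float)

def qaoa_state(gammas, betas):
    """Prepare |gamma, beta> of Eq. (6): alternate phase and mixing layers."""
    psi = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)   # |+>^n
    for g, b in zip(gammas, betas):
        psi = np.exp(-1j * g * cost) * psi                # e^{-i gamma H_C} (diagonal)
        for q in range(n):                                # e^{-i beta X_q} per qubit
            flipped = psi[np.arange(2 ** n) ^ (1 << q)]
            psi = np.cos(b) * psi - 1j * np.sin(b) * flipped
    return psi

psi = qaoa_state([0.7], [0.4])                    # arbitrary parameter values
energy = float(np.sum(np.abs(psi) ** 2 * cost))   # <gamma,beta| H_C |gamma,beta>
ratio = energy / cost.min()                       # approximation ratio, Eq. (7)
```

Since every energy value lies between $E_{\min}$ and 0, the ratio lands in $[0, 1]$; a classical optimizer would then adjust the parameters to push it towards 1.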
One of the major limitations of quantum hardware at this stage of development is the number of qubits composing the system. In the next Section, we introduce techniques to reduce the number of qubits needed by QAOA well below the number of original variables of MaxCut or QUBO.
III Divide-and-conquer applied to QUBO
III.1 Community detection
Given a QUBO instance, consider its quadratic terms only and neglect for the moment the constant and linear terms. The interaction pattern among variables can be visualized as a graph in which each vertex corresponds to a spin and each edge to a quadratic term. The hardness of solving the QUBO instance is related to this interaction graph, and typical benchmarks require solving non-planar graphs with random edges. A powerful method to analyze the structure of a graph is to divide its vertices into communities. A community is a subset of vertices that are strongly connected among themselves while exhibiting a relatively small number of across-community edges. The communities are disjoint and their union includes all vertices of the original graph. This suggests a way to divide the original problem into a set of subproblems by considering the original QUBO instance restricted to each community. Finally, one needs to consider the original inter-community edges while patching together the partial solutions.
There is not a single way to define the most desirable division into communities, but standard approaches tend to group vertices in such a way that the number of inter-community edges is minimized, as quantified by the modularity Fortunato2016 . In practice, all metrics include terms that oppose the tendency of collecting all vertices into a single community, and most methods return a small number of communities. In our case, we want to maximize the benefit of using the quantum heuristic QAOA to solve the reduced PUBO and, specifically, we want to minimize the number of qubits needed to encode the original problem. As we will explain in the next paragraphs, this is obtained not when the number of inter-community edges is minimized, but rather when the number of vertices with inter-community edges is minimized. Despite the two criteria being related, we will show how an ad-hoc modification of standard community detection algorithms leads to further, substantial, qubit reduction.
The community detection algorithm that we adopt as baseline is community_multilevel as implemented in the iGraph library igraph . It is a bottom-up algorithm that starts from a fine-grained view of the graph and moves towards a coarse-grained view. Initially every vertex is its own community, then vertices are moved between communities to maximize the overall modularity score. When no single vertex movement can increase the modularity, every community is shrunk to a single vertex and the process restarts. The algorithm stops when neither shrinking nor vertex movement further improves the modularity. The final result assigns a membership value $m_i$ to every vertex $i$, indicating the community it belongs to. We denote the set of communities with $\mathcal{C}$.
Our ad-hoc improvement starts with the communities identified by community_multilevel. It uses a different score based on the concept of “boundary” vertices. Denoting the set of vertices by $S$ (a reminder that they play the role of spins in the QUBO formulation of MaxCut), divide it into subsets according to the community membership. We define two subsets per community, $B_c$ and $C_c$, with $c$ being the community index:

(8) $B_c = \{\, i \in S : m_i = c \ \text{and}\ \exists\, j \ \text{such that}\ (i,j) \ \text{is an edge with}\ m_j \neq c \,\}$

(9) $C_c = \{\, i \in S : m_i = c \ \text{and}\ i \notin B_c \,\}$

These sets are pairwise disjoint and their union is $S$. Intuitively, $B_c$ is the set of boundary vertices of community $c$, i.e. those vertices that have at least one edge connecting them to vertices of different communities. $C_c$ is the complementary set restricted to community $c$, and represents the “core” vertices of the community. One can think of the boundary vertices from all communities as forming the set $B = \bigcup_c B_c$.
Our ad-hoc improvement moves vertices between communities with the aim of minimizing a score $Q$. No multilevel strategy is used, so the algorithm stops when updating any single membership does not decrease $Q$. The score we use is $Q = \max\left( |B|, \max_c |B_c \cup C_c| \right)$, the maximum between the size of the overall boundary and the size of every community. As we will see in the next Section, $Q$ corresponds to the number of qubits required to solve the original QUBO instance following the divide-and-conquer approach. The second term counteracts the tendency to form very large communities and avoids the risk that the computational bottleneck becomes solving the subproblem for the largest community. We further elaborate on the reasons for this choice in Section VI.1. To help visualization, Fig. 1 shows a 3-regular graph with 20 vertices and two ways of dividing it into three communities.
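The boundary/core bookkeeping of Eqs. (8) and (9), and the score defined above, are straightforward to compute from a membership vector. The following sketch uses our own function names and is our reading of the score definition, not the paper's implementation:

```python
def boundary_and_core(edges, membership):
    """Split each community c into its boundary set B_c and core set C_c."""
    n = len(membership)
    communities = set(membership)
    boundary = {c: set() for c in communities}
    core = {c: set() for c in communities}
    # A vertex is boundary if it touches at least one inter-community edge.
    has_external = [False] * n
    for i, j in edges:
        if membership[i] != membership[j]:
            has_external[i] = has_external[j] = True
    for v in range(n):
        c = membership[v]
        (boundary if has_external[v] else core)[c].add(v)
    return boundary, core

def score(edges, membership):
    """Q = max(total boundary size |B|, size of the largest community)."""
    boundary, core = boundary_and_core(edges, membership)
    total_boundary = sum(len(b) for b in boundary.values())
    largest = max(len(boundary[c]) + len(core[c]) for c in boundary)
    return max(total_boundary, largest)
```

On a path graph 0-1-2-3 split as {0,1} and {2,3}, only the middle edge crosses communities, so vertices 1 and 2 are boundary and the score is 2.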
III.2 Divide-and-conquer
Using insight from community detection, we can write the energy function from Eq. (1) as:
(10) $E(\mathbf{s}) = \sum_{c \in \mathcal{C}} \Big( \sum_i \delta_{m_i, c}\, h_i\, s_i + \sum_{(i,j)} \delta_{m_i, c}\, \delta_{m_j, c}\, J_{ij}\, s_i s_j \Big) + \Big( J_0 + \sum_{(i,j)} \left(1 - \delta_{m_i, m_j}\right) J_{ij}\, s_i s_j \Big)$

where $\mathcal{C}$ represents the set of communities, $m_i$ the membership of vertex $i$, and $\delta$ the Kronecker symbol. The first term is the sum of intra-community interactions, while the second term represents the contribution from across-community interactions. By convention, we include all linear terms as part of the intra-community term and the constant term as part of the across-community term.
Recall that $B_c$ corresponds to the boundary spins of community $c$, i.e. those that have connections with vertices in other communities, while $C_c$ are the remaining non-boundary spins of the community, also called its core. The energy can therefore be written as:

(11) $E(\mathbf{s}) = \sum_{c \in \mathcal{C}} E_c(\mathbf{s}_{B_c}, \mathbf{s}_{C_c}) + E_B(\mathbf{s}_B)$

where both $E_c$ and $E_B$ have at most quadratic terms. We use the notation $\mathbf{s}_{B_c}$, $\mathbf{s}_{C_c}$ and $\mathbf{s}_B$ to represent the assignment restricted to spins in $B_c$, $C_c$ and $B$ respectively.
The ultimate goal is finding or approximating the ground state energy of $E$. To this end, we notice that the lowest part of the spectrum is reproduced by an energy function on boundary spins alone. This can be achieved by eliminating the explicit dependency on the core variables $C_c$ of community $c$. Consider substituting the contribution $E_c(\mathbf{s}_{B_c}, \mathbf{s}_{C_c})$ with:

(12) $E^{\mathrm{red}}_c(\mathbf{s}_{B_c}) = \min_{\mathbf{s}_{C_c}} E_c(\mathbf{s}_{B_c}, \mathbf{s}_{C_c})$
The original QUBO energy is then written as a function of boundary variables alone:

(13) $E^{\mathrm{red}}(\mathbf{s}_B) = \sum_{c \in \mathcal{C}} E^{\mathrm{red}}_c(\mathbf{s}_{B_c}) + E_B(\mathbf{s}_B)$
Our proposal is to solve for the ground state energy of $E^{\mathrm{red}}$ using classical solvers or quantum algorithms. A few considerations:

- the ground state energy of $E^{\mathrm{red}}$ exactly corresponds to that of $E$;
- the minimization procedure to compute $E^{\mathrm{red}}_c$ typically takes a negligible effort w.r.t. the solution of the full problem, since it requires solving similar, much smaller problems with $|C_c|$ spins instead of $N$ and the cost is exponential in the number of spins. However, one has to solve one of the smaller problems for each assignment of the spins in $B_c$;
- $E^{\mathrm{red}}_c$ is an energy function with polynomial degree at most $|B_c|$;
- when considering MaxCut, due to the spin-flip symmetry of the original energy $E$, every term of the energy function $E^{\mathrm{red}}$ involves only an even number of variables;
- we provide a constructive way to build $E^{\mathrm{red}}_c$ in Section VI.1, following the approach presented in Sawaya2020 with a more efficient implementation.
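A brute-force version of the elimination in Eq. (12) can be written directly from the definitions. This is an illustrative sketch with our own interface; the paper's actual construction, detailed in Section VI.1, is more efficient:

```python
from itertools import product

def reduce_community(energy_fn, boundary_vars, core_vars):
    """Tabulate E_red(s_B) = min over core assignments of E_c(s_B, s_C).

    energy_fn takes a dict {spin index: +-1}. Returns a table mapping each
    boundary assignment (a tuple of +-1, ordered as boundary_vars) to the
    energy minimized over all 2^{|C_c|} core assignments.
    """
    table = {}
    for s_b in product((-1, +1), repeat=len(boundary_vars)):
        table[s_b] = min(
            energy_fn({**dict(zip(boundary_vars, s_b)),
                       **dict(zip(core_vars, s_c))})
            for s_c in product((-1, +1), repeat=len(core_vars))
        )
    return table

# Toy community: E_c = s0*s1 + s1*s2, with boundary {0} and core {1, 2}.
table = reduce_community(lambda s: s[0] * s[1] + s[1] * s[2], [0], [1, 2])
```

For this toy energy, both boundary assignments allow the core spins to reach the value $-2$, so the reduced table is constant; in general the table defines a PUBO of degree at most $|B_c|$ on the boundary spins.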
IV Results
IV.1 Community detection and variable elimination
We begin by quantifying the outcome of the community detection step in terms of the number and size of the subproblems it creates. We consider interaction graphs belonging to two important classes of random graphs: $k$-regular graphs, in which each vertex has exactly $k$ edges, and Erdős–Rényi graphs, in which an edge between any pair of vertices is present with probability $p$. For this study we consider regular graphs with $k = 3, 4$ and Erdős–Rényi graphs with fixed edge probability. In particular, MaxCut on random 3-regular graphs has been widely studied both classically Goemans1995 ; Halperin2004 and as a benchmark of QAOA Farhi14_qaoa_orig ; Zhou2018 ; Guerreschi2019 .
Fig. 2 shows the slow increase in the number of communities identified by the community detection algorithm community_multilevel from iGraph igraph . The average size of the communities is simply the number of vertices of the original graph divided by the number of communities. We observe a markedly different behavior between regular graphs and Erdős–Rényi graphs that we associate with the number of edges: while $k$-regular graphs of $n$ vertices have exactly $kn/2$ edges, Erdős–Rényi graphs have an expected number of edges equal to $p \binom{n}{2}$. Erdős–Rényi graphs are therefore much denser than regular ones. For each graph class, we report a single dataset corresponding to the result of a standard community detection algorithm improved by our ad-hoc post-process. The post-process does not change the number of communities, apart from small-size effects for graphs with a few tens of vertices. However, it leads to measurable changes in the number of boundary vertices, as we analyze next.
We compute the total number of boundary variables, i.e. $|B|$, and relate it to the number of qubits required to apply QAOA to the original problem following the divide-and-conquer approach we are proposing. The number of available qubits is expected to be one of the major limitations of near-term devices. Reducing the qubit requirements of an algorithm would provide significant benefits both in terms of when the algorithm can be realized in practice (devices with fewer qubits are expected to be developed before devices with more qubits) and of the effective decoherence rate (smaller for a register with fewer qubits of given quality). Remarkably, in Section IV.3 we also show that QAOA returns solutions with a higher approximation ratio when adopting the divide-and-conquer approach.
The reduction in the number of required qubits is quantified in Fig. 3, in which the original approach of one qubit per graph vertex (equivalently, per binary variable) is compared with the divide-and-conquer approach based on standard community detection algorithms and our ad-hoc improvement. For random 3-regular graphs, the required number of qubits is reduced by and over a large range of graph sizes. For random 4-regular graphs the reduction is and respectively.
IV.2 Fully-classical exact solver
The divide-and-conquer approach reduces the size of the QUBO instance to be solved at the cost of solving a large number of partially quenched instances. These subinstances correspond to the restriction of the original QUBO to single communities of vertices whose boundary is quenched to specific assignments. An important question is whether this approach provides any advantage in terms of a faster solution of the original instance. To address this point, we identify the total cost of the divide-and-conquer strategy as the sum of four terms:

(1) Divide the original graph into communities using standard algorithms for community detection. Optionally, update the division into communities by minimizing the number of vertices with inter-community connections instead of the number of edges between communities.

(2) Solve the partially quenched QUBO subinstances for every community and every assignment of the boundary variables.

(3) Combine the partial solutions to create the reduced PUBO instance whose solution exactly corresponds to the solution of the original instance.

(4) Solve the reduced PUBO instance.
In this work, we adopt akmaxsat Kugel2012 as the exact solver to be used in steps (2) and (4). akmaxsat is a state-of-the-art solver of the MaxSAT problem, to which both QUBO and PUBO can be reduced. It has been previously used to benchmark both quantum annealers Santra2014 and QAOA algorithms Guerreschi2019 ; Larkin2020a .
A few considerations on the steps listed above. Related to (1), the cost of running the community detection algorithm scales favorably when compared to the rest of the protocol, even including the ad-hoc improvement. We confirm this with quantitative estimates below, but focus on the other contributions. Concerning (2), the procedure detailed in Section VI.1 requires the solution of an exponential number of subinstances: for each community $c$, we have $2^{|B_c|}$ ways of constraining the boundary variables, and for each choice we have to solve a QUBO instance on the $|C_c|$ core variables. This may be unnecessary when we are interested in finding good approximations of the solution to the original QUBO and not its global solution. Other, computationally less demanding, approaches are possible, as we will discuss below. As the starting point, here we solve each subinstance using akmaxsat, an exact solver for SAT optimization.
About (3), the creation of the reduced PUBO instance also requires exponential time; the approach we consider to construct the contribution of community $c$ scales exponentially with the community size. In our study, we also need to express the PUBO in terms of the Conjunctive Normal Form (CNF) used by SAT optimizers. The method we follow to reduce PUBO to SAT instances is presented in Section VI.3 and requires multiple clauses to represent each PUBO body term. When an approximate solution of (2) is sufficient, both the derivation cost and the number of clauses in the CNF can be significantly reduced. Concerning (4), this is a natural place where classical solvers can be substituted by hybrid quantum-classical algorithms like QAOA. We explore this scenario in the next Section.
First of all, we consider whether an exhaustive implementation of step (2) leaves any possibility for the divide-and-conquer approach to be faster than the straightforward solution of the original instance. Fig. 4 reports (the base-10 logarithm of) the time to perform a few computations: in orange, the combined cost of steps (2) and (3); in green, the cost of step (4); in red, the cost of solving the original instance directly. While the red and green lines rely on a state-of-the-art SAT solver, the orange line is based on our own implementation and may thus be regarded as an upper bound. The difference between the red and green lines is somewhat surprising since, for the same number of spins of the original instance (given by the horizontal axis), the reduced instance actually has fewer variables. However, the reduced instance has more clauses per variable, and we observed in our numerical experiments that akmaxsat performs better for small densities of clauses per variable. A different solver may alleviate the problem.
We then estimate the cost of the complete divide-and-conquer protocol in Fig. 5. For small instances, the total cost is dominated by the solution of all subinstances. For larger instances, the cost is dominated by the solution of the reduced instance. When compared with the cost of solving the original instance directly (see Fig. 4), it is clear that the exhaustive application of the divide-and-conquer approach does not provide a computational advantage.
To increase the competitiveness of the divide-and-conquer approach, it is fundamental to address the cost of steps (2)–(4). One approach is solving the unconstrained subproblem once for each community, and fixing the core variables to the corresponding best assignment. This reduces the cost of both (2) and (3): one needs to solve a single QUBO per community (albeit on both boundary and core variables, instead of on core variables alone), and the reduced instance is obtained by fixing the assignment of the core variables. An additional advantage of this approximation is that the degree of the PUBO is not increased by the reduction and its derivation is straightforward. Therefore the reduced instance is also a QUBO and the divide-and-conquer procedure can be applied again. The main drawback is that the procedure does not guarantee that the global solution of the original instance is faithfully reproduced by the reduced instance. To mitigate this effect, once the assignment of the boundary variables is determined by the solution of the reduced instance, the core variables can be determined by solving the constrained subproblems rather than assuming the assignment found in (2).
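The substitution step of this cheaper variant, fixing the core spins and collecting the surviving terms, can be sketched as follows. This is our own minimal illustration of the idea, reusing the dictionary-based PUBO representation from Section II.1, not the paper's implementation:

```python
def fix_cores(terms, core_assignment):
    """Substitute fixed core spins into a PUBO given as {index tuple: coeff}.

    core_assignment maps a core spin index to its fixed value (+1 or -1).
    Returns a new PUBO on the remaining (boundary) variables only; the
    polynomial degree can only decrease, so a QUBO stays a QUBO.
    """
    reduced = {}
    for group, coeff in terms.items():
        remaining = tuple(i for i in group if i not in core_assignment)
        for i in group:
            if i in core_assignment:
                coeff *= core_assignment[i]   # absorb the fixed spin value
        reduced[remaining] = reduced.get(remaining, 0.0) + coeff
    return reduced

# Fixing s2 = -1 in E = s0*s1 + s1*s2 leaves a QUBO on spins 0 and 1.
reduced = fix_cores({(0, 1): 1.0, (1, 2): 1.0}, {2: -1})
```

Because the core spins are fixed before minimization, the minimum of the reduced instance upper-bounds the true ground state energy, as discussed above.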
Concerning the steep rise of the cost of (4), we believe that this is due to the way we translate a PUBO instance to a SAT instance. In fact, the approach described in Section VI.3 requires multiple clauses per body term. In addition, numerical experiments on different graph types (specifically fully connected and Erdős–Rényi random graphs) suggest that akmaxsat's performance degrades with increasing density of clauses per variable. It would be important to readdress the computational cost of step (4) using a solver specialized for PUBO instances.
IV.3 Quantum heuristic enhanced by classical divide-and-conquer
One of the main advantages of the divide-and-conquer approach is that at every step of the protocol one has to solve PUBO instances with fewer spins than the original one. This is particularly suitable for quantum heuristics, like QAOA, whose application is currently limited by the number of qubits available on NISQ devices. In addition, smaller instances often translate to shorter quantum circuits, and this may benefit the overall fidelity of the quantum computation. The qubit reduction can be evaluated from the results of Section IV.1, and it reaches for random 3-regular graphs.
Without including the effect of noise in our study, we are interested in whether our proposal improves the quality of the QAOA solution. We consider three cases: solving the original QUBO instance with QAOA; applying divide-and-conquer by using an exhaustive approach to solve all partially quenched subinstances (as described in the previous Section) and then using QAOA for the reduced PUBO instance; or fixing the core spins of each community to their value in the best assignment of the subinstance (see discussion at the end of Section IV.2) and then solving the reduced PUBO instance with QAOA.
The results are reported in Fig. 6 as the blue, orange and green lines respectively. They suggest that optimizing the reduced PUBO problem via QAOA leads to a better quality of the approximate solution compared to the optimization of the original QUBO. The quality of the solution is provided in terms of the approximation ratio in Eq. (7). For these experiments, we considered 20 instances of MaxCut on random 3- and 4-regular graphs. Each instance (either the original or the reduced one) was solved by optimizing the parameters with a global approach known as APOSMM (Asynchronously Parallel Optimization Solver for Finding Multiple Minima) Larson2018 that coordinates multiple runs of the local optimizer COBYLA from the SciPy package 2020SciPyNMeth , starting from random initial parameter values.
Irrespective of the situation, the global optimization has a total budget of 10,000 function evaluations per instance and each QAOA circuit has depth . As noted above, Fig. 6 suggests that QAOA works more effectively for the reduced PUBO instances than for the original QUBO ones. On the contrary, Fig. 4 shows that akmaxsat takes longer to solve the reduced instances than the original ones. It would be interesting to observe whether this behavior is reproduced when other classical solvers are considered.
In addition, we observe the good performance of QAOA after the non-exact reduction for random 3-regular graphs. By fixing the spins of each community core, we give up the exact correspondence between the original and reduced ground state energy, with the latter being an upper bound of the former. This is most probably the case in our study since $E$ has spin-flip symmetry (see Eqs. (10) and (11), and consider that the original QUBO has no linear terms) and fixing the core spins implies an arbitrary selection of one of the degenerate ground states of the subinstances. Since there are at least two equivalent ground states per community, the probability of fixing all cores according to the global solution is low. Indeed, it is not even guaranteed that the global solution corresponds to a minimum of the original instance when restricted to a single community. We remark that the approximation ratio is computed with respect to the original minimum energy, indicating that QAOA's absolute performance is enhanced by the approximated reduction. Finally, as suggested at the end of the previous Section, even better results may be obtained by optimizing over the core spins while keeping all boundary spins fixed to the best assignment found by QAOA for the reduced instance.
V Discussion
We have demonstrated the application of divide-and-conquer techniques to solve arbitrary Quadratic Unconstrained Binary Optimization (QUBO) instances by solving smaller Polynomial Unconstrained Binary Optimization (PUBO) instances with fewer variables. To compute the advantage of our proposal, we consider the MaxCut problem on random 3-regular and 4-regular graphs. While a fully-classical solution based on the exact MaxSAT solver akmaxsat seems not to benefit from the variable elimination, quantum heuristic algorithms do. In particular, we applied the Quantum Approximate Optimization Algorithm (QAOA) to both the original and reduced instances and found that the latter not only require fewer qubits for random 3-regular graphs (respectively fewer qubits for 4-regular ones), but also return a considerably improved approximation ratio.
We believe that reaching the scientific and technological goal of demonstrating quantum advantage for practical applications requires considering quantum and classical processors not as competitors, but as complementary. This view is intrinsic to the formulation of variational quantum algorithms, which require several iterations of a classical optimizer, but it can be pushed even further. Our proposal uses classical preprocessing to manipulate the problem instance and make it more suitable for the quantum algorithm. While classical solvers may also benefit from this preprocessing, and this would be a very desirable outcome towards the end goal of solving problems of practical interest, we observed a situation in which the reduced instances were not easier to solve with a certain classical method than the original ones. If confirmed, the different impact of the preprocessing step may contribute to reducing the current performance gap between quantum and classical solvers (although one cannot rule out the opposite situation). Finally, we expect that post-processing may unlock further benefits for quantum algorithms.
In the context of this study, several questions remain open. Just to name a few: Is there a classical solver for the PUBO problem which takes less time on the reduced instances than on the original ones? The time cost strongly depends on the implementation details of the algorithms; can a better implementation change the fractional cost reported in Fig. 5 (right)? QAOA may be used to solve the partially-quenched instances too; what would the performance be in this case? Quantum algorithms for binary optimization are a rich topic and we expect that the stream of exciting results will continue in the next several years.
VI Methods
VI.1 Derivation of the PUBO energy function of a single community
Here we discuss a constructive method to derive the community energy function defined in Eq. (12). The method we chose is based on the framework introduced in reference Sawaya2020 , which has the more ambitious goal of converting Hamiltonians (i.e. quantum energy functions) of arbitrary quantum d-level systems into Hamiltonians of quantum spins (or qubits). We rephrase the method in the context of classical energy functions and spins.
Consider the assignment $z_B$ for the boundary spins of a community. While keeping the boundary spins fixed, we vary the core spins $z_C$ of the community and determine the minimum value of the community energy $E_c$. Such value is the energy of the best assignment compatible with $z_B$, but it can also be seen as the real number $d_{z_B}$:
(14) $d_{z_B} = \min_{z_C} E_c(z_C, z_B)$
We repeat a similar minimization for all $2^{n_B}$ assignments of the $n_B$ boundary spins and determine all corresponding values $d_{z_B}$. In an explicit way, it is clear that:
(15) $\tilde{E}_c(s_B) = \sum_{z_B} d_{z_B}\, \delta_{s_B, z_B}$
(16) $\tilde{E}_c(s_B) = \sum_{z_B} d_{z_B} \prod_{i \in B} \frac{1 + s_i z_i}{2}$
where the Kronecker delta notation has been generalized to vectors and then expressed as a product of quadratic factors. By carrying out the products and the summation, the standard PUBO form of $\tilde{E}_c$ as a polynomial in the boundary spins $s_i$ is derived. It is important to comment on the computational cost of this derivation. The above expression has $2^{n_B}$ terms, each corresponding to a product of $n_B$ 2-term factors (i.e. factors like $(1 + s_i z_i)/2$). If expanded by exhaustive enumeration, we have to sum $4^{n_B}$ contributions. In general this cost is much less than that of solving the original problem exhaustively since typically $n_B \ll n$. In addition, most of the coefficients typically cancel out, returning a PUBO with a relatively low degree. In the next Section we present a connection to the Walsh-Hadamard transform that allows us to reduce the cost from $O(4^{n_B})$ to $O(n_B 2^{n_B})$. This corresponds to a logarithmic overhead with respect to the task of finding all the $d_{z_B}$ values since we have $2^{n_B}$ of them.
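As an illustration, the two steps above (quenched minimization over the core spins, followed by the expansion of the Kronecker deltas into monomial coefficients) can be sketched by brute force. The function names and the `energy` callback below are hypothetical placeholders rather than the paper's implementation; spins take values +1/-1 and each monomial is identified by a 0/1 subset tuple.

```python
import math
from itertools import product

def reduced_energy_values(energy, n_core, n_boundary):
    """For every boundary-spin assignment z_B, minimize the community
    energy over the core spins z_C (spins take values +1/-1).
    `energy(z_core, z_boundary)` is a hypothetical stand-in for E_c."""
    return {
        z_b: min(energy(z_c, z_b) for z_c in product((+1, -1), repeat=n_core))
        for z_b in product((+1, -1), repeat=n_boundary)
    }

def pubo_coefficients(values, n_boundary):
    """Expand E~(s) = sum_z values[z] * prod_i (1 + s_i z_i)/2 into
    monomial coefficients: E~(s) = sum_S c_S prod_{i in S} s_i,
    with each subset S encoded as a 0/1 tuple."""
    coeffs = {}
    for subset in product((0, 1), repeat=n_boundary):
        c = sum(
            v * math.prod(z[i] for i in range(n_boundary) if subset[i])
            for z, v in values.items()
        ) / 2 ** n_boundary
        if abs(c) > 1e-12:  # most coefficients cancel out exactly
            coeffs[subset] = c
    return coeffs
```

For small communities this exhaustive expansion is already practical; the fast Walsh-Hadamard variant discussed in the next Section removes most of its cost.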
VI.2 Encoding diagonal operators using the Walsh-Hadamard transform
An encoding procedure is required to express a diagonal Hermitian operator as a linear combination of products of single-qubit operators. We follow the approach described in Sawaya2020 , but provide a more efficient way to compute the coefficients of the linear combination. In the context of this paper, refer to Eq. (15) of the main text and notice that we are looking for a quantum Hamiltonian whose eigenstates are trivially given by the computational basis states of the boundary qubits, and whose eigenvalues are explicitly provided as the real numbers $d_{z_B}$. In this Section we use the standard notation adopted by the quantum information community and rephrase the desired Hamiltonian as an arbitrary, diagonal operator $D$ on $n$ qubits:
(17) $D = \sum_{x \in \{0,1\}^n} d_x\, |x\rangle\langle x|$
The projector $|x\rangle\langle x|$ can be rewritten as:
(18) $|x\rangle\langle x| = \bigotimes_{i=1}^{n} \frac{\mathbb{1} + (-1)^{x_i} Z_i}{2}$
(19) $|x\rangle\langle x| = \frac{1}{2^n} \sum_{b \in \{0,1\}^n} \bigotimes_{i=1}^{n} \left( (-1)^{x_i} Z_i \right)^{b_i}$
(20) $|x\rangle\langle x| = \frac{1}{2^n} \sum_{b \in \{0,1\}^n} \left( \prod_{i=1}^{n} (-1)^{x_i b_i} \right) Z^b$
(21) $|x\rangle\langle x| = \frac{1}{2^n} \sum_{b \in \{0,1\}^n} (-1)^{b \cdot x}\, Z^b$
where the first line has been obtained using:
(22) $|x_i\rangle\langle x_i| = \frac{\mathbb{1} + (-1)^{x_i} Z_i}{2}$
the third line by adopting the shorthand notation $Z^b = Z_1^{b_1} \otimes Z_2^{b_2} \otimes \cdots \otimes Z_n^{b_n}$ (and similarly collecting the sign factors), and the fourth by explicit computation and the standard definition of scalar product $b \cdot x = \sum_{i=1}^{n} b_i x_i$.
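The chain of identities above can be checked numerically for a small number of qubits. The helper below is a sanity check added for illustration (the function name and the most-significant-bit-first ordering are our own conventions): it builds $2^{-n} \sum_b (-1)^{b \cdot x} Z^b$ and compares it with the projector $|x\rangle\langle x|$.

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

def projector(x_bits):
    """Computational-basis projector |x><x| built from the expansion
    2^{-n} sum_b (-1)^{b.x} Z^b (bit order: most significant first)."""
    n = len(x_bits)
    total = np.zeros((2 ** n, 2 ** n))
    for b in range(2 ** n):
        b_bits = [(b >> (n - 1 - i)) & 1 for i in range(n)]
        # Build the Pauli word Z^b as a Kronecker product.
        term = np.array([[1.0]])
        for bi in b_bits:
            term = np.kron(term, Z if bi else I2)
        # Sign (-1)^{b.x} from the scalar product of bit strings.
        sign = (-1) ** sum(xi * bi for xi, bi in zip(x_bits, b_bits))
        total += sign * term
    return total / 2 ** n
```

For a single qubit this reduces exactly to Eq. (22).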
Substituting this expansion inside the expression of $D$, one obtains the coefficients of the expansion of $D$ as a linear combination of products of Z Pauli matrices:
(23) $D = \sum_{b \in \{0,1\}^n} c_b\, Z^b \,, \qquad c_b = \frac{1}{2^n} \sum_{x \in \{0,1\}^n} (-1)^{b \cdot x}\, d_x$
The definition of $c_b$ makes it clear that the coefficients of the linear combination are given by the Walsh-Hadamard transform of the diagonal entries of $D$, namely the values $d_x$. One can then take advantage of the fast Walsh-Hadamard transform to compute all coefficients in time $O(N \log N)$, with $N$ being the dimensionality of $D$, here $N = 2^n$.
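A minimal sketch of this computation, assuming the standard in-place butterfly scheme for the (unnormalized) Walsh-Hadamard transform; the function names are ours:

```python
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform (no normalization),
    O(N log N) for N = len(a) a power of two."""
    a = np.array(a, dtype=float)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            x = a[i:i + h].copy()
            y = a[i + h:i + 2 * h].copy()
            a[i:i + h], a[i + h:i + 2 * h] = x + y, x - y
        h *= 2
    return a

def pauli_z_coefficients(diag):
    """Coefficients c_b of D = sum_b c_b Z^b for a diagonal operator
    with entries `diag`, via c_b = 2^{-n} sum_x (-1)^{b.x} d_x."""
    return fwht(diag) / len(diag)
```

For instance, the diagonal [1, -1, -1, 1] of $Z \otimes Z$ yields the single nonzero coefficient $c_{11} = 1$.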
VI.3 Reducing PUBO instances to SAT instances
Polynomial Unconstrained Binary Optimization (PUBO) and Maximum Satisfiability (SAT) instances are important classes of optimization problems involving binary variables. Both classes are NP-hard and it is possible to express equivalent instances in either of the two formulations. Here we describe the reduction method we used to convert the minimization of a PUBO into the maximization of a weighted SAT instance. This can be seen as a generalization of the reduction of MaxCut to Max2SAT described in reference Gramm2003 .
The reduction requires one SAT variable for each PUBO spin, and each $k$-body PUBO interaction is translated into SAT clauses of $k$ variables each.
The transformation from PUBO to SAT can be obtained for each term separately. Algorithm VI.3 describes which SAT clauses need to be added to take into account a $k$-body term. It also updates the quantity offset, corresponding to a constant value added to the SAT value to faithfully reproduce the PUBO values.
The algorithm starts with a few initializations: $k$ is the order of the spin term, and offset is either 0 or the value left by the previous translation. The main loop explores all $2^k$ assignments of the spins sequentially (line 3). For each, a tentative clause is constructed as the negation of the spin assignment (lines 5:13). The utility variable (initialized in line 4) computes whether the spin term returns a positive or negative contribution to the PUBO value (line 11). If the contribution is negative, the mirror clause is added to the SAT instance (lines 14:16). After line 17, the clauses added to the SAT have the same cumulative value for all variable assignments apart from those for which the spin term is negative; in that case, the cumulative value differs by the weight of a single clause. The offset moves the two SAT values to the desired $+1$, respectively $-1$, value of the spin term.
Apart from a constant offset, an explicit low-order case is presented below for direct inspection:
A few observations. 1) If the PUBO term has a non-unit coefficient, all weights for the SAT clauses and the offset are multiplied by its absolute value. 2) Several SAT solvers search for the maximum value of the SAT instance while PUBO solvers search for the minimum value of the PUBO instance. Inverting the sign of all weights and of the offset accounts for this difference. 3) Certain SAT solvers have constraints on the weight values; for example, akmaxsat requires the weights to be positive integers. These constraints can be accounted for by a suitable rescaling of the weights and offset. If the coefficient of the PUBO term is negative, the condition in line 14 is reversed and the sign of the offset contribution is adjusted accordingly.
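To make the reduction concrete, here is a sketch of the term-by-term translation in the spirit of Algorithm VI.3. The clause weights (2 per clause) and the offset ($2^k - 1$ per unit coefficient) are our reconstruction of one consistent choice, not necessarily identical line-by-line to the pseudocode above; a clause is represented as a list of (variable, negated) literals and spin +1 is encoded as boolean True.

```python
import math
from itertools import product

def pubo_term_to_sat(variables, coeff=1.0):
    """Translate the k-body PUBO term coeff * prod_i s_i (spins +1/-1)
    into weighted SAT clauses plus a constant offset, such that for any
    assignment: PUBO term value = offset - (weight of satisfied clauses)."""
    k = len(variables)
    clauses = []
    for assignment in product((+1, -1), repeat=k):
        term = coeff * math.prod(assignment)
        if term > 0:
            # Penalize this assignment: add the clause that is
            # unsatisfied only here (each literal is the negation
            # of the assigned value).
            clause = [(v, s == +1) for v, s in zip(variables, assignment)]
            clauses.append((2.0 * abs(coeff), clause))
    offset = (2 ** k - 1) * abs(coeff)
    return clauses, offset

def sat_value(clauses, assignment):
    """Total weight of satisfied clauses; `assignment` maps var -> bool."""
    return sum(
        w for w, clause in clauses
        if any(assignment[v] != neg for v, neg in clause)
    )
```

For a 2-body term with unit coefficient this reproduces the two Max2SAT clauses $(\neg u \vee \neg v)$ and $(u \vee v)$ of the MaxCut reduction in Gramm2003 .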
Acknowledgements.
The author thanks Jesmin Jahan Tithi for discussions on community detection algorithms and their implementation.
References

[1] Gary Kochenberger, Jin-Kao Hao, Fred Glover, Mark Lewis, Zhipeng Lü, Haibo Wang, and Yang Wang. The unconstrained binary quadratic programming problem: A survey. Journal of Combinatorial Optimization, 28(1):58–81, 2014.
 [2] Fred Glover, Gary Kochenberger, and Yu Du. Quantum Bridge Analytics I: a tutorial on formulating and using QUBO models. arXiv:1811.11538, 2018.
 [3] Rodney J. Baxter. Exactly Solved Models in Statistical Mechanics. Academic Press, 1982.
 [4] Uriel Feige and Michel Goemans. Approximating the Value of Two Prover Proof Systems, With Applications to MAX 2SAT and MAX DICUT. Proceedings Third Israel Symposium on the Theory of Computing and Systems, pages 182–189, 1995.
 [5] Andrew Lucas. Ising formulations of many NP problems. Frontiers in Physics, 2(February):1–15, 2014.
 [6] Iain Dunning, Swati Gupta, and John Silberholz. What works best when? A systematic evaluation of heuristics for MaxCut and QUBO. INFORMS Journal on Computing, 30(3):608–624, 2018.
 [7] Peter W. Shor. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM Review, 41(2):303–332, 1999.
 [8] Lov K. Grover. Quantum mechanics helps in searching for a needle in a haystack. Physical Review Letters, 79(2):325–328, 1997.

[9] Daniel S. Abrams and Seth Lloyd. Quantum algorithm providing exponential speed increase for finding eigenvalues and eigenvectors. Physical Review Letters, 83(24):5162–5165, 1999.
 [10] Yudong Cao, Jonathan Romero, Jonathan P. Olson, Matthias Degroote, Peter D. Johnson, Mária Kieferová, Ian D. Kivlichan, Tim Menke, Borja Peropadre, Nicolas P. D. Sawaya, Sukin Sim, Libor Veis, and Alán Aspuru-Guzik. Quantum Chemistry in the Age of Quantum Computing. Chemical Reviews, 119(19):10856–10915, 2019.
 [11] Tadashi Kadowaki and Hidetoshi Nishimori. Quantum annealing in the transverse Ising model. Physical Review E, 58(5):5355, 1998.
 [12] Edward Farhi, Jeffrey Goldstone, Sam Gutmann, Joshua Lapan, Andrew Lundgren, and Daniel Preda. A quantum adiabatic evolution algorithm applied to random instances of an NP-complete problem. Science, 292(5516):472–475, 2001.
 [13] Sergio Boixo, Troels F. Rønnow, Sergei V. Isakov, Zhihui Wang, David Wecker, Daniel A. Lidar, John M. Martinis, and Matthias Troyer. Evidence for quantum annealing with more than one hundred qubits. Nature Physics, 10(February):218, 2014.
 [14] Siddhartha Santra, Gregory Quiroz, Greg Ver Steeg, and Daniel A. Lidar. Max 2-SAT with up to 108 qubits. New Journal of Physics, 16(4):045006, 2014.
 [15] Davide Venturelli, Salvatore Mandrà, Sergey Knysh, Bryan O’Gorman, Rupak Biswas, and Vadim N. Smelyanskiy. Quantum optimization of fully connected spin glasses. Physical Review X, 5:031040, 2015.
 [16] Edward Farhi, Jeffrey Goldstone, and Sam Gutmann. A quantum approximate optimization algorithm. arXiv:1411.4028, 2014.
 [17] D. Wecker, M. B. Hastings, and M. Troyer. Training a quantum optimizer. Phys. Rev. A, 94(2):022309, 2016.
 [18] G. Giacomo Guerreschi and M. Smelyanskiy. Practical optimization for hybrid quantum-classical algorithms. arXiv:1701.01450, 2017.
 [19] Leo Zhou, Sheng-Tao Wang, Soonwon Choi, Hannes Pichler, and Mikhail D. Lukin. Quantum Approximate Optimization Algorithm: Performance, Mechanism, and Implementation on Near-Term Devices. Physical Review X, 10(2):021067, 2020.
 [20] Ruslan Shaydulin, Ilya Safro, and Jeffrey Larson. Multistart Methods for Quantum Approximate Optimization. 2019 IEEE High Performance Extreme Computing Conference, HPEC 2019, 2019.
 [21] Michael Streif and Martin Leib. Training the Quantum Approximate Optimization Algorithm without access to a Quantum Processing Unit. arXiv:1908.08862, 2019.
 [22] J. S. Otterbach, R. Manenti, N. Alidoust, A. Bestwick, M. Block, B. Bloom, S. Caldwell, N. Didier, E. Schuyler Fried, S. Hong, P. Karalekas, C. B. Osborn, A. Papageorge, E. C. Peterson, G. Prawiroatmodjo, Nick Rubin, C. A. Ryan, D. Scarabelli, M. Scheer, E. A. Sete, P. Sivarajah, R. S. Smith, A. Staley, N. Tezak, W. J. Zeng, A. Hudson, B. R. Johnson, M. Reagor, M. P. da Silva, and Chad T. Rigetti. Unsupervised machine learning on a hybrid quantum computer. arXiv:1712.05771, 2017.
 [23] Peter Müller, Marc Ganzhorn, Andreas Fuhrer, Stefan Filipp, Kristan Temme, Andrew Cross, Ivano Tavernelli, Abhinav Kandala, Jay M. Gambetta, John Smolin, Lev S. Bishop, Walter Riess, Gian Salis, Panagiotis Barkoutsos, Antonio Mezzacapo, Daniel J. Egger, Nikolaj Moll, and Jerry M. Chow. Quantum optimization using variational algorithms on nearterm quantum devices. Quantum Science and Technology, 3:030503, 2018.
 [24] Hannes Pichler, Shengtao Wang, Leo Zhou, Soonwon Choi, and Mikhail D. Lukin. Quantum optimization for maximum independent set using Rydberg atom arrays. arXiv:1808.10816, 2018.
 [25] Guido Pagano, Aniruddha Bapat, Patrick Becker, Katherine S. Collins, Arinjoy De, Paul W. Hess, Harvey B. Kaplan, Antonis Kyprianidis, Wen Lin Tan, Christopher Baldwin, Lucas T. Brady, Abhinav Deshpande, Fangli Liu, Stephen Jordan, Alexey V. Gorshkov, and Christopher Monroe. Quantum approximate optimization of the long-range Ising model with a trapped-ion quantum simulator. Proceedings of the National Academy of Sciences of the United States of America, 117(41):25396–25401, 2020.
 [26] Frank Arute, Kunal Arya, Ryan Babbush, Dave Bacon, Joseph C. Bardin, Rami Barends, Sergio Boixo, Michael Broughton, Bob B. Buckley, David A. Buell, Brian Burkett, Nicholas Bushnell, Yu Chen, Zijun Chen, Ben Chiaro, Roberto Collins, William Courtney, Sean Demura, Andrew Dunsworth, Edward Farhi, Austin Fowler, Brooks Foxen, Craig Gidney, Marissa Giustina, Rob Graff, Steve Habegger, Matthew P. Harrigan, Alan Ho, Sabrina Hong, Trent Huang, L. B. Ioffe, Sergei V. Isakov, Evan Jeffrey, Zhang Jiang, Cody Jones, Dvir Kafri, Kostyantyn Kechedzhi, Julian Kelly, Seon Kim, Paul V. Klimov, Alexander N. Korotkov, Fedor Kostritsa, David Landhuis, Pavel Laptev, Mike Lindmark, Martin Leib, Erik Lucero, Orion Martin, John M. Martinis, Jarrod R. McClean, Matt McEwen, Anthony Megrant, Xiao Mi, Masoud Mohseni, Wojciech Mruczkiewicz, Josh Mutus, Ofer Naaman, Matthew Neeley, Charles Neill, Florian Neukart, Hartmut Neven, Murphy Yuezhen Niu, Thomas E. O'Brien, Bryan O'Gorman, Eric Ostby, Andre Petukhov, Harald Putterman, Chris Quintana, Pedram Roushan, Nicholas C. Rubin, Daniel Sank, Kevin J. Satzinger, Andrea Skolik, Vadim Smelyanskiy, Doug Strain, Michael Streif, Kevin J. Sung, Marco Szalay, Amit Vainsencher, Theodore White, Z. Jamie Yao, Ping Yeh, Adam Zalcman, and Leo Zhou. Quantum Approximate Optimization of Non-Planar Graph Problems on a Planar Superconducting Processor. arXiv:2004.04197, 2020.
 [27] Gian Giacomo Guerreschi and Anne Y. Matsuura. QAOA for MaxCut requires hundreds of qubits for quantum speedup. Scientific Reports, 9:6903, 2019.
 [28] Alexander M. Dalzell, Aram W. Harrow, Dax Enshan Koh, and Rolando L. La Placa. How many qubits are needed for quantum computational supremacy? Quantum, 4:264, 2020.
 [29] Sergey Bravyi, Alexander Kliesch, Robert Koenig, and Eugene Tang. Obstacles to Variational Quantum Optimization from Symmetry Protection. Physical Review Letters, 125:260505, 2020.
 [30] Adrian Kügel. Improved exact solver for the weighted MaxSAT problem. Proc. Pragmatics of SAT Workshop (POS10), 8:15–27, 2012.
 [31] Richard M. Karp. Reducibility among combinatorial problems. In R. E. Miller, J. W. Thatcher, and J. D. Bohlinger, editors, Complexity of Computer Computations. The IBM Research Symposia Series, pages 85–103. Springer, Boston, MA, 1972.
 [32] Panagiotis Kl. Barkoutsos, Giacomo Nannicini, Anton Robert, Ivano Tavernelli, and Stefan Woerner. Improving Variational Quantum Optimization using CVaR. Quantum, 4:256, 2020.
 [33] Jason Larkin, Matías Jonsson, Daniel Justice, and Gian Giacomo Guerreschi. Evaluation of Quantum Approximate Optimization Algorithm based on the approximation ratio of single samples. arXiv:2006.04831, 2020.
 [34] Santo Fortunato and Darko Hric. Community detection in networks: A user guide. Physics Reports, 659:1–44, 2016.
 [35] Gabor Csardi and Tamas Nepusz. The igraph software package for complex network research. InterJournal, Complex Systems:1695, 2006.
 [36] Nicolas P. D. Sawaya, Tim Menke, Thi Ha Kyaw, Sonika Johri, Alán Aspuru-Guzik, and Gian Giacomo Guerreschi. Resource-efficient digital quantum simulation of d-level systems for photonic, vibrational, and spin Hamiltonians. npj Quantum Information, 6:49, 2020.
 [37] M. X. Goemans and D. P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM, 42(6):1115–1145, 1995.
 [38] Eran Halperin, Dror Livnat, and Uri Zwick. MAX CUT in cubic graphs. Journal of Algorithms, 53(2):169–185, 2004.
 [39] Jeffrey Larson and Stefan M. Wild. Asynchronously parallel optimization solver for finding multiple minima. Mathematical Programming Computation, 10(3):303–332, 2018.
 [40] Pauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, Stéfan J. van der Walt, Matthew Brett, Joshua Wilson, K. Jarrod Millman, Nikolay Mayorov, Andrew R. J. Nelson, Eric Jones, Robert Kern, Eric Larson, C J Carey, İlhan Polat, Yu Feng, Eric W. Moore, Jake VanderPlas, Denis Laxalde, Josef Perktold, Robert Cimrman, Ian Henriksen, E. A. Quintero, Charles R. Harris, Anne M. Archibald, Antônio H. Ribeiro, Fabian Pedregosa, Paul van Mulbregt, and SciPy 1.0 Contributors. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods, 17:261–272, 2020.
 [41] Jens Gramm, Edward A. Hirsch, Rolf Niedermeier, and Peter Rossmanith. Worst-case upper bounds for MAX-2-SAT with an application to MAX-CUT. Discrete Applied Mathematics, 130:139–155, 2003.