1 Introduction
As the challenge of scaling traditional transistor-based Central Processing Unit (CPU) technology continues to increase, experimental physicists and high-tech companies have begun to explore radically different computational technologies, such as adiabatic quantum computers (AQCs) [1], gate-based quantum computers [2, 3, 4], CMOS annealers [5, 6, 7], neuromorphic computers [8, 9, 10], memristive circuits [11, 12], and optical parametric oscillators [13, 14, 15]. The goal of all of these technologies is to leverage the dynamical evolution of a physical system to perform a computation that is challenging to emulate using traditional CPU technology (e.g., the simulation of quantum physics) [16]. Despite their entirely disparate physical implementations, AQCs, CMOS annealers, memristive circuits, and optical parametric oscillators are unified by a common mathematical abstraction known as the Ising model, which has been widely adopted by the physics community for the study of naturally occurring discrete optimization processes [17]. Furthermore, this kind of “Ising machine” [13, 14] is already commercially available with more than 2000 decision variables in the form of AQCs developed by D-Wave Systems [18].
The emergence of physical devices that can quickly solve Ising models is particularly relevant to the constraint programming, artificial intelligence, and operations research communities, because the impetus for building these devices is to perform discrete optimization. As this technology matures, it may be possible for this specialized hardware to rapidly solve challenging combinatorial problems, such as MaxCut [19] or MaxClique [20]. Preliminary studies have suggested that some classes of Constraint Satisfaction Problems may be effectively encoded in such devices because of their combinatorial structure [21, 22, 23, 24]. Furthermore, an Ising model co-processor could have significant impacts on solution methods for a variety of fundamental combinatorial problem classes, such as MAX-SAT [25, 26, 27] and integer programming [28]. At this time, however, it remains unclear how established optimization algorithms should leverage this emerging technology. This paper helps to address this gap by highlighting the key concepts and hardware limitations that an algorithm designer needs to understand to engage in this emerging and exciting computational paradigm.

Similar to an arithmetic logic unit (ALU) or a graphics processing unit (GPU), this work proposes the idea of an Ising processing unit (IPU) as the computational abstraction for a wide variety of physical devices that perform optimization of Ising models. This work begins with a brief introduction to the IPU abstraction and its mathematical foundations in Section 2. Then the additional challenges of working with real-world hardware are discussed in Section 3, and an overview of previous benchmarking studies and solution methods is presented in Section 4. Finally, a detailed benchmarking study of a D-Wave 2X IPU is conducted in Section 5, which highlights the current capabilities of such a device.
Note that, due to the maturity and commercial availability of the D-Wave IPU, this work often refers to that architecture as an illustrative example. However, the methods and tools proposed herein are applicable to all emerging IPU hardware realizations, to the best of our knowledge.
2 A Brief Introduction to Ising Models
This section introduces the notations of the paper and provides a brief introduction to Ising models, the core mathematical abstraction of IPUs. The Ising model refers to the class of graphical models where the nodes, i ∈ N, represent spin variables (i.e., σ_i ∈ {-1, 1}) and the edges, (i, j) ∈ E, represent interactions of spin variables (i.e., σ_i σ_j). A local field h_i is specified for each node i, and an interaction strength J_ij is specified for each edge (i, j). Given these data, the energy of the Ising model is defined as,
E(σ) = Σ_{(i,j) ∈ E} J_ij σ_i σ_j + Σ_{i ∈ N} h_i σ_i    (1)
Applications of the Ising model typically consider one of two tasks. First, some applications focus on finding the lowest possible energy of the Ising model, known as a ground state. That is, finding the globally optimal solution of the following binary quadratic optimization problem:
min_σ  Σ_{(i,j) ∈ E} J_ij σ_i σ_j + Σ_{i ∈ N} h_i σ_i   s.t.  σ_i ∈ {-1, 1}  ∀ i ∈ N    (2)
Second, other applications are interested in sampling from the Boltzmann distribution of the Ising model’s states:
Pr(σ) = exp(-E(σ)/T) / Σ_{σ′} exp(-E(σ′)/T)    (3)
where T is a parameter representing the effective temperature of the Boltzmann distribution [29]. It is valuable to observe that, in the Boltzmann distribution, the lowest energy states have the highest probability. Therefore, the task of sampling from a Boltzmann distribution is similar to the task of finding the lowest energy of the Ising model. Indeed, as T approaches 0, the sampling task smoothly transforms into the aforementioned optimization task. This paper focuses exclusively on the mathematical program presented in (2), the optimization task.

Frustration:
The notion of frustration is common in the study of Ising models and refers to any instance of (2) where the optimal solution, σ*, satisfies the property,
E(σ*) > -(Σ_{(i,j) ∈ E} |J_ij| + Σ_{i ∈ N} |h_i|)    (4)
A canonical example is the following three-node problem:
N = {1, 2, 3},  E = {(1, 2), (1, 3), (2, 3)},  h_1 = h_2 = h_3 = 0,  J_12 = J_13 = J_23 = 1    (5)
Observe that, in this case, there are a number of optimal solutions where σ*_i σ*_j = -1 holds for two of the three edges, but none where it holds for all three edges simultaneously. Note that frustration has important algorithmic implications, as greedy algorithms are sufficient for optimizing Ising models without frustration.
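To make the frustration in (5) concrete, the eight spin states of this model can be enumerated by brute force. The following is a minimal sketch (the dictionary encoding of the couplings is an illustrative choice, not any IPU API):

```python
from itertools import product

# Three-node model from (5): no fields, all pairwise couplings set to 1.
J = {(1, 2): 1, (1, 3): 1, (2, 3): 1}

def energy(sigma):
    # Ising energy (1) with h = 0
    return sum(Jij * sigma[i] * sigma[j] for (i, j), Jij in J.items())

states = [dict(zip((1, 2, 3), s)) for s in product((-1, 1), repeat=3)]
energies = [energy(s) for s in states]
# The naive bound -3 (all three edge products equal to -1) is unattainable:
# at best two of the three edge products can be -1 at once.
print(min(energies))  # -1
```

Six of the eight states attain the optimal energy of -1, while the naive bound of -3 from (4) is unattainable, which is exactly the frustration condition.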
Gauge Transformations:
A valuable property of the Ising model is the gauge transformation, which characterizes an equivalence class of Ising models. For illustration, consider the optimal solution, σ*, of a given Ising model. One can construct a new Ising model where the optimal solution is the same, except that the value of σ*_k is flipped for a particular node k, as follows:
h_k ⇒ -h_k    (6a)
J_ij ⇒ -J_ij  ∀ (i, j) ∈ E(k)    (6b)
where E(k) indicates the neighboring edges of node k. This manipulation is referred to as a gauge transformation. Given a complete source state σ^s and a complete target state σ^t, this transformation is generalized to all of N by,
h_i ⇒ h_i σ^s_i σ^t_i  ∀ i ∈ N    (7a)
J_ij ⇒ J_ij σ^s_i σ^t_i σ^s_j σ^t_j  ∀ (i, j) ∈ E    (7b)
It is valuable to observe that, by using this gauge transformation property, one can consider the class of Ising models where the optimal solution is σ_i = -1 ∀ i ∈ N, or any arbitrary vector of {-1, 1} values, without loss of generality.

Bijection of Ising and Boolean Optimization:
It is also useful to observe that there is a bijection between Ising optimization (i.e., over σ ∈ {-1, 1}^N) and Boolean optimization (i.e., over x ∈ {0, 1}^N). The transformation of σ to x is given by,
x_i = (σ_i + 1)/2  ∀ i ∈ N    (8a)
x ∈ {0, 1}^N    (8b)
and the inverse transformation of x to σ is given by,
σ_i = 2x_i - 1  ∀ i ∈ N    (9a)
σ ∈ {-1, 1}^N    (9b)
Consequently, any results from solving Ising models are also immediately applicable to the following class of Boolean optimization problems:
min_x  Σ_{(i,j) ∈ E} c_ij x_i x_j + Σ_{i ∈ N} c_i x_i + c   s.t.  x_i ∈ {0, 1}  ∀ i ∈ N    (10)
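The substitution σ_i = 2x_i - 1 can be carried out mechanically, which makes the bijection easy to check by exhaustive enumeration on a small model. The following sketch derives Boolean coefficients for (10) from a given h and J; the coefficient formulas follow from expanding the substitution, and the example model is illustrative:

```python
from itertools import product

def ising_to_boolean(h, J):
    """Substitute s_i = 2*x_i - 1 into the energy (1) to obtain
    coefficients (c_ij, c_i, c) of the Boolean quadratic program (10)."""
    c_quad = {e: 4 * Jij for e, Jij in J.items()}
    c_lin = {i: 2 * hi for i, hi in h.items()}
    c_const = -sum(h.values())
    for (i, j), Jij in J.items():
        c_lin[i] -= 2 * Jij
        c_lin[j] -= 2 * Jij
        c_const += Jij
    return c_quad, c_lin, c_const

def ising_energy(h, J, s):
    return (sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
            + sum(hi * s[i] for i, hi in h.items()))

def boolean_value(c_quad, c_lin, c_const, x):
    return (sum(cij * x[i] * x[j] for (i, j), cij in c_quad.items())
            + sum(ci * x[i] for i, ci in c_lin.items()) + c_const)

# spot-check the bijection on every state of a small model
h = {1: 0.5, 2: -1.0}
J = {(1, 2): 1.0}
cq, cl, c0 = ising_to_boolean(h, J)
for bits in product((0, 1), repeat=2):
    x = dict(zip((1, 2), bits))
    s = {i: 2 * x[i] - 1 for i in x}
    assert ising_energy(h, J, s) == boolean_value(cq, cl, c0, x)
print("objectives agree on all states")
```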
The Ising model provides a clean mathematical abstraction for understanding the computation that IPUs perform. However, in practice, a number of hardware implementation factors present additional challenges for computing with IPUs.
3 Features of Analog Ising Processing Units
The core inspiration for developing IPUs is to take advantage of the natural evolution of a discrete physical system to find high-quality solutions to an Ising model [1, 13, 6, 11]. Consequently, to the best of our knowledge, all IPUs developed to date are analog machines, which present a number of challenges that the optimization community is not accustomed to considering.
Effective Temperature:
The ultimate goal of IPUs is to solve the optimization problem (2) and determine the globally optimal solution to the input Ising model. In practice, however, a variety of analog factors preclude IPUs from reliably finding globally optimal solutions. As a first-order approximation, current IPUs behave like a Boltzmann sampler (3) with some hardware-specific effective temperature [30]. It has also been observed that the effective temperature of an IPU can vary around a nominal value based on the Ising model that is being executed [31]. This suggests that the IPU’s performance can change based on the structure of the problem input.
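The Boltzmann-sampler view of an IPU can be illustrated by computing (3) exactly on the small frustrated model of (5): as the effective temperature drops, the probability mass concentrates on the ground states. A minimal sketch (the two temperatures are chosen for illustration only):

```python
import math
from itertools import product

# Boltzmann weights (3) for the three-node model of (5) at two temperatures,
# illustrating how a lower effective temperature concentrates probability
# on the ground states.
J = {(0, 1): 1, (0, 2): 1, (1, 2): 1}

def energy(s):
    return sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

def ground_state_mass(T):
    weights = {s: math.exp(-energy(s) / T) for s in product((-1, 1), repeat=3)}
    Z = sum(weights.values())                      # partition function
    e_min = min(energy(s) for s in weights)
    return sum(w for s, w in weights.items() if energy(s) == e_min) / Z

print(round(ground_state_mass(2.0), 3), round(ground_state_mass(0.2), 3))  # 0.957 1.0
```

At the higher temperature, a small but non-negligible fraction of samples are excited states, which mirrors the behavior of a hardware sampler whose effective temperature cannot be driven to zero.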
Environmental Noise:
One of the primary contributors to the sampling nature of IPUs is environmental noise. All analog machines are subject to faults due to environmental noise; for example, even classical computers can be affected by cosmic rays. However, given the relative novelty of IPUs, the effects of environmental noise are noticeable in current hardware and contribute to the perceived effective temperature of the IPU.
Coefficient Biases:
Once an Ising model is input into an IPU, its coefficients are subject to at least two sources of bias. The first source of bias is a model programming error that occurs independently each time the IPU is configured for a computation. This bias is often mitigated by programming the IPU multiple times with an identical input and combining the results from all executions. The second source of bias is a persistent coefficient error, which is an artifact of the IPU manufacturing and calibration process. Because this bias is consistent across multiple IPU executions, this source of bias is often mitigated by performing multiple gauge transformations on the input and combining the results from all executions.
Problem Coefficients:
In traditional optimization applications, the problem coefficients are often rescaled to best suit floating-point arithmetic. Similarly, IPUs have digital-to-analog converters that can encode a limited number of values; typically these values are represented as numbers in the range of -1 to 1. Some IPUs allow for hundreds of steps within this range [1, 6], whereas others support only the discrete set of {-1, 0, 1} [13]. In either case, the mathematical Ising model must be rescaled into the IPU’s operating range. However, this mathematically equivalent transformation can result in unexpected side effects, because the coefficients used in the IPU hardware are perturbed by a constant amount of environmental noise and hardware bias, which can outweigh small rescaled coefficient values.

Topological Limitations:
Another significant feature of IPUs is a restricted set of variable products. In classical optimization (e.g., (2)), it is assumed that every variable can interact with every other variable, that is, an Ising model where an edge connects every pair of variables. However, because of the hardware implementation of an IPU, it may not be possible for some variables to interact. For example, the current D-Wave IPUs are restricted to the chimera topology, which is a two-dimensional lattice of unit cells, each of which consists of a 4-by-4 bipartite graph (e.g., see Figure 1). In addition to these restrictions, fabrication errors can also lead to random failures of nodes and edges in the IPU hardware. Indeed, as a result of these minor imperfections, every D-Wave IPU developed to date has a unique topology [32, 33, 34]. The design of algorithms for embedding various kinds of Ising models into a specific IPU topology remains an active area of research [21, 35, 36, 37].
3.1 Challenges of Benchmarking Ising Processing Units
These analog hardware features present unique challenges for benchmarking IPUs that fall roughly into three categories: (1) comparing to established benchmark libraries; (2) developing Ising model instance generators for testing; and (3) comparing with classical optimization methods.
Benchmark Libraries:
Research and development in optimization algorithms has benefited greatly from standardized benchmark libraries [38, 39, 40]. However, direct application of these libraries to IPUs is out of scope in the near term for the following reasons: (1) the Ising model is a binary quadratic program, which is sufficiently restrictive to preclude the use of many standard problem libraries; (2) even in cases where the problems of interest can be mapped directly to the Ising model (e.g., MaxCut, MaxClique), the task of embedding given problems onto the IPU’s hardware graph can be prohibitive [41]; and (3) even if an embedding can be found, it is not obvious that the problem’s coefficients will be amenable to the IPU’s operating range.
Instance Generation Algorithms:
Due to these challenges, the standard practice in the literature is to generate a collection of instances for a given IPU and use these cases for the evaluation of that IPU [34, 42, 43, 33]. The hope is that these instances provide a reasonable proxy for how real-world applications might perform on such a device.
Comparison with Classical Algorithms:
Because of the radically different hardware of CPUs vs. IPUs and the stochastic nature of IPUs, how to conduct a fair comparison of these two technologies is not immediately clear [44, 45, 43]. Indeed, comparisons of D-Wave’s IPU with classical algorithms have resulted in vigorous discussions about what algorithms and metrics should be used to make such comparisons [46, 34, 47]. It is widely accepted that IPUs do not provide optimality guarantees and are best compared to heuristic methods (e.g., local search) in terms of runtime performance. This debate will most likely continue for several years. In this work, our goal is not to answer these challenging questions but rather to highlight that commercial mixed-integer programming solvers are valuable and important tools for exploring them.
4 A Review of Ising Processing Unit Benchmarking Studies
Due to the challenges associated with mapping established optimization test cases to specific IPU hardware [41], the IPU benchmarking community has adopted the practice of generating Ising model instances on a case-by-case basis for specific IPUs [34, 42, 43, 33] and evaluating these instances with a variety of solution methods. The following subsections provide a brief overview of the instance generation algorithms and solution methods that have been used in various IPU benchmarking studies. The goals of this review are to: (1) reveal the lack of consistency across current benchmarking studies; (2) highlight the omission of integer programming methods in all of the recent publications; and (3) motivate the numerical study conducted in this work.
4.1 Instance Generation Algorithms
The task of IPU instance generation amounts to finding interesting values for the coefficients h and J in (1). In some cases the procedures for generating these values are elaborate [33, 48] and are designed to leverage theoretical results about Ising models [42]. A brief survey reveals five primary problem classes in the literature, each of which is briefly introduced. For a detailed description, please refer to the source publication of each problem class.
Random (RANk and RANFk):
To the best of our knowledge, this general class of problem was first proposed in [27] and was later refined into the RANk problem in [34]. The RANk problem consists simply of assigning each value of h to 0 and selecting each value of J uniformly at random from the set

{-k, -k + 1, …, -1, 1, …, k - 1, k}    (11)
The RANFk problem is a simple variant of RANk where the values of h are also selected uniformly at random from (11). As we will later see, RAN1 and RANF1, where k = 1, are an interesting subclass of this problem.
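A RANk/RANFk generator is only a few lines; the following sketch (the function name and interface are our own, not from the cited works) draws couplings, and optionally fields, from the set in (11):

```python
import random

def ran_k(edges, k, fields=False, seed=None):
    """Sketch of the RANk / RANFk generators: couplings (and, for RANFk,
    fields) drawn uniformly from {-k, ..., -1, 1, ..., k}, zero excluded."""
    rng = random.Random(seed)
    values = [v for v in range(-k, k + 1) if v != 0]
    nodes = sorted({i for e in edges for i in e})
    J = {e: rng.choice(values) for e in edges}
    h = {i: (rng.choice(values) if fields else 0) for i in nodes}
    return h, J

h, J = ran_k([(1, 2), (2, 3), (1, 3)], k=1, seed=0)
# RAN1: every coupling is -1 or +1 and every field is zero
print(all(v in (-1, 1) for v in J.values()), all(v == 0 for v in h.values()))  # True True
```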
Frustrated Loops (FLk and FCLk):
The frustrated loop problem was originally proposed in [42] and then later refined to the FLk problem in [48]. It consists of generating a collection of random cycles in the IPU graph. In each cycle, all of the edges are set to J_ij = -1 except one random edge, which is set to J_ij = 1 to produce frustration. A scaling factor, α, is used to control how many random cycles should be generated, and the parameter k determines how many cycles each edge can participate in. A key property of the FLk generation procedure is that two globally optimal solutions are maintained at σ_i = -1 ∀ i ∈ N and σ_i = 1 ∀ i ∈ N [48]. However, to obfuscate this solution, a gauge transformation is often applied to make the optimal solution a random assignment of {-1, 1} values.
A variant of the frustrated loop problem is the frustrated cluster loop problem, FCLk [43]. The FCLk problem is inspired by the chimera network topology (i.e., Figure 1). The core idea is that tightly coupled variables (e.g., those within a unit cell in Figure 1) should form a cluster where all of the variables take the same value. This is achieved by setting all of the values of J within the cluster to -1. For the remaining edges between clusters, the previously described frustrated cycle generation scheme is used. Note that a polynomial time algorithm is known for solving the FCLk problem class on chimera graphs [45].
It is worthwhile to mention that the FLk and FCLk instance generators are solving a cycle packing problem on the IPU graph. Hence, the randomized algorithms proposed in [42, 43] are not guaranteed to find a solution if one exists. In practice, these algorithms fail for highly constrained settings of α and k.
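The frustration produced by a single loop can be checked by brute force: on a cycle where one edge disagrees with the rest, at least one edge term must remain unsatisfied, and the two uniform states are among the ground states. A small sketch on a 6-node cycle (this follows the description above, not the exact generator of [42, 48]):

```python
from itertools import product

# One frustrated loop on a cycle of n nodes: couplings favor alignment
# (J = -1) on every edge except one, which is set to J = +1.
n = 6
edges = [(i, (i + 1) % n) for i in range(n)]
J = {e: -1 for e in edges}
J[edges[0]] = 1  # the frustrated edge

def energy(s):
    return sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

energies = {s: energy(s) for s in product((-1, 1), repeat=n)}
best = min(energies.values())
all_up, all_down = (1,) * n, (-1,) * n
# -(n - 1) + 1 is the best achievable; all-aligned states attain it
print(best, energies[all_up] == best, energies[all_down] == best)  # -4 True True
```

No state can satisfy all six edges, because the product of the spin products around a cycle is always +1 while the coupling signs demand it be -1; this is the same parity argument that underlies (4).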
WeakStrong Cluster Networks (WSCNs):
The WSCN problem was proposed in [33] and is highly specialized to the chimera network topology. The basic building block of a WSCN is a pair of spin clusters in the chimera graph (e.g., two adjacent unit cells in Figure 1). In the strong cluster, the values of h are set to the strong force parameter sf, and in the weak cluster, the values of h are set to the weak force parameter wf. All of the values of J within and between this cluster pair are set to -1. Once a number of weak-strong cluster pairs have been placed, the strong clusters are connected to each other using random values of J. The values of sf = -1 and wf = 0.44 are recommended by [33]. The motivation for the WSCN design is that the clusters create deep local minima that are difficult for local search methods to escape.
4.2 Solution Methods
Once a collection of Ising model instances has been generated, the next step in a typical benchmarking study is to evaluate those instances on a variety of solution methods, including the IPU, and compare the results. A brief survey reveals five primary solution methods in the literature, each of which is briefly introduced. For a detailed description, please refer to the source publications of each solution method.
Simulated Annealing:
The most popular strawman solution method for comparison is Simulated Annealing [49]. Typically, the implementation only considers a neighborhood of single variable flips, and the focus of these implementations is on computational performance (e.g., using GPUs for acceleration). The search is run until a specified time limit is reached.
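A minimal single-flip simulated annealing sketch is shown below (the geometric cooling schedule and parameter defaults are illustrative choices, not those of any published implementation); on the tiny frustrated triangle of (5) it reliably reaches the ground-state energy of -1:

```python
import math
import random

def simulated_annealing(h, J, steps=20000, t_hot=2.0, t_cold=0.05, seed=0):
    """Single-flip simulated annealing for the Ising model (1)."""
    rng = random.Random(seed)
    nodes = sorted(set(h) | {i for e in J for i in e})
    adj = {i: [] for i in nodes}
    for (i, j), Jij in J.items():
        adj[i].append((j, Jij))
        adj[j].append((i, Jij))
    s = {i: rng.choice((-1, 1)) for i in nodes}
    for step in range(steps):
        t = t_hot * (t_cold / t_hot) ** (step / steps)  # geometric cooling
        i = rng.choice(nodes)
        # energy change from flipping spin i
        delta = -2 * s[i] * (h.get(i, 0) + sum(Jij * s[j] for j, Jij in adj[i]))
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            s[i] = -s[i]
    return s

# the frustrated triangle of (5) has a ground-state energy of -1
h = {1: 0, 2: 0, 3: 0}
J = {(1, 2): 1, (1, 3): 1, (2, 3): 1}
s = simulated_annealing(h, J)
print(sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()))  # -1
```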
Large Neighborhood Search:
The state-of-the-art metaheuristic for solving Ising models on chimera graphs is a Large Neighborhood Search (LNS) method called the Hamze-Freitas-Selby (HFS) algorithm [50, 51]. The core idea of this algorithm is to extract low-treewidth subgraphs of the given Ising model and then use dynamic programming to compute the optimal configuration of these subgraphs. This extract-and-optimize process is repeated until a specified time limit is reached. A key to this method’s success is the availability of a highly optimized open-source C implementation [52].
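The dynamic-programming core of this idea is simplest to see in the treewidth-1 case: on a tree, the optimal Ising configuration can be computed exactly by passing minimum-energy tables from the leaves to a root. The sketch below is our own illustration of that special case, not the HFS code of [52]:

```python
def tree_ising_min(h, J, root):
    """Exact minimum Ising energy on a tree via dynamic programming --
    the treewidth-1 case of the subproblems that HFS solves exactly."""
    adj = {i: [] for i in h}
    for (i, j), Jij in J.items():
        adj[i].append((j, Jij))
        adj[j].append((i, Jij))

    def msg(i, parent):
        # msg(i, parent)[s] = min energy of i's subtree given spin s at i
        out = {s: h[i] * s for s in (-1, 1)}
        for j, Jij in adj[i]:
            if j == parent:
                continue
            mj = msg(j, i)  # child table, computed once per child
            for s in (-1, 1):
                out[s] += min(Jij * s * t + mj[t] for t in (-1, 1))
        return out

    return min(msg(root, None).values())

# path 1-2-3 with one ferromagnetic and one antiferromagnetic coupling
h = {1: 1.0, 2: 0.0, 3: -1.0}
J = {(1, 2): -1.0, (2, 3): 1.0}
print(tree_ising_min(h, J, root=1))  # -4.0
```

HFS repeats this kind of exact subproblem solve over larger low-treewidth subgraphs, holding the remaining variables fixed.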
Integer Programming:
Previous works first considered integer quadratic programming [27] and quickly moved to integer linear programming [53, 54] as a solution method. The mathematical programming survey [55] provides a useful overview of the advantages and disadvantages of various integer programming (IP) formulations. Based on some preliminary experiments with different formulations, this work focuses on the following integer linear programming formulation of the Ising model, transformed into the Boolean variable space:
min  Σ_{(i,j) ∈ E} c_ij x_ij + Σ_{i ∈ N} c_i x_i + c    (12a)
s.t.:
x_ij ≥ x_i + x_j - 1,  x_ij ≤ x_i,  x_ij ≤ x_j  ∀ (i, j) ∈ E;  x_i ∈ {0, 1}  ∀ i ∈ N    (12b)
where the application of (8) leads to,
c_ij = 4 J_ij  ∀ (i, j) ∈ E    (13a)
c_i = 2 h_i - 2 Σ_{(i,j) ∈ E} J_ij - 2 Σ_{(j,i) ∈ E} J_ji  ∀ i ∈ N    (13b)
c = Σ_{(i,j) ∈ E} J_ij - Σ_{i ∈ N} h_i    (13c)
In this formulation, the binary quadratic program defined in (10) is converted to a binary linear program by lifting each variable product x_i x_j into a new variable x_ij and adding linear constraints to capture the conjunction x_ij = x_i x_j. Preliminary experiments in this work confirmed the findings of [55]: this binary linear program formulation performs best on sparse graphs, such as the hardware graphs of current IPUs.
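The correctness of this lifting rests on the fact that, at binary points, the three linear constraints admit exactly one value of x_ij, namely the product x_i x_j. This can be checked exhaustively:

```python
from itertools import product

# The standard linearization of a product of binaries: the constraints
# x_ij >= x_i + x_j - 1, x_ij <= x_i, x_ij <= x_j pin x_ij to x_i * x_j.
def feasible(xi, xj, xij):
    return xij >= xi + xj - 1 and xij <= xi and xij <= xj

for xi, xj in product((0, 1), repeat=2):
    sols = [xij for xij in (0, 1) if feasible(xi, xj, xij)]
    assert sols == [xi * xj]
print("linearization is exact on binary points")
```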
Adiabatic Quantum Computation:
An adiabatic quantum computation (AQC) [56] is a method for solving an Ising model via a quantum annealing process [57]. This solution method has two notable traits: (1) the AQC dynamical process features quantum tunneling [58], which can help it escape from local minima; and (2) it can be implemented in hardware (e.g., the D-Wave IPU).
Quantum Monte Carlo:
Quantum Monte Carlo (QMC) is a probabilistic algorithm that can be used for simulating large quantum systems. QMC is a very computationally intensive method [59, 33] and thus the primary use of QMC is not to compare runtime performance but rather to quantify the possible value of an adiabatic quantum computation that could be implemented in hardware at some point in the future.
4.3 Overview
To briefly summarize a variety of benchmarking studies, Table 1 provides an overview of the problems and solution methods previous works have considered. Although there was some initial interest in integer programming models [27, 53, 54], more recent IPU benchmark studies have not considered these solution methods and have focused exclusively on heuristic methods. Furthermore, there are notable inconsistencies in the types of problems being considered. As indicated by the last row in Table 1, the goal of this work is to revisit the use of IP methods for benchmarking IPUs and to conduct a thorough, side-by-side study of all problem classes and solution methods proposed in the literature. Note that, because this paper focuses exclusively on the quality and runtime of the Ising model optimization task (2), the study of SA and QMC is omitted, as these methods provide no additional insights over the LNS [48] and AQC [33] methods.
5 A Study of Established Methods
This section conducts an in-depth computational study of the established instance generation algorithms and solution methods for IPUs. The first goal of this study is to understand which classes of problems and parameters are the most challenging, as such cases are preferable for benchmarking. The second goal is to conduct a validation study of a D-Wave 2X IPU, to clearly quantify its solution quality and runtime performance. This computational study is divided into two phases. First, a broad parameter sweep of all possible instance generation algorithms is conducted, and a commercial mixed-integer programming solver is used to filter out the easy problem classes and parameter settings. Second, after the most challenging problems have been identified, a detailed study is conducted to compare and contrast the three disparate solution methods: IP, LNS, and AQC.
Throughout this section, the following notations are used to describe the algorithm results: UB denotes the objective value of the best feasible solution produced by the algorithm within the time limit; LB denotes the value of the best lower bound produced by the algorithm within the time limit; T denotes the algorithm runtime in seconds (for MIP solvers, the runtime includes the computation of the optimality certificate); TO denotes that the algorithm hit a time limit of 600 seconds; μ denotes the mean of a collection of values; σ denotes the standard deviation of a collection of values; and max denotes the maximum of a collection of values.

Computation Environment:
The classical computing algorithms are run on HPE ProLiant XL170r servers with dual Intel 2.10GHz CPUs and 128GB of memory. After a preliminary comparison of CPLEX 12.7 [61] and Gurobi 7.0 [62], no significant difference was observed. Thus, Gurobi was selected as the commercial Mixed-Integer Programming (MIP) solver and was configured to use one thread. The highly specialized and optimized HFS algorithm [52] is used as the LNS-based heuristic and also uses one thread.
The IPU computation is conducted on a D-Wave 2X [63] adiabatic quantum computer (AQC). This computer has a 12-by-12 chimera cell topology with random omissions; in total, it has 1095 spins and 3061 couplers, and its effective temperature varies depending on the problem being solved [64, 65]. Unless otherwise noted, the AQC is configured to produce 10,000 samples using a 5-microsecond annealing time per sample and a random gauge transformation every 100 samples. The best sample is used in the computation of the upper bound value. The reported runtime of the AQC reflects the amount of time used on the IPU hardware; it does not include the overhead of communication or scheduling of the computation, which adds about three seconds.
All of the software used in this benchmarking study is available as open-source: bqpjson, a language-independent JSON-based Ising model exchange format designed for benchmarking IPU hardware; dwig, algorithms for IPU instance generation; and bqpsolvers, tools for encoding bqpjson data into various optimization formulations and solvers. (The source code is available at https://github.com/lanl-ansi/ under the repository names bqpjson, dwig, and bqpsolvers.)
5.1 Identifying Challenging Cases
Broad Parameter Sweep:
In this first experiment, we conduct a parameter sweep of all the inputs to the problem generation algorithms described in Section 4.1. Table 2 provides a summary of the input parameters for each problem class. The values of each parameter are encoded as a triple of the form (first value, step size, last value). When two parameters are required for a given problem class, the cross product of all parameter values is used. For each problem class and each combination of parameter settings, 250 random problems are generated in order to produce a reasonable estimate of the average difficulty of that configuration. Each problem is generated using all of the decision variables available on the IPU. The computational results of this parameter sweep are summarized in Table 3.

Problem  First Param.  Second Param.
RANk  k  NA
RANFk  k  NA
FLk  α  k
FCLk  α  k
WSCN  sf  wf
The results presented in Table 3 indicate that, at this problem size, all variants of the FL, FCL, and WSCN problems are easy for modern MIP solvers. This is a stark contrast to [33], which reported runtimes of around 10,000 seconds when applying Simulated Annealing to the WSCN problem. Furthermore, this result suggests that these problem classes are not ideal candidates for benchmarking IPUs. In contrast, the RAN and RANF cases consistently hit the runtime limit of the MIP solver, suggesting that these problems are more useful for benchmarking. This result is consistent with a similar observation in the SAT community, where random SAT problems are known to be especially challenging [66, 67]. To get a better understanding of these RAN problem classes, we next perform a detailed study of these problems for various values of the parameter k.
Problem  Cases  μ(|N|)  μ(|E|)  μ(T)  σ(T)  max(T)
RAN  1250  1095  3061  TO  —  TO 
RANF  1250  1095  3061  TO  —  TO 
FL  6944  1008  2126  1.82  1.06  16.80 
FCL  8347  888  2282  4.19  2.81  41.40 
WSCN  30250  949  2313  0.25  0.87  17.90 
The RAN and RANF Problems:
In this second experiment, we focus on the RANk and RANFk problems and conduct a detailed parameter sweep of k. To accurately measure the runtime difficulty of the problem, we also reduce the size of the problem from 1095 variables to 194 variables so that the MIP solver can reliably terminate within a 600-second time limit. The results of this parameter sweep are summarized in Table 4.
The results presented in Table 4 indicate that (1) as the value of k increases, both the RAN and RANF problems become easier; and (2) the RANF problem is easier than the RAN problem. The latter is not surprising, because the additional linear coefficients in the RANF problem break many of the symmetries that exist in the RAN problem. These results suggest that it is sufficient to focus on the RAN1 and RANF1 cases for a more detailed study of IPU performance. This is a serendipitous outcome for IPU benchmarking, because restricting the problem coefficients to {-1, 1} reduces artifacts caused by noise and the numerical precision of the analog hardware.
Problems of Increasing k
k  Cases  |N|  |E|  RANk: μ(T)  σ(T)  max(T)  RANFk: μ(T)  σ(T)  max(T)
1  250  194  528  340.0  195.0  TO  14.10  15.20  82.70 
2  250  194  528  89.3  64.3  481  2.97  3.41  22.70 
3  250  194  528  64.8  28.3  207  1.67  1.48  10.70 
4  250  194  528  58.0  29.5  250  1.25  0.83  6.10 
5  250  194  528  49.0  23.0  131  1.12  0.77  6.98 
6  250  194  528  49.0  22.4  119  1.05  0.59  4.47 
7  250  194  528  45.0  22.8  128  1.04  0.75  7.60 
8  250  194  528  44.8  23.7  121  1.01  0.62  5.43 
9  250  194  528  42.3  22.3  110  0.98  0.60  5.08 
10  250  194  528  39.8  22.1  107  0.91  0.43  3.09 
5.2 An IPU Evaluation using RAN1 and RANF1
Now that the RAN1 and RANF1 problem classes have been identified as the most interesting for IPU benchmarking, we perform two detailed studies on these problems using all three algorithmic approaches (i.e., AQC, LNS, and MIP). The first study focuses on the scalability trends of these solution methods as the problem size increases, whereas the second study focuses on a runtime analysis of the largest cases that can be evaluated on the D-Wave 2X IPU hardware.
Scalability Analysis:
In this experiment, we increase the problem size gradually to understand the scalability profile of each of the solution methods (AQC, LNS, and MIP). The results are summarized in Table 5. Focusing on the smaller problems, where the MIP solver provides an optimality proof, we observe that both the AQC and the LNS methods find the optimal solution in all of the sampled test cases, suggesting that both heuristic solution methods are of high quality. Focusing on the larger problems, we observe that, in just a few seconds, both AQC and LNS find feasible solutions that are of higher quality than what the MIP solver can find in 600 seconds. This suggests that both methods are producing high-quality solutions at this scale. As the problem size grows, a slight quality discrepancy emerges favoring LNS over AQC; however, this discrepancy in average solution quality is less than 1% of the best known value.
AQC  LNS  MIP
Cases  |N|  |E|  μ(UB)  μ(T)  μ(UB)  μ(T)  μ(UB)  μ(LB)  μ(T)
RAN1 Problems of Increasing Size  
250  30  70  44  3.53  44  10  44  44  0.05 
250  69  176  110  3.57  110  10  110  110  0.48 
250  122  321  199  3.60  199  10  199  199  15.90 
250  194  528  325  3.64  325  10  325  327  340.00 
250  275  751  462  3.68  462  10  461  483  TO 
250  375  1030  633  3.73  633  10  629  673  TO 
250  486  1337  821  3.77  822  10  814  881  TO 
250  613  1689  1038  3.77  1039  10  1021  1116  TO 
250  761  2114  1296  3.76  1297  10  1262  1401  TO 
250  923  2578  1574  3.77  1576  10  1525  1713  TO 
250  1095  3061  1870  3.80  1873  10  1806  2045  TO 
RANF1 Problems of Increasing Size  
250  30  70  53  3.53  53  10  53  53  0.02 
250  69  176  127  3.56  127  10  127  127  0.13 
250  122  321  229  3.61  229  10  229  229  0.67 
250  194  528  370  3.66  370  10  370  370  14.10 
250  275  751  526  3.71  526  10  526  527  128.00 
250  375  1030  719  3.76  719  10  719  727  471.00 
250  486  1337  934  3.81  934  10  933  954  588.00 
250  613  1689  1179  3.82  1179  10  1178  1211  TO 
250  761  2114  1472  3.82  1472  10  1470  1520  TO 
250  923  2578  1786  3.82  1787  10  1778  1856  TO 
250  1095  3061  2121  3.86  2122  10  2110  2212  TO 
Detailed Runtime Analysis:
Given that both the AQC and the LNS solution methods have very similar solution qualities, it is prudent to perform a detailed runtime study to understand the quality vs. runtime tradeoff. To develop a runtime profile of the LNS algorithm, the solver’s runtime limit is set to values ranging from 0.01 to 10.00 seconds. In the case of the AQC algorithm, the number of requested samples is set to values ranging from 10 to 10,000, which has the effect of scaling the runtime of the IPU process. The results of this study are summarized in Figure 2. Note that the stochastic sampling nature of the IPU results in some noise for small numbers of samples. However, the overall trend is clear.
The results presented in Figure 2 further illustrate that (1) the RAN problem class is more challenging than the RANF problem class, and (2) regardless of the runtime configuration used, the LNS heuristic slightly outperforms the AQC; however, the average solution qualities are always within 1% of each other. Combining all of the results from this section provides strong validation that, even if the D-Wave 2X IPU cannot guarantee a globally optimal solution, it reliably produces high-quality solutions across a wide range of inputs.
6 Conclusion
This work introduces the idea of Ising processing units (IPUs) as a computational abstraction for emerging physical devices that optimize Ising models. It highlights a number of unexpected challenges in using such devices and proposes commercial mixed-integer programming solvers as a tool to help improve validation and benchmarking.
A baseline study of the D-Wave 2X IPU suggests that hardware-specific instance generation is a reasonable strategy for benchmarking IPUs. However, finding a class of challenging randomly generated test cases is non-trivial and remains an open problem for future work. The study verified that at least one commercially available IPU is already comparable to current state-of-the-art classical methods on some classes of problems (e.g., RAN and RANF). Consequently, as this IPU's hardware increases in size, one would expect it to outperform state-of-the-art classical methods because of its parallel computational nature and to become a valuable co-processor in hybrid optimization algorithms.
Overall, we find that the emergence of IPUs is an interesting development for the optimization community and warrants continued study. Considerable work remains to determine new challenging classes of test cases for validating and benchmarking IPUs. We hope that the technology overview and the validation study conducted in this work will assist the optimization research community in exploring IPU hardware platforms and will accelerate the development of hybrid algorithms that can effectively leverage these emerging technologies.
References
 [1] Johnson, M.W., Amin, M.H.S., Gildert, S., Lanting, T., Hamze, F., Dickson, N., Harris, R., Berkley, A.J., Johansson, J., Bunyk, P., Chapple, E.M., Enderud, C., Hilton, J.P., Karimi, K., Ladizinsky, E., Ladizinsky, N., Oh, T., Perminov, I., Rich, C., Thom, M.C., Tolkacheva, E., Truncik, C.J.S., Uchaikin, S., Wang, J., Wilson, B., Rose, G.: Quantum annealing with manufactured spins. Nature 473(7346) (May 2011) 194–198
 [2] International Business Machines Corporation: Ibm building first universal quantum computers for business and science. Published online at https://www03.ibm.com/press/us/en/pressrelease/51740.wss (2017) Accessed: 04/28/2017.
 [3] Mohseni, M., Read, P., Neven, H., Boixo, S., Denchev, V., Babbush, R., Fowler, A., Smelyanskiy, V., Martinis, J.: Commercialize quantum technologies in five years. Nature 543 (2017) 171–174
 [4] Chmielewski, M., Amini, J., Hudek, K., Kim, J., Mizrahi, J., Monroe, C., Wright, K., Moehring, D.: Cloud-based trapped-ion quantum computing. In: APS Meeting Abstracts. (2018)
 [5] Yamaoka, M., Yoshimura, C., Hayashi, M., Okuyama, T., Aoki, H., Mizuno, H.: 24.3 20k-spin ising chip for combinational optimization problem with cmos annealing. In: 2015 IEEE International Solid-State Circuits Conference (ISSCC) Digest of Technical Papers. (Feb 2015) 1–3
 [6] Yoshimura, C., Yamaoka, M., Aoki, H., Mizuno, H.: Spatial computing architecture using randomness of memory cell stability under voltage control. In: 2013 European Conference on Circuit Theory and Design (ECCTD). (Sept 2013) 1–4
 [7] Fujitsu: Digital annealer. Published online at http://www.fujitsu.com/global/digitalannealer/ (May 2018) Accessed: 02/26/2019.
 [8] Modha, D.S.: Introducing a brain-inspired computer. Published online at http://www.research.ibm.com/articles/brainchip.shtml (2017) Accessed: 04/28/2017.
 [9] Davies, M., Srinivasa, N., Lin, T., Chinya, G., Cao, Y., Choday, S.H., Dimou, G., Joshi, P., Imam, N., Jain, S., Liao, Y., Lin, C., Lines, A., Liu, R., Mathaikutty, D., McCoy, S., Paul, A., Tse, J., Venkataramanan, G., Weng, Y., Wild, A., Yang, Y., Wang, H.: Loihi: A neuromorphic many-core processor with on-chip learning. IEEE Micro 38(1) (January 2018) 82–99
 [10] Schuman, C.D., Potok, T.E., Patton, R.M., Birdwell, J.D., Dean, M.E., Rose, G.S., Plank, J.S.: A survey of neuromorphic computing and neural networks in hardware. arXiv preprint arXiv:1705.06963 (2017)
 [11] Caravelli, F.: Asymptotic behavior of memristive circuits and combinatorial optimization (2017)
 [12] Traversa, F.L., Di Ventra, M.: Memcomputing integer linear programming (2018)
 [13] McMahon, P.L., Marandi, A., Haribara, Y., Hamerly, R., Langrock, C., Tamate, S., Inagaki, T., Takesue, H., Utsunomiya, S., Aihara, K., et al.: A fully-programmable 100-spin coherent ising machine with all-to-all connections. Science (2016) aah5178
 [14] Inagaki, T., Haribara, Y., Igarashi, K., Sonobe, T., Tamate, S., Honjo, T., Marandi, A., McMahon, P.L., Umeki, T., Enbutsu, K., Tadanaga, O., Takenouchi, H., Aihara, K., Kawarabayashi, K.i., Inoue, K., Utsunomiya, S., Takesue, H.: A coherent ising machine for 2000-node optimization problems. Science 354(6312) (2016) 603–606
 [15] Kielpinski, D., Bose, R., Pelc, J., Vaerenbergh, T.V., Mendoza, G., Tezak, N., Beausoleil, R.G.: Information processing with large-scale optical integrated circuits. In: 2016 IEEE International Conference on Rebooting Computing (ICRC). (Oct 2016) 1–4
 [16] Feynman, R.P.: Simulating physics with computers. International Journal of Theoretical Physics 21(6) (1982) 467–488
 [17] Brush, S.G.: History of the lenz-ising model. Rev. Modern Phys. 39 (Oct 1967) 883–893
 [18] D-Wave Systems Inc.: Customers. Published online at https://www.dwavesys.com/ourcompany/customers (2017) Accessed: 04/28/2017.
 [19] Haribara, Y., Utsunomiya, S., Yamamoto, Y. In: A Coherent Ising Machine for MAX-CUT Problems: Performance Evaluation against Semidefinite Programming and Simulated Annealing. Springer Japan, Tokyo (2016) 251–262
 [20] Lucas, A.: Ising formulations of many NP problems. Frontiers in Physics 2 (2014) 5
 [21] Bian, Z., Chudak, F., Israel, R.B., Lackey, B., Macready, W.G., Roy, A.: Mapping constrained optimization problems to quantum annealing with application to fault diagnosis. Frontiers in ICT 3 (2016) 14
 [22] Bian, Z., Chudak, F., Israel, R., Lackey, B., Macready, W.G., Roy, A.: Discrete optimization using quantum annealing on sparse ising models. Frontiers in Physics 2 (2014) 56
 [23] Rieffel, E.G., Venturelli, D., O’Gorman, B., Do, M.B., Prystay, E.M., Smelyanskiy, V.N.: A case study in programming a quantum annealer for hard operational planning problems. Quantum Information Processing 14(1) (2015) 1–36
 [24] Venturelli, D., Marchand, D.J.J., Rojo, G.: Quantum annealing implementation of jobshop scheduling (2015)
 [25] de Givry, S., Larrosa, J., Meseguer, P., Schiex, T. In: Solving MaxSAT as Weighted CSP. Springer Berlin Heidelberg, Berlin, Heidelberg (2003) 363–376
 [26] Morgado, A., Heras, F., Liffiton, M., Planes, J., Marques-Silva, J.: Iterative and core-guided maxsat solving: A survey and assessment. Constraints 18(4) (2013) 478–534
 [27] McGeoch, C.C., Wang, C.: Experimental evaluation of an adiabiatic quantum system for combinatorial optimization. In: Proceedings of the ACM International Conference on Computing Frontiers. CF ’13, New York, NY, USA, ACM (2013) 23:1–23:11
 [28] Nieuwenhuis, R. In: The IntSat Method for Integer Linear Programming. Springer International Publishing, Cham (2014) 574–589
 [29] Zdeborova, L., Krzakala, F.: Statistical physics of inference: thresholds and algorithms. Advances in Physics 65(5) (2016) 453–552
 [30] Bian, Z., Chudak, F., Macready, W.G., Rose, G.: The ising model: teaching an old problem new tricks. Published online at https://www.dwavesys.com/sites/default/files/weightedmaxsat_v2.pdf (2010) Accessed: 04/28/2017.
 [31] Benedetti, M., Realpe-Gómez, J., Biswas, R., Perdomo-Ortiz, A.: Estimation of effective temperatures in quantum annealers for sampling applications: A case study with possible applications in deep learning. Phys. Rev. A 94 (Aug 2016) 022308
 [32] Boixo, S., Ronnow, T.F., Isakov, S.V., Wang, Z., Wecker, D., Lidar, D.A., Martinis, J.M., Troyer, M.: Evidence for quantum annealing with more than one hundred qubits. Nat Phys 10(3) (Mar 2014) 218–224
 [33] Denchev, V.S., Boixo, S., Isakov, S.V., Ding, N., Babbush, R., Smelyanskiy, V., Martinis, J., Neven, H.: What is the computational value of finite-range tunneling? Phys. Rev. X 6 (Aug 2016) 031015
 [34] King, J., Yarkoni, S., Nevisi, M.M., Hilton, J.P., McGeoch, C.C.: Benchmarking a quantum annealing processor with the time-to-target metric. arXiv preprint arXiv:1508.05087 (2015)
 [35] Boothby, T., King, A.D., Roy, A.: Fast clique minor generation in chimera qubit connectivity graphs. Quantum Information Processing 15(1) (January 2016) 495–508
 [36] Cai, J., Macready, W.G., Roy, A.: A practical heuristic for finding graph minors (2014)
 [37] Klymko, C., Sullivan, B.D., Humble, T.S.: Adiabatic quantum programming: minor embedding with hard faults. Quantum Information Processing 13(3) (2014) 709–729
 [38] Koch, T., Achterberg, T., Andersen, E., Bastert, O., Berthold, T., Bixby, R., Danna, E., Gamrath, G., Gleixner, A., Heinz, S., Lodi, A., Mittelmann, H., Ralphs, T., Salvagnin, D., Steffy, D., Wolter, K.: Miplib 2010: Mixed integer programming library version 5. Mathematical Programming Computation 3(2) (2011) 103–163
 [39] Gent, I.P., Walsh, T. In: CSPlib: A Benchmark Library for Constraints. Springer Berlin Heidelberg, Berlin, Heidelberg (1999) 480–481
 [40] Hoos, H.H., Stutzle, T.: Satlib: An online resource for research on sat (2000)
 [41] Coffrin, C., Nagarajan, H., Bent, R.: Challenges and Successes of Solving Binary Quadratic Programming Benchmarks on the DW2X QPU. Technical report, Los Alamos National Laboratory (LANL) (2016)
 [42] Hen, I., Job, J., Albash, T., Rønnow, T.F., Troyer, M., Lidar, D.A.: Probing for quantum speedup in spin-glass problems with planted solutions. Phys. Rev. A 92 (Oct 2015) 042325
 [43] King, J., Yarkoni, S., Raymond, J., Ozfidan, I., King, A.D., Nevisi, M.M., Hilton, J.P., McGeoch, C.C.: Quantum annealing amid local ruggedness and global frustration (2017)
 [44] Mandrà, S., Zhu, Z., Wang, W., Perdomo-Ortiz, A., Katzgraber, H.G.: Strengths and weaknesses of weak-strong cluster problems: A detailed overview of state-of-the-art classical heuristics versus quantum approaches. Phys. Rev. A 94 (Aug 2016) 022337
 [45] Mandrà, S., Katzgraber, H.G., Thomas, C.: The pitfalls of planar spin-glass benchmarks: Raising the bar for quantum annealers (again) (2017)
 [46] Aaronson, S.: D-wave: Truth finally starts to emerge. Published online at http://www.scottaaronson.com/blog/?p=1400 (May 2013) Accessed: 04/28/2017.
 [47] Aaronson, S.: Insert d-wave post here. Published online at http://www.scottaaronson.com/blog/?p=3192 (March 2017) Accessed: 04/28/2017.
 [48] King, A.D., Lanting, T., Harris, R.: Performance of a quantum annealer on range-limited constraint satisfaction problems. arXiv preprint arXiv:1502.02098 (2015)
 [49] Kirkpatrick, S., Gelatt, C.D., Vecchi, M.P.: Optimization by simulated annealing. Science 220(4598) (1983) 671–680
 [50] Hamze, F., de Freitas, N.: From fields to trees. In: Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence. UAI ’04, Arlington, Virginia, United States, AUAI Press (2004) 243–250
 [51] Selby, A.: Efficient subgraph-based sampling of ising-type models with frustration (2014)
 [52] Selby, A.: QUBO-Chimera. https://github.com/alex1770/QUBOChimera (2013)
 [53] Puget, J.F.: D-wave vs cplex comparison. part 2: qubo. Published online at https://www.ibm.com/developerworks/community/blogs/jfp/entry/d_wave_vs_cplex_comparison_part_2_qubo (2013) Accessed: 11/28/2018.
 [54] Dash, S.: A note on qubo instances defined on chimera graphs. arXiv preprint arXiv:1306.1202 (2013)
 [55] Billionnet, A., Elloumi, S.: Using a mixed integer quadratic programming solver for the unconstrained quadratic 0-1 problem. Mathematical Programming 109(1) (2007) 55–68
 [56] Farhi, E., Goldstone, J., Gutmann, S., Sipser, M.: Quantum computation by adiabatic evolution (2018)
 [57] Kadowaki, T., Nishimori, H.: Quantum annealing in the transverse ising model. Phys. Rev. E 58 (Nov 1998) 5355–5363
 [58] Farhi, E., Goldstone, J., Gutmann, S., Lapan, J., Lundgren, A., Preda, D.: A quantum adiabatic evolution algorithm applied to random instances of an np-complete problem. Science 292(5516) (2001) 472–475
 [59] Nightingale, M.P., Umrigar, C.J.: Quantum Monte Carlo Methods in Physics and Chemistry (Nato Science Series C:). Springer (1998)
 [60] Parekh, O., Wendt, J., Shulenburger, L., Landahl, A., Moussa, J., Aidun, J.: Benchmarking adiabatic quantum optimization for complex network analysis (2015)
 [61] IBM ILOG CPLEX Optimizer. https://www.ibm.com/analytics/cplexoptimizer (Last 2010)
 [62] Gurobi Optimization, Inc.: Gurobi optimizer reference manual. Published online at http://www.gurobi.com (2014)
 [63] D-Wave Systems Inc.: The d-wave 2x quantum computer technology overview. Published online at https://www.dwavesys.com/sites/default/files/DWave%202X%20Tech%20Collateral_0915F.pdf (2015) Accessed: 04/28/2017.
 [64] Vuffray, M., Misra, S., Lokhov, A., Chertkov, M.: Interaction screening: Efficient and sample-optimal learning of ising models. In Lee, D.D., Sugiyama, M., Luxburg, U.V., Guyon, I., Garnett, R., eds.: Advances in Neural Information Processing Systems 29. Curran Associates, Inc. (2016) 2595–2603
 [65] Lokhov, A.Y., Vuffray, M., Misra, S., Chertkov, M.: Optimal structure and parameter learning of ising models (2016)
 [66] Mitchell, D., Selman, B., Levesque, H.: Hard and easy distributions of sat problems. In: Proceedings of the Tenth National Conference on Artificial Intelligence. AAAI’92, AAAI Press (1992) 459–465
 [67] Balyo, T., Heule, M.J.H., Jarvisalo, M.: Sat competition 2016: Recent developments. In: Proceedings of the Thirty-First National Conference on Artificial Intelligence. AAAI'17, AAAI Press (2017) 5061–5063