QASA
Quantum Annealing Single-qubit Assessment (QASA)
As a wide variety of quantum computing platforms become available, methods for assessing and comparing the performance of these devices are of increasing interest and importance. Inspired by the success of single-qubit error rate computations for tracking the progress of gate-based quantum computers, this work proposes a Quantum Annealing Single-qubit Assessment (QASA) protocol for quantifying the performance of individual qubits in quantum annealing computers. The proposed protocol scales to large quantum annealers with thousands of qubits and provides unique insights into the distribution of qubit properties within a particular hardware device. The efficacy of the QASA protocol is demonstrated by analyzing the properties of a D-Wave 2000Q system, revealing unanticipated correlations in the qubit performance of that device. A study repeating the QASA protocol at different annealing times highlights how the method can be utilized to understand the impact of annealing parameters on qubit performance. Overall, the proposed QASA protocol provides a useful tool for assessing the performance of current and emerging quantum annealing devices.
In the current era of Noisy Intermediate-Scale Quantum (NISQ) [Preskill2018quantumcomputingin] devices, measuring and tracking changes in the fidelity of quantum hardware platforms is essential to understanding the limitations of these devices and quantifying progress as these platforms continue to improve. Measuring the performance of gate-based quantum computers (QC) has been studied extensively under the topics of quantum characterization, verification, and validation (QCVV) [Eisert2020]. The scope of QCVV is broad, ranging from testing individual quantum operations (e.g., error rates of one- and two-qubit gates [Wright2019]) and verifying small circuits (e.g., Randomized Benchmarking [PhysRevLett.106.180504, PhysRevA.77.012307], Gate Set Tomography [2009.07301]) to full system-level protocols (e.g., quantum volume estimation [PhysRevA.100.032328], random quantum circuits [Boixo2018]). Over the years these QCVV tools have become an invaluable foundation for benchmarking and measuring the progress of quantum processors [ibm_qv], culminating in a quantum supremacy demonstration in 2019 [Arute2019].
Interestingly, this large body of QCVV work cannot usually be applied to the assessment of quantum annealing (QA) computers, such as the quantum devices developed by D-Wave Systems [Johnson2011, dwave_docs]. The fundamental challenge in conducting characterization, verification, and validation of quantum annealing devices (QAVV) is that available hardware platforms only allow measuring the state of the system in a fixed basis (the so-called computational $z$-basis) and only at the completion of a specified annealing protocol. Consequently, the QA user can only observe a fairly limited projection of the quantum state that occurs during the hardware's computation, which raises a variety of challenges for how to best conduct QAVV.
QAVV efforts began in earnest around 2010 with a number of quantum hardware validation efforts that were successful in demonstrating quantum state evolution in small systems with 8 to 20 qubits [PhysRevB.81.134510, PhysRevX.4.021041, Boixo2013, PhysRevB.87.020502]. (Note that several of these works require hardware measurements that are not available to the users of current QA platforms.) After these initial efforts, QAVV has focused almost exclusively on system-level benchmarks that consider transverse field Ising models with 100s to 1000s of qubits [Job_2018]. These system-level metrics of QA hardware platforms have generally shown strong qualitative agreement with idealized QA simulations [Boixo2014, PhysRevA.91.042314, Harris162, King2018]. However, identifying the root causes of the deviations from idealized QA simulations remains an open research topic.
This work is motivated by the observation that QAVV is lacking in component-level metrics that can be used for characterization, verification, and validation of individual components of large QA hardware platforms. Taking inspiration from the single-qubit error rate metrics developed in the QCVV literature, this work highlights the usefulness of conducting single-qubit fidelity assessments of individual qubits in a QA platform. The proposed protocol is able to extract key metrics of individual qubits, such as their effective temperature, noise, and bias, and is executed in parallel for all of the qubits, providing insights into the variability of qubit properties across an entire QA device. Systematically measuring these fine-grained single-qubit properties can assist in the calibration of idealized QA simulations that seek to emulate specific hardware devices and provides several key metrics for tracking technical improvements of QA hardware platforms over time (e.g., in relation to effective temperature and qubit noise properties).
This work begins by introducing the foundations of quantum annealing for a single qubit in Section 2 and derives an effective single-qubit model that can be reconstructed from the observations of a particular hardware device. Leveraging this building block, we then propose a full-chip single-qubit assessment protocol for quantum annealing in Section 3 and illustrate how such a protocol can uncover some surprising trends in system-wide qubit performance. A brief study on different annealing times highlights how qubit performance can be impacted by annealing procedure in Section 4. Section 5 concludes the paper with a discussion of the usefulness of the proposed protocol and future work.
The foundation of current quantum annealing platforms is the Ising model Hamiltonian [gallavotti2013statistical],
\[ \hat{H} = \sum_{i \in N} h_i \hat{\sigma}^z_i + \sum_{(i,j) \in E} J_{ij} \hat{\sigma}^z_i \hat{\sigma}^z_j \tag{1} \]
where $N$ is the set of qubits and $E$ is the set of programmable interactions between qubits. The elementary unit of this model is a qubit $i$ described by the standard vector of Pauli matrices $\hat{\sigma}_i = (\hat{\sigma}^x_i, \hat{\sigma}^y_i, \hat{\sigma}^z_i)$ along the three spatial directions. The outcome of the quantum annealing process is specified by a binary variable $\sigma_i$ that takes a value $-1$ or $1$ and corresponds to the observation of the spin projection in the computational basis denoted by $\hat{\sigma}^z_i$. The final state of each qubit is influenced by user-specified values of local fields $h_i$ and two-qubit couplers $J_{ij}$. This model is interesting because it can readily encode challenging computational problems arising in the study of magnetic materials, machine learning, and optimization
[hopfield1982neural, panjwani1995markov, lokhov2018optimal, Kochenberger2014]. The quantum annealing protocol strives to find the low-energy assignments to a user-specified problem by conducting an analog interpolation process of the following transverse field Ising model Hamiltonian:
\[ \hat{H}(s) = -A(s) \sum_{i \in N} \hat{\sigma}^x_i + B(s) \Big( \sum_{i \in N} h_i \hat{\sigma}^z_i + \sum_{(i,j) \in E} J_{ij} \hat{\sigma}^z_i \hat{\sigma}^z_j \Big) \tag{2} \]
The interpolation process starts with $s = 0$ and ends with $s = 1$. The two interpolation functions $A(s)$ and $B(s)$ are designed such that $A(0) \gg B(0)$ and $A(1) \ll B(1)$, that is, starting with a Hamiltonian dominated by the transverse field term and slowly transitioning to a Hamiltonian dominated by the Ising term. In an idealized setting, when this transition process is sufficiently slow, the quantum annealing is referred to as adiabatic quantum computation. The adiabatic theorem states that if the interpolation is sufficiently slow and the quantum system is isolated, the QA protocol will always find the ground state (i.e., the optimal solution) of the problem [quant-ph-0001106, PhysRevE.58.5355]. However, in existing QA hardware platforms, a wide variety of non-ideal properties can impact the results of a QA computation [PhysRevA.91.062320, Boixo2016, Smirnov_2018]. In particular, the D-Wave hardware documentation discusses five known sources of deviations from an ideal QA system, called integrated control errors (ICE) [dwave_docs]: background susceptibility, flux noise, DAC quantization, I/O system effects, and variable scale across qubits.
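For intuition, the interpolation property can be illustrated with simple linear envelopes. These functional forms are purely illustrative assumptions; real annealer schedules are device-specific curves:

```python
# Illustrative sketch of interpolation functions A(s) and B(s). The linear
# forms and the 10.0 scale are assumptions for illustration only; real
# annealing schedules are device-specific, non-linear curves.

def A(s: float) -> float:
    """Transverse-field envelope: dominant at s = 0, vanishing at s = 1."""
    return 10.0 * (1.0 - s)

def B(s: float) -> float:
    """Ising-term envelope: zero at s = 0, dominant at s = 1."""
    return 10.0 * s

# The defining property of an annealing schedule holds by construction:
assert A(0.0) > B(0.0)  # transverse field dominates at the start
assert A(1.0) < B(1.0)  # Ising term dominates at the end
```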
In the spirit of conducting QAVV for the smallest possible component of a QA device, this work considers a variant of QA that is restricted to a single qubit. Specifically, it considers a system of the form
\[ \hat{H}(s) = -A(s)\, \hat{\sigma}^x + B(s)\, h\, \hat{\sigma}^z \tag{3} \]
Despite the simplicity of this model, the imperfections of real-world QA platforms make it a useful tool for assessing the performance of individual qubits in practice.
The measurement outcomes of a single-qubit quantum annealing experiment take the form of a probability distribution over the two possible observable projections $\sigma = \pm 1$. This probability distribution can be fully characterized by a single parameter $f_{\mathrm{eff}}$, coined the effective field, in the following manner:
\[ P(\sigma) = \frac{e^{f_{\mathrm{eff}}\, \sigma}}{2 \cosh(f_{\mathrm{eff}})} \tag{4} \]
The value of $f_{\mathrm{eff}}$ depends on the experiment's input parameters and is, in particular, a function of the user input field $h$. In the case of a classical magnet placed into a persistent external magnetic field in thermal equilibrium at temperature $\beta^{-1}$, one would observe a linear relationship between the output and input fields of the form $f_{\mathrm{eff}} = -\beta h$. This linear mapping is called a classical Gibbs distribution for a single spin. However, it was observed in [2012.08827] that the ICE effects of available QA hardware platforms result in an input/output relationship that is more complicated and is better described by a mixture of quantum Gibbs distributions, a generalization of the classical counterpart. The derivation provided in [2012.08827] proposes the following mixture of canonical density matrices,
\[ \rho = \frac{1}{2} \sum_{\xi = \pm 1} \frac{e^{-\beta \hat{H}_\xi}}{\mathrm{Tr}\big[ e^{-\beta \hat{H}_\xi} \big]}, \qquad \hat{H}_\xi = (h + b + \xi w)\, \hat{\sigma}^z + \gamma h\, \hat{\sigma}^x \tag{5} \]
which describes a quantum spin in thermal equilibrium at temperature $\beta^{-1}$ subject to a magnetic field with an adjustable component $h$, uncontrollable components for the bias $b$ and uniform binary noise of magnitude $w$, and a transverse field of magnitude $\gamma h$ that is proportional to the input field. According to the density matrix from Eq. (5), the expected value of observing the spin along the $z$-component is given by the standard quantum relation $\langle \sigma \rangle = \mathrm{Tr}(\rho\, \hat{\sigma}^z)$. Combining this expression with Eq. (4) results in the input/output field model,
\[ f_{\mathrm{eff}}(h) = \tanh^{-1}\!\left[ -\frac{1}{2} \sum_{\xi = \pm 1} \frac{h + b + \xi w}{r_\xi} \tanh(\beta r_\xi) \right], \qquad r_\xi = \sqrt{(h + b + \xi w)^2 + (\gamma h)^2} \tag{6} \]
which depends on four parameters: the inverse temperature $\beta$, the transverse field gain $\gamma$, the uncontrollable field bias $b$, and the standard deviation of the noise $w$. Notice that the model in Eq. (6) reduces to the simple classical Gibbs relationship when $\gamma = b = w = 0$.
Given that available QA hardware platforms only allow users to observe the system state in the computational basis (i.e., $\hat{\sigma}^z$), it is not immediately obvious how one might recover the four model parameters (i.e., $\beta, \gamma, b, w$) proposed in Eq. (6). A key insight from [2012.08827] is that these four parameters can be inferred from the signatures that appear when running single-qubit annealing for different values of the input field $h$, as described in this section. At a high level, the procedure consists of: (i) selecting a particular set of $h$ values that will be measured on the hardware, denoted by the set $\mathcal{H}$; (ii) collecting $n$ samples from the hardware for each $h \in \mathcal{H}$, which are used to estimate the empirical mean of the qubit's spin, $m(h)$; (iii) using maximum likelihood estimation to recover the best-fit values of $(\beta, \gamma, b, w)$ given the observed relationship between $h$ and the empirical mean of the qubit. The final result of a particular instantiation of this procedure is presented in Figure 1.
In particular, the model parameters proposed in Eq. (6) can be estimated using the standard Maximum Likelihood Estimation (MLE) approach,
\[ (\hat{\beta}, \hat{\gamma}, \hat{b}, \hat{w}) = \operatorname*{arg\,max}_{\beta, \gamma, b, w} \; \mathcal{L}(\beta, \gamma, b, w) \tag{7} \]
where $\mathcal{L}$ is the likelihood function of the four parameters $\beta$, $\gamma$, $b$, and $w$. According to the effective single-qubit model, the probability of observing a configuration $\sigma$ conditioned on a value of the input magnetic field $h$ is given by Eq. (4), where the effective field depends on the model parameters through Eq. (6). It is straightforward to derive the following log-likelihood function for this model,
\[ \mathcal{L}(\beta, \gamma, b, w) = \sum_{h \in \mathcal{H}} n \left[ m(h)\, f_{\mathrm{eff}}(h) - \ln\big( 2 \cosh(f_{\mathrm{eff}}(h)) \big) \right] \tag{8} \]
where $m(h)$ denotes the empirical mean of the spin configuration for the input field $h$. This likelihood function can easily be maximized using established numerical optimization techniques, yielding the best-fit parameters for the model.
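A minimal end-to-end sketch of steps (ii)–(iii): synthetic means m(h) are drawn from the effective single-qubit model, and the four parameters are recovered by maximizing the likelihood. The model form, the optimizer choice (scipy's Nelder-Mead), and all numeric values are illustrative assumptions, not the released Julia implementation:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

def f_eff(h, beta, gamma, b, w):
    """Effective field of the single-qubit model, vectorized over h."""
    mean_sz = np.zeros_like(h)
    for xi in (-1.0, 1.0):
        d = h + b + xi * w
        r = np.hypot(d, gamma * h)
        r = np.where(r == 0.0, 1.0, r)  # guard: the term vanishes when r == 0
        mean_sz += -0.5 * (d / r) * np.tanh(beta * r)
    return np.arctanh(np.clip(mean_sz, -1 + 1e-15, 1 - 1e-15))

# Synthetic experiment: n samples per input field, drawn from the two-outcome
# distribution parameterized by f_eff. True parameter values are made up.
true = np.array([2.0, 0.3, 0.02, 0.05])       # beta, gamma, b, w
h_grid = np.arange(-1.0, 1.0 + 1e-9, 0.025)   # 81 input fields
n = 10_000
p_up = 0.5 * (1.0 + np.tanh(f_eff(h_grid, *true)))
m = 2.0 * rng.binomial(n, p_up) / n - 1.0     # empirical means m(h)

def neg_log_likelihood(params):
    f = f_eff(h_grid, *params)
    # logaddexp(f, -f) == ln(2 cosh f), computed stably
    return -np.sum(n * (m * f - np.logaddexp(f, -f)))

fit = minimize(neg_log_likelihood, x0=[1.0, 0.1, 0.0, 0.1],
               method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 10_000})
beta_hat, gamma_hat, b_hat, w_hat = fit.x
assert abs(beta_hat - true[0]) < 0.15  # beta recovered to good accuracy
```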
It is important to briefly remark on the data requirements for an accurate estimation of the model parameters, which encode subtle variations of $f_{\mathrm{eff}}$, especially at large values. Note that, for a particular value of $h$, one collects $n$ samples to extract a conditional expectation $m(h)$, which corresponds to an empirical effective field $\hat{f}_{\mathrm{eff}}$. For a particular value of $n$, this estimator is subject to an accuracy limit due to finite sampling. For large values of $f_{\mathrm{eff}}$, the probability of observing a qubit misaligned with the effective field decreases exponentially with the field's intensity; see Eq. (4). For example, at large field intensities one may only expect to see one misaligned spin configuration in every million observations, requiring millions of samples for a confident estimation of $f_{\mathrm{eff}}$. It is hence necessary to adjust these data collection requirements to be consistent with the QA hardware's performance. This finite sampling accuracy challenge is addressed in this work by setting $n$ to a level that provides tight confidence intervals around the estimation of $f_{\mathrm{eff}}$ for the particular QA hardware that was considered.
To make this single-qubit model-fitting procedure concrete, Figure 1 provides an example of performing the complete procedure on a representative qubit in a D-Wave 2000Q quantum annealing computer. In this example, input fields $h$ ranging from $-1$ to $1$ are collected and used to recover the effective qubit model parameters using the MLE approach. The tight error bars on the points indicate that the values are recovered to a high accuracy, and the close alignment of the best-fit model (blue line) with the observed data illustrates how the effective single-qubit model is able to replicate the key features of the data. Intuitively, the effective inverse-temperature term $\beta$ sets the general slope of the input/output relationship between $h$ and $f_{\mathrm{eff}}$, the bias term $b$ enables the model to not cross exactly through the origin, the noise term $w$ has the effect of lowering the slope near the origin, and the $\gamma$ term flattens the curve at larger values of $h$. It is important to emphasize that this model (i.e., Eq. (6)) is only an effective model of the output distribution. It is not possible to definitively conclude from this experiment the underlying physical cause of these behaviors; however, it is clear that this model, with a transverse field component, is able to statistically reproduce what experimental observations show.
The central insight of this work is that the data collection procedure required for fitting the effective single-qubit model discussed in the previous section can be executed in parallel for every qubit in a QA hardware device. Consequently, we propose a Quantum Annealing Single-qubit Assessment (QASA) protocol that allows for a detailed characterization of the distribution of effective temperature, offset, noise, and saturation, across an entire QA hardware device. This enables QA users to quickly verify the level of consistency across the hardware’s qubits and to avoid or compensate for non-ideal qubits, adding a new procedure to the QAVV toolbox.
To demonstrate the efficacy of the QASA protocol for QAVV, this work analyzes a D-Wave 2000Q quantum annealer located at Los Alamos National Laboratory, known as DW_2000Q_LANL. This system implements a Chimera graph [6802426], which consists of a grid of unit cells each containing 8 qubits (4 horizontal and 4 vertical), as illustrated in Figure 3. This architecture supports a maximum of 2048 qubits, but this particular system only contains 2032 operational qubits, as the qubit yield in any current D-Wave device is around 99%. The system operates at a mean temperature around 15 mK, although this value fluctuates somewhat over time [PhysRevApplied.8.064025]. Throughout this work the output statistics are collected for 81 input values of $h$ in the range of $-1$ to $1$ with a uniform step size of 0.025. The following annealing parameters are used unless specified otherwise: flux drift compensation is disabled, which prevents automatic corrections to input fields based on a calibration procedure that is run a few times each hour; num_reads is set to 10000, specifying the number of identical executions performed for a single programming cycle of the chip; and the annealing time is set to 1 μs. For each input value, the empirical mean $m(h)$ is estimated from repeated identical executions to ensure high accuracy of the estimation. The following sections discuss the results of running this QAVV protocol on all 2032 qubits in the DW_2000Q_LANL system and show that it reveals unique insights into the characterization of this specific device.
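A sketch of this data-collection setup is shown below. The grid construction follows the parameters just described; the commented Ocean SDK calls are a plausible shape for the collection loop (the solver name and exact parameter spellings are assumptions), not the released script:

```python
import numpy as np

# Reconstruct the input-field grid described above: 81 values of h spanning
# [-1, 1] with a uniform step of 0.025.
h_grid = np.round(np.arange(-1.0, 1.0 + 1e-9, 0.025), 3)
assert len(h_grid) == 81
assert h_grid[0] == -1.0 and h_grid[-1] == 1.0

# Hedged sketch of the hardware collection loop using D-Wave's Ocean SDK
# (requires hardware access; solver name and parameter details are
# assumptions, shown as comments so this file runs offline):
#
#   from dwave.system import DWaveSampler
#   sampler = DWaveSampler(solver="DW_2000Q_LANL")
#   for h in h_grid:
#       sampleset = sampler.sample_ising(
#           {q: float(h) for q in sampler.nodelist},  # same h on every qubit
#           {},                                       # no couplers active
#           num_reads=10000,
#           annealing_time=1,                         # microseconds
#           flux_drift_compensation=False,
#       )
```

Because no couplers are activated, every qubit runs the same single-qubit experiment in parallel, which is what allows the protocol to scale to the full chip.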
Given that a prevailing assumption of QA modeling is that all of the qubits have identical properties [PhysRevApplied.8.064025, PhysRevA.94.022308, 10.3389/fict.2016.00023], it is important to investigate how reasonable this assumption is in practice. To that end, we begin by investigating the distribution of parameters output by the QASA protocol. Figure 2 presents the empirical distributions of individual qubit parameters across the entire DW_2000Q_LANL system. The first observation from these results is that there is a notable amount of heterogeneity in all of the recovered parameters across the qubits in the hardware. The second observation is that the variability in the $\beta$ parameter is particularly notable because, in the effective qubit model, $\beta$ is a scaling parameter occurring in the exponent of the density matrix (i.e., $e^{-\beta \hat{H}}$). Hence, relatively small changes in $\beta$ can have a dramatic impact on the output statistics. It is possible that accounting for these variations in the parameter values of different qubits could improve the accuracy of encoding practical problems into the hardware.
When looking closely at the distributions in Figure 2, one can observe a slight skew in some of these parameters relative to a symmetric distribution. This highlights the potential of the QASA protocol to identify outlier qubits, which may be preferable to avoid in applications seeking the best possible consistency or accuracy. Indeed, a deeper investigation into the outliers identifies a particular area of the DW_2000Q_LANL system where the system is non-homogeneous; see the darkest values in Figure 2(c). One can also notice a few qubits in the distribution that have very low noise values (i.e., $w \approx 0$). It is important to note that the spacing of the $h$ values employed in the QASA protocol determines the minimum level of noise that is detectable. Intuitively, the $h$ values that appear in Figure 1 must be spaced in a way that can detect a slight slope change near the origin to recover a suitable noise value. Very low noise can be mistaken for zero noise if the slope change is too small to be accurately detected. In this work we generally found that a spacing of 0.025 was sufficient to accurately recover the noise occurring in the DW_2000Q_LANL system. However, as QA hardware continues to improve, the spacing or density of data collection points may need to be adjusted to accurately measure finer noise effects.
Given that there is some variability in the qubit parameters across the hardware platform, a natural follow-on investigation is whether these fluctuations have any dependence on the positioning in the system. To that end, Figure 3 presents the QASA results as a heat-map on a hardware layout of the chip, where the qubit color indicates the value of each qubit's recovered parameter. It is important to highlight that this diagram of the hardware's implementation is a dramatic simplification of the physical implementation where, for example, each qubit (represented by a node in this illustration) is implemented as a superconducting loop connected to a wide variety of control circuitry [Johnson2011, 6802426]. The first observation one can make from this spatial analysis is that there is no immediately obvious spatial correlation in the recovered parameters. In particular, it does not appear that the chip is partitioned into cooler and warmer areas, which would be indicated by a spatial correlation in the $\beta$ parameter.
The most intriguing property revealed by Figure 3 is the appearance of horizontal and vertical stripes in one of the recovered parameters. This particular structure was not anticipated by any previous work that we are aware of and is a novel insight made possible by the QASA protocol. Although pinpointing the root cause of these distinctions is outside the scope of this work, it seems likely that this effect is an artifact of some aspect of the hardware's implementation that is not readily available to QA users.
Inspired by the horizontal and vertical banding appearing in Figure 2(c), one's understanding of the qubit parameter distributions can be enhanced by first categorizing the data into two groups, one for the horizontal qubits and another for the vertical qubits in the hardware graph. Figure 4 presents these two distributions for the affected parameters. While both the horizontal and vertical distributions appear to have similar variance, the mean of each distribution is notably higher for the horizontal qubits. This result further emphasizes the observation from visual inspection of Figure 2(c) that the horizontal qubits have consistently higher parameter values in comparison with the vertical qubits.
The notable change in the means of the distributions presented in Figure 4 suggests a systematic difference in the way that the horizontal and vertical qubits respond to inputs in the physical hardware. One possible explanation could be related to an asymmetry in the chip's hardware layout or to the details of how global annealing control signals are delivered to the qubits, which are known to be shared among vertical and horizontal qubits [6802426]. Although we can only speculate on the possible root cause of this phenomenon, the QASA protocol nonetheless provides valuable insight into the heterogeneous features of the hardware platform, identifying key areas for further investigation.
At first glance the QASA protocol recovers seemingly fundamental properties of the qubits in a QA system (e.g., effective temperature, calibration offsets, field noise). It is thus tempting to suggest that these properties are intrinsic to the hardware's implementation. However, it is important to emphasize that the single-qubit model leveraged by QASA is an effective model describing the statistical properties of a qubit's behavior. Real-world QA systems are extremely complex devices comprising hundreds of thousands of superconducting electronic components, each of which is influenced by various underlying mechanisms including the annealing schedule, readout errors, freeze-out, decoherence, excitation, and tunneling [PhysRevA.92.052323, PhysRevA.91.062320, PhysRevApplied.8.064025]. The goal of QASA is not to characterize the precise details of the quantum dynamical interactions but instead to measure parameters that form an effective model encapsulating the combination of these effects. To highlight this point, in this section we revisit the QASA protocol while modifying the annealing schedule to show how it can impact the parameters of the effective qubit model.
As an illustrative demonstration, Figure 5 highlights how the QASA parameter distributions can be impacted by different annealing schedules. The approach is to repeat the QASA protocol while increasing the annealing time by two orders of magnitude, ranging from 1 μs to 125 μs. It is clear from these results that the $b$ and $\gamma$ parameters are largely invariant to this particular annealing parameter; however, there is a striking relationship in how $\beta$ increases logarithmically with the anneal time. Intuitively, the dependence of $\beta$ on the annealing time parameter makes sense, as the adiabatic theorem indicates that annealing more slowly increases the likelihood that the system will stay in a ground state of the specified input model. This increased preference for ground states is equivalent to a lower effective qubit temperature and therefore a larger $\beta$ value. In particular, this result highlights that the effective temperature parameter recovered by the QASA protocol is not held constant by the operating temperature of the hardware device but is instead a feature arising from the complete annealing protocol.
There has been a vigorous debate in the literature around how the observed effective temperature in a QA device is related to its physical operating temperature [PhysRevApplied.8.064025, 10.3389/fict.2016.00023, 2012.08827]. The result presented in this section highlights some of the challenges in using the observed effective temperature for insights into the hardware's operating temperature. However, it is worth noting that the $\beta$ values recovered by the QASA protocol at a 1 μs annealing time are remarkably consistent with the system's measured running temperature of 15 mK [PhysRevApplied.8.064025]. In particular, it may be reasonable to extrapolate the trend in Figure 4(a) to an annealing time of 0 to recover a temperature measurement, omitting the impacts of annealing time. In any case, it is clear that the QASA protocol can provide valuable insight into how the system's physical temperature is connected to an observed effective temperature.
It is important to briefly mention the noise parameter $w$'s dependence on the anneal time. A first glance at Figure 4(d) appears to suggest that $w$ decreases with an increased annealing time. This would be an unexpected outcome of changing the annealing protocol. After reviewing the results in greater detail, we observed that this trend is due in part to an artifact of the particular $h$ points used by the QASA protocol in this work, which has a minimum input resolution of 0.025. Notice that, as the annealing time increases, so does $\beta$, causing the slope of the $f_{\mathrm{eff}}$ curve to increase (see Figure 1). This leaves fewer data points in the linear region of the curve, making the detection of subtle slope changes near the origin more challenging, especially for qubits with naturally low noise. Although identifying the root cause of this parameter trend requires further investigation, it highlights the possible need to tweak the QASA data collection parameters as QA hardware improves, to ensure that sufficient data is collected to recover the key parameters of interest.
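This resolution effect can be quantified directly: on a fixed 0.025-spaced input grid, the number of points falling in the near-linear region of the response (roughly where the effective field is below 1 in magnitude) shrinks as the effective inverse temperature grows. The beta values below are illustrative:

```python
import numpy as np

# As beta grows, the near-linear region of the response (roughly
# |beta * h| < 1) contains fewer points of a fixed 0.025-spaced input grid,
# making small slope changes near the origin (the signature of the noise
# term) harder to resolve. The beta values here are illustrative only.
h_grid = np.arange(-1.0, 1.0 + 1e-9, 0.025)
for beta in (3.0, 11.0, 41.0):
    n_linear = int(np.sum(np.abs(beta * h_grid) < 1.0))
    print(f"beta = {beta:5.1f}: {n_linear} grid points in the linear region")
```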
Although this section focused on a simple proof-of-concept demonstration using the annealing time parameter, the other scheduling features of QA hardware such as pausing [PhysRevApplied.14.014100, PhysRevApplied.11.044083], annealing offsets [PhysRevA.96.042322, Adame_2020], and custom annealing schedules [King2021, Venturelli2019, 10.1371/journal.pone.0244026] all suggest promising avenues for manipulating the effective qubit parameters recovered by the QASA protocol. To that end, we hope that the QASA protocol can provide a relatively fast QAVV assessment of how these operational parameters can impact qubit performance in practice.
Inspired by the effective single-qubit model proposed in [2012.08827], this work proposed the QASA protocol as a novel tool for conducting QAVV on emerging quantum annealing platforms. The results derived from running QASA on the DW_2000Q_LANL
system revealed a number of inconsistencies in qubit performance that were previously unknown, highlighting the usefulness of the proposed approach. The QASA protocol has further demonstrated its efficacy in this work by revealing, for the first time, an asymmetry in the performance of the qubits from the vertical and horizontal sections of the hardware graph considered herein. This observation provides a clear point for improving the fidelity and consistency of this particular QA hardware platform. In time, we hope that the QASA protocol will find a wide range of uses including: tracking the performance improvements of QA hardware platforms, helping hardware designers identify inconsistencies in specific QA devices, and supporting QA users in calibrating algorithms to specific hardware devices. To support that goal we have released the software that we developed to execute the QASA protocol as open-source, to benefit the broader community in conducting QAVV.
The natural next step for the QASA protocol is to explore how the data collection procedure can be optimized to reduce the amount of chip time required to accurately fit the effective single-qubit model. In this work we chose a uniform spacing in $h$ with a consistent number of samples for every input value. Upon conclusion of this work, it is now clear that the MLE fitting model would likely benefit from a non-uniform spacing of $h$ that focuses data collection on the areas capturing the most pronounced signatures of $\beta$, $\gamma$, $b$, and $w$. Reducing the number of samples collected at small values of $h$ represents another obvious opportunity for reducing the amount of required data collection.
In the interest of making the QASA protocol as widely accessible as possible, the core software for data collection and model parameter fitting is released as open-source software at https://github.com/lanl-ansi/QASA. The software consists of two tools: (1) a Python script for extracting data from D-Wave quantum annealing platforms using the Ocean micro-client to collect and combine large numbers of hardware executions; and (2) a Julia-based tool for solving the MLE model for each qubit and building a table of the recovered single-qubit model parameters. The software is released under a flexible BSD license, which allows for modification, adaptation, and commercial reuse.
The raw data output by QA hardware devices is expensive to acquire and specific to each particular device implementation. Provided with the supplementary materials of this work are the raw data collected from the DW_2000Q_LANL system in the spring of 2021 and the resulting single-qubit model parameters that were recovered from that data. The data is provided as plain text in the comma-separated value (CSV) format with a header indicating the value of each column. In the hardware output data the columns are: h, the input field parameter; samples, the number of repeated executions for the given input parameter; and spin_id, which provides a count of the number of times qubit number id takes a particular spin value in the output. In principle, this raw data file can be combined with the open-source software to produce all of the model parameters presented in this work. However, for convenience, the single-qubit parameters recovered by the MLE model are also provided in CSV format for analysis without the need to run the software on the provided raw data.
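A sketch of loading such a raw-data file into per-qubit empirical means is given below. The exact column layout (spin_0, spin_1, …), the sample rows, and which spin value is counted are assumptions based on the description above:

```python
import csv
import io

# Hedged sketch of parsing the described raw-data CSV. The column layout is
# an assumption: one row per input value, with `h`, `samples`, and one
# `spin_<id>` column per qubit counting occurrences of one spin outcome.
# The rows below are synthetic illustration data, not the released dataset.
raw = io.StringIO(
    "h,samples,spin_0,spin_1\n"
    "-0.05,10000,6100,5900\n"
    "0.00,10000,5050,4950\n"
    "0.05,10000,3900,4100\n"
)

means = {}  # (qubit_id, h) -> empirical mean m(h), assuming counts of +1
for row in csv.DictReader(raw):
    n = int(row["samples"])
    h = float(row["h"])
    for col, val in row.items():
        if col.startswith("spin_"):
            qubit = int(col.split("_")[1])
            k = int(val)                     # occurrences of the counted value
            means[(qubit, h)] = (2 * k - n) / n

assert means[(0, 0.0)] == (2 * 5050 - 10000) / 10000
```

Tables of means in this shape are exactly the input the MLE fitting step consumes, one (qubit, h) series per qubit.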
The authors would like to thank Tameem Albash, Mohammad Amin, Andrew Berkley, and Trevor Lanting for their input on preliminary versions of this work. The research presented in this work was supported by the Laboratory Directed Research and Development program of Los Alamos National Laboratory under project number 20210114ER and the Center for NonLinear Studies (CNLS). The computing resources used in this work were provided by the Los Alamos National Laboratory Institutional Computing Program, which is supported by the U.S. Department of Energy National Nuclear Security Administration under Contract No. 89233218CNA000001.