Many applications depend upon initialization of a quantum register into specific states. A good example is the class of Monte Carlo style quantum algorithms that can compute expectation values of functions over classical probability distributions with quadratic speedup over all-classical counterparts Stamatopoulos et al. (2019); Martin et al. (2019); Woerner and Egger (2019); Montanaro (2015); Harrow et al. (2009). Other examples include classical machine learning training data sets that are used as input to a quantum machine learning algorithm Lloyd et al. (2014); Rebentrost et al. (2014); Harrow et al. (2009); Wiebe et al. (2012); Childs et al. (2017). In both of these cases, preparing the state corresponding to the data is in general an exponentially hard problem Plesch and Brukner (2011) that is often omitted from application performance analysis. However, if initialization is this difficult, these applications may not be viable for quantum computing. To address this issue, we develop a technique for generating circuits that can prepare certain families of quantum states efficiently. These families of states may enable the execution of quantum algorithms that depend on classical input data on smaller, near-term machines.
We focus on smooth, differentiable, real-valued (SDR) functions with bounded derivatives, and show that they can be constructed efficiently with linear-depth quantum circuits. The primary contribution of this work is Algorithm 2, which combines a novel encoding of piecewise polynomial function approximators into matrix product state form with existing techniques for MPS compression and low-rank MPS quantum gate extraction. The algorithm can be tuned, and different configurations have different impacts on error and scalability.
Section IV shows theoretically that these types of functions display important properties, among these a von-Neumann entropy contribution that decreases exponentially as qubits are added to the states. As a result, they have efficient MPS representations with low bond-dimension.
Section V describes our techniques, including explicit piecewise polynomial MPS function approximation, MPS arithmetic, and variational MPS approximation with low, bounded bond-dimension. Leveraging an algorithm developed in Ran (2020) that constructs a linear-depth quantum circuit from an MPS, we show that we can construct quantum circuits that prepare our desired amplitude functions with a high degree of accuracy.
Section VI presents our algorithm for efficiently and scalably constructing the state preparation circuit corresponding to the functions we study. The algorithm combines techniques described in Section V: first approximating the target function as a piecewise polynomial, then encoding each piece into an MPS, variationally compressing the summed MPS into one of low rank, and finally extracting gates directly from the resulting state. This combination is computationally tractable, requiring classical computation linear in the number of qubits, and is bottlenecked by the variational MPS compression. Stages of the algorithm can be modified as desired, for example modifying the functional form of each component of the function approximation. The result is a tunable algorithm of linear complexity in the size of the system that prepares approximated analytical quantum states with linear size and depth circuits.
Section VII analyzes the performance of our algorithm targeting representative examples of SDR functions – namely Gaussian, lognormal, and Lorentzian distributions. We show numerical analysis of the accuracy of our circuits for small system sizes, and demonstrate that the techniques can prepare states with good accuracy across a range of function parameter regions.
II Related Work
While several techniques for state preparation have been proposed, they are often expensive in either the classical compute requirements of the algorithm or the resulting quantum gate sequence. Three examples are: exact Schmidt-Decomposition Coles et al. (2018), recursive integrable distribution encoding Grover and Rudolph (2002), and quantum Generative Adversarial Networks (qGAN) Zoufal et al. (2019).
The most general techniques for state preparation rely primarily on iterative Schmidt-Decomposition algorithms that require exponential classical computation, scaling that is prohibitive for large states Coles et al. (2018). Early work from Grover Grover and Rudolph (2002) utilized a recursion of integral calculations that ultimately requires an exponential number of angles and classical calculations, even though the result is a potentially efficient quantum circuit García-Ripoll (2019). Others have proposed qGAN learning-based approaches Zoufal et al. (2019)
that rely on a classically expensive combination of a variational quantum circuit and a classical neural network. These techniques construct quantum circuits corresponding to the learned distributions, with accuracy determined by the effectiveness of the overall learning technique.
The fundamental difference between our work and these presented methods is that our algorithm can construct accurate linear size and depth quantum circuits, and only requires linear classical computation time to do so.
This work develops a technique to construct quantum states in which each binary-indexed basis state corresponds to a coefficient that follows a specific amplitude function. We restrict our focus to smooth, differentiable, real-valued (SDR) functions that have bounded derivatives.
In general, for some SDR function f(x) with support over some domain [x_L, x_R), we first discretize into N = 2^n points for a system of n qubits, evenly placed across the domain. This uses a mapping from index j to domain values x(j):

x(j) = x_L + j (x_R − x_L)/N, (1)

where j ∈ {0, 1, …, N − 1}, (x_R − x_L) is the width of the domain, and N = 2^n is the number of gridpoints. Our goal is then to construct quantum states defined as:

|f⟩ = (1/𝒩) Σ_j f(x(j)) |j⟩, (2)

with 𝒩 a normalization constant,
where we will use the big-endian binary encoding of a length-n binary string written as j_1 j_2 … j_n, with each individual bit j_k ∈ {0, 1}. Big-endian here defines the mapping of binary strings to integers as:

j = Σ_{k=1}^{n} j_k 2^{n−k}. (3)
In this notation, the function f(j_1 j_2 … j_n) is the target SDR function evaluated at the domain value x(j) defined by the index j induced by the binary string from equation (3). Written together: f(j_1 j_2 … j_n) = f(x(j)).
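As an illustrative sketch of this encoding (the helper names and the left-closed endpoint convention are our own assumptions, not taken from the text), the index-to-domain mapping and amplitude normalization can be written as:

```python
import numpy as np

def grid_points(x_min, x_max, n):
    """Map each n-bit index j to a domain value x(j) on a uniform grid."""
    N = 2 ** n
    dx = (x_max - x_min) / N              # domain width over number of gridpoints
    return x_min + dx * np.arange(N)

def encode_state(f, x_min, x_max, n):
    """Evaluate f at every gridpoint and normalize the samples into amplitudes."""
    amps = f(grid_points(x_min, x_max, n))
    return amps / np.linalg.norm(amps)

gauss = lambda x: np.exp(-x**2 / 2)
psi = encode_state(gauss, -4.0, 4.0, 6)   # 2^6 = 64 amplitudes
```

The resulting vector is exactly the list of target amplitudes f(x(j)), normalized so it is a valid quantum state.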
III.2 Smooth, differentiable functions
We are focusing in this work on smooth, differentiable, real-valued (SDR) functions, as these admit many properties that allow for their efficient construction inside a quantum machine. These functions have two properties that we will make use of:
discretization accuracy increases exponentially with the number of qubits included in the system, and
the maximum von-Neumann entropy of the state grows much more slowly as qubits are added.
Because of the two properties derived explicitly in García-Ripoll (2019), we find a very useful scaling relationship: as the system scales up in qubit count, these functions admit efficient and accurate representations in their low-rank approximations, while the accuracy of the encoded state continues to exponentially increase.
Examples of these types of functions include probability distributions, particularly Gaussian, lognormal, and Lorentzian distributions. We will see that the accuracy of our techniques depends on the smoothness of these distributions, specifically on the upper bound f'_max of the derivative of the distribution on the domain of interest: |f'(x)| ≤ f'_max for all x in the domain. For distributions that change relatively slowly, f'_max is small, which leads to a more exact representation of the discretized function in a low-rank approximation.
This relationship encourages the use of efficient representations of low-rank matrix approximations, and one that is particularly suited to this task is the matrix product state.
III.3 Matrix product states
Existing literature surrounding these mathematical techniques is vast within the physics community Perez-Garcia et al. (2006); Schollwöck (2011); Zaletel et al. (2015) as well as within computational mathematics and scientific computing Oseledets (2011); Holtz et al. (2012); Lee and Cichocki (2015)
where these are referred to as tensor trains. Here we describe only properties relevant to this work.
A matrix product state (MPS) is a collection of tensors where, in general, a tensor can be any collection of values carrying some number of distinct indices; a tensor with r distinct indices is referred to as an r-th order tensor. Scalars, vectors, and matrices are considered 0th-, 1st-, and 2nd-order tensors, respectively. Within a matrix product state, we restrict ourselves to 3rd-order tensors A_k of dimensions χ_{k−1} × d × χ_k, where d is the number of allowed levels in our qubit model – here held at d = 2. Each A_k will be referred to as a core of the MPS. The χ_k are known as bond dimensions Schollwöck (2011) or virtual dimensions that connect the matrices together. The maximum bond dimension is denoted as χ, a figure that can be used to upper bound the computational complexity of many routines involving the MPS. In this work, we will write each of these tensors as A_k^{j_k} to highlight the connection between the k-th tensor and the binary digit j_k contained in an indexing string. The MPS can then be written as:

|f⟩ = Σ_{j_1 … j_n} A_1^{j_1} A_2^{j_2} ⋯ A_n^{j_n} |j_1 j_2 … j_n⟩,
where the outer summations run over all bond dimensions and over all binary strings . Left and right boundary bond dimensions are assumed to be equal to 1.
The efficiency of this representation can be seen by viewing it as an expression of 2^n values written as a matrix product. In so doing, we only need to store O(n d χ^2) values. While the MPS representation is capable of representing exactly any 2^n-dimensional vector by allowing χ to grow unbounded, in our study of SDR functions we will find that holding χ to a small constant allows for highly accurate, compact representation of SDR functions, while keeping memory cost to a modest O(n).
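For concreteness, a minimal sketch (our own, with d = 2 and illustrative names) of contracting a chain of 3rd-order cores back into the full 2^n-element vector:

```python
import numpy as np

def mps_to_vector(cores):
    """Contract cores of shape (chi_left, d, chi_right) into the full
    length-d^n vector; the first core's physical index is most significant."""
    result = cores[0]                                   # (1, d, chi)
    for core in cores[1:]:
        # sum over the shared bond index, growing the physical index
        result = np.tensordot(result, core, axes=([-1], [0]))
        result = result.reshape(1, -1, core.shape[-1])
    return result.reshape(-1)

# a random chi = 2 MPS on n = 4 qubits: only O(n d chi^2) stored values
rng = np.random.default_rng(0)
cores = ([rng.normal(size=(1, 2, 2))]
         + [rng.normal(size=(2, 2, 2)) for _ in range(2)]
         + [rng.normal(size=(2, 2, 1))])
v = mps_to_vector(cores)   # 2^4 = 16 entries from only 24 stored values
```

Each amplitude of v is the product of matrices selected by the bits of its index, matching the matrix-product form above.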
III.4 Tensor networks
[Figure 1: graphical tensor-network notation; (bottom) Left: inner product – Right: matrix product]
MPS representations are just one among a family of these types of representations, where the order of all involved tensors can change. In general, these types of representations come with very useful graphical representations, and are described extensively in literature as tensor networks Biamonte and Bergholm (2017).
Tensors can be depicted as nodes in a graph with edges representing tensor indices. Figure 1 depicts single index vectors and two-index matrices in this fashion. This technique also allows us to express tensor contractions, which are generalizations of vector and matrix operations. Tensor contractions are operations in which one or more indices of two or more tensors are explicitly summed over, and removed from the network. As an example, the vector-vector inner product can be written as a tensor contraction of a single index, as can the matrix-matrix product.
Graphically, we describe contractions of indices by connecting the corresponding index-edges of the network together. The top of Figure 1 depicts the vector-vector inner product, and the bottom the matrix-matrix product. The matrix-matrix product can be understood graphically by labelling the index-edges from left to right as i, k, j, where the shared index k appears as the right- and left-most indices of matrices A and B, respectively, and is summed over. Naturally these concepts generalize to any number of indices, and are used to simplify tensor networks while controlling the complexity of the multiplications.
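Both contractions described above can be expressed directly in index notation; a short sketch (illustrative, using NumPy's einsum):

```python
import numpy as np

rng = np.random.default_rng(1)
u, w = rng.normal(size=5), rng.normal(size=5)
A, B = rng.normal(size=(3, 4)), rng.normal(size=(4, 6))

# vector-vector inner product: contract the single shared index i
inner = np.einsum('i,i->', u, w)

# matrix-matrix product: contract the shared index k of A_ik and B_kj
C = np.einsum('ik,kj->ij', A, B)
```

The subscript strings mirror the graphical notation: repeated indices are the connected edges that get summed over, and the surviving indices are the open edges of the resulting network.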
SDR functions have two desirable properties: as qubits are added to the system, discretization accuracy increases exponentially, while von-Neumann entropy (VNE) and therefore entanglement grows much more slowly. This can be shown by analyzing first the discretization error of a uniform gridding of the domain on which an SDR has support, followed by studying the VNE of the constructed state.
IV.1 Discretization error
The discretization error of the encoding of an SDR function into a uniform grid of N = 2^n points across a domain scales as O(2^{−n}). To see this, first let f'_max be the maximum value of the derivative |f'(x)| over the domain. For a uniform gridding of the domain, we have N exact values on which f is evaluated. Error arises when values are sampled that lie between any two gridpoints x(j) < x < x(j+1), and the discretized function approximation must be interpolated. Letting the maximum evaluation error occur at some point x* with x(j) < x* < x(j+1), it is well known that uniform gridding and forward-difference interpolation produces first-order error linear in the step size, which in this context scales inverse-exponentially with the number of qubits in our system: O(f'_max 2^{−n}) Trench (2013).
As a result, the discretization error asymptotically halves when moving from a system of n to n+1 qubits, or equivalently the function approximation accuracy doubles.
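This halving can be checked numerically; a quick sketch (our own illustration, probing left-gridpoint interpolation at midpoints between gridpoints):

```python
import numpy as np

def max_interp_error(f, x_min, x_max, n):
    """Worst-case error of left-gridpoint (forward-difference style)
    interpolation, probed at the midpoints between gridpoints."""
    N = 2 ** n
    dx = (x_max - x_min) / N
    grid = x_min + dx * np.arange(N)
    return np.max(np.abs(f(grid + dx / 2) - f(grid)))

gauss = lambda x: np.exp(-x**2 / 2)
errs = [max_interp_error(gauss, -4, 4, n) for n in (6, 7, 8)]
ratios = [errs[i + 1] / errs[i] for i in range(2)]   # each close to 1/2
```

Each added qubit halves the step size, and the worst-case interpolation error shrinks by roughly the same factor, consistent with the first-order bound above.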
IV.2 von-Neumann Entropy Bound
The von-Neumann entropy (VNE) change of a system induced by adding qubits can be defined through the maximum VNE achieved by any Schmidt-decomposition partitioning of the system into subsystems of sizes k and n−k Nielsen and Chuang (2002), as:

where S_k is the k-th Schmidt-decomposition partitioning of the system.
This was studied extensively in García-Ripoll (2019), and a bound was derived for the entropy of adding a single extra qubit to a state as:
This bound on the added entropy contribution from growing a discretized SDR representation, along with the reduction in the discretization error of the same order, implies that these states scale very efficiently in their representation. One reason for this is based on the idea that VNE is a proxy for the amount of entanglement contained within a state Johri et al. (2017); García-Ripoll (2019); Nielsen and Chuang (2002), which would imply that growing these types of discrete function approximations requires a vanishingly small amount of entanglement.
[Figure 2 panels: (a) Gaussian, (b) Lognormal, (c) Lorentzian, (d) Gaussian, (e) Lognormal, (f) Lorentzian, each at the indicated parameter settings]
IV.3 Accuracy of low-χ Approximate MPS
SDR functions can be connected with MPS representations through the concept of a low-rank matrix approximation. The maximum bond dimension of an MPS increases with the entanglement of a system Schollwöck (2011), and because SDR functions have a maximum VNE increase that shrinks inverse-exponentially with the system size of the discrete approximation, these functions are accurately approximated by low-χ MPS forms.
The canonical algorithm for constructing an MPS representation of a target tensor is shown in Algorithm 1, and was originally designed in Oseledets (2011). At a high level, the algorithm sweeps through the individual sites of the MPS, and at each site performs a singular value decomposition (SVD) of an unfolding of the target tensor. The resulting singular vectors are reshaped and labeled as the MPS core at this particular site.
Were we to perform a full SVD instead of χ-truncation in step 4 of Algorithm 1, the constructed MPS would be exact. If however we approximate the unfolding matrix at step k, and leave out singular values that fall beneath some threshold, we incur a total error ε ≤ (Σ_k ε_k^2)^{1/2}, where each ε_k is the individual truncated-SVD error incurred in the algorithm; see Oseledets (2011) §2.2.
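A compact sketch of this sweep (our own simplified rendering of a TT-SVD-style construction, not the paper's exact Algorithm 1) makes the truncation step explicit:

```python
import numpy as np

def tt_svd(vector, n, chi):
    """Sweep left to right: SVD each unfolding matrix, keep at most chi
    singular values, and pass the weighted remainder to the next site."""
    cores, rest = [], vector.reshape(1, -1)
    for _ in range(n - 1):
        m = rest.reshape(rest.shape[0] * 2, -1)      # unfolding matrix
        u, s, vt = np.linalg.svd(m, full_matrices=False)
        k = min(chi, len(s))
        cores.append(u[:, :k].reshape(-1, 2, k))     # reshaped singular vectors
        rest = s[:k, None] * vt[:k]
    cores.append(rest.reshape(-1, 2, 1))
    return cores

# chi = 2 approximation of a discretized Gaussian on n = 8 qubits
gauss = lambda x: np.exp(-x**2 / 2)
x = -4 + (8 / 256) * np.arange(256)
psi = gauss(x) / np.linalg.norm(gauss(x))
cores = tt_svd(psi, 8, chi=2)

# contract the cores back into a vector and measure the overlap
approx = cores[0]
for c in cores[1:]:
    approx = np.tensordot(approx, c, axes=([-1], [0])).reshape(1, -1, c.shape[-1])
approx = approx.reshape(-1)
fidelity = abs(psi @ approx) ** 2 / float(approx @ approx)
```

Even with the bond dimension capped at χ = 2, the truncated MPS reproduces the discretized Gaussian with high fidelity, illustrating the small truncation error discussed above.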
The optimality of the resulting MPS is given by the Eckart-Young theorem Eckart and Young (1936); Golub and Van Loan (2013), from which we know that the best rank-χ approximation to a matrix is given by keeping the first χ singular values and vectors, and the squared error under the Frobenius norm is equal to the total of the squared omitted singular values:

‖A − A_χ‖_F^2 = Σ_{i>χ} σ_i^2.
With these conditions, we can estimate the accuracy of bounded-χ approximate MPS representations of SDR functions. Connecting the χ-truncated SVD error with the error from Algorithm 1, we see that the approximate MPS with bond dimension bounded at χ has squared Frobenius error upper bounded by the sum of all squared omitted singular values of the unfolding matrices.
Based on this, we conjecture and show empirically that the spectra of unfolding matrices decay exponentially for discretized SDR functions, potentially because the added entropy decays inverse-exponentially as qubits are added. This implies the existence of accurate low-χ MPS approximations, and we show high-accuracy approximations even for χ = 2. Unfolding matrix spectra are shown in Figure 2, while numerical evidence supporting the exponential decay of equation (9) is shown in Figure 3.
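The unfolding spectra are straightforward to probe numerically; an illustrative check (our own, not reproducing the paper's figures) on a discretized Gaussian:

```python
import numpy as np

# discretize a normalized Gaussian on n = 10 qubits
gauss = lambda x: np.exp(-x**2 / 2)
n = 10
x = -4 + (8 / 2**n) * np.arange(2**n)
psi = gauss(x) / np.linalg.norm(gauss(x))

# singular values of each unfolding matrix A_k, of shape 2^k x 2^(n-k)
spectra = [np.linalg.svd(psi.reshape(2**k, -1), compute_uv=False)
           for k in range(1, n)]

# beyond the first two singular values the spectra are tiny at every cut,
# consistent with accurate chi = 2 approximations
third_ratios = [s[2] / s[0] for s in spectra if len(s) > 2]
```

At every bipartition the spectrum collapses rapidly after the leading singular values, which is the behavior the exponential-decay model of equation (9) captures.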
We can estimate the accuracy of MPS approximations to SDR functions by modeling the spectra of unfolding matrices with exponential decay. Allowing for each unfolding matrix to follow a distinct exponential decay, we can formulate an exponential univariate multiple regression with the model shown in equation (9).

We then have a two-parameter univariate exponential decay model for the spectrum of each unfolding matrix, where the k-th unfolding matrix spectrum is characterized by empirically fit parameters. Under this model, we can calculate the normalized upper bound of the error of an MPS approximation with bounded bond-dimension χ, shown in equation (10), where for simplicity of analytics we have assumed that all decay parameters are approximately equal. This allows us to estimate that for modest χ and sufficiently fast spectral decay, there will exist an MPS representation with high normalized accuracy.
IV.4 Numerical SDR Spectral Analysis
Figure 3: Marked data indicates empirically fit exponential decay rates across all unfolding matrix spectra for each SDR distribution, plotted against (a) decreasing standard deviation – or increased "squeezing" – and (b) increasing system size at fixed squeezing. Smooth lined data indicates the maximum derivative achieved for each of the distributions. For all distributions, as the function is squeezed the maximum derivative increases while the exponential decay rate of the singular values decreases. The lognormal distribution qualitatively changes shape around a critical squeeze value, leading to a change in the maximum derivative and resulting in a phase change of the decay rate of the singular values. We see in (b) an increase in the rate of exponential decay as systems increase in size, indicating that MPS approximators remain effective as systems grow.
The validity of a singular value decay rate following the model of equation (9) can be numerically estimated. Empirically, we find that the spectra are well modeled by this form, and the estimated decay rates are often larger than the threshold required for accurate low-χ representation. We highlight univariate Gaussian, lognormal, and Lorentzian distributions, as they are representative distributions commonly used in applications, and each is discretized across a bounded support interval. The probability density functions of these distributions are shown in equations (11), (12), and (13).
Figure 2 shows the unfolding matrix spectra for Gaussian, lognormal, and Lorentzian distributions with varying degrees of squeezing. Squeeze here refers to the inverse of the standard deviation of the distribution: s = 1/σ. The term describes how tightly centered the distributions are around their means. Loosely, as the standard deviations decrease for these functions, the maximum derivative obtained on the supported domain increases, in that tightly squeezed states with low σ have higher maximum derivatives than their high-σ counterparts. This likely explains why low-rank MPS approximations have difficulty capturing these highly localized features.
The joint exponential decay rate is found by fitting the model of equation (9) to all of the composite spectra. We find that for these distributions, the decay rates remain large for all but the most "squeezed" Gaussian distributions. Results are shown in Figure 3. As these distributions become tighter with smaller standard deviation, these functions gain larger and larger derivatives through the supported domain, which likely prevents low-rank approximations from capturing these highly local features completely; their unfolding matrix spectra also decay more and more slowly. Empirically, good MPS representations of these distributions can be formed with high accuracy, as predicted by our analytical model, so long as the standard deviations are moderately valued, holding the discretization domain constant. We also see that as systems increase in size, the value of the maximum derivative decreases, and the exponential decay rates actually increase. This indicates that MPS likely remain good, if not better, approximations for states discretized over large systems.
The core of our algorithm rests on four main techniques: piecewise polynomial function approximation, MPS arithmetic, iterative MPS compression, and quantum gate extraction from MPS representations.
V.1 Piecewise polynomial function approximation MPS
In many cases, matrix product states are used to encode low-rank approximations to data which do not have a known analytical form. In these cases MPS forms can be constructed using exact construction as in Algorithm 1. They can also be approximately constructed using algorithms that subsample a portion of the domain and interpolate Oseledets and Tyrtyshnikov (2010), extract dominant singular values exactly Lee and Cichocki (2014); Dolgov et al. (2014), or estimate the dominant singular values potentially with randomized algorithms Huber et al. (2017); Batselier et al. (2018); Che and Wei (2019). Recent work Dolgov et al. (2020) applies these techniques to develop a method in this fashion for sampling potentially exotic multivariate probability distributions.
In our case, we are presented with an analytical form of the state we are constructing. Many functions with analytic forms can be written down exactly in a matrix product state, as shown in Oseledets (2013). However, the technique to do so requires that these functions be rank-r separable, meaning that they can be written as equation (14) for some fixed value r. Unfortunately, this property does not hold for many probability density functions. It does hold, however, for degree-d polynomials, as in equation (15).
An explicit form of discretized functions of the form (15) can be written as:
where we have for the first and last tensors:
These MPS forms have bounded bond dimension χ; see Oseledets (2013), §6.
Motivated by this, we derive novel MPS forms of piecewise polynomial (PP) functions with bounded support on a subregion of the gridded domain. Specifically, for a domain and a subdomain contained within it, a polynomial function with support only on the subdomain can be written as:
Based on a binary encoding of the original domain, we subdivide the domain into a set of regions defined by the leading support-bits, ordered from the left. Fixing these bits creates the different regions, each defined as:
Here we use the encoding provided in equation (1), where the polynomial is supported on the region indexed by the support-bits, and we assert that the last region is inclusively bounded. This creates a uniform gridding of the domain into evenly spaced partitions, each of which then supports a single function-approximating polynomial.
The polynomial in a piecewise approximation over a support-bit divided domain can be written into an MPS as in form (17), with explicit zeroing out of the tensor components that correspond to domain values outside its support. To do this, we write out a binary representation of the region index using exactly the support-bits. Then, for each tensor corresponding to a support-bit, we zero out the component of the tensor that is unsupported in the binary encoding of the domain. Explicitly, for each support-bit position:
where the remaining tensors, those not corresponding to support-bits, are all unchanged. Equation (21) enforces that the polynomial is zeroed out for any domain value that lies outside of its supported subregion.
With this explicit form of a bounded-support polynomial, we can write the total MPS of a piecewise polynomial function used as a function approximator. A piecewise polynomial approximation is constructed by subdividing the domain into subregions, and fitting possibly-distinct polynomials to the target function, with each polynomial supported on a single subregion. Together, this forms a piecewise polynomial approximation to the target:
Here we do not require that the functions be continuous at the endpoints of subregions.
With a function approximation written down this way, we can iteratively construct a series of MPS forms corresponding to each of the approximating polynomials.
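The fitting step itself is standard least-squares regression per subregion; a small sketch (our own illustration, with assumed helper names and 50 samples per subregion) of building such an approximation:

```python
import numpy as np

def piecewise_fit(f, x_min, x_max, n_regions, degree):
    """Fit one degree-`degree` polynomial per uniform subregion by least
    squares; each subregion is independent and can be fit in parallel."""
    edges = np.linspace(x_min, x_max, n_regions + 1)
    fits = []
    for a, b in zip(edges[:-1], edges[1:]):
        xs = np.linspace(a, b, 50)                  # samples per subregion
        fits.append(((a, b), np.polyfit(xs, f(xs), degree)))
    return fits

def piecewise_eval(fits, x):
    """Evaluate the approximation; no continuity is enforced at edges."""
    out = np.zeros_like(x)
    for (a, b), coeffs in fits:
        mask = (x >= a) & (x < b)
        out[mask] = np.polyval(coeffs, x[mask])
    return out

gauss = lambda x: np.exp(-x**2 / 2)
fits = piecewise_fit(gauss, -4, 4, n_regions=8, degree=3)
xs = np.linspace(-4, 3.999, 1000)
err = np.max(np.abs(piecewise_eval(fits, xs) - gauss(xs)))
```

With only 8 cubic pieces the worst-case error over the domain is already small, and each fitted piece is exactly the kind of bounded-support polynomial the MPS encoding above consumes.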
V.2 Matrix product state arithmetic
Once we have formed the constituent MPS, we can combine them using the arithmetic properties of matrix product states. Specifically, two matrix product states can be added together as:
where each term is a block diagonalization of the corresponding terms in each summand:
appropriately adjusting for single-dimensional row and column vector endpoint cores. Upon contraction, the result is the addition of the two encoded functions:
Using this property, we can combine the MPS forms defined in a piecewise polynomial function approximation. Each of the constituent MPS forms has a maximum bond-dimension determined by the degree of the encoded polynomial. MPS addition in this way grows the rank of the resulting MPS by exactly the ranks of the constituent MPS. Because of this, the MPS formed by the addition of the degree-d piecewise polynomial MPS forms has rank equal to the sum of the ranks of the individual pieces.
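The block-diagonal construction can be sketched directly (an illustrative implementation with our own names, verified against dense addition):

```python
import numpy as np

def mps_add(A, B):
    """Add two MPS: interior cores become block-diagonal in the bond
    indices; the row/column-vector endpoint cores are concatenated."""
    n, out = len(A), []
    for k, (a, b) in enumerate(zip(A, B)):
        if k == 0:
            out.append(np.concatenate([a, b], axis=2))
        elif k == n - 1:
            out.append(np.concatenate([a, b], axis=0))
        else:
            la, d, ra = a.shape
            lb, _, rb = b.shape
            c = np.zeros((la + lb, d, ra + rb))
            c[:la, :, :ra] = a               # block-diagonal placement
            c[la:, :, ra:] = b
            out.append(c)
    return out

def mps_to_vec(cores):
    v = cores[0]
    for c in cores[1:]:
        v = np.tensordot(v, c, axes=([-1], [0])).reshape(1, -1, c.shape[-1])
    return v.reshape(-1)

rng = np.random.default_rng(2)
def rand_mps(n, chi):
    return ([rng.normal(size=(1, 2, chi))]
            + [rng.normal(size=(chi, 2, chi)) for _ in range(n - 2)]
            + [rng.normal(size=(chi, 2, 1))])

A, B = rand_mps(4, 2), rand_mps(4, 3)
S = mps_add(A, B)          # interior bond dimensions add: 2 + 3 = 5
```

Contracting S reproduces the sum of the two encoded vectors exactly, at the cost of the summed bond dimensions, which is why the compression step that follows is needed.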
V.3 Iterative MPS Compression
Large-χ matrix product states can be compressed into a lower-χ MPS by an iterative method, focusing on a single core at a time. Following Schollwöck (2011), the optimal approach to compress an MPS of rank χ into one of lower rank is to begin with an ansatz of that lower rank, and update each core iteratively. For core k, the update rule follows from the gradient of the overlap between both states, calculated with respect to core k. This gradient is of the form:

which corresponds to a full pairwise contraction of the conjugate of each core in the ansatz with the corresponding core of the original state, omitting the k-th core of the ansatz. As such, the index sets above run over the ordered sets of sites before and after site k. In graphical notation this is simplified as shown in Figure 4.
The iterative compression algorithm evaluates equation (23) at each site k, and calculates the optimal core to replace the existing k-th core. This calculation corresponds to solving a linear system in the elements of the core, and using factorizations presented in Schollwöck (2011) it can be performed efficiently. In practice, the algorithm's cost is then linear in the number of cores, where a fixed number of sweeps and solutions are performed over all cores.
V.4 Accurate linear-depth circuits
Once a suitable matrix product state has been constructed, a technique developed recently in Ran (2020) can be used to directly convert the MPS into a set of one and two-qubit gates. This is performed by calculating the “matrix product disentangler” with the property that it acts on the state encoded by the MPS and creates the vacuum state.
The procedure for constructing this unitary operator acts on each MPS core and forms a two-qubit gate:

This forms half of the required elements for the two-qubit operator, and the other half are chosen by selecting orthonormal vectors in the kernel of the matrix formed from the core. This fills in the two-qubit operator and results in a unitary gate. The first and last sites are filled in similarly, adjusting for specific dimensional constraints. The result is a set of unitary quantum operations, most of which are two-qubit gates, that form a serialized circuit of depth linear in system size. Details of the procedure are shown clearly in Ran (2020). These circuits can be decomposed into canonical single-qubit and two-qubit gates at linear depth, utilizing standard decompositions Coles et al. (2018).
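The kernel-completion step can be sketched in isolation (an illustrative toy, not Ran (2020)'s full procedure: we take an isometric χ = 2 core block and fill the remaining columns from its orthogonal complement):

```python
import numpy as np

def complete_to_unitary(isometry):
    """Fill the remaining columns of a (d*chi) x chi isometry with an
    orthonormal basis of its orthogonal complement, yielding a unitary."""
    cols = isometry.shape[1]
    u_full, _, _ = np.linalg.svd(isometry, full_matrices=True)
    kernel = u_full[:, cols:]          # orthonormal complement of the range
    return np.hstack([isometry, kernel])

# toy chi = 2 core: orthonormalize a random 4 x 2 block, then complete
rng = np.random.default_rng(3)
V, _ = np.linalg.qr(rng.normal(size=(4, 2)))
G = complete_to_unitary(V)             # a 4 x 4 two-qubit gate
```

Because the added columns are orthonormal and orthogonal to the core's columns, the completed 4 x 4 matrix is exactly unitary, as required of a two-qubit gate.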
Procedures in lines 1, 4, and 6 are constant-time components. Each of the MPS encodings of line 4 can be performed in parallel, as they are each independent, constant-time operations, following the explicit analytical form prescribed in equation (17) and truncating with equation (21). This is an important component of the algorithm, and the number of regions is a constant that often provides sufficient accuracy when chosen to be small (e.g. 8). MPS summation in line 6 is also constant-time, as it merely reorganizes the tensors into block diagonalizations of the constituent piecewise-supported MPS.
Constructing the piecewise approximation of the function in line 2 has complexity that reflects the method used to do the approximating. Each subregion is independent, so all approximations can be performed in parallel. A single approximation over a subregion by a bounded degree-d polynomial can be performed with complexity that scales with the number of distinctly sampled points on each subregion. For a gridding of a subregion into m points, the least squares polynomial regression can be performed in time polynomial in m and d, where both m and d are constants chosen and customizable to particular target functions.
The iterative MPS compression of line 7 is the dominant contributing factor to the complexity of the algorithm. Because we are targeting states with constant bond dimension χ, its cost reduces to linear in the number of qubits. This complexity bound arises as the compression can be simplified into computing the optimal single tensor for each of the distinct overlaps between a state and the optimized state, each of which amounts to solving a linear system, after accounting for useful properties afforded by the MPS representation Schollwöck (2011).
Gate extraction Ran (2020) is also a linear-time operation, requiring the inversion of one small matrix to complete each quantum gate. Each inversion can be done naively in time cubic in the matrix dimension, and asymptotically faster at best Bürgisser et al. (2013). Once again, all matrices can be inverted in parallel, further reducing this cost.
Altogether, iterative MPS compression is the dominant computational cost of the algorithm when simple function approximation and optimized matrix inversion techniques are used, resulting in an overall time complexity linear in the number of qubits for bounded-χ targets.
VII Analysis and Results
VII.1 Performance Analysis and Numerical Results
[Figure panels: (a) Gaussian SVD MPS, (b) Lognormal SVD MPS, (c) Lorentzian SVD MPS, (d) Gaussian Optimality Ratio, (e) Lognormal Optimality Ratio, (f) Lorentzian Optimality Ratio]
To evaluate the accuracy of our technique, for small systems we explicitly compare the constructed state with the target. This is done with the fidelity measure Nielsen and Chuang (2002), defined as:

F(|ψ⟩, |φ⟩) = |⟨ψ|φ⟩|^2,

where |ψ⟩ is the constructed state and |φ⟩ is the target. Using this measure, we can estimate how exactly the constructed states match the targets.
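For state vectors this measure is a one-line computation; a minimal sketch (our own helper names):

```python
import numpy as np

def fidelity(psi, phi):
    """F = |<psi|phi>|^2 for normalized state vectors."""
    return abs(np.vdot(psi, phi)) ** 2

rng = np.random.default_rng(4)
psi = rng.normal(size=8) + 1j * rng.normal(size=8)
psi /= np.linalg.norm(psi)
phi = np.roll(psi, 1)                  # a different normalized state
```

Identical states give fidelity 1, and any pair of normalized states gives a value in [0, 1], which is what makes this a convenient accuracy metric for small-system comparisons.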
Figure 5 depicts the fidelities of circuits generated by Algorithm 2 plotted against the standard deviation of the targeted SDR distribution, in different regimes. We expect that as σ approaches 0, fidelity should decrease, as the MPS is no longer able to accurately capture the large upper bound on the derivative of the distribution. In Figure 5 this occurs at small σ, closely matching the predictions of the spectral analytical modeling in equation (10). This can be attributed to equation (7), where the added entropy of the state grows with the maximum derivative of the SDR function. For very squeezed states, this large effective constant dominates equation (7), and we find low-χ MPS states unable to accurately represent the function. Additionally, we find a region in which accuracy slightly decays as well. This reflects a fluctuation of the rank of the distributions in this region, beneath the upper bounds set by the derivatives. It is also indicative of error caused by the fixed region subdivision.
VII.2 Error Analysis
Error arises in three places within Algorithm 2: the piecewise polynomial approximation, the compression of the total MPS, and gate extraction. Empirically, we find that the dominant error comes from MPS compression, followed by the approximation error of the piecewise polynomial function (PP error); gate extraction error is negligible. Figure 6 displays the decomposition of the total error in each constructed distribution and attributes each to its source. Among all constructed cases, MPS compression error contributes the largest share on average, with PP and gate extraction error accounting for the remainder. For the lognormal distribution, PP error contributes more significantly than in any other distribution.
The polynomial function approximation error can be decreased by increasing the degree of the fit polynomial. Doing so comes at the cost of increasing constants in the computational complexity of the entire procedure, and for degrees large relative to the system size the practical implementation complexity may begin to be affected. Figure 7 displays the sensitivity of Algorithm 2 to the order of polynomial approximator used, studied for a single squeezed instantiation of each SDR function. Accuracy increases monotonically with the approximation degree, with diminishing returns beginning at the second order. Even for this difficult set of squeezed functions, cubic polynomials are able to construct the states with high accuracy, and increasing the polynomial order increases the fidelity further for each of the Gaussian, lognormal, and Lorentzian distributions.
VII.3 Optimality Ratios
The scalability of the technique rests on the conjecture that MPS representations remain good approximators of the target functions as the system size grows. Empirically, this can be estimated by tracing the fidelity of the SVD MPS representation of the SDR function as the system size scales up. The SVD MPS can be determined with Algorithm 1, which includes no function approximation as a component. Figure 8 shows these fidelity curves for a selection of standard deviations and presents numerical support for the claim that these approximations remain consistently accurate as systems grow. As a result, the gate extraction component of Algorithm 2 is unaffected by an increase in the overall system size.
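Algorithm 1's behavior can be approximated with a short numpy sketch: successive truncated SVDs split a state vector into MPS cores of bounded bond dimension, and contracting the cores back together gives the representation fidelity. This is an illustrative reimplementation under our own conventions, not the paper's code.

```python
import numpy as np

def svd_mps_fidelity(amps, chi):
    """Compress a 2**n state vector into MPS cores by successive truncated
    SVDs (bond dimension <= chi), recontract, and return the fidelity."""
    n = int(np.log2(amps.size))
    cores, M = [], amps.reshape(1, -1)
    for _ in range(n - 1):
        M = M.reshape(M.shape[0] * 2, -1)          # expose next physical index
        U, S, Vh = np.linalg.svd(M, full_matrices=False)
        k = min(chi, S.size)
        cores.append(U[:, :k].reshape(-1, 2, k))   # (left, physical, right)
        M = S[:k, None] * Vh[:k]
    cores.append(M.reshape(-1, 2, 1))
    psi = np.ones((1, 1))
    for c in cores:                                # contract cores back together
        psi = np.tensordot(psi, c, axes=(1, 0)).reshape(-1, c.shape[2])
    psi = psi.reshape(-1)
    return float((amps @ psi)**2 / ((amps @ amps) * (psi @ psi)))

# Discretized Gaussian: fidelity stays high as qubits are added at fixed chi.
for n in (6, 8, 10, 12):
    x = np.linspace(-1, 1, 2**n)
    amps = np.sqrt(np.exp(-x**2 / (2 * 0.1**2)))
    print(f"n={n:2d}  F={svd_mps_fidelity(amps, chi=4):.8f}")
```

With no truncation (chi at least the maximal Schmidt rank) the contraction reproduces the input exactly, which is a useful sanity check on the core ordering.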
The main factor in the accuracy of Algorithm 2 is then the construction of the approximate MPS. In order to estimate how closely the constructed state matches the optimal one, we can use an optimality ratio metric: the fidelity of the state constructed by Algorithm 2 divided by the fidelity of the optimal SVD MPS of the same bond dimension.
For more squeezed states, the absolute fidelity of the state constructed by Algorithm 2 remains high, and the optimality gap remains constant or slightly improves. This is strong evidence that accuracy will scale well to larger system discretizations.
A primary application for this procedure is any Monte Carlo style quantum algorithm that estimates the value of an observable evaluated over classical probability distributions. Monte Carlo methods have been shown to admit quadratic speedup in these domains Montanaro (2015); to demonstrate this, we discuss the Amplitude Estimation based financial derivatives pricing algorithm.
Many algorithms have been proposed in the risk analysis and financial derivatives pricing context Stamatopoulos et al. (2019); Woerner and Egger (2019); Rebentrost and Lloyd (2018); Rebentrost et al. (2018); Orus et al. (2019); Ramos-Calderer et al. (2019). Many of these are built around the same principle: after solving a variation of the Black-Scholes equation that dictates the movement of an asset, the solution is encoded as a probability distribution in a quantum state. A linear function is then computed on this state, producing a superposition over all values of that function weighted by probability. This yields the expected value of an operator which, in many cases, is formulated specifically to encode the price of a financial derivative of the asset. The expected value is evaluated using Amplitude Estimation Brassard et al. (2002), with a sampling error ε that converges as ε = O(1/M), where M is the number of quantum samples. Classical Monte Carlo techniques for the same task scale as O(1/√M), giving the quantum algorithm a quadratic speedup. There is, however, a clear dependence on the time required to encode the distribution. With Algorithm 2, this can be done in linear time with tunable accuracy.
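The classical half of this comparison is easy to verify empirically. The sketch below (a toy payoff E[max(X − K, 0)] for a standard normal, our illustration rather than an algorithm from the paper) shows the Monte Carlo RMS error shrinking as 1/√M:

```python
import numpy as np

rng = np.random.default_rng(7)
K, runs = 0.0, 2000
truth = 1 / np.sqrt(2 * np.pi)       # E[max(X - K, 0)] for X ~ N(0, 1), K = 0
rmses = {}
for M in (100, 400, 1600):
    # `runs` independent Monte Carlo estimates, each averaging M samples
    est = np.maximum(rng.standard_normal((runs, M)) - K, 0).mean(axis=1)
    rmses[M] = float(np.sqrt(np.mean((est - truth) ** 2)))
    print(f"M={M:5d}  rmse={rmses[M]:.4f}  rmse*sqrt(M)={rmses[M] * np.sqrt(M):.3f}")
```

Quadrupling M roughly halves the classical error (rmse·√M stays constant), whereas Amplitude Estimation's O(1/M) convergence would quarter it, which is the quadratic speedup at stake.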
VIII.1 Extensions and Future Work
One benefit of this work is that it extends to any real-valued function that is well approximated by a set of piecewise low-order polynomials. It thus extends to classical input data sets, so long as the data can be well approximated by an analytical generating function. One example is image data, which can often be approximated or interpolated into an analytical form Keys (1981); Lee et al. (1997); Hou and Andrews (1978); Parker et al. (1983). In theory this would allow quantum states corresponding to image data to be constructed efficiently, given a method for multivariate MPS encoding.
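For intuition, one-dimensional cubic convolution interpolation in the style of Keys (1981) can be sketched as follows (illustrative helper names; interior sample positions assumed):

```python
import numpy as np

def keys_kernel(x, a=-0.5):
    """Keys (1981) cubic convolution kernel; a = -0.5 gives third-order
    convergence when interpolating smooth data."""
    x = np.abs(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    m1 = x <= 1
    m2 = (x > 1) & (x < 2)
    out[m1] = (a + 2) * x[m1]**3 - (a + 3) * x[m1]**2 + 1
    out[m2] = a * (x[m2]**3 - 5 * x[m2]**2 + 8 * x[m2] - 4)
    return out

def cubic_interp(samples, t):
    """Interpolate uniformly spaced samples at fractional position t,
    using the four nearest samples (interior positions assumed)."""
    i = int(np.floor(t))
    offsets = np.arange(i - 1, i + 3)
    idx = np.clip(offsets, 0, len(samples) - 1)    # crude edge handling
    w = keys_kernel(t - offsets)                   # four kernel weights
    return float(np.dot(samples[idx], w))
```

An interpolated image row of this form yields an analytical generating function of the kind the text requires, which could then be fed into the piecewise polynomial encoding.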
In this work we develop an algorithm to prepare quantum states corresponding to smooth, differentiable, real-valued functions. The algorithm constructs a linear-depth circuit of arbitrary two-qubit gates while requiring only linear classical computation time. Empirical evaluation on commonly used probability distributions shows that high accuracy can be obtained even for the most squeezed target functions. The required computation scales linearly with system size, and we find no evidence that accuracy decays as systems grow, showing promise for scaling these techniques to much larger systems.
-  (2018) Computing low-rank approximations of large-scale matrices with the tensor network randomized SVD. SIAM Journal on Matrix Analysis and Applications 39 (3), pp. 1221–1244.
-  (2017) Tensor networks in a nutshell. arXiv preprint arXiv:1708.00006.
-  (2002) Quantum amplitude amplification and estimation. Contemporary Mathematics 305, pp. 53–74.
-  (2013) Algebraic complexity theory. Vol. 315, Springer Science & Business Media.
-  (2019) Randomized algorithms for the approximations of Tucker and the tensor train decompositions. Advances in Computational Mathematics 45 (1), pp. 395–428.
-  (2017) Quantum algorithm for systems of linear equations with exponentially improved dependence on precision. SIAM Journal on Computing 46 (6), pp. 1920–1950.
-  (2018) Quantum algorithm implementations for beginners. arXiv preprint arXiv:1804.03719.
-  (2020) Approximation and sampling of multivariate probability distributions in the tensor train decomposition. Statistics and Computing 30 (3), pp. 603–625.
-  (2014) Computation of extreme eigenvalues in higher dimensions using block tensor train format. Computer Physics Communications 185 (4), pp. 1207–1216.
-  (1936) The approximation of one matrix by another of lower rank. Psychometrika 1 (3), pp. 211–218.
-  (2019) arXiv preprint arXiv:1909.06619.
-  (2013) Matrix computations, 4th edition. The Johns Hopkins University Press, Baltimore, MD.
-  (2002) Creating superpositions that correspond to efficiently integrable probability distributions. arXiv preprint quant-ph/0208112.
-  (2009) Quantum algorithm for linear systems of equations. Physical Review Letters 103 (15), pp. 150502.
-  (2012) The alternating linear scheme for tensor optimization in the tensor train format. SIAM Journal on Scientific Computing 34 (2), pp. A683–A713.
-  (1978) Cubic splines for image interpolation and digital filtering. IEEE Transactions on Acoustics, Speech, and Signal Processing 26 (6), pp. 508–517.
-  (2017) A randomized tensor train singular value decomposition. In Compressed Sensing and its Applications, pp. 261–290.
-  (2017) Entanglement spectroscopy on a quantum computer. Physical Review B 96 (19), pp. 195136.
-  (1981) Cubic convolution interpolation for digital image processing. IEEE Transactions on Acoustics, Speech, and Signal Processing 29 (6), pp. 1153–1160.
-  (2014) Very large-scale singular value decomposition using tensor train networks. arXiv preprint arXiv:1410.6895.
-  (2015) Estimating a few extreme singular values and vectors for large-scale matrices in tensor train format. SIAM Journal on Matrix Analysis and Applications 36 (3), pp. 994–1014.
-  (1997) Scattered data interpolation with multilevel B-splines. IEEE Transactions on Visualization and Computer Graphics 3 (3), pp. 228–244.
-  (2014) Quantum principal component analysis. Nature Physics 10 (9), pp. 631–633.
-  (2019) Towards pricing financial derivatives with an IBM quantum computer. arXiv preprint arXiv:1904.05803.
-  (2015) Quantum speedup of Monte Carlo methods. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 471 (2181), pp. 20150301.
-  (2002) Quantum computation and quantum information. American Association of Physics Teachers.
-  (2019) Quantum computing for finance: overview and prospects. Reviews in Physics, pp. 100028.
-  (2013) Constructive representation of functions in low-rank tensor formats. Constructive Approximation 37 (1), pp. 1–18.
-  (2010) TT-cross approximation for multidimensional arrays. Linear Algebra and its Applications 432 (1), pp. 70–88.
-  (2011) Tensor-train decomposition. SIAM Journal on Scientific Computing 33 (5), pp. 2295–2317.
-  (1983) Comparison of interpolating methods for image resampling. IEEE Transactions on Medical Imaging 2 (1), pp. 31–39.
-  (2006) Matrix product state representations. arXiv preprint quant-ph/0608197.
-  (2011) Quantum-state preparation with universal gate decompositions. Physical Review A 83 (3), pp. 032302.
-  (2019) Quantum unary approach to option pricing. arXiv preprint arXiv:1912.01618.
-  (2020) Encoding of matrix product states into quantum circuits of one- and two-qubit gates. Physical Review A 101 (3), pp. 032310.
-  (2018) Quantum computational finance: Monte Carlo pricing of financial derivatives. Physical Review A 98 (2), pp. 022321.
-  (2018) Quantum computational finance: quantum algorithm for portfolio optimization. arXiv preprint arXiv:1811.03975.
-  (2014) Quantum support vector machine for big data classification. Physical Review Letters 113 (13), pp. 130503.
-  (2011) The density-matrix renormalization group in the age of matrix product states. Annals of Physics 326 (1), pp. 96–192.
-  (2019) Option pricing using quantum computers. arXiv preprint arXiv:1905.02666.
-  (2013) Elementary differential equations with boundary value problems.
-  (2012) Quantum algorithm for data fitting. Physical Review Letters 109 (5), pp. 050505.
-  (2019) Quantum risk analysis. npj Quantum Information 5 (1), pp. 1–8.
-  (2015) Time-evolving a matrix product state with long-ranged interactions. Physical Review B 91 (16), pp. 165112.
-  (2019) Quantum generative adversarial networks for learning and loading random distributions. npj Quantum Information 5 (1), pp. 1–9.