# Quantum Data Fitting Algorithm for Non-sparse Matrices

We propose a quantum data fitting algorithm for non-sparse matrices, based on the Quantum Singular Value Estimation (QSVE) subroutine and a novel efficient method for recovering the signs of eigenvalues. Our algorithm generalizes the quantum data fitting algorithm of Wiebe, Braun, and Lloyd for sparse and well-conditioned matrices by adding a regularization term to avoid over-fitting, an important problem in machine learning. As a result, the algorithm achieves a sparsity-independent runtime of $O(\kappa^2\sqrt{N}\,\mathrm{polylog}(N)/(\epsilon\log\kappa))$ for an $N\times N$ Hermitian matrix $F$, where $\kappa$ denotes the condition number of $F$ and $\epsilon$ is the precision parameter. This amounts to a polynomial speedup in the dimension of the matrix when compared with classical data fitting algorithms, and a strictly less-than-quadratic dependence on $\kappa$.


## I Introduction

Quantum machine learning is an emerging research area in the intersection of quantum computing and machine learning Biamonte et al. (2017); Wittek (2014). In recent years, a number of quantum machine learning algorithms have been proposed, most of which provide polynomial, sometimes exponential, speedup when compared with classical machine learning algorithms. This trend began with the breakthrough quantum algorithm of Harrow, Hassidim and Lloyd (HHL) Harrow et al. (2009), which solves a linear system with exponential acceleration over classical algorithms when the matrix is sparse and well conditioned. More importantly, (revised versions of) HHL has been employed as a subroutine by many quantum machine learning algorithms in solving problems such as Quantum Support Vector Machines (QSVM) Rebentrost et al. (2014), Quantum Recommendation Systems Kerenidis and Prakash (2016), and so on Schuld et al. (2016); Wiebe et al. (2014a, b); Kapoor et al. (2016); Zhao et al. (2015); Lloyd et al. (2014); Low et al. (2014); Rebentrost et al. (2016); Ciliberto et al. (2018).

In this paper, we are concerned with the Quantum Data Fitting (QDF) problem, whose goal is to find a quantum state proportional to the optimal fit parameter of the least squares fitting problem. It was shown in Wiebe et al. (2012) that by applying the HHL algorithm, the QDF problem can be solved in time $O(s^3\kappa^6\log(N)/\epsilon)$, where $N$, $s$, and $\kappa$ denote the dimension, sparsity (the maximum number of nonzero elements in any given row or column), and condition number of $F$, respectively, and $\epsilon$ is the maximum allowed distance between the output quantum state and the exact solution. Although the running time can be improved via the simulation methods of Childs (2010); Berry and Childs (2009) or the method of Liu and Zhang (2015), the dependence on the sparsity $s$ is at least linear, leading to a running time at least linear in $N$ for non-sparse matrices, where $s$ can be as large as $N$. Hence, it remains open whether it is possible, and how, to decrease the dependence on the dimension $N$ for non-sparse matrices in solving QDF problems.

Another issue not addressed by the QDF algorithm proposed in Wiebe et al. (2012) is the over-fitting problem Hawkins (2004); i.e., in some cases, while the fit to existing data is very good, the prediction of future data may remain poor. In this paper, we consider a generalization of the standard technique for data fitting (in machine learning, least squares (LSQ) fitting is a standard technique for data fitting, and the two terms are often used interchangeably), namely the regularized least squares fitting, also known as ridge regression Hoerl and Kennard (1970), which adds a regularization term to avoid the over-fitting problem. We propose a quantum data fitting algorithm for regularized least squares fitting problems with non-sparse matrices, with a running time of $O(\kappa^2\sqrt{N}\,\mathrm{polylog}(N)/(\epsilon\log\kappa))$, a polynomial speedup (in the dimension $N$) over classical algorithms. The main result is given in Theorem 3.

### Related Works.

Recently, inspired by the quantum recommendation systems and based on the Quantum Singular Value Estimation (QSVE) subroutine Kerenidis and Prakash (2016), Wossnig, Zhao and Prakash (WZP) Wossnig et al. (2018) proposed a dense version of HHL. Recall that QSVE can only estimate the magnitude, but not the sign, of the eigenvalues of a Hermitian matrix. The key technique of the WZP algorithm is to first call QSVE subroutines for the matrices $F$ and $F + \mu I_N$, respectively, where $\mu$ is a relatively small number, and then compare the corresponding eigenvalue magnitudes of these matrices to obtain the desired sign.

However, this technique has two potential disadvantages: 1) we need to construct two, instead of one, binary tree data structures as proposed in Kerenidis and Prakash (2016). Constructing these binary trees is time-consuming; it is linear in the number of non-zero elements of the matrix; 2) it becomes difficult to implement if the condition number is significantly large, as a small $\mu$ requires a high-precision quantum computer to process. By comparison, in this paper, we recover the signs of the eigenvalues of $F$ by using only one binary tree data structure, for the matrix $F + \|F\|_* I_N$, where $\|F\|_*$ denotes the spectral norm of $F$. Furthermore, we do not need to perform the comparison operation, which might introduce additional errors to the system.

It is worth noting that recently, Meng et al. Meng et al. (2018) and Yu et al. Yu et al. (2017) also proposed quantum ridge regression algorithms for the non-sparse case. However, the algorithm in Yu et al. (2017) only works for low-rank matrices, while that in Meng et al. (2018) uses the same technique as WZP, thus having the same potential disadvantages as pointed out above. Moreover, neither of them explores the impact of the hyper-parameter $\gamma$ on the time complexity of the algorithm, as we do in the current paper.

## II Regularized Least Squares Fitting

The least squares fitting problem Wiebe et al. (2012) can be described as follows. Given a set of samples $\{(x_i, y_i)\}_{i=1}^{m}$ (here we, following Wiebe et al. (2012), consider the case where the data points $x_i$ are scalar; if they are more general, e.g., vectors, then we can let each function $f_j$ act on one element of the vector, to match the more common description of the least squares fitting problem), the goal is to find a parametric function $f(x, w)$ that well approximates these points, where $w \in \mathbb{C}^n$ is the fit parameter. We assume that $f$ is linear in $w$, but not necessarily so in $x$. In other words,

$$f(x,w) := \sum_{j=1}^{n} w_j f_j(x) \tag{1}$$

for some functions $f_j$. The objective is to minimize the sum of the distances between the fit function and the target outputs $y_i$, plus a regularization term, i.e.,

$$\min_{w} \sum_{i=1}^{m} |f(x_i,w)-y_i|^2 + \gamma\, w^\dagger w = \|Fw-y\|^2 + \gamma\|w\|^2, \tag{2}$$

where $F$ is the $m \times n$ matrix with entries $F_{ij} := f_j(x_i)$, $y := (y_1, \ldots, y_m)^T$, and $\gamma > 0$ denotes the hyper-parameter of the regularization term, a common technique in machine learning. In this paper, we assume $\gamma$ is given, and our task is to find the optimal $w$. The solution to the regularized least squares fitting problem (2) is given by

$$w^* = (F^\dagger F + \gamma I_n)^{-1} F^\dagger y, \tag{3}$$

where $I_n$ denotes the $n$-by-$n$ identity matrix.
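As a classical sanity check of Eq. (3), the closed-form ridge solution can be computed directly. The following NumPy sketch uses an illustrative random instance; the matrix F, targets y, and hyper-parameter γ below are placeholders, not data from the paper.

```python
import numpy as np

# Illustrative regularized least squares instance (placeholder data).
rng = np.random.default_rng(0)
m, n, gamma = 8, 3, 0.1
F = rng.standard_normal((m, n))
y = rng.standard_normal(m)

# Closed-form solution of Eq. (3): w* = (F^† F + γ I_n)^{-1} F^† y.
w_star = np.linalg.solve(F.conj().T @ F + gamma * np.eye(n), F.conj().T @ y)

# w* minimizes ||F w - y||^2 + γ||w||^2, so the gradient vanishes there.
grad = 2 * F.conj().T @ (F @ w_star - y) + 2 * gamma * w_star
assert np.allclose(grad, 0)
```

Since the regularized objective is strictly convex for γ > 0, the vanishing gradient certifies that this is the unique minimizer.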

Note that we can assume without loss of generality that the matrix $F$ is Hermitian. Otherwise, define

$$\tilde F := \begin{pmatrix} 0 & F \\ F^\dagger & 0 \end{pmatrix}, \qquad \tilde y := \begin{pmatrix} y \\ 0 \end{pmatrix}.$$

Then it is easy to check that $w^*$ satisfies Eq. (3) if and only if

$$\tilde w^* = (\tilde F^\dagger \tilde F + \gamma I_{m+n})^{-1} \tilde F^\dagger \tilde y, \tag{4}$$

where $\tilde w^* := (0^T, {w^*}^T)^T$. In other words, for any non-Hermitian matrix, we can construct a Hermitian matrix which gives the same optimal solution as Eq. (3) by expanding the vector's dimension Harrow et al. (2009).
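A quick numerical check of this Hermitian embedding, written as a minimal NumPy sketch with illustrative random data: the dilated solve of Eq. (4) returns zeros in its first m entries and the ridge solution w* of Eq. (3) in its last n entries.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, gamma = 5, 3, 0.5
F = rng.standard_normal((m, n))
y = rng.standard_normal(m)

# Direct solution of Eq. (3).
w_star = np.linalg.solve(F.conj().T @ F + gamma * np.eye(n), F.conj().T @ y)

# Hermitian embedding: F~ = [[0, F], [F^†, 0]], y~ = (y^T, 0^T)^T.
Ft = np.block([[np.zeros((m, m)), F], [F.conj().T, np.zeros((n, n))]])
yt = np.concatenate([y, np.zeros(n)])
assert np.allclose(Ft, Ft.conj().T)  # F~ is indeed Hermitian

# Eq. (4): the last n entries of w~* recover w*, the first m are zero.
wt = np.linalg.solve(Ft.conj().T @ Ft + gamma * np.eye(m + n), Ft.conj().T @ yt)
assert np.allclose(wt[:m], 0)
assert np.allclose(wt[m:], w_star)
```

The check works because $\tilde F^\dagger \tilde F$ is block diagonal with blocks $FF^\dagger$ and $F^\dagger F$, so the two halves of the dilated system decouple.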

## III Quantum Singular Value Estimation

Quantum Singular Value Estimation (QSVE) can be viewed as an extension of Phase Estimation Kitaev (1995) from unitary to non-unitary matrices, and it is the primary subroutine of our quantum data fitting algorithm. We briefly state it in the following.

Given a matrix $A$ stored in a classical binary tree data structure, an algorithm having quantum access to the data structure can create, in time $\mathrm{polylog}(mn)$, the quantum state corresponding to each row of the matrix Kerenidis and Prakash (2016). Note also that if the elements of $A$ are complex numbers, the binary tree simply stores their squared moduli in its leaf nodes.
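The data structure can be sketched classically as follows: leaves hold the squared magnitudes of a row's entries, each internal node holds the sum of its children (so the root is the row's squared norm), and the child/parent ratios give the branching probabilities that determine the controlled rotations used in state preparation. This is a simplified classical illustration of the idea, not the full quantum-accessible structure of Kerenidis and Prakash.

```python
import numpy as np

def build_tree(row):
    """Binary tree over a (complex) row: leaves hold |a_j|^2, internal nodes
    hold sums of children, so the root equals ||row||^2. Returned as a list
    of levels, root first; the leaf level is padded to a power of two."""
    size = 1 << int(np.ceil(np.log2(len(row))))
    level = np.zeros(size)
    level[: len(row)] = np.abs(np.asarray(row)) ** 2
    levels = [level]
    while len(level) > 1:
        level = level[0::2] + level[1::2]  # parent = left child + right child
        levels.append(level)
    return levels[::-1]

row = [3 + 4j, 1, 0, 2]
tree = build_tree(row)
root = tree[0][0]
assert np.isclose(root, 25 + 1 + 0 + 4)  # root stores ||row||^2 = 30

# State preparation walks down the tree: at each node, the probability of
# branching left is (left child) / (parent), fixing a controlled rotation.
p_left = tree[1][0] / root
assert np.isclose(p_left, 26 / 30)
```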

###### Theorem 1.

Quantum Singular Value Estimation Kerenidis and Prakash (2016): Let $A \in \mathbb{R}^{m\times n}$ be a matrix stored in the data structure presented above, and $A = \sum_i \sigma_i u_i v_i^T$ be its singular value decomposition. For a precision parameter $\delta > 0$, there is a quantum algorithm that performs the mapping $\sum_i \alpha_i |v_i\rangle \mapsto \sum_i \alpha_i |v_i\rangle|\bar\sigma_i\rangle$ such that $|\bar\sigma_i - \sigma_i| \le \delta \|A\|_F$ for all $i$, with probability at least $1 - 1/\mathrm{poly}(n)$, in time $O(\mathrm{polylog}(mn)/\delta)$.

We see from Theorem 1 that the precision of QSVE scales with the Frobenius norm $\|A\|_F$, rather than with the sparsity $s$ appearing in HHL. This will also be reflected in our algorithm's runtime.

## IV Quantum Data Fitting Algorithm

For a Hermitian matrix $F$ with the spectral decomposition $F = \sum_i \lambda_i v_i v_i^\dagger$, its singular value decomposition is given by $F = \sum_i |\lambda_i| u_i v_i^\dagger$, where the left singular vectors $u_i$ are equal to $\pm v_i$ depending on the signs of $\lambda_i$; i.e., $u_i = v_i$ if $\lambda_i \ge 0$, and $u_i = -v_i$ otherwise.
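This sign ambiguity is easy to see numerically: the singular values of a Hermitian matrix coincide with the absolute values of its eigenvalues, so a singular-value estimate alone cannot distinguish $\lambda_i$ from $-\lambda_i$. A small NumPy check on a random symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
F = (A + A.T) / 2  # a real symmetric (hence Hermitian) matrix

eigvals = np.linalg.eigvalsh(F)
sigmas = np.linalg.svd(F, compute_uv=False)

# Singular values equal the absolute values of the eigenvalues, so the
# signs are invisible to any procedure that only sees singular values.
assert np.allclose(np.sort(sigmas), np.sort(np.abs(eigvals)))
```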

Similar to Wossnig et al. (2018), the QSVE of Theorem 1 will also serve as a key subroutine of our algorithm. The difference is, however, that we use the following lemma to recover the signs of the eigenvalues of a Hermitian matrix.

###### Lemma 2.

Let $F \in \mathbb{C}^{N\times N}$ be a Hermitian matrix with the spectral decomposition $F = \sum_i \lambda_i v_i v_i^\dagger$. Let $\|F\|_*$ be the spectral norm of $F$, and $I_N$ the $N$-by-$N$ identity matrix. For a precision parameter $\epsilon > 0$, by performing the QSVE algorithm on the matrix $\hat F := F + \|F\|_* I_N$, we can transform $\sum_i \beta_i |v_i\rangle$ into $\sum_i \beta_i |v_i\rangle |\bar\lambda_i\rangle$ such that $|\bar\lambda_i - \lambda_i| \le \epsilon$ for all $i$, with probability at least $1 - 1/\mathrm{poly}(N)$, in time $O(\sqrt N \|F\|_* \mathrm{polylog}(N)/\epsilon)$.

###### Proof.

The proof is quite straightforward. Since $F$ has the spectral decomposition $F = \sum_i \lambda_i v_i v_i^\dagger$, we have $\hat F = \sum_i \hat\lambda_i v_i v_i^\dagger$, where $\hat\lambda_i := \lambda_i + \|F\|_*$ for all $i$. By the definition of $\|F\|_*$, the eigenvalues of $\hat F$ are all non-negative, meaning that $\hat F$ is a positive semi-definite matrix. Therefore, the singular value decomposition of $\hat F$ is the same as its spectral decomposition.

By performing QSVE on $\hat F$ with the precision parameter $\epsilon/\|\hat F\|_F$, we obtain an estimation $\bar{\hat\lambda}_i$ of $\hat\lambda_i$ such that $|\bar{\hat\lambda}_i - \hat\lambda_i| \le \epsilon$ for all $i$, with probability at least $1 - 1/\mathrm{poly}(N)$, in time $O(\|\hat F\|_F\, \mathrm{polylog}(N)/\epsilon)$. An estimation $\bar\lambda_i$ of the eigenvalue $\lambda_i$ of the original matrix $F$ is then obtained by subtracting $\|F\|_*$ from $\bar{\hat\lambda}_i$. Finally, the estimation error is bounded the same as in QSVE, because

$$|\bar\lambda_i - \lambda_i| = |(\bar\lambda_i + \|F\|_*) - (\lambda_i + \|F\|_*)| = |\bar{\hat\lambda}_i - \hat\lambda_i| \le \epsilon. \tag{5}$$

Now we bound $\|\hat F\|_F$ in order to bound the time complexity:

$$\begin{aligned} \|\hat F\|_F &= \sqrt{\sum_i \hat\lambda_i^2} = \sqrt{\sum_i (\lambda_i + \|F\|_*)^2} \\ &= \sqrt{\sum_i \lambda_i^2 + 2\|F\|_* \sum_i \lambda_i + N\|F\|_*^2} \\ &= \sqrt{\|F\|_F^2 + \Big(1 + \frac{2E(\lambda)}{\|F\|_*}\Big) N \|F\|_*^2} \qquad\qquad (6) \\ &\le O(\sqrt N \|F\|_*), \qquad\qquad\qquad\qquad\qquad\qquad (7) \end{aligned}$$

where $E(\lambda) := \frac1N \sum_i \lambda_i$ denotes the mean of the eigenvalues, and Eq. (7) follows from $\|F\|_F \le \sqrt N \|F\|_*$ and $|E(\lambda)| \le \|F\|_*$. This completes the proof. ∎
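The shift-and-subtract idea behind Lemma 2 can be mimicked classically: adding $\|F\|_* I_N$ makes the matrix positive semi-definite, so its singular values are exactly its shifted eigenvalues, and subtracting the shift restores the signs. A minimal NumPy sketch on a random symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))
F = (A + A.T) / 2  # random symmetric test matrix

norm = np.linalg.norm(F, 2)      # spectral norm ||F||_*
F_hat = F + norm * np.eye(5)     # PSD shift: eigenvalues become λ_i + ||F||_*

# F_hat is positive semi-definite, so its singular values coincide with its
# eigenvalues; subtracting the shift recovers the signed eigenvalues of F.
sigmas = np.linalg.svd(F_hat, compute_uv=False)
recovered = np.sort(sigmas - norm)
assert np.allclose(recovered, np.sort(np.linalg.eigvalsh(F)))
```

This is exactly why a single QSVE call on $\hat F$ suffices, where the WZP approach needed two calls plus a comparison.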

With this lemma in hand, we state our quantum data fitting algorithm in the following theorem:

###### Theorem 3.

Let $F$ be the $N\times N$ non-sparse Hermitian matrix described in the least squares fitting problem, $F = \sum_i \lambda_i v_i v_i^\dagger$ its spectral decomposition, and $\kappa$ its condition number. Assume that $F$ is stored in the classical binary tree data structure as in Kerenidis and Prakash (2016). For a precision parameter $\epsilon > 0$, Algorithm 1 outputs a quantum state $|w\rangle$ such that $\||w\rangle - |w^*\rangle\| \le \epsilon$ in time $O(\kappa^2 \sqrt N\, \mathrm{polylog}(N)/(\epsilon \log \kappa))$, where $|w^*\rangle$ denotes the quantum state proportional to the optimal fit parameter $w^*$ in Eq. (3).

###### Proof.

The proof mainly consists of a correctness analysis and a complexity analysis. First we prove correctness, i.e., $\||w\rangle - |w^*\rangle\| \le \epsilon$.

From Algorithm 1, we observe that, after post-selection,

$$|w\rangle = \frac{\sum_i \beta_i h(\bar\lambda_i)|v_i\rangle|0\rangle}{\sqrt{\sum_i |\beta_i|^2 (h(\bar\lambda_i))^2}} = \frac{\sum_i \beta_i h(\bar\lambda_i)|v_i\rangle|0\rangle}{\sqrt{\bar p}}, \tag{9}$$

where $\bar p := \sum_i |\beta_i|^2 (h(\bar\lambda_i))^2$ and $h$ is defined as follows:

$$h(\lambda) := \frac{C\lambda}{\lambda^2+\gamma} = \frac{\sqrt\gamma\,\lambda}{\lambda^2+\gamma}. \tag{10}$$

Here, we take $C = \sqrt\gamma$ as an example (other values of $C$ are similar). The ideal state should be

$$|w^*\rangle = \frac{\sum_i \beta_i h(\lambda_i)|v_i\rangle|0\rangle}{\sqrt{\sum_i |\beta_i|^2 (h(\lambda_i))^2}} = \frac{\sum_i \beta_i h(\lambda_i)|v_i\rangle|0\rangle}{\sqrt p}, \tag{11}$$

where $p := \sum_i |\beta_i|^2 (h(\lambda_i))^2$. Therefore, we have

$$\begin{aligned} \||w\rangle - |w^*\rangle\|^2 &= \Big\|\sum_i \beta_i \Big(\frac{h(\bar\lambda_i)}{\sqrt{\bar p}} - \frac{h(\lambda_i)}{\sqrt p}\Big)|v_i\rangle|0\rangle\Big\|^2 \qquad (12) \\ &= \sum_i |\beta_i|^2 \Big(\frac{h(\lambda_i)}{\sqrt p}\Big)^2 \Big(\frac{h(\bar\lambda_i)}{h(\lambda_i)} \cdot \frac{\sqrt p}{\sqrt{\bar p}} - 1\Big)^2. \qquad (13) \end{aligned}$$

We now bound $h(\bar\lambda_i)/h(\lambda_i)$ and $\sqrt p/\sqrt{\bar p}$ via the following lemma:

###### Lemma 4.

Let $h$ be defined as in (10). Then

$$|h(\bar\lambda) - h(\lambda)| \le \tfrac13 \epsilon\, |h(\lambda)|. \tag{14}$$
###### Proof.

From the definition of $h$, and the facts that $|\bar\lambda - \lambda| \le \delta := \epsilon\|F\|_*/(4\kappa)$ and $\|F\|_*/\kappa \le |\lambda| \le \|F\|_*$ for all eigenvalues $\lambda$ of $F$, we have

$$\begin{aligned} |h(\bar\lambda) - h(\lambda)| &\le \frac{\sqrt\gamma\,\big|\big((1+\frac\epsilon4)\lambda^2 - \gamma\big)(\bar\lambda - \lambda)\big|}{\big((1-\frac\epsilon4)\lambda^2 + \gamma\big)(\lambda^2+\gamma)} \qquad (15) \\ &\le \frac43 \cdot \frac{1}{|\lambda|} \cdot |\bar\lambda - \lambda| \cdot \frac{\sqrt\gamma\,|\lambda|}{\lambda^2+\gamma} \qquad\qquad (16) \\ &\le \frac43 \cdot \frac{\kappa}{\|F\|_*} \cdot \delta \cdot |h(\lambda)| \qquad\qquad\qquad (17) \\ &= \frac13 \epsilon\, |h(\lambda)|, \qquad\qquad\qquad\qquad\quad\ (18) \end{aligned}$$

where Eq. (15) follows from $|\bar\lambda - \lambda| \le \delta \le \epsilon|\lambda|/4$ and Eq. (16) from $\epsilon \le 1/2$. This completes the proof of Lemma 4. ∎

From Lemma 4, we obtain that for all $i$,

$$\Big|\frac{h(\bar\lambda_i)}{h(\lambda_i)} - 1\Big| \le \frac13\epsilon. \tag{19}$$

Thus

$$1 - \frac13\epsilon \le \frac{h(\bar\lambda_i)}{h(\lambda_i)} \le 1 + \frac13\epsilon, \tag{20}$$

and consequently

$$\frac{1}{1+\frac13\epsilon} \le \frac{\sqrt p}{\sqrt{\bar p}} \le \frac{1}{1-\frac13\epsilon}. \tag{21}$$

By substituting (20) and (21) into (13), we have

$$\||w\rangle - |w^*\rangle\| \le \max\Big\{1 - \frac{1-\frac13\epsilon}{1+\frac13\epsilon},\ \frac{1+\frac13\epsilon}{1-\frac13\epsilon} - 1\Big\} = \frac{\frac23\epsilon}{1-\frac13\epsilon} \le \epsilon. \tag{22}$$

Next we analyze the time complexity. From Lemma 2, we know that in Algorithm 1, the QSVE subroutine runs in time

$$O(\sqrt N\,\|F\|_*\,\mathrm{polylog}(N)/\delta) = O(\kappa \sqrt N\,\mathrm{polylog}(N)/\epsilon). \tag{23}$$

On the other hand, we consider the success probability of the post-selection process. In order to bound the maximal number of iterations, we need to compute the minimum of the rotation function $|h|$, which is related to the hyper-parameter $\gamma$. (In general, we take $\gamma \in [\|F\|_*^2/\kappa^2, \|F\|_*^2]$. This is reasonable in the machine learning area because too small values of $\gamma$ lead to a negligible effect of regularization, while too large values result in the loss of useful information of the original problem, i.e., the so-called under-fitting Svergun (1992).) The image of $|h|$ as a function of $\lambda$ is illustrated in Figure 1, from which we see that, for $\|F\|_*/\kappa \le |\lambda| \le \|F\|_*$, the minimum of $|h|$ is given by

$$\min_\lambda |h(\lambda)| = \min\Big\{\frac{\sqrt\gamma\,\|F\|_*/\kappa}{\|F\|_*^2/\kappa^2 + \gamma},\ \frac{\sqrt\gamma\,\|F\|_*}{\|F\|_*^2 + \gamma}\Big\}. \tag{24}$$

Hence, using amplitude amplification Brassard et al. (2002), the number of iterations can be bounded by $O(1/\min_\lambda |h(\lambda)|) = O(\max\{\kappa\sqrt\gamma/\|F\|_*,\ \|F\|_*/\sqrt\gamma\})$.

Furthermore, from the experience of machine learning, $\gamma$ is usually searched on a logarithmic scale, e.g., 0.01, 0.1, 1, … Montavon et al. (1998). Thus we take $\gamma$ randomly according to a log-uniform distribution on its domain (Line 1 of Algorithm 1), and estimate the expected number of iterations as

$$\frac{1}{\ln\|F\|_*^2 - \ln\frac{\|F\|_*^2}{\kappa^2}} \int_{\ln\frac{\|F\|_*^2}{\kappa^2}}^{\ln\|F\|_*^2} \max\Big\{\frac{\kappa\sqrt{e^t}}{\|F\|_*},\ \frac{\|F\|_*}{\sqrt{e^t}}\Big\}\,dt = O(\kappa/\log\kappa), \tag{25}$$

where $t := \ln\gamma$ obeys a uniform distribution. Combining this with (23), the total time complexity is $O(\kappa^2\sqrt N\,\mathrm{polylog}(N)/(\epsilon\log\kappa))$. This completes the proof of Theorem 3. ∎

## V Further Discussions and Conclusions

In this paper, we proposed a quantum data fitting algorithm for the regularized least squares fitting problem with non-sparse matrices, which achieves a runtime of $O(\kappa^2\sqrt N\,\mathrm{polylog}(N)/(\epsilon\log\kappa))$, where the $1/\log\kappa$ factor is due to the random choice of the hyper-parameter $\gamma$ according to the log-uniform distribution in Algorithm 1. As the hyper-parameter is usually set empirically in machine learning, we let our algorithm generate it automatically. Of course, if one wants to set it manually, one can simply modify our algorithm by moving the first line into the Input.

The technique proposed in this paper can also be applied to the HHL algorithm, yielding the same time complexity as WZP Wossnig et al. (2018). It is worth noting that our algorithm's running time is actually related to the mean $E(\lambda)$ of the eigenvalues of $F$; see Eq. (6). If $E(\lambda)$ is close to $-\|F\|_*$ or all the eigenvalues are negative, then $\|\hat F\|_F$, and hence the running time, can be relatively small, possibly even logarithmic in the matrix dimension $N$. If $E(\lambda) \ge 0$, as in the case of $\tilde F$ in Eq. (4) (whose eigenvalues come in $\pm$ pairs, so $E(\lambda) = 0$), then the running time scales as the square root of the matrix dimension, as stated in this paper. However, on the whole, the time complexity of our algorithm is still polynomial in the dimension of the data matrix, because it is derived from the Frobenius norm, or more precisely, from the binary tree data structure Kerenidis and Prakash (2016). Whether there exists a QDF algorithm that runs in time logarithmic in the dimension for non-sparse matrices remains to be explored.

###### Acknowledgements.
We thank Prof. Sanjiang Li for helpful discussions and proofreading the manuscript. G. Li acknowledges the financial support from China Scholarship Council (No. 201806070139). This work was partly supported by the Australian Research Council (Grant No: DP180100691) and the Baidu-UTS collaborative project “AI meets Quantum: Quantum algorithms for knowledge representation and learning”.