Quantum Data Fitting Algorithm for Non-sparse Matrices

07/16/2019 ∙ by Guangxi Li, et al. ∙ University of Technology Sydney

We propose a quantum data fitting algorithm for non-sparse matrices, which is based on the Quantum Singular Value Estimation (QSVE) subroutine and a novel efficient method for recovering the signs of eigenvalues. Our algorithm generalizes the quantum data fitting algorithm of Wiebe, Braun, and Lloyd for sparse and well-conditioned matrices by adding a regularization term to avoid the over-fitting problem, which is a very important problem in machine learning. As a result, the algorithm achieves a sparsity-independent runtime of O(κ²√N polylog(N)/(εκ)) for an N×N dimensional Hermitian matrix F, where κ denotes the condition number of F and ε is the precision parameter. This amounts to a polynomial speedup in the dimension of the matrix when compared with classical data fitting algorithms, and a strictly less than quadratic dependence on κ.


I Introduction

Quantum machine learning is an emerging research area at the intersection of quantum computing and machine learning Biamonte et al. (2017); Wittek (2014). In recent years, a number of quantum machine learning algorithms have been proposed, most of which provide polynomial, and sometimes exponential, speedups over classical machine learning algorithms. This trend began with the breakthrough quantum algorithm of Harrow, Hassidim and Lloyd (HHL) Harrow et al. (2009), which solves a linear system with exponential acceleration over classical algorithms when the matrix is sparse and well conditioned. More importantly, (revised versions of) HHL have been employed as a subroutine by many quantum machine learning algorithms for problems such as the Quantum Support Vector Machine (QSVM) Rebentrost et al. (2014), Quantum Recommendation Systems Kerenidis and Prakash (2016), and others Schuld et al. (2016); Wiebe et al. (2014a, b); Kapoor et al. (2016); Zhao et al. (2015); Lloyd et al. (2014); Low et al. (2014); Rebentrost et al. (2016); Ciliberto et al. (2018).

In this paper, we are concerned with the Quantum Data Fitting (QDF) problem, whose goal is to find a quantum state proportional to the optimal fit parameter of the least squares fitting problem. It was shown in Wiebe et al. (2012) that, by applying the HHL algorithm, the QDF problem can be solved in time polylogarithmic in N but polynomial in s and κ, where N, s, and κ denote the dimension, sparsity (the maximum number of nonzero elements in any given row or column), and condition number of the data matrix F, respectively, and ε is the maximum allowed distance between the output quantum state and the exact solution. Although the dependence on s and κ can be improved via the simulation methods of Childs (2010); Berry and Childs (2009) or the method of Liu and Zhang (2015), the dependence on s remains at least linear, leading to a running time that is at least linear in N for non-sparse matrices. Hence, it remains open whether it is possible, and how, to reduce the dependence on the dimension N for non-sparse matrices when solving QDF problems.

Another issue not addressed by the QDF algorithm proposed in Wiebe et al. (2012) is the over-fitting problem Hawkins (2004); i.e., in some cases, even though the fit to the existing data is very good, the prediction of future data may remain poor. In this paper, we consider the generalized standard technique for data fitting (in machine learning, least squares (LSQ) fitting is a standard technique for data fitting and is often used interchangeably with the term data fitting), i.e., regularized least squares fitting, also known as ridge regression Hoerl and Kennard (1970), which adds a regularization term to avoid the over-fitting problem. We propose a quantum data fitting algorithm for regularized least squares fitting problems with non-sparse matrices, with a running time of O(κ²√N polylog(N)/(εκ)), a polynomial speedup (in the dimension N) over classical algorithms. The main result is given in Theorem 3.

Related Works.

Recently, inspired by quantum recommendation systems and based on the Quantum Singular Value Estimation (QSVE) subroutine Kerenidis and Prakash (2016), Wossnig, Zhao and Prakash (WZP) Wossnig et al. (2018) proposed a dense version of HHL. Recall that QSVE can only estimate the magnitude, but not the sign, of the eigenvalues of a Hermitian matrix. The key technique of the WZP algorithm is to first call the QSVE subroutine for the matrices A and A + μI, respectively, where μ is a relatively small number, and then compare the corresponding eigenvalue estimates of these two matrices to obtain the desired signs.

However, this technique has two potential disadvantages: 1) two, instead of one, binary tree data structures as proposed in Kerenidis and Prakash (2016) need to be constructed, which is time-consuming (the cost is linear in the number of non-zero elements of the matrix); 2) it becomes difficult to implement if κ is significantly large, as a small μ then requires a high-precision quantum computer to process. By comparison, in this paper we recover the signs of the eigenvalues of F by using only one binary tree data structure, for the matrix F + ‖F‖I, where ‖F‖ denotes the spectral norm of F. Furthermore, we do not need to perform the comparison operation, which might introduce additional errors into the system.

It is worth noting that, recently, Meng et al. Meng et al. (2018) and Yu et al. Yu et al. (2017) also proposed quantum ridge regression algorithms for the non-sparse case. However, the algorithm in Yu et al. (2017) only works for low-rank matrices, while that in Meng et al. (2018) uses the same technique as WZP and thus has the same potential disadvantages as pointed out above. Moreover, neither of them explores the impact of the hyper-parameter α on the time complexity of the algorithm, as we do in the present paper.

II Regularized Least Squares Fitting

The least squares fitting problem Wiebe et al. (2012) can be described as follows. Given a set of N samples {(x_i, y_i)}, i = 1, …, N (here, following Wiebe et al. (2012), we consider the case where the data points x_i are scalars; if they are more general, e.g., vectors, we can let each function f_j below act on a single component of the vector, to match the more common description of the least squares fitting problem), the goal is to find a parametric function f(x, λ) that approximates these points well, where λ is the fit parameter. We assume that f is linear in λ, but not necessarily so in x. In other words,

f(x, λ) = Σ_{j=1}^M f_j(x) λ_j    (1)

for some functions f_j. The objective is to minimize the sum of the squared distances between the fit function values f(x_i, λ) and the target outputs y_i, together with a regularization term, i.e.,

E = Σ_{i=1}^N |f(x_i, λ) − y_i|² + α‖λ‖² = ‖Fλ − y‖² + α‖λ‖²,    (2)

where F is an N×M matrix with entries F_{ij} = f_j(x_i), y = (y_1, …, y_N)^T, and α > 0 denotes the hyper-parameter of the regularization term, a common technique in machine learning. In this paper, we assume α is given, and our task is to find the optimal λ. The solution to the regularized least squares fitting problem (2) is given by

λ_opt = (F†F + αI)⁻¹ F† y,    (3)

where I denotes the M-by-M identity matrix.
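For readers who want to check Eq. (3) numerically, the following minimal NumPy sketch (not part of the original paper; the matrix, vector, and α below are arbitrary illustrative values) solves the regularized problem classically and verifies that the gradient of the objective (2) vanishes at the solution:

```python
import numpy as np

# Arbitrary illustrative problem instance (not from the paper).
rng = np.random.default_rng(0)
N, M, alpha = 8, 4, 0.1
F = rng.standard_normal((N, M))
y = rng.standard_normal(N)

# Closed-form ridge solution (3): lambda_opt = (F^dagger F + alpha I)^{-1} F^dagger y.
lambda_opt = np.linalg.solve(F.conj().T @ F + alpha * np.eye(M), F.conj().T @ y)

# Sanity check: the gradient of ||F lambda - y||^2 + alpha ||lambda||^2 vanishes.
grad = F.conj().T @ (F @ lambda_opt - y) + alpha * lambda_opt
print(np.allclose(grad, 0))  # True
```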

Note that we can assume without loss of generality that the matrix F is Hermitian. Otherwise, define F̃ = [[0, F], [F†, 0]] and ỹ = (y, 0)^T. Then it is easy to check that λ_opt satisfies Eq. (3) if and only if

(F̃†F̃ + αI)⁻¹ F̃† ỹ = (0, λ_opt)^T,    (4)

where I now denotes the (N+M)-by-(N+M) identity matrix. In other words, for any non-Hermitian matrix, we can construct a Hermitian matrix which gives the same optimal solution as in Eq. (3) by expanding the vector's dimension Harrow et al. (2009).
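A quick numerical check of this Hermitian embedding (a sketch under the reconstruction of Eq. (4) above; all numerical values are illustrative): the lower block of the dilated solution reproduces the original optimum, while the upper block vanishes.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, alpha = 8, 4, 0.1
F = rng.standard_normal((N, M))
y = rng.standard_normal(N)

# Original ridge solution of Eq. (3).
lam = np.linalg.solve(F.T @ F + alpha * np.eye(M), F.T @ y)

# Hermitian dilation F_tilde = [[0, F], [F^dagger, 0]] and padded target (y, 0)^T.
F_t = np.block([[np.zeros((N, N)), F], [F.T, np.zeros((M, M))]])
y_t = np.concatenate([y, np.zeros(M)])
lam_t = np.linalg.solve(F_t.T @ F_t + alpha * np.eye(N + M), F_t.T @ y_t)

# Upper block is zero, lower block equals the original optimum.
print(np.allclose(lam_t[:N], 0), np.allclose(lam_t[N:], lam))  # True True
```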

III Quantum Singular Value Estimation

Quantum Singular Value Estimation (QSVE) can be viewed as an extension of Phase Estimation Kitaev (1995) from unitary to non-unitary matrices, and it is the primary subroutine of our quantum data fitting algorithm. We briefly state it in the following.

Given a matrix A which is stored in a classical binary tree data structure, an algorithm having quantum access to the data structure can create, in polylogarithmic time, the quantum state corresponding to each row of the matrix Kerenidis and Prakash (2016). Note also that if an element of A is a complex number, the binary tree simply stores its squared modulus in the corresponding leaf node.
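As a rough classical cartoon of this data structure (my own illustration, not the construction of Kerenidis and Prakash (2016)): each row can be organized as a binary tree whose leaves hold the squared magnitudes of the entries and whose internal nodes hold partial sums, so the squared row norm sits at the root; the quantum-accessible version additionally stores sign/phase information and supports the state preparation used by QSVE.

```python
import numpy as np

def row_tree(row):
    """Binary tree of partial sums of squared magnitudes for one matrix row.

    Leaves hold |A_ij|^2 and each internal node holds the sum of its children,
    so the root equals the squared norm of the row. This is only a classical
    sketch of the idea, not the quantum-accessible structure itself.
    """
    level = [abs(x) ** 2 for x in row]
    size = 1 << (len(level) - 1).bit_length()   # pad to a power of two
    level += [0.0] * (size - len(level))
    tree = [level]
    while len(level) > 1:
        level = [level[i] + level[i + 1] for i in range(0, len(level), 2)]
        tree.append(level)
    return tree                                  # tree[-1][0] is the squared row norm

A = np.array([[1.0, 2.0, 2.0], [0.5, -1.5, 0.0]])
print(row_tree(A[0])[-1][0], np.linalg.norm(A[0]) ** 2)  # both 9.0
```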

Theorem 1.

Quantum Singular Value Estimation Kerenidis and Prakash (2016): Let A be a matrix stored in the data structure presented above, and A = Σ_i σ_i u_i v_i† be its singular value decomposition. For a precision parameter ε > 0, there is a quantum algorithm that performs the mapping Σ_i β_i|v_i⟩ ↦ Σ_i β_i|v_i⟩|σ̃_i⟩ such that |σ̃_i − σ_i| ≤ ε‖A‖_F for all i, with probability at least 1 − 1/poly(N), in time O(polylog(N)/ε).

We see from Theorem 1 that the runtime of QSVE depends on the Frobenius norm ‖A‖_F (through the precision ε‖A‖_F), rather than on the sparsity s appearing in HHL. This dependence will also appear in our algorithm's runtime.

IV Quantum Data Fitting Algorithm

For a Hermitian matrix F with the spectral decomposition F = Σ_i λ_i u_i u_i†, its singular value decomposition is given by F = Σ_i |λ_i| v_i u_i†, where each left singular vector v_i is equal to ±u_i depending on the sign of λ_i; i.e., v_i = u_i if λ_i ≥ 0, and v_i = −u_i otherwise.
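A small numerical check of this relation (purely illustrative; the matrix below is a random symmetric example):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
F = (A + A.T) / 2                       # random real symmetric (hence Hermitian) matrix

eigvals, U = np.linalg.eigh(F)          # F = sum_i lambda_i u_i u_i^dagger
sigma = np.linalg.svd(F, compute_uv=False)

# The singular values are the absolute eigenvalues (up to ordering).
print(np.allclose(np.sort(sigma), np.sort(np.abs(eigvals))))  # True

# F u_i = sign(lambda_i) * |lambda_i| * u_i, so the matching left singular
# vector is +u_i or -u_i according to the sign of lambda_i.
for lam, u in zip(eigvals, U.T):
    print(np.allclose(F @ u, np.sign(lam) * abs(lam) * u))    # True
```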

As in Wossnig et al. (2018), the QSVE of Theorem 1 will serve as a key subroutine of our algorithm. The difference, however, is that we use the following lemma to recover the signs of the eigenvalues of a Hermitian matrix.

Lemma 2.

Let F be an N×N Hermitian matrix with the spectral decomposition F = Σ_i λ_i u_i u_i†. Let ‖F‖ be the spectral norm of F, and I the N-by-N identity matrix. For a precision parameter ε > 0, by performing the QSVE algorithm on the matrix F + ‖F‖I, we can transform Σ_i β_i|u_i⟩ into Σ_i β_i|u_i⟩|λ̃_i⟩ such that |λ̃_i − λ_i| ≤ ε for all i, with probability at least 1 − 1/poly(N), in time O(‖F + ‖F‖I‖_F polylog(N)/ε).

Proof.

The proof is quite straightforward. Since F has the spectral decomposition F = Σ_i λ_i u_i u_i†, we have F + ‖F‖I = Σ_i (λ_i + ‖F‖) u_i u_i†, where λ_i + ‖F‖ ≥ 0 for all i. By the definition of the spectral norm, the eigenvalues of F + ‖F‖I are all non-negative, meaning that F + ‖F‖I is a positive semi-definite matrix. Therefore, the singular value decomposition of F + ‖F‖I is the same as its spectral decomposition.

By performing QSVE on F + ‖F‖I with the precision parameter ε/‖F + ‖F‖I‖_F, we obtain an estimation λ̃_i + ‖F‖ of each eigenvalue λ_i + ‖F‖ such that |(λ̃_i + ‖F‖) − (λ_i + ‖F‖)| ≤ ε for all i, with probability at least 1 − 1/poly(N), in time O(‖F + ‖F‖I‖_F polylog(N)/ε). An estimation λ̃_i of the eigenvalue λ_i of the original matrix F is then obtained by subtracting ‖F‖. Finally, the estimation error can be bounded in the same way as for QSVE, because we have

|λ̃_i − λ_i| = |(λ̃_i + ‖F‖) − (λ_i + ‖F‖)| ≤ ε.    (5)

Now we bound ‖F + ‖F‖I‖_F in order to bound the time complexity:

‖F + ‖F‖I‖_F = √(Σ_i (λ_i + ‖F‖)²) ≤ √(2N‖F‖(‖F‖ + λ̄))    (6)
≤ 2√N ‖F‖,    (7)

where λ̄ = (1/N) Σ_i λ_i is the mean of the eigenvalues of F, Eq. (6) uses Σ_i λ_i² ≤ N‖F‖², and Eq. (7) follows from λ̄ ≤ ‖F‖. This completes the proof. ∎
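A classical toy version of the shift-and-subtract idea behind Lemma 2 (this illustrates only the linear algebra, not the quantum routine; the matrix is a random symmetric example): the singular values of F + ‖F‖I are the non-negative numbers λ_i + ‖F‖, so subtracting ‖F‖ recovers the signed eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))
F = (A + A.T) / 2                        # Hermitian test matrix with mixed-sign spectrum

spec_norm = np.linalg.norm(F, 2)         # spectral norm ||F||
shifted = F + spec_norm * np.eye(5)      # F + ||F|| I is positive semi-definite

# Its singular values equal lambda_i + ||F|| >= 0; subtracting ||F|| gives
# back the signed eigenvalues of F, which QSVE alone could not provide.
recovered = np.sort(np.linalg.svd(shifted, compute_uv=False) - spec_norm)
exact = np.sort(np.linalg.eigvalsh(F))
print(np.allclose(recovered, exact))     # True
```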

With this lemma, we propose our quantum data fitting algorithm as in the following theorem:

Theorem 3.

Let F be the N×N non-sparse Hermitian matrix described in the least squares fitting problem, F = Σ_i λ_i u_i u_i† its spectral decomposition, and κ its condition number. Assume that F + ‖F‖I is stored in the classical binary tree data structure as in Kerenidis and Prakash (2016). For a precision parameter ε > 0, Algorithm 1 outputs a quantum state |w⟩ such that ‖|w⟩ − |λ_opt⟩‖ ≤ ε in time O(κ²√N polylog(N)/(εκ)), where |λ_opt⟩ denotes the quantum state proportional to the optimal fit parameter λ_opt in Eq. (3).

Input: F + ‖F‖I and y stored in the classical binary tree data structure required by QSVE Kerenidis and Prakash (2016), the condition number κ (or an upper bound of it) of F, and precision ε.
Output: A quantum state |w⟩ which is proportional to the optimal fit parameter λ_opt, with error bounded by ε as measured by the Euclidean distance.
  1. Generate a value of the hyper-parameter α according to the log-uniform distribution.

  2. Create the quantum state |y⟩ = Σ_i β_i|u_i⟩ proportional to y, with the u_i's being the eigenvectors of F.

  3. Perform the QSVE subroutine for the matrix F + ‖F‖I, with a precision determined by ε and κ (as in Lemma 2), to obtain the state Σ_i β_i|u_i⟩|λ̃_i⟩.

  4. Add an auxiliary register, apply a rotation conditioned on the second register, and uncompute the QSVE subroutine to erase the second register, obtaining

    Σ_i β_i |u_i⟩ ( C h(λ̃_i)|1⟩ + √(1 − C²h(λ̃_i)²)|0⟩ ),    (8)

    where λ̃_i is the estimation of the eigenvalue λ_i of F, h is the rotation function defined in Eq. (10) below, and C (chosen so that C|h(λ̃_i)| ≤ 1 for all i) is a constant.

  5. Post-select on the auxiliary register being in the state |1⟩.

Algorithm 1 Quantum Data Fitting Algorithm for Non-Sparse Matrices
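To see what the algorithm computes at the level of linear algebra, the sketch below classically emulates its net effect under my reading of Eqs. (3) and (8) (with the rotation function h of Eq. (10) below): each eigencomponent of y is rescaled by λ̃/(λ̃² + α) using noisy eigenvalue estimates standing in for the QSVE output, and the result is renormalized as post-selection would do; it is then compared with the exact normalized ridge solution. All numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
N, alpha, eps = 6, 0.5, 1e-3
A = rng.standard_normal((N, N))
F = (A + A.T) / 2
F /= np.linalg.norm(F, 2)                # normalize so that ||F|| = 1
y = rng.standard_normal(N)

eigvals, U = np.linalg.eigh(F)
beta = U.T @ y                           # overlaps <u_i|y>

# Noisy eigenvalue estimates within +/- eps, standing in for QSVE + sign recovery.
est = eigvals + rng.uniform(-eps, eps, N)

h = est / (est ** 2 + alpha)             # rotation function lambda/(lambda^2 + alpha)
w = U @ (beta * h)
w /= np.linalg.norm(w)                   # post-selection renormalizes the state

# Exact normalized ridge solution (F^2 + alpha I)^{-1} F y for Hermitian F.
w_exact = np.linalg.solve(F @ F + alpha * np.eye(N), F @ y)
w_exact /= np.linalg.norm(w_exact)
print(np.linalg.norm(w - w_exact))       # small, of the order of eps
```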
Proof.

The proof mainly contains a correctness analysis and a complexity analysis. First we give the proof of correctness, i.e., that ‖|w⟩ − |λ_opt⟩‖ ≤ ε.

From Algorithm 1, we observe, after post-selection, that

|w⟩ = Σ_i β_i h(λ̃_i)|u_i⟩ / ‖Σ_i β_i h(λ̃_i)|u_i⟩‖,    (9)

where β_i = ⟨u_i|y⟩, the constant C cancels after normalization, and h is defined as follows:

h(λ) = λ / (λ² + α).    (10)

Here, we take one value of α as an example (other values of α are treated similarly). The ideal state should be

|λ_opt⟩ = Σ_i β_i h(λ_i)|u_i⟩ / ‖Σ_i β_i h(λ_i)|u_i⟩‖,    (11)

where the λ_i are the exact eigenvalues of F. Therefore, we have

(12)
(13)
Figure 1: The rotation function h(λ) versus λ for (a) small, (b) medium, and (c) large values of the hyper-parameter α.

We now bound the quantities appearing in Eqs. (12) and (13) via the following lemma:

Lemma 4.

Let h be defined as in Eq. (10). Then

(14)
Proof.

From the definition of h, the estimate |λ̃_i − λ_i| ≤ ε, and the bounds on the eigenvalues λ_i for all i, we have

(15)
(16)
(17)
(18)

where Eqs. (15) and (16) follow from the bounds stated above. This completes the proof of Lemma 4. ∎

From Lemma 4, we obtain that, for all i,

(19)

Thus

(20)

And then

(21)

By substituting (20) and (21) into (13), we have

(22)

Next, we analyze the time complexity. From Lemma 2, we know that, in Algorithm 1, the QSVE subroutine runs in time

(23)

On the other hand, we consider the success probability of the post-selection process. In order to bound the maximal number of iterations, we need to compute the minimum of the rotation function h, which is related to the hyper-parameter α. The graph of h as a function of λ is illustrated in Figure 1 (in general, α is taken within a moderate range; this is reasonable in machine learning because too small a value of α leads to a negligible effect of regularization, while too large a value results in the loss of useful information about the original problem, i.e., the so-called under-fitting Svergun (1992)), from which we see that, over the relevant range of eigenvalues, the minimum of h is given by

(24)

Hence, using amplitude amplification Brassard et al. (2002), the number of iterations can be bounded in terms of this minimum.

Furthermore, following common practice in machine learning, α is usually chosen on a logarithmic scale, e.g., 0.01, 0.1, 1, … Montavon et al. (1998). Thus we take α randomly according to a log-uniform distribution over its domain (Line 1 of Algorithm 1). We estimate the number of iterations as

(25)

where the random variable induced by the log-uniform choice of α obeys a uniform distribution. Combining this with (23), the total time complexity stated in Theorem 3 follows. This completes the proof of Theorem 3. ∎
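To make the sampling of α and the iteration estimate concrete, the sketch below draws α log-uniformly from an illustrative range, evaluates the rotation amplitude h(λ) = λ/(λ² + α) over eigenvalue magnitudes in [1/κ, 1] (assuming ‖F‖ = 1), and takes the ratio of its maximum to its minimum as a stand-in for the amplitude-amplification iteration count; the range [0.01, 1] and the grid are my own choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
kappa = 20.0

# Log-uniform draw of alpha over an illustrative range [0.01, 1].
alpha = 10 ** rng.uniform(np.log10(0.01), np.log10(1.0))

# Rotation amplitude h(lambda) = lambda / (lambda^2 + alpha) over eigenvalue
# magnitudes in [1/kappa, 1], assuming the matrix is normalized so ||F|| = 1.
lam = np.linspace(1 / kappa, 1.0, 10_000)
h = lam / (lam ** 2 + alpha)

C = 1.0 / h.max()                 # keep the conditional rotation amplitude at most 1
iterations = 1.0 / (C * h.min())  # ~ number of amplitude-amplification rounds
print(f"alpha = {alpha:.3f}, estimated iterations ~ {iterations:.1f}")
```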

V Further Discussions and Conclusions

In this paper, we proposed a quantum data fitting algorithm for the regularized least squares fitting problem with non-sparse matrices, which achieves a runtime of O(κ²√N polylog(N)/(εκ)); the κ-dependence of this bound reflects the random choice of the hyper-parameter α according to the log-uniform distribution in Algorithm 1. As the hyper-parameter is usually set empirically in machine learning, we let our algorithm generate it automatically. Of course, if one wants to set it manually, one can simply modify our algorithm by moving the first line into the Input.

The technique proposed in this paper could also be applied to the HHL algorithm, which would then have the same time complexity as WZP Wossnig et al. (2018). It is worth noting that our algorithm's running time is actually related to the mean λ̄ of the eigenvalues of F; see Eq. (6). If λ̄ is close to −‖F‖, or all the eigenvalues are negative, then the running time is relatively small, e.g., perhaps even logarithmic in the matrix dimension N. If λ̄ = 0, as in the case of F̃ in Eq. (4), or λ̄ > 0, then the running time scales as the square root of the matrix dimension, as stated in this paper. However, on the whole, the time complexity of our algorithm is still polynomial in the dimension of the data matrix, because it is derived from the Frobenius norm, or, more precisely, from the binary tree data structure Kerenidis and Prakash (2016). Whether there exists a QDF algorithm that runs in time logarithmic in the dimension for non-sparse matrices remains to be explored.
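The dependence on the eigenvalue mean can be checked directly from Eq. (6), since ‖F + ‖F‖I‖_F² = Σ_i (λ_i + ‖F‖)²: a spectrum concentrated near −‖F‖ makes this norm small, while a zero-mean spectrum makes it of order √N·‖F‖. A small illustrative computation (the spectra below are made up):

```python
import numpy as np

def shifted_frobenius(eigvals):
    """||F + ||F|| I||_F computed directly from the spectrum of a Hermitian F."""
    s = np.max(np.abs(eigvals))                  # spectral norm ||F||
    return np.sqrt(np.sum((eigvals + s) ** 2))

N = 1000
rng = np.random.default_rng(6)
balanced = rng.uniform(-1.0, 1.0, N)             # mean eigenvalue near 0
negative = -rng.uniform(0.9, 1.0, N)             # all eigenvalues close to -||F||

print(shifted_frobenius(balanced), np.sqrt(2 * N))  # comparable to sqrt(2N) * ||F||
print(shifted_frobenius(negative))                  # much smaller
```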

Acknowledgements.
We thank Prof. Sanjiang Li for helpful discussions and proofreading the manuscript. G. Li acknowledges the financial support from China Scholarship Council (No. 201806070139). This work was partly supported by the Australian Research Council (Grant No: DP180100691) and the Baidu-UTS collaborative project “AI meets Quantum: Quantum algorithms for knowledge representation and learning”.

References