Single-pass randomized QLP decomposition for low-rank approximation

11/13/2020
by Huan Ren, et al.
Xiamen University
NetEase, Inc

The QLP decomposition is an effective algorithm for approximating the singular value decomposition (SVD) in numerical linear algebra. In this paper, we propose single-pass randomized QLP decomposition algorithms for computing low-rank matrix approximations. Compared with the deterministic QLP decomposition, the complexity of the proposed algorithms does not increase significantly, and the input matrix needs to be accessed only once. Therefore, our algorithms are well suited to large matrices stored out of core memory or generated from streaming data. In the error analysis, we give bounds on the matrix approximation error and the singular value approximation error. Numerical experiments are also reported to verify our results.

1 Introduction

Let $A \in \mathbb{R}^{m\times n}$ be a data matrix. The low-rank approximation of $A$ is to compute two low-rank matrices $B$ and $C$ such that

$$A \approx BC, \qquad (1.1)$$

where $B \in \mathbb{R}^{m\times k}$, $C \in \mathbb{R}^{k\times n}$, and the rank $k \ll \min\{m,n\}$ is given to us in advance.

In the era of big data, the matrices we deal with are often extremely large. In this case, a low-rank approximation of the form (1.1) can greatly reduce the storage of the data matrix: we only need to store $B$ and $C$ instead of $A$. Low-rank approximation is one of the essential tools in scientific computing, including principal component analysis [1, 2, 3, 4], data analysis [5, 6], and fast approximate algorithms for PDEs [7, 8, 9].
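For instance (the numbers here are ours, purely for illustration), storing the rank-$k$ factors of (1.1) costs $k(m+n)$ entries instead of $mn$:

$$\underbrace{mk}_{\text{storage of } B} + \underbrace{kn}_{\text{storage of } C} = k(m+n) \ll mn, \qquad \text{e.g., } m = n = 10^5,\ k = 10^2:\ 2\times 10^7 \text{ versus } 10^{10} \text{ entries}.$$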

For a general matrix $A \in \mathbb{R}^{m\times n}$, we usually consider a rank-$k$ approximate singular value decomposition (SVD), i.e.,

$$A \approx A_k = U_k \Sigma_k V_k^T, \qquad (1.2)$$

where $U_k = [u_1, \dots, u_k] \in \mathbb{R}^{m\times k}$, $V_k = [v_1, \dots, v_k] \in \mathbb{R}^{n\times k}$, and $\Sigma_k = \operatorname{diag}(\sigma_1, \dots, \sigma_k)$ with $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_k \ge 0$, and $u_i$ and $v_i$ are the left and right singular vectors corresponding to $\sigma_i$, respectively. Such a low-rank approximation is optimal, as stated in the following theorem:

Theorem 1.1.

[10, 11] Given $A \in \mathbb{R}^{m\times n}$, for any integer $k$ with $1 \le k \le \operatorname{rank}(A)$, define $A_k = \sum_{i=1}^{k} \sigma_i u_i v_i^T$. Then

$$\min_{\operatorname{rank}(B) \le k} \|A - B\|_2 = \|A - A_k\|_2 = \sigma_{k+1},$$

and

$$\min_{\operatorname{rank}(B) \le k} \|A - B\|_F = \|A - A_k\|_F = \Big(\sum_{j > k} \sigma_j^2\Big)^{1/2}.$$

Theorem 1.1 shows that the rank-$k$ truncated SVD provides the smallest error among all rank-$k$ approximations of $A$. Therefore, the truncated SVD is the best low-rank approximation with a given fixed rank. However, computing the SVD of a large matrix is very costly. We therefore seek an algorithm for computing a low-rank approximation of a large matrix whose quality is close to that provided by the SVD, but at a much lower cost.
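As a quick numerical illustration (a minimal MATLAB sketch of our own, not taken from the paper), the 2-norm error of the rank-$k$ truncated SVD equals $\sigma_{k+1}$, as Theorem 1.1 states:

% Check Theorem 1.1 on a random test matrix (all names here are ours).
m = 200; n = 100; k = 10;
A = randn(m, n);
[U, S, V] = svd(A, 'econ');
Ak = U(:, 1:k) * S(1:k, 1:k) * V(:, 1:k)';   % rank-k truncated SVD
fprintf('||A - A_k||_2 = %.4e,  sigma_{k+1} = %.4e\n', norm(A - Ak), S(k+1, k+1));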

The QLP decomposition was proposed by Stewart in 1999 [12] and can be regarded as an economical method for computing an approximate SVD. In fact, the QLP decomposition amounts to two consecutive QR decompositions with column pivoting (QRCP): a QRCP is performed on the data matrix $A$, and a second QRCP is performed on the transpose of the resulting triangular factor, in the sense that

$$A \Pi_0 = Q_0 R, \qquad R^T \Pi_1 = Q_1 L^T, \qquad (1.3)$$

where $\Pi_0$ and $\Pi_1$ are two permutation matrices, $Q_0$ and $Q_1$ are two orthogonal matrices, and $L$ is a lower triangular matrix. The diagonal elements of $R$ are called the $R$-values and the diagonal elements of $L$ are called the $L$-values. Define $Q = Q_0 \Pi_1$ and $P = \Pi_0 Q_1$; then $A = Q L P^T$.

Huckaby and Chan [13] showed that the $L$-values approximate the singular values of the original matrix with considerable fidelity. The truncated QLP decomposition of $A$ can be expressed as follows:

$$A \approx Q_k L_k P_k^T, \qquad (1.4)$$

where both $Q_k \in \mathbb{R}^{m\times k}$ and $P_k \in \mathbb{R}^{n\times k}$ have orthonormal columns and $L_k \in \mathbb{R}^{k\times k}$ is lower triangular. The truncated QLP decomposition (1.4) can also be regarded as a low-rank approximation of $A$, and it is natural to expect it to perform comparably to the truncated SVD.

In recent years, randomized algorithms for low-rank approximation have attracted considerable attention [14, 15, 16, 17]. Compared with deterministic algorithms, randomized algorithms for low-rank approximation have the advantages of lower complexity, faster running speed, and easier implementation. However, these randomized algorithms need to access the original matrix at least twice, which is expensive for large matrices stored out of core memory or generated from streaming data.

As we know, the cost of data communication is often much higher than that of the arithmetic itself. In order to reduce the cost of data communication, some single-pass algorithms have been proposed [17, 18, 19, 20, 21, 22]. In this paper, based on the single-pass idea, we extend the work of Wu and Xiang [23] to single-pass randomized QLP decompositions for computing low-rank approximations; two randomized algorithms are provided. We also give bounds on the matrix approximation error and the singular value approximation error for the proposed randomized algorithms, which hold with high probability.

The rest of this paper is organized as follows. In Section 2 we give some preliminary results related to subgaussian random matrices and some basic QLP decomposition algorithms. In Section 3 we propose two single-pass randomized QLP decomposition algorithms and present the corresponding complexity analysis. In Section 4 we give the error analysis of the proposed randomized algorithms, including matrix approximation error and singular value approximation error. Finally, some numerical examples and concluding remarks are given in Section 5 and Section 6, respectively.

2 Preliminaries

In this section, we review some preliminary lemmas on subgaussian random matrices and some basic QLP decomposition algorithms.

In this paper, we use the following notation. Let $\mathbb{R}^{m\times n}$ be the set of all $m\times n$ real matrices. For any $A \in \mathbb{R}^{m\times n}$, let $\sigma_1(A) \ge \sigma_2(A) \ge \cdots \ge \sigma_{\min\{m,n\}}(A) \ge 0$ denote the singular values of $A$. Denote by $\|\cdot\|_2$ and $\|\cdot\|_F$ the matrix 2-norm and the matrix Frobenius norm, respectively. Let $A^T$ and $A^{\dagger}$ denote the transpose and the Moore–Penrose inverse of a matrix $A$, respectively. In addition, $\mathbb{E}(\cdot)$ denotes the expectation of a random variable and $\mathbb{P}(\cdot)$ denotes the probability of a random event.

2.1 Subgaussian matrix

In this subsection, we recall some preliminary lemmas on subgaussian random matrices.

Definition 2.1.

A random variable $X$ is called subgaussian if there exist constants $C, c > 0$ such that $\mathbb{P}(|X| > t) \le C e^{-ct^2}$ for all $t > 0$.

Definition 2.2.

[25] Assume and . Let be the set of all random matrices whose entries are centered, independent and identically distributed (i.i.d.) real-valued random variables satisfying the following conditions:

  1. Moments: ;

  2. Norm: ;

  3. Variance: .

Remark 2.1.

It is easy to see that subgaussian matrices and Gaussian matrices are random matrices as defined in Definition 2.2. Specifically, if the matrix is subgaussian, then and ; if the matrix is a standard Gaussian random matrix, then and .

The following lemma provides a lower bound on the smallest singular value of a random matrix, which holds with high probability.

Lemma 2.1.

[25] Let , . Let with , where . Then, there exist two positive constants such that

(2.1)
Remark 2.2.

From [25], the exact values of the constants in Lemma 2.1 are

where , , and .

2.2 Randomized QLP decomposition

In this subsection, we recall the QLP decomposition and the randomized QLP decomposition. We first recall the QLP decomposition [12].

Input: $A \in \mathbb{R}^{m\times n}$.
Output: Matrices $Q$, $L$, $P$ such that $A = Q L P^T$, where $Q$ is a column orthogonal matrix, $P$ is an orthogonal matrix and $L$ is a lower triangular matrix.

function QLP

1: Compute the QRCP $A \Pi_0 = Q_0 R$.
2: Compute the QRCP $R^T \Pi_1 = Q_1 L^T$.
3: $Q = Q_0 \Pi_1$, $P = \Pi_0 Q_1$.
Algorithm 1 The QLP decomposition [12]
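A minimal MATLAB rendering of Algorithm 1 might look as follows (a sketch of our own; the variable names are not the paper's):

function [Q, L, P] = qlp(A)
% QLP decomposition via two pivoted QR factorizations, cf. (1.3).
    [Q0, R, Pi0] = qr(A);      % first QRCP:  A*Pi0 = Q0*R
    [Q1, Lt, Pi1] = qr(R');    % second QRCP: R'*Pi1 = Q1*Lt
    L = Lt';                   % lower triangular factor
    Q = Q0 * Pi1;              % so that A = Q*L*P'
    P = Pi0 * Q1;
end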

The MATLAB pseudo-code of the rank-$k$ randomized QLP decomposition algorithm is described in Algorithm 2 [23].

Input: , target rank: , oversampling parameter: , and number of columns sampled: .
Output: Matrices such that , where is a column orthogonal matrix,
is an orthogonal matrix and is a lower triangular matrix.

1:.
2:.
3:.
4:.
5:.
6:.
Algorithm 2 Randomized QLP decomposition (RQLP) [23]
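For concreteness, here is a rough MATLAB sketch of a randomized QLP along these lines (our own reading, consistent with the discussion in Subsection 3.1; see [23] for the precise steps of Algorithm 2):

function [Q, L, P] = rqlp(A, k, p)
% Randomized QLP sketch: range sketch of A, then QLP of the small matrix
% B = Qy'*A. Uses the function qlp given after Algorithm 1.
    n = size(A, 2);
    Omega = randn(n, k + p);     % Gaussian test matrix with k + p columns
    Y = A * Omega;               % first pass over A
    [Qy, ~] = qr(Y, 0);          % orthonormal basis of the sketch
    B = Qy' * A;                 % second pass over A
    [Qb, L, P] = qlp(B);         % QLP of the small matrix B
    Q = Qy * Qb;                 % A is approximately Q*L*P'
end

The line B = Qy' * A is the second access to A; the single-pass variants in Section 3 replace exactly this step.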

3 Single-pass randomized QLP decomposition

In this section, we present two single-pass randomized QLP decomposition algorithms.

3.1 Regular single-pass randomized QLP decomposition

In this subsection, we give a regular single-pass randomized QLP decomposition algorithm for computing a low-rank approximation to a data matrix $A \in \mathbb{R}^{m\times n}$. To calculate the low-rank approximation of $A$, we first construct a matrix $Q$ with a small number of orthonormal columns such that $A \approx Q Q^T A$. We observe that, in Algorithm 2, $Q$ is an orthonormal basis of the sketch $Y = A\Omega$ and $Q Q^T$ is an approximate orthogonal projector onto the range of $A$. Thus

$$A \approx Q Q^T A. \qquad (3.1)$$

A single-pass randomized algorithm must access each entry of the input matrix only once. To achieve this, we wish to replace the matrix $B = Q^T A$ in Step 4 of Algorithm 2 by another expression that does not involve $A$. We note that, for the matrix $Q$ generated by Algorithm 2, we have $A \approx Q Q^T A$. Then, for a second random matrix $\Psi \in \mathbb{R}^{\ell_r \times m}$, we form the row sketch $W = \Psi A$ in the same pass over $A$. Premultiplying $\Psi$ on both sides of $A \approx Q Q^T A$ we get

$$W = \Psi A \approx (\Psi Q)(Q^T A).$$

Therefore, the matrix $B = Q^T A$ can be approximately expressed as

$$B \approx (\Psi Q)^{\dagger} W.$$

Next, we give the single-pass randomized QLP decomposition algorithm, which is stated in Algorithm 3.

Input: , target rank: , oversampling parameter: , number of columns sampled: , and number of rows sampled: .
Output: Matrices such that , where is a column orthogonal matrix,
is an orthogonal matrix and is a lower triangular matrix.

1:, .
2:, .
3:.
4:.
5:.
6:.
Algorithm 3 Single-pass randomized QLP decomposition (SPRQLP)
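A rough MATLAB sketch of this single-pass construction (our own reading of the steps above; the names are not the paper's) is:

function [Q, L, P] = sprqlp(A, k, p, lr)
% Single-pass randomized QLP sketch: the column sketch Y = A*Omega and the
% row sketch W = Psi*A can both be accumulated in one pass over A; after
% that, A is never touched again. Uses the function qlp given after
% Algorithm 1.
    [m, n] = size(A);
    Omega = randn(n, k + p);     % column test matrix
    Psi = randn(lr, m);          % row test matrix, lr >= k + p
    Y = A * Omega;               % column sketch
    W = Psi * A;                 % row sketch (same pass over A)
    [Qy, ~] = qr(Y, 0);          % orthonormal basis for range(Y)
    B = pinv(Psi * Qy) * W;      % B ~ Qy'*A without accessing A again
    [Qb, L, P] = qlp(B);         % QLP of the small matrix B
    Q = Qy * Qb;                 % A is approximately Q*L*P'
end

Compared with the rqlp sketch above, the only change is that Qy'*A is replaced by pinv(Psi*Qy)*W, so the matrix A is not revisited after the sketching pass.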

On the complexity of Algorithm 3, we have the following remarks.

  • Step 1: Generating random matrices takes operations;

  • Step 2: Computing and takes operations;

  • Step 3: Computing unpivoted QR decomposition of of size , takes operations;

  • Step 4: Computing takes operations, computing Moore-Penrose inverse of takes operations, and multiplying it by takes operations;

  • Step 5: Computing column pivoting QR decomposition of of size , takes operations;

  • Step 6: Computing column pivoting QR decomposition of of size , takes operations;

  • Step 7: Computing takes operations, computing takes operations.

The total complexity of Algorithm 3 is

It is easy to see that the total complexity of Algorithm 2 is

We note that and . Hence, Algorithm 2 has slightly lower arithmetic complexity than Algorithm 3. However, the cost of data communication is often more expensive than the arithmetic itself; in particular, when the data matrix is very large and stored out of core memory, the cost of data communication dominates. Since Algorithm 2 needs to access the data matrix twice, its total cost may be much higher than that of Algorithm 3.

3.2 Subspace-orbit single-pass randomized QLP decomposition

In this subsection, we consider replacing the Gaussian random matrix $\Psi$ with a sketch of the input matrix $A$, i.e., choosing $\Psi = (A\Omega)^T = Y^T$. As in [14], we propose a subspace-orbit single-pass randomized QLP decomposition algorithm (SORQLP) for computing a low-rank approximation of $A$.

From the analysis in Subsection 3.1, for the matrices $Q$ and $Y = A\Omega$ generated by Algorithm 2, we have $A \approx Q Q^T A$. Premultiplying on both sides of $A \approx Q Q^T A$ by $Y^T$ yields

$$Y^T A \approx (Y^T Q)(Q^T A).$$

Thus,

$$B = Q^T A \approx (Y^T Q)^{\dagger} (Y^T A).$$

The SORQLP method is described in Algorithm 4.

Input: , target rank: , oversampling parameter: , number of columns sampled: .
Output: Matrices such that , where is a column orthogonal matrix,
is an orthogonal matrix and is a lower triangular matrix.

1:.
2:, .
3:.
4:.
5:.
6:.
Algorithm 4 Subspace-orbit single-pass randomized QLP decomposition (SORQLP)
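A rough MATLAB sketch of this variant, assuming (this is our reading of the description above) that the row sketch matrix is chosen as $\Psi = (A\Omega)^T = Y^T$, i.e., a sketch of $A$ itself rather than an independent Gaussian matrix:

function [Q, L, P] = sorqlp(A, k, p)
% Subspace-orbit single-pass randomized QLP sketch (our reading): the row
% sketch is built from Y = A*Omega itself. Uses the function qlp given
% after Algorithm 1.
    n = size(A, 2);
    Omega = randn(n, k + p);     % Gaussian test matrix
    Y = A * Omega;               % column sketch
    W = Y' * A;                  % row sketch built from the sketch Y
    [Qy, ~] = qr(Y, 0);          % orthonormal basis for range(Y)
    B = pinv(Y' * Qy) * W;       % B ~ Qy'*A, using only the sketches
    [Qb, L, P] = qlp(B);         % QLP of the small matrix B
    Q = Qy * Qb;                 % A is approximately Q*L*P'
end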

We have the following remarks on the computational complexity of Algorithm 4.

  • Step 1: Generating random matrices takes operations;

  • Step 2: Computing and takes operations;

  • Step 3: Computing unpivoted QR decomposition of of size , takes operations;

  • Step 4: Computing takes operations, computing Moore-Penrose inverse of takes operations, and multiplying it by takes operations;

  • Step 5: Computing column pivoting QR decomposition of of size , takes operations;

  • Step 6: Computing column pivoting QR decomposition of of size , takes operations;

  • Step 7: Computing takes operations, computing takes operations.

Therefore, the total complexity of Algorithm 4 is

which is of the same order of magnitude as the complexity of Algorithm 2. Similarly, once the cost of data communication is taken into account, the total cost of Algorithm 4 is much lower than that of Algorithm 2.

4 Error analysis

In this section, we evaluate the performance of the two proposed single-pass randomized QLP decomposition algorithms in terms of the matrix approximation error and the singular value approximation error.

In what follows, we assume that all Gaussian random matrices have full rank. We first recall some necessary lemmas.

Lemma 4.1.

[15] Let and let and be positive integers such that . If is a standard Gaussian random matrix, then there exists a matrix such that

(4.1)

and

(4.2)

with probability not less than , where the constants , and are defined as in Lemma 2.1.

Lemma 4.2.

[16] Let , whose SVD is given by

(4.3)

where , and are two orthogonal matrices. Suppose , , and is a standard Gaussian random matrix. Let the reduced QR decomposition of

(4.4)

where has orthonormal columns and is upper triangular. Let

(4.5)

If has full row rank, then

(4.6)
(4.7)
Lemma 4.3.

[16] Under the same assumptions of Lemma 4.2, for any , we have

with probability not less than , where .

The following lemmas provide some inequalities on singular values, which are helpful for proving the main theorem on the singular value approximation error of the proposed algorithms.

Lemma 4.4.

([26, Theorem 3.3.16]) Let . Then we have

for .

Lemma 4.5.

[10] Let . Suppose is a matrix with orthonormal columns. Then

for .

Lemma 4.6.

[16] Let admit the SVD as in (4.3). Let be a standard Gaussian matrix. Let be such that and partition . Let with being a matrix with orthonormal columns. If the matrix has full row rank, then

for .

4.1 Error analysis of Algorithm 3

4.1.1 Matrix approximation error analysis

The following theorem provides bounds for the matrix approximation error of Algorithm 3 in the 2-norm and the Frobenius norm.

Theorem 4.1.

Let and let be target rank. Suppose and are such that and . Let , , and be generated by Algorithm 3. Then, for any , we have

(4.8)

with probability not less than and

(4.9)

with probability not less than , where

Here, and are defined as in Definition 2.2 and Remarks 2.1 and 2.2.

Proof.

Firstly, we consider the matrix approximation error in the 2-norm. From Algorithm 3 we have

(4.10)

The second term of the right hand side of (4.10) is reduced to

(4.11)

where the first equality follows from the fact that has full column rank and thus since is a Gaussian random matrix. Thus,

(4.12)

For the first term of the right hand side of (4.10), by using the triangle inequality, there exists a matrix such that

(4.13)

For the first term of the right hand side of (4.13), we have

(4.14)

For the second term of the right hand side of (4.13), we have

(4.15)

By substituting (4.14) and (4.15) into (4.13), we obtain

By hypothesis, . Using Lemma 4.1 we have

(4.16)

with probability not less than . Furthermore,

(4.17)

We already know that is a Gaussian random matrix. By hypothesis, . Thus, by Lemma 2.1 we have

(4.18)

with probability not less than . By Definition 2.2, we have

(4.19)

with probability not less than . Substituting (4.18) and (4.19) into (4.17) yields

(4.20)

with probability not less than . Plugging (4.16) and (4.20) into (4.12) gives rise to

with probability not less than .

Next, we consider the matrix approximation error in the Frobenius norm. By using an error analysis similar to that in the 2-norm, we have

(4.21)

The second term of the right hand side of (4.21) is reduced to

where the first equality follows from the fact that the Gaussian matrix has full column rank and thus . Thus,

(4.22)

Let be defined by (4.3) and

(4.23)

By Lemma 4.2, we get

(4.24)

From Lemma 4.3 and (4.20) we obtain, for any ,

(4.25)

with probability not less than . ∎

Remark 4.1.

In Theorem 4.1, we require that and . When with and , similar bounds on the matrix approximation error can be derived by using the results in [24].

Remark 4.2.

Using (4.7) in Lemma 4.2, Lemma 4.3, (4.12), and (4.20), for , , and generated by Algorithm 3, we have

(4.26)

with probability not less than .

Remark 4.3.

In fact, in Theorem 4.1, it is necessary to assume that the test matrix has full row rank as in [17, Theorem 9.1]. In this case, we have since .

4.1.2 Singular value approximation error analysis

We first give some lemmas on the error bounds for singular values of the QLP decomposition for a matrix . As noted in [12], for the QLP decomposition, the pivoting in the first QR decomposition is essential while the pivoting of the second QR decomposition is only necessary to avoid “certain contrived counterexamples”. Therefore, in order to simplify the analysis, we assume that there is no pivoting in the second QR decomposition of the QLP decomposition for the matrix generated by Algorithm 3.

The following lemma gives a bound for the maximum singular value approximation error.

Lemma 4.7.

[13] Let and