The joint bidiagonalization process with partial reorthogonalization

01/10/2020
by Haibo Li, et al.
Tsinghua University

The joint bidiagonalization (JBD) process is a useful algorithm for the computation of the generalized singular value decomposition (GSVD) of a matrix pair {A, L}. However, it suffers from rounding errors, which cause the Lanczos vectors to lose their mutual orthogonality. In order to maintain some level of orthogonality, we present a semiorthogonalization strategy. Our rounding error analysis shows that the JBD process with the semiorthogonalization strategy can ensure that the convergence of the computed quantities is not affected by rounding errors and that the final accuracy is high enough. Based on the semiorthogonalization strategy, we develop the joint bidiagonalization process with partial reorthogonalization (JBDPRO). In the JBDPRO algorithm, reorthogonalizations occur only when necessary, which saves a large amount of reorthogonalization work compared with the full reorthogonalization method. Numerical experiments illustrate our theory and algorithm.


1 Introduction

The joint bidiagonalization (JBD) process is a useful algorithm for computing some extreme generalized singular values and vectors of a large sparse or structured matrix pair {A, L} Paige1981 ; Van1976 , as well as for solving large-scale discrete ill-posed problems with general form Tikhonov regularization Hansen1998 ; Hansen2010 . It was first proposed by Zha Zha1996 , who iteratively reduces the matrix pair to an upper or lower bidiagonal form, and was later adapted by Kilmer et al. Kilmer2007 , who jointly bidiagonalize the pair to lower and upper bidiagonal forms.

Supposing $A\in\mathbb{R}^{m\times n}$ and $L\in\mathbb{R}^{p\times n}$, consider the compact QR factorization of the stacked matrix:

$$\begin{pmatrix} A \\ L \end{pmatrix} = QR = \begin{pmatrix} Q_A \\ Q_L \end{pmatrix} R, \qquad (1.1)$$

where $Q$ is column orthonormal and $R\in\mathbb{R}^{n\times n}$ is upper triangular. We partition $Q$ such that $Q_A\in\mathbb{R}^{m\times n}$ and $Q_L\in\mathbb{R}^{p\times n}$, so we have $A = Q_A R$ and $L = Q_L R$. Applying the BIDIAG-1 procedure and BIDIAG-2 procedure Paige1982 , which correspond to the lower and upper Lanczos bidiagonalization processes, to $Q_A$ and $Q_L$, respectively, we can reduce $Q_A$ and $Q_L$ to the following lower and upper bidiagonal matrices, respectively:

$$B_k=\begin{pmatrix} \alpha_1 & & & \\ \beta_2 & \alpha_2 & & \\ & \ddots & \ddots & \\ & & \beta_k & \alpha_k \\ & & & \beta_{k+1} \end{pmatrix}\in\mathbb{R}^{(k+1)\times k},\qquad \widehat{B}_k=\begin{pmatrix} \widehat{\alpha}_1 & \widehat{\beta}_1 & & \\ & \widehat{\alpha}_2 & \ddots & \\ & & \ddots & \widehat{\beta}_{k-1} \\ & & & \widehat{\alpha}_k \end{pmatrix}\in\mathbb{R}^{k\times k}. \qquad (1.2)$$

This process has computed four column orthonormal matrices, that is

$$U_{k+1}=(u_1,\ldots,u_{k+1})\in\mathbb{R}^{m\times(k+1)},\qquad \tilde{V}_k=(\tilde{v}_1,\ldots,\tilde{v}_k)\in\mathbb{R}^{n\times k}, \qquad (1.3)$$

computed by the BIDIAG-1 procedure, and

$$\widehat{U}_k=(\hat{u}_1,\ldots,\hat{u}_k)\in\mathbb{R}^{p\times k},\qquad \widehat{V}_k=(\hat{v}_1,\ldots,\hat{v}_k)\in\mathbb{R}^{n\times k}, \qquad (1.4)$$

computed by the BIDIAG-2 procedure.

In order to join the BIDIAG-1 and BIDIAG-2 procedures, the starting vector of BIDIAG-2 is chosen to be $\hat{v}_1 = \tilde{v}_1$ and the upper bidiagonalization process continues. The Lanczos vector $\hat{u}_i$ and the elements of $\widehat{B}_k$ can be computed by

(1.5)

If $A$ and $L$ are large-scale matrices, the QR factorization (1.1) can be avoided by solving, at each iteration, a least squares problem with $(A^T, L^T)^T$ as the coefficient matrix iteratively Bjorck1996 ; Paige1982 . With the above adjustments, we obtain the JBD process, which is described in detail in Section 2. The $k$-step JBD process explicitly computes three orthonormal matrices $U_{k+1}$, $\tilde{V}_k$, $\widehat{U}_k$ and two bidiagonal matrices $B_k$, $\widehat{B}_k$, while the other two orthonormal matrices can be obtained from $\tilde{V}_k$ implicitly.

The JBD process can be used to approximate a few of the largest or smallest generalized singular values and corresponding vectors of $\{A, L\}$ iteratively, by projecting the original large-scale problem onto a reduced small-scale problem. Furthermore, Kilmer et al. Kilmer2007 presented an iterative method based on the JBD process to solve ill-posed problems with general form Tikhonov regularization Hansen1989 ; Hansen1998 ; Hansen2010 . The main idea is to use the projection method to solve a series of small-scale general form Tikhonov regularization problems which lie in low dimensional subspaces. Jia and Yang JiaYang2018 analyzed this iterative regularization method and presented a new iterative regularization algorithm.

In exact arithmetic, the $k$-step JBD algorithm is equivalent to the combination of the lower and upper Lanczos bidiagonalization processes. The lower Lanczos bidiagonalization computes two orthonormal matrices $U_{k+1}$, $\tilde{V}_k$ and a lower bidiagonal matrix $B_k$, while the upper Lanczos bidiagonalization computes two orthonormal matrices $\widehat{U}_k$, $\widehat{V}_k$ and an upper bidiagonal matrix $\widehat{B}_k$. In finite precision arithmetic, however, this equivalence no longer holds. Supposing that the roundoff unit is $\epsilon$, it was shown in Li2019 that the $k$-step process of computing $\widehat{U}_k$, $\widehat{V}_k$ and $\widehat{B}_k$ is equivalent to the upper Lanczos bidiagonalization of $Q_L$ within an error proportional to $\epsilon$, while the $k$-step process of computing $U_{k+1}$, $\tilde{V}_k$ and $B_k$ is equivalent to the lower Lanczos bidiagonalization of $Q_A$ within an error proportional to $\epsilon$.

Due to the influence of rounding errors, the orthogonality among the Lanczos vectors computed by the JBD process will be lost. This phenomenon was first observed in the symmetric Lanczos process Lanczos1950 , which is often used to compute some eigenvalues and eigenvectors of a symmetric matrix Paige1971 ; Parlett1998 . The loss of orthogonality among Lanczos vectors leads to some converged Ritz values being additional copies of true eigenvalues, as well as to a delay of convergence Paige1971 ; Meurant2006 . Paige was the first to analyze the finite precision behavior of the symmetric Lanczos process Paige1976 . He built a theory which links the convergence of the computed eigenvalue approximations to the loss of orthogonality, see Paige1971 ; Paige1972 ; Paige1980 . As in the symmetric Lanczos process, when we use the Lanczos bidiagonalization process to approximate some singular values and vectors of a matrix Golub1965 , the loss of orthogonality among Lanczos vectors leads to the appearance of spurious singular values, which has been analyzed by Larsen in Larsen1998 . When we use the JBD process for the GSVD computation, the loss of orthogonality among Lanczos vectors leads to the appearance of spurious generalized singular values and a delay of convergence Zha1996 ; Li2019 . It has been proved in Li2019 that the loss of orthogonality of the Lanczos vectors $u_i$ and $\tilde{v}_i$ is similar to that in the Lanczos bidiagonalization process, while the orthogonal level of $\hat{u}_i$ is affected by the orthogonal levels of $u_i$ and $\tilde{v}_i$ and the growth of .

In order to maintain some level of orthogonality of the Lanczos vectors, full reorthogonalization and many other reorthogonalization strategies have been used. For the symmetric Lanczos process, Simon Simon1984a built a recurrence formula to describe the loss of orthogonality among Lanczos vectors. His backward error analysis shows that semiorthogonality among Lanczos vectors is enough to guarantee the accuracy of the computed quantities up to machine precision and to avoid the appearance of spurious eigenvalues. Based on the semiorthogonalization strategy, he introduced a new reorthogonalization method called partial reorthogonalization Simon1984b , which saves a large amount of reorthogonalization work compared with full reorthogonalization. For the Lanczos bidiagonalization process, Larsen Larsen1998 proved that it is sufficient to maintain semiorthogonality among the Lanczos vectors in order to obtain approximate singular values with accuracy up to machine precision and to prevent spurious singular values from appearing. Based on the partial reorthogonalization method for the symmetric Lanczos process, he presented the Lanczos bidiagonalization with partial reorthogonalization algorithm. For the JBD process, it was shown in Li2019 that semiorthogonality of the Lanczos vectors $u_i$, $\tilde{v}_i$ and $\hat{u}_i$ is enough to guarantee that the convergence of the computed quantities is not affected by rounding errors and that the final accuracy of the approximate generalized singular values and vectors is high enough, as long as does not grow too big. This result provides guidance for designing a practical semiorthogonalization strategy for the JBD process, which is the main topic of this paper.

In this paper, we propose a semiorthogonalization strategy for the $k$-step JBD process to keep the orthogonal levels of $u_i$, $\tilde{v}_i$ and $\hat{u}_i$ below . We make a rounding error analysis of the JBD process with the semiorthogonalization strategy, which builds connections between the JBD process with the semiorthogonalization strategy and the Lanczos bidiagonalization process in finite precision arithmetic. We prove that the computed $B_k$ is, up to roundoff, the Ritz-Galerkin projection of $Q_A$ on the subspaces and , while $\widehat{B}_k$ is approximately the Ritz-Galerkin projection of $Q_L$ on the subspaces and within error , which implies that the accuracy of the quantities computed from $B_k$ and $\widehat{B}_k$ will not be affected, as long as does not grow too big.

Based on the semiorthogonalization strategy, we develop a practical algorithm called the joint bidiagonalization process with partial reorthogonalization (JBDPRO). The central idea of partial reorthogonalization is that the levels of orthogonality among the Lanczos vectors satisfy coupled recurrence relations, see Simon1984b ; Larsen1998 , which can be used as a practical tool for efficiently estimating the levels of orthogonality, for deciding when to reorthogonalize, and for determining which Lanczos vectors need to be included in the reorthogonalization step; a schematic sketch is given after this paragraph. Numerical experiments show that our JBDPRO algorithm is more efficient than the joint bidiagonalization with full reorthogonalization (JBDFRO), while computing approximate generalized singular values and vectors with the same accuracy.
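To make the idea concrete, the following sketch implements the classical ω-recurrence of Simon Simon1984b for the symmetric Lanczos process; the coupled recurrences used by JBDPRO (developed in Section 4) generalize this to the three families of JBD Lanczos vectors. The indexing conventions, the function name update_omega, and the ε-level terms modeling local rounding errors are our own simplifications, not the paper's exact formulas.

```python
import numpy as np

def update_omega(omega, omega_prev, alpha, beta, k, eps=np.finfo(float).eps):
    """Estimate orthogonality levels of the next symmetric Lanczos vector.

    Conventions (ours): the three-term recurrence is
        beta[k+1] * q_{k+1} = A q_k - alpha[k] * q_k - beta[k] * q_{k-1},
    omega[j] ~ |q_k^T q_j| and omega_prev[j] ~ |q_{k-1}^T q_j|.
    Returns omega_next with omega_next[j] ~ |q_{k+1}^T q_j|.
    """
    omega_next = np.zeros(k + 2)
    for j in range(k):
        t = (beta[j + 1] * omega[j + 1]
             + (alpha[j] - alpha[k]) * omega[j]
             - beta[k] * omega_prev[j])
        if j > 0:
            t += beta[j] * omega[j - 1]
        # the eps term models the local rounding error committed at this step
        omega_next[j] = (abs(t) + 2.0 * eps) / beta[k + 1]
    omega_next[k] = eps      # local orthogonality holds automatically
    omega_next[k + 1] = 1.0  # |q_{k+1}^T q_{k+1}| = 1 after normalization
    return omega_next
```

In JBDPRO, one such estimate sequence is maintained for each of the vector families $u_i$, $\tilde{v}_i$ and $\hat{u}_i$, and a reorthogonalization is triggered once an estimate crosses the chosen semiorthogonality threshold.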

This paper is organized as follows. In Section 2, we review the JBD process and its properties in both exact and finite precision arithmetic. In Section 3, we propose a semiorthogonalization strategy and make a detailed analysis of the JBD process with the semiorthogonalization strategy in finite precision arithmetic. Based on the semiorthogonalization strategy, in Section 4, we develop the JBDPRO algorithm. In Section 5, we use some numerical examples to illustrate our theory and algorithm. Finally, we conclude the paper in Section 6.

In this paper, the norm notation $\|\cdot\|$ always refers to $\|\cdot\|_2$, which denotes the Euclidean norm for vectors or the associated matrix norm. The machine precision, which is also called the roundoff unit, is denoted by the Greek letter $\epsilon$.

2 Joint bidiagonalization process and GSVD computation

In this section, we describe the JBD process and its basic properties in both exact and finite precision arithmetic. We also describe the GSVD computation of $\{A, L\}$ based on the JBD process.

Supposing that the compact QR factorization of $(A^T, L^T)^T$ is as in (1.1), the initial idea of the JBD process is to reduce the matrices $Q_A$ and $Q_L$ to lower and upper bidiagonal forms simultaneously, by applying the BIDIAG-1 procedure and BIDIAG-2 procedure Paige1982 , which correspond to the lower and upper Lanczos bidiagonalizations of $Q_A$ and $Q_L$, respectively. For large-scale matrices $A$ and $L$, the explicit QR factorization (1.1), which is impractical in terms of efficiency and storage, can be avoided by solving a least squares problem with $(A^T, L^T)^T$ as the coefficient matrix iteratively at each iteration. This is summarized in Algorithm 1.

1:  Choosing a starting vector $b$,
2:  
3:  
4:  for  do
5:     
6:     
7:     
8:     
9:  end for
Algorithm 1 The $k$-step joint bidiagonalization (JBD) process
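For concreteness, here is a minimal NumPy sketch of the process in its exact-arithmetic form, under two simplifications: it forms the QR factorization (1.1) explicitly (the practical algorithm instead replaces every product with $Q_A$ or $Q_L$ by an inner least squares solve, as explained below), and it performs no reorthogonalization. The function name jbd and the return convention are ours.

```python
import numpy as np

def jbd(A, L, b, k):
    """A k-step joint bidiagonalization sketch (no reorthogonalization).

    Forms Q = [QA; QL] from the compact QR factorization of [A; L] for
    clarity; the practical algorithm avoids Q via inner LSQR solves.
    Returns the Lanczos vectors and the entries of B_k and hat(B)_k.
    """
    m, n = A.shape
    Q, _ = np.linalg.qr(np.vstack([A, L]))
    QA, QL = Q[:m, :], Q[m:, :]

    # BIDIAG-1: lower bidiagonalization of QA, started with u_1 = b/||b||.
    U = np.zeros((m, k + 1)); V = np.zeros((n, k))
    alpha = np.zeros(k); beta = np.zeros(k)   # alpha_1..alpha_k, beta_2..beta_{k+1}
    U[:, 0] = b / np.linalg.norm(b)
    v = QA.T @ U[:, 0]
    alpha[0] = np.linalg.norm(v); V[:, 0] = v / alpha[0]
    for i in range(k):
        u = QA @ V[:, i] - alpha[i] * U[:, i]
        beta[i] = np.linalg.norm(u); U[:, i + 1] = u / beta[i]
        if i + 1 < k:
            v = QA.T @ U[:, i + 1] - beta[i] * V[:, i]
            alpha[i + 1] = np.linalg.norm(v); V[:, i + 1] = v / alpha[i + 1]

    # BIDIAG-2: upper bidiagonalization of QL, started with vhat_1 = v_1.
    Uh = np.zeros((L.shape[0], k)); Vh = np.zeros((n, k))
    ah = np.zeros(k); bh = np.zeros(max(k - 1, 0))   # hat(alpha)_i, hat(beta)_i
    Vh[:, 0] = V[:, 0]
    w = QL @ Vh[:, 0]
    ah[0] = np.linalg.norm(w); Uh[:, 0] = w / ah[0]
    for i in range(k - 1):
        vh = QL.T @ Uh[:, i] - ah[i] * Vh[:, i]
        bh[i] = np.linalg.norm(vh); Vh[:, i + 1] = vh / bh[i]
        w = QL @ Vh[:, i + 1] - bh[i] * Uh[:, i]
        ah[i + 1] = np.linalg.norm(w); Uh[:, i + 1] = w / ah[i + 1]

    return U, V, Uh, alpha, beta, ah, bh
```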

In Algorithm 1, at each iteration $i$ we need to compute $QQ^T(u_i^T, 0)^T$, which is not directly accessible since $Q$ is not available. Notice that $QQ^T(u_i^T, 0)^T$ is just the orthogonal projection of $(u_i^T, 0)^T$ onto the column space of $(A^T, L^T)^T$, so $QQ^T(u_i^T, 0)^T = (A^T, L^T)^T x_i$, where

$$x_i = \arg\min_{x\in\mathbb{R}^{n}}\left\|\begin{pmatrix}A\\ L\end{pmatrix}x - \begin{pmatrix}u_i\\ 0\end{pmatrix}\right\|. \qquad (2.1)$$

The large-scale least squares problem (2.1) can be solved using the LSQR algorithm Paige1982 , which is the most commonly used iterative solver for such problems.
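As an illustration, the inner step can be carried out with SciPy's LSQR solver; the helper name project_onto_range and the tolerances are our own choices, and a sparse implementation would build the stacked matrix with scipy.sparse.vstack instead of np.vstack.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def project_onto_range(A, L, u, atol=1e-14, btol=1e-14):
    """Orthogonal projection of [u; 0] onto the column space of [A; L],
    computed by solving the inner least squares problem (2.1) with LSQR."""
    M = np.vstack([A, L])                      # scipy.sparse.vstack when sparse
    rhs = np.concatenate([u, np.zeros(L.shape[0])])
    x = lsqr(M, rhs, atol=atol, btol=btol)[0]  # the solution x_i of (2.1)
    return M @ x
```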

In exact arithmetic, the $k$-step JBD process computes the lower bidiagonal matrix $B_k$ and the upper bidiagonal matrix $\widehat{B}_k$, as well as three column orthonormal matrices

$$U_{k+1}=(u_1,\ldots,u_{k+1})\in\mathbb{R}^{m\times(k+1)},\quad \tilde{V}_k=(\tilde{v}_1,\ldots,\tilde{v}_k)\in\mathbb{R}^{n\times k},\quad \widehat{U}_k=(\hat{u}_1,\ldots,\hat{u}_k)\in\mathbb{R}^{p\times k}. \qquad (2.2)$$

If we write all the recurrences in matrix form, we have

(2.3)
(2.4)
(2.5)

where . Noticing that is in the subspace spanned by the columns of for , supposing and let , then the following two matrices

(2.6)

are column orthonormal and . In exact arithmetic, one can obtain from relations (2.3)–(2.5) that

(2.7)
(2.8)
(2.9)
(2.10)

Relations (2.7) and (2.8) indicate that the process of computing $U_{k+1}$, $\tilde{V}_k$ and $B_k$ is equivalent to the lower Lanczos bidiagonalization of the matrix $Q_A$, while relations (2.9) and (2.10) indicate that the process of computing $\widehat{U}_k$, $\widehat{V}_k$ and $\widehat{B}_k$ is equivalent to the upper Lanczos bidiagonalization of the matrix $Q_L$. Therefore, the JBD process is a natural generalization of the Lanczos bidiagonalization process.
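This equivalence is easy to check numerically with the jbd() sketch from above: assembling $B_k$ as in (1.2), the BIDIAG-1 relation $Q_A\tilde{V}_k = U_{k+1}B_k$ holds to roundoff for a small random pair, since at such small $k$ the orthogonality has not yet degraded. A minimal check, with sizes chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)
m, p, n, k = 30, 12, 10, 5
A, L = rng.standard_normal((m, n)), rng.standard_normal((p, n))
U, V, Uh, alpha, beta, ah, bh = jbd(A, L, rng.standard_normal(m), k)

Q, _ = np.linalg.qr(np.vstack([A, L]))
QA = Q[:m, :]
B = np.zeros((k + 1, k))
B[np.arange(k), np.arange(k)] = alpha        # diagonal: alpha_1, ..., alpha_k
B[np.arange(1, k + 1), np.arange(k)] = beta  # subdiagonal: beta_2, ..., beta_{k+1}
print(np.linalg.norm(QA @ V - U @ B))        # O(eps) residual
```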

The JBD process can be used to approximate some extreme generalized singular values and vectors of a large sparse or structured matrix pair $\{A, L\}$ Zha1996 ; Li2019 . We first describe the GSVD of $\{A, L\}$. Let

(2.11)

be the CS decomposition of the matrix pair $\{Q_A, Q_L\}$ Golub2012 ; Van1985 , where the three outer factors are orthogonal matrices, and $C$ and $S$ are diagonal matrices (not necessarily square) satisfying $C^TC + S^TS = I$. To simplify the presentation, assume that $(A^T, L^T)^T$ has full column rank; then the GSVD of $\{A, L\}$ is

(2.12)

with , where the invertibility of follows from the assumption that $(A^T, L^T)^T$ has full column rank. The $i$-th generalized singular value of $\{A, L\}$ is $c_i/s_i$, while the corresponding $i$-th generalized right singular vector is , and the left vectors of $A$ and $L$ are and , respectively. We point out that if $s_i = 0$, the generalized singular value is $\infty$.

After the $k$-step JBD process of $\{A, L\}$, we have computed $B_k$ and $\widehat{B}_k$. Suppose that we have computed the SVD of $B_k$:

$$B_k = P\,\Theta\,W^{T}, \qquad \Theta = \operatorname{diag}(c_1,\ldots,c_k), \qquad (2.13)$$

where $P$ and $W$ are orthogonal. The decomposition (2.13) can be computed by a direct method since $B_k$ is a matrix of small scale. The approximate generalized singular values of $\{A, L\}$ are $c_i/s_i$ with $s_i=(1-c_i^2)^{1/2}$, while the approximate right and left generalized singular vectors are formed from the columns of $W$ and $P$ together with the Lanczos vectors.

For large-scale matrices $A$ and $L$, the explicit computation of $R^{-1}$ can be avoided. Notice that

$$\begin{pmatrix}A\\ L\end{pmatrix}R^{-1}\tilde{V}_k = Q\tilde{V}_k = \begin{pmatrix}Q_A\tilde{V}_k\\ Q_L\tilde{V}_k\end{pmatrix},$$

hence, by solving a least squares problem, we obtain the desired vectors from the computed Lanczos quantities.

If we also want to compute the approximations of the left generalized singular vectors of $L$, we need to compute the SVD of $\widehat{B}_k$. The approximate generalized singular values and corresponding right vectors can also be computed from the SVD of $\widehat{B}_k$. The procedure is similar to the one above and we omit it.
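A compact sketch of this extraction step, under our simplifying assumptions that the SVD of $B_k$ is written as in (2.13), that $s_i=(1-c_i^2)^{1/2}$, and that the triangular factor $R$ is available for a direct solve (for large-scale problems the solve is replaced by the least squares problem described above); the function name is hypothetical.

```python
import numpy as np

def approx_gsvd_from_Bk(B, Vtil, R):
    """Approximate generalized singular values and right vectors from the
    k-step JBD quantities: B is the (k+1) x k matrix B_k, Vtil the Lanczos
    matrix Vtilde_k, and R the triangular QR factor from (1.1)."""
    P, c, WT = np.linalg.svd(B)
    c = np.clip(c, 0.0, 1.0)              # singular values of B_k lie in [0, 1]
    s = np.sqrt(1.0 - c**2)
    with np.errstate(divide="ignore"):
        gsv = c / s                       # approximate c_i/s_i; inf when s_i = 0
    X = np.linalg.solve(R, Vtil @ WT.T)   # approximate right vectors
    return gsv, X
```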

In finite precision arithmetic, however, the orthogonality among the Lanczos vectors computed by the JBD process will be lost, and relations (2.7)–(2.10) will no longer hold. The rounding error analysis of the JBD process in finite precision arithmetic is based on a set of assumptions about, and properties of, the rounding errors that occur, which together constitute a rational model for the actual computation. We state them here following Li2019 .

The first property is that

(2.14)

Second, the property of local orthogonality of the computed $u_i$ and $\tilde{v}_i$ holds, that is,

(2.15)
(2.16)

Third, it is assumed that

(2.17)

which is almost always true in practice; the rare cases where $\alpha_i$ or $\beta_i$ do become small are actually the lucky ones, since then the algorithm should be terminated, having found an invariant singular subspace.

We also assume that the inner least squares problem (2.1) is always solved accurately. It was shown in Li2019 that the following four relations hold:

(2.18)
(2.19)
(2.20)
(2.21)

where

(2.22)

and

(2.23)

for and . Relations (2.18) and (2.19) indicate that the process of computing $U_{k+1}$, $\tilde{V}_k$ and $B_k$ is equivalent to the lower Lanczos bidiagonalization of $Q_A$ within error $O(\epsilon)$, while (2.20) and (2.21) indicate that the process of computing $\widehat{U}_k$ and $\widehat{B}_k$ is equivalent to the upper Lanczos bidiagonalization of $Q_L$ within error .

In finite precision arithmetic, if we use the JBD process to approximate some extreme generalized singular values and vectors of $\{A, L\}$, the loss of orthogonality of the Lanczos vectors will lead to the appearance of additional copies of approximations of true generalized singular values, which are called “ghosts”, see Li2019 .

Following Larsen1998 ; Li2019 , the orthogonal level of Lanczos vectors is defined as follows.

Definition 2.1.

Let , and define the orthogonal level of the Lanczos vectors to be

(2.24)

The above definition is also used for and , and we use the notations and to denote these two quantities.

In the following analysis, we often use the terminology “the orthogonal level of ” for simplicity, which means the orthogonal level of .
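As a concrete illustration, one common realization of such a measure (not necessarily the exact formula in (2.24)) is the largest off-diagonal entry of the Gram matrix:

```python
import numpy as np

def orth_level(V):
    """Largest mutual non-orthogonality among the (normalized) columns of V:
    max over i != j of |v_i^T v_j|, one way to realize Definition 2.1."""
    G = V.T @ V
    np.fill_diagonal(G, 0.0)
    return np.max(np.abs(G))
```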

To prevent “ghosts” from appearing, one can use full reorthogonalization for the Lanczos vectors $u_i$, $\tilde{v}_i$ and $\hat{u}_i$ at each iteration, to make sure that their orthogonal levels are about $O(\epsilon)$. The disadvantage of the full reorthogonalization strategy is that it incurs much additional computation. It has been shown in Li2019 that semiorthogonality of $u_i$, $\tilde{v}_i$ and $\hat{u}_i$ is enough to guarantee the accuracy of the approximate generalized singular values and to prevent “ghosts” from appearing.
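For reference, full reorthogonalization amounts to the following generic step, applied to each new Lanczos vector before normalization; two Gram-Schmidt passes are the usual safeguard.

```python
import numpy as np

def full_reorth(w, Vprev):
    """Orthogonalize the new Lanczos vector w against all previously
    computed vectors (the columns of Vprev), using two classical
    Gram-Schmidt passes ("twice is enough")."""
    for _ in range(2):
        w = w - Vprev @ (Vprev.T @ w)
    return w
```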

Theorem 2.1.

Given the $k$-step JBD process of $\{A, L\}$ on a computer with roundoff unit $\epsilon$, supposing that the inner least squares problem (2.1) is solved accurately, let , where , and let the compact QR factorizations of , be , where the diagonals of the upper triangular matrices are nonnegative, and let . If

(2.25)

then

(2.26)

and

(2.27)

where the notation means that the elements of the matrix are of .

The theorem shows that if the Lanczos vectors are kept semiorthogonal, the computed $B_k$ is, up to roundoff, the Ritz-Galerkin projection of $Q_A$ on the subspaces and , while the computed $\widehat{B}_k$ is the Ritz-Galerkin projection of $Q_L$ on the subspaces and within error . Therefore, we can use the SVD of $B_k$ or $\widehat{B}_k$ to approximate the generalized singular values and vectors of $\{A, L\}$ and prevent “ghosts” from appearing. The final accuracy of the approximate quantities computed from $B_k$ and $\widehat{B}_k$ will be high enough as long as does not grow too big.

In the next section, we will propose a semiorthogonalization strategy as well as make a detailed analysis of the JBD process equipped with the semiorthogonalization strategy.

3 A semiorthogonalization strategy for the JBD process

For the $k$-step JBD process, in order to keep the orthogonal level of below , at the $k$-th step we need to use a reorthogonalization strategy for , see Simon1984a ; Larsen1998 ; SimonZha2000 . At the $k$-th step, supposing that

(3.1)

if for some , then we choose real numbers , and form

(3.2)

where and are the rounding errors that appear in the computation. The algorithm then continues with instead of .

Definition 3.1.

The above modification of the JBD process will be called a semiorthogonalization strategy for if the following conditions are satisfied:
(1). The numbers are chosen such that

(3.3)

(2). The computation of , , and the formation of cause at most rounding errors of , i.e., we have

(3.4)

where .
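One natural realization of the numbers in (3.2) is Gram-Schmidt coefficients against a selected subset of the earlier Lanczos vectors. The sketch below reorthogonalizes exactly against the vectors whose estimated orthogonal level exceeds a threshold delta; both this selection rule and the suggested threshold are our illustrative simplifications of the strategy developed in this section and in Section 4.

```python
import numpy as np

def semi_reorth(w, V, levels, delta):
    """Reorthogonalize the new Lanczos vector w only against those columns
    of V whose estimated orthogonal level exceeds delta, taking the
    Gram-Schmidt coefficients v_i^T w as one choice of the numbers in (3.2)."""
    picked = [i for i in range(V.shape[1]) if levels[i] >= delta]
    for i in picked:
        w = w - (V[:, i] @ w) * V[:, i]
    return w, picked

# An illustrative semiorthogonality-style threshold (our choice):
# delta = np.sqrt(np.finfo(float).eps)
```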

The semiorthogonalization strategies for and are similar, and the corresponding $k$-th step recurrences are

(3.5)
(3.6)

where . We point out that the rounding error terms , and here are different from those appearing in relations (2.18)–(2.23); we use the same notations just for simplicity.

Notice that the reorthogonalization of does not use the vector , due to the property of local orthogonality between and . The reasons are similar for the reorthogonalizations of and . After the semiorthogonalization step, relations (2.15) and (2.16) still hold.

Remark 3.1.

Since is dependent on , it is difficult to keep the orthogonal level of just below . We simply use instead of , which also works well.

3.1 Properties of the semiorthogonalization strategy

After $k$ steps, we have computed three groups of Lanczos vectors , and , whose orthogonal levels are below . Furthermore, from (3.4)–(3.6) we have the following relations:

(3.7)
(3.8)
(3.9)

where , and

(3.10)

For the JBD process with the semiorthogonalization strategy, notice that is approximately in the subspace spanned by the columns of within error for ; letting and , then and are kept semiorthogonal. Moreover, from relations (3.7)–(3.9) we can obtain that

(3.11)
(3.12)
(3.13)

where . To simplify notation, we always use instead of in the following analysis.

For the semiorthogonalization strategy, a large part of the elements of , and are nonzero, and the nonzero elements are of . In fact, we can obtain a tighter bound for these nonzero elements. We first give the following lemma.

Lemma 3.1.

For the JBD process with semiorthogonalization strategy, the relation

(3.14)

holds for all .

Proof.

We prove (3.14) by mathematical induction. For the base case , we have

Next, suppose that (3.14) is true for all indices up to . For , we have

Since , we only need to prove that