The Convergence of Least-Squares Progressive Iterative Approximation with Singular Iterative Matrix

07/28/2017
by Hongwei Lin, et al.

Developed in [Deng and Lin, 2014], Least-Squares Progressive Iterative Approximation (LSPIA) is an efficient iterative method for solving B-spline curve and surface least-squares fitting systems. In [Deng and Lin, 2014], it was shown that LSPIA is convergent when the iterative matrix is nonsingular. In this paper, we show that LSPIA is still convergent even when the iterative matrix is singular.


1 Introduction

Least-squares fitting is a commonly employed approach in engineering applications and scientific research, including geometric modeling. With the advent of the big data era, least-squares fitting systems with singular coefficient matrices often arise, when the number of fitted data points is very large or there are "holes" in the fitted data points. LSPIA deng2014progressive is an efficient iterative method for least-squares B-spline curve and surface fitting brandt2015optimal. In Ref. deng2014progressive, it was shown that LSPIA is convergent when the iterative matrix is nonsingular. In this paper, we will show that, when the iterative matrix is singular, LSPIA is still convergent. This property of LSPIA will promote its applications in large-scale data fitting.

The motivation of this paper comes from our research practice, where singular least-squares fitting systems emerge. For example, in generating trivariate B-spline solids by fitting tetrahedral meshes lin2015constructing, and in fitting images with holes by T-spline surfaces lin2013efficient, the coefficient matrices of the least-squares fitting systems are singular. There, LSPIA was employed to solve the least-squares fitting systems, and it converged to stable solutions. However, in Refs. lin2015constructing; lin2013efficient, the convergence of LSPIA for solving singular linear systems was not proved.

The progressive-iterative approximation (PIA) method was first developed in (lin2004constructing; lin2005totally), which endows iterative methods with geometric meaning, so it is well suited to geometric problems appearing in the field of geometric design. It has been proved that the PIA method is convergent for B-spline fitting (lin2011extended; deng2014progressive), NURBS fitting (shi06iterative), T-spline fitting (lin2013efficient), subdivision surface fitting (cheng2009loop; fan2008subdivision; chen2008progressive), as well as curve and surface fitting with totally positive bases (lin2005totally). The iterative format of geometric interpolation (GI) maekawa2007interpolation is similar to that of PIA; while PIA depends on the parametric distance, the iterations of GI rely on the geometric distance. Moreover, the PIA and GI methods have been employed in applications such as reverse engineering (kineri2012b; yoshihara2012topologically), curve design (okaniwa2012uniform), surface-surface intersection (lin2014affine), and trivariate B-spline solid generation (lin2015constructing).

The structure of this paper is as follows. In Section 2, we show the convergence of LSPIA with a singular iterative matrix. In Section 3, an example is presented. Finally, Section 4 concludes the paper.

2 The iterative format and its convergence analysis

To integrate the LSPIA iterative formats for B-spline curves, B-spline patches, trivariate B-spline solids, and T-splines, their representations are rewritten in the following unified form,

(1)  T(t) = \sum_{i=1}^{n} B_i(t) P_i,

where the P_i are the control points and the B_i(t) are the blending basis functions.

Specifically, T-spline patches sederberg2004t and trivariate T-spline solids zhang2012solid can be naturally written in the form (1). Moreover,

  • If (1) is a B-spline curve, then t is a scalar u, and B_i(t) = N_i(u), where N_i(u) is a B-spline basis function.

  • If (1) is a B-spline patch with n_u \times n_v control points, then t = (u, v), and B_i(t) = N_{i_u}(u) N_{i_v}(v), where N_{i_u}(u) and N_{i_v}(v) are B-spline basis functions. In the control net of the B-spline patch, the original index of P_i is (i_u, i_v), with i_u = \lfloor i / n_v \rfloor and i_v = i \bmod n_v, where \lfloor x \rfloor represents the maximum integer not exceeding x, and i \bmod n_v is the modulus of i by n_v.

  • If (1) is a trivariate B-spline solid with n_u \times n_v \times n_w control points, then t = (u, v, w), and B_i(t) = N_{i_u}(u) N_{i_v}(v) N_{i_w}(w). In the control net of the trivariate B-spline solid, the original index of P_i is (i_u, i_v, i_w), with i_u = \lfloor i / (n_v n_w) \rfloor, i_v = \lfloor (i \bmod n_v n_w) / n_w \rfloor, and i_w = i \bmod n_w (see the sketch below).
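Since the flattened-index conventions above are reconstructed, the following Python sketch illustrates the assumed row-major mapping; the helper names patch_index and solid_index are ours, not from the paper.

def patch_index(i, n_v):
    # Flat index i -> control-net index (i_u, i_v) of an n_u x n_v patch.
    return i // n_v, i % n_v

def solid_index(i, n_v, n_w):
    # Flat index i -> control-net index (i_u, i_v, i_w) of an n_u x n_v x n_w solid.
    return i // (n_v * n_w), (i % (n_v * n_w)) // n_w, i % n_w

# Example: in a 3 x 4 control net, the flat index 7 is row 1, column 3.
assert patch_index(7, n_v=4) == (1, 3)
assert solid_index(23, n_v=3, n_w=4) == (1, 2, 3)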

Figure 1:

One iteration step of LSPIA includes two procedures, vector distribution and vector gathering. In the vector distribution procedure, all of the difference vectors for data points (DVDs) corresponding to a group of data points are distributed to the control point that the group corresponds to. In the vector gathering procedure, all of the DVDs distributed to a control point are averaged with weights to generate the difference vector for the control point (DVC) \Delta_i^{(k)}. Here, the blue circles are the data points, and the red curve is the fitting curve T^{(k)}(t).

Suppose we are given a data point set

(2)  \{Q_j,\; j = 1, 2, \cdots, m\},

each of which is assigned a parameter t_j. Let the initial form be,

(3)  T^{(0)}(t) = \sum_{i=1}^{n} B_i(t) P_i^{(0)}.

It should be noted that, though the initial control points P_i^{(0)} are usually chosen from the given data points, the initial control points are unrelated to the convergence of LSPIA. To perform LSPIA iterations, the data points are classified into n groups, the i-th of which corresponds to the control point P_i^{(0)} in (3); all of the data points with parameters t_j falling in the i-th group's parameter range are classified into that group.

After the k-th iteration of LSPIA, the form

T^{(k)}(t) = \sum_{i=1}^{n} B_i(t) P_i^{(k)}

is generated. To produce the form T^{(k+1)}(t), we first calculate the difference vectors for the data points (DVDs) (Fig. 1),

\delta_j^{(k)} = Q_j - T^{(k)}(t_j), \quad j = 1, 2, \cdots, m.

And then, two procedures are performed, i.e., vector distribution and vector gathering (Fig. 1). In the vector distribution procedure, all of the DVDs corresponding to data points in the i-th group are distributed to the control point P_i^{(k)}; in the vector gathering procedure, all of the DVDs distributed to the control point P_i^{(k)} are averaged with weights to generate the difference vector for the control point (DVC) (Fig. 1),

\Delta_i^{(k)} = \frac{\sum_{j \in I_i} B_i(t_j)\, \delta_j^{(k)}}{\sum_{j \in I_i} B_i(t_j)},

where I_i is the index set of the data points in the i-th group. Then, the new control point P_i^{(k+1)} is produced by adding the DVC \Delta_i^{(k)} to P_i^{(k)}, i.e.,

(4)  P_i^{(k+1)} = P_i^{(k)} + \Delta_i^{(k)}, \quad i = 1, 2, \cdots, n,

leading to the iteration form,

(5)  T^{(k+1)}(t) = \sum_{i=1}^{n} B_i(t) P_i^{(k+1)}.

In this way, we get a sequence of iterative forms T^{(k)}(t), k = 0, 1, \cdots. Let,

(6)  P^{(k)} = [P_1^{(k)}, P_2^{(k)}, \cdots, P_n^{(k)}]^T, \quad Q = [Q_1, Q_2, \cdots, Q_m]^T,

(7)  B = \big( B_i(t_j) \big)_{m \times n}.

From Eq. (4), it follows,

P^{(k+1)} = P^{(k)} + \Delta^{(k)}, \quad \Delta^{(k)} = [\Delta_1^{(k)}, \Delta_2^{(k)}, \cdots, \Delta_n^{(k)}]^T = \Lambda B^T (Q - B P^{(k)}).

Therefore, we get the LSPIA iterative format in matrix form,

(8)  P^{(k+1)} = P^{(k)} + \Lambda B^T (Q - B P^{(k)}), \quad k = 0, 1, \cdots,

where \Lambda is a diagonal matrix, and,

\Lambda = \mathrm{diag}\left( \frac{1}{\sum_{j \in I_1} B_1(t_j)}, \frac{1}{\sum_{j \in I_2} B_2(t_j)}, \cdots, \frac{1}{\sum_{j \in I_n} B_n(t_j)} \right).
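As an illustration, the following is a minimal Python/NumPy sketch of the iterative format (8). The grouping choice I_i = {j : B_i(t_j) > 0}, under which the i-th diagonal element of \Lambda is the reciprocal of the i-th column sum of B, is our assumption, as are the helper names.

import numpy as np

def lspia_weights(B):
    # Lam_ii = 1 / sum_{j in I_i} B_i(t_j); with I_i = {j : B_i(t_j) > 0},
    # the sum is simply the i-th column sum of the collocation matrix B.
    return np.diag(1.0 / B.sum(axis=0))

def lspia_step(P, B, Q, Lam):
    # One LSPIA iteration in the matrix form (8):
    # P^(k+1) = P^(k) + Lam B^T (Q - B P^(k)).
    return P + Lam @ B.T @ (Q - B @ P)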

Remark 1

The iterative format (8) is slightly different from that developed in Ref. deng2014progressive, where the diagonal elements of the diagonal matrix \Lambda are equal to each other. Although the difference between the iterative formats is slight, the convergence analysis of the iterative format (8) is a bit more difficult lin2013efficient.

Remark 2

Because the diagonal elements of the diagonal matrix \Lambda in the iterative format (8) are all positive, the diagonal matrix \Lambda is nonsingular.

To show the convergence of the LSPIA iterative format (8), it is rewritten as,

(9)  P^{(k+1)} = (E - \Lambda B^T B) P^{(k)} + \Lambda B^T Q,

where E is the identity matrix. In Ref. deng2014progressive, it was shown that, when the matrix B^T B is nonsingular, the LSPIA iterative format is convergent. In the following, we will show that, even when the matrix B is not of full rank, and then B^T B is singular, the iterative format (8) is still convergent.

We first show some lemmas.

Lemma 3

The eigenvalues \lambda of the matrix \Lambda B^T B are all real, and satisfy 0 \le \lambda \le 1.

Proof: On one hand, suppose \lambda is an arbitrary eigenvalue of the matrix \Lambda B^T B with eigenvector v, i.e.,

(10)  \Lambda B^T B v = \lambda v.

By multiplying \Lambda^{-1/2} at both sides of Eq. (10), we have,

\big( \Lambda^{1/2} B^T B \Lambda^{1/2} \big) \big( \Lambda^{-1/2} v \big) = \lambda \big( \Lambda^{-1/2} v \big).

It means that \lambda is also an eigenvalue of the matrix \Lambda^{1/2} B^T B \Lambda^{1/2} with eigenvector \Lambda^{-1/2} v. Moreover, \lambda \ge 0, because,

\Lambda^{1/2} B^T B \Lambda^{1/2} = \big( B \Lambda^{1/2} \big)^T \big( B \Lambda^{1/2} \big),

the matrix \Lambda^{1/2} B^T B \Lambda^{1/2} is a positive semidefinite matrix. Eigenvalues of a positive semidefinite matrix are all nonnegative, so \lambda is real, and \lambda \ge 0.

On the other hand, because the B-spline basis functions are nonnegative and form a partition of unity, it holds, \|\Lambda B^T B\|_\infty \le 1. Together with \lambda \le \rho(\Lambda B^T B) \le \|\Lambda B^T B\|_\infty, we have,

\lambda \le 1.

Therefore, the eigenvalue \lambda of matrix \Lambda B^T B satisfies,

0 \le \lambda \le 1.

In conclusion, eigenvalues of the matrix \Lambda B^T B are all real, and satisfy 0 \le \lambda \le 1.
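Lemma 3 can be checked numerically. The Python sketch below is a toy example of ours: it builds a collocation matrix from linear (hat) B-spline basis functions, which are nonnegative and form a partition of unity on [0, 1], and verifies that the eigenvalues of \Lambda B^T B are real and lie in [0, 1].

import numpy as np

m, n = 50, 8
t = np.linspace(0.0, 1.0, m)                     # sample parameters
knots = np.linspace(0.0, 1.0, n)
# Linear B-spline (hat) basis: nonnegative, partition of unity on [0, 1].
B = np.maximum(0.0, 1.0 - (n - 1) * np.abs(t[:, None] - knots[None, :]))

Lam = np.diag(1.0 / B.sum(axis=0))               # Lam_ii = 1 / sum_j B_i(t_j)
eigvals = np.linalg.eigvals(Lam @ B.T @ B)
assert np.all(np.abs(eigvals.imag) < 1e-10)      # all real
assert np.all((eigvals.real > -1e-10) & (eigvals.real < 1.0 + 1e-10))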

Because B^T B in (9) is singular, \Lambda B^T B is also singular, and then 0 is an eigenvalue of \Lambda B^T B. The following lemma deals with the relationship between the algebraic multiplicity and the geometric multiplicity of the zero eigenvalue of \Lambda B^T B.

Remark 4

In this paper, we assume that the dimension of the zero eigenspace of B^T B is r. So, the rank of the matrix B^T B is

\mathrm{rank}(B^T B) = n - r.

Because \Lambda is nonsingular (refer to Remark 2), we have

\mathrm{rank}(\Lambda B^T B) = \mathrm{rank}(B^T B) = n - r.

Lemma 5

The algebraic multiplicity of the zero eigenvalue of matrix \Lambda B^T B is equal to its geometric multiplicity.

Proof: The proof consists of three parts.

(1) The algebraic multiplicity of the zero eigenvalue of matrix B^T B is equal to its geometric multiplicity. Because B^T B is a positive semidefinite matrix, it is a diagonalizable matrix. Then, for any eigenvalue of B^T B (including the zero eigenvalue), its algebraic multiplicity is equal to its geometric multiplicity. In Remark 4, we assume that the dimension of the zero eigenspace of B^T B, i.e., the geometric multiplicity of the zero eigenvalue of B^T B, is r. So, the algebraic multiplicity and the geometric multiplicity of the zero eigenvalue of B^T B are both r.

(2) The geometric multiplicity of the zero eigenvalue of matrix \Lambda B^T B is equal to that of matrix B^T B. Denote the eigenspaces of matrices B^T B and \Lambda B^T B associated with the zero eigenvalue as V_0 and W_0, respectively. The geometric multiplicities of the zero eigenvalue of matrices B^T B and \Lambda B^T B are the dimensions of V_0 and W_0, respectively.

Note that the matrix \Lambda is nonsingular (Remark 2). On one hand, for any v \in V_0, B^T B v = 0, leading to \Lambda B^T B v = 0. So, V_0 \subseteq W_0. On the other hand, for any w \in W_0, \Lambda B^T B w = 0, resulting in B^T B w = \Lambda^{-1} \Lambda B^T B w = 0. So, W_0 \subseteq V_0. In conclusion, V_0 = W_0. Therefore, the geometric multiplicity of the zero eigenvalue of matrix \Lambda B^T B is equal to that of matrix B^T B.

(3) The algebraic multiplicity of the zero eigenvalue of matrix \Lambda B^T B is equal to that of matrix B^T B. Denote E as the n \times n identity matrix. The characteristic polynomials of B^T B and \Lambda B^T B can be written as (horn1985matrix, pp. 42),

(11)  \det(\lambda E - B^T B) = \lambda^n - b_1 \lambda^{n-1} + b_2 \lambda^{n-2} - \cdots + (-1)^n b_n,

and,

(12)  \det(\lambda E - \Lambda B^T B) = \lambda^n - c_1 \lambda^{n-1} + c_2 \lambda^{n-2} - \cdots + (-1)^n c_n,

where b_k (k = 1, 2, \cdots, n) are the sums of the k \times k principal minors of B^T B, and c_k are the sums of the k \times k principal minors of \Lambda B^T B.

On one hand, because the algebraic multiplicity of the zero eigenvalue of B^T B is r (see Part (1)), its characteristic polynomial (11) can be represented as,

\det(\lambda E - B^T B) = \lambda^r \big( \lambda^{n-r} - b_1 \lambda^{n-r-1} + \cdots + (-1)^{n-r} b_{n-r} \big),

where b_{n-r} \neq 0. Moreover, because B^T B is positive semidefinite, all of its principal minors are nonnegative. Therefore, we have b_{n-r} > 0. Consequently, all of the (n-r) \times (n-r) principal minors of B^T B are nonnegative, and at least one of them is positive.

On the other hand, because \mathrm{rank}(\Lambda B^T B) = n - r (Remark 4), all of the k \times k (k > n - r) principal minors of \Lambda B^T B are zero. Therefore,

(13)  c_{n-r+1} = c_{n-r+2} = \cdots = c_n = 0.

Denote M and M_\Lambda as the principal minors of B^T B and \Lambda B^T B taken on the same row and column index set \alpha, respectively. Now, consider an (n-r) \times (n-r) principal minor of \Lambda B^T B. Because \Lambda is diagonal, it holds,

M_\Lambda = \Big( \prod_{i \in \alpha} \Lambda_{ii} \Big) M,

where \Lambda_{ii} > 0 (Remark 2). In other words, each principal minor of \Lambda B^T B is the product of the corresponding principal minor of B^T B and a positive value. Together with the facts that all of the (n-r) \times (n-r) principal minors of B^T B are nonnegative, and at least one of them is positive, the sum of all (n-r) \times (n-r) principal minors of \Lambda B^T B, namely c_{n-r}, is positive. That is,

(14)  c_{n-r} > 0.

By Eqs. (13) and (14), the characteristic polynomial of \Lambda B^T B (12) can be transformed to,

\det(\lambda E - \Lambda B^T B) = \lambda^r \big( \lambda^{n-r} - c_1 \lambda^{n-r-1} + \cdots + (-1)^{n-r} c_{n-r} \big),

where c_{n-r} \neq 0. It means that the algebraic multiplicity of the zero eigenvalue of \Lambda B^T B is r, equal to the algebraic multiplicity of the zero eigenvalue of B^T B.

Combining the results of Parts (1)-(3), we have shown that the algebraic multiplicity of the zero eigenvalue of matrix \Lambda B^T B is equal to its geometric multiplicity.
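A small exact check of Lemma 5, on a toy example of ours: for a rank-one matrix B, the characteristic polynomial of \Lambda B^T B has the zero root with algebraic multiplicity 1, matching the geometric multiplicity n - rank(\Lambda B^T B) = 1.

import sympy as sp

B = sp.Matrix([[1, 1], [2, 2], [3, 3]])      # rank 1, so B^T B is singular
Lam = sp.diag(sp.Rational(1, 2), 2)          # an arbitrary positive diagonal
M = Lam * B.T * B                            # equals [[7, 7], [28, 28]]

print(sp.factor(M.charpoly().as_expr()))     # lambda*(lambda - 35)
print(M.cols - M.rank())                     # geometric multiplicity: 1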

Denote J_d(\lambda) as a d \times d Jordan block,

(15)  J_d(\lambda) = \begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{pmatrix}_{d \times d}.

Specifically, J_1(\lambda) = (\lambda) is a 1 \times 1 Jordan block. Lemmas 3 and 5 result in Lemma 6 as follows.

Lemma 6

The Jordan canonical form of the matrix \Lambda B^T B in (9) can be written as,

(16)  J = \mathrm{diag}\big( J_{d_1}(\lambda_1), J_{d_2}(\lambda_2), \cdots, J_{d_s}(\lambda_s), \underbrace{0, \cdots, 0}_{r} \big),

where \lambda_1, \lambda_2, \cdots, \lambda_s are nonzero eigenvalues of \Lambda B^T B, which need not be distinct, and J_{d_l}(\lambda_l) (15) is a d_l \times d_l Jordan block, l = 1, 2, \cdots, s, with d_1 + d_2 + \cdots + d_s = n - r.

Proof: Based on Lemma 3, the eigenvalues of \Lambda B^T B are all real and lie in [0, 1], so the Jordan canonical form of \Lambda B^T B can be written as,

J = \mathrm{diag}\big( J_{d_1}(\lambda_1), \cdots, J_{d_s}(\lambda_s), J_{e_1}(0), \cdots, J_{e_p}(0) \big),

where \lambda_1, \cdots, \lambda_s are nonzero eigenvalues of \Lambda B^T B, which need not be distinct, J_{d_l}(\lambda_l) (15) is a d_l \times d_l Jordan block, l = 1, 2, \cdots, s, and J_{e_h}(0) is an e_h \times e_h Jordan block corresponding to the zero eigenvalue of \Lambda B^T B, h = 1, 2, \cdots, p.

According to the theory of the Jordan canonical form (horn1985matrix, p. 129), the number of Jordan blocks corresponding to an eigenvalue is the geometric multiplicity of the eigenvalue, and the sum of the orders of all Jordan blocks corresponding to an eigenvalue equals its algebraic multiplicity. Based on Lemma 5, the algebraic multiplicity of the zero eigenvalue of \Lambda B^T B is equal to its geometric multiplicity r, so the Jordan blocks corresponding to the zero eigenvalue of \Lambda B^T B are all 1 \times 1 blocks J_1(0) = (0). This proves Lemma 6.
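Continuing the toy example above, SymPy's Jordan decomposition confirms the conclusion of Lemma 6 that the zero eigenvalue contributes only 1 x 1 Jordan blocks.

import sympy as sp

M = sp.Matrix([[7, 7], [28, 28]])            # the Lam * B^T * B toy from above
Phi, J = M.jordan_form()                     # M = Phi * J * Phi**(-1)
print(J)                                     # diagonal with entries 0 and 35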

Denote (B^T B)^+ as the Moore-Penrose (M-P) pseudo-inverse of the matrix B^T B. We have the following lemma.

Lemma 7

There exists an orthogonal matrix V, such that,

(17)  (B^T B)(B^T B)^+ = V \,\mathrm{diag}\big( \underbrace{1, \cdots, 1}_{n-r}, \underbrace{0, \cdots, 0}_{r} \big)\, V^T.

Proof: Because \mathrm{rank}(B^T B) = n - r (Remark 4), and B^T B is a positive semidefinite matrix, it has the singular value decomposition (SVD),

(18)  B^T B = V \,\mathrm{diag}(\sigma_1, \sigma_2, \cdots, \sigma_{n-r}, 0, \cdots, 0)\, V^T,

where V is an orthogonal matrix, and \sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_{n-r} > 0 are the nonzero singular values of B^T B. Then, the M-P pseudo-inverse of B^T B is,

(B^T B)^+ = V \,\mathrm{diag}(\sigma_1^{-1}, \sigma_2^{-1}, \cdots, \sigma_{n-r}^{-1}, 0, \cdots, 0)\, V^T.

Therefore,

(B^T B)(B^T B)^+ = V \,\mathrm{diag}\big( \underbrace{1, \cdots, 1}_{n-r}, \underbrace{0, \cdots, 0}_{r} \big)\, V^T,

where V is an orthogonal matrix.
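The projector form in Eq. (17) can be verified numerically. In the sketch below (our construction), B is built so that B^T B is rank-deficient, and (B^T B)(B^T B)^+ comes out symmetric and idempotent with eigenvalues consisting of ones and zeros.

import numpy as np

rng = np.random.default_rng(1)
C = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0, 1.0]])
B = rng.random((6, 4)) @ C                   # rank(B) = 3, so B^T B is singular
A = B.T @ B

Proj = A @ np.linalg.pinv(A)                 # (B^T B)(B^T B)^+
assert np.allclose(Proj, Proj.T)             # symmetric
assert np.allclose(Proj @ Proj, Proj)        # idempotent
print(np.round(np.linalg.eigvalsh(Proj), 6)) # n - r ones and r zeros, as in (17)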

Based on the lemmas above, we can show the convergence of the iterative format (9) when B^T B is singular.

Theorem 8

When the matrix B^T B in (9) is singular, the iterative format (9) is still convergent.

Proof: By Lemma 6, the Jordan canonical form of the matrix \Lambda B^T B in (9) is J (16). Then, there exists an invertible matrix \Phi, such that,

\Lambda B^T B = \Phi J \Phi^{-1}.

Therefore (refer to Eq. (16)),

(E - \Lambda B^T B)^k = \Phi \,\mathrm{diag}\big( (E_{d_1} - J_{d_1}(\lambda_1))^k, \cdots, (E_{d_s} - J_{d_s}(\lambda_s))^k, E_r \big)\, \Phi^{-1},

where E_d denotes the d \times d identity matrix. Because \lambda_l \in (0, 1], l = 1, 2, \cdots, s, we have |1 - \lambda_l| < 1, so (E_{d_l} - J_{d_l}(\lambda_l))^k \to 0 as k \to \infty. Then, it holds,

(19)  \lim_{k \to \infty} (E - \Lambda B^T B)^k = \Phi \,\mathrm{diag}(0, \cdots, 0, E_r)\, \Phi^{-1}.

Now, consider the linear system B^T B P = B^T Q (refer to Eqs. (6) and (7)). It has solutions if and only if james1978generalised,

(20)  (B^T B)(B^T B)^+ B^T Q = B^T Q.

Because B^T Q \in \mathrm{range}(B^T) = \mathrm{range}(B^T B), and, by Lemma 7, (B^T B)(B^T B)^+ is the orthogonal projector onto \mathrm{range}(B^T B), the condition (20) holds. Denote a solution of the linear system as P^*, i.e., B^T B P^* = B^T Q.

Subtracting P^* from both sides of the iterative format (9), together with Eq. (20), we have,

(21)  P^{(k+1)} - P^* = (E - \Lambda B^T B)(P^{(k)} - P^*) = (E - \Lambda B^T B)^{k+1} (P^{(0)} - P^*).

Owing to Eq. (19), it follows,

(22)  \lim_{k \to \infty} (P^{(k+1)} - P^*) = \Phi \,\mathrm{diag}(0, \cdots, 0, E_r)\, \Phi^{-1} (P^{(0)} - P^*).

By simple computation, Eq. (22) changes to,

(23)  \lim_{k \to \infty} P^{(k+1)} = P^* + \Phi \,\mathrm{diag}(0, \cdots, 0, E_r)\, \Phi^{-1} (P^{(0)} - P^*).

Moreover, because \Phi \,\mathrm{diag}(0, \cdots, 0, E_r)\, \Phi^{-1} (P^{(0)} - P^*) lies in the zero eigenspace of \Lambda B^T B, i.e., in the kernel of B^T B, the limit (23) is also a solution of the linear system B^T B P = B^T Q. Therefore, the iterative format (9) is convergent when B^T B is singular. Theorem 8 is proved.
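The conclusion of Theorem 8 can be observed numerically. In the toy sketch below (our construction), there are more hat-basis control points (n = 6) than data points (m = 4), so B^T B is singular; the LSPIA iterates nevertheless settle on a solution of the normal equation.

import numpy as np

t = np.array([0.05, 0.35, 0.65, 0.95])       # m = 4 parameters
n = 6
knots = np.linspace(0.0, 1.0, n)
B = np.maximum(0.0, 1.0 - (n - 1) * np.abs(t[:, None] - knots[None, :]))
Q = np.array([[0.0, 0.1], [0.3, 0.9], [0.7, 0.8], [1.0, 0.2]])

Lam = np.diag(1.0 / B.sum(axis=0))           # Lam_ii = 1 / sum_j B_i(t_j)
P = np.zeros((n, 2))                         # arbitrary initial control points
for _ in range(1000):
    P = P + Lam @ B.T @ (Q - B @ P)          # iterative format (8)

# Residual of the normal equation B^T B P = B^T Q: near zero although
# B^T B is singular.
print(np.linalg.norm(B.T @ B @ P - B.T @ Q))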

Remark 9

Returning to Eq. (23), if \Phi^T is the inverse matrix of \Phi, i.e., \Phi^{-1} = \Phi^T, it becomes,

(24)  \lim_{k \to \infty} P^{(k+1)} = (B^T B)^+ B^T Q + (E - (B^T B)^+ (B^T B)) P^{(0)},

where P^{(0)} is an arbitrary initial value. Eq. (24) is a solution of the linear system B^T B P = B^T Q, which is the normal equation of the least-squares fitting to the data points (2). Because P^{(0)} is an arbitrary value, Eq. (24) yields infinitely many solutions of the normal equation B^T B P = B^T Q. Within these solutions, (B^T B)^+ B^T Q is the one with the minimum Euclidean norm horn1985matrix.
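The following sketch illustrates Remark 9 (and Theorem 10 below), reusing B, Q, and n from the previous example: with equal diagonal weights \mu and the zero initial value, the iterates approach the minimum-norm pseudo-inverse solution.

A = B.T @ B                                  # singular normal-equation matrix
mu = 1.0 / np.linalg.eigvalsh(A).max()       # satisfies 0 < mu < 2 / rho(A)
P = np.zeros((n, 2))                         # P^(0) = 0
for _ in range(1000):
    P = P + mu * (B.T @ Q - A @ P)           # the format (25) below, Lam = mu E

# With P^(0) = 0, the limit is the minimum-norm solution (B^T B)^+ B^T Q.
print(np.linalg.norm(P - np.linalg.pinv(A) @ B.T @ Q))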

Actually, if the diagonal elements of the matrix \Lambda in (9) are equal to each other, denoted as \mu, the iterative format (9) can be written as,

(25)  P^{(k+1)} = (E - \mu B^T B) P^{(k)} + \mu B^T Q.

In this case, we have the following theorem.

Theorem 10

If B^T B is singular, and \mu satisfies 0 < \mu < 2/\rho(B^T B), so that every nonzero eigenvalue \lambda of B^T B satisfies |1 - \mu\lambda| < 1, the iterative format (25) converges to a solution of the linear system B^T B P = B^T Q. Moreover, if the initial value P^{(0)} = 0, the iterative format (25) converges to (B^T B)^+ B^T Q, i.e., the M-P pseudo-inverse solution of the linear system B^T B P = B^T Q with the minimum Euclidean norm.

Proof: Because B^T B is both a normal matrix and a positive semidefinite matrix, its eigen-decomposition is the same as its singular value decomposition horn1985matrix, with the form presented in Eq. (18). So, we have,

E - \mu B^T B = V \,\mathrm{diag}(1 - \mu\sigma_1, \cdots, 1 - \mu\sigma_{n-r}, 1, \cdots, 1)\, V^T,

where V is an orthogonal matrix, and \sigma_1, \cdots, \sigma_{n-r} are both the nonzero eigenvalues and the nonzero singular values of B^T B. Because 0 < \mu < 2/\rho(B^T B), it holds,

|1 - \mu\sigma_l| < 1, \quad l = 1, 2, \cdots, n - r.

Then, based on Lemma 7, we have,

\lim_{k \to \infty} (E - \mu B^T B)^k = V \,\mathrm{diag}(0, \cdots, 0, E_r)\, V^T = E - (B^T B)^+ (B^T B).

Same as the deduction in the proof of Theorem 8 (Eqs. (21) and (22)), we have,

\lim_{k \to \infty} (P^{(k+1)} - P^*) = (E - (B^T B)^+ (B^T B))(P^{(0)} - P^*),

and,

\lim_{k \to \infty} P^{(k+1)} = (B^T B)^+ B^T Q + (E - (B^T B)^+ (B^T B)) P^{(0)},

where we take P^* = (B^T B)^+ B^T Q and use (E - (B^T B)^+ (B^T B))(B^T B)^+ = 0. Therefore, if the initial value P^{(0)} = 0, the iterative format (25) converges to (B^T B)^+ B^T Q, i.e., the M-P pseudo-inverse solution of the linear system B^T B P = B^T Q with the minimum Euclidean norm.