1 Introduction
Least-squares fitting is a commonly employed approach in engineering applications and scientific research, including geometric modeling. With the advent of the big data era, least-squares fitting systems with singular coefficient matrices often appear, when the number of fitted data points is very large, or when there are "holes" in the fitted data points. LSPIA [deng2014progressive] is an efficient iterative method for least-squares B-spline curve and surface fitting [brandt2015optimal]. In Ref. [deng2014progressive], it was shown that LSPIA is convergent when the iterative matrix is nonsingular. In this paper, we show that LSPIA is still convergent even when the iterative matrix is singular. This property of LSPIA will promote its applications in large-scale data fitting.
The motivation of this paper comes from our research practice, where singular least-squares fitting systems emerge. For example, in generating trivariate B-spline solids by fitting tetrahedral meshes [lin2015constructing], and in fitting images with holes by T-spline surfaces [lin2013efficient], the coefficient matrices of the least-squares fitting systems are singular. There, LSPIA was employed to solve the least-squares fitting systems, and it converged to stable solutions. However, in Refs. [lin2015constructing, lin2013efficient], the convergence of LSPIA for solving singular linear systems was not proved.
The progressive-iterative approximation (PIA) method was first developed in [lin2004constructing, lin2005totally]. It endows iterative methods with geometric meanings, so it is suitable for handling geometric problems appearing in the field of geometric design. It was proved that the PIA method is convergent for B-spline fitting [lin2011extended, deng2014progressive], NURBS fitting [shi06iterative], T-spline fitting [lin2013efficient], subdivision surface fitting [cheng2009loop, fan2008subdivision, chen2008progressive], as well as curve and surface fitting with totally positive bases [lin2005totally]. The iterative format of geometric interpolation (GI) [maekawa2007interpolation] is similar to that of PIA. While PIA depends on the parametric distance, the iterations of GI rely on the geometric distance. Moreover, the PIA and GI methods have been employed in some applications, such as reverse engineering [kineri2012b, yoshihara2012topologically], curve design [okaniwa2012uniform], surface-surface intersection [lin2014affine], and trivariate B-spline solid generation [lin2015constructing], etc.

2 The iterative format and its convergence analysis
To integrate the LSPIA iterative formats for B-spline curves, B-spline patches, trivariate B-spline solids, and T-splines, their representations are rewritten in the following unified form,

(1) $P(t) = \sum_{j=1}^{n} B_j(t) P_j,$

where $B_j(t)$, $j = 1, 2, \dots, n$, are the blending basis functions, and $P_j$, $j = 1, 2, \dots, n$, are the control points.
Specifically, T-spline patches [sederberg2004t] and trivariate T-spline solids [zhang2012solid] can be naturally written in the form (1). Moreover,

If (1) is a B-spline curve, then $t$ is a scalar parameter, and $B_j(t) = N_j(t)$, where $N_j(t)$ is a B-spline basis function.

If (1) is a B-spline patch with $n_u \times n_v$ control points, then $n = n_u n_v$, $t = (u, v)$, and $B_j(t) = N_{j_u}(u) N_{j_v}(v)$, where $N_{j_u}(u)$ and $N_{j_v}(v)$ are B-spline basis functions. In the control net of the B-spline patch, the original index of $P_j$ is $(j_u, j_v)$, with $j_u = \lfloor (j-1)/n_v \rfloor + 1$ and $j_v = ((j-1) \bmod n_v) + 1$, where $\lfloor x \rfloor$ represents the maximum integer not exceeding $x$, and $a \bmod b$ is the remainder of $a$ divided by $b$.

If (1) is a trivariate B-spline solid with $n_u \times n_v \times n_w$ control points, then $n = n_u n_v n_w$, $t = (u, v, w)$, and $B_j(t) = N_{j_u}(u) N_{j_v}(v) N_{j_w}(w)$. In the control net of the trivariate B-spline solid, the original index of $P_j$ is $(j_u, j_v, j_w)$, with $j_u = \lfloor (j-1)/(n_v n_w) \rfloor + 1$, $j_v = \lfloor ((j-1) \bmod (n_v n_w))/n_w \rfloor + 1$, and $j_w = ((j-1) \bmod n_w) + 1$.
Suppose we are given a data point set

(2) $\{Q_i,\ i = 1, 2, \dots, m\},$

each of which is assigned a parameter $t_i$. Let the initial form be,

(3) $P^{(0)}(t) = \sum_{j=1}^{n} B_j(t) P_j^{(0)}.$
It should be noted that, though the initial control points $P_j^{(0)}$, $j = 1, 2, \dots, n$, are usually chosen from the given data points, the initial control points are unrelated to the convergence of LSPIA. To perform LSPIA iterations, the data points are classified into $n$ groups. All of the data points with parameters $t_i$ satisfying $B_j(t_i) \neq 0$ are classified into the $j$th group, corresponding to the control point $P_j^{(0)}$ in (3); note that one data point can belong to several groups. After the $k$th iteration of LSPIA, the form $P^{(k)}(t) = \sum_{j=1}^{n} B_j(t) P_j^{(k)}$ is generated.
To produce the form $P^{(k+1)}(t)$, we first calculate the difference vectors for the data points (DVD) (Fig. 1),

$\delta_i^{(k)} = Q_i - P^{(k)}(t_i),\quad i = 1, 2, \dots, m.$

Then, two procedures are performed, i.e., vector distribution and vector gathering (Fig. 1). In the vector distribution procedure, all of the DVDs corresponding to the data points in the $j$th group are distributed to the control point $P_j^{(k)}$; in the vector gathering procedure, all of the DVDs distributed to the control point $P_j^{(k)}$ are averaged with weights to generate the difference vector for the control point (DVC) (Fig. 1),

$\Delta_j^{(k)} = \frac{\sum_{i \in I_j} B_j(t_i)\, \delta_i^{(k)}}{\sum_{i \in I_j} B_j(t_i)},$

where $I_j$ is the index set of the data points in the $j$th group. Then, the new control point $P_j^{(k+1)}$ is produced by adding the DVC to $P_j^{(k)}$, i.e.,

(4) $P_j^{(k+1)} = P_j^{(k)} + \Delta_j^{(k)},\quad j = 1, 2, \dots, n,$

leading to the iteration form,

(5) $P^{(k+1)}(t) = \sum_{j=1}^{n} B_j(t) P_j^{(k+1)}.$
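As an illustration, the distribute-and-gather procedure above can be sketched in a few lines of NumPy. The basis table, parameters, and data points below are hypothetical stand-ins (any nonnegative blending basis forming a partition of unity would do); the sketch only shows the structure of one LSPIA step.

```python
import numpy as np

def lspia_step(P, Q, t, basis, n):
    """One LSPIA iteration: compute the DVDs, distribute them to the groups,
    gather them into DVCs, and update the control points as in Eq. (4)."""
    m = len(Q)
    B = np.array([[basis(j, t[i]) for j in range(n)] for i in range(m)])
    delta = Q - B @ P                     # DVDs: delta_i = Q_i - P(t_i)
    P_new = P.copy()
    for j in range(n):
        I_j = np.nonzero(B[:, j])[0]      # group I_j: data points with B_j(t_i) != 0
        if I_j.size == 0:
            continue
        c_j = B[I_j, j].sum()
        # DVC: weighted average of the DVDs distributed to control point j
        P_new[j] = P[j] + (B[I_j, j] @ delta[I_j]) / c_j
    return P_new

# Hypothetical "basis": a random nonnegative table with unit row sums,
# standing in for a partition-of-unity blending basis.
rng = np.random.default_rng(0)
Bmat = rng.random((8, 3))
Bmat /= Bmat.sum(axis=1, keepdims=True)
t = np.arange(8)
basis = lambda j, ti: Bmat[int(ti), j]

P0, Q = rng.random((3, 2)), rng.random((8, 2))
P1 = lspia_step(P0, Q, t, basis, 3)
err0 = np.linalg.norm(Q - Bmat @ P0)
err1 = np.linalg.norm(Q - Bmat @ P1)      # the fitting error does not increase
```

Because the rows of the basis table sum to one, one step never increases the least-squares fitting error, which is the geometric intuition behind the convergence analysis that follows.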
In this way, we get a sequence of iterative forms $\{P^{(k)}(t),\ k = 0, 1, 2, \dots\}$. Let,

(6) $P^{(k)} = [P_1^{(k)}, P_2^{(k)}, \dots, P_n^{(k)}]^T,\quad Q = [Q_1, Q_2, \dots, Q_m]^T,$

(7) $B = \big(B_j(t_i)\big) \in \mathbb{R}^{m \times n}.$

From Eq. (4), and noting that $B_j(t_i) = 0$ for $i \notin I_j$, it follows,

$P_j^{(k+1)} = P_j^{(k)} + \frac{1}{c_j} \sum_{i \in I_j} B_j(t_i)\big(Q_i - P^{(k)}(t_i)\big),\quad j = 1, 2, \dots, n.$

Therefore, we get the LSPIA iterative format in matrix form,

(8) $P^{(k+1)} = P^{(k)} + \Lambda B^T (Q - B P^{(k)}),$

where $\Lambda = \mathrm{diag}\big(\frac{1}{c_1}, \frac{1}{c_2}, \dots, \frac{1}{c_n}\big)$ is a diagonal matrix, and,

$c_j = \sum_{i \in I_j} B_j(t_i),\quad j = 1, 2, \dots, n.$
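For a concrete check of the matrix format (8), the sketch below builds a small singular example of our own construction: linear "hat" basis functions (nonnegative, partition of unity) with the last one split into two identical halves, so that two columns of the collocation matrix coincide and the system is singular.

```python
import numpy as np

# Collocation matrix of linear "hat" basis functions on the nodes 0, 1/3, 2/3, 1,
# with the last hat split into two identical halves: two columns coincide,
# so B^T B is singular, while the rows still sum to 1.
m = 12
t = np.linspace(0.0, 1.0, m)
nodes = np.array([0.0, 1/3, 2/3, 1.0])
hats = np.stack([np.interp(t, nodes, np.eye(4)[j]) for j in range(4)], axis=1)
B = np.column_stack([hats[:, 0], hats[:, 1], hats[:, 2],
                     hats[:, 3] / 2, hats[:, 3] / 2])

rng = np.random.default_rng(0)
Q = rng.random((m, 2))
Lam = np.diag(1.0 / B.sum(axis=0))        # Lambda_jj = 1 / c_j

P = rng.random((5, 2))                     # arbitrary initial control points
for _ in range(5000):
    P = P + Lam @ B.T @ (Q - B @ P)        # iterative format (8)

# Despite the singularity, the iteration settles on a solution of the
# normal equation B^T B X = B^T Q.
assert np.linalg.matrix_rank(B.T @ B) < 5
assert np.allclose(B.T @ B @ P, B.T @ Q, atol=1e-8)
```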
Remark 1

The iterative format (8) is slightly different from that developed in Ref. [deng2014progressive], where the diagonal elements of the diagonal matrix $\Lambda$ are all equal to each other. Although the difference between the two iterative formats is slight, the convergence analysis of the iterative format (8) is a bit more difficult [lin2013efficient].
Remark 2

Because the diagonal elements $\frac{1}{c_j}$ of the diagonal matrix $\Lambda$ in the iterative format (8) are all positive, the diagonal matrix $\Lambda$ is nonsingular.
To show the convergence of the LSPIA iterative format (8), it is rewritten as,

(9) $P^{(k+1)} = (I - \Lambda B^T B) P^{(k)} + \Lambda B^T Q.$

In Ref. [deng2014progressive], it was shown that the LSPIA iterative format is convergent when the iterative matrix $\Lambda B^T B$ is nonsingular. In the following, we show that, even if the matrix $B$ is not of full rank, so that $\Lambda B^T B$ is singular, the iterative format (8) is still convergent.
We first show some lemmas.
Lemma 3

The eigenvalues of the matrix $\Lambda B^T B$ are all real, and satisfy $0 \leq \lambda \leq 1$.

Proof: On one hand, suppose $\lambda$ is an arbitrary eigenvalue of the matrix $\Lambda B^T B$ with eigenvector $v$, i.e.,

(10) $\Lambda B^T B v = \lambda v.$

By multiplying $\Lambda^{-1/2}$ at both sides of Eq. (10), we have,

$\Lambda^{1/2} B^T B \Lambda^{1/2} (\Lambda^{-1/2} v) = \lambda (\Lambda^{-1/2} v).$

It means that $\lambda$ is also an eigenvalue of the matrix $\Lambda^{1/2} B^T B \Lambda^{1/2}$ with eigenvector $\Lambda^{-1/2} v$. Moreover, $\lambda \geq 0$, because,

$\Lambda^{1/2} B^T B \Lambda^{1/2} = (B \Lambda^{1/2})^T (B \Lambda^{1/2}),$

the matrix $\Lambda^{1/2} B^T B \Lambda^{1/2}$ is a positive semidefinite matrix. Eigenvalues of a positive semidefinite matrix are all nonnegative, so $\lambda$ is real, and $\lambda \geq 0$.

On the other hand, because the B-spline basis functions are nonnegative and form a partition of unity, every row sum of $\Lambda B^T B$ satisfies, $\frac{1}{c_j} \sum_{l=1}^{n} \sum_{i \in I_j} B_j(t_i) B_l(t_i) = \frac{1}{c_j} \sum_{i \in I_j} B_j(t_i) = 1$. Together with $\lambda \geq 0$, we have,

$\lambda \leq \rho(\Lambda B^T B) \leq \|\Lambda B^T B\|_\infty = 1.$

In conclusion, the eigenvalues of the matrix $\Lambda B^T B$ are all real, and satisfy $0 \leq \lambda \leq 1$.
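Lemma 3 can be checked numerically. In the sketch below, the collocation matrix is a hypothetical nonnegative matrix whose rows sum to 1 (mimicking the partition-of-unity property), with a repeated column so that $B^T B$ is singular; the eigenvalues of $\Lambda B^T B$ come out real and inside $[0, 1]$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Nonnegative collocation matrix whose rows sum to 1, with a repeated
# column so that B^T B is singular.
m, n = 30, 6
B = rng.random((m, n))
B[:, 5] = B[:, 4]
B /= B.sum(axis=1, keepdims=True)

Lam = np.diag(1.0 / B.sum(axis=0))   # Lambda_jj = 1 / c_j
M = Lam @ B.T @ B

ev = np.linalg.eigvals(M)
assert np.max(np.abs(ev.imag)) < 1e-9   # all eigenvalues are real
assert ev.real.min() > -1e-9            # nonnegative (similarity to a PSD matrix)
assert ev.real.max() < 1 + 1e-9         # bounded by 1 (rows of M sum to 1)
```

Note that $M$ always has the eigenvalue $1$ exactly, since its rows sum to one, and the eigenvalue $0$, since the repeated column makes it singular.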
Because $B^T B$ in (9) is singular, $\Lambda B^T B$ is also singular, and then $0$ is its eigenvalue. The following lemma deals with the relationship between the algebraic multiplicity and geometric multiplicity of the zero eigenvalue of $\Lambda B^T B$.
Remark 4

In this paper, we assume that the dimension of the zero eigenspace of $B^T B$ is $r$. So, the rank of the matrix $B^T B$ is $\mathrm{rank}(B^T B) = n - r$. Because $\Lambda$ is nonsingular (refer to Remark 2), we have $\mathrm{rank}(\Lambda B^T B) = \mathrm{rank}(B^T B) = n - r$.
Lemma 5

The algebraic multiplicity of the zero eigenvalue of the matrix $\Lambda B^T B$ is equal to its geometric multiplicity.

Proof: The proof consists of three parts.

(1) The algebraic multiplicity of the zero eigenvalue of the matrix $B^T B$ is equal to its geometric multiplicity. Because $B^T B$ is a positive semidefinite matrix, it is a diagonalizable matrix. Then, for any eigenvalue of $B^T B$ (including the zero eigenvalue), its algebraic multiplicity is equal to its geometric multiplicity. In Remark 4, we assume that the dimension of the zero eigenspace of $B^T B$, i.e., the geometric multiplicity of the zero eigenvalue of $B^T B$, is $r$. So, the algebraic multiplicity and geometric multiplicity of the zero eigenvalue of $B^T B$ are both $r$.

(2) The geometric multiplicity of the zero eigenvalue of the matrix $\Lambda B^T B$ is equal to that of the matrix $B^T B$. Denote the eigenspaces of the matrices $B^T B$ and $\Lambda B^T B$ associated with the zero eigenvalue as $V_0$ and $W_0$, respectively. The geometric multiplicities of the zero eigenvalue of the matrices $B^T B$ and $\Lambda B^T B$ are the dimensions of $V_0$ and $W_0$, respectively.

Note that the matrix $\Lambda$ is nonsingular (Remark 2). On one hand, $B^T B v = 0$ implies $\Lambda B^T B v = 0$, leading to $V_0 \subseteq W_0$. So, $\dim(V_0) \leq \dim(W_0)$. On the other hand, $\Lambda B^T B v = 0$ implies $B^T B v = \Lambda^{-1} 0 = 0$, resulting in $W_0 \subseteq V_0$. So, $\dim(W_0) \leq \dim(V_0)$. In conclusion, $\dim(V_0) = \dim(W_0)$. Therefore, the geometric multiplicity of the zero eigenvalue of the matrix $\Lambda B^T B$ is equal to that of the matrix $B^T B$.
(3) The algebraic multiplicity of the zero eigenvalue of the matrix $\Lambda B^T B$ is equal to that of the matrix $B^T B$. Denote $I$ as an $n \times n$ identity matrix. The characteristic polynomials of $B^T B$ and $\Lambda B^T B$ can be written as [horn1985matrix, pp. 42],

(11) $\det(\lambda I - B^T B) = \lambda^n - E_1(B^T B)\lambda^{n-1} + E_2(B^T B)\lambda^{n-2} - \cdots + (-1)^n E_n(B^T B),$

and,

(12) $\det(\lambda I - \Lambda B^T B) = \lambda^n - E_1(\Lambda B^T B)\lambda^{n-1} + E_2(\Lambda B^T B)\lambda^{n-2} - \cdots + (-1)^n E_n(\Lambda B^T B),$

where $E_k(B^T B)$ are the sums of the $k \times k$ principal minors of $B^T B$, and $E_k(\Lambda B^T B)$ are the sums of the $k \times k$ principal minors of $\Lambda B^T B$, $k = 1, 2, \dots, n$.

On one hand, because the algebraic multiplicity of the zero eigenvalue of $B^T B$ is $r$ (see Part (1)), its characteristic polynomial (11) can be represented as,

$\det(\lambda I - B^T B) = \lambda^r \big(\lambda^{n-r} - E_1(B^T B)\lambda^{n-r-1} + \cdots + (-1)^{n-r} E_{n-r}(B^T B)\big),$

where $E_{n-r}(B^T B) \neq 0$. Moreover, because $B^T B$ is positive semidefinite, all of its principal minors are nonnegative. Therefore, we have $E_{n-r}(B^T B) > 0$. Consequently, all of the $(n-r) \times (n-r)$ principal minors of $B^T B$ are nonnegative, and at least one $(n-r) \times (n-r)$ principal minor of $B^T B$ is positive.

On the other hand, because $\mathrm{rank}(B^T B) = n - r$ (Remark 4), all of the $k \times k$ ($k > n - r$) principal minors of $B^T B$ are zero. Therefore,

(13) $E_k(B^T B) = 0,\quad k = n - r + 1, \dots, n.$

Denote $M_{i_1 i_2 \cdots i_k}(B^T B)$ and $M_{i_1 i_2 \cdots i_k}(\Lambda B^T B)$ as the $k \times k$ principal minors of $B^T B$ and $\Lambda B^T B$ with rows and columns indexed by $i_1 < i_2 < \cdots < i_k$, respectively. Now, consider a principal minor of $\Lambda B^T B$,

$M_{i_1 i_2 \cdots i_k}(\Lambda B^T B) = \frac{1}{c_{i_1} c_{i_2} \cdots c_{i_k}}\, M_{i_1 i_2 \cdots i_k}(B^T B),$

where $\frac{1}{c_{i_1} c_{i_2} \cdots c_{i_k}} > 0$ (Remark 2). In other words, a principal minor of $\Lambda B^T B$ is the product of the corresponding principal minor of $B^T B$ and a positive value. Together with the facts that all of the $(n-r) \times (n-r)$ principal minors of $B^T B$ are nonnegative and at least one of them is positive, the sum of all $(n-r) \times (n-r)$ principal minors of $\Lambda B^T B$, namely $E_{n-r}(\Lambda B^T B)$, is positive. That is,

(14) $E_{n-r}(\Lambda B^T B) > 0.$

Similarly, Eq. (13) implies $E_k(\Lambda B^T B) = 0$, $k = n - r + 1, \dots, n$. By Eqs. (13) and (14), the characteristic polynomial (12) of $\Lambda B^T B$ can be transformed into,

$\det(\lambda I - \Lambda B^T B) = \lambda^r \big(\lambda^{n-r} - E_1(\Lambda B^T B)\lambda^{n-r-1} + \cdots + (-1)^{n-r} E_{n-r}(\Lambda B^T B)\big),$

where $E_{n-r}(\Lambda B^T B) \neq 0$. It means that the algebraic multiplicity of the zero eigenvalue of $\Lambda B^T B$ is $r$, equal to the algebraic multiplicity of the zero eigenvalue of $B^T B$.
Combining the results of Parts (1)–(3), we have shown that the algebraic multiplicity of the zero eigenvalue of the matrix $\Lambda B^T B$ is equal to its geometric multiplicity.
Denote $J_i(\lambda_i)$ as a $d_i \times d_i$ matrix block,

(15) $J_i(\lambda_i) = \begin{bmatrix} \lambda_i & 1 & & \\ & \lambda_i & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_i \end{bmatrix}.$

Specifically, $[0]$ is a $1 \times 1$ Jordan block. Lemmas 3 and 5 result in Lemma 6 as follows.
Lemma 6

The Jordan canonical form of the matrix $\Lambda B^T B$ is,

(16) $J = \mathrm{diag}\big(J_1(\lambda_1), J_2(\lambda_2), \dots, J_s(\lambda_s), \underbrace{0, \dots, 0}_{r}\big),$

where the Jordan blocks corresponding to the zero eigenvalue are all $1 \times 1$.

Proof: Based on Lemma 3, the eigenvalues of $\Lambda B^T B$ are all real and lie in $[0, 1]$, so the Jordan canonical form of $\Lambda B^T B$ can be written as,

$J = \mathrm{diag}\big(J_1(\lambda_1), \dots, J_s(\lambda_s), Z_1, \dots, Z_q\big),$

where $\lambda_1, \lambda_2, \dots, \lambda_s \in (0, 1]$ are the nonzero eigenvalues of $\Lambda B^T B$, which need not be distinct, and $J_i(\lambda_i)$ (15) is a $d_i \times d_i$ Jordan block, $i = 1, 2, \dots, s$; $Z_l$ is a Jordan block corresponding to the zero eigenvalue of $\Lambda B^T B$, $l = 1, 2, \dots, q$.

According to the theory of the Jordan canonical form [horn1985matrix, p. 129], the number of Jordan blocks corresponding to an eigenvalue is the geometric multiplicity of the eigenvalue, and the sum of the orders of all Jordan blocks corresponding to an eigenvalue equals its algebraic multiplicity. Based on Lemma 5, the algebraic multiplicity of the zero eigenvalue of the matrix $\Lambda B^T B$ is equal to its geometric multiplicity, so the Jordan blocks corresponding to the zero eigenvalue of $\Lambda B^T B$ are all the $1 \times 1$ matrix $[0]$. This proves Lemma 6.
Denote $(B^T B)^{+}$ as the Moore–Penrose (MP) pseudo-inverse of the matrix $B^T B$. We have the following lemma.

Lemma 7

(17) $(B^T B)(B^T B)^{+} = (B^T B)^{+}(B^T B) = U\, \mathrm{diag}(I_{n-r}, 0)\, U^T.$

Proof: Because $\mathrm{rank}(B^T B) = n - r$ (Remark 4), and $B^T B$ is a positive semidefinite matrix, it has the singular value decomposition (SVD),

(18) $B^T B = U\, \mathrm{diag}(\sigma_1, \sigma_2, \dots, \sigma_{n-r}, 0, \dots, 0)\, U^T,$

where $U$ is an orthogonal matrix, and $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_{n-r} > 0$ are the nonzero singular values of $B^T B$. Then, the MP pseudo-inverse of $B^T B$ is,

$(B^T B)^{+} = U\, \mathrm{diag}\Big(\frac{1}{\sigma_1}, \frac{1}{\sigma_2}, \dots, \frac{1}{\sigma_{n-r}}, 0, \dots, 0\Big)\, U^T.$

Therefore,

$(B^T B)(B^T B)^{+} = (B^T B)^{+}(B^T B) = U\, \mathrm{diag}(I_{n-r}, 0)\, U^T,$

where $U$ is an orthogonal matrix.
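The SVD-based pseudo-inverse used in the proof can be reproduced directly with NumPy; the matrix below is a hypothetical singular $B^T B$ built from a collocation matrix with a repeated column, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

B = rng.random((10, 4))
B[:, 3] = B[:, 2]          # rank deficiency: B^T B is singular
G = B.T @ B                # symmetric positive semidefinite

# For a symmetric PSD matrix, the SVD coincides with the eigendecomposition.
U, s, Vt = np.linalg.svd(G)
r = int(np.sum(s > 1e-12)) # numerical rank (here n - r = 3)

# MP pseudo-inverse: invert only the nonzero singular values.
s_inv = np.array([1.0 / x if x > 1e-12 else 0.0 for x in s])
G_pinv = Vt.T @ np.diag(s_inv) @ U.T

assert np.allclose(G_pinv, np.linalg.pinv(G), atol=1e-8)
# G G^+ = G^+ G = U diag(I_r, 0) U^T, the orthogonal projector onto range(G).
Pr = U[:, :r] @ U[:, :r].T
assert np.allclose(G @ G_pinv, Pr, atol=1e-8)
assert np.allclose(G_pinv @ G, Pr, atol=1e-8)
```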
Based on the lemmas above, we can show the convergence of the iterative format (9) when $\Lambda B^T B$ is singular.

Theorem 8

When the matrix $\Lambda B^T B$ is singular, the LSPIA iterative format (9) is convergent, and its limit is a solution of the linear system $B^T B X = B^T Q$.
Proof: By Lemma 6, the Jordan canonical form of the matrix $\Lambda B^T B$ in (9) is (16). Then, there exists an invertible matrix $V$, such that,

$\Lambda B^T B = V\, \mathrm{diag}\big(J_1(\lambda_1), \dots, J_s(\lambda_s), 0, \dots, 0\big)\, V^{-1}.$

Therefore (refer to Eq. (16)),

$(I - \Lambda B^T B)^k = V\, \mathrm{diag}\big((I - J_1(\lambda_1))^k, \dots, (I - J_s(\lambda_s))^k, I_r\big)\, V^{-1},$

where $1 - \lambda_i \in [0, 1)$, $i = 1, 2, \dots, s$. Then, together with Lemma 7, it holds,

(19) $\lim_{k \to \infty} (I - \Lambda B^T B)^k = V\, \mathrm{diag}(0, \dots, 0, I_r)\, V^{-1}.$

Now, consider the linear system $B^T B X = B^T Q$ (refer to Eq. (7)). It has solutions if and only if [james1978generalised],

(20) $(B^T B)(B^T B)^{+} B^T Q = B^T Q,$

which holds because $B^T Q$ lies in the range of $B^T B$. Denote a solution as $X^*$, i.e., $B^T B X^* = B^T Q$. Subtracting $X^*$ from both sides of the iterative format (9), together with Eq. (20), we have,

(21) $P^{(k+1)} - X^* = (I - \Lambda B^T B)(P^{(k)} - X^*) = (I - \Lambda B^T B)^{k+1}(P^{(0)} - X^*).$

Owing to Eq. (19), it follows,

(22) $\lim_{k \to \infty} (P^{(k+1)} - X^*) = V\, \mathrm{diag}(0, \dots, 0, I_r)\, V^{-1} (P^{(0)} - X^*).$

By simple computation, Eq. (22) changes to,

(23) $\lim_{k \to \infty} P^{(k+1)} = X^* + V\, \mathrm{diag}(0, \dots, 0, I_r)\, V^{-1} (P^{(0)} - X^*).$

Since the second term lies in the zero eigenspace of $\Lambda B^T B$, the limit is still a solution of $B^T B X = B^T Q$. Therefore, the iterative format (9) is convergent when $\Lambda B^T B$ is singular. Theorem 8 is proved.
Remark 9

Returning to Eq. (23), if $V^T$ is the inverse matrix of $V$, i.e., $V$ is an orthogonal matrix, it becomes,

(24) $\lim_{k \to \infty} P^{(k+1)} = (B^T B)^{+} B^T Q + \big(I - (B^T B)^{+}(B^T B)\big) P^{(0)},$

where $P^{(0)}$ is an arbitrary initial value. Eq. (24) is the MP pseudo-inverse form of the solutions of the linear system $B^T B X = B^T Q$, which is the normal equation of the least-squares fitting to the data points (2). Because $P^{(0)}$ is an arbitrary value, there are infinitely many solutions to the normal equation $B^T B X = B^T Q$. Within these solutions, $(B^T B)^{+} B^T Q$ is the one with the minimum Euclidean norm [horn1985matrix].
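A quick numerical sanity check of Eq. (24), with hypothetical data: any vector of the form $(B^T B)^{+} B^T Q + (I - (B^T B)^{+} B^T B) X^{(0)}$ solves the normal equation, and $X^{(0)} = 0$ yields the solution of minimum Euclidean norm.

```python
import numpy as np

rng = np.random.default_rng(4)

B = rng.random((10, 4))
B[:, 3] = B[:, 2]                  # singular B^T B
A, c = B.T @ B, B.T @ rng.random(10)
A_pinv = np.linalg.pinv(A)

x0 = rng.standard_normal(4)        # arbitrary initial value
x = A_pinv @ c + (np.eye(4) - A_pinv @ A) @ x0   # Eq. (24)

assert np.allclose(A @ x, c, atol=1e-8)          # x solves the normal equation
# The choice x0 = 0 gives the minimum-Euclidean-norm solution.
assert np.linalg.norm(A_pinv @ c) <= np.linalg.norm(x) + 1e-12
```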
Actually, if the diagonal elements of the matrix $\Lambda$ in (9) are equal to each other, denoted as $\mu$, the iterative format (9) can be written as,

(25) $P^{(k+1)} = (I - \mu B^T B) P^{(k)} + \mu B^T Q.$
In this case, we have the following theorem.
Theorem 10

If $B^T B$ is singular, and the spectral radius $\rho(\mu B^T B) \leq 1$, the iterative format (25) converges to the solution (24) of the linear system $B^T B X = B^T Q$. Moreover, if the initial value $P^{(0)} = 0$, the iterative format (25) converges to $(B^T B)^{+} B^T Q$, i.e., the MP pseudo-inverse solution of the linear system $B^T B X = B^T Q$ with the minimum Euclidean norm.
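Theorem 10 can be illustrated with a synthetic rank-deficient matrix whose singular values are fixed in advance (so the number of iterations needed is predictable); the matrix is not a spline collocation matrix, just a convenient example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Construct B with known singular values (1, 0.8, 0.5, 0): B^T B is singular.
Uq, _ = np.linalg.qr(rng.standard_normal((8, 4)))
Vq, _ = np.linalg.qr(rng.standard_normal((5, 4)))
B = Uq @ np.diag([1.0, 0.8, 0.5, 0.0]) @ Vq.T
Q = rng.random(8)

mu = 1.0                    # sigma_max(B)^2 = 1, so rho(mu * B^T B) = 1
x = np.zeros(5)             # initial value P^(0) = 0
for _ in range(500):
    x = x + mu * B.T @ (Q - B @ x)   # iterative format (25)

# Starting from zero, the iterates stay in range(B^T) and converge to
# pinv(B) @ Q, the minimum-norm least-squares solution.
assert np.allclose(x, np.linalg.pinv(B) @ Q, atol=1e-10)
```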
Proof: Because $B^T B$ is both a normal matrix and a positive semidefinite matrix, its eigendecomposition is the same as its singular value decomposition [horn1985matrix], with the form presented in Eq. (18). So, we have,

$I - \mu B^T B = U\, \mathrm{diag}(1 - \mu\sigma_1, \dots, 1 - \mu\sigma_{n-r}, I_r)\, U^T,$

where $U$ is an orthogonal matrix, and $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_{n-r} > 0$ are both the nonzero eigenvalues and the nonzero singular values of $B^T B$. Because $\rho(\mu B^T B) \leq 1$, it holds,

$0 \leq 1 - \mu\sigma_i < 1,\quad i = 1, 2, \dots, n - r.$

Then, based on Lemma 7, we have,

$\lim_{k \to \infty} (I - \mu B^T B)^k = U\, \mathrm{diag}(0, \dots, 0, I_r)\, U^T = I - (B^T B)^{+}(B^T B).$

Same as the deduction in the proof of Theorem 8 (Eqs. (21)–(22)), we have,

$\lim_{k \to \infty} (P^{(k+1)} - X^*) = \big(I - (B^T B)^{+}(B^T B)\big)(P^{(0)} - X^*),$

and,

$\lim_{k \to \infty} P^{(k+1)} = (B^T B)^{+} B^T Q + \big(I - (B^T B)^{+}(B^T B)\big) P^{(0)},$

where the last equality uses $(B^T B)^{+}(B^T B) X^* = (B^T B)^{+} B^T Q$. Therefore, when $P^{(0)} = 0$, the iterative format (25) converges to $(B^T B)^{+} B^T Q$, i.e., the MP pseudo-inverse solution with the minimum Euclidean norm. Theorem 10 is proved.