1 Introduction
We are now in an era of big data as well as high dimensional data. Fortunately, high dimensional data are not unstructured. Usually, they lie near low dimensional manifolds. This is the basis of linear and nonlinear dimensionality reduction
[1]. As a simple yet effective approximation, linear subspaces are usually adopted to model the data distribution. Because low dimensional subspaces correspond to low rank data matrices, the rank minimization problem, which formulates the real problem as an optimization that minimizes the rank in the objective function (cf. models (1), (3) and (4)), is now widely used in machine learning and data recovery
[2, 3, 4, 5]. Actually, rank is regarded as a sparsity measure for matrices [3]. So low rank recovery problems are studied [6, 7, 8, 9] in parallel with the compressed sensing theories for sparse vector recovery. Typical rank minimization problems include matrix completion
[2, 4], which aims at completing the entire matrix from a small sample of its entries, robust principal component analysis
[3], which recovers the ground truth data from sparsely corrupted elements, and low rank representation [10, 11], which finds an affinity matrix of subspaces that has the lowest rank. All of these techniques have found wide applications, such as background modeling
[3], image repairing [12], image alignment [12], image rectification [13], motion segmentation [10, 11], image segmentation [14], and saliency detection [15].

Since the rank of a matrix is discrete, rank minimization problems are usually hard to solve. They can even be NP-hard [3]. To overcome the computational obstacle, people usually replace the rank in the objective function with the nuclear norm, which is the sum of the singular values of a matrix and is the convex envelope of rank on the unit ball of the matrix operator norm [5], so as to transform rank minimization problems into nuclear norm minimization problems (cf. models (2) and (5)). Such a strategy is widely adopted in most rank minimization problems [2, 3, 4, 10, 11, 12, 13, 14, 15]. However, it naturally brings a replacement validity problem, which is defined as follows.

Definition 1 (Replacement Validity Problem)
Given a rank minimization problem together with its corresponding nuclear norm formulation, the replacement validity problem investigates whether the solution to the nuclear norm minimization problem is also a solution to the rank minimization one.
In this paper, we focus on the replacement validity problem. There is a related problem, called the exact recovery problem, that has been more widely studied by scholars. It is defined as follows.
Definition 2 (Exact Recovery Problem)
Given a nuclear norm minimization problem, the exact recovery problem investigates the sufficient conditions under which the nuclear norm minimization problem could exactly recover the real structure of the data.
As an example of the exact recovery problem, Candès et al. proved that when the rank of the optimal solution is sufficiently low and the missing entries are sufficiently few or the corruptions are sufficiently sparse, solving the nuclear norm minimization problems of matrix completion [2] or robust PCA [3]
can exactly recover the ground truth low rank solution with an overwhelming probability. As another example, Liu et al.
[10, 16] proved that when the rank of the optimal solution is sufficiently low and the percentage of corruption does not exceed a threshold, solving the nuclear norm minimization problem of low rank representation (LRR) [10, 11] can exactly recover the ground truth subspaces of the data.

We want to highlight the difference between our replacement validity problem and the exact recovery problem that scholars have considered before. The replacement validity problem compares the solutions of two optimization problems, while the exact recovery problem studies whether solving a nuclear norm minimization problem can exactly recover a ground truth low rank matrix. As a result, in all the existing exact recovery problems, scholars have to assume that the rank of the ground truth solution is sufficiently low. In contrast, the replacement validity problem does not rely on this assumption: even if the ground truth low rank solution cannot be recovered, we can still investigate whether the solution to a nuclear norm minimization problem is also a solution to the corresponding rank minimization problem.
For replacement validity problems, it is easy to believe that the replacement of rank with the nuclear norm will break down for complex rank minimization problems. For exact recovery problems, the existing analysis all focuses on relatively simple rank minimization problems, such as matrix completion [2], robust PCA [3], and LRR [10, 11], and has achieved affirmative results under some conditions. So it is also easy to believe that for simple rank minimization problems the replacement of rank with the nuclear norm will work. This paper aims at breaking such an illusion. Here, we have to point out that the replacement validity problem cannot be studied by numerical experiments, for two reasons: 1. rank is sensitive to numerical errors, so without prior knowledge one may not correctly determine the rank of a given matrix, even if there is a clear drop in its singular values; and 2. it is hard to verify whether a given solution to a nuclear norm minimization problem is a global minimizer of a rank minimization problem, whose objective function is discrete and nonconvex. So we have to study the replacement validity problem by purely theoretical analysis. We analyze a simple rank minimization problem, noiseless latent LRR (LatLRR) [17], to show that solutions to a nuclear norm minimization problem may not be solutions of the corresponding rank minimization problem.
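The first difficulty can be seen in a two line experiment (our illustration, not part of the paper's argument): the numerical rank of a matrix depends entirely on the tolerance used to truncate its singular values.

```python
import numpy as np

# A matrix that is "numerically" rank 1: two singular values sit at the
# level of floating point noise.
X = np.diag([1.0, 1e-17, 1e-17])

print(np.linalg.matrix_rank(X))             # default tolerance -> 1
print(np.linalg.matrix_rank(X, tol=1e-18))  # tighter tolerance -> 3
```

Without prior knowledge of the ground truth, neither answer can be preferred.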
The contributions of this paper include:

We use a simple rank minimization problem, noiseless LatLRR, to prove that solutions to a nuclear norm minimization problem may not be solutions of the corresponding rank minimization problem, even for very simple rank minimization problems.
2 Latent Low Rank Representation
In this section, we first explain the notation that will be used in this paper and then introduce latent low rank representation, whose closed form solutions we will analyze.
2.1 Summary of Main Notations
A large number of matrix related symbols will be used in this paper. Capital letters are used to represent matrices. In particular, $I$ denotes the identity matrix and $0$ is the all-zero matrix. The entry at the $i$th row and the $j$th column of a matrix $M$ is denoted by $[M]_{ij}$. The nuclear norm, the sum of all the singular values of a matrix, is denoted by $\|\cdot\|_*$. The operator norm, the maximum singular value, is denoted by $\|\cdot\|_2$. $\mathrm{Trace}(M)$ represents the sum of the diagonal entries of $M$ and $M^\dagger$ is the Moore-Penrose pseudoinverse of $M$. For simplicity, we use the same letter to represent the subspace spanned by the columns of a matrix. The dimension of a space $S$ is represented by $\dim(S)$. The orthogonal complement of a subspace $S$ is denoted by $S^\perp$. $\mathrm{Range}(M)$ indicates the linear space spanned by all the columns of a matrix $M$, while $\mathrm{Null}(M)$ represents the null space of $M$. They are closely related: $(\mathrm{Null}(M))^\perp = \mathrm{Range}(M^T)$. Finally, we always use $X = U_X \Sigma_X V_X^T$ to represent the skinny SVD of the data matrix $X$. Namely, the numbers of columns in $U_X$ and $V_X$ both equal $\mathrm{rank}(X)$ and $\Sigma_X$ consists of all the nonzero singular values of $X$, making $\Sigma_X$ invertible.

2.2 Low Rank Subspace Clustering Models
Low rankness based subspace clustering stems from low rank representation (LRR) [10, 11]. An interested reader may refer to an excellent review on subspace clustering approaches provided by Vidal [18]. The mathematical model of the original LRR is
(1) $\min_Z \ \mathrm{rank}(Z), \quad \mathrm{s.t.}\ X = XZ,$
where $X$ is the data matrix we observe. LRR extends sparse subspace clustering [19] by generalizing the sparsity from 1D to 2D. When there is noise or corruption, a noise term can be added to the model [10, 11]. Since this paper considers closed form solutions of noiseless models, to save space we omit the noisy model. The corresponding nuclear norm minimization formulation of (1) is
(2) $\min_Z \ \|Z\|_*, \quad \mathrm{s.t.}\ X = XZ,$
which we call the heuristic LRR. LRR has been very successful in clustering data into subspaces robustly
[20]. It is proven that when the underlying subspaces are independent, the optimal representation matrix $Z^*$ is block diagonal, each block corresponding to a subspace [10, 11].

LRR works well only when the samples are sufficient. This condition may not be fulfilled in practice, particularly when the dimension of the samples is large. To resolve this issue, Liu et al. [17] proposed latent low rank representation (LatLRR). Another model that overcomes this drawback of LRR is fixed rank representation [21]. LatLRR assumes that the observed samples can be expressed as linear combinations of themselves together with the unobserved data:
(3) $\min_Z \ \mathrm{rank}(Z), \quad \mathrm{s.t.}\ X = [X, X_H]Z,$
where $X_H$ represents the unobserved samples, which supplement the shortage of the observed ones. Since $X_H$ is unobserved, problem (3) cannot be solved directly. By some deduction and mathematical approximation, LatLRR [17] is modeled as follows:
(4) $\min_{Z, L} \ \mathrm{rank}(Z) + \mathrm{rank}(L), \quad \mathrm{s.t.}\ X = XZ + LX.$
Both the optimal $Z^*$ and $L^*$ can be utilized for learning tasks: $Z^*$ can be used for subspace clustering, while $L^*$ is for feature extraction, thus providing us with the possibility of integrating the two tasks into a unified framework. We call (4) the original LatLRR. Similarly, it has a nuclear norm minimization formulation
(5) $\min_{Z, L} \ \|Z\|_* + \|L\|_*, \quad \mathrm{s.t.}\ X = XZ + LX,$
which we call the heuristic LatLRR. LatLRR has been reported to have better performance than LRR [17, 10].
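To make the noiseless models concrete, the following sketch (ours, not from the paper) computes the skinny SVD of a low rank data matrix and checks that $Z = V_X V_X^T$, the known closed form solution $X^\dagger X$ of the heuristic LRR (2) from [10], is feasible and has nuclear norm equal to $\mathrm{rank}(X)$:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 10))  # rank 3 data

U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = int(np.sum(s > 1e-10))             # rank(X) = 3
U_X, S_X, V_X = U[:, :r], np.diag(s[:r]), Vt[:r, :].T
assert np.allclose(U_X @ S_X @ V_X.T, X)   # skinny SVD reproduces X

Z = V_X @ V_X.T                        # equals X^+ X
assert np.allclose(X @ Z, X)           # feasibility: X = XZ
assert abs(np.linalg.norm(Z, 'nuc') - r) < 1e-8  # nuclear norm = rank(X)
```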
3 Analysis on LatLRR
This section provides surprising results: both the original and heuristic LatLRR have closed form solutions! We are able to write down all their solutions, as presented in the following theorems.
Theorem 3.1
The complete solutions to the original LatLRR problem (4) are as follows
(6) $Z^* = V_X \tilde{W} V_X^T + S_1 \tilde{W} V_X^T, \qquad L^* = U_X \Sigma_X (I - \tilde{W}) \Sigma_X^{-1} U_X^T + U_X \Sigma_X (I - \tilde{W}) S_2,$
where $\tilde{W}$ is any idempotent matrix and $S_1$ and $S_2$ are any matrices satisfying: 1. $V_X^T S_1 = 0$ and $S_2 U_X = 0$; and 2. $\mathrm{rank}(S_1) \le \mathrm{rank}(\tilde{W})$ and $\mathrm{rank}(S_2) \le \mathrm{rank}(I - \tilde{W})$.
Theorem 3.2
The complete solutions to the heuristic LatLRR problem (5) are as follows
(7) $Z^* = V_X \hat{W} V_X^T, \qquad L^* = U_X (I - \hat{W}) U_X^T,$
where $\hat{W}$ is any block diagonal matrix satisfying: 1. its blocks are compatible with $\Sigma_X$, i.e., if $[\Sigma_X]_{ii} \ne [\Sigma_X]_{jj}$ then $[\hat{W}]_{ij} = 0$; and 2. both $\hat{W}$ and $I - \hat{W}$ are positive semidefinite.
By Theorems 3.1 and 3.2, we can conclude that if the $\hat{W}$ in Theorem 3.2 is not idempotent, then the corresponding $Z^*$ is not a solution to the original LatLRR, due to the following proposition:
Proposition 1
The $Z^* = V_X \hat{W} V_X^T$ given by Theorem 3.2 can be written in the form of the solutions (6) to the original LatLRR problem (4) only if $\hat{W}$ is idempotent.
The above results show that for noiseless LatLRR, nuclear norm is not a valid replacement of rank. As a byproduct, since the solution to the heuristic LatLRR is nonunique, the results of LatLRR reported in [17, 11] may be questionable.
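The gap between the two models can also be illustrated numerically (our sketch; it assumes the closed forms reconstructed above, namely $Z^* = V_X \hat{W} V_X^T$ and $L^* = U_X (I - \hat{W}) U_X^T$). Taking $\hat{W} = \frac{1}{2}I$, which is block diagonal and positive semidefinite but not idempotent, yields a feasible pair whose nuclear norm objective attains the minimum $\mathrm{rank}(X)$, while its rank objective is $2\,\mathrm{rank}(X)$, twice the minimum of the original LatLRR:

```python
import numpy as np

rng = np.random.default_rng(2)
r = 3
X = rng.standard_normal((7, r)) @ rng.standard_normal((r, 9))  # rank 3

U, s, Vt = np.linalg.svd(X, full_matrices=False)
U_X, V_X = U[:, :r], Vt[:r, :].T

# W_hat = I/2 satisfies the conditions of Theorem 3.2 but is not idempotent.
Z = 0.5 * V_X @ V_X.T
L = 0.5 * U_X @ U_X.T
assert np.allclose(X @ Z + L @ X, X)   # feasible for both (4) and (5)

nuc = np.linalg.norm(Z, 'nuc') + np.linalg.norm(L, 'nuc')
rk = np.linalg.matrix_rank(Z) + np.linalg.matrix_rank(L)
print(nuc, rk)  # nuclear objective = 3 (the minimum), rank objective = 6 > 3
```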
We provide detailed proofs of the above theorems and proposition in the following section.
4 Proofs
4.1 Proof of Theorem 3.1
We first provide the complete closed form solutions to the original LRR in a more general form
(8) $\min_Z \ \mathrm{rank}(Z), \quad \mathrm{s.t.}\ X = AZ,$
where $\mathrm{Range}(X) \subseteq \mathrm{Range}(A)$ so that the constraint is feasible. We call (8) the generalized original LRR. Then we have the following proposition.
Proposition 2
The minimum objective function value of the generalized original LRR problem (8) is $\mathrm{rank}(X)$, and the complete solutions to (8) are as follows
(9) $Z^* = A^\dagger X + S V_X^T,$
where $S$ is any matrix satisfying $AS = 0$.
Proof
Suppose $Z^*$ is an optimal solution to problem (8). First, we have
(10) $\mathrm{rank}(Z^*) \ge \mathrm{rank}(AZ^*) = \mathrm{rank}(X).$
On the other hand, because problem (8) is feasible, there exists $\tilde{Z}$ such that $X = A\tilde{Z}$. Then $A^\dagger X$ is feasible: $A(A^\dagger X) = A A^\dagger A \tilde{Z} = A\tilde{Z} = X$, where we have utilized a property of the Moore-Penrose pseudoinverse: $A A^\dagger A = A$. So we obtain
(11) $\mathrm{rank}(Z^*) \le \mathrm{rank}(A^\dagger X) \le \mathrm{rank}(X).$
Combining (10) with (11), we conclude that $\mathrm{rank}(X)$ is the minimum objective function value of problem (8).
Next, let $Z^* = BC$ be a full rank decomposition of the optimal $Z^*$, where both $B$ and $C^T$ have $\mathrm{rank}(X)$ columns. From $X = AZ^* = (AB)C$, we have $\mathrm{Range}(V_X) = \mathrm{Range}(X^T) \subseteq \mathrm{Range}(C^T)$. Since both $C^T$ and $V_X$ are full column rank matrices with $\mathrm{rank}(X)$ columns, we can write $C^T = V_X D$, where the square matrix $D$ must be invertible. So $C^T$ and $V_X$ represent the same subspace. Because $B$ and $C$ are unique up to an invertible matrix, we may simply choose $C = V_X^T$. Thus $X = AZ^*$ reduces to $X = (AB)V_X^T$, i.e., $AB = U_X \Sigma_X$, and we conclude that the complete choices of $B$ are given by $B = A^\dagger U_X \Sigma_X + S_0$, where $S_0$ is any matrix such that $AS_0 = 0$. Multiplying $B$ with $C = V_X^T$, we obtain that the entire solutions to problem (8) can be written as $Z^* = A^\dagger X + S V_X^T$, where $S$ is any matrix satisfying $AS = 0$.

Remark 1
Friedland and Torokhti [22] studied a similar model as (8), which is
(12) $\min_Z \ \|X - AZ\|_F, \quad \mathrm{s.t.}\ \mathrm{rank}(Z) \le r.$
However, (8) is different from (12) in two aspects. First, (8) requires the data matrix $X$ to be strictly expressed as linear combinations of the columns of $A$. Second, (8) does not impose an upper bound on the rank of $Z$. Rather, (8) solves for the $Z$ with the lowest rank. As a result, (8) has infinitely many solutions, as shown by Proposition 2, while (12) has a unique solution when the given matrices fulfill some conditions. So the results in [22] do not apply to (8).
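The feasibility step in the proof of Proposition 2 relies only on the Moore-Penrose identity $A A^\dagger A = A$; a quick numerical check (our sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 4))
Z_tilde = rng.standard_normal((4, 5))
X = A @ Z_tilde                        # guarantees Range(X) in Range(A)

A_pinv = np.linalg.pinv(A)
assert np.allclose(A @ A_pinv @ A, A)    # Moore-Penrose property A A^+ A = A
assert np.allclose(A @ (A_pinv @ X), X)  # so A^+ X is feasible: A(A^+ X) = X
```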
Similar to Proposition 2, we can obtain the complete closed form solution to the following problem
(13) $\min_L \ \mathrm{rank}(L), \quad \mathrm{s.t.}\ X = LA,$
which will be used in the proof of Theorem 3.1.
Proposition 3
Assume $\mathrm{Range}(X^T) \subseteq \mathrm{Range}(A^T)$ so that the constraint of (13) is feasible. Then the minimum objective function value of problem (13) is $\mathrm{rank}(X)$, and the complete solutions to (13) are as follows
(14) $L^* = X A^\dagger + U_X S,$
where $S$ is any matrix satisfying $SA = 0$.
Next, we provide the following propositions.
Proposition 4
$\mathrm{rank}(X)$ is the minimum objective function value of the original LatLRR problem (4).
Proof
Suppose $(Z^*, L^*)$ is an optimal solution to problem (4). By Proposition 2 and fixing $L^*$, we have $\mathrm{rank}(Z^*) = \mathrm{rank}(X - L^*X)$. Thus
(15) $\mathrm{rank}(Z^*) + \mathrm{rank}(L^*) \ge \mathrm{rank}(X - L^*X) + \mathrm{rank}(L^*X) \ge \mathrm{rank}(X).$
On the other hand, if $V_X V_X^T$ and $0$ are adopted as $Z$ and $L$, respectively, the lower bound $\mathrm{rank}(X)$ is achieved and the constraint is fulfilled as well. So we conclude that $\mathrm{rank}(X)$ is the minimum objective function value of the original LatLRR problem (4).
Proposition 5
Suppose $(Z^*, L^*)$ is one of the solutions to problem (4). Then there must exist another solution $(\tilde{Z}, L^*)$, such that $X\tilde{Z} = XZ^*$ and $\tilde{Z} = V_X \tilde{W} V_X^T$ for some matrix $\tilde{W}$.
Proof
According to the constraint of problem (4), we have $X = XZ^* + L^*X$, i.e., $XZ^* = X - L^*X$. Since $V_X V_X^T$ is the projection matrix onto $\mathrm{Range}(X^T)$ and $X V_X V_X^T = X$, we have
(16) $X(V_X V_X^T Z^*) = XZ^* = X - L^*X.$
On the other hand, given the optimal $L^*$, $Z^*$ is the optimal solution to
(17) $\min_Z \ \mathrm{rank}(Z), \quad \mathrm{s.t.}\ X - L^*X = XZ.$
So by Proposition 2 we get
(18) $Z^* = X^\dagger (X - L^*X) + S \hat{V}^T,$
where $S$ is some matrix satisfying $XS = 0$ and $\hat{U}\hat{\Sigma}\hat{V}^T$ is the skinny SVD of $X - L^*X$. Since $(X - L^*X)^T = X^T(I - L^*)^T$, we have $\mathrm{Range}(\hat{V}) \subseteq \mathrm{Range}(X^T) = \mathrm{Range}(V_X)$, so (18) implies $\mathrm{Range}(Z^{*T}) \subseteq \mathrm{Range}(V_X)$, i.e., $Z^* = Z^* V_X V_X^T$. As a result,
(19) $\mathrm{rank}(V_X V_X^T Z^*) + \mathrm{rank}(L^*) \le \mathrm{rank}(Z^*) + \mathrm{rank}(L^*) = \mathrm{rank}(X) \le \mathrm{rank}(V_X V_X^T Z^*) + \mathrm{rank}(L^*),$
where the last inequality holds since $(V_X V_X^T Z^*, L^*)$ is, by (16), a feasible solution to problem (4) and $\mathrm{rank}(X)$ is the minimum objective value according to Proposition 4. (19) shows that $(V_X V_X^T Z^*, L^*)$ is an optimal solution. So we may take $\tilde{Z} = V_X V_X^T Z^*$ and write it as
(20) $\tilde{Z} = V_X V_X^T Z^* V_X V_X^T = V_X \tilde{W} V_X^T,$
where $\tilde{W} = V_X^T Z^* V_X$.
Proposition 5 provides us with a great insight into the structure of problem (4): fixing $\tilde{Z} = V_X \tilde{W} V_X^T$, we may break (4) into two subproblems
(21) $\min_Z \ \mathrm{rank}(Z), \quad \mathrm{s.t.}\ XZ = X\tilde{Z},$
and
(22) $\min_L \ \mathrm{rank}(L), \quad \mathrm{s.t.}\ LX = X - X\tilde{Z},$
and then apply Propositions 2 and 3 to find the complete solutions to problem (4).
Lemma 1
For any $W \in \mathbb{R}^{r \times r}$, the following inequality holds
(23) $\mathrm{rank}(W) + \mathrm{rank}(I - W) \ge r,$
where the equality holds if and only if $W$ is idempotent.
Proof
This follows from the identity $\mathrm{rank}(W) + \mathrm{rank}(I - W) = r + \mathrm{rank}(W - W^2)$: the inequality holds since rank is nonnegative, and the equality holds if and only if $W^2 = W$, i.e., $W$ is idempotent.
Based on the above lemma, the following proposition presents the sufficient and necessary condition on $\tilde{W}$.
Proposition 6
Let $\tilde{Z} = V_X \tilde{W} V_X^T$. The solutions to subproblems (21) and (22) form an optimal solution to problem (4) if and only if $\tilde{W}$ is idempotent.
Proof
Obviously, $\tilde{L} = (X - X\tilde{Z})X^\dagger$ is feasible based on the constraint in problem (22). By the optimality of the solutions to subproblems (21) and (22), which Propositions 2 and 3 characterize, the resulting pair $(Z, L)$ satisfies
(28) $\mathrm{rank}(Z) + \mathrm{rank}(L) = \mathrm{rank}(X\tilde{Z}) + \mathrm{rank}(X - X\tilde{Z}) = \mathrm{rank}(\tilde{W}) + \mathrm{rank}(I - \tilde{W}).$
First, we prove the sufficiency. According to the property of idempotent matrices stated in Lemma 1, we have
(29) $\mathrm{rank}(\tilde{W}) + \mathrm{rank}(I - \tilde{W}) = r.$
By substituting (29) into the objective function, the following equalities hold
(30) $\mathrm{rank}(Z) + \mathrm{rank}(L) = \mathrm{rank}(\tilde{W}) + \mathrm{rank}(I - \tilde{W}) = r = \mathrm{rank}(X).$
So $(Z, L)$ is optimal since it achieves the minimum objective function value $\mathrm{rank}(X)$ of problem (4). Conversely, if $\tilde{W}$ is not idempotent, then Lemma 1 gives $\mathrm{rank}(\tilde{W}) + \mathrm{rank}(I - \tilde{W}) > r = \mathrm{rank}(X)$, so the pair cannot attain the minimum. This proves the necessity.
We are now ready to prove Theorem 3.1.
Proof
Solving problems (21) and (22) by using Propositions 2 and 3, where $\tilde{W}$ in $\tilde{Z} = V_X \tilde{W} V_X^T$ is idempotent as Proposition 6 shows, we directly get
(32) $Z^* = V_X \tilde{W} V_X^T + \tilde{S}_1 V_1^T, \qquad L^* = U_X \Sigma_X (I - \tilde{W}) \Sigma_X^{-1} U_X^T + U_2 \tilde{S}_2,$
where $U_1 \Sigma_1 V_1^T$ and $U_2 \Sigma_2 V_2^T$ are the skinny SVDs of $X\tilde{Z}$ and $X - X\tilde{Z}$, respectively, and $\tilde{S}_1$ and $\tilde{S}_2$ are any matrices such that $X\tilde{S}_1 = 0$ and $\tilde{S}_2 X = 0$. Since $\mathrm{Range}(V_1) \subseteq \mathrm{Range}(V_X)$ and $\mathrm{Range}(U_2) \subseteq \mathrm{Range}(U_X)$, there exist full column rank matrices $P_1$ and $P_2$ satisfying $V_1 = V_X P_1$ and $U_2 = U_X P_2$, respectively, with $\mathrm{Range}(P_1) = \mathrm{Range}(\tilde{W}^T \Sigma_X) = \mathrm{Range}(\tilde{W}^T)$ and $\mathrm{Range}(P_2) = \mathrm{Range}(\Sigma_X (I - \tilde{W}))$. We can easily see that a matrix can be written as $\tilde{S}_1 P_1^T$ with $X\tilde{S}_1 = 0$ if and only if it can be written as $S_1 \tilde{W}$ with $V_X^T S_1 = 0$ and $\mathrm{rank}(S_1) \le \mathrm{rank}(\tilde{W})$. Similarly, a matrix can be written as $P_2 \tilde{S}_2$ with $\tilde{S}_2 X = 0$ if and only if it can be written as $\Sigma_X (I - \tilde{W}) S_2$ with $S_2 U_X = 0$ and $\mathrm{rank}(S_2) \le \mathrm{rank}(I - \tilde{W})$. By substituting these decompositions into (32), we obtain the conclusion of Theorem 3.1.
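As a numerical sanity check of the construction (our sketch, assuming the reconstructed solution form $Z^* = V_X \tilde{W} V_X^T$, $L^* = U_X \Sigma_X (I - \tilde{W}) \Sigma_X^{-1} U_X^T$ with $S_1 = S_2 = 0$), one can draw a random idempotent $\tilde{W}$ and verify both feasibility and the minimum objective value $\mathrm{rank}(X)$:

```python
import numpy as np

rng = np.random.default_rng(4)
r = 4
X = rng.standard_normal((8, r)) @ rng.standard_normal((r, 8))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
U_X, S_X, V_X = U[:, :r], np.diag(s[:r]), Vt[:r, :].T

# A random idempotent W (oblique projector): W = B (C B)^{-1} C.
B, C = rng.standard_normal((r, 2)), rng.standard_normal((2, r))
W = B @ np.linalg.inv(C @ B) @ C
assert np.allclose(W @ W, W)

I = np.eye(r)
Z = V_X @ W @ V_X.T                                   # S_1 = 0
L = U_X @ S_X @ (I - W) @ np.linalg.inv(S_X) @ U_X.T  # S_2 = 0
assert np.allclose(X @ Z + L @ X, X)                  # constraint of (4)

rk = np.linalg.matrix_rank(Z) + np.linalg.matrix_rank(L)
print(rk)  # equals rank(X) = 4, the minimum objective value
```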
4.2 Proof of Theorem 3.2
We first quote two results from [10].
Lemma 2
Assume $A \ne 0$ and $X = AZ$ has feasible solution(s), i.e., $\mathrm{Range}(X) \subseteq \mathrm{Range}(A)$. Then
(33) $Z^* = A^\dagger X$
is the unique minimizer to the generalized heuristic LRR problem:
(34) $\min_Z \ \|Z\|_*, \quad \mathrm{s.t.}\ X = AZ.$
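Lemma 2 can be checked numerically (our sketch): all feasible solutions differ from $A^\dagger X$ by a null space perturbation, and $A^\dagger X$ has the strictly smallest nuclear norm among them:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((4, 6))        # wide matrix: nontrivial null space
X = A @ rng.standard_normal((6, 3))    # ensures X = AZ is feasible

Z_star = np.linalg.pinv(A) @ X         # the claimed unique minimizer A^+ X
assert np.allclose(A @ Z_star, X)

# Any other feasible Z differs from Z_star by a null space perturbation.
_, _, Vt = np.linalg.svd(A)
N = Vt[4:, :].T                        # basis of Null(A), shape (6, 2)
for _ in range(100):
    Z = Z_star + N @ rng.standard_normal((2, 3))
    assert np.allclose(A @ Z, X)
    assert np.linalg.norm(Z, 'nuc') > np.linalg.norm(Z_star, 'nuc')
```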
Lemma 3
For any four matrices $B$, $C$, $D$ and $F$ of compatible dimensions, we have the inequalities
(35) $\left\| \begin{bmatrix} B & C \\ D & F \end{bmatrix} \right\|_* \ge \left\| \begin{bmatrix} B & 0 \\ 0 & F \end{bmatrix} \right\|_* = \|B\|_* + \|F\|_* \ge \|B\|_*,$
where the equality throughout holds if and only if $C = 0$, $D = 0$, and $F = 0$.
Then we prove the following lemma.
Lemma 4
For any square matrix $X$, we have $\mathrm{Trace}(X) \le \|X\|_*$, where the equality holds if and only if $X$ is positive semidefinite.
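Before proceeding to the proof, the lemma is easy to verify numerically (our sketch):

```python
import numpy as np

rng = np.random.default_rng(5)

def nuc(M):
    return np.linalg.norm(M, 'nuc')   # nuclear norm: sum of singular values

A = rng.standard_normal((5, 5))       # generic (asymmetric) square matrix
P = A @ A.T                           # positive semidefinite matrix

assert np.trace(A) <= nuc(A) + 1e-10     # trace(X) <= ||X||_* always
assert abs(np.trace(P) - nuc(P)) < 1e-8  # equality for PSD matrices
```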
Proof
We prove by mathematical induction on the size of $X$. When the size is $1 \times 1$, the conclusion is clearly true. When the size is $2 \times 2$, we may simply write down the singular values of $X$ to prove the conclusion.
Now suppose that for any square matrix whose size does not exceed $n \times n$, the inequality holds. Then for any matrix $X \in \mathbb{R}^{(n+1) \times (n+1)}$, partitioned into blocks $B \in \mathbb{R}^{n \times n}$, $C$, $D$, and $F \in \mathbb{R}^{1 \times 1}$, using Lemma 3 we get
(36) $\mathrm{Trace}(X) = \mathrm{Trace}(B) + \mathrm{Trace}(F) \le \|B\|_* + \|F\|_* \le \left\| \begin{bmatrix} B & C \\ D & F \end{bmatrix} \right\|_* = \|X\|_*,$
where the first inequality holds due to the inductive assumption on the matrices $B$ and $F$, and the second follows from Lemma 3. So we always have $\mathrm{Trace}(X) \le \|X\|_*$.
It is easy to check that any positive semidefinite matrix $X$ satisfies $\mathrm{Trace}(X) = \|X\|_*$, since its singular values coincide with its eigenvalues. On the other hand, just following the above proof by choosing $B$ and $F$ as submatrices, we can easily get that $\mathrm{Trace}(X) < \|X\|_*$ strictly holds if $X$ is asymmetric. So if $\mathrm{Trace}(X) = \|X\|_*$, then $X$ must be symmetric. Then the singular values of $X$ are simply the absolute values of its eigenvalues. As $\mathrm{Trace}(X)$ equals the sum of all the eigenvalues of $X$, $\mathrm{Trace}(X) = \|X\|_*$ holds only if all the eigenvalues of $X$ are nonnegative.

Using Lemma 2, we may consider the following unconstrained problem
(37) $\min_Z \ \|Z\|_* + \|(X - XZ) X^\dagger\|_*,$
which is transformed from (5) by eliminating $L$ therein. Then we have the following result.
Proposition 7
The unconstrained optimization problem (37) has the minimum objective function value $\mathrm{rank}(X)$.
Proof
Recall that the subdifferential of the nuclear norm at a matrix $M$ is [23]
(38) $\partial \|M\|_* = \{U_M V_M^T + W : U_M^T W = 0,\ W V_M = 0,\ \|W\|_2 \le 1\},$
where $U_M \Sigma_M V_M^T$ is the skinny SVD of the matrix $M$. We prove that $\tilde{Z} = \frac{1}{2} V_X V_X^T$ is an optimal solution to (37). It is sufficient to show that
(39) $0 \in \partial \left( \|Z\|_* + \|(X - XZ)X^\dagger\|_* \right) \Big|_{Z = \tilde{Z}}.$
Notice that $V_X \left( \tfrac{1}{2} I \right) V_X^T$ is the skinny SVD of $\tilde{Z}$ and $U_X \left( \tfrac{1}{2} I \right) U_X^T$ is the skinny SVD of $(X - X\tilde{Z})X^\dagger$. So the subdifferential at $\tilde{Z}$ contains
(40) $V_X V_X^T - X^T U_X U_X^T (X^\dagger)^T = V_X V_X^T - V_X \Sigma_X \Sigma_X^{-1} V_X^T = 0.$
Substituting $\tilde{Z}$ into (37), we get the minimum objective function value $\left\| \tfrac{1}{2} V_X V_X^T \right\|_* + \left\| \tfrac{1}{2} U_X U_X^T \right\|_* = \mathrm{rank}(X)$.
Next, we have the form of the optimal solutions to (37) as follows.
Proposition 8
The optimal solutions to the unconstrained optimization problem (37) can be written as $Z^* = V_X W V_X^T$, where $W$ is some $r \times r$ matrix.
Proof
Let $V_X^\perp$ be the orthogonal complement of $V_X$ and write an optimal solution $Z^*$ as
$Z^* = (V_X, V_X^\perp) \begin{bmatrix} W_{11} & W_{12} \\ W_{21} & W_{22} \end{bmatrix} (V_X, V_X^\perp)^T,$
where $W_{11} = V_X^T Z^* V_X$. Note that $X V_X^\perp = 0$ and $(V_X^\perp)^T X^\dagger = 0$, so $(X - XZ^*)X^\dagger$ depends only on $W_{11}$. According to Proposition 7, $\mathrm{rank}(X)$ is the minimum objective function value of (37). Thus we get
(41) $\mathrm{rank}(X) = \|Z^*\|_* + \|(X - XZ^*)X^\dagger\|_* \ge \|V_X W_{11} V_X^T\|_* + \|(X - X V_X W_{11} V_X^T) X^\dagger\|_* \ge \mathrm{rank}(X),$
where the first inequality follows from Lemma 3 and the second inequality holds by viewing $V_X W_{11} V_X^T$ as a feasible solution to (37). Then all the inequalities in (41) must be equalities. By Lemma 3 we have
(42) $\|Z^*\|_* = \|W_{11}\|_*.$
That is to say
(43) $W_{12} = 0, \quad W_{21} = 0, \quad W_{22} = 0,$
where we write $W = W_{11}$. Hence the equality
(44) $Z^* = V_X W V_X^T$
holds.
Based on all the above lemmas and propositions, the following proposition gives the complete closed form solutions to the unconstrained optimization problem (37). In particular, the solution to problem (37) is nonunique.
Proposition 9
The complete optimal solutions to the unconstrained optimization problem (37) are $Z^* = V_X \hat{W} V_X^T$, where $\hat{W}$ is any block diagonal matrix satisfying the two conditions stated in Theorem 3.2.
Proof
First, we prove the sufficiency. Suppose $\hat{W}$ satisfies all the conditions in the proposition. Substituting $Z^* = V_X \hat{W} V_X^T$ into the objective function, we have
(45) $\|Z^*\|_* + \|(X - XZ^*)X^\dagger\|_* = \|V_X \hat{W} V_X^T\|_* + \|U_X (I - \hat{W}) U_X^T\|_* = \mathrm{Trace}(\hat{W}) + \mathrm{Trace}(I - \hat{W}) = \mathrm{rank}(X),$
where the first equality uses $\Sigma_X \hat{W} \Sigma_X^{-1} = \hat{W}$, which holds since $\hat{W}$ is block diagonal with blocks compatible with $\Sigma_X$, and the second equality holds by Lemma 4 since both $\hat{W}$ and $I - \hat{W}$ are positive semidefinite.
Next, we give the proof of the necessity. Let $Z^*$ represent a minimizer. According to Proposition 8, $Z^*$ can be written as $Z^* = V_X \hat{W} V_X^T$. We will show that $\hat{W}$ satisfies the stated conditions. Based on Lemma 4, we have
(46) $\mathrm{rank}(X) = \|\hat{W}\|_* + \|I - \Sigma_X \hat{W} \Sigma_X^{-1}\|_* \ge \mathrm{Trace}(\hat{W}) + \mathrm{Trace}(I - \Sigma_X \hat{W} \Sigma_X^{-1}) = \mathrm{rank}(X).$
Thus all the inequalities above must be equalities. From the equalities and Lemma 4, we directly get that $\hat{W}$ and $I - \Sigma_X \hat{W} \Sigma_X^{-1}$ are both positive semidefinite, hence symmetric. The symmetry of $\Sigma_X \hat{W} \Sigma_X^{-1}$, together with the symmetry of $\hat{W}$, gives $\Sigma_X^2 \hat{W} = \hat{W} \Sigma_X^2$, i.e.,
(47) $\left( [\Sigma_X]_{ii}^2 - [\Sigma_X]_{jj}^2 \right) [\hat{W}]_{ij} = 0,$
where $[\Sigma_X]_{ii}$ represents the $i$th entry on the diagonal of $\Sigma_X$. Thus if $[\Sigma_X]_{ii} \ne [\Sigma_X]_{jj}$, then $[\hat{W}]_{ij} = 0$, i.e., $\hat{W}$ is block diagonal and its blocks are compatible with $\Sigma_X$. Notice that then $\Sigma_X \hat{W} \Sigma_X^{-1} = \hat{W}$. By Lemma 4, we get that $I - \hat{W}$ is also positive semidefinite. Hence the proof is completed.
Now we can prove Theorem 3.2.
Proof
Let $Z^* = V_X \hat{W} V_X^T$ and $L^* = U_X (I - \hat{W}) U_X^T$, where $\hat{W}$ satisfies all the conditions in the theorem. Since $\hat{W}$ commutes with $\Sigma_X$, we have $XZ^* + L^*X = U_X \Sigma_X \hat{W} V_X^T + U_X (I - \hat{W}) \Sigma_X V_X^T = U_X \Sigma_X V_X^T = X$, so $(Z^*, L^*)$ is feasible to problem (5). Now suppose that (5) has a better solution $(Z', L')$ than $(Z^*, L^*)$, i.e.,
(48) $\|Z'\|_* + \|L'\|_* < \|Z^*\|_* + \|L^*\|_*$
and
(49) $X = XZ' + L'X.$
Fixing $Z'$ in (5) and applying Lemma 2 (after transposition), we have
(50) $\|(X - XZ')X^\dagger\|_* \le \|L'\|_*.$
Thus
(51) $\|Z'\|_* + \|(X - XZ')X^\dagger\|_* \le \|Z'\|_* + \|L'\|_* < \|Z^*\|_* + \|L^*\|_* = \mathrm{rank}(X).$
So we obtain a contradiction with the optimality of $Z^*$ for problem (37) established in Proposition 9, hence proving the theorem.
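A numerical check of the theorem (our sketch, assuming the reconstructed solution form $Z^* = V_X \hat{W} V_X^T$, $L^* = U_X (I - \hat{W}) U_X^T$): a random diagonal $\hat{W}$ with entries in $[0, 1]$ satisfies both conditions, and the resulting pair is feasible with objective value $\mathrm{rank}(X)$:

```python
import numpy as np

rng = np.random.default_rng(6)
r = 4
X = rng.standard_normal((9, r)) @ rng.standard_normal((r, 7))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
U_X, V_X = U[:, :r], Vt[:r, :].T

# A diagonal W_hat with entries in [0, 1]: block compatible with Sigma_X
# (all off-diagonal entries vanish) and both W_hat and I - W_hat are PSD.
W = np.diag(rng.uniform(0, 1, r))

Z = V_X @ W @ V_X.T
L = U_X @ (np.eye(r) - W) @ U_X.T
assert np.allclose(X @ Z + L @ X, X)   # feasible for problem (5)

obj = np.linalg.norm(Z, 'nuc') + np.linalg.norm(L, 'nuc')
print(obj)  # equals rank(X) = 4
```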
4.3 Proof of Proposition 1
Proof
Suppose the optimal $Z^* = V_X \hat{W} V_X^T$ in Theorem 3.2 could be written in the form given by (6), i.e.,
(52) $V_X \hat{W} V_X^T = V_X \tilde{W} V_X^T + S_1 \tilde{W} V_X^T,$
where $\tilde{W}$ is idempotent and $S_1$ satisfies $V_X^T S_1 = 0$. By multiplying both sides with $V_X^T$ and $V_X$ on the left and right, respectively, we get
(53) $\hat{W} = \tilde{W} + (V_X^T S_1) \tilde{W} = \tilde{W}.$
As a result, $\hat{W}$ is idempotent: $\hat{W}^2 = \tilde{W}^2 = \tilde{W} = \hat{W}$. This completes the proof.