We are now in an era of big data as well as high dimensional data. Fortunately, high dimensional data are usually not unstructured: they typically lie near low dimensional manifolds, which is the basis of linear and nonlinear dimensionality reduction. As a simple yet effective approximation, linear subspaces are often adopted to model the data distribution. Because low dimensional subspaces correspond to low rank data matrices, the rank minimization problem, which formulates a real problem as an optimization that minimizes the rank in the objective function (cf. models (1), (3) and (4)), is now widely used in machine learning and data recovery [2, 3, 4, 5]. Indeed, rank can be regarded as a sparsity measure for matrices, so low rank recovery problems are studied [6, 7, 8, 9] in parallel with the compressed sensing theories for sparse vector recovery. Typical rank minimization problems include matrix completion [2, 4], which aims at completing the entire matrix from a small sample of its entries; robust principal component analysis, which recovers the ground truth data from sparsely corrupted entries; and low rank representation [10, 11], which finds an affinity matrix of subspaces that has the lowest rank. All of these techniques have found wide applications, such as background modeling, image repairing, image alignment, image rectification, motion segmentation [10, 11], image segmentation, and saliency detection.
Since the rank of a matrix is discrete, rank minimization problems are usually hard to solve; they can even be NP hard. To overcome this computational obstacle, it is common practice to replace the rank in the objective function with the nuclear norm, i.e., the sum of the singular values, which is the convex envelope of rank on the unit ball of the matrix operator norm. This transforms rank minimization problems into nuclear norm minimization problems (cf. models (2) and (5)), a strategy adopted in most rank minimization problems [2, 3, 4, 10, 11, 12, 13, 14, 15]. However, it naturally raises a replacement validity problem, which is defined as follows.
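As a side illustration (a sketch with a made-up matrix, not an example from the paper), both quantities can be read off the singular values: rank counts the non-zero ones, while the nuclear norm sums them.

```python
import numpy as np

# Rank counts the non-zero singular values (a discrete quantity), while the
# nuclear norm sums them (a convex function).  The test matrix is ours.
A = np.outer([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])  # a rank-1 matrix by construction

sigma = np.linalg.svd(A, compute_uv=False)
rank = int(np.sum(sigma > 1e-10))       # discrete: counts non-zero singular values
nuclear_norm = float(np.sum(sigma))     # convex surrogate: sums them

print(rank, nuclear_norm)
```

For a rank-1 outer product $uv^T$, the only non-zero singular value is $\|u\|\,\|v\|$, so the nuclear norm here is $\sqrt{14}\cdot\sqrt{77}$.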
Definition 1 (Replacement Validity Problem)
Given a rank minimization problem together with its corresponding nuclear norm formulation, the replacement validity problem investigates whether the solution to the nuclear norm minimization problem is also a solution to the rank minimization one.
In this paper, we focus on the replacement validity problem. There is a related problem, called exact recovery problem, that is more widely studied by scholars. It is defined as follows.
Definition 2 (Exact Recovery Problem)
Given a nuclear norm minimization problem, the exact recovery problem investigates sufficient conditions under which solving the nuclear norm minimization problem exactly recovers the real structure of the data.
As an example of the exact recovery problem, Candès et al. proved that when the rank of the ground truth solution is sufficiently low and the missing entries are sufficiently few or the corruptions are sufficiently sparse, solving the nuclear norm minimization problems of matrix completion or robust PCA can exactly recover the ground truth low rank solution with an overwhelming probability. As another example, Liu et al. [10, 16] proved that when the rank of the ground truth solution is sufficiently low and the percentage of corruption does not exceed a threshold, solving the nuclear norm minimization formulation of low rank representation (LRR) [10, 11] can exactly recover the ground truth subspaces of the data.
We want to highlight the difference between our replacement validity problem and the exact recovery problem that scholars have considered before. The replacement validity problem is to compare the solutions between two optimization problems, while the exact recovery problem is to study whether solving a nuclear norm minimization problem can exactly recover a ground truth low rank matrix. As a result, in all the existing exact recovery problems, the scholars have to assume that the rank of the ground truth solution is sufficiently low. In contrast, the replacement validity problem does not rely on this assumption: even if the ground truth low rank solution cannot be recovered, we can still investigate whether the solution to a nuclear norm minimization problem is also the solution to the corresponding rank minimization problem.
For the replacement validity problem, it is easy to believe that the replacement of rank with the nuclear norm will break down for complex rank minimization problems. Meanwhile, for exact recovery problems, the existing analysis all focuses on relatively simple rank minimization problems, such as matrix completion, robust PCA, and LRR [10, 11], and has achieved affirmative results under some conditions. So it is also easy to believe that for simple rank minimization problems the replacement of rank with the nuclear norm will work. This paper aims at breaking such an illusion. We have to point out that the replacement validity problem cannot be studied by numerical experiments, for two reasons. First, rank is sensitive to numerical errors: without prior knowledge, one may not correctly determine the rank of a given matrix, even if there is a clear drop in its singular values. Second, it is hard to verify whether a given solution to a nuclear norm minimization problem is a global minimizer of a rank minimization problem, whose objective function is discrete and non-convex. So we must study the replacement validity problem by purely theoretical analysis. We analyze a simple rank minimization problem, noiseless latent LRR (LatLRR), to show that solutions to a nuclear norm minimization problem may not be solutions of the corresponding rank minimization problem.
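The sensitivity of rank to numerical errors can be seen in a few lines (an illustrative sketch with synthetic data; the matrix sizes and noise level are our own choices):

```python
import numpy as np

# A rank-1 matrix plus a tiny perturbation is numerically full rank, so the
# rank of a computed solution cannot be read off reliably.
rng = np.random.default_rng(0)
A = np.outer(rng.standard_normal(50), rng.standard_normal(50))  # exact rank 1
noisy = A + 1e-9 * rng.standard_normal((50, 50))                # tiny noise

rank_clean = np.linalg.matrix_rank(A)
rank_noisy = np.linalg.matrix_rank(noisy)

print(rank_clean, rank_noisy)   # the perturbed matrix no longer looks rank 1
```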
The contributions of this paper include:
We use a simple rank minimization problem, noiseless LatLRR, to prove that solutions to a nuclear norm minimization problem may not be solutions of the corresponding rank minimization problem, even for very simple rank minimization problems.
2 Latent Low Rank Representation
In this section, we first explain the notations used in this paper and then introduce latent low rank representation, whose closed form solutions we will analyze.
2.1 Summary of Main Notations
A large number of matrix related symbols will be used in this paper. Capital letters are used to represent matrices. In particular, $I$ denotes the identity matrix and $0$ is the all-zero matrix. The entry at the $i$th row and the $j$th column of a matrix $M$ is denoted by $[M]_{ij}$. The nuclear norm, the sum of all the singular values of a matrix, is denoted by $\|\cdot\|_*$. The operator norm, the maximum singular value, is denoted by $\|\cdot\|_2$. $\mathrm{Tr}(M)$ represents the sum of the diagonal entries of $M$ and $M^\dagger$ is the Moore-Penrose pseudo-inverse of $M$. For simplicity, we use the same letter to represent the subspace spanned by the columns of a matrix. The dimension of a space is represented by $\dim(\cdot)$. The orthogonal complement of a subspace $M$ is denoted by $M^\perp$. $\mathrm{Range}(M)$ indicates the linear space spanned by all the columns of the matrix $M$, while $\mathrm{Null}(M)$ represents the null space of $M$. They are closely related: $\mathrm{Null}(M) = (\mathrm{Range}(M^T))^\perp$. Finally, we always use $X = U_X \Sigma_X V_X^T$ to represent the skinny SVD of the data matrix $X$: the numbers of columns in $U_X$ and $V_X$ are both $\mathrm{rank}(X)$, and $\Sigma_X$ consists of all the non-zero singular values of $X$, making $\Sigma_X$ invertible.
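The skinny SVD convention above can be sketched in code (the symbol names $U_X$, $\Sigma_X$, $V_X$ follow the text; the test matrix is made up):

```python
import numpy as np

# Skinny SVD: keep only the rank(X) leading singular triplets, so that
# Sigma_X is square and invertible and X = U_X Sigma_X V_X^T exactly.
rng = np.random.default_rng(1)
X = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 8))  # rank 3

U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = int(np.sum(s > 1e-10))                  # numerical rank of X
U_X, Sigma_X, V_X = U[:, :r], np.diag(s[:r]), Vt[:r, :].T

assert np.allclose(U_X @ Sigma_X @ V_X.T, X)          # exact reconstruction
# The Moore-Penrose pseudo-inverse expressed through the skinny SVD:
X_pinv = V_X @ np.linalg.inv(Sigma_X) @ U_X.T
assert np.allclose(X_pinv, np.linalg.pinv(X))
print(r)
```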
2.2 Low Rank Subspace Clustering Models
Low rankness based subspace clustering stems from low rank representation (LRR) [10, 11]. An interested reader may refer to the excellent review on subspace clustering approaches provided by Vidal. The mathematical model of the original LRR is
$$\min_Z \ \mathrm{rank}(Z), \quad \mathrm{s.t.}\ X = XZ, \qquad (1)$$
where $X$ is the data matrix we observe. LRR extends sparse subspace clustering by generalizing the sparsity from 1D to 2D. When there is noise or corruption, a noise term can be added to the model [10, 11]. Since this paper considers closed form solutions for noiseless models, to save space we omit the noisy model. The corresponding nuclear norm minimization formulation of (1) is
$$\min_Z \ \|Z\|_*, \quad \mathrm{s.t.}\ X = XZ, \qquad (2)$$
which we call the heuristic LRR. LRR has been very successful in clustering data into subspaces robustly. It is proven that when the underlying subspaces are independent, the optimal representation matrix is block diagonal, each block corresponding to a subspace [10, 11].
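The block-diagonal property can be illustrated numerically (a sketch with synthetic subspaces; it uses the known fact that the minimizer of the heuristic LRR is the shape interaction matrix $V_X V_X^T$):

```python
import numpy as np

# Columns of X are drawn from two independent 2-dimensional subspaces of R^10.
# The optimal representation Z = V_X V_X^T then has vanishing off-diagonal
# blocks, i.e., no affinity across the two subspaces.
rng = np.random.default_rng(2)
B1, B2 = rng.standard_normal((10, 2)), rng.standard_normal((10, 2))  # bases
X = np.hstack([B1 @ rng.standard_normal((2, 5)),   # 5 samples from subspace 1
               B2 @ rng.standard_normal((2, 5))])  # 5 samples from subspace 2

U, s, Vt = np.linalg.svd(X, full_matrices=False)
V_X = Vt[:4, :].T                 # rank(X) = 4 generically (2 + 2)
Z = V_X @ V_X.T                   # optimal representation matrix

off_block = np.abs(Z[:5, 5:]).max()   # affinity across the two subspaces
print(off_block)                      # numerically zero
```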
LRR works well only when the samples are sufficient. This condition may not be fulfilled in practice, particularly when the dimensionality of the samples is large. To resolve this issue, Liu et al. proposed latent low rank representation (LatLRR). Another model to overcome this drawback of LRR is fixed rank representation. LatLRR assumes that the observed samples can be expressed as linear combinations of themselves together with the unobserved data:
$$\min_Z \ \mathrm{rank}(Z), \quad \mathrm{s.t.}\ X = [X, X_H] Z, \qquad (3)$$
where $X_H$ is the unobserved samples for supplementing the shortage of the observed ones. Since $X_H$ is unobserved and problem (3) cannot be solved directly, by some deduction and mathematical approximation, LatLRR is modeled as follows:
$$\min_{Z, L} \ \mathrm{rank}(Z) + \mathrm{rank}(L), \quad \mathrm{s.t.}\ X = XZ + LX. \qquad (4)$$
Both the optimal $Z^*$ and $L^*$ can be utilized for learning tasks: $Z^*$ can be used for subspace clustering, while $L^*$ is for feature extraction, thus providing us with the possibility of integrating the two tasks into a unified framework. We call (4) the original LatLRR. Similarly, it has a nuclear norm minimization formulation
$$\min_{Z, L} \ \|Z\|_* + \|L\|_*, \quad \mathrm{s.t.}\ X = XZ + LX. \qquad (5)$$
3 Analysis on LatLRR
This section provides surprising results: both the original and heuristic LatLRR have closed form solutions! We are able to write down all their solutions, as presented in the following theorems.
The complete solutions to the original LatLRR problem (4) are as follows:
$$Z^* = V_X W V_X^T + S_1 W V_X^T, \quad L^* = U_X \Sigma_X (I - W) \Sigma_X^{-1} U_X^T + U_X \Sigma_X (I - W) S_2,$$
where $W$ is any idempotent matrix and $S_1$ and $S_2$ are any matrices satisfying: 1. $V_X^T S_1 = 0$ and $S_2 U_X = 0$; and 2. $\mathrm{rank}(S_1) \leq \mathrm{rank}(W)$ and $\mathrm{rank}(S_2) \leq \mathrm{rank}(I - W)$.
The complete solutions to the heuristic LatLRR problem (5) are as follows:
$$Z^* = V_X \hat{W} V_X^T, \quad L^* = U_X \Sigma_X (I - \hat{W}) \Sigma_X^{-1} U_X^T,$$
where $\hat{W}$ is any block diagonal matrix satisfying: 1. its blocks are compatible with $\Sigma_X$, i.e., if $[\Sigma_X]_{ii} \neq [\Sigma_X]_{jj}$ then $[\hat{W}]_{ij} = 0$; and 2. both $\hat{W}$ and $I - \hat{W}$ are positive semi-definite.
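A quick numerical sanity check of this solution family (a sketch under the assumption that the solutions take the form $Z^* = V_X \hat{W} V_X^T$, $L^* = U_X \Sigma_X (I - \hat{W}) \Sigma_X^{-1} U_X^T$; the data matrix is synthetic):

```python
import numpy as np

# Pick a diagonal W_hat with entries in [0, 1]: diagonal matrices are
# trivially block diagonal, and both W_hat and I - W_hat are PSD.  Then
# (Z, L) should be feasible and attain objective value rank(X).
rng = np.random.default_rng(3)
X = rng.standard_normal((7, 4)) @ rng.standard_normal((4, 9))   # rank 4

U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 4
U_X, S_X, V_X = U[:, :r], np.diag(s[:r]), Vt[:r, :].T
W = np.diag(rng.uniform(0.0, 1.0, size=r))

Z = V_X @ W @ V_X.T
L = U_X @ S_X @ (np.eye(r) - W) @ np.linalg.inv(S_X) @ U_X.T

assert np.allclose(X @ Z + L @ X, X)        # constraint X = XZ + LX holds
obj = np.linalg.svd(Z, compute_uv=False).sum() + \
      np.linalg.svd(L, compute_uv=False).sum()
print(obj)                                   # equals rank(X) = 4
```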
The above results show that for noiseless LatLRR, nuclear norm is not a valid replacement of rank. As a by-product, since the solution to the heuristic LatLRR is non-unique, the results of LatLRR reported in [17, 11] may be questionable.
We provide detailed proofs of the above theorems and proposition in the following section.
4.1 Proof of Theorem 3.1
We first provide the complete closed form solutions to the original LRR in a more general form:
$$\min_Z \ \mathrm{rank}(Z), \quad \mathrm{s.t.}\ X = AZ, \qquad (8)$$
where $\mathrm{Range}(X) \subseteq \mathrm{Range}(A)$ so that the constraint is feasible. We call (8) the generalized original LRR. Then we have the following proposition.
Suppose $Z^*$ is an optimal solution to problem (8). First, since $X = AZ^*$, we have $\mathrm{rank}(Z^*) \geq \mathrm{rank}(X)$. On the other hand, because (8) is feasible, there exists $\tilde{Z}$ such that $X = A\tilde{Z}$. Then $A^\dagger X$ is feasible: $A A^\dagger X = A A^\dagger A \tilde{Z} = A \tilde{Z} = X$, where we have utilized a property of the Moore-Penrose pseudo-inverse, $A A^\dagger A = A$. So we obtain $\mathrm{rank}(Z^*) \leq \mathrm{rank}(A^\dagger X) \leq \mathrm{rank}(X)$, hence $\mathrm{rank}(Z^*) = \mathrm{rank}(X)$.
Next, let $Z^* = \tilde{B}\tilde{C}$ be a full rank decomposition of the optimal $Z^*$, where both $\tilde{B}$ and $\tilde{C}^T$ have $\mathrm{rank}(X)$ columns. From $X = AZ^*$, we have $X = (A\tilde{B})\tilde{C}$, which is a full rank decomposition of $X$ as well. Comparing it with the skinny SVD $X = U_X(\Sigma_X V_X^T)$, and noting that the two factors of a full rank decomposition are unique up to an invertible matrix, we may simply choose $\tilde{C} = \Sigma_X V_X^T$. Thus $X = (A\tilde{B})\tilde{C}$ reduces to $U_X \Sigma_X V_X^T = A\tilde{B}\Sigma_X V_X^T$, i.e., $A\tilde{B} = U_X$, and we conclude that the complete choices of $\tilde{B}$ are given by $\tilde{B} = A^\dagger U_X + \tilde{S}$, where $\tilde{S}$ is any matrix such that $A\tilde{S} = 0$. Multiplying $\tilde{B}$ with $\tilde{C}$, we obtain that the entire solutions to problem (8) can be written as $Z^* = A^\dagger X + S V_X^T$, where $S$ is any matrix satisfying $AS = 0$.
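This solution family can be probed numerically (a sketch assuming the family $Z = A^\dagger X + S V_X^T$ with $AS = 0$; all matrices are random):

```python
import numpy as np

# Every member Z = A^+ X + S V_X^T with AS = 0 should be feasible for
# X = AZ and attain the minimal rank, rank(X).
rng = np.random.default_rng(4)
A = rng.standard_normal((5, 8))                     # wide, so Null(A) is non-trivial
X = A @ rng.standard_normal((8, 2)) @ rng.standard_normal((2, 7))   # rank 2

U, s, Vt = np.linalg.svd(X, full_matrices=False)
V_X = Vt[:2, :].T
A_pinv = np.linalg.pinv(A)
S = (np.eye(8) - A_pinv @ A) @ rng.standard_normal((8, 2))   # ensures AS = 0

Z = A_pinv @ X + S @ V_X.T
assert np.allclose(A @ Z, X)                        # feasibility
print(np.linalg.matrix_rank(Z), np.linalg.matrix_rank(X))
```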
However, (8) is different from (12) in two aspects. First, (8) requires the data matrix $X$ to be strictly expressed as linear combinations of the columns of $A$. Second, (8) does not impose an upper bound on the rank of $Z$. Rather, (8) solves for the $Z$ with the lowest rank. As a result, (8) has infinitely many solutions, as shown by Proposition 2, while (12) has a unique solution when the data fulfill some conditions. So the previous results do not apply to (8).
Similar to Proposition 2, we can obtain the complete closed form solutions to the following problem:
$$\min_L \ \mathrm{rank}(L), \quad \mathrm{s.t.}\ X = LA,$$
which will be used in the proof of Theorem 3.1.
Next, we provide the following propositions.
$\mathrm{rank}(X)$ is the minimum objective function value of the original LatLRR problem (4).
Since $\mathrm{rank}(Z) + \mathrm{rank}(L) \geq \mathrm{rank}(XZ) + \mathrm{rank}(LX) \geq \mathrm{rank}(XZ + LX) = \mathrm{rank}(X)$ for any feasible $(Z, L)$, $\mathrm{rank}(X)$ is a lower bound on the objective. On the other hand, if $V_X V_X^T$ and $0$ are adopted as $Z$ and $L$, respectively, the lower bound is achieved and the constraint is fulfilled as well. So we conclude that $\mathrm{rank}(X)$ is the minimum objective function value of the original LatLRR problem (4).
Suppose $(Z^*, L^*)$ is one of the solutions to problem (4). Then there must exist another solution $(\tilde{Z}, \tilde{L})$ such that $\tilde{Z} = V_X M$ for some matrix $M$.
According to the constraint of problem (4), we have $X = XZ^* + L^*X$, i.e., $U_X \Sigma_X V_X^T = U_X \Sigma_X V_X^T Z^* + L^* U_X \Sigma_X V_X^T$. Since $V_X V_X^T$ is the projection matrix onto $\mathrm{Range}(V_X)$, we have
On the other hand, given the optimal $L^*$, $Z^*$ is an optimal solution to
$$\min_Z \ \mathrm{rank}(Z), \quad \mathrm{s.t.}\ X - L^*X = XZ.$$
So by Proposition 2 we get
As a result,
where the last inequality holds since $(Z^*, L^*)$ is a feasible solution to problem (4) and $\mathrm{rank}(X)$ is the minimum objective according to Proposition 4. (19) shows that $\tilde{Z}$ is also optimal. So we may take $\tilde{Z} = V_X V_X^T Z^*$ and write it as $\tilde{Z} = V_X M$, where $M = V_X^T Z^*$.
Finally, combining with equation (16), we conclude that
For $W \in \mathbb{R}^{n \times n}$, if $W^2 \neq W$, then the following inequality holds: $\mathrm{rank}(W) + \mathrm{rank}(I - W) > n$.
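This reflects the known identity $\mathrm{rank}(W) + \mathrm{rank}(I - W) = n + \mathrm{rank}(W - W^2)$, which can be checked numerically (a quick sketch; the projector construction is ours):

```python
import numpy as np

# The rank sum equals n exactly for an idempotent W and exceeds n otherwise.
n = 6
rng = np.random.default_rng(6)

B = rng.standard_normal((n, 2))
W_idem = B @ np.linalg.pinv(B)     # orthogonal projector: W^2 = W, rank 2
sum_idem = np.linalg.matrix_rank(W_idem) + np.linalg.matrix_rank(np.eye(n) - W_idem)

W_gen = rng.standard_normal((n, n))  # a generic W is not idempotent
sum_gen = np.linalg.matrix_rank(W_gen) + np.linalg.matrix_rank(np.eye(n) - W_gen)

print(sum_idem, sum_gen)   # n for the projector, larger than n otherwise
```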
Based on the above lemma, the following proposition presents the sufficient and necessary condition on $W$.
First, we prove the sufficiency. According to the property of idempotent matrices, we have
By substituting into the objective function, the following equalities hold
So the candidate solution is optimal since it achieves the minimum objective function value of problem (4).
We are now ready to prove Theorem 3.1.
where and are the skinny SVDs of and , respectively, and and are matrices such that and . Since we have and , there exist full column rank matrices and satisfying and , respectively. The sizes of and are and , respectively. We can easily see that a matrix can be decomposed into , such that and is full column rank, if and only if and . Similarly, a matrix can be decomposed into , such that and is full column rank, if and only if and . By substituting , , , and into (32), we obtain the conclusion of Theorem 3.1.
4.2 Proof of Theorem 3.2
We first quote two results from the literature.
Assume that $X = AZ$ has feasible solution(s), i.e., $\mathrm{Range}(X) \subseteq \mathrm{Range}(A)$. Then
$$Z^* = A^\dagger X$$
is the unique minimizer to the generalized heuristic LRR problem:
$$\min_Z \ \|Z\|_*, \quad \mathrm{s.t.}\ X = AZ.$$
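Lemma 2 can be probed numerically (a sketch; the matrices are random, and the feasible perturbations sweep the null space of $A$):

```python
import numpy as np

# Among all feasible Z (i.e., AZ = X), the pseudo-inverse solution
# Z* = A^+ X should have the strictly smallest nuclear norm.
rng = np.random.default_rng(7)
A = rng.standard_normal((5, 8))                  # wide: AZ = X has many solutions
X = A @ rng.standard_normal((8, 6))

A_pinv = np.linalg.pinv(A)
Z_star = A_pinv @ X
nn = lambda M: np.linalg.svd(M, compute_uv=False).sum()

for _ in range(3):                               # perturb within the feasible set
    G = rng.standard_normal((8, 6))
    Z_other = Z_star + (np.eye(8) - A_pinv @ A) @ G   # still satisfies A Z = X
    assert np.allclose(A @ Z_other, X)
    assert nn(Z_star) < nn(Z_other)              # Z* wins every comparison
print("ok")
```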
For any four matrices $B$, $C$, $D$, and $F$ of compatible dimensions, we have the inequalities
where the second equality holds if and only if , , and .
Then we prove the following lemma.
For any square matrix $A$, we have $\|A\|_* \geq \mathrm{Tr}(A)$, where the equality holds if and only if $A$ is positive semi-definite.
We prove it by mathematical induction. When the size of the matrix is 1, the conclusion is clearly true. When the size is 2, we may simply write down the singular values of the matrix to prove the claim.
Now suppose that for any square matrix whose size does not exceed $n - 1$, the inequality holds. Then for any $n \times n$ matrix $A$, using Lemma 3, we get
where the second inequality holds due to the inductive assumption on the two diagonal submatrices. So we always have $\|A\|_* \geq \mathrm{Tr}(A)$.
It is easy to check that any positive semi-definite matrix $A$ satisfies $\|A\|_* = \mathrm{Tr}(A)$. On the other hand, following the above proof with appropriately chosen submatrices, we can easily see that the inequality holds strictly if $A$ is asymmetric. So if $\|A\|_* = \mathrm{Tr}(A)$, then $A$ must be symmetric. Then the singular values of $A$ are simply the absolute values of its eigenvalues. As $\mathrm{Tr}(A)$ equals the sum of all eigenvalues of $A$, $\|A\|_* = \mathrm{Tr}(A)$ holds only if all the eigenvalues of $A$ are non-negative.
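The trace inequality of Lemma 4 is easy to check numerically (an illustrative sketch with random matrices):

```python
import numpy as np

# For any square A, ||A||_* >= Tr(A), with equality exactly when A is PSD.
rng = np.random.default_rng(8)
nn = lambda M: np.linalg.svd(M, compute_uv=False).sum()

B = rng.standard_normal((5, 3))
A_psd = B @ B.T                         # PSD by construction
assert np.isclose(nn(A_psd), np.trace(A_psd))   # equality for PSD matrices

A_gen = rng.standard_normal((5, 5))     # generic: not PSD
print(nn(A_gen) - np.trace(A_gen))      # a strictly positive gap
```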
Using Lemma 2, we may consider the following unconstrained problem:
$$\min_Z \ \|Z\|_* + \|(X - XZ)X^\dagger\|_*, \qquad (37)$$
which is transformed from (5) by eliminating $L$ therein. Then we have the following result.
The unconstrained optimization problem (37) has the minimum objective function value $\mathrm{rank}(X)$.
Recall that the sub-differential of the nuclear norm of a matrix $M$ is
$$\partial \|M\|_* = \{U_M V_M^T + R \ : \ U_M^T R = 0, \ R V_M = 0, \ \|R\|_2 \leq 1\},$$
where $M = U_M \Sigma_M V_M^T$ is the skinny SVD of the matrix $M$. We prove that $Z = V_X V_X^T$ is an optimal solution to (37). It is sufficient to show that $0$ belongs to the sub-differential of the objective function of (37) at $Z = V_X V_X^T$.
Notice that is the skinny SVD of and is the skinny SVD of . So contains
Substituting $Z = V_X V_X^T$ into (37), we get the minimum objective function value $\mathrm{rank}(X)$.
Next, we have the form of the optimal solutions to (37) as follows.
The optimal solutions to the unconstrained optimization problem (37) can be written as .
That is to say
where . Hence the equality
Based on all the above lemmas and propositions, the following proposition gives the complete closed form solutions to the unconstrained optimization problem (37). In particular, the solution to problem (37) is non-unique.
First, we prove the sufficiency. Suppose $\hat{W}$ satisfies all the conditions in the theorem. Substituting it into the objective function, we have
where, based on Lemma 4, the second and the fifth equalities hold since $\Sigma_X \hat{W} \Sigma_X^{-1} = \hat{W}$ as $\hat{W}$ is block diagonal and both $\hat{W}$ and $I - \hat{W}$ are positive semi-definite.
Thus all the inequalities above must be equalities. From the last equality and Lemma 4, we directly get that $\hat{W}$ is positive semi-definite. By the first inequality and Lemma 4, we know that $\Sigma_X \hat{W} \Sigma_X^{-1}$ is symmetric, i.e., $\sigma_i [\hat{W}]_{ij} / \sigma_j = \sigma_j [\hat{W}]_{ij} / \sigma_i$,
where $\sigma_i$ represents the $i$th entry on the diagonal of $\Sigma_X$. Thus if $\sigma_i \neq \sigma_j$, then $[\hat{W}]_{ij} = 0$, i.e., $\hat{W}$ is block diagonal and its blocks are compatible with $\Sigma_X$. Notice that $\Sigma_X (I - \hat{W}) \Sigma_X^{-1} = I - \hat{W}$. By Lemma 4, we get that $I - \hat{W}$ is also positive semi-definite. Hence the proof is completed.
Now we can prove Theorem 3.2.
Let $(Z^*, L^*)$ satisfy all the conditions in the theorem. According to Proposition 8, since the row space of $Z^*$ belongs to that of $X$, it is obvious that $(Z^*, L^*)$ is feasible to problem (5). Now suppose that (5) has a better solution than $(Z^*, L^*)$, i.e.,
So we obtain a contradiction with the optimality established in Proposition 9, hence proving the theorem.
4.3 Proof of Proposition 1
Suppose the optimal $Z^*$ given in Theorem 3.2 could be written in the form given in Theorem 3.1, i.e., $V_X \hat{W} V_X^T = V_X W V_X^T + S_1 W V_X^T$, where $W$ is idempotent and $S_1$ satisfies $V_X^T S_1 = 0$. By multiplying both sides with $V_X^T$ and $V_X$ on the left and right, respectively, we get $\hat{W} = W$. As a result, $\hat{W}$ must be idempotent: $\hat{W}^2 = W^2 = W = \hat{W}$.