1 Introduction
The low-rank matrix recovery problem is heavily studied and has numerous applications in collaborative filtering, quantum state tomography, clustering, community detection, metric learning and multi-task learning [21, 12, 9, 27].
We consider the “matrix sensing” problem of recovering a low-rank (or approximately low-rank) p.s.d. matrix $X^\star \in \mathbb{R}^{n \times n}$,^1 given a linear measurement operator $\mathcal{A} : \mathbb{R}^{n \times n} \to \mathbb{R}^m$ and noisy measurements $y = \mathcal{A}(X^\star) + w$, where $w$ is an i.i.d. noise vector. An estimator for $X^\star$ is given by the rank-constrained, non-convex problem
$$\min_{X \succeq 0,\ \operatorname{rank}(X) \le r}\ \|\mathcal{A}(X) - y\|_2^2. \qquad (1)$$
^1 We study the case where $X^\star$ is PSD. We believe the techniques developed here can be used to extend the results to the general case.
This matrix sensing problem has received considerable attention recently [30, 29, 26]. This and other rank-constrained problems are common in machine learning and related fields, and have been used for the applications discussed above. A typical theoretical approach to low-rank problems, including (1), is to relax the low-rank constraint to a convex constraint, such as a bound on the trace norm of $X$. Indeed, for matrix sensing, Recht et al. [20] showed that if the measurements are noiseless and the measurement operator satisfies a restricted isometry property, then a low-rank $X^\star$ can be recovered as the unique solution to a convex relaxation of (1). Subsequent work established similar guarantees also for the noisy and approximately low-rank cases [14, 6]. However, convex relaxations of the rank are not the common approach employed in practice. In this and other low-rank problems, the method of choice is typically unconstrained local optimization (via e.g. gradient descent, SGD or alternating minimization) of the factorized parametrization
$$\min_{U \in \mathbb{R}^{n \times r}}\ f(U) = \|\mathcal{A}(UU^\top) - y\|_2^2, \qquad (2)$$
where the rank constraint is enforced by limiting the dimensionality of $U \in \mathbb{R}^{n \times r}$. Problem (2) is a non-convex optimization problem that could have many bad local minima (as we show in Section 5), as well as saddle points. Nevertheless, local optimization seems to work very well in practice. Working on (2) is much cheaper computationally and allows scaling to large-sized problems: the number of optimization variables is only $nr$ rather than $n^2$, and the updates are usually very cheap, especially compared to typical methods for solving the SDP resulting from the convex relaxation. There is therefore a significant disconnect between the theoretically studied and analyzed methods (based on convex relaxations) and the methods actually used in practice.
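As a concrete (and entirely illustrative) sketch of this factorized approach, the following minimal example runs plain gradient descent on $f(U) = \|\mathcal{A}(UU^\top) - y\|_2^2$ from a random start. The dimensions, step size and iteration count are our own choices, not from this paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 12, 2, 600

# Ground-truth rank-r PSD matrix X* = U* U*^T.
U_star = rng.standard_normal((n, r))
X_star = U_star @ U_star.T

# Symmetrized Gaussian measurement matrices A_i; noiseless y_i = <A_i, X*>.
A = rng.standard_normal((m, n, n))
A = (A + A.transpose(0, 2, 1)) / 2
y = np.einsum('kij,ij->k', A, X_star)

def f(U):
    """f(U) = (1/m) * sum_i (<A_i, U U^T> - y_i)^2 (mean-normalized for scaling)."""
    resid = np.einsum('kij,ij->k', A, U @ U.T) - y
    return np.mean(resid ** 2)

def grad_f(U):
    # df/dU = (4/m) * sum_i (<A_i, UU^T> - y_i) A_i U   (each A_i is symmetric here).
    resid = np.einsum('kij,ij->k', A, U @ U.T) - y
    return (4.0 / m) * np.einsum('k,kij->ij', resid, A) @ U

U = rng.standard_normal((n, r))   # plain random initialization, no SVD step
f0 = f(U)
eta = 2e-3                        # illustrative step size
for _ in range(3000):
    U -= eta * grad_f(U)

rel_err = np.linalg.norm(U @ U.T - X_star) / np.linalg.norm(X_star)
```

On well-conditioned random instances like this one, the random start reliably reaches the global optimum, consistent with the no-spurious-local-minima phenomenon this paper studies.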
Recent attempts at bridging this gap showed that some form of global “initialization”, typically relying on a singular value decomposition, yields a solution that is already close enough to $X^\star$ that local optimization from this initializer reaches the global optimum (or a good enough solution). Jain et al. [15] and Keshavan [17] proved convergence of the alternating minimization algorithm provided the starting point is close to the optimum, while Zheng and Lafferty [30], Zhao et al. [29], Tu et al. [26], Chen and Wainwright [8], and Bhojanapalli et al. [2] considered gradient descent methods on the factor space and proved local convergence. But all these studies rely on global initialization followed by local convergence; they do not tackle the question of the existence of spurious local minima, nor deal with optimization starting from random initialization. There is therefore still a disconnect between this theory and the empirical practice of starting from random initialization and relying only on the local search to find the global optimum.

In this paper we show that, under a suitable incoherence condition on the measurement operator (defined in Section 2), with noiseless measurements and with $\operatorname{rank}(X^\star) \le r$, the problem (2) has no spurious local minima (i.e. all local minima are global and satisfy $UU^\top = X^\star$). Furthermore, under the same conditions, all saddle points have a direction with significant negative curvature, and so using a recent result of Ge et al. [10] we can establish that stochastic gradient descent from random initialization converges to $X^\star$ in a polynomial number of iterations. We extend the results also to the noisy and approximately low-rank settings, where we can guarantee that every local minimum is close to a global minimum. The incoherence condition we require is weaker than conditions previously used to establish recovery through local search, and so our results also ensure recovery in polynomial time under milder conditions than what was previously known. In particular, with i.i.d. Gaussian measurements, we ensure no spurious local minima and recovery through local search with the optimal number of measurements.
Related Work
Our work is heavily inspired by Bandeira et al. [1], who recently showed similar behavior for the problem of community detection; this corresponds to a specific rank-constrained problem with a linear objective, elliptope constraints and a binary solution. Here we take their ideas, extend them and apply them to matrix sensing with general rank-$r$ matrices. In the past several months, similar types of results were also obtained for other non-convex problems (where the source of non-convexity is not a rank constraint), specifically complete dictionary learning [24] and phase recovery [25]. A related recent result of a somewhat different nature pertains to linear optimization on the elliptope, showing that local minima of the rank-constrained problem approximate well the global optimum of the rank-unconstrained convex problem, even though they might not be its global minima (in fact, the approximation guarantee for the actual global optimum is better) [18].
Another non-convex low-rank problem long known not to possess spurious local minima is the PCA problem, which can also be phrased as matrix approximation with full observations, namely $\min_U \|X - UU^\top\|_F^2$ (e.g. [23]). Indeed, local search methods such as the power method are routinely used for this problem. Recently, local optimization methods for the PCA problem working more directly on the factorized formulation have also been studied, including SGD [22] and Grassmannian optimization [28]. These results are somewhat orthogonal to ours, as they study a setting in which it is well known there are never any spurious local minima, and the challenge is obtaining satisfying convergence rates.
The seminal work of Burer and Monteiro [3] proposed low-rank factorized optimization for SDPs, and showed that for a sufficiently high rank (on the order of the number of constraints), an Augmented Lagrangian method converges asymptotically to the optimum. It was also shown that (under mild conditions) any rank-deficient local minimum is a global minimum [4, 16], providing a post-hoc verifiable sufficient condition for global optimality. However, this does not establish any a priori condition, based on problem structure, implying the lack of spurious local minima.
While preparing this manuscript, we also became aware of parallel work [11] studying the same question for the related but different problem of matrix completion. For this problem they obtain a similar guarantee, though with suboptimal dependence on the incoherence parameters, and so suboptimal sample complexity, and they require adding a specific non-standard regularizer to the objective; this is not needed for our matrix sensing results.
We believe our work, together with the parallel work of [11], is the first to establish the lack of spurious local minima and the global convergence of local search from random initialization for a non-trivial rank-constrained problem (beyond PCA with full observations) with general rank $r$.
Notation. For matrices $X, Y$, their inner product is $\langle X, Y \rangle = \sum_{ij} X_{ij} Y_{ij}$. We use $\|X\|_F$, $\|X\|_2$ and $\|X\|_*$ for the Frobenius, spectral and nuclear norms of a matrix, respectively. Given a matrix $X$, we use $\sigma_i(X)$ to denote its singular values in decreasing order. $X_r$ denotes the rank-$r$ approximation of $X$, as obtained via its truncated singular value decomposition. We use plain capitals such as $Q$ and $R$ to denote orthonormal matrices.
2 Formulation and Assumptions
We write the linear measurement operator as $\mathcal{A}(X) = (\langle A_1, X \rangle, \ldots, \langle A_m, X \rangle)$, where each $A_i \in \mathbb{R}^{n \times n}$, yielding measurements $y_i = \langle A_i, X^\star \rangle + w_i$. We assume $w$ is i.i.d. Gaussian noise. We are generally interested in the high-dimensional regime where the number of measurements $m$ is much smaller than the dimension $n^2$.
Even if we know that $\operatorname{rank}(X^\star) \le r$, having many measurements might not be sufficient for recovery if they are not “spread out” enough. E.g., if all measurements only involve the first few rows and columns, we would never have any information about the bottom-right block. A sufficient condition for identifiability of a low-rank $X^\star$ from linear measurements, due to Recht et al. [20], is based on the restricted isometry property defined below.
Definition 2.1 (Restricted Isometry Property).
A measurement operator $\mathcal{A}$ (with measurement matrices $A_1, \ldots, A_m$) satisfies $(r, \delta_r)$-RIP if for any $n \times n$ matrix $X$ with $\operatorname{rank}(X) \le r$,
$$(1 - \delta_r)\,\|X\|_F^2 \ \le\ \|\mathcal{A}(X)\|_2^2 \ \le\ (1 + \delta_r)\,\|X\|_F^2. \qquad (3)$$
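As an informal numerical illustration of this definition (not a substitute for the RIP results cited here), one can estimate the empirical isometry defect of a Gaussian operator on a sample of random rank-$r$ matrices. All sizes and the $\mathcal{N}(0, 1/m)$ normalization below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, m = 30, 2, 800

# Gaussian measurement operator with entries N(0, 1/m), a common normalization.
A = rng.standard_normal((m, n, n)) / np.sqrt(m)

ratios = []
for _ in range(50):
    B = rng.standard_normal((n, r))
    X = B @ B.T                          # random rank-r PSD test matrix
    AX = np.einsum('kij,ij->k', A, X)    # A(X) as a length-m vector
    ratios.append(np.sum(AX ** 2) / np.linalg.norm(X, 'fro') ** 2)

# Worst observed deviation of ||A(X)||^2 / ||X||_F^2 from 1 on this sample:
delta_hat = float(np.max(np.abs(np.array(ratios) - 1.0)))
```

For a fixed matrix, $\|\mathcal{A}(X)\|_2^2 / \|X\|_F^2$ concentrates around $1$ at rate roughly $\sqrt{2/m}$, so the observed defect should be small, in line with the $m = \Omega(nr/\delta_r^2)$ scaling discussed next.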
In particular, $X^\star$ of rank $r$ is identifiable if $\mathcal{A}$ satisfies $(2r, \delta_{2r})$-RIP with $\delta_{2r} < 1$ [see 20, Theorem 3.2]. One situation in which RIP is obtained is for random measurement operators. For example, measurement matrices with i.i.d. $\mathcal{N}(0, 1/m)$ entries satisfy $(r, \delta_r)$-RIP when $m = \Omega(nr/\delta_r^2)$ [see 6, Theorem 2.3]. This implies identifiability based on i.i.d. Gaussian measurements with $m = \Omega(nr)$ measurements (coincidentally, the number of degrees of freedom in $X^\star$, so this is optimal up to a constant factor).

3 Main Results
We are now ready to present our main results about local minima of the matrix sensing problem (2). We first present the results for noisy sensing of exactly low-rank matrices, and then generalize them to approximately low-rank matrices.

We begin with our result characterizing the local minima of $f(U)$ for low-rank $X^\star$. Recall that the measurements are $y_i = \langle A_i, X^\star \rangle + w_i$, where the entries of $w$ are i.i.d. Gaussian $\mathcal{N}(0, \sigma_w^2)$.
Theorem 3.1.
Consider the optimization problem (2), where $w$ is i.i.d. $\mathcal{N}(0, \sigma_w^2)$, $\mathcal{A}$ satisfies $(4r, \delta_{4r})$-RIP with $\delta_{4r} \le 1/10$, and $\operatorname{rank}(X^\star) = r$. Then, with high probability (over the noise), every local minimum $U$ of $f(U)$ satisfies
$$\|UU^\top - X^\star\|_F \ = \ O\!\left(\sigma_w \sqrt{\frac{n \log n}{m}}\right).$$
In particular, in the noiseless case ($\sigma_w = 0$) we have $\|UU^\top - X^\star\|_F = 0$, and so $UU^\top = X^\star$ and every local minimum is global. In the noiseless case we can also relax the RIP requirement to $\delta_{4r} \le 1/5$ (see Theorem 4.1 in Section 4). In the noisy case we cannot expect to ensure we always reach an exact global minimum, since the noise might cause tiny fluctuations very close to the global minima, possibly creating multiple very close local minima. But we show that all local minima are indeed very close to some factorization of the true signal, and hence to a global optimum, and this “radius” of local minima decreases as we have more observations.
The proof of the Theorem for the noiseless case is presented in Section 4. The proof for the general setting follows along the same lines and can be found in the Appendix.
So far we have discussed how all local minima are global, or at least very close to a global minimum. Using a recent result by Ge et al. [10] on the convergence of SGD for non-convex functions, we can further obtain a polynomial bound on the number of SGD iterations required to reach a global minimum. The main condition that needs to be established in order to ensure this is that all saddle points of (2) satisfy the “strict saddle” condition, i.e. have a direction with significant negative curvature:
Theorem 3.2 (Strict saddle).
Consider the optimization problem (2) in the noiseless case, where $w = 0$, $\mathcal{A}$ satisfies $(4r, \delta_{4r})$-RIP with a sufficiently small constant $\delta_{4r}$, and $\operatorname{rank}(X^\star) = r$. Let $U$ be a first order critical point of $f(U)$ with $UU^\top \neq X^\star$. Then the smallest eigenvalue of the Hessian satisfies $\lambda_{\min}\big(\nabla^2 f(U)\big) \le -c\,\sigma_r(X^\star)$ for a universal constant $c > 0$.
Now consider the stochastic gradient descent updates,
$$U_{t+1} = \Pi\big(U_t - \eta\,(\nabla f(U_t) + \theta_t)\big), \qquad (4)$$
where $\theta_t$ is noise uniformly distributed on the unit sphere and $\Pi$ is a projection onto a bounded feasible set. Using Theorem 3.2 and the result of Ge et al. [10] we can establish:

Theorem 3.3 (Convergence from random initialization).
Consider the optimization problem (2) under the same noiseless conditions as in Theorem 3.2, with the projection set chosen appropriately based on the norm of some global optimum of (2). For any $\epsilon > 0$, after a number of iterations of (4) that is polynomial in $1/\epsilon$ and the problem parameters, with an appropriate step size $\eta$, starting from a random point uniformly distributed on the projection set, with probability at least $1 - \epsilon$ we reach an iterate $U$ satisfying $\|UU^\top - X^\star\|_F \le \epsilon$.
The above result guarantees convergence of noisy gradient descent to a global optimum. Alternatively, second order methods such as cubic regularization (Nesterov and Polyak [19]) and trust region methods (Cartis et al. [7]), which have guarantees based on the strict saddle property, can also be used here.
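A minimal sketch of the projected, perturbed update (4) follows, under our own illustrative choices of projection set (a Frobenius-norm ball), step size and horizon. The toy strongly convex objective $g(U) = \|U - M\|_F^2$ stands in for $f$ just to show the iterates settling into a small noise ball around the minimizer:

```python
import numpy as np

rng = np.random.default_rng(2)

def sphere_noise(shape, rng):
    """A direction drawn uniformly from the unit sphere (in Frobenius norm)."""
    v = rng.standard_normal(shape)
    return v / np.linalg.norm(v)

def project_ball(U, radius):
    """Projection onto the Frobenius-norm ball of the given radius."""
    nrm = np.linalg.norm(U)
    return U if nrm <= radius else U * (radius / nrm)

def noisy_gd(grad_f, U0, eta, radius, T, rng):
    # Update (4): gradient step plus uniform spherical noise, then projection.
    U = project_ball(U0, radius)
    for _ in range(T):
        U = project_ball(U - eta * (grad_f(U) + sphere_noise(U.shape, rng)), radius)
    return U

# Toy usage: g(U) = ||U - M||_F^2, gradient 2(U - M); M is a hypothetical target.
M = np.ones((4, 2))
U_out = noisy_gd(lambda U: 2 * (U - M), np.zeros((4, 2)), 0.05, 10.0, 2000, rng)
err = np.linalg.norm(U_out - M)
```

The injected noise keeps the iterates from stalling at saddle points (the role it plays in the analysis of Ge et al. [10]), at the price of a persistent noise floor of order $\eta$ around the optimum.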
RIP Requirement: Our results require $(4r, \delta_{4r})$-RIP with $\delta_{4r} \le 1/10$ for the noisy case and $\delta_{4r} \le 1/5$ for the noiseless case. Requiring $(2r, \delta_{2r})$-RIP with $\delta_{2r} < 1$ is sufficient to ensure uniqueness of the global optimum of (1), and thus recovery in the noiseless setting [20], but all known efficient recovery methods require stricter conditions. The best guarantees we are aware of require $(5r, 1/10)$-RIP [20] or $(4r, \sqrt{2}-1)$-RIP [6] using a convex relaxation. Our requirement is not directly comparable to the latter, as we require RIP on a smaller set, but with a lower (stricter) $\delta$. Alternatively, a stricter RIP condition is required for global initialization followed by non-convex optimization [26]; our requirement is strictly better. In terms of RIP requirements for non-convex methods, the best we are aware of require an RIP constant that scales with the condition number of $X^\star$ [15, 29, 30]; this is a much stronger condition than ours, and it yields a suboptimal required number of spherical Gaussian measurements. So, compared to prior work our requirement is very mild: it ensures efficient recovery even in a regime not previously covered by any guarantee on efficient recovery, and requires the optimal number of spherical Gaussian measurements (up to a constant factor), $m = O(nr)$.
Extension to Approximate Low Rank
We can also obtain similar results that deteriorate gracefully if $X^\star$ is not exactly low rank, but is close to being low rank (see proof in the Appendix):
Theorem 3.4.
Consider the optimization problem (2), where $y = \mathcal{A}(X^\star) + w$ and $\mathcal{A}$ satisfies $(4r, \delta_{4r})$-RIP with $\delta_{4r}$ a sufficiently small constant. Then, for any local minimum $U$ of $f(U)$:
$$\|UU^\top - X^\star_r\|_F \ = \ O\!\left(\|X^\star - X^\star_r\|_F + \frac{1}{\sqrt{m}}\,\|X^\star - X^\star_r\|_* + \sigma_w \sqrt{\frac{n \log n}{m}}\right),$$
where $X^\star_r$ is the best rank-$r$ approximation of $X^\star$.
This theorem guarantees that any local optimum of (2) is close to $X^\star_r$, up to an error depending on the residual $X^\star - X^\star_r$. For the low-rank noiseless case we have $X^\star = X^\star_r$ and the right-hand side vanishes. When $X^\star$ is not exactly low rank, the best recovery error we can hope for is $\|X^\star - X^\star_r\|_F$, since $UU^\top$ has rank at most $r$. The right-hand side of Theorem 3.4 also contains a nuclear norm term, which might be larger, but it gets scaled down with the number of measurements.
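The best rank-$r$ approximation $X^\star_r$ appearing in Theorem 3.4 is computed by truncating the singular value decomposition (the Eckart-Young theorem); a minimal sketch with illustrative sizes:

```python
import numpy as np

def rank_r_approx(X, r):
    """Best rank-r approximation in Frobenius norm, via the truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(3)
B = rng.standard_normal((8, 3))
X = B @ B.T + 1e-3 * np.eye(8)   # nearly rank-3 PSD matrix (illustrative)
X3 = rank_r_approx(X, 3)
tail = np.linalg.norm(X - X3, 'fro')   # plays the role of ||X* - X*_r||_F
```

When the spectrum has a small tail, as here, the recovery error floor in Theorem 3.4 is correspondingly small.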
Figure 1: Probability of recovery with random initialization vs. SVD initialization, for a random rank-$r$ p.s.d. matrix. We notice no significant difference between the two initialization methods, suggesting the absence of spurious local minima, as our results show. Both methods have a phase transition around the same number of measurements.

4 Proof for the Noiseless Case
In this section we present the proof characterizing the local minima of problem (2). For ease of exposition, we first present the results for the noiseless case ($w = 0$). The proof for the general case can be found in the Appendix.
Theorem 4.1.
Consider the optimization problem (2), where $w = 0$, $\mathcal{A}$ satisfies $(4r, \delta_{4r})$-RIP with $\delta_{4r} \le 1/5$, and $\operatorname{rank}(X^\star) = r$. Then, for any local minimum $U$ of $f(U)$: $UU^\top = X^\star$.
For the proof of this theorem we first discuss the implications of the first and second order optimality conditions and then show how to combine them to yield the result.
Our proof techniques are different from existing results characterizing local minima in dictionary learning [24], phase retrieval [25] and community detection [1, 18]. For most of these problems, the Hessian is positive semidefinite only at points close to the optimum. However, the Hessian of $f(U)$ can be positive semidefinite even at points far from the optimum. Hence we need new techniques to use the second order conditions.
The invariance of $f(U)$ under orthonormal transformations ($f(UR) = f(U)$ for orthonormal $R$) introduces additional challenges in comparing a given stationary point to a global optimum. To combine the results from the first and second order conditions without degrading the isometry constants, we have to find the best orthonormal matrix $R$ aligning a given stationary point $U$ with a global optimum $Z$, where $X^\star = ZZ^\top$.
Consider a local optimum $U$ that satisfies the first and second order optimality conditions of problem (2). In particular, $U$ satisfies $\nabla f(U) = 0$ and $v^\top \nabla^2 f(U)\, v \ge 0$ for any $v$. We will now see how these two conditions constrain the error $UU^\top - X^\star$.
First we present the following consequence of the RIP assumption [see 5, Lemma 2.1].
Lemma 4.1.
Given two rank-$r$ matrices $X$ and $Y$, and a $(2r, \delta_{2r})$-RIP measurement operator $\mathcal{A}$, the following holds:
$$\big|\langle \mathcal{A}(X), \mathcal{A}(Y) \rangle - \langle X, Y \rangle\big| \ \le \ \delta_{2r}\, \|X\|_F\, \|Y\|_F. \qquad (5)$$
4.1 First order optimality
First we will consider the first order condition, $\nabla f(U) = 0$. For any stationary point $U$ this implies
$$\sum_{i=1}^m \langle A_i,\ UU^\top - X^\star \rangle\, A_i U \ = \ 0. \qquad (6)$$
Now, using the isometry property of $\mathcal{A}$, we obtain the following result.
Lemma 4.2.
[First order condition] For any first order stationary point $U$ of $f(U)$, with $\mathcal{A}$ satisfying the RIP (3), the following holds:
$$\|QQ^\top (UU^\top - X^\star)\|_F \ \le \ \delta_{4r}\, \|UU^\top - X^\star\|_F,$$
where $Q$ is an orthonormal matrix that spans the column space of $U$.
This lemma states that any stationary point of $f(U)$ is close to a global optimum within the subspace spanned by the columns of $U$. Notice that the error along the orthogonal directions can still be large, making the distance between $UU^\top$ and $X^\star$ arbitrarily large.
Proof of Lemma 4.2.
Let $X^\star = ZZ^\top$, and let $Q$ be an orthonormal matrix spanning the column space of $U$. Consider any matrix of the form $QQ^\top M$. The first order optimality condition then implies,
The above equation together with Restricted Isometry Property (equation (5)) gives us the following inequality:
Note that for any matrix , . Furthermore, for any matrix , . Hence the above inequality implies the lemma statement. ∎
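The stationarity condition above is easy to probe numerically. The sketch below (our own construction, with illustrative sizes) checks the gradient formula for $f$ against a finite difference, and verifies that the gradient vanishes at every rotation of a global factorization:

```python
import numpy as np

rng = np.random.default_rng(4)
n, r, m = 10, 2, 200
A = rng.standard_normal((m, n, n))
A = (A + A.transpose(0, 2, 1)) / 2
Z = rng.standard_normal((n, r))                      # global factor, X* = Z Z^T
y = np.einsum('kij,ij->k', A, Z @ Z.T)               # noiseless measurements

def f(U):
    return np.sum((np.einsum('kij,ij->k', A, U @ U.T) - y) ** 2)

def grad_f(U):
    # grad f(U) = 4 * sum_i <A_i, UU^T - X*> A_i U  (A_i symmetric), cf. (6).
    resid = np.einsum('kij,ij->k', A, U @ U.T) - y
    return 4 * np.einsum('k,kij->ij', resid, A) @ U

# Finite-difference sanity check of the gradient at a random point.
U0, D = rng.standard_normal((n, r)), rng.standard_normal((n, r))
h = 1e-6
fd = (f(U0 + h * D) - f(U0 - h * D)) / (2 * h)
analytic = float(np.sum(grad_f(U0) * D))

# Every U = Z Q with Q orthonormal is a global optimum; its gradient vanishes.
Q, _ = np.linalg.qr(rng.standard_normal((r, r)))
g_opt = np.linalg.norm(grad_f(Z @ Q))
```

This also illustrates the rotational invariance discussed earlier: the global optima form a whole orbit $\{ZQ\}$, so any bound must be stated up to an orthonormal alignment.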
4.2 Second order optimality
We now consider the second order condition to show that the error along the orthogonal directions is indeed bounded as well. Let $\nabla^2 f(U) \in \mathbb{R}^{nr \times nr}$ be the Hessian of the objective function. Fortunately, for our result we only need to evaluate the Hessian along the single direction $U - ZR$, for some orthonormal matrix $R$. Here $\operatorname{vec}(\cdot)$ denotes writing a matrix in vector form.
Lemma 4.3.
[Hessian computation] Let $U$ be a first order critical point of $f(U)$. Then for any orthonormal matrix $R$ and $D = U - ZR$ (where $X^\star = ZZ^\top$),
$$\operatorname{vec}(D)^\top\, \nabla^2 f(U)\, \operatorname{vec}(D) \ = \ 2\,\|\mathcal{A}(DD^\top)\|_2^2 \ - \ 6\,\|\mathcal{A}(UU^\top - X^\star)\|_2^2.$$
Proof of Lemma 4.3.
For any matrix $D$, taking the directional second derivative of the function $f$ with respect to $D$ we get:
$$\operatorname{vec}(D)^\top \nabla^2 f(U) \operatorname{vec}(D) = 2\sum_{i=1}^m \Big[ \langle A_i, UD^\top + DU^\top \rangle^2 + 2\,\langle A_i, UU^\top - X^\star \rangle\,\langle A_i, DD^\top \rangle \Big].$$
Setting $D = U - ZR$ and using the first order optimality condition on $U$, we get
$$\langle A_i, UD^\top + DU^\top \rangle = \langle A_i, UU^\top - X^\star \rangle + \langle A_i, DD^\top \rangle$$
and
$$\sum_{i=1}^m \langle A_i, UU^\top - X^\star \rangle\, \langle A_i, DD^\top \rangle = -\sum_{i=1}^m \langle A_i, UU^\top - X^\star \rangle^2,$$
where the second identity follows from the first order optimality condition (6) applied to $D$. Finally, substituting these and taking the sum over $i$ from $1$ to $m$ gives the result. ∎
Hence from the second order optimality of $U$ we get,

Corollary 4.1.

[Second order optimality] Let $U$ be a local minimum of $f(U)$. For any orthonormal matrix $R$ (with $D = U - ZR$),
$$\|\mathcal{A}(UU^\top - X^\star)\|_2^2 \ \le \ \tfrac{1}{3}\,\|\mathcal{A}(DD^\top)\|_2^2. \qquad (7)$$
Further, for $\mathcal{A}$ satisfying RIP (equation (3)) we have,
$$\|UU^\top - X^\star\|_F^2 \ \le \ \frac{1 + \delta_{4r}}{3\,(1 - \delta_{4r})}\, \|DD^\top\|_F^2. \qquad (8)$$
The proof of this result follows simply by applying Lemma 4.3. The above corollary bounds the error $\|UU^\top - X^\star\|_F$ in terms of the distance $D = U - ZR$ in the factor space. To be able to compare the second order condition with the first order condition, we need a relation between $\|DD^\top\|_F$ and $\|UU^\top - X^\star\|_F$. Towards this we show the following result.
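The directional second derivative computed in the proof of Lemma 4.3 can be checked numerically. The expansion in the comment below is our own derivation of the quadratic form of $f(U + tD)$ at $t = 0$, compared against a central finite difference (all sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n, r, m = 8, 2, 100
A = rng.standard_normal((m, n, n))
A = (A + A.transpose(0, 2, 1)) / 2
B = rng.standard_normal((n, r))
y = np.einsum('kij,ij->k', A, B @ B.T)   # noiseless measurements of X* = B B^T

def f(U):
    return np.sum((np.einsum('kij,ij->k', A, U @ U.T) - y) ** 2)

def hess_quadform(U, D):
    # d^2/dt^2 f(U + tD) at t = 0, from expanding <A_i, (U+tD)(U+tD)^T> in t:
    #   2 * sum_i [ <A_i, UD^T + DU^T>^2 + 2 (<A_i, UU^T> - y_i) <A_i, DD^T> ]
    resid = np.einsum('kij,ij->k', A, U @ U.T) - y
    cross = np.einsum('kij,ij->k', A, U @ D.T + D @ U.T)
    quad = np.einsum('kij,ij->k', A, D @ D.T)
    return float(2 * np.sum(cross ** 2 + 2 * resid * quad))

U = rng.standard_normal((n, r))
D = rng.standard_normal((n, r))
h = 1e-4
fd = (f(U + h * D) - 2 * f(U) + f(U - h * D)) / h ** 2
```

At a critical point with $D = U - ZR$, the two algebraic identities in the proof collapse this expression to the simplified form stated in Lemma 4.3.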
Lemma 4.4.
Let $U$ and $Z$ be two $n \times r$ matrices, and let $Q$ be an orthonormal matrix that spans the column space of $U$. Then there exists an orthonormal matrix $R$ such that, for any first order stationary point $U$ of $f(U)$, the quantity $\|(U - ZR)(U - ZR)^\top\|_F$ is bounded in terms of $\|UU^\top - X^\star\|_F$ and $\|QQ^\top(UU^\top - X^\star)\|_F$.
This lemma bounds the distance in the factor space ($\|DD^\top\|_F$) in terms of $\|UU^\top - X^\star\|_F$ and $\|QQ^\top(UU^\top - X^\star)\|_F$. Combining it with the second order optimality result (Corollary 4.1) shows that $\|UU^\top - X^\star\|_F$ is bounded by a constant factor times $\|QQ^\top(UU^\top - X^\star)\|_F$. The first order condition (Lemma 4.2) bounds this same quantity the other way, by a small multiple of $\|UU^\top - X^\star\|_F$; for small enough $\delta_{4r}$, the two bounds are compatible only if the error is zero. The proof of the above lemma is in Section E. Hence, from the above optimality conditions, we obtain the proof of Theorem 4.1.
5 Necessity of RIP
We showed that there are no spurious local minima only under a restricted isometry assumption. A natural question is whether this is necessary, or whether the problem (2) in fact never has any spurious local minima, similarly to the non-convex PCA problem $\min_U \|X - UU^\top\|_F^2$.
A good indication that this is not the case is that (2) is NP-hard, even in the noiseless case [20] (if we do not require RIP, each $A_i$ can be nonzero on a single entry, in which case (2) becomes a matrix completion problem, for which hardness has been shown even under fairly favorable conditions [13]). That is, we are unlikely to have a polynomial-time algorithm that succeeds for every linear measurement operator. Although this does not formally preclude the possibility that there are no spurious local minima and it merely takes a very long time to find a local minimum, this scenario seems somewhat unlikely.
To resolve the question, we present an explicit example of a measurement operator and a rank-1 signal (i.e. $r = 1$), with no noise ($w = 0$), for which (1), and hence also (2), has a non-global local minimum.
Example 1: Let the measurement operator and signal be as follows, and consider (1) with a rank-1 constraint. The global optimum attains objective value zero, but there is another rank-1 point that is a local minimum with strictly larger objective value.
The construction can be extended to any rank $r$ by simply adding terms to the objective and padding both the global and the local minimum with a diagonal beneath the leading block.

In Example 1, we had a rank-1 problem, with a rank-1 exact solution, and a rank-1 spurious local minimum. Another question we can ask is what happens if we allow a larger rank than the rank of the optimal solution. That is, if we have $X^\star$ with low rank, even rank 1, but consider (1) or (2) with a higher rank constraint $r$, could we still have non-global local minima? The answer is again yes:
Example 2: Consider the problem (1) with a rank-2 constraint, for an explicit operator as follows. One can verify that there is a rank-1 global minimum with objective value zero, but also a local minimum with strictly positive objective value. The construction extends to an arbitrarily large rank constraint (taking the rank to be odd for simplicity) by extending the objective accordingly: we still have a rank-1 global minimum with a single nonzero entry, while there is a non-global local minimum of higher rank.

6 Conclusion
We established that under conditions similar to those required for convex relaxation recovery guarantees, the non-convex formulation of matrix sensing (2) does not exhibit any spurious local minima (or, in the noisy and approximate settings, at least not outside some small radius around a global minimum), and we obtained theoretical guarantees on the success of optimizing it using SGD from random initialization. This matches the methods frequently used in practice, and can explain their success. This guarantee is very different in nature from other recent work on non-convex optimization for low-rank problems, which relied heavily on initialization to get close to the global optimum, and on local search only for the final local convergence to the global optimum. We believe this is the first result, together with the parallel work of Ge et al. [11], on the global convergence of local search for common rank-constrained problems that are worst-case hard.
Our result suggests that SVD initialization is not necessary for global convergence, and that random initialization succeeds under similar conditions (in fact, our conditions are even weaker than in previous work that used SVD initialization). To investigate empirically whether SVD initialization is indeed helpful for ensuring global convergence, in Figure 1 we compare the recovery probability for random rank-$r$ matrices under random and SVD initialization; there is no significant difference between the two.
Beyond the implications for matrix sensing, we hope these types of results can be a first step and serve as a model for understanding local search in deep networks. Matrix factorization, as in (2), is a depth-two neural network with linear transfer: an extremely simple network, but already non-convex, and arguably the most complicated network we have a good theoretical understanding of. Deep networks are also hard to optimize in the worst case, but local search seems to do very well in practice. Our ultimate goal is to use the study of matrix recovery as a guide in understanding the conditions that enable efficient training of deep networks.
Acknowledgements
The authors would like to thank Afonso Bandeira for discussions, and Jason Lee and Tengyu Ma for sharing and discussing their work. This research was supported in part by an NSF RI/AF award and by Intel ICRI-CI.
References
 Bandeira et al. [2016] A. S. Bandeira, N. Boumal, and V. Voroninski. On the low-rank approach for semidefinite programs arising in synchronization and community detection. arXiv preprint arXiv:1602.04426, 2016.
 Bhojanapalli et al. [2015] S. Bhojanapalli, A. Kyrillidis, and S. Sanghavi. Dropping convexity for faster semidefinite optimization. arXiv preprint arXiv:1509.03917, 2015.
 Burer and Monteiro [2003] S. Burer and R. D. Monteiro. A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization. Mathematical Programming, 95(2):329–357, 2003.
 Burer and Monteiro [2005] S. Burer and R. D. Monteiro. Local minima and convergence in low-rank semidefinite programming. Mathematical Programming, 103(3):427–444, 2005.
 Candès [2008] E. J. Candès. The restricted isometry property and its implications for compressed sensing. Comptes Rendus Mathematique, 346(9):589–592, 2008.
 Candes and Plan [2011] E. J. Candes and Y. Plan. Tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements. Information Theory, IEEE Transactions on, 57(4):2342–2359, 2011.
 Cartis et al. [2012] C. Cartis, N. I. Gould, and P. L. Toint. Complexity bounds for secondorder optimality in unconstrained optimization. Journal of Complexity, 28(1):93–108, 2012.
 Chen and Wainwright [2015] Y. Chen and M. J. Wainwright. Fast lowrank estimation by projected gradient descent: General statistical and algorithmic guarantees. arXiv preprint arXiv:1509.03025, 2015.
 Flammia et al. [2012] S. Flammia, D. Gross, Y.K. Liu, and J. Eisert. Quantum tomography via compressed sensing: Error bounds, sample complexity and efficient estimators. New Journal of Physics, 14(9):095022, 2012.

 Ge et al. [2015] R. Ge, F. Huang, C. Jin, and Y. Yuan. Escaping from saddle points—online stochastic gradient for tensor decomposition. In Proceedings of The 28th Conference on Learning Theory, pages 797–842, 2015.
 Ge et al. [2016] R. Ge, J. Lee, and T. Ma. Matrix completion has no spurious local minimum. arXiv preprint arXiv:1605.07272, 2016.
 Gross et al. [2010] D. Gross, Y.K. Liu, S. T. Flammia, S. Becker, and J. Eisert. Quantum state tomography via compressed sensing. Physical review letters, 105(15):150401, 2010.
 Hardt et al. [2014] M. Hardt, R. Meka, P. Raghavendra, and B. Weitz. Computational limits for matrix completion. In Proceedings of The 27th Conference on Learning Theory, pages 703–725, 2014.
 Jain et al. [2010] P. Jain, R. Meka, and I. S. Dhillon. Guaranteed rank minimization via singular value projection. In Advances in Neural Information Processing Systems, pages 937–945, 2010.

 Jain et al. [2013] P. Jain, P. Netrapalli, and S. Sanghavi. Low-rank matrix completion using alternating minimization. In Proceedings of the 45th Annual ACM Symposium on Theory of Computing, pages 665–674. ACM, 2013.
 Journée et al. [2010] M. Journée, F. Bach, P.-A. Absil, and R. Sepulchre. Low-rank optimization on the cone of positive semidefinite matrices. SIAM Journal on Optimization, 20(5):2327–2351, 2010.
 Keshavan [2012] R. H. Keshavan. Efficient algorithms for collaborative filtering. PhD thesis, STANFORD, 2012.
 Montanari [2016] A. Montanari. A grothendiecktype inequality for local maxima. arXiv preprint arXiv:1603.04064, 2016.
 Nesterov and Polyak [2006] Y. Nesterov and B. T. Polyak. Cubic regularization of newton method and its global performance. Mathematical Programming, 108(1):177–205, 2006.
 Recht et al. [2010] B. Recht, M. Fazel, and P. A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471–501, 2010.
 Rennie and Srebro [2005] J. D. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative prediction. In Proceedings of the 22nd international conference on Machine learning, pages 713–719. ACM, 2005.
 Sa et al. [2015] C. D. Sa, C. Re, and K. Olukotun. Global convergence of stochastic gradient descent for some nonconvex matrix problems. In Proceedings of the 32nd International Conference on Machine Learning (ICML15), pages 2332–2341, 2015.
 Srebro and Jaakkola [2003] N. Srebro and T. Jaakkola. Weighted low-rank approximations. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pages 720–727, 2003.
 Sun et al. [2015] J. Sun, Q. Qu, and J. Wright. Complete dictionary recovery using nonconvex optimization. In Proceedings of The 32nd International Conference on Machine Learning, pages 2351–2360, 2015.
 Sun et al. [2016] J. Sun, Q. Qu, and J. Wright. A geometric analysis of phase retrieval. preprint arXiv:1602.06664, 2016.
 Tu et al. [2015] S. Tu, R. Boczar, M. Soltanolkotabi, and B. Recht. Low-rank solutions of linear matrix equations via Procrustes flow. arXiv preprint arXiv:1507.03566, 2015.
 Yu et al. [2014] H.F. Yu, P. Jain, P. Kar, and I. Dhillon. Largescale multilabel learning with missing labels. In Proceedings of The 31st International Conference on Machine Learning, pages 593–601, 2014.
 Zhang and Balzano [2015] D. Zhang and L. Balzano. Global convergence of a grassmannian gradient descent algorithm for subspace estimation. arXiv preprint arXiv:1506.07405, 2015.
 Zhao et al. [2015] T. Zhao, Z. Wang, and H. Liu. A nonconvex optimization framework for low rank matrix estimation. In Advances in Neural Information Processing Systems, pages 559–567, 2015.
 Zheng and Lafferty [2015] Q. Zheng and J. Lafferty. A convergent gradient descent algorithm for rank minimization and semidefinite programming from random linear measurements. In Advances in Neural Information Processing Systems, pages 109–117, 2015.
Appendix A Numerical Simulations
In this section we present simulation results for the performance of gradient descent on $f(U)$. We consider measurements $y_i = \langle A_i, X^\star \rangle$, where the $A_i$ are i.i.d. Gaussian matrices. $X^\star$ is a random rank-$r$ p.s.d. matrix, and $r$ is varied from 1 to 20 in the experiments.
We consider both standard gradient descent and noisy gradient descent (4), with a fixed step size. We add noise of small magnitude for the noisy gradient updates. Each method is run until convergence (a maximum of 200 iterations). Let the output of gradient descent be $\hat{U}$. A run of this experiment is considered a success if the final error $\|\hat{U}\hat{U}^\top - X^\star\|_F / \|X^\star\|_F$ is below a small threshold. Each experiment is repeated 20 times and the average probability of success is computed.
We repeat the above procedure starting from both random initialization and SVD initialization. For SVD initialization, the initial point is set to the rank-$r$ approximation of $\frac{1}{m}\sum_i y_i A_i$, as suggested by Jain et al. [15]. In Figure 2 we show the plots for the cases discussed above. All of them have a phase transition around the same number of samples. This is in agreement with the results in Section 3: $f(U)$ has no spurious local minima once $m$ is large enough, and random initialization has the same performance as SVD initialization.
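For concreteness, a sketch of this SVD initialization with our own illustrative sizes. The surrogate $\frac{1}{m}\sum_i y_i A_i$ has expectation $X^\star$ for symmetrized standard Gaussian measurements, so its top-$r$ eigenpairs give a reasonable starting point:

```python
import numpy as np

rng = np.random.default_rng(6)
n, r, m = 20, 2, 4000
A = rng.standard_normal((m, n, n))
A = (A + A.transpose(0, 2, 1)) / 2
U_star = rng.standard_normal((n, r))
X_star = U_star @ U_star.T
y = np.einsum('kij,ij->k', A, X_star)    # noiseless measurements

# Surrogate matrix whose expectation is X* under this measurement model.
M = np.einsum('k,kij->ij', y, A) / m

# "SVD" initialization (an eigendecomposition, since M is symmetric):
# top-r eigenpairs of M give U0 with U0 U0^T close to X*.
w, V = np.linalg.eigh(M)
top = np.argsort(w)[::-1][:r]
U0 = V[:, top] * np.sqrt(np.maximum(w[top], 0.0))

init_err = np.linalg.norm(U0 @ U0.T - X_star) / np.linalg.norm(X_star)
```

With enough measurements, `U0` already lands within a constant relative distance of $X^\star$, which is what the local-convergence analyses cited above require; the point of the simulations here is that a plain random start does just as well.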
In Figure 3, the left two plots show how the error behaves with varying rank and number of samples, for random and SVD initializations. The rightmost plot shows the phase transition for the rank-10 case for all the methods. Again, we notice no significant difference between these methods.
Appendix B Proof for the Noisy Case
In this section we present the proof characterizing the local minima of problem (2) in the noisy case. Recall $y = \mathcal{A}(X^\star) + w$, where $X^\star$ is a rank-$r$ matrix and $w$ is i.i.d. Gaussian noise.
We consider a local optimum $U$ that satisfies the first and second order optimality conditions of problem (2). In particular, $U$ satisfies $\nabla f(U) = 0$ and $v^\top \nabla^2 f(U)\, v \ge 0$ for any $v$. We will now see how these two conditions constrain the error $UU^\top - X^\star$.
B.1 First order optimality
First we will consider the first order condition, $\nabla f(U) = 0$. For any stationary point $U$ this implies
$$\sum_{i=1}^m \big(\langle A_i, UU^\top \rangle - y_i\big)\, A_i U \ = \ 0. \qquad (9)$$
Now, using the isometry property of $\mathcal{A}$, we obtain the following result.
Lemma B.1.
[First order condition] For any first order stationary point $U$ of $f(U)$, with $\mathcal{A}$ satisfying the RIP (3), the following holds:
with high probability over the noise, where $Q$ is an orthonormal matrix that spans the column space of $U$.
This lemma states that any stationary point of $f(U)$ is close to a global optimum within the subspace spanned by the columns of $U$. Notice that the error along the orthogonal directions can still be large, making the distance between $UU^\top$ and $X^\star$ arbitrarily large.
Proof of Lemma B.1.
Let $X^\star = ZZ^\top$, and let $Q$ be an orthonormal matrix spanning the column space of $U$. Consider any matrix of the form $QQ^\top M$. The first order optimality condition then implies,
The above equation, together with the Restricted Isometry Property (equation (5)), gives us the following inequality:
by Cauchy Schwarz inequality and Lemma E.2. Note that for any matrix , . Furthermore, for any matrix , . Hence the above inequality implies the lemma statement. ∎
b.2 Second order optimality
We will now use the second order condition to show that the error along the orthogonal directions is indeed bounded as well. Let $\nabla^2 f(U) \in \mathbb{R}^{nr \times nr}$ be the Hessian of the objective function. Fortunately, for our result we only need to evaluate the Hessian along the single direction $U - ZR$, for some orthonormal matrix $R$.
Lemma B.2.
[Hessian computation] Let $U$ be a first order critical point of $f(U)$. Then for any orthonormal matrix $R$ and $D = U - ZR$,
Proof of Lemma B.2.
For any matrix $D$, taking the directional second derivative of the function $f$ with respect to $D$ we get:
Setting $D = U - ZR$ and using the first order optimality condition on $U$, we get,
(10) 
where the last equality again follows from the first order optimality condition (9). ∎
Hence from the second order optimality of $U$ we get,
Corollary B.1.
[Second order optimality] Let $U$ be a local minimum of $f(U)$. For any orthonormal matrix $R$, with high probability over the noise,
Further, for $\mathcal{A}$ satisfying RIP (equation (3)) we have,