The Landscape of Matrix Factorization Revisited

02/27/2020
by Hossein Valavi et al.

We revisit the landscape of the simple matrix factorization problem. For low-rank matrix factorization, prior work has shown that there exist infinitely many critical points, all of which are either global minima or strict saddles. At a strict saddle the minimum eigenvalue of the Hessian is negative. Of interest is whether this minimum eigenvalue is uniformly bounded below zero over all strict saddles. To answer this, we consider orbits of critical points under the general linear group. For each orbit we identify a representative point, called a canonical point. If a canonical point is a strict saddle, so is every point on its orbit. We derive an expression for the minimum eigenvalue of the Hessian at each canonical strict saddle and use this to show that the minimum eigenvalue of the Hessian over the set of strict saddles is not uniformly bounded below zero. We also show that a known invariance property of gradient flow ensures the gradient-flow solution only encounters critical points on an invariant manifold M_C determined by the initial condition. We show that, in contrast to the general situation, the minimum eigenvalue of strict saddles in M_0, the invariant manifold corresponding to C = 0, is uniformly bounded below zero. We obtain an expression for this bound in terms of the singular values of the matrix being factorized. The bound depends on the size of the nonzero singular values and on the separation between distinct nonzero singular values of the matrix.
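The claim that the Hessian's minimum eigenvalue at a strict saddle is tied to the singular values of the target matrix can be illustrated on the simplest case. The following is a hypothetical sketch (not the paper's code): for the rank-one objective f(u, v) = ||u v^T - M||_F^2, the origin (u, v) = (0, 0) is a critical point, and a direct expansion shows the Hessian there has the block form [[0, -2M], [-2M^T, 0]], whose eigenvalues are ±2σ_k(M) for the singular values σ_k of M. The minimum eigenvalue is therefore -2σ_1(M), strictly negative whenever M is nonzero.

```python
import numpy as np

# Target matrix M with singular values 3 and 1.
M = np.diag([3.0, 1.0])
n = M.shape[0]

# Hessian of f(u, v) = ||u v^T - M||_F^2 at the critical point (u, v) = (0, 0).
# The quartic term ||u v^T||_F^2 contributes nothing to the Hessian at the
# origin, so only the bilinear term -2 u^T M v survives, giving the block form:
H = np.block([[np.zeros((n, n)), -2 * M],
              [-2 * M.T, np.zeros((n, n))]])

eigs = np.linalg.eigvalsh(H)
print(eigs.min())  # -2 * sigma_max(M) = -6.0
```

The example shows the dependence on singular-value size asserted in the abstract: the depth of the negative curvature at this saddle scales directly with the largest singular value of M.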


