Complete Dictionary Recovery over the Sphere

04/26/2015
by   Ju Sun, et al.

We consider the problem of recovering a complete (i.e., square and invertible) matrix A_0 from Y ∈ R^{n × p} with Y = A_0 X_0, provided X_0 is sufficiently sparse. This recovery problem is central to the theoretical understanding of dictionary learning, which seeks a sparse representation for a collection of input signals and finds numerous applications in modern signal processing and machine learning. We give the first efficient algorithm that provably recovers A_0 when X_0 has O(n) nonzeros per column, under a suitable probability model for X_0. In contrast, prior results based on efficient algorithms provide recovery guarantees only when X_0 has O(n^{1-δ}) nonzeros per column, for some constant δ ∈ (0, 1). Our algorithmic pipeline centers around solving a certain nonconvex optimization problem with a spherical constraint, and hence is naturally phrased in the language of manifold optimization. To show this apparently hard problem is tractable, we first provide a geometric characterization of the high-dimensional objective landscape, which shows that with high probability there are no "spurious" local minima. This particular geometric structure allows us to design a Riemannian trust-region algorithm over the sphere that provably converges to a local minimizer from an arbitrary initialization, despite the presence of saddle points. The geometric approach we develop here may also shed light on other problems arising from nonconvex recovery of structured signals.
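To make the setup concrete, the following is a minimal illustrative sketch of the spherical formulation: plain Riemannian gradient descent on the sphere minimizing a smooth surrogate of ||q^T Y||_1 (here log-cosh), which recovers one column of an orthogonal A_0 up to sign. This is not the paper's trust-region method, and the dimensions, step size, smoothing parameter, and Bernoulli-Gaussian model for X_0 are all assumptions chosen for the demo.

```python
import numpy as np

# Illustrative sketch only: the paper analyzes a Riemannian trust-region
# method; here we use simple Riemannian gradient descent on the sphere,
# minimizing f(q) = (1/p) * sum_j mu * log cosh(q . y_j / mu), a smooth
# surrogate of ||q^T Y||_1. Minimizers align q with a column of A_0.
rng = np.random.default_rng(0)
n, p, theta = 5, 2000, 0.3          # dimensions and sparsity (assumed)

A0 = np.linalg.qr(rng.standard_normal((n, n)))[0]   # orthogonal dictionary
X0 = rng.standard_normal((n, p)) * (rng.random((n, p)) < theta)  # sparse
Y = A0 @ X0

mu = 0.1                            # smoothing parameter (assumed)
q = rng.standard_normal(n)
q /= np.linalg.norm(q)

def f(q):
    return np.mean(mu * np.log(np.cosh(Y.T @ q / mu)))

f_init = f(q)
for _ in range(500):
    g = Y @ np.tanh(Y.T @ q / mu) / p   # Euclidean gradient of f
    g_r = g - (q @ g) * q               # project onto tangent space
    q -= 0.05 * g_r                     # Riemannian gradient step
    q /= np.linalg.norm(q)              # retract back to the sphere
f_final = f(q)

# If recovery succeeds, A0^T q is close to a signed standard basis vector.
corr = np.abs(A0.T @ q)
print(f_init, f_final, corr.max())
```

With an orthogonal A_0, sparsity of q^T Y = (q^T A_0) X_0 forces q^T A_0 toward ±e_i^T, so max |A_0^T q| near 1 signals success; the paper's geometric result explains why such descent does not get stuck despite nonconvexity.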

Related research:

Complete Dictionary Recovery over the Sphere I: Overview and the Geometric Picture (11/11/2015)
Complete Dictionary Recovery over the Sphere II: Recovery by Riemannian Trust-region Method (11/15/2015)
Finding the Sparsest Vectors in a Subspace: Theory, Algorithms, and Applications (01/20/2020)
Manifold Proximal Point Algorithms for Dual Principal Component Pursuit and Orthogonal Dictionary Learning (05/05/2020)
Complete Dictionary Learning via ℓ_p-norm Maximization (02/24/2020)
Finding a sparse vector in a subspace: Linear sparsity using alternating directions (12/15/2014)
When Are Nonconvex Problems Not Scary? (10/21/2015)
