Complete Dictionary Recovery over the Sphere II: Recovery by Riemannian Trust-region Method

11/15/2015
by   Ju Sun, et al.

We consider the problem of recovering a complete (i.e., square and invertible) matrix A_0 from Y ∈ R^{n × p} with Y = A_0 X_0, provided X_0 is sufficiently sparse. This recovery problem is central to a theoretical understanding of dictionary learning, which seeks a sparse representation for a collection of input signals and finds numerous applications in modern signal processing and machine learning. We give the first efficient algorithm that provably recovers A_0 when X_0 has O(n) nonzeros per column, under a suitable probability model for X_0. Our algorithmic pipeline centers around solving a certain nonconvex optimization problem with a spherical constraint, and hence is naturally phrased in the language of manifold optimization. In a companion paper (arXiv:1511.03607), we showed that with high probability our nonconvex formulation has no "spurious" local minimizers and that around any saddle point the objective function has a direction of negative curvature. In this paper, we take advantage of this particular geometric structure and describe a Riemannian trust-region algorithm that provably converges to a local minimizer from arbitrary initializations. Such minimizers give excellent approximations to rows of X_0. The rows are then recovered by linear programming rounding and deflation.
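The pipeline above can be illustrated on a toy instance. The sketch below is an assumption-laden stand-in, not the paper's algorithm: it minimizes a smoothed-ℓ1 surrogate (a log-cosh function, one common choice) of q^T Y over the unit sphere by plain Riemannian gradient descent with normalization as the retraction, whereas the paper uses a Riemannian trust-region method; the sizes, sparsity rate, and step size are illustrative, and A_0 is taken orthogonal for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic instance Y = A0 X0 (illustrative sizes; A0 orthogonal for
# simplicity, X0 i.i.d. Bernoulli-Gaussian with sparsity rate theta).
n, p, theta = 10, 5000, 0.2
A0 = np.linalg.qr(rng.standard_normal((n, n)))[0]
X0 = rng.standard_normal((n, p)) * (rng.random((n, p)) < theta)
Y = A0 @ X0

mu = 0.05  # smoothing parameter for the log-cosh surrogate of |.|

def f_and_rgrad(q):
    """Objective f(q) = mean(mu * log cosh(q^T Y / mu)) and its Riemannian
    gradient on the sphere (Euclidean gradient projected onto the tangent
    space at q)."""
    z = q @ Y
    f = np.mean(mu * np.log(np.cosh(z / mu)))
    egrad = Y @ np.tanh(z / mu) / p
    return f, egrad - (q @ egrad) * q

# Riemannian gradient descent, retracting back to the sphere each step.
# (The paper's algorithm is a trust-region method; gradient descent is a
# simpler stand-in that exploits the same benign geometry.)
q = rng.standard_normal(n)
q /= np.linalg.norm(q)
f0, _ = f_and_rgrad(q)
for _ in range(500):
    _, g = f_and_rgrad(q)
    q = q - 0.2 * g
    q /= np.linalg.norm(q)
f_final, _ = f_and_rgrad(q)

# On success q aligns with a signed column of A0, so q^T Y recovers a
# row of X0 up to sign; deflation would then repeat on the orthogonal
# complement to collect the remaining rows.
corr = np.max(np.abs(A0.T @ q))
print(f"f: {f0:.4f} -> {f_final:.4f},  max_i |<q, a_i>| = {corr:.3f}")
```

A correlation max_i |⟨q, a_i⟩| near 1 indicates that the local minimizer found indeed matches a column of A_0, consistent with the "no spurious minimizers" geometry described in the abstract.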


Related research

Complete Dictionary Recovery over the Sphere (04/26/2015)
We consider the problem of recovering a complete (i.e., square and inver...

When Are Nonconvex Problems Not Scary? (10/21/2015)
In this note, we focus on smooth nonconvex optimization problems that ob...

Complete Dictionary Recovery over the Sphere I: Overview and the Geometric Picture (11/11/2015)
We consider the problem of recovering a complete (i.e., square and inver...

Finding the Sparsest Vectors in a Subspace: Theory, Algorithms, and Applications (01/20/2020)
The problem of finding the sparsest vector (direction) in a low dimensio...

Manifold Proximal Point Algorithms for Dual Principal Component Pursuit and Orthogonal Dictionary Learning (05/05/2020)
We consider the problem of maximizing the ℓ_1 norm of a linear map over ...

Complete Dictionary Learning via ℓ_p-norm Maximization (02/24/2020)
Dictionary learning is a classic representation learning method that has...

Finding a sparse vector in a subspace: Linear sparsity using alternating directions (12/15/2014)
Is it possible to find the sparsest vector (direction) in a generic subs...
