New Algorithms for Learning Incoherent and Overcomplete Dictionaries

08/28/2013
by   Sanjeev Arora, et al.

In sparse recovery we are given a matrix A (the dictionary) and a vector of the form AX where X is sparse, and the goal is to recover X. This is a central notion in signal processing, statistics, and machine learning. But in applications such as sparse coding, edge detection, compression, and super-resolution, the dictionary A is unknown and has to be learned from random examples of the form Y = AX, where X is drawn from an appropriate distribution --- this is the dictionary learning problem. In most settings A is overcomplete: it has more columns than rows. This paper presents a polynomial-time algorithm for learning overcomplete dictionaries; the only previously known algorithm with provable guarantees is the recent work of Spielman, Wang, and Wright, which handles the full-rank (square) case that rarely arises in applications. Our algorithm applies to incoherent dictionaries, which have been a central object of study since they were introduced in seminal work of Donoho and Huo. In particular, a dictionary is μ-incoherent if each pair of columns has inner product at most μ/√n in absolute value. The algorithm makes natural stochastic assumptions about the unknown sparse vector X, which can contain k ≤ c·min(√n/(μ log n), m^(1/2−η)) non-zero entries (for any η > 0). This is close to the largest sparsity k handled by the best sparse recovery algorithms even when the dictionary A is known exactly. Moreover, both the running time and sample complexity depend on log(1/ϵ), where ϵ is the target accuracy, and so our algorithm converges very quickly to the true dictionary. It can also tolerate substantial amounts of noise provided the noise is incoherent with respect to the dictionary (e.g., Gaussian). In the noisy setting, our running time and sample complexity depend polynomially on 1/ϵ, and this dependence is necessary.
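To make the setup concrete, here is a minimal NumPy sketch of the generative model the abstract describes: an overcomplete dictionary with unit-norm columns, its mutual coherence (a random Gaussian dictionary is used purely for illustration, since such dictionaries are incoherent with high probability), and one sample Y = AX with a k-sparse X. The dimensions and sparsity level are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Signal dimension n, dictionary size m (overcomplete: m > n), sparsity k.
n, m, k = 64, 128, 5

# Random dictionary with unit-norm columns; Gaussian columns are
# incoherent with high probability (illustrative, not the paper's setup).
A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)

# Mutual coherence: largest |<A_i, A_j>| over distinct columns.
# A dictionary is mu-incoherent when this is at most mu / sqrt(n).
G = np.abs(A.T @ A)
np.fill_diagonal(G, 0.0)
mu = G.max() * np.sqrt(n)

# One sample from the dictionary-learning model: Y = A X, X k-sparse.
X = np.zeros(m)
support = rng.choice(m, size=k, replace=False)
X[support] = rng.standard_normal(k)
Y = A @ X

print(f"coherence parameter mu = {mu:.2f}")
print(f"sample Y has shape {Y.shape}, X has {np.count_nonzero(X)} non-zeros")
```

The learner only observes many such vectors Y; both A and the sparse representations X are hidden, which is what distinguishes dictionary learning from ordinary sparse recovery.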


