Towards Learning Sparsely Used Dictionaries with Arbitrary Supports

04/23/2018
by Pranjal Awasthi et al.

Dictionary learning is a popular approach for inferring a hidden basis or dictionary in which data has a sparse representation. Data generated from the dictionary A (an n by m matrix, with m > n in the over-complete setting) is given by Y = AX, where X is a matrix whose columns have supports chosen from a distribution over k-sparse vectors and whose non-zero values are drawn from a symmetric distribution. Given Y, the goal is to recover A and X in polynomial time. Existing algorithms give polynomial-time guarantees for recovering incoherent dictionaries under strong distributional assumptions on both the supports of the columns of X and the values of the non-zero entries.

In this work, we study the following question: can we design efficient algorithms for recovering dictionaries when the supports of the columns of X are arbitrary? To address this question while circumventing the issue of non-identifiability, we study a natural semirandom model for dictionary learning in which there are many samples y = Ax with arbitrary k-sparse supports for x, along with a few samples whose sparse supports are chosen uniformly at random. While the few samples with random supports ensure identifiability, the support distribution can look almost arbitrary in aggregate; hence existing algorithmic techniques, which make strong assumptions on the supports, seem to break down.

Our main contribution is a new polynomial-time algorithm for learning incoherent over-complete dictionaries that works under the semirandom model. Additionally, the same algorithm provides polynomial-time guarantees in new parameter regimes when the supports are fully random. Finally, using these techniques, we also identify a minimal set of conditions on the supports under which the dictionary can be recovered (information-theoretically) from polynomially many samples for almost-linear sparsity, i.e., k = Õ(n).
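The generative model described above can be sketched in a few lines of NumPy. This is an illustrative toy (not code from the paper): the dictionary is drawn with random unit columns as a stand-in for an incoherent dictionary, the "arbitrary" supports are simulated by an adversary that reuses one fixed support, and the non-zero values are Rademacher (a symmetric distribution). All function names and parameter values are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_column(m, support):
    # k-sparse column of X; non-zeros drawn from a symmetric
    # distribution (Rademacher +/-1 here).
    x = np.zeros(m)
    x[support] = rng.choice([-1.0, 1.0], size=len(support))
    return x

def generate_samples(n=20, m=40, k=3, num_arbitrary=500, num_random=50):
    """Toy semirandom model: many samples with adversarially chosen
    k-sparse supports, plus a few with uniformly random supports."""
    # Over-complete (m > n) dictionary with unit-norm columns.
    A = rng.standard_normal((n, m))
    A /= np.linalg.norm(A, axis=0)

    cols = []
    # "Arbitrary" supports: this stand-in adversary just reuses one
    # fixed support for the bulk of the samples.
    fixed_support = np.arange(k)
    for _ in range(num_arbitrary):
        cols.append(sparse_column(m, fixed_support))
    # The few uniformly random k-sparse supports that ensure
    # identifiability in the semirandom model.
    for _ in range(num_random):
        support = rng.choice(m, size=k, replace=False)
        cols.append(sparse_column(m, support))

    X = np.column_stack(cols)
    return A, X, A @ X
```

In aggregate, the support distribution produced here is dominated by the adversarial samples, which is exactly why algorithms relying on (near-)uniform support statistics can fail, while the small random fraction still pins down A information-theoretically.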


Related research

- 08/28/2013: New Algorithms for Learning Incoherent and Overcomplete Dictionaries. In sparse recovery we are given a matrix A (the dictionary) and a vector...
- 04/24/2018: On Learning Sparsely Used Dictionaries from Incomplete Samples. Most existing algorithms for dictionary learning assume that all entries...
- 10/19/2022: Spectral Subspace Dictionary Learning. Dictionary learning, the problem of recovering a sparsely used matrix 𝐃∈...
- 10/18/2017: A complete characterization of optimal dictionaries for least squares representation. Dictionaries are collections of vectors used for representations of elem...
- 05/28/2019: Approximate Guarantees for Dictionary Learning. In the dictionary learning (or sparse coding) problem, we are given a co...
- 03/07/2016: Optimal dictionary for least squares representation. Dictionaries are collections of vectors used for representations of rand...
- 10/25/2018: Subgradient Descent Learns Orthogonal Dictionaries. This paper concerns dictionary learning, i.e., sparse coding, a fundamen...
