When is sparse dictionary learning well-posed?

06/22/2016
by Charles J. Garfinkle, et al.

Dictionary learning methods for sparse coding have exposed underlying structure in many kinds of natural signals. However, universal theorems guaranteeing the statistical consistency of inference in this model are lacking. Here, we prove that for almost all sufficiently diverse datasets generated from the model, the latent dictionaries and sparse codes are uniquely identifiable up to an error commensurate with the measurement noise. Applications are given to data analysis, neuroscience, and engineering.
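The generative model referenced in the abstract can be illustrated concretely: signals are sparse combinations of columns ("atoms") of a latent dictionary, corrupted by small noise, and the inference task is to recover both the dictionary and the codes from the data alone. The sketch below is illustrative only and is not the authors' method; it uses scikit-learn's `DictionaryLearning` as a stand-in solver, and all sizes (20 features, 30 atoms, sparsity k=3) are arbitrary choices for the demo.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)

# Ground-truth model: each 20-dim signal is a sparse combination
# of k=3 columns of a latent 20x30 dictionary, plus small noise.
# (All dimensions here are arbitrary demo values.)
n_features, n_atoms, n_samples, k = 20, 30, 500, 3
D_true = rng.standard_normal((n_features, n_atoms))
D_true /= np.linalg.norm(D_true, axis=0)  # unit-norm atoms

X_true = np.zeros((n_atoms, n_samples))
for j in range(n_samples):
    support = rng.choice(n_atoms, size=k, replace=False)
    X_true[support, j] = rng.standard_normal(k)

Y = D_true @ X_true + 0.01 * rng.standard_normal((n_features, n_samples))

# Infer a dictionary and sparse codes from the data alone.
learner = DictionaryLearning(
    n_components=n_atoms,
    transform_algorithm="omp",           # sparse coding step
    transform_n_nonzero_coefs=k,
    max_iter=50,
    random_state=0,
)
codes = learner.fit_transform(Y.T)       # shape (n_samples, n_atoms)
D_hat = learner.components_.T            # shape (n_features, n_atoms)

# If the data are diverse enough, the learned factorization should
# reconstruct the signals up to an error on the order of the noise.
rel_err = np.linalg.norm(Y - D_hat @ codes.T) / np.linalg.norm(Y)
print(f"relative reconstruction error: {rel_err:.3f}")
```

Note that even a successful run recovers the dictionary only up to permutation and sign of its atoms, which is the sense of uniqueness the paper's identifiability results address.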
