When is sparse dictionary learning well-posed?

06/22/2016
by Charles J. Garfinkle, et al.

Dictionary learning methods for sparse coding have exposed underlying structure in many kinds of natural signals. However, universal theorems guaranteeing the statistical consistency of inference in this model have been lacking. Here, we prove that for almost all sufficiently diverse datasets generated from the model, the latent dictionaries and sparse codes are uniquely identifiable up to an error commensurate with the measurement noise. Applications are given to data analysis, neuroscience, and engineering.
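As a rough illustration of the generative model in question (not of the paper's proof technique), the Python sketch below samples data as noisy combinations of a few atoms from a random latent dictionary, fits a dictionary with scikit-learn's off-the-shelf DictionaryLearning, and scores recovery up to the unavoidable permutation and sign ambiguities. All problem sizes, the sparsity level, and the noise scale are illustrative assumptions rather than values taken from the paper.

# Minimal sketch of the sparse-coding generative model: each data point is a
# noisy linear combination of a few columns (atoms) of a latent dictionary.
# Sizes and noise level below are illustrative choices, not from the paper.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
n_features, n_atoms, sparsity, n_samples = 20, 30, 3, 1000
noise_std = 0.01

# Latent dictionary with unit-norm atoms.
D_true = rng.standard_normal((n_features, n_atoms))
D_true /= np.linalg.norm(D_true, axis=0)

# Sparse codes: each sample activates `sparsity` randomly chosen atoms.
X = np.zeros((n_atoms, n_samples))
for j in range(n_samples):
    support = rng.choice(n_atoms, size=sparsity, replace=False)
    X[support, j] = rng.standard_normal(sparsity)

# Observed data: Y = D X + noise (rows are samples, as scikit-learn expects).
Y = (D_true @ X + noise_std * rng.standard_normal((n_features, n_samples))).T

# Fit a dictionary of the same size with a generic learner.
learner = DictionaryLearning(n_components=n_atoms, alpha=0.1,
                             max_iter=100, random_state=0)
learner.fit(Y)
D_hat = learner.components_.T          # shape (n_features, n_atoms)
D_hat /= np.linalg.norm(D_hat, axis=0)

# A dictionary is identifiable only up to permutation and sign of its atoms,
# so score each true atom by its best-matching learned atom (|cosine similarity|).
similarity = np.abs(D_true.T @ D_hat)
print("median best-match similarity:", np.median(similarity.max(axis=1)))

Because the dictionary is identifiable only up to relabeling and sign flips of its atoms, the comparison matches each true atom to its most similar learned atom rather than comparing columns position by position.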


Related research

11/17/2021  Discriminative Dictionary Learning based on Statistical Methods
    Sparse Representation (SR) of signals or data has a well founded theory ...

09/02/2013  A Study on Unsupervised Dictionary Learning and Feature Encoding for Action Classification
    Many efforts have been devoted to develop alternative methods to traditi...

10/21/2022  Geometric Sparse Coding in Wasserstein Space
    Wasserstein dictionary learning is an unsupervised approach to learning ...

05/11/2014  Anomaly-Sensitive Dictionary Learning for Unsupervised Diagnostics of Solid Media
    This paper proposes a strategy for the detection and triangulation of st...

01/15/2017  Boosting Dictionary Learning with Error Codes
    In conventional sparse representations based dictionary learning algorit...

07/19/2014  Sparse and spurious: dictionary learning with noise and outliers
    A popular approach within the signal processing and machine learning com...

08/29/2017  On the Reconstruction Risk of Convolutional Sparse Dictionary Learning
    Sparse dictionary learning (SDL) has become a popular method for adaptiv...
