PUDLE: Implicit Acceleration of Dictionary Learning by Backpropagation

05/31/2021
by Bahareh Tolooshams, et al.

The dictionary learning problem, representing data as a sparse combination of a few atoms, has long stood as a popular method for learning representations in statistics and signal processing. The most popular dictionary learning algorithm alternates between sparse coding and dictionary update steps, and a rich literature has studied its theoretical convergence. The growing popularity of neurally plausible unfolded sparse coding networks has led to the empirical finding that backpropagation through such networks performs dictionary learning. This paper offers the first theoretical proof of these empirical results through PUDLE, a Provable Unfolded Dictionary LEarning method. We highlight the impact of loss, unfolding, and backpropagation on convergence. We discover an implicit acceleration: as a function of unfolding, the backpropagated gradient converges faster and is more accurate than the gradient from alternating minimization. We complement our findings with synthetic and image-denoising experiments. The findings support the use of accelerated deep learning optimizers and unfolded networks for dictionary learning.
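
To make the setup concrete, below is a minimal sketch (in PyTorch, not the authors' PUDLE implementation) of what "backpropagating through an unfolded sparse coding network" means: T ISTA iterations are unrolled, and the dictionary is updated by backpropagating a reconstruction loss through the unrolled iterations with an accelerated optimizer. The unfolding depth, sparsity penalty, optimizer settings, and data sizes are illustrative assumptions.

```python
# Sketch: dictionary learning by backpropagation through unfolded ISTA.
import torch

torch.manual_seed(0)
n, m, p = 20, 30, 512          # signal dim, number of atoms, number of examples
T, lam = 15, 0.1               # unfolding depth, sparsity penalty (assumed values)

# Synthetic data from a ground-truth dictionary and sparse codes
D_true = torch.nn.functional.normalize(torch.randn(n, m), dim=0)
X_true = (torch.rand(m, p) < 0.1).float() * torch.randn(m, p)
Y = D_true @ X_true

# Learnable dictionary, columns normalized to unit norm
W = torch.nn.functional.normalize(torch.randn(n, m), dim=0).requires_grad_()
opt = torch.optim.Adam([W], lr=1e-2)   # accelerated deep learning optimizer

def soft_threshold(z, thr):
    return torch.sign(z) * torch.clamp(z.abs() - thr, min=0.0)

for epoch in range(200):
    step = 1.0 / torch.linalg.matrix_norm(W, ord=2).detach() ** 2  # ISTA step size
    X = torch.zeros(m, p)
    for _ in range(T):                  # unfolded (unrolled) sparse coding iterations
        X = soft_threshold(X - step * W.T @ (W @ X - Y), step * lam)
    loss = 0.5 * ((W @ X - Y) ** 2).sum() / p
    opt.zero_grad()
    loss.backward()                     # backpropagate through the unrolled network
    opt.step()                          # dictionary update from the backpropagated gradient
    with torch.no_grad():               # keep atoms unit-norm
        W /= W.norm(dim=0, keepdim=True)
```

In this sketch, increasing T deepens the unrolled network; the paper's analysis concerns how this unfolding affects the accuracy and convergence of the backpropagated dictionary gradient relative to alternating minimization.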


Related research

10/13/2021 · Dictionary Learning with Convex Update (ROMD)
Dictionary learning aims to find a dictionary under which the training d...

04/04/2023 · Convergence of alternating minimisation algorithms for dictionary learning
In this paper we derive sufficient conditions for the convergence of two...

05/31/2018 · Analysis of Fast Structured Dictionary Learning
Sparsity-based models and techniques have been exploited in many signal ...

02/24/2020 · Complete Dictionary Learning via ℓ_p-norm Maximization
Dictionary learning is a classic representation learning method that has...

08/12/2017 · Sparse Coding and Autoencoders
In "Dictionary Learning" one tries to recover incoherent matrices A^* ∈R...

05/19/2017 · Local Information with Feedback Perturbation Suffices for Dictionary Learning in Neural Circuits
While the sparse coding principle can successfully model information pro...

12/03/2021 · A Structured Dictionary Perspective on Implicit Neural Representations
Propelled by new designs that permit to circumvent the spectral bias, im...
