Learned Greedy Method (LGM): A Novel Neural Architecture for Sparse Coding and Beyond

10/14/2020
by Rajaei Khatib, et al.

The fields of signal and image processing have been deeply influenced by the introduction of deep neural networks. These are successfully deployed in a wide range of real-world applications, obtaining state-of-the-art results and surpassing well-known and well-established classical methods. Despite their impressive success, the architectures used in many of these neural networks come with no clear justification. As such, they are usually treated as "black box" machines that lack any kind of interpretability. A constructive remedy to this drawback is a systematic design of such networks by unfolding well-understood iterative algorithms. A popular representative of this approach is the Iterative Shrinkage-Thresholding Algorithm (ISTA) and its learned version, LISTA, which aim to recover sparse representations of the processed signals. In this paper we revisit this sparse coding task and propose an unfolded version of a greedy pursuit algorithm for the same goal. More specifically, we concentrate on the well-known Orthogonal Matching Pursuit (OMP) algorithm and introduce its unfolded and learned version. Key features of our Learned Greedy Method (LGM) are the ability to accommodate a dynamic number of unfolded layers and a stopping mechanism based on representation error, both adapted to the input. We develop several variants of the proposed LGM architecture and test some of them in various experiments, demonstrating their flexibility and efficiency.
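To make the unfolding idea concrete, the sketch below shows the classical OMP iteration that LGM turns into network layers: each greedy step adds one dictionary atom and re-fits the coefficients by least squares, and the loop stops once the residual error drops below a tolerance, which is the error-based, input-adapted stopping behavior the abstract highlights. This is a minimal NumPy rendition of textbook OMP, not the authors' learned architecture; the function name `omp` and the parameters `eps` and `k_max` are illustrative choices, and in the learned version the dictionary and thresholds would become trainable.

import numpy as np

def omp(D, y, eps=1e-3, k_max=None):
    """Classical Orthogonal Matching Pursuit (OMP).

    Greedily selects dictionary atoms until the residual error falls
    below `eps` or `k_max` atoms have been chosen. The number of
    iterations (and hence of unfolded layers) adapts to the input.

    D : (n, m) dictionary with (approximately) unit-norm columns
    y : (n,)   input signal
    """
    n, m = D.shape
    k_max = n if k_max is None else k_max
    residual = y.copy()
    support = []
    coeffs = np.zeros(0)
    x = np.zeros(m)

    while np.linalg.norm(residual) > eps and len(support) < k_max:
        # Greedy step: pick the atom most correlated with the residual.
        correlations = D.T @ residual
        i = int(np.argmax(np.abs(correlations)))
        if i in support:
            break  # no new atom improves the fit -> stop early
        support.append(i)

        # Orthogonal projection: least-squares fit on the chosen support.
        Ds = D[:, support]
        coeffs, *_ = np.linalg.lstsq(Ds, y, rcond=None)
        residual = y - Ds @ coeffs

    if support:
        x[support] = coeffs
    return x, support

# Toy usage (hypothetical sizes): recover a 3-sparse vector.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(256)
x_true[[3, 50, 200]] = [1.0, -0.5, 2.0]
y = D @ x_true
x_hat, support = omp(D, y, eps=1e-6)

In an unfolded, learned variant each pass of this loop becomes a layer with its own parameters, which is why accommodating a dynamic number of layers and an error-based stopping rule are the key architectural features emphasized in the abstract.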

Related research:

- Learning step sizes for unfolded sparse coding (05/27/2019): Sparse coding is typically solved by iterative optimization techniques, ...
- Thresholding Greedy Pursuit for Sparse Recovery Problems (03/17/2021): We study here sparse recovery problems in the presence of additive noise...
- Revisiting Matching Pursuit: Beyond Approximate Submodularity (05/12/2023): We study the problem of selecting a subset of vectors from a large set, ...
- Algorithm Unrolling: Interpretable, Efficient Deep Learning for Signal and Image Processing (12/22/2019): Deep neural networks provide unprecedented performance gains in many rea...
- Adaptive Approach For Sparse Representations Using The Locally Competitive Algorithm For Audio (09/29/2021): Gammachirp filterbank has been used to approximate the cochlea in sparse...
- A Greedy Approach to ℓ_0,∞ Based Convolutional Sparse Coding (12/26/2018): Sparse coding techniques for image processing traditionally rely on a pr...
