Designing recurrent neural networks by unfolding an l1-l1 minimization algorithm

02/18/2019
by Hung Duy Le et al.

We propose a new deep recurrent neural network (RNN) architecture for sequential signal reconstruction. The network is designed by unfolding the iterations of a proximal gradient method that solves the l1-l1 minimization problem. As such, it incorporates by design the priors that each signal has a sparse representation and that the difference between the representations of consecutive signals is also sparse. We evaluate the proposed model on the task of reconstructing video frames from compressive measurements and show that it outperforms several state-of-the-art RNN models.
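For context, a common formulation of the l1-l1 problem with side information (in the spirit of Mota et al.'s compressed sensing with prior information) that such an unfolding targets is sketched below; the paper's exact notation and learned parameterization may differ. Writing y_t for the compressive measurements of frame t, A for the measurement matrix, D for a sparse synthesis dictionary, and the previous frame's code x_{t-1} as side information, the per-frame problem and its proximal gradient iteration read:

```latex
% Per-frame l1-l1 problem (assumed notation): the previous code
% x_{t-1} acts as side information for the current code x_t.
\min_{x_t} \; \tfrac{1}{2}\,\lVert y_t - A D\, x_t \rVert_2^2
  + \lambda \left( \lVert x_t \rVert_1 + \lVert x_t - x_{t-1} \rVert_1 \right)

% Proximal gradient iteration (L: Lipschitz constant of the gradient).
% Unfolding K iterations yields K layers; feeding x_{t-1} forward
% in time yields the recurrent structure.
x_t^{(k+1)} = \operatorname{prox}_{(\lambda/L)\, g}\!\left(
  x_t^{(k)} - \tfrac{1}{L} (AD)^{\top} \bigl( A D\, x_t^{(k)} - y_t \bigr)
\right), \qquad g(x) = \lVert x \rVert_1 + \lVert x - x_{t-1} \rVert_1
```

The proximal operator of g has an element-wise closed form (a double soft-thresholding around 0 and the side information), which is what makes the unfolding practical. Below is a minimal NumPy sketch of the resulting computation with fixed, untrained operators; in the learned network, (AD)^T, the step size, and the thresholds would become trainable weights. All function and variable names here are illustrative, not the paper's.

```python
import numpy as np

def prox_l1_l1(v, s, lam):
    """Element-wise proximal operator of lam * (|x| + |x - s|).

    Closed-form piecewise soft-thresholding around 0 and s,
    derived for s >= 0; negative entries handled by symmetry.
    """
    sgn = np.where(s < 0, -1.0, 1.0)          # reduce to the s >= 0 case
    v_, s_ = sgn * v, sgn * s
    x = np.where(v_ < -2 * lam, v_ + 2 * lam,
        np.where(v_ <= 0.0, 0.0,
        np.where(v_ < s_, v_,
        np.where(v_ <= s_ + 2 * lam, s_, v_ - 2 * lam))))
    return sgn * x

def unfolded_l1_l1_rnn(Y, A, D, lam=0.1, K=10):
    """Reconstruct a frame sequence from measurements Y[t] = A @ D @ x_t.

    Each time step runs K proximal-gradient iterations (the "unfolded"
    layers); the previous frame's sparse code is the side information.
    A: measurement matrix (m x n), D: sparse synthesis dictionary (n x n).
    """
    B = A @ D                                 # effective sensing operator
    L = np.linalg.norm(B, 2) ** 2             # Lipschitz constant of the gradient
    h = np.zeros(B.shape[1])                  # sparse code of the previous frame
    frames = []
    for y in Y:
        s = h                                 # side information: previous code
        x = s.copy()
        for _ in range(K):                    # K unfolded iterations = K layers
            grad = B.T @ (B @ x - y)
            x = prox_l1_l1(x - grad / L, s, lam / L)
        h = x
        frames.append(D @ h)                  # synthesize the reconstructed frame
    return np.stack(frames)
```

In this sketch, increasing the number of unfolded iterations K trades computation for reconstruction accuracy, exactly as adding layers would in the trained network.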
