
Learned Interpretable Residual Extragradient ISTA for Sparse Coding

by   Lin Kong, et al.

Recently, the learned iterative shrinkage-thresholding algorithm (LISTA) has attracted increasing attention. Extensive experiments, along with some theoretical results, have demonstrated the high efficiency of LISTA for solving sparse coding problems. However, existing LISTA methods all use a serial connection structure. To address this issue, we propose a novel extragradient-based LISTA (ELISTA), which has a residual structure and theoretical guarantees. In particular, our algorithm also provides a degree of interpretability for ResNet. From a theoretical perspective, we prove that our method attains linear convergence. In practice, extensive empirical results verify the advantages of our method.
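The sparse coding iteration that LISTA-style networks unroll is classic ISTA: a gradient step on the least-squares term followed by soft thresholding, the proximal operator of the l1 penalty. A minimal sketch of that baseline iteration is below; the dictionary, measurements, and regularization weight are illustrative assumptions, not values from the paper, and LISTA would replace the fixed matrices and step size with learned, layer-wise parameters.

```python
import numpy as np

def soft_threshold(v, tau):
    """Soft-thresholding (shrinkage) operator: the proximal map of tau*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, n_iter=500):
    """Plain ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    LISTA unrolls a fixed number of these iterations into network layers
    and learns the matrices and thresholds instead of using A and 1/L.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of 0.5*||Ax - b||^2
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Illustrative example: recover a 3-sparse vector from noiseless measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100)) / np.sqrt(60)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.5, -2.0, 1.0]
b = A @ x_true
x_hat = ista(A, b, lam=0.01)
```

With enough measurements relative to the sparsity level, the iteration recovers the support of `x_true`; unrolled variants such as LISTA reach comparable accuracy in far fewer steps by learning the per-layer parameters.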
