Learning Fast Approximations of Sparse Nonlinear Regression

10/26/2020
by Yuhai Song, et al.

The idea of unfolding iterative algorithms as deep neural networks has been widely applied to sparse coding problems, providing both solid theoretical analysis of convergence rates and superior empirical performance. However, for sparse nonlinear regression problems, this idea is rarely exploited because of the complexity introduced by the nonlinearity. In this work, we bridge this gap by introducing the Nonlinear Learned Iterative Shrinkage Thresholding Algorithm (NLISTA), which attains linear convergence under suitable conditions. Experiments on synthetic data corroborate our theoretical results and show that our method outperforms state-of-the-art methods.
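The building block being unfolded here is the iterative shrinkage thresholding step. As a point of reference, below is a minimal sketch of a LISTA-style unrolled network for the linear case y = A x + noise, which NLISTA extends to nonlinear regression. The names W1, W2, and thetas are hypothetical stand-ins for the learned per-layer parameters, not the paper's exact parameterization.

```python
import numpy as np

def soft_threshold(v, theta):
    # Proximal operator of the l1 norm: shrinks each entry toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def unrolled_ista(y, W1, W2, thetas):
    # One forward pass of a LISTA-style unrolled network.
    # Each "layer" is one ISTA iteration with its own threshold:
    #     x_{k+1} = soft_threshold(W1 @ y + W2 @ x_k, theta_k)
    # In classical ISTA, W1 = (1/L) A^T and W2 = I - (1/L) A^T A are fixed;
    # in learned variants these matrices and thresholds are trained end to end.
    x = np.zeros(W2.shape[1])
    for theta in thetas:
        x = soft_threshold(W1 @ y + W2 @ x, theta)
    return x

# Toy usage: recover a sparse x from y = A @ x with classical ISTA weights.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50)) / np.sqrt(20)
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.0, -2.0, 1.5]
y = A @ x_true

L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
W1 = A.T / L
W2 = np.eye(50) - (A.T @ A) / L
x_hat = unrolled_ista(y, W1, W2, thetas=[0.05 / L] * 16)
```

In the unrolled view, each iteration becomes a network layer, so a fixed budget of K layers replaces running the iteration to convergence, and training the per-layer weights and thresholds is what yields the fast approximations the abstract refers to.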


Related research

08/29/2018 · Theoretical Linear Convergence of Unfolded ISTA and its Practical Weights and Thresholds
In recent years, unfolding iterative algorithms as neural networks has b...

06/22/2021 · Learned Interpretable Residual Extragradient ISTA for Sparse Coding
Recently, the study on learned iterative shrinkage thresholding algorith...

05/27/2019 · Learning step sizes for unfolded sparse coding
Sparse coding is typically solved by iterative optimization techniques, ...

05/16/2018 · On the Convergence of the SINDy Algorithm
One way to understand time-series data is to identify the underlying dyn...

08/01/2019 · On variational iterative methods for semilinear problems
This paper presents an iterative method suitable for inverting semilinea...

04/25/2022 · Hybrid ISTA: Unfolding ISTA With Convergence Guarantees Using Free-Form Deep Neural Networks
It is promising to solve linear inverse problems by unfolding iterative ...

10/29/2021 · Hyperparameter Tuning is All You Need for LISTA
Learned Iterative Shrinkage-Thresholding Algorithm (LISTA) introduces th...
