
Learning Fast Approximations of Sparse Nonlinear Regression

by   Yuhai Song, et al.

The idea of unfolding iterative algorithms as deep neural networks has been widely applied to sparse coding problems, providing both solid theoretical analyses of convergence rates and superior empirical performance. However, for sparse nonlinear regression problems, a similar idea is rarely exploited due to the complexity of the nonlinearity. In this work, we bridge this gap by introducing the Nonlinear Learned Iterative Shrinkage Thresholding Algorithm (NLISTA), which attains linear convergence under suitable conditions. Experiments on synthetic data corroborate our theoretical results and show that our method outperforms state-of-the-art methods.
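To make the unfolding idea concrete, the following is a minimal sketch (not the paper's NLISTA) of classic ISTA for linear sparse coding and a LISTA-style unfolded version in which each layer carries its own weight matrix and threshold; in practice those per-layer parameters would be learned, while here they are simply supplied as arguments. All function names are illustrative.

```python
import numpy as np

def soft_threshold(x, theta):
    # Elementwise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def ista(A, y, lam, n_iter=100):
    """Classic ISTA for min_x 0.5*||y - A x||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # Gradient step on the quadratic term, then shrinkage.
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

def unfolded_ista(A, y, W_list, theta_list):
    """K-layer unfolded ISTA: layer k applies its own weight matrix
    W_list[k] and threshold theta_list[k] (learned in a real LISTA)."""
    x = np.zeros(A.shape[1])
    for W, theta in zip(W_list, theta_list):
        x = soft_threshold(x + W @ (y - A @ x), theta)
    return x
```

Tying every layer's weights to `A.T / L` and every threshold to `lam / L` recovers plain ISTA exactly, which is why the unfolded network can only do better once those parameters are trained.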


Theoretical Linear Convergence of Unfolded ISTA and its Practical Weights and Thresholds

In recent years, unfolding iterative algorithms as neural networks has b...

Learned Interpretable Residual Extragradient ISTA for Sparse Coding

Recently, the study on learned iterative shrinkage thresholding algorith...

Learning step sizes for unfolded sparse coding

Sparse coding is typically solved by iterative optimization techniques, ...

On the Convergence of the SINDy Algorithm

One way to understand time-series data is to identify the underlying dyn...

On variational iterative methods for semilinear problems

This paper presents an iterative method suitable for inverting semilinea...

Hyperparameter Tuning is All You Need for LISTA

Learned Iterative Shrinkage-Thresholding Algorithm (LISTA) introduces th...

Sparse Methods for Automatic Relevance Determination

This work considers methods for imposing sparsity in Bayesian regression...