
Learning Fast Approximations of Sparse Nonlinear Regression

10/26/2020
by Yuhai Song, et al.

The idea of unfolding iterative algorithms as deep neural networks has been widely applied to sparse coding problems, providing both a solid theoretical convergence-rate analysis and superior empirical performance. For sparse nonlinear regression problems, however, the same idea has rarely been exploited, owing to the difficulty introduced by the nonlinearity. In this work, we bridge this gap by introducing the Nonlinear Learned Iterative Shrinkage Thresholding Algorithm (NLISTA), which attains linear convergence under suitable conditions. Experiments on synthetic data corroborate our theoretical results and show that our method outperforms state-of-the-art methods.
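The page shows only the abstract, so the NLISTA update rule itself cannot be reconstructed from it. As a rough, hypothetical sketch of the unfolding idea that NLISTA builds on, the PyTorch snippet below unrolls K iterations of ISTA for the linear sparse-coding model y = Ax + noise, with per-layer learnable matrices W1, W2 and thresholds theta; all names, shapes, and initializations here are illustrative assumptions, not the paper's construction.

```python
import torch
import torch.nn as nn


def soft_threshold(x, theta):
    # Soft-thresholding, the proximal operator of the l1 penalty:
    # shrinks each entry toward zero by theta and zeroes small entries.
    return torch.sign(x) * torch.relu(torch.abs(x) - theta)


class UnrolledISTA(nn.Module):
    """Truncated ISTA unfolded into a feed-forward network (LISTA-style).

    Each of the K layers computes one iteration
        x_{k+1} = soft_threshold(W1_k y + W2_k x_k, theta_k),
    with W1_k, W2_k, theta_k learned from data rather than fixed by A.
    NLISTA modifies the update to handle a nonlinear forward model,
    which is not reproduced here.
    """

    def __init__(self, m, n, num_layers=16):
        super().__init__()
        self.n = n
        self.W1 = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(n, m)) for _ in range(num_layers)])
        self.W2 = nn.ParameterList(
            [nn.Parameter(torch.eye(n)) for _ in range(num_layers)])
        self.theta = nn.ParameterList(
            [nn.Parameter(torch.tensor(0.1)) for _ in range(num_layers)])

    def forward(self, y):
        # y: (batch, m) measurements; returns (batch, n) sparse estimates.
        x = y.new_zeros(y.shape[0], self.n)
        for W1, W2, theta in zip(self.W1, self.W2, self.theta):
            x = soft_threshold(y @ W1.T + x @ W2.T, theta)
        return x
```

Such a network would typically be trained by minimizing, e.g., the mean squared error between its output and ground-truth sparse codes on synthetic (y, x) pairs, in the spirit of the synthetic-data experiments the abstract mentions.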

Related research

08/29/2018 · Theoretical Linear Convergence of Unfolded ISTA and its Practical Weights and Thresholds
In recent years, unfolding iterative algorithms as neural networks has b...

06/22/2021 · Learned Interpretable Residual Extragradient ISTA for Sparse Coding
Recently, the study on learned iterative shrinkage thresholding algorith...

05/27/2019 · Learning step sizes for unfolded sparse coding
Sparse coding is typically solved by iterative optimization techniques, ...

05/16/2018 · On the Convergence of the SINDy Algorithm
One way to understand time-series data is to identify the underlying dyn...

08/01/2019 · On variational iterative methods for semilinear problems
This paper presents an iterative method suitable for inverting semilinea...

10/29/2021 · Hyperparameter Tuning is All You Need for LISTA
Learned Iterative Shrinkage-Thresholding Algorithm (LISTA) introduces th...

05/18/2020 · Sparse Methods for Automatic Relevance Determination
This work considers methods for imposing sparsity in Bayesian regression...