Deeply-Sparse Signal rePresentations (DS^2P)

07/05/2018
by Demba Ba, et al.

The solution to the regularized least-squares problem $\min_{x \in \mathbb{R}^p} \frac{1}{2}\|y - Ax\|_2^2 + \lambda\|x\|_1$, assuming $A$ is unitary, is given by the soft-thresholding operator, or ReLU in neural-network parlance, applied component-wise to $A^{\mathrm{T}}y$. This equivalence is at the core of recent work that has sought to build a parallel between deep neural network architectures and sparse recovery and estimation, namely that a deep neural network architecture with ReLU nonlinearities arises from a finite sequence of cascaded sparse coding models, the outputs of which, except for the last element in the cascade, are sparse and unobservable. We show that if the measurement matrices in the cascaded sparse coding model (a) satisfy the restricted isometry property (RIP) and (b) all have sparse columns except for the last, they can be recovered with high probability, in the absence of noise, using an optimization algorithm that, beginning with the last element of the cascade, alternates between estimating the dictionary and the sparse code and then, at convergence, proceeds to the preceding element in the cascade. The method of choice in deep learning to solve this problem is to train an auto-encoder whose architecture we specify. Our algorithm provides a sound alternative, with theoretical guarantees as well as sample-complexity assessments. Letting $r_\ell$ be the dimension of the input of the $\ell^{\text{th}}$ transformation (the embedding dimension) and $s_Y^{(\ell)}$ the sparsity of this input (the number of active neurons), the computational complexity is $O(\max_\ell r_\ell s_Y^{(\ell-1)})$, i.e., the maximum, across layers, of the product of the number of active neurons and the embedding dimension. Our proof relies on a certain type of sparse random matrix satisfying the RIP.
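As a quick illustration of the opening claim, here is a minimal NumPy sketch (not from the paper; the size $p$, the seed, and the penalty $\lambda$ are arbitrary demo choices). It computes the closed-form lasso solution for a unitary $A$ by soft-thresholding $A^{\mathrm{T}}y$ component-wise, and cross-checks it with ISTA, whose proximal-gradient update collapses to the same expression in a single step when $A^{\mathrm{T}}A = I$:

    import numpy as np

    def soft_threshold(z, lam):
        # Component-wise soft-thresholding: sign(z) * max(|z| - lam, 0).
        # Relation to ReLU: soft_threshold(z, lam) = relu(z - lam) - relu(-z - lam).
        return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

    rng = np.random.default_rng(0)
    p, lam = 8, 0.5                                   # arbitrary demo size and penalty
    A, _ = np.linalg.qr(rng.standard_normal((p, p)))  # QR factor: a random orthogonal A
    y = rng.standard_normal(p)

    # Closed-form lasso solution for unitary A: soft-threshold A^T y.
    x_closed = soft_threshold(A.T @ y, lam)

    # ISTA with step size 1 (valid since ||A^T A|| = 1). Because A^T A = I,
    # x + A^T (y - A x) = A^T y, so one step from any start lands on the solution.
    x = rng.standard_normal(p)
    x = soft_threshold(x + A.T @ (y - A @ x), lam)
    print(np.max(np.abs(x_closed - x)))               # ~1e-16

The alternating scheme described above can likewise be sketched generically. The code below is a schematic of alternating minimization for one element of the cascade, not the paper's algorithm: `ista` is a standard proximal-gradient sparse-coding step, the dictionary update is plain least squares, and the function and parameter names are illustrative assumptions. At convergence, the code matrix `X` plays the role of the sparse, unobservable output handed to the preceding element of the cascade.

    def ista(D, y, lam, n_iter=200):
        # Sparse-code step: minimize 0.5*||y - D x||_2^2 + lam*||x||_1 by ISTA.
        L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the smooth part
        x = np.zeros(D.shape[1])
        for _ in range(n_iter):
            x = soft_threshold(x - D.T @ (D @ x - y) / L, lam / L)
        return x

    def fit_layer(Y, n_atoms, lam, n_rounds=20, rng=None):
        # One element of the cascade: alternate between the sparse codes X
        # (columns) and the dictionary D; a generic schematic, not DS^2P itself.
        rng = rng or np.random.default_rng(1)
        D = rng.standard_normal((Y.shape[0], n_atoms))
        D /= np.linalg.norm(D, axis=0)
        for _ in range(n_rounds):
            X = np.column_stack([ista(D, y_col, lam) for y_col in Y.T])
            D = Y @ np.linalg.pinv(X)      # least-squares dictionary update
            D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)  # renormalize atoms
        return D, X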

Related research

10/10/2019 · Joint Design of Measurement Matrix and Sparse Support Recovery Method via Deep Auto-encoder
Sparse support recovery arises in many applications in communications an...

11/09/2017 · Provably Accurate Double-Sparse Coding
Sparse coding is a crucial subroutine in algorithms for various signal p...

06/05/2023 · Does a sparse ReLU network training problem always admit an optimum?
Given a training set, a loss function, and a neural network architecture...

09/30/2022 · Overparameterized ReLU Neural Networks Learn the Simplest Models: Neural Isometry and Exact Recovery
The practice of deep learning has shown that neural networks generalize ...

06/16/2020 · Dense and Sparse Coding: Theory and Architectures
The sparse representation model has been successfully utilized in a numb...

04/21/2021 · Efficient Sparse Coding using Hierarchical Riemannian Pursuit
Sparse coding is a class of unsupervised methods for learning a sparse r...

05/12/2014 · Estimating Diffusion Network Structures: Recovery Conditions, Sample Complexity & Soft-thresholding Algorithm
Information spreads across social and technological networks, but often ...
