Optimal deep neural networks for sparse recovery via Laplace techniques

09/04/2017
by Steffen Limmer, et al.

This paper introduces Laplace techniques for designing a neural network that estimates simplex-constrained sparse vectors from compressed measurements. To this end, we recast MMSE estimation (w.r.t. a pre-defined uniform input distribution) as the computation of the centroid of the polytope that results from intersecting the simplex with an affine subspace determined by the measurements. Owing to this specific structure, we show that the centroid can be computed analytically by extending a recent result that facilitates polytope volume computation via Laplace transforms. A main insight of this paper is that the required volume and centroid computations can be performed by a conventional deep neural network comprising threshold, rectified linear unit (ReLU), and rectified polynomial (ReP) activation functions. The proposed construction of a deep neural network for sparse recovery is completely analytic, so time-consuming training procedures are unnecessary. Furthermore, we show that the number of layers in our construction equals the number of measurements, which might enable novel low-latency sparse recovery algorithms for a larger class of signals than the one assumed in this paper. To assess the applicability of the proposed uniform input distribution, we showcase the recovery performance on soft-classification vectors generated from two standard datasets. As both volume and centroid computation are known to be computationally hard, the network width grows exponentially in the worst case; it can, however, be decreased by inducing sparse connectivity in the neural network via a well-suited basis of the affine subspace. Finally, the presented analytical construction may serve as a viable initialization to be further optimized and trained on the particular input datasets at hand.
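To illustrate the central idea, here is a minimal numerical sketch of the estimation problem the paper solves analytically: under a uniform prior on the simplex, the MMSE estimate of x given y = Ax is the centroid of the polytope {x : x >= 0, sum(x) = 1, Ax = y}. The paper evaluates this centroid in closed form via Laplace transforms; the sketch below merely approximates it by rejection sampling, and all names (A, y, tol, the dimensions) are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 2                       # ambient dimension, number of measurements (illustrative)
A = rng.standard_normal((m, n))   # random measurement matrix (illustrative)
x_true = np.array([0.7, 0.3, 0.0, 0.0, 0.0])  # sparse vector on the simplex
y = A @ x_true                    # compressed measurements

# Draw uniform samples from the simplex (Dirichlet(1,...,1)) and keep those
# that nearly satisfy the measurement constraint A x = y.
tol = 0.1
samples = rng.dirichlet(np.ones(n), size=100_000)
mask = np.linalg.norm(samples @ A.T - y, axis=1) < tol

# The mean of the accepted samples approximates the polytope centroid,
# i.e., the MMSE estimate under the uniform simplex prior.
centroid = samples[mask].mean(axis=0)
```

By convexity of the norm, the averaged estimate satisfies the measurement constraint at least as well as the accepted samples do; the analytic network in the paper replaces this crude Monte Carlo step with exact volume and centroid computations, at the cost of a width that can grow exponentially.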
