Generalization and Estimation Error Bounds for Model-based Neural Networks

04/19/2023

by Avner Shultzman et al.

Model-based neural networks provide unparalleled performance for various tasks, such as sparse coding and compressed sensing problems. Due to their strong connection with the sensing model, these networks are interpretable and inherit the prior structure of the problem. In practice, model-based neural networks exhibit higher generalization capability than ReLU neural networks; however, this phenomenon has not been addressed theoretically. Here, we leverage complexity measures, including the global and local Rademacher complexities, to provide upper bounds on the generalization and estimation errors of model-based networks. We show that the generalization abilities of model-based networks for sparse recovery outperform those of regular ReLU networks, and derive practical design rules that allow one to construct model-based networks with guaranteed high generalization. Through a series of experiments, we demonstrate that our theoretical insights shed light on several behaviours observed in practice, including the fact that ISTA and ADMM networks exhibit higher generalization abilities than ReLU networks, especially for a small number of training samples.
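To make the setting concrete, the ISTA networks studied here are obtained by unfolding the iterative soft-thresholding algorithm for sparse recovery into network layers. The sketch below is a minimal NumPy illustration of this idea, not the paper's implementation: it runs the unfolded iteration with fixed (untrained) weights, whereas in a learned model-based network the per-layer matrices and thresholds would be trained from data.

```python
import numpy as np

def soft_threshold(x, theta):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def ista_network(y, A, num_layers=10, theta=0.1):
    """Apply `num_layers` unfolded ISTA iterations to recover a sparse x from y = Ax.

    Each layer performs a gradient step on ||Ax - y||^2 followed by
    soft-thresholding. In a learned (LISTA-style) model-based network,
    W1, W2, and the threshold would be trainable per layer; here they are
    fixed to the classical ISTA values.
    """
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    W1 = A.T / L                             # input weight
    W2 = np.eye(A.shape[1]) - A.T @ A / L    # recurrent weight
    x = np.zeros(A.shape[1])
    for _ in range(num_layers):
        x = soft_threshold(W2 @ x + W1 @ y, theta / L)
    return x

# Illustrative problem (dimensions and sparsity chosen arbitrarily):
# recover a 3-sparse vector from 50 Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]
y = A @ x_true
x_hat = ista_network(y, A, num_layers=200, theta=0.05)
```

The point of the unfolding view is that each layer has the structure of a ReLU-like network layer, but with weights tied to the sensing matrix A; the paper's bounds exploit exactly this restricted weight structure.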


