Concentration inequalities and optimal number of layers for stochastic deep neural networks

06/22/2022
by Michele Caprio, et al.

We state concentration and martingale inequalities for the output of the hidden layers of a stochastic deep neural network (SDNN), as well as for the output of the whole SDNN. These results allow us to introduce an expected classifier (EC) and to give a probabilistic upper bound on the classification error of the EC. We also determine the optimal number of layers for the SDNN via an optimal stopping procedure. We apply our analysis to a stochastic version of a feedforward neural network with ReLU activation function.
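The abstract does not spell out the stochastic model, so the following minimal Python sketch shows one plausible reading: a feedforward network whose weights are redrawn from a Gaussian at every forward pass and composed with ReLU activations, together with a Monte Carlo check that the output concentrates around its empirical mean. The function sdnn_forward, the layer sizes, and the noise scale sigma are illustrative assumptions, not the construction analyzed in the paper.

import numpy as np

rng = np.random.default_rng(0)

def sdnn_forward(x, layer_dims, sigma=0.1):
    # One stochastic forward pass: each layer draws Gaussian weights
    # (mean 0, scale sigma / sqrt(fan_in)) and applies a ReLU.
    # Hypothetical model; the paper's noise structure may differ.
    h = x
    for d_in, d_out in zip(layer_dims[:-1], layer_dims[1:]):
        W = rng.normal(0.0, sigma / np.sqrt(d_in), size=(d_out, d_in))
        h = np.maximum(W @ h, 0.0)  # hidden-layer output after ReLU
    return h

# Empirical illustration of concentration: repeated stochastic passes
# on the same input cluster around their empirical mean output.
x = rng.normal(size=16)
outs = np.stack([sdnn_forward(x, [16, 32, 32, 4]) for _ in range(2000)])
deviations = np.linalg.norm(outs - outs.mean(axis=0), axis=1)
for t in (0.05, 0.1, 0.2):
    print(f"P(||f(x) - mean|| > {t}) ~ {(deviations > t).mean():.3f}")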

research 12/15/2018
Flatten-T Swish: a thresholded ReLU-Swish-like activation function for deep learning
Activation functions are essential for deep learning methods to learn an...

research 02/10/2018
Generalization of an Upper Bound on the Number of Nodes Needed to Achieve Linear Separability
An important issue in neural network research is how to choose the numbe...

research 03/14/2022
Quantitative Gaussian Approximation of Randomly Initialized Deep Neural Networks
Given any deep fully connected neural network, initialized with random G...

research 01/09/2019
A Constructive Approach for One-Shot Training of Neural Networks Using Hypercube-Based Topological Coverings
In this paper we presented a novel constructive approach for training de...

research 07/08/2021
On Margins and Derandomisation in PAC-Bayes
We develop a framework for derandomising PAC-Bayesian generalisation bou...

research 04/13/2020
Topology of deep neural networks
We study how the topology of a data set M = M_a ∪ M_b ⊆ R^d, representing...

research 12/09/2018
Towards Neural Network Patching: Evaluating Engagement-Layers and Patch-Architectures
In this report we investigate fundamental requirements for the applicati...
