Deep Gated Networks: A framework to understand training and generalisation in deep learning

Understanding the role of (stochastic) gradient descent in the training and generalisation of deep neural networks (DNNs) with ReLU activation has been the object of study in the recent past. In this paper, we use deep gated networks (DGNs) as a framework to obtain insights about DNNs with ReLU activation. In a DGN, a single neuronal unit has two components, namely the pre-activation input (equal to the inner product of the layer's weights and the previous layer's outputs) and a gating value in [0,1]; the output of the unit is the product of the pre-activation input and the gating value. The standard DNN with ReLU activation is a special case of a DGN, wherein the gating value is 1 or 0 depending on whether the pre-activation input is positive or not. We theoretically analyse and experiment with several variants of DGNs, each suited to understanding a particular aspect of training or generalisation in DNNs with ReLU activation. Our theory sheds light on two questions, namely i) why increasing depth up to a point helps training, and ii) why increasing depth beyond that point hurts training. We also present experimental evidence that gate adaptation is key to generalisation.
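To make the unit definition concrete, the following is a minimal NumPy sketch of a DGN layer and of ReLU as its special case; the function names, shapes, and toy forward pass are illustrative assumptions, not the authors' implementation.

import numpy as np

def dgn_layer(x, W, gate):
    # One DGN layer: the pre-activation input is W @ x (inner product of the
    # layer's weights with the previous layer's output), and each unit's output
    # is that pre-activation multiplied by a gating value in [0, 1].
    pre_activation = W @ x
    return gate * pre_activation

def relu_gates(pre_activation):
    # ReLU special case: the gate is 1 when the pre-activation is positive,
    # and 0 otherwise.
    return (pre_activation > 0).astype(pre_activation.dtype)

# Toy forward pass through a 3-layer network (illustrative sizes only).
rng = np.random.default_rng(0)
x = rng.standard_normal(8)
weights = [rng.standard_normal((8, 8)) for _ in range(3)]

h = x
for W in weights:
    gate = relu_gates(W @ h)   # gates derived from the sign of the pre-activation
    h = dgn_layer(h, W, gate)  # a ReLU network expressed as a DGN

In this view, the DGN variants differ only in how the gating values are produced (and whether they adapt during training), while the pre-activation computation stays the same.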

