Normalization Propagation: A Parametric Technique for Removing Internal Covariate Shift in Deep Networks

03/04/2016
by Devansh Arpit, et al.

While the authors of Batch Normalization (BN) identify and address an important problem involved in training deep networks, Internal Covariate Shift, their solution has certain drawbacks. Specifically, BN depends on batch statistics for layerwise input normalization during training, which makes its estimates of the mean and standard deviation of the input (distribution) to hidden layers inaccurate at validation time because the parameter values are still shifting (especially during the initial training epochs). BN also cannot be used with a batch size of 1 during training. We address these drawbacks by proposing a non-adaptive normalization technique for removing internal covariate shift, which we call Normalization Propagation. Our approach does not depend on batch statistics; instead, it uses a data-independent parametric estimate of the mean and standard deviation in every layer, and is therefore computationally faster than BN. We exploit the observation that the pre-activation values fed into Rectified Linear Units in deep networks follow a Gaussian distribution, and that once the first- and second-order statistics of any given dataset are normalized, this normalization can be forward-propagated without recomputing approximate statistics for the hidden layers.
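
To make the propagation idea concrete, the sketch below uses the closed-form statistics of a ReLU applied to a standard normal variable (mean 1/sqrt(2*pi), variance 1/2 - 1/(2*pi)) as the data-independent constants in place of batch statistics. This is only a minimal illustration under the Gaussian pre-activation assumption stated in the abstract; the function name normprop_layer, the unit-norm weight scaling, and the omission of learnable scale and shift parameters (and of convolutional layers) are simplifications introduced here, not the paper's exact formulation.

import numpy as np

# Closed-form statistics of ReLU applied to a standard normal variable:
# if z ~ N(0, 1), then E[max(0, z)] = 1/sqrt(2*pi) and
# Var[max(0, z)] = 1/2 - 1/(2*pi). These data-independent constants
# replace the batch statistics that BN would have to estimate.
RELU_MEAN = 1.0 / np.sqrt(2.0 * np.pi)
RELU_STD = np.sqrt(0.5 - 1.0 / (2.0 * np.pi))

def normprop_layer(x, W, b):
    """One fully connected ReLU layer with propagated normalization (sketch).

    Assumes the incoming activations x are approximately zero-mean and
    unit-variance per unit, as the propagation argument requires.
    """
    # Normalize each weight row to unit L2 norm so that the pre-activation
    # of a standardized input stays approximately standard normal.
    W_hat = W / np.linalg.norm(W, axis=1, keepdims=True)
    pre = x @ W_hat.T + b
    post = np.maximum(pre, 0.0)            # ReLU
    # Re-standardize with the parametric constants instead of batch statistics.
    return (post - RELU_MEAN) / RELU_STD

# Usage: standardize the raw data once, then stack layers without ever
# touching batch statistics again.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 32))
x = (x - x.mean(axis=0)) / x.std(axis=0)   # dataset-level standardization
h = normprop_layer(x, rng.normal(size=(64, 32)), np.zeros(64))
print(h.mean(), h.std())                   # roughly 0 and 1

Because the constants are fixed, the same normalization is applied at training and validation time and works even with a single example per step, which is exactly the regime where batch statistics are unavailable.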


Related research

05/21/2015  Why Regularized Auto-Encoders learn Sparse Representation?
03/28/2018  Normalization of Neural Networks using Analytic Variance Propagation
05/29/2018  How Does Batch Normalization Help Optimization? (No, It Is Not About Internal Covariate Shift)
12/07/2017  Solving internal covariate shift in deep learning with linked neurons
12/30/2022  Batchless Normalization: How to Normalize Activations with just one Instance in Memory
10/10/2020  Double Forward Propagation for Memorized Batch Normalization
05/18/2018  Batch Normalization in the final layer of generative networks
