Adjusting for Dropout Variance in Batch Normalization and Weight Initialization

07/08/2016
by Dan Hendrycks et al.

We show how to adjust for the variance introduced by dropout with corrections to weight initialization and Batch Normalization, yielding higher accuracy. Though dropout can preserve the expected input to a neuron between train and test, the variance of that input differs. We therefore propose a new weight initialization that corrects for the influence of the dropout rate and of an arbitrary nonlinearity on variance through simple corrective scalars. Since Batch Normalization trained with dropout estimates the variance of a layer's incoming distribution with some inputs dropped, that variance also differs between train and test. After training a network with Batch Normalization and dropout, we simply update Batch Normalization's variance moving averages with dropout turned off and obtain state-of-the-art results on CIFAR-10 and CIFAR-100 without data augmentation.
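To make the two corrections concrete, here is a minimal PyTorch sketch. The choice of PyTorch, the helper names dropout_corrected_init_ and refresh_bn_stats, the sqrt(keep_prob) corrective scalar applied on top of a He-style sqrt(2 / fan_in) standard deviation for a ReLU layer, and a data loader yielding (inputs, targets) pairs are all illustrative assumptions, not the paper's exact formulas or code.

import math
import torch
import torch.nn as nn

def dropout_corrected_init_(linear: nn.Linear, keep_prob: float) -> None:
    # He-style standard deviation sqrt(2 / fan_in) for a ReLU layer, shrunk by
    # sqrt(keep_prob) so the 1/keep_prob variance inflation from inverted
    # dropout is cancelled at initialization (illustrative corrective scalar,
    # not necessarily the paper's exact expression).
    std = math.sqrt(2.0 / linear.in_features) * math.sqrt(keep_prob)
    nn.init.normal_(linear.weight, mean=0.0, std=std)
    if linear.bias is not None:
        nn.init.zeros_(linear.bias)

@torch.no_grad()
def refresh_bn_stats(model: nn.Module, loader, device="cpu") -> None:
    # After training, re-estimate BatchNorm's running mean and variance with
    # dropout switched off, so the statistics used at test time describe the
    # dropout-free activations the network actually sees at test time.
    model.train()                                    # BN layers update running stats
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.eval()                                 # dropout acts as the identity
        elif isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.reset_running_stats()                  # start the moving averages fresh
            m.momentum = None                        # cumulative average over this pass
    for x, _ in loader:                              # assumes (inputs, targets) batches
        model(x.to(device))
    model.eval()

Setting momentum to None makes the refreshed BatchNorm statistics a cumulative average over whatever data is streamed through, rather than an exponential moving average, which suits the goal of estimating the dropout-free variance as directly as possible.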

Related research

12/07/2021  Variance-Aware Weight Initialization for Point Convolutional Neural Networks
01/16/2018  Understanding the Disharmony between Dropout and Batch Normalization by Variance Shift
02/13/2023  How to Use Dropout Correctly on Residual Networks with Batch Normalization
05/15/2019  Rethinking the Usage of Batch Normalization and Dropout in the Training of Deep Neural Networks
08/24/2021  Improving Generalization of Batch Whitening by Convolutional Unit Optimization
08/24/2019  Don't ignore Dropout in Fully Convolutional Networks
08/02/2018  Normalization Before Shaking Toward Learning Symmetrically Distributed Representation Without Margin in Speech Emotion Recognition
