Generalized Dropout

11/21/2016
by Suraj Srinivas, et al.

Deep Neural Networks often require good regularizers to generalize well. Dropout is one such regularizer that is widely used among Deep Learning practitioners. Recent work has shown that Dropout can also be viewed as performing Approximate Bayesian Inference over the network parameters. In this work, we generalize this notion and introduce a rich family of regularizers that we call Generalized Dropout. One set of methods in this family, called Dropout++, is a version of Dropout with trainable parameters; classical Dropout emerges as a special case of this method. Another member of this family selects the width of neural network layers. Experiments show that these methods improve generalization performance over Dropout.
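As a rough illustration of the "trainable parameters" idea, the sketch below (a hypothetical PyTorch module, not the authors' exact formulation) gives each unit a learnable retain probability and samples a Bernoulli mask from it during training; fixing all probabilities to a constant recovers classical Dropout. The module name DropoutPP, the sigmoid parameterization, and the straight-through gradient trick are assumptions made for illustration.

```python
import math
import torch
import torch.nn as nn

class DropoutPP(nn.Module):
    """Hypothetical sketch: Dropout with one trainable retain probability per unit."""

    def __init__(self, num_features, init_retain=0.5):
        super().__init__()
        # Unconstrained logits; a sigmoid maps them to retain probabilities in (0, 1).
        init_logit = math.log(init_retain / (1.0 - init_retain))
        self.logits = nn.Parameter(torch.full((num_features,), init_logit))

    def forward(self, x):
        p = torch.sigmoid(self.logits)  # per-unit retain probability
        if self.training:
            # Sample a Bernoulli mask for every element of the batch.
            mask = (torch.rand_like(x) < p).to(x.dtype)
            # Straight-through estimator: the forward pass uses the sampled mask,
            # while gradients reach p through the added (p - p.detach()) term.
            mask = mask + p - p.detach()
            return x * mask
        # At test time, multiply by the expected mask, as in standard Dropout scaling.
        return x * p
```

A layer such as nn.Linear followed by DropoutPP(hidden_dim) could then be trained end to end, with the retain probabilities updated by the same optimizer as the weights.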

Related research

04/25/2019
Survey of Dropout Methods for Deep Neural Networks
Dropout methods are a family of stochastic techniques used in neural net...

05/23/2019
Ensemble Model Patching: A Parameter-Efficient Variational Bayesian Neural Network
Two main obstacles preventing the widespread adoption of variational Bay...

11/04/2016
Information Dropout: Learning Optimal Representations Through Noisy Computation
The cross-entropy loss commonly used in deep learning is closely related...

05/23/2018
Pushing the bounds of dropout
We show that dropout training is best understood as performing MAP estim...

12/15/2016
Improving Neural Network Generalization by Combining Parallel Circuits with Dropout
In an attempt to solve the lengthy training times of neural networks, we...

12/04/2017
Data Dropout in Arbitrary Basis for Deep Network Regularization
An important problem in training deep networks with high capacity is to ...

04/17/2019
Sparseout: Controlling Sparsity in Deep Networks
Dropout is commonly used to help reduce overfitting in deep neural netwo...
