Survey of Dropout Methods for Deep Neural Networks

04/25/2019
by Alex Labach, et al.

Dropout methods are a family of stochastic techniques used in neural network training and inference that have generated significant research interest and are widely used in practice. They have been successfully applied to neural network regularization, model compression, and measuring the uncertainty of neural network outputs. While originally formulated for dense neural network layers, recent advances make dropout methods applicable to convolutional and recurrent layers as well. This paper summarizes the history of dropout methods, their various applications, and current areas of research interest. Important proposed methods are described in additional detail.
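For context, here is a minimal NumPy sketch of standard (inverted) dropout and its Monte Carlo use for uncertainty estimation, two of the applications named in the abstract. The function names, toy network, and dropout rate are illustrative assumptions, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p=0.5, training=True):
    # Inverted dropout: zero each unit with probability p during training and
    # scale survivors by 1/(1-p), so no rescaling is needed at inference.
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p  # keep each unit with probability 1 - p
    return x * mask / (1.0 - p)

# Toy two-layer network with dropout on the hidden layer. The weights are
# random stand-ins; in practice they would be learned.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 1))

def forward(x, training):
    h = np.maximum(x @ W1, 0.0)               # ReLU hidden layer
    h = dropout(h, p=0.5, training=training)  # stochastic mask when training
    return h @ W2

# Monte Carlo dropout for uncertainty: keep dropout active at test time and
# aggregate many stochastic forward passes; the spread across samples gives a
# simple estimate of predictive uncertainty.
x = rng.normal(size=(1, 8))
samples = np.stack([forward(x, training=True) for _ in range(100)])
print("mean:", float(samples.mean()), "std (uncertainty):", float(samples.std()))
```

Calling forward with training=False gives the deterministic pass used at inference when dropout serves purely as a regularizer; keeping training=True at test time, as above, is the Monte Carlo dropout recipe for uncertainty estimation.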


Related research

Generalized Dropout (11/21/2016)
Deep Neural Networks often require good regularizers to generalize well....

Guided Dropout (12/10/2018)
Dropout is often used in deep neural networks to prevent over-fitting. C...

Sparsified Model Zoo Twins: Investigating Populations of Sparsified Neural Network Models (04/26/2023)
With growing size of Neural Networks (NNs), model sparsification to redu...

Geometric Uncertainty in Patient-Specific Cardiovascular Modeling with Convolutional Dropout Networks (09/16/2020)
We propose a novel approach to generate samples from the conditional dis...

Regularization of Deep Neural Networks with Spectral Dropout (11/23/2017)
The big breakthrough on the ImageNet challenge in 2012 was partially due...

Dropout Prediction Variation Estimation Using Neuron Activation Strength (10/13/2021)
It is well-known DNNs would generate different prediction results even g...

Information Geometry of Dropout Training (06/22/2022)
Dropout is one of the most popular regularization techniques in neural n...
