
Survey of Dropout Methods for Deep Neural Networks

04/25/2019
by Alex Labach et al.

Dropout methods are a family of stochastic techniques used in neural network training or inference that have generated significant research interest and are widely used in practice. They have been successfully applied to neural network regularization, model compression, and measuring the uncertainty of neural network outputs. While originally formulated for dense neural network layers, recent advances have made dropout methods applicable to convolutional and recurrent neural network layers as well. This paper summarizes the history of dropout methods, their various applications, and current areas of research interest. Important proposed methods are described in additional detail.
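To make the core operation concrete, the following is a minimal NumPy sketch of standard (inverted) dropout together with the Monte Carlo dropout procedure used for uncertainty estimation. The layer shape, drop rate, and number of stochastic passes are illustrative assumptions, not details taken from the survey.

# Minimal sketch of inverted dropout and Monte Carlo dropout (NumPy).
# The toy layer, drop rate p=0.5, and 100 MC samples are illustrative
# assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p=0.5, training=True):
    """Inverted dropout: zero each unit with probability p and rescale
    the survivors by 1/(1-p) so the expected activation is unchanged."""
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

# Toy fully connected layer to demonstrate usage.
W = rng.normal(size=(4, 3))
x = rng.normal(size=4)

# Training-time forward pass: activations are randomly masked.
h = np.maximum(dropout(x, p=0.5, training=True) @ W, 0.0)

# Monte Carlo dropout: keep dropout active at inference and average
# several stochastic forward passes; the spread across passes serves
# as a simple uncertainty estimate for the output.
samples = np.stack([
    np.maximum(dropout(x, p=0.5, training=True) @ W, 0.0)
    for _ in range(100)
])
mean, std = samples.mean(axis=0), samples.std(axis=0)

At inference, standard dropout is simply disabled (training=False), whereas Monte Carlo dropout deliberately leaves the masking on and aggregates repeated passes.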


Related research

Generalized Dropout (11/21/2016)
Deep Neural Networks often require good regularizers to generalize well....

Guided Dropout (12/10/2018)
Dropout is often used in deep neural networks to prevent over-fitting. C...

Geometric Uncertainty in Patient-Specific Cardiovascular Modeling with Convolutional Dropout Networks (09/16/2020)
We propose a novel approach to generate samples from the conditional dis...

Regularization of Deep Neural Networks with Spectral Dropout (11/23/2017)
The big breakthrough on the ImageNet challenge in 2012 was partially due...

Dropout Prediction Variation Estimation Using Neuron Activation Strength (10/13/2021)
It is well-known DNNs would generate different prediction results even g...

Information Geometry of Dropout Training (06/22/2022)
Dropout is one of the most popular regularization techniques in neural n...

Neural Network Compression Framework for fast model inference (02/20/2020)
In this work we present a new framework for neural networks compression ...