A Bayesian encourages dropout

12/22/2014
by Shin-ichi Maeda, et al.

Dropout is one of the key techniques for preventing learning from overfitting. It has been explained as working like a modified form of L2 regularization. Here, we shed light on dropout from a Bayesian standpoint. The Bayesian interpretation enables us to optimize the dropout rate, which benefits both the learning of the weight parameters and prediction after learning. The experimental results also support optimizing the dropout rate.
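
As context for the dropout mechanism and the model-averaging view that a Bayesian reading builds on, here is a minimal NumPy sketch: a standard inverted-dropout layer and Monte Carlo prediction that averages over sampled dropout masks. The network sizes, toy data, and helper names (dropout_forward, forward, mc_predict) are illustrative assumptions, not the paper's implementation, and the sketch does not include the paper's optimization of the dropout rate itself.

import numpy as np

# Illustrative sketch only: standard dropout plus Monte Carlo averaging
# over sampled masks. This is NOT the paper's Bayesian dropout-rate
# optimization; sizes, data, and names are assumptions for the example.

rng = np.random.default_rng(0)

def dropout_forward(x, p, rng):
    """Inverted dropout: keep each unit with probability 1 - p."""
    mask = rng.random(x.shape) >= p          # Bernoulli keep-mask
    return x * mask / (1.0 - p)              # rescale to preserve the expectation

def forward(x, W1, W2, p, rng, train=True):
    """Tiny two-layer net with dropout applied to the hidden layer."""
    h = np.maximum(0.0, x @ W1)              # ReLU hidden layer
    if train:
        h = dropout_forward(h, p, rng)
    return h @ W2

def mc_predict(x, W1, W2, p, rng, n_samples=100):
    """Average predictions over sampled dropout masks (model averaging)."""
    preds = [forward(x, W1, W2, p, rng, train=True) for _ in range(n_samples)]
    return np.mean(preds, axis=0)

# Toy usage: random weights and a single input vector.
W1 = rng.normal(scale=0.1, size=(8, 16))
W2 = rng.normal(scale=0.1, size=(16, 4))
x = rng.normal(size=(1, 8))

print(mc_predict(x, W1, W2, p=0.5, rng=rng))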
