On Fast Dropout and its Applicability to Recurrent Networks

11/04/2013
by   Justin Bayer, et al.

Recurrent Neural Networks (RNNs) are rich models for the processing of sequential data. Recent work on advancing the state of the art has focused on the optimization or modelling of RNNs, mostly motivated by addressing the problems of vanishing and exploding gradients. The control of overfitting has received considerably less attention. This paper contributes to that area by analyzing fast dropout, a recent regularization method for generalized linear models and neural networks, from a back-propagation-inspired perspective. We show that fast dropout implements a quadratic form of an adaptive, per-parameter regularizer, which rewards large weights in the light of underfitting, penalizes them for overconfident predictions, and vanishes at minima of the unregularized training loss. The derivatives of that regularizer are based exclusively on the training error signal. One consequence is the absence of a global weight attractor, which is particularly appealing for RNNs, since their dynamics are not biased towards a certain regime. We positively test the hypothesis that this improves the performance of RNNs on four musical data sets.
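For orientation, below is a minimal sketch of the fast dropout approximation the paper analyzes: instead of sampling Bernoulli dropout masks, each pre-activation is treated as a Gaussian whose mean and variance follow from the dropout rate. This is a hand-written NumPy illustration under that assumption, not the authors' code; the function and variable names are hypothetical.

import numpy as np

def fast_dropout_linear(x, W, b, p=0.5):
    """Gaussian (fast dropout) approximation of a dropped-out linear layer.

    With inputs kept independently with probability p, the pre-activation
    sum is approximated by a Gaussian with
        mean     = p * (W @ x) + b
        variance = p * (1 - p) * (W**2 @ x**2)
    so no dropout masks need to be sampled.
    """
    mean = p * (W @ x) + b
    var = p * (1.0 - p) * (W ** 2 @ x ** 2)
    return mean, var

# Illustrative usage: a layer with 4 inputs and 3 units.
rng = np.random.default_rng(0)
x = rng.normal(size=4)
W = rng.normal(size=(3, 4))
b = np.zeros(3)
mu, sigma2 = fast_dropout_linear(x, W, b, p=0.5)

The regularization effect discussed in the abstract arises from the variance term, which depends on the squared weights and therefore acts as an adaptive, per-parameter penalty rather than a fixed global weight attractor.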

