An empirical analysis of dropout in piecewise linear networks

12/21/2013
by David Warde-Farley, et al.

The recently introduced dropout training criterion for neural networks has been the subject of much attention due to its simplicity and remarkable effectiveness as a regularizer, as well as its interpretation as a training procedure for an exponentially large ensemble of networks that share parameters. In this work we empirically investigate several questions related to the efficacy of dropout, specifically as it concerns networks employing the popular rectified linear activation function. We assess the quality of the test-time weight-scaling inference procedure by evaluating the geometric average exactly in small models, and we compare the performance of the geometric mean to the arithmetic mean more commonly employed by ensemble techniques. We explore the effect of tied weights on the ensemble interpretation by training ensembles of masked networks without tied weights. Finally, we investigate an alternative criterion based on a biased estimator of the maximum likelihood ensemble gradient.
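To make the quantities being compared concrete, the sketch below (ours, not the authors' code) enumerates every dropout mask of a tiny one-hidden-layer ReLU network and compares the weight-scaled prediction against the exact geometric and arithmetic ensemble means. All sizes, parameter values, and the input vector are illustrative assumptions; only the three averaging procedures follow the abstract.

import itertools
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 3           # small enough to enumerate all 2^8 masks
W1 = rng.normal(scale=0.5, size=(n_in, n_hid))   # illustrative weights
W2 = rng.normal(scale=0.5, size=(n_hid, n_out))
x = rng.normal(size=n_in)              # a single illustrative input
p = 0.5                                # retention probability; with p = 0.5
                                       # every mask is equally likely, so an
                                       # unweighted average is the exact expectation

def forward(mask, scale=1.0):
    """Forward pass with a binary dropout mask (or scaled activations)."""
    h = np.maximum(0.0, x @ W1) * mask * scale   # ReLU hidden layer + dropout
    logits = h @ W2
    e = np.exp(logits - logits.max())
    return e / e.sum()                 # softmax output distribution

# Test-time weight-scaling inference: keep all units, scale activations by p.
scaled = forward(np.ones(n_hid), scale=p)

# Exact ensemble: enumerate all 2^n_hid masks.
log_probs, probs = [], []
for bits in itertools.product([0.0, 1.0], repeat=n_hid):
    out = forward(np.array(bits))
    log_probs.append(np.log(out))
    probs.append(out)

geo = np.exp(np.mean(log_probs, axis=0))
geo /= geo.sum()                       # renormalised geometric mean
arith = np.mean(probs, axis=0)         # arithmetic mean

print("weight-scaling:", np.round(scaled, 4))
print("geometric mean:", np.round(geo, 4))
print("arithmetic mean:", np.round(arith, 4))

Because dropout here is applied only to the layer feeding the softmax, the weight-scaled prediction coincides with the renormalised geometric mean up to numerical precision (the mean of the logits over masks equals the p-scaled logits); with dropout at multiple layers of a deep network this equality breaks down, and the quality of the approximation is what the paper measures empirically.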


