
Adaptive Network Sparsification via Dependent Variational Beta-Bernoulli Dropout

05/28/2018
by Juho Lee, et al.

While variational dropout approaches have been shown to be effective for network sparsification, they are still suboptimal in the sense that they set the dropout rate for each neuron without considering the input data. With such input-independent dropout, each neuron evolves to be generic across inputs, which makes it difficult to sparsify networks without accuracy loss. To overcome this limitation, we propose adaptive variational dropout whose probabilities are drawn from a sparsity-inducing beta-Bernoulli prior. This allows each neuron to evolve to be either generic or specific to certain inputs, or to be dropped altogether. Such input-adaptive, sparsity-inducing dropout allows the resulting network to tolerate a larger degree of sparsity without losing its expressive power, by removing redundancies among features. We validate our dependent variational beta-Bernoulli dropout on multiple public datasets, on which it obtains significantly more compact networks than baseline methods, with consistent accuracy improvements over the base networks.
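To make the idea concrete, the sketch below shows one way an input-dependent dropout layer of this kind could look in PyTorch (the paper does not prescribe a framework). The class name, the single linear gate network, and the relaxed-Bernoulli temperature are illustrative assumptions, not the authors' implementation; the point is only that per-neuron keep probabilities combine a global, learnable component with an input-dependent modulation.

```python
# Minimal, hypothetical sketch of input-dependent (dependent beta-Bernoulli style)
# dropout. Illustrative only; not the authors' code.
import torch
import torch.nn as nn

class DependentBetaBernoulliDropout(nn.Module):
    def __init__(self, num_features, temperature=0.1):
        super().__init__()
        # Global (input-independent) keep logits per neuron, standing in for the
        # learned posterior over per-neuron usage.
        self.keep_logit = nn.Parameter(torch.zeros(num_features))
        # Small gate network that shifts the keep probability per input,
        # giving the input-adaptive ("dependent") part of the dropout.
        self.gate = nn.Linear(num_features, num_features)
        self.temperature = temperature

    def forward(self, x):
        # Input-dependent keep logits: global term plus per-input modulation.
        logits = self.keep_logit + self.gate(x)
        if self.training:
            # Relaxed (concrete) Bernoulli sample so gradients flow through z.
            u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
            z = torch.sigmoid(
                (logits + torch.log(u) - torch.log(1 - u)) / self.temperature
            )
        else:
            # At test time, use the expected gate value.
            z = torch.sigmoid(logits)
        return x * z
```

Under this kind of scheme, neurons whose expected keep probability stays near zero across inputs contribute almost nothing and can be pruned, which is how input-adaptive, sparsity-inducing dropout translates into a more compact network.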


Related research

11/01/2018 · Variational Dropout via Empirical Bayes
We study the Automatic Relevance Determination procedure applied to deep...

11/18/2016 · Compacting Neural Network Classifiers via Dropout Training
We introduce dropout compaction, a novel method for training feed-forward...

12/21/2017 · DropMax: Adaptive Stochastic Softmax
We propose DropMax, a stochastic version of softmax classifier which at...

02/06/2016 · Improved Dropout for Shallow and Deep Learning
Dropout has been witnessed with great success in training deep neural networks...

10/13/2021 · Dropout Prediction Variation Estimation Using Neuron Activation Strength
It is well-known DNNs would generate different prediction results even...

03/04/2018 · Deep Network Regularization via Bayesian Inference of Synaptic Connectivity
Deep neural networks (DNNs) often require good regularizers to generalize...