Dropout Inference with Non-Uniform Weight Scaling

04/27/2022
by Zhaoyuan Yang, et al.

Dropout has been used extensively as a regularizer to prevent overfitting when training neural networks. During training, units and their connections are randomly dropped, which can be viewed as sampling many different submodels from the original model. At test time, weight scaling and Monte Carlo approximation are two widely applied approaches to approximating the output. Both approaches work well in practice when all submodels are low-bias, complex learners. In this work, however, we demonstrate scenarios in which some submodels behave more like high-bias models, and a non-uniform weight scaling is a better approximation for inference.
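
The abstract describes the three inference schemes only at a high level, so the sketch below is a hedged PyTorch comparison of (1) standard weight scaling, (2) Monte Carlo averaging over sampled submodels, and (3) a purely illustrative non-uniform per-unit scaling. The architecture, dropout rate, and per-unit factors are assumptions for illustration, not the paper's actual method.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

p = 0.5  # dropout rate used during training (assumed for illustration)
model = nn.Sequential(
    nn.Linear(10, 64),
    nn.ReLU(),
    nn.Dropout(p),
    nn.Linear(64, 1),
)

x = torch.randn(8, 10)

# 1) Weight scaling: a single deterministic pass with dropout disabled.
#    PyTorch's nn.Dropout is "inverted" (kept activations are rescaled by
#    1/(1-p) during training), so eval() already implements the usual
#    "scale weights by the keep probability" approximation.
model.eval()
with torch.no_grad():
    y_weight_scaling = model(x)

# 2) Monte Carlo approximation: keep dropout active at test time and average
#    many stochastic forward passes, each sampling a different submodel.
model.train()
with torch.no_grad():
    y_mc = torch.stack([model(x) for _ in range(100)]).mean(dim=0)

# 3) Non-uniform scaling (illustrative only): under inverted dropout the
#    uniform rule amounts to a per-unit test-time factor of 1.0. Here each
#    hidden unit gets its own factor, e.g. down-weighting units whose
#    submodels behave more like high-bias learners.
scale = torch.ones(64)   # uniform rule: every hidden unit scaled by 1.0
scale[:8] = 0.8          # hypothetical choice for the first 8 units
model.eval()
with torch.no_grad():
    h = torch.relu(model[0](x)) * scale  # first layer + ReLU, per-unit scaling
    y_nonuniform = model[3](h)           # final linear layer

print(y_weight_scaling.mean().item(),
      y_mc.mean().item(),
      y_nonuniform.mean().item())
```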


