Dropout Injection at Test Time for Post Hoc Uncertainty Quantification in Neural Networks

02/06/2023
by   Emanuele Ledda, et al.

Among Bayesian methods, Monte Carlo dropout provides principled tools for evaluating the epistemic uncertainty of neural networks. Its popularity recently led to seminal works that proposed activating the dropout layers only during inference for evaluating uncertainty. This approach, which we call dropout injection, provides clear benefits over its traditional counterpart (which we call embedded dropout), since it yields a post hoc uncertainty measure for any existing network previously trained without dropout, avoiding an additional, time-consuming training process. Unfortunately, no previous work has compared injected and embedded dropout; we therefore provide the first thorough investigation, focusing on regression problems. The main contribution of our work is to provide guidelines on the effective use of injected dropout so that it can be a practical alternative to the current use of embedded dropout. In particular, we show that its effectiveness strongly relies on a suitable scaling of the corresponding uncertainty measure, and we discuss the trade-off between negative log-likelihood and calibration error as a function of the scale factor. Experimental results on UCI data sets and crowd counting benchmarks support our claim that dropout injection can be an effective and competitive post hoc uncertainty quantification technique.
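To make the idea concrete, the sketch below illustrates dropout injection on a hypothetical PyTorch regressor trained without dropout: dropout layers are inserted post hoc, kept active at inference, and the sample standard deviation over Monte Carlo forward passes is multiplied by a scale factor to form the uncertainty measure. The model architecture, the helper names (inject_dropout, mc_dropout_predict), the dropout rate, and the scale value are illustrative assumptions, not the authors' implementation or recommended settings.

```python
import torch
import torch.nn as nn

# Hypothetical pretrained regressor, trained WITHOUT dropout.
model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))

def inject_dropout(module, p=0.1):
    """Return a copy of a Sequential model with dropout inserted after each ReLU."""
    layers = []
    for layer in module:
        layers.append(layer)
        if isinstance(layer, nn.ReLU):
            layers.append(nn.Dropout(p))
    return nn.Sequential(*layers)

def mc_dropout_predict(model, x, n_samples=100, scale=1.0):
    """Monte Carlo forward passes with dropout active at test time."""
    model.train()  # keeps dropout layers stochastic; no gradient updates are made
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    mean = preds.mean(dim=0)
    # Scaled standard deviation as the post hoc epistemic uncertainty measure.
    uncertainty = scale * preds.std(dim=0)
    return mean, uncertainty

x = torch.randn(16, 8)
mc_model = inject_dropout(model, p=0.1)
mean, unc = mc_dropout_predict(mc_model, x, n_samples=50, scale=1.0)
print(mean.shape, unc.shape)
```

In this reading of the paper's message, the scale factor is the key free parameter: it would be tuned on held-out data to trade off negative log-likelihood against calibration error, rather than left at 1.0.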
