Ramifications of Approximate Posterior Inference for Bayesian Deep Learning in Adversarial and Out-of-Distribution Settings

09/03/2020
by John Mitros, et al.

Deep neural networks have been successful in diverse discriminative classification tasks, yet they are often poorly calibrated, assigning high probability to misclassified predictions. This undermines the trustworthiness and accountability of such models when they are deployed in real applications, where predictions are evaluated based on their confidence scores. Existing solutions point to the benefits of combining deep neural networks with Bayesian inference to quantify uncertainty over the models' predictions for ambiguous datapoints. In this work we validate and test the efficacy of likelihood-based models on the task of out-of-distribution (OoD) detection. Across different datasets and metrics we show that Bayesian deep learning models only occasionally and marginally outperform conventional neural networks, and that when there is minimal overlap between in- and out-of-distribution classes, even the best models exhibit a reduction in AUC scores for detecting OoD data. Preliminary investigations indicate an inherent role of bias arising from choices of initialisation, architecture, or activation functions. We hypothesise that the sensitivity of neural networks to unseen inputs is a multi-factor phenomenon rooted in these architectural design choices and often amplified by the curse of dimensionality. Furthermore, we study the effect of adversarial-noise-resistance methods on in- and out-of-distribution performance, and investigate the adversarial noise robustness of Bayesian deep learners.
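The abstract describes the evaluation only at a high level. The sketch below illustrates one common instantiation of it, not the authors' actual code: MC dropout stands in for approximate posterior inference, predictive entropy serves as the OoD score, AUROC is the detection metric, and a single-step FGSM perturbation acts as the adversarial-noise probe. The names `model`, `x_in`, `x_out`, and `y` are placeholders for a trained PyTorch classifier containing dropout layers and for batches of in-/out-of-distribution inputs.

```python
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.metrics import roc_auc_score

def enable_dropout(model):
    """Put the model in eval mode but keep torch.nn.Dropout layers
    stochastic, so repeated forward passes act as approximate
    posterior samples (MC dropout)."""
    model.eval()
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()

def predictive_entropy(model, x, n_samples=20):
    """Average the softmax outputs over MC-dropout passes and return
    the predictive entropy H[p] = -sum_c p_c log p_c per input
    (higher entropy => more uncertain)."""
    enable_dropout(model)
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
        ).mean(0)
    return -(probs * probs.clamp_min(1e-12).log()).sum(-1)

def ood_auroc(model, x_in, x_out):
    """AUROC for OoD detection: in-distribution inputs labelled 0,
    OoD inputs labelled 1, predictive entropy as the score."""
    s_in = predictive_entropy(model, x_in).cpu().numpy()
    s_out = predictive_entropy(model, x_out).cpu().numpy()
    labels = np.concatenate([np.zeros_like(s_in), np.ones_like(s_out)])
    return roc_auc_score(labels, np.concatenate([s_in, s_out]))

def fgsm(model, x, y, eps=0.03):
    """Single-step FGSM perturbation, a quick probe for the kind of
    adversarial-noise robustness study the abstract describes."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()
```

Under this setup, `ood_auroc(model, fgsm(model, x_in, y), x_out)` would measure how an adversarial perturbation of the in-distribution data shifts detection performance; other uncertainty scores (e.g. mutual information) or approximate-posterior methods could be swapped in without changing the evaluation loop.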


Related research

11/22/2017 · Adversarial Phenomenon in the Eyes of Bayesian Deep Learning
10/05/2020 · Detecting Misclassification Errors in Neural Networks with a Gaussian Process Model
12/29/2022 · Do Bayesian Variational Autoencoders Know What They Don't Know?
11/29/2018 · Bayesian Adversarial Spheres: Bayesian Inference and Adversarial Examples in a Noiseless Setting
07/08/2020 · URSABench: Comprehensive Benchmarking of Approximate Bayesian Inference Methods for Deep Neural Networks
06/01/2023 · Initial Guessing Bias: How Untrained Networks Favor Some Classes
07/15/2021 · On the Importance of Regularisation Auxiliary Information in OOD Detection
