Understanding Measures of Uncertainty for Adversarial Example Detection

03/22/2018
by Lewis Smith, et al.

Measuring uncertainty is a promising technique for detecting adversarial examples: crafted inputs on which the model predicts an incorrect class with high confidence. But many measures of uncertainty exist, including predictive entropy and mutual information, each capturing different types of uncertainty. We study these measures, and shed light on why mutual information seems to be effective at the task of adversarial example detection. We highlight failure modes for MC dropout, a widely used approach for estimating uncertainty in deep models. This leads to an improved understanding of the drawbacks of current methods, and a proposal to improve the quality of uncertainty estimates using probabilistic model ensembles. We give illustrative experiments using MNIST to demonstrate the intuition underlying the different measures of uncertainty, as well as experiments on a real-world Kaggle dogs vs. cats classification dataset.
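The two measures named in the abstract can be sketched directly from Monte Carlo samples. Given T stochastic forward passes (e.g. from MC dropout), predictive entropy is the entropy of the averaged softmax output (total uncertainty), while mutual information is the gap between that and the average per-pass entropy, capturing disagreement between passes (epistemic uncertainty). A minimal NumPy sketch, with array shapes and the helper name being illustrative assumptions rather than the paper's code:

```python
import numpy as np

def uncertainty_measures(probs):
    """Predictive entropy and mutual information from MC samples.

    probs: array of shape (T, N, C) -- T stochastic forward passes
    (e.g. MC dropout), N inputs, C classes; each row is a softmax
    distribution. Shapes and naming are illustrative assumptions.
    """
    eps = 1e-12  # guard against log(0)
    mean_probs = probs.mean(axis=0)  # (N, C): averaged prediction

    # Predictive entropy: total uncertainty of the averaged prediction.
    predictive_entropy = -(mean_probs * np.log(mean_probs + eps)).sum(axis=1)

    # Expected entropy: average uncertainty within each individual pass.
    expected_entropy = -(probs * np.log(probs + eps)).sum(axis=2).mean(axis=0)

    # Mutual information: how much the passes disagree with one another.
    mutual_information = predictive_entropy - expected_entropy
    return predictive_entropy, mutual_information
```

The distinction matters for detection: an input where every pass confidently predicts a different class yields high mutual information, whereas an input where every pass agrees on a near-uniform distribution yields high predictive entropy but near-zero mutual information.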


