The Limitations of Model Uncertainty in Adversarial Settings

12/06/2018
by Kathrin Grosse, et al.

Machine learning models are vulnerable to adversarial examples: minor perturbations to input samples intended to deliberately cause misclassification. Many defenses have led to an arms race; we thus study a promising recent trend in this setting, Bayesian uncertainty measures. These measures allow a classifier to provide principled confidence and uncertainty estimates for an input, where the latter reflects how typical the input is with respect to the training data. We focus on Gaussian processes (GPs), classifiers that provide such principled uncertainty and confidence measures. Using correctly classified benign data as a baseline, we find that the GP's intrinsic uncertainty and confidence deviate both for misclassified benign samples and for misclassified adversarial examples. We therefore introduce high-confidence-low-uncertainty (HCLU) adversarial examples: adversarial examples crafted to maximize GP confidence while minimizing GP uncertainty. Visual inspection shows that HCLU adversarial examples are malicious and resemble the original class rather than the target class. HCLU adversarial examples also transfer to other classifiers. We focus on transferability to other algorithms that provide uncertainty measures, and find that a Bayesian neural network confidently misclassifies HCLU adversarial examples. We conclude that uncertainty and confidence, even in the Bayesian sense, can be circumvented by both white-box and black-box attackers.
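
The crafting idea described in the abstract lends itself to a compact optimization sketch. What follows is a minimal, hypothetical illustration, not the authors' exact formulation: a GP regressor fit on labels in {-1, +1} stands in as a surrogate for GP classification, and a bounded optimizer perturbs a benign point within an L-infinity ball so that the predictive mean (a confidence proxy) moves toward the target class while the predictive standard deviation (the uncertainty) is penalized. The surrogate setup and all names (hclu_objective, lam, eps) are our assumptions for illustration only.

```python
# Hypothetical sketch of HCLU-style crafting on a GP surrogate; the objective,
# weighting, and perturbation budget are illustrative assumptions, not the
# paper's exact constrained formulation.
import numpy as np
from scipy.optimize import minimize
from sklearn.datasets import make_moons
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy binary problem with labels mapped to {-1, +1}.
X, y = make_moons(n_samples=200, noise=0.1, random_state=0)
y = 2 * y - 1
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-2)
gp.fit(X, y)

def hclu_objective(x_flat, target_sign, lam=1.0):
    """Negative target-class confidence plus weighted predictive uncertainty."""
    x = x_flat.reshape(1, -1)
    mean, std = gp.predict(x, return_std=True)
    # Push the predictive mean toward the target class (high confidence)
    # while keeping the predictive standard deviation low (low uncertainty).
    return -target_sign * mean[0] + lam * std[0]

# Start from a benign sample of class -1 and push it toward class +1,
# constrained to an L-infinity ball of radius eps around the original point.
x0 = X[y == -1][0]
eps = 0.5
bounds = [(v - eps, v + eps) for v in x0]
res = minimize(hclu_objective, x0, args=(+1,), bounds=bounds, method="L-BFGS-B")

mean, std = gp.predict(res.x.reshape(1, -1), return_std=True)
print(f"adversarial point: {res.x}, confidence proxy: {mean[0]:.3f}, "
      f"uncertainty: {std[0]:.3f}")
```

A successful run ends with a point whose confidence proxy is close to the target label while its uncertainty stays near that of benign data, which is what makes such examples hard to flag with uncertainty-based detection.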


