When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time

12/18/2017
by David J. Miller, et al.

A significant threat to the recent, wide deployment of machine learning-based systems, including deep neural networks (DNNs), across a host of application domains is adversarial learning (Adv-L) attacks. The main focus here is on exploits applied against (DNN-based) classifiers at test time. While much work has focused on devising attacks that perturb a test pattern (e.g., an image) in a human-imperceptible way and yet still induce a change in the classifier's decision, there is a relative paucity of work on defending against such attacks. Moreover, our thesis is that most existing defense approaches "miss the mark": they seek to robustify the classifier so that it makes "correct" decisions on perturbed patterns. While we make explicit the motivation of such approaches (unlike some prior works), we argue that it is generally much more actionable to detect the attack than to "correctly classify" in the face of it. We hypothesize that, even if human-imperceptible, adversarial perturbations are machine-detectable. We propose a purely unsupervised anomaly detector (AD), based on suitable (null hypothesis) density models for the different DNN layers and a novel Kullback-Leibler "distance" AD test statistic. Tested on the MNIST and CIFAR10 image databases under the prominent attack strategy proposed by Goodfellow et al. [5], our approach achieves compelling ROC AUCs for attack detection: 0.992 on MNIST, 0.957 on noisy MNIST images, and 0.924 on CIFAR10. We also show that a simple detector that counts the number of white regions in the image achieves 0.97 AUC in detecting the attack on MNIST proposed by Papernot et al. [12].
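To make the detection idea concrete, the following is a minimal sketch of a density-based test-time detector in the spirit described above. It assumes per-class Gaussian mixture null models fit to clean activations from one chosen DNN layer, and uses a Kullback-Leibler divergence between the network's softmax posterior and a posterior implied by those null densities as the anomaly statistic. The layer choice, mixture order, uniform class prior, and any threshold are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a KL-based anomaly detector over DNN layer activations (illustrative only).
# Assumptions: one GMM null model per class on clean activations from a single layer;
# detection statistic = KL(softmax posterior || density-based posterior).

import numpy as np
from sklearn.mixture import GaussianMixture


def fit_null_models(features_by_class, n_components=5, seed=0):
    """Fit one GMM per class on clean (attack-free) layer activations."""
    return {
        c: GaussianMixture(n_components=n_components, covariance_type="diag",
                           random_state=seed).fit(feats)
        for c, feats in features_by_class.items()
    }


def density_posterior(null_models, feature):
    """Class posterior implied by the null density models (uniform class prior assumed)."""
    classes = sorted(null_models)
    log_liks = np.array([null_models[c].score_samples(feature[None, :])[0] for c in classes])
    log_liks -= log_liks.max()          # numerical stability before exponentiation
    probs = np.exp(log_liks)
    return probs / probs.sum()


def kl_detection_statistic(softmax_posterior, null_models, feature, eps=1e-12):
    """KL divergence between the DNN's softmax posterior and the density-based posterior.
    Larger values indicate the test pattern is more anomalous (a suspected attack)."""
    p = np.clip(np.asarray(softmax_posterior, dtype=float), eps, 1.0)
    q = np.clip(density_posterior(null_models, feature), eps, 1.0)
    return float(np.sum(p * np.log(p / q)))
```

In use, the statistic would be computed for each test pattern and compared to a threshold chosen on clean held-out data; sweeping that threshold yields the ROC curves whose AUCs are reported above.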
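The simple white-region counter mentioned for the Papernot et al. attack can likewise be sketched in a few lines: the attack tends to add isolated bright pixels to an MNIST digit, which inflates the number of connected bright regions. The binarization threshold and the region-count cutoff below are illustrative assumptions, not values taken from the paper.

```python
# Sketch of the "count white regions" detector for MNIST (illustrative thresholds).

import numpy as np
from scipy.ndimage import label


def count_white_regions(image, intensity_threshold=0.5):
    """Count 8-connected bright regions in a grayscale MNIST image scaled to [0, 1]."""
    binary = image > intensity_threshold
    structure = np.ones((3, 3), dtype=bool)   # 8-connectivity
    _, n_regions = label(binary, structure=structure)
    return n_regions


def flag_as_attacked(image, max_clean_regions=1):
    """A clean MNIST digit is typically a single connected stroke; perturbations that
    add isolated bright pixels push the region count above this cutoff."""
    return count_white_regions(image) > max_clean_regions
```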
