Are Odds Really Odd? Bypassing Statistical Detection of Adversarial Examples

07/28/2019
by Hossein Hosseini, et al.

Deep learning classifiers are known to be vulnerable to adversarial examples. A recent paper presented at ICML 2019 proposed a statistical test detection method based on the observation that the logits of noisy adversarial examples are biased toward the true class. The method is evaluated on the CIFAR-10 dataset and is shown to achieve a 99% true positive rate (TPR) at a 1% false positive rate (FPR). In this paper, we first develop a classifier-based adaptation of the statistical test method and show that it improves the detection performance. We then propose the Logit Mimicry Attack method to generate adversarial examples such that their logits mimic those of benign images. We show that our attack bypasses both the statistical test and classifier-based methods, reducing their TPR to less than 2.2%. We also show that a classifier-based detector that is trained with logits of mimicry adversarial examples can be evaded by an adaptive attacker that specifically targets the detector. Furthermore, even a detector that is iteratively trained to defend against the adaptive attacker cannot be made robust, indicating that statistics of logits cannot be used to detect adversarial examples.
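For intuition, below is a minimal PyTorch sketch (not the authors' code) of the two ingredients the abstract refers to: the noise-averaged logit statistic that the ICML 2019 statistical test thresholds, and the kind of joint objective a logit-mimicry style attack could optimize against it. All names and hyperparameters here (`model`, `sigma`, `n_samples`, `benign_logits`, `c`, `step`) are illustrative assumptions, and the pixel range is assumed to be [0, 1].

```python
import torch
import torch.nn.functional as F

def expected_noisy_logits(model, x, n_samples=16, sigma=0.05):
    """Average the model's logits over Gaussian-perturbed copies of x.

    This is (a sketch of) the statistic the detector examines: for
    adversarial inputs, the noise-averaged logits drift back toward
    the true class, while for benign inputs they stay stable.
    """
    noise = sigma * torch.randn((n_samples,) + tuple(x.shape), device=x.device)
    noisy = (x.unsqueeze(0) + noise).flatten(0, 1)   # (n_samples*B, C, H, W)
    logits = model(noisy)
    return logits.view(n_samples, x.shape[0], -1).mean(dim=0)  # (B, K)

def mimicry_step(model, x_adv, y_true, benign_logits, step=1.0 / 255, c=10.0):
    """One signed-gradient step of a logit-mimicry style attack.

    Jointly (i) pushes the prediction away from y_true and (ii) pulls
    the noise-averaged logits toward a benign logit profile, so the
    detector's statistic looks clean. `benign_logits` would be estimated
    from held-out clean images (an assumption of this sketch).
    """
    x_adv = x_adv.clone().detach().requires_grad_(True)
    avg_logits = expected_noisy_logits(model, x_adv)
    loss = (-F.cross_entropy(model(x_adv), y_true)       # encourage misclassification
            + c * F.mse_loss(avg_logits, benign_logits))  # mimic benign statistics
    loss.backward()
    with torch.no_grad():
        return (x_adv - step * x_adv.grad.sign()).clamp(0.0, 1.0)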

