
Generalizable Adversarial Examples Detection Based on Bi-model Decision Mismatch

by   João Monteiro, et al.
Université INRS

Deep neural networks (DNNs) have shown phenomenal success in a wide range of applications. However, recent studies have discovered that they are vulnerable to adversarial examples, i.e., original samples with subtle perturbations added. Such perturbations are often too small to be perceptible to humans, yet they can easily fool the networks. A few defense techniques against adversarial examples have been proposed, but they require either modifying the target model or prior knowledge of the methods used to generate adversarial examples. Moreover, their performance drops markedly when they encounter adversarial example types not seen during training. In this paper, we propose a new framework that enhances the robustness of DNNs by detecting adversarial examples. In particular, we employ the decision layers of independently trained models as features for posterior detection. The proposed framework does not require any prior knowledge of adversarial example generation techniques and can be directly applied to unmodified off-the-shelf models. Experiments on the standard MNIST and CIFAR10 datasets show that it generalizes well not only across different adversarial example generation methods but also across various additive perturbations. Specifically, distinct binary classifiers trained on top of our proposed features achieve a high detection rate (>90%) even when tested against unseen attacks.
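To illustrate the core idea of bi-model decision mismatch, the following is a minimal, self-contained sketch: the decision-layer (softmax) outputs of two independently trained classifiers are used as features for a binary detector that flags inputs on which the models disagree. The simulated model outputs, the explicit elementwise-product interaction feature, and the logistic-regression detector are illustrative assumptions for this sketch, not the paper's exact setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def simulate_outputs(n, mismatch):
    """Simulate decision-layer outputs of two independently trained
    10-class models. For clean inputs the two models largely agree;
    for adversarial inputs their decisions tend to diverge."""
    labels = rng.integers(0, 10, size=n)
    logits_a = np.eye(10)[labels] * 5 + rng.normal(0, 0.5, (n, 10))
    if mismatch:
        # Second model is pushed toward a different class.
        other = (labels + rng.integers(1, 10, size=n)) % 10
        logits_b = np.eye(10)[other] * 5 + rng.normal(0, 0.5, (n, 10))
    else:
        logits_b = np.eye(10)[labels] * 5 + rng.normal(0, 0.5, (n, 10))
    p_a, p_b = softmax(logits_a), softmax(logits_b)
    # Features: both softmax vectors plus their elementwise product,
    # an explicit agreement signal (large when the models concur).
    return np.hstack([p_a, p_b, p_a * p_b])

# Detection set: label 0 = clean input, 1 = adversarial input.
X = np.vstack([simulate_outputs(500, False), simulate_outputs(500, True)])
y = np.concatenate([np.zeros(500), np.ones(500)])

detector = LogisticRegression(max_iter=1000).fit(X, y)
acc = detector.score(X, y)
print(f"detection accuracy: {acc:.2f}")
```

Because clean inputs yield a large summed product of the two softmax vectors while mismatched decisions yield one near zero, even a linear detector separates the two populations in this toy setting; the paper's framework similarly trains binary classifiers on top of decision-layer features, without assuming any particular attack.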



