Where Classification Fails, Interpretation Rises

12/02/2017
by Chanh Nguyen, et al.

An intriguing property of deep neural networks is their inherent vulnerability to adversarial inputs, which significantly hinders their application in security-critical domains. Most existing detection methods attempt to use carefully engineered patterns to distinguish adversarial inputs from their genuine counterparts, which, however, can often be circumvented by adaptive adversaries. In this work, we take a completely different route by leveraging the definition of adversarial inputs: while deceiving to deep neural networks, they are barely discernible to human vision. Building upon recent advances in interpretable models, we construct a new detection framework that contrasts an input's interpretation against its classification. We validate the efficacy of this framework through extensive experiments using benchmark datasets and attacks. We believe that this work opens a new direction for designing adversarial input detection methods.
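The core idea, contrasting an input's interpretation against its classification, can be illustrated with a toy sketch. The paper builds on full interpretable DNN models; the snippet below is only a minimal stand-in, assuming a linear classifier and input-gradient saliency as the "interpretation". All function names (`predict`, `saliency`, `interpretation_consistent`) and the top-k overlap rule are hypothetical illustrations, not the paper's actual detector.

```python
def predict(W, x):
    """Index of the class with the largest logit for a linear model W @ x."""
    logits = [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in W]
    return max(range(len(logits)), key=logits.__getitem__)

def saliency(W, x):
    """Input-gradient attribution for a linear model: |gradient * input|.

    For a linear model the gradient of the predicted-class logit w.r.t.
    the input is just that class's weight row.
    """
    c = predict(W, x)
    return [abs(w_i * x_i) for w_i, x_i in zip(W[c], x)]

def top_k(values, k):
    """Indices of the k largest entries."""
    return set(sorted(range(len(values)), key=values.__getitem__)[-k:])

def interpretation_consistent(W, x, k=1):
    """Hypothetical detection rule: accept an input only if the features
    its interpretation highlights overlap with the features that most
    support the predicted class; otherwise flag it as adversarial."""
    c = predict(W, x)
    class_weight_mag = [abs(w) for w in W[c]]
    return bool(top_k(saliency(W, x), k) & top_k(class_weight_mag, k))

# A benign input draws its prediction from the class's dominant feature,
# so interpretation and classification agree; an input that reaches the
# same label through irrelevant features is flagged.
W = [[3.0, 0.0, 1.2],
     [0.0, 3.0, 1.0]]
print(interpretation_consistent(W, [1.0, 0.0, 0.0]))  # benign-like
print(interpretation_consistent(W, [0.1, 0.0, 2.0]))  # suspicious
```

The design choice mirrors the abstract's definition: an adversarial input fools the classifier without moving the features a human-aligned interpretation would point to, so disagreement between the two channels is the detection signal.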

