Interpreting Adversarial Examples by Activation Promotion and Suppression

04/03/2019
by Kaidi Xu, et al.

It is widely known that convolutional neural networks (CNNs) are vulnerable to adversarial examples: crafted images with imperceptible perturbations. However, the interpretability of these perturbations is less explored in the literature. This work aims to better understand the roles of adversarial perturbations and to provide visual explanations from the pixel, image, and network perspectives. We show that adversarial perturbations exert a promotion and suppression effect (PSE) on neurons' activations and can be primarily categorized into three types: 1) suppression-dominated perturbations that mainly reduce the classification score of the true label, 2) promotion-dominated perturbations that focus on boosting the confidence of the target label, and 3) balanced perturbations that play a dual role in suppression and promotion. Further, we provide image-level interpretability of adversarial examples, which links the PSE of pixel-level perturbations to class-specific discriminative image regions localized by class activation mapping. Lastly, we analyze the effect of adversarial examples through network dissection, which offers concept-level interpretability of hidden units. We show that there exists a tight connection between the sensitivity (against attacks) of units' internal responses and their interpretability on semantic concepts.
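To make the PSE categorization concrete, the sketch below shows one way to measure the suppression and promotion components of a perturbation by comparing the clean and adversarial logits of the true and target labels. The function name, the dominance threshold tau, and the normalization scheme are illustrative assumptions for this sketch, not the paper's exact criterion.

```python
import torch

def categorize_perturbation(model, x, x_adv, true_label, target_label, tau=0.6):
    """Roughly classify an adversarial perturbation as suppression-dominated,
    promotion-dominated, or balanced, based on how the logits of the true and
    target labels change (the threshold `tau` is a hypothetical choice)."""
    model.eval()
    with torch.no_grad():
        logits_clean = model(x.unsqueeze(0)).squeeze(0)
        logits_adv = model(x_adv.unsqueeze(0)).squeeze(0)

    # Suppression: drop in the true-label score under the perturbation.
    suppression = (logits_clean[true_label] - logits_adv[true_label]).item()
    # Promotion: gain in the target-label score under the perturbation.
    promotion = (logits_adv[target_label] - logits_clean[target_label]).item()

    total = abs(suppression) + abs(promotion) + 1e-12
    if suppression / total > tau:
        return "suppression-dominated"
    if promotion / total > tau:
        return "promotion-dominated"
    return "balanced"
```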


Related research

02/23/2021 · Adversarial Examples Detection beyond Image Space
Deep neural networks have been proved that they are vulnerable to advers...

04/21/2019 · Beyond Explainability: Leveraging Interpretability for Improved Adversarial Learning
In this study, we propose the leveraging of interpretability for tasks b...

10/27/2019 · Spot Evasion Attacks: Adversarial Examples for License Plate Recognition Systems with Convolution Neural Networks
Recent studies have shown convolution neural networks (CNNs) for image r...

06/17/2021 · Localized Uncertainty Attacks
The susceptibility of deep learning models to adversarial perturbations ...

11/22/2018 · Detecting Adversarial Perturbations Through Spatial Behavior in Activation Spaces
Neural network based classifiers are still prone to manipulation through...

09/07/2020 · Dynamically Computing Adversarial Perturbations for Recurrent Neural Networks
Convolutional and recurrent neural networks have been widely employed to...
