When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks

02/09/2019
by   Chao-Han Huck Yang, et al.

Discovering and exploiting causality in deep neural networks (DNNs) is a crucial challenge for understanding and reasoning about causal effects (CE) in an explainable visual model. "Intervention" has been widely used to identify causal relations ontologically. In this paper, we propose a causal inference framework for visual reasoning via do-calculus. To study intervention effects on pixel-level features for causal reasoning, we introduce pixel-wise masking and adversarial perturbation. In our framework, CE is calculated using features in a latent space and the perturbed prediction from a DNN-based model. We further provide a first look at the characteristics of the discovered CE of adversarially perturbed images generated by gradient-based methods (code: https://github.com/jjaacckkyy63/Causal-Intervention-AE-wAdvImg). Experimental results show that CE is a competitive and robust index for understanding DNNs compared with conventional methods such as class-activation mappings (CAMs) on the Chest X-Ray-14 dataset for reasoning about human-interpretable features (e.g., symptoms). Moreover, CE holds promise for detecting adversarial examples, as it exhibits distinct characteristics in the presence of adversarial perturbations.
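To make the intervention idea concrete, the sketch below illustrates the general pattern the abstract describes: estimate a causal effect by comparing a model's prediction on an image with its prediction after a do-intervention that masks a pixel region, and then inspect how that effect shifts under a gradient-based (FGSM-style) perturbation. This is a minimal toy illustration, not the paper's actual method: the logistic "model", the 8x8 images, the zero-valued mask, and the `causal_effect` / `fgsm` helpers are all hypothetical stand-ins for the trained DNN and latent-space computation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a DNN classifier: logistic regression
# on flattened 8x8 "pixels" (the paper uses a trained DNN on Chest X-Ray-14).
W = rng.normal(size=(8 * 8,))

def predict(x):
    """Probability of the positive class for an 8x8 image x."""
    return 1.0 / (1.0 + np.exp(-x.ravel() @ W))

def causal_effect(x, mask):
    """Interventional effect of masking a pixel region:
    CE = f(x) - f(do(x := x with masked pixels set to 0))."""
    x_do = x.copy()
    x_do[mask] = 0.0
    return predict(x) - predict(x_do)

def fgsm(x, eps=0.1):
    """Gradient-based (FGSM-style) adversarial perturbation.
    For this linear toy model the input gradient is simply W."""
    grad = W.reshape(8, 8)
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

x = rng.uniform(size=(8, 8))
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 2:5] = True  # intervene on a 3x3 pixel region

ce_clean = causal_effect(x, mask)          # CE on the clean image
ce_adv = causal_effect(fgsm(x), mask)      # CE on the perturbed image
print(ce_clean, ce_adv)
```

Comparing `ce_clean` against `ce_adv` mirrors the abstract's observation that the discovered CE behaves differently in the presence of adversarial perturbations, which is what makes it a candidate signal for detecting adversarial examples.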
