When Causal Intervention Meets Image Masking and Adversarial Perturbation for Deep Neural Networks

02/09/2019
by   Chao-Han Huck Yang, et al.

Discovering and exploiting causality in deep neural networks (DNNs) is a crucial challenge for understanding and reasoning about causal effects (CE) in an explainable visual model. "Intervention" has been widely used for recognizing a causal relation ontologically. In this paper, we propose a causal inference framework for visual reasoning via do-calculus. To study intervention effects on pixel-level features for causal reasoning, we introduce pixel-wise masking and adversarial perturbation. In our framework, CE is calculated using features in a latent space and the perturbed prediction from a DNN-based model. We further provide a first look at the characteristics of the CE discovered for adversarially perturbed images generated by gradient-based methods. Experimental results show that CE is a competitive and robust index for understanding DNNs compared with conventional methods such as class-activation mappings (CAMs) on the ChestX-ray14 dataset for human-interpretable feature (e.g., symptom) reasoning. Moreover, CE holds promise for detecting adversarial examples, as it exhibits distinct characteristics in the presence of adversarial perturbations.
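The intervention-based workflow sketched in the abstract — applying a pixel-wise masking intervention to an input and measuring how the model's prediction shifts — can be illustrated with a toy example. The stand-in model and the simple prediction-difference score below are illustrative assumptions for exposition only, not the paper's actual CE estimator or DNN architecture:

```python
import numpy as np

def toy_model(image):
    # Stand-in "DNN": softmax over the mean intensity of four quadrants.
    h, w = image.shape
    quadrants = [image[:h // 2, :w // 2], image[:h // 2, w // 2:],
                 image[h // 2:, :w // 2], image[h // 2:, w // 2:]]
    logits = np.array([q.mean() for q in quadrants])
    e = np.exp(logits - logits.max())
    return e / e.sum()

def mask_intervention(image, top, left, size):
    # do(mask): zero out a pixel region, treated as an atomic
    # intervention on the input rather than mere conditioning.
    out = image.copy()
    out[top:top + size, left:left + size] = 0.0
    return out

def causal_effect(model, image, top, left, size):
    # Illustrative CE proxy (an assumption, not the paper's formula):
    # total change in the prediction under do(mask) versus the
    # unperturbed observation.
    p_obs = model(image)
    p_do = model(mask_intervention(image, top, left, size))
    return float(np.abs(p_do - p_obs).sum())

rng = np.random.default_rng(0)
img = rng.random((8, 8))
score = causal_effect(toy_model, img, top=0, left=0, size=4)
print(f"CE proxy for masking the top-left quadrant: {score:.4f}")
```

Regions whose masking produces a large prediction shift are candidates for causally relevant, human-interpretable features; an adversarial perturbation of the input would play the analogous role of a second intervention in this framework.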
