Adversarial Visual Robustness by Causal Intervention

06/17/2021
by Kaihua Tang, et al.

Adversarial training is, to date, the most promising defense against adversarial examples. Yet its passive nature inevitably prevents it from being immune to unknown attackers. To achieve a proactive defense, we need a more fundamental understanding of adversarial examples that goes beyond the popular bounded threat model. In this paper, we provide a causal viewpoint on adversarial vulnerability: its cause is the confounder that is ubiquitous in learning, and attackers are precisely exploiting this confounding effect. A fundamental solution for adversarial robustness is therefore causal intervention. Since the confounder is generally unobserved, we propose to use an instrumental variable, which achieves intervention without requiring observation of the confounder. We term our robust training method Causal intervention by instrumental Variable (CiiV). It consists of a differentiable retinotopic sampling layer and a consistency loss, is stable to train, and is guaranteed not to suffer from gradient obfuscation. Extensive experiments with a wide spectrum of attackers and settings on the MNIST, CIFAR-10, and mini-ImageNet datasets empirically demonstrate that CiiV is robust to adaptive attacks.
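To make the two named ingredients concrete, below is a minimal PyTorch sketch of how a differentiable sampling layer and a consistency loss could fit together. It is only an illustration of the general idea, not the authors' implementation: the class RetinotopicSampler, the function ciiv_loss, and the hyperparameters keep_prob, n_views, and lam are all hypothetical, and a simple random pixel mask stands in for true retinotopic sampling.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RetinotopicSampler(nn.Module):
    """Differentiable random sampling layer (illustrative stand-in).

    Each forward pass keeps a random subset of spatial positions
    (shared across channels) and rescales the rest, so every view of
    the same image is sampled differently.
    """

    def __init__(self, keep_prob: float = 0.8):
        super().__init__()
        self.keep_prob = keep_prob

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Bernoulli mask over spatial positions; dividing by keep_prob
        # keeps the expected activation magnitude unchanged.
        mask = (torch.rand_like(x[:, :1]) < self.keep_prob).float()
        return x * mask / self.keep_prob


def ciiv_loss(model: nn.Module, sampler: nn.Module,
              x: torch.Tensor, y: torch.Tensor,
              n_views: int = 3, lam: float = 1.0) -> torch.Tensor:
    """Cross-entropy on each sampled view plus a consistency penalty.

    The penalty is the variance of the per-view logits, pulling the
    predictions for different random samplings of the same input
    toward agreement. n_views and lam are hypothetical knobs.
    """
    logits = torch.stack([model(sampler(x)) for _ in range(n_views)])
    ce = torch.stack([F.cross_entropy(l, y) for l in logits]).mean()
    consistency = logits.var(dim=0, unbiased=False).mean()
    return ce + lam * consistency
```

During training, a loss of this shape would take the place of plain cross-entropy; at test time, predictions can be averaged over several sampled views of the same input.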

Related research

05/25/2023 | IDEA: Invariant Causal Defense for Graph Adversarial Robustness
Graph neural networks (GNNs) have achieved remarkable success in various...

05/24/2022 | Certified Robustness Against Natural Language Attacks by Causal Intervention
Deep learning models have achieved great success in many fields, yet the...

01/31/2022 | Can Adversarial Training Be Manipulated By Non-Robust Features?
Adversarial training, originally designed to resist test-time adversaria...

10/03/2022 | Push-Pull: Characterizing the Adversarial Robustness for Audio-Visual Active Speaker Detection
Audio-visual active speaker detection (AVASD) is well-developed, and now...

10/25/2022 | Causal Information Bottleneck Boosts Adversarial Robustness of Deep Neural Network
The information bottleneck (IB) method is a feasible defense solution ag...

06/26/2020 | Proper Network Interpretability Helps Adversarial Robustness in Classification
Recent works have empirically shown that there exist adversarial example...

02/09/2019 | When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks
Discovering and exploiting the causality in deep neural networks (DNNs) ...
