IDEA: Invariant Causal Defense for Graph Adversarial Robustness

05/25/2023
by Shuchang Tao et al.

Graph neural networks (GNNs) have achieved remarkable success in various tasks; however, their vulnerability to adversarial attacks raises concerns for real-world applications. Existing defense methods can resist some attacks but suffer severe performance degradation under other, unknown attacks. This is because they rely either on a limited set of observed adversarial examples for optimization (adversarial training) or on specific heuristics that alter the graph or model structure (graph purification or robust aggregation). In this paper, we propose an Invariant causal DEfense method against adversarial Attacks (IDEA), which offers a new perspective on this problem. IDEA aims to learn causal features that have strong predictability for labels and invariant predictability across attacks, thereby achieving graph adversarial robustness. By modeling and analyzing the causal relationships in graph adversarial attacks, we design two invariance objectives for learning the causal features. Extensive experiments demonstrate that IDEA significantly outperforms all baselines under both poisoning and evasion attacks on five benchmark datasets, highlighting its strong and invariant predictability. The implementation of IDEA is available at https://anonymous.4open.science/r/IDEA_repo-666B.
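The abstract does not spell out the concrete form of the two invariance objectives, but the notion of "strong and invariant predictability across attacks" can be illustrated with an IRM-style training loss over attack environments. The sketch below is a hypothetical reading, not the paper's method: `encoder`, `classifier`, `environments`, and `lam` are assumed names, PyTorch is an assumed framework, and the gradient-based invariance penalty stands in for whatever concrete objectives IDEA actually uses.

```python
import torch
import torch.nn.functional as F

def invariance_penalty(logits, labels):
    # IRM-style penalty: gradient of the risk w.r.t. a dummy scale
    # factor on the logits. A small gradient norm indicates the same
    # classifier is near-optimal in this environment.
    scale = torch.ones(1, requires_grad=True, device=logits.device)
    loss = F.cross_entropy(logits * scale, labels)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return (grad ** 2).sum()

def idea_style_loss(encoder, classifier, environments, lam=1.0):
    """environments: list of (graph, features, labels) triples, e.g.
    the clean graph plus graphs perturbed by different attacks.
    encoder/classifier are assumed to be torch.nn.Module instances."""
    risks, penalties = [], []
    for graph, x, y in environments:
        z = encoder(graph, x)      # hypothetical causal feature extractor
        logits = classifier(z)
        risks.append(F.cross_entropy(logits, y))
        penalties.append(invariance_penalty(logits, y))
    # First term: strong predictability for labels (average risk).
    # Second term: invariant predictability across attacks (penalty).
    return torch.stack(risks).mean() + lam * torch.stack(penalties).mean()
```

Under this reading, minimizing the averaged risk gives predictability for labels, while the penalty term pushes the learned representation to be simultaneously optimal on the clean graph and on every attacked variant, approximating invariance across attacks.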


research · 01/31/2019
Improving Model Robustness with Transformation-Invariant Attacks
Vulnerability of neural networks under adversarial attacks has raised se...

research · 06/17/2021
Adversarial Visual Robustness by Causal Intervention
Adversarial training is the de facto most promising defense against adve...

research · 06/11/2021
Adversarial Robustness through the Lens of Causality
The adversarial vulnerability of deep neural networks has attracted sign...

research · 05/26/2021
Intriguing Parameters of Structural Causal Models
In recent years there has been a lot of focus on adversarial attacks, es...

research · 05/24/2022
Certified Robustness Against Natural Language Attacks by Causal Intervention
Deep learning models have achieved great success in many fields, yet the...

research · 09/28/2020
Graph Adversarial Networks: Protecting Information against Adversarial Attacks
We study the problem of protecting information when learning with graph ...

research · 10/25/2022
Causal Information Bottleneck Boosts Adversarial Robustness of Deep Neural Network
The information bottleneck (IB) method is a feasible defense solution ag...
