Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation

11/15/2022
by Zhihao Zhu, et al.

Recent studies show that Graph Neural Networks (GNNs) are vulnerable to small perturbations and easily fooled by them, which has raised considerable concerns about adopting GNNs in safety-critical applications. In this work, we focus on an emerging but critical attack, the Graph Injection Attack (GIA), in which the adversary poisons the graph by injecting fake nodes instead of modifying existing structures or node attributes. Inspired by findings that adversarial attacks are related to increased heterophily on perturbed graphs (the adversary tends to connect dissimilar nodes), we propose CHAGNN, a general defense framework against GIA based on cooperative homophilous augmentation of graph data and model. Specifically, in each round of training the model generates pseudo-labels for unlabeled nodes, which are used to remove heterophilous edges, i.e., edges connecting nodes with distinct labels. The cleaner graph is fed back to the model, which in turn produces more informative pseudo-labels. Through this iterative process, model robustness is progressively enhanced. We present a theoretical analysis of the effect of homophilous augmentation and provide a guarantee of the proposal's validity. Experiments on diverse real-world datasets demonstrate the effectiveness of CHAGNN compared with recent state-of-the-art defense methods.
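The iterative data-model cooperation described above can be sketched in a few lines. Below is a minimal, hedged illustration in PyTorch: the function name, the GNN forward signature model(features, edge_index), and the confidence threshold are illustrative assumptions based on the abstract, not the authors' released implementation.

```python
# Minimal sketch of one round of cooperative homophilous augmentation,
# assuming a PyTorch GNN whose forward pass is model(features, edge_index).
# All names and the confidence threshold are illustrative, not the paper's code.
import torch
import torch.nn.functional as F

def homophilous_augmentation_step(model, features, edge_index, labels,
                                  train_mask, confidence_threshold=0.9):
    """Pseudo-label unlabeled nodes, then drop edges whose endpoints carry
    distinct (pseudo-)labels, returning a cleaner edge_index."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(features, edge_index), dim=-1)
    confidence, pseudo_labels = probs.max(dim=-1)

    # Trust ground-truth labels where available, confident pseudo-labels elsewhere.
    node_labels = torch.where(train_mask, labels, pseudo_labels)
    trusted = train_mask | (confidence >= confidence_threshold)

    src, dst = edge_index
    # Heterophilous edges: both endpoints are trusted but labeled differently.
    # These are the edges injected nodes tend to create, so they are removed.
    heterophilous = trusted[src] & trusted[dst] & (node_labels[src] != node_labels[dst])
    return edge_index[:, ~heterophilous]
```

In a full training loop, this step would alternate with ordinary supervised updates of the model on the current, cleaned graph, so that better pseudo-labels and a more homophilous graph reinforce each other across rounds.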

