Transferable Graph Backdoor Attack

06/21/2022
by Shuiqiao Yang, et al.

Graph Neural Networks (GNNs) have achieved tremendous success in many graph mining tasks, benefiting from the message passing strategy that fuses local structure and node features for better graph representation learning. Despite this success, GNNs, like other types of deep neural networks, are vulnerable to unnoticeable perturbations of both the graph structure and node features. Many adversarial attacks have been proposed that disclose the fragility of GNNs under different perturbation strategies for crafting adversarial examples. However, the vulnerability of GNNs to successful backdoor attacks was only shown recently. In this paper, we present the TRAP attack, a Transferable GRAPh backdoor attack. The core attack principle is to poison the training dataset with perturbation-based triggers that lead to an effective and transferable backdoor. The perturbation trigger for a graph is generated by perturbing the graph structure according to a gradient-based score matrix obtained from a surrogate model. Compared with prior work, the TRAP attack differs in several ways: i) it exploits a surrogate Graph Convolutional Network (GCN) model to generate perturbation triggers for a black-box backdoor attack; ii) it generates sample-specific perturbation triggers that do not have a fixed pattern; and iii) the attack transfers, for the first time in the context of GNNs, to different GNN models trained on the forged poisoned training dataset. Through extensive evaluations on four real-world datasets, we demonstrate the effectiveness of the TRAP attack in building transferable backdoors in four popular GNN models.
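To make the trigger-forging step concrete, below is a minimal sketch of the gradient-based scoring idea described in the abstract, written in plain PyTorch. It is not the authors' implementation: the surrogate architecture, the names SurrogateGCN, forge_trigger, and budget, and the exact scoring and flipping rule are all illustrative assumptions. The sketch scores every potential edge flip by the first-order change it would induce in a surrogate GCN's loss toward an attacker-chosen target class, then flips the top-scoring entries to forge a sample-specific trigger.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SurrogateGCN(nn.Module):
    """Two-layer GCN over a dense adjacency matrix with mean-pool readout.
    (Hypothetical surrogate; stands in for the paper's surrogate GCN.)"""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, n_classes)

    def forward(self, adj, x):
        a_hat = adj + torch.eye(adj.size(0))          # add self-loops
        d = a_hat.sum(1).clamp(min=1e-8).pow(-0.5)
        a_norm = d[:, None] * a_hat * d[None, :]      # D^-1/2 (A+I) D^-1/2
        h = F.relu(self.lin1(a_norm @ x))
        return self.lin2(a_norm @ h).mean(0)          # graph-level logits

def forge_trigger(model, adj, x, target_class, budget=5):
    """Flip the `budget` edges whose gradient-based scores most push the
    surrogate's prediction toward the attacker-chosen target class."""
    adj = adj.clone().float().requires_grad_(True)
    loss = F.cross_entropy(model(adj, x).unsqueeze(0),
                           torch.tensor([target_class]))
    grad, = torch.autograd.grad(loss, adj)
    with torch.no_grad():
        # Flipping entry (i, j) changes A_ij by (1 - 2*A_ij), so the
        # first-order drop in target-class loss is -grad_ij * (1 - 2*A_ij).
        score = -grad * (1 - 2 * adj)
        mask = torch.triu(torch.ones_like(score), diagonal=1).bool()
        flat = torch.where(mask, score,
                           torch.full_like(score, float('-inf'))).flatten()
        poisoned, n = adj.detach().clone(), adj.size(0)
        for k in flat.topk(budget).indices.tolist():
            i, j = divmod(k, n)                       # symmetric edge flip
            poisoned[i, j] = poisoned[j, i] = 1.0 - poisoned[i, j]
    return poisoned
```

In a poisoning pipeline of this kind, the attacker would run forge_trigger on a small fraction of training graphs, relabel the poisoned graphs to the target class, and release the forged dataset to the victim. Because each trigger is derived from that graph's own gradients, no fixed subgraph pattern is shared across poisoned samples, consistent with the sample-specific, no-fixed-pattern property described above.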

