Unnoticeable Backdoor Attacks on Graph Neural Networks

02/11/2023
by Enyan Dai, et al.

Graph Neural Networks (GNNs) have achieved promising results on various tasks such as node classification and graph classification. Recent studies have found that GNNs are vulnerable to adversarial attacks, yet effective backdoor attacks on graphs remain an open problem. In a backdoor attack, the attacker poisons the graph by attaching triggers and the target class label to a set of nodes in the training graph. A GNN trained on the poisoned graph is then misled into predicting the target class for any test node to which a trigger is attached. Although there have been initial efforts on graph backdoor attacks, our empirical analysis shows that they may require a large attack budget to be effective and that their injected triggers can be easily detected and pruned. Therefore, in this paper we study the novel problem of unnoticeable graph backdoor attacks under a limited attack budget. To fully utilize the attack budget, we propose to deliberately select the nodes to which triggers and target class labels are attached in the poisoning phase. An adaptive trigger generator is deployed to produce effective triggers that are difficult to notice. Extensive experiments on real-world datasets against various defense strategies demonstrate the effectiveness of the proposed method in conducting effective, unnoticeable backdoor attacks.
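To make the poisoning step described in the abstract concrete, below is a minimal sketch of the generic graph-backdoor recipe: attach a small trigger subgraph to each selected training node and flip that node's label to the target class. The function name `poison_graph`, the fixed-feature trigger, and the default hyperparameters are illustrative assumptions, not the paper's implementation; the proposed attack additionally selects the poisoned nodes deliberately and generates adaptive triggers rather than a fixed pattern.

```python
# Illustrative graph backdoor poisoning sketch (not the authors' adaptive method).
# Assumes a node-classification setup with a feature matrix X, an undirected
# edge list, and per-node training labels y.
import numpy as np

def poison_graph(X, edges, y, poison_ids, target_class,
                 trigger_size=3, trigger_value=1.0):
    """Attach a fixed trigger subgraph to each poisoned node and relabel it.

    X           : (N, d) node feature matrix
    edges       : list of (u, v) undirected edges
    y           : (N,) training labels (-1 marks unlabeled nodes)
    poison_ids  : indices of nodes selected for poisoning
    target_class: label the attacker wants trigger-attached nodes to receive
    """
    X, y, edges = X.copy(), y.copy(), list(edges)
    n = X.shape[0]
    for v in poison_ids:
        # Inject `trigger_size` new trigger nodes with a fixed feature pattern.
        trig_feats = np.full((trigger_size, X.shape[1]), trigger_value)
        trig_ids = np.arange(n, n + trigger_size)
        X = np.vstack([X, trig_feats])
        y = np.append(y, np.full(trigger_size, -1))  # trigger nodes stay unlabeled
        n += trigger_size
        # Wire the trigger nodes to the victim node and to each other.
        edges += [(v, t) for t in trig_ids]
        edges += [(trig_ids[i], trig_ids[j])
                  for i in range(trigger_size)
                  for j in range(i + 1, trigger_size)]
        # Flip the victim's training label to the attacker's target class.
        y[v] = target_class
    return X, edges, y

# Toy usage: poison nodes 2 and 4 of a 5-node graph toward class 1.
X = np.random.rand(5, 4)
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
y = np.array([0, 1, 0, 1, 0])
Xp, edges_p, yp = poison_graph(X, edges, y, poison_ids=[2, 4], target_class=1)
```

At test time the same trigger subgraph would be attached to a target node, and a model trained on the poisoned graph tends to classify that node as the target class; the paper's contribution is doing this unnoticeably under a small budget.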

Related research

06/12/2021  TDGIA: Effective Injection Attacks on Graph Neural Networks
04/21/2022  Detecting Topology Attacks against Graph Neural Networks
12/25/2021  Task and Model Agnostic Adversarial Attack on Graph Neural Networks
02/25/2022  Projective Ranking-based GNN Evasion Attacks
08/15/2023  Simple and Efficient Partial Graph Adversarial Attack: A New Perspective
08/07/2021  Jointly Attacking Graph Neural Network and its Explanations
11/15/2022  Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation
