A semantic backdoor attack against Graph Convolutional Networks

02/28/2023
by   Jiazhu Dai, et al.

Graph Convolutional Networks (GCNs) have been highly effective at various graph-structured tasks, such as node classification and graph classification. However, extensive research has shown that GCNs are vulnerable to adversarial attacks. One of the security threats facing GCNs is the backdoor attack, which hides incorrect classification rules in the model and activates them only when the model encounters specific inputs containing special features (e.g., fixed patterns such as subgraphs, called triggers), causing the model to output incorrect classification results while behaving normally on benign samples. A semantic backdoor attack is a type of backdoor attack in which the trigger is a semantic part of the sample; i.e., the trigger exists naturally in the original dataset, and the attacker can pick a naturally occurring feature as the backdoor trigger, which causes the model to misclassify even unmodified inputs. Moreover, such an attack is difficult to detect even if the attacker modifies input samples in the inference phase, because the modified samples show no anomaly compared with normal ones. Thus, semantic backdoor attacks are more imperceptible than non-semantic ones. However, existing research on semantic backdoor attacks has focused only on the image and text domains, and it has not been well explored against GCNs. In this work, we propose a black-box Semantic Backdoor Attack (SBA) against GCNs. We use a certain class of nodes in the dataset as the trigger, so the trigger is semantic. Evaluation on several real-world benchmark graph datasets demonstrates that the proposed SBA can achieve an attack success rate of almost 100% with a poisoning rate of less than 5%, while having little impact on normal predictive accuracy.
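To make the poisoning step more concrete, the sketch below shows one plausible reading of how a semantic trigger based on a node class could be used to poison a graph-classification training set: graphs that already contain nodes of the chosen trigger class are left unmodified, and only their graph-level labels are flipped to the attacker's target class, within a small poisoning budget. All function and variable names here (poison_dataset, trigger_class, target_class, etc.) are illustrative assumptions, not the authors' implementation.

```python
import random

def poison_dataset(graphs, labels, node_labels, trigger_class, target_class,
                   poison_rate=0.05, seed=0):
    """Hypothetical sketch of semantic backdoor poisoning for graph classification.

    graphs        : list of training graphs (representation-agnostic; only indexed here)
    labels        : list of graph-level class labels, one per graph
    node_labels   : list of per-graph node-label collections
    trigger_class : node class chosen as the semantic trigger
    target_class  : graph label the attacker wants the backdoored model to output
    poison_rate   : fraction of the training set whose labels may be flipped
    """
    rng = random.Random(seed)

    # Graphs that naturally contain the trigger node class; no graph structure
    # or node features are modified, which is what makes the trigger semantic.
    candidates = [i for i, nl in enumerate(node_labels) if trigger_class in nl]

    # Flip graph-level labels for a subset of candidates within the poisoning budget.
    budget = int(poison_rate * len(graphs))
    poisoned = rng.sample(candidates, min(budget, len(candidates)))

    new_labels = list(labels)
    for i in poisoned:
        new_labels[i] = target_class
    return new_labels, poisoned
```

Under this reading, a GCN trained on (graphs, new_labels) would learn to associate the presence of trigger-class nodes with target_class, so even unmodified test graphs containing that node class would be misclassified, while graphs without it would be classified normally.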


