Graph Backdoor

06/21/2020
by Zhaohan Xi et al.

One intriguing property of deep neural network (DNN) models is their inherent vulnerability to backdoor attacks – a trojaned model responds to trigger-embedded inputs in a highly predictable manner but functions normally otherwise. Surprisingly, despite the plethora of work on DNNs for continuous data (e.g., images), little is known about the vulnerability of graph neural network (GNN) models for discrete-structured data (e.g., graphs), which is highly concerning given the increasing use of GNNs in security-critical domains. To bridge this gap, we present GTA, the first class of backdoor attacks against GNNs. Compared with prior work, GTA departs in significant ways: graph-oriented – it allows the adversary to define triggers as specific subgraphs, including both topological structures and descriptive features; input-tailored – it generates triggers tailored to individual graphs, thereby optimizing both attack effectiveness and evasiveness; downstream model-agnostic – it assumes no knowledge about downstream models or fine-tuning strategies; and attack-extensible – it can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks, constituting severe threats for a range of security-critical applications (e.g., toxic chemical classification). Through extensive evaluation on benchmark datasets and state-of-the-art GNNs, we demonstrate the efficacy of GTA. For instance, on pre-trained, off-the-shelf GNNs, GTA attains over 99.2% attack success rate with less than 0.3% accuracy drop. We further provide analytical justification for the effectiveness of GTA and discuss potential mitigation, pointing to several promising research directions.
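To make the notion of a subgraph trigger concrete, the following is a minimal sketch of how an adversary might graft a trigger, both its topology and its node features, into a host graph. It is an illustration of the general idea only, not the authors' GTA implementation: the networkx representation, and the names make_trigger, embed_trigger, and TRIGGER_SIZE, are all assumptions made here for exposition, and the fixed random trigger stands in for GTA's per-input optimized triggers.

import random
import networkx as nx

TRIGGER_SIZE = 3  # number of nodes in the trigger subgraph (assumed for illustration)

def make_trigger(feature_dim: int) -> nx.Graph:
    """Build a small, fully connected trigger subgraph whose nodes
    carry descriptive features (random vectors in this sketch)."""
    trigger = nx.complete_graph(TRIGGER_SIZE)
    for v in trigger.nodes:
        trigger.nodes[v]["x"] = [random.random() for _ in range(feature_dim)]
    return trigger

def embed_trigger(host: nx.Graph, trigger: nx.Graph) -> nx.Graph:
    """Graft the trigger into the host graph: pick anchor nodes, replace
    their mutual edges with the trigger topology, and overwrite their
    features. (GTA *optimizes* the trigger per input; a fixed random
    trigger is used here purely for simplicity.)"""
    poisoned = host.copy()
    anchors = random.sample(list(poisoned.nodes), TRIGGER_SIZE)
    # Remove existing edges among the anchor nodes.
    for i, u in enumerate(anchors):
        for w in anchors[i + 1:]:
            if poisoned.has_edge(u, w):
                poisoned.remove_edge(u, w)
    # Wire in the trigger's topology and copy over its node features.
    mapping = dict(zip(trigger.nodes, anchors))
    for u, w in trigger.edges:
        poisoned.add_edge(mapping[u], mapping[w])
    for v in trigger.nodes:
        poisoned.nodes[mapping[v]]["x"] = trigger.nodes[v]["x"]
    return poisoned

if __name__ == "__main__":
    host = nx.erdos_renyi_graph(20, 0.2)
    for v in host.nodes:
        host.nodes[v]["x"] = [random.random() for _ in range(8)]
    poisoned = embed_trigger(host, make_trigger(feature_dim=8))
    print(poisoned.number_of_nodes(), poisoned.number_of_edges())

Note that rewiring only the anchor nodes' mutual edges keeps the poisoned graph's node count unchanged, which hints at why subgraph triggers can be hard to spot by simple size or degree statistics.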


Related research

Graph Neural Networks for Hardware Vulnerability Analysis – Can you Trust your GNN? (03/29/2023)
The participation of third-party entities in the globalized semiconducto...

Explainability-based Backdoor Attacks Against Graph Neural Networks (04/08/2021)
Backdoor attacks represent a serious threat to neural network models. A ...

PoisonedGNN: Backdoor Attack on Graph Neural Networks-based Hardware Security Systems (03/24/2023)
Graph neural networks (GNNs) have shown great success in detecting intel...

Adapting Membership Inference Attacks to GNN for Graph Classification: Approaches and Implications (10/17/2021)
Graph Neural Networks (GNNs) are widely adopted to analyse non-Euclidean...

On the Security Risks of Knowledge Graph Reasoning (05/03/2023)
Knowledge graph reasoning (KGR) – answering complex logical queries over...

The Tale of Evil Twins: Adversarial Inputs versus Backdoored Models (11/05/2019)
Despite their tremendous success in a wide range of applications, deep n...

Towards Robust Reasoning over Knowledge Graphs (10/27/2021)
Answering complex logical queries over large-scale knowledge graphs (KGs...
