Motif-Backdoor: Rethinking the Backdoor Attack on Graph Neural Networks via Motifs

10/25/2022
by   Haibin Zheng, et al.

Graph neural networks (GNNs), with their powerful representation capability, have been widely applied in areas such as biological gene prediction and social recommendation. Recent works have shown that GNNs are vulnerable to backdoor attacks, i.e., models trained on maliciously crafted samples are easily fooled by patched samples. Most existing studies launch the backdoor attack with a trigger that is either a randomly generated subgraph (e.g., the Erdős-Rényi backdoor), chosen for its low computational cost, or a gradient-based generated subgraph (e.g., the Graph Trojaning Attack), chosen for a more effective attack. However, how the structure of the trigger relates to the effectiveness of the backdoor attack has been overlooked in the current literature. Motifs, recurrent and statistically significant subgraphs, carry rich structural information. In this paper, we rethink the trigger from the perspective of motifs and propose a motif-based backdoor attack, denoted Motif-Backdoor. It contributes in three aspects. (i) Interpretation: it provides an in-depth explanation of backdoor effectiveness through the validity of the trigger structure derived from motifs, yielding novel insights, e.g., using subgraphs that appear less frequently in the graph as triggers achieves better attack performance. (ii) Effectiveness: Motif-Backdoor reaches state-of-the-art (SOTA) attack performance in both black-box and defended scenarios. (iii) Efficiency: based on the graph's motif distribution, Motif-Backdoor quickly obtains an effective trigger structure without target-model feedback or model-based subgraph generation. Extensive experiments show that Motif-Backdoor achieves SOTA performance on three popular models and four public datasets compared with five baselines.
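As a rough illustration of the efficiency claim, the sketch below (not the authors' implementation) ranks a small set of candidate motifs by their frequency across the training graphs, picks the rarest one as the trigger, following the paper's insight that infrequent subgraphs make better triggers, and patches it into a host graph by rewiring edges among a few existing nodes. The candidate motif set, the anchor-node choice, and the networkx-based counting are all illustrative assumptions.

```python
# Minimal sketch of motif-frequency-based trigger selection on networkx
# graphs; NOT the paper's implementation. Candidate motifs, the node
# choice, and the rewiring step are illustrative assumptions.
import networkx as nx
from networkx.algorithms import isomorphism


def count_motif(graph, motif):
    """Count induced occurrences of `motif` in `graph`.

    subgraph_isomorphisms_iter() yields one mapping per labeled embedding,
    so we divide by the motif's automorphism count to get occurrences.
    """
    labeled = sum(
        1 for _ in isomorphism.GraphMatcher(graph, motif).subgraph_isomorphisms_iter()
    )
    autos = sum(
        1 for _ in isomorphism.GraphMatcher(motif, motif).isomorphisms_iter()
    )
    return labeled // autos


def select_trigger(train_graphs, candidates):
    """Pick the candidate motif occurring least often across the dataset,
    following the paper's insight that rarer subgraphs attack better."""
    totals = [sum(count_motif(g, m) for g in train_graphs) for m in candidates]
    return candidates[totals.index(min(totals))]


def inject_trigger(graph, trigger, nodes=None):
    """Patch the trigger into `graph` by rewiring the edges among a chosen
    node set to match the trigger pattern (the usual subgraph-backdoor
    patching style; the arbitrary node choice here is a simplification)."""
    g = graph.copy()
    if nodes is None:
        nodes = list(g.nodes)[: trigger.number_of_nodes()]
    mapping = dict(zip(trigger.nodes, nodes))
    g.remove_edges_from(list(g.subgraph(nodes).edges))  # clear old edges
    g.add_edges_from((mapping[u], mapping[v]) for u, v in trigger.edges)
    return g


if __name__ == "__main__":
    # Toy 4-node candidate motifs: path, star, cycle, clique.
    candidates = [
        nx.path_graph(4),
        nx.star_graph(3),
        nx.cycle_graph(4),
        nx.complete_graph(4),
    ]
    train_graphs = [nx.erdos_renyi_graph(30, 0.1, seed=s) for s in range(10)]
    trigger = select_trigger(train_graphs, candidates)
    poisoned = inject_trigger(train_graphs[0], trigger)
    print("trigger edges:", list(trigger.edges))
    print("poisoned graph edges:", poisoned.number_of_edges())
```

Because the trigger is chosen purely from the structural motif distribution, this step needs no queries to the target model, which is what makes the approach efficient in the black-box setting.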


