Single Node Injection Label Specificity Attack on Graph Neural Networks via Reinforcement Learning

05/04/2023
by   Dayuan Chen, et al.

Graph neural networks (GNNs) have achieved remarkable success in various real-world applications. However, recent studies highlight the vulnerability of GNNs to malicious perturbations. Previous adversaries primarily focus on modifying existing graphs or injecting nodes into them, yielding promising results but with notable limitations. Graph modification attacks (GMAs) require manipulating the original graph, which is often impractical, while graph injection attacks (GIAs) necessitate training a surrogate model in the black-box setting, leading to significant performance degradation when the surrogate architecture diverges from the actual victim model. Furthermore, most methods concentrate on a single attack goal and lack a generalizable adversary that can develop distinct attack strategies for diverse goals, thus limiting precise control over victim model behavior in real-world scenarios. To address these issues, we present a gradient-free, generalizable adversary that injects a single malicious node to manipulate the classification result of a target node in the black-box evasion setting. We propose the Gradient-free Generalizable Single Node Injection Attack (G^2-SNIA), a reinforcement learning framework employing Proximal Policy Optimization. By directly querying the victim model, G^2-SNIA learns from exploration to achieve diverse attack goals with extremely limited attack budgets. Through comprehensive experiments on three widely acknowledged benchmark datasets and four prominent GNNs in the most challenging and realistic scenario, we demonstrate the superior performance of G^2-SNIA over existing state-of-the-art baselines. Moreover, by comparing G^2-SNIA with multiple white-box evasion baselines, we confirm its capacity to generate solutions comparable to those of the best adversaries.
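To illustrate the query-based single-node-injection idea in miniature, the sketch below is a toy stand-in for the paper's setup, not the actual G^2-SNIA method: the victim is a hypothetical neighborhood-averaging classifier (a placeholder for a real GNN), the attack budget is one injected node with a one-hot feature vector and a single edge to the target, and the policy is trained with a plain REINFORCE-style update rather than PPO. All function names and the reward definition (the drop in the victim's confidence in the true label, obtained purely by querying) are assumptions for illustration.

```python
import numpy as np

def victim_predict(features, adj, node):
    """Toy black-box 'victim': softmax over the mean feature vector of the
    node's closed neighborhood (hypothetical stand-in for a real GNN)."""
    neigh = np.append(np.nonzero(adj[node])[0], node)
    logits = features[neigh].mean(axis=0)
    e = np.exp(logits - logits.max())
    return e / e.sum()  # class probabilities

def attack(features, adj, target, true_label, n_feats, rng, iters=200, lr=0.5):
    """Learn which one-hot feature the injected node should carry.
    Budget: one injected node, one edge to the target (single-node injection).
    Uses a REINFORCE-style policy-gradient update as a toy stand-in for PPO."""
    theta = np.zeros(n_feats)  # logits of a softmax policy over feature choices
    for _ in range(iters):
        probs = np.exp(theta - theta.max())
        probs /= probs.sum()
        a = rng.choice(n_feats, p=probs)
        # Build the perturbed graph: append the injected node's feature row
        # and wire a single undirected edge between it and the target.
        f2 = np.vstack([features, np.eye(n_feats)[a]])
        n = adj.shape[0]
        a2 = np.zeros((n + 1, n + 1))
        a2[:n, :n] = adj
        a2[n, target] = a2[target, n] = 1
        # Reward: drop in the victim's confidence in the true label,
        # measured only through queries to the black-box model.
        reward = (victim_predict(features, adj, target)[true_label]
                  - victim_predict(f2, a2, target)[true_label])
        # Policy-gradient step: grad log pi(a) = e_a - probs.
        grad = -probs
        grad[a] += 1.0
        theta += lr * reward * grad
    return int(np.argmax(theta))  # best injected-node feature found
```

The key structural point the sketch preserves is that the adversary never touches the victim's gradients: it only observes output probabilities before and after the candidate injection, which is what makes the black-box evasion setting realistic.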


Related research

- Let Graph be the Go Board: Gradient-free Node Injection Attack for Graph Neural Networks via Reinforcement Learning (11/19/2022)
- Black-box Node Injection Attack for Graph Neural Networks (02/18/2022)
- Single Node Injection Attack against Graph Neural Networks (08/30/2021)
- Node Injection Attacks on Graphs via Reinforcement Learning (09/14/2019)
- Reinforcement Learning-based Black-Box Evasion Attacks to Link Prediction in Dynamic Graphs (09/01/2020)
- Understanding and Improving Graph Injection Attack by Promoting Unnoticeability (02/16/2022)
- Subnet Replacement: Deployment-stage backdoor attack against deep neural networks in gray-box setting (07/15/2021)
