Evasion Attacks to Graph Neural Networks via Influence Function

09/01/2020
by Binghui Wang, et al.

Graph neural networks (GNNs) have achieved state-of-the-art performance in many graph-related tasks, e.g., node classification. However, recent works show that GNNs are vulnerable to evasion attacks, i.e., an attacker can slightly perturb the graph structure to fool GNN models. Existing evasion attacks against GNNs have several key drawbacks: 1) they are limited to attacking two-layer GNNs; 2) they are not efficient; and/or 3) they need to know the GNN model parameters. In this paper, we address the above drawbacks and propose an influence-based evasion attack against GNNs. Specifically, we first introduce two influence functions, i.e., feature-label influence and label influence, which are defined on GNNs and label propagation (LP), respectively. Then, we build a strong connection between GNNs and LP in terms of influence. Next, we reformulate the evasion attack against GNNs as the problem of calculating label influence on LP, which is applicable to multi-layer GNNs and does not require knowledge of the GNN model. We also propose an efficient algorithm to calculate label influence. Finally, we evaluate our influence-based attack on three benchmark graph datasets. Our experimental results show that, compared to the state-of-the-art attack, our attack achieves comparable attack performance but a 5-50x speedup when attacking two-layer GNNs. Moreover, our attack is effective at attacking multi-layer GNNs.
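To make the idea concrete, below is a minimal, illustrative Python sketch of the attack pipeline the abstract describes: approximate a GNN's predictions with label propagation, measure the label influence of each candidate edge flip on a target node, and greedily flip the most harmful edges within a budget. All names and parameters here (label_propagation, label_influence, greedy_evasion_attack, alpha, n_iters) are assumptions for illustration, and the brute-force influence computation is not the paper's efficient algorithm.

import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def label_propagation(A, Y, train_mask, alpha=0.9, n_iters=20):
    """Iterative LP: F <- alpha * S F + (1 - alpha) * Y_train."""
    S = normalized_adjacency(A)
    Y0 = np.where(train_mask[:, None], Y, 0.0)  # zero out non-training labels
    F = Y0.copy()
    for _ in range(n_iters):
        F = alpha * (S @ F) + (1 - alpha) * Y0
    return F

def label_influence(A, Y, train_mask, target, edge):
    """Change in the target node's propagated label vector when one
    (undirected) edge is toggled -- a brute-force stand-in for the
    paper's efficient label-influence computation."""
    F_before = label_propagation(A, Y, train_mask)[target]
    A_pert = A.copy()
    u, v = edge
    A_pert[u, v] = A_pert[v, u] = 1.0 - A_pert[u, v]  # flip the edge
    F_after = label_propagation(A_pert, Y, train_mask)[target]
    return F_after - F_before

def greedy_evasion_attack(A, Y, train_mask, target, budget, candidates):
    """Greedily flip the candidate edges whose label influence most
    decreases the target's score on its currently predicted class."""
    A_atk = A.copy()
    pred = label_propagation(A_atk, Y, train_mask)[target].argmax()
    for _ in range(budget):
        best_edge, best_drop = None, 0.0
        for edge in candidates:
            drop = -label_influence(A_atk, Y, train_mask, target, edge)[pred]
            if drop > best_drop:
                best_edge, best_drop = edge, drop
        if best_edge is None:
            break
        u, v = best_edge
        A_atk[u, v] = A_atk[v, u] = 1.0 - A_atk[u, v]
        candidates = [e for e in candidates if e != best_edge]
    return A_atk

This sketch re-runs LP for every candidate edge, which is exactly the inefficiency the paper's closed-form label-influence algorithm avoids; it is meant only to illustrate the influence-based objective.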


Related research

08/21/2021 · A Hard Label Black-box Adversarial Attack Against Graph Neural Networks
Graph Neural Networks (GNNs) have achieved state-of-the-art performance ...

06/21/2021 · Adversarial Attack on Graph Neural Networks as An Influence Maximization Problem
Graph neural networks (GNNs) have attracted increasing interests. With b...

03/01/2021 · Snowflake: Scaling GNNs to High-Dimensional Continuous Control via Parameter Freezing
Recent research has shown that Graph Neural Networks (GNNs) can learn po...

04/05/2023 · Rethinking the Trigger-injecting Position in Graph Backdoor Attack
Backdoor attacks have been demonstrated as a security threat for machine...

04/08/2021 · Explainability-based Backdoor Attacks Against Graph Neural Networks
Backdoor attacks represent a serious threat to neural network models. A ...

07/27/2022 · Label-Only Membership Inference Attack against Node-Level Graph Neural Networks
Graph Neural Networks (GNNs), inspired by Convolutional Neural Networks ...

12/13/2021 · Hybrid Graph Neural Networks for Few-Shot Learning
Graph neural networks (GNNs) have been used to tackle the few-shot learn...
