Bandits for Structure Perturbation-based Black-box Attacks to Graph Neural Networks with Theoretical Guarantees

05/07/2022 ∙ by Binghui Wang, et al.
Graph neural networks (GNNs) have achieved state-of-the-art performance in many graph-based tasks such as node classification and graph classification. However, many recent works have demonstrated that an attacker can mislead GNN models by slightly perturbing the graph structure. Existing attacks on GNNs either assume the less practical threat model in which the attacker can access the GNN model parameters, or adopt the practical black-box threat model but perturb node features, which has been shown to be insufficiently effective. In this paper, we aim to bridge this gap and consider black-box attacks on GNNs via structure perturbation, with theoretical guarantees. We propose to address this challenge through bandit techniques. Specifically, we formulate our attack as an online optimization problem with bandit feedback. The original problem is essentially NP-hard because perturbing the graph structure is a binary optimization problem. We then propose an online attack based on bandit optimization whose regret is proven to be sublinear in the number of queries T, i.e., 𝒪(√N · T^{3/4}), where N is the number of nodes in the graph. Finally, we evaluate our proposed attack by conducting experiments over multiple datasets and GNN models. The experimental results on various citation graphs and image graphs show that our attack is both effective and efficient. Source code is available at <https://github.com/Metaoblivion/Bandit_GNN_Attack>
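The core idea of attacking with only bandit (query) feedback can be illustrated with a minimal sketch of one-point zeroth-order online optimization, the general technique that such attacks build on: at each round, query the loss once at a randomly perturbed point, form a gradient estimate from that single value, and take a projected descent step. Everything here (function names, the toy quadratic loss, step sizes) is an illustrative assumption, not the paper's actual algorithm or its discrete structure-perturbation formulation.

```python
import numpy as np

def one_point_bandit_descent(loss, d, T, delta=0.1, eta=0.01, radius=1.0):
    """Online optimization with one-point bandit feedback (illustrative sketch).

    At round t we query loss(x_t + delta * u_t) once, where u_t is a random
    unit vector, and use (d / delta) * loss_value * u_t as an unbiased-in-
    expectation estimate of the gradient of a smoothed version of the loss.
    """
    rng = np.random.default_rng(0)
    x = np.zeros(d)
    for _ in range(T):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)            # uniform direction on the unit sphere
        value = loss(x + delta * u)       # the single query allowed per round
        grad_est = (d / delta) * value * u
        x = x - eta * grad_est
        norm = np.linalg.norm(x)
        if norm > radius:                 # project back onto the feasible ball
            x *= radius / norm
    return x

# Toy target: minimize ||x - x*||^2 using only loss-value queries.
target = np.array([0.3, -0.2, 0.5])
x_hat = one_point_bandit_descent(lambda x: np.sum((x - target) ** 2), d=3, T=5000)
```

In the attack setting, the loss query would correspond to sending a perturbed graph to the black-box GNN and observing its output; the paper additionally has to handle the binary nature of edge perturbations, which this continuous sketch glosses over.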

Related research:

- A Hard Label Black-box Adversarial Attack Against Graph Neural Networks (08/21/2021)
- Black-box Gradient Attack on Graph Neural Networks: Deeper Insights in Graph-based Attack and Defense (04/30/2021)
- Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization (10/24/2020)
- Black-Box Adversarial Attacks on Graph Neural Networks with Limited Node Access (06/09/2020)
- Black-box Node Injection Attack for Graph Neural Networks (02/18/2022)
- The General Black-box Attack Method for Graph Neural Networks (08/04/2019)
- Single-Node Attack for Fooling Graph Neural Networks (11/06/2020)
