A General Black-box Attack Method for Graph Neural Networks

by Heng Chang, et al.
The University of Texas at Arlington
Tsinghua University
Georgia Institute of Technology

With the great success of Graph Neural Networks (GNNs) in representation learning on graph-structured data, the robustness of GNNs against adversarial attacks has inevitably become a central problem in the graph learning domain. Despite fruitful progress, current works suffer from two main limitations: first, attack methods must be developed case by case for each target model; second, most of them are restricted to the white-box setting. This paper advances current frameworks in a more general and flexible direction -- we require only a single method to attack various kinds of GNNs, and this attacker is black-box. To this end, we begin by investigating the theoretical connections between different kinds of GNNs in a principled way and integrate different GNN models into a unified framework, dubbed General Spectral Graph Convolution. On this basis, we propose a generalized adversarial attacker that targets two families of GNNs: convolution-based models and sampling-based models. More interestingly, our attacker does not require any knowledge of the target classifiers used in the GNNs. Extensive experimental results validate the effectiveness of our method on several benchmark datasets. In particular, under our attack, even small graph perturbations such as a single edge flip consistently degrade the performance of different GNN models.
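The abstract's core idea -- that a tiny structural perturbation such as flipping one edge changes the propagation operator shared by convolution-based GNNs -- can be illustrated with a minimal sketch. The snippet below assumes a GCN-style propagation matrix (the symmetrically normalized adjacency with self-loops, a low-order special case of spectral graph convolution); it is not the paper's actual attacker, only a toy demonstration of how one edge flip perturbs the operator that every aggregation step depends on. The function names `normalized_adjacency` and `flip_edge` are illustrative, not from the paper.

```python
import numpy as np

def normalized_adjacency(A):
    """A_hat = D^{-1/2} (A + I) D^{-1/2}: the GCN propagation
    matrix, a simple instance of spectral graph convolution."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def flip_edge(A, i, j):
    """Copy of A with undirected edge (i, j) flipped:
    added if absent, removed if present."""
    A2 = A.copy()
    A2[i, j] = A2[j, i] = 1 - A2[i, j]
    return A2

# Toy 4-node path graph 0-1-2-3.
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])

A_hat = normalized_adjacency(A)
A_hat_attacked = normalized_adjacency(flip_edge(A, 0, 3))

# The single flip changes the propagation matrix, and hence every
# layer of feature aggregation in any GNN built on this operator.
print(np.abs(A_hat - A_hat_attacked).max() > 0)  # True
```

Because all convolution-based models in the paper's unified framework act through (polynomials of) such an operator, a perturbation chosen to maximally shift it can transfer across different GNN architectures without white-box access to any of them.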




