A Hard Label Black-box Adversarial Attack Against Graph Neural Networks

by   Jiaming Mu, et al.

Graph Neural Networks (GNNs) have achieved state-of-the-art performance in various graph-structure-related tasks such as node classification and graph classification. However, GNNs are vulnerable to adversarial attacks. Existing works mainly focus on attacking GNNs for node classification; attacks against GNNs for graph classification have not been well explored. In this work, we conduct a systematic study of adversarial attacks against GNNs for graph classification via perturbing the graph structure. In particular, we focus on the most challenging setting, the hard-label black-box attack, where an attacker has no knowledge of the target GNN model and can only obtain predicted labels by querying it. To achieve this goal, we formulate our attack as an optimization problem whose objective is to minimize the number of edges perturbed in a graph while maintaining a high attack success rate. The original optimization problem is intractable, so we relax it to a tractable one, which we solve with a theoretical convergence guarantee. We also design a coarse-grained searching algorithm and a query-efficient gradient computation algorithm to reduce the number of queries to the target GNN model. Our experimental results on three real-world datasets demonstrate that our attack can effectively attack representative GNNs for graph classification with fewer queries and perturbations. We also evaluate the effectiveness of our attack under two defenses: a well-designed adversarial graph detector, and a target GNN model equipped with a defense to prevent adversarial graph generation. Our experimental results show that such defenses are not effective enough, highlighting the need for more advanced defenses.
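To make the hard-label black-box setting concrete, the following is a minimal illustrative sketch, not the paper's actual algorithm: the attacker can only call a hypothetical `query_label` oracle that returns a predicted class, first searches coarsely for any structural perturbation that flips the label, then greedily reverts edge flips to reduce the perturbation size. The oracle here is a toy stand-in for the real target GNN.

```python
import numpy as np

def query_label(adj):
    # Hypothetical stand-in for the hard-label black-box target GNN:
    # it reveals only a predicted class. Here, a toy rule on edge count.
    return int(adj.sum() // 2 % 2)

def hard_label_attack(adj, true_label, max_queries=500, seed=0):
    """Sketch of a hard-label black-box structural attack:
    (1) coarse search: randomly flip edges until the predicted label changes;
    (2) refinement: greedily undo flips that keep the graph adversarial,
    approximating the objective of minimizing the number of perturbed edges."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    queries = 0
    cand = None
    # Stage 1: coarse search for any adversarial perturbation.
    while queries < max_queries:
        i, j = rng.choice(n, size=2, replace=False)
        trial = adj.copy()
        trial[i, j] = trial[j, i] = 1 - trial[i, j]  # flip one edge
        queries += 1
        if query_label(trial) != true_label:
            cand = trial
            break
    if cand is None:
        return None, queries  # no adversarial graph found within budget
    # Stage 2: greedily revert flips while the graph stays adversarial.
    for i in range(n):
        for j in range(i + 1, n):
            if cand[i, j] != adj[i, j]:
                trial = cand.copy()
                trial[i, j] = trial[j, i] = adj[i, j]  # undo this flip
                queries += 1
                if query_label(trial) != true_label:
                    cand = trial  # still adversarial with fewer perturbed edges
    return cand, queries
```

The paper's method replaces the random search with a relaxed optimization solved via query-efficient gradient estimation, but the query interface, the only channel of information, is the same as above.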






