A Targeted Universal Attack on Graph Convolutional Network

11/29/2020
by Jiazhu Dai, et al.

Graph-structured data exist in numerous real-life applications. As a state-of-the-art graph neural network, the graph convolutional network (GCN) plays an important role in processing graph-structured data. However, a recent study reported that GCNs are also vulnerable to adversarial attacks, meaning that GCN models can be misled by malicious, unnoticeable modifications of the data. Among adversarial attacks on GCNs, a special class known as the universal adversarial attack generates a single perturbation that can be applied to any sample and causes GCN models to output incorrect results. Although universal adversarial attacks have been extensively researched in computer vision, there is little work on universal adversarial attacks against graph-structured data. In this paper, we propose a targeted universal adversarial attack against GCNs. Our method employs a few nodes as attack nodes, whose attack capability is enhanced through a small number of fake nodes connected to them. During an attack, any victim node is misclassified by the GCN as the attack node class as long as it is linked to the attack nodes. Experiments on three popular datasets show that the average attack success rate of the proposed attack on any victim node in the graph reaches 83% when using only 3 attack nodes and 6 fake nodes. We hope that our work will make the community aware of the threat of this type of attack and raise attention to its future defense.
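To make the mechanism concrete, the sketch below illustrates how the trigger described in the abstract would be applied at inference time: the attacker controls a few attack nodes, wires some fake nodes to them, and then links a victim node to the attack nodes so that the GCN's prediction for the victim flips. This is not the authors' code or their attack-generation procedure; it is a minimal toy example assuming a standard two-layer GCN with symmetric normalization, where the weights are random placeholders (in the paper's setting they would come from the trained victim model, and the fake nodes' features and connections would be optimized by the attacker rather than chosen at random).

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize_adj(A):
    """Symmetrically normalize A + I, as in the standard GCN propagation rule."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_forward(A, X, W1, W2):
    """Two-layer GCN: returns class logits for every node."""
    A_norm = normalize_adj(A)
    H = np.maximum(A_norm @ X @ W1, 0.0)  # ReLU hidden layer
    return A_norm @ H @ W2

# Toy graph: n ordinary nodes, a few attacker-controlled attack nodes,
# and fake nodes connected to the attack nodes to boost their influence.
n, n_attack, n_fake, n_feat, n_class = 8, 3, 6, 16, 4
N = n + n_attack + n_fake
attack_nodes = range(n, n + n_attack)
fake_nodes = range(n + n_attack, N)

A = np.zeros((N, N))
for i in range(n - 1):                     # placeholder benign topology
    A[i, i + 1] = A[i + 1, i] = 1.0
for a in attack_nodes:                     # fake nodes wired to attack nodes
    for f in fake_nodes:
        A[a, f] = A[f, a] = 1.0

# Random placeholders for node features and (already trained) GCN weights.
X = rng.normal(size=(N, n_feat))
W1 = rng.normal(scale=0.5, size=(n_feat, 32))
W2 = rng.normal(scale=0.5, size=(32, n_class))

victim = 0
clean_pred = gcn_forward(A, X, W1, W2)[victim].argmax()

# The universal trigger: simply link the victim node to the attack nodes.
A_attacked = A.copy()
for a in attack_nodes:
    A_attacked[victim, a] = A_attacked[a, victim] = 1.0
attacked_pred = gcn_forward(A_attacked, X, W1, W2)[victim].argmax()

print(f"prediction before linking: {clean_pred}, after linking: {attacked_pred}")
```

Because the trigger is just a set of edges to fixed attack nodes, the same perturbation can be reused for any victim node, which is what makes the attack universal; the per-model work lies in crafting the fake nodes so that the attack nodes reliably pull linked victims into the target class.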


