Graph Universal Adversarial Attacks: A Few Bad Actors Ruin Graph Learning Models

02/12/2020
by   Xiao Zang, et al.

Deep neural networks, while generalizing well, are known to be sensitive to small adversarial perturbations. This phenomenon poses a severe security threat and calls for an in-depth investigation of the robustness of deep learning models. With the emergence of neural networks for graph-structured data, similar investigations are needed to understand their robustness. It has been found that adversarially perturbing the graph structure and/or node features can significantly degrade model performance. In this work, we show from a different angle that such fragility similarly occurs if the graph contains a few bad-actor nodes, which compromise a trained graph neural network by flipping their connections to any targeted victim. Worse, the bad actors found for one graph model severely compromise other models as well. We call these bad actors "anchor nodes" and propose an algorithm, named GUA, to identify them. Thorough empirical investigation yields an interesting finding: the anchor nodes often belong to the same class. It also corroborates the intuitive trade-off between the number of anchor nodes and the attack success rate. For the Cora data set, which contains 2708 nodes, as few as six anchor nodes result in an attack success rate above 80% for GCN and three other models.
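The core perturbation described above, flipping a victim node's connections to a fixed anchor set and checking whether the model's prediction changes, can be sketched in a few lines. The following is a minimal illustration only, not the GUA algorithm itself: `neighbor_majority` is a hypothetical stand-in classifier (majority label among neighbors) used in place of a trained GNN, and all function names here are assumptions for the sketch.

```python
import numpy as np

def flip_to_victim(adj, anchors, victim):
    """Return a copy of the adjacency matrix with the victim's
    connections to every anchor node flipped (0 <-> 1), symmetrically."""
    perturbed = adj.copy()
    for a in anchors:
        perturbed[victim, a] = 1 - perturbed[victim, a]
        perturbed[a, victim] = perturbed[victim, a]
    return perturbed

def neighbor_majority(adj, labels, node):
    """Toy stand-in for a trained graph model: predict a node's class
    as the majority label among its neighbors (hypothetical)."""
    neighbors = np.flatnonzero(adj[node])
    if neighbors.size == 0:
        return labels[node]
    return int(np.bincount(labels[neighbors]).argmax())

def attack_success_rate(adj, labels, anchors, victims):
    """Fraction of victim nodes whose prediction changes after
    flipping their connections to the anchor set."""
    hits = 0
    for v in victims:
        before = neighbor_majority(adj, labels, v)
        after = neighbor_majority(flip_to_victim(adj, anchors, v), labels, v)
        hits += before != after
    return hits / len(victims)
```

The same universal anchor set is reused for every victim; only the victim-to-anchor edges change per attack, which is what makes the perturbation "universal" in the sense of the paper.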


