Adversarial Attacks on Neural Networks for Graph Data

05/21/2018
by Daniel Zügner et al.

Deep learning models for graphs have achieved strong performance on the task of node classification. Despite their proliferation, there is currently no study of their robustness to adversarial attacks. Yet, in domains where they are likely to be used, e.g. the web, adversaries are common. Can deep learning models for graphs be easily fooled? In this work, we introduce the first study of adversarial attacks on attributed graphs, focusing specifically on models that exploit ideas of graph convolutions. In addition to attacks at test time, we tackle the more challenging class of poisoning/causative attacks, which target the training phase of a machine learning model. We generate adversarial perturbations targeting the nodes' features and the graph structure, thus taking the dependencies between instances into account. Moreover, we ensure that the perturbations remain unnoticeable by preserving important data characteristics. To cope with the underlying discrete domain, we propose an efficient algorithm, Nettack, that exploits incremental computations. Our experimental study shows that the accuracy of node classification drops significantly even with only a few perturbations. Moreover, our attacks are transferable: the learned attacks generalize to other state-of-the-art node classification models and unsupervised approaches, and they succeed even when only limited knowledge about the graph is available.
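To make the core idea concrete, the sketch below illustrates a greedy structure attack against a linearized two-layer graph-convolution surrogate: candidate edge flips incident to a target node are scored by how much they reduce the node's classification margin. This is a minimal illustration under toy assumptions (random graph, random "trained" weights, only direct edge flips), not the authors' Nettack implementation, which additionally attacks node features, enforces unnoticeability constraints, and uses incremental score updates for efficiency.

```python
# Minimal sketch (NOT the authors' Nettack code) of a greedy structure attack on a
# linearized two-layer GCN surrogate. All data, sizes, and weights below are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# --- toy attributed graph: n nodes, d features, k classes (assumed for the demo) ---
n, d, k = 40, 16, 3
A = (rng.random((n, n)) < 0.08).astype(float)
A = np.triu(A, 1); A = A + A.T            # undirected, no self-loops
X = rng.random((n, d))
W = rng.normal(size=(d, k))               # stand-in for trained surrogate weights
labels = rng.integers(0, k, size=n)

def normalized_adj(A):
    """Symmetrically normalized adjacency with self-loops: D^{-1/2}(A+I)D^{-1/2}."""
    A_hat = A + np.eye(len(A))
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def margin(A, target):
    """Margin of the target node under the linear surrogate A_hat^2 X W.
    Positive means the node is still classified as its true label."""
    A_hat = normalized_adj(A)
    z = (A_hat @ A_hat @ X @ W)[target]
    true = labels[target]
    return z[true] - np.max(np.delete(z, true))

def greedy_structure_attack(A, target, budget):
    """Greedily flip the edge (u, target) that most reduces the target's margin."""
    A = A.copy()
    for _ in range(budget):
        best_u, best_m = None, margin(A, target)
        for u in range(len(A)):
            if u == target:
                continue
            A[u, target] = A[target, u] = 1 - A[u, target]   # try flipping edge
            m = margin(A, target)
            if m < best_m:
                best_u, best_m = u, m
            A[u, target] = A[target, u] = 1 - A[u, target]   # undo
        if best_u is None:                                    # no flip helps any more
            break
        A[best_u, target] = A[target, best_u] = 1 - A[best_u, target]
        print(f"flip edge ({best_u}, {target}) -> margin {best_m:.3f}")
    return A

target = 0
print(f"clean margin: {margin(A, target):.3f}")
A_attacked = greedy_structure_attack(A, target, budget=3)
print(f"attacked margin: {margin(A_attacked, target):.3f}")
```

Recomputing the full margin for every candidate flip, as done here for clarity, is the naive approach; the efficiency claim in the abstract rests on updating these scores incrementally as the graph changes.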


Related research

05/21/2018  Adversarial Attacks on Classification Models for Graphs
02/22/2019  Adversarial Attacks on Graph Neural Networks via Meta Learning
11/04/2021  Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods
08/04/2022  Node Copying: A Random Graph Model for Effective Graph Sampling
09/04/2018  Adversarial Attacks on Node Embeddings
12/04/2020  Unsupervised Adversarially-Robust Representation Learning on Graphs
07/13/2021  Correlation Analysis between the Robustness of Sparse Neural Networks and their Random Hidden Structural Priors
