Stealing Links from Graph Neural Networks

05/05/2020
by Xinlei He, et al.

Graph data, such as social networks and chemical networks, contains a wealth of information that can help to build powerful applications. To fully unleash the power of graph data, a family of machine learning models, namely graph neural networks (GNNs), has been introduced. Empirical results show that GNNs achieve state-of-the-art performance in various tasks. Graph data is the key to the success of GNNs. High-quality graph data is expensive to collect and often contains sensitive information, such as social relations. Various studies have shown that machine learning models are vulnerable to attacks against their training data, but most of these attacks focus on data from the Euclidean space, such as images and texts; little attention has been paid to the security and privacy risks of the graph data used to train GNNs. In this paper, we aim to fill this gap by proposing the first link stealing attacks against graph neural networks. Given black-box access to a GNN model, the goal of the adversary is to infer whether a link exists between any pair of nodes in the graph used to train the model. We propose a threat model that systematically characterizes the adversary's background knowledge along three dimensions; combining these dimensions yields a comprehensive taxonomy of 8 different link stealing attacks. We propose multiple novel methods to realize these attacks. Extensive experiments over 8 real-world datasets show that our attacks are effective at inferring links, e.g., the AUC (area under the ROC curve) is above 0.95 in multiple cases.
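The intuition behind the simplest of these attacks is that a GNN's posteriors (per-node class-probability outputs) for two connected nodes tend to be more similar than those for two unconnected nodes, because GNNs aggregate features across edges. The following is a minimal sketch of that idea, not the paper's exact method: the `predict_link` helper and its similarity threshold are hypothetical illustrations of how an adversary with black-box query access might turn posterior similarity into a link guess.

```python
import math


def cosine_similarity(p, q):
    """Cosine similarity between two posterior (class-probability) vectors."""
    dot = sum(a * b for a, b in zip(p, q))
    norm_p = math.sqrt(sum(a * a for a in p))
    norm_q = math.sqrt(sum(b * b for b in q))
    return dot / (norm_p * norm_q)


def predict_link(posterior_u, posterior_v, threshold=0.9):
    """Unsupervised link-stealing heuristic (illustrative sketch):
    if the target GNN's posteriors for nodes u and v are highly similar,
    guess that an edge (u, v) exists in the training graph.
    The threshold value is hypothetical; a real adversary would tune it.
    """
    return cosine_similarity(posterior_u, posterior_v) >= threshold


# Two nodes with nearly identical posteriors are flagged as linked;
# two nodes confidently assigned to different classes are not.
linked = predict_link([0.9, 0.05, 0.05], [0.85, 0.10, 0.05])
unlinked = predict_link([0.9, 0.05, 0.05], [0.05, 0.90, 0.05])
```

In the paper's threat model, richer background knowledge (shadow datasets, partial graphs, node attributes) allows supervised variants of this idea, but the posterior-similarity signal above is the common core.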


Related research

Node-Level Membership Inference Attacks Against Graph Neural Networks (02/10/2021)
Many real-world data comes in the form of graphs, such as social network...

VertexSerum: Poisoning Graph Neural Networks for Link Inference (08/02/2023)
Graph neural networks (GNNs) have brought superb performance to various ...

Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization (10/24/2020)
Graph neural networks (GNNs) have been widely used to analyze the graph-...

Graph Warp Module: an Auxiliary Module for Boosting the Power of Graph Neural Networks (02/04/2019)
Recently, Graph Neural Networks (GNNs) are trending in the machine learn...

Adversarial Model Extraction on Graph Neural Networks (12/16/2019)
Along with the advent of deep neural networks came various methods of ex...

A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and Applications (08/31/2023)
Graph Neural Networks (GNNs) have gained significant attention owing to ...

Task and Model Agnostic Adversarial Attack on Graph Neural Networks (12/25/2021)
Graph neural networks (GNNs) have witnessed significant adoption in the ...
