Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization

10/24/2020
by Bang Wu, et al.

Graph neural networks (GNNs) have been widely used to analyze graph-structured data in various application domains, e.g., social networks, molecular biology, and anomaly detection. With great power, GNN models, usually valuable intellectual property of their owners, also become attractive targets for attackers. Recent studies show that machine learning models face a severe threat called model extraction attacks, in which a well-trained private model owned by a service provider can be stolen by an attacker posing as a client. Unfortunately, existing works focus on models trained on data in Euclidean space, e.g., images and texts, while how to extract a GNN model that involves graph structure and node features remains unexplored. In this paper, we explore and develop model extraction attacks against GNN models. Given only black-box access to a target GNN model, the attacker aims to reconstruct a duplicated model via a set of nodes he has obtained (called attacker nodes). We first systematically formalise the threat model in the context of GNN model extraction and classify the adversarial threats into seven categories according to the attacker's background knowledge, e.g., the attributes and/or neighbour connections of the attacker nodes. We then present detailed methods that exploit the knowledge accessible under each threat to implement the attacks. Evaluations over three real-world datasets show that our attacks extract duplicated models effectively, i.e., more than 89% of inputs in the target domain receive the same output predictions as from the victim model.
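The attack pipeline described above (query the victim's prediction API on the attacker nodes, then fit a duplicated model to the responses) can be illustrated with a short sketch. The code below is an assumption-laden illustration, not the paper's implementation: it assumes dense adjacency matrices, a two-layer GCN surrogate, and soft-label (probability) query responses, and the names `query_victim`, `SurrogateGCN`, and `extract_model` are hypothetical.

```python
# Minimal sketch of black-box GNN model extraction (assumptions noted above).
import torch
import torch.nn.functional as F

def normalize_adj(adj):
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)

class SurrogateGCN(torch.nn.Module):
    """Two-layer GCN used as the attacker's duplicated model (illustrative choice)."""
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, hidden_dim)
        self.w2 = torch.nn.Linear(hidden_dim, num_classes)

    def forward(self, x, a_hat):
        h = F.relu(a_hat @ self.w1(x))
        return a_hat @ self.w2(h)

def extract_model(query_victim, x, adj, epochs=200, lr=0.01):
    """Train a surrogate on the victim's predictions for the attacker nodes.

    query_victim(x, adj) -> class probabilities is the only (black-box) access assumed.
    x: features of the attacker nodes; adj: their (possibly reconstructed) adjacency.
    """
    a_hat = normalize_adj(adj)
    with torch.no_grad():
        victim_probs = query_victim(x, adj)   # responses to black-box queries
    surrogate = SurrogateGCN(x.size(1), 64, victim_probs.size(1))
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        logits = surrogate(x, a_hat)
        # Fit the victim's soft labels (knowledge-distillation-style objective).
        loss = F.kl_div(F.log_softmax(logits, dim=1), victim_probs,
                        reduction="batchmean")
        loss.backward()
        opt.step()
    return surrogate
```

The paper's threat categories differ mainly in which of these inputs the attacker actually has (node attributes, neighbour connections, or both); missing pieces would have to be estimated before a loop like the one above can be run.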
