Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization

10/24/2020
by   Bang Wu, et al.

Graph neural networks (GNNs) have been widely used to analyze graph-structured data in application domains such as social networks, molecular biology, and anomaly detection. With this power, GNN models, which are often valuable intellectual property of their owners, also become attractive targets for attackers. Recent studies show that machine learning models face a severe threat called model extraction attacks, in which a well-trained private model owned by a service provider can be stolen by an attacker posing as a client. Unfortunately, existing works focus on models trained on Euclidean data, e.g., images and texts, while how to extract a GNN model, which involves both a graph structure and node features, has yet to be explored. In this paper, we explore and develop model extraction attacks against GNN models. Given only black-box access to a target GNN model, the attacker aims to reconstruct a duplicated model using a set of nodes it has obtained (called attacker nodes). We first systematically formalise the threat model in the context of GNN model extraction and classify the adversarial threats into seven categories according to the attacker's background knowledge, e.g., the attributes and/or neighbor connections of the attacker nodes. We then present detailed methods that utilize the accessible knowledge in each threat category to implement the attacks. Evaluated over three real-world datasets, our attacks are shown to extract duplicated models effectively, i.e., more than 89% of inputs receive the same output predictions from the duplicated model as from the victim model.
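The core pipeline described above (query the black-box victim model on the attacker nodes, then fit a duplicated model to its responses) can be illustrated with a short sketch. This is an assumed reconstruction of the general query-and-retrain idea, not the paper's exact algorithm: `query_victim` is a hypothetical stand-in for the service provider's prediction API, and the two-layer GCN surrogate is an assumed architecture.

```python
# Minimal sketch of a black-box GNN model-extraction loop. Assumes the
# attacker already holds node features and local connectivity for its
# attacker nodes; `query_victim` is a hypothetical placeholder for the
# provider's prediction API, not an API named in the paper.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv


class SurrogateGCN(torch.nn.Module):
    """Two-layer GCN trained to mimic the victim model's predictions."""

    def __init__(self, num_features: int, num_classes: int, hidden: int = 16):
        super().__init__()
        self.conv1 = GCNConv(num_features, hidden)
        self.conv2 = GCNConv(hidden, num_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)


def extract_model(query_victim, x, edge_index, num_classes, epochs=200):
    """Train a duplicated model from the victim's black-box responses.

    query_victim(x, edge_index) -> predicted class per node (hypothetical).
    x: [num_attacker_nodes, num_features] attacker-node features.
    edge_index: [2, num_edges] connectivity among attacker nodes.
    """
    # Step 1: query the victim once to obtain pseudo-labels.
    with torch.no_grad():
        pseudo_labels = query_victim(x, edge_index)

    # Step 2: fit the surrogate so it reproduces those responses.
    surrogate = SurrogateGCN(x.size(1), num_classes)
    optimizer = torch.optim.Adam(surrogate.parameters(), lr=0.01,
                                 weight_decay=5e-4)
    for _ in range(epochs):
        optimizer.zero_grad()
        out = surrogate(x, edge_index)
        loss = F.cross_entropy(out, pseudo_labels)
        loss.backward()
        optimizer.step()
    return surrogate
```

In the threat categories where the attacker lacks node attributes or neighbor connections, those inputs would first need to be recovered or synthesized; the sketch assumes the most knowledgeable setting, where both are available.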

Related research

05/05/2020  Stealing Links from Graph Neural Networks
Graph data, such as social networks and chemical networks, contains a we...

05/07/2022  Bandits for Structure Perturbation-based Black-box Attacks to Graph Neural Networks with Theoretical Guarantees
Graph neural networks (GNNs) have achieved state-of-the-art performance ...

08/22/2023  Multi-Instance Adversarial Attack on GNN-Based Malicious Domain Detection
Malicious domain detection (MDD) is an open security challenge that aims...

11/06/2020  Single-Node Attack for Fooling Graph Neural Networks
Graph neural networks (GNNs) have shown broad applicability in a variety...

10/17/2021  Adapting Membership Inference Attacks to GNN for Graph Classification: Approaches and Implications
Graph Neural Networks (GNNs) are widely adopted to analyse non-Euclidean...

04/17/2023  GrOVe: Ownership Verification of Graph Neural Networks using Embeddings
Graph neural networks (GNNs) have emerged as a state-of-the-art approach...

09/16/2022  Model Inversion Attacks against Graph Neural Networks
Many data mining tasks rely on graphs to model relational structures amo...
