Model Inversion Attacks against Graph Neural Networks

09/16/2022
by Zaixi Zhang, et al.

Many data mining tasks rely on graphs to model relational structures among individuals (nodes). Since relational data are often sensitive, there is an urgent need to evaluate the privacy risks in graph data. One well-known privacy attack against data analysis models is the model inversion attack, which aims to infer sensitive data in the training dataset and raises serious privacy concerns. Despite its success in grid-like domains, directly applying model inversion attacks to non-grid domains such as graphs leads to poor attack performance, mainly because such attacks fail to account for the unique properties of graphs. To bridge this gap, in this paper we conduct a systematic study of model inversion attacks against Graph Neural Networks (GNNs), one of the state-of-the-art graph analysis tools. First, in the white-box setting, where the attacker has full access to the target GNN model, we present GraphMI to infer the private training graph. Specifically, GraphMI uses a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features; a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference; and a random sampling module to finally sample discrete edges. Furthermore, in the hard-label black-box setting, where the attacker can only query the GNN API and receive classification results, we propose two methods based on gradient estimation and reinforcement learning (RL-GraphMI). We also evaluate our attacks against existing defense mechanisms; our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
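To make the white-box pipeline concrete, below is a minimal PyTorch sketch of the projected gradient and random sampling steps described above. The names target_gnn, node_feats, and labels, the model's call signature, the exact loss terms, and all hyperparameters are assumptions for illustration; the paper's full objective and its graph auto-encoder module are not reproduced here.

    import torch

    def graphmi_sketch(target_gnn, node_feats, labels, n, steps=200, lr=0.1,
                       sparsity_w=1e-3, smooth_w=1e-3):
        # Relax the discrete adjacency matrix to continuous values in [0, 1].
        a = torch.full((n, n), 0.5, requires_grad=True)
        opt = torch.optim.Adam([a], lr=lr)
        for _ in range(steps):
            adj = (a + a.t()) / 2                       # keep the graph undirected
            logits = target_gnn(node_feats, adj)        # white-box forward pass (assumed signature)
            # Classification loss: the recovered graph should reproduce the
            # labels the target model was trained on.
            loss = torch.nn.functional.cross_entropy(logits, labels)
            loss = loss + sparsity_w * adj.abs().sum()  # encourage sparse edges
            # Feature smoothness: connected nodes should have similar features.
            diff = node_feats.unsqueeze(0) - node_feats.unsqueeze(1)
            loss = loss + smooth_w * (adj * diff.pow(2).sum(-1)).sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                a.clamp_(0.0, 1.0)                      # projection back onto [0, 1]
        # Random sampling module: treat each entry as a Bernoulli edge probability,
        # sampling each undirected edge once and mirroring it.
        probs = ((a.detach() + a.detach().t()) / 2).triu(1)
        upper = torch.bernoulli(probs)
        return upper + upper.t()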
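For the hard-label black-box setting, the gradient-estimation idea can be sketched as a standard zeroth-order finite-difference estimator. Here query_api is a hypothetical function returning only predicted labels, and the label-disagreement loss is a simplification of the paper's actual objective.

    import torch

    def hard_label_loss(query_api, node_feats, adj, labels):
        # The attacker observes predicted labels only, so no gradients flow
        # through the target model; the loss counts label disagreements.
        preds = query_api(node_feats, adj)
        return (preds != labels).float().mean()

    def estimate_gradient(query_api, node_feats, a, labels, n_samples=20, mu=0.01):
        # Zeroth-order estimate: average finite differences along random
        # perturbation directions of the relaxed adjacency matrix.
        grad = torch.zeros_like(a)
        base = hard_label_loss(query_api, node_feats, a, labels)
        for _ in range(n_samples):
            u = torch.randn_like(a)                     # random search direction
            perturbed = (a + mu * u).clamp(0.0, 1.0)
            loss = hard_label_loss(query_api, node_feats, perturbed, labels)
            grad += (loss - base) / mu * u              # finite-difference term
        return grad / n_samples

The estimated gradient can then drive the same projected-gradient loop as in the white-box sketch, at the cost of many more queries to the target API.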
