Group Property Inference Attacks Against Graph Neural Networks

09/02/2022
by Xiuling Wang, et al.

With the fast adoption of machine learning (ML) techniques, sharing of ML models is becoming popular. However, ML models are vulnerable to privacy attacks that leak information about the training data. In this work, we focus on a particular type of privacy attack, the property inference attack (PIA), which infers sensitive properties of the training data through access to the target ML model. In particular, we consider Graph Neural Networks (GNNs) as the target model, and the distribution of particular groups of nodes and links in the training graph as the target property. While existing work has investigated PIAs that target graph-level properties, no prior work has studied the inference of node and link properties at the group level. In this work, we perform the first systematic study of group property inference attacks (GPIA) against GNNs. First, we consider a taxonomy of threat models under both black-box and white-box settings with various types of adversary knowledge, and design six different attacks for these settings. We evaluate the effectiveness of these attacks through extensive experiments on three representative GNN models and three real-world graphs. Our results demonstrate the effectiveness of these attacks, whose accuracy outperforms that of the baseline approaches. Second, we analyze the underlying factors that contribute to GPIA's success, and show that a target model trained on graphs with the target property differs in its parameters and/or outputs from one trained on graphs without it, which enables the adversary to infer the existence of the property. Further, we design a set of defense mechanisms against GPIA, and demonstrate that these mechanisms reduce attack accuracy effectively with only a small loss in GNN model accuracy.
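The abstract does not give implementation details, but the black-box attack it describes follows the usual shadow-model recipe. The snippet below is a minimal, hypothetical sketch of such a pipeline: obtain posteriors from shadow models trained on graphs with and without the target group property, reduce each model's posteriors to a fixed-length feature vector, and fit a binary meta-classifier. The `shadow_posteriors` stub (which simulates shadow-model outputs rather than training real GNNs) and the particular feature summary are assumptions for illustration, not the paper's exact attack.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def shadow_posteriors(has_property: bool, n_nodes: int = 200, n_classes: int = 4):
    """Stand-in for querying a shadow GNN trained on a graph that does
    (or does not) satisfy the target group property. In a real attack
    these posteriors come from actual shadow models; here a small
    distributional shift simulates the effect the paper observes."""
    logits = rng.normal(size=(n_nodes, n_classes))
    if has_property:
        logits[:, 0] += 0.5  # the property skews the output distribution
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)  # softmax posteriors

def summarize(posteriors: np.ndarray) -> np.ndarray:
    """Collapse a variable-size set of posteriors into a fixed-length
    feature vector: per-class mean and std, plus mean top-1 confidence."""
    return np.concatenate([
        posteriors.mean(axis=0),
        posteriors.std(axis=0),
        [posteriors.max(axis=1).mean()],
    ])

# Attack training set: shadow models from both "worlds"
# (target property present vs. absent).
labels = [1] * 100 + [0] * 100
features = np.stack([summarize(shadow_posteriors(bool(b))) for b in labels])

X_tr, X_te, y_tr, y_te = train_test_split(
    features, np.array(labels), test_size=0.3, random_state=0)
attack = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"Attack accuracy on held-out shadow models: {attack.score(X_te, y_te):.2f}")
```

In the white-box setting described in the abstract, the same meta-classifier could instead be fed features derived from model parameters; the paper's observation that parameters and outputs differ between models trained with and without the property is exactly what such a classifier exploits.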


Related research

02/10/2021 · Node-Level Membership Inference Attacks Against Graph Neural Networks
Many real-world data comes in the form of graphs, such as social network...

05/18/2022 · Property Unlearning: A Defense Strategy Against Property Inference Attacks
During the training of machine learning models, they may store or "learn...

12/16/2019 · Adversarial Model Extraction on Graph Neural Networks
Along with the advent of deep neural networks came various methods of ex...

06/19/2023 · Substitutional Alloying Using Crystal Graph Neural Networks
Materials discovery, especially for applications that require extreme op...

07/03/2019 · On the Privacy of dK-Random Graphs
Real social network datasets provide significant benefits for understand...

09/16/2022 · Model Inversion Attacks against Graph Neural Networks
Many data mining tasks rely on graphs to model relational structures amo...

10/31/2019 · Quantifying (Hyper) Parameter Leakage in Machine Learning
Black Box Machine Learning models leak information about the proprietary...
