Quantifying Privacy Leakage in Graph Embedding

10/02/2020
by Vasisht Duddu, et al.

Graph embeddings have been proposed to map graph data to a low-dimensional space for downstream processing (e.g., node classification or link prediction). With the increasing collection of personal data, graph embeddings can be trained on private and sensitive data. For the first time, we quantify the privacy leakage in graph embeddings through three inference attacks targeting Graph Neural Networks. We propose a membership inference attack to infer whether a graph node corresponding to an individual user's data was a member of the model's training data or not. We consider a blackbox setting, where the adversary exploits the output prediction scores, and a whitebox setting, where the adversary also has access to the released node embeddings. This attack achieves an accuracy up to 28% better than random guessing by exploiting the distinguishable footprint between train and test data records left by the graph embedding. We propose a graph reconstruction attack in which the adversary aims to reconstruct the target graph given the corresponding graph embeddings. Here, the adversary can reconstruct the graph with more than 80% accuracy and infer links between two nodes around 30% better than random guessing. We then propose an attribute inference attack in which the adversary aims to infer a sensitive attribute. We show that graph embeddings are strongly correlated with node attributes, letting the adversary infer sensitive information (e.g., gender or location).
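To make these attack surfaces concrete, below is a minimal illustrative sketch, not the authors' implementation: a confidence-thresholding membership inference attack and a similarity-based graph reconstruction attack, run here on random stand-in data. The function names, threshold values, and the use of cosine similarity are assumptions for illustration only.

import numpy as np

def membership_inference(pred_scores, threshold=0.9):
    # Blackbox membership inference: the target GNN tends to be more
    # confident on nodes it was trained on, so thresholding the top
    # prediction score separates members from non-members.
    # pred_scores: (n_nodes, n_classes) softmax outputs of the target model.
    # The threshold is a hypothetical choice; in practice it would be
    # calibrated, e.g., against a shadow model.
    confidence = pred_scores.max(axis=1)
    return confidence >= threshold  # True -> predicted training member

def reconstruct_graph(embeddings, threshold=0.5):
    # Graph reconstruction from released node embeddings: embeddings of
    # linked nodes tend to be close, so thresholding pairwise cosine
    # similarity yields a candidate adjacency matrix.
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / np.clip(norms, 1e-12, None)
    similarity = unit @ unit.T
    adjacency = (similarity >= threshold).astype(int)
    np.fill_diagonal(adjacency, 0)  # drop self-loops
    return adjacency

# Toy usage with random data standing in for a victim model's outputs.
rng = np.random.default_rng(0)
scores = rng.dirichlet(np.ones(4), size=10)  # fake softmax outputs
emb = rng.normal(size=(10, 16))              # fake node embeddings
print(membership_inference(scores))
print(reconstruct_graph(emb))

An attribute inference attack would follow the same pattern: since the embeddings correlate with node attributes, a simple classifier trained to map released embeddings to attribute labels known for a subset of nodes generalizes to the sensitive attributes of the remaining nodes.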


Related research

09/17/2020 - On Primes, Log-Loss Scores and (No) Privacy
Membership Inference Attacks exploit the vulnerabilities of exposing mod...

01/23/2022 - Are Your Sensitive Attributes Private? Novel Model Inversion Attribute Inference Attacks on Classification Models
Increasing use of machine learning (ML) technologies in privacy-sensitiv...

10/06/2021 - Inference Attacks Against Graph Neural Networks
Graph is an important data representation ubiquitously existing in the r...

12/23/2019 - Privacy Attacks on Network Embeddings
Data ownership and data protection are increasingly important topics wit...

08/21/2022 - Inferring Sensitive Attributes from Model Explanations
Model explanations provide transparency into a trained machine learning ...

09/02/2022 - Are Attribute Inference Attacks Just Imputation?
Models can expose sensitive information about their training data. In an...

04/14/2023 - Pool Inference Attacks on Local Differential Privacy: Quantifying the Privacy Guarantees of Apple's Count Mean Sketch in Practice
Behavioral data generated by users' devices, ranging from emoji use to p...
