Releasing Graph Neural Networks with Differential Privacy Guarantees

09/18/2021
by Iyiola E. Olatunji, et al.

With the increasing popularity of Graph Neural Networks (GNNs) in sensitive applications such as healthcare and medicine, concerns have been raised over the privacy of trained GNNs. Notably, GNNs are vulnerable to privacy attacks, such as membership inference, even when only black-box access to the trained model is granted. To build defenses, differential privacy has emerged as a mechanism for protecting sensitive data in training sets. Following the strategy of Private Aggregation of Teacher Ensembles (PATE), recent methods leverage a large ensemble of teacher models: the teachers are trained on disjoint subsets of private data and transfer knowledge to a student model, which is then released with privacy guarantees. However, splitting graph data into many disjoint training sets may destroy structural information and adversely affect accuracy. We propose a new graph-specific scheme for releasing a student GNN that avoids splitting the private training data altogether. The student GNN is trained on public data, partly labeled privately by teacher GNN models trained exclusively for each query node. We theoretically analyze our approach in the Rényi differential privacy framework and provide privacy guarantees. In addition, we show strong experimental performance of our method compared to several baselines, including PATE adapted to graph-structured data. Our anonymized code is available.
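
The privacy mechanism this family of methods shares with PATE is noisy plurality voting: teacher predictions for a public query node are tallied, calibrated noise is added to the vote counts, and the noisy winner becomes the student's training label. Below is a minimal sketch of that aggregation step, assuming Gaussian NoisyMax as in Scalable PATE; the function name noisy_label and the noise scale sigma are illustrative, not taken from the paper.

import numpy as np

def noisy_label(teacher_votes, num_classes, sigma, rng=None):
    """Privately aggregate teacher votes for one public query node.

    teacher_votes: per-teacher predicted class indices for this node.
    num_classes:   size of the label space.
    sigma:         Gaussian noise scale (illustrative privacy knob);
                   larger sigma means stronger privacy, noisier labels.
    """
    rng = rng or np.random.default_rng()
    counts = np.bincount(np.asarray(teacher_votes), minlength=num_classes)
    # Gaussian NoisyMax: perturb each vote count, then take the argmax.
    noisy_counts = counts + rng.normal(0.0, sigma, size=num_classes)
    return int(np.argmax(noisy_counts))

# Illustrative usage: 20 teachers vote on one query node's label.
votes = [2] * 14 + [0] * 4 + [1] * 2
print(noisy_label(votes, num_classes=3, sigma=4.0))

Gaussian noise is a natural choice here because it composes tightly across repeated queries under Rényi differential privacy, the framework used for the paper's analysis.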

Related research

04/18/2023 · ProGAP: Progressive Graph Neural Networks with Differential Privacy Guarantees
Graph Neural Networks (GNNs) have become a popular tool for learning on ...

04/05/2020 · Private Knowledge Transfer via Model Distillation with Generative Adversarial Networks
The deployment of deep learning applications has to address the growing ...

08/09/2023 · Differentially Private Graph Neural Network with Importance-Grained Noise Adaption
Graph Neural Networks (GNNs) with differential privacy have been propose...

10/18/2016 · Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data
Some machine learning applications involve training data that is sensiti...

02/24/2018 · Scalable Private Learning with PATE
The rapid adoption of machine learning has increased concerns about the ...

03/01/2020 · Differentially Private Deep Learning with Smooth Sensitivity
Ensuring the privacy of sensitive data used to train modern machine lear...

09/06/2023 · Blink: Link Local Differential Privacy in Graph Neural Networks via Bayesian Estimation
Graph neural networks (GNNs) have gained an increasing amount of popular...
