Private Knowledge Transfer via Model Distillation with Generative Adversarial Networks

04/05/2020
by Di Gao, et al.

The deployment of deep learning applications must address growing privacy concerns when private and sensitive data are used for training. A conventional deep learning model is prone to privacy attacks that can recover sensitive information about individuals from either the model parameters or queries to the target model. Recently, differential privacy, which offers provable privacy guarantees, has been proposed for training neural networks in a privacy-preserving manner to protect the training data. However, many approaches provide worst-case privacy guarantees for model publishing, inevitably impairing the accuracy of the trained models. In this paper, we present a novel private knowledge transfer strategy in which a private teacher trained on sensitive data is never publicly accessible but instead teaches a student that is publicly released. In particular, we propose a three-player (teacher-student-discriminator) learning framework to achieve a trade-off between utility and privacy: the student acquires distilled knowledge from the teacher and is trained jointly with the discriminator to generate outputs similar to the teacher's. We then integrate a differential privacy protection mechanism into the learning procedure, which enables a rigorous privacy budget for training. The framework ultimately allows the student to be trained with only unlabelled public data and very few epochs, preventing exposure of the sensitive training data while ensuring model utility under a modest privacy budget. Experiments on the MNIST, SVHN, and CIFAR-10 datasets show that our students incur accuracy losses w.r.t. their teachers of 0.89% under privacy budgets of (5.02, 10^-6) and (8.81, 10^-6). Compared with existing works, the proposed approach achieves a 5-82% improvement in accuracy loss.
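The three-player objective described above can be sketched in a few lines. The snippet below is a minimal, hypothetical NumPy illustration (not the paper's actual implementation): the teacher's logits are perturbed with Gaussian noise as a stand-in for a differential privacy mechanism, and the student's loss combines a temperature-softened distillation term with an adversarial term from discriminator scores. All function names, the noise scale `sigma`, and the weighting `alpha` are illustrative assumptions.

```python
import numpy as np

def softmax(logits, temp=1.0):
    """Temperature-softened softmax over the last axis."""
    z = logits / temp
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dp_teacher_outputs(teacher_logits, sigma, rng):
    """Sketch of a Gaussian mechanism: perturb teacher logits before
    they are exposed to the student (sigma set by the privacy budget)."""
    return teacher_logits + rng.normal(0.0, sigma, size=teacher_logits.shape)

def student_loss(student_logits, noisy_teacher_logits, disc_scores,
                 alpha=0.5, temp=2.0):
    """Combine a distillation term with an adversarial term.

    disc_scores: discriminator's probability that the student's outputs
    are teacher-like (the student wants these close to 1).
    """
    # Distillation: cross-entropy between softened teacher and student outputs.
    p_teacher = softmax(noisy_teacher_logits, temp)
    p_student = softmax(student_logits, temp)
    kd = -(p_teacher * np.log(p_student + 1e-12)).sum(axis=-1).mean()
    # Adversarial: student is rewarded when the discriminator is fooled.
    adv = -np.log(disc_scores + 1e-12).mean()
    return alpha * kd + (1 - alpha) * adv

# Illustrative usage on a single 3-class example.
rng = np.random.default_rng(0)
teacher = np.array([[2.0, 0.5, -1.0]])
student = np.array([[1.5, 0.2, -0.8]])
noisy = dp_teacher_outputs(teacher, sigma=0.1, rng=rng)
loss = student_loss(student, noisy, disc_scores=np.array([0.7]))
```

In a full training loop the discriminator would be updated in alternation with the student, and the cumulative privacy cost of the noisy teacher queries would be tracked against the (epsilon, delta) budget.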


