Model Conversion via Differentially Private Data-Free Distillation

04/25/2023
by Bochao Liu, et al.

While many valuable deep models trained on large-scale data have been released to benefit the artificial intelligence community, they may encounter attacks in deployment that leak the privacy of their training data. In this work, we propose a learning approach termed differentially private data-free distillation (DPDFD) for model conversion: it converts a pretrained model (teacher) into a privacy-preserving counterpart (student) via an intermediate generator, without access to the training data. The learning coordinates three parties in a unified framework. First, massive synthetic data are generated with the generator. Then, the synthetic data are fed into the teacher and the student to compute differentially private gradients by normalizing the gradients and adding noise before performing descent. Finally, the student is updated with these differentially private gradients, and the generator is updated by taking the student as a fixed discriminator, in an alternating manner. Beyond producing a privacy-preserving student, the generator can also generate synthetic data in a differentially private way for other downstream tasks. We theoretically prove that our approach guarantees both differential privacy and convergence. Extensive experiments demonstrate that our approach significantly outperforms other differentially private generative approaches.
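To make the three-party procedure concrete, below is a minimal PyTorch sketch of one DPDFD round under stated assumptions: the function names (dp_student_step, generator_step), the KL-divergence distillation loss, the adversarial generator objective, and the hyperparameters (clip_c, sigma) are all illustrative choices, not the paper's released implementation.

```python
# Minimal sketch of one DPDFD round as described in the abstract.
# Function names, the KL-based losses, and the hyperparameters are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn.functional as F


def dp_student_step(generator, teacher, student, opt_s, batch, z_dim,
                    clip_c=1.0, sigma=1.0, device="cpu"):
    """Update the student with differentially private gradients."""
    z = torch.randn(batch, z_dim, device=device)
    with torch.no_grad():
        x = generator(z)          # synthetic data; no real data is touched
        t_logits = teacher(x)     # fixed pretrained teacher
    per_example = []
    for i in range(batch):
        # Distillation loss on a single synthetic example.
        loss = F.kl_div(F.log_softmax(student(x[i:i + 1]), dim=1),
                        F.softmax(t_logits[i:i + 1], dim=1),
                        reduction="batchmean")
        grads = torch.autograd.grad(loss, list(student.parameters()))
        flat = torch.cat([g.flatten() for g in grads])
        # Normalize the per-example gradient to bound its sensitivity.
        per_example.append(flat * (clip_c / (flat.norm() + 1e-12)))
    g_sum = torch.stack(per_example).sum(dim=0)
    # Add Gaussian noise calibrated to the bounded sensitivity.
    g_sum += sigma * clip_c * torch.randn_like(g_sum)
    g_dp = g_sum / batch
    # Write the privatized gradient back and take a descent step.
    opt_s.zero_grad()
    offset = 0
    for p in student.parameters():
        n = p.numel()
        p.grad = g_dp[offset:offset + n].view_as(p)
        offset += n
    opt_s.step()


def generator_step(generator, teacher, student, opt_g, batch, z_dim,
                   device="cpu"):
    """Update the generator with the student as a fixed discriminator."""
    z = torch.randn(batch, z_dim, device=device)
    x = generator(z)
    # Assumed adversarial objective: maximize teacher-student discrepancy.
    # Only opt_g steps, so teacher and student parameters stay fixed.
    loss = -F.kl_div(F.log_softmax(student(x), dim=1),
                     F.softmax(teacher(x), dim=1),
                     reduction="batchmean")
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()
```

Normalizing each per-example gradient bounds any single synthetic sample's influence on the update, so the Gaussian noise added to the aggregate is what privatizes the student; alternating the two steps realizes the student-as-fixed-discriminator game described above.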
