Private Model Compression via Knowledge Distillation

11/13/2018
by Ji Wang, et al.

The soaring demand for intelligent mobile applications calls for deploying powerful deep neural networks (DNNs) on mobile devices. However, the outstanding performance of DNNs notoriously relies on increasingly complex models, which in turn incur computational expense far surpassing mobile devices' capacity. Worse, app service providers need to collect and utilize a large volume of users' data, which contain sensitive information, to build sophisticated DNN models. Directly deploying these models on public mobile devices presents prohibitive privacy risks. To benefit from on-device deep learning without the capacity and privacy concerns, we design a private model compression framework, RONA. Following the knowledge distillation paradigm, we jointly use hint learning, distillation learning, and self learning to train a compact and fast neural network. The knowledge distilled from the cumbersome model is adaptively bounded and carefully perturbed to enforce differential privacy. We further propose an elegant query sample selection method to reduce the number of queries and control the privacy loss. A series of empirical evaluations, as well as an implementation on an Android mobile device, show that RONA can not only compress cumbersome models efficiently but also provide a strong privacy guarantee. For example, on SVHN, when a meaningful (9.83, 10^-6)-differential privacy is guaranteed, the compact model trained by RONA obtains a 20× compression ratio and 19× speed-up with merely 0.97% accuracy loss.
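The core privacy mechanism the abstract describes can be illustrated with a minimal NumPy sketch: clip (bound) the teacher's logits, add Gaussian noise to enforce differential privacy, and train the student against the resulting noisy soft labels. This is not RONA's actual algorithm; the function names, the fixed clipping bound, and the noise scale are illustrative assumptions (RONA bounds the knowledge adaptively and calibrates the noise to a target privacy budget).

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T gives softer labels."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def privatize_teacher_logits(logits, bound, sigma, rng):
    """Clip logits to [-bound, bound], then add Gaussian noise.

    Bounding limits each sample's sensitivity; the Gaussian noise
    (scale proportional to the bound) is what yields a differential
    privacy guarantee. Both parameters here are illustrative.
    """
    clipped = np.clip(logits, -bound, bound)
    return clipped + rng.normal(0.0, sigma * bound, size=logits.shape)

def distillation_loss(student_logits, noisy_teacher_logits, T=4.0):
    """Cross-entropy of the student's soft predictions against the
    noisy teacher soft labels (the distillation-learning term)."""
    p_teacher = softmax(noisy_teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T) + 1e-12)
    return float(-np.mean(np.sum(p_teacher * log_p_student, axis=-1)))
```

In a full training loop this loss would be combined with a hint-learning term (matching intermediate layers) and a self-learning term on the student's own labeled data, as the abstract outlines.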


Related research

09/10/2018 · Not Just Privacy: Improving Performance of Private Deep Learning in Mobile Cloud
The increasing demand for on-device deep learning services calls for a h...

02/28/2020 · An Efficient Method of Training Small Models for Regression Problems with Knowledge Distillation
Compressing deep neural network (DNN) models becomes a very important an...

11/28/2019 · Data-Driven Compression of Convolutional Neural Networks
Deploying trained convolutional neural networks (CNNs) to mobile devices...

06/23/2022 · Device-centric Federated Analytics At Ease
Nowadays, high-volume and privacy-sensitive data are generated by mobile...

07/03/2021 · Pool of Experts: Realtime Querying Specialized Knowledge in Massive Neural Networks
In spite of the great success of deep learning technologies, training an...

07/27/2022 · Fine-grained Private Knowledge Distillation
Knowledge distillation has emerged as a scalable and effective way for p...

01/10/2021 · Adversarially robust and explainable model compression with on-device personalization for NLP applications
On-device Deep Neural Networks (DNNs) have recently gained more attentio...
