Shredder: Learning Noise to Protect Privacy with Partial DNN Inference on the Edge

A wide variety of DNN applications increasingly rely on the cloud to perform their heavy computation. This trend toward cloud-hosted inference services raises serious privacy concerns, since it requires sending private and privileged data over the network to remote servers, exposing it to the service provider. Even if the provider is trusted, the data can still be vulnerable over communication channels or via side-channel attacks [1,2] at the provider. To that end, this paper aims to reduce the information content of the communicated data without compromising the cloud service's ability to provide DNN inference with acceptably high accuracy. The paper presents an end-to-end framework, called Shredder, that, without altering the topology or the weights of a pre-trained network, learns an additive noise distribution that significantly reduces the information content of the communicated data while maintaining inference accuracy. Shredder learns the additive noise by casting it as a tensor of trainable parameters, which enables us to devise a loss function that strikes a balance between accuracy and information degradation. The loss function exposes a knob for a disciplined and controlled asymmetric trade-off between privacy and accuracy. Because the DNN is kept intact, Shredder enables inference on noisy data without any need to update the model or the cloud. Experimentation with real-world DNNs shows that Shredder reduces the mutual information between the input and the data communicated to the cloud by 70.2% while sacrificing only a marginal loss in accuracy.
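Although the paper's implementation is not reproduced here, the mechanism the abstract describes, learning an additive noise tensor applied to an intermediate activation while the pre-trained network stays frozen, can be sketched in a few lines of PyTorch. Everything below is an illustrative assumption rather than Shredder's actual code: the edge/cloud split point (ResNet-18 after `layer2`), the information-degradation proxy, and the `lam` knob are stand-ins for the paper's loss formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

# Frozen pre-trained network: Shredder never alters its weights.
model = torchvision.models.resnet18(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)

# Hypothetical edge/cloud split after layer2 (an assumption for illustration).
edge = nn.Sequential(model.conv1, model.bn1, model.relu, model.maxpool,
                     model.layer1, model.layer2)
cloud = nn.Sequential(model.layer3, model.layer4, model.avgpool,
                      nn.Flatten(), model.fc)

# The additive noise is the only trainable tensor.
# Its shape matches layer2's output for 224x224 inputs: (1, 128, 28, 28).
noise = torch.zeros(1, 128, 28, 28, requires_grad=True)
opt = torch.optim.Adam([noise], lr=1e-2)
lam = 0.5  # privacy/accuracy knob (assumed form of the trade-off)

# Toy stand-in for a real DataLoader.
loader = [(torch.randn(8, 3, 224, 224), torch.randint(0, 1000, (8,)))]

for images, labels in loader:
    act = edge(images)      # runs on the edge device
    noisy = act + noise     # the noisy tensor is what gets communicated
    logits = cloud(noisy)   # runs on the untrusted cloud
    # Accuracy term: preserve the frozen model's predictions.
    acc_loss = F.cross_entropy(logits, labels)
    # Information-degradation term (an illustrative proxy, not the paper's
    # measure): reward larger perturbations of the communicated tensor.
    info_loss = -(noise ** 2).mean()
    loss = acc_loss + lam * info_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the paper the noise is learned as a distribution and sampled at inference time; here a single deterministic tensor stands in for it, and `lam` plays the role of the knob the abstract mentions for tilting the asymmetric trade-off toward privacy or accuracy.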


