DarKnight: A Data Privacy Scheme for Training and Inference of Deep Neural Networks

06/01/2020
by Hanieh Hashemi, et al.

Protecting the privacy of input data is of growing importance as machine learning reaches new application domains. Cloud companies provide machine learning as a service to make it easier for data holders to create customized training and inference paradigms. In this paper, we provide a unified training and inference framework for large DNNs that protects input privacy. Our approach, called DarKnight, relies on cooperative execution between GPUs and a trusted execution environment (TEE) to train complex models. This cooperative execution allows DarKnight to exploit the computational power of GPUs to perform linear operations while exploiting the TEE to protect input privacy. In particular, DarKnight uses a novel encoding that linearly combines multiple inputs along with an additive stream cipher noise to obfuscate the inputs. The proposed encoding process allows DarKnight to efficiently decode the computed data even as the model parameters continuously evolve during the backward propagation of DNN training. DarKnight further simplifies the encoding process for inference, where the model parameters are unchanged. Unlike prior approaches, DarKnight does not need to store model parameters within the TEE, thereby avoiding the TEE's limited secure memory. By encoding and decoding multiple inputs during each iteration, DarKnight is well suited to the current generation of batch training. We implement DarKnight on an Intel SGX enclave augmented with a GPU to demonstrate our new training capabilities.
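To make the encoding idea concrete, here is a minimal NumPy sketch of the general pattern the abstract describes: the TEE linearly combines a batch of private inputs with a secret noise vector, the untrusted accelerator applies the linear layer to the coded inputs only, and the TEE recovers the true outputs by inverting the combination (linearity of the layer makes this exact). The specific coding matrix, the single-noise-row simplification, and all variable names are our illustrative assumptions, not DarKnight's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

k, d, m = 3, 4, 2                   # batch size, input dim, output dim
W = rng.normal(size=(m, d))         # linear-layer weights (live on the GPU)
X = rng.normal(size=(k, d))         # private inputs (inside the TEE)
r = rng.normal(size=(1, d))         # secret additive blinding noise

# TEE: encode k inputs plus the noise row into k+1 coded inputs
# via a secret invertible matrix A (random Gaussian, invertible a.s.).
A = rng.normal(size=(k + 1, k + 1))
coded = A @ np.vstack([X, r])       # all the GPU ever observes

# GPU: performs the heavy linear operation on coded data only.
gpu_out = coded @ W.T               # = A @ [X; r] @ W.T

# TEE: undo the encoding; the last row corresponds to the noise
# term W @ r and is simply discarded.
decoded = np.linalg.solve(A, gpu_out)[:k]

assert np.allclose(decoded, X @ W.T)
```

Because `W` only ever multiplies coded data, the weights never need to reside in the enclave's limited secure memory, and the same decoding works during training even as `W` changes between iterations.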


Related research

06/30/2022
DarKnight: An Accelerated Framework for Privacy and Integrity Preserving Deep Learning Using Trusted Hardware
Privacy and security-related concerns are growing as machine learning re...

05/01/2021
Privacy and Integrity Preserving Training Using Trusted Hardware
Privacy and security-related concerns are growing as machine learning re...

12/07/2019
Privacy-Preserving Inference in Machine Learning Services Using Trusted Execution Environments
This work presents Origami, which provides privacy-preserving inference ...

10/04/2021
AsymML: An Asymmetric Decomposition Framework for Privacy-Preserving DNN Training and Inference
Leveraging parallel hardware (e.g. GPUs) to conduct deep neural network ...

09/09/2020
Privacy-Preserving Machine Learning in Untrusted Clouds Made Simple
We present a practical framework to deploy privacy-preserving machine le...

05/26/2019
Shredder: Learning Noise to Protect Privacy with Partial DNN Inference on the Edge
A wide variety of DNN applications increasingly rely on the cloud to per...
