Privacy-Preserving Inference in Machine Learning Services Using Trusted Execution Environments

12/07/2019
by Krishna Giri Narra, et al.

This work presents Origami, which provides privacy-preserving inference for large deep neural network (DNN) models by combining enclave execution and cryptographic blinding with accelerator-based computation. Origami partitions the ML model into multiple partitions. The first partition receives the encrypted user input within an SGX enclave. The enclave decrypts the input and then applies cryptographic blinding, a technique that adds random noise to obfuscate data, to both the input and the model parameters. Origami then sends the obfuscated data to an untrusted GPU/CPU for computation. The blinding and unblinding factors are kept private within the SGX enclave, preventing any adversary from denoising the data while the computation is offloaded. The computed output is returned to the enclave, which recovers the true result from the noisy computation using the privately stored unblinding factors. This process can be repeated for every DNN layer, as in the prior work Slalom; however, the overhead of blinding and unblinding the data then becomes a limiting factor for scalability. Origami instead relies on the empirical observation that the feature maps produced after the first several layers cannot be used, even by a powerful conditional GAN adversary, to reconstruct the input. Hence, Origami dynamically switches to executing the remaining DNN layers directly on an accelerator, without any further cryptographic blinding, while still preserving privacy. We empirically demonstrate that a conditional GAN adversary, even with an unlimited inference budget, cannot reconstruct the input when Origami is used. We implement Origami and demonstrate its performance gains using the VGG-16 and VGG-19 models. Compared to running the entire VGG-19 model within SGX, Origami improves private inference performance by 15.1x, versus 11x with Slalom.
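The blind-offload-unblind round-trip described above is easiest to see for a single linear layer. The sketch below is a minimal, hypothetical NumPy illustration of additive blinding in the style of Slalom-like schemes; the function names, tensor shapes, and the offline-precomputation remark are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of additive blinding for one linear layer (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Model weights for one layer (held inside the enclave after decryption).
W = rng.standard_normal((256, 784))

def enclave_blind(x, rng):
    """Inside SGX: add a random blinding vector r to the input.

    r never leaves the enclave; the untrusted device only ever sees
    x + r, which looks like noise to an observer.
    """
    r = rng.standard_normal(x.shape)
    return x + r, r

def untrusted_linear(W, x_blinded):
    """On the untrusted GPU/CPU: the heavy matrix multiply, on noisy data."""
    return W @ x_blinded

def enclave_unblind(y_blinded, W, r):
    """Back inside SGX: remove the blinding term.

    Since W(x + r) = Wx + Wr, subtracting the privately held Wr recovers
    the true layer output. (Wr can be precomputed offline so the enclave
    avoids the expensive multiply at inference time.)
    """
    return y_blinded - W @ r

x = rng.standard_normal(784)                # decrypted user input
x_blinded, r = enclave_blind(x, rng)        # enclave adds noise
y_blinded = untrusted_linear(W, x_blinded)  # offloaded to GPU/CPU
y = enclave_unblind(y_blinded, W, r)        # enclave recovers Wx

assert np.allclose(y, W @ x)
```

Slalom repeats this round-trip for every layer; Origami's speedup comes from stopping the round-trips after the first few layers, once the intermediate feature maps no longer leak the input, and running the remaining layers directly on the accelerator.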


Related research

- 12/17/2020: Towards Scalable and Privacy-Preserving Deep Neural Network via Algorithmic-Cryptographic Co-design. "Deep Neural Networks (DNNs) have achieved remarkable progress in various..."
- 11/12/2020: Customizing Trusted AI Accelerators for Efficient Privacy-Preserving Machine Learning. "The use of trusted hardware has become a promising solution to enable pr..."
- 02/07/2021: Privacy-preserving Cloud-based DNN Inference. "Deep learning as a service (DLaaS) has been intensively studied to facil..."
- 06/01/2020: DarKnight: A Data Privacy Scheme for Training and Inference of Deep Neural Networks. "Protecting the privacy of input data is of growing importance as machine..."
- 10/04/2021: AsymML: An Asymmetric Decomposition Framework for Privacy-Preserving DNN Training and Inference. "Leveraging parallel hardware (e.g. GPUs) to conduct deep neural network..."
- 07/25/2020: SOTERIA: In Search of Efficient Neural Networks for Private Inference. "ML-as-a-service is gaining popularity where a cloud server hosts a train..."
- 04/13/2023: PowerGAN: A Machine Learning Approach for Power Side-Channel Attack on Compute-in-Memory Accelerators. "Analog compute-in-memory (CIM) accelerators are becoming increasingly po..."
