AsymML: An Asymmetric Decomposition Framework for Privacy-Preserving DNN Training and Inference

by Yue Niu et al.

Leveraging parallel hardware (e.g., GPUs) for deep neural network (DNN) training and inference significantly speeds up computation, but raises several data privacy concerns. Trusted execution environments (TEEs) have emerged as a promising solution for privacy-preserving inference and training. TEEs, however, have limited memory and compute resources, which makes them uncompetitive with untrusted parallel hardware in performance. To mitigate this trade-off between privacy and computing performance, we propose an asymmetric model decomposition framework, AsymML, that (1) accelerates training/inference using parallel hardware and (2) preserves privacy using TEEs. By exploiting the low-rank characteristics of data and intermediate features, AsymML asymmetrically splits a DNN model into trusted and untrusted parts: the trusted part retains the privacy-sensitive information but incurs small compute/memory costs, while the untrusted part is computationally intensive but not privacy-sensitive. Computing performance and privacy are guaranteed by delegating the untrusted part to GPUs and the trusted part to TEEs, respectively. Furthermore, we present a theoretical rank-bound analysis showing that low-rank characteristics are preserved in intermediate features, which guarantees the efficiency of AsymML. Extensive evaluations on DNN models show that AsymML delivers an 11.2× speedup in inference and a 7.6× speedup in training compared to TEE-only execution.
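The asymmetric split described above can be sketched with a truncated SVD: keep the top singular components as a small, privacy-sensitive "trusted" part, and leave the residual as the compute-heavy "untrusted" part. This is a hypothetical illustration of the low-rank decomposition idea, not AsymML's actual implementation; the function name and rank parameter are assumptions.

```python
import numpy as np

def asymmetric_split(x, rank):
    """Split a feature matrix into a low-rank part and a residual via truncated SVD.

    Hypothetical sketch of the decomposition idea in the abstract: the low-rank
    part (small, information-dense) would run inside the TEE, while the residual
    (large, less sensitive) would be offloaded to a GPU.
    """
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    # Low-rank "trusted" part: top-`rank` singular components, which capture
    # most of the matrix's energy and thus most of the sensitive information.
    trusted = (u[:, :rank] * s[:rank]) @ vt[:rank, :]
    # Residual "untrusted" part: the remainder, which dominates compute cost
    # but (by the low-rank assumption) carries little sensitive content.
    untrusted = x - trusted
    return trusted, untrusted
```

The two parts sum exactly back to the original feature map (`trusted + untrusted == x`), so the full computation can be reassembled after the two parts are processed on their respective devices.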


Related papers:

- EnclaveTree: Privacy-Preserving Data Stream Training and Inference Using TEE
- HySec-Flow: Privacy-Preserving Genomic Computing with SGX-based Big-Data Analytics Framework
- DarKnight: An Accelerated Framework for Privacy and Integrity Preserving Deep Learning Using Trusted Hardware
- PPFL: Privacy-Preserving Federated Learning with Trusted Execution Environments
- A Garbled Circuit Accelerator for Arbitrary, Fast Privacy-Preserving Computation
- DarKnight: A Data Privacy Scheme for Training and Inference of Deep Neural Networks
- Privacy-Preserving Inference in Machine Learning Services Using Trusted Execution Environments (Origami)
