AsymML: An Asymmetric Decomposition Framework for Privacy-Preserving DNN Training and Inference

10/04/2021, by Yue Niu, et al.

Leveraging parallel hardware (e.g., GPUs) for deep neural network (DNN) training and inference significantly speeds up computation but raises several data privacy concerns. Trusted execution environments (TEEs) have emerged as a promising solution for privacy-preserving inference and training; however, their limited memory and compute resources leave them far behind untrusted parallel hardware in performance. To mitigate this trade-off between privacy and computing performance, we propose an asymmetric model-decomposition framework, AsymML, that (1) accelerates training/inference using parallel hardware and (2) preserves privacy using TEEs. By exploiting the low-rank characteristics of data and intermediate features, AsymML asymmetrically splits a DNN model into a trusted part and an untrusted part: the trusted part handles privacy-sensitive data but incurs small compute/memory costs, while the untrusted part is computationally intensive but not privacy-sensitive. Privacy and computing performance are guaranteed by delegating the trusted part to TEEs and the untrusted part to GPUs, respectively. Furthermore, we present a theoretical rank-bound analysis showing that low-rank characteristics are preserved in intermediate features, which guarantees the efficiency of AsymML. Extensive evaluations on DNN models show that AsymML delivers an 11.2× speedup in inference and a 7.6× speedup in training compared to TEE-only execution.
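The abstract does not spell out the splitting mechanism beyond "exploiting the low-rank characteristics". The sketch below illustrates one natural reading: a truncated-SVD split of a feature matrix into a small low-rank component and a full-size residual. The function name `asymmetric_split`, the use of SVD, and the rank-8 choice are assumptions made for illustration, not details confirmed by the text.

```python
import numpy as np

def asymmetric_split(x: np.ndarray, rank: int):
    """Split a feature matrix into a small low-rank (trusted) part
    and a large residual (untrusted) part via truncated SVD."""
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    # Rank-r component: concentrates most of the privacy-sensitive
    # signal, yet is small enough to process inside a TEE.
    trusted = (u[:, :rank] * s[:rank]) @ vt[:rank, :]
    # Residual: full-size and compute-heavy, but (by assumption)
    # carries little sensitive information, so it can be offloaded
    # to an untrusted GPU.
    untrusted = x - trusted
    return trusted, untrusted

# Example: split a 512x512 intermediate feature map at rank 8.
x = np.random.randn(512, 512).astype(np.float32)
t, r = asymmetric_split(x, rank=8)
assert np.allclose(t + r, x, atol=1e-4)  # exact additive decomposition
```

Under this reading, the rank-r term would be processed inside the TEE while the residual is delegated to the GPU, matching the trusted/untrusted split described above.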

