Securing Input Data of Deep Learning Inference Systems via Partitioned Enclave Execution

07/03/2018
by Zhongshu Gu, et al.

Deep learning systems have been widely deployed as backend engines of artificial intelligence (AI) services because of their near-human performance on cognitive tasks. However, end users often have concerns about the confidentiality of the input data they provision, even to reputable AI service providers. Accidental disclosure of sensitive user data can occur through security breaches, exploited vulnerabilities, neglect, or insiders. In this paper, we systematically investigate the potential information exposure in deep-learning-based AI inference systems. Based on our observations, we develop DeepEnclave, a privacy-enhancing system that mitigates sensitive information disclosure in deep learning inference pipelines. The key innovation is to partition deep learning models and leverage secure enclave techniques on cloud infrastructures to cryptographically protect the confidentiality and integrity of user inputs. We formulate the information exposure problem as a reconstruction privacy attack and quantify the adversary's capabilities under different attack strategies. Our comprehensive security analysis and performance measurements can guide end users in choosing how to partition their deep neural networks, achieving a maximum privacy guarantee with acceptable performance overhead.
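The partitioning idea in the abstract can be sketched in a few lines: the first layers of the network (the part that sees the raw input) would run inside a secure enclave, while the remaining layers run on untrusted cloud hardware and only ever observe an intermediate representation. The sketch below is illustrative only and assumes a toy fully connected model; the names `run_layers` and `partitioned_inference` are hypothetical and not from the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0)

# A toy 4-layer fully connected model (weights are random placeholders).
weights = [rng.standard_normal((8, 8)) for _ in range(4)]

def run_layers(x, layers):
    """Apply a sequence of dense+ReLU layers to input x."""
    for w in layers:
        x = relu(x @ w)
    return x

def partitioned_inference(x, k):
    """Split the model at layer k, as DeepEnclave-style partitioning would.

    Conceptually, the first k layers run inside an enclave, so the raw
    input never leaves protected memory; the remaining layers run
    outside and receive only the intermediate activation.
    """
    hidden = run_layers(x, weights[:k])   # enclave side (confidential input)
    return run_layers(hidden, weights[k:])  # untrusted side (intermediate only)

x = rng.standard_normal(8)
# Partitioned execution is functionally equivalent to the full model.
assert np.allclose(run_layers(x, weights), partitioned_inference(x, k=2))
```

The choice of the split point `k` is exactly the trade-off the paper analyzes: a deeper split leaks less about the input through the intermediate activation but places more computation inside the enclave, increasing overhead.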

Related research

09/17/2019 — Towards Efficient and Secure Delivery of Data for Deep Learning with Privacy-Preserving
Privacy recently emerges as a severe concern in deep learning, that is, ...

12/20/2020 — DISCO: Dynamic and Invariant Sensitive Channel Obfuscation for deep neural networks
Recent deep learning models have shown remarkable performance in image c...

07/31/2018 — Security and Privacy Issues in Deep Learning
With the development of machine learning, expectations for artificial in...

04/12/2019 — Distributed Layer-Partitioned Training for Privacy-Preserved Deep Learning
Deep Learning techniques have achieved remarkable results in many domain...

11/01/2022 — Strategies for Optimizing End-to-End Artificial Intelligence Pipelines on Intel Xeon Processors
End-to-end (E2E) artificial intelligence (AI) pipelines are composed of ...

10/02/2022 — Automated Security Analysis of Exposure Notification Systems
We present the first formal analysis and comparison of the security of t...

12/07/2018 — Privacy Partitioning: Protecting User Data During the Deep Learning Inference Phase
We present a practical method for protecting data during the inference p...
