Securing Input Data of Deep Learning Inference Systems via Partitioned Enclave Execution

07/03/2018, by Zhongshu Gu et al.

Deep learning systems have been widely deployed as backend engines of artificial intelligence (AI) services for their approaching-human performance in cognitive tasks. However, end users often have concerns about the confidentiality of the input data they provision, even to reputable AI service providers. Accidental disclosures of sensitive user data might unexpectedly happen due to security breaches, exploited vulnerabilities, neglect, or insiders. In this paper, we systematically investigate the potential information exposure in deep learning based AI inference systems. Based on our observations, we develop DeepEnclave, a privacy-enhancing system to mitigate sensitive information disclosure in deep learning inference pipelines. The key innovation is to partition deep learning models and leverage secure enclave techniques on cloud infrastructures to cryptographically protect the confidentiality and integrity of user inputs. We formulate the information exposure problem as a reconstruction privacy attack and quantify the adversary's capabilities under different attack strategies. Our comprehensive security analysis and performance measurements can serve as a guideline for end users to decide how to partition their deep neural networks and thus achieve the maximum privacy guarantee with acceptable performance overhead.


1 Introduction

The recent breakthroughs of deep learning (DL) are catalyzed by unprecedented amounts of data, innovations in learning methodologies, and the emergence of learning-accelerated hardware. DL-based approaches can achieve or surpass human-level performance on computer vision [37, 30, 59], speech recognition [28, 31, 25], machine translation [60], game playing [57, 58], etc. Today major cloud providers offer artificial intelligence (AI) services, e.g., Amazon AI [1], Google Cloud AI [4], IBM Watson [6], and Microsoft Azure [8], powered by DL backend engines to help end users augment their applications and edge devices with AI capabilities. End users of AI services can be individuals or collective entities that represent independent software vendors, enterprises, health facilities, educational institutions, governments, etc.

AI cloud providers generally offer two independent DL services, i.e., training and inference. End users can build customized DL models from scratch by feeding training services with their own training data. In case end users do not possess enough training data, they can also leverage transfer learning techniques [49] to repurpose and retrain existing models targeting similar tasks. After obtaining their trained models, end users can upload the models, which are in the form of hyperparameters and weights of deep neural networks (DNNs), to inference services (which might be hosted by AI service providers different from those of the training services) to bootstrap their AI cloud APIs. These APIs can be further integrated into mobile or desktop applications. At runtime, end users can invoke the remote APIs with their input data and receive prediction results from the inference services.

Although end users always expect that service providers should be trustworthy and dependable, they may still have concerns about the privacy of their input data. Accidental disclosures of confidential data might unexpectedly occur due to malicious attacks, misoperations by negligent system administrators, or data thefts conducted by insiders. Adversaries with escalated privileges may be able to extract sensitive data from disks (data-at-rest) or from main memory (runtime data) [51, 53, 52]. We have observed numerous data breach events [5, 9] in recent years, and similar incidents can also happen to user input data for AI cloud services. In addition, deep learning is often differentiated by processing raw input data, such as images, audio, and video, as opposed to hand-crafted features. This raises additional privacy concerns if the input data are leaked or compromised.

To address the data privacy problem on AI clouds, researchers have proposed approaches based on cryptographic primitives [22, 46, 42] to enable privacy-preserving predictions. Although their performance has improved steadily, such approaches are still far from practical for meeting end users' requirements. Another line of work protects data privacy via distributed machine learning [55, 48, 40], which delegates part of the deep learning functionality to the client side and transfers masked feature representations to the cloud. However, these approaches complicate the program logic and consume more network bandwidth on client devices, which are not designed to handle compute-intensive workloads. Ohrimenko et al. [47] proposed data-oblivious multi-party machine learning algorithms (including neural networks) and leveraged Intel Software Guard Extensions (SGX) to make them privacy-preserving. However, the performance constraints of SGX (inadequate acceleration for matrix computation and floating-point arithmetic) and its memory capacity constraint (the protected physical memory size for Intel Skylake CPUs is 128MB¹) restrict the adaptability of their approach to deeper neural networks trained on large-scale datasets, e.g., ImageNet [17]. Recently, we are also aware of ongoing research efforts, such as Chiron [34] and Slalom [62], which intend to address a similar data privacy protection problem in deep learning pipelines by leveraging SGX. Different from prior enclave-based efforts, our approach exploits the intrinsic properties of deep learning model structures as the partitioning principle to achieve an optimal security-performance balance.

¹With memory paging support in the Linux SGX kernel driver, the size of enclave memory can be expanded via memory swapping, but swapping on encrypted memory significantly affects performance.

In this paper, we present DeepEnclave, a privacy-enhancing deep learning inference system to mitigate information exposure of sensitive input data in the inference pipeline. The key innovation of DeepEnclave is to partition each deep learning model into a FrontNet and a BackNet by exploiting the layered structure of neural networks. End users are allowed to submit encrypted inputs and an encrypted FrontNet to our system. We leverage Intel SGX on cloud infrastructures to enforce enclaved execution of the FrontNet and cryptographically protect the confidentiality and integrity of user inputs. Meanwhile, the inference computation of the BackNet runs outside secure enclaves and can still benefit from performance improvements if cloud machines are equipped with DL-accelerated chips.

The key challenge of this approach is to determine the optimal model-specific partitioning points that balance privacy protection and performance requirements. We formulate the information exposure problem as a reconstruction privacy attack and quantify the adversary's capabilities under different attack strategies and prior knowledge. We develop a neural network assessment framework to quantify the information leakage of FrontNet outputs with different numbers of layers and to automate the process of finding optimal partitioning points for different neural network architectures. We conduct our security measurement on three ImageNet-level deep neural networks with different network depths and architectural complexity, i.e., the Darknet Reference Model (17 layers) [2], the Extraction Model (28 layers) [3], and the DenseNet Model (306 layers) [33]. In addition, by protecting the confidentiality of both user inputs and FrontNet models, we ensure that our system can effectively defend against state-of-the-art input reconstruction techniques adaptable to deep neural networks. Our comprehensive security and performance analysis can be used as a guideline for end users to determine their own principle for partitioning DNNs and thus achieve a maximized privacy guarantee with acceptable performance overhead.

2 Motivation

Figure 1: The Information Exposure in a Deep Learning Inference Pipeline

By comparing the inputs and outputs of a deep learning based image classification system, we find that user input data might be unnecessarily exposed in deep learning inference pipelines. We give a motivating example in Figure 1 to empirically demonstrate such information exposure. We input a picture to a 1000-class image classification system with a Convolutional Neural Network (ConvNet) model trained on the ImageNet dataset. The output of the system is the Top-5 prediction class scores for the picture. As the idiom says, "a picture is worth a thousand words," and indeed we can learn rich information from this input picture. It is obvious that this picture was taken in Rio de Janeiro (1.a) by identifying the Sugarloaf Mountain (1.b). Anyone who is familiar with the geographical layout of Rio de Janeiro can also pinpoint Botafogo (1.c), Copacabana (1.d), and Guanabara Bay (1.e). In addition, we can further infer that the picture must have been taken at the Cristo Redentor (1.g), because this is the only place with this specific view of the Sugarloaf Mountain. Based on the position (west side) of the sunlight, we can deduce that the time should be around sunset (2.a). Furthermore, if we check the EXIF metadata of this picture, we can verify our previous hypotheses with the GPS coordinates (1.f) and the DateTimeOriginal (2.b). We can also obtain the device type (3.a) and its configuration (3.b, 3.c) by reading the metadata. Combined with other contextual information, it is possible to reveal the identity of the person who took this picture and recover his or her travel history.

On the contrary, the output of the image classification system only reveals limited information, indicating that this picture can be classified as a promontory, volcano, valley, seashore, or lakeshore, with different confidence values. We consider that there is a privacy gap that represents the information discrepancy between the inputs and outputs at the two ends of the deep learning inference pipeline. This gap may disclose users' sensitive information to AI cloud providers, and the information may further be leaked to malicious adversaries.

The computation of a deep neural network distills feature representations layer by layer. The process can be formulated as a composite transformation function that maps raw inputs to outputs with a specified target. Each hidden layer performs its own transformation as a sub-function and generates an intermediate representation (IR) as output. Conceptually, the transformation at each hidden layer contributes, more or less, to making the intermediate representations converge towards the final outputs. Our work is inspired by the research efforts on understanding the internals of deep neural networks [68, 67, 56]. As indicated by Zeiler and Fergus [68], for an image classification ConvNet, the shallow layers respond more to low-level photographic information, such as edges, corners, and contours of the original inputs, while deep layers represent more abstract and class-specific information related to the final outputs. From the privacy perspective, low-level photographic information reveals more precise and specific information about the original inputs, whereas high-level abstract representations generated in deep layers contain less private information. Therefore, to protect the sensitive information within inputs, we can choose to enclose the first several layers of a DNN in an isolated execution environment that is kept confidential and tamper-resistant to external computing stacks.

3 Problem Definition

Based on the layered structure of deep neural networks, we partition each network into two independent subnet models, i.e., a FrontNet and a BackNet. Mathematically, a deep neural network can be defined as a function $F$ that maps the input $x$ to the output $y$, i.e., $y = F(x; \theta)$, where $\theta$ stands for the parameters that are learned in the training phase. The function $F$ is composed of $n$ sub-functions $f_i$ (assuming the network has $n$ layers), where $i \in \{1, \dots, n\}$ and $f_i$ maps its input $x_{i-1}$ to its output $x_i$ on Layer $i$. These sub-functions are connected in a chain, thus $F = f_n \circ f_{n-1} \circ \dots \circ f_1$. After partitioning the neural network at the $\lambda$-th layer, where $1 \le \lambda < n$, the function for the FrontNet can be represented as $F_F = f_\lambda \circ \dots \circ f_1 : \mathcal{X} \to \mathcal{IR}$, where $\mathcal{X}$ is the input space applicable to a specific deep neural network and $\mathcal{IR}$ is the output space of intermediate representations. For an input $x \in \mathcal{X}$, the output $\mathrm{IR} = F_F(x)$ is the intermediate representation computed out of the FrontNet. The function for the BackNet is $F_B = f_n \circ \dots \circ f_{\lambda+1} : \mathcal{IR} \to \mathcal{Y}$, which takes $\mathrm{IR}$ as its input.

We assume that adversaries might have some background knowledge $B$ for reconstructing the sensitive original input $x$. The background knowledge includes: (1) the domain knowledge of user inputs, e.g., input file types and natural image priors [43]; (2) the knowledge of the distribution of all bits of $x$, which can be described by a probability matrix $P$, where $P_{i,v}$ is the probability that the $i$-th bit of $x$ takes the value $v$, with $v \in \Sigma$, $\Sigma$ the encoding alphabet, and $\sum_{v \in \Sigma} P_{i,v} = 1$.

Adversaries aim to reconstruct the inference input $x$: given an intermediate representation $\mathrm{IR} = F_F(x)$ of $x$ and the background knowledge $B$, adversaries can devise an attack strategy $A$ to return $x^{\ast}$, the reconstructed version of $x$. The attack strategy can span from visually perceiving the intermediate representations to leveraging advanced input reconstruction techniques that approximate the inverse model. The FrontNet representation function $F_F$ is considered to violate the $\epsilon$-privacy of $x$ if there exist an attack $A$, background knowledge $B$, and intermediate representation $\mathrm{IR} = F_F(x)$ such that

$D\big(x, A(B)\big) - D\big(x, A(\mathrm{IR}, B)\big) > \epsilon,$  (1)

where $\epsilon$ is the privacy parameter that bounds the distance between $x$ and $x^{\ast}$ before and after observing $\mathrm{IR}$, and $D$ measures the distance between an original input $x$ and a reconstructed input $x^{\ast}$. Specifically, $D(x, A(B))$ considers an $x^{\ast}$ that is reconstructed only from the adversaries' background knowledge $B$, whereas in $D(x, A(\mathrm{IR}, B))$, $x^{\ast}$ is reconstructed from both the adversaries' background knowledge $B$ and the observed $\mathrm{IR}$. Eq. 1 says that the privacy of the true inference input $x$ is breached if adversaries can significantly reduce the distance between $x$ and $x^{\ast}$ after obtaining the intermediate representation $\mathrm{IR}$ of $x$.

4 Threat Model

In our threat model, we assume that adversaries are able to obtain data from machines of deep learning cloud systems. There are multiple ways for them to achieve that. For example, attackers may exploit zero-day vulnerabilities to penetrate and compromise the system software of the server. Insiders, such as cloud administrators, can also retrieve and leak data from the servers on purpose. The data can be files on disks or snapshots of physical memory. We assume that adversaries understand the format of the files stored on disks and are able to locate and extract structured data (of their interest) from memory snapshots. We also expect that adversaries master the state-of-the-art techniques [43, 18, 29] for reconstructing inputs from IRs.

However, we assume that the adversaries cannot break into the perimeter of CPU packages to track code execution and data flow at the processor level. We do not intend to address side-channel attacks against Intel SGX in this paper. However, in Section 9 we introduce some recent representative SGX side-channel attacks, give an in-depth analysis of why the core computation of deep neural networks is still resilient to side-channel attacks, and discuss the potential vulnerabilities.

We assume that adversaries do not have access to the training dataset, thus they cannot train a surrogate model. This is a reasonable assumption because end users only need to submit pre-trained models to AI cloud providers, rather than releasing their training datasets. Protecting training data privacy is out of the scope of this paper. If end users depend on third-party training providers to train their models, they may consider privacy-preserving training mechanisms [55, 45, 46, 47] to protect training data from being leaked to the training providers. Furthermore, we do not expect that adversaries have access to the whole (or a superset of) the inference input dataset. Otherwise, we consider that in this scenario the inference data have already been leaked and adversaries only need to determine which samples have been used as inputs, rather than reconstructing the contents of the inputs. Having the privilege to access inference input datasets is not realistic in the general settings of AI cloud services.

In addition, we intend to guarantee the privacy of user input data, but protecting the confidentiality and integrity of the final outputs of deep learning services is out of the scope of this paper. Here we propose some preliminary solutions for readers interested in this problem. To protect the prediction results, end users can upload models that only output class ids, rather than meaningful class labels. Therefore, users can interpret the outputs on their local machines without leaking the classification results. In addition, end users can also deploy the DNN output layer, i.e., the last layer along with the softmax activation function, into a secure enclave and deliver the outputs directly to end users via a secure communication channel. To protect the integrity of outputs, end users may leverage statistical sampling methods to validate inference results via a local DNN that shares the same topology as its cloud counterpart.

5 System Design

In order to protect the confidentiality and integrity of user-provisioned inputs, we design DeepEnclave, a privacy-enhancing cloud-based deep learning inference system. Here we explain the key components of our system in detail.

5.1 Partitioning of DNNs

As defined in Section 3, the representation function for a FrontNet is $F_F$ and for a BackNet is $F_B$. The parameter set $\theta$ of the original neural network is divided into $\theta_F$ and $\theta_B$ according to the network partition. The output shape of a FrontNet should be compatible with the input shape of its corresponding BackNet. We deliver $\mathrm{IR} = F_F(x; \theta_F)$ as an input to the subsequent BackNet and continue the computation to get a result $y^{\ast} = F_B(\mathrm{IR}; \theta_B)$. Given the same input $x$, we expect that $y^{\ast}$ should be equivalent to $y = F(x; \theta)$, which is the output of the original neural network before the partition.
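The partition-equivalence property above can be illustrated with a minimal, self-contained sketch (plain Python/NumPy rather than our Darknet-based prototype; the toy layer sizes and the partitioning index lambda_ are arbitrary assumptions of this illustration): a network is modeled as a list of layer functions, split at lambda_, and the chained FrontNet/BackNet output is checked against the unpartitioned output.

import numpy as np

def make_dense(w, b):
    # A toy fully-connected layer with ReLU, standing in for an arbitrary f_i.
    return lambda x: np.maximum(w @ x + b, 0.0)

rng = np.random.default_rng(0)
dims = [8, 16, 16, 8, 4]                      # toy layer widths
layers = [make_dense(rng.standard_normal((dims[i + 1], dims[i])),
                     rng.standard_normal(dims[i + 1]))
          for i in range(len(dims) - 1)]

def run(fs, x):
    # Sequentially apply sub-functions f_1 ... f_k (chain composition).
    for f in fs:
        x = f(x)
    return x

lambda_ = 2                                   # hypothetical partitioning point
frontnet, backnet = layers[:lambda_], layers[lambda_:]

x = rng.standard_normal(dims[0])              # a toy inference input
ir = run(frontnet, x)                         # IR = F_F(x; theta_F), computed in the enclave
y_star = run(backnet, ir)                     # y* = F_B(IR; theta_B), computed outside
y = run(layers, x)                            # y  = F(x; theta), unpartitioned reference

assert np.allclose(y, y_star)                 # partitioning preserves the inference result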

5.2 Secure Remote DNN Computation

On the cloud side, the FrontNet and the inputs from end users should be loaded into a Trusted Execution Environment (TEE) that can guarantee the confidentiality, integrity, and freshness of the protected memory for secure remote computation. We choose the Intel SGX enclave [44] as the TEE in our research prototype, but our approach can in principle be generalized to other TEEs [12, 36]. With the protection of SGX's memory access control mechanism and memory encryption engine (MEE), all non-enclave accesses from privileged system software or other untrusted components of the system are denied. Thus the computation over the user inputs with the FrontNet is kept within the perimeter of a specific CPU package and is invisible to the external world. The computation within an enclave is still naturally dedicated to distilling features for a specific inference task, exhibiting the same behavior as its counterpart running outside the enclave. Furthermore, the enclave can attest to remote parties (i.e., the end users of AI cloud services) that the FrontNet is running in a secure environment hosted by a trusted hardware platform.

5.3 Confidentiality of Inference Inputs

In order to protect the contents of user inputs from being exposed on cloud servers, end users need to encrypt the inputs with their symmetric keys and upload the encrypted files to the cloud service. After finishing the remote attestation with the enclave, end users can provision the symmetric keys to the enclave via a secure communication channel. The code inside the enclave then decrypts the user inputs and passes them to the FrontNet model, which should have been loaded into the same enclave. In addition, we leverage the Galois Counter Mode (GCM) to achieve authenticated encryption. Thus we can authenticate legitimate end users and render service-abusing attacks ineffective. Adversaries who intend to treat the in-enclave FrontNet as a black-box service and query it to extract model information would need to encrypt their inputs with the proper symmetric keys of the legitimate end users. Assuming that the end users' keys are not leaked, we can deny such illegitimate requests that fail the integrity check and prevent the leakage of FrontNet model information, which is considered crucial for reconstructing user inputs.
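To illustrate the client-side encryption step, the sketch below uses AES-GCM via the Python cryptography package; the file name, associated data, and key handling are hypothetical, and our actual prototype performs the corresponding decryption and tag verification inside the SGX enclave.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

key = AESGCM.generate_key(bit_length=128)     # symmetric key held by the end user
aesgcm = AESGCM(key)

with open("input.jpg", "rb") as f:            # hypothetical inference input
    plaintext = f.read()

nonce = os.urandom(12)                        # 96-bit nonce, unique per message
aad = b"deepenclave-input-v1"                 # associated data bound to the GCM tag
ciphertext = aesgcm.encrypt(nonce, plaintext, aad)   # ciphertext || 16-byte GCM tag

# Upload (nonce, ciphertext) to the cloud; after remote attestation the key is
# provisioned into the enclave, which decrypts and verifies the tag:
try:
    recovered = aesgcm.decrypt(nonce, ciphertext, aad)
except InvalidTag:
    # Requests that fail authentication are denied, which blocks black-box
    # queries from adversaries who do not hold the user's key.
    recovered = None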

In our initial design, we do not require end users to provide encrypted FrontNet models. This can protect user data privacy if adversaries only intend to infer original inputs from the FrontNet's IR outputs. We demonstrate in Section 6.1 that the quality of IR images (we map the IRs back to the pixel space) decays rapidly after passing the first few layers. However, advanced adversaries may master techniques to reconstruct inputs from IRs in neural networks. Although the convolution and pooling operations of a ConvNet are not reversible, more powerful adversaries might have access to both the IRs and the model parameters. With some prior knowledge, adversaries can still approximately generate inputs that are similar to the original inputs [43, 18, 29]. In order to defend against such input reconstruction attacks, we enhance our design to support user-provisioned encrypted FrontNets. By protecting the confidentiality of both the user inputs and the FrontNet model, all state-of-the-art (as far as we know) input reconstruction methods are no longer effective. We give a detailed analysis of why we can neutralize input reconstruction techniques in Section 6.2.

5.4 Workflow

Input: img_enc: encrypted image input
       fn_cfg: FrontNet neural network configuration
       fnw_enc: encrypted FrontNet neural network weights
       bn_cfg: BackNet neural network configuration
       bnw: BackNet neural network weights
       clt: client key-provisioning IP
 1: ############## Within SGX Enclave ##############
 2: ▷ Encrypted input / encrypted FrontNet
 3: function enclave_load_enc_model(fn_cfg, fnw_enc, clt)
 4:     tls ← enclave_attestation(clt)
 5:     fnw_key, img_key ← enclave_get_keys(clt, tls, fnw_t, img_t)
 6:     fnw ← enclave_decrypt(fnw_enc, fnw_key)
 7:     fn ← enclave_load_weights(fn_cfg, fnw)
 8:
 9: function enclave_inference_enc_img(img_enc)
10:     img ← enclave_decrypt(img_enc, img_key)
11:     ir ← enclave_network_inference(fn, img)
12:     return ir
13:
14: ############## Out of SGX Enclave ##############
15: ▷ Encrypted input / encrypted FrontNet
16: function inf_enc_model_img(fn_cfg, fnw_enc, bn_cfg, bnw, img_enc, clt)
17:     eid ← init_enclave()
18:     enclave_load_enc_model(eid, fn_cfg, fnw_enc, clt)
19:     ir ← enclave_inference_enc_img(eid, img_enc)
20:     bn ← load_weights(bn_cfg, bnw)
21:     result ← network_inference(bn, ir)
22:     return result
Algorithm 1: Privacy-Enhancing DNN Classification
Figure 2: The Workflow of an Image Classification Service via DeepEnclave

We summarize the workflow of DeepEnclave (using an image classification service as an example) by explaining the steps in Figure 2 and corresponding pseudo-code in Algorithm 1. In this case, an end user can provide both encrypted inputs and a pre-trained model (with an encrypted FrontNet).

❶ The end user needs to partition the model into a FrontNet and a BackNet (we implement a tool to automate offline model partitioning in our research prototype). The FrontNet should be kept secret and encrypted with a symmetric key owned by the end user. We do not expect to protect the BackNet in our scenario, and the configuration and weights of the BackNet are shared with the cloud provider. As the BackNet is not supposed to run inside an SGX enclave due to performance constraints, we omit the discussion of protecting the parametric data of the BackNet here, though it would still be advisable to use standard encryption mechanisms and protocols to protect the BackNet in communication and at rest. In addition to the FrontNet model, the end user also needs to encrypt the inputs with her symmetric key.

❷ The end user uploads the encrypted model to the image classification service on the cloud. She only needs to provide the model once to initiate the service. After the service starts, the end user can continuously upload encrypted inputs for classification.

❸ On the cloud side, after receiving the encrypted image and the encrypted FrontNet model, DeepEnclave instantiates an Intel SGX enclave (init_enclave at line 17) and loads the encrypted FrontNet model (enclave_load_enc_model at line 18) for deep neural network computation into the enclave. Then the cloud service invokes the image classification API function (enclave_inference_enc_img at line 19) and securely copies the encrypted image into the enclave as the function argument.

❹ The end user and the SGX enclave perform the remote attestation procedure [10]. The enclave can prove to the end user that it is running on top of a trusted hardware platform with legitimate code/data from a trusted cloud service provider. A detailed description of the standard attestation protocol can be found in an example [7] provided by Intel. Due to the unclear licensing procedure for registering SGX enclave code and the prerequisites for using the Intel Attestation Server (IAS), we currently skip this step and instantiate a TLS session directly between the end user and the enclave.

❺ After creating a secure Transport Layer Security (TLS) communication channel, the end user can provision symmetric keys (enclave_get_keys at line 5) directly into the enclave on the cloud.

❻ Inside the enclave, we verify the integrity of both the model and the input by checking their GCM authentication tags, and decrypt the FrontNet model (enclave_decrypt at line 6) and the input (enclave_decrypt at line 10) with the provisioned symmetric keys from the end user. Then we can build a deep neural network based on the FrontNet (enclave_load_weights at line 7), pass the decrypted input into the model (enclave_network_inference at line 11), and generate the IR from the FrontNet.

❼ The generated IR is securely copied out of the enclave through a controlled channel of SGX. We build another deep neural network based on the BackNet model (load_weights at line 20).

❽ We then pass the IR into the BackNet and get the final classification result (network_inference at line 21). The final result is an $N$-dimensional real-valued vector that represents a probability distribution over the $N$ possible classes.

❾ Based on the service specification, the deep learning cloud service can choose to return the Top-$k$ classes with their corresponding probabilities to the end user.

6 Security Analysis

After building the DeepEnclave system for partitioned enclave execution, we now address the problem of determining the optimal partitioning points for deep neural networks via a comprehensive security analysis.

Here we simulate two hypothetical adversaries, A1 and A2, within the reconstruction privacy attack framework (defined in Section 3); both aim to uncover the contents of the original raw input $x$ after obtaining IRs out of the enclave. We consider that both adversaries have no prior knowledge of the input $x$, i.e., the probability matrix $P$ holds the uniform distribution with $P_{i,v} = 1/|\Sigma|$, but they have different (from weak to strong) attack strategies $A$:

A1: This adversary is able to view the IRs generated out of the FrontNet. The strategy is to pick the IR that reveals the most information about the original input. We measure the information exposure by assessing the IRs at different partitioning layers of a DNN.

A2: In addition to viewing the IRs, this more advanced adversary has also mastered input reconstruction techniques for deep neural networks. Thus the strategy of this adversary is to derive an approximated inverse function $F_F^{-1}$ from $F_F$ and compute $x^{\ast} = F_F^{-1}(\mathrm{IR})$. The reconstructed $x^{\ast}$ may leak the information of the original input $x$. We demonstrate that DeepEnclave by design renders such attacks ineffective.

6.1 Perception of IRs (A1)

Figure 3: The Architecture of the Neural Network Assessment Framework

Based on our threat model, we assume that the adversary is able to retrieve the IR data of the hidden layers located outside SGX enclaves, even though the IRs may only reside in memory. Therefore, it is crucial to investigate whether this adversary can perceive and infer the contents of the original inputs by viewing the IRs. In a ConvNet, IRs are organized in the form of stacked feature maps. Thus we project all feature maps back to the pixel space and save them as IR images. For example, if a convolutional layer of a model has 64 filters and its output is a 64x112x112 tensor, we can generate 64 IR images (112 pixels in width and 112 in height) from its output. We conduct experiments on three ImageNet-level deep neural networks, i.e., the Darknet Reference Model (17 layers with 5,040 IR images) [2], the Extraction Model (28 layers with 12,880 IR images) [3], and the DenseNet Model (306 layers with 107,888 IR images) [33].
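The projection from feature maps to IR images can be sketched as follows (plain NumPy/Pillow; the channel-first tensor layout and the per-channel min-max normalization are assumptions of this illustration, not the exact Darknet code):

import numpy as np
from PIL import Image

def feature_maps_to_ir_images(fmap, prefix="ir_layer"):
    # fmap: layer output of shape (channels, height, width), e.g., (64, 112, 112).
    images = []
    for c in range(fmap.shape[0]):
        ch = fmap[c]
        lo, hi = ch.min(), ch.max()
        # Min-max normalize each feature map to [0, 255]; a constant map becomes all zeros.
        scaled = np.zeros_like(ch) if hi == lo else (ch - lo) / (hi - lo) * 255.0
        img = Image.fromarray(scaled.astype(np.uint8), mode="L")
        img.save(f"{prefix}_{c:03d}.png")
        images.append(img)
    return images

# Example: project a hypothetical 64-filter convolutional output back to pixel space.
ir_images = feature_maps_to_ir_images(np.random.rand(64, 112, 112))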

One method to simulate this adversary is to let human subjects view all IR images and pick the ones that reveal the original input's information. However, this task is tedious and error-prone for human beings, considering the quantity of IR images they need to inspect, and it is also difficult to quantify the distance between the input and the IRs. Instead, we replace human subjects with another ConvNet (exploiting a ConvNet's approaching-human visual recognition capability) to automatically assess all IR images and identify the ones revealing the most input information at each layer. This approach is based on the insight that if an IR image retains content similar to the input image, it will be classified into similar categories by the same ConvNet. By measuring the similarity of the classification results, we can deduce whether a specific IR image is visually similar to its original input. End users can further leverage the assessment results to determine the optimal partitioning points for different neural network architectures.

Neural Network Assessment Framework

In Figure 3, we present the dual-ConvNet architecture of our neural network assessment framework. We submit an input $x$ to the IR Generation ConvNet (IRGenNet), which generates $\mathrm{IR}_i$ at each Layer $i$ ($i \in \{1, \dots, n\}$); each $\mathrm{IR}_i$ contains multiple feature maps. We then project the feature maps to IR images and submit them to the IR Validation ConvNet (IRValNet), which shares the same network architecture/weights as the IRGenNet. The outputs of both ConvNets are $N$-dimensional ($N$ is the number of classes) probability vectors of class scores. We use the Kullback-Leibler (KL) divergence to measure the similarity of the classification results. At each Layer $i$, we select the IR image with the minimum KL divergence from the input to quantitatively measure the information exposure:

$D_{KL}^{(i)} = \min_{j} D_{KL}\big(F(\mathrm{IR}_{i,j}) \,\|\, F(x)\big),$  (2)

where $F$ is the representation function shared by both IRGenNet and IRValNet and $\mathrm{IR}_{i,j}$ is the $j$-th IR image generated at Layer $i$. To determine the optimal partitioning point for each neural network, we also compute $D_{KL}(u \,\|\, F(x))$, where $u$ is the uniform probability vector with $u_k = 1/N$ and $N$ is the number of classes. This baseline represents that the adversary has no prior knowledge of $x$ before obtaining the IRs and considers that $x$ would be classified into all classes with equal probability. Based on Eq. 1, we can compare the resulting divergences with the user-specified bound: given a user-chosen $\epsilon$, it is safe to partition at Layer $i$ only if $D_{KL}^{(i)}$ does not fall below the uniform baseline by more than the amount permitted by $\epsilon$. It is worth noting that comparing against the uniform distribution is a very tight privacy bound for the information exposure. In real-world scenarios, end users can relax this constraint and specify their own bound to satisfy their privacy requirements.
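As a rough illustration of the assessment step (the softmax inputs, helper names, and the KL direction are assumptions of this sketch rather than the exact implementation), the per-layer score of Eq. 2 and the uniform baseline can be computed as follows:

import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # D_KL(p || q) for discrete probability vectors p and q.
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def layer_score(p_input, ir_image_probs):
    # p_input: IRValNet softmax for the original input x (length N).
    # ir_image_probs: IRValNet softmax vectors, one per IR image at this layer.
    # Return the minimum divergence, i.e., the most input-revealing IR image.
    return min(kl_divergence(q, p_input) for q in ir_image_probs)

def uniform_baseline(p_input):
    # Divergence of the uniform distribution u (no prior knowledge) from F(x).
    n = len(p_input)
    return kl_divergence(np.full(n, 1.0 / n), p_input)

# A layer is a candidate partitioning point once its score reaches the baseline,
# meaning its best IR image is no more informative than a blind uniform guess.
def safe_to_partition(p_input, ir_image_probs):
    return layer_score(p_input, ir_image_probs) >= uniform_baseline(p_input)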

6.1.1 Model Analysis

(a) Layer 1: Conv
(b) Layer 2: MaxPool
(c) Layer 3: Conv
(d) Layer 4: MaxPool
(e) Layer 5: Conv
(f) Layer 6: MaxPool
Figure 4: The List of Most Similar-to-Input IR Images at Each Hidden Layer — Darknet Reference Model
(a) Layer 1: Conv
(b) Layer 2: MaxPool
(c) Layer 3: Conv
(d) Layer 4: MaxPool
(e) Layer 5: Conv
(f) Layer 6: Conv
Figure 5: The List of Most Similar-to-Input IR Images at Each Hidden Layer — Extraction Model
(a) Layer 1: Conv
(b) Layer 5: Route
(c) Layer 6: Conv
(d) Layer 21: Conv
(e) Layer 59: Conv
(f) Layer 205: Conv
Figure 6: The List of Most Similar-to-Input IR Images at Each Hidden Layer — DenseNet Model
Figure 7: KL Divergence for Intermediate Representations of Hidden Layers — Darknet Reference Model
Figure 8: KL Divergence for Intermediate Representations of Hidden Layers — Extraction Model
Figure 9: KL Divergence for Intermediate Representations of Hidden Layers — DenseNet Model
Darknet Reference Model

This is a relatively small neural network for ImageNet classification. Its parameter count is only about one tenth of AlexNet's [37], but it still retains prediction performance comparable to AlexNet in both Top-1 and Top-5 accuracy. We display the result of our assessment for the first six layers in Figure 4. For each hidden layer, we choose the IR image that has the minimum KL divergence. For example, Layer 1 is a convolutional layer, and its most similar-to-input IR image is generated by the 6th filter of that layer. We can still visually infer the original content in the first several layers, but it becomes more and more difficult for subsequent layers. In Figure 7, we present the range of KL divergence scores (black columns) for the IR images of all layers except the last three, i.e., the average pooling, softmax, and cost layers, which do not generate IR images. We also highlight the line for the KL divergence of the uniform distribution with regard to the input's classification distribution. We find that after the first several layers the minimum KL divergence scores approach and surpass the uniform distribution's KL line, which indicates that viewing IRs generated beyond that point no longer helps reveal information about the original input. Thus end users can choose to partition the network there and enclose the preceding layers to run within the enclave.

Extraction Model

Compared to the Darknet Reference Model, the Extraction Model is deeper and achieves higher Top-1 and Top-5 prediction accuracy. We present the most similar-to-input IR images of its first six layers in Figure 5 and the KL divergence scores in Figure 8. We observe a similar phenomenon: after the first several layers, the KL divergence score ranges exceed the KL divergence of the uniform distribution, which marks the safe partitioning point for this neural network.

DenseNet Model

In classical ConvNet architectures, each layer only obtains its input from the preceding layer. However, as the network depth increases, this may lead to the vanishing gradient problem [23, 11]. To address this issue, researchers introduced short paths across layers to make it practical to train very deep neural networks. The authors of the DenseNet Model [33] introduced a network topology with DenseBlocks. Within each DenseBlock, each layer obtains inputs from all preceding layers and also passes its own IRs to all subsequent layers. Between two adjacent DenseBlocks, transitional layers adjust the IR sizes. We find that such special network structures, i.e., DenseBlocks and densely connected layers, can still be consistently quantified with KL divergence. We show the KL divergence scores in Figure 9. The DenseNet Model has four DenseBlocks. In each DenseBlock, the minimum KL divergence scores plummet regularly every two layers. The reason is that there exist route layers (after every two convolutional layers) that receive inputs from all preceding layers in the same DenseBlock; for example, the minimum KL divergence at a convolutional layer drops sharply at the route layer that follows it. If we partition in the middle of DenseBlock 1, the subsequent IRs in DenseBlock 1 can still reveal the input's information. However, there is no densely connected path that crosses different DenseBlocks. Although there are still fluctuations of the KL divergence scores in DenseBlock 2, the scores are significantly larger than those of the layers in DenseBlock 1. In Figure 6, in addition to Layers 1, 5, and 6, we also display the most similar-to-input IR images at the transitional layers (Layers 21, 59, and 205) between different DenseBlocks. Based on the uniform-distribution KL divergence, the optimal partitioning point is at the last layer of DenseBlock 1, or, for more cautious end users, at the last layer of DenseBlock 2.

Remarks

End users can choose to include fewer layers inside an enclave, which may leak some degree of information, or more layers for a stronger privacy guarantee at the cost of additional performance and usability constraints. As we have shown in the experiments on three representative deep learning models, different neural network architectures may have different optimal partitioning points. With our neural network assessment framework, end users can test and determine the optimal partitioning layer for their specific deep learning models on local machines before uploading the inputs and models to the deep learning inference service.

6.2 Input Reconstruction Techniques (A2)

With a stronger adversarial model, we expect that the adversary has mastered advanced input reconstruction techniques and aims to reconstruct the inputs from the intermediate representations. We describe the general input reconstruction problem in deep neural networks formally as follows: the representation function up to Layer $\lambda$ of a given DNN is $F_F$, which transforms the input $x$ to $\mathrm{IR} = F_F(x)$. Given an intermediate representation $\mathrm{IR}$, the adversary aims to compute an approximated inverse function $F_F^{-1}$ that generates an input $x^{\ast} = F_F^{-1}(\mathrm{IR})$ minimizing the distance between $x$ and $x^{\ast}$. Here we qualitatively review the state-of-the-art input reconstruction techniques for deep neural networks, analyze the requirements and preconditions of these works, and demonstrate that DeepEnclave can protect the data privacy of user inputs from powerful adversaries equipped with these techniques.

Mahendran and Vedaldi [43]

In this work, the authors proposed a gradient-descent-based approach to reconstructing original inputs by inverting the intermediate representations. Following the formal description of the input reconstruction problem above, the objective of their approach is to minimize a loss function, namely the Euclidean distance between $F_F(x^{\ast})$ and $\mathrm{IR}$. Considering that $F_F$ should not be uniquely invertible, they restrict the inversion by adding a regularizer to enforce natural image priors. In our design, the user-provisioned deep learning models are partially encrypted. The FrontNet models are encrypted by end users and are only allowed to be decrypted inside SGX enclaves. Even if A2 knows the input reconstruction technique of Mahendran and Vedaldi [43], the representation function $F_F$, which corresponds to the FrontNet in our case, is not available in decrypted form outside the enclave. In addition, we can also prevent adversaries from querying the online FrontNet as a black box to conduct their optimization procedure. The reason is that we use GCM to enable authenticated encryption, so the enclave code can deny illegitimate requests whose authentication tags cannot be verified with the end users' symmetric keys. Thus A2 is not able to conduct the optimization procedure to compute $F_F(x^{\ast})$ and thereby recover $x^{\ast}$.
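For reference, the gradient-descent inversion used in this line of work can be sketched as the following optimization problem (the regularization weight $\beta$ and the regularizer $R$ are our notation and not taken from [43]):

$x^{\ast} = \arg\min_{x' \in \mathcal{X}} \; \big\| F_F(x') - \mathrm{IR} \big\|_2^2 + \beta R(x'),$

where $R$ encodes natural image priors (e.g., total variation) and the minimization proceeds by gradient descent over $x'$; each step requires evaluating $F_F$ and its gradients, which is precisely what the encrypted in-enclave FrontNet withholds from A2.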

Dosovitskiy and Brox [18]

This research has a similar goal of inverting the intermediate representations to reconstruct the original inputs. Compared to Mahendran and Vedaldi [43], the major difference is that they do not need to manually define the natural image priors, but learn the priors implicitly and generate reconstructed images with an up-convolutional neural network. They rely on supervised training to build the up-convolutional neural network, in which the training input is an intermediate representation $\mathrm{IR}_i = F_F(x_i)$ and the training target is the corresponding input $x_i$. This reconstruction technique cannot work in our security setting either. A2 needs to collect the training pairs $\{(\mathrm{IR}_i, x_i)\}$. Because the FrontNet is guaranteed to be confidential, i.e., $F_F$ is unknown, A2 cannot retrieve the model parameters and generate the training data needed to train the up-convolutional neural network. Similarly, we can also prevent A2 from querying the online FrontNet as a black box to generate training pairs, because A2 does not possess the end users' keys to encrypt the inputs and generate correct authentication tags; the enclave code will therefore not return IRs to A2. Without the up-convolutional neural network, A2 cannot reconstruct the original input either.

7 Implementation

We build our research prototype based on Darknet [50], an open-source neural network implementation in C and CUDA. We support both user-provisioned encrypted inputs and encrypted FrontNets. End users can first partition their DNNs at a specific layer and encrypt both the FrontNet and the inputs with tools we provide on their local machines. We also implement a neural network assessment framework to measure the quality of IRs at different layers. It can guide end users to determine, for each specific deep learning model, the optimal number of layers to include in a FrontNet and run within an enclave. In total, we add and revise 23,154 SLOC in C and 474 SLOC in Python for the development of DeepEnclave.

8 Performance Evaluation

In the performance evaluation, we measure the performance overhead under different system settings and identify the constraining factors in practice. By understanding the trade-off between security and performance, end users can determine the level of security protection they intend to achieve and the corresponding performance and usability cost they may have to pay. Our testbed is equipped with an Intel i7-6700 3.40GHz CPU with 8 cores, 16GB of RAM, and runs Ubuntu Linux 16.04 with kernel version 4.4.0.

Figure 10: Performance Overhead of Running FrontNet in SGX Enclaves (compiled with -O2)
Figure 11: Performance Overhead of Running FrontNet in SGX Enclaves (compiled with -Ofast)

We measure the inference performance of DeepEnclave by passing test samples through the Extraction Model. In the base case, we load the whole neural network without using SGX and obtain the average time for predicting unencrypted images. To compare with the base case, we partition the network, load multiple layers as the FrontNet inside an SGX enclave and the following layers outside the enclave, and obtain the same performance metrics. We emphasize that both the images and the FrontNet models are encrypted in these cases and are decrypted at runtime inside SGX enclaves. Due to the SGX memory limitation, we can load up to 10 layers of the Extraction Model into the enclave. We compile DeepEnclave with both gcc optimization levels -O2 and -Ofast (with -O3 and -ffast-math enabled) and present the normalized performance results in Figures 10 and 11, respectively. For each individual input, we include the one-time overhead of enclave initialization in the performance measurement. We distinguish the performance overhead contributed by enclave initialization, in-enclave computation, and out-of-enclave computation with bars of different colors. Layer 0 is the base case, with unencrypted inputs and all layers running outside the SGX enclave.

For optimization level -O2, we observe that the performance overhead increases from 12% for running one layer inside an enclave to 28% for ten layers. Enclave initialization contributes the most significant portion of the additional performance overhead. However, once the enclave is initialized, we observe that an inference task within an SGX enclave has even lower performance overhead than running outside the enclave. This is mainly due to the characteristics of deep neural network computation, which is compute-intensive, benefits greatly from the CPU cache, and thus decreases the rate of reads and writes to encrypted memory, which are considered expensive in SGX. In the cloud scenario, we do not need to initialize and tear down an enclave for each service request, but can run one enclave as a long-lived service to serve all client requests. For optimization level -Ofast, we observe that the absolute time for enclave initialization is at the same level as in Figure 10. The in-enclave FrontNet computation incurs 1.64x to 2.54x overhead compared to the base case. The BackNet still conducts inference computation at the same speed as the base case. We speculate that the slowdown inside the enclave is due to the ineffective -ffast-math flag for floating-point arithmetic acceleration. We expect that in the future Intel will release an optimized math library for SGX enclaves to further reduce the floating-point arithmetic overhead.

Compared to approaches based on cryptographic primitives [22, 46, 42] and to running the whole neural network within a single enclave [47], the performance overhead of DeepEnclave makes online privacy-preserving deep learning inference feasible and adaptable for production-level, large-scale deep neural networks. The out-of-enclave BackNet computation can still benefit from hardware acceleration, and we grant end users the freedom to adjust the network partitioning strategy to satisfy their specific performance requirements.

9 Discussion

SGX Side-Channel Attacks

Since the inception of SGX as a TEE with a strong threat model, i.e., guaranteeing the confidentiality and integrity of computation on top of untrusted or adversarial system software, researchers have studied a spectrum of side channels to extract sensitive information from enclave execution. These side channels include page faults [66], high-resolution timers [27], branch history [39], memory management [65], CPU caches [27, 13, 24, 15], etc. In these case studies, side-channel attacks have been explored and applied to extract, and further infer, sensitive information from various SGX-protected applications.

We have not observed any SGX side-channel attack that recovers the inputs of deep neural networks so far, and we consider such a task exceedingly difficult due to the special computation model of neural networks. In existing SGX side-channel attacks, the attackers need to eavesdrop on side-channel signals that leak control transfer information. However, the memory access patterns of deep neural network computation do not depend on the input data, so it is not possible to recover an input instance from the leaked side-channel information alone. In addition, in our design for DeepEnclave, the FrontNet of the model is encrypted. This leaves the prerequisites for SGX side-channel attacks, i.e., the availability of the source code or binaries of the target programs, unsatisfied.

However, through side channels, it is still possible to recover model hyperparameters, e.g., filter strides, activation function types, or the size of the inputs, which are pre-defined by deep learning model builders. Furthermore, if a deep learning image classification service needs to handle JPEG files as inputs, the file preprocessing procedure, i.e., the extraction of the pixel content of the images, may invoke JPEG library functions, which have been demonstrated to be potentially vulnerable to side-channel attacks [66, 27].

Applicable Deep Learning Models

DeepEnclave is based on the feature distillation process in DNNs. However, our system does not apply to information-preserving neural networks designed for specific learning tasks. One representative case is the autoencoder [32], which is used for efficient encoding, dimension reduction, and learning generative models. No matter where we partition an autoencoder network, adversaries can always recover the (approximated) original inputs from any hidden layer. The reason is that an autoencoder is trained to minimize the reconstruction error between inputs and outputs. The original input information is always preserved (though possibly compressed) throughout the autoencoder network. We leave the privacy protection of information-preserving neural networks as future work.

10 Related Work

In this section we list the research efforts that are closely related to our work and highlight our unique contributions compared to these works.

Cryptographic Schemes Based Machine Learning

Most existing privacy-preserving machine learning solutions are based on cryptographic schemes, such as secure multi-party computation (SMC) and fully homomorphic encryption (FHE) [21]. Solutions based on SMC protect the intermediate results of the computation when multiple parties perform collaborative machine learning on their private inputs. SMC has been used for several fundamental machine learning tasks [41, 19, 63, 64, 35, 46]. Besides these protocol-based solutions, researchers have recently proposed to leverage cryptographic primitives to perform deep learning inference. Gilad-Bachrach et al. [22] proposed CryptoNets, a neural network model that makes predictions on data encrypted under FHE schemes. This approach protects the confidentiality of each individual input. MiniONN [42] transforms existing neural networks into oblivious neural networks that support privacy-preserving predictions. Considering the significant performance overhead of cryptographic schemes, we instead leverage Intel SGX technology to keep part of the deep neural network computation confidential on the cloud side. Hence we can protect the privacy of user inputs at inference time and defend against state-of-the-art input reconstruction attacks.

Distributed Deep Learning

Shokri and Shmatikov [55] designed a distributed privacy-preserving deep learning protocol that shares selected parameters and gradients for training deep neural networks in a differentially private way. Ossia et al. [48] proposed a distributed machine learning architecture to protect the user's input privacy. Their framework consists of a feature extractor on the mobile client side and a classifier on the server side. The server side performs the inference task on dimension-reduced features extracted by the mobile client. PrivyNet [40] is a split-model deep learning training approach: it reuses layers of pre-trained models for feature extraction on local machines and trains the cloud neural network for the learning task based on the feature representations generated by the local neural network. Different from these works, our approach leverages the TEE on the cloud directly to guarantee the confidentiality of user inputs and the user-provisioned model. Thus we significantly simplify the client logic and relieve client devices, which typically have limited computing capacity and power budgets, from heavyweight neural network computation. In addition, our approach does not involve transferring intermediate representations over the network, thus eliminating the additional performance overhead of dimension reduction or data compression.

SGX Applications

Researchers have leveraged SGX [16] technology to replace expensive cryptographic schemes for secure multi-party computation. Ohrimenko et al. [47] leveraged trusted SGX-enabled CPUs for privacy-preserving multi-party machine learning and proposed to make machine learning algorithms data-oblivious in order to prevent SGX side-channel attacks. The SGX technique has also been used for efficient two-party secure function evaluation [26], private membership tests [61], and trustworthy remote entities [38]. Different from these works, which focus on data-sharing privacy in collaboration tasks, our work intends to protect user input privacy from being exposed to cloud providers. SGX technology is also widely researched in cloud scenarios. VC3 [54] runs distributed MapReduce computation within SGX enclaves on the cloud to keep users' code and data confidential. Opaque [69] is a distributed data analytics platform introducing SGX-enabled oblivious relational operators to mask data access patterns. SecureKeeper [14] provides an SGX-enhanced ZooKeeper to protect sensitive application data. HardIDX [20] leverages SGX to support search over encrypted data. Differing from these application scenarios, we leverage SGX to protect user input privacy and propose an SGX-enhanced partitioned deep learning model for cloud services.

11 Conclusion

We systematically study the information exposure in deep learning inference and propose DeepEnclave, a privacy-enhancing deep learning system that minimizes the sensitive information disclosure of user inputs. The key innovation of our system is to partition each deep neural network into two subnet models by exploiting the layered compositional structure of neural networks. We further leverage Intel SGX to protect the confidentiality of both user inputs and user-specified deep neural network layers. In addition, we design a neural network assessment framework that quantifies the privacy loss and helps end users determine the optimal partitioning layers for different neural network architectures. Our system by design renders existing state-of-the-art input reconstruction techniques ineffective, thus eliminating the channels for adversaries to invert the deep neural networks and reconstruct the user inputs.

References