Towards Characterizing and Limiting Information Exposure in DNN Layers

07/13/2019 ∙ by Fan Mo, et al.

Pre-trained Deep Neural Network (DNN) models are increasingly used in smartphones and other user devices to enable prediction services, leading to potential disclosure of (sensitive) information from the training data captured inside these models. Based on the concept of generalization error, we propose a framework to measure the amount of sensitive information memorized in each layer of a DNN. Our results show that, when considered individually, the last layers encode a larger amount of information from the training data than the first layers. We find that, while a neuron of a convolutional layer can expose more (sensitive) information than one of a fully connected layer, the same DNN architecture trained on different datasets has a similar exposure per layer. We evaluate an architecture that protects the most sensitive layers within the memory limits of a Trusted Execution Environment (TEE) against potential white-box membership inference attacks without significant computational overhead.




1. Introduction

On-device DNNs have achieved impressive performance on a broad spectrum of services based on images, audio, and text. Examples include face recognition for authentication (Vazquez-Fernandez and Gonzalez-Jimenez, 2016), speech recognition for interaction (McGraw et al., 2016), and natural language processing for auto-correction (Bellegarda and Dolfing, 2017). However, DNNs memorize in their parameters information from the training data (Zhang et al., 2017; Yeom et al., 2018; Zeiler and Fergus, 2014). Thus, keeping DNNs accessible on user devices raises privacy concerns when the training data contains sensitive information.

Previous works have shown that reconstructing the original input data is easier from the first layers of a DNN, when using the layer's output (activation) for inference (Gu et al., 2018; Osia et al., 2018, 2017). In addition, the functionality of the parameters of each layer differs. For example, the first layers of a DNN trained on images extract low-level features, whereas later layers learn higher-level features, such as faces (Zeiler and Fergus, 2014).

We hypothesize that the memorization of sensitive information from training data differs across the layers of a DNN and, in this paper, present an approach to measure this sensitive information. We show that each layer behaves differently on the data it was trained on compared to data seen for the first time, by quantifying the generalization error (i.e. the expected distance between the prediction accuracy on training data and on test data (Yeom et al., 2018; Shalev-Shwartz et al., 2010)). We further quantify the risk of sensitive information exposure of each layer as a function of its maximum and minimum possible generalization error. The larger the generalization error, the easier it is to infer sensitive information about the training set data.

We perform experiments by training VGG-7 (Simonyan and Zisserman, 2014) on three image datasets: MNIST (LeCun et al., 2010), Fashion-MNIST (Xiao et al., 2017), and CIFAR-10 (Krizhevsky and Hinton, 2009). Our results show that last layers memorize more sensitive information about training data, and the risk of information exposure of a layer is independent of the dataset.

To protect the most sensitive layers from potential white-box attacks (Melis et al., 2019; Hitaj et al., 2017; Nasr et al., 2018), we leverage a resource-limited Trusted Execution Environment (TEE) (Chou et al., 2018; Ohrimenko et al., 2016; Hunt et al., 2018) unit, Arm’s TrustZone, as a protection example. Experiments are conducted by training last layers in the TEE and first layers outside the TEE. Results show that the overhead in memory, execution time and power consumption is minor, thus making it an affordable solution to protect a model from potential attacks.

2. Proposed approach

2.1. Problem Definition

Let F be a DNN with L layers, parameterized by W = {W_1, ..., W_L}, where W_l is the matrix with the parameters of layer l. Let D be the training set of images x. Let D_p, a randomly selected subset of D with |D_p| = |D|/2, be the private dataset and D_n = D \ D_p, with |D_n| = |D|/2, be the non-private dataset.

As training on D_p might embed some information of D_p in the parameters of each layer, we aim to quantify the exposure of sensitive information in each W_l. The sensitive information we are interested in analyzing is the absence or presence of any x in the training data.

Figure 1. The proposed framework for measuring the risk of exposing sensitive information in a deep neural network F trained on a private dataset D_p. F_l^base and F_l^over are obtained by fine-tuning the parameters W_l of a target layer l on the whole training set D (i.e. both D_p and the non-private training set D_n) and on D_p, respectively.

2.2. Sensitive Information Exposure

We leverage the fact that F, trained on D_p, has a higher accuracy in predicting the classes of data points from D_p than from another dataset, D_n. The difference in prediction accuracy indicates the generalization error (Yeom et al., 2018; Shalev-Shwartz et al., 2010) of F and how easy it is to recognize whether a data point was in D_p during training. We define the risk of sensitive information exposure of each layer l based on the maximum and minimum possible generalization errors (see Figure 1). A larger gap between the maximum and minimum generalization error indicates more sensitive information exposure, which allows inferring more accurately the absence or presence of a data point in the training data (i.e. a membership inference attack (Yeom et al., 2018)).

To obtain the maximum generalization error, we increase the chance of overfitting to D_p by fine-tuning W_l while freezing the parameters of all other layers of F. We call this model F_l^over. If d(F(x), y) is the distance between the prediction F(x) and the label y, measured by the cost function used in training, we quantify GE_l^over, the generalization error of F_l^over, based on its different behaviour on D_p and D_n:

GE_l^over = E_{x ∈ D_n}[d(F_l^over(x), y)] − E_{x ∈ D_p}[d(F_l^over(x), y)]

where E denotes the mathematical expectation.
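As a concrete illustration, the generalization error can be computed from per-example losses: the expectation over the non-private set D_n minus the expectation over the private set D_p. A minimal sketch (the loss values below are hypothetical, not measured):

```python
import numpy as np

def generalization_error(loss_private, loss_nonprivate):
    """Generalization error of a model: the gap between its expected
    loss on data it has never seen (D_n) and on the private training
    data (D_p) it may have memorized."""
    return float(np.mean(loss_nonprivate) - np.mean(loss_private))

# Hypothetical per-example cross-entropy losses: the model fits D_p
# much more tightly than D_n, so the gap is large and positive.
ge = generalization_error(loss_private=[0.05, 0.10, 0.08],
                          loss_nonprivate=[0.90, 1.20, 1.05])
```

A model that behaves identically on both sets would yield a generalization error close to zero, giving a membership inference adversary little signal to exploit.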

To obtain the minimum generalization error without forgetting D_p, we create a baseline F_l^base by fine-tuning W_l on the whole training set D while freezing the parameters of all other layers of F. This fine-tuning generalizes F_l^base over both D_p and D_n, which can be quantified as:

GE_l^base = E_{x ∈ D_n}[d(F_l^base(x), y)] − E_{x ∈ D_p}[d(F_l^base(x), y)]
F, F_l^over and F_l^base share the same layers, except for the target layer l. Therefore, the differences in each pair of GE_l^over and GE_l^base are due to the different parameters of layer l.
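The fine-tuning step that produces both models can be sketched framework-agnostically: only the parameters of the target layer are marked trainable, and every other layer of the pre-trained model stays frozen (in PyTorch this would correspond to setting requires_grad = False on the frozen tensors). A minimal sketch with hypothetical layer names:

```python
def trainable_mask(layer_names, target_layer):
    """Return, for each layer of the pre-trained model, whether its
    parameters should be updated during fine-tuning: only the target
    layer is trainable; all other layers are frozen."""
    if target_layer not in layer_names:
        raise ValueError(f"unknown layer: {target_layer}")
    return {name: name == target_layer for name in layer_names}

# Fine-tuning layer "conv2" only, as when producing the overfitted
# and baseline models for that target layer.
mask = trainable_mask(["conv1", "conv2", "fc"], target_layer="conv2")
```

Applying the same mask while training on D_p yields the overfitted model and while training on the whole set D yields the baseline, so any behavioural difference is attributable to the one trainable layer.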

We therefore quantify R_l, the risk of sensitive information exposure of layer l, by comparing the generalization errors of F_l^over and F_l^base:

R_l = (GE_l^over − GE_l^base) / GE_l^over

The larger R_l, the higher the risk of exposing sensitive information.
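The risk can then be computed directly from the two generalization errors; the sketch below assumes a normalized-gap form, so that the risk falls between 0 and 1 (the input values are hypothetical):

```python
def exposure_risk(ge_over, ge_base):
    """Risk of sensitive information exposure of a layer, sketched as
    the gap between the maximum generalization error (target layer
    overfitted to D_p) and the minimum one (target layer fine-tuned
    on the whole training set D), normalized by the maximum."""
    if ge_over <= 0:
        raise ValueError("the maximum generalization error must be positive")
    return (ge_over - ge_base) / ge_over

# Hypothetical values: a large gap between the overfitted and the
# baseline generalization error yields a high exposure risk.
risk = exposure_risk(ge_over=1.0, ge_base=0.37)
```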

3. Measuring information exposure

3.1. Model and Datasets

We use VGG-7 as the DNN F, which has six convolutional layers followed by one fully connected layer (16C3-16C3-MP-32C3-32C3-MP-32C3-32C3-MP-64FC-10SM). Each layer is followed by a Rectified Linear Unit (ReLU) (Nair and Hinton, 2010) activation function.

We use three datasets: MNIST, Fashion-MNIST, and CIFAR-10. MNIST includes 60k training images of handwritten digits of 10 classes (i.e. 0 to 9). Fashion-MNIST contains 60k images of 10 classes of clothing, namely T-shirt/top, trouser, pullover, dress, coat, sandal, shirt, sneaker, bag, and ankle boot. CIFAR-10 includes 50k training images of 10 classes including airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck.

We split each training set into D_p and D_n, as explained in Sec. 2.1. We train F on D_p using 20 epochs for MNIST, 40 epochs for Fashion-MNIST, and 60 epochs for CIFAR-10. The accuracy of VGG-7 on MNIST, Fashion-MNIST, and CIFAR-10 is 99.29%, 90.55%, and 71.63%, respectively. We then fine-tune F as F_l^over and F_l^base with 10 epochs for MNIST, 20 epochs for Fashion-MNIST, and 30 epochs for CIFAR-10.
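The 50/50 split into a private and a non-private half can be sketched as follows (the fixed seed is a hypothetical choice for reproducibility, not stated in the paper):

```python
import random

def split_private(dataset, seed=0):
    """Randomly split a training set into a private half D_p and a
    non-private half D_n of equal size, as described in Sec. 2.1."""
    indices = list(range(len(dataset)))
    random.Random(seed).shuffle(indices)
    half = len(indices) // 2
    d_p = [dataset[i] for i in indices[:half]]
    d_n = [dataset[i] for i in indices[half:]]
    return d_p, d_n

# E.g. the 60k MNIST training images yield two disjoint 30k halves.
d_p, d_n = split_private(list(range(60000)))
```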

Figure 2. Generalization errors of F_l^over and F_l^base trained on half of the training set, D_p, of (a) MNIST, (b) Fashion-MNIST and (c) CIFAR-10, for fine-tuning each target layer. Error bars represent 95% confidence intervals.

3.2. Results and Discussion

Generalization error. Figure 2 shows the generalization errors of F_l^over and F_l^base. For all three datasets, as expected, the model F_l^over, whose layer l is overfitted to D_p, has higher generalization errors than the baseline model F_l^base, while the generalization error on CIFAR-10 is greater than that on Fashion-MNIST, which in turn is greater than that on MNIST. A more complex dataset (e.g. CIFAR-10) is associated with a larger difference between the behaviour on D_p and D_n than a less complex dataset (e.g. MNIST), so it is harder to generalize the model to predict D_n by training with D_p.

As we go through the convolutional layers, the generalization error of F_l^over increases, while the generalization error of F_l^base decreases towards the last convolutional layers. A possible explanation is that the first layers memorize generic information (e.g. colors and corners), whereas the last layers memorize more specific information that can be used to identify a particular image. For example, fine-tuning the last layers on D_p leads to memorizing information specific to D_p, which consequently increases the generalization error of F_l^over when predicting D_n.

Sensitive information exposure. Figure 3 shows the risk of sensitive information exposure for each layer of VGG-7 on all three datasets. The first layer has the lowest risk, and the risk increases as we go through the layers, with the last convolutional layer having the highest sensitive information exposure: 0.63 for both MNIST and Fashion-MNIST and 0.5 for CIFAR-10. This confirms the larger deviation of the generalization error of F_l^over from F_l^base in the last layers compared to the first layers. In addition, the ranking of layers in terms of sensitive information exposure is almost the same across all three datasets.

Figure 3. The risk of sensitive information exposure of VGG-7 per layer on MNIST, Fashion-MNIST and CIFAR-10. Error bars represent 95% confidence intervals.
Figure 4. Risk per neuron for each layer on MNIST, Fashion-MNIST and CIFAR-10. Error bars represent 95% confidence intervals.

We also compute the risk per neuron for each layer by normalizing the risk of sensitive information exposure by the total number of neurons in the layer (Figure 4). The results show that the risk per neuron increases as we move through the convolutional layers. Neurons in the last convolutional layers have a high capability of memorizing sensitive information, whereas the fully connected layer has a much smaller risk per neuron.
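The per-neuron normalization is straightforward; the risks and neuron counts below are hypothetical placeholders, not the actual VGG-7 layer sizes:

```python
def risk_per_neuron(layer_risk, num_neurons):
    """Exposure risk normalized by the number of neurons in the layer,
    indicating how much each individual neuron can memorize."""
    if num_neurons <= 0:
        raise ValueError("a layer must have at least one neuron")
    return layer_risk / num_neurons

# Hypothetical risks and neuron counts for two layers: the same
# normalization applies to convolutional and fully connected layers.
conv_rpn = risk_per_neuron(layer_risk=0.5, num_neurons=512)
fc_rpn = risk_per_neuron(layer_risk=0.1, num_neurons=64)
```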

4. Trusted Execution Environment

4.1. Setup

Figure 5. Using a TEE to protect the most sensitive layers (last layers) of an on-device deep neural network.
Figure 6. Execution time, memory usage and power usage for protecting layers of VGG-7 trained on MNIST (left column) and CIFAR-10 (right column) using the TrustZone of the device. The x-axis corresponds to the last layers included in the TrustZone. O refers to the calculation of the cost function; SM, FC, D, MP, and C refer to the softmax, fully connected, dropout, maxpooling, and convolutional layers of VGG-7. The number of layers with trainable parameters in the TrustZone is 1, 2, 3, or 4. The dashed line represents the baseline, which runs all layers outside the TrustZone. Error bars represent 95% confidence intervals.

In this section, we implement and evaluate the cost of protecting the last layers of an on-device DNN during fine-tuning by deploying them in the TrustZone of a device (see Figure 5). TrustZone is Arm's TEE implementation, which establishes a secure region on the main processor; both hardware and software mechanisms isolate this region to allow trusted execution. As TEE memory is usually small, we protect only the most sensitive layers of the model and use the normal execution environment for the other layers.

We use the Darknet (Redmon, 2016) DNN library in Open Portable TEE (OP-TEE), a TEE framework based on TrustZone, on a Raspberry Pi 3 Model B. This model of Raspberry Pi runs OP-TEE with 16 mebibytes (MiB) of TEE memory. We chose Darknet because of its high performance and small dependencies. The scripts we used in our evaluation are available online.

We fine-tune the pre-trained VGG-7 (from the previous section) on MNIST and CIFAR-10, respectively. For simplicity, contiguous layers are deployed in the TrustZone starting from the last layer, including both layers with trainable parameters (i.e. the convolutional and fully connected layers) and without (i.e. the dropout and maxpooling layers).
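This placement amounts to partitioning the ordered layer list so that a contiguous tail runs inside the TrustZone; a minimal sketch, using the VGG-7 shorthand from Sec. 3.1 as layer names:

```python
def partition_for_tee(layers, n_protected):
    """Split an ordered list of layers into the part that runs in the
    normal world and the contiguous tail that runs inside the TEE."""
    if not 0 <= n_protected <= len(layers):
        raise ValueError("cannot protect more layers than exist")
    cut = len(layers) - n_protected
    return layers[:cut], layers[cut:]

vgg7 = ["16C3", "16C3", "MP", "32C3", "32C3", "MP",
        "32C3", "32C3", "MP", "64FC", "10SM"]
# Protect the fully connected and softmax layers inside the TrustZone.
outside, inside = partition_for_tee(vgg7, 2)
```

Growing n_protected moves more of the sensitive tail into the TEE, until its memory limit is reached.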

4.2. Results and Discussion

Figure 6 shows the execution time (in seconds), memory usage (in MB), and power consumption (in Watts, measured with a RuiDeng USB Tester, UM25C) of securing part of the DNN in the TrustZone, starting from the last layer and adding layers until the maximum number the zone can hold.

The resulting increases in execution time and memory usage are small compared to the baseline (execution time: 1.94% for MNIST and 1.62% for CIFAR-10; memory usage: 2.43% for MNIST and 2.19% for CIFAR-10). Moreover, running layers in the TrustZone did not significantly influence the power usage.

Specifically, deploying the dropout layer and the maxpooling layer in the TEE increases both the execution time and the memory usage. The reason is that these two types of layers have no trainable parameters, and in Darknet, dropout and maxpooling operate directly on the trainable parameters of the preceding layer. Therefore, to run these two types of layers in the TEE, their preceding layer (i.e. a fully connected or convolutional layer) needs to be copied into the TEE, which increases the cost. For the layers with parameters that we aim to protect (1, 2, 3, and 4 in Figure 6), deploying the fully connected layers (i.e. 1 and 2) in the TEE increases neither the execution time accumulated over the first layers nor the memory usage. Deploying the convolutional layers (i.e. 3 and 4) leads to an increase in execution time but does not increase memory usage when using MNIST; the second convolutional layer (i.e. 4) increases memory usage only when using CIFAR-10. However, exhausting most of the available memory of the TEE can also cause an increase in overhead, so the reason for this increase in memory usage needs more analysis. Overall, for our implementation, protecting fully connected and convolutional layers in the TEE has lower cost than protecting layers without trainable parameters.

5. Conclusion

We proposed a method to measure the exposure of sensitive information in each layer of a pre-trained DNN model. We showed that the closer a layer is to the output, the higher the likelihood that sensitive information about the training data is exposed, which is the opposite of the exposure risk of layers' activations on test data (Gu et al., 2018). We evaluated the use of a TEE to protect the most sensitive layers (i.e. the last layers) of a deployed DNN. The results show that the TEE provides this protection with promising performance at low cost.

Future work includes investigating the advantages of protecting the later layers of a DNN against, among others, white-box membership inference attacks (Nasr et al., 2018).


We acknowledge the constructive advice and feedback from Soteris Demetriou and Ilias Leontiadis. The research in this paper is supported by grants from the EPSRC (Databox EP/N028260/1, DADA EP/R03351X/1, and HDI EP/R045178/1).