Deep Private-Feature Extraction

02/09/2018 ∙ by Seyed Ali Osia, et al.

We present and evaluate Deep Private-Feature Extractor (DPFE), a deep model which is trained and evaluated based on information theoretic constraints. Using the selective exchange of information between a user's device and a service provider, DPFE enables the user to prevent certain sensitive information from being shared with a service provider, while allowing them to extract approved information using their model. We introduce and utilize the log-rank privacy, a novel measure to assess the effectiveness of DPFE in removing sensitive information and compare different models based on their accuracy-privacy tradeoff. We then implement and evaluate the performance of DPFE on smartphones to understand its complexity, resource demands, and efficiency tradeoffs. Our results on benchmark image datasets demonstrate that under moderate resource utilization, DPFE can achieve high accuracy for primary tasks while preserving the privacy of sensitive features.


1 Introduction

The increasing collection of personal data generated by, or inferred from, our browsing habits, wearable devices, and smartphones, alongside the emergence of data from Internet of Things (IoT) devices, is fueling many new classes of applications and services. These include healthcare and wellbeing apps, financial management services, personalized content recommendations, and social networking tools. Many of these systems and apps rely on sensing and collecting data on the user's side and uploading it to the cloud for subsequent analysis.

While many of the data-driven services and apps are potentially beneficial, the underlying unvetted and opaque data collection and aggregation protocols can cause excessive resource utilization (i.e., bandwidth and energy) [1] and, more importantly, data security threats and privacy risks [2]. Collection and processing of private information on the cloud introduce a number of challenges and tradeoffs, especially when the scalability of data collection and uploading practices is taken into consideration. The data is often fed into machine learning models for extracting insights and features of commercial interest, where the information is exposed to data brokers and service providers. While certain features of the data can be of interest for specific applications (e.g., location-based services or mobile health applications), the presence of additional information in the data can lead to unintended subsequent privacy leakages [3, 4]. Current solutions to this problem, such as cryptography [5, 6] and complete data isolation with local processing [7], are not efficient for big data and techniques relying on deep learning [8]. In today’s data-driven ecosystem, these privacy challenges are an inherent side effect of many big data and machine learning applications.

In this paper, we focus on providing privacy at the first step of this ecosystem: the exchange of acquired user data between the end user and a service provider. We propose a novel solution based on a compromise between scalability and privacy. The proposed framework is based on the idea that when preparing data for subsequent analysis by service provider, the end user does not need to hide all the information by means of cryptographic methods, which can be resource-hungry or overly complex for the end-user device. Instead, it might suffice to remove the sensitive parts of the information (e.g., identity features in a face image), while at the same time preserving the necessary information for further analysis. This is also the case in many surveillance applications where a central node is required to process user data that may be sensitive in some aspects.

Fig. 1: The proposed hybrid framework for user-cloud collaboration.

The proposed hybrid framework, in which the user and the cloud collaborate to analyze the raw user data in a private and efficient manner, is depicted in Figure 1. Our work relies on the assumption that the service provider releases a publicly verifiable feature extractor module based on an initial training set. The user then performs a minimalistic analysis, extracts a private-feature from the data, and sends it to the service provider (i.e., the cloud) for subsequent analysis. The private-feature is analyzed in the cloud and the result is returned to the user. The fundamental challenge in this framework is the design of a feature extractor module that removes sensitive information properly while not impairing scalability by imposing heavy computational requirements on the user’s device.

In the rest of this paper, we first discuss the privacy issues in different aspects of machine learning and identify the problem of “user data privacy in interaction with cloud services” as the main focus of this paper. To design the feature extractor module, we express our privacy preservation concerns as an optimization problem based on mutual information and relax it to make it addressable by deep learning. We then present the Deep Private-Feature Extractor (DPFE), a tool for solving the aforementioned relaxed problem. Next, we propose a new privacy measure, the log-rank privacy, to verify the proposed feature extractor, measure its privacy, and evaluate the efficiency of the model in removing sensitive information. The log-rank privacy can be interpreted from different perspectives, including entropy, k-anonymity and classification error. We evaluate this framework on the facial attribute prediction problem using face images: we remove the facial identity information while keeping the facial attribute information, and analyze the privacy-accuracy tradeoff. Finally, we implement different private-feature extractors on a mobile phone to compare the performance of different solutions and address the scalability concern.

The main contributions of this paper are: (i) proposing a hybrid user-cloud framework for the user data privacy preservation problem which utilizes a private-feature extractor as its core component; (ii) designing the private-feature extractor based on information theoretic concepts, leading to an optimization problem (Section 3); (iii) proposing a deep neural network architecture to solve the optimization problem (Section 4); and (iv) proposing a measure to evaluate privacy and verify the feature extractor module (Section 5).¹

¹ All the code and models for the paper are available at https://github.com/aliosia/DPFE

2 Privacy in Machine Learning

Machine learning methods need to analyze sensitive data in many use cases in order to perform their desired tasks, which may violate users’ privacy. This fundamental dichotomy appears in different aspects of machine learning, as listed in Figure 2. These concerns can be classified as public dataset privacy, training phase privacy, training-data privacy, model privacy and user data privacy, which are discussed in the rest of this section.

Fig. 2: Privacy concerns may exist when: (i) a data holder shares a public dataset: the anonymity of individuals is threatened; (ii) data holders participate in a model training procedure with their private data; (iii) a model provider shares a publicly-learned model: the privacy of the individuals whose data was used for training is at risk; (iv) an end user shares his/her data with the service provider: private information can be revealed to the service provider; (v) a service provider shares query answers with the end user: an attacker can infer the model itself by launching repeated queries.

2.1 Public Dataset Privacy

Training data is a crucial component of any learning system. Collecting and sharing rich datasets for data mining tasks can be highly beneficial to the learning community, although it may come with privacy concerns that make it a double-edged sword. Publishing a dataset that satisfies both parties, by preserving the users’ privacy as well as the information useful for data mining tasks, is a challenging problem with a long line of work. Agrawal and Srikant [9] were among the first to address the privacy concern in data mining, aiming to share a generic dataset for learning tasks while considering users’ privacy. They utilized a randomization technique in which noise is added to the data to guarantee privacy. The distribution of the resulting noisy data may differ from the original distribution; to reconstruct the original distribution, a recovery method was introduced in that paper and extended by Agrawal et al. in [10]. Using this method, it is possible to train a learning model on reconstructed data with the same distribution as the original data. Many works have followed this trend and extended this idea; however, the approach faces two important obstacles, the curse of dimensionality and non-robustness to attacks [11], which make it inefficient for high dimensional data with side information.

k-anonymity is another popular option for addressing the problem of anonymous dataset publishing, first introduced by Sweeney [12]. Publishing a health database that contains patients’ sensitive information is one of the favored instances of k-anonymity usage. Assuming all data points have identity documents (IDs) that should be kept private, k-anonymity deals with transforming a dataset in such a way that, given an individual’s data features, one cannot infer its ID among fewer than k identities. Many approaches have been presented to make a database k-anonymous [13, 14, 15, 16], and most of them are based on generalization (e.g. removing the last digit of the patient’s zip code) or suppression of features (e.g. removing the name). Nevertheless, this approach faces some important challenges under attacks [11], although [17], [18] and [19] tried to overcome these challenges. Furthermore, these methods are only well-suited to structured databases with high level features (e.g. relational databases), which makes them hard to deploy for other types of data (e.g. images and video). Newton et al. [20] published a k-anonymous image dataset by proposing the k-same algorithm. While they build the desired dataset by constructing average images among identities, the models they employed are not reliable today.

2.2 Training Phase Privacy

A common problem of centralized learning is the collection of training data, especially when dealing with individuals’ sensitive data (e.g. health information). People are usually reluctant to share data that includes their habits, interests, and geographical positions. An emerging solution to this problem is federated learning, in which data holders keep their data private while communicating with a central node in order to train a learning model cooperatively. [21] addressed this problem by using distributed stochastic gradient descent (SGD), where each party loads the latest parameters, updates them using SGD, and uploads the newly selected parameters to the central node that holds the global model. While direct leakage of private data is thereby prevented, the uploaded gradients might still include sensitive information from the training data. Thus, a differentially private algorithm for sharing the gradients is required, which is proposed in that work. This approach still has some major problems, e.g. the loose privacy bound addressed by [22] and the potential threats from generative adversarial networks addressed by [23]. An alternative solution could be the use of cryptographic techniques such as secure multi-party computation, recently used by [24]. However, these techniques are still not applicable to complex neural networks, due to their low efficiency and accuracy.

2.3 Training-Data Privacy

The growing popularity of public learning models raises concerns about the privacy of the individuals involved in the training dataset. Differentially private algorithms provide a rigorous answer to this problem, offering a method to answer queries from a statistical database without disclosing individuals’ information, as formalized by [25]. An algorithm is called differentially private if the conditional likelihood ratio of the presence and absence of an individual, given the transformed statistic, is close to one. Adding noise to the original statistic is one popular method for achieving differential privacy. We can consider a learning model as a complex statistic of its training data which should not reveal information about the individuals. Answering complex queries by combining simple queries is the way various learning models, such as principal component analysis and k-means, can be made differentially private (see the surveys by [26] and [27]). Recently, differentially private deep models were proposed by [28]. The authors in [22] introduced a privacy preservation framework utilizing differential privacy which is not specific to the learning model and possesses a state-of-the-art privacy-accuracy tradeoff.

2.4 Model Privacy

Model privacy is the concern of the service provider and deals with keeping the learning model private while returning inference results to the user. Over the years, less attention has been paid to model privacy, although some works such as [29] have studied this problem. In general, an adversary can infer the model parameters by making many queries to the learning model and aggregating the answers. [29] considered this approach for some basic models, e.g. logistic regression, multilayer perceptrons and decision trees.

2.5 User Data Privacy

The increasing usage of cloud-based systems has created a situation where preserving privacy is a challenging but important task. When the user data and the pre-trained learning model are not accessible from the same place, user data must inevitably be sent to the service provider for further analysis. Cryptographic schemes are prevalent in these situations, where the two parties do not trust each other. Focusing on deep models offered by a cloud service, [30] introduced this problem and proposed a homomorphic encryption method to execute the inference directly on encrypted data. Even though this work is an interesting approach to the problem, a number of shortcomings make it impractical. In fact, approximating a deep neural network with a low degree polynomial function may not be feasible without sacrificing accuracy. Furthermore, the complexity of the encryption is relatively high, which makes it inefficient for real-world online applications. An alternative to homomorphic encryption was suggested by [31]. They used the garbled circuit protocol and addressed some of the discussed challenges; however, they were limited to simple neural networks and had a very high computational cost.

In summary, using cryptographic techniques on complex deep neural networks is not yet feasible, while the problem of user data privacy is becoming more important every day in the cloud computing era. In this paper we target this challenge and address it with a machine learning solution, based on a specific kind of feature extraction model, formulated in the next section.

3 Problem Formulation

In this section, we address the user data privacy challenge in a manner different from encryption-based methods. The key intuition is that for many applications we can remove all of the user’s sensitive (unauthorized) information while retaining the ability to infer the primary (authorized) information. This is in contrast to encryption-based solutions, which try to encode all the information such that only authorized users can access it. For instance, we may want to hide individuals’ identities in a video surveillance system, while still allowing the number of participants to be counted. In this scenario, a trivial solution is to censor people’s faces in the frames; however, this solution fails when the purpose is to measure facial attributes such as emotion or gender. Henceforth, we address this problem as a privacy preservation problem and use the terms primary and sensitive information for the information that needs to be preserved and removed, respectively. Assuming the service provider knows the primary and sensitive random variables, we abstract this concept as an optimization problem by utilizing mutual information (see Appendix A for information theoretic preliminaries).

Let $x$ be the input, $z$ the primary, and $y$ the sensitive variables. We would like to extract a feature $f$, by applying a function $g$ to $x$, which is informative about the primary variable and non-informative about the sensitive variable. We refer to the extracted feature as the private-feature. More specifically, the desired private-feature is obtained by maximizing the mutual information between the feature and the primary variable, $I(f; z)$, while minimizing the mutual information between the feature and the sensitive variable, $I(f; y)$, as follows:

$$\max_{g}\;\; I(f; z) \;-\; \lambda\, I(f; y), \qquad f = g(x)$$

where $I(\cdot\,;\cdot)$ denotes the mutual information between two random variables and $\lambda$ is a positive parameter that controls the tradeoff between preserving the primary information and removing the sensitive information.

Even though at first glance it seems that the optimal solution of this problem is to set $f$ equal to the best estimation of $z$, this is not applicable in many real world applications because: (a) the optimal model which perfectly predicts $z$ can be too complicated, and hence using such a feature extractor on the client side is impossible; and (b) the service provider may not share the whole model with the client, for reasons such as copyright issues. Assuming we can accurately estimate $f$ by using a member of a parametric family of functions $\{g(\cdot\,;\theta)\}$, the optimization problem becomes:

$$\max_{\theta}\;\; I(f; z) \;-\; \lambda\, I(f; y), \qquad f = g(x; \theta) \tag{1}$$

Fig. 3: Private-feature extraction probabilistic graphical model.

where $f$ is a deterministic function of the input variable, parameterized by $\theta$. The graphical model of this problem is shown in Figure 3.

Optimizing mutual information has been widely used in many information theoretic approaches to machine learning problems. The authors in [32] formulated Infomax and addressed the problem of unsupervised deterministic invertible feature extraction by maximizing the mutual information between the input and the feature. [33] relaxed the limiting invertibility constraint and used a variational approach, leading to the IM algorithm for maximizing mutual information. Recently, [34] used a similar method to maximize mutual information in generative adversarial networks. These can be considered the fundamental works on unsupervised feature extraction from an information theoretic viewpoint; however, since we utilize a supervised approach, those methods cannot be applied to our case. Among works considering supervised feature extraction, the information bottleneck introduced in [35] is the most relevant. In general, the information bottleneck provides an information theoretic framework for analyzing the supervised feature extraction procedure. Although their optimization problem looks similar to ours, there is a fundamental difference between the two approaches. More specifically, they use $I(f; x)$ instead of $I(f; y)$, meaning that information irrelevant to $z$ should be removed by minimizing $I(f; x)$ in the process of feature extraction. Therefore, they cannot directly incorporate privacy constraints on $y$. Moreover, their optimization problem is solved through an analytical approach that assumes the joint probability distribution $p(x, z)$ is known. However, in practice this distribution is often unavailable. Although their analytical method is impractical, their framework provides a powerful tool for analyzing supervised feature extraction methods.

Similar to the information bottleneck optimization problem, the private-feature extraction problem (Equation 1) is non-convex and cannot be solved through known convex optimization algorithms. To overcome this challenge, it is common to bound the optimization problem and then obtain the desired results using iterative methods similar to [33] and [36]. To this end, we first obtain a lower bound for $I(f; z)$ and an upper bound for $I(f; y)$, and then maximize the resulting lower bound of the objective in Equation 1. Henceforth, we assume $y$ to be a discrete sensitive variable, in order to address the classification privacy problem.

Lower bound for $I(f; z)$. We derive a variational lower bound for the mutual information by first stating Lemma 1 and then proving Theorem 2.

Lemma 1

For any arbitrary conditional distribution $q(z|f)$, we have:

$$I(f; z) \;\ge\; H(z) + \mathbb{E}_{f, z}\big[\log q(z|f)\big] \tag{2}$$

Proof  See Appendix B.1.  

Theorem 2

The lower bound for $I(f; z)$ is given by:

$$I(f; z) \;\ge\; H(z) + \max_{\phi}\; \mathbb{E}_{f, z}\big[\log q(z|f; \phi)\big] \tag{3}$$

Proof  For all members of a parametric family of distributions $\{q(z|f; \phi)\}$, the right hand side of Equation 2 is a lower bound for the mutual information. Equality holds when $q(z|f; \phi)$ equals $p(z|f)$. Therefore, if we consider a rich family of distributions, in which some member can approximate $p(z|f)$ well enough, we obtain a sufficiently tight lower bound for the mutual information by maximizing the right hand side of Equation 2 with respect to $\phi$. By utilizing the definition of entropy, we obtain Equation 3 as the desired lower bound.  
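To make the role of this bound concrete, the following minimal PyTorch sketch (our illustration, not the authors' implementation) shows that maximizing $\mathbb{E}[\log q(z|f;\phi)]$ over a parametric classifier amounts to minimizing a standard cross-entropy loss on the extracted features; the module names and sizes are hypothetical.

```python
# Minimal sketch: maximizing the variational lower bound of Theorem 2 is the
# same as minimizing the cross-entropy of a classifier q(z|f; phi) that
# predicts the primary variable z from the feature f.
import torch
import torch.nn as nn

feature_dim, num_primary_classes = 10, 2                     # hypothetical sizes
z_predictor = nn.Linear(feature_dim, num_primary_classes)    # q(z | f; phi)

def primary_loss(features, z_labels):
    """-E[log q(z|f; phi)]: the quantity maximized in Equation 3 (up to H(z))."""
    logits = z_predictor(features)
    return nn.functional.cross_entropy(logits, z_labels)

# Example: a batch of 4 ten-dimensional features with binary primary labels.
f = torch.randn(4, feature_dim)
z = torch.tensor([0, 1, 1, 0])
loss = primary_loss(f, z)     # minimizing this tightens the lower bound
```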

Upper bound for $I(f; y)$. A two-step procedure can be used to find an upper bound for the mutual information. First, we use Lemma 3 and Jensen's inequality to prove Theorem 4, and obtain Equation 4 as a primitive upper bound for $I(f; y)$. Then we use kernel density estimation (KDE) (see [37]) together with Lemmas 5 and 6 to obtain Equation 5 as the desired upper bound for $I(f; y)$, through Theorem 7.

Lemma 3

Assume $y$ is a discrete random variable with $\mathcal{Y}$ as its range; then:

$$I(f; y) \;=\; \mathbb{E}_{y}\Big[ D_{KL}\big(\, p(f|y) \;\big\|\; \mathbb{E}_{y'}\big[p(f|y')\big] \big) \Big]$$

Proof  By substituting $p(f, y)$ with $p(y)\,p(f|y)$ and then $p(f)$ with $\mathbb{E}_{y'}\big[p(f|y')\big]$ in the definition of $I(f; y)$, we obtain the desired relation.  

By utilizing Jensen's inequality (see Appendix A) and manipulating Lemma 3, we can compute $\mathbb{E}_{y, y'}\big[D_{KL}\big(p(f|y)\,\|\,p(f|y')\big)\big]$ as a primitive upper bound for the mutual information, as follows.

Theorem 4

The upper bound for $I(f; y)$ is given by:

$$I(f; y) \;\le\; \mathbb{E}_{y, y'}\Big[ D_{KL}\big(p(f|y)\,\big\|\,p(f|y')\big) \Big] \tag{4}$$

Proof  See Appendix B.2.  

Since computing the bound of Equation 4 is not tractable, we use an approximation technique to obtain the upper bound. By employing kernel density estimation, we can efficiently estimate the class-conditional feature densities $p(f|y)$ [38]. We utilize Silverman's rule of thumb [39] and use a Gaussian kernel with the desired diagonal covariance matrix. Next, by normalizing each dimension of the feature space to have zero mean and unit variance, we acquire a symmetric Gaussian kernel with a fixed covariance matrix $\sigma^2 I$, where $\sigma$ is a constant depending on the dimensionality of the feature space and the size of the training data. This kind of normalization is a common practice in machine learning [40] and is agnostic to the relations among different dimensions, including independence and correlation. Finally, conditioning on $y$, we can think of each $p(f|y)$ as a Gaussian mixture model (GMM) (see [37]) and use the following lemmas from [41] to obtain a reliable upper bound.
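As an illustration of this step, the sketch below estimates a single kernel bandwidth with the standard multivariate Silverman rule of thumb and evaluates a Gaussian-kernel KDE on standardized features; the exact bandwidth constant used by the authors is not reproduced here, so treat the formula below as an assumption.

```python
# Sketch (assumption: standard multivariate Silverman rule). After normalizing
# each feature dimension to zero mean / unit variance, the Gaussian KDE kernel
# becomes N(0, sigma^2 I) with a single bandwidth sigma that depends only on
# the feature dimension d and the sample size n.
import numpy as np

def silverman_bandwidth(n, d):
    """Rule-of-thumb bandwidth for a Gaussian kernel on standardized data."""
    return (4.0 / (d + 2)) ** (1.0 / (d + 4)) * n ** (-1.0 / (d + 4))

def kde_log_density(query, samples, sigma):
    """log p(f) under a KDE with symmetric Gaussian kernels N(f_i, sigma^2 I)."""
    d = samples.shape[1]
    sq_dists = ((query[None, :] - samples) ** 2).sum(axis=1)
    log_kernels = -sq_dists / (2 * sigma ** 2) - 0.5 * d * np.log(2 * np.pi * sigma ** 2)
    return np.logaddexp.reduce(log_kernels) - np.log(len(samples))

feats = np.random.randn(1000, 10)        # standardized 10-d features (placeholder)
sigma = silverman_bandwidth(len(feats), feats.shape[1])
logp = kde_log_density(feats[0], feats, sigma)
```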

Lemma 5

[41] For two multidimensional Gaussian distributions $\mathcal{N}(\mu_1, \Sigma)$ and $\mathcal{N}(\mu_2, \Sigma)$, with $\mu_1$ and $\mu_2$ as their expected values and the same covariance matrix $\Sigma$, we have:

$$D_{KL}\big(\mathcal{N}(\mu_1, \Sigma)\,\big\|\,\mathcal{N}(\mu_2, \Sigma)\big) \;=\; \tfrac{1}{2}\,(\mu_1 - \mu_2)^{\top}\,\Sigma^{-1}\,(\mu_1 - \mu_2)$$

Lemma 6

[41] For two given GMMs $p = \sum_{a}\pi_a\, p_a$ and $q = \sum_{b}\omega_b\, q_b$, we have:

$$D_{KL}(p\,\|\,q) \;\le\; \sum_{a, b} \pi_a\, \omega_b\, D_{KL}(p_a\,\|\,q_b)$$

where, for each $a$ and $b$, $p_a$ and $q_b$ are the Gaussian distributions forming the mixtures.

We can use Theorem 4, Lemma 5 and Lemma 6 to derive the desired upper bound for $I(f; y)$.

Theorem 7

Given large training data, the upper bound for $I(f; y)$ is given by:

$$I(f; y) \;\lesssim\; \frac{1}{2\sigma^2 N^2} \sum_{i, j:\; y_i \ne y_j} \big\|f_i - f_j\big\|^2 \tag{5}$$

where $f_i$ is the extracted feature of data point $x_i$ with corresponding label $y_i$, $N$ is the number of training points, and the sum is over pairs of points with different $y$ labels.

Proof  See Appendix B.3.  

In other words, the right hand side of Equation 5 is an upper bound that is proportional to the average squared Euclidean distance between pairs of feature vectors having different $y$ labels. This value is practically hard to estimate, especially when we use SGD and have a large number of classes. Therefore, as stated in Theorem 8 and Corollary 9, we use an equivalent relation for this bound which is easier to optimize.

Theorem 8

Constraining the variance of each dimension of the feature space to be 1, we have:

$$\sum_{i, j:\; y_i \ne y_j} \big\|f_i - f_j\big\|^2 \;=\; C \;-\; \sum_{i, j:\; y_i = y_j} \big\|f_i - f_j\big\|^2 \tag{6}$$

where $C$ is a constant that depends only on the feature space dimension and the number of training data points.

Proof  See Appendix B.4.  

Corollary 9

We can optimize the right hand side of Equation 6, instead of the bound in Equation 5, to minimize the upper bound on $I(f; y)$.

Considering Corollary 9 together with Theorem 8, we see that for a random pair of feature points, we should decrease their distance if their sensitive labels are different and increase their distance if they are the same. This is very similar to the contrastive loss presented in [42], a popular loss function for the Siamese architecture [43]. Siamese networks are used for metric learning and tend to form a feature space in which similar points are gathered near each other. This is the opposite of what we aim to achieve: we want to increase the distance between points with the same sensitive label and decrease the distance between points with different sensitive labels.
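A minimal sketch of this pairwise term follows; the margin value and exact form are illustrative assumptions rather than the authors' published loss. It simply swaps the roles of "similar" and "dissimilar" pairs in a standard contrastive loss, so features with different sensitive labels are pulled together and features sharing a sensitive label are pushed apart.

```python
# Sketch of the pairwise sensitive-information-removal term described above:
# a contrastive-style loss with the roles of "similar" and "dissimilar" pairs
# swapped, so features with DIFFERENT sensitive labels are pulled together and
# features with the SAME sensitive label are pushed apart (up to a margin).
import torch

def sensitive_removal_loss(f1, f2, y1, y2, margin=1.0):
    d = torch.norm(f1 - f2, dim=1)                              # pairwise distances
    different = (y1 != y2).float()
    pull = different * d.pow(2)                                 # y1 != y2: pull together
    push = (1 - different) * torch.clamp(margin - d, min=0).pow(2)  # same y: push apart
    return (pull + push).mean()
```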

By utilizing the suggested lower and upper bounds, we can substitute the original private-feature extraction problem (Equation 1) with the following relaxed problem:

$$\min_{\theta, \phi}\;\; \mathbb{E}_{x, z}\big[-\log q(z \mid f; \phi)\big] \;+\; \lambda\; \mathbb{E}_{(x_1, y_1), (x_2, y_2)}\Big[ L_c\big(f_1, f_2;\, \mathbb{1}[y_1 \ne y_2]\big) \Big], \qquad f_i = g(x_i; \theta) \tag{7}$$

where $L_c$ is a contrastive loss over pairs of features, applied according to whether their sensitive labels differ.

Fig. 4: The private-feature extraction framework. $z$ and $y$ are the primary and sensitive variables, respectively. $x_1$ and $x_2$ are two independent samples, and $f_1$ and $f_2$ are their corresponding features. The z-predictor uses only $f_1$ to compute the first term of the loss function, whereas the y-remover uses both $f_1$ and $f_2$ to compute the second term of the loss function (see Equation 7). Solid lines show data flow and dotted lines indicate influence.

Considering the above equation, we should optimize an objective function that consists of two loss terms: the loss of primary variable preservation, modeled by a classification loss (first term), and the loss of sensitive variable elimination, modeled by a contrastive loss (second term). Thus, the general training framework of the private-feature extractor contains three main modules: the feature extractor, the primary variable predictor, and the sensitive variable remover, as shown in Figure 4. Note that according to the second term of Equation 7, the loss function for removing the sensitive variable is defined on pairs of samples; as a result, the y-remover module also operates on pairs of features.

We propose a general deep model along with SGD-based optimizers to solve the optimization problem in Equation 7, as explained in the next section.
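Putting the two terms together, a schematic SGD step for the relaxed objective of Equation 7 could look as follows; `feature_extractor`, `z_predictor`, the pair sampling, and the weight `lam` are placeholders, and `sensitive_removal_loss` is the sketch given above.

```python
# Schematic DPFE training step for the relaxed objective (Equation 7),
# combining the z-prediction loss with the pairwise y-removal loss sketched
# above. All module names, the batch layout, and `lam` are placeholders.
import torch

def dpfe_step(feature_extractor, z_predictor, optimizer, batch, lam=1.0):
    (x1, z1, y1), (x2, _, y2) = batch        # two independent samples per pair
    f1, f2 = feature_extractor(x1), feature_extractor(x2)
    loss_z = torch.nn.functional.cross_entropy(z_predictor(f1), z1)
    loss_y = sensitive_removal_loss(f1, f2, y1, y2)   # defined in the earlier sketch
    loss = loss_z + lam * loss_y
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```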

4 Deep Architecture

By utilizing the latest breakthroughs in the area of deep neural networks, we can practically find good local optima of non-convex objective functions through SGD-based algorithms and accurately estimate complex non-linear functions. Today, a large portion of state-of-the-art learning models are deep. Therefore, having a general framework for privacy-preserving deep inference is necessary. In this paper, we focus on image data (in the context of identity vs. gender, expression, or age recognition) and propose a CNN-based deep architecture (Fig. 5) to optimize the objective function of the relaxed problem (Equation 7). It is worth mentioning that the proposed framework can be generalized to other applications and deep architectures (e.g. recurrent neural networks).

We call the proposed deep private-feature extractor architecture DPFE. We consider two consecutive CNNs: one as the feature extractor and the other as the primary variable predictor. A simple strategy for building these modules is the layer separation mechanism introduced in [44]. We can also employ a batch normalization layer [45] to normalize each dimension of the feature space, as stated in Section 3. In the following, we first introduce the layer separation mechanism and then proceed with the dimensionality reduction and noise addition techniques that can enhance privacy preservation.

4.1 Layer Separation Mechanism

Fig. 5: Deep CNN architecture for private-feature extraction (DPFE architecture). $x_1$ and $x_2$ are independent random samples, and $y_1$ and $y_2$ are their corresponding sensitive labels. The y-remover first checks the equality of the sensitive labels and then applies the information removal loss function.

To deploy our framework, we can start from a pre-trained recognizer of primary variable (e.g. a deep gender recognition model), and make it private to the sensitive variable (e.g. identity). In order to do this, we choose the output of an arbitrary intermediate layer of the pre-trained model as the preliminary private-feature and simply partition the layers of the model into two sets: the elementary and the secondary layers that form the feature extractor and the primary variable predictor, respectively. In this way, the same model can be easily fine-tuned by just appending the contrastive loss function and continuing the optimization process, leading to the private-feature as the intermediate layer’s output. This procedure is shown in Fig. 5.

One may argue that, due to the nature of deep networks, separating the layers of a deep model is sufficient to obtain an ideal private-feature at the intermediate layer. In general, the higher layers of a deep architecture provide a more abstract representation of the data, dropping irrelevant information, including the sensitive information [46], while preserving the primary variable [47]; hence there would be no need to fine-tune the model with the suggested DPFE architecture. However, this argument can easily be rejected by considering the counterexamples provided by deep visualization techniques. For example, [48] provided a method to reconstruct the input image from intermediate layers of a deep network. Osia et al. used this method in [44] and demonstrated that the original face image can be reconstructed from some intermediate layers of a gender recognition model. Thus, there is no guarantee that the intermediate layers drop the sensitive information (identity in this case).
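As a concrete illustration of the layer separation mechanism, the sketch below cuts a sequential CNN at an intermediate layer into a client-side feature extractor and a cloud-side primary-variable predictor. The toy network is a stand-in, not the paper's Caffe gender-recognition model.

```python
# Sketch of the layer separation mechanism: split a pretrained primary-task
# network at a chosen intermediate layer. Any nn.Sequential-style CNN can be
# cut this way; the architecture below is only a placeholder.
import torch.nn as nn

pretrained = nn.Sequential(                        # placeholder pretrained CNN
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),    # <- chosen intermediate layer
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),                              # primary-variable head
)

cut = 4                                            # index just after the chosen layer
feature_extractor = pretrained[:cut]               # elementary layers: user's device
z_predictor = pretrained[cut:]                     # secondary layers: cloud
```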

4.2 Dimensionality Reduction

Imagine that the extracted private-feature has a low dimension (in our evaluation we use a 10-dimensional feature space). In this case, we benefit from the following advantages:

  • We can greatly decrease the communication cost between the user and the service provider, because instead of sending the raw input data to the cloud, the user only sends the low-dimensional private-feature of the input.

  • As shown in Section 5, we need to estimate an expectation to measure privacy; a lower dimension helps us avoid the curse of dimensionality during this approximation.

  • Reducing the dimension of the private-feature will intrinsically improve privacy as suggested by [44] and [49].

Nevertheless, a potential disadvantage of dimensionality reduction is that it can negatively affect the accuracy of the primary variable prediction. However, we show in our experiments that this adverse effect is negligible.

Reducing the dimensionality can be done as a preprocessing step on the pre-trained network. In fact, after choosing the intermediate layer, we can first execute the following operations: (i) embed an auto-encoder with a low dimensional hidden layer on top of the chosen layer; (ii) fine-tune the model to obtain the new primary variable predictor; and (iii) choose the auto-encoder's hidden layer, which is low dimensional, as the new intermediate layer. Consequently, we can fine-tune the model with the DPFE architecture to get a low dimensional private-feature.
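The following sketch shows one way to realize steps (i)–(iii) with a PCA-initialized linear auto-encoder and a batch-normalized 10-dimensional bottleneck (as in Procedure 1 later in the paper); the PyTorch/scikit-learn wiring is our illustrative assumption, not the authors' Caffe implementation.

```python
# Sketch of the dimensionality-reduction step: a linear auto-encoder placed on
# top of the chosen intermediate layer, with its hidden (private-feature) layer
# initialized from PCA of that layer's activations, followed by batch norm.
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

def build_linear_autoencoder(intermediate_outputs, bottleneck_dim=10):
    """intermediate_outputs: (N, D) tensor of the chosen layer's activations."""
    D = intermediate_outputs.shape[1]
    pca = PCA(n_components=bottleneck_dim).fit(intermediate_outputs.numpy())
    encoder = nn.Linear(D, bottleneck_dim)
    decoder = nn.Linear(bottleneck_dim, D)
    with torch.no_grad():                               # PCA initialization
        W = torch.tensor(pca.components_, dtype=torch.float32)   # (k, D)
        mu = torch.tensor(pca.mean_, dtype=torch.float32)        # (D,)
        encoder.weight.copy_(W)
        encoder.bias.copy_(-W @ mu)
        decoder.weight.copy_(W.t())
        decoder.bias.copy_(mu)
    bottleneck = nn.Sequential(encoder, nn.BatchNorm1d(bottleneck_dim))  # private-feature layer
    return bottleneck, decoder

acts = torch.randn(500, 256)                  # placeholder intermediate activations
bottleneck, decoder = build_linear_autoencoder(acts)
```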

4.3 Noise Addition

As mentioned earlier, many privacy preservation methods, from randomization techniques to differentially private algorithms, rely on noise addition to gain privacy, as it increases uncertainty. We can utilize this technique after the training procedure, in the test phase, when dimensionality reduction is employed and the granularity of the sensitive variable is finer than that of the primary variable (e.g. identity is finer than gender).

Adding noise to the private-feature smooths the conditional distributions of both the primary and sensitive variables and forms a tradeoff between privacy (of the sensitive variable) and accuracy (of the primary variable). This tradeoff can be helpful in real world applications, because one can choose the desired point on the privacy-accuracy curve based on the importance of privacy or accuracy in a specific application. We discuss this tradeoff in detail in Section 6.
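A small sketch of this test-time mechanism follows (the scaling ratio and shapes are illustrative): the covariance of the private-feature space is estimated once after training, then scaled and used to draw additive Gaussian noise before the feature leaves the device.

```python
# Sketch of test-time noise addition: estimate the covariance of the learned
# private-feature space once, then add zero-mean Gaussian noise with a scaled
# version of that covariance. Larger ratios trade accuracy for privacy.
import numpy as np

rng = np.random.default_rng(0)

def add_feature_noise(features, cov, ratio):
    """features: (N, d) private-features; cov: (d, d) feature covariance."""
    noise = rng.multivariate_normal(np.zeros(cov.shape[0]), ratio * cov, size=len(features))
    return features + noise

feats = np.random.randn(256, 10)              # placeholder private-features
cov = np.cov(feats, rowvar=False)             # estimated once after training
noisy = add_feature_noise(feats, cov, ratio=0.5)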

5 Privacy Measure

In this section, we propose a method for evaluating the quality of privacy algorithms. Considering the problem formulation by mutual information (Equation 1), one may suggest the negative of the mutual information between the extracted private-feature and the sensitive variable, $-I(f; y)$, as a privacy measure. Since $I(f; y) = H(y) - H(y|f)$ and $H(y)$ is constant, this approach is equivalent to considering the conditional entropy $H(y|f)$ as the privacy measure. However, this measure has two shortcomings: (i) it is difficult to obtain an efficient estimation of $H(y|f)$; and (ii) there is no intuitive interpretation of this measure for privacy. In order to resolve these problems, we relax the definition of uncertainty by ordering the conditional probabilities by their rank and building a lower bound for the conditional entropy.

It is known that among a set of numbers that sum to one, the $i$'th highest value is at most $1/i$. So if we consider $r(y, f)$ as the rank of $p(y|f)$ in the set $\{p(y'|f) : y' \in \mathcal{Y}\}$, sorted in descending order, we have:

$$H(y|f) \;=\; \mathbb{E}_{f, y}\big[-\log p(y|f)\big] \;\ge\; \mathbb{E}_{f, y}\big[\log r(y, f)\big] \tag{8}$$

which leads to the following definition, after dividing by $\log |\mathcal{Y}|$ in order to obtain a normalized measure between zero and one.

Definition 10 (Log-Rank Privacy)

The log-rank privacy of a discrete sensitive variable $y$, given the observed feature vector $f$, is defined as:

$$LR(y \mid f) \;=\; \frac{1}{\log |\mathcal{Y}|}\; \mathbb{E}_{f, y}\big[\log r(y, f)\big] \tag{9}$$

where $r$ is a random function of $f$ and $y$, corresponding to the rank of $p(y|f)$ in the set $\{p(y'|f) : y' \in \mathcal{Y}\}$ sorted in descending order.

Assuming we have an estimation of $p(y|f)$, the log-rank privacy can be empirically estimated by the sample mean of the log-rank over the data:

$$\widehat{LR} \;=\; \frac{1}{N \log |\mathcal{Y}|} \sum_{i=1}^{N} \log r_i$$

where $r_i$, for $i = 1, \dots, N$, is the rank of $p(y_i|f_i)$ in the descending ordered set $\{p(y'|f_i) : y' \in \mathcal{Y}\}$. In the following, we provide some intuition about the log-rank privacy and its relation to entropy, k-anonymity and classification error.
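For concreteness, the empirical estimator above can be computed as in the following sketch, given a matrix of estimated conditional probabilities $p(y|f_i)$ for each sample; how those probabilities are obtained (e.g. via a classifier or density estimation) is left open and is our assumption.

```python
# Sketch of the empirical log-rank privacy estimator. prob_matrix holds the
# estimated p(y | f_i) over all candidate sensitive labels (e.g. identities).
import numpy as np

def log_rank_privacy(prob_matrix, true_labels):
    """prob_matrix: (N, |Y|) estimated p(y|f_i); true_labels: (N,) label indices."""
    N, num_classes = prob_matrix.shape
    # Rank of each label's probability within each row (rank 1 = most probable).
    order = np.argsort(-prob_matrix, axis=1)
    ranks = np.empty_like(order)
    np.put_along_axis(ranks, order, np.arange(1, num_classes + 1)[None, :], axis=1)
    true_ranks = ranks[np.arange(N), true_labels]
    return np.mean(np.log(true_ranks)) / np.log(num_classes)   # normalized to [0, 1]

probs = np.random.dirichlet(np.ones(100), size=50)   # 50 samples, 100 identities
labels = np.random.randint(0, 100, size=50)
print(log_rank_privacy(probs, labels))
```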

20-questions game interpretation. Consider the 20-questions game, in which we want to guess an unknown object by asking yes/no questions of an oracle. As stated in [50], the entropy is equivalent to the minimum number of questions one could ask in order to find the correct answer. Now consider the situation where we cannot ask arbitrary yes/no questions, but only questions that propose a candidate for the final answer, e.g. 'is the answer a chair?'. Also assume that if we guess the correct answer on the $k$'th question, we are penalized by $\log k$, so that wrong guesses are punished more at the beginning. Evidently, the optimal strategy for minimizing the expected penalty is to guess the objects in decreasing order of their probabilities. Using this strategy, the expected penalty equals the (unnormalized) log-rank privacy.

k-anonymity and expected rank. k-anonymity deals with the number of entities we are equally uncertain about. The expected rank can be considered a soft interpretation of this number, relaxing the equal uncertainty to a weighted sum of ranks. Thus, the expectation of the rank variable can be thought of as the expected number of entities that we are in doubt about.

Classification error extension. One could suggest using classification error (zero-one loss) as the privacy measure, as it represents the deficiency of the classifier. Using this measure is equal to considering zero and one penalty for the correct and wrong guesses of the first question, respectively. Thus, two situations where we can find the correct label in the second and tenth question are considered equal and both penalized by one. The log-rank privacy handles this issue by penalizing different questions using their ranks’ logarithm and can be considered as an extension of classification error.

Sensitivity analysis. Empirically approximating an expected value by drawing samples from a probability distribution is a common method in machine learning [37]. To compare the empirical estimation of log-rank privacy with that of entropy, note that only the order of the probabilities needs to be estimated in the former, while the exact values of the probabilities are needed in the latter. In general, approximating the log-rank privacy is less sensitive to errors in the density estimation and can attain lower variance. A detailed sensitivity analysis is out of the scope of this paper and will be considered in future work.

6 Evaluation

In this section, we evaluate the proposed private-feature extractor by considering the problem of facial attribute prediction. We use each face image as an input and infer its facial attributes such as gender, expression, or age, in a supervised manner. We extract a feature for facial attribute prediction, which at the same time is non-informative with respect to the identity of the person (sensitive attribute). In all of our experiments, we used the CelebA face dataset, presented in [51], which includes 40 binary facial attributes, such as gender (male/female), age (young/old), and smiling (yes/no) with the corresponding identity labels. In the following, first we explain the experiment setting and then we discuss the results.

6.1 Experiment Setting

Require: training data, intermediate layer, attribute set
   M ← pre-trained attribute prediction model
   l ← intermediate layer (e.g. conv7)
   d ← size of l's output
   A ← attribute set (e.g. {Gender & Age})
   AE ← linear auto-encoder with input/output size d
   h ← hidden layer of AE (private-feature layer)
   Initialize AE with PCA weights on l's output
   S ← embed AE into M on top of l
   S ← fine-tune S on A
   P ← fine-tune S with the DPFE architecture
Return: S: Simple model, P: DPFE fine-tuned model, h: private-feature layer
Procedure 1: DPFE Training Phase

In our evaluations, we used the layer separation mechanism followed by dimensionality reduction and noise addition. We selected the state-of-the-art pre-trained facial attribute prediction model presented in [52] and call it the original model.³ Then, we chose an attribute set (e.g. {gender & age}) whose information should be preserved in the private-feature. Next, we selected an intermediate layer (e.g. layer conv7) of the chosen network. Since this layer can be a high dimensional tensor, we embedded a linear auto-encoder and applied batch normalization on its hidden layer to obtain normalized intermediate features. Finally, by fine-tuning the network, we obtain an attribute prediction model with a low dimensional intermediate feature, which we refer to as the Simple model in the rest of the paper. While the low-dimensional feature preserves the information of the attributes (see Theorem 2), it does not necessarily omit the sensitive information. Hence, we fine-tune the network with the proposed DPFE architecture (Figure 5) to remove identity information from the intermediate features. We refer to this model as the DPFE model. These steps are depicted in Procedure 1. We implemented all the models with the Caffe framework [53], utilizing the Adam optimizer [54] and a contrastive loss function.

³ We used a similar implementation from https://github.com/camel007/caffe-moon, which uses the tiny darknet architecture from https://pjreddie.com/darknet/tiny-darknet/.

We evaluated each fine-tuned model based on the following criteria:

  • Accuracy of the facial attribute prediction: achieving higher accuracy implies that the primary variable information is well preserved.

  • Identity privacy: we evaluate the privacy of the feature extractor using two different measures: first, the log-rank privacy measure introduced in Section 5; second, the misclassification rate of a 1NN identity classifier, which must be high in order to preserve privacy (although this condition is not sufficient, as discussed in Section 5). We also use the deep visualization technique presented in [48] to demonstrate that the higher layers of the deep network may not be reliable.

To show the generality of the proposed method, we consider four different intermediate layers (conv4-2, conv5-1, conv6-1 and conv7) together with five attribute sets (listed below), and report results for twenty Simple and twenty DPFE models.

  • G: {gender}

  • GA: {gender, age}

  • GAS: {gender, age, smiling}

  • GASL: {gender, age, smiling, big lips}

  • GASLN: {gender, age, smiling, big lips, big nose}

In what follows, we first explain the accuracy-privacy tradeoff based on the log-rank privacy measure and 1NN misclassification rate (Subsection 6.2). We then present the visualization result (Subsection 6.3), and finally address the complexity issue of the private-feature extractor by implementing the proposed framework on a smartphone (Subsection 6.4).

6.2 Accuracy vs. Privacy

To evaluate Simple and DPFE models, we designed the following four experiments and assessed different models based on their accuracy-privacy trade-off:

  1. We compared Simple and DPFE models to show the superiority of DPFE fine-tuning;

  2. We assessed the effect of different intermediate layers to indicate the appropriateness of higher layers;

  3. We evaluated the effect of extending the attribute set and showed that preserving privacy becomes harder;

  4. We considered the mean and standard deviation of the rank measure to provide a privacy guarantee.

Require: test data, intermediate and private-feature layers, attribute set, model
   h ← private-feature layer
   A ← attribute set
   M ← model
   Σ ← covariance matrix of h's output on the test data
   for each noise ratio ρ do
       n ← Gaussian noise layer with covariance ρΣ
       embed n as an additive noise on h in M
       F ← output of h in M on the test data
       priv(ρ) ← identity privacy of F
       acc(ρ) ← average accuracy of M on A
   end for
   plot the accuracy-privacy curve using {(acc(ρ), priv(ρ))}
Return: accuracy-privacy trade-off
Procedure 2: DPFE Test Phase

In order to adjust the accuracy-privacy trade-off, we used the noise addition mechanism. After the training phase, we estimate the covariance matrix of the feature space, scale it with different ratios, and use it as the covariance matrix of an additive Gaussian noise. By increasing the amount of noise, the accuracy of the primary variable prediction decreases but the privacy of the sensitive variable increases. As a result, we can build accuracy-privacy trade-off curves in a manner similar to the trade-off in rate-distortion theory (see [50]). The evaluation steps are shown in Procedure 2. The accuracy-privacy curves of different models can be compared based on the following definition.

Definition 11 (Acc-Priv superiority)

For two models that try to preserve the privacy of a sensitive variable and maintain the accuracy of a primary variable, the one that always results in a higher value of privacy for a fixed value of accuracy is Acc-Priv superior.

Considering Equation 7, it seems that the relative importance of accuracy and privacy can be controlled by changing the value of the parameter $\lambda$. However, this is not practical due to the challenges of the training stage. Instead, by training with a constant $\lambda$ and applying the subsequent noise addition mechanism, it is possible to realize different accuracy-privacy strategies with a single trained model; this is not the case when separate models must be trained for different values of $\lambda$. We used cross-validation to choose a suitable fixed value of $\lambda$ for our experiments.

We computed the accuracy-privacy trade-off on the test data with 608 identities. Setting noise to zero, for all intermediate layers and attribute sets, Simple and DPFE models reached the same accuracy level as the original model with an error margin of less than .⁴ Therefore, we can conclude that all Simple and DPFE models preserve the facial attribute information, and we may concentrate on their privacy performance.

⁴ In order to report the accuracy of an attribute set, we consider the average accuracy of predicting each binary attribute in the set.

Fig. 6: DPFE vs. Simple models: models fine-tuned with the DPFE architecture achieve Acc-Priv superiority over the corresponding Simple models in all layers and attribute sets.

Effect of DPFE fine-tuning. In order to verify the superiority of DPFE fine-tuning over Simple fine-tuning, we compared the accuracy-privacy curve of different models, fine-tuned with DPFE or Simple architectures. Figure 6 shows the results for the combination of two layers and two attribute sets, with different privacy measures. In all cases, DPFE models have the Acc-Priv superiority over Simple models. In other words, for a fixed value of accuracy, DPFE consistently achieves higher levels of privacy.

Fig. 7: Layer comparison: in general, higher layers achieve Acc-Priv superiority over lower layers. In this figure, all models are fine-tuned with the DPFE architecture.

Effect of higher layers. A comparison of the accuracy-privacy curves of different layers on the same attribute set is depicted in Figure 7. The results illustrate the Acc-Priv superiority of higher layers for two attribute sets and for both privacy measures. This observation is in line with our earlier assumptions about the higher layers.

Fig. 8: Comparison of Gender accuracy-privacy trade-offs when putting more preservation constraints on the model. The intermediate layer is set to conv7.

Effect of attribute set extension. The accuracy-privacy trade-offs of the DPFE fine-tuned models for different attribute sets, with conv7 as the intermediate layer, are shown in Figure 8. The results show that as we enlarge the attribute set and require the model to preserve more information, preserving privacy becomes more challenging, due to the intrinsic correlation of identity with the facial attributes.

Fig. 9: Comparison of mean and standard deviation of Rank variable for DPFE and Simple models for layer conv7.

Guaranteeing privacy. As discussed in Section 5, instead of the log-rank, we could also consider the rank itself by analyzing its mean and variance. This idea is depicted in Figure 9 for the Simple and DPFE models. The results show that the DPFE model has Acc-Priv superiority over the Simple model. More importantly, it forces the conditional distribution of the sensitive variable to converge to a uniform distribution, at least in the rank-mean and standard deviation sense. In fact, the mean and the standard deviation of the rank measure for the discrete uniform distribution are $(|\mathcal{Y}|+1)/2$ and $\sqrt{(|\mathcal{Y}|^2-1)/12}$, respectively. As shown in Figure 9, as privacy increases, these statistics for the DPFE model converge to their corresponding values for the uniform distribution. If we consider a normal distribution for the rank variable, we can provide a probabilistic

privacy guarantee, similar to the method used in differential privacy [25]. For example, as depicted in figure 9, we can achieve the gender accuracy of up to with a rank-mean of and standard deviation of . Hence, with a probability of we can claim that the rank-privacy is greater than , and we have achieved anonymity.

6.3 Visualization

Visualization is a method for understanding the behavior of deep networks. It provides an insightful intuition about the flow of information through different layers. We used an auto-encoder objective visualization technique [48] to validate the sensitive information removal in DPFE. The reconstruction of images is done by feeding the private-feature to the Alexnet decoder proposed in [48]. Therefore, we may visually verify the identity removal property of the private-feature by comparing the original and reconstructed images. These images are shown in figure 10 for different layers of the original and DPFE fine-tuned models.

The results can be analyzed in two respects: accuracy of the desired attributes and privacy of identities. From the privacy perspective, the identity of the people in the reconstructed images of the original model can be readily observed even at the last layers (e.g. conv7), while that is not the case for the DPFE models. Therefore, just relying on the output of higher layers of the original model cannot assure acceptable privacy preservation, while the DPFE models do assure the privacy of identities. Regarding accuracy, we can observe and detect the facial attributes in both models.

Fig. 10: Visualization of different layers for different models: from top to bottom, the rows show the input images, images reconstructed from the original model, and images reconstructed from the DPFE model. The second row shows that separating the layers of a deep model and relying on the specificity of higher layers does not provide identity privacy.

6.4 Complexity vs. Efficiency

Fig. 11: Comparison of different layers on a mobile phone: (a) inference time; (b) memory usage.
Device: Google (Huawei) Nexus 6P
Memory: 3 GB LPDDR4 RAM
Storage: 32 GB
CPU: Octa-core Snapdragon 810 v2.1
GPU: Adreno 430
OS: Android 7.1.2
TABLE I: Device Specification

Although higher intermediate layers may achieve a better accuracy-privacy trade-off, their computational complexity may not be acceptable in some settings, such as on low-power IoT devices or smartphones. Therefore, due to the limited resources of these devices (both memory and computational power), a privacy-complexity trade-off should also be considered. In order to address this problem, we evaluated the original architecture, without dimensionality reduction, on a smartphone and measured its complexity at different layers. The results are shown in Figure 11. By gradually reducing the complexity of the private-feature extractor (choosing lower intermediate layers in the layer separation mechanism), we also reduce the inference time, memory and CPU usage, while still hiding the user's sensitive information.

We evaluated the proposed implementation on a modern handset device, specified in Table I. We evaluated the intermediate layers cumulatively and compared them with the on-premise solution (full model). We used Caffe Mobile v1.0 [53] for Android to load each model and measured the inference time (Figure 11(a)) and model memory usage (Figure 11(b)) of each of the 17 configurations. We configured the model to use only one core of the device's CPU, as the aim of this experiment was a comparison between the different configurations on a specific device.

Results show a large increase in both inference time and memory use when loading the on-premise solution, due to the increased size of the model, demonstrating the efficiency of our solution. More specifically, considering the layer conv4_2 as a baseline, we observed a 14.44% inference time and 8.28% memory usage increase for conv5_1, a 43.96% inference time and 22.10% memory usage increase for conv6_1, a 90.81% inference time and 35.05% memory usage increase for conv7, and a 121.76% inference time and 54.91% memory usage increase for all layers (on premise). CPU usage also increases per configuration; however, due to the multitasking nature of an Android device, it is challenging to isolate the CPU usage of a single process, and the results naturally fluctuate. Moreover, the use of lower intermediate layers can significantly reduce the complexity of private-feature extractors, especially when implementing complex deep architectures (e.g. VGG-16) on edge devices and smartphones [55].

Analyzing the complexity of different layers leads us to consider accuracy-privacy-complexity trade-offs. As an example, consider Figure 7 and suppose we want to preserve the gender information. Comparing conv7 with conv4-2 and setting the accuracy to 95%, we obtain 10% more log-rank privacy at the cost of about 90% more inference time. In this way we can choose the right strategy based on the importance of accuracy, privacy and complexity. Also, by using dimensionality reduction we can greatly decrease the communication cost (compare the size of an image to the size of 10 floating point numbers); the additional computational cost of the dimensionality reduction itself is negligible.

We conclude that our algorithm can be implemented on a modern smartphone. By choosing a proper privacy-complexity trade-off and using different intermediate layers, we were able to significantly reduce the cost of running the model on a mobile device, while at the same time preventing sensitive user information from being uploaded to the cloud.

7 Conclusion and Future Work

In this paper, we proposed a hybrid framework for user data privacy preservation. This framework consists of a feature extractor and an analyzer module. The feature extractor provides the user with a private-feature that does not contain the sensitive information the user wishes to protect, but still maintains the information required by the service provider, so that it can be used by the analyzer module in the cloud. In order to design the feature extractor, we used an information theoretic approach to formulate an optimization problem and proposed a novel deep architecture (DPFE) to solve it. To measure the privacy of the extracted private-feature and verify the feature extractor, we proposed a new privacy measure called log-rank privacy. Finally, we considered the problem of facial attribute prediction from face images and attempted to extract a feature which contains facial attribute information while not containing identity information. By using DPFE fine-tuning and implementing the model on a mobile phone, we showed that we can achieve a reasonable tradeoff between facial attribute prediction accuracy, identity privacy and computational efficiency.

Our work can be extended in a number of ways. We applied the proposed framework to an image processing application, but it can be used in other learning applications (e.g. speech or text analysis) and extended to other deep architectures (e.g. recurrent neural networks). We formulated the problem for discrete sensitive variables, but it can be extended to the general case. Analyzing the log-rank privacy measure can also have many potential applications in the privacy domain. An interesting future direction could be involving the log-rank privacy in the design of learning-to-rank algorithms. In ongoing work, we are considering the challenge of privacy in a Machine Learning-as-a-Service platform.

Acknowledgments

We acknowledge constructive feedback from Sina Sajadmanesh, Amirhossein Nazem and David Meyer. Hamed Haddadi was supported by the EPSRC Databox grant (Ref: EP/N028260/1), EPSRC IoT-in-the-Wild grant (Ref: EP/L023504/1), and a Microsoft Azure for Research grant.

References

  • [1] N. Vallina-Rodriguez, J. Shah, A. Finamore, Y. Grunenberger, K. Papagiannaki, H. Haddadi, and J. Crowcroft, “Breaking for commercials: characterizing mobile advertising,” in Proceedings of the 2012 Internet Measurement Conference.   ACM, 2012, pp. 343–356.
  • [2] A. Acquisti, L. Brandimarte, and G. Loewenstein, “Privacy and human behavior in the age of information,” Science, vol. 347, no. 6221, pp. 509–514, 2015.
  • [3] M. Haris, H. Haddadi, and P. Hui, “Privacy leakage in mobile computing: Tools, methods, and characteristics,” arXiv preprint arXiv:1410.4978, 2014.
  • [4] H. Haddadi and I. Brown, “Quantified self and the privacy challenge,” Technology Law Futures, 2014.
  • [5] F. D. Garcia and B. Jacobs, “Privacy-friendly energy-metering via homomorphic encryption,” in International Workshop on Security and Trust Management.   Springer, 2010, pp. 226–238.
  • [6] C. Fontaine and F. Galand, “A survey of homomorphic encryption for nonspecialists,” EURASIP Journal on Information Security, vol. 2007, no. 1, p. 013801, 2007.
  • [7] P. Garcia Lopez, A. Montresor, D. Epema, A. Datta, T. Higashino, A. Iamnitchi, M. Barcellos, P. Felber, and E. Riviere, “Edge-centric computing: Vision and challenges,” ACM SIGCOMM Computer Communication Review, vol. 45, no. 5, pp. 37–42, 2015.
  • [8] S. A. Osia, A. S. Shamsabadi, A. Taheri, H. R. Rabiee, N. Lane, and H. Haddadi, “A hybrid deep learning architecture for privacy-preserving mobile analytics,” arXiv preprint arXiv:1703.02952, 2017.
  • [9] R. Agrawal and R. Srikant, “Privacy-preserving data mining,” in ACM Sigmod Record, vol. 29, no. 2.   ACM, 2000, pp. 439–450.
  • [10] D. Agrawal and C. C. Aggarwal, “On the design and quantification of privacy preserving data mining algorithms,” in ACM Symposium on Principles of Database Systems, 2001, pp. 247–255.
  • [11] C. C. Aggarwal and S. Y. Philip, “A general survey of privacy-preserving data mining models and algorithms,” in Privacy-preserving Data Mining, 2008, pp. 11–52.
  • [12] L. Sweeney, “k-anonymity: A model for protecting privacy,” International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 10, no. 05, pp. 557–570, 2002.
  • [13] B. C. Fung, K. Wang, and P. S. Yu, “Top-down specialization for information and privacy preservation,” in IEEE International Conference on Data Engineering, 2005, pp. 205–216.
  • [14] K. Wang, P. S. Yu, and S. Chakraborty, “Bottom-up generalization: A data mining solution to privacy protection,” in IEEE International Conference on Data Mining, 2004, pp. 249–256.
  • [15] R. J. Bayardo and R. Agrawal, “Data privacy through optimal k-anonymization,” in IEEE International Conference on Data Engineering, 2005, pp. 217–228.
  • [16] K. LeFevre, D. J. DeWitt, and R. Ramakrishnan, “Mondrian multidimensional k-anonymity,” in IEEE International Conference on Data Engineering, 2006, pp. 25–25.
  • [17] A. Machanavajjhala, D. Kifer, J. Gehrke, and M. Venkitasubramaniam, “l-diversity: Privacy beyond k-anonymity,” ACM Transactions on Knowledge Discovery from Data, vol. 1, no. 1, p. 3, 2007.
  • [18] N. Li, T. Li, and S. Venkatasubramanian, “t-closeness: Privacy beyond k-anonymity and l-diversity,” in IEEE International Conference on Data Engineering, 2007, pp. 106–115.
  • [19] D. Rebollo-Monedero, J. Forne, and J. Domingo-Ferrer, “From t-closeness-like privacy to postrandomization via information theory,” IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 11, pp. 1623–1636, 2010.
  • [20] E. M. Newton, L. Sweeney, and B. Malin, “Preserving privacy by de-identifying face images,” IEEE Transactions on Knowledge and Data Engineering, vol. 17, no. 2, pp. 232–243, 2005.
  • [21] R. Shokri and V. Shmatikov, “Privacy-preserving deep learning,” in ACM Conference on Computer and Communications Security, 2015, pp. 1310–1321.
  • [22] N. Papernot, M. Abadi, U. Erlingsson, I. Goodfellow, and K. Talwar, “Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data,” in Proceedings of the International Conference on Learning Representations (ICLR), 2017.
  • [23] B. Hitaj, G. Ateniese, and F. Pérez-Cruz, “Deep models under the gan: information leakage from collaborative deep learning,” in Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security.   ACM, 2017, pp. 603–618.
  • [24] P. Mohassel and Y. Zhang, “Secureml: A system for scalable privacy-preserving machine learning,” in IEEE Symposium on Security and Privacy.   IEEE, 2017, pp. 19–38.
  • [25] C. Dwork, “Differential privacy,” in International Colloquium on Automata, Languages and Programming, 2006, pp. 1–12.
  • [26] ——, “Differential privacy: A survey of results,” in International Conference on Theory and Applications of Models of Computation, 2008, pp. 1–19.
  • [27] Z. Ji, Z. C. Lipton, and C. Elkan, “Differential privacy and machine learning: A survey and review,” arXiv preprint arXiv:1412.7584, 2014.
  • [28] M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang, “Deep learning with differential privacy,” in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security.   ACM, 2016, pp. 308–318.
  • [29] F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart, “Stealing machine learning models via prediction APIs,” in USENIX Security Symposium, 2016, pp. 601–618.
  • [30] R. Gilad-Bachrach, N. Dowlin, K. Laine, K. Lauter, M. Naehrig, and J. Wernsing, “Cryptonets: Applying neural networks to encrypted data with high throughput and accuracy,” in International Conference on Machine Learning, 2016, pp. 201–210.
  • [31] B. D. Rouhani, M. S. Riazi, and F. Koushanfar, “Deepsecure: Scalable provably-secure deep learning,” arXiv preprint arXiv:1705.08963, 2017.
  • [32] A. J. Bell and T. J. Sejnowski, “An information-maximization approach to blind separation and blind deconvolution,” Neural Computation, vol. 7, no. 6, pp. 1129–1159, 1995.
  • [33] D. Barber and F. Agakov, “The im algorithm: a variational approach to information maximization,” in Proceedings of the 16th International Conference on Neural Information Processing Systems.   MIT Press, 2003, pp. 201–208.
  • [34] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel, “Infogan: Interpretable representation learning by information maximizing generative adversarial nets,” in Neural Information Processing Systems, 2016, pp. 2172–2180.
  • [35] N. Tishby, F. Pereira, and W. Bialek, “The information bottleneck method,” in Proceedings of the 37-th Annual Allerton Conference on Communication, Control and Computing, 1999, pp. 368–377.
  • [36] A. A. Alemi, I. Fischer, J. V. Dillon, and K. Murphy, “Deep variational information bottleneck,” arXiv preprint arXiv:1612.00410, 2016.
  • [37] C. M. Bishop, Pattern Recognition and Machine Learning (Information Science and Statistics).   Secaucus, NJ, USA: Springer-Verlag New York, Inc., 2006.
  • [38] T. Duong and M. L. Hazelton, “Convergence rates for unconstrained bandwidth matrix selectors in multivariate kernel density estimation,” Journal of Multivariate Analysis, vol. 93, no. 2, pp. 417–433, 2005.
  • [39] B. W. Silverman, Density estimation for statistics and data analysis.   CRC press, 1986, vol. 26.
  • [40] Y. LeCun, L. Bottou, G. B. Orr, and K.-R. Müller, “Efficient backprop,” in Neural networks: Tricks of the trade.   Springer, 1998, pp. 9–50.
  • [41] J. R. Hershey and P. A. Olsen, “Approximating the Kullback-Leibler divergence between Gaussian mixture models,” in IEEE International Conference on Acoustics, Speech and Signal Processing, 2007, pp. IV–317.
  • [42] R. Hadsell, S. Chopra, and Y. LeCun, “Dimensionality reduction by learning an invariant mapping,” in IEEE Conference on Computer Vision and Pattern Recognition, 2006, pp. 1735–1742.
  • [43] S. Chopra, R. Hadsell, and Y. LeCun, “Learning a similarity metric discriminatively, with application to face verification,” in IEEE Conference on Computer Vision and Pattern Recognition, 2005, pp. 539–546.
  • [44] S. A. Osia, A. S. Shamsabadi, A. Taheri, K. Katevas, H. R. Rabiee, N. D. Lane, and H. Haddadi, “Privacy-preserving deep inference for rich user data on the cloud,” arXiv preprint arXiv:1710.01727, 2017.
  • [45] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in International Conference on Machine Learning, 2015, pp. 448–456.
  • [46] R. Shwartz-Ziv and N. Tishby, “Opening the black box of deep neural networks via information,” arXiv preprint arXiv:1703.00810, 2017.
  • [47] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson, “How transferable are features in deep neural networks?” in Neural Information Processing Systems, 2014, pp. 3320–3328.
  • [48] A. Dosovitskiy and T. Brox, “Inverting visual representations with convolutional networks,” in IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 4829–4837.
  • [49] M. Malekzadeh, R. G. Clegg, and H. Haddadi, “Replacement autoencoder: A privacy-preserving algorithm for sensory data analysis,” in The 3rd ACM/IEEE International Conference of Internet-of-Things Design and Implementation, 2018.
  • [50] T. M. Cover and J. A. Thomas, Elements of information theory.   John Wiley & Sons, 2012.
  • [51] Z. Liu, P. Luo, X. Wang, and X. Tang, “Deep learning face attributes in the wild,” in Proceedings of International Conference on Computer Vision (ICCV), 2015.
  • [52] E. M. Rudd, M. Günther, and T. E. Boult, “Moon: A mixed objective optimization network for the recognition of facial attributes,” in European Conference on Computer Vision.   Springer, 2016, pp. 19–35.
  • [53] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, “Caffe: Convolutional architecture for fast feature embedding,” arXiv preprint arXiv:1408.5093, 2014.
  • [54] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  • [55] Y.-D. Kim, E. Park, S. Yoo, T. Choi, L. Yang, and D. Shin, “Compression of deep convolutional neural networks for fast and low power mobile applications,” arXiv preprint arXiv:1511.06530, 2015.
  • [56] S. S. Haykin, Neural networks and learning machines.   Pearson Upper Saddle River, NJ, USA, 2009.

Appendix A Preliminaries

One of the main advantages of information theory is that it quantifies intuitive concepts such as uncertainty and information. In this part we briefly review these notions and refer the reader to [50] and [56] for a more detailed treatment.

The entropy of a discrete random variable $x$ is defined as:

$$H(x) = -\sum_{x} p(x) \log p(x),$$

which can be used to measure the uncertainty we have about $x$. Differential entropy is the extension of this definition to continuous random variables:

$$h(x) = -\int p(x) \log p(x)\, dx,$$

where $p(x)$ is the probability density function of $x$. We can also define entropy for joint and conditional probability distributions:

$$H(x, y) = -\sum_{x, y} p(x, y) \log p(x, y), \qquad H(x \mid y) = -\sum_{x, y} p(x, y) \log p(x \mid y).$$

Based on these definitions, we can define the mutual information between two random variables, which measures the reduction in uncertainty about one of them given the other:

$$I(x; y) = H(x) - H(x \mid y) = H(y) - H(y \mid x).$$

It is also equal to the KL-divergence between $p(x, y)$ and $p(x)p(y)$. The KL-divergence between two probability distributions $p$ and $q$ is a non-negative distance measure between them, defined as:

$$D_{KL}(p \,\|\, q) = \sum_{x} p(x) \log \frac{p(x)}{q(x)}.$$

So we have $I(x; y) = D_{KL}\big(p(x, y) \,\|\, p(x)\,p(y)\big)$. These are the information-theoretic definitions we used to define and solve the privacy-preservation problem. Further details can be found in [50].
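To make these definitions concrete, the following minimal Python sketch (illustrative only and not part of the DPFE implementation; the joint distribution p_xy is an arbitrary example) computes entropy, KL-divergence, and mutual information for a small discrete joint distribution, and checks numerically that I(x;y) equals the KL-divergence between p(x,y) and p(x)p(y).

    import numpy as np

    def entropy(p):
        """Entropy H = -sum p log p (in nats); zero-probability entries are skipped."""
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    def kl_divergence(p, q):
        """KL-divergence D_KL(p || q) = sum p log(p / q) for discrete distributions."""
        mask = p > 0
        return np.sum(p[mask] * np.log(p[mask] / q[mask]))

    # Arbitrary example joint distribution p(x, y) over a 2 x 3 alphabet.
    p_xy = np.array([[0.10, 0.20, 0.10],
                     [0.25, 0.05, 0.30]])
    p_x = p_xy.sum(axis=1)   # marginal p(x)
    p_y = p_xy.sum(axis=0)   # marginal p(y)

    H_x = entropy(p_x)
    H_y = entropy(p_y)
    H_xy = entropy(p_xy.ravel())
    H_x_given_y = H_xy - H_y          # H(x|y) = H(x, y) - H(y)

    I_direct = H_x - H_x_given_y      # I(x;y) = H(x) - H(x|y)
    I_via_kl = kl_divergence(p_xy.ravel(), np.outer(p_x, p_y).ravel())

    print(f"H(x) = {H_x:.4f} nats, I(x;y) = {I_direct:.4f}, via KL = {I_via_kl:.4f}")

The two computations of I(x; y) agree, reflecting the identity between the mutual information and the KL-divergence stated above.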

Appendix B

B.1 Proof of Lemma 1

From the non-negativity of the KL-divergence we know that:

So we have:

Also we know that:

Thus:

B.2 Proof of Theorem 4

From Lemma 3 we know that:

So by using Jensen's inequality we have:

We can manipulate as:

So we get:

B.3 Proof of Theorem 7

By using Lemmas 5 and 6 we get:

(10)

B.4 Proof of Theorem 8

In order to prove this theorem, we first need the following lemma:

Lemma 12

Assuming $z_1$ and $z_2$ are two independent samples drawn from $p(z)$ with mean $\mu$ and covariance matrix $\Sigma$, we have:

$$\mathbb{E}\big[\|z_1 - z_2\|^2\big] = 2\,\mathrm{tr}(\Sigma).$$

So by normalizing the feature space to have variance one in each dimension, $\mathbb{E}\big[\|z_1 - z_2\|^2\big]$ is fixed and equal to $2d$, where $d$ is the dimension.
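As a quick numerical sanity check of this relation (a sketch assuming the squared-Euclidean reading of the lemma, i.e., that the expected squared distance between two independent samples equals twice the trace of the covariance, so that unit per-dimension variance gives $2d$; it is not part of the paper's implementation), the following Python snippet compares a Monte Carlo estimate against the predicted value.

    import numpy as np

    rng = np.random.default_rng(0)
    d, n_pairs = 8, 200_000

    # Feature space normalized to unit variance per dimension, so tr(Sigma) = d.
    z1 = rng.standard_normal((n_pairs, d))
    z2 = rng.standard_normal((n_pairs, d))

    empirical = np.mean(np.sum((z1 - z2) ** 2, axis=1))
    print(f"empirical E||z1 - z2||^2 = {empirical:.3f}, predicted 2d = {2 * d}")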

Now we can state the proof of Theorem 8. Considering as samples from and setting we have:

We can also split the pairs according to the similarity of their $y$ labels:

and get:

where is the number of similar pairs in the training data.

Seyed Ali Osia received his B.Sc. degree in Software Engineering from Sharif University of Technology in 2014. He is currently a Ph.D. candidate at the Department of Computer Engineering, Sharif University of Technology. His research interests include Statistical Machine Learning, Deep Learning, Privacy and Computer Vision.

Ali Taheri received his B.Sc. degree in Software Engineering from Shahid Beheshti University in 2015. He received his M.Sc. degree in Artificial Intelligence from Sharif University of Technology in 2017. His research interests include Deep Learning and Privacy.

Ali Shahin Shamsabadi received his B.S. degree in electrical engineering from Shiraz University of Technology in 2014, and his M.Sc. degree in electrical engineering (digital) from Sharif University of Technology in 2016. He is currently a Ph.D. candidate at Queen Mary University of London. His research interests include deep learning and data privacy protection in distributed and centralized learning.

Kleomenis Katevas received his B.Sc. degree in Informatics Engineering from the University of Applied Sciences of Thessaloniki in 2006, and an M.Sc. degree in Software Engineering from Queen Mary University of London in 2010. He is currently a Ph.D. candidate at Queen Mary University of London. His research interests include Mobile & Ubiquitous Computing, Applied Machine Learning, Crowd Sensing and Human-Computer Interaction.

Hamed Haddadi received his B.Eng., M.Sc., and Ph.D. degrees from University College London. He was a postdoctoral researcher at the Max Planck Institute for Software Systems in Germany, and a postdoctoral research fellow at the Department of Pharmacology, University of Cambridge and The Royal Veterinary College, University of London, followed by a few years as a Lecturer and subsequently Senior Lecturer in Digital Media at Queen Mary University of London. He is currently a Senior Lecturer (Associate Professor) and the Deputy Director of Research in the Dyson School of Design Engineering, and an Academic Fellow of the Data Science Institute, in the Faculty of Engineering at Imperial College London. He is interested in User-Centered Systems, IoT, Applied Machine Learning, and Data Security & Privacy. He enjoys designing and building systems that enable better use of our digital footprint, while respecting users’ privacy. He is also broadly interested in sensing applications and Human-Data Interaction.

Hamid R. Rabiee received his B.S. and M.S. degrees (with great distinction) in electrical engineering from California State University, Long Beach, CA, in 1987 and 1989, respectively; the EEE degree in electrical and computer engineering from the University of Southern California (USC), Los Angeles, CA; and the Ph.D. degree in electrical and computer engineering from Purdue University, West Lafayette, IN, in 1996. From 1993 to 1996, he was a Member of the Technical Staff at AT&T Bell Laboratories. From 1996 to 1999, he worked as a Senior Software Engineer at Intel Corporation. From 1996 to 2000, he was an Adjunct Professor of electrical and computer engineering with Portland State University, Portland, OR; with Oregon Graduate Institute, Beaverton, OR; and with Oregon State University, Corvallis, OR. Since September 2000, he has been with the Department of Computer Engineering, Sharif University of Technology, Tehran, Iran, where he is a Professor of computer engineering, and Director of Sharif University Advanced Information and Communication Technology Research Institute (AICT), Digital Media Laboratory (DML), and Mobile Value Added Services Laboratory (MVASL). He is also the founder of AICT, Advanced Technologies Incubator (SATI), DML, and VASL. He is currently on sabbatical leave (2017-2018 academic year) as a visiting professor at Imperial College London. He has been the Initiator and Director of national and international-level projects in the context of the United Nations Open Source Network program and the Iran National ICT Development Plan. He has received numerous awards and honors for his industrial, scientific, and academic contributions. He is a Senior Member of IEEE, and holds three patents. He has also initiated a number of successful start-up companies in cloud computing, SDP, IoT, and storage systems for big data analytics. His research interests include statistical machine learning, Bayesian statistics, data analytics and complex networks with applications in multimedia systems, social networks, cloud and IoT data privacy, bioinformatics, and brain networks.