Privacy Preserving Face Recognition Utilizing Differential Privacy

05/21/2020
by   M. A. P. Chamikara, et al.
RMIT University

Facial recognition technologies are implemented in many areas, including but not limited to, citizen surveillance, crime control, activity monitoring, and facial expression evaluation. However, processing biometric information is a resource-intensive task that often involves third-party servers, which can be accessed by adversaries with malicious intent. Biometric information delivered to untrusted third-party servers in an uncontrolled manner can be considered a significant privacy leak (i.e. uncontrolled information release) as biometrics can be correlated with sensitive data such as healthcare or financial records. In this paper, we propose a privacy-preserving technique for "controlled information release", where we disguise an original face image and prevent leakage of the biometric features while identifying a person. We introduce a new privacy-preserving face recognition protocol named PEEP (Privacy using EigEnface Perturbation) that utilizes local differential privacy. PEEP applies perturbation to Eigenfaces utilizing differential privacy and stores only the perturbed data in the third-party servers to run a standard Eigenface recognition algorithm. As a result, the trained model will not be vulnerable to privacy attacks such as membership inference and model memorization attacks. Our experiments show that PEEP exhibits a classification accuracy of around 70% to 90%.


1 Introduction

Face recognition has many applications in the fields of image processing and computer vision; advancements in related technologies allow its efficient and accurate integration in many areas, from individual face recognition for unlocking a mobile device to crowd surveillance. Offender tracking, surveillance, and activity detection are some of the examples where facial recognition systems are heavily used. Companies have also invested heavily in this field; Google's facial recognition in the Google Glass project mandal2014wearable, Facebook's DeepFace technology macaulay2016queen, and Apple's patented face identification system bhagavatula2015biometric are examples of the growing number of facial identification systems. However, existing face recognition technologies and the widespread use of biometrics introduce a massive threat to individuals' privacy, exacerbated by the fact that biometric identification is often carried out quietly, without the knowledge of the people being observed.

Information privacy can be defined in different ways, but the main purpose of any definition is to capture two basic aspects: information that has to be hidden and information that needs to be revealed dinur2003revealing. Accordingly, we can define information privacy as "controlled information release" that permits an anticipated level of utility via a private function that protects the identity of the data owners chamikara2019efficient. In privacy-preserving face recognition, we need to identify someone from an image without revealing the essential biometric features of the image owner; hence, at least two main parties are involved: one needs to recognize an image (party 1), and the other holds the database of images (party 2). Data encryption would allow party 1 to learn the result without learning the execution of the recognition algorithm or its parameters, whereas party 2 would learn neither the input image nor the result of the recognition process erkin2009privacy. However, the high computational complexity and the need to trust the parties for their respective responsibilities can be major issues. Proposed in this paper is data perturbation, which is significantly less computationally complex, but incurs a certain level of utility loss. Perturbation allows "controlled information release" while permitting all parties to be untrusted chamikaraprocal. The parties will learn only the classification result (e.g. the name/tag of the image) with a certain level of confidence, but will not have access to the original image.

This paper looks at privacy leaks from face recognition systems that use untrusted servers, such as in the cloud, to run the face recognition algorithm. We consider the scenario where a face image is acquired and sent to an untrusted server for processing, such as in criminal investigations chamikara2016fuzzy. The literature identifies two major application scenarios of recognition technologies in which a third-party server is used for face recognition: (1) the use of biometric data such as face images and fingerprints to identify and authenticate a person, e.g. at border crossings, and (2) the deployment of surveillance cameras in public places to automatically match or identify faces (e.g. the UK uses an estimated 4.2 million surveillance cameras to monitor public areas) erkin2009privacy. These techniques do not require any explicit consent from the persons being watched. Facial images directly reflect the owners' identity, and they can easily be linked to other sensitive information such as health records and financial records. This can pose a serious threat to a person's privacy.

There are a few methods based on encryption that provide privacy-preserving face recognition erkin2009privacy; sadeghi2009efficient; xiang2016privacy, which need one or more trusted third parties in a server-based setting (e.g. cloud servers). However, in an environment where no trusted party is present, such semi-honest approaches raise the question of utility, as the authorized trusted parties are still allowed to access the original image data (raw or encrypted). Moreover, an encryption-based mechanism for scenarios that process millions of faces would be extremely inefficient and difficult to maintain. Methods such as newton2005preserving that preserve privacy by de-identifying face images can avoid the necessity of a trusted third party. However, such methods introduce utility issues in large-scale scenarios with millions of faces, due to the limitations of the underlying privacy models used (e.g. chamikaraprocal). We identify five main types of issues (TYIS) with the existing privacy-preserving approaches for face recognition. They are as follows. TYIS 1: face biometrics should not be linkable to other sensitive data; TYIS 2: the method should be scalable and resource-friendly; TYIS 3: face biometrics should not be accessible by anyone (i.e. they should use a one-way transformation); TYIS 4: face biometrics of the same person from two different applications should not be linkable; and TYIS 5: face biometrics should be revocable (if data is leaked, the application should have a way of revoking them to prevent any malicious use).

This paper proposes a method to control privacy leakage from face recognition, addressing the five TYIS better than the existing privacy-preserving face recognition approaches. We propose an approach that stores data in a perturbed form. The method utilizes differential privacy to devise a novel technique (named PEEP: Privacy using EigEnface Perturbation) for privacy-preserving face recognition. PEEP uses the properties of local differential privacy to apply perturbation on input image data to limit potential privacy leaks due to the involvement of untrusted third-party servers and users. To avoid the necessity of a trusted third party, we apply randomization to the data used for training and testing. Due to its extremely low complexity, PEEP can be easily implemented on resource-constrained devices, allowing the possibility of perturbation at the input end. The ability to control the level of privacy by adjusting the privacy budget is an additional advantage of the proposed method. The privacy budget ε is used to signify the level of privacy provided by a privacy-preserving algorithm; the higher the privacy budget, the lower the privacy. PEEP utilizes local differential privacy at the cost of only a modest drop in accuracy under privacy budgets within the range generally considered acceptable (0 < ε ≤ 9) abadi2016deep; arachchige2019local. Moreover, PEEP is capable of adjusting the privacy-accuracy trade-off by changing the privacy budget through the added noise.

The rest of the paper is organized as follows. The foundations of the proposed work are briefly discussed in Section 2. Section 3 provides the technical details of the proposed approach. The results are discussed in Section 4, and Section 5 provides a summary of existing related work. The paper is concluded in Section 6.

2 Foundations

In this section, we describe the background of the techniques used in the proposed solution. PEEP conducts privacy-preserving face recognition utilizing the concepts of differential privacy and eigenface recognition.

2.1 Differential Privacy (DP)

DP is a privacy model that is known to render maximum privacy by minimizing the chance of individual record identification kairouz2014extremal. In principle, DP defines the bounds on how much information can be revealed to a third party/adversary about someone's data being present in a particular database. Conventionally, ε (epsilon) is used to denote the level of privacy rendered by a randomized privacy-preserving algorithm (M) over a particular database (D); ε is called the privacy budget, and it provides an insight into the privacy loss of a DP algorithm. The higher the value of ε, the higher the privacy loss.

Let us take two adjacent datasets D and D′, where D′ differs from D by only one person's record (added or removed). Then M satisfies ε-DP if Equation (1) holds. Assume the datasets D and D′ are collections of records drawn from a universe X.

Definition 1.

A randomized algorithm M with domain N^|X| and range R is ε-differentially private if for every adjacent D, D′ ∈ N^|X| and for any subset S ⊆ R,

Pr[M(D) ∈ S] ≤ exp(ε) · Pr[M(D′) ∈ S]    (1)

2.2 Global vs. Local Differential Privacy

Global differential privacy (GDP) and local differential privacy (LDP) are the two main approaches to DP. In the GDP setting, there is a trusted curator who applies carefully calibrated random noise to the real values returned for a particular query. The GDP setting is therefore also called the trusted curator model chan2012differentially. The Laplace mechanism and the Gaussian mechanism are two of the most frequently used noise generation methods in GDP dwork2014algorithmic. A randomized algorithm M provides ε-GDP if Equation (1) holds. LDP randomizes data before the curator can access them, removing the need for a trusted curator; LDP is therefore also called the untrusted curator model kairouz2014extremal. LDP can also be used by a trusted party to randomize all records in a database at once. LDP algorithms often produce noisier results, as noise must be applied to achieve individual record privacy. Nevertheless, LDP is considered a strong and rigorous notion of privacy that provides plausible deniability, and it is deemed a state-of-the-art approach for privacy-preserving data collection and distribution. A randomized algorithm A provides ε-LDP if Equation (2) holds erlingsson2014rappor.

Definition 2.

A randomized algorithm A satisfies ε-LDP if for all pairs of users' inputs v1 and v2, and for all y ∈ Range(A), Equation (2) holds, where Range(A) is the set of all possible outputs of the randomized algorithm A.

Pr[A(v1) = y] ≤ exp(ε) · Pr[A(v2) = y]    (2)

2.3 Sensitivity

Sensitivity is defined as the maximum influence that a single individual can have on the result of a numeric query. Consider a function f; the sensitivity (Δf) of f can be given as in Equation (3), where x and y are two neighboring databases (or, in LDP, adjacent records) and ||·||_1 represents the L1 norm of a vector wang2016using.

Δf = max_{x, y} ||f(x) − f(y)||_1    (3)

2.4 Laplace Mechanism

The Laplace mechanism is considered to be one of the most generic approaches to achieve DP dwork2014algorithmic. The Laplace distribution with position μ and scale b has the probability density function given in Equation (4). Laplace noise can be added to a function output F(x) as given in Equation (5) to produce a differentially private output; Δf denotes the sensitivity of the function f. In the local differentially private setting, the scale of the Laplacian noise is equal to Δf/ε, and the position is the current input value F(x).

Lap(x | μ, b) = (1/2b) · exp(−|x − μ| / b)    (4)

PF(x) = F(x) + Lap(Δf/ε)    (5)
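As a minimal sketch of Equation (5) (the function name and parameter names here are ours, not the paper's), the Laplace mechanism can be implemented with numpy, whose `np.random.laplace` draws noise given a position and a scale:

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon):
    """Return a differentially private version of `value`.

    Noise is drawn from Lap(0, sensitivity / epsilon) and added to the
    function output, matching Equation (5): PF(x) = F(x) + Lap(Δf/ε).
    """
    scale = sensitivity / epsilon
    return value + np.random.laplace(loc=0.0, scale=scale, size=np.shape(value))

# Example: perturb a scalar query output with sensitivity 1 and epsilon = 0.5
private_value = laplace_mechanism(42.0, sensitivity=1.0, epsilon=0.5)
```

Note that a smaller ε produces a larger noise scale, i.e. stronger privacy at the cost of utility.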

2.5 Eigenfaces and Eigenface recognition

The process of face recognition involves data classification where the input data are images and the output classes are persons' names. A face recognition algorithm first needs to be trained with an existing database of faces. The trained model is then used to recognize a person's name from an image input. The training algorithm needs various images to achieve high accuracy, and when the model needs to be trained to recognize a large number of persons, it also needs a large number of training images. Image data are often large, and the higher the number of faces to be trained, the slower the algorithm. However, facial recognition systems need high efficiency, as many of them are employed in real-time systems such as citizen surveillance zhang1997face. When an artificial neural network (ANN) is used for face recognition, the input images need to be flattened into 1-d vectors; an image with the dimensions m × n will result in an (m · n) × 1 vector. High-resolution images will result in extremely long 1-d vectors, which leads to slow training and testing of the corresponding ANN. Dimensionality reduction methods can be used to avoid such complexities, allowing face recognition to concentrate on the essential features and to ignore the noise in the input images. In dimensionality reduction, the points are projected onto a lower-dimensional hyperplane. Principal component analysis (PCA) is a dimensionality reduction technique that finds the hyperplane with maximum variance. This hyperplane can be determined using eigenvectors, which can be computed using the covariance matrix of the input data zhang1997face.

Input: normalized and centered examples x_1, …, x_M; expected number of PCA components K
Output: matrix of eigenfaces
1. for each x_i do: flatten x_i to produce the vector Γ_i;
2. compute the mean face vector Ψ = (1/M) Σ_{i=1}^{M} Γ_i;
3. for each Γ_i do: Φ_i = Γ_i − Ψ;
4. generate the covariance matrix C = (1/M) A Aᵀ, where A = [Φ_1, Φ_2, …, Φ_M];
5. calculate the eigenvectors u_i of C; since C can be extensive, derive the u_i from the eigenvectors v_i of the much smaller matrix AᵀA, where u_i = A v_i;
6. compute the best K eigenvectors such that ||u_i|| = 1;
return the K eigenvectors that correspond to the K largest eigenvalues

Algorithm 1 Generating Eigenfaces

Algorithm 1 shows the steps for generating eigenfaces. As shown in the algorithm, an eigenface turk1991eigenfaces utilizes PCA to represent a dimensionality-reduced version of an input image. A particular eigenface representation considers a predefined number of the largest eigenvectors as the principal axes onto which we project our data, hence producing reduced dimensions zhang1997face. We can reduce the dimensions of an image into a K-dimensional representation, where K is the number of largest eigenvectors retained. By doing this, we can consider only the most essential characteristics of an input image and increase the speed of a facial recognition algorithm while preserving high accuracy. Equation (6) provides the mathematical representation of a face in terms of eigenfaces, where Γ is a new face, Ψ is the mean or average face, u_i is an eigenface, and the a_i are scalar multipliers that we can choose in order to create new faces.

Γ = Ψ + Σ_{i=1}^{K} a_i u_i    (6)
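To make Algorithm 1 concrete, the following numpy sketch (the function and variable names are ours, not the paper's) generates K eigenfaces from a stack of images using the small-matrix AᵀA trick described in the algorithm:

```python
import numpy as np

def generate_eigenfaces(images, K):
    """images: array of shape (M, h, w); returns the mean face and
    a (K, h*w) matrix of eigenfaces.

    Follows Algorithm 1: flatten, subtract the mean face, take the
    eigenvectors of the small M x M matrix A^T A instead of the large
    covariance matrix, and map them back via u_i = A v_i.
    """
    M = images.shape[0]
    gamma = images.reshape(M, -1).astype(float)   # flatten to vectors
    psi = gamma.mean(axis=0)                      # mean face vector
    A = (gamma - psi).T                           # columns are Phi_i
    eigvals, V = np.linalg.eigh(A.T @ A)          # small M x M problem
    order = np.argsort(eigvals)[::-1][:K]         # K largest eigenvalues
    U = A @ V[:, order]                           # u_i = A v_i
    U /= np.linalg.norm(U, axis=0)                # normalize so ||u_i|| = 1
    return psi, U.T

# Example: 20 random 32x32 "faces", keep the 5 strongest eigenfaces
rng = np.random.default_rng(0)
psi, eigenfaces = generate_eigenfaces(rng.random((20, 32, 32)), K=5)
```

Projecting a new mean-centered face onto these eigenfaces yields the coefficients a_i of Equation (6).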

3 Our Approach: PEEP

In this section, we discuss the steps employed in the proposed privacy-preserving face recognition approach (named PEEP). We utilize DP to enforce privacy in face recognition. PEEP applies randomization upon the eigenfaces to create privacy-preserving versions of input images. We assume that any input device used to capture facial images uses PEEP to apply randomization before sending the images to the storage devices/servers.

Figure 1: Privacy-preserving face recognition using PEEP. The figure shows the placement of PEEP in a face recognition system. As shown, PEEP randomizes both training and testing images so that the untrusted third-party servers do not leak any private data to untrusted users. The callout figure on the left-hand side shows the basic flow of randomization inside PEEP, which applies Laplacian noise over eigenfaces.

As depicted by the callout box in Figure 1, PEEP involves three primary steps to enforce privacy on face recognition: (1) accepting original face images, (2) generating eigenfaces, and (3) adding Laplacian noise to randomize the images. In the proposed setting, the face recognition model (e.g. MLPClassifier) will be trained solely on randomized data, so an untrusted server will hold only a privacy-preserving version of the face recognition model.

3.1 Distributed eigenface generation

When the number of input faces grows large, it is important that the eigenface calculation can be distributed in order to maintain efficiency. Algorithm 2 shows an incremental approach to eigenface calculation in which a central computer (CC) in the local edge contributes to the calculation of eigenfaces in a distributed fashion. As shown in Algorithm 2, the mean face vectors Ψ_i that are generated for each partition of input data are collected and merged (using Equation 7) by the CC to generate the global mean face vector Ψ. Similarly, the CC generates the global covariance matrix C using the covariance matrices generated for each partition, merged according to Equation 10. In this way, PEEP manages to maintain the efficiency of eigenface generation for extensive datasets.

Ψ = (Σ_{i=1}^{p} n_i Ψ_i) / (Σ_{i=1}^{p} n_i)    (7)

In Equation 7, n_i refers to the number of face vectors in the i-th partition, whereas Ψ_i refers to the mean face vector of the i-th partition. To merge the covariance matrices, the pairwise covariance update formula introduced in bennett2009numerically is adapted as shown in Equation 10 chamikara2020privacy. The pairwise co-moment update formula for two merged two-column (x and y) data partitions, A and B, can be written as shown in Equation 8, where the merged dataset is denoted as X.

CoM_X = CoM_A + CoM_B + (n_A n_B / n_X) (x̄_A − x̄_B)(ȳ_A − ȳ_B)    (8)

Here, x̄_A, ȳ_A and x̄_B, ȳ_B are the column means of the two data partitions A and B, respectively, and CoM_A and CoM_B are the co-moments of the two data partitions A and B, where the co-moment of a two-column (x and y) dataset is represented as

CoM = Σ_i (x_i − x̄)(y_i − ȳ)    (9)

Therefore, the variance-covariance matrix update formula for the two data partitions A and B can be written as shown in Equation 10,

C_X = ( n_A C_A + n_B C_B + (n_A n_B / n_X) (μ_A − μ_B)(μ_A − μ_B)ᵀ ) / n_X    (10)

In Equation 10, assume that C_A and C_B are the covariance matrices returned for the data partitions A and B, respectively, where A represents the global partition (the concatenation of all former partitions) and B represents the new partition introduced to the calculation. X is the merged dataset of the data partitions A and B. μ_A and μ_B are the mean vectors of A and B, respectively, and n_A, n_B, and n_X represent the numbers of face vectors in the corresponding datasets. Equation 10 is iteratively calculated over all the data partitions to generate the final value of C. A is initialized with the first partition, B starts from the second partition, and

n_X = n_A + n_B    (11)

We can also run Algorithm 2 on distributed computing nodes (DCNs) within the local edge to conduct efficient eigenface generation. In such a setting, the DCNs communicate with a central computer (in the local edge) to generate the global mean face (Ψ) and the global covariance matrix (C). In this way, an agency can deal with a large number of input faces by maintaining a feasible number of DCNs.
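Under the assumption that each partition reports its size, mean vector, and (population) covariance matrix, the merge rules of Equations 7 and 10 can be sketched as follows (the function names are illustrative, not from the paper):

```python
import numpy as np

def merge_mean(n_a, mu_a, n_b, mu_b):
    """Equation 7 for a pair: size-weighted combination of partition means."""
    return (n_a * mu_a + n_b * mu_b) / (n_a + n_b)

def merge_cov(n_a, mu_a, cov_a, n_b, mu_b, cov_b):
    """Equation 10 (pairwise covariance update, population form):
    the co-moments n*C of the two partitions are summed together with a
    cross term driven by the difference of the partition means."""
    n = n_a + n_b
    d = mu_a - mu_b
    return (n_a * cov_a + n_b * cov_b + (n_a * n_b / n) * np.outer(d, d)) / n

# Check the merge against a direct computation on the concatenated data
rng = np.random.default_rng(1)
X_a, X_b = rng.random((50, 3)), rng.random((80, 3))
mu = merge_mean(50, X_a.mean(axis=0), 80, X_b.mean(axis=0))
cov = merge_cov(50, X_a.mean(axis=0), np.cov(X_a, rowvar=False, bias=True),
                80, X_b.mean(axis=0), np.cov(X_b, rowvar=False, bias=True))
```

Applying `merge_cov` iteratively over all partitions reproduces the incremental accumulation of C described above.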

Input: normalized and centered example partition x_1, …, x_m; expected number of PCA components K
Output: matrix of eigenfaces
1. for each x_i do: flatten x_i to produce the vector Γ_i;
2. compute the partition mean face vector Ψ_j = (1/m) Σ_{i=1}^{m} Γ_i;
3. collect Ψ_j at a central computer (CC) in the local edge;
4. receive the global mean face vector Ψ from the CC;
5. for each Γ_i do: Φ_i = Γ_i − Ψ;
6. generate the partition covariance matrix C_j = (1/m) A Aᵀ, where A = [Φ_1, Φ_2, …, Φ_m];
7. collect C_j at the CC;
8. receive the global covariance matrix C from the CC;
9. calculate the eigenvectors u_i of C; since C can be extensive, derive the u_i from the eigenvectors v_i of AᵀA, where u_i = A v_i;
10. compute the best K eigenvectors such that ||u_i|| = 1;
return the K eigenvectors that correspond to the K largest eigenvalues
Algorithm 2 Incremental calculation of Eigenfaces using data partitions

3.2 Generation of the principal components

After accepting the image inputs, PEEP normalizes the images to match a predefined resolution (which is accepted by PEEP as an input). We consider a default resolution normalization of 47 × 62 pixels. However, based on the input image sizes and the computational power of the edge devices, users can increase or decrease the width and height values suitably. Following the steps of Algorithm 1, PEEP calculates the principal components by considering the eigenvectors of the corresponding covariance matrix. The largest K eigenvectors (where K, the number of principal components, is taken as input) are used to create a particular eigenface representation. The higher the K, the better the representation of the input features, but the lower the efficiency. It is important to select a suitable number for K that can provide both high accuracy and high efficiency at the same time. A reliable number for K can be determined by investigating the change in accuracy of the trained model.

3.3 Declaring the sensitivity before noise addition

As the next step after generating the eigenfaces, PEEP scales the indices of the identified PCA vectors into the interval [0, 1]. In LDP, the sensitivity is the maximum difference between two adjacent records. In PEEP, the inputs are images, and each image is dimensionality-reduced to form a vector by using PCA (a PCA_vector). As PEEP adds noise to these PCA_vectors, the sensitivity of PEEP is the maximum difference between two such PCA_vectors, which can be denoted by Equation 12, where V_1 and V_2 are adjacent flattened image vectors scaled into the interval [0, 1]. Since PEEP operates in the Cartesian system, we can consider the maximum Euclidean distance for the sensitivity, which is at most √K, where K is the number of principal components. As the normalized PCA_vectors are bounded by 0 and 1, a sensitivity much greater than 1 would entail a substantial level of noise, which can reduce the utility drastically, as we use LDP for the noise application mechanism. Therefore, we select the sensitivity to be the maximum difference between two indices, which is equal to 1; the scale of the Laplacian noise is then equal to 1/ε. As future work, we are conducting a further algebraic analysis of sensitivity to improve the precision and flexibility of the Laplace mechanism in the proposed approach to face recognition.

Δf = max ||V_1 − V_2||    (12)

3.4 Introducing Laplacian noise

After defining the position and scale parameters, PEEP adds Laplacian noise to each index of the PCA_vectors. We take the position of the noise to be the index value and the scale of the noise to be 1/ε. To generate the private versions of the images, we perturb each index according to Equation 13, where v_i represents an index of the flattened image vector scaled between 0 and 1. The user can provide a suitable ε value depending on the amount of privacy required, considering the following guideline: the higher the ε value, the lower the privacy. As a norm, 0 < ε ≤ 9 is considered an acceptable level of privacy abadi2016deep. We recommend following the same standard and using an upper limit of 9 for ε.

z_i = v_i + Lap(1/ε)    (13)
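The scaling and noise-addition steps of Sections 3.3 and 3.4 can be sketched as below; with the sensitivity fixed at 1, the Laplacian scale reduces to 1/ε (the function name is ours, for illustration only):

```python
import numpy as np

def perturb_pca_vector(v, epsilon):
    """Scale a PCA vector into [0, 1] and add Lap(1/epsilon) noise to
    every index, following Equation 13 with sensitivity fixed at 1."""
    lo, hi = v.min(), v.max()
    scaled = (v - lo) / (hi - lo)          # min-max scaling into [0, 1]
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon, size=v.shape)
    return scaled + noise

# Example: perturb one 128-component PCA vector with epsilon = 8
rng = np.random.default_rng(2)
private_vec = perturb_pca_vector(rng.normal(size=128), epsilon=8.0)
```

Only `private_vec`, never the raw PCA vector, would be sent to the untrusted server.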
Input: examples x_1, …, x_M; number of images per face f; privacy budget ε; pixel width w (default = 47); pixel height h (default = 62); number of PCA components K
Output: privacy-preserving facial recognition model
1. find the minimum width of all images (w_min);
2. find the minimum height of all images (h_min);
3. if w > w_min or h > h_min then set w ← min(w, w_min) and h ← min(h, h_min);
4. normalize the example resolution to w × h;
5. if K exceeds the number of available components then reduce K accordingly;
6. generate the flattened vectors Γ_i for each x_i;
7. generate the first K PCA components V_i for each input Γ_i according to Algorithm 1;
8. scale all the indices of each V_i between 0 and 1 to generate V′_i;
9. apply Lap(1/ε) noise to each index of each V′_i;
10. feed the perturbed vectors V′_i and the corresponding targets to the classification model;
11. train the classification model using the randomized data to produce a differentially private classification model (DPM);
12. release the DPM;
Algorithm 3 Differentially private facial recognition: PEEP

3.5 Algorithm for generating a differentially private face recognition model

Algorithm 3 shows the steps of PEEP in conducting privacy-preserving face recognition model training. As shown in the algorithm, the w and h parameters are used to normalize the resolution of the input images. We use the input parameter f to accept the number of images considered per single face (person). Since the main task of face recognition is image classification, each face represents a class. In order to produce good accuracy, a classification model should have a good image representation per class. Consequently, f is a valuable parameter that directly influences the accuracy, where a higher value of f will contribute to higher accuracy due to the better representation of images within the classes (faces). Hence, f allows the algorithm to extract eigenfaces that provide a better representation of the input images, resulting in better accuracy. The algorithm also makes sure that the number of PCA components selected does not go beyond the allowed threshold.
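Putting the pipeline of Algorithm 3 together end-to-end on toy data (with a simple nearest-centroid matcher standing in for the MLPClassifier, purely for illustration; all names and the synthetic data are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
eps, K = 8.0, 10

# Toy data: 3 "people", 20 noisy 16x16 flattened images each (labels 0..2)
base = rng.random((3, 256))
images = np.vstack([b + 0.1 * rng.standard_normal((20, 256)) for b in base])
labels = np.repeat(np.arange(3), 20)

# Eigenfaces (Algorithm 1, condensed): mean face + K strongest principal axes
psi = images.mean(axis=0)
A = (images - psi).T
_, V = np.linalg.eigh(A.T @ A)
U = A @ V[:, -K:]                       # eigenvectors for the K largest eigenvalues
U /= np.linalg.norm(U, axis=0)

# Project, scale into [0, 1], and perturb with Lap(1/eps) noise (Algorithm 3)
proj = (images - psi) @ U
proj = (proj - proj.min()) / (proj.max() - proj.min())
private = proj + rng.laplace(scale=1.0 / eps, size=proj.shape)

# Train a stand-in classifier (nearest centroid) on the perturbed data only
centroids = np.array([private[labels == c].mean(axis=0) for c in range(3)])
pred = np.argmin(((private[:, None, :] - centroids) ** 2).sum(-1), axis=1)
accuracy = (pred == labels).mean()
```

The server-side model here only ever sees `private`, the perturbed eigenface coefficients, mirroring the deployment in Figure 1.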

3.6 Privacy preserving face recognition using PEEP

As shown in Figure 1, each image input is subjected to PEEP randomization before training or testing. Eigenface generation and randomization take place within the local edge bounds. We assume that all input devices communicate with the third-party servers only through PEEP, and the face recognition database stores only the perturbed images. Since the face recognition model (e.g. MLPClassifier) is trained only on perturbed images (perturbed eigenfaces), the trained model will not leak private information, and untrusted access to the server will not result in the loss of valuable biometric data to malicious third parties. Since PEEP also perturbs the testing data, there is minimal privacy leakage from the testing image inputs as well.

3.7 Theoretical privacy guarantee of PEEP on trained classifier

Although additional computations may be carried out on the outcome of a differentially private algorithm, they do not weaken its privacy guarantee; the results of additional computations on an ε-DP outcome will still be ε-DP. This property of DP is called postprocessing invariance/robustness bun2016concentrated. Since PEEP utilizes DP, PEEP also inherits postprocessing invariance. The postprocessing invariance property guarantees that the model trained on perturbed data satisfies the same privacy imposed by PEEP. Therefore, the proposed method ensures that there is a minimal level of privacy leakage from the third-party untrusted servers. We further investigate the privacy strength of PEEP using empirical evidence in Section 4.

3.8 Datasets

We used an open face image dataset and the large-scale CelebFaces Attributes (CelebA) dataset (see Figure 2 for sample images) to test the performance of PEEP. The open face image dataset, named lfw-funneled, is available on the University of Massachusetts website "Labeled Faces in the Wild"111http://vis-www.cs.umass.edu/lfw/. The lfw-funneled dataset has 13,233 gray images. We limit the minimum number of faces per person to 100, which limits the number of images to 1,140 across five classes: "Colin Powell", "Donald Rumsfeld", "George W Bush", "Gerhard Schroeder", and "Tony Blair"222The class distribution of the dataset is as follows: "Colin Powell": 236, "Donald Rumsfeld": 121, "George W Bush": 530, "Gerhard Schroeder": 109, and "Tony Blair": 144.. Figure 2 shows 8 sample images from the datasets used. We used 70% of the input dataset for training and 30% for testing. The CelebA333http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html dataset has more than 200K celebrity images; it contains 10,177 identities and 202,599 face images, each with 5 landmark locations and 40 binary attribute annotations.

Figure 2: Sample images of the two databases. The lfw-funneled dataset is composed of grayscale images, whereas the CelebA dataset is composed of colored images.

3.9 Eigenfaces and Eigenface perturbation

Figure 3 shows 8 sample eigenfaces before perturbation. As the figure shows, eigenfaces already hide some features of the original images due to the dimensionality reduction aggarwal2004condensation. However, eigenfaces alone would not provide enough privacy, as they display the most important biometric features, and there are effective face reconstruction techniques turk1991eigenfaces; pissarenko2002eigenface for eigenfaces. Figure 4 shows the same set of eigenfaces (those in Figure 3) after noise addition by PEEP under an acceptable privacy budget. As the figure shows, the naked eye cannot detect any biometric features in the perturbed eigenfaces. Even in the extreme case of a privacy budget of ε = 100, the perturbed eigenfaces show only mild levels of facial features to the naked eye, as shown in Figure 5.

Figure 3: Eigenfaces. The figure shows a collection of sample eigenfaces generated from the input face images. The eigenfaces show only the most essential features of the input images.
Figure 4: Perturbed eigenfaces under an acceptable privacy budget. The randomized images appear to show no biometric features to the naked eye.
Figure 5: Perturbed eigenfaces at ε = 100. Here we demonstrate that even at an extreme value of the privacy budget (ε = 100, which is not an acceptable value for ε, since 0 < ε ≤ 9 is considered the acceptable range abadi2016deep), PEEP is capable of hiding many of the biometric features of the eigenfaces.

4 Results and Discussion

In this section, we discuss the experiments, the experimental configurations, and their results. We used MLPClassifier, a multi-layer perceptron classifier available in the scikit-learn444https://scikit-learn.org/stable/index.html Python library, to test the accuracy of face recognition with PEEP. We conducted all the experiments on a Windows 10 (Home 64-bit, Build 17134) computer with an Intel(R) i5-6200U (6th generation) CPU (2 cores with 4 logical threads, 2.3 GHz with turbo up to 2.8 GHz) and 8192 MB RAM. We then provide an efficiency comparison and a privacy comparison of PEEP against two other privacy-preserving face recognition approaches, developed by Zekeriya Erkin et al. (abbreviated as ZEYN for simplicity) erkin2009privacy and Ahmad-Reza Sadeghi et al. (abbreviated as ANRA for simplicity) sadeghi2009efficient. Both ZEYN and ANRA are cryptographic methods that use homomorphic encryption.

4.1 Training the MLPClassifier for perturbed eigenface recognition

We trained the MLPClassifier (settings: activation=‘relu’, batch_size=100, early_stopping=False, hidden_layer_sizes=(512, 1024, 2014, 1024, 512), max_iter=200, shuffle=True, solver=‘adam’, alpha=0.0001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, learning_rate=‘constant’, learning_rate_init=0.001, momentum=0.9, nesterovs_momentum=True, power_t=0.5, random_state=None, tol=0.0001, validation_fraction=0.1, verbose=True, warm_start=False) under different levels of ε ranging from 0.5 to 8, as plotted in Figure 7. Due to the heavy noise, the datasets with lower privacy budgets were more difficult to train the MLPClassifier on. However, we did not conduct any parameter tuning to increase the performance of the MLPClassifier, in order to ensure that we investigate the absolute impact of the perturbation on the model. Figure 6 shows the model loss during the training of the MLPClassifier; as the figure shows, the model converges after around 14 epochs.

Figure 6: Model loss of the MLPClassifier under PEEP. As shown in the figure, the MLPClassifier converges after around 14 epochs.
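A minimal training sketch in the spirit of this setup is shown below. Synthetic data stands in for the perturbed eigenface projections, and the hidden layers are much smaller than the paper's (512, 1024, 2014, 1024, 512) configuration so the example runs quickly:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)

# Synthetic stand-in for perturbed eigenface projections:
# 200 samples, 128 PCA coefficients, 5 identity classes.
X = rng.standard_normal((200, 128))
y = rng.integers(0, 5, size=200)
X += y[:, None] * 0.5  # shift class means so classes are separable

# Same solver/activation as the paper's settings; far smaller
# hidden layers than the reported architecture, for brevity.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), activation="relu",
                    solver="adam", batch_size=100, max_iter=200,
                    random_state=0)
clf.fit(X, y)
print(round(clf.score(X, y), 2))
```

In the actual pipeline, X would hold the PEEP-randomized eigenface coefficients of the training images, so the server-side model never sees clean biometric features.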

4.2 Classification accuracy vs. privacy budget

We recorded the accuracy of the trained MLPClassifier in terms of the weighted average of precision, recall, and f-score against varying levels of the privacy budget, and plotted the corresponding data in Figure 7. As discussed in Section 3.8, the class “George W Bush” showed higher performance, as a higher proportion of the input image instances belonged to that class. As shown in Figure 7, increasing the privacy budget increases accuracy, since higher privacy budgets impose less randomization on the eigenfaces. We can see that PEEP produces reasonable accuracy for privacy budgets greater than 4 and less than or equal to 8, a range considered to provide an acceptable level of privacy abadi2016deep .

Figure 7: Performance of face recognition with privacy introduced by PEEP. WP refers to the instance of classification model without privacy where no randomization is applied to the input images.

Figure 8 shows the classification results of 8 random input images in the testing sample. According to the figure, only one case out of eight has been misclassified. Parameters such as the minimum number of faces per class, the size of the input dataset, and the hyperparameters of the MLPClassifier have a direct impact on accuracy. We can improve the accuracy of the MLPClassifier by changing the input parameters and conducting hyperparameter tuning. Moreover, the dataset has more instances of the class “George W Bush” than of the other classes; a more balanced dataset would also provide better accuracy. However, in this paper, we investigate only the absolute effect of the privacy parameters on the performance of the MLPClassifier.

Figure 8: An instance of face recognition when the images are randomized using PEEP (the corresponding randomized images are shown in Figure 4). The figure shows the predicted labels of the images against the original true labels.

4.3 Effect of the number of images per face on the performance of face recognition

In this section, we test the effect of the number of images per single face on the performance of face recognition (refer to Figure 9). During the experiment, we maintained an ε value of 8 and the number of PCA components at 128. As shown in the plots, classification performance improves with the number of images per face. This is an expected observation, as face recognition is a classification problem: a higher number of images per face provides a better representation of the corresponding face (class), generating higher accuracy. Hence, the proposed concept prefers the highest feasible number of images per face.

Figure 9: Performance of face recognition vs. the number of images per face.

4.4 Effect of the number of PCA components on the performance of face recognition

In this section, we investigate the effect of the number of PCA components on the performance of face recognition. During the experiment, we maintained an ε value of 8 and kept the number of images per face at 100. As shown by the plot (refer to Figure 10), there is an immediate increase in performance when the number of PCA components is increased from 10 to 20, followed by a gradual increase beyond 20 components. This is because the first 20 to 40 PCA components represent the most significant features of the input images. Although the effect of the number of PCA components beyond 40 is small, the improved performance suggests that a higher number of PCA components is preferable.

Figure 10: Performance of face recognition vs. the number of PCA components.
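The diminishing return from additional PCA components can be illustrated on synthetic data. The latent dimensionality, sample counts, and the 47 × 62 image size below are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Synthetic image-like data: 300 samples of 47*62 = 2914 "pixels",
# generated from 40 latent directions so early components dominate.
latent = rng.standard_normal((300, 40))
basis = rng.standard_normal((40, 47 * 62))
X = latent @ basis + 0.1 * rng.standard_normal((300, 47 * 62))

pca = PCA(n_components=128).fit(X)
cum = np.cumsum(pca.explained_variance_ratio_)

# The first ~40 components capture almost all variance;
# later components add very little.
print(round(cum[19], 3), round(cum[39], 3), round(cum[127], 3))
```

This mirrors the observation above: once the components covering the dominant facial features are included, extra components improve performance only marginally.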

4.5 Face reconstruction attack setup

It is essential that the randomized images cannot be used to reconstruct the original images and reveal the identities of their owners. We prepared an experimental setup to investigate the robustness of PEEP against face reconstruction turk1991eigenfaces ; pissarenko2002eigenface applied by adversaries to the randomized images.

Figure 11: Face reconstruction from perturbed eigenfaces. The figure shows the experimental setup used for the reconstruction of the original input face images using the perturbed eigenfaces.

As shown in Figure 11, first, we create a PCAmodel (PCA: Principal Component Analysis) using 2,000 training images (the first 1,000 images of the CelebA database and their vertically flipped versions). The trained PCAmodel holds the 2,000 eigenvectors of length 29,103 (the pixel count of each image) and the mean vector (of the 2,000 eigenvectors) of length 29,103. Next, the testing image is read and flattened to form a vectorized form of the original image. The mean vector is then subtracted from it, and the resulting vector is randomized using PEEP to generate the privacy-preserving representation v_p of the testing vector. Finally, we generate the eigenfaces and the average face by reshaping the eigenvectors e_i and the mean vector Ψ available in the PCAmodel. Now we can reconstruct the original testing image from v_p using Equation 14, where K is the number of training images used for the PCAmodel and R is the recovered image.

R = \Psi + \sum_{i=1}^{K} (e_i \cdot v_p)\, e_i    (14)
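Equation 14 can be sketched in code. This is an illustrative reconstruction assuming an orthonormal eigenvector basis; the symbol names and toy data sizes are ours:

```python
import numpy as np

rng = np.random.default_rng(7)

# Tiny stand-in: K = 50 training "images" of 100 pixels each.
K, n_pixels = 50, 100
train = rng.standard_normal((K, n_pixels))
mean_face = train.mean(axis=0)

# Orthonormal eigenvectors of the centered training data.
_, _, eigvecs = np.linalg.svd(train - mean_face, full_matrices=False)

def reconstruct(vec, eigvecs, mean_face):
    """Equation 14: R = mean + sum_i (e_i . v) e_i over all K eigenvectors."""
    weights = eigvecs @ vec          # e_i . v for each eigenvector
    return mean_face + weights @ eigvecs

# A mean-subtracted test vector with no noise: reconstruction recovers
# its projection onto the span of the training eigenvectors exactly.
test_vec = train[0] - mean_face
recon = reconstruct(test_vec, eigvecs, mean_face)
print(np.allclose(recon, train[0]))
```

When the input vector is PEEP-randomized rather than clean, the same reconstruction yields the noisy images evaluated in Section 4.6.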

4.6 Empirical privacy of PEEP

Figure 12 shows the effectiveness of the eigenface reconstruction attack (explained in Section 4.5) on two testing images. The figure provides empirical evidence of the level of privacy rendered by PEEP, in which the lower the ε, the higher the privacy. At the lowest privacy budget, the attack does not recover any underlying features of an image. At the higher privacy budgets, we can see that the reconstructed images show some features, but they are not detailed enough to identify the person in the image.

Figure 12: Reconstructing images using the setup depicted in Figure 11. The first row shows the original images. The second row shows the images reconstructed from the eigenfaces of the first row without privacy. The three remaining rows show the face reconstruction at three increasing privacy budgets, respectively.

4.7 Performance of PEEP against other approaches

In this section, we discuss the privacy guarantees of PEEP and the comparable methods with regard to the five privacy issues (TYIS 1, 2, 3, 4, and 5) in face recognition systems identified in Section 1. The first six rows of Table 1 summarize the evaluation, where a tick mark indicates that a particular issue is effectively addressed, a cross mark indicates failure, and partially addressed issues are marked as such. PEEP satisfies TYIS 1 and TYIS 4 by randomizing the input images (both training and testing) so that the randomized images provide no linkability to other sensitive data. Both ZEYN and ANRA are semi-honest mechanisms and need database owners to maintain the facial image databases. ZEYN and ANRA satisfy TYIS 1 if and only if the database owners are fully trusted, which can be challenging in a cloud setting, as untrusted third parties with malicious intent can access the cloud servers. As shown in Section 4.6, the randomized eigenfaces cannot be used to reconstruct the original images. As PEEP stores only randomized data in the servers, it does not depend on the security of the cloud server; any leak of data from the cloud server will not have an adverse effect on user privacy. The scalability results of the three methods, given in the last row of Table 1, show that PEEP satisfies TYIS 2 by providing better scalability than ZEYN and ANRA. PEEP satisfies TYIS 3 because it uses no trusted party, whereas ZEYN and ANRA must have trusted database owners. PEEP provides some level of guarantee towards TYIS 5 by randomizing all subsequent face image inputs related to the same person, whether they come from the same device or different devices. Consequently, two input images of the same person receive two different randomizations, leaving a low probability of linkability.

Qualitative comparison

Type of issue (TYIS)                                 ZEYN      ANRA      PEEP
1. Biometrics should not be linkable
   to other sensitive data                           ~         ~         ✓
2. Scalable and resource friendly                    ✗         ✗         ✓
3. Biometrics should not be accessible
   by a third party                                  ✗         ✗         ✓
4. Biometrics of the same person from two
   applications should not be linkable                                   ✓
5. Biometrics should be revocable                                        ~

Quantitative comparison

Average time to recognize one image in
seconds when the database has 798 images             24 to 43  10        0.006

✓ = fully satisfied, ~ = partially satisfied, ✗ = not satisfied

Table 1: Performance of PEEP against other approaches

4.8 Computational complexity

PEEP involves two independent components in recognizing a particular face image: component 1 is the randomization process, and component 2 is the recognition process. The two components conduct independent operations; hence, they need independent evaluations of computational complexity. Moreover, as PEEP does not need a secure communication channel, the complexity of maintaining a secure channel has no influence on the performance of PEEP. For a particular instance of PEEP (refer to Algorithm 3), the perturbation steps display a linear complexity of O(m), where m is the number of principal components; the image resolution (width in pixels, height in pixels) remains constant during a particular instance of perturbation and recognition. With a width of 47 pixels, a height of 62 pixels, and 128 PCA components, PEEP takes around 0.004 seconds to randomize a single input image. Component 2 can be composed of any suitable classification model; in our case, we use the MLPClassifier (refer to Section 4.1) as the facial recognition module, trained using 798 images. Under the same input settings, the trained model takes 0.002 seconds to recognize a facial image input. Since prediction is always done on a converged model, the time taken for prediction is constant, following a complexity of O(1). For randomization and prediction together, PEEP consumes roughly 0.006 seconds under the given experimental settings. The runtime plots shown in Figure 13 further validate the computational complexities evaluated above. According to the last row of Table 1, PEEP is considerably faster than the comparative methods; PEEP provides a more effective and efficient approach for recognizing images against millions of faces in a privacy-preserving manner.
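The O(m) behaviour of the randomization component can be checked with a quick timing sketch. The sizes and the Laplace-noise stand-in below are illustrative assumptions:

```python
import time
import numpy as np

rng = np.random.default_rng(3)

def perturb(weights, epsilon=8.0, sensitivity=1.0):
    # Laplace noise over m coefficients: O(m) work per image.
    return weights + rng.laplace(0.0, sensitivity / epsilon,
                                 size=weights.shape)

timings = {}
for m in (128, 1280, 12800):
    w = rng.standard_normal(m)
    start = time.perf_counter()
    for _ in range(100):
        perturb(w)
    timings[m] = (time.perf_counter() - start) / 100

# Runtime grows roughly linearly with m (constant factors aside).
print({m: f"{t * 1e6:.1f} us" for m, t in timings.items()})
```

The absolute numbers depend on the machine, but the linear trend in m is what matters for scalability.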

Figure 13: The time consumption of PEEP to randomize and recognize one input image against the increasing number of principal components used for the eigenface generation.

5 Related Work

Literature shows vast advancement in the area of face recognition, employing different approaches such as input image preprocessing heseltine2003face , statistical approaches tsalakanidou2003use ; delac2005appearance , and deep learning parkhi2015deep . Continuous improvements in the field have drastically improved face recognition accuracy, making it a widely used approach in many fields parkhi2015deep . Furthermore, approaches such as that proposed by Cendrillon et al. show the dynamic capabilities of face recognition systems, allowing real-time processing cendrillon2000real . However, biometric data analysis is a vast area not limited to face recognition. With biometric data, a major threat is privacy violation bhargav2007privacy . Biometric data are almost always non-revocable and can be used to easily identify a person in a large set of individuals; hence, it is essential to apply a privacy-preserving mechanism when using biometrics, e.g. for identification and authentication bringer2013privacy . The literature shows a few approaches that address privacy issues in face recognition. Zekeriya Erkin et al. (ZEYN) erkin2009privacy introduced a privacy-preserving face recognition method based on a cryptographic protocol for comparing two Paillier-encrypted values. Their solution focuses on a two-party scenario where one party holds the privacy-preserving algorithm and the database of face images, and the other party wants to recognize/classify a facial image input. ZEYN requires O(log M) rounds and needs computationally expensive operations on homomorphically encrypted data to recognize a face in a database of images; hence, it is not suitable for large-scale scenarios. Ahmad-Reza Sadeghi et al. (ANRA) sadeghi2009efficient introduced a relatively efficient method based on homomorphic encryption with garbled circuits. Nevertheless, the complexity of ANRA suffers from the same problem of failing to address large-scale scenarios. Xiang et al. tried to overcome the computational complexities of the previous methods by introducing another cryptographic mechanism that uses the cloud for outsourced computations xiang2016privacy .
However, being a semi-honest model, introducing another untrusted module such as the cloud increases the possibility of a privacy leak. The proposed cryptographic methods cannot work without a trusted third party, and these trusted parties may later behave maliciously. Newton et al. proposed a de-identification approach for face images that does not need complex cryptographic operations newton2005preserving . However, such approaches tend to reduce accuracy and increase information leakage when applied to high-dimensional data chamikaraprocal ; chamikara2019infosci . The same problem can occur in large-scale scenarios involving the surveillance of millions of people. In addition to these works, researchers have looked at complementary techniques such as developing privacy-friendly surveillance cameras dufaux2006scrambling ; yu2008privacy , but these methods do not provide sufficient accuracy for privacy-preserving face recognition.

Fingerprint and iris data are two other heavily used biometrics for identification and authentication; privacy-preserving finger code authentication barni2010privacy and privacy-preserving key generation for iris biometrics rathgeb2010privacy are two approaches that apply cryptographic methods to maintain the privacy of fingerprint and iris data. However, these solutions also need more efficient procedures, as cryptographic approaches are computationally inefficient. Privacy-preserving fingerprint and iris analysis are possible future applications for PEEP, but this needs further investigation. Classification is the most commonly applied data mining technique used in biometric systems brady1999biometric . Encryption and data perturbation are the two main approaches used for privacy-preserving data mining (PPDM) yang2017efficient . Data perturbation often entails lower computational complexity than encryption, at the expense of utility; hence, data perturbation is better at producing high efficiency in large-scale data mining. Noise addition, geometric transformation, randomization, condensation, and hybrid perturbation are a few of the perturbation approaches zhong2012mu ; chamikaraprocal . As data perturbation methods do not change the original input data formats, they may concede some privacy leakage machanavajjhala2015designing . A privacy model defines the constraints on the level of privacy of a particular perturbation mechanism machanavajjhala2015designing ; differential privacy (DP) is one such privacy model chamikaraprocal . DP is considered to provide a better privacy guarantee than previous privacy models, which are vulnerable to different privacy attacks dwork2009differential ; 9000905 . The Laplace mechanism, the Gaussian mechanism chanyaswad2018mvg , the geometric mechanism, randomized response qin2016heavy , and the staircase mechanism kairouz2014extremal are a few of the fundamental mechanisms used to achieve DP. There are many practical examples where these fundamental mechanisms have been used to build differentially private algorithms and methods; LDPMiner qin2016heavy , PINQ mcsherry2009privacy , RAPPOR erlingsson2014rappor , and Deep Learning with DP abadi2016deep are a few such practical applications of DP.
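As a concrete illustration of one such fundamental mechanism, binary randomized response keeps the true bit with probability e^ε/(1 + e^ε) and still allows an unbiased estimate of the population proportion. This is a standard textbook construction, not the mechanism used by PEEP:

```python
import math
import random

random.seed(0)

def randomized_response(bit, epsilon):
    """Keep the true bit with probability e^eps / (1 + e^eps), else flip it."""
    p_keep = math.exp(epsilon) / (1 + math.exp(epsilon))
    return bit if random.random() < p_keep else 1 - bit

def estimate_mean(reports, epsilon):
    """Unbiased estimate of the true proportion of 1s from noisy reports."""
    p = math.exp(epsilon) / (1 + math.exp(epsilon))
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)

true_bits = [1] * 1500 + [0] * 3500          # true proportion: 0.3
reports = [randomized_response(b, epsilon=1.0) for b in true_bits]
est = estimate_mean(reports, epsilon=1.0)
print(abs(est - 0.3) < 0.1)                  # estimate is close to 0.3
```

Each individual's report is plausibly deniable, yet aggregate statistics remain recoverable — the same local-privacy principle that PEEP applies to eigenface coefficients.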

6 Conclusions

We proposed a novel mechanism named PEEP for privacy-preserving face recognition using data perturbation. PEEP utilizes the properties of differential privacy, which can provide a strong level of privacy for facial recognition technologies. PEEP does not need a trusted party and employs a local approach, where randomization is applied before the images reach an untrusted server. PEEP forwards only randomized data, which requires no secure channel. PEEP is an efficient and lightweight approach that can be easily integrated into any resource-constrained device. As the training and testing/recognition of facial images are done solely on the randomized data, PEEP does not incur any efficiency loss during the recognition of a face. The differentially private notions allow users to tweak the privacy parameters according to domain requirements. All things considered, PEEP is a state-of-the-art approach for privacy-preserving face recognition.

In future work, we will look at using the proposed approach with different biometric algorithms and areas such as fingerprint and iris recognition, in particular with regard to effectiveness and sensitivity in different domains of inputs.

References

  • (1) B. Mandal, S.-C. Chia, L. Li, V. Chandrasekhar, C. Tan, J.-H. Lim, A wearable face recognition system on google glass for assisting social interactions, in: Asian Conference on Computer Vision, Springer, 2014, pp. 419–433.
  • (2) M. MacAulay, M. D. Moldes, Queen don’t compute: reading and casting shade on facebook’s real names policy, Critical Studies in Media Communication 33 (1) (2016) 6–22.
  • (3) R. Bhagavatula, B. Ur, K. Iacovino, S. M. Kywe, L. F. Cranor, M. Savvides, Biometric authentication on iphone and android: Usability, perceptions, and influences on adoption, in: USEC ’15, Internet Society, 2015.
  • (4) I. Dinur, K. Nissim, Revealing information while preserving privacy, in: Proceedings of the twenty-second ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, ACM, 2003, pp. 202–210.
  • (5) M. A. P. Chamikara, P. Bertok, D. Liu, S. Camtepe, I. Khalil, An efficient and scalable privacy preserving algorithm for big data and data streams, Computers & Security 87 (2019) 101570.
  • (6) Z. Erkin, M. Franz, J. Guajardo, S. Katzenbeisser, I. Lagendijk, T. Toft, Privacy-preserving face recognition, in: International Symposium on Privacy Enhancing Technologies Symposium, Springer, 2009, pp. 235–253.
  • (7) M. A. P. Chamikara, P. Bertok, D. Liu, S. Camtepe, I. Khalil, Efficient data perturbation for privacy preserving and accurate data stream mining, Pervasive and Mobile Computing 48 (2018) 1–19. doi:https://doi.org/10.1016/j.pmcj.2018.05.003.
  • (8) M. A. P. Chamikara, A. Galappaththi, R. D. Yapa, R. D. Nawarathna, S. R. Kodituwakku, J. Gunatilake, A. A. C. A. Jayathilake, L. Liyanage, Fuzzy based binary feature profiling for modus operandi analysis, PeerJ Computer Science 2 (2016) e65.
  • (9) A.-R. Sadeghi, T. Schneider, I. Wehrenberg, Efficient privacy-preserving face recognition, in: International Conference on Information Security and Cryptology, Springer, 2009, pp. 229–244.
  • (10) C. Xiang, C. Tang, Y. Cai, Q. Xu, Privacy-preserving face recognition with outsourced computation, Soft Computing 20 (9) (2016) 3735–3744.
  • (11) E. M. Newton, L. Sweeney, B. Malin, Preserving privacy by de-identifying face images, IEEE transactions on Knowledge and Data Engineering 17 (2) (2005) 232–243.
  • (12) M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, L. Zhang, Deep learning with differential privacy, in: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, ACM, 2016, pp. 308–318. doi:https://doi.org/10.1145/2976749.2978318.
  • (13) M. A. P. Chamikara, P. Bertok, I. Khalil, D. Liu, S. Camtepe, M. Atiquzzaman, Local differential privacy for deep learning, IEEE Internet of Things Journal. doi:https://doi.org/10.1109/JIOT.2019.2952146.
  • (14) P. Kairouz, S. Oh, P. Viswanath, Extremal mechanisms for local differential privacy, in: Advances in neural information processing systems, 2014, pp. 2879–2887.
  • (15) T.-H. H. Chan, M. Li, E. Shi, W. Xu, Differentially private continual monitoring of heavy hitters from distributed streams, in: International Symposium on Privacy Enhancing Technologies Symposium, Springer, 2012, pp. 140–159.
  • (16) C. Dwork, A. Roth, et al., The algorithmic foundations of differential privacy, Foundations and Trends® in Theoretical Computer Science 9 (3–4) (2014) 211–407. doi:http://dx.doi.org/10.1561/0400000042.
  • (17) Ú. Erlingsson, V. Pihur, A. Korolova, Rappor: Randomized aggregatable privacy-preserving ordinal response, in: Proceedings of the 2014 ACM SIGSAC conference on computer and communications security, ACM, 2014, pp. 1054–1067. doi:https://doi.org/10.1145/2660267.2660348.
  • (18) Y. Wang, X. Wu, D. Hu, Using randomized response for differential privacy preserving data collection., in: EDBT/ICDT Workshops, Vol. 1558, 2016.
  • (19) J. Zhang, Y. Yan, M. Lades, Face recognition: eigenface, elastic matching, and neural nets, Proceedings of the IEEE 85 (9) (1997) 1423–1435.
  • (20) M. Turk, A. Pentland, Eigenfaces for recognition, Journal of cognitive neuroscience 3 (1) (1991) 71–86.
  • (21) J. Bennett, R. Grout, P. Pébay, D. Roe, D. Thompson, Numerically stable, single-pass, parallel statistics algorithms, in: Cluster Computing and Workshops, 2009. CLUSTER’09. IEEE International Conference on, IEEE, 2009, pp. 1–8.
  • (22) M. Chamikara, P. Bertok, I. Khalil, D. Liu, S. Camtepe, Privacy preserving distributed machine learning with federated learning, arXiv preprint arXiv:2004.12108.
  • (23) M. Bun, T. Steinke, Concentrated differential privacy: Simplifications, extensions, and lower bounds, in: Theory of Cryptography Conference, Springer, 2016, pp. 635–658.
  • (24) C. C. Aggarwal, P. S. Yu, A condensation approach to privacy preserving data mining, in: EDBT, Vol. 4, Springer, 2004, pp. 183–199. doi:https://doi.org/10.1007/978-3-540-24741-8_12.
  • (25) D. Pissarenko, Eigenface-based facial recognition, December 1st.
  • (26) T. Heseltine, N. Pears, J. Austin, Z. Chen, Face recognition: A comparison of appearance-based approaches, in: Proc. VIIth Digital image computing: Techniques and applications, Vol. 1, 2003.
  • (27) F. Tsalakanidou, D. Tzovaras, M. G. Strintzis, Use of depth and colour eigenfaces for face recognition, Pattern Recognition Letters 24 (9-10) (2003) 1427–1435.
  • (28) K. Delac, M. Grgic, P. Liatsis, Appearance-based statistical methods for face recognition, in: 47th International Symposium ELMAR-2005, 2005, pp. 151–158.
  • (29) O. M. Parkhi, A. Vedaldi, A. Zisserman, Deep face recognition.
  • (30) R. Cendrillon, B. Lovell, Real-time face recognition using eigenfaces, in: Visual Communications and Image Processing 2000, Vol. 4067, International Society for Optics and Photonics, 2000, pp. 269–276.
  • (31) A. Bhargav-Spantzel, A. C. Squicciarini, S. Modi, M. Young, E. Bertino, S. J. Elliott, Privacy preserving multi-factor authentication with biometrics, Journal of Computer Security 15 (5) (2007) 529–560.
  • (32) J. Bringer, H. Chabanne, A. Patey, Privacy-preserving biometric identification using secure multiparty computation: An overview and recent trends, IEEE Signal Processing Magazine 30 (2) (2013) 42–52.
  • (33) M. A. P. Chamikara, P. Bertok, D. Liu, S. Camtepe, I. Khalil, Efficient privacy preservation of big data for accurate data mining, Information Sciences. doi:https://doi.org/10.1016/j.ins.2019.05.053.
  • (34) F. Dufaux, T. Ebrahimi, Scrambling for video surveillance with privacy, in: 2006 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW’06), IEEE, 2006, pp. 160–160.
  • (35) X. Yu, K. Chinomi, T. Koshimizu, N. Nitta, Y. Ito, N. Babaguchi, Privacy protecting visual processing for secure video surveillance, in: 2008 15th IEEE International Conference on Image Processing, IEEE, 2008, pp. 1672–1675.
  • (36) M. Barni, T. Bianchi, D. Catalano, M. Di Raimondo, R. Donida Labati, P. Failla, D. Fiore, R. Lazzeretti, V. Piuri, F. Scotti, et al., Privacy-preserving fingercode authentication, in: Proceedings of the 12th ACM workshop on Multimedia and security, ACM, 2010, pp. 231–240.
  • (37) C. Rathgeb, A. Uhl, Privacy preserving key generation for iris biometrics, in: IFIP International Conference on Communications and Multimedia Security, Springer, 2010, pp. 191–200.
  • (38) M. J. Brady, Biometric recognition using a classification neural network, US Patent 5,892,838 (Apr. 6 1999).
  • (39) K. Yang, Q. Han, H. Li, K. Zheng, Z. Su, X. Shen, An efficient and fine-grained big data access control scheme with privacy-preserving policy, IEEE Internet of Things Journal 4 (2) (2017) 563–571. doi:https://doi.org/10.1109/JIOT.2016.2571718.
  • (40) J. Zhong, V. Mirchandani, P. Bertok, J. Harland, -fractal based data perturbation algorithm for privacy protection., in: PACIS, 2012, p. 148.
  • (41) A. Machanavajjhala, D. Kifer, Designing statistical privacy for your data, Communications of the ACM 58 (3) (2015) 58–67. doi:https://doi.org/10.1145/2660766.
  • (42) C. Dwork, The differential privacy frontier, in: Theory of Cryptography Conference, Springer, 2009, pp. 496–502. doi:https://doi.org/10.1007/978-3-642-00457-5_29.
  • (43) M. A. P. Chamikara, P. Bertok, I. Khalil, D. Liu, S. Camtepe, M. Atiquzzaman, A trustworthy privacy preserving framework for machine learning in industrial iot systems, IEEE Transactions on Industrial Informatics (2020) 1–1doi:10.1109/TII.2020.2974555.
  • (44) T. Chanyaswad, A. Dytso, H. V. Poor, P. Mittal, Mvg mechanism: Differential privacy under matrix-valued query, arXiv preprint arXiv:1801.00823.
  • (45) Z. Qin, Y. Yang, T. Yu, I. Khalil, X. Xiao, K. Ren, Heavy hitter estimation over set-valued data with local differential privacy, in: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, ACM, 2016, pp. 192–203. doi:https://doi.org/10.1145/2976749.2978409.
  • (46) F. D. McSherry, Privacy integrated queries: an extensible platform for privacy-preserving data analysis, in: Proceedings of the 2009 ACM SIGMOD International Conference on Management of data, ACM, 2009, pp. 19–30. doi:https://doi.org/10.1145/1559845.1559850.