DeepObfuscator: Adversarial Training Framework for Privacy-Preserving Image Classification

09/09/2019 · Ang Li, et al. · Duke University, Tsinghua University

Deep learning has been widely utilized in many computer vision applications and has achieved remarkable commercial success. However, running deep learning models on mobile devices is generally challenging due to the limited computing resources available on such devices. A common workaround is to let users send their service requests to cloud servers that run large-scale deep learning models. Sending the data associated with these service requests to the cloud, however, puts user data privacy at risk. Some prior works proposed sending the features extracted from the raw data (e.g., images) to the cloud instead. Unfortunately, these extracted features can still be exploited by attackers to recover the raw images and to infer embedded private attributes (e.g., age, gender, etc.). In this paper, we propose DeepObfuscator, an adversarial training framework that prevents the extracted features from being used to reconstruct raw images and infer private attributes, while retaining the information needed for the intended cloud service (i.e., image classification). DeepObfuscator includes a learnable encoder, namely the obfuscator, that is trained to hide privacy-related sensitive information in the features through our proposed adversarial training algorithm. Our experiments on the CelebA dataset show that the quality of images reconstructed from the obfuscated features drops dramatically from 0.9458 to 0.3175 in terms of multi-scale structural similarity (MS-SSIM), so the person in the reconstructed image can hardly be re-identified. The classification accuracy that an attacker can achieve on the private attributes drops to a random-guessing level, e.g., the accuracy of gender is reduced from 97.36% to 58.85%, while the accuracy of the intended classification tasks performed via the cloud service drops by only about 2%.


1 Introduction

In the past decade, deep learning has achieved great success in many computer vision applications, such as face recognition

[15] and image segmentation [9]. However, running deep learning models on mobile devices is technically challenging due to their limited computing resources. Large deep-learning-based applications are therefore often deployed on cloud servers to offer various services, such as Amazon Rekognition and Microsoft Cognitive Services. These cloud-based services require users to send data (e.g., images) to the cloud service provider. This requirement raises users' concerns about privacy leakage, since the images may contain various kinds of private information (e.g., age, gender, etc.). One widely adopted solution to this privacy issue is to upload only the extracted features rather than the raw image [13, 14]. Unfortunately, the extracted features still contain rich information that can breach users' privacy. Specifically, an attacker can exploit the eavesdropped features to reconstruct the raw image, and hence the person in the raw image can be re-identified from the reconstructed image [12]. In addition, the extracted features can be exploited by an attacker to infer private attributes, such as gender and age. An attacker, who may be an authorized internal staff member of the service provider or an external hacker, can perform such reconstruct attacks and private attribute inference by training an adversary reconstructor and an adversary classifier, respectively. These adversary models can be trained by continuously querying the cloud service to collect the eavesdropped features as inputs, with the ground truth of the queried data serving as labels.

Figure 1: An example of reconstruct attack and private attribute leakage in a cloud service for facial attribute recognition.

Figure 1 shows an example where the reconstruct attack and private attribute leakage occur in a cloud-based facial attribute recognition service. When a mobile user uploads an image for facial attribute detection, the encoder extracts features and sends them to the cloud server. The classifier deployed on the server takes the extracted features as inputs and predicts whether the person in the received image is smiling. Note that a complete model is jointly trained in an end-to-end manner and then split into the encoder and the classifier. However, the extracted features can be eavesdropped by an attacker. By continuously querying the cloud service, the attacker can collect the eavesdropped features to train a decoder, denoted as the adversary reconstructor, for recovering the raw images. Besides, the eavesdropped features can also be exploited to train an adversary classifier for inferring the private attributes associated with the raw images.

A few studies have been performed on defending against reconstruct attacks. They perturb either the raw data [3] or the extracted features [13, 14] by adding random noise, but these methods inevitably incur an accuracy drop. Feutry et al. [2] also propose an image anonymization approach to hide sensitive features related to private attributes. However, none of these previous studies has investigated whether defending against only the reconstruct attack also prevents private attribute leakage, or vice versa. Our experiments demonstrate that defending against only the reconstruct attack cannot prevent private attribute leakage, and vice versa. Figure 2 shows two examples of reconstructed images when we defend against only one of the two threats. Column (a) shows the raw images, and column (b) displays the reconstructed images when we defend against only private attribute leakage. These reconstructed images still contain many details of the raw images and allow an attacker to re-identify the person in them. The reconstructed images when defending against only the reconstruct attack are shown in column (c). Although almost all distinguishable information has been masked, an attacker can still achieve a 93.7% accuracy in detecting the gender of the person in these images. More details about this experiment can be found in Section 4.2.

Figure 2: Reconstructed images: defending against only private attribute leakage vs. defending against only the reconstruct attack. Column (a) shows the raw images, column (b) shows the reconstructed images when we defend against only private attribute leakage, and column (c) displays the reconstructed images when defending against only the reconstruct attack.

In this work, we propose DeepObfuscator, an adversarial training framework that learns an obfuscator to hide sensitive information which could be exploited for reconstructing raw images and inferring private attributes, while keeping the features useful for image classification. Although we focus on image classification in this paper, DeepObfuscator can be easily extended to many other tasks, e.g., speech recognition. As Figure 3 shows, DeepObfuscator consists of four modules: the obfuscator, the classifier, the adversary reconstructor, and the adversary classifier. The key idea is to apply adversarial training to maximize the reconstruction error of the adversary reconstructor and the classification error of the adversary classifier, while minimizing the classification error of the classifier.

The main contributions of this paper are summarized as follows:

  1. To the best of our knowledge, DeepObfuscator is the first adversarial training framework that can simultaneously defend against both the reconstruct attack and private attribute leakage while maintaining the accuracy of image classification;

  2. We are the first to experimentally demonstrate that defending against only the reconstruct attack does not prevent private attribute leakage, and vice versa;

  3. We quantitatively evaluate DeepObfuscator on the CelebA dataset. The results show that the quality of images reconstructed from the obfuscated features is significantly decreased from 0.9458 to 0.3175 in terms of MS-SSIM, indicating that the person in the image can hardly be re-identified. The classification accuracy of the inferred private attributes drops by around 30% to a random-guessing level, while the accuracy of the intended classification tasks performed via the cloud service drops by only 2%.

In the rest of this paper, Section 2 reviews the related work. Section 3 elaborates the design of DeepObfuscator. Section 4 evaluates the performance of DeepObfuscator. Section 5 concludes this work.

2 Related Work

Several works have been done to realize privacy-preserving image classification. One common solution is the perturbation-based approach, which modifies the raw data before sending it to the service provider, for example by adding random noise [1], down-sampling [17], or replacing the identity of a face [5]. However, these solutions are task-specific: they need to carefully pre-analyze the intended classification tasks and perturb only the features that are not related to those tasks. Recent works apply deep learning techniques to protect the privacy of raw images. Osia et al. [13, 14] design a client-server paradigm by splitting a complete convolutional neural network (CNN) model into a feature extractor and a classifier. The extracted features are sent to the classifier deployed on the server instead of the raw images, and noise is added to the features before sending them to the service provider. However, such solutions incur a considerable drop in classification accuracy. Adversarial training [8, 20, 6] has been adopted to obfuscate features so that an attacker cannot reconstruct the raw image from the eavesdropped features. Feutry et al. [2] also propose an adversarial training algorithm to hide sensitive information in the features so that private attributes cannot be inferred from the obfuscated features.

Unfortunately, none of the existing works can simultaneously defend against both the reconstruct attack and private attribute leakage, leaving significant concerns about user data privacy in such scenarios. Our proposed DeepObfuscator is designed to solve this issue.

3 Design of DeepObfuscator

As Figure 3 shows, besides the obfuscator, DeepObfuscator consists of three additional neural network modules: the classifier (C), the adversary reconstructor (AR), and the adversary classifier (AC). The classifier serves the intended classification service. The adversary reconstructor and the adversary classifier simulate an attacker in the adversarial training procedure, aiming to recover raw images and infer private attributes from the eavesdropped features, respectively. All four modules are trained end-to-end using our proposed adversarial training algorithm.

Figure 3: The design of DeepObfuscator.

Before presenting the details of each module, we introduce the following notation. We denote $X = \{x_1, x_2, \dots, x_N\}$ as the images in the dataset, where $N$ is the number of images, and $\hat{x}_i$ represents the reconstructed image generated by the adversary reconstructor from the features of $x_i$. Let $T$ denote the set of target classes that the classifier is trained to predict, and $y^t_i$ denote the corresponding label of class $t \in T$ for image $x_i$. Similarly, we adopt $P$ to denote the set of private classes that the adversary classifier aims to infer, and $y^p_i$ denotes the corresponding label of private class $p \in P$.

3.1 Obfuscator

The obfuscator (O) is a typical encoder which consists of an input layer, multiple convolutional layers, max-pooling layers, and batch-normalization layers. The obfuscator is trained to hide privacy-related information while retaining useful information for the intended classification tasks.
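To make this concrete, below is a minimal PyTorch sketch of such an encoder, assuming the conv3-64/conv3-128 layer configuration listed in Table 1; the exact placement of batch normalization and ReLU activations is our assumption, since the paper does not spell it out.

```python
import torch
import torch.nn as nn

class Obfuscator(nn.Module):
    """Minimal sketch of the obfuscator encoder (Table 1, left column)."""

    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
            nn.BatchNorm2d(128), nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, kernel_size=3, padding=1),
            nn.BatchNorm2d(128), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )

    def forward(self, x):
        # x: (B, 3, 218, 178) CelebA image -> (B, 128, 54, 44) feature maps
        return self.encode(x)
```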

3.2 Classifier

The classifier (C) is jointly trained with the obfuscator as a complete CNN model. A service provider can choose any neural network architecture for the classifier based on the task requirements and the available computing resources. In DeepObfuscator, we adopt the popular VGG16 architecture [18] and split it into the obfuscator and the classifier.
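A minimal sketch of the corresponding classifier head is shown below, assuming the VGG16-style configuration in the classifier column of Table 1 (three convolutional blocks followed by two FC-4096 layers and a sigmoid); the per-attribute sigmoid output and the flattened feature size are inferred from the 54×44×128 obfuscator output.

```python
import torch.nn as nn

class Classifier(nn.Module):
    """Sketch of the classifier / adversary classifier head (Table 1, right column)."""

    def __init__(self, num_labels=1):
        super().__init__()

        def block(in_ch, out_ch, n_convs=3):
            layers = []
            for i in range(n_convs):
                layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                           nn.ReLU(inplace=True)]
            layers.append(nn.MaxPool2d(2))
            return layers

        self.features = nn.Sequential(*block(128, 256), *block(256, 512), *block(512, 512))
        # 54x44 feature maps shrink to 6x5 after three 2x2 max-poolings
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512 * 6 * 5, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, num_labels), nn.Sigmoid(),
        )

    def forward(self, feats):
        # feats: (B, 128, 54, 44) obfuscated features -> per-attribute probabilities
        return self.head(self.features(feats))
```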

The performance of the classifier is measured using the cross-entropy loss function, which is expressed as:

$$\mathcal{L}_C = -\frac{1}{N}\sum_{i=1}^{N}\sum_{t \in T}\left[\,y^t_i \log \hat{y}^t_i + (1 - y^t_i)\log(1 - \hat{y}^t_i)\,\right] \tag{1}$$

where $y^t_i$ denotes the ground-truth label of target class $t$ for the $i$-th data sample, and $\hat{y}^t_i$ is the corresponding prediction. Therefore, the obfuscator and the classifier can be optimized by minimizing the above loss function:

$$\theta_O^*,\; \theta_C^* = \arg\min_{\theta_O,\,\theta_C} \mathcal{L}_C(\theta_O, \theta_C) \tag{2}$$

where $\theta_O$ and $\theta_C$ are the parameters of the obfuscator and the classifier, respectively.
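A sketch of this joint optimization step is given below, reusing the Obfuscator and Classifier sketches above; the binary cross-entropy form of Eq. (1) and the learning rate are our assumptions.

```python
import torch
import torch.nn as nn

# Joint (non-adversarial) training step for Eqs. (1)-(2): minimize the
# classifier's cross-entropy w.r.t. both obfuscator and classifier parameters.
obfuscator, classifier = Obfuscator(), Classifier(num_labels=1)
bce = nn.BCELoss()
opt_oc = torch.optim.Adam(list(obfuscator.parameters()) + list(classifier.parameters()),
                          lr=1e-4)  # learning rate is a placeholder

def joint_step(images, target_labels):
    # images: (B, 3, 218, 178); target_labels: (B, 1) binary attribute labels
    preds = classifier(obfuscator(images))        # forward through O then C
    loss_c = bce(preds, target_labels.float())    # L_C in Eq. (1)
    opt_oc.zero_grad()
    loss_c.backward()
    opt_oc.step()                                 # Eq. (2): update theta_O and theta_C
    return loss_c.item()
```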

3.3 Adversary Classifier

By continuously querying the cloud service, an attacker can train the adversary classifier (AC) using the eavesdropped features as inputs and the private attributes of interest as labels. The attacker can then infer private attributes by feeding eavesdropped features to the trained adversary classifier. In DeepObfuscator, we apply the same architecture to both the classifier and the adversary classifier; however, the attacker can choose any architecture for the adversary classifier. As we show in Section 4.4, using different architectures for the classifier and the adversary classifier does not yield performance significantly different from using the same architecture.

Similar to the classifier, the performance of the adversary classifier is also measured using the cross-entropy loss function:

$$\mathcal{L}_{AC} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{p \in P}\left[\,y^p_i \log \hat{y}^p_i + (1 - y^p_i)\log(1 - \hat{y}^p_i)\,\right] \tag{3}$$

where $y^p_i$ denotes the ground-truth label of private class $p$ for the $i$-th eavesdropped feature, and $\hat{y}^p_i$ stands for the corresponding prediction. When we simulate an attacker who tries to push the accuracy of the adversary classifier as high as possible, the adversary classifier is optimized by minimizing the above loss function:

$$\theta_{AC}^* = \arg\min_{\theta_{AC}} \mathcal{L}_{AC}(\theta_O, \theta_{AC}) \tag{4}$$

where $\theta_{AC}$ is the parameter set of the adversary classifier. On the contrary, when defending against private attribute leakage, we train the obfuscator in our proposed adversarial training procedure to degrade the performance of the adversary classifier while improving the accuracy of the classifier. Consequently, the obfuscator can be trained using Eq. 5 when simulating a defender:

$$\theta_O^* = \arg\min_{\theta_O}\left[\mathcal{L}_C(\theta_O, \theta_C) - \lambda_1 \mathcal{L}_{AC}(\theta_O, \theta_{AC})\right] \tag{5}$$

where $\lambda_1$ is a tradeoff parameter.
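The attacker and defender updates around the adversary classifier can be sketched as follows, again reusing names from the earlier sketches; the lambda1 value and learning rates are placeholders, since the paper does not state the tradeoff parameter it uses.

```python
# Sketch of the attacker/defender updates for the adversary classifier (Eqs. 4-5).
adv_classifier = Classifier(num_labels=1)   # same architecture as the classifier
opt_ac = torch.optim.Adam(adv_classifier.parameters(), lr=1e-4)
opt_o = torch.optim.Adam(obfuscator.parameters(), lr=1e-4)
lambda1 = 1.0  # placeholder tradeoff value, not from the paper

def attacker_ac_step(images, private_labels):
    # Eq. (4): train AC on eavesdropped features with the obfuscator frozen.
    with torch.no_grad():
        feats = obfuscator(images)
    loss_ac = bce(adv_classifier(feats), private_labels.float())
    opt_ac.zero_grad(); loss_ac.backward(); opt_ac.step()

def defender_ac_step(images, target_labels, private_labels):
    # Eq. (5): update only the obfuscator to keep L_C low while pushing L_AC up.
    feats = obfuscator(images)
    loss = bce(classifier(feats), target_labels.float()) \
           - lambda1 * bce(adv_classifier(feats), private_labels.float())
    opt_o.zero_grad(); loss.backward(); opt_o.step()   # only theta_O is stepped
```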

3.4 Adversary Reconstructor

The adversary reconstructor (AR), which is trained to recover the raw image from the eavesdropped features, also plays the attacker role. The attacker can apply any neural network architecture in the adversary reconstructor design. However, the worst case happens when an attacker knows the architecture of the obfuscator and builds the most powerful reconstructor, i.e., an exact mirror of the obfuscator obtained by a layer-to-layer reversion. In DeepObfuscator, we adopt this most powerful reconstructor as the adversary reconstructor. The experiments in Section 4.4 show that our trained obfuscator can successfully defend against the reconstruct attack even if an attacker trains the reconstructor with a different neural network architecture.

When playing as an attacker, the adversary reconstructor is trained to make the quality of the reconstructed image $\hat{x}_i$ as close to that of the original image $x_i$ as possible. In DeepObfuscator, we adopt MS-SSIM [19, 11] to evaluate the performance of the adversary reconstructor:

$$\mathcal{L}_{AR} = \frac{1}{N}\sum_{i=1}^{N}\text{MS-SSIM}(x_i, \hat{x}_i) \tag{6}$$

MS-SSIM values range between 0 and 1. The higher the MS-SSIM value, the more perceptually similar the two compared images are, indicating a better quality of the reconstructed images. Consequently, an attacker can optimize the adversary reconstructor as:

$$\theta_{AR}^* = \arg\max_{\theta_{AR}} \mathcal{L}_{AR}(\theta_O, \theta_{AR}) \tag{7}$$

where $\theta_{AR}$ is the parameter set of the adversary reconstructor. On the contrary, a defender expects to degrade the quality of the reconstructed image as much as possible. To this end, we generate an additional Gaussian noise image $I_n$. The obfuscator is then trained so that each reconstructed image $\hat{x}_i$ becomes similar to $I_n$ but different from the raw image $x_i$, while the performance of the classifier is maintained. When playing as a defender, the obfuscator can be trained as:

$$\mathcal{L}'_{AR} = \frac{1}{N}\sum_{i=1}^{N}\left[\text{MS-SSIM}(x_i, \hat{x}_i) - \text{MS-SSIM}(I_n, \hat{x}_i)\right] \tag{8}$$

$$\theta_O^* = \arg\min_{\theta_O}\left[\mathcal{L}_C(\theta_O, \theta_C) + \lambda_2 \mathcal{L}'_{AR}(\theta_O, \theta_{AR})\right] \tag{9}$$

where $\lambda_2$ is a tradeoff parameter.
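A sketch of both sides of this reconstruction game is given below. It reuses the modules from the earlier sketches, assumes the pytorch-msssim package for the MS-SSIM computation, and mirrors the obfuscator with the deconvolution/upsampling layers listed in Table 1; the upsampling target sizes, the stand-in noise image, and the lambda2 value are our placeholders.

```python
import torch
import torch.nn as nn
from pytorch_msssim import ms_ssim  # assumes the pytorch-msssim package

adv_reconstructor = nn.Sequential(            # mirrors the obfuscator (Table 1, AR column)
    nn.Upsample(size=(109, 89)),
    nn.ConvTranspose2d(128, 128, 3, padding=1), nn.ReLU(inplace=True),
    nn.ConvTranspose2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
    nn.Upsample(size=(218, 178)),             # back to the CelebA resolution
    nn.ConvTranspose2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
    nn.ConvTranspose2d(64, 3, 3, padding=1), nn.Sigmoid(),
)
opt_ar = torch.optim.Adam(adv_reconstructor.parameters(), lr=1e-4)
lambda2 = 1.0                                 # placeholder tradeoff value
noise_img = torch.randn(1, 3, 218, 178).clamp(0, 1)  # stand-in Gaussian noise image I_n

def attacker_ar_step(images):
    # Eq. (7): maximize MS-SSIM(x, x_hat) with the obfuscator frozen.
    with torch.no_grad():
        feats = obfuscator(images)
    recon = adv_reconstructor(feats)
    loss = -ms_ssim(recon, images, data_range=1.0)
    opt_ar.zero_grad(); loss.backward(); opt_ar.step()

def defender_ar_step(images, target_labels):
    # Eqs. (8)-(9): update only the obfuscator so reconstructions drift toward
    # noise and away from the raw images, while keeping L_C low.
    feats = obfuscator(images)
    recon = adv_reconstructor(feats)
    noise = noise_img.expand_as(images)
    loss_rec = ms_ssim(recon, images, data_range=1.0) - ms_ssim(recon, noise, data_range=1.0)
    loss = bce(classifier(feats), target_labels.float()) + lambda2 * loss_rec
    opt_o.zero_grad(); loss.backward(); opt_o.step()
```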

3.5 Adversarial Training Algorithm

Input: dataset $X$; pre-trained obfuscator O, classifier C, adversary reconstructor AR, adversary classifier AC
Output: trained obfuscator O
for each training epoch do
   for each batch $i$ do
      if $i \bmod 4 = 0$ then
         Defend against AR: update $\theta_O$ by Eq. 9, with $\theta_C$, $\theta_{AR}$, $\theta_{AC}$ fixed
      else if $i \bmod 4 = 1$ then
         Defend against AC: update $\theta_O$ by Eq. 5, with $\theta_C$, $\theta_{AR}$, $\theta_{AC}$ fixed
      else if $i \bmod 4 = 2$ then
         Reconstruct attack: update $\theta_{AR}$ by Eq. 7, with $\theta_O$, $\theta_C$ fixed
         Infer private attributes: update $\theta_{AC}$ by Eq. 4, with $\theta_O$, $\theta_C$ fixed
      else
         Recover C: update $\theta_C$ by Eq. 2, with $\theta_O$, $\theta_{AR}$, $\theta_{AC}$ fixed
      end if
   end for
end for
Algorithm 1: Adversarial Training Algorithm

Algorithm 1 summarizes the proposed four-stage adversarial training algorithm. Before performing the adversarial training, we first jointly train the obfuscator and the classifier without privacy concerns to obtain the optimal performance on the intended classification tasks. Similarly, we also pre-train the adversary classifier and the adversary reconstructor for initialization. As Algorithm 1 shows, within each epoch of training, each adversarial training iteration consists of four batches. In the first two batches, we train the obfuscator to defend against the adversary reconstructor and the adversary classifier while keeping the classifier unchanged. For the third batch, we optimize the adversary reconstructor and the adversary classifier by simulating an attacker, while the parameters of the obfuscator and the classifier are fixed. Finally, we optimize the classifier to improve the classification accuracy on the intended tasks.
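The four-stage schedule can be condensed into the sketch below, which reuses the per-stage update functions from Section 3; the assumption that the stages cycle with the batch index, the loader format, and the learning rate are ours.

```python
import torch

# Classifier-recovery optimizer (stage 4); the other optimizers are defined
# in the Section 3 sketches.
opt_c = torch.optim.Adam(classifier.parameters(), lr=1e-4)

def classifier_recovery_step(images, target_labels):
    # Stage 4: re-train the classifier on obfuscated features (Eq. 2, theta_O fixed).
    with torch.no_grad():
        feats = obfuscator(images)
    loss_c = bce(classifier(feats), target_labels.float())
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()

def adversarial_train(loader, epochs=10):
    # loader is assumed to yield (images, target_labels, private_labels) batches.
    for _ in range(epochs):
        for i, (images, target_labels, private_labels) in enumerate(loader):
            stage = i % 4
            if stage == 0:      # defend against the adversary reconstructor (Eq. 9)
                defender_ar_step(images, target_labels)
            elif stage == 1:    # defend against the adversary classifier (Eq. 5)
                defender_ac_step(images, target_labels, private_labels)
            elif stage == 2:    # simulate the attacker (Eqs. 4 and 7)
                attacker_ar_step(images)
                attacker_ac_step(images, private_labels)
            else:               # recover the classifier on the intended tasks (Eq. 2)
                classifier_recovery_step(images, target_labels)
```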

4 Evaluation

4.1 Experiment Setup

We implement DeepObfuscator with PyTorch and train it on a server with 4 NVIDIA TITAN RTX GPUs. We apply the mini-batch technique in training with a batch size of 64, and adopt the Adam optimizer [7] with an adaptive learning rate in all four stages of the adversarial training procedure. We also set the tradeoff parameters $\lambda_1$ and $\lambda_2$ in our experiments. The architecture configurations of each module are presented in Table 1.

We adopt CelebA [10] for the training and testing of DeepObfuscator. CelebA consists of more than 200K face images, each labeled with 40 binary facial attributes. The dataset is split into 160K images for training and 40K images for testing.
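As a rough sketch of this setup, the snippet below loads CelebA through torchvision with the batch size of 64 mentioned above; the transform, worker count, and use of torchvision's official split (rather than the exact 160K/40K partition quoted here) are assumptions.

```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# CelebA aligned images are 218x178 with 40 binary attributes per image.
transform = transforms.ToTensor()
train_set = datasets.CelebA(root="data", split="train", target_type="attr",
                            transform=transform, download=True)
test_set = datasets.CelebA(root="data", split="test", target_type="attr",
                           transform=transform, download=True)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=4)
test_loader = DataLoader(test_set, batch_size=64, shuffle=False, num_workers=4)
```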

Obfuscator: conv3-64, conv3-64, maxpool, conv3-128, conv3-128, maxpool
Adversary Reconstructor: Upsample, deconv3-128, deconv3-64, Upsample, deconv3-64, deconv3-3
Classifier & Adversary Classifier: 3×conv3-256, maxpool, 3×conv3-512, maxpool, 3×conv3-512, maxpool, 2×FC-4096, FC-(label length), sigmoid
Table 1: The architecture configurations of each module.

4.2 Motivation

Before presenting our performance evaluations, we first verify our motivation that defending against only the reconstruct attack or only private attribute leakage does not protect against the other. We apply our proposed adversarial training algorithm to defend against only one of them at a time and evaluate the attack performance on the other. In this experiment, we select 'gender' as the private attribute. The results, presented in Section 1, verify our motivation.

One naïve solution to this exclusiveness is to first train an obfuscator to defend against one of the two attacks using the adversarial training approach, and then continue training the obfuscator to defend against the other. However, this naïve solution cannot simultaneously defend against both attacks, because the parameters of the obfuscator keep being updated in the second step, which undoes the defense learned in the first step. This limitation motivates the design of DeepObfuscator.

4.3 Baselines

Two baseline models are selected for comparison with our design in the experiments. The first baseline directly adds Gaussian noise [1, 3] to the raw images, degrading the image quality until the images become just unrecognizable. This baseline ensures that the classification accuracy is affected as little as possible. The second baseline has the same architecture as DeepObfuscator but is trained without our proposed adversarial training approach.
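The noise-adding baseline can be sketched as follows; the noise level sigma is a placeholder, since the paper only states that noise is added until the image becomes just unrecognizable.

```python
import torch

def add_gaussian_noise(images, sigma=0.3):
    # Perturb raw images in [0, 1] with Gaussian noise and clip back to the
    # valid pixel range; sigma controls how unrecognizable the image becomes.
    noisy = images + sigma * torch.randn_like(images)
    return noisy.clamp(0.0, 1.0)
```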

4.4 Effectiveness of Defending Against Reconstruct Attack

We quantitatively evaluate the quality of the reconstructed images and validate the quantitative results through a human perceptual study. Before showing the quantitative results, we perform an experiment that simulates the reconstruct attack using different reconstructor architectures. As introduced in Section 3.4, we adopt the most powerful decoder, i.e., the exact reverse of the obfuscator, as the adversary reconstructor during training. However, an attacker may not know the architecture of the obfuscator and may therefore mount a brute-force attack using reconstructors with different architectures. We implement three additional reconstructors as attackers in our experiments. The architectural configurations of these reconstructors are presented in Table 2. URec#1 and URec#2 are built based on the U-Net architecture [16], and ResRec is implemented using the ResNet architecture [4]. Each reconstructor is separately trained with the same pre-trained obfuscator.

We adopt MS-SSIM to evaluate the quality of the reconstructed images generated by each reconstructor, i.e., comparing the similarity between the reconstructed image and the corresponding raw image. A smaller MS-SSIM value means less similarity between the reconstructed image and the raw image, indicating a more effective defense against the reconstruct attack. Table 3 presents the average MS-SSIM of the attacking reconstructors on the testing data. The results show that although we apply the mirrored obfuscator architecture for the adversary reconstructor when training the obfuscator, the trained obfuscator can effectively defend against reconstruct attacks regardless of the architecture adopted by the attacker for the reconstructor.

All three attack reconstructors (URec#1, URec#2, ResRec) take the 54×44×128 feature maps produced by the obfuscator as input and stack conv3-64, conv3-128, and conv3-256 blocks interleaved with ×2 upsampling (URec#2 additionally uses a transconv3-64 layer, and ResRec follows the ResNet-style layout), ending with a convolution to 3 output channels and a sigmoid.
Table 2: Adversary reconstructor configurations.
Training reconstructor: AR in DeepObfuscator
Attack reconstructor | MS-SSIM
AR in DeepObfuscator | 0.3175
URec#1 | 0.3123
URec#2 | 0.3095
ResRec | 0.3169
Table 3: MS-SSIM for different attack reconstructors.

Quantitative Evaluation. In addition to MS-SSIM, we also adopt the Peak Signal-to-Noise Ratio (PSNR), a widely used metric of image quality, to evaluate the quality of the reconstructed images. In this experiment, the obfuscator is trained by setting the intended classification task to 'smile' and the private attribute to 'gender'. Smaller values of MS-SSIM and PSNR indicate a stronger defense against the reconstruct attack. Table 4 presents the average MS-SSIM and PSNR on the testing data for DeepObfuscator and the two baseline models. The results show that DeepObfuscator is the most effective at defending against the reconstruct attack, while the baseline model without adversarial training can hardly hide private information in the features. Figure 4 illustrates several examples of the reconstructed images. With our proposed adversarial training, the images reconstructed from the obfuscated features become unrecognizable. Even though directly adding noise to the raw image can hide more private information than the baseline model without adversarial training, the person in the image reconstructed under the noise-adding baseline can still be re-identified. In summary, both the quantitative evaluations and the visual results show that DeepObfuscator can effectively defend against the reconstruct attack.
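The evaluation loop for these two metrics can be sketched as below; the PSNR helper follows the standard definition for images in [0, 1], and the pytorch-msssim package is again an assumption.

```python
import torch
from pytorch_msssim import ms_ssim  # assumed third-party package

def psnr(x, y, max_val=1.0):
    # Peak Signal-to-Noise Ratio between two image batches in [0, 1].
    mse = torch.mean((x - y) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)

@torch.no_grad()
def evaluate_reconstruction(obfuscator, reconstructor, loader):
    # Average MS-SSIM and PSNR of reconstructions over a test loader;
    # lower values indicate a stronger defense against the reconstruct attack.
    ssim_vals, psnr_vals = [], []
    for images, _ in loader:
        recon = reconstructor(obfuscator(images))
        ssim_vals.append(ms_ssim(recon, images, data_range=1.0).item())
        psnr_vals.append(psnr(recon, images).item())
    return sum(ssim_vals) / len(ssim_vals), sum(psnr_vals) / len(psnr_vals)
```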

Metric | DeepObfuscator | Baseline (without adversarial training) | Baseline (noise)
MS-SSIM | 0.3175 | 0.9458 | 0.7263
PSNR | 6.32 | 27.81 | 16.97
Table 4: Average PSNR and MS-SSIM for DeepObfuscator and the two baseline models.
Figure 4: Comparison of images reconstructed with DeepObfuscator and the two baseline models.

Human Perceptual Study. We also conduct an online human perceptual study to directly examine whether a person in a reconstructed image can be re-identified by humans. The study consists of 10 questions, each of which includes one reconstructed image and four raw images as options; one of the four options contains the person in the reconstructed image. Participants are instructed to choose the option that looks most like the person in the reconstructed image. Figure 5 shows one example question from the survey. It is very difficult to find hints in the reconstructed image that point to the correct answer, i.e., Figure 5(e). We collected 40 responses in total for this study. The average re-identification accuracy over the 10 questions is 28%, which is very close to random guessing, i.e., 25% for 4 options. Furthermore, if no options were offered to an attacker, re-identifying the person from the reconstructed image alone would be even more challenging.

Figure 5: An example question of the human perceptual study. (a) is the reconstructed image, and (b)-(e) are the four options.

4.5 Effectiveness of Defending Against Private Attribute Leakage

We compare the performance of the classifier and the adversary classifier trained in the way adopted by DeepObfuscator and by the two baseline models. Specifically, we choose detecting 'smile' and 'high cheekbone' as the intended classification tasks, and 'gender' and 'heavy makeup' as the private attributes that the attacker aims to infer from the obfuscated features. We design 6 testing sets using different combinations of these attributes: (1) {smile, gender}; (2) {high cheekbone, gender}; (3) {smile, high cheekbone, gender}; (4) {smile, gender, heavy makeup}; (5) {high cheekbone, gender, heavy makeup}; (6) {smile, high cheekbone, gender, heavy makeup}. These six testing sets can be divided into two groups: the first three contain one private attribute, and the last three include two private attributes. Within each group, we explore how the number of intended classification tasks affects the protection of the private attributes. In addition, by comparing the corresponding testing sets across the two groups, we can investigate how the accuracy of the intended tasks changes with the number of private attributes that need to be protected.
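For reference, selecting these attribute columns from the 40-dimensional CelebA attribute vector might look like the sketch below; it assumes the torchvision CelebA dataset from the setup sketch, which exposes the attribute names via attr_names.

```python
# Hypothetical selection of intended vs. private attribute columns.
target_names = ["Smiling", "High_Cheekbones"]   # intended tasks
private_names = ["Male", "Heavy_Makeup"]        # private attributes

name_to_idx = {name: i for i, name in enumerate(train_set.attr_names)}
target_idx = [name_to_idx[n] for n in target_names]
private_idx = [name_to_idx[n] for n in private_names]

def split_labels(attrs):
    # attrs: (B, 40) binary attribute tensor from the CelebA loader
    return attrs[:, target_idx].float(), attrs[:, private_idx].float()
```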

Figure 6 shows the training losses of the classifier ($\mathcal{L}_C$), the adversary classifier ($\mathcal{L}_{AC}$), and the adversary reconstructor ($\mathcal{L}_{AR}$). The losses are recorded every 100 batches during the adversarial training procedure. In general, $\mathcal{L}_{AC}$ and $\mathcal{L}_{AR}$ significantly increase from 0.02 to 0.6 over the course of adversarial training, indicating a significant performance drop of the adversary classifier and the adversary reconstructor. On the contrary, $\mathcal{L}_C$ increases only slightly, implying a small performance drop of the classifier.

(a) (Smile, Gender)
(b) (High Cheekbone, Gender)
(c) (Smile, High Cheekbone, Gender)
(d) (Smile, Gender, Heavy Makeup)
(e) (High Cheekbone, Gender, Heavy Makeup)
(f) (Smile, High Cheekbone, Gender, Heavy Makeup)
Figure 6: Training loss for each testing set.

Figure 7 shows the average accuracy on the intended tasks and the private attributes achieved by the classifier and the adversary classifier trained in the way adopted by DeepObfuscator and by the two baseline models, respectively. With the proposed adversarial training, DeepObfuscator effectively prevents private labels from being inferred by an attacker while incurring only a small accuracy drop on the intended classification tasks. For example, Figure 7(a) shows that the accuracy of 'gender' dramatically decreases from 97.36% to 58.85%, while the accuracy of 'smile' drops by only 1.15%. On the contrary, the baseline model that adds noise to the raw image defends against private attribute leakage much less effectively while incurring a much larger performance drop on the intended tasks: as illustrated in Figure 7(a), the accuracy of 'gender' only decreases from 97.36% to 90.15%, so the private attribute can still be inferred with high confidence, whereas the accuracy of 'smile' drops significantly, by 5.36%.

Figure 7: Accuracy of intended tasks and private attributes using DeepObfuscator and baseline models.

Comparing Figure 7(c) with Figure 7(a-b), and Figure 7(f) with Figure 7(d-e), we observe that performing more intended tasks weakens the defense against private attribute leakage, due to the intrinsic correlation between the intended tasks and the private attributes that an attacker aims to infer. Specifically, the accuracy of 'gender' slightly increases from 58.85% in Figure 7(a) and 59.79% in Figure 7(b) to 63.96% in Figure 7(c). Similarly, the accuracies of 'gender' and 'heavy makeup' increase by 1% from Figure 7(d-e) to Figure 7(f). In addition, by comparing Figure 7(a-c) with Figure 7(d-f), we find that protecting more private attributes leads to a slight decrease in the accuracy of the intended tasks. For example, the accuracy of 'smile' slightly decreases from 91.53% in Figure 7(a) to 90.85% in Figure 7(d). The reason is that the features related to the private attributes have intrinsic correlations with the features related to the intended tasks. Therefore, more correlated features may be hidden when more private attributes need to be protected, and the performance of the intended tasks becomes harder to maintain.

5 Conclusion

We proposed DeepObfuscator, an adversarial training framework for privacy-preserving image classification that simultaneously defends against both the reconstruct attack and private attribute leakage. DeepObfuscator consists of an obfuscator, a classifier, an adversary reconstructor, and an adversary classifier. The obfuscator is trained using our proposed end-to-end adversarial training algorithm to hide sensitive information that could be exploited by an attacker to reconstruct raw images and infer private attributes, while retaining the features useful for the intended classification tasks. The adversary reconstructor and the adversary classifier play the attacker role in the adversarial training procedure, aiming to reconstruct the raw image and infer private attributes from the eavesdropped features. Evaluations on the CelebA dataset show that the quality of images reconstructed from the obfuscated features is significantly decreased from 0.9458 to 0.3175 in terms of MS-SSIM, indicating that the person in the reconstructed images can hardly be re-identified. The classification accuracy of the inferred private attributes drops by around 30% to a random-guessing level, while the accuracy of the intended classification tasks performed via the cloud service drops by only 2%. Although DeepObfuscator is applied to image classification in this paper, it can easily be extended to many other deep-learning-based applications, such as speech recognition.

References

  • [1] C. Dwork, A. Smith, T. Steinke, and J. Ullman (2017) Exposed! A survey of attacks on private data. Annual Review of Statistics and Its Application 4, pp. 61–84.
  • [2] C. Feutry, P. Piantanida, Y. Bengio, and P. Duhamel (2018) Learning anonymized representations with adversarial neural networks. arXiv preprint arXiv:1802.09386.
  • [3] J. He and L. Cai (2017) Differential private noise adding mechanism: basic conditions and its application. In 2017 American Control Conference (ACC), pp. 1673–1678.
  • [4] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
  • [5] A. Jourabloo, X. Yin, and X. Liu (2015) Attribute preserved face de-identification. In 2015 International Conference on Biometrics (ICB), pp. 278–285.
  • [6] T. Kim, D. Kang, K. Pulli, and J. Choi (2019) Training with the invisibles: obfuscating images to share safely for learning visual recognition models. arXiv preprint arXiv:1901.00098.
  • [7] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  • [8] S. Liu, A. Shrivastava, J. Du, and L. Zhong (2019) Better accuracy with quantified privacy: representations learned via reconstructive adversarial network. arXiv preprint arXiv:1901.08730.
  • [9] Z. Liu, X. Li, P. Luo, C. Loy, and X. Tang (2015) Semantic image segmentation via deep parsing network. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1377–1385.
  • [10] Z. Liu, P. Luo, X. Wang, and X. Tang (2015) Deep learning face attributes in the wild. In Proceedings of the International Conference on Computer Vision (ICCV).
  • [11] K. Ma, Q. Wu, Z. Wang, Z. Duanmu, H. Yong, H. Li, and L. Zhang (2016) Group MAD competition - a new methodology to compare objective image quality models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1664–1673.
  • [12] A. Mahendran and A. Vedaldi (2015) Understanding deep image representations by inverting them. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5188–5196.
  • [13] S. A. Osia, A. S. Shamsabadi, A. Taheri, K. Katevas, H. R. Rabiee, N. D. Lane, and H. Haddadi (2017) Privacy-preserving deep inference for rich user data on the cloud. arXiv preprint arXiv:1710.01727.
  • [14] S. A. Osia, A. Taheri, A. S. Shamsabadi, M. Katevas, H. Haddadi, and H. R. Rabiee (2018) Deep private-feature extraction. IEEE Transactions on Knowledge and Data Engineering.
  • [15] O. M. Parkhi, A. Vedaldi, A. Zisserman, et al. (2015) Deep face recognition. In BMVC, Vol. 1, pp. 6.
  • [16] O. Ronneberger, P. Fischer, and T. Brox (2015) U-Net: convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234–241.
  • [17] M. S. Ryoo, B. Rothrock, C. Fleming, and H. J. Yang (2017) Privacy-preserving human activity recognition from extreme low resolution. In Thirty-First AAAI Conference on Artificial Intelligence.
  • [18] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
  • [19] Z. Wang, E. P. Simoncelli, and A. C. Bovik (2003) Multiscale structural similarity for image quality assessment. In The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, Vol. 2, pp. 1398–1402.
  • [20] Z. Wu, Z. Wang, Z. Wang, and H. Jin (2018) Towards privacy-preserving visual recognition via adversarial training: a pilot study. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 606–624.