Adversarial representation learning for synthetic replacement of private attributes

June 14, 2020, by John Martinsson et al.

The collection of large datasets allows for advanced analytics that can lead to improved quality of life and progress in applications such as machine cognition and medical analysis. However, there has recently been increased pressure to guarantee the privacy of users when collecting data. In this work, we study how adversarial representation learning can be used to ensure the privacy of users and to obfuscate sensitive attributes in existing datasets. While previous methods using adversarial representation learning for privacy only aim at obfuscating the sensitive information, we find that adding new information in its place can strengthen the provided privacy. We propose a method, building on generative adversarial networks, that privatizes data in two steps. In the first step, sensitive data is removed from the representation. In the second step, a sample which is independent of the input data is inserted in its place. The result is an approach that can provide stronger privatization of image data while preserving both the domain and the utility of the inputs.


1 Introduction

The list of success stories from big data analytics is long; statistical studies for medical diagnosis (Litjens et al., 2017) and business analytics (Chen et al., 2012) are just two examples. The amounts of data required to train machine learning models for processing natural language and images are growing with the scale and capabilities of the models (Goodfellow et al., 2016). However, collecting large datasets that potentially contain sensitive information about individuals can be difficult, because obtaining consent may be challenging. Furthermore, privacy laws are being enacted in many countries to protect the rights of individuals, making it hard to use data with sensitive information. Being able to give privacy guarantees on a dataset may be a way to allow the data to be distributed while protecting the rights of individuals, thus unlocking the large benefits that big datasets can provide for individuals and for society.

In this work, we study techniques for the selective anonymization of image datasets. The end goal is to provide the original data in as much detail as possible, retaining most of its information, while at the same time making it hard for an adversary to detect specific sensitive attributes. The proposed solution is agnostic to the downstream task, with the objective of making the data as private as possible given a distortion constraint.

Previous research has addressed this issue using adversarial representation learning with some success: a filter model is trained to hide sensitive information while an adversary model is trained to recover it. While previous work had the objective of training the filter model so that it is hard for the adversary to uncover the sensitive attributes (Huang et al., 2018), we instead explore this task under the assumption that it is easier to hide sensitive information if it is replaced with something else: a sample which is independent of the input data.

In our setup, the adversary can make an arbitrary number of queries to the model; each time, a new sample is produced from the distribution of the sensitive data, while as much as possible of the non-sensitive information about the requested data point is kept.

Besides the adversary module, our proposed solution includes two main components: a filter model that is trained to remove the sensitive information, and a generator model that inserts a synthetically generated new value for the sensitive attribute. The generated sensitive information is entirely independent of the sensitive information in the original input image. Following a body of work in privacy-related adversarial learning, we evaluate the proposed model on faces from the CelebA dataset and consider, for example, the smile of a person to be the sensitive attribute. The smile is an attribute that carries interesting aspects of the transformations of a human face. Even if the obvious change resides close to the mouth, subtle changes occur in many other parts of the face when a person smiles: eyelids tighten, dimples may occur, and the skin may wrinkle. The current work also includes a thorough analysis of the dataset, including correlations between such features. These correlations make the task interesting and challenging, reflecting the real difficulty that may occur when anonymizing data. What is the right trade-off between preserving the utility, defined by allowing information about other attributes to remain, and removing the sensitive information? Our method is easily generalizable and can be applied to other visual attributes such as gender, race, background cues, etc.

2 Related work

Adversarial learning is the process of training a model with the objective of fooling an adversary (a second model). The adversary is trained simultaneously, and both become increasingly good at their respective tasks during training. This approach has been used successfully to learn image-to-image transformations (Isola et al., 2017) and to synthesize properties such as facial expressions (Song et al., 2017; Tang et al., 2019), and it has also been used to train generative models (Goodfellow et al., 2014). Privacy-preserving adversarial representation learning studies how to utilize this learning framework to learn representations of data that hide some sensitive information yet retain the utility.

A body of prior work has been dedicated to employing adversarial representation learning to hide sensitive attributes while retaining the useful information (Edwards and Storkey, 2016; Zhang et al., 2018; Xie et al., 2017; Beutel et al., 2017; Raval et al., 2017). Bertran et al. (2019) presented a privacy-preserving mechanism that minimizes the mutual information between the utility variable and the input image data conditioned on the learned representation. Chandan Roy and Naresh Boddeti (2019) proposed that the generator in the adversarial learning setup should be optimized to maximize the entropy of the discriminator output rather than to minimize the log likelihood, and showed that this is beneficial in the multi-class setting. Osia et al. (2020) treated the problem as an information-bottleneck problem; the resulting images are optimized to be useful for predicting a specific binary attribute while hiding the identity of the person. Wu et al. (2018), Wang et al. (2019), and Ren et al. (2018) studied how to learn transformations of video that respect a privacy budget while maintaining performance on the downstream task. Tran et al. (2018) proposed an approach for pose-invariant face recognition. Similar to our work, their approach used adversarial representation learning to disentangle specific attributes in the data.

Oh et al. (2017) trained an obfuscator network to add a minimal amount of perturbation noise to the input to make the output fool a person recognition adversary.

All these proposed solutions, with the exception of Edwards and Storkey (2016), depend on knowing the downstream task labels. Our work has no such dependency: the data produced by our method is designed to be usable regardless of downstream task.

The work most closely related to ours is that of Huang et al. (2017) and Huang et al. (2018). They use adversarial learning to minimize the mutual information between the private attribute and the censored image under a distortion constraint. We extend these ideas by proposing a modular design consisting of a filter that is adversarially trained to obfuscate the data points, and a generator that further enhances the privacy by adding new, independently sampled synthetic information for the sensitive attributes. Our work is closely connected to the learning of fair representations (Zemel et al., 2013), and the proposed generator module could be applied to counterfactual reasoning in a fashion similar to Johansson et al. (2016) by controlling the private attribute input to the generator deterministically.

Some prior work has assumed access to a privacy-preserving mechanism, such as bounding boxes for faces, and has studied to what extent the identity of a person can be hidden by blurring (Oh et al., 2016a), removing (Orekondy et al., 2018), or generating the face of another person (Hukkelås et al., 2019) in its place. Other work has assumed access to a utility-preserving mechanism and proposed to obfuscate everything except what they want to retain (Alharbi et al., 2019). But this raises the question: how do we find the pixels in an image that need to be modified to preserve privacy with respect to some attribute, or alternatively the pixels that need to be kept to preserve utility? Furthermore, Oh et al. (2016b) showed that blurring or removing the head of a person has a limited effect on privacy with respect to person recognition. The finding is crucial: we cannot rely on modifications of an image such as blurring or overpainting to achieve privacy. An adversarial set-up instead captures the signals that the adversary uses, and can attain stronger privacy.

3 Privacy-preserving adversarial representation learning

In the current work, we focus on utility-preserving transformations of data: we use privacy-preserving representation learning to obfuscate sensitive information in the input data, and seek to output results that retain the remaining information and structure of the input.

3.1 Problem setting

Generative adversarial privacy (GAP) (Huang et al., 2018) was proposed as a method to provide privacy while maintaining the utility of an image dataset, and it is used as the baseline in the current work. In GAP, one assumes a joint distribution p(x, s) of public data points x and sensitive private attributes s, where s is typically correlated with x. The authors define a privacy mechanism x' = f(x, z), where z is the source of noise or randomness in f. Let ŝ = h(f(x, z)) be an adversary's prediction of the sensitive attribute s from the privatized data x' according to a decision rule h. The performance of the adversary is thus measured by a loss function ℓ(h(f(x, z)), s), and the expected loss of the adversary with respect to x, s and z is

    L(h, f) = E_{x,s,z}[ℓ(h(f(x, z)), s)],    (1)

where z is the source of noise.

The privacy mechanism f should be both privacy-preserving and utility-preserving. That is, it should be hard for an analyst to infer s from x' = f(x, z), but f(x, z) should be minimally distorted with respect to x. The authors (Huang et al., 2018) formulate this as a constrained minimax problem

    min_f max_h  −L(h, f)
    s.t.  E_{x,z}[d(f(x, z), x)] ≤ ε,

where the constant ε defines the allowed distortion for the privatizer and d is some distortion measure.

In the current work, we call the privacy mechanism f the filter, since the purpose of f is to filter the sensitive information from x. A potential drawback of this formulation is that it only removes the sensitive information in x, which may make it obvious to the adversary that x' is a censored version of x. Instead, in addition to removing the sensitive information, we propose to replace it with a new value s' that is sampled from the uniform distribution U{0, 1} (a close approximation of the marginal distribution of the smiling attribute), independent of s.
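In practice, one reasonable reading (our assumption, motivated by the quadratic penalty coefficient that appears among the hyperparameters in Section 3.3) is that the distortion constraint is folded into the filter objective as a penalty rather than enforced exactly. Given the current adversary h, the filter f would then minimize

    L_filter(f; h) = −E_{x,s,z}[ℓ(h(f(x, z)), s)] + λ max(0, E_{x,z}[d(f(x, z), x)] − ε)²,

while the adversary h is updated in alternation to minimize E_{x,s,z}[ℓ(h(f(x, z)), s)].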

3.2 Our contribution

We extend the filter with an additional module which we call the generator. We define the generator mechanism as x'' = g(f(x, z'), s', z''), where s' denotes the random variable of the new synthetic sensitive attribute, and z' and z'' are the sources of noise or randomness in f and g, respectively. The discriminator h is then trained to predict s when the input is a real image, and to predict the "fake" output when the input comes from g, as in the semi-supervised learning setup in (Salimans et al., 2016). The objective of the generator is to generate a new synthetic (independent) sensitive attribute s' in x'' that will fool the discriminator h. We further define the loss of the discriminator as

    L_g(h, g) = E_{x,s}[ℓ(h(x), s)] + E_{x,z,s'}[ℓ(h(g(f(x, z'), s', z'')), c_fake)],

where z = (z', z'') is the source of noise, p(s') is the assumed distribution of the synthetic sensitive attributes s', c_fake is the fake class, and ℓ is the loss function. We formulate this as a constrained minimax problem

    min_g max_h  −L_g(h, g)
    s.t.  E_{x,z,s'}[d(g(f(x, z'), s', z''), x)] ≤ ε_g,

where the constant ε_g defines the allowed distortion for the generator. An overview of the setup can be seen in Figure 1.

Figure 1: An overview of the training setup. This figure does not show the discriminators.
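To make the two-step mechanism concrete, the following is a minimal PyTorch-style sketch of the forward pass. The module names filter_net and generator_net are hypothetical placeholders for f and g; the actual architectures are described in Section 3.3.

```python
# A minimal sketch of the two-step privatization described above.
# `filter_net` and `generator_net` are hypothetical placeholders for f and g.
import torch

def privatize(filter_net, generator_net, x, noise_dim=1024):
    """Step 1: censor the sensitive attribute. Step 2: insert a synthetic
    attribute s' drawn independently of the input, s' ~ U{0, 1}."""
    b = x.size(0)
    z1 = torch.randn(b, noise_dim)                      # noise for the filter
    z2 = torch.randn(b, noise_dim)                      # noise for the generator
    s_syn = torch.randint(0, 2, (b,))                   # independent synthetic attribute
    x_filtered = filter_net(x, z1)                      # x'  = f(x, z')
    x_synthetic = generator_net(x_filtered, s_syn, z2)  # x'' = g(x', s', z'')
    return x_filtered, x_synthetic, s_syn

# The generator's discriminator is trained with K + 1 classes: the K attribute
# classes for real images and one extra "fake" class for synthetic images, as in
# the semi-supervised setup of Salimans et al. (2016). For a binary attribute,
# class index 2 would play the role of the fake class.
```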

3.3 Data-driven implementation

Typically, we do not have access to the true data distribution p(x, s), but we have access to a dataset of samples which are assumed to be independently and identically distributed according to the unknown joint distribution p(x, s). We assume that the sensitive attribute is binary and takes values in {0, 1}. However, the proposed approach can easily be extended to categorical sensitive attributes. We model the filter mechanism f with a convolutional neural network parameterized by θ_f, and the generator mechanism g with a convolutional neural network parameterized by θ_g. We use the UNet (Ronneberger et al., 2015) architecture for both the filter and the generator, as illustrated in Figure 2. The orange blocks are convolution blocks, each of which, except for the last block, consists of a convolution layer, a batch normalization layer and a rectified linear activation unit, repeated twice in that order. The number of output channels of the convolution layers in each block is noted in Figure 2. The last convolution block, with a 3-channel output (the RGB image), consists of a single convolutional layer followed by a sigmoid activation. The green blocks denote either a max pooling layer with a kernel size of two and a stride of two if marked with "/2", or a nearest-neighbor upsampling by a factor of two if marked with "2x". The blue block denotes an embedding layer, which takes as input the categorical value of the sensitive attribute and outputs a dense embedding of 128 dimensions. It is followed by a linear projection and a reshaping to match the spatial dimensions of the output of the convolution block to which it is concatenated, but with a single channel. The same type of linear projection is applied to the 1024-dimensional noise vector input, but this projection and reshaping matches both the spatial and channel dimensions of the output of the convolution block to which it is concatenated. Concatenation is in both cases done along the channel dimension.
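As an illustration of this conditioning, the sketch below shows how the 128-dimensional attribute embedding and the 1024-dimensional noise vector might be projected and concatenated along the channel dimension. The module names and the exact placement inside the UNet are assumptions; only the dimensions follow the text.

```python
import torch
import torch.nn as nn

class ConditioningBranches(nn.Module):
    """Sketch of the embedding and noise branches described in the text: a 128-d
    attribute embedding projected to one extra channel, and a 1024-d noise vector
    projected to match both spatial and channel dimensions of a feature map."""
    def __init__(self, feat_channels, feat_size):
        super().__init__()
        self.embed = nn.Embedding(2, 128)                        # categorical attribute -> 128-d
        self.embed_proj = nn.Linear(128, feat_size * feat_size)  # -> one extra channel
        self.noise_proj = nn.Linear(1024, feat_channels * feat_size * feat_size)
        self.feat_channels, self.feat_size = feat_channels, feat_size

    def forward(self, features, s_syn, z):
        b = features.size(0)
        e = self.embed_proj(self.embed(s_syn)).view(b, 1, self.feat_size, self.feat_size)
        n = self.noise_proj(z).view(b, self.feat_channels, self.feat_size, self.feat_size)
        # concatenate along the channel dimension, as described in the text
        return torch.cat([features, e, n], dim=1)
```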

The discriminators for the filter and the generator are modeled using ResNet-18 (He et al., 2016) and a modified version which we refer to as ResNet-10 (the same setup as ResNet-18, but each of the "conv2_x", "conv3_x", "conv4_x", and "conv5_x" layers consists of only one block instead of two), respectively. The last fully connected layer has been replaced with a two-class and a three-class output layer, respectively.
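As a sketch, the two discriminator backbones could be instantiated with torchvision as follows; per the text, the filter's discriminator gets a two-class head and the generator's discriminator a three-class head (the two attribute classes plus the fake class).

```python
import torch.nn as nn
from torchvision.models import resnet18
from torchvision.models.resnet import ResNet, BasicBlock

# Discriminator for the filter: ResNet-18 with a two-class output head.
d_filter = resnet18()
d_filter.fc = nn.Linear(d_filter.fc.in_features, 2)

# Discriminator for the generator: "ResNet-10", i.e. one BasicBlock per stage
# instead of two, with a three-class head (two attribute classes + fake class).
d_generator = ResNet(BasicBlock, [1, 1, 1, 1], num_classes=3)
```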

The algorithm used to train the filter, the generator, and the discriminators is described in Algorithm 1, where the loss function ℓ_f for the filter is the entropy of the discriminator output, the loss functions ℓ_g and ℓ_d for the generator and the discriminators are the categorical cross entropy, the distortion measure d is the L2-norm, and the distribution p(s') of the synthetic attribute is assumed to be the uniform distribution U{0, 1}. The hyperparameters consist of the learning rate α, the quadratic penalty term coefficient λ, the distortion constraint ε, and the parameters of Adam (Kingma and Ba, 2014). We also include results where ℓ_f is the categorical cross entropy, as in the baseline.

  input: learning rate α, quadratic penalty coefficient λ, distortion constraint ε, Adam parameters; filter, generator and discriminator parameters θ_f, θ_g, θ_{d_f}, θ_{d_g}
  repeat
     Draw m samples uniformly at random from the dataset:
        {(x_i, s_i)}_{i=1}^{m}
     Draw m samples from the noise distribution:
        z'_i, z''_i ∼ p(z)
     Draw m samples from the synthetic distribution:
        s'_i ∼ U{0, 1}
     Compute censored and synthetic data:
        x'_i = f_{θ_f}(x_i, z'_i)
        x''_i = g_{θ_g}(x'_i, s'_i, z''_i)
     Compute filter and generator losses:
        L_f = (1/m) Σ_i ℓ_f(d_{θ_{d_f}}(x'_i)) + λ max(0, ||x'_i − x_i||_2 − ε)²
        L_g = (1/m) Σ_i ℓ_g(d_{θ_{d_g}}(x''_i), s'_i) + λ max(0, ||x''_i − x_i||_2 − ε)²
     Update filter and generator parameters:
        θ_f ← Adam(θ_f, ∇_{θ_f} L_f, α)
        θ_g ← Adam(θ_g, ∇_{θ_g} L_g, α)
     Compute discriminator losses:
        L_{d_f} = (1/m) Σ_i ℓ_d(d_{θ_{d_f}}(x'_i), s_i)
        L_{d_g} = (1/m) Σ_i [ℓ_d(d_{θ_{d_g}}(x_i), s_i) + ℓ_d(d_{θ_{d_g}}(x''_i), c_fake)]
     Update discriminator parameters:
        θ_{d_f} ← Adam(θ_{d_f}, ∇_{θ_{d_f}} L_{d_f}, α)
        θ_{d_g} ← Adam(θ_{d_g}, ∇_{θ_{d_g}} L_{d_g}, α)
  until stopping criterion
  return θ_f, θ_g
Algorithm 1: Training of the filter, the generator, and the discriminators.
Figure 2: The architecture of the filter and generator networks. The notation x, z, s' → x' / x'' means that the network takes these inputs and gives x' (filter) or x'' (generator) as output. In the filter we do not use the embedding branch.
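Below is a minimal PyTorch sketch of one training iteration following Algorithm 1. The module, optimizer and hyperparameter names (filter_net, gen_net, d_filter, d_gen, opt_fg, opt_d, lam, eps) are placeholders introduced for illustration; the filter loss is written as the negative entropy of the filter discriminator's output and the distortion constraint as a quadratic penalty, as described above, but the exact bookkeeping is an assumption.

```python
import torch
import torch.nn.functional as F

def training_step(x, s, filter_net, gen_net, d_filter, d_gen,
                  opt_fg, opt_d, lam=100.0, eps=0.01):
    """One training iteration sketched after Algorithm 1: alternating updates with
    a quadratic penalty on the distortion constraint. lam and eps are placeholder
    values, not the paper's settings."""
    b = x.size(0)
    z1, z2 = torch.randn(b, 1024), torch.randn(b, 1024)
    s_syn = torch.randint(0, 2, (b,))

    # --- filter and generator update ---
    x_f = filter_net(x, z1)                      # censored image x'
    x_g = gen_net(x_f, s_syn, z2)                # censored image with synthetic attribute x''
    probs = F.softmax(d_filter(x_f), dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    dist_f = (x_f - x).flatten(1).norm(dim=1).mean()
    dist_g = (x_g - x).flatten(1).norm(dim=1).mean()
    loss_filter = -entropy + lam * F.relu(dist_f - eps) ** 2    # maximize discriminator entropy
    loss_gen = F.cross_entropy(d_gen(x_g), s_syn) + lam * F.relu(dist_g - eps) ** 2
    opt_fg.zero_grad()
    (loss_filter + loss_gen).backward()
    opt_fg.step()

    # --- discriminator update ---
    fake_class = torch.full((b,), 2, dtype=torch.long)          # index of the extra "fake" class
    loss_d_filter = F.cross_entropy(d_filter(x_f.detach()), s)
    loss_d_gen = (F.cross_entropy(d_gen(x), s)
                  + F.cross_entropy(d_gen(x_g.detach()), fake_class))
    opt_d.zero_grad()
    (loss_d_filter + loss_d_gen).backward()
    opt_d.step()
```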

4 Experiments

In this section we describe our experiments, the dataset used to evaluate the method, and the evaluation metrics. We have evaluated our method on the facial images from the CelebA dataset (Liu et al., 2015), available at http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html.

The preservation of utility for a downstream task is measured by training a classifier to detect non-sensitive attributes, which are known from the original dataset, in the censored images.

4.1 CelebA

The CelebA dataset (Liu et al., 2015) consists of 202,599 face images of size 218x178 pixels with 40 binary attribute annotations per image, such as age (old or young), gender, whether the image is blurry, whether the person is bald, etc. We include experiments where we use the smiling attribute and the gender attribute from CelebA, respectively, as the sensitive variable to censor and synthesize. The dataset has an official split into a training set of 162,770 images, a validation set of 19,867 images and a test set of 19,962 images. We resize all images to 64x64 pixels and normalize all pixel values to the range [0, 1], unless otherwise specified.
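For reference, a minimal sketch of this preprocessing with torchvision is shown below, assuming the dataset is available locally; the attribute indices for Smiling and Male follow the standard CelebA attribute ordering and are our assumption, not stated in the text.

```python
from torchvision import datasets, transforms

# Resize CelebA images to 64x64 and scale pixel values to [0, 1] (ToTensor).
preprocess = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.ToTensor(),            # pixel values in [0, 1]
])

train_set = datasets.CelebA(root="data", split="train",
                            target_type="attr", transform=preprocess)
x, attrs = train_set[0]               # image tensor and 40 binary attributes
smiling = attrs[31]                   # "Smiling" in the standard attribute ordering
gender = attrs[20]                    # "Male" in the standard attribute ordering
```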

4.2 Filtering and replacement of sensitive data

Let D_train = {(x_i, s_i)} be a set of training data, where x_i denotes a facial image and s_i denotes the sensitive attribute (either smiling or gender). Further, let D_test be the held-out test data. Since we evaluate on CelebA (Liu et al., 2015), we have 162,770 examples in the training data and 19,962 in the test data. We also, for the purpose of evaluation only, assume access to a number of utility attributes u_i for each x_i. We consider the utility attributes gender, wearing lipstick, young, high cheekbones, mouth slightly open, and heavy makeup, and exclude the one currently used as the sensitive attribute in each experiment. The utility attributes are used to evaluate how well non-sensitive attributes are preserved after the images have been censored.

Let θ_f and θ_g denote the parameters of the filter and generator, respectively, obtained by Algorithm 1 applied to D_train with the chosen learning rate α, penalty coefficient λ, distortion constraint ε, and Adam parameters. We can then define the training data censored by the filter as D'_train = {(x'_i, s_i)}, where x'_i = f_{θ_f}(x_i, z'_i), and the training data censored by the filter and the generator as D''_train = {(x''_i, s_i)}, where x''_i = g_{θ_g}(x'_i, s'_i, z''_i), z'_i and z''_i are drawn from the noise distribution, and s'_i ∼ U{0, 1}. We apply the same transformations to the test data and denote the results D'_test and D''_test, respectively.

Each experiment is run on a Tesla V100 SXM2 32 GB in a DGX-1 machine. The training is restricted to 100 epochs which takes about 13 hours.

To evaluate the two methods we use four different metrics. We train three adversarial classifiers to predict the ground truth smile of a person given an image censored by the baseline, by only the generator, and by our full method, respectively. That is, we train each adversary on the correspondingly censored training set and evaluate its accuracy on the correspondingly censored test set, which measures how predictable the sensitive attribute is from a censored image in this adversarial setting.

In addition, we train a fixed classifier to predict the gender of a person on the original, uncensored training data and use it to measure how well the gender attribute is preserved after censoring the data. We also train a fixed classifier to predict the smile of a person on the original, uncensored training data and use it to measure how well the smile is censored.

To quantify the image quality of the censored images, we use the Fréchet Inception Distance (FID) (Heusel et al., 2017), a metric frequently used in the GAN literature to measure image quality.
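As a sketch, the FID between original and censored test images could be computed with torchmetrics as follows; the paper does not state which FID implementation was used, and the tensors below are placeholders.

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

real_images = torch.rand(16, 3, 64, 64)      # placeholder: original test images
censored_images = torch.rand(16, 3, 64, 64)  # placeholder: images censored by f and g

fid = FrechetInceptionDistance(feature=2048, normalize=True)  # expects floats in [0, 1]
fid.update(real_images, real=True)
fid.update(censored_images, real=False)
print(fid.compute())                          # lower is better
```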

5 Results

In this section we present quantitative and qualitative results on the facial image experiments.

5.1 Quantitative results

Figure 3 shows the trade-off between privacy and utility using the strong adversarially trained classifiers, evaluated on images censored by the baseline and by our method, respectively. Against these adversaries, which are much stronger than the fixed classifiers, our method consistently has a higher utility at any given level of privacy compared to the baseline. Note that these adversaries require labeled training data to be passed through the privacy mechanism in order to be trained.

Table 1 shows the accuracy of the fixed smiling and gender classifiers when applied to images censored by the baseline and by our method, as well as the FID score. We observe that for both methods the privacy increases with larger ε while the utility decreases.

Dist. ε   Fixed smiling (b-line / ours)   Fixed gender (b-line / ours)   FID (b-line / ours)
0.001     58.6 / 63.1                     93.1 / 91.9                    12.3 / 15.0
0.005     53.4 / 58.8                     89.4 / 89.8                    25.5 / 21.3
0.01      50.7 / 57.9                     84.0 / 86.2                    52.4 / 27.3
0.05      50.0 / 50.5                     56.1 / 63.3                    246.2 / 44.4
Guess     50.0                            61.4
Table 1: The mean accuracy over five different random seeds when evaluating the fixed smiling and gender classifiers on the held-out test data censored with the baseline (b-line) and with our method, and the FID score of the censored images, for varying distortion ε. The expected accuracy of random guessing is given on the bottom row. Smiling: closer to guessing means more privacy. Gender: higher means more utility. FID: lower is better.
Figure 3: Privacy vs. utility trade-off where the sensitive attribute is smiling (left) and gender (right). Privacy is measured using the adversarially trained classifiers, which had access to the non-censored labels for the censored images in the training set. Average utility is the accuracy of fixed classifiers for attributes other than the sensitive one (Gender, Wearing_Lipstick, Young, High_Cheekbones, Mouth_Slightly_Open, Heavy_Makeup). Our approach with the entropy loss consistently performs better than all other explored approaches.

In Table 2 we present the results of evaluating the accuracy of the fixed classifier on the test data censored with our method, where the new synthetic attribute s' is uniformly sampled from U{0, 1}. That is, we measure how often the classifier predicts the new synthetic attribute s' when applied to the censored image x''. We can see that with the smallest distortion constraint the method is, on average, able to fool the classifier most of the time, and that the success rate increases with larger ε. We also include these results when the images have been censored with respect to the attributes gender, wearing lipstick, and young.

Dist. ε   Synthetic attribute
          Smiling   Gender   Lipstick   Young
0.001
0.005
0.01
0.05

Table 2: The mean accuracy over five different random seeds for the fixed classifier evaluated on the test data censored with our method, with the synthetic attribute as the target, for varying ε. That is, the success rate of our method in fooling the fixed classifier into predicting the synthetic sensitive attribute s' in the censored image x''. Higher is better.
Smiling Gender Lipstick Young
Smiling 1.00 -0.04 0.08 -0.06
Gender -0.07 1.00 -0.44 -0.21
Lipstick 0.14 -0.30 1.00 0.26
Young 0.05 -0.11 0.23 1.00
High Cheekbones 0.14 -0.07 0.15 -0.01
Mouth Open 0.04 0.00 0.03 -0.02
Heavy Makeup 0.12 -0.24 0.47 0.22
Table 3: The value of each cell denotes the Pearson’s correlation coefficient between predictions from a fixed classifier trained to predict the row attribute and a fixed classifier trained to predict the column attribute, given that the column attribute has been censored.

We also include results on correlations between classifier predictions on a pair of attributes when one attribute has been synthetically replaced, as seen in Table 3. This is further discussed in Section 6.

5.2 Qualitative results

In this section we present qualitative results for the baseline and for our method. In Figure 4 we show, from the top row to the bottom row, the input image, the censored image, the censored image with the synthetic attribute set to non-smiling, and the censored image with the synthetic attribute set to smiling. A smaller distortion constraint ε is used in the first four columns, and a larger one in the last four columns. The images censored by our method look sharper, and it is less obvious that they are censored. We can see that the method convincingly generates non-smiling and smiling faces while most of the other parts of the image are intact. These images are sampled from models trained on images of 128x128 pixels resolution. Figure 6 shows corresponding samples for the same input images, but using gender as the sensitive attribute.

The gap in visual quality between our method and the baseline becomes clear when ε increases from the smaller to the larger value. The images censored by the baseline look blurry, while the images censored by our method still seem to maintain much of the structure of the original image. However, in some cases, for example in the eighth column, it is perhaps no longer obvious that the image is of the same person, although attributes such as gender, hair color and skin tone are still preserved.

In Figure 5 we present results where the attributes smiling, gender, wearing lipstick, and young have been censored. The method fails to disentangle the gender attribute and the lipstick attribute. This is discussed in more detail in Section 6.

Figure 4: Qualitative results for the sensitive attribute smiling. A smaller distortion constraint ε is used in the first four columns and a larger one in the last four columns. From top to bottom row: input image, censored image, censored image with synthetic non-smile, censored image with synthetic smile. The model is able to generate a synthetic smiling attribute while maintaining much of the structure in the image. These images were generated from a model trained on 128x128 pixel images.
Figure 5: Examples of changing other attributes in an image. Each column (except the first which is the input) corresponds to an attribute that is being censored, and the top and bottom row show the censored image with the synthesized attribute set to false and true respectively.

6 Discussion

In Figure 3, our method consistently has a higher utility at any given level of privacy compared to the baseline. Furthermore, we can observe that using the entropy loss function for the filter benefits both the baseline and our method. Both when smiling and when gender is the sensitive attribute, our method outperforms all other evaluated approaches.

In Table 1, we recognize that the baseline achieves a higher privacy for small ε, but point out that for the largest value of ε the baseline maintains very low utility, as the gender prediction is not better than random guessing, and the FID score is very high. Further, for ε = 0.05 our method achieves a higher utility according to the fixed gender prediction score while maintaining a higher image quality, but with lower privacy. However, it should be noted that the standard deviation of the baseline's smiling prediction accuracy is quite high at this ε, and that some values are lower than random guessing, meaning that flipping the prediction would increase the accuracy.

Most importantly, our method demonstrates a higher privacy in the evaluation with the much stronger adversary as seen in Figure 3. This shows that our method makes it more difficult for the adversary to see through the privatization step. To show the effect of the filter we have included results using only the generator to privatize the images. Note that the combination of the filter and the generator works best.

A possible reason why using only the generator does not achieve high privacy is that the generator architecture is designed to easily learn the identity function, in order to promote transformations that change the input as little as possible. Assuming enough capacity, the generator could learn the following rule: if the sensitive attribute in the image being censored is the same as the randomly sampled attribute, let the image through without changes; otherwise, apply the distortion-constrained change that transforms the image into a realistic image with the new attribute.

If the transformed image is indistinguishable from a real image this is not a problem, but if it is not we can easily reverse the privatization by detecting if the image is real or not. To mitigate this problem the filter always removes the sensitive data in the image which forces the generator to synthesize new data. Since the censored image is now guaranteed to be synthetic, we can no longer do the simple real/fake attack.

In Table 2, we see that the fixed smile classifier is fooled by our privatization mechanism for 82.4% to 91.2% of the data points in the test set (depending on the distortion ε). These results indicate that it may be harder for an adversarially trained classifier to predict the sensitive attribute when it has been replaced with something else, as compared to simply removed. We assume that this is due to the added variability in the data. Or, intuitively: it is easier to "blend in" with other images that have similar demonstrations of smiles.

Attribute Correlation
High cheekbones 0.68
Mouth slightly open 0.53
Rosy cheeks 0.22
Oval face 0.21
Wearing lipstick 0.18
Heavy makeup 0.18
Wearing earrings 0.17
Attractive 0.15
Gender -0.14
Bags under eyes 0.11
Table 4: Pearson correlation coefficient between the smiling attribute and 10 other attributes in the CelebA training dataset, ordered from high to low absolute correlation.
Figure 6: Qualitative results for the sensitive attribute gender. A smaller distortion constraint ε is used in the first four columns and a larger one in the last four columns. From top to bottom row: input image, censored image, censored image with synthetic female gender, censored image with synthetic male gender. The model is able to generate a synthetic gender while maintaining much of the structure in the image. These images were generated from a model trained on 128x128 pixel images.

In Table 4 we see that some of the other attributes in the facial images of the CelebA dataset are highly correlated with whether or not the person is smiling. Two attributes that are highly correlated with smiling are high cheekbones and slightly open mouths. It is not obvious that a person with high cheekbones should be predisposed to smile more often, rather it may be that a person that is smiling is perceived as having high cheekbones due to the contraction of the facial muscles. We can see in Figure 4 that this is captured in all of the images with a synthetic smile (fourth row). The cheek muscles are visibly contracted and there are clear dimples in all images. On the other hand, the synthetic images with non-smile (third row) seem to have much less contracted facial muscles. We also note that the images with generated smiles tend to have an open mouth, while the images with a generated non-smile tend to have a closed mouth.

The fact that many important attributes in facial images correlate leads to the reflection that disentangling the underlying factors of variation is not entirely possible. For example, in this dataset lipstick is highly correlated with gender. This means that if we want to hide all information about whether or not the person is wearing lipstick we also need to hide its gender (and other correlating attributes). This problem can be seen in Table 3 where changing whether or not a person is wearing lipstick correlates with changes of gender.

The question we ask is: if we censor an attribute in an image, how does that correlate with changes of other attributes in the image? In the lipstick column of Table 3 we have censored the attribute lipstick. We then make predictions on whether or not the person in the censored image is wearing lipstick, and compute the correlation between these predictions and predictions for the attributes for each row. For example, we can see that changes in lipstick correlate negatively with changes in gender and positively with makeup. This highlights the problem of disentangling these underlying factors of variation. We also see this in Figure 5 where changing the lipstick attribute to true results in a transformation that changes the gender from male to female.
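As a small illustration of this measurement, the correlation in each cell of Table 3 can be computed as the Pearson correlation between two classifiers' binary predictions on the censored images; the arrays below are toy placeholders, not the actual predictions.

```python
import numpy as np

# Pearson correlation between the predictions of the classifier for the censored
# attribute and the classifier for another attribute (placeholder values).
pred_lipstick = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # classifier for the censored attribute
pred_gender   = np.array([0, 1, 0, 0, 1, 1, 0, 1])   # classifier for another attribute
corr = np.corrcoef(pred_lipstick, pred_gender)[0, 1]
print(f"Pearson correlation: {corr:.2f}")            # negative here, as in Table 3
```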

One core strength of our method is that it is domain-preserving, meaning that the censored image lies in the same domain as the input image, where the domain is that of natural images. This allows a utility provider to use the censored image in existing algorithms without modifications. Together with our results on utility preservation, we envision that several such mechanisms can be stacked in a chain to add a selection of privatizations to an image. This could be useful in social media settings where people may want to share images but would like a selection of attributes to be censored.

7 Conclusions

In this work we have addressed the problem of learning privacy-preserving adversarial representations for image data that censor sensitive attributes by generating a new synthetic attribute in their place. While previous work has proposed adversarial methods for removing information from a representation, our approach extends this by also generating new, realistic-looking information in its place. We evaluate our method using adversarially trained classifiers, and our results show that it is possible to preserve non-sensitive attributes of the image when performing the censoring. Further, the results show that the synthetically added attribute helps fool the adversary in the most challenging setting, where the adversary is allowed to train on the output of the privacy mechanism.

References

  • R. Alharbi, M. Tolba, L. C. Petito, J. Hester, and N. Alshurafa (2019) To mask or not to mask? balancing privacy with visual confirmation utility in activity-oriented wearable cameras. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 3 (3). External Links: Link, Document Cited by: §2.
  • M. Bertran, N. Martinez, A. Papadaki, Q. Qiu, M. Rodrigues, G. Reeves, and G. Sapiro (2019) Adversarially learned representations for information obfuscation and inference. In Proceedings of the 36th International Conference on Machine Learning, K. Chaudhuri and R. Salakhutdinov (Eds.), Proceedings of Machine Learning Research, Vol. 97, Long Beach, California, USA, pp. 614–623. Cited by: §2.
  • A. Beutel, J. Chen, Z. Zhao, and E. H. Chi (2017) Data decisions and theoretical implications when adversarially learning fair representations. arXiv preprint arXiv:1707.00075. Cited by: §2.
  • P. Chandan Roy and V. Naresh Boddeti (2019) Mitigating information leakage in image representations: a maximum entropy approach. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
  • H. Chen, R. H. Chiang, and V. C. Storey (2012) Business intelligence and analytics: from big data to big impact.. MIS quarterly 36 (4). Cited by: §1.
  • H. Edwards and A. J. Storkey (2016) Censoring representations with an adversary. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, Cited by: §2, §2.
  • I. Goodfellow, Y. Bengio, and A. Courville (2016) Deep learning. MIT press. Cited by: §1.
  • I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680. Cited by: §2.
  • K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Vol. , pp. 770–778. External Links: Document, ISSN Cited by: §3.3.
  • M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter (2017) GANs trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.), pp. 6626–6637. Cited by: §4.2.
  • C. Huang, P. Kairouz, and L. Sankar (2018) Generative adversarial privacy: a data-driven approach to information-theoretic privacy. In 2018 52nd Asilomar Conference on Signals, Systems, and Computers, Vol. , pp. 2162–2166. External Links: Document, ISSN 1058-6393 Cited by: §1, §2, §3.1, §3.1.
  • C. Huang, P. Kairouz, X. Chen, L. Sankar, and R. Rajagopal (2017) Context-aware generative adversarial privacy. Entropy 19 (12). External Links: Link, ISSN 1099-4300, Document Cited by: §2.
  • H. Hukkelås, R. Mester, and F. Lindseth (2019) DeepPrivacy: a generative adversarial network for face anonymization. In Advances in Visual Computing, G. Bebis, R. Boyle, B. Parvin, D. Koracin, D. Ushizima, S. Chai, S. Sueda, X. Lin, A. Lu, D. Thalmann, C. Wang, and P. Xu (Eds.), Cham, pp. 565–578. External Links: ISBN 978-3-030-33720-9 Cited by: §2.
  • P. Isola, J. Zhu, T. Zhou, and A. A. Efros (2017) Image-to-image translation with conditional adversarial networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Vol. , pp. 5967–5976. External Links: Document, ISSN 1063-6919 Cited by: §2.
  • F. D. Johansson, U. Shalit, and D. Sontag (2016) Learning representations for counterfactual inference. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML’16, pp. 3020–3029. Cited by: §2.
  • D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. Note: cite arxiv:1412.6980Comment: Published as a conference paper at the 3rd International Conference for Learning Representations, San Diego, 2015 External Links: Link Cited by: §3.3.
  • G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. Van Der Laak, B. Van Ginneken, and C. I. Sánchez (2017) A survey on deep learning in medical image analysis. Medical image analysis 42, pp. 60–88. Cited by: §1.
  • Z. Liu, P. Luo, X. Wang, and X. Tang (2015) Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), Cited by: §4.1, §4.2, §4.
  • S. J. Oh, R. Benenson, M. Fritz, and B. Schiele (2016a) Faceless person recognition: privacy implications in social media. In Computer Vision – ECCV 2016, B. Leibe, J. Matas, N. Sebe, and M. Welling (Eds.), Cham, pp. 19–35. External Links: ISBN 978-3-319-46487-9 Cited by: §2.
  • S. J. Oh, R. Benenson, M. Fritz, and B. Schiele (2016b) Faceless person recognition: privacy implications in social media. In European Conference on Computer Vision, pp. 19–35. Cited by: §2.
  • S. J. Oh, M. Fritz, and B. Schiele (2017) Adversarial image perturbation for privacy protection: a game theory perspective. In 2017 IEEE International Conference on Computer Vision (ICCV), pp. 1491–1500. Cited by: §2.
  • T. Orekondy, M. Fritz, and B. Schiele (2018) Connecting pixels to privacy and utility: automatic redaction of private information in images. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
  • S. A. Osia, A. Taheri, A. S. Shamsabadi, K. Katevas, H. Haddadi, and H. R. Rabiee (2020) Deep private-feature extraction. IEEE Transactions on Knowledge and Data Engineering 32 (1), pp. 54–66. External Links: Document, ISSN 2326-3865 Cited by: §2.
  • N. Raval, A. Machanavajjhala, and L. P. Cox (2017) Protecting visual secrets using adversarial nets. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1329–1332. Cited by: §2.
  • Z. Ren, Y. J. Lee, and M. S. Ryoo (2018) Learning to anonymize faces for privacy preserving action detection. In Computer Vision – ECCV 2018, V. Ferrari, M. Hebert, C. Sminchisescu, and Y. Weiss (Eds.), Cham, pp. 639–655. External Links: ISBN 978-3-030-01246-5 Cited by: §2.
  • O. Ronneberger, P.Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention (MICCAI), LNCS, Vol. 9351, pp. 234–241. Note: (available on arXiv:1505.04597 [cs.CV]) Cited by: §3.3.
  • T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, X. Chen, and X. Chen (2016) Improved techniques for training gans. In Advances in Neural Information Processing Systems 29, D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (Eds.), pp. 2234–2242. Cited by: §3.2.
  • L. Song, Z. Lu, R. He, Z. Sun, and T. Tan (2017) Geometry guided adversarial facial expression synthesis. CoRR abs/1712.03474. External Links: Link, 1712.03474 Cited by: §2.
  • H. Tang, D. Xu, G. Liu, W. Wang, N. Sebe, and Y. Yan (2019) Cycle in cycle generative adversarial networks for keypoint-guided image generation. In Proceedings of the 27th ACM International Conference on Multimedia, MM ’19, New York, NY, USA, pp. 2052–2060. External Links: ISBN 9781450368896, Link, Document Cited by: §2.
  • L. Q. Tran, X. Yin, and X. Liu (2018) Representation learning by rotating your faces. IEEE transactions on pattern analysis and machine intelligence. Cited by: §2.
  • H. Wang, Z. Wu, Z. Wang, Z. Wang, and H. Jin (2019) Privacy-preserving deep visual recognition: an adversarial learning framework and A new dataset. CoRR abs/1906.05675. External Links: Link, 1906.05675 Cited by: §2.
  • Z. Wu, Z. Wang, Z. Wang, and H. Jin (2018) Towards privacy-preserving visual recognition via adversarial training: a pilot study. In Computer Vision – ECCV 2018, V. Ferrari, M. Hebert, C. Sminchisescu, and Y. Weiss (Eds.), Cham, pp. 627–645. External Links: ISBN 978-3-030-01270-0 Cited by: §2.
  • Q. Xie, Z. Dai, Y. Du, E. Hovy, and G. Neubig (2017) Controllable invariance through adversarial feature learning. In Advances in Neural Information Processing Systems, pp. 585–596. Cited by: §2.
  • R. S. Zemel, Y. Wu, K. Swersky, T. Pitassi, and C. Dwork (2013) Learning fair representations.. In ICML, Proceedings of the 30th International Conference on International Conference on Machine Learning, pp. 325–333. Cited by: §2.
  • B. H. Zhang, B. Lemoine, and M. Mitchell (2018) Mitigating unwanted biases with adversarial learning. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pp. 335–340. Cited by: §2.