The list of success stories from big data analytics is long; statistical studies for medical diagnosis (Litjens et al., 2017) and business analytics (Chen et al., 2012) are merely two examples. The amounts of data required to train machine learning models for natural language and image processing are growing with the scale and capabilities of the models (Goodfellow et al., 2016). However, collecting large datasets that potentially contain sensitive information about individuals can be difficult, because obtaining consent from the individuals may be challenging. Furthermore, privacy laws are being enacted in many countries to protect the rights of individuals, making it hard to use data containing sensitive information. Being able to give privacy guarantees on a dataset may be a way to allow the distribution of the data while protecting the rights of individuals, thus unlocking the large benefits that big datasets can provide to individuals and to society.
In this work, we study techniques for the selective anonymization of image datasets. The end goal is to keep the data as detailed as possible and to retain as much information from it as possible, while at the same time making it hard for an adversary to detect specific sensitive attributes. The proposed solution is agnostic to the downstream task, with the objective of making the data as private as possible given a distortion constraint.
Previous research has addressed this issue with some success using adversarial representation learning: a filter model is trained to hide sensitive information while an adversary model is trained to recover it. While previous work had the objective of training the filter model so that it is hard for the adversary to uncover the sensitive attributes (Huang et al., 2018), we instead explore this task under the assumption that it is easier to hide sensitive information if it is replaced with something else: a sample that is independent of the input data.
In our setup, the adversary can make an arbitrary number of queries to the model; each query produces another sample from the distribution of the sensitive data, while keeping as much as possible of the non-sensitive information about the requested data point.
Besides the adversary module, our proposed solution includes two main components: a filter model that is trained to remove the sensitive information, and a generator model that inserts a synthetically generated new value for the sensitive attribute. The generated sensitive information is entirely independent of the sensitive information in the original input image. Following a body of work in privacy-related adversarial learning, we evaluate the proposed model on faces from the CelebA dataset and consider, for example, the smile of a person to be the sensitive attribute. The smile is an attribute that carries interesting aspects of the transformations of a human face. Even if the obvious change resides close to the mouth, subtle changes occur in many other parts of the face when a person smiles: the eyelids tighten, dimples may appear and the skin may wrinkle. The current work also includes a thorough analysis of the dataset, including correlations of such features. These correlations make the task interesting and challenging, reflecting the real difficulty that may occur when anonymizing data. What is the right trade-off between preserving utility, defined as allowing information about other attributes to remain, and removing the sensitive information? Our method generalizes easily and can be applied to other visual attributes such as gender, race, background cues, etc.
2 Related work
Adversarial learning is the process of training a model with the objective of being able to fool an adversary (a second model). The adversary is trained simultaneously, and both become increasingly good at their respective task during the training process. This approach has been successfully used to learn image-to-image transformations (Isola et al., 2017), and synthesis of properties such as facial expressions (Song et al., 2017; Tang et al., 2019). Privacy-preserving adversarial representation learning studies how to utilize this learning framework to learn representations of data that hide some sensitive information, yet retain the utility. Adversarial learning has also been used to train generative models (Goodfellow et al., 2014).
A body of prior work has been dedicated to employing adversarial representation learning to hide sensitive attributes while retaining the useful information (Edwards and Storkey, 2016; Zhang et al., 2018; Xie et al., 2017; Beutel et al., 2017; Raval et al., 2017). Bertran et al. (2019) presented a privacy-preserving mechanism that minimizes the mutual information between the utility variable and the input image data conditioned on the learned representation. Roy and Boddeti (2019) proposed that the generator in the adversarial learning setup should be optimized to maximize the entropy of the discriminator output rather than to minimize the log likelihood, and showed that this is beneficial in the multi-class setting. Osia et al. (2020) treated the problem as an information-bottleneck problem; the resulting images are optimized to be useful for predicting a specific binary attribute, while hiding the identity of a person. Wu et al. (2018), Wang et al. (2019) and Ren et al. (2018) studied how to learn transformations of video that respect a privacy budget while maintaining performance on the downstream task. Tran et al. (2018) proposed an approach for solving pose-invariant face recognition; similar to our work, their approach used adversarial representation learning to disentangle specific attributes in the data. Oh et al. (2017) trained an obfuscator network to add a minimal amount of perturbation noise to the input so that the output fools a person recognition adversary.
All these proposed solutions, with the exception of Edwards and Storkey (2016), depend on knowing the downstream task labels. Our work has no such dependency: the data produced by our method is designed to be usable regardless of downstream task.
The work most closely related to ours is that by Huang et al. (2017) and Huang et al. (2018). They use adversarial learning to minimize the mutual information between the private attribute and the censored image under a distortion constraint. We extend these ideas by proposing a modular design consisting of a filter that is adversarially trained to obfuscate the data points, and a generator that further enhances the privacy by adding new, independently sampled synthetic information for the sensitive attributes. Our work is closely connected to the learning of fair representations (Zemel et al., 2013), and the proposed generator module could be applied to counterfactual reasoning in a fashion similar to Johansson et al. (2016), by allowing the private attribute input to the generator to be controlled deterministically.
Some prior work has assumed access to a privacy-preserving mechanism, such as bounding boxes for faces, and has studied to what extent the identity of a person can be hidden when blurring (Oh et al., 2016a), removing (Orekondy et al., 2018) or generating the face of another person (Hukkelås et al., 2019) in its place. Other work has assumed access to the utility-preserving mechanism and proposed to obfuscate everything except what they want to retain (Alharbi et al., 2019). But this raises the question: how do we find the pixels in an image that need to be modified to preserve privacy with respect to some attribute, or alternatively the pixels that need to be kept to preserve utility? Furthermore, Oh et al. (2016b) showed that blurring or removing the head of a person has a limited effect on privacy with respect to person recognition. The finding is crucial; we cannot rely on modifications of an image such as blurring or overpainting to achieve privacy.
An adversarial set-up instead captures the signals that the adversary actually uses, and can attain stronger privacy.
3 Privacy-preserving adversarial representation learning
In the current work, we focus on utility-preserving transformations of data: we use privacy-preserving representation learning to obfuscate information in the input data, and seek to output results that retain the information and structure of the input.
3.1 Problem setting
Generative adversarial privacy (GAP) (Huang et al., 2018) was proposed as a method to provide privacy while maintaining the utility of an image dataset, and it serves as the baseline in the current work. In GAP, one assumes a joint distribution $p(X, Y)$ of public data points $X$ and sensitive private attributes $Y$, where $Y$ is typically correlated with $X$. The authors define a privacy mechanism $\hat{X} = g(X, Z)$, where $Z$ is the source of noise or randomness in $g$. Let $\hat{Y} = h(g(X, Z))$ be an adversary's prediction of the sensitive attribute $Y$ from the privatized data $\hat{X}$ according to a decision rule $h$. The performance of the adversary is thus measured by a loss function $\ell(h(g(X, Z)), Y)$, and the expected loss of the adversary with respect to $X$, $Y$ and $Z$ is
$$L(h, g) = \mathbb{E}_{X, Y, Z}\big[\ell(h(g(X, Z)), Y)\big],$$
where $Z$ is the source of noise.
The privacy mechanism $g$ should be privacy-preserving and utility-preserving. That is, it should be hard for an analyst to infer $Y$ from $\hat{X}$, but $\hat{X}$ should be minimally distorted with respect to $X$. The authors (Huang et al., 2018) formulate this as a constrained minimax problem
$$\min_{g} \max_{h} \; -L(h, g) \quad \text{subject to} \quad \mathbb{E}_{X, Z}\big[d(g(X, Z), X)\big] \leq \epsilon,$$
where the constant $\epsilon$ defines the allowed distortion for the privatizer and $d(\cdot, \cdot)$ is some distortion measure.
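In practice, such a distortion constraint is typically relaxed into a penalty term (a quadratic penalty coefficient appears among the hyperparameters in Section 3.3). A minimal sketch of such a relaxed filter objective, with illustrative names and values that are our own, not taken from the paper:

```python
def filter_objective(adv_loss, distortion, epsilon, lam):
    """Penalty-relaxed objective for the privatizer/filter.

    The filter maximizes the adversary's loss (hence the minus sign),
    while a quadratic penalty discourages exceeding the allowed
    distortion epsilon. `lam` is the penalty coefficient.
    """
    violation = max(0.0, distortion - epsilon)
    return -adv_loss + lam * violation ** 2

# Within the distortion budget: objective is just the negated adversary loss.
within = filter_objective(adv_loss=0.7, distortion=0.05, epsilon=0.1, lam=10.0)
# Over the budget: the quadratic penalty kicks in.
over = filter_objective(adv_loss=0.7, distortion=0.3, epsilon=0.1, lam=10.0)
```

Minimizing this objective with respect to the filter parameters (while the adversary minimizes its own loss) approximates the constrained minimax problem above.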
In the current work, we call the privacy mechanism $g$ the filter, since the purpose of $g$ is to filter the sensitive information from $X$. A potential drawback with this formulation is that it only removes the sensitive information in $X$, which may make it obvious to the adversary that $\hat{X}$ is a censored version of $X$. Instead, in addition to removing the sensitive information, we propose to replace it with a new value $Y'$ that is sampled from the uniform distribution (a close approximation of the marginal distribution of the smiling attribute), independent of $Y$.
3.2 Our contribution
We extend the filter with an additional module which we call the generator. We define the generator mechanism as $\tilde{X} = G(F(X, Z_1), Z_2, Y')$, where $F$ is the filter, $Y'$ denotes the random variable of the new synthetic sensitive attribute, and $Z_1$ and $Z_2$ are the sources of noise or randomness in $F$ and $G$, respectively. The discriminator $D$ is then trained to predict the sensitive attribute $Y$ when the input is a real image, and to predict the "fake" output $c_{\text{fake}}$ when the input comes from $G$, as in the semi-supervised learning setup in (Salimans et al., 2016). The objective of the generator is to generate a new synthetic (independent) sensitive attribute $Y'$ in $\tilde{X}$ that will fool the discriminator $D$. We further define the loss of the discriminator as
$$L_D(D, F, G) = \mathbb{E}_{X, Y}\big[\ell(D(X), Y)\big] + \mathbb{E}_{X, Z_1, Z_2, Y' \sim p(Y')}\big[\ell(D(G(F(X, Z_1), Z_2, Y')), c_{\text{fake}})\big],$$
where $Z_1, Z_2$ are the sources of noise, $p(Y')$ is the assumed distribution of the synthetic sensitive attributes $Y'$, $c_{\text{fake}}$ is the fake class, and $\ell$ is the loss function. We formulate this as a constrained minimax problem
$$\min_{F, G} \max_{D} \; -L_D(D, F, G) \quad \text{subject to} \quad \mathbb{E}\big[d(G(F(X, Z_1), Z_2, Y'), X)\big] \leq \epsilon,$$
where the constant $\epsilon$ defines the allowed distortion for the generator. An overview of the setup can be seen in Figure 1.
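The two-stage mechanism can be sketched end to end with toy stand-ins for the networks (all function bodies here are illustrative stubs, not the actual models): the filter removes the sensitive attribute, a new value is drawn uniformly and independently of the original one, and the generator renders it back into the image.

```python
import random

def censor(x, filter_net, generator_net, rng):
    """Apply the filter, then the generator with a freshly sampled
    synthetic sensitive attribute (independent of the true one)."""
    z1, z2 = rng.random(), rng.random()        # sources of randomness Z1, Z2
    x_hat = filter_net(x, z1)                  # sensitive info removed
    y_new = rng.randint(0, 1)                  # Y' ~ Uniform{0, 1}
    x_tilde = generator_net(x_hat, z2, y_new)  # synthetic attribute inserted
    return x_tilde, y_new

# Toy stand-ins: the "filter" zeroes the input, the "generator"
# appends the synthetic attribute as an extra feature.
filter_net = lambda x, z1: [v * 0.0 for v in x]
generator_net = lambda x_hat, z2, y_new: x_hat + [y_new]

rng = random.Random(0)
x_tilde, y_new = censor([0.2, 0.9], filter_net, generator_net, rng)
```

Because the synthetic attribute is sampled anew on every call, repeated queries by an adversary yield fresh, independent samples, as described in the introduction.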
3.3 Data-driven implementation
Typically, we do not have access to the true data distribution $p(X, Y)$, but to a dataset of samples $\{(x_i, y_i)\}_{i=1}^{n}$ which are assumed to be independently and identically distributed according to the unknown joint distribution $p(X, Y)$. We assume that the sensitive attribute is binary and takes values in $\{0, 1\}$; however, the proposed approach can easily be extended to categorical sensitive attributes. We model the filter mechanism $F$ with a convolutional neural network parameterized by $\theta_F$ and the generator mechanism $G$ with a convolutional neural network parameterized by $\theta_G$. We use the UNet (Ronneberger et al., 2015) architecture for both the filter and the generator, as illustrated in Figure 2. The orange blocks are convolution blocks, each of which, except for the last block, consists of a convolution layer, a batch normalization layer and a rectified linear activation unit, repeated twice in that order. The number of output channels of the convolution layers in each block is noted in Figure 2. The last convolution block, with a 3-channel output (the RGB image), consists of only a single convolutional layer followed by a sigmoid activation. The green blocks denote either a max pooling layer with a kernel size of two and a stride of two if marked with "/2", or a nearest neighbor upsampling by a factor of two if marked with "2x". The blue block denotes an embedding layer, which takes as input the categorical value of the sensitive attribute and outputs a dense embedding of 128 dimensions. It is followed by a linear projection and a reshaping to match the spatial dimensions of the output of the convolution block to which it is concatenated, but with a single channel. The same type of linear projection is applied to the 1024-dimensional noise vector input, but this projection and reshaping matches both the spatial and channel dimensions of the output of the convolutional block to which it is concatenated. Concatenation is in both cases done along the channel dimension.
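As a sanity check on the encoder/decoder layout, the spatial side length through the "/2" pooling and "2x" upsampling blocks can be traced with a few lines. This sketch only illustrates the spatial bookkeeping, not the exact depth or channel configuration of the architecture in Figure 2:

```python
def trace_spatial_dims(size, ops):
    """Trace the spatial side length through pool ("/2") and
    upsample ("2x") operations of a UNet-style network."""
    sizes = [size]
    for op in ops:
        if op == "/2":       # max pooling, kernel size 2, stride 2
            size //= 2
        elif op == "2x":     # nearest neighbor upsampling by a factor of 2
            size *= 2
        sizes.append(size)
    return sizes

# A 64x64 input through four pooling stages and four upsampling stages
# returns to the input resolution, as an image-to-image UNet requires.
dims = trace_spatial_dims(64, ["/2", "/2", "/2", "/2", "2x", "2x", "2x", "2x"])
```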
The discriminators used to train the filter and the generator are modeled using ResNet-18 (He et al., 2016) and a modified version which we refer to as ResNet-10 (ResNet-10 has the same setup as ResNet-18, but each of the "conv2_x", "conv3_x", "conv4_x", and "conv5_x" layers consists of only one block instead of two), respectively. The last fully connected layer has been replaced with a two-class and a three-class output layer, respectively.
The algorithm used to train the filter, the generator, and the discriminators is described in Algorithm 1, where the loss function for the filter is the entropy of its output, the loss functions for the two discriminators are the categorical cross-entropy, the distortion measure $d$ is the $\ell_2$-norm, and $p(Y')$ is assumed to be the uniform distribution. The hyperparameters consist of the learning rate, the quadratic penalty term coefficient $\lambda$, the distortion constraint $\epsilon$, and the parameters to Adam (Kingma and Ba, 2014). We also include results where the filter loss is the categorical cross-entropy, as in the baseline.
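The entropy loss used for the filter can be sketched directly: the filter is rewarded when the discriminator's output distribution is as uncertain (close to uniform) as possible. A minimal illustration, with names of our own choosing rather than those of Algorithm 1:

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a discrete probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def filter_entropy_loss(probs):
    """Negated entropy of the discriminator output: minimizing this
    pushes the discriminator toward maximum uncertainty (uniform output)."""
    return -entropy(probs)

confident = filter_entropy_loss([0.99, 0.01])  # discriminator is sure -> high loss
uniform = filter_entropy_loss([0.5, 0.5])      # maximum entropy -> lowest loss
```

For two classes the minimum of this loss is $-\log 2$, reached exactly when the discriminator output is uniform.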
4 Experiments

In this section we describe our experiments, the dataset used to evaluate the method, and the evaluation metrics. We have evaluated our method on the facial images from the CelebA dataset (Liu et al., 2015), available at http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html.
The preservation of utility for a downstream task is measured by training a classifier to detect non-sensitive attributes in the censored images, which are known from the original dataset.
4.1 The CelebA dataset

The CelebA dataset (Liu et al., 2015) consists of 202,599 face images of size 218x178 pixels and 40 binary attribute annotations per image, such as age (old or young), gender, whether the image is blurry, whether the person is bald, etc. We include experiments where we use the smiling attribute and the gender attribute from CelebA, respectively, as the sensitive variable to censor and synthesize. The dataset has an official split into a training set of 162,770 images, a validation set of 19,867 images and a test set of 19,962 images. We resize all images to 64x64 pixels and normalize all pixel values to the range $[0, 1]$, unless otherwise specified.
4.2 Filtering and replacement of sensitive data
Let $\{(x_i, y_i)\}_{i=1}^{n}$ be a set of training data where $x_i$ denotes a facial image and $y_i$ denotes the sensitive attribute (either smiling or gender). Further, let $\{(x_j, y_j)\}_{j=1}^{m}$ be the held-out test data. Since we evaluate on CelebA (Liu et al., 2015), we have $n = 162{,}770$ for the training data and $m = 19{,}962$ for the test data. We also, for the purpose of evaluation only, assume access to a number of utility attributes for each image. We consider the utility attributes gender, wearing lipstick, young, high cheekbones, mouth slightly open, and heavy makeup, and exclude the one currently used as the sensitive attribute in each experiment. The utility attributes are used to evaluate how well non-sensitive attributes are preserved after the images have been censored.
Let $\theta_F$ and $\theta_G$ denote the parameters of the filter and the generator, respectively, obtained by Algorithm 1 applied to the training data with the hyperparameters described in Section 3.3. We can then define the training data censored by the filter as $\{(\hat{x}_i, y_i)\}_{i=1}^{n}$, where $\hat{x}_i = F(x_i, z_{1,i}; \theta_F)$, and the training data censored by the filter and the generator as $\{(\tilde{x}_i, y'_i)\}_{i=1}^{n}$, where $\tilde{x}_i = G(\hat{x}_i, z_{2,i}, y'_i; \theta_G)$ and $y'_i$ is sampled uniformly from $\{0, 1\}$. We apply the same transformations to the test data and denote the results $\{\hat{x}_j\}$ and $\{\tilde{x}_j\}$, respectively.
Each experiment is run on a Tesla V100 SXM2 32 GB in a DGX-1 machine. The training is restricted to 100 epochs which takes about 13 hours.
To evaluate the two methods we use four different metrics. We train three adversarial classifiers to predict the ground-truth smile of a person given an image censored by the baseline, by only the generator, and by our method, respectively. That is, each adversary is trained on its respective censored training set and evaluated, in terms of accuracy, on the correspondingly censored test set, which measures how predictable the sensitive attribute is from a censored image in this adversarial setting.
In addition, we train a fixed classifier to predict the gender of a person on the original uncensored training data and use it to measure how well the gender attribute is preserved after censoring the data. We similarly train a fixed classifier to predict the smile of a person on the original uncensored training data and use it to measure how well the smile is censored.
To quantify the image quality of the censored images $\hat{x}$ and $\tilde{x}$ we use the Fréchet Inception Distance (FID) (Heusel et al., 2017), a metric frequently used in the GAN literature to measure image quality.
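In practice, FID fits Gaussians to Inception-network features of the real and censored images; the underlying Fréchet distance between two Gaussians is itself a short numpy computation. The following is a sketch of that formula only, not the full Inception feature pipeline:

```python
import numpy as np

def _sqrtm_psd(mat):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(mat)
    vals = np.clip(vals, 0.0, None)  # guard against tiny negative eigenvalues
    return (vecs * np.sqrt(vals)) @ vecs.T

def fid(mu1, sigma1, mu2, sigma2):
    """Frechet distance between Gaussians fitted to feature statistics:

        FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2}).

    Tr((S1 S2)^{1/2}) is computed as Tr((sqrt(S1) S2 sqrt(S1))^{1/2}),
    which is symmetric PSD and therefore numerically safer.
    """
    s1_half = _sqrtm_psd(sigma1)
    covmean_trace = np.trace(_sqrtm_psd(s1_half @ sigma2 @ s1_half))
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2)
                 - 2.0 * covmean_trace)

mu, sigma = np.zeros(3), np.eye(3)
same = fid(mu, sigma, mu, sigma)            # identical statistics -> 0
shifted = fid(mu, sigma, mu + 1.0, sigma)   # unit mean shift in 3 dims -> 3
```

Lower FID indicates that the censored images are statistically closer to the real ones in feature space.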
5 Results

In this section we present quantitative and qualitative results on the facial image experiments.
5.1 Quantitative results
Figure 3 shows the trade-off between privacy and utility using the strong adversarially trained classifiers when evaluated on images censored by the baseline, by only the generator, and by our method, respectively. Against these adversaries, which are much stronger than the fixed classifiers, our method consistently attains higher utility at any given level of privacy compared to the baseline. Note that these adversaries require running labeled training data through the privacy mechanism in order to be trained.
Table 1 shows the accuracy of the fixed smiling and gender classifiers when applied to images censored by the baseline and by our method, as well as the FID score. We observe that, for both methods, privacy increases with larger $\epsilon$ while utility decreases.
Table 1: mean accuracy and standard deviation over five random seeds of the fixed smiling and gender classifiers on the held-out test data censored with the baseline and with our method, together with the FID score of the censored images, for varying distortion $\epsilon$ (columns: distortion, fixed smiling accuracy, fixed gender accuracy, FID). The bottom row gives the expected accuracy of random guessing. Smiling: closer to guessing means more privacy; gender: higher means more utility; FID: lower is better.
In Table 2 we present the accuracy of the fixed smile classifier on the dataset $\{(\tilde{x}_j, y'_j)\}$, where $\tilde{x}_j$ is the image censored with our method and $y'_j$ is the new synthetic attribute sampled uniformly from $\{0, 1\}$. That is, we measure how often the classifier predicts the new synthetic attribute when applied to $\tilde{x}_j$. With the smallest distortion budget the method is on average able to fool the classifier 82.4% of the time, and this increases with larger $\epsilon$ to an average success rate of 91.2%. We also include these results when the images have been censored with respect to the attributes gender, wearing lipstick and young.
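The fooling rate reported in Table 2 is simply the fraction of censored test images for which the fixed classifier outputs the synthetic attribute. A sketch with illustrative toy data:

```python
def fooling_rate(predictions, synthetic_attrs):
    """Fraction of data points on which the fixed classifier predicts
    the newly sampled synthetic attribute y' (replacement succeeded)."""
    assert len(predictions) == len(synthetic_attrs)
    hits = sum(p == y for p, y in zip(predictions, synthetic_attrs))
    return hits / len(predictions)

# 4 of these 5 toy predictions match the synthetic attribute.
rate = fooling_rate([1, 0, 1, 1, 0], [1, 0, 0, 1, 0])
```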
5.2 Qualitative results
In this section we present qualitative results of the baseline and our method. In Figure 4 we show, from the top row to the bottom row: the input image $x$, the censored image $\hat{x}$, the censored image $\tilde{x}$ with the synthetic attribute non-smiling, and the censored image $\tilde{x}$ with the synthetic attribute smiling. A smaller value of $\epsilon$ is used in the first four columns and a larger value in the last four columns. The images censored by our method look sharper, and it is less obvious that they are censored. The method convincingly generates non-smiling and smiling faces while leaving most other parts of the image intact. These images are sampled from models trained on images of 128x128 pixels resolution. Figure 6 shows corresponding samples on the same input images, but using gender as the sensitive attribute.
The gap in visual quality between our method and the baseline becomes clear as $\epsilon$ increases. The images censored by the baseline look blurry, while the images censored by our method still maintain much of the structure of the original image. In some cases, for example in the eighth column, it is perhaps no longer obvious that the image shows the same person, but attributes such as gender, hair color and skin tone are still preserved.
6 Discussion

In Figure 3, our method consistently attains higher utility at any given level of privacy compared to the baseline. Furthermore, we observe that using the entropy loss function for the filter benefits both the baseline and our method. Both with smiling and with gender as the sensitive attribute, our method outperforms all other evaluated approaches.
In Table 1, we note that the baseline achieves higher privacy for small $\epsilon$, but point out that for large values of $\epsilon$ the baseline retains very low utility, as its gender prediction is no better than random guessing and its FID score is very high. Further, for moderate values of $\epsilon$, our method achieves higher utility according to the fixed gender prediction score while maintaining higher image quality, but at lower privacy. It should be noted, however, that the standard deviation of the baseline's smiling prediction accuracy is quite high at large $\epsilon$, with values below random guessing, meaning that flipping the prediction would increase the accuracy.
Most importantly, our method demonstrates a higher privacy in the evaluation with the much stronger adversary as seen in Figure 3. This shows that our method makes it more difficult for the adversary to see through the privatization step. To show the effect of the filter we have included results using only the generator to privatize the images. Note that the combination of the filter and the generator works best.
A possible reason why using only the generator does not achieve high privacy is that the generator architecture is designed to easily learn the identity function, which promotes transformations that change the input as little as possible. Assuming enough capacity, the generator could learn the following rule: if the sensitive attribute in the image being censored is the same as the randomly sampled attribute, let the image through without changes; otherwise, apply the $\epsilon$-constrained change that transforms the image into a realistic image with the new attribute.
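This hypothetical shortcut can be written out explicitly. The following is pseudocode of the failure mode with illustrative names, not our trained model:

```python
def shortcut_generator(image, true_attr, sampled_attr, transform):
    """A generator with enough capacity and an easy identity path could
    learn this rule: pass the image through unchanged when the sampled
    attribute already matches, and only transform it otherwise."""
    if sampled_attr == true_attr:
        return image, False                       # identity: a real image leaks out
    return transform(image, sampled_attr), True   # output is synthetic

# Toy "transform" that just inverts the pixel stand-ins.
flip = lambda img, attr: [1 - v for v in img]
out, changed = shortcut_generator([0, 1], true_attr=1, sampled_attr=1,
                                  transform=flip)
```

Whenever the `changed` flag is `False`, the output is an unmodified real image, which is exactly what makes the real/fake attack described next possible.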
If the transformed image is indistinguishable from a real image this is not a problem, but if it is not we can easily reverse the privatization by detecting if the image is real or not. To mitigate this problem the filter always removes the sensitive data in the image which forces the generator to synthesize new data. Since the censored image is now guaranteed to be synthetic, we can no longer do the simple real/fake attack.
In Table 2, we see that the fixed smile classifier is fooled by our privatization mechanism on 82.4% to 91.2% of the data points in the test set (depending on the distortion $\epsilon$). These results indicate that it may be harder for an adversarially trained classifier to predict the sensitive attribute when it has been replaced with something else, compared to when it has simply been removed. We attribute this to the added variability in the data. Or intuitively: it is easier to "blend in" with other images that have similar demonstrations of smiles.
Table 4 (excerpt): correlation of other attributes with the smiling attribute. Mouth slightly open: 0.53; bags under eyes: 0.11.
In Table 4 we see that some of the other attributes in the facial images of the CelebA dataset are highly correlated with whether or not the person is smiling. Two attributes that are highly correlated with smiling are high cheekbones and slightly open mouths. It is not obvious that a person with high cheekbones should be predisposed to smile more often, rather it may be that a person that is smiling is perceived as having high cheekbones due to the contraction of the facial muscles. We can see in Figure 4 that this is captured in all of the images with a synthetic smile (fourth row). The cheek muscles are visibly contracted and there are clear dimples in all images. On the other hand, the synthetic images with non-smile (third row) seem to have much less contracted facial muscles. We also note that the images with generated smiles tend to have an open mouth, while the images with a generated non-smile tend to have a closed mouth.
The fact that many important attributes in facial images correlate leads to the reflection that disentangling the underlying factors of variation is not entirely possible. For example, in this dataset lipstick is highly correlated with gender. This means that if we want to hide all information about whether or not the person is wearing lipstick we also need to hide its gender (and other correlating attributes). This problem can be seen in Table 3 where changing whether or not a person is wearing lipstick correlates with changes of gender.
The question we ask is: if we censor an attribute in an image, how does that correlate with changes of other attributes in the image? In the lipstick column of Table 3 we have censored the attribute lipstick. We then make predictions on whether or not the person in the censored image is wearing lipstick, and compute the correlation between these predictions and predictions for the attributes for each row. For example, we can see that changes in lipstick correlate negatively with changes in gender and positively with makeup. This highlights the problem of disentangling these underlying factors of variation. We also see this in Figure 5 where changing the lipstick attribute to true results in a transformation that changes the gender from male to female.
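One standard way to compute such correlations between binary prediction vectors is the phi coefficient, which is the Pearson correlation applied to 0/1 data. A sketch with numpy (the toy vectors are ours, not from Table 3):

```python
import numpy as np

def binary_correlation(a, b):
    """Phi coefficient: Pearson correlation between two binary vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.corrcoef(a, b)[0, 1])

# Perfectly aligned predictions correlate at +1,
# perfectly opposed ones at -1.
pos = binary_correlation([0, 1, 1, 0], [0, 1, 1, 0])
neg = binary_correlation([0, 1, 1, 0], [1, 0, 0, 1])
```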
One core strength of our method is that it is domain-preserving, meaning that $\tilde{x} \in \mathcal{X}$, where $\mathcal{X}$ denotes the domain of images. This allows a utility provider to use the censored image in existing algorithms without modification. Together with our results on utility preservation, we envision that several such mechanisms can be stacked in a chain to apply a selection of privatizations to an image. This could be useful in social media settings where people may want to share images, but would like a selection of attributes to be censored.
7 Conclusions

In this work we have addressed the problem of learning privacy-preserving representations of image data that censor sensitive attributes by generating new synthetic attributes in their place. While previous work has used adversarial representation learning to remove information from a representation, our approach extends this by also generating realistic-looking new information in its place. We evaluate our method using adversarially trained classifiers, and our results show that it is possible to preserve non-sensitive attributes of the image while performing the censoring. Further, the results show that the synthetically added attribute helps fool the adversary in the most challenging setting, where the adversary is trained on the output of the privacy mechanism.
References

- Alharbi et al. (2019). To mask or not to mask? Balancing privacy with visual confirmation utility in activity-oriented wearable cameras. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 3(3).
- Bertran et al. (2019). Adversarially learned representations for information obfuscation and inference. In Proceedings of the 36th International Conference on Machine Learning, Proceedings of Machine Learning Research, Vol. 97, Long Beach, California, USA, pp. 614–623.
- Beutel et al. (2017). Data decisions and theoretical implications when adversarially learning fair representations. arXiv preprint arXiv:1707.00075.
- Roy and Boddeti (2019). Mitigating information leakage in image representations: a maximum entropy approach.
- Chen et al. (2012). Business intelligence and analytics: from big data to big impact. MIS Quarterly 36(4).
- Edwards and Storkey (2016). Censoring representations with an adversary. In 4th International Conference on Learning Representations (ICLR 2016), San Juan, Puerto Rico, May 2–4, 2016, Conference Track Proceedings.
- Goodfellow et al. (2016). Deep learning. MIT Press.
- Goodfellow et al. (2014). Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680.
- He et al. (2016). Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778.
- Heusel et al. (2017). GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems 30, pp. 6626–6637.
- Huang et al. (2018). Generative adversarial privacy: a data-driven approach to information-theoretic privacy. In 2018 52nd Asilomar Conference on Signals, Systems, and Computers, pp. 2162–2166.
- Huang et al. (2017). Context-aware generative adversarial privacy. Entropy 19(12).
- Hukkelås et al. (2019). DeepPrivacy: a generative adversarial network for face anonymization. In Advances in Visual Computing, Cham, pp. 565–578.
- Isola et al. (2017). Image-to-image translation with conditional adversarial networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5967–5976.
- Johansson et al. (2016). Learning representations for counterfactual inference. In Proceedings of the 33rd International Conference on Machine Learning (ICML'16), pp. 3020–3029.
- Kingma and Ba (2014). Adam: a method for stochastic optimization. arXiv:1412.6980. Published as a conference paper at the 3rd International Conference for Learning Representations, San Diego, 2015.
- Litjens et al. (2017). A survey on deep learning in medical image analysis. Medical Image Analysis 42, pp. 60–88.
- Liu et al. (2015). Deep learning face attributes in the wild. In Proceedings of the International Conference on Computer Vision (ICCV).
- Oh et al. (2016a). Faceless person recognition: privacy implications in social media. In Computer Vision – ECCV 2016, Cham, pp. 19–35.
- Oh et al. (2016b). Faceless person recognition: privacy implications in social media. In European Conference on Computer Vision, pp. 19–35.
- Oh et al. (2017). Adversarial image perturbation for privacy protection: a game theory perspective. In 2017 IEEE International Conference on Computer Vision (ICCV), pp. 1491–1500.
- Orekondy et al. (2018). Connecting pixels to privacy and utility: automatic redaction of private information in images. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- Osia et al. (2020). Deep private-feature extraction. IEEE Transactions on Knowledge and Data Engineering 32(1), pp. 54–66.
- Raval et al. (2017). Protecting visual secrets using adversarial nets. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1329–1332.
- Ren et al. (2018). Learning to anonymize faces for privacy preserving action detection. In Computer Vision – ECCV 2018, Cham, pp. 639–655.
- Ronneberger et al. (2015). U-Net: convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention (MICCAI), LNCS, Vol. 9351, pp. 234–241. (Also available as arXiv:1505.04597 [cs.CV].)
- Salimans et al. (2016). Improved techniques for training GANs. In Advances in Neural Information Processing Systems 29, pp. 2234–2242.
- Song et al. (2017). Geometry guided adversarial facial expression synthesis. CoRR abs/1712.03474.
- Tang et al. (2019). Cycle in cycle generative adversarial networks for keypoint-guided image generation. In Proceedings of the 27th ACM International Conference on Multimedia (MM '19), New York, NY, USA, pp. 2052–2060.
- Tran et al. (2018). Representation learning by rotating your faces. IEEE Transactions on Pattern Analysis and Machine Intelligence.
- Wang et al. (2019). Privacy-preserving deep visual recognition: an adversarial learning framework and a new dataset. CoRR abs/1906.05675.
- Wu et al. (2018). Towards privacy-preserving visual recognition via adversarial training: a pilot study. In Computer Vision – ECCV 2018, Cham, pp. 627–645.
- Xie et al. (2017). Controllable invariance through adversarial feature learning. In Advances in Neural Information Processing Systems, pp. 585–596.
- Zemel et al. (2013). Learning fair representations. In Proceedings of the 30th International Conference on Machine Learning (ICML), pp. 325–333.
- Zhang et al. (2018). Mitigating unwanted biases with adversarial learning. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pp. 335–340.