Censoring Representations with an Adversary

11/18/2015 ∙ by Harrison Edwards, et al.

In practice, there are often explicit constraints on what representations or decisions are acceptable in an application of machine learning. For example, it may be a legal requirement that a decision must not favour a particular group. Alternatively, it may be that a representation of the data must not contain identifying information. We address these two related issues by learning flexible representations that minimize the capability of an adversarial critic. This adversary tries to predict the relevant sensitive variable from the representation, so minimizing the performance of the adversary ensures there is little or no information in the representation about the sensitive variable. We demonstrate this adversarial approach on two problems: making decisions free from discrimination and removing private information from images. We formulate the adversarial model as a minimax problem, and optimize that objective using a stochastic gradient alternating min-max optimizer. We demonstrate the ability to provide discrimination-free representations for standard test problems, and compare with previous state-of-the-art methods for fairness, showing statistically significant improvement across most cases. The flexibility of this method is shown via a novel problem: removing annotations from images, from unaligned training examples of annotated and unannotated images, and with no a priori knowledge of the form of annotation provided to the model.


1 Introduction

When we apply machine learning techniques in many real-world settings, it is not long before we run into the problem of sensitive information. It may be that we want to provide information to a third party, but be sure that the third party cannot determine critical sensitive variables. Alternatively, it may be that we need to make decisions that do not treat one category differently from another. Two specific cases of these problems are image anonymization and fairness, respectively.

1.1 Fairness

As more of life’s decisions become automated there is a growing need, for legal and ethical reasons, to have machine learning algorithms that can make fair decisions. A decision is fair if it does not depend upon a sensitive variable such as gender, age or race. One naive approach to achieving a fair decision would be to simply remove the sensitive covariate from the model. But information about the sensitive variable can ‘leak’ back into the decisions made if there is any dependence between it and the other variables. In this work we focus on fair classifiers, where we want to predict a binary target variable Y and be fair with respect to a binary sensitive variable S. Here, fairness means that the decision is not dependent on (i.e. is marginally independent of) the sensitive variable.

Previous works, such as those listed in Section 2.2, have tended to develop specific fair variants of common classifiers to solve this problem. Our approach, called adversarial learned fair representations (ALFR), is to learn representations of the data which are concurrently both fair and discriminative for the prediction task. These features can then be used with any classifier. To achieve both fair and discriminative properties, we pose a dual-objective optimization problem that can be cast as a minimax problem. We maintain the flexibility of both the representation and the test of fairness by using deep feed-forward neural networks for each part. One deep neural network produces the representation; that representation is then critiqued by a deep neural adversary that tries to predict the sensitive variable from the representation.

In this paper, we introduce the adversarial method ALFR as a minimax problem, describe the optimization process, and evaluate ALFR on two datasets, Diabetes and Adult, demonstrating improvement over a related approach (Zemel et al. (2013)). We also, as an aside, provide the relationship between the discrimination of a classifier and the H-divergence, as used in domain adaptation. The relationship of these methods to domain adaptation is interesting: the different cases for the sensitive variable can be thought of as different domains. However, we leave the study of this for another paper.

1.2 Image Anonymization

There are many notions and problems relating to privacy in the literature. Strict forms of privacy, such as that enforced by differential privacy, are not always necessary, and more relaxed notions of privacy are beneficial in reducing distortion and making more accurate analysis possible. One particular case of privacy is where certain parts of the data should not be communicated (e.g. someone’s address or name). However, in many settings it is hard to be explicit about exactly what should not be communicated, or whether that information is coupled with other measured variables.

In this work we consider the concrete case of removing private information from an image, whilst distorting the image as little as possible. Examples of private information include: licence plates on cars in photos and doctors’ annotations on medical images such as X-rays. We suggest a modification of an autoencoder to remove such private information, and validate this idea by removing surnames from a collection of images of faces. A similar application might be removing logos or watermarks from images.

The novelty of our approach is that the model does not need to be trained with aligned input/output examples, rather only examples of inputs and (separately) examples of outputs, labelled as such. For example if the task is to remove text from an image, an aligned input/output pair would be an image containing text, and the same image with the text removed. Unaligned data would simply be images labelled as containing no text, and images labelled as containing text. The former sort of data would often be substantially more difficult to obtain than the latter.

Once again we use the same two-neural-network minimax formalism to characterise the problem, and the same stochastic gradient optimization procedure to learn the neural network parameters. The model is applied to the problem of images with and without annotation, to good visual effect. A neural network can no longer distinguish well between annotated and non-annotated images, and the annotation itself is obscured.

2 Related Work

2.1 Adversarial Learning

The idea of adversarial learning is that one has a representation R, a dependent variable S, and an adversary that tries to predict S from R. The adversary then provides an adaptive measure of dependence between R and S, which can be used to learn R so that this dependence is minimized.

The adversarial approach was, to the best of our knowledge, introduced in Schmidhuber (1991), where it was used to learn a representation of data in which each code unit is both binary and independent of the other units. The experiments in this work were on synthetic data, and were later followed up to learn filters from natural image patches in Schmidhuber et al. (1996) and Schraudolph et al. (1999). They referred to this approach as the principle of predictability minimization.

More recently, in Goodfellow et al. (2014) the idea of using an adversary to learn a generative model of data was introduced, and followed up by work such as Gauthier (2015), Rudy & Taylor (2014) and Denton et al. (2015). In this setting the representation is a mixture of data samples and generated samples, and the variable to be predicted is a binary indicator of whether a given sample is from the data or ‘fake’. For discussion on using ‘distinguishability criteria’ to learn generative models see Goodfellow (2014).

The inspiration for ALFR comes from using adversarial learning to do domain adaptation in Ganin et al. (2015). In this setting the sensitive variable indicates the domain, and the idea is to learn a representation in which the domains are indistinguishable, motivated by bounds in Ben-David et al. (2010) relating performance on the target domain (the new domain to which we want to adapt) to the dissimilarity of the target and source domains.

2.2 Fair Classifiers

Several works have proposed variants of classifiers that enforce fairness. These include discrimination-free naive Bayes (Calders & Verwer (2010)), a regularized version of logistic regression (Kamishima et al. (2011)), and a more recent approach (Zafar et al. (2015)), where the authors introduce constraints into the objective functions of logistic regression, hinge-loss classifiers and support vector machines.

Another approach is data massaging, whereby the labels of the training data are changed so that the training data is fair (Kamiran & Calders (2009)); this is similar to the resampling methods often used to tackle class-imbalance problems.

An approach more in the spirit of ALFR is learned fair representations (LFR) (Zemel et al. (2013)). In that paper, the authors learn a representation of the data that is a probability distribution over clusters — a form of ‘fair clustering’ — where knowing the cluster of a datapoint tells one nothing about the sensitive variable. The clustering is learned to be fair and also discriminative for the prediction task.

Other than LFR, the previous work has focused on enforcing fair decisions, whereas LFR and ALFR aim to get fairness as a side-effect of fair representations. The advantage of the latter approach is that the representations can potentially be reused for different tasks, and there is the possibility of a separation of concerns whereby one party is responsible for making the representations fair, and another party is responsible for making the best predictive model. In contrast to LFR, our approach is more flexible in terms of the kinds of representations it can learn, whereas LFR learns essentially categorical representations. In addition our approach means that the representations can be used with any classifier.

Concurrent with this work is the preprint ‘The Variational Fair Autoencoder’ (Louizos et al. (2015)). There are two main differences between this work and theirs. First, they use a variational autoencoder that factorizes the latent variables and the sensitive variable S. This aspect is complementary and could be incorporated into the adversarial framework. Secondly, they use a Maximum Mean Discrepancy (MMD) penalty to reduce dependence of the representation on S; this is a kernel-based alternative to using an adversary. It is not yet clear in what circumstances we should prefer MMD over an adversary; we hope that future work will address this question.
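For readers unfamiliar with MMD, the biased empirical estimate of the squared MMD with an RBF kernel takes only a few lines. This is a generic NumPy sketch of the statistic, not the implementation of Louizos et al.; the kernel bandwidth and sample shapes are illustrative:

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """RBF kernel matrix between two sample sets of shape (n, d)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2_biased(x0, x1, gamma=1.0):
    """Biased (V-statistic) estimate of squared MMD between samples
    x0 ~ P(R|S=0) and x1 ~ P(R|S=1); near zero when the groups look alike."""
    return (rbf_kernel(x0, x0, gamma).mean()
            - 2.0 * rbf_kernel(x0, x1, gamma).mean()
            + rbf_kernel(x1, x1, gamma).mean())

rng = np.random.default_rng(0)
a = rng.standard_normal((100, 2))        # group 0
b = rng.standard_normal((100, 2))        # group 1, same distribution
c = rng.standard_normal((100, 2)) + 3.0  # group 1, shifted distribution
```

Because the biased estimate is the squared norm of the difference of mean kernel embeddings, it is non-negative, small for matched distributions, and large for the shifted one.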

2.3 Removing Private Information

The problem of removing information from images in a learned manner has not been tackled, to the best of our knowledge, in the machine learning community. However there has been work on detecting when private information has been erroneously left in an image, such as in Korayem et al. (2014) and Erickson et al.

3 Formalism: Fairness and Discrimination

We consider a binary classification task where X is the input space, Y = {0, 1} is the label set and S = {0, 1} is a protected variable label set. We are provided with N i.i.d. examples {(x_n, y_n, s_n)}_{n=1}^N drawn from some joint distribution over corresponding random variables X, Y, S. The goal of the learning algorithm is to build a classifier η : X → {0, 1} that has high accuracy whilst maintaining the property of statistical parity or fairness, that is

P(η(X) = 1 | S = 0) = P(η(X) = 1 | S = 1).    (1)

A key statistic we will use to measure statistical parity is the discrimination, defined

y_disc = | (1/N_0) Σ_{n : s_n = 0} ŷ_n − (1/N_1) Σ_{n : s_n = 1} ŷ_n |,    (2)

where ŷ_n = η(x_n) and N_0, N_1 are the number of data items where s_n is equal to 0 and 1, respectively. We measure the success of our classifier using the empirical accuracy y_acc = (1/N) Σ_n 1[ŷ_n = y_n].

Following Zemel et al. (2013) we aim to optimize the difference between discrimination and classification accuracy,

y_delta = y_acc − λ · y_disc,    (3)

where λ ≥ 0 is a trade-off parameter; we call y_delta the delta. In Zemel et al. (2013) they consider the specific trade-off where λ = 1, whereas we will evaluate our models across a range of different values for λ.
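These statistics are simple to compute. The sketch below uses hypothetical helper functions (not from the paper), with the trade-off weight written as lam:

```python
import numpy as np

def discrimination(y_hat, s):
    """y_disc: absolute gap between mean predictions in the two groups."""
    return abs(y_hat[s == 0].mean() - y_hat[s == 1].mean())

def accuracy(y_hat, y):
    """y_acc: empirical accuracy of the predictions."""
    return (y_hat == y).mean()

def delta(y_hat, y, s, lam=1.0):
    """The delta objective: accuracy minus lam times discrimination."""
    return accuracy(y_hat, y) - lam * discrimination(y_hat, s)
```

For example, a classifier that predicts positively for everyone in group s = 0 and for no one in group s = 1 has the maximal discrimination of 1, regardless of its accuracy.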

4 Censored Representations

In this section we show how a modification of an autoencoder can be used to learn a representation that obscures/removes a sensitive variable. In the general case we have an input variable X, a target variable Y and a binary sensitive variable S. The objective is to learn a representation R that preserves information about X, is useful for predicting Y, and is approximately independent of S. The loss we will use is of the form

L = α C(X, R) + β D(R, S) + γ E(Y, R),    (4)

where C(X, R) is the cost of reconstructing X from R, D(R, S) is a measure of dependence between R and S, and E(Y, R) is the error in predicting Y from R. The scalars α, β, γ are hyperparameters controlling the weighting of the competing objectives. If we don’t have a specific prediction task, as in the case where we just want to remove private information from an image, then we don’t need the E(Y, R) term. On the other hand, if we are not interested in reusing the representation for different predictive tasks then we may set α = 0. We may also want α > 0 in the case where we want to learn a transformation of the data that is fair and preserves the semantics of the representation, for example in certain regulated areas where the interpretability of the model is paramount. Notice also that only the final term depends upon Y, and so there is an opportunity to train in a semi-supervised fashion if the labels are not available for all of the data.
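As an illustration of how the component costs combine for one minibatch, here is a minimal NumPy sketch. It is not the paper's implementation: the network outputs are passed in as arrays, and the function simply assembles the weighted sum:

```python
import numpy as np

def censoring_loss(x, y, s, x_rec, adv_prob, pred_prob,
                   alpha=1.0, beta=1.0, gamma=1.0, eps=1e-9):
    """Weighted sum alpha*C + beta*D + gamma*E for a minibatch.

    x_rec     : decoder reconstruction of x from the representation
    adv_prob  : adversary's predicted probability that s = 1
    pred_prob : predictor's probability that y = 1
    """
    # C: squared reconstruction error
    C = np.mean(np.sum((x - x_rec) ** 2, axis=1))
    # D: the *negative* log-loss of the adversary (larger = adversary doing well)
    D = np.mean(s * np.log(adv_prob + eps) + (1 - s) * np.log(1 - adv_prob + eps))
    # E: the ordinary log-loss of the predictor
    E = -np.mean(y * np.log(pred_prob + eps) + (1 - y) * np.log(1 - pred_prob + eps))
    return alpha * C + beta * D + gamma * E

# Toy sanity check: perfect reconstruction and chance-level adversary/predictor.
x = np.array([[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]])
loss = censoring_loss(x, np.array([1, 0, 1]), np.array([0, 1, 0]),
                      x_rec=x.copy(),
                      adv_prob=np.full(3, 0.5),
                      pred_prob=np.full(3, 0.5))
```

With perfect reconstruction (C = 0) and both networks outputting 0.5, the adversary term and predictor term cancel exactly, so the combined loss is zero.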

4.1 Quantifying Dependence

We begin with quantifying the dependence between the representation R and the sensitive variable S. Since S is binary, we can think of this as measuring the difference between two conditional distributions: P(R | S = 0) and P(R | S = 1). We will do this by training a classifier, called the adversary, to tell them apart. Interested readers may wish to know that this measure of dependence is related to the notion of an H-divergence, as described in Appendix A.

In particular, if the adversary network Adv trained to discriminate between P(R | S = 0) and P(R | S = 1) has parameters φ, and the encoder Enc has parameters θ, then we can define

D(θ, φ) = (1/N) Σ_n [ s_n log Adv_φ(Enc_θ(x_n)) + (1 − s_n) log(1 − Adv_φ(Enc_θ(x_n))) ],    (5)

that is, the negative of the standard log-loss for a binary classifier. The adversary’s parameters φ should be chosen to maximize D (and hence to approximately realize the empirical H-divergence), whilst the representation parameters θ should be chosen to minimize D (and hence to approximately minimize the divergence), so we have a minimax problem:

min_θ max_φ D(θ, φ).    (6)

Of course, this admits a trivial solution by learning a constant representation, and so we must introduce constraints to learn anything interesting.
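The trivial solution can be checked numerically: with a constant representation the best adversary can only predict the base rate of S, so its negative log-loss D saturates near −log 2 for a balanced S, whereas a representation that leaks S lets D approach 0. A small sketch with synthetic data (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.integers(0, 2, 1000)           # roughly balanced sensitive variable

# Constant representation: the adversary sees the same r for every example,
# so its best strategy is to output the base rate p = mean(s) for everyone.
p = s.mean()
D_const = np.mean(s * np.log(p) + (1 - s) * np.log(1 - p))

# Leaky representation exposing s directly: the adversary can be almost
# perfectly confident, driving its negative log-loss D towards 0.
p_leaky = np.clip(s.astype(float), 1e-6, 1 - 1e-6)
D_leaky = np.mean(s * np.log(p_leaky) + (1 - s) * np.log(1 - p_leaky))
```

D_const sits near −0.693 (minus the entropy of a fair coin), the ceiling for any adversary facing a censored representation, while D_leaky is essentially 0.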

4.2 Quantifying C(X,R)

We quantify the information retained in R about X by the ability of a decoder to reconstruct X from R; in particular we use the expected squared error

C(X, R) = (1/N) Σ_n ||x_n − Dec(Enc_θ(x_n))||²,    (7)

where we extend θ to include the parameters of the decoder Dec.

4.3 Quantifying E(Y,R)

We quantify how discriminative R is for the prediction task using the log-loss of a classifier or predictor network Pred, trained to predict Y from R:

E(Y, R) = −(1/N) Σ_n [ y_n log Pred(Enc_θ(x_n)) + (1 − y_n) log(1 − Pred(Enc_θ(x_n))) ],    (8)

where we again extend θ to encompass the parameters of the predictor network Pred.

4.4 Optimization

There are four elements in the model: the encoder, the decoder, the predictor and the adversary. Each is implemented using a feed-forward neural network; the precise architectures are detailed in Section 5. The costs for these elements, given in equations 8, 5 and 7, are joined together to give the joint loss

L(θ, φ) = α C(X, R) + β D(θ, φ) + γ E(Y, R).    (9)

This enables the problem of learning censored representations to be cast as the minimax problem

min_θ max_φ L(θ, φ).    (10)

Expecting to be able to train the adversary in the inner loop to a global optimum is unrealistic. Instead, as described in Goodfellow et al. (2014), we use a heuristic: a variant on stochastic gradient descent where for each minibatch we decide whether to take a descent step with respect to the actor’s parameters θ or an ascent step with respect to the adversary’s parameters φ. In Goodfellow et al. (2014) they consider simple strict alternation between updating the adversary and actor; we find this to be a useful default. We give detailed pseudo-code for strict alternation in Algorithm 1. Note that the gradient steps in Algorithm 1 can easily be replaced with a more powerful optimizer such as the Adam algorithm (Kingma & Ba (2014)). This method is a heuristic in the sense that we do not provide formal guarantees of convergence, but we do know that the solution to the minimax problem is a fixed point of the process. There are a large number of papers using this method (such as those mentioned in Section 2.1) obtaining good results, so there is considerable empirical evidence in its favour.

The key issue in this process is that if the adversary is too competent then the gradients will be weak for the actor, whereas if the adversary is too incompetent then the gradients will be uninformative for the actor. We have also considered not updating the adversary if, for instance, its accuracy in predicting S is over some upper threshold, and not updating the actor if the accuracy is below some lower threshold. We find that this sometimes improves results, but more investigation is needed.

initialize network parameters θ, φ
b ← true Boolean indicating whether to update the adversary’s parameters φ or the actor’s parameters θ.
repeat
     X_b, Y_b, S_b ← random mini-batch from dataset
     R ← Enc_θ(X_b)
     C ← mean squared error of Dec(R) against X_b Reconstruction loss for the autoencoder.
     E ← log-loss of Pred(R) against Y_b Log-loss for the predictor.
     D ← negative log-loss of Adv_φ(R) against S_b Negative log-loss for the adversary.
     L ← αC + βD + γE Joint loss.
     if b then Updating adversary’s parameters.
          φ ← φ + ε ∇_φ L
     else Updating autoencoder’s parameters.
          θ ← θ − ε ∇_θ L
     end if
     b ← not b
until deadline
Algorithm 1 Strictly alternating gradient steps, with step size ε.
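To make the alternating scheme concrete, here is a self-contained toy sketch of strict alternation. Everything in it is illustrative rather than from the paper: linear encoder/decoder, logistic adversary and predictor, synthetic data, and finite-difference gradients standing in for backpropagation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: x[:, 0] carries the sensitive bit s, x[:, 1] carries label y.
N = 200
s = rng.integers(0, 2, N)
y = rng.integers(0, 2, N)
x = np.stack([s + 0.1 * rng.standard_normal(N),
              y + 0.1 * rng.standard_normal(N)], axis=1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Flat parameter vector: encoder W_e (2x2), decoder W_d (2x2),
# adversary (w, b), predictor (w, b).
def joint_loss(p, alpha=1.0, beta=1.0, gamma=1.0):
    W_e, W_d = p[:4].reshape(2, 2), p[4:8].reshape(2, 2)
    w_a, b_a, w_p, b_p = p[8:10], p[10], p[11:13], p[13]
    r = x @ W_e                                        # representation
    C = np.mean(np.sum((x - r @ W_d) ** 2, axis=1))    # reconstruction cost
    pa = sigmoid(r @ w_a + b_a)                        # adversary's P(s=1 | r)
    D = np.mean(s * np.log(pa + 1e-9) + (1 - s) * np.log(1 - pa + 1e-9))
    pp = sigmoid(r @ w_p + b_p)                        # predictor's P(y=1 | r)
    E = -np.mean(y * np.log(pp + 1e-9) + (1 - y) * np.log(1 - pp + 1e-9))
    return alpha * C + beta * D + gamma * E

def num_grad(f, p, eps=1e-5):                          # finite differences
    g = np.zeros_like(p)
    for i in range(p.size):
        q = p.copy(); q[i] += eps; hi = f(q)
        q[i] -= 2 * eps; lo = f(q)
        g[i] = (hi - lo) / (2 * eps)
    return g

theta = np.r_[0:8, 11:14]          # actor: encoder, decoder, predictor
phi = np.r_[8:11]                  # adversary
p = 0.1 * rng.standard_normal(14)
update_adversary = True
for step in range(200):
    g = num_grad(joint_loss, p)
    if update_adversary:
        p[phi] += 0.1 * g[phi]     # adversary ascends the joint loss
    else:
        p[theta] -= 0.1 * g[theta] # actor descends the joint loss
    update_adversary = not update_adversary
```

The solution of the minimax problem is a fixed point of this process, but, as noted above, there is no guarantee the alternation converges to it.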

4.5 Adversarial Learned Fair Representations (ALFR)

We apply the general setup described in Section 4 to the case of learning fair classifiers as described in Section 3. In this case the sensitive variable would correspond to some category like gender or race, and the target variable would be the attribute we wish to predict using these fair representations.

In this specific case it is worth pointing out that the discrimination of the classifier is bounded above by the empirical H-divergence, as shown in Lemma A in Appendix A, that is

y_disc ≤ d̂_H(R̃_0, R̃_1),    (11)

where H is a symmetric hypothesis class on the representation space including our classifier η, and R̃_0, R̃_1 are the representations of the empirical samples given to us, split by sensitive variable. This is because the discrimination between R̃_0 and R̃_1 given by the best hypothesis must be at least as good as that of the particular hypothesis η, assuming η ∈ H. So minimizing the divergence minimizes the discrimination.

4.6 Anonymizing Images

We apply the idea of censoring representations to the application of removing text from images. In this problem the input X is an image and the sensitive variable S describes whether or not the image contains private information (text). Here there is no prediction task and so there is no variable Y. In contrast to ALFR, we are not interested in learning a hidden representation but in the reconstructed image, so in this case the representation R is the autoencoder’s output. In order to evaluate the model, we need to have a small amount of validation/test data where we have pairs of images with and without the text. This is used to choose hyperparameters, but the model itself never gets to train on example input/output pairs.

5 Experimental Results

We used the Adam algorithm with the default parameters for all our optimizations. The experiments were implemented using Theano (Bergstra et al. (2010), Bastien et al. (2012)) and the Python library Lasagne.

5.1 Fairness

5.1.1 Datasets

We used two datasets from the UCI repository Lichman (2013) to demonstrate the efficacy of ALFR.

The Adult dataset consists of census data and the task is to predict whether a person makes over 50,000 dollars per year. The sensitive attribute we chose was Gender. The data has 48,842 instances and 14 attributes. We used the majority of the instances for the training set, and split the remainder approximately evenly between the validation and test sets.

The Diabetes dataset consists of hospital data from the US, and the task is to predict whether a patient will be readmitted to hospital. The sensitive attribute we chose was Race; we changed this to a binary variable by creating a new attribute, isCaucasian. The data has around 100 thousand instances. We again used the majority of the instances for the training set, and split the remainder approximately evenly between the validation and test sets.

5.1.2 Protocols and Architecture

To compare ALFR with LFR we split each dataset into training, validation and test sets randomly, and then ran a large number of experiments per model with different hyperparameters. Then, for each value of the tradeoff parameter λ considered, we selected the model maximizing y_delta on the validation data. This process was repeated on a number of different data splits. Since each model sees the same data splits, the observations are paired, and so we obtain paired observations of the difference in performance for each value of λ.

In evaluating the results, we want to evaluate the approaches for different possible tradeoffs between accuracy and discrimination, and so we compare with a range of values of λ and explore hyperparameters using a random search (the settings of the hyperparameters are drawn from a product of simple distributions over each hyperparameter). We use random search, as opposed to a sequential, performance-driven search, so that we are able to compare the models across a range of values of λ. When using these methods in practice one should select the hyperparameters, using Bayesian optimization or a similar approach, to maximize the specific tradeoff one cares about.
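A random search of this kind might look like the following sketch. The particular priors and ranges here are hypothetical, and train_and_eval stands in for fitting a model and computing its validation delta:

```python
import random

random.seed(0)

def sample_hyperparameters():
    """Draw one configuration from a product of simple priors.
    The ranges below are illustrative, not the paper's."""
    return {
        "alpha": random.choice([0.0, 0.1, 1.0]),
        "beta": 10 ** random.uniform(-1, 1),
        "gamma": 10 ** random.uniform(-1, 1),
        "hidden_units": random.choice([32, 64, 128]),
        "layers": random.randint(1, 3),
    }

def random_search(train_and_eval, n_trials=50, lam=1.0):
    """Keep the configuration maximizing the validation delta for a given lam."""
    best, best_delta = None, float("-inf")
    for _ in range(n_trials):
        hp = sample_hyperparameters()
        d = train_and_eval(hp, lam)   # returns validation delta
        if d > best_delta:
            best, best_delta = hp, d
    return best, best_delta
```

Because the draws are independent of validation performance, the same pool of trials can be re-ranked for every tradeoff value, which is exactly why random search suits this comparison better than a sequential search.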

We now give details of the priors over hyperparameters used for the random search. The autoencoder in ALFR had one or more encoding/decoding layers, with all hidden layers having the same number of hidden units. Each encoding/decoding unit used the ReLU (Nair & Hinton (2010)) activation. The critic also had hidden layers with ReLU activations. The predictor network was simply a logistic regressor on top of the central hidden layer of the autoencoder. In the LFR model the number of clusters was drawn from the prior. In both models the reconstruction error weighting parameter was held fixed, and the remaining weighting parameters were drawn from the prior.

5.1.3 Results

We found the LFR model was sensitive to hyperparameters/initialization and could often get stuck during training, but given sufficient experiments we were able to obtain good results. In Figure 1 and Figure 2 we see the results of applying both LFR and ALFR to the Adult and Diabetes data respectively. In both cases we see that the ALFR model is able to obtain significantly better results across most of the range of possible tradeoffs between accuracy and discrimination. The step changes in these figures correspond to places where the change in tradeoff results in a different form of model. This also clarifies that the results are often not very sensitive to this tradeoff parameter.

Figure 1: Results on the Adult dataset. For each value of the tradeoff parameter λ the model maximizing y_delta on the validation set is selected; the plots show the performance of the selected models on the test data. Top row is y_delta. Second row down shows the mean paired difference between the y_delta for the ALFR model and the LFR model, where positive values favour ALFR. We also give a 95% CI around the mean. Third row down is y_acc. Bottom row is y_disc. We see from the top row that the ALFR model has better y_delta for every setting of λ considered. Moreover the difference is significant for most values of λ (that is, the CI does not include zero).
Figure 2: Results on the Diabetes dataset. For each value of the tradeoff parameter λ the model maximizing y_delta on the validation set is selected; the plots show the performance of the selected models on the test data. Top row is y_delta. Second row down shows the mean paired difference between the y_delta for the ALFR model and the LFR model, where positive values favour ALFR. We also give a 95% CI around the mean. Third row down is y_acc. Bottom row is y_disc. We see from the top row that the ALFR model has better y_delta for every setting of λ considered. We also see that the difference is significant for a range of values of λ (that is, the CI does not include zero).

We were also interested in the effect of the hyperparameters α, β, γ on the discrimination of the ALFR model. In particular we consider the ratio β/γ, measuring the relative importance of the dependence term over the prediction error term in the cost. In Figure 3 we see that the larger β is relative to γ, the lower the discrimination is (up to a point), which matches expectations.

Figure 3: A scatter plot of β/γ versus the test discrimination of the ALFR model on the Adult dataset. We see an approximately linear relation up to a certain point, after which further relative increase in β has little effect on the discrimination.

5.2 Image Anonymization

5.2.1 Datasets

We used the ‘10k US Adult Faces’ dataset (Bainbridge et al. (2013)) consisting of 10,168 natural face photographs. We preprocessed the data by rescaling each image to a fixed size and converting to grayscale. In addition, for half the training images we added private information, in the form of text overlayed at a random location in the image. We held out two further sets of photographs, one for validation and one for testing; on both the validation and test sets we have each image both with and without the text, for evaluation. In this problem the sensitive variable S indicates whether or not the image contains text.

5.2.2 Protocols and Architecture

The global image autoencoder model we used was an expert patch-based model. The expert model consists of two parts, a patch classifier and a neural patch autoencoder. The image reconstruction is formed by simply copying the patch if the patch classifier predicts there to be no text-label in the patch; otherwise it uses the patch autoencoder to construct the patch. The patch classifier is pretrained with weak supervision, using image-level labels for each patch. The decision boundary for the patch classifier is a hyperparameter, optimized so that the autoencoder is used when the predicted probability of a text-label exceeds a threshold. Altogether the whole image was reconstructed from the input image by applying the expert patch-based model to each patch of the image.
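The patch-routing logic can be sketched as follows. The patch classifier and patch autoencoder are stand-in callables here (toy functions, not trained networks), and the patch size is illustrative:

```python
import numpy as np

def censor_image(img, patch=8, clf=None, autoenc=None):
    """Reconstruct a grayscale image patchwise: copy clean patches verbatim,
    pass suspected-text patches through the patch autoencoder."""
    out = img.copy()
    h, w = img.shape
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            block = img[i:i + patch, j:j + patch]
            if clf(block):                       # classifier flags text
                out[i:i + patch, j:j + patch] = autoenc(block)
    return out

# Toy demo: one bright "text" pixel triggers the autoencoder for its patch.
img = np.zeros((16, 16))
img[0, 0] = 1.0                                  # simulated text pixel
flag = lambda b: b.max() >= 1.0                  # toy patch classifier
blank = lambda b: np.full_like(b, 0.5)           # toy patch "autoencoder"
out = censor_image(img, patch=8, clf=flag, autoenc=blank)
```

Only the flagged top-left patch is rewritten; all other patches are copied unchanged, which is the property that keeps distortion of clean image regions to zero.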

Along with the image autoencoder is the full image adversary, which takes the whole reconstructed image and classifies whether the image has a text-label present or not.

The patch classifier had a single hidden layer with 200 ReLU units. The autoencoder also had a single hidden layer with ReLU units. The adversary had two convolutional layers, interspersed with two max-pooling layers, followed by a dense layer of ReLU units. The other hyperparameters were the α and β weights.

All hyperparameters were chosen by evaluation on the validation data. The validation data consists of input/output pairs where the input contains text and the output is the corresponding image without text. The performance was measured using the mean-square error between the autoencoder’s output given the input and the text-free target.

5.2.3 Results

Sample images produced by the trained model on the test data are shown in Figure 4. The model learns to imagine what the image would have looked like had there been no text, despite having never been given example input/output pairs. The produced images are quite plausible, but artifacts become apparent when zooming in. We believe that the quality of the images could be substantially improved through the use of a convolutional autoencoder on a larger dataset, since this could take into account a wider context than a patch when removing the text.

In this application the flexibility of the adversarial framework as compared with a clustering approach becomes apparent: it would be extremely difficult to get an LFR-style approach to work for images, since it must reconstruct images as a convex combination of template images.

Figure 4: Image anonymization results on the test set. In each pair of faces the left image is the input to the autoencoder, and the right image is the censored output.

6 Conclusions and Future Work

We have shown how the adversarial approach can be adapted to the task of removing sensitive information from representations. Our model ALFR improves upon a related approach whilst at the same time being more flexible. We demonstrate this flexibility by showing how the same setup can be used to remove text from an image, with encouraging results.

As has been noted before it is difficult to train adversarial models owing to the unstable dynamic between actor and adversary. There is work to be done in the future in developing theory, or at least heuristics, for improving the stability of the training process.

Following up from the application on images we would like to investigate the more challenging problem of removing more pervasive information from an image, for instance removing gender from a face.

Another interesting problem to investigate would be obscuring text in images where we only have negative examples of images with text. With no further assumptions our method would not be applicable, but if we assume that we have some information about the text then we can gain some traction. For example if the text is the name of the person in the image, and we know the name of the person in the image, then the adversary could be trained to predict a bag-of-characters of the name, whereas the autoencoder would be trained to make this task difficult for the adversary. The result should be a blurring, rather than removal of the text. One issue one would face in this approach would be a lack of any ground-truth examples for validation, since there are many ways to obscure text.
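The bag-of-characters target suggested above is simple to construct; the helper below is a sketch of that idea, not something implemented in the paper:

```python
from collections import Counter
import string

def bag_of_characters(name):
    """Lowercase letter counts of a name, as a 26-dimensional vector.
    Non-letter characters are ignored."""
    counts = Counter(c for c in name.lower() if c in string.ascii_lowercase)
    return [counts[c] for c in string.ascii_lowercase]
```

An adversary trained to regress onto this vector would be rewarded for recovering which letters appear in the overlaid name, so the autoencoder is pushed to obscure exactly that information.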

Acknowledgments

This work was supported in part by the EPSRC Centre for Doctoral Training in Data Science, funded by the UK Engineering and Physical Sciences Research Council (grant EP/L016427/1) and the University of Edinburgh.

References

Appendix A The H-Divergence

In this appendix we describe the notion of an H-divergence, as developed in Kifer et al. (2004) and Ben-David et al. (2007). The H-divergence is a way of measuring the difference between two distributions using classifiers.

A hypothesis for a random variable U on a space Ω is a mapping h : Ω → {0, 1}. A hypothesis class H for a random variable is a collection of hypotheses on that variable. A symmetrical hypothesis class is a hypothesis class H such that for each h ∈ H, the inverse hypothesis defined by h̄(u) = 1 − h(u) is also in H.

Now we can define the divergence.

Given two random variables U, U′ on a common space and a hypothesis class H on that space, the H-divergence between U and U′ is

d_H(U, U′) = 2 sup_{h ∈ H} | P(h(U) = 1) − P(h(U′) = 1) |.

In case H is symmetric, Ben-David et al. (2010) show that d_H can be approximated empirically.

Let H be a symmetrical hypothesis class on random variables U, U′ on a common space. Now, given i.i.d. samples u_1, …, u_m from U and i.i.d. samples u′_1, …, u′_m from U′, we define the empirical H-divergence between them to be

d̂_H(U, U′) = 2 ( 1 − min_{h ∈ H} [ (1/m) Σ_i 1[h(u_i) = 0] + (1/m) Σ_j 1[h(u′_j) = 1] ] ).

The nature of this empirical approximation is shown in the following probabilistic bound from Ben-David et al. (2010). Let H be a symmetrical hypothesis class with VC dimension d on random variables U, U′ on a common space. Now, given i.i.d. samples of size m from each of U and U′, we have that, for any δ ∈ (0, 1), with probability at least 1 − δ,

d_H(U, U′) ≤ d̂_H(U, U′) + 4 √( (d log(2m) + log(2/δ)) / m ).
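For concreteness, the empirical H-divergence can be computed exactly for a small symmetric hypothesis class, such as one-dimensional thresholds together with their complements (an illustrative choice of class, not one used in the paper):

```python
import numpy as np

def empirical_h_divergence(u, u_prime, thresholds):
    """2 * (1 - min_h [ mean 1[h(u)=0] + mean 1[h(u')=1] ]) over the
    symmetric class of threshold hypotheses h_t(x) = 1[x > t] and complements."""
    best = np.inf
    for t in thresholds:
        for flip in (False, True):
            h_u = (u > t) != flip          # h on samples of U
            h_up = (u_prime > t) != flip   # h on samples of U'
            err = np.mean(h_u == 0) + np.mean(h_up == 1)
            best = min(best, err)
    return 2.0 * (1.0 - best)

same = np.array([0.1, 0.5, 0.9])
far = np.array([5.0, 5.5])
ts = np.linspace(-1.0, 6.0, 50)
d_same = empirical_h_divergence(same, same, ts)   # identical samples
d_far = empirical_h_divergence(same, far, ts)     # perfectly separable samples
```

For identical sample sets every hypothesis has combined error 1, giving divergence 0, while perfectly separable sets admit a hypothesis with combined error 0, giving the maximal value 2.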

So we can see that minimizing the capability of an adversary to tell the difference between two distributions relates to minimizing an H-divergence. The empirical divergence can be straightforwardly related to the discrimination of a classifier.

Lemma A. Let Enc and θ be as described in Section 4. Let η be our classifier and y_disc the discrimination as described in Section 3. Then

y_disc ≤ d̂_H(R̃_0, R̃_1),

where H is a symmetric hypothesis class on the representation space including η, and R̃_0, R̃_1 are the representations of the empirical samples with s_n = 0 and s_n = 1 respectively.

Proof.

Recall that

y_disc = | (1/N_0) Σ_{n : s_n = 0} ŷ_n − (1/N_1) Σ_{n : s_n = 1} ŷ_n |,

where ŷ_n = η(r_n). We simply observe that

d̂_H(R̃_0, R̃_1) = 2 max_{h ∈ H} | (1/N_0) Σ_{n : s_n = 0} h(r_n) − (1/N_1) Σ_{n : s_n = 1} h(r_n) | ≥ | (1/N_0) Σ_{n : s_n = 0} η(r_n) − (1/N_1) Σ_{n : s_n = 1} η(r_n) | = y_disc,

where for the inequality we use the fact that η ∈ H, and for the first equality we use the fact that H is a symmetric hypothesis class.