Evading the Adversary in Invariant Representation

05/24/2018 ∙ by Daniel Moyer, et al. ∙ USC Information Sciences Institute ∙ University of Southern California

Representations of data that are invariant to changes in specified nuisance factors are useful for a wide range of problems: removing potential bias in prediction problems, controlling the effects of known confounders, and disentangling meaningful factors of variation. Unfortunately, learning representations that exhibit invariance to arbitrary nuisance factors yet remain useful for other tasks is challenging. Existing approaches cast the trade-off between task performance and invariance in an adversarial way, using an iterative minimax optimization. We show that adversarial training is unnecessary and sometimes counter-productive by casting invariant representation learning for various tasks as a single information-theoretic objective that can be directly optimized. We demonstrate that this approach matches or exceeds performance of state-of-the-art adversarial approaches for learning fair representations and for generative modeling with controllable transformations.




1 Introduction

The removal of unwanted information is a surprisingly common task. Transform-invariant features in computer vision, “fair” encodings from the algorithmic fairness community, and two-stage regressions often used in scientific studies are all cases of the same general concept: we wish to remove the effect of some outside variable c on our data x while remaining relevant to our original task. In the context of representation learning, we wish to map x into an encoding z that is uninformative of c, yet also optimal for our task loss L.

These objectives are often operationalized as an independence constraint z ⊥ c. Encodings satisfying this condition are invariant under changes in c, and are thus called “invariant representations”. In practice these constraints are often relaxed to other measures; in recent works an adversary’s ability to predict c from z has been used as such a proxy louizos2015variational ; xie2017controllable , transplanting adversarial losses from the generative literature to encoder/decoder settings.

In the present work we instead relax the independence constraint to a penalty on the mutual information I(z, c). We provide an analysis of this loss, showing that:

  1. I(z, c) admits a useful variational upper bound. This is in contrast to the usual variational lower bounds on mutual information, e.g. bounds on I(z, y) for some labels y.

  2. When placed alongside the Variational Auto-Encoder (VAE) and the Variational Information Bottleneck (VIB) frameworks, the upper bound on I(z, c) produces a computationally tractable form for learning c-agnostic encodings and predictors.

  3. The adversarial approach can also be derived as a procedure to minimize I(z, c), but it does not provide an upper bound.

Our proposed methods have the practical advantage of only requiring c at training time, and not at test time. They are thus viable for production settings where accessing c is expensive (requiring human labeling), impossible (requiring underlying transformations), or legally inadvisable (sharing protected data). On the other hand, our method also produces a conditional decoder taking both z and c as inputs. While for some purposes this might be discarded at test time, it can also be manipulated to function similarly to Fader Networks lample2017fader , in that we can generate realistic-looking transformations of an input image where some class label has been altered. We empirically test our proposed c-agnostic VAE and VIB methods on two standard “fair prediction” tasks, as well as an unsupervised learning task demonstrating Fader-like capabilities.

1.1 Related Work

The removal of covariate factors from scientific data has a long history. Observational studies in the sciences often cannot control for every factor influencing subjects, so a large literature has been generated on the topic of removing such factors after data collection. Simple statistical techniques usually involve modifying analyses with corresponding covariate effects raudenbush1994random or sometimes multi-level regressions freckleton2002misuse . Modern methods have included more nuanced feature generation and more complex models, but follow along the same vein feis2015ica ; fortin2017harmonization . “Regressing out” or “controlling for” unwanted factors can be effective, but places strong constraints on later analyses (namely, the observation of the same unwanted covariates).

A similar concept also has deep roots in computer vision, where transform-invariant features and methods have been sought for some time. Often these methods were designed for specific cases, e.g. scale-invariant or rotation-invariant features. Early examples include Steerable Filters freeman1991design ; greenspan1994overcomplete , and later SIFT lowe1999object . For many image transformations, data augmentation has become a standard practical tool for encouraging invariance.

Recent work has provided a group-theoretic cohen2016group analysis of the removal of covariate information (either by design or by augmentation), in which equivalences are drawn between finding invariant features and finding the quotient space of the domain over a covariate group action. More generally, an empirical solution was proposed by Lample et al. lample2017fader , who removed specific visual features from a latent representation through adversarial training.

More recently the algorithmic fairness community has investigated fair methods kamiran2009classifying and fair representations zemel2013learning . Derived in part from the desire to avoid discriminating against protected classes of individuals (and in part to avoid breaking laws and/or to avoid being the subject of a civil suit), the objective of these methods has been to preserve task accuracy (usually classification or regression) while removing bias against the protected class of individuals.

Particularly relevant to our work are the recent methods proposed by Louizos et al. louizos2015variational and Xie et al. xie2017controllable , which have a similar problem setup. Both methods generate representations that make it difficult for an adversary to recover the protected class but are still useful for a classification task. Louizos et al. propose the “Variational Fair Auto-Encoder” (VFAE), which, as its name suggests, modifies the VAE of Kingma and Welling kingma2013auto to produce fair encodings, as well as a supervised variant providing fair classifications. (The definition of “fair” in an algorithmic setting is of some debate. A fair encoding in this paper is uninformative of protected classes. We offer no opinion on whether this is truly “fair” in a general or legal sense, taking the word at face value as used by Louizos et al.) Xie et al. combine this concept with adversarial training, adding (inverted) gradient information to produce fair representations and classifications. This adversarial solution coincides exactly with the conceptual framework used in a computer vision application for constructing Fader Networks lample2017fader .

Compressive encoding in a learning context is also well studied. In particular, the Information Bottleneck tishby2000information and its modern successor Variational Information Bottleneck (VIB) alemi2016deep ; achille2018information both provide compressive encodings, aiming for “relevance” with respect to a target variable (usually a label). An unsupervised method, CorEx, also has a similar extension (Anchored Corex gallagher2016anchored ), in which latent factors can be driven toward specific targets. Our work could be thought of as adding “negative anchors” or aiming for “irrelevance” with respect to protected classes.

Models including “nuisance” factors were also considered by Soatto and Chiuso soatto2014visual , in which the authors propose definitions of both nuisances and invariant representations, following the group-theoretic concept of nuisances. Achille and Soatto achille2017emergence directly utilize these results, providing the same criterion and relaxation for invariance that we will use here: minimal mutual information I(z, c) between the representation z and the covariate c. While nuisances form only a small subsection of their paper, the authors propose and test a sampling-based approach to learning invariant representations. Their method is predicated on the ability to sample from the nuisance distribution (e.g. adding occlusions in images). We optimize a similar objective, but avoid such practical constraints.

Contemporaneous to this work, another paper, by Song et al. song2018learning , proposes a similar information-theoretic bound. Instead of making the variational approximation of the reconstruction using p(x | z, c), the authors bound I(z, c) using a specified variational distribution over c. This leads to similar desiderata, which they optimize using an (adversarial) iterative minimax method.

2 Model

Consider a general task that includes an encoding of observed data x into latent variables z through the conditional likelihood q(z | x) (an encoder). Further assume that we observe a variable c which exhibits statistical dependence with x (possibly non-linear dependence). We would like to find a q(z | x) that minimizes our loss function L from the original task, but that also produces a z independent of c.

This is clearly a difficult optimization; independence is a very strong condition. A natural relaxation is the minimization of the mutual information I(z, c). We can write our relaxed objective as

    min L + λ I(z, c),    (1)

where λ is a trade-off parameter between the two objectives. L might involve other variables as well, e.g. labels y. Without details of L and its associated task, we can still provide insight into I(z, c).

Before continuing, it is important to note that all entropic quantities related to z are taken with respect to the encoding distribution q(z | x) unless explicitly stated otherwise. In some cases entropies depend on prior distributions p(z), and this will be explicitly noted.

From properties of mutual information, we have that I(z, c) = I(z, x) - I(z, x | c) + I(z, c | x). Here, we note that q(z | x) is the function we are optimizing over, and thus the distribution of z depends solely on x, so I(z, c | x) = 0. Thus,

    I(z, c) = I(z, x) - I(z, x | c).    (3)

Using mutual information properties and a variational inequality, we can then write the following:

    I(z, c) = I(z, x) - H(x | c) + H(x | z, c)
            ≤ I(z, x) - H(x | c) - E_{x,c} E_{q(z|x)}[ log p(x | z, c) ]    (5)
            = E_x[ KL[ q(z | x) || q(z) ] ] - E_{x,c} E_{q(z|x)}[ log p(x | z, c) ] - H(x | c).    (7)

H(x | c) is a constant and can be ignored. In Eq. 5 we introduce the variational distribution p(x | z, c), which will play the traditional role of the decoder. I(z, c) is thus bounded, up to a constant, by a divergence and a reconstruction error.

The result is similar in appearance to the bound from Variational Auto-Encoders kingma2013auto , wherein we balance the divergence between q(z | x) and a prior p(z) against the reconstruction error. Here our penalty on I(z, c) amounts to encouraging q(z | x) to be close to its marginal q(z), i.e. to vary less across inputs x, no matter the form of q(z) or p(z). From a coding viewpoint, our penalty encourages the compression of c out of z via the divergence term KL[ q(z | x) || q(z) ].

In both interpretations, these penalties are tempered by the conditional reconstruction error. This provides additional intuition: by adding a copy of c as input to the decoder, we ensure that compressing information about c out of z is not penalized by the reconstruction term. In other words, conditional reconstruction combined with compressive regularization leads to invariance w.r.t. the conditional input.

2.1 Invariant Codes through VAE

We apply our proposed penalty to the VAE of Kingma and Welling kingma2013auto , inspired by the similarity of the penalty in Eq. 7 to the VAE loss function. The original VAE stems from the classical unsupervised task of constructing latent factors z so that p(z) and p(x | z) define a generative model maximizing the log likelihood of the data, log p(x). This generally intractable expression is lower bounded using Jensen’s inequality and a variational approximation:

    log p(x) ≥ E_{q(z|x)}[ log p(x | z) ] - KL[ q(z | x) || p(z) ].    (8)
Kingma and Welling kingma2013auto frame q(z | x) and p(x | z) as an encoder/decoder pair. They then provide a re-parameterization trick that, when used with standard function approximators (neural networks), allows for efficient estimation of latent codes z. In short, the reparameterization is the following:

    z = μ(x; φ) + σ(x; φ) ⊙ ε,    ε ~ N(0, I),

where (μ, σ) is a deterministic function (a neural network) with parameters φ, and ε is an independent random variable drawn from a Normal distribution. (In the original paper, this was defined more generally; here we only consider the Normal distribution case.)
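As a minimal numpy sketch of this trick (function names are ours): the randomness is isolated in ε, so z is a deterministic, differentiable function of the encoder outputs.

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I).

    The randomness lives in eps alone, so z is a deterministic,
    differentiable function of the encoder outputs (mu, log_var).
    """
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# One draw per element of a batch; mu and log_var come from the encoder net.
rng = np.random.default_rng(0)
z = reparameterize(np.zeros((4, 2)), np.zeros((4, 2)), rng)  # sigma = 1
```

In an actual implementation the same transformation is applied inside the computation graph, so that gradients flow through μ and log σ² while ε is treated as fixed noise.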

We can reformulate Kingma and Welling’s VAE to include our penalty on I(z, c). Define data x and latent factors z as before, but also define an observed covariate c upon which x may have non-trivial dependence. That is,

    p(x, z, c) = p(x | z, c) p(z | c) p(c).

The invariant coding task is to find q(z | x) maximizing the data likelihood, subject to z ⊥ c under q(z | x) (i.e. subject to the estimated code being invariant to c). We make the same relaxation as in Eq. 1 to formulate our objective:

    min -E_x[ log p(x | c) ] + λ I(z, c).
Starting with the first term, we can derive a familiar-looking encoder/decoder loss function that now includes c:

    log p(x | c) ≥ E_{q(z|x)}[ log p(x | z, c) ] - KL[ q(z | x) || p(z | c) ].
Because p(z | c) is a prior, we can make the assumption that p(z | c) = p(z), the prior marginal distribution of z. This is a willful model misspecification: for an arbitrary encoder, the latent factors z are probably not independent of c. However, practically we wish to find z that are independent of c, and thus it is reasonable to include such a prior belief in our generative model. Taking this assumption, we have

    log p(x | c) ≥ E_{q(z|x)}[ log p(x | z, c) ] - KL[ q(z | x) || p(z) ].
This is almost exactly the same as the VAE objective in Eq. 8, except our decoder requires c as well as z. Putting this together with the penalty term from Eq. 7, we have the following variational bound on the combined objective (up to a constant):

    min E_x[ (1 + λ) E_{q(z|x)}[ -log p(x | z, c) ] + KL[ q(z | x) || p(z) ] + λ KL[ q(z | x) || q(z) ] ].    (18)

We use this bound to learn c-invariant auto-encoders.
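Collecting terms, the conditional reconstruction error is weighted by (1 + λ) and the two divergences are added; a sketch of the per-batch scalar combination, assuming the bound takes the form above (the function and argument names are ours):

```python
def invariant_vae_loss(recon_nll, kl_to_prior, kl_to_marginal, lam):
    """Combine the three terms of the c-invariant VAE bound (Eq. 18).

    recon_nll:      E_q[-log p(x|z,c)], conditional reconstruction error
    kl_to_prior:    KL[q(z|x) || p(z)], the usual VAE rate term
    kl_to_marginal: KL[q(z|x) || q(z)], the invariance penalty
    lam:            trade-off weight on I(z, c)
    """
    return (1.0 + lam) * recon_nll + kl_to_prior + lam * kl_to_marginal
```

Note that setting lam = 0 recovers an ordinary VAE loss with a conditional decoder, which makes the role of λ as an invariance knob explicit.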

2.1.1 Derivation of an approximation for the Conditional-Marginal divergence

Equation 18 is our desired loss function for learning invariant codes in an unsupervised context. Unfortunately it contains q(z), the empirical marginal distribution of the latent code z, which is difficult to compute. Using the re-parameterization trick, q(z) becomes a mixture distribution (one Gaussian component per data point), and this allows us to approximate its divergence from each q(z | x_i):

    KL[ q(z | x_i) || q(z) ] ≤ -log (1/N) Σ_j exp( -KL[ q(z | x_i) || q(z | x_j) ] ).    (24)

We can thus approximate KL[ q(z | x_i) || q(z) ] from the pairwise divergences KL[ q(z | x_i) || q(z | x_j) ], which all have closed forms due to the reparameterization trick. While this requires O(N²) divergence computations for batch size N, pairwise Gaussian KL divergences reduce to matrix algebra, making this computation fast in practice.
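A numpy sketch of this computation for diagonal Gaussians (function names are ours): all N² divergences are formed at once with broadcasting, then combined into a per-sample bound on the divergence from the batch mixture.

```python
import numpy as np

def pairwise_gaussian_kl(mu, log_var):
    """KL[ N(mu_i, diag v_i) || N(mu_j, diag v_j) ] for all pairs (i, j).

    mu, log_var: (N, D) encoder outputs. Returns an (N, N) matrix
    with zeros on the diagonal (KL of a distribution with itself).
    """
    v_i = np.exp(log_var)[:, None, :]                    # (N, 1, D)
    v_j = np.exp(log_var)[None, :, :]                    # (1, N, D)
    lv_diff = log_var[None, :, :] - log_var[:, None, :]  # log v_j - log v_i
    sq_diff = (mu[:, None, :] - mu[None, :, :]) ** 2
    return 0.5 * np.sum(lv_diff + (v_i + sq_diff) / v_j - 1.0, axis=-1)

def kl_to_batch_marginal(mu, log_var):
    """Bound KL[ q(z|x_i) || q(z) ], with q(z) the batch mixture:
    KL[q_i || (1/N) sum_j q_j] <= -log (1/N) sum_j exp(-KL[q_i || q_j]).
    """
    kl = pairwise_gaussian_kl(mu, log_var)               # (N, N)
    return -np.log(np.mean(np.exp(-kl), axis=1))
```

Since the diagonal pairwise divergences are zero, the mean inside the log is at least 1/N, so the bound is always between 0 and log N and is numerically stable for moderate batch sizes.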

This further provides insight into the previously proposed Variational Fair Auto-Encoder of Louizos et al. louizos2015variational . In that paper, the authors add a Maximum Mean Discrepancy penalty as a somewhat ad hoc regularizer. This nevertheless works quite well in practice, as it encourages the statistical moments of each q(z | c) to be the same over the varying values of c. Our penalty on KL[ q(z | x) || q(z) ] has equivalent minima, and shares the “q-regularizing” flavor of the MMD penalty.

2.1.2 Alternate derivation leads to adversarial loss

In Equation 3 we used the identity I(z, c) = I(z, x) - I(z, x | c) + I(z, c | x), with the caveat that the third term is zero. We could have instead used another identity, I(z, c) = H(c) - H(c | z). Here, the first term is constant, but expanding the second term provides the following:

    I(z, c) = H(c) - H(c | z)
            = H(c) - inf_{q(c|z)} E_{z,c}[ -log q(c | z) ]
            ≥ H(c) - E_{z,c}[ -log q(c | z) ]  for any fixed q(c | z).    (27)

The last inequality is again up to a constant term, H(c); note that a fixed predictor yields a lower bound on I(z, c), not an upper bound. Interpreting this in machine learning parlance, another possible approach to minimizing I(z, c) is to optimize the encoding distribution so that q(c | z), the lowest-entropy predictor of c given z, has the highest possible entropy (i.e. is as inaccurate as possible at predicting c). This is often operationalized by adversarial learning, and subsequent error may be due in part to the adversary not achieving the infimum. Practically speaking, this may indicate that over-training adversaries would benefit performance, by bringing the adversarial gradient closer to the infimum adversary’s gradient.
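To make the identity concrete, a small discrete example (numpy; function names are ours) computing I(z, c) = H(c) - H(c | z) from a joint probability table p(z, c):

```python
import numpy as np

def entropy(p):
    # Shannon entropy in nats; zero-probability entries contribute nothing
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def mutual_information(joint):
    """I(z, c) = H(c) - H(c|z) for a discrete joint table p(z, c),
    where rows index z and columns index c."""
    p_z = joint.sum(axis=1)
    p_c = joint.sum(axis=0)
    h_c_given_z = sum(
        pz * entropy(joint[i] / pz) for i, pz in enumerate(p_z) if pz > 0
    )
    return entropy(p_c) - h_c_given_z

# Independent z and c gives I(z, c) = 0; a deterministic coupling
# gives I(z, c) = H(c) = log 2.
indep = np.full((2, 2), 0.25)
coupled = np.array([[0.5, 0.0], [0.0, 0.5]])
```

This is the quantity the adversary implicitly estimates: a perfect predictor of c from z drives H(c | z) to zero, leaving I(z, c) at its maximum H(c).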

2.2 Supervised Invariant Codes through the Variational Information Bottleneck

Learned encodings are often used for downstream supervised prediction tasks. Just as in the Variational Fair Auto-Encoder louizos2015variational , we can model the encoding and the prediction at the same time to offer c-invariant predictions. Our formulation of this problem fits into the Information Bottleneck framework tishby2000information and mirrors the Variational Information Bottleneck (VIB) alemi2016deep .

Conceptually, VAEs have strong connections to the Information Bottleneck alemi2016deep . Stepping out of the generative context, we can “reroute” our decoder to a label variable y, giving the computational model x → z → y. The bottleneck paradigm prescribes optimizing over q(z | x) and q(y | z) so that I(z, y) is maximal while I(z, x) is minimized (“maintaining information about y with maximal compression of x into z”). As illustrated by Alemi et al. alemi2016deep , this can be approximated using variational inference.

We can produce c-invariant codes in the supervised Information Bottleneck context using the relaxation from Eq. 1. Beginning with the bottleneck objective and then including the minimization of I(z, c), we have

    min β I(z, x) - I(z, y) + λ I(z, c).

We can then apply the same bound as in Eq. 5 to obtain, up to a constant, the following:

    min β I(z, x) - I(z, y) + λ ( E_x[ KL[ q(z | x) || q(z) ] ] - E_{x,c} E_{q(z|x)}[ log p(x | z, c) ] ).

In this objective we have a maximization of the likelihood p(x | z, c). This is a decoder loss, adding a third branch to our network. Following the derivation in Alemi et al. alemi2016deep , as well as a similar path as in Section 2.1, the variational bound on the objective is

    min E_x[ -E_{q(z|x)}[ log q(y | z) ] + β KL[ q(z | x) || p(z) ] + λ KL[ q(z | x) || q(z) ] - λ E_{q(z|x)}[ log p(x | z, c) ] ].    (31)

We use Eq. 31 to learn c-invariant predictors. Optimization is performed over three function approximators: an encoder q(z | x), a conditional decoder p(x | z, c), and a predictor q(y | z). We further must compute KL[ q(z | x) || q(z) ] from the penalty term. Instead of following Alemi et al. alemi2016deep , we again use the approximation to this divergence from Eq. 24.

3 Computation and Empirical Evaluation

We have two losses: the modified VAE loss (Eq. 18) and the modified VIB loss (Eq. 31). In both we must learn an encoder/decoder pair, q(z | x) and p(x | z, c). We use feed-forward networks to approximate these functions. For q(z | x) we use the Gaussian reparameterization trick, and for p(x | z, c) we simply concatenate c onto z as extra input features to be decoded. In the modified VIB we also have a predictor branch q(y | z), which we also parameterize with a feed-forward network. Specific architectures (e.g. number of layers and nodes per layer for each branch) vary by domain.
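The concatenation step above can be sketched in numpy (the function name and one-hot encoding choice are ours; for categorical c a one-hot encoding is one natural option):

```python
import numpy as np

def conditional_decoder_input(z, c, num_classes):
    """Append a one-hot encoding of the covariate c to each latent code z.

    z: (N, D) latent codes; c: length-N integer covariate labels.
    Returns (N, D + num_classes), the input fed to the decoder network.
    """
    one_hot = np.eye(num_classes)[np.asarray(c)]
    return np.concatenate([z, one_hot], axis=1)

# At test time, c can be swapped to decode the same style under a
# different label (the Fader-like manipulation of Section 3.2).
z = np.zeros((2, 3))
x_in = conditional_decoder_input(z, [0, 1], num_classes=2)  # shape (2, 5)
```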

We evaluate the performance of our proposed invariance penalty on two datasets with a “fair classification” task. We also demonstrate “Fader Network”-like capabilities for manipulating specified factors in generative modeling on the MNIST dataset.

3.1 Fair Classification

For each fair classification dataset/task we evaluate both prediction accuracy and the adversarial error in predicting c from the latent code. We compare against the Variational Fair Autoencoder (VFAE) louizos2015variational and the adversarial method proposed in Xie et al. xie2017controllable . Both datasets are from the UCI repository. The preprocessing for both datasets follows Zemel et al. zemel2013learning , which is also the source for the pre-processing in our baselines louizos2015variational ; xie2017controllable .

The first dataset is the German dataset, containing 1,000 samples of personal financial data. The objective is to predict whether a person has a good credit score, and the protected class is Age (which, as per zemel2013learning , is binarized). The second dataset is the Adult dataset, containing 45,222 data points of US census data. The objective is to predict whether or not a person has over 50,000 dollars saved in the bank. The protected factor for the Adult dataset is Gender. (In some papers the protected factor for the Adult dataset is reported as Age, but those papers also reference Zemel et al. zemel2013learning as the processing and experimental scheme, which specifies Gender.)

Wherever possible we use architectural constraints from previous papers. All encoders and decoders are single-layer, as specified by Louizos et al. louizos2015variational (including those in the baselines). For both datasets we use 64 hidden units in our method, as in Xie et al., while for the VFAE we use its described architecture. We use a latent space of 30 dimensions in each case. We train using Adam with the same hyperparameter settings as in Xie et al. and a batch size of 128. Optimization and parameter tuning are done via a held-out validation set.

For each tested method we train a discriminator to predict c from the generated latent codes z. These discriminators are trained independently from the encoder/decoder/within-method adversaries. For these post-hoc adversaries we use the architecture from Xie et al. xie2017controllable : a three-layer feed-forward network trained using batch normalization and Adam, with 64 hidden units per layer, using absolute error. We generalize this to four adversaries of increasing depth (number of hidden layers). Each discriminator is trained post-hoc for each model, even in cases where the model itself includes a discriminator (e.g. the model proposed by Xie et al. xie2017controllable ).

3.2 Unsupervised Learning

We demonstrate a form of unsupervised image manipulation inspired by Fader Networks lample2017fader on the MNIST dataset. We use the digit label as the covariate c, which pushes all non-class stylistic information into the latent space while attempting to remove information about the exact digit being written. This allows us to manipulate the decoder at test time to produce different artificial digits in the style of a given digit. We use 2 hidden layers with 512 nodes each for both the encoder and the decoder.

4 Results

German Dataset Adv. Loss Pred Acc.
Maj. Class 0.725 0.695
VFAE louizos2015variational 0.717 0.720
Xie et al. xie2017controllable 0.811 0.695
Proposed 0.698 0.710

Adult Dataset Adv. Loss Pred Acc.
Maj. Class 0.675 0.752
VFAE louizos2015variational 0.882 0.842
Xie et al. xie2017controllable 0.888 0.831
Proposed 0.675 0.844
Figure 1: On the left we display the adversarial loss (the accuracy of the adversary in predicting c) and the predictive accuracy on y for three methods, plus the majority-class baseline, on both the Adult and German datasets. For adv. loss lower is better, while for pred. acc. higher is better. On the right we plot adversarial loss for varying adversarial strength (indicated by color), parameterized by the number of hidden layers from zero (logistic regression) to three. All evaluations are performed on the hold-out test sets.

Figure 2: t-SNE plots of the latent encodings of (left to right) the VFAE, Xie et al., and our proposed method on the Adult dataset (first 1000 pts., test split). The value of the protected variable c is indicated by color, where red is the majority class.
Figure 3: We demonstrate the ability to generate stylistically similar images of varying classes using the MNIST dataset. Each image in the left column is mapped into a code z that is invariant to its digit label c. We can then generate an image using z and any other specified digit label c, as shown on the right.

For the German dataset, shown in the top table of Figure 1, the methods are roughly equivalent. All methods have comparable predictive accuracy, while the VFAE and the proposed method have competitive adversarial loss. In general, however, the smaller dataset does not differentiate the methods.

For the larger Adult dataset, shown in the bottom table of Figure 1, all three methods again have comparable predictive accuracy. However, against stronger adversaries each baseline has very high loss. Our proposed method has comparable accuracy to the VFAE, while providing the best adversarial error across all four adversarial difficulty levels.

We further visualized a projection of the latent codes using t-SNE maaten2008visualizing ; invariant representations should produce inseparable embeddings for each class. All methods have large red-only regions; this is somewhat expected for the majority class. However, both baseline methods have blue-only regions, while the proposed method has only a heterogeneous region. (Previous versions of this paper had severely contorted latent codes for the Xie et al. baseline. Further investigation showed this to be a convergence issue. Mild performance improvements were also observed.)

Figure 3 demonstrates our ability to manipulate the conditional decoder. The left column contains the actual images (randomly selected from the test set), while the right columns contain images generated using the decoder. Particularly notable are the transfer of azimuth and thickness, and the failure of some styles to transfer to some digits (usually curved to straight digits, or vice versa).

5 Discussion

As shown analytically in Section 2.1.2, in the optimal case adversarial training can perform as well as our derived method; it is also intuitively simple and allows for more nuanced tuning. However, it introduces an extra layer of complexity (indeed, a second optimization problem) into the system. In this particular case of invariant representation, our results lead us to believe that adversarial training is unnecessary.

This does not mean that adversarial training for invariant representations is strictly worse in practice. There are certainly cases where training an adversary may be easier or less restrictive than other methods, and due to its shared literature with Generative Adversarial Networks goodfellow2014generative , there may be training heuristics or other techniques that can improve performance.

On the other hand, we believe that our derivations shed light on why these methods might fail. We attribute specific failure modes of adversarial training to Eq. 27, where the adversary fails to achieve the infimum. Bad approximations (i.e. weak or poorly trained adversaries) may provide bad gradient information to the system, leading to poor performance of the encoder against a post-hoc adversary.

Our experimental results do not match those reported in Xie et al. While in general their method has comparable predictive accuracy, we do not find that their adversarial error is low; instead, we find that the encoder/adversary pair becomes stuck in local minima. We also find that the adversary trained alongside the encoder performs badly against the encoder (i.e. that adversary cannot predict c well), but a post-hoc trained adversary performs very well, easily predicting c (as demonstrated by our experiments).

It may be that we have inadvertently built a stronger adversary. We have attempted to follow the authors’ experimental design as closely as possible, using the same architecture and the same adversary (using the gradient-flip trick and 3-layer feed-forward networks). With the details provided we could not replicate their reported adversarial error for their method, nor for the VFAE method. However, we are able to reproduce the adversarial error reported in Louizos et al., which uses logistic regression. In general, stronger adversaries will increase the measured adversarial loss, but the relative rankings should remain roughly the same.

6 Conclusion

We have derived a variational upper bound for the mutual information between latent representations and covariate factors. Provided a dataset with labeled covariates, we can train both supervised and unsupervised learning methods that are invariant to these factors without the use of adversarial training. After training our method can be used in production without requiring covariate labels. Finally, our approach also enables manipulation of specified factors when generating realistic data. Our direct, information-theoretic optimization approach avoids the pitfalls inherent in adversarial learning for invariant representation and produces results that match or exceed capabilities of these state-of-the-art methods.


This work was supported by DARPA grants W911NF-16-1-0575 and FA8750-17-C-0106, as well as the NSF Graduate Research Fellowship Program Grant Number DGE-1418060. We would like to thank the conference organizers, area chairs, and especially the anonymous reviewers for their work and helpful input. We also would like to thank Ayush Jaiswal for several insightful conversations, and Ishaan Gulrajani for finding and correcting a bug in our evaluation code.