Disentangling Influence: Using Disentangled Representations to Audit Model Predictions

06/20/2019 · Charles T. Marx et al. · The University of Utah, The University of Arizona, Haverford College, Cornell University

Motivated by the need to audit complex and black box models, there has been extensive research on quantifying how data features influence model predictions. Feature influence can be direct (a direct influence on model outcomes) or indirect (model outcomes are influenced via proxy features). Feature influence can also be expressed in aggregate over the training or test data or locally with respect to a single point. Current research has typically focused on a single setting along each of these dimensions. In this paper, we develop disentangled influence audits, a procedure to audit the indirect influence of features. Specifically, we show that disentangled representations provide a mechanism to identify proxy features in the dataset, while allowing an explicit computation of feature influence on either individual outcomes or aggregate-level outcomes. We show through both theory and experiments that disentangled influence audits can both detect proxy features and show, for each individual or in aggregate, which of these proxy features affects the classifier being audited the most. In this respect, our method is more powerful than existing methods for ascertaining feature influence.


1 Introduction

As machine learning models have become increasingly complex, there has been a growing subfield of work on interpreting and explaining the predictions of these models [17, 8]. In order to assess the importance of particular features to aggregated model predictions or outcomes for an individual instance, a variety of direct and indirect feature influence techniques have been developed. While direct feature influence [4, 9, 13, 18] focuses on determining the importance of features used directly by the model to determine an outcome, indirect feature influence techniques [1] report that a feature is important if that feature or a proxy had an influence on the model outcomes.

Feature influence methods can focus on the influence of a feature taken over all instances in the training or test set [4, 1], or on the local feature influence on a single individual item of the training or test set [18, 13] (both of which are different from the influence of a specific training instance on a model’s parameters [11]). Both the global perspective given by considering the impact of a feature on all training and/or test instances and the local, individual perspective can be useful when auditing a model’s predictions. Consider, for example, the question of fairness in an automated hiring decision: determining the indirect influence of gender on all test outcomes could help us understand whether the system had disparate impacts overall, while an individual-level feature audit could help determine if a specific person’s decisions were due in part to their gender.¹

¹ While unrelated to feature influence, the idea of recourse [21] also emphasizes the importance of individual-level explanations of an outcome or how to change it.

Our Work.

In this paper we present a general technique to perform both global and individual-level indirect influence audits. Our technique is modular – it solves the indirect influence problem by reduction to a direct influence problem, allowing us to benefit from existing techniques.

Our key insight is that disentangled representations can be used to do indirect influence computation. The idea of a disentangled representation is to learn independent factors of variation that reflect the natural symmetries of a data set. This approach has been very successful in generating representations in deep learning that can be manipulated while creating realistic inputs [2, 3, 6, 12, 19]. Related methods use competitive learning to ensure a representation is free of protected information while preserving other information [5, 14].

In our context, the idea is to disentangle the feature whose (indirect) influence we want to compute. By doing this, we obtain a representation in which we can manipulate the feature directly to estimate its influence. Our approach has a number of advantages. We can connect indirect influence in the native representation to direct influence in the disentangled representation. Our method creates a disentangled model: a wrapper to the original model with the disentangled features as inputs. This means it works for (almost) any model for which direct influence methods work, and also allows us to use any direct influence method developed in the future.

Specifically, our disentangled influence audits approach provides the following contributions:

  1. Theoretical and experimental justification that the disentangled model and associated disentangled influence audits we create provide an accurate indirect influence audit of complex, and potentially black box, models.

  2. Quality measures, based on the error of the disentanglement and the error of the reconstruction of the original input, that can be associated with the audit results.

  3. An indirect influence method that can work in association with both global and individual-level feature influence mechanisms. Our disentangled influence audits can additionally audit continuous features and image data; these types of audits were not possible with previous indirect audit methods (without additional preprocessing).

2 Our Methodology

2.1 Theoretical background

Let $P$ and $X$ denote sets of attributes with associated domains $\mathcal{P}$ and $\mathcal{X}$. $P$ represents features of interest: these could be protected attributes of the data or any other features whose influence we wish to determine. For convenience we will assume that $P$ consists of the values taken by a single feature – our exposition and techniques work more generally. $X$ represents the other attributes of the data, which may or may not be influenced by features in $P$. An instance is thus a point $(x, p) \in \mathcal{X} \times \mathcal{P}$. Let $Y$ denote the space of labels for a learning task ($Y = \{-1, +1\}$ for binary classification or $Y = \mathbb{R}$ for regression).

Disentangled Representation.

Our goal is to find an alternate representation $(x', p)$ of an instance $(x, p)$. Specifically, we would like to construct $x'$ that represents all factors of variation that are independent of $p$, as well as a mapping $D$ such that $D(x', p) = x$. We will refer to the associated new domain as $\mathcal{X}'$. We can formalize this using the framework of [10]. For any pair $p, p' \in \mathcal{P}$, we can define a group action implicitly in terms of its orbits: specifically, we define an equivalence relation $(x, p) \equiv (\tilde{x}, p')$ if, in the underlying data, changing $p$ to $p'$ would change $x$ to $\tilde{x}$. Note that this is an orbit with respect to the permutation group on $|\mathcal{P}|$ elements (where $|\mathcal{P}|$ is the size of the domain $\mathcal{P}$). Our goal is to find an equivariant function $E : \mathcal{X} \times \mathcal{P} \to \mathcal{X}' \times \mathcal{P}$ and an associated group action on $\mathcal{X}' \times \mathcal{P}$ that yields the desired disentangled representation.

We can define a group action on the disentangled representation as the mapping $(x', p) \mapsto (x', p')$, which changes only the $\mathcal{P}$ coordinate. Then, given $E$ such that $E(x, p) = (x', p)$, it is equivariant and the representation satisfies the property of being disentangled. Formally, the group action is the product of the action on $\mathcal{P}$ and the identity mapping on $\mathcal{X}'$, but for clarity we omit this detail.

Direct and indirect influence

Given a model $f$, a direct influence measure quantifies the degree to which any particular feature influences the outcome of $f$ on a specific input. In this paper, we use the SHAP values proposed by [13], which are inspired by the Shapley values of game theory. For a model $f$ and input $x$, the influence of feature $i$ is defined as [13, Eq. 8]
$$\phi_i(f, x) = \sum_{z \subseteq x} \frac{|z|!\,(M - |z| - 1)!}{M!}\,\big[f_x(z) - f_x(z \setminus i)\big],$$
where $|z|$ denotes the number of nonzero entries in $z$, $z$ is a vector whose nonzero entries are a subset of the nonzero entries in $x$, $z \setminus i$ denotes $z$ with the $i$th entry set to zero, and $M$ is the number of features. Finally, $f_x(z) = \mathbb{E}[f(x) \mid x_S = z_S]$ is the conditional expected value of the model subject to fixing all the nonzero entries of $z$ ($S$ is the set of nonzero entries in $z$).
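To make the direct-influence side concrete, the following is a minimal Python sketch that queries the shap package's KernelExplainer for these values; the model and data here are illustrative placeholders, not the models audited in this paper.

```python
import numpy as np
import shap  # pip install shap

# Illustrative model: a simple function of three features (a stand-in for f).
def f(X):
    return X[:, 0] + 2.0 * X[:, 1]  # feature 2 has no direct effect

rng = np.random.default_rng(0)
background = rng.uniform(-1, 1, size=(100, 3))  # background data for the conditional expectations
x = np.array([[0.5, -0.2, 0.9]])                # the instance to explain

# KernelExplainer approximates the Shapley values phi_i defined above.
explainer = shap.KernelExplainer(f, background)
phi = explainer.shap_values(x)
print(phi)  # one influence value per feature; feature 2 receives ~0 direct influence
```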

Indirect influence attempts to capture how a feature might influence the outcome of a model even if it is not explicitly represented in the data, i.e., its influence is via proxy features. The above direct influence measure cannot capture these effects because the information encoded in a feature $i$ might be retained in other features even if $i$ itself is removed. We say that the indirect influence of feature $i$ on the outcome of model $f$ on input $x$ is the direct influence of some proxy for $i$, where a proxy for $i$ consists of a set of features $S$ and a function $g$ that predicts $x_i$, i.e., such that $g(x_S) = x_i$. Note that this generalizes in particular the notion of indirect influence defined by [1]: in their work, indirect influence is defined via an explicit attempt to first remove any possible proxy for $i$ and then evaluate the direct influence of $i$. Further, note that if there are no features that can predict $x_i$, then the indirect and direct influence of $i$ are the same (because the only proxy for $x_i$ is $x_i$ itself).

Disentangled influence

The key insight in our work is that disentangled representations can be used to compute indirect influence. Assume that we have an initial representation of a feature vector as $(x, p)$ and train a model $f$ on labeled pairs $((x, p), y)$. Our goal is to determine the indirect influence of $p$ on the model outcome $f(x, p)$. Suppose we construct a disentangled representation $(x', p)$ as defined above, with the associated encoding function $E$ and decoding function $D$.

Proposition 1.

The indirect influence of $p$ on the outcome of $f$ on input $(x, p)$ equals the direct influence of $p$ on the outcome of $f_D$ on input $(x', p)$, where $f_D(x', p) = f(D(x', p), p)$.

Proof.

By the properties of the disentangled representation, there is no proxy for $p$ in the components of $x'$: if there were, then it would not be true that $E$ was equivariant (because we could not factor the group action into an action on $\mathcal{P}$ and the identity mapping on $\mathcal{X}'$).

Thus, if we wished to compute the indirect influence of $p$ on model $f$ with outcome $f(x, p)$, it is sufficient to compute the direct influence of $p$ on the model $f_D$ that first converts from the disentangled representation back to the original representation and then applies $f$. ∎
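As a minimal sketch of Proposition 1 in code, the wrapper below turns a direct-influence query on the disentangled inputs into an indirect-influence query on the original model; `model`, `decoder`, and their signatures are assumptions made for illustration.

```python
def make_disentangled_model(model, decoder):
    """Wrap `model` so it accepts the disentangled representation (x', p).

    `decoder(x_prime, p)` is assumed to reconstruct the original features x,
    and `model(x, p)` is the (black-box) model being audited.
    """
    def f_D(x_prime, p):
        x_hat = decoder(x_prime, p)   # back to the original representation
        return model(x_hat, p)        # then apply the audited model
    return f_D

# The direct influence of p on f_D (e.g., via shap) is then the
# indirect influence of p on `model`.
```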

Dealing with errors.

The above proposition holds if we are able to obtain a perfectly disentangled and invertible representation. In practice, this might not be the case and the resulting representation might introduce errors. In particular, assume that our decoder function is some $\tilde{D} \neq D$. While we do not provide an explicit formula for the dependence of the influence function parameters, we note that the influence is a linear function of the predictions, and so we can begin to understand the errors in the influence estimates by looking at the behavior of the predictor with respect to $p$.

Model output can be written as $y = f(\tilde{D}(x', p), p)$. Recalling that $x' = E(x, p)$, the partial derivative of $y$ with respect to $p$ can be written as
$$\frac{\partial y}{\partial p} = \frac{\partial f}{\partial x}\left(\frac{\partial \tilde{D}}{\partial p} + \frac{\partial \tilde{D}}{\partial x'}\,\frac{\partial x'}{\partial p}\right) + \frac{\partial f}{\partial p}.$$
Consider the term $\partial x'/\partial p$. If the disentangled representation is perfect, then this term is zero (because $x'$ is unaffected by $p$), and therefore we get $\partial y/\partial p = \frac{\partial f}{\partial x}\frac{\partial \tilde{D}}{\partial p} + \frac{\partial f}{\partial p}$, which is as we would expect. If the reconstruction is perfect (but not necessarily the disentangling), then the term $\partial \tilde{D}/\partial p$ matches that of the true decoder, $\partial D/\partial p$. What remains as a source of error is the term involving the partial derivative of $\tilde{D}$ with respect to the latent encoding $x'$.

2.2 Implementation

Our overall process requires two separate pieces: 1) a method to create disentangled representations, and 2) a method to audit direct feature influence. In most experiments in this paper, we use adversarial autoencoders [15] to generate disentangled representations, and Shapley values from the shap technique [13] for the direct influence audits (as described above in Section 2.1).

Disentangled representations via adversarial autoencoders

We create disentangled representations by training three separate neural networks, which we denote $E$, $D$, and $A$ (see Figure 1). Networks $E$ and $D$ form an autoencoder: the image of $E$ has lower dimensionality than its domain, and the training process seeks for the composition $x \mapsto D(E(x, p), p)$ to be an approximate identity, through gradient descent on the reconstruction error $\|D(E(x, p), p) - x\|^2$. Unlike regular autoencoders, $D$ is also given direct access to the protected attribute $p$. Adversarial autoencoders [15], in addition, use an ancillary network $A$ that attempts to recover the protected attribute from the image of $E$, without access to $p$ itself. (Note the slight abuse of notation here: $A$ is assumed not to have access to $p$, while $E$ does have access to it.) During the training of $E$ and $D$, we seek to reduce the reconstruction error, but also to increase the error of the discriminator $A$.

Figure 1: System diagram when auditing the indirect influence of feature $p$ on the outcomes of model $f$ for instance $(x, p)$ using a direct influence algorithm.

The optimization process of $A$ tries to recover the protected attribute $p$ from the code $x'$ generated by $E$. ($E$ and $A$ are the adversaries.) When the process converges to an equilibrium, the code generated by $E$ will contain no information about $p$ that is useful to $A$, but still reconstructs the original data correctly: $E$ disentangles $p$ from the other features.

The loss functions used to codify this process are
$$L_D = \mathrm{MSE}\big(D(E(x, p), p),\, x\big), \qquad L_A = \mathrm{MSE}\big(A(E(x, p)),\, p\big), \qquad L_E = L_D - \lambda L_A,$$
where $\mathrm{MSE}$ denotes the mean squared error and $\lambda$ is a hyperparameter determining the importance of disentanglement relative to reconstruction. When $p$ is a binary feature, $L_A$ and $L_E$ are adjusted to use binary cross-entropy loss between $A(E(x, p))$ and $p$.
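A minimal PyTorch sketch of this adversarial training step is shown below; it is our own illustration rather than the authors' exact code, the layer sizes and optimizer settings are illustrative, and for a binary $p$ the MSE terms would be swapped for binary cross-entropy as noted above.

```python
import torch
import torch.nn as nn

d_x, d_latent = 8, 4   # feature and latent dimensions (illustrative)
lam = 1.0              # lambda: weight of disentanglement vs. reconstruction

E = nn.Sequential(nn.Linear(d_x + 1, 10), nn.ReLU(), nn.Linear(10, d_latent))  # encoder E(x, p) -> x'
D = nn.Sequential(nn.Linear(d_latent + 1, 10), nn.ReLU(), nn.Linear(10, d_x))  # decoder D(x', p) -> x_hat
A = nn.Sequential(nn.Linear(d_latent, 10), nn.ReLU(), nn.Linear(10, 1))        # adversary A(x') -> p_hat

mse = nn.MSELoss()
opt_ED = torch.optim.SGD(list(E.parameters()) + list(D.parameters()), lr=0.01)
opt_A = torch.optim.SGD(A.parameters(), lr=0.01)

def train_step(x, p):
    # x: (batch, d_x), p: (batch, 1) -- the feature being disentangled.
    # 1) Update encoder/decoder: reconstruct well, but hide p from A.
    x_prime = E(torch.cat([x, p], dim=1))
    x_hat = D(torch.cat([x_prime, p], dim=1))
    loss_rec = mse(x_hat, x)             # L_D
    loss_adv = mse(A(x_prime), p)        # L_A, the adversary's loss
    loss_E = loss_rec - lam * loss_adv   # L_E = L_D - lambda * L_A
    opt_ED.zero_grad()
    loss_E.backward()
    opt_ED.step()

    # 2) Update the adversary to predict p from the (detached) code.
    loss_A = mse(A(E(torch.cat([x, p], dim=1)).detach()), p)
    opt_A.zero_grad()
    loss_A.backward()
    opt_A.step()
    return loss_rec.item(), loss_A.item()
```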

Disentangled feature audits

Concretely, our method works as follows, where the variable names match the diagram in Figure 1: for each feature $p$ in Features($X$), we train $E$, $D$, and $A$ to disentangle $p$ from the remaining features; we form the disentangled model $f_D(x', p) = f(D(x', p), p)$; and we run the direct influence audit on $f_D$, keeping only the influence attributed to $p$ (the influence attributed to $x'$ is not used).

We note here one important difference in the interpretation of disentangled influence values when contrasted with regular Shapley values. Because the influence of each feature is determined on a different disentangled model, the scores we get are not directly interpretable as a partition of the model’s prediction. For example, consider a dataset in which feature $a$ is responsible for 50% of the direct influence, while feature $b$ is a perfect proxy for $a$ but shows 0% influence under a direct audit: a disentangled influence audit would attribute roughly the same influence to $a$ and to $b$, so the per-feature scores need not sum to the prediction. Relative judgments of feature importance remain sensible.
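Putting the pieces together, the per-feature audit loop described above might be sketched as follows; `model`, `encode`, and `decode` are assumed to be the audited (single-output) model and the trained encoder/decoder for the feature of interest, and shap's KernelExplainer stands in for the direct influence delegate.

```python
import numpy as np
import shap

def audit_feature(model, X, j, encode, decode, background_size=100):
    """Indirect influence of feature j of X on `model` (a function of the full X).

    `encode(X_rest, p)` and `decode(X_prime, p)` are the trained encoder/decoder
    for feature j; `model` takes the full feature matrix and returns predictions.
    """
    p = X[:, [j]]
    X_rest = np.delete(X, j, axis=1)
    X_prime = encode(X_rest, p)              # disentangled code x'
    Z = np.hstack([X_prime, p])              # inputs to the disentangled model: (x', p)

    def f_D(Z_batch):                        # the wrapper f_D(x', p) = f(D(x', p), p)
        Xp, pp = Z_batch[:, :-1], Z_batch[:, [-1]]
        X_hat = np.insert(decode(Xp, pp), j, pp[:, 0], axis=1)
        return model(X_hat)

    explainer = shap.KernelExplainer(f_D, Z[:background_size])
    phi = explainer.shap_values(Z)           # direct influences on f_D
    return phi[:, -1]                        # keep only the influence of p; the x' columns are not used
```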

3 Experiments

In this section, we assess the extent to which disentangled influence audits are able to identify sources of indirect influence on a model and quantify their error. All data and code for the described method and the experiments below are available at https://github.com/charliemarx/disentangling-influence.

3.1 Synthetic Regression Data

In order to evaluate whether the indirect influence calculated by the disentangled influence audits correctly captures all influence of individual-level features on an outcome, we consider influence on a simple synthetic dataset. It includes 5,000 instances of two base variables drawn independently from a uniform distribution, which are added to determine the label. It also includes four proxy variables derived from the base variables, as well as a random noise variable drawn independently and uniformly. The model we are auditing is a handcrafted model that contains no hidden layers and has fixed weights of 1 corresponding to the two base variables and weights of 0 for all other features (i.e., it directly computes their sum, the label). We use shap as the direct influence delegate method [13].²

² This method is available via pip install shap. See also: https://github.com/slundberg/shap
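A minimal sketch of data and model in this spirit follows; the uniform range and the particular proxy constructions (a squared copy and a rescaled copy of each base variable) are hypothetical stand-ins rather than the exact definitions used in our experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Two base variables that are added to form the label.
x1 = rng.uniform(-1, 1, n)
x2 = rng.uniform(-1, 1, n)
y = x1 + x2

# Hypothetical proxies (deterministic functions of the base variables) plus independent noise.
X = np.column_stack([x1, x2, x1**2, 2 * x1, x2**2, 2 * x2, rng.uniform(-1, 1, n)])

# The audited model: no hidden layers, weight 1 on the base variables, weight 0 elsewhere.
w = np.array([1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0])
model = lambda X: X @ w   # directly computes x1 + x2
```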

In order to examine the impact of the quality of the disentangled representation on the results, we considered both a handcrafted disentangled representation and a learned one. For the former, nine unique models were handcrafted to disentangle each of the nine features perfectly (see Appendix A for details). The learned disentangled representation is created according to the adversarial autoencoder methodology described in more detail in the previous section.


Figure 2: Synthetic data direct shap (left column) and indirect (right column) feature influences using a handcrafted (top row) or learned (bottom row) disentangled representation.

The results for the handcrafted disentangled representation (top of Figure 2) are as expected: the two base features are the only ones with direct influence, all of the features derived from them have the same amount of indirect influence, and the remaining features, including the noise feature, have zero influence. Using the learned disentangled representation introduces the potential for error: the resulting influences (bottom of Figure 2) show more variation between features, but the same general trends as in the handcrafted test case.

Additionally, note that since shap gives influence results per individual instance, we can also see that (for both models) instances with larger (or, respectively, smaller) values of the base features receive larger (respectively, smaller) influence values for the label, i.e., have larger absolute influences on the outcomes.

3.1.1 Error Analyses

There are two main sources of error for disentangled influence audits: error in the reconstruction of the original input $x$, and error in the disentanglement of $p$ from $x'$ such that the discriminator $A$ is able to accurately predict some $\hat{p}$ close to $p$. We measure the former error in two ways. First, we consider the reconstruction error, which we define as the mean squared error between $x$ and its reconstruction $\hat{x} = D(E(x, p), p)$. Second, we consider the prediction error, which is the difference $|f(\hat{x}, p) - f(x, p)|$, a measure of the impact of the reconstruction error on the model to be audited. Reconstruction and prediction errors close to 0 indicate that the disentangled model $f_D$ is similar to the model being audited. We measure the latter form of error, the disentanglement error, as the discriminator's error normalized by the variance of $p$, i.e., $\mathrm{MSE}(A(x'), p) / \mathrm{Var}(p)$. A disentanglement error below 1 indicates that information about that feature may have been revealed, i.e., that there may be indirect influence that is not accounted for in the resulting influence score. In addition to the usefulness of these error measures during training time, they also provide information that helps us to assess the quality of the indirect influence audit, including at the level of the error for an individual instance.
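These quantities can be computed per instance as in the sketch below; `model`, `encode`, `decode`, and `adversary` are assumed to be the audited model and the trained pieces of the adversarial autoencoder, and the formulas follow the definitions as reconstructed above.

```python
import numpy as np

def audit_errors(model, encode, decode, adversary, X_rest, p):
    """Per-instance quality measures for one disentangled influence audit."""
    x_prime = encode(X_rest, p)
    x_hat = decode(x_prime, p)

    # How well the original input is reconstructed.
    reconstruction_error = np.mean((x_hat - X_rest) ** 2, axis=1)
    # Impact of the reconstruction error on the audited model.
    prediction_error = np.abs(model(x_hat, p) - model(X_rest, p)).ravel()
    # Discriminator error normalized by Var(p); values below 1 suggest leakage of p.
    disentanglement_error = ((adversary(x_prime) - p) ** 2).ravel() / np.var(p)
    return reconstruction_error, prediction_error, disentanglement_error
```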



Figure 3: Errors on the synthetic data, taken across influence audits for each feature: reconstruction error (left), prediction error (middle), and disentanglement error (right).

These influence experiments on the synthetic dataset demonstrate the importance of a good disentangled representation to the quality of the resulting indirect influence measures, since the handcrafted zero-error disentangled representation clearly results in more accurate influence results. Each of the error types described above is shown for the learned disentangled representation in Figure 3. While most features have reconstruction and prediction errors close to 0 and disentanglement errors close to 1, a few features also have some far-outlying instances. For example, we can see that some variables have high prediction error on some instances, and this is reflected in the incorrect indirect influence they are found to have under the learned representation for those instances.

3.2 dSprites Image Classification

Figure 4: dSprites data indirect latent factor influences on a model predicting shape.

The second synthetic dataset is the dSprites dataset, commonly used in the disentangled representations literature for disentangling independent factors of variation [16]. The dataset consists of 737,280 images (64 × 64 pixels) of a white shape (a square, ellipse, or heart) on a black background. The independent latent factors are $x$ position, $y$ position, orientation, scale, and shape. The images were downsampled, and only the half of the data in which the shapes are largest was used, due to the lower resolution. The binary classification task is to predict whether the shape is a heart. A good disentangled representation should be able to separate the shape from the other latent factors.
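A sketch of this preparation, assuming the standard dSprites release and its documented latent ordering, is shown below; the downsampling factor is an assumption for illustration rather than the exact resolution used in our experiments.

```python
import numpy as np

# Standard release from https://github.com/deepmind/dsprites-dataset/
data = np.load("dsprites_ndarray_co1sh3sc6or40x32y32_64x64.npz", allow_pickle=True)
imgs = data["imgs"]                # (737280, 64, 64) binary images
latents = data["latents_classes"]  # columns: color, shape, scale, orientation, posX, posY

# Keep only the larger half of the scales (3 of the 6 scale classes), as described above.
large = latents[:, 2] >= 3
imgs, latents = imgs[large], latents[large]

# Downsample by block-averaging (target resolution here is an assumption).
k = 4
small = imgs.reshape(-1, 64 // k, k, 64 // k, k).mean(axis=(2, 4))

# Binary label: is the shape a heart? (shape classes: 0=square, 1=ellipse, 2=heart)
is_heart = (latents[:, 1] == 2).astype(np.float32)
```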



Figure 5: The mean squared reconstruction error (left), absolute prediction error (middle), and absolute disentanglement error (right) of the latent factors in the dSprites data under an indirect influence audit.

In this experiment we seek to quantify the indirect influence of each latent factor on a model trained to predict the shape from an image. Since shape is the label and the latent factors are independent, we expect the feature shape to have more indirect influence on the model than any other latent factor. Note that a direct influence audit is impossible since the latent factors are not themselves features of the data. Model and disentangled representation training information can be found in Appendix A.

The indirect influence audit, shown in Figure 4, correctly identifies shape as the most important latent factor, and also correctly shows the other four factors as having essentially zero indirect influence. However, the audit struggles to capture the extent of the indirect influence of shape since the resulting shap values are small.

The associated error measures for the dSprites influence audit are shown in Figure 5. We report the reconstruction error as the mean squared error between the original image and its reconstruction for each latent factor. The prediction error is the difference between the model's estimates, on the original image and on its reconstruction, of the probability that the shape is a heart. While the reconstruction errors are relatively low (less than 0.05 for all but one of the position factors), the prediction error and disentanglement errors are high. A high prediction error indicates that the model is sensitive to the errors in reconstruction and the indirect influence results may be unstable, which may explain the low shap values for shape in the indirect influence audit.

3.3 Adult Income Data

Figure 6: Ten selected features of the Adult dataset: direct (left) and indirect (right) influences are shown. For all features, see the Supplemental Material. Low values indicate that a one-hot encoded feature is false.

Finally, we consider a real-world dataset containing Adult Income data that is commonly used as a test case in the fairness-aware machine learning community. The Adult dataset includes 14 features describing type of work, demographic information, and capital gains information for individuals from the 1994 U.S. census [20]. The classification task is predicting whether an individual makes more or less than $50,000 per year. Preprocessing, model, and disentangled representation training information are included in Appendix A.

Direct and indirect influence audits of the Adult dataset are shown in Figure 6 and, in full, in Figure 10 in Appendix B. While many of the resulting influence scores are the same in both the direct and indirect cases, the disentangled influence audit finds substantially more influence based on sex than the direct influence audit; this is not surprising given the large influence that sex is known to have on U.S. income. Other important features in a fairness context, such as nationality, are also shown to have indirect influences that are not apparent in a direct influence audit. The error results (Figure 7 and Appendix B) indicate that while the error is low across all three types of errors for many features, the disentanglement errors are higher (further from 1) for some rare-valued features. This means that despite the indirect influence that the audit did find, there may be additional indirect influence it did not pick up for those features.


Figure 7: The reconstruction error (left), prediction error (middle), and disentanglement error (right) of selected Adult Income features under an indirect influence audit; see the supplemental material for the complete figure.

3.4 Comparison to Other Methods

Figure 8: Comparison on the synthetic data of the disentangled influence audits using the handcrafted (left) or learned (middle) disentangled representation with the BBA approach of [1] (right).

Here, we compare the disentangled influence audit results to results on the same datasets and models from the indirect influence technique introduced in [1], which we will refer to as BBA (black-box auditing).³ This is not a direct comparison, however, since BBA is not able to determine feature influence for individual instances, only influence for a feature taken over all instances. In order to compare to our results, we therefore take the mean over all instances of the absolute value of the per-feature disentangled influence. BBA was designed to audit classifiers, so in order to compare to the results of disentangled influence audits we consider the obscured data it generates as input to our regression models and then report the average change in mean squared error for the case of the synthetic data. (BBA cannot handle the dSprites image data as input.)

³ This method is available via pip install BlackBoxAuditing. See also: https://github.com/algofairness/BlackBoxAuditing
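The aggregation used for this comparison is a one-line computation; the sketch below assumes `phi` is the (instances × features) matrix of per-instance disentangled influences, with placeholder values for illustration.

```python
import numpy as np

# phi: per-instance indirect influences, shape (n_instances, n_features),
# e.g. stacked from audit_feature() in the earlier sketch. Placeholder values here:
phi = np.random.default_rng(0).normal(size=(5000, 7))

# Mean absolute influence per feature, as used for the comparison with BBA.
global_influence = np.abs(phi).mean(axis=0)
```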

Figure 9: Comparison on the Adult data of the disentangled influence audits versus the BBA indirect influence approach of [1].

A comparison of the disentangled influence and BBA results on the synthetic data, shown in Figure 8, shows that all three variants of indirect influence are able to determine that the noise features have comparatively low influence on the model. The disentangled influence with a handcrafted disentangled representation shows the correct indirect influence of each feature, while the learned disentangled representation influence is somewhat noisier, and the BBA results suffer from relying on the mean squared error (i.e., the amount of influence changes based on the feature's value).

Figure 9 shows the mean absolute disentangled influence per feature on the x-axis and the BBA influence results on the y-axis. It is clear that the disentangled influence audits technique is much better able to find features with possible indirect influence on this dataset and model: most of the BBA influences are clustered near zero, while the disentangled influence values provide more variation and potential for insight.

4 Discussion and Conclusion

In this paper, we introduce the idea of disentangling influence: using the ideas from disentangled representations to allow for indirect influence audits. We show via theory and experiments that this method works across a variety of problems and data types including classification and regression as well as numerical, categorical, and image data. The methodology allows us to turn any future developed direct influence measures into indirect influence measures. In addition to the strengths of the technique demonstrated here, disentangled influence audits have the added potential to allow for multidimensional indirect influence audits that would, e.g., allow a fairness audit on both race and gender to be performed (without using a single combined race and gender feature [7]). We hope this opens the door for more nuanced fairness audits.

References

  • [1] P. Adler, C. Falk, S. A. Friedler, T. Nix, G. Rybeck, C. Scheidegger, B. Smith, and S. Venkatasubramanian. Auditing black-box models for indirect influence. Knowledge and Information Systems, 54(1):95–122, 2018.
  • [2] A. A. Alemi, I. Fischer, J. V. Dillon, and K. Murphy. Deep variational information bottleneck. International Conference on Learning Representations, 2016.
  • [3] Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8):1798–1828, 2013.
  • [4] A. Datta, S. Sen, and Y. Zick. Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems. In Proceedings of 37th IEEE Symposium on Security and Privacy, 2016.
  • [5] H. Edwards and A. Storkey. Censoring representations with an adversary. In Proceedings of the 33rd International Conference on Machine Learning, 2016.
  • [6] B. Esmaeili, H. Wu, S. Jain, A. Bozkurt, N. Siddharth, B. Paige, D. H. Brooks, J. Dy, and J.-W. van de Meent. Structured disentangled representations. In K. Chaudhuri and M. Sugiyama, editors, Proceedings of Machine Learning Research, volume 89, pages 2525–2534. PMLR, 16–18 Apr 2019.
  • [7] S. A. Friedler, C. Scheidegger, S. Venkatasubramanian, S. Choudhary, E. P. Hamilton, and D. Roth. A comparative study of fairness-enhancing interventions in machine learning. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pages 329–338. ACM, 2019.
  • [8] R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, and D. Pedreschi. A survey of methods for explaining black box models. ACM computing surveys (CSUR), 51(5):93, 2018.
  • [9] A. Henelius, K. Puolamäki, H. Boström, L. Asker, and P. Papapetrou. A peek into the black box: exploring classifiers by randomization. Data Min Knowl Disc, 28:1503–1529, 2014.
  • [10] I. Higgins, D. Amos, D. Pfau, S. Racaniere, L. Matthey, D. Rezende, and A. Lerchner. Towards a definition of disentangled representations. arXiv preprint arXiv:1812.02230, 2018.
  • [11] P. W. Koh and P. Liang. Understanding black-box predictions via influence functions. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1885–1894. JMLR.org, 2017.
  • [12] A. Kumar, P. Sattigeri, and A. Balakrishnan. Variational inference of disentangled latent concepts from unlabeled observations. International Conference on Learning Representations, 2017.
  • [13] S. M. Lundberg and S.-I. Lee. A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems, pages 4765–4774, 2017.
  • [14] D. Madras, E. Creager, T. Pitassi, and R. Zemel. Learning adversarially fair and transferable representations. In Proceedings of the 35th International Conference on Machine Learning, 2018.
  • [15] A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, and B. Frey. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015.
  • [16] L. Matthey, I. Higgins, D. Hassabis, and A. Lerchner. dsprites: Disentanglement testing sprites dataset. https://github.com/deepmind/dsprites-dataset/, 2017.
  • [17] C. Molnar. Interpretable machine learning: A guide for making black box models explainable. Christoph Molnar, Leanpub, 2018.
  • [18] M. T. Ribeiro, S. Singh, and C. Guestrin. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. In Proc. ACM KDD, 2016.
  • [19] M. Tschannen, O. Bachem, and M. Lucic. Recent advances in autoencoder-based representation learning. arXiv preprint arXiv:1812.05069, 2018.
  • [20] University of California, Irvine, Machine Learning Repository. Adult income dataset. https://archive.ics.uci.edu/ml/datasets/adult.
  • [21] B. Ustun, A. Spangher, and Y. Liu. Actionable recourse in linear classification. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pages 10–19. ACM, 2019.

Appendix A Implementation Details

Synthetic model and disentangled representation information.

In both our synthetic experiments, with handcrafted and trained disentangled representations, we audit a model with no hidden layers that computes the label exactly as the sum of the two base features.

The handcrafted disentangled representation is created to map the features with no error. Suppose, for example, that the protected feature $p$ is one of the features derived from a single base variable (the base variable itself or one of its proxies). The disentangled representation used in this case stores $p$ alongside a code $x'$ containing only the features that are not derived from that base variable. Here, we see that $p$ fully reveals the information relating to all of the features based on that base variable (each is a deterministic function of it, which can be recovered from $p$), and $x'$ does not reveal any information about the protected feature. Thus, this representation satisfies the independence and preservation-of-information requirements. The decoder then maps this vector back to the original feature vector in the natural way: it first inverts $p$ to recover the base variable, then uses this value to compute each feature derived from it. All features relating to the other base variable and the noise are computed from the corresponding entries of $x'$ in the natural way as well.

In the disentangled representation we train, the encoder, decoder, and discriminator each have two hidden layers of 10 hidden units each. We use a 4-dimensional latent vector. All layers in each model have ReLU activations except for the last layers of the decoder and discriminator, which have sigmoid activations. We use $\lambda$ as the importance of disentanglement for the encoder. The minibatch size is 16 and we optimize for 10,000 training steps using SGD with a constant learning rate of 0.01.

dSprites model and disentangled representation information.

The model we use to predict the shape from the image is a neural network with three layers of 128, 64, and 32 hidden units respectively; its prediction accuracy is measured on a held-out test set randomly drawn from the data. To generate the disentangled representation we use an encoder, decoder, and discriminator, each with a single hidden layer of 256, 256, and 64 hidden units, respectively. We use a 16-dimensional latent vector. The minibatch size is 100 and we optimize for 10,000 training steps using SGD with a constant learning rate of 0.05. All layers in each model have ReLU activations except for the last layers of the decoder and discriminator, which have sigmoid activations. We use $\lambda$ as the importance of disentanglement for the encoder.

Adult Income preprocessing, model, and disentangled representation information.

During preprocessing, categorical features are one-hot encoded and numerical features are normalized to mean 0 and standard deviation 1. The “education_num" feature is dropped during preprocessing. For each categorical feature, values which occur in fewer than 1,000 instances are binned into “rare_value". We train a classifier for the “income>=50K" label with binary cross-entropy loss and no hidden layers; it achieves a test loss of 0.326.
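A sketch of this preprocessing using pandas and scikit-learn follows; the file path, the "income" column name and its ">50K" value, and the exact column spellings are assumptions about the CSV export rather than details specified above.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("adult.csv")            # UCI Adult data (path is an assumption)
df = df.drop(columns=["education_num"])  # dropped during preprocessing

cat_cols = df.select_dtypes(include="object").columns.drop("income")
num_cols = df.select_dtypes(exclude="object").columns

# Bin rare categorical values (fewer than 1,000 occurrences) into "rare_value".
for c in cat_cols:
    counts = df[c].value_counts()
    df[c] = df[c].where(df[c].map(counts) >= 1000, "rare_value")

X = pd.get_dummies(df[list(cat_cols) + list(num_cols)], columns=list(cat_cols))  # one-hot encode
X[num_cols] = StandardScaler().fit_transform(X[num_cols])                        # mean 0, std 1
y = (df["income"] == ">50K").astype(int)                                         # income >= 50K label
```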

To generate the disentangled representation we use an encoder, decoder, and discriminator which each have two hidden layers with 25 and 12 hidden units, respectively. We use a 10-dimensional latent vector and use $\lambda$ as the importance of disentanglement for the encoder. The models are trained for 4,000 training steps with minibatch sizes of 16, using SGD with a constant learning rate of 0.01. We used the canonical train/test split.

Additional Information.

All models for the synthetic and dSprites experiments were trained on a MacBook Pro (Early 2015) with a 2.7GHz Processor and 8 GB of RAM. The models for the adult experiments were trained on an NVIDIA Titan Xp GPU. Hyperparameters were chosen via experimentation. Only architectures containing 2 or fewer hidden layers were considered for models used to disentangle the data. The minibatch sizes tested were between 16 and 100, and learning rates between 0.01 and 0.1 were tested. In each experiment, we used at least 5 and no more than 15 evaluation runs.

Appendix B Full Results for Adult Income Dataset

B.1 Direct and Indirect Influence Results

Figure 10: The full influence results for the Adult data: direct (left) and indirect (right) feature influences.

B.2 Error Results

Figure 11: The full disentanglement (top), reconstruction (left), and prediction (right) error metrics for the Adult data experiment.