
Fairness by Learning Orthogonal Disentangled Representations

by   Mhd Hasan Sarhan, et al.

Learning powerful discriminative representations is a crucial step for machine learning systems. Introducing invariance against arbitrary nuisance or sensitive attributes while performing well on specific tasks is an important problem in representation learning. This is mostly approached by purging the sensitive information from learned representations. In this paper, we propose a novel disentanglement approach to the invariant representation problem. We disentangle the meaningful and sensitive representations by enforcing orthogonality constraints as a proxy for independence. We explicitly enforce the meaningful representation to be agnostic to sensitive information by entropy maximization. The proposed approach is evaluated on five publicly available datasets and compared with state of the art methods for learning fairness and invariance, achieving state of the art performance on three datasets and comparable performance on the rest. Further, we perform an ablative study to evaluate the effect of each component.



1 Introduction

Learning representations that are useful for downstream tasks yet robust against arbitrary nuisance factors is a challenging problem. Automated systems powered by machine learning techniques are cornerstones of decision support systems for tasks such as granting loans, advertising, and medical diagnostics. Deep neural networks learn powerful representations that encapsulate the extracted variations in the data. Since these networks learn from historical data, they are prone to reproducing past biases, and the learnt representations might contain information that was not intended to be released. This has raised various concerns regarding fairness, bias, and discrimination in statistical inference algorithms [16]. The European Union has recently released its "Ethics guidelines for trustworthy AI" report, where it is stated that unfairness and biases must be avoided.

For several years, the community has been investigating how to learn a latent representation $z$ that describes a target observed variable $y$ well (e.g., annual salary) while being robust against a sensitive attribute $s$ (e.g., gender or race). This nuisance could be independent from the target task, in which case the problem is termed domain adaptation. One example is the identification of faces regardless of the illumination conditions. In the other case, termed fair representation learning, $y$ and $s$ are not independent. This could be the case with $y$ being the credit risk of a person while $s$ is age or gender. Such a relation between these variables could be due to past biases that are inherent in the data. This independence is assumed to hold when building fair classification models. Although this assumption is over-optimistic, as these factors are probably not independent, we wish to find a representation $z$ that is independent from $s$, which justifies the usage of such a prior belief [17]. This is mostly approached by approximating a mutual information score between $z$ and $s$ and minimizing this score, either in an adversarial [21, 15] or non-adversarial [13, 17] manner. These methods, while performing well on various datasets, are still limited either by convergence instability problems in the case of adversarial solutions or by hindered performance compared to the adversarial counterpart. Learning disentangled representations has been proven to be beneficial for learning fairer representations compared to general-purpose representations [12]. We use this concept to disentangle the components of the learned representations. Moreover, we treat $y$ and $s$ as separate independent generative factors and decompose the learned representation in such a way that each part holds information related to the respective generative factor. This is achieved by enforcing orthogonality between the representations as a relaxation of the independence constraint. We hypothesize that decomposing the latent code into a target code $z_T$ and a residual sensitive code $z_S$ would be beneficial for limiting the leakage of sensitive information into $z_T$ by redirecting it to $z_S$, while keeping $z_T$ informative about the target task we are interested in.

We propose a framework for learning invariant fair representations by decomposing learned representations into target and residual/sensitive representations, $z_T$ and $z_S$. We impose disentanglement on the components of each code and impose an orthogonality constraint on the two learned representations as a proxy for independence. The learned target representation is explicitly enforced to be agnostic to sensitive information by maximizing the entropy of sensitive information in $z_T$.

Our contributions are threefold:

  • Decomposing the target and sensitive information into two orthogonal representations to promote better mitigation of sensitive information leakage.

  • Promoting the disentanglement property to split the hidden generative factors of each learned code.

  • Enforcing the target representation to be agnostic to sensitive information by maximizing entropy.

2 Related work

Learning fair and invariant representations has a long history. Earlier strategies involved changing the examples to ensure fair representation of all groups. This relies on the assumption that equalized opportunities in the training set would generalize to the test set. Such techniques are referred to as data massaging techniques [8, 18]. These approaches may suffer from under-utilization of data or complications in the logistics of data collection. Later, Zemel et al. [22] proposed a semi-supervised fair clustering technique to learn a representation space where data points are clustered such that each cluster contains similar proportions of the protected groups. One drawback is that the clustering constraint limits the power of a distributed representation. To solve this, Louizos et al. [13] presented the Variational Fair Autoencoder (VFAE), where a model is trained to learn a representation that is informative enough yet invariant to some nuisance variables. This invariance is approached through a Maximum Mean Discrepancy (MMD) penalty. The learned sensitive-information-free representation can later be used for any subsequent processing, such as classification of a target task. After the success of Generative Adversarial Networks (GANs) [6], multiple approaches have leveraged this learning paradigm to produce robust invariant representations [21, 23, 4, 15]. The problem setup in these approaches is a minimax game between an encoder that learns a representation for a target task and an adversary that extracts sensitive information from the learned representation. In this case, the encoder minimizes the negative log-likelihood of the adversary while the adversary is alternately forced to extract sensitive information. While methods relying on an adversarial zero-sum game of negative log-likelihood minimization and maximization perform well in the literature, they sometimes suffer from convergence problems and require additional regularization terms to stabilize the training. To overcome these problems, Roy et al. [20] posed the problem as an adversarial non-zero-sum game where the encoder and discriminator have competing objectives that optimize different metrics. This is achieved by adding an entropy loss that forces the discriminator to be uninformed about sensitive information. It is worth noting that [17] argues that adversarial training for fairness and invariance is unnecessary and sometimes leads to counterproductive results. Hence, they approximated the mutual information between the latent representation and the sensitive information using a variational upper bound. Lastly, Creager et al. [2] proposed a fair representation learning model based on disentanglement; their model has the advantage of flexibly changing sensitive information at test time and combining multiple sensitive attributes to achieve subgroup fairness.

3 Methodology

Figure 1: Left: the graphical model of our proposed method. Right: our framework encodes the input data into intermediate target and residual (sensitive) representations, parameterized by $\theta_T$ and $\theta_S$. Samples from the estimated posteriors are fed to the discriminators to predict the target and sensitive labels.

Let $X = \{x_i\}_{i=1}^{N}$ be the dataset of $N$ individuals from all groups and $x$ be an input sample. Each input is associated with a target attribute $y$ with $N_y$ classes and a sensitive attribute $s$ with $N_s$ classes. Our goal is to learn an encoder that maps the input $x$ to two low-dimensional representations $z_T$ and $z_S$. Ideally, $z_T$ must contain information regarding the target attribute while mitigating leakage of the sensitive attribute, and $z_S$ contains residual information that is related to the sensitive attribute.
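A minimal numpy sketch of such an encoder with a shared trunk and two projection heads (all layer sizes, weight shapes, and names are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W_shared, W_t, W_s):
    """Shared layer followed by two projection heads:
    h = relu(x @ W_shared), z_T = h @ W_t, z_S = h @ W_s."""
    h = np.maximum(x @ W_shared, 0.0)   # shared intermediate representation
    return h @ W_t, h @ W_s             # target head z_T, sensitive head z_S

x = rng.normal(size=(4, 8))            # batch of 4 samples, 8 features
W_shared = rng.normal(size=(8, 16))    # shared trunk weights
W_t = rng.normal(size=(16, 2))         # target head -> z_T
W_s = rng.normal(size=(16, 2))         # sensitive/residual head -> z_S
z_T, z_S = encoder(x, W_shared, W_t, W_s)
print(z_T.shape, z_S.shape)  # (4, 2) (4, 2)
```

The shared trunk corresponds to the common encoder in Fig. 1; the two heads produce the target and residual codes.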

3.1 Fairness definition

One common definition of fairness proposed in the literature [21, 20, 19, 1] simply requires the sensitive information to be statistically independent from the target. Mathematically, the prediction $\hat{y}$ of a classifier must be independent from the sensitive information, i.e., $\hat{y} \perp s$. For example, in the German credit dataset, we need to predict the credit behaviour of the bank account holder regardless of sensitive information such as gender or age. In other words, $p(\hat{y} \mid s)$ should be equal to $p(\hat{y})$. The main objective is to learn fair data representations that are (i) informative enough for the downstream task, and (ii) independent from the sensitive information.
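This independence criterion (often called demographic parity) can be measured directly on predictions as the gap in positive-prediction rates across sensitive groups; a small sketch with illustrative names, not part of the paper's method:

```python
import numpy as np

def demographic_parity_gap(y_pred, s):
    """Max difference in positive-prediction rate across sensitive groups.

    A gap of 0 for binary predictions means p(y_hat = 1 | s) = p(y_hat = 1),
    i.e. the prediction is statistically independent of s.
    """
    rates = [y_pred[s == g].mean() for g in np.unique(s)]
    return max(rates) - min(rates)

# Toy example: predictions whose rate is identical in both groups have zero gap.
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])
s      = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, s))  # 0.0
```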

3.2 Problem Formulation

To promote the independence of the generative factors, i.e., the target and sensitive information, we aim to maximize the log-likelihood of the conditional distribution $p(y, s \mid x)$. To enforce our aforementioned conditions, we let our model encode the observed input data $x$ into target and residual representations $z_T$ and $z_S$, and maximize the log-likelihood under the following constraints: (i) $z_T$ is statistically independent from $z_S$, and (ii) $z_T$ is agnostic to the sensitive information $s$. Our objective function can be written as

$$\max \; \mathbb{E}\big[\log p(y, s \mid z_T, z_S)\big] \quad \text{s.t.} \quad z_T \perp z_S, \qquad q(s \mid z_T) = U(s),$$

where $U(s)$ is the uniform distribution over the sensitive classes.

3.3 Fairness by Learning Orthogonal and Disentangled Representations

As depicted in Fig. 1, our observed data $x$ is fed to a shared encoder and then projected into two subspaces, producing our target and residual (sensitive) representations $z_T$ and $z_S$ using the encoders $f_{\theta_T}$ and $f_{\theta_S}$, respectively, where the shared encoder parameters $\theta$ belong to both, i.e., $\theta \subset \theta_T$ and $\theta \subset \theta_S$. Each representation is fed to the corresponding discriminator: the target discriminator $q_{\phi_T}(y \mid z_T)$ and the sensitive discriminator $q_{\phi_S}(s \mid z_S)$. Both discriminators and encoders are trained in a supervised fashion to minimize the following loss,

$$\mathcal{L}_D = -\mathbb{E}\big[\log q_{\phi_T}(y \mid z_T)\big] - \mathbb{E}\big[\log q_{\phi_S}(s \mid z_S)\big].$$

To ensure that our target representation does not encode any leakage of the sensitive information, we follow Roy et al. [20] in maximizing the entropy of the sensitive discriminator given the target representation,

$$\mathcal{L}_E = \mathbb{E}\Big[\textstyle\sum_{s} q_{\phi_S}(s \mid z_T) \log q_{\phi_S}(s \mid z_T)\Big],$$

so that minimizing $\mathcal{L}_E$ drives $q_{\phi_S}(s \mid z_T)$ toward the uniform distribution.
We relax the independence assumption by enforcing (i) the disentanglement property, and (ii) the orthogonality of the corresponding representations.
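The entropy-maximization idea above can be sketched numerically: minimizing the negative entropy of the sensitive discriminator's softmax output pushes its prediction toward uniform. Names and shapes here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def entropy_loss(sensitive_logits):
    """Negative entropy of q(s | z_T), averaged over the batch.
    Minimizing this term maximizes the entropy, pushing the sensitive
    discriminator toward a uniform (uninformed) prediction."""
    q = softmax(sensitive_logits)
    return (q * np.log(q + 1e-12)).sum(axis=-1).mean()

# Uniform logits attain the minimum, -log(num_classes):
print(np.isclose(entropy_loss(np.zeros((4, 5))), -np.log(5)))  # True
```

A confidently peaked discriminator yields a larger (less negative) value, so gradient descent on this term degrades the discriminator's certainty about $s$ given $z_T$.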

To promote the (i) disentanglement property on the target representation, we first need to estimate the posterior distribution $q(z_T \mid x)$ and enforce some form of independence among the latent factors. Since the true posterior is intractable, we employ variational inference, thanks to the re-parameterization trick [10]: we let our model output the distribution parameters $\mu_T$ and $\sigma_T$ and minimize the KL-divergence between the posterior and prior distributions,

$$\mathcal{L}_{KL_T} = D_{KL}\big(q(z_T \mid x) \,\|\, p(z_T)\big),$$

where $q(z_T \mid x) = \mathcal{N}(\mu_T, \sigma_T^2 I)$ and $p(z_T) = \mathcal{N}(\mu_{0_T}, I)$. Similarly, we enforce the same constraints on the residual (sensitive) representation and minimize the KL-divergence $\mathcal{L}_{KL_S} = D_{KL}\big(q(z_S \mid x) \,\|\, p(z_S)\big)$.

To enforce the (ii) orthogonality between the target and residual (sensitive) representations, i.e., $z_T \perp z_S$, we hard-code the means of the prior distributions to orthogonal means. In this way, we implicitly enforce the weight parameters to project the representations into orthogonal subspaces. To illustrate this in a 2-dimensional space, we set the prior distributions to $p(z_T) = \mathcal{N}([1, 0]^\top, I)$ and $p(z_S) = \mathcal{N}([0, 1]^\top, I)$ (cf. Fig. 1).
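A numpy sketch of the closed-form KL term for a diagonal-Gaussian posterior against a unit-variance Gaussian prior with a hard-coded mean, together with the reparameterization trick; the 2-D prior means and all names are illustrative assumptions:

```python
import numpy as np

def kl_to_prior(mu, log_var, prior_mu):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(prior_mu, I) ),
    summed over latent dimensions and averaged over the batch."""
    var = np.exp(log_var)
    kl = 0.5 * (var + (mu - prior_mu) ** 2 - 1.0 - log_var).sum(axis=-1)
    return kl.mean()

# Reparameterization trick: sample z = mu + sigma * eps with eps ~ N(0, I),
# so the sampling step stays differentiable w.r.t. mu and sigma.
rng = np.random.default_rng(0)
mu, log_var = np.zeros((4, 2)), np.zeros((4, 2))
z = mu + np.exp(0.5 * log_var) * rng.standard_normal((4, 2))

# Hard-coded orthogonal prior means for the 2-D illustration:
# the target prior sits at (1, 0) and the sensitive prior at (0, 1).
mu_T_prior = np.array([1.0, 0.0])
mu_S_prior = np.array([0.0, 1.0])
assert mu_T_prior @ mu_S_prior == 0.0  # the two means are orthogonal
```

Pulling each posterior toward its own prior mean pushes the two codes into orthogonal subspaces without an explicit pairwise penalty.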

To summarize, an additional loss term promoting both the orthogonality and disentanglement properties, denoted the Orthogonal-Disentangled loss, is introduced into the objective function,

$$\mathcal{L}_{OD} = \mathcal{L}_{KL_T} + \mathcal{L}_{KL_S}.$$

A variant of this loss without the orthogonality property, denoted the Disentangled loss, is also introduced for the purpose of the ablative study (see Sec. 4.3).

3.4 Overall objective function

To summarize, our overall objective function is

$$\mathcal{L} = \mathcal{L}_D + \lambda_E \mathcal{L}_E + \lambda_{OD} \mathcal{L}_{OD},$$

where $\lambda_E$ and $\lambda_{OD}$ are hyper-parameters weighing the Entropy loss and the Orthogonal-Disentangled loss, respectively. A sensitivity analysis on the hyper-parameters is presented in Sec. 4.5.
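The weighted combination can be written as a one-line sketch; the loss values and weights below are arbitrary illustrative numbers:

```python
def total_loss(l_d, l_e, l_od, lam_e, lam_od):
    """Overall objective (sketch): L = L_D + lambda_E * L_E + lambda_OD * L_OD,
    combining the discriminator, entropy, and orthogonal-disentangled terms."""
    return l_d + lam_e * l_e + lam_od * l_od

# Arbitrary example values for the three terms and the two weights:
print(total_loss(1.0, 2.0, 3.0, lam_e=0.5, lam_od=0.25))  # 2.75
```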


Algorithm 1 (Learning Orthogonal Disentangled Fair Representations): initialize the encoder and discriminator parameters; then, for each of the maximum number of epochs, update the parameters by gradient descent with the given step size on the overall objective of Sec. 3.4.

4 Experiments

In this section, the performance of the representations learned with our method is evaluated and compared against various state of the art methods in the domain. First, we present the experimental setup by describing the five datasets used for validation, the model implementation details for each dataset, and the design of the experiments. We then compare the model's performance with state of the art fair representation models on five datasets. We perform an ablative study to monitor the effect of each added component on the overall performance. Lastly, we perform a sensitivity analysis to study the effect of hyper-parameters on the training.

4.1 Experimental Setup


For evaluating fair classification, we use two datasets from the UCI repository [3], namely the German and the Adult datasets. The German credit dataset consists of 1000 samples, each with 20 attributes, and the target task is to classify a bank account holder as having good or bad credit risk. The sensitive attribute is the gender of the account holder. The Adult dataset contains 45,222 samples, each with 14 attributes. The target task is a binary classification of annual income being more or less than $50,000, and again gender is the sensitive attribute.
To examine the model's learned invariance on visual data, we use the application of illumination-invariant face classification. Ideally, we want the representation to contain information about the subject's identity without holding information regarding the illumination direction. For this purpose, the extended YaleB dataset is used [5]. The dataset contains face images of 38 subjects under five different light source direction conditions (upper right, lower right, lower left, upper left, and front). The target task is the identification of the subject, while the light source condition is considered the sensitive attribute. Following Roy et al. [20], we created a binary target task from the CIFAR-10 dataset [11]. The original dataset contains 10 classes, which we refer to as fine classes; we divide them into living and non-living categories and refer to this split as the coarse classes. It is expected that living objects have common visual properties that differ from non-living ones. The target task is the classification of the coarse classes while not revealing information about the fine classes. With a similar concept, we divide the 100 fine classes of the CIFAR-100 dataset into 20 coarse classes that cluster similar concepts into one category. For example, the coarse class 'aquatic mammals' contains the fine classes 'beaver', 'dolphin', 'otter', 'seal', and 'whale'. For the full details of the split, the reader is referred to [20] or the supplementary materials of this manuscript. The target task for CIFAR-100 is the classification of the coarse classes while mitigating information leakage regarding the sensitive fine classes.

Implementation details:

For the Adult and German datasets, we follow the setup described in [20], with a 1-hidden-layer neural network as the encoder; the discriminator has two hidden layers and the target predictor is a logistic regression layer.
For the Extended YaleB dataset, we use an experimental setup similar to Xie et al. [21] and Louizos et al. [13], using the same train/test split strategy. The model setup is similar to [21, 20]: the encoder consists of one layer, the target predictor is one linear layer, and the discriminator is a neural network with two hidden layers of 100 units each. The parameters are trained using the Adam optimizer.
Similar to [20], we employ the ResNet-18 [7] architecture for training the encoder on the two CIFAR datasets. For the discriminator and target classifiers, we employ a neural network with two hidden layers (256 and 128 neurons). The Adam optimizer [9] is used in all experiments.

Experiments design:

We address two questions in the experiments. The first is how much information about the sensitive attributes is retained in the learned representation $z_T$. Ideally, $z_T$ would not contain any sensitive attribute information. This is evaluated by training a classifier with the same architecture as the discriminator network on the sensitive attribute classification task. The closer its accuracy is to a naive majority-label predictor, the better the model is. This classifier is trained with $z_T$ as input after the encoder, target predictor, and discriminator have been trained and frozen. The second is how well the learned representation performs in identifying target attributes. To this end, we train a classifier similar to the target predictor on the learned representation to detect the target attributes. We also visualize the representations $z_T$ and $z_S$ using their t-SNE projections to show how the learned representations describe target attributes while being agnostic to the sensitive information.
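The naive majority-label predictor used as the leakage floor can be computed directly from the label distribution; a small sketch with illustrative names:

```python
import numpy as np

def majority_baseline_accuracy(labels):
    """Accuracy of the trivial classifier that always predicts the most
    frequent label. Leakage is judged by how close a probe trained on
    z_T comes to this floor: a probe at the floor has learned nothing
    about the sensitive attribute beyond its marginal distribution."""
    _, counts = np.unique(labels, return_counts=True)
    return counts.max() / counts.sum()

s = np.array([0, 0, 0, 1, 1])          # toy sensitive labels
print(majority_baseline_accuracy(s))   # 0.6
```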

4.2 Comparison with state of the art

Method                            CIFAR-10                     CIFAR-100
                                  Target Acc.  Sensitive Acc.  Target Acc.  Sensitive Acc.
Baseline                          0.9775       0.2344          0.7199       0.3069
Xie et al. [21] (trade-off #1)    0.9752       0.2083          0.7132       0.1543
Roy et al. [20] (trade-off #1)    0.9778       0.2344          0.7117       0.1688
Xie et al. [21] (trade-off #2)    0.9735       0.2064          0.7040       0.1484
Roy et al. [20] (trade-off #2)    0.9679       0.2114          0.7050       0.1643
Ours                              0.9725       0.1907          0.7074       0.1447
Table 1: Results on the CIFAR-10 and CIFAR-100 datasets.

We compare the proposed approach against various state of the art methods on the five presented datasets. We first train the model with Algorithm 1 while changing hyper-parameters between runs. We choose the best performing model in terms of the trade-off between target and sensitive classification accuracy. We then compare it with various state of the art methods in terms of sensitive information leakage and retained target information.

CIFAR datasets:

We compare the proposed approach with two other state of the art methods on the CIFAR-10 and CIFAR-100 datasets, namely Xie et al. [21] and Roy et al. [20]. We examine two different trade-off points of both approaches. The first trade-off point is the one with the best target accuracy reported by the model, while the second is the one with the target accuracy closest to ours, for a fairer comparison (the lower the target accuracy in the trade-off, the lower the sensitive accuracy). We can see that when the target accuracies are comparable, our model performs better at preventing sensitive information leakage into the representation $z_T$. Hence, the proposed method has a better trade-off between target and sensitive accuracy on both CIFAR-10 and CIFAR-100. However, our peak target performance is comparable to, but lower than, the peak target performance of the studied methods.

Extended YaleB dataset:

For the illumination-invariant classification task on the extended YaleB dataset, the proposed method is compared with a logistic regression baseline (LR), the Variational Fair Autoencoder (VFAE) [13], Xie et al. [21], and Roy et al. [20]. The results are shown on the right-hand side of Fig. 2. The proposed model performs best on target attribute classification while having the performance closest to the majority classification line (dashed line in Fig. 2). The majority line is the trivial baseline of predicting the majority label; the closer the sensitive accuracy is to the majority line, the better the model is at hiding sensitive information from $z_T$. This means the learned representation is powerful at identifying the subjects in the images regardless of the illumination conditions. To assess this visually, refer to Sec. 4.4 for qualitative analysis.

Tabular datasets:

On the Adult and German datasets, we compare with LFR [22], a vanilla VAE [10], the Variational Fair Autoencoder [13], Xie et al. [21], and Roy et al. [20]. The results of these comparisons are shown in Fig. 2. On the German dataset, we observe very good performance in hiding sensitive information compared to [20]. On the target task, the model performs well compared to the other models, except for [20], which does marginally better than the rest. On the Adult dataset, our proposed model performs better than the aforementioned models on the target task, while leaking slightly more sensitive information than the other methods and the majority line.

Generally, we observe that the proposed model performs well on all datasets with state of the art performance on visual datasets (CIFAR-10, CIFAR-100, YaleB). This suggests that such a model could lead to more fair/invariant representation without large sacrifices on downstream tasks.

(a) Target attribute classification accuracy.
(b) Sensitive attribute classification accuracy.
Figure 2: Results on the Adult, German, and extended YaleB datasets. The dashed black line represents a naive majority classifier that predicts the majority label.

4.3 Ablative study

In this section, we evaluate the contributions of the paper by eliminating parts of the loss function and studying how each part affects training in terms of target and sensitive accuracy. To this end, we used the best performing models after a hyper-parameter search over all contributions for each dataset. The models are trained with the same settings and architectures described in Sec. 4.1. We compare five variations of the model alongside the baseline classifier:

  1. Baseline: training a deterministic classifier for the target task and evaluating the information leakage of the sensitive attribute.

  2. Entropy w/o KL: the entropy loss (Equation 6) is incorporated in the loss, while the KL term (Equation 9) is not included.

  3. KL Orth. w/o Entropy: the entropy loss (Equation 6) is not used, while the KL term (Equation 9) is used for the target and sensitive representations with orthogonal means.

  4. w/o Entropy w/o KL: neither the entropy loss nor the KL divergence is used in the loss. This case is similar to multi-task learning, with the tasks being the classification of the target and sensitive attributes.

  5. Entropy + KL w/o Orth.: the entropy loss is used, and the disentangled loss is used with identical means. Hence, there might be some disentanglement of generative factors within the components of each latent code, but no constraint is applied to force disentanglement of the two representations.

  6. Entropy + KL Orth.: all contributions are included.

The results of the ablative study are shown in Figure 3.

  • For the sensitive class accuracy, it is desirable to have a lower accuracy in distinguishing sensitive attributes. Compared to the baseline, we observe that adding the entropy loss and the orthogonality constraints on the representations lowers the discriminative power of the learned representation regarding sensitive information. This is valid on all studied datasets except CIFAR-10, where the orthogonality constraint without entropy produced better representations for hiding sensitive information, with a small drop in target task performance. In the remaining cases, having either the entropy loss or the KL loss alone does not bring noticeable performance gains compared to a multi-task learning paradigm. This could be attributed to the fact that orthogonality on its own does not enforce independence of random variables, and another constraint is needed to encourage independent latent variables (i.e., the entropy loss).

  • Comparing the baseline with the w/o Entropy w/o KL case answers the important question: "Does multi-task learning with no constraints on the representations bring any added value in mitigating sensitive information leakage?" In three out of the five studied datasets, it does. We see lower accuracy in identifying sensitive information when the learned target representation is used as input to a classifier, even though no constraints are placed on the relationship between the sensitive and target representations during the training of the encoder. Simply adding an auxiliary classifier alongside the target classifier and forcing it to learn information about sensitive attributes hides some sensitive information from the target classifier.

  • Regarding target accuracy, the proposed model does not suffer from large drops in target performance when disentangling target from sensitive information. This can be seen by comparing target accuracy between the Baseline and Entropy + KL Orth. columns. The largest drop in target performance compared to the no-privacy baseline is seen on the German dataset. This could be due to the very high dependence between gender and the good/bad credit label in this dataset, and to the small number of subjects in the dataset.

Figure 3: Ablative study. Dark gray and light gray dashed lines represent the accuracy results on the target and sensitive tasks, respectively, for the "Entropy + KL Orth." model.

4.4 Qualitative analysis

We visualize the learned embeddings using t-SNE [14] projections for the extended YaleB and CIFAR-10 datasets (cf. Fig. 4). We use the image space $x$, as well as $z_T$ and $z_S$, as inputs to the projection to visualize what type of information is held within each representation. We also show the label of each image with regard to the target task to make it easier to inspect the clusters. For the extended YaleB dataset, we see that, using the image space $x$, the images are clustered mostly by their illumination conditions. However, when using $z_T$, the images are not clustered according to lighting conditions but rather, mostly, by subject identity. Moreover, the visualization of $z_S$ shows that this representation contains information about the sensitive class. For the CIFAR-10 dataset, using the image space basically clusters the images by dominant color. When using $z_T$, it is clear that the target information is separated: the right side represents the non-living objects and the left side represents the living objects. What should be observed in $z_T$ is that, within each target class, the fine classes are mixed and indistinguishable; for example, we see cars, boats, and trucks mixed on the right-hand side of the figure. The representation $z_S$ has some information about the target class but also holds residual information about the fine classes, as seen in the annotated red rectangle: a group of horse images is clustered together, followed by a few dog images, and then birds. This shows that $z_S$ has captured some sensitive information while $z_T$ is more agnostic to the sensitive fine classes.

(a) t-SNE on the image space $x$
(b) t-SNE on $z_T$
(c) t-SNE on $z_S$
(d) t-SNE on the image space $x$
(e) t-SNE on $z_T$
(f) t-SNE on $z_S$
Figure 4: t-SNE visualization of the extended YaleB faces (top) and CIFAR-10 (bottom) images. The figure is best viewed in color and at high resolution.

4.5 Sensitivity analysis

Figure 5: Sensitivity analysis on the Adult dataset

To analyze the effect of hyper-parameter choices on the sensitive and target accuracy, we show heatmaps of how the performance changes as the studied hyper-parameters are varied. The investigated hyper-parameters are the KL weight ($\lambda_{OD}$), the entropy weight ($\lambda_E$), the KL gamma, and the entropy gamma. We show the results on the Adult dataset. We can see that the sensitive accuracy reacts much more strongly to one of the two loss weights than to the other, as changes in the latter do not induce much change in the sensitive accuracy; a similar trend is not visible for the target accuracy. Regarding the choice of the gamma parameters, we see that the sensitive leakage is highly affected by these hyper-parameters and the results vary when they are changed, whereas the target classification task shows more robust performance.

5 Conclusion

In this work, we have proposed a novel model for learning invariant representations by decomposing the learned codes into sensitive and target representations. We imposed orthogonality and disentanglement constraints on the representations and forced the target representation to be uninformative about the sensitive information by maximizing the sensitive entropy. The proposed approach was evaluated on five datasets and compared with state of the art models. The results show that our proposed model performs better than state of the art models on three datasets and comparably on the other two. We observe better hiding of sensitive information while affecting the target accuracy minimally. This is in line with our hypothesis that decomposing the two representations and enforcing orthogonality can help with the problem of information leakage by redirecting the information into the sensitive representation. One current limitation of this work is that it requires a target task to learn the disentanglement, which could be avoided by learning reconstruction as an auxiliary task.