Auto-Encoding Total Correlation Explanation

by Shuyang Gao, et al.

Advances in unsupervised learning enable reconstruction and generation of samples from complex distributions, but this success is marred by the inscrutability of the representations learned. We propose an information-theoretic approach to characterizing disentanglement and dependence in representation learning using multivariate mutual information, also called total correlation. The principle of total Correlation Explanation (CorEx) has motivated successful unsupervised learning applications across a variety of domains, but under some restrictive assumptions. Here we relax those restrictions by introducing a flexible variational lower bound to CorEx. Surprisingly, we find that this lower bound is equivalent to the one in variational autoencoders (VAE) under certain conditions. This information-theoretic view of VAE deepens our understanding of hierarchical VAE and motivates a new algorithm, AnchorVAE, that makes latent codes more interpretable through information maximization and enables generation of richer and more realistic samples.




1 Introduction

Learning representations from data without labels has become increasingly important to solving some of the most crucial problems in machine learning, including tasks in image, language, and speech (Bengio et al., 2013). Complex models, such as deep neural networks, have been successfully applied to generative modeling with high-dimensional data. From these methods we can either infer hidden representations with variational autoencoders (VAE) (Kingma & Welling, 2013; Rezende et al., 2014) or generate new samples with VAE or generative adversarial networks (GAN) (Goodfellow et al., 2014).

Building on these successes, an explosive amount of recent effort has focused on interpreting learned representations, which could have significant implications for subsequent tasks. Methods like InfoGAN (Chen et al., 2016) and β-VAE (Higgins et al., 2017) are able to learn disentangled and interpretable representations in a completely unsupervised fashion. Information theory provides a natural framework for understanding representation learning and continues to generate new insights (Alemi et al., 2017; Shwartz-Ziv & Tishby, 2017; Achille & Soatto, 2018; Saxe et al., 2018).

In this paper we discuss the problem of learning disentangled and interpretable representations in a purely information-theoretic way. Instead of making assumptions about the data generating process at the outset, we consider the question of how informative the underlying latent variable Z is about the original data variable X. We would like Z to be as informative as possible about the relationships in X while remaining as disentangled as possible in the sense of statistical independence. This principle has been previously proposed as Correlation Explanation (CorEx) (Ver Steeg & Galstyan, 2014; Ver Steeg, 2017). By optimizing appropriate information-theoretic measures, CorEx defines not only an informative representation but also a disentangled one, thus eliciting a natural comparison to the recent literature on interpretable machine learning. However, computing the CorEx objective can be challenging, and previous studies have been restricted to cases where random variables are either discrete (Ver Steeg & Galstyan, 2014) or Gaussian (Ver Steeg & Galstyan, 2017).

Our key contributions are as follows:

  • We construct a variational lower bound to the CorEx objective and optimize the bound with deep neural networks. Surprisingly, we find that under standard assumptions, the lower bound for CorEx shares the same mathematical form as the evidence lower bound (ELBO) used in VAE, suggesting that CorEx provides a dual information-theoretic perspective on representations learned by VAE.

  • Going beyond the standard scenario to hierarchical VAEs or deep latent Gaussian models (DLGM) (Rezende et al., 2014), we demonstrate that CorEx provides new insight into measuring how representations become progressively more disentangled at subsequent layers. In addition, the CorEx objective can be naturally decomposed into two sets of mutual information terms with an interpretation as an unsupervised information bottleneck.

  • Inspired by this formulation, we propose to make some latent factors more interpretable by reweighting terms in the objective to make certain parts of the latent code uniquely informative about the inputs (instead of adding new terms to the objective, as in InfoGAN (Chen et al., 2016)).

  • Finally, we show that by sampling each latent code from the encoding distribution instead of the standard Gaussian prior in VAE, we can generate richer and more realistic samples than VAE even under the same network model.

We first review some basic information-theoretic quantities in Sec. 2, then introduce the total correlation explanation (CorEx) learning framework in Sec. 3. In Sec. 4 we derive the variational lower bound of the CorEx objective and demonstrate a connection with VAE in Sec. 5. This connection sheds light on some new applications of VAE, which we will describe in Sec. 6. We discuss related work in Sec. 7 and conclude our paper in Sec. 8.

2 Information Theory Background

Let X = (X_1, ..., X_d) denote a d-dimensional random variable whose probability density function is p(x). Shannon differential entropy (Cover & Thomas, 2006) is defined in the usual way as H(X) = -⟨ log p(x) ⟩_{p(x)}. Let Z = (Z_1, ..., Z_m) denote an m-dimensional random variable whose probability density function is p(z). Then the mutual information between the two random variables, X and Z, is defined as I(X : Z) = ⟨ log ( p(x, z) / (p(x) p(z)) ) ⟩_{p(x,z)}. Mutual information can also be viewed as the reduction in uncertainty about one variable given another variable; i.e., I(X : Z) = H(X) - H(X|Z) = H(Z) - H(Z|X).

A measure of multivariate mutual information called total correlation (Watanabe, 1960) or multi-information (Studenỳ & Vejnarova, 1998) is defined as follows:

    TC(X) = sum_{i=1}^{d} H(X_i) - H(X) = D_KL( p(x) || prod_{i=1}^{d} p(x_i) )    (1)

Here D_KL denotes the Kullback-Leibler divergence. Intuitively, TC(X) captures the total dependence across all the dimensions of X and is zero if and only if all X_i are independent. Total correlation or statistical independence is often used to characterize disentanglement in the recent literature on learning representations (Dinh et al., 2014; Achille & Soatto, 2017).
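For small discrete distributions, Eq. 1 can be evaluated exactly. The following sketch (a toy illustration, not code from the paper) computes TC for a joint distribution given as a table over outcome tuples:

```python
import math

def entropy(dist):
    """Shannon entropy in nats of a distribution given as {outcome: prob}."""
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def total_correlation(joint):
    """TC(X) = sum_i H(X_i) - H(X) for a discrete joint over tuples."""
    dims = len(next(iter(joint)))
    marginals = []
    for i in range(dims):
        m = {}
        for outcome, p in joint.items():
            m[outcome[i]] = m.get(outcome[i], 0.0) + p
        marginals.append(m)
    return sum(entropy(m) for m in marginals) - entropy(joint)

# Two perfectly correlated fair bits: TC = H(X1) + H(X2) - H(X1, X2) = log 2.
correlated = {(0, 0): 0.5, (1, 1): 0.5}
print(abs(total_correlation(correlated) - math.log(2)) < 1e-12)  # True

# Two independent fair bits: TC = 0.
indep = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
print(abs(total_correlation(indep)) < 1e-12)  # True
```

The second case confirms the "zero if and only if independent" property on a concrete example.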

The conditional total correlation of X, after observing some latent variable Z = z, is defined as follows:

    TC(X|Z) = sum_{i=1}^{d} H(X_i|Z) - H(X|Z)    (2)

We define a measure of the informativeness of the latent variable Z about the dependence among the observed variables X by quantifying how much the total correlation is reduced after conditioning on the latent factor Z; i.e.,

    TC(X;Z) = TC(X) - TC(X|Z)    (3)

From Eq. 3 we can see that TC(X;Z) is maximized if and only if the conditional distribution p(x|z) factorizes, in which case TC(X|Z) = 0 and we can interpret Z as capturing the information about common causes across all X_i.
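A toy numeric check of Eq. 3 (an illustration, not from the paper): when Z is a common cause that fully explains the dependence in X, conditioning removes all total correlation.

```python
import math

def entropy(dist):
    """Shannon entropy (nats) of a distribution given as {outcome: prob}."""
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

# Common-cause model: Z ~ Bernoulli(0.5) and X1 = X2 = Z.
# Given Z, each X_i is deterministic, hence conditionally independent,
# so TC(X|Z) = 0 and TC(X;Z) = TC(X) - TC(X|Z) = TC(X) = log 2.
tc_x = 2 * entropy({0: 0.5, 1: 0.5}) - entropy({(0, 0): 0.5, (1, 1): 0.5})
tc_x_given_z = 0.0   # X is a deterministic function of Z
tc_xz = tc_x - tc_x_given_z
print(abs(tc_xz - math.log(2)) < 1e-12)  # True: Z explains all dependence in X
```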

3 Total Correlation Explanation Representation Learning

In a typical unsupervised setting like VAE, we assume a generative model where X is a function of a latent variable Z, and we then maximize the log likelihood of X under this model. From a CorEx perspective, the situation is reversed. We let Z be some stochastic function of X parameterized by θ, i.e., p_θ(z|x). Then we seek a joint distribution p_θ(x, z) = p_θ(z|x) p(x), where p(x) is the underlying true data distribution, that maximizes the following objective:

    max_θ [ TC(X;Z) - TC(Z) ]    (4)

In Eq. 4, TC(X;Z) corresponds to the amount of correlation that is explained by Z, as defined in Eq. 3, and TC(Z) quantifies the dependence among the latent variables Z_1, ..., Z_m.

By the non-negativity of total correlation, Eq. 4 naturally forms a lower bound on TC(X); i.e., TC(X;Z) - TC(Z) <= TC(X) for any p_θ(z|x). Therefore, the global maximum of Eq. 4 occurs at TC(X|Z) = 0 and TC(Z) = 0, in which case p_θ(x, z) can be exactly interpreted as a generative model where the Z_j are independent random variables that generate X, as shown in Fig. 1.

Figure 1: The graphical model for p_θ(x, z) when p_θ achieves the global maximum in Eq. 4. In this model, all X_i are factorized conditioned on Z, and all Z_j are independent.

Notice that the term TC(X;Z) is a bit different from the classical definition of informativeness using mutual information I(X : Z) (Linsker, 1988). In fact, after combining the entropy terms in Eqs. 1 and 2, the following identity holds (Ver Steeg & Galstyan, 2015):

    TC(X;Z) = sum_{i=1}^{d} I(X_i : Z) - I(X : Z)    (5)

The TC(Z) term in Eq. 4 can be seen as encouraging a minimal latent representation which, after conditioning, disentangles X. When stacking hidden variable layers in Sec. 6, we will see that this condition can lead to interpretable features by forcing intermediate layers to be explained by higher layers under a factorized model.

Informativeness vs. Disentanglement

If we only consider the informativeness term TC(X;Z) in the objective, a naive solution would be simply to set Z = X. To avoid this, we also want the latent variables to be as disentangled as possible, which corresponds to the TC(Z) term encouraging independence. In other words, the objective in Eq. 4 is trying to find a representation Z that not only disentangles X as much as possible, but is itself as disentangled as possible.
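This trade-off can be made concrete with the correlated-bits example (illustrative numbers, not from the paper): copying the input maximizes informativeness but pays an equal disentanglement penalty, while a genuinely disentangling code scores strictly higher.

```python
import math

# Toy data: X1 = X2 = fair bit, so TC(X) = log 2.
tc_x = math.log(2)

# Solution 1: Z = X (copy). Then TC(X;Z) = TC(X) but also TC(Z) = TC(X),
# so the objective TC(X;Z) - TC(Z) collapses to zero.
objective_copy = tc_x - tc_x

# Solution 2: Z = X1 (a single latent bit). Conditioned on Z the X_i are
# independent, so TC(X;Z) = TC(X), and a one-dimensional Z has TC(Z) = 0.
objective_disentangled = tc_x - 0.0

print(objective_copy)                           # 0.0
print(objective_disentangled > objective_copy)  # True
```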

4 Optimization

We first focus on optimizing the objective function defined by Eq. 4. The extension to the multi-layer (hierarchical) case is presented in the next section.

By using Eqs. 1 and 5, we expand Eq. 4 into basic information-theoretic quantities as follows:

    TC(X;Z) - TC(Z) = sum_{i=1}^{d} I(X_i : Z) - I(X : Z) - TC(Z)    (6)

If we further constrain our search space to encoders of the factorized form p_θ(z|x) = prod_{j=1}^{m} p_θ(z_j|x) (each marginal is parametrized by a different θ_j, but we omit the subscript for simplicity), which is a standard assumption in most VAE models, then we have:

    TC(X;Z) - TC(Z) = sum_{i=1}^{d} I(X_i : Z) - sum_{j=1}^{m} I(Z_j : X)    (7)

We have thus converted the two total correlation terms into two sets of mutual information terms in Eq. 7. The first term, sum_i I(X_i : Z), denotes the mutual information between each input dimension X_i and Z, and can be broadly construed as measuring the "relevance" of the representation to each observed variable in the parlance of the information bottleneck (Tishby et al., 2000; Shwartz-Ziv & Tishby, 2017). The second term, sum_j I(Z_j : X), represents the mutual information between each latent dimension Z_j and X, and can be viewed as the compression achieved by each latent factor. We proceed by constructing tractable bounds on these quantities.
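The decomposition in Eq. 7 can be verified exactly on a small discrete example with a deterministic one-dimensional encoder (so TC(Z) = 0 trivially); the joint table below is illustrative, not from the paper:

```python
import math

def H(dist):
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def marginal(joint, idx):
    """Marginalize a joint over tuples onto the coordinates in idx."""
    out = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

def mi(joint, a, b):
    """I(A : B) = H(A) + H(B) - H(A, B) over coordinate index tuples a, b."""
    return H(marginal(joint, a)) + H(marginal(joint, b)) - H(marginal(joint, a + b))

# Correlated bits X1, X2 and a deterministic encoder Z = X1.
# Outcomes are (x1, x2, z).
joint = {(0, 0, 0): 0.4, (0, 1, 0): 0.1, (1, 0, 1): 0.1, (1, 1, 1): 0.4}

lhs = mi(joint, (0,), (2,)) + mi(joint, (1,), (2,))  # sum_i I(X_i : Z)
rhs = mi(joint, (0, 1), (2,))                        # I(X : Z); TC(Z) = 0 here
tc_xz = mi(joint, (0,), (1,))                        # for Z = X1, TC(X;Z) = I(X1:X2)
print(abs((lhs - rhs) - tc_xz) < 1e-12)  # True: Eq. 7 holds
```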

4.1 Variational Lower Bound for I(X_i : Z)

Barber & Agakov (2003) derived the following lower bound for mutual information using the non-negativity of the KL divergence; i.e., D_KL( p(x_i|z) || q_φ(x_i|z) ) >= 0 gives:

    I(X_i : Z) >= H(X_i) + ⟨ log q_φ(x_i|z) ⟩_{p_θ(x_i, z)}    (8)

where the angled brackets represent expectations, and q_φ(x_i|z) is an arbitrary distribution parametrized by φ. We need a variational distribution because the posterior p_θ(x_i|z) is hard to calculate when the true data distribution p(x) is unknown, although, in contrast to VAE, approximating the normalization factor can be tractable. A detailed comparison with VAE is made in Sec. 5.
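The Barber-Agakov bound is easy to check exactly on a toy binary channel (the channel and flip probability below are assumptions for illustration): the bound is tight when q equals the true posterior, and any other choice only loosens it.

```python
import math

# Toy channel: Z ~ Bernoulli(0.5); X equals Z, flipped with probability 0.1.
eps = 0.1
joint = {(z, x): 0.5 * ((1 - eps) if x == z else eps)
         for z in (0, 1) for x in (0, 1)}
p_x = {x: joint[(0, x)] + joint[(1, x)] for x in (0, 1)}
H_x = -sum(p * math.log(p) for p in p_x.values())

def ba_bound(q):
    """H(X) + E_{p(x,z)}[log q(x|z)] for a candidate decoder q[(x, z)] = q(x|z)."""
    return H_x + sum(p * math.log(q[(x, z)]) for (z, x), p in joint.items())

# True mutual information I(X : Z).
mi_true = sum(p * math.log(p / (0.5 * p_x[x])) for (z, x), p in joint.items())

posterior = {(x, z): joint[(z, x)] / 0.5 for z in (0, 1) for x in (0, 1)}
uniform = {(x, z): 0.5 for z in (0, 1) for x in (0, 1)}

print(abs(ba_bound(posterior) - mi_true) < 1e-12)  # True: tight at q = p(x|z)
print(ba_bound(uniform) <= mi_true)                # True: other q only loosen it
```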

4.2 Variational Upper Bound for I(Z_j : X)

We again use the non-negativity of the KL divergence, i.e., D_KL( p_θ(z_j) || r_ψ(z_j) ) >= 0, to obtain:

    I(Z_j : X) <= ⟨ D_KL( p_θ(z_j|x) || r_ψ(z_j) ) ⟩_{p(x)}    (9)

where r_ψ(z_j) represents an arbitrary distribution parametrized by ψ.
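A quick Monte Carlo sketch of Eq. 9 (the two-component encoder and the choice r(z) = N(0, 2) are illustrative assumptions): the averaged KL to any fixed r(z) upper-bounds the true mutual information, with slack equal to D_KL( p(z) || r(z) ).

```python
import math
import random

random.seed(0)

def log_normal(z, mu, var):
    return -0.5 * math.log(2 * math.pi * var) - (z - mu) ** 2 / (2 * var)

# Toy encoder: X uniform on {-1, +1}, p(z|x) = N(x, 1); take r(z) = N(0, 2).
# Closed-form KL( N(x, 1) || N(0, 2) ) averaged over x (|x| = 1 for both values):
s2 = 2.0
kl_bound = 0.5 * math.log(s2) + (1.0 + 1.0) / (2 * s2) - 0.5   # = 0.5 * log 2

# Monte Carlo estimate of the true I(Z : X) = E[ log p(z|x) - log p(z) ].
n, total = 50_000, 0.0
for _ in range(n):
    x = random.choice((-1.0, 1.0))
    z = random.gauss(x, 1.0)
    p_z = 0.5 * math.exp(log_normal(z, -1.0, 1.0)) + 0.5 * math.exp(log_normal(z, 1.0, 1.0))
    total += log_normal(z, x, 1.0) - math.log(p_z)
mi_estimate = total / n

print(mi_estimate <= kl_bound)  # the bound holds; tight only if r(z) = p(z)
```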

Combining the bounds of Eqs. 8 and 9 in Eq. 7, we have:

    TC(X;Z) - TC(Z) >= sum_{i=1}^{d} ( H(X_i) + ⟨ log q_φ(x_i|z) ⟩ ) - sum_{j=1}^{m} ⟨ D_KL( p_θ(z_j|x) || r_ψ(z_j) ) ⟩    (10)

We can then jointly optimize the lower bound in Eq. 10 w.r.t. both the stochastic encoder parameter θ and the variational parameters φ and ψ.

5 Connection to Variational Autoencoders

Remarkably, Eq. 10 has a form that is very similar to the lower bound introduced in variational autoencoders, except that it is decomposed over each input dimension x_i and each latent dimension z_j. To pursue this similarity further, we denote

    q_φ(x|z) = prod_{i=1}^{d} q_φ(x_i|z),    r_ψ(z) = prod_{j=1}^{m} r_ψ(z_j)    (11)

Then, by rearranging the terms in Eq. 10, we obtain

    TC(X;Z) - TC(Z) >= sum_{i=1}^{d} H(X_i) + ⟨ log q_φ(x|z) ⟩_{p_θ(x,z)} - ⟨ D_KL( p_θ(z|x) || r_ψ(z) ) ⟩_{p(x)}    (12)

The first term in the bound, sum_i H(X_i), is a constant and has no effect on the optimization. The remaining expression coincides with the VAE objective as long as r_ψ(z) is a standard Gaussian: the second term corresponds to the reconstruction error, and the third term is the KL divergence term in VAE.
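With a diagonal-Gaussian encoder, a Bernoulli decoder, and r(z_j) = N(0, 1), the bound of Eq. 10 reduces to the familiar per-dimension ELBO computation. A minimal sketch (shapes and numeric values are illustrative, not from the paper's experiments):

```python
import math

def kl_gauss_std_normal(mu, logvar):
    """Closed-form KL( N(mu, exp(logvar)) || N(0, 1) ) for one latent dimension."""
    return 0.5 * (mu ** 2 + math.exp(logvar) - logvar - 1.0)

def bernoulli_log_lik(x_i, p_i):
    """log q(x_i|z) for a Bernoulli decoder on one input dimension."""
    return x_i * math.log(p_i) + (1 - x_i) * math.log(1 - p_i)

def corex_lower_bound(x, dec_probs, mu, logvar):
    """Reconstruction summed over input dims minus KL summed over latent dims.

    Equals the ELBO up to the constant sum_i H(X_i) dropped from Eq. 12.
    """
    recon = sum(bernoulli_log_lik(xi, pi) for xi, pi in zip(x, dec_probs))
    kl = sum(kl_gauss_std_normal(m, lv) for m, lv in zip(mu, logvar))
    return recon - kl

# Sanity check: an encoder output matching the prior (mu = 0, logvar = 0)
# incurs zero KL cost, so the bound equals the reconstruction term alone.
value = corex_lower_bound([1, 0, 1], [0.9, 0.1, 0.9], [0.0, 0.0], [0.0, 0.0])
print(abs(value - 3 * math.log(0.9)) < 1e-12)  # True
```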


The CorEx objective starts with a defined encoder p_θ(z|x) and seeks a decoder q_φ(x|z) via a variational approximation to the true posterior; VAE is exactly the opposite. Moreover, in VAE we need a variational approximation to the posterior because the normalization constant is intractable; in CorEx the variational distribution is needed because we do not know the true data distribution p(x). It is also worth mentioning that the lower bound in Eq. 12 requires a fully factorized form of the decoder, q_φ(x|z) = prod_i q_φ(x_i|z), unlike VAE, where the decoder can be flexible. (In this paper we also restrict the encoder distribution to a factorized form, following standard network structures in VAE, but this is not a necessary condition for achieving the lower bound in Eq. 12.)

As pointed out by Zhao et al. (2017a), if we choose a more expressive distribution family, such as PixelRNN/PixelCNN (Van Oord et al., 2016; Gulrajani et al., 2017), for the decoder in a VAE, the model tends to neglect the latent codes altogether, i.e., I(X : Z) = 0. This problem, however, does not exist in CorEx, since the objective explicitly requires Z to be informative about X. It is this informativeness term that leads the CorEx objective to a factorized decoder family q_φ(x|z) = prod_i q_φ(x_i|z). In fact, if we assume q_φ(x_i|z) = p(x_i), then the lower bound in Eq. 8 becomes H(X_i) + ⟨ log p(x_i) ⟩ = 0, an informativeness term of zero, meaning CorEx will avoid such undesirable solutions.

Stacking CorEx and Hierarchical VAE

Notice that if Eq. 4 does not achieve the global maximum, the latent variable Z may still not be fully disentangled, i.e., TC(Z) > 0. If this is the case, we can reapply the CorEx principle (Ver Steeg & Galstyan, 2015) and learn another layer of latent variables Z^2 on top of Z^1 = Z, optimizing the following objective w.r.t. p(z^2|z^1):

    TC(Z^1; Z^2) - TC(Z^2)    (13)
To generalize, suppose there are L layers of latent variables Z^1, ..., Z^L, and further denote the observed variable X = Z^0. Then one can stack each latent variable Z^k on top of Z^{k-1} and jointly optimize the summation of the corresponding objectives, as given in Eqs. 4 and 13; i.e.,

    max sum_{k=1}^{L} ( TC(Z^{k-1}; Z^k) - TC(Z^k) )    (14)

By simple expansion of Eq. 14 and cancellation of intermediate terms, we have:

    sum_{k=1}^{L} ( TC(Z^{k-1}; Z^k) - TC(Z^k) ) = TC(X) - sum_{k=1}^{L} TC(Z^{k-1}|Z^k) - TC(Z^L) <= TC(X)    (15)

Furthermore, if we have TC(Z^{k-1}; Z^k) - TC(Z^k) >= 0 for all k, then we get:

    TC(X) >= sum_{k=1}^{L} ( TC(Z^{k-1}; Z^k) - TC(Z^k) ) >= sum_{k=1}^{L-1} ( TC(Z^{k-1}; Z^k) - TC(Z^k) )    (16)

Eq. 16 shows that stacking latent factor representations results in progressively better lower bounds for TC(X).

To optimize Eq. 14, we reuse Eqs. 7, 8 and 9 at each layer and get:

    sum_{k=1}^{L} [ sum_i ( H(Z_i^{k-1}) + ⟨ log q(z_i^{k-1}|z^k) ⟩ ) - sum_j ⟨ D_KL( p(z_j^k|z^{k-1}) || r(z_j^k) ) ⟩ ]    (17)

Enforcing independence relations at each layer, we denote:

    q(z^{k-1}|z^k) = prod_i q(z_i^{k-1}|z^k),    r(z^k) = prod_j r(z_j^k)    (18)

and obtain

    sum_{k=1}^{L} [ sum_i H(Z_i^{k-1}) + ⟨ log q(z^{k-1}|z^k) ⟩ - ⟨ D_KL( p(z^k|z^{k-1}) || r(z^k) ) ⟩ ]    (19)
One can now see that the second term on the RHS of Eq. 19 has the same form as deep latent Gaussian models (Rezende et al., 2014) (also known as hierarchical VAE), as long as the latent code distribution r(z^L) on the top layer is a standard normal and q(z^{k-1}|z^k) on each layer is parametrized by a Gaussian distribution.

One immediate insight from this connection is that, as long as each term TC(Z^{k-1}; Z^k) - TC(Z^k) is greater than zero in Eq. 14, then by expanding the definition of each term we can easily see that Z^k is more disentangled than Z^{k-1}; i.e., TC(Z^k) < TC(Z^{k-1}). Therefore, each latent layer of a hierarchical VAE will be progressively more disentangled if TC(Z^{k-1}; Z^k) - TC(Z^k) > 0 for each k. This interpretation also provides a criterion for determining the depth of a hierarchical representation: we can add layers as long as the corresponding term in the objective is positive, so that the overall lower bound on TC(X) keeps increasing.

Despite reaching the same final expression, approaching this result from an information-theoretic optimization rather than a generative modeling perspective offers some advantages. First, we have much more flexibility in specifying the distribution of latent factors, as we can directly sample from this distribution using our encoder. Second, the connection with mutual information suggests intuitive modifications of our objective that increase the interpretability of results. These advantages are explored in more depth in Sec. 6.

6 Applications

6.1 Disentangling Latent Codes via Hierarchical VAE / Stacking CorEx on MNIST

We train a simple hierarchical VAE / stacking CorEx model with two stochastic layers on the MNIST dataset. The graphical model is shown in Fig. 2. For each stochastic layer, we use a neural network to parametrize the distributions p(z^1|x) and p(z^2|z^1), and we set the variational marginal distribution to be a fixed standard Gaussian.

Figure 2: Encoder and decoder models for MNIST, where z^1 is a 64-dimensional continuous variable and z^2 is a discrete variable (a one-hot vector of length ten).

We use a 784-512-512-64 fully connected network between x and z^1 and a 64-32-32-16-16-10 dense network between z^1 and z^2, with ReLU activations in both. The output of the top layer is a ten-dimensional one-hot vector; we decode based on each one-hot representation and weight the results according to their softmax probabilities.

After training the model, we find that the learned discrete variable z^2 on the top layer gives an unsupervised classification accuracy of 85%, which is competitive with the more complex method of (Dilokthanakul et al., 2016).

To verify that the top layer helps disentangle the middle layer by encouraging conditional independence of z^1 given z^2, we calculate the mutual information between the input x and each dimension z_j^1, as shown in Fig. LABEL:fig:mnist_mi. We can see that around 80% of the latent codes have very low mutual information with x. We then select the two dimensions with the highest mutual information, denoted z_a^1 and z_b^1. We generate new digits by first fixing the discrete latent variable z^2 on the top layer, and then sampling latent codes from q(z^1|z^2). We systematically vary the noise from -2 to 2 through z_a^1 and z_b^1 while keeping the other dimensions of z^1 fixed, and visualize the results in Fig. 3.

(a) Manipulating z_a^1 on MNIST. (Azimuth)

(b) Manipulating z_b^1 on MNIST. (Width)

Figure 3: Varying the latent codes of z^1 on MNIST: In both figures, each row corresponds to a fixed discrete value of the top layer z^2. Columns correspond to varying the noise of the selected latent node in layer z^1 from left to right, while keeping the other latent codes fixed. In (a), varying the noise results in different rotations of the digit; in (b), a small (large) value of the latent code corresponds to a wider (narrower) digit.

We can see that this simple two-layer structure automatically disentangles and learns interpretable factors on MNIST (width and rotation). We attribute this behavior to stacking, where the top layer disentangles the middle layer and makes the latent codes more interpretable through samples from q(z^1|z^2).

6.2 Learning Interpretable Representations through Information Maximizing VAE / CorEx on CelebA

One important insight from recently developed methods, like InfoGAN, is that we can maximize the mutual information between a latent code and the observations to make the latent code more interpretable.

While it seems ad hoc to add an additional mutual information term to the original VAE objective, a more natural analogue arises in the CorEx setting. The formulation in Eq. 7 already contains two sets of mutual information terms. If one would like to anchor a latent variable, say Z_1, to have higher mutual information with the observation X, then one can simply modify the objective by replacing the unweighted sum with a weighted one:

    sum_{i=1}^{d} I(X_i : Z) - sum_{j=1}^{m} β_j I(Z_j : X),  with β_1 < 1 and β_j = 1 for j != 1    (20)
Eq. 20 suggests that mutual information maximization in CorEx is achieved by modifying the corresponding weights of the second term in Eq. 7. Using the lower bound of Eq. 10, we obtain

    sum_{i=1}^{d} ( H(X_i) + ⟨ log q_φ(x_i|z) ⟩ ) - sum_{j=1}^{m} β_j ⟨ D_KL( p_θ(z_j|x) || r_ψ(z_j) ) ⟩    (21)

Eq. 21 shows that in VAE we can decrease the weight of the KL divergence for particular latent codes to achieve mutual information maximization. We call the approach of Eq. 21 AnchorVAE. Notice that there is a subtle difference between AnchorVAE and β-VAE (Higgins et al., 2017). In β-VAE, the weights of the KL divergence terms for all latent codes are the same, while in AnchorVAE only the weights of specified factors are changed to encourage high mutual information. With some prior knowledge of the underlying factors of variation, AnchorVAE encourages the model to concentrate this explanatory power in a limited number of variables.
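A minimal sketch of the AnchorVAE modification (function and variable names here are illustrative, not the paper's code): relative to a plain VAE, only the KL weight of the anchored dimensions changes.

```python
import math

def anchor_vae_kl(mus, logvars, anchor_dims, beta_anchor=0.5):
    """Weighted KL term of Eq. 21 for a diagonal-Gaussian encoder.

    Anchored dimensions get weight beta_anchor < 1 (less compression pressure,
    hence more information about X); all other dimensions keep weight 1.
    """
    total = 0.0
    for j, (m, lv) in enumerate(zip(mus, logvars)):
        w = beta_anchor if j in anchor_dims else 1.0
        total += w * 0.5 * (m ** 2 + math.exp(lv) - lv - 1.0)
    return total

# Anchoring dimension 0 halves its KL penalty relative to the plain VAE.
mus, logvars = [1.0, 1.0], [0.0, 0.0]
plain = anchor_vae_kl(mus, logvars, anchor_dims=set())
anchored = anchor_vae_kl(mus, logvars, anchor_dims={0})
print(anchored < plain)  # True
```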

Figure 4: Mutual information I(X : Z_j) between the input data X and each latent variable Z_j on CelebA with AnchorVAE. It is clear that the anchored first five dimensions have the highest mutual information with X.

We trained AnchorVAE on the CelebA dataset with 2048 latent factors, using mean squared error as the reconstruction loss. We adopted a three-layer convolutional neural network structure. The KL divergence weights of the first five latent variables were set to 0.5 so that they attain higher mutual information than the other latent variables. The mutual information after training is plotted in Fig. 4. We find that these five latent variables have the highest mutual information, around 3.5, demonstrating the mutual information maximization effect in AnchorVAE.

To evaluate the interpretability of the anchored variables for generating new samples, we manipulate the first five latent variables while keeping the other dimensions fixed. Fig. 5 summarizes the result. We observe that all five anchored latent variables learn intuitive factors of variation in the data. It is interesting to see that the skin color and lighting variables behave similarly: both vary the generated images from light to dark in some sense. However, these two latent factors are actually very different: one emphasizes skin color variation, while the other controls the position of the light source.

(a) Varying z_1. (Skin Color)
(b) Varying z_2. (Azimuth)
(c) Varying z_3. (Emotion)
(d) Varying z_4. (Hair)
(e) Varying z_5. (Lighting)
Figure 5: Manipulating latent codes on CelebA using AnchorVAE: We show the effect of the anchored latent variables on the outputs while traversing their values over [-3, 3]. Each row represents a different seed image used to encode the latent codes. Each anchored latent code represents a different interpretable factor: (a) Skin Color, (b) Azimuth, (c) Emotion (Smile), (d) Hair (less or more), (e) Lighting.

We also trained the original VAE objective with the same network structure and examined the top five latent codes with the highest mutual information. Fig. 6 shows the results of manipulating the top two of these latent codes. We can see that they reflect an entangled representation. The other three latent codes demonstrate similar entanglement and are omitted here.

(a) A latent code entangling skin color with hair
(b) A latent code entangling emotion with azimuth
Figure 6: Manipulating the top two latent codes with the most mutual information on CelebA using the original VAE. We observe that both latent codes learned entangled representations: (a) entangles skin color with hair; (b) entangles emotion with azimuth.

6.3 Generating Richer and More Realistic Images via CorEx

Let us revisit the variational upper bound on I(Z_j : X) in Eq. 9. In this upper bound, VAE chooses r(z_j) to be a standard normal distribution. But notice that the upper bound becomes tight when r(z_j) matches the true encoder marginal; i.e., r(z_j) = p(z_j), where p(z_j) = ⟨ p_θ(z_j|x) ⟩_{p(x)}. Therefore, after training the model, we can approximate p(z_j) by first sampling a data point x ~ p(x) and then sampling z_j from the conditional p_θ(z_j|x). Repeating this process across latent dimensions, we can use the factorized distribution prod_j p(z_j) to generate new data instead of sampling from a standard normal. In this way, we obtain more realistic images, since we are sampling from a tighter lower bound to the CorEx objective.
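This sampling strategy is simple to implement: for each latent coordinate, pick a training point at random and sample that coordinate from its encoder distribution. A sketch with made-up encoder outputs (the diagonal-Gaussian form and all values are illustrative):

```python
import random

random.seed(1)

# Pretend encoder outputs (mu, sigma) per latent dimension for three
# "training" points; a real model would produce one entry per data point.
encoder_outputs = [
    ([-1.2, 0.3], [0.5, 1.0]),
    ([0.8, -0.7], [0.7, 0.4]),
    ([2.0, 0.1], [1.5, 0.6]),
]

def sample_latent(outputs):
    """Draw z ~ prod_j p(z_j), where p(z_j) = E_x[ p(z_j|x) ].

    Each coordinate independently picks a random data point and samples from
    that point's encoder marginal, matching the factorized approximation.
    """
    z = []
    for dim in range(len(outputs[0][0])):
        mu, sigma = random.choice(outputs)   # fresh draw of x per dimension
        z.append(random.gauss(mu[dim], sigma[dim]))
    return z

z = sample_latent(encoder_outputs)
print(len(z))  # 2
```

Unlike sampling from N(0, 1), this respects latent dimensions whose aggregated marginals have variance far from one.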

We ran a traditional VAE on the CelebA dataset with the log-normal loss as the reconstruction error and 128 latent codes. We calculated the variance of each marginal p(z_j) and plotted the cumulative distribution of these variances in Fig. 7(a).

(a) Cumulative distribution of the variance of each p(z_j)
(b) Variance of p(z_j) versus mutual information I(Z_j : X)
Figure 7: Variance statistics for p(z_j) on CelebA after training a standard VAE with 128 latent codes.

One can see that around 20% of the latent variables have a variance greater than two. We plot variance versus mutual information in Fig. 7(b), where higher variance in p(z_j) corresponds to higher mutual information I(Z_j : X). In this case, using a standard normal distribution with variance 1 for every z_j would be far from optimal for generating the data.

(a) Latent codes sampled from a standard normal
(b) Latent codes sampled from prod_j p(z_j)
Figure 8: Different sampling strategies for the latent codes on the CelebA dataset with VAE / CorEx. Sampling latent codes from prod_j p(z_j) in (b) yields better quality images than sampling from a standard normal distribution in (a).

Fig. 8 shows the images generated by sampling the latent code either from a standard normal distribution or from the factorized distribution prod_j p(z_j). We can see that Fig. 8(b) not only tends to generate more realistic images than Fig. 8(a), but also exhibits more diversity. We attribute this improvement to the more flexible nature of our latent code distribution.

7 Related Work

The notion of disentanglement in representation learning lacks a unique characterization, but it generally refers to latent factors which are individually interpretable, amenable to simple downstream modeling or transfer learning, and invariant to nuisance variation in the data (Bengio et al., 2013). We adopt the common definition of statistical independence (Achille & Soatto, 2017; Dinh et al., 2014) by minimizing total correlation, an idea with a rich history (Barlow, 1989; Comon, 1994; Schmidhuber, 1992). However, there are numerous alternatives not rooted in independence. Higgins et al. (2017) measure disentanglement by the identifiability of changes in a single latent dimension. More concretely, they vary only one latent variable with the others fixed, apply the learned decoder and encoder to reconstruct the latent space, and propose that a classifier should be able to predict the varied dimension for a disentangled representation. The work of Thomas et al. (2017) and Bengio et al. (2017) is similar in spirit, identifying disentangled factors as changes in a latent embedding that can be controlled via reinforcement learning. Alternatively, if prior knowledge of the number of desired factors of variation is given, models such as InfoGAN (Chen et al., 2016) or our AnchorVAE seek to directly incorporate this information.

Our work provides a complementary perspective to a growing body of research connecting information theory and variational inference (Achille & Soatto, 2017, 2018; Alemi et al., 2017); much of this is motivated by the Information Bottleneck (IB) method (Tishby et al., 2000). In the unsupervised case, IB generalizes the VAE objective by adding a Lagrange multiplier β to the KL divergence term of the ELBO to manage the trade-off between data reconstruction and model compression. This is identical to the β-VAE objective, where Higgins et al. (2017) observe that overweighting the KL divergence term (β > 1) can encourage disentanglement, albeit at the cost of reconstruction performance. Achille & Soatto (2018) add an additional total correlation regularization to the IB Lagrangian to encourage independence, and propose increasing the multiplier gradually during training. Furthermore, their optimization using multiplicative noise generalizes dropout methods, which helps to achieve improved robustness to nuisance variables.

These objectives match CorEx and the ELBO for β = 1, but adding a Lagrange multiplier to control the disentangling term in CorEx would not lead to β-VAE. We saw in Sec. 5 that our bound on the CorEx objective reduces to the ELBO under common factorization assumptions, so adding a multiplier to the TC(Z) term in CorEx would instead lead to an ELBO with an extra total correlation penalty on the latent codes. This recovers the objective of Kim & Mnih (2017), who penalize TC(Z) to encourage independence, but without the more principled justification of CorEx.

(Sønderby et al., 2016; Zhao et al., 2017b) highlight limitations of the naive hierarchical VAE, such as representational inefficiency, and propose alternative ladder neural network structures for learning hierarchical features. However, from the CorEx perspective, we observe that the hierarchical VAE is encouraging more disentangled representations in top layers, which has not been previously recognized.

8 Conclusion

Deep learning enables us to construct latent representations that reconstruct or generate samples from complex, high-dimensional distributions. Unfortunately, these powerful models do not necessarily produce representations with structures that match human intuition or goals. Subtle changes to training objectives lead to qualitatively different representations, but our understanding of this dependence remains tenuous.

Information theory has proven fruitful for understanding the competition between compression and relevance preservation in supervised learning (Shwartz-Ziv & Tishby, 2017). We explored a similar trade-off in unsupervised learning, between multivariate information maximization and disentanglement of the learned factors. Writing this objective in terms of mutual information led to two surprising connections. First, we came to an unsupervised information bottleneck formulation that trades off compression and reconstruction relevance. Second, we found that by making appropriate variational approximations, we could reproduce the venerable VAE objective. This new perspective on VAE enabled more flexible distributions for latent codes and motivated new generalizations of the objective to localize interpretable information in latent codes. Ultimately, this led us to a novel learning objective that generated latent factors capturing intuitive structures in image data. We hope this alternative formulation of unsupervised learning continues to provide useful insights into this challenging problem.