Learning representations from data without labels has become increasingly important for solving some of the most crucial problems in machine learning, including tasks involving images, language, and speech (Bengio et al., 2013). Complex models, such as deep neural networks, have been successfully applied to generative modeling with high-dimensional data. With these methods we can either infer hidden representations with variational autoencoders (VAE) (Kingma & Welling, 2013; Rezende et al., 2014) or generate new samples with VAE or generative adversarial networks (GAN) (Goodfellow et al., 2014).
Building on these successes, an explosive amount of recent effort has focused on interpreting learned representations, which could have significant implications for subsequent tasks. Methods like InfoGAN (Chen et al., 2016) and β-VAE (Higgins et al., 2017) are able to learn disentangled and interpretable representations in a completely unsupervised fashion. Information theory provides a natural framework for understanding representation learning and continues to generate new insights (Alemi et al., 2017; Shwartz-Ziv & Tishby, 2017; Achille & Soatto, 2018; Saxe et al., 2018).
In this paper we discuss the problem of learning disentangled and interpretable representations in a purely information-theoretic way. Instead of making assumptions about the data generating process at the outset, we ask how informative the underlying latent variable Z is about the original data variable X. We would like Z to be as informative as possible about the relationships in X while remaining as disentangled as possible in the sense of statistical independence. This principle has been previously proposed as Correlation Explanation (CorEx) (Ver Steeg & Galstyan, 2014; Ver Steeg, 2017). By optimizing appropriate information-theoretic measures, CorEx defines not only an informative representation but also a disentangled one, thus eliciting a natural comparison to the recent literature on interpretable machine learning. However, computing the CorEx objective can be challenging, and previous studies have been restricted to cases where random variables are either discrete (Ver Steeg & Galstyan, 2014) or Gaussian (Ver Steeg & Galstyan, 2017).
Our key contributions are as follows:
We construct a variational lower bound to the CorEx objective and optimize the bound with deep neural networks. Surprisingly, we find that under standard assumptions, the lower bound for CorEx shares the same mathematical form as the evidence lower bound (ELBO) used in VAE, suggesting that CorEx provides a dual information-theoretic perspective on representations learned by VAE.
Going beyond the standard scenario to hierarchical VAEs, also known as deep latent Gaussian models (DLGM) (Rezende et al., 2014), we demonstrate that CorEx provides new insight into measuring how representations become progressively more disentangled at subsequent layers. In addition, the CorEx objective can be naturally decomposed into two sets of mutual information terms with an interpretation as an unsupervised information bottleneck.
Inspired by this formulation, we propose to make some latent factors more interpretable by reweighting terms in the objective to make certain parts of the latent code uniquely informative about the inputs (instead of adding new terms to the objective, as in InfoGAN (Chen et al., 2016)).
Finally, we show that by sampling each latent code from the encoding distribution instead of the standard Gaussian prior in VAE, we can generate richer and more realistic samples than VAE even under the same network model.
We first review some basic information-theoretic quantities in Sec. 2, then introduce the total correlation explanation (CorEx) learning framework in Sec. 3. In Sec. 4 we derive the variational lower bound of the CorEx objective and demonstrate a connection with VAE in Sec. 5. This connection sheds light on some new applications of VAE, which we will describe in Sec. 6. We discuss related work in Sec. 7 and conclude our paper in Sec. 8.
2 Information Theory Background
Let X = (X_1, ..., X_n) denote an n-dimensional random variable with probability density function p(x). Shannon differential entropy (Cover & Thomas, 2006) is defined in the usual way as H(X) = -⟨log p(x)⟩. Let Z denote an m-dimensional random variable with probability density function p(z). Then the mutual information between the two random variables X and Z is defined as I(X; Z) = ⟨log ( p(x, z) / (p(x) p(z)) )⟩. Mutual information can also be viewed as the reduction in uncertainty about one variable given another variable, i.e., I(X; Z) = H(X) - H(X|Z).

The total correlation of X (Watanabe, 1960; Studený & Vejnarova, 1998) is defined as

TC(X) = ∑_{i=1}^n H(X_i) - H(X) = D_KL( p(x) ‖ ∏_i p(x_i) ),   (1)

where D_KL denotes the Kullback-Leibler divergence in Eq. 1. Intuitively, TC(X) captures the total dependence across all the dimensions of X and is zero if and only if all X_i are independent. Total correlation, i.e., statistical independence, is often used to characterize disentanglement in recent literature on learning representations (Dinh et al., 2014; Achille & Soatto, 2017).
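As a quick numerical sanity check (our toy example, not from the paper), both forms of Eq. 1 can be computed exactly for a small discrete distribution:

```python
import math
from itertools import product

# Toy joint distribution over two correlated binary variables X1, X2
# (the numbers are an arbitrary illustrative choice).
p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

def H(dist):
    """Shannon entropy (nats) of a distribution given as {outcome: prob}."""
    return -sum(q * math.log(q) for q in dist.values() if q > 0)

p1 = {a: p[(a, 0)] + p[(a, 1)] for a in (0, 1)}   # marginal p(x1)
p2 = {b: p[(0, b)] + p[(1, b)] for b in (0, 1)}   # marginal p(x2)

# Total correlation two ways: entropy form and KL form of Eq. 1.
tc_entropy = H(p1) + H(p2) - H(p)
tc_kl = sum(p[(a, b)] * math.log(p[(a, b)] / (p1[a] * p2[b]))
            for a, b in product((0, 1), repeat=2))
assert abs(tc_entropy - tc_kl) < 1e-12
print(round(tc_entropy, 4))   # positive because X1 and X2 are dependent
```

Both forms agree, and the value drops to zero if the joint is replaced by the product of its marginals.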
The conditional total correlation of X, after observing some latent variable Z, is defined as follows:

TC(X|Z) = ∑_i H(X_i|Z) - H(X|Z) = ⟨ D_KL( p(x|z) ‖ ∏_i p(x_i|z) ) ⟩_{p(z)}.   (2)

We define a measure of the informativeness of the latent variable Z about the dependence among the observed variables X by quantifying how much the total correlation is reduced after conditioning on Z; i.e.,

TC(X; Z) = TC(X) - TC(X|Z).   (3)

From Eq. 3, we can see that TC(X; Z) is maximized if and only if the conditional distribution p(x|z) factorizes, in which case we can interpret Z as capturing the information about common causes across all X_i.
3 Total Correlation Explanation Representation Learning
In a typical unsupervised setting like VAE, we assume a generative model where X is a function of a latent variable Z, and we then maximize the log likelihood of X under this model. From a CorEx perspective, the situation is reversed. We let Z be some stochastic function of X parameterized by α, i.e., p_α(z|x). Then we seek a joint distribution p_α(x, z) = p_α(z|x) p(x), where p(x) is the underlying true data distribution, that maximizes the following objective:

max_α TC(X; Z) - TC(Z).   (4)

By the non-negativity of total correlation, Eq. 4 naturally forms a lower bound on TC(X); i.e., TC(X; Z) - TC(Z) ≤ TC(X) for any Z. Therefore, the global maximum of Eq. 4 occurs at TC(X|Z) = 0 and TC(Z) = 0, in which case p_α(x, z) can be exactly interpreted as a generative model where the Z_j are independent random variables that generate X, as shown in Fig. 1.
Notice that the informativeness term TC(X; Z) is a bit different from the classical definition of informativeness using mutual information (Linsker, 1988). In fact, after combining the entropy terms in Eqs. 1 and 2, the following equation holds (Ver Steeg & Galstyan, 2015):

TC(X; Z) = ∑_{i=1}^n I(X_i; Z) - I(X; Z).   (5)

The terms in Eq. 4 can thus be seen as finding a minimal latent representation which, after conditioning, disentangles X. When stacking hidden variable layers in Sec. 6, we will see that this condition can lead to interpretable features by forcing intermediate layers to be explained by higher layers under a factorized model.
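Since Eq. 5 is an algebraic identity, it can be verified numerically. The toy construction below (ours, not the paper's) makes a latent cause Z generate two noisy copies of itself, so the conditional distribution factorizes and TC(X; Z) recovers all of TC(X):

```python
import math

def H(dist):
    """Shannon entropy (nats) of {outcome: prob}."""
    return -sum(q * math.log(q) for q in dist.values() if q > 0)

def marg(dist, keep):
    """Marginalize a joint {tuple: prob} onto the given index positions."""
    out = {}
    for outcome, q in dist.items():
        key = tuple(outcome[i] for i in keep)
        out[key] = out.get(key, 0.0) + q
    return out

# A latent cause Z generates two noisy copies: p(x_i = z | z) = 0.8,
# independently, so p(x|z) factorizes. Joint outcomes are (x1, x2, z).
p = {}
for z in (0, 1):
    for x1 in (0, 1):
        for x2 in (0, 1):
            p[(x1, x2, z)] = 0.5 * (0.8 if x1 == z else 0.2) * (0.8 if x2 == z else 0.2)

pz, px = marg(p, (2,)), marg(p, (0, 1))
px1, px2 = marg(p, (0,)), marg(p, (1,))
px1z, px2z = marg(p, (0, 2)), marg(p, (1, 2))

# TC(X; Z) = TC(X) - TC(X|Z), using H(.|Z) = H(., Z) - H(Z).
tc_x = H(px1) + H(px2) - H(px)
tc_x_given_z = (H(px1z) - H(pz)) + (H(px2z) - H(pz)) - (H(p) - H(pz))
tc_xz = tc_x - tc_x_given_z

# Eq. 5: TC(X; Z) = sum_i I(X_i; Z) - I(X; Z).
sum_mi = (H(px1) + H(pz) - H(px1z)) + (H(px2) + H(pz) - H(px2z))
mi_xz = H(px) + H(pz) - H(p)
assert abs(tc_xz - (sum_mi - mi_xz)) < 1e-9
assert abs(tc_x_given_z) < 1e-9   # the conditional factorizes, so TC(X|Z) = 0
```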
Informativeness vs. Disentanglement
If we only considered the informativeness term TC(X; Z) in the objective, a naive solution would be to simply set Z = X. To avoid this, we also want the latent variables to be as disentangled as possible, corresponding to the TC(Z) term encouraging independence. In other words, the objective in Eq. 4 is trying to find a Z that not only disentangles X as much as possible, but is itself as disentangled as possible.
4 Variational Bounds for the CorEx Objective

We first focus on optimizing the objective function defined by Eq. 4. The extension to the multi-layer (hierarchical) case is presented in the next section.
If we further constrain our search space to have the factorized form p_α(z|x) = ∏_{j=1}^m p_α(z_j|x) (each marginal distribution p_α(z_j|x) is parametrized by a different α_j, but we omit the subscript on α for simplicity in what follows), which is a standard assumption in most VAE models, then we have:

TC(X; Z) - TC(Z) = ∑_{i=1}^n I(X_i; Z) - ∑_{j=1}^m I(Z_j; X).   (7)

We have thus converted the two total correlation terms into two sets of mutual information terms in Eq. 7. The first term, ∑_i I(X_i; Z), denotes the mutual information between each input dimension X_i and the latent code Z, and can be broadly construed as measuring the "relevance" of the representation to each observed variable in the parlance of the information bottleneck (Tishby et al., 2000; Shwartz-Ziv & Tishby, 2017). The second term, ∑_j I(Z_j; X), represents the mutual information between each latent dimension Z_j and X and can be viewed as the compression achieved by each latent factor. We proceed by constructing tractable bounds on these quantities.
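Eq. 7 depends on the factorized-encoder assumption. The identity can be checked numerically on a toy construction of ours (arbitrary flip probabilities, not from the paper):

```python
import math
from itertools import product

def H(dist):
    """Shannon entropy (nats) of {outcome: prob}."""
    return -sum(q * math.log(q) for q in dist.values() if q > 0)

def marg(dist, keep):
    """Marginalize a joint {tuple: prob} onto the given index positions."""
    out = {}
    for outcome, q in dist.items():
        key = tuple(outcome[i] for i in keep)
        out[key] = out.get(key, 0.0) + q
    return out

# Correlated inputs X = (X1, X2); factorized encoder p(z|x) = p(z1|x) p(z2|x),
# here with each Z_j a noisy copy of one input (flip probability 0.1).
# Joint outcomes are tuples (x1, x2, z1, z2).
px = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
p = {}
for (x1, x2), q in px.items():
    for z1, z2 in product((0, 1), repeat=2):
        p[(x1, x2, z1, z2)] = q * (0.9 if z1 == x1 else 0.1) * (0.9 if z2 == x2 else 0.1)

def TC(idx):
    """Total correlation of the variables at the given positions."""
    return sum(H(marg(p, (i,))) for i in idx) - H(marg(p, idx))

def TC_cond(idx, cond):
    """Conditional total correlation TC(. | variables at positions `cond`)."""
    hc = H(marg(p, cond))
    return (sum(H(marg(p, (i,) + cond)) - hc for i in idx)
            - (H(marg(p, idx + cond)) - hc))

def I(a, b):
    """Mutual information between the variable groups at positions a and b."""
    return H(marg(p, a)) + H(marg(p, b)) - H(marg(p, a + b))

# Left side: TC(X; Z) - TC(Z) via entropies; right side: Eq. 7.
lhs = TC((0, 1)) - TC_cond((0, 1), (2, 3)) - TC((2, 3))
rhs = I((0,), (2, 3)) + I((1,), (2, 3)) - I((2,), (0, 1)) - I((3,), (0, 1))
assert abs(lhs - rhs) < 1e-9
```

If the encoder were not conditionally factorized given x, the two sides would generally differ.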
4.1 Variational Lower Bound for I(X_i; Z)
Barber & Agakov (2003) derived the following lower bound on mutual information using the non-negativity of the KL-divergence; i.e., D_KL( p(x_i|z) ‖ q_θ(x_i|z) ) ≥ 0 gives:

I(X_i; Z) ≥ H(X_i) + ⟨ log q_θ(x_i|z) ⟩_{p(x_i, z)},   (8)

where the angled brackets represent expectations and q_θ(x_i|z) is an arbitrary distribution parametrized by θ. We need a variational distribution because the posterior p(x_i|z) is hard to compute: the true data distribution p(x) is unknown, even though the normalization factor itself can be tractable, in contrast to VAE. A detailed comparison with VAE will be made in Sec. 5.
4.2 Variational Upper Bound for I(Z_j; X)
We again use the non-negativity of the KL-divergence, i.e., D_KL( p(z_j) ‖ r(z_j) ) ≥ 0, to obtain:

I(Z_j; X) ≤ ⟨ D_KL( p_α(z_j|x) ‖ r(z_j) ) ⟩_{p(x)},   (9)

where r(z_j) represents an arbitrary distribution. Combining Eqs. 8 and 9 with Eq. 7 yields a tractable variational lower bound on the CorEx objective:

TC(X; Z) - TC(Z) ≥ ∑_{i=1}^n [ H(X_i) + ⟨ log q_θ(x_i|z) ⟩ ] - ∑_{j=1}^m ⟨ D_KL( p_α(z_j|x) ‖ r(z_j) ) ⟩.   (10)
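In the common case where both p_α(z_j|x) and r(z_j) are Gaussian, as in standard VAE implementations, each term of the bound in Eq. 9 has a familiar closed form. A minimal sketch:

```python
import math

def kl_gaussian(mu, var, mu_r=0.0, var_r=1.0):
    """Closed-form KL( N(mu, var) || N(mu_r, var_r) ) in nats."""
    return 0.5 * (var / var_r + (mu - mu_r) ** 2 / var_r - 1.0
                  + math.log(var_r / var))

# The bound in Eq. 9 averages such terms over the data; a single term vanishes
# only when the encoder marginal already matches the reference r(z_j).
assert kl_gaussian(0.0, 1.0) == 0.0
print(kl_gaussian(1.0, 0.5))   # a shifted, narrower encoder pays a KL cost
```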
5 Connection to Variational Autoencoders
Remarkably, Eq. 10 has a form that is very similar to the lower bound introduced in variational autoencoders, except that it is decomposed over each dimension X_i and Z_j. To pursue this similarity further, we denote

q_θ(x|z) = ∏_{i=1}^n q_θ(x_i|z),   r(z) = ∏_{j=1}^m r(z_j).   (11)

Then, by rearranging the terms in Eq. 10, we obtain

TC(X; Z) - TC(Z) ≥ ∑_{i=1}^n H(X_i) + ⟨ log q_θ(x|z) ⟩_{p_α(x, z)} - ⟨ D_KL( p_α(z|x) ‖ r(z) ) ⟩_{p(x)}.   (12)

The first term in the bound, ∑_i H(X_i), is a constant and has no effect on the optimization. The remaining expression coincides with the VAE objective as long as r(z) is a standard Gaussian: the second term corresponds to the reconstruction error, and the third term is the KL-divergence term in VAE.
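As an illustration only (not the paper's model), the bound of Eq. 12 can be evaluated for a hypothetical one-dimensional Gaussian encoder and linear-Gaussian decoder, with the constant entropy term dropped:

```python
import numpy as np

rng = np.random.default_rng(0)

def elbo(x, mu, var, w, n_samples=10_000):
    """Monte Carlo estimate of E_q[log q(x|z)] - KL(q(z|x) || N(0,1)).

    Encoder q(z|x) = N(mu, var); hypothetical linear decoder q(x|z) = N(w*z, 1).
    """
    z = mu + np.sqrt(var) * rng.standard_normal(n_samples)       # z ~ q(z|x)
    recon = np.mean(-0.5 * np.log(2 * np.pi) - 0.5 * (x - w * z) ** 2)
    kl = 0.5 * (var + mu ** 2 - 1.0 - np.log(var))               # closed form
    return recon - kl

print(elbo(x=1.0, mu=0.9, var=0.2, w=1.0))
```

The same two terms (reconstruction minus KL) appear whether one starts from the generative-model view of VAE or from the CorEx bound in Eq. 12.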
The CorEx objective starts with a defined encoder p_α(z|x) and seeks a decoder q_θ(x|z) via variational approximation to the true posterior; VAE is exactly the opposite. Moreover, in VAE we need a variational approximation to the posterior because the normalization constant is intractable, whereas in CorEx the variational distribution is needed because we do not know the true data distribution p(x). It is also worth mentioning that the lower bound in Eq. 12 requires a fully factorized form of the decoder q_θ(x|z), unlike VAE, where the decoder can be flexible.² (²In this paper we also restrict the encoder distribution to have a factorized form, which follows the standard network structures in VAE, but this is not a necessary condition for achieving the lower bound in Eq. 12.)
As pointed out by Zhao et al. (2017a), if we choose a more expressive distribution family for the decoder in a VAE, such as PixelRNN/PixelCNN (Van Oord et al., 2016; Gulrajani et al., 2017), the model tends to neglect the latent codes altogether, i.e., I(X; Z) = 0. This problem does not arise in CorEx, since the objective explicitly requires Z to be informative about X. It is this informativeness term that leads the CorEx objective to a factorized decoder family. In fact, if the decoder ignored the latent code, the informativeness term would be zero, so CorEx avoids such undesirable solutions.
6 Stacking CorEx and Hierarchical VAE

Notice that if Eq. 4 does not achieve its global maximum, it may be that the latent variable Z is still not disentangled enough, i.e., TC(Z) > 0. If so, we can reapply the CorEx principle (Ver Steeg & Galstyan, 2015), learn another layer of latent variables Z^2 on top of Z^1 = Z, and redo the optimization with respect to the following objective:

TC(Z^1; Z^2) - TC(Z^2).   (13)
To generalize, suppose there are L layers of latent variables Z^1, ..., Z^L, and further denote the observed variable X = Z^0. Then one can stack each latent variable Z^l on top of Z^{l-1} and jointly optimize the summation of the corresponding objectives, as in Eqs. 4 and 13; i.e.,

max ∑_{l=1}^L [ TC(Z^{l-1}; Z^l) - TC(Z^l) ].   (14)

By simple expansion of Eq. 14 and cancellation of intermediate terms, we have:

∑_{l=1}^L [ TC(Z^{l-1}; Z^l) - TC(Z^l) ] = TC(X) - ∑_{l=1}^L TC(Z^{l-1}|Z^l) - TC(Z^L).   (15)

Furthermore, if we have TC(Z^{l-1}; Z^l) - TC(Z^l) ≥ 0 for all l, then we get:

TC(X) ≥ ∑_{l=1}^L [ TC(Z^{l-1}; Z^l) - TC(Z^l) ] ≥ ∑_{l=1}^{L-1} [ TC(Z^{l-1}; Z^l) - TC(Z^l) ].   (16)

Eq. 16 shows that stacking latent factor representations results in progressively better lower bounds for TC(X).
Enforcing independence relations at each layer and applying the variational bound of Eq. 12 to each term in Eq. 14, one obtains a layer-wise bound whose reconstruction and KL terms have the same form as deep latent Gaussian models (Rezende et al., 2014) (also known as hierarchical VAE), as long as the latent code distribution r(z^L) on the top layer is a standard normal and the decoder q(z^{l-1}|z^l) on each layer is parametrized by Gaussian distributions.
One immediate insight from this connection is that, as long as each TC(Z^{l-1}; Z^l) - TC(Z^l) is greater than zero in Eq. 14, then by expanding the definition of each term we can easily see that Z^l is more disentangled than Z^{l-1}; i.e., TC(Z^l) < TC(Z^{l-1}). Therefore, each latent layer of a hierarchical VAE will be progressively more disentangled if TC(Z^{l-1}; Z^l) - TC(Z^l) > 0 for each l. This interpretation also provides a criterion for determining the depth of a hierarchical representation: we can add layers as long as the corresponding term in the objective is positive, so that the overall lower bound on TC(X) keeps increasing.
Despite reaching the same final expression, approaching this result from an information-theoretic optimization rather than generative modeling perspective offers some advantages. First, we have much more flexibility in specifying the distribution of latent factors, as we can directly sample from this distribution using our encoder. Second, the connection with mutual information suggests intuitive modifications of our objective that increase the interpretability of results. These advantages will be explored in more depth in Sec. 6.
6.1 Disentangling Latent Codes via Hierarchical VAE / Stacking CorEx on MNIST
We train a simple hierarchical VAE / stacking CorEx model with two stochastic layers on the MNIST dataset. The graphical model is shown in Fig. 2. For each stochastic layer, we use a neural network to parametrize the encoder and decoder distributions, and we fix the remaining distribution to a standard Gaussian. We use a 784-512-512-64 fully connected network between the input and the first latent layer, and a 64-32-32-16-16-10 dense network between the first and second latent layers, with ReLU activations in both. The output of the top-layer encoder is a ten-dimensional one-hot vector; we decode based on each one-hot representation and weight the results according to their softmax probabilities.
After training the model, we find that the learned discrete variable on the top layer gives us an unsupervised classification accuracy of 85%, which is competitive with the more complex method shown in (Dilokthanakul et al., 2016).
To verify that the top layer helps disentangle the middle layer by encouraging conditional independence of the middle-layer codes given the top layer, we calculate the mutual information between the input X and each middle-layer dimension, as shown in Fig. LABEL:fig:mnist_mi. We can see that around 80% of the latent codes have very low mutual information with X. We then select the two dimensions with the most mutual information. We generate new digits by first fixing the discrete latent variable on the top layer and then sampling the middle-layer latent codes conditioned on it. We systematically vary the values of the two selected dimensions from -2 to 2 while keeping the other dimensions fixed, and visualize the results in Fig. 3.
We can see that this simple two-layer structure automatically disentangles and learns interpretable factors on MNIST (width and rotation). We attribute this behavior to stacking: the top layer disentangles the middle layer and makes the latent codes more interpretable through samples from the conditional distribution of the middle layer given the top layer.
6.2 Learning Interpretable Representations through Information Maximizing VAE / CorEx on CelebA
One important insight from recently developed methods, like InfoGAN, is that we can maximize the mutual information between a latent code and the observations to make the latent code more interpretable.
While it seems ad hoc to add an additional mutual information term to the original VAE objective, a more natural analogue arises in the CorEx setting: the formulation in Eq. 7 already contains two sets of mutual information terms. If one would like to anchor a latent variable, say Z_k, to have higher mutual information with the observation X, then one can simply modify the objective by replacing the unweighted sum with a weighted one:

∑_{i=1}^n I(X_i; Z) - ∑_{j=1}^m λ_j I(Z_j; X),   with λ_k < 1 and λ_j = 1 for j ≠ k.
This weighted objective shows that, in VAE terms, we can decrease the weight of the KL-divergence for particular latent codes to achieve mutual information maximization; we call the resulting approach AnchorVAE. Notice that there is a subtle difference between AnchorVAE and β-VAE (Higgins et al., 2017). In β-VAE, the weight of the KL-divergence term is the same for all latent codes, while in AnchorVAE only the weights of specified factors are changed to encourage high mutual information. With some prior knowledge of the underlying factors of variation, AnchorVAE encourages the model to concentrate this explanatory power in a limited number of variables.
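Under the variational bounds of Sec. 4, lowering the weight of code k corresponds to down-weighting its KL term in the VAE loss. A minimal sketch, assuming Gaussian encoder marginals and a standard normal reference; the weights, dimensions, and function name are illustrative:

```python
import numpy as np

def anchor_vae_kl(mu, logvar, kl_weights):
    """Reweighted sum of per-dimension KL( N(mu_j, var_j) || N(0, 1) ).

    Anchored codes get kl_weights[j] < 1, lowering the pressure to match
    the reference and letting them stay more informative about x.
    """
    kl_per_dim = 0.5 * (np.exp(logvar) + mu ** 2 - 1.0 - logvar)
    return np.sum(kl_weights * kl_per_dim)

# Hypothetical 6-code model anchoring the first two codes with weight 0.5.
mu = np.array([1.0, -0.5, 0.1, 0.0, 0.2, -0.1])
logvar = np.zeros(6)
weights = np.array([0.5, 0.5, 1.0, 1.0, 1.0, 1.0])
print(anchor_vae_kl(mu, logvar, weights))   # approximately 0.3425
```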
We trained AnchorVAE on the CelebA dataset with 2048 latent factors, using mean squared error as the reconstruction loss, and adopted a three-layer convolutional neural network structure. The KL-divergence weights of the first five latent variables were set to 0.5 so that they acquire higher mutual information than the other latent variables. Fig. 4 plots the mutual information after training: these five latent variables have the highest mutual information, around 3.5, demonstrating the mutual information maximization effect in AnchorVAE.
To evaluate the interpretability of the anchored variables for generating new samples, we manipulate each of the first five latent variables while keeping the other dimensions fixed. Fig. 5 summarizes the result. We observe that all five anchored latent variables learn intuitive factors of variation in the data. Interestingly, two of the anchored variables appear very similar: both vary the generated images from white to black in some sense. However, these two latent factors are actually very different: one emphasizes skin color variation while the other controls the position of the light source.
We also trained the original VAE objective with the same network structure and examined the five latent codes with the highest mutual information. Fig. 6 shows the results of manipulating the top two such latent codes. We can see that they reflect an entangled representation. The other three latent codes exhibit similar entanglement and are omitted here.
6.3 Generating Richer and More Realistic Images via CorEx
Let us revisit the variational upper bound on I(Z_j; X) in Eq. 9. In this upper bound, VAE chooses r(z_j) to be a standard normal distribution. But notice that the bound becomes tight when r(z_j) equals the aggregated encoder marginal; i.e.,

r(z_j) = p(z_j) = ⟨ p_α(z_j|x) ⟩_{p(x)}.

Therefore, after training the model, we can approximate p(z_j) by first sampling a data point x from the training data and then sampling z_j from the conditional p_α(z_j|x). Repeating this process independently across latent dimensions, we can use the factorized distribution ∏_j p(z_j) to generate new data instead of sampling from a standard normal. In this way, we obtain more realistic images, since we are sampling from a distribution that yields a tighter lower bound on the CorEx objective.
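A sketch of this sampling procedure, with random stand-ins for the trained encoder's outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-example encoder outputs for a trained model: a mean and
# a standard deviation for each of m latent dimensions (random stand-ins).
n_data, m = 1000, 8
enc_mu = 2.0 * rng.standard_normal((n_data, m))
enc_std = np.full((n_data, m), 0.3)

def sample_factorized_marginal(n_samples):
    """Draw z from prod_j p(z_j), where p(z_j) = <p(z_j|x)>_{p(x)}.

    Each dimension independently picks a random data point and samples from
    that point's encoder marginal, instead of drawing z ~ N(0, I).
    """
    idx = rng.integers(n_data, size=(n_samples, m))   # one data point per dim
    cols = np.arange(m)
    mu, std = enc_mu[idx, cols], enc_std[idx, cols]
    return mu + std * rng.standard_normal((n_samples, m))

z = sample_factorized_marginal(5)   # feed these codes to the decoder
print(z.shape)                      # (5, 8)
```

Because each dimension is resampled independently, the result follows the product of the per-dimension marginals rather than the joint aggregated posterior.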
We ran a traditional VAE on the CelebA dataset with the log-normal loss as the reconstruction error and 128 latent codes. We calculated the variance of each marginal p(z_j) and plotted the cumulative distribution of these variances in Fig. 6(a). One can see that around 20% of the latent variables actually have a variance greater than two. We also plotted variance versus mutual information in Fig. 6(b), which shows that higher variance in p(z_j) corresponds to higher mutual information I(Z_j; X). In this case, using a standard normal distribution with variance 1 for every z_j would be far from optimal for generating the data.

Fig. 8 shows images generated by sampling the latent code either from a standard normal distribution or from the factorized distribution ∏_j p(z_j). Fig. 7(b) not only tends to contain more realistic images than Fig. 7(a), but also exhibits more diversity. We attribute this improvement to the more flexible nature of our latent code distribution.
7 Related Work
The notion of disentanglement in representation learning lacks a unique characterization, but it generally refers to latent factors that are individually interpretable, amenable to simple downstream modeling or transfer learning, and invariant to nuisance variation in the data (Bengio et al., 2013). We adopt the common characterization via statistical independence (Achille & Soatto, 2017; Dinh et al., 2014), minimizing total correlation, an idea with a rich history (Barlow, 1989; Comon, 1994; Schmidhuber, 1992). However, there are numerous alternatives not rooted in independence. Higgins et al. (2017) measure disentanglement by the identifiability of changes in a single latent dimension: they vary one latent variable with the others fixed, apply the learned decoder and encoder to reconstruct the latent space, and propose that a classifier should be able to predict the varied dimension for a disentangled representation. The work of Thomas et al. (2017) and Bengio et al. (2017) is similar in spirit, identifying disentangled factors as changes in a latent embedding that can be controlled via reinforcement learning. Alternatively, if prior knowledge of the number of desired factors of variation is available, models such as InfoGAN (Chen et al., 2016) or our AnchorVAE can directly incorporate this information.
Our work provides a complementary perspective to a growing body of research connecting information theory and variational inference (Achille & Soatto, 2017; 2018; Alemi et al., 2017); much of this is motivated by the Information Bottleneck (IB) method (Tishby et al., 2000). In the unsupervised case, IB generalizes the VAE objective by adding a Lagrange multiplier β to the KL-divergence term of the ELBO to manage the trade-off between data reconstruction and model compression. This is identical to the β-VAE objective, where Higgins et al. (2017) observe that overweighting the KL-divergence term (β > 1) can encourage disentanglement, albeit at the cost of reconstruction performance. Achille & Soatto (2018) add a total correlation regularizer to the IB Lagrangian to encourage independence and propose increasing β gradually during training. Furthermore, their optimization using multiplicative noise generalizes dropout methods, which helps achieve improved robustness to nuisance variables.
These objectives match CorEx and the ELBO for β = 1, but adding a Lagrange multiplier to control the disentangling term in CorEx would not lead to β-VAE. We saw in Sec. 5 that our bound on the CorEx objective reduces to the ELBO under common factorization assumptions, so weighting the TC(Z) term by β in CorEx instead yields the ELBO augmented with an extra total correlation penalty on the latent code. This recovers the objective of Kim & Mnih (2017), who penalize the total correlation of the latent code to encourage independence, but without the more principled justification that CorEx provides.
Sønderby et al. (2016) and Zhao et al. (2017b) highlight limitations of the naive hierarchical VAE, such as representational inefficiency, and propose alternative ladder neural network structures for learning hierarchical features. However, from the CorEx perspective, we observe that the hierarchical VAE encourages more disentangled representations in the top layers, which has not been previously recognized.
8 Conclusion

Deep learning enables us to construct latent representations that reconstruct or generate samples from complex, high-dimensional distributions. Unfortunately, these powerful models do not necessarily produce representations with structures that match human intuition or goals. Subtle changes to training objectives lead to qualitatively different representations, but our understanding of this dependence remains tenuous.
Information theory has proven fruitful for understanding the competition between compression and relevance preservation in supervised learning (Shwartz-Ziv & Tishby, 2017). We explored a similar trade-off in unsupervised learning, between multivariate information maximization and disentanglement of the learned factors. Writing this objective in terms of mutual information led to two surprising connections. First, we arrived at an unsupervised information bottleneck formulation that trades off compression and reconstruction relevance. Second, we found that by making appropriate variational approximations, we could reproduce the venerable VAE objective. This new perspective on VAE enabled more flexible distributions for latent codes and motivated new generalizations of the objective to localize interpretable information in latent codes. Ultimately, this led us to a novel learning objective that generated latent factors capturing intuitive structures in image data. We hope this alternative formulation of unsupervised learning continues to provide useful insights into this challenging problem.
References

- Achille & Soatto (2017) Achille, Alessandro and Soatto, Stefano. On the emergence of invariance and disentangling in deep representations. arXiv preprint arXiv:1706.01350, 2017.
- Achille & Soatto (2018) Achille, Alessandro and Soatto, Stefano. Information dropout: Learning optimal representations through noisy computation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018.
- Alemi et al. (2017) Alemi, Alexander A, Fischer, Ian, Dillon, Joshua V, and Murphy, Kevin. Deep variational information bottleneck. International Conference on Learning Representations, 2017.
- Barber & Agakov (2003) Barber, David and Agakov, Felix. The im algorithm: a variational approach to information maximization. In Proceedings of the 16th International Conference on Neural Information Processing Systems, pp. 201–208. MIT Press, 2003.
- Barlow (1989) Barlow, Horace. Unsupervised learning. Neural computation, 1(3):295–311, 1989.
- Bengio et al. (2017) Bengio, Emmanuel, Thomas, Valentin, Pineau, Joelle, Precup, Doina, and Bengio, Yoshua. Independently controllable features. arXiv preprint arXiv:1703.07718, 2017.
- Bengio et al. (2013) Bengio, Yoshua, Courville, Aaron, and Vincent, Pascal. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8):1798–1828, 2013.
- Chen et al. (2016) Chen, Xi, Duan, Yan, Houthooft, Rein, Schulman, John, Sutskever, Ilya, and Abbeel, Pieter. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2172–2180, 2016.
- Comon (1994) Comon, Pierre. Independent component analysis, a new concept? Signal processing, 36(3):287–314, 1994.
- Cover & Thomas (2006) Cover, Thomas M and Thomas, Joy A. Elements of information theory. Wiley-Interscience, 2006.
- Dilokthanakul et al. (2016) Dilokthanakul, Nat, Mediano, Pedro AM, Garnelo, Marta, Lee, Matthew CH, Salimbeni, Hugh, Arulkumaran, Kai, and Shanahan, Murray. Deep unsupervised clustering with gaussian mixture variational autoencoders. arXiv preprint arXiv:1611.02648, 2016.
- Dinh et al. (2014) Dinh, Laurent, Krueger, David, and Bengio, Yoshua. Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.
- Goodfellow et al. (2014) Goodfellow, Ian, Pouget-Abadie, Jean, Mirza, Mehdi, Xu, Bing, Warde-Farley, David, Ozair, Sherjil, Courville, Aaron, and Bengio, Yoshua. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680, 2014.
- Gulrajani et al. (2017) Gulrajani, Ishaan, Kumar, Kundan, Ahmed, Faruk, Taiga, Adrien Ali, Visin, Francesco, Vazquez, David, and Courville, Aaron. Pixelvae: A latent variable model for natural images. International Conference on Learning Representations, 2017.
- Higgins et al. (2017) Higgins, Irina, Matthey, Loic, Pal, Arka, Burgess, Christopher, Glorot, Xavier, Botvinick, Matthew, Mohamed, Shakir, and Lerchner, Alexander. beta-vae: Learning basic visual concepts with a constrained variational framework. In International Conference on Learning Representations, 2017.
- Kim & Mnih (2017) Kim, Hyunjik and Mnih, Andriy. Disentangling by factorising. NIPS Workshop on Learning Disentangled Representations, 2017.
- Kingma & Welling (2013) Kingma, Diederik P and Welling, Max. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
- Linsker (1988) Linsker, Ralph. Self-organization in a perceptual network. Computer, 21(3):105–117, 1988.
- Rezende et al. (2014) Rezende, Danilo Jimenez, Mohamed, Shakir, and Wierstra, Daan. Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning, pp. 1278–1286, 2014.
- Saxe et al. (2018) Saxe, Andrew M, Bansal, Yamini, Dapello, Joel, Advani, Madhu, Kolchinsky, Artemy, Tracey, Brendan D, and Cox, David D. On the information bottleneck theory of deep learning. International Conference on Learning Representations, 2018.
- Schmidhuber (1992) Schmidhuber, Jürgen. Learning factorial codes by predictability minimization. Neural Computation, 4(6):863–879, 1992.
- Shwartz-Ziv & Tishby (2017) Shwartz-Ziv, Ravid and Tishby, Naftali. Opening the black box of deep neural networks via information. arXiv preprint arXiv:1703.00810, 2017.
- Sønderby et al. (2016) Sønderby, Casper Kaae, Raiko, Tapani, Maaløe, Lars, Sønderby, Søren Kaae, and Winther, Ole. Ladder variational autoencoders. In Advances in neural information processing systems, pp. 3738–3746, 2016.
- Studený & Vejnarova (1998) Studený, M and Vejnarova, J. The multiinformation function as a tool for measuring stochastic dependence. In Learning in graphical models, pp. 261–297. Springer, 1998.
- Thomas et al. (2017) Thomas, Valentin, Pondard, Jules, Bengio, Emmanuel, Sarfati, Marc, Beaudoin, Philippe, Meurs, Marie-Jean, Pineau, Joelle, Precup, Doina, and Bengio, Yoshua. Independently controllable features. arXiv preprint arXiv:1708.01289, 2017.
- Tishby et al. (2000) Tishby, Naftali, Pereira, Fernando C, and Bialek, William. The information bottleneck method. arXiv preprint physics/0004057, 2000.
- Van Oord et al. (2016) Van Oord, Aaron, Kalchbrenner, Nal, and Kavukcuoglu, Koray. Pixel recurrent neural networks. In International Conference on Machine Learning, pp. 1747–1756, 2016.
- Ver Steeg (2017) Ver Steeg, Greg. Unsupervised learning via total correlation explanation. IJCAI, 2017.
- Ver Steeg & Galstyan (2014) Ver Steeg, Greg and Galstyan, Aram. Discovering structure in high-dimensional data through correlation explanation. In Advances in Neural Information Processing Systems, pp. 577–585, 2014.
- Ver Steeg & Galstyan (2015) Ver Steeg, Greg and Galstyan, Aram. Maximally informative hierarchical representations of high-dimensional data. In Artificial Intelligence and Statistics, pp. 1004–1012, 2015.
- Ver Steeg & Galstyan (2017) Ver Steeg, Greg and Galstyan, Aram. Low complexity gaussian latent factor models and a blessing of dimensionality. arXiv preprint arXiv:1706.03353, 2017.
- Watanabe (1960) Watanabe, Satosi. Information theoretical analysis of multivariate correlation. IBM Journal of research and development, 4(1):66–82, 1960.
- Zhao et al. (2017a) Zhao, Shengjia, Song, Jiaming, and Ermon, Stefano. Infovae: Information maximizing variational autoencoders. arXiv preprint arXiv:1706.02262, 2017a.
- Zhao et al. (2017b) Zhao, Shengjia, Song, Jiaming, and Ermon, Stefano. Learning hierarchical features from generative models. arXiv preprint arXiv:1702.08396, 2017b.