Semi-Supervised Generation with Cluster-aware Generative Models

04/03/2017 · by Lars Maaløe, et al.

Deep generative models trained with large amounts of unlabelled data have proven to be powerful within the domain of unsupervised learning. Many real life data sets contain a small amount of labelled data points, that are typically disregarded when training generative models. We propose the Cluster-aware Generative Model, that uses unlabelled information to infer a latent representation that models the natural clustering of the data, and additional labelled data points to refine this clustering. The generative performances of the model significantly improve when labelled information is exploited, obtaining a log-likelihood of -79.38 nats on permutation invariant MNIST, while also achieving competitive semi-supervised classification accuracies. The model can also be trained fully unsupervised, and still improve the log-likelihood performance with respect to related methods.


1 Introduction

Variational Auto-Encoders (VAE) (Kingma, 2013; Rezende et al., 2014) and Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) have shown promising generative performance on data from complex high-dimensional distributions. Both approaches have spawned numerous related deep generative models, not only to model data points like those in a large unlabelled training data set, but also for semi-supervised classification (Kingma et al., 2014; Maaløe et al., 2016; Springenberg, 2015; Salimans et al., 2016). In semi-supervised classification a few points in the training data are endowed with class labels, and the plethora of unlabelled data is used to improve a supervised classification model.

Could a few labelled training data points in turn improve a deep generative model? This reverse perspective, semi-supervised generation, is investigated in this work. Many real-life data sets contain a small amount of labelled data, but incorporating this partial knowledge in generative models is not straightforward, because of the risk of overfitting towards the labelled data. This overfitting can be avoided by finding a good scheme for updating the parameters, like the one introduced in the models for semi-supervised classification (Kingma et al., 2014; Maaløe et al., 2016). However, optimizing a model for classification accuracy and optimizing it for generative performance are different objectives. We introduce the Cluster-aware Generative Model (CaGeM), an extension of a VAE that improves generative performance by modelling the natural clustering in the higher feature representations through a discrete variable (Bengio et al., 2013). The model can be trained fully unsupervised, but its performance can be further improved using labelled class information that helps in constructing well-defined clusters. A generative model with added labelled data may be seen as parallel to how humans rely on abstract domain knowledge in order to efficiently infer a causal model from property induction with very few labelled observations (Tenenbaum et al., 2006).

Supervised deep learning models with no stochastic units are able to learn multiple levels of feature abstraction. In VAEs, however, the addition of more stochastic layers is often accompanied by a built-in pruning effect, so that the higher layers become disconnected and therefore not exploited by the model (Burda et al., 2015a; Sønderby et al., 2016). As we will see, in CaGeM the possibility of learning a representation in the higher stochastic layers that can model clusters in the data drastically reduces this issue. The result is a model that is able to disentangle some of the factors of variation in the data and that extracts a hierarchy of features beneficial during the generation phase. Using only 100 labelled data points, we present state-of-the-art log-likelihood performance among permutation-invariant models for MNIST, and an improvement with respect to comparable models on the OMNIGLOT data set. While the main focus of this paper is semi-supervised generation, we also show that the same model achieves competitive semi-supervised classification results.

2 Variational Auto-encoders

A Variational Auto-Encoder (VAE) (Kingma, 2013; Rezende et al., 2014) defines a deep generative model for data $x$ that depends on a latent variable $z$ or a hierarchy of latent variables, e.g. $z = \{z_1, z_2\}$; see Figure 1a for a graphical representation. The joint distribution of the two-level generative model is given by

$p_\theta(x, z_1, z_2) = p_\theta(x|z_1)\, p_\theta(z_1|z_2)\, p(z_2)$ ,

where $p(z_2) = \mathcal{N}(z_2; 0, I)$ and $p_\theta(z_1|z_2) = \mathcal{N}(z_1; \mu_\theta(z_2), \sigma^2_\theta(z_2))$ are Gaussian distributions with a diagonal covariance matrix and $p_\theta(x|z_1)$ is typically a parameterized Gaussian (continuous data) or Bernoulli distribution (binary data). The probability distributions of the generative model of a VAE are parameterized using deep neural networks whose parameters are denoted by $\theta$. Training is performed by optimizing the Evidence Lower Bound (ELBO), a lower bound to the intractable log-likelihood obtained using Jensen's inequality:

$\log p_\theta(x) \geq \mathbb{E}_{q_\phi(z_1, z_2|x)}\left[\log \frac{p_\theta(x, z_1, z_2)}{q_\phi(z_1, z_2|x)}\right] = \mathcal{F}(\theta, \phi)$ .   (1)

The introduced variational distribution $q_\phi(z_1, z_2|x)$ is an approximation to the model's posterior distribution $p_\theta(z_1, z_2|x)$, defined with a bottom-up dependency structure where each variable of the model depends on the variable below in the hierarchy:

$q_\phi(z_1, z_2|x) = q_\phi(z_1|x)\, q_\phi(z_2|z_1)$ .

Similar to the generative model, the mean and diagonal covariance of both Gaussian distributions defining the inference network are parameterized with deep neural networks that depend on parameters $\phi$, see Figure 1b for a graphical representation.

We can learn the parameters $\theta$ and $\phi$ by jointly maximizing the ELBO in (1) with stochastic gradient ascent, using Monte Carlo integration to approximate the intractable expectations and computing low-variance gradients with the reparameterization trick (Kingma, 2013; Rezende et al., 2014).
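To make this training procedure concrete, the following is a minimal NumPy sketch (not the authors' Theano implementation) of the reparameterization trick and a single-sample ELBO estimate for a one-layer VAE with a Bernoulli decoder; `decode_logits` is a placeholder for the decoder network.

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I), so gradients
    can flow through mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def elbo_single_sample(x, mu, log_var, decode_logits, rng):
    """One-sample Monte Carlo estimate of the ELBO for a Bernoulli
    decoder and a standard Gaussian prior on z."""
    z = reparameterize(mu, log_var, rng)
    logits = decode_logits(z)
    # log p(x|z) for Bernoulli outputs (numerically stable form)
    log_px_z = np.sum(x * logits - np.logaddexp(0.0, logits))
    # Analytic KL[q(z|x) || N(0, I)] for diagonal Gaussians
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
    return log_px_z - kl
```

In practice the estimate is averaged over a mini-batch and maximized with a stochastic gradient method; the analytic KL term keeps the variance of the gradient estimator low.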

(a) Generative model

(b) Inference model
Figure 1: Generative model and inference model of a Variational Auto-Encoder with two stochastic layers.

Inactive stochastic units

A common problem encountered when training VAEs with bottom-up inference networks is given by the so-called inactive units in the higher layers of stochastic variables (Burda et al., 2015a; Sønderby et al., 2016). In a 2-layer model, for example, VAEs often learn $q_\phi(z_2|z_1) = p(z_2)$, i.e. the variational approximation of $z_2$ uses no information coming from the data point $x$ through $z_1$. If we rewrite the ELBO in (1) as

$\mathcal{F}(\theta, \phi) = \mathbb{E}_{q_\phi(z_1, z_2|x)}\left[\log \frac{p_\theta(x|z_1)\, p_\theta(z_1|z_2)}{q_\phi(z_1|x)}\right] - \mathbb{E}_{q_\phi(z_1|x)}\left[KL\left[q_\phi(z_2|z_1)\,\|\,p(z_2)\right]\right]$ ,

we can see that $q_\phi(z_2|z_1) = p(z_2)$ represents a local maximum of our optimization problem where the KL-divergence term is set to zero and the information flows by first sampling $\tilde{z}_1 \sim q_\phi(z_1|x)$ and then computing $p_\theta(x|\tilde{z}_1)$ (and is therefore independent from $z_2$). Several techniques have been developed in the literature to mitigate the problem of inactive units, among which we find annealing of the KL term (Bowman et al., 2015; Sønderby et al., 2016) or the use of free bits (Kingma et al., 2016).
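The two mitigation techniques just mentioned can be sketched in a few lines; the warm-up length and the free-bits threshold below are illustrative values, not taken from the paper.

```python
import numpy as np

def kl_weight(epoch, warmup_epochs=100):
    """Deterministic warm-up (KL annealing): linearly ramp the weight of
    the KL term from 0 to 1 over the first warmup_epochs epochs, so the
    optimizer is not pushed towards switching units off early on."""
    return min(1.0, epoch / warmup_epochs)

def free_bits_kl(kl_per_unit, lam=0.25):
    """Free bits: clamp each unit's KL contribution from below at lam
    nats, removing the incentive to drive a unit's KL all the way to 0."""
    return float(np.sum(np.maximum(np.asarray(kl_per_unit), lam)))
```

During training the annealed weight multiplies the KL term of the ELBO, while the free-bits variant replaces the per-unit KL terms directly.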

Using ideas from Chen et al. (2017), we notice that the inactive units in a VAE with 2 layers of stochastic units can be justified not only as a poor local maximum, but also from the modelling point of view. Chen et al. (2017) give a bits-back coding interpretation of variational inference for a generative model of the form $p_\theta(x, z) = p_\theta(x|z)\, p(z)$, with data $x$ and stochastic units $z$. The paper shows that if the decoder $p_\theta(x|z)$ is powerful enough to explain most of the structure in the data (e.g. an autoregressive decoder), then it will be convenient for the model to set $q_\phi(z|x) = p(z)$ to avoid incurring the extra optimization cost of $KL[q_\phi(z|x)\,\|\,p(z)]$. The inactive units in a 2-layer VAE can therefore be seen as caused by the flexible distribution $p_\theta(x|z_1)$ that is able to explain most of the structure in the data without using information from $z_2$. By making $q_\phi(z_2|z_1) = p(z_2)$, the model can avoid the extra cost of $KL[q_\phi(z_2|z_1)\,\|\,p(z_2)]$. A more detailed discussion on the topic can be found in Appendix A.

It is now clear that if we want a VAE to exploit the power of additional stochastic layers we need to define it so that the benefit of encoding meaningful information in $z_2$ is greater than the cost the model has to pay. As we will discuss below, we will achieve this by aiding the generative model to do representation learning.

3 Cluster-aware Generative Models

Hierarchical models parameterized by deep neural networks have the ability to represent very flexible distributions. However, in the previous section we have seen that the units in the higher stochastic layers of a VAE often become inactive. We will show that we can help the model to exploit the higher stochastic layers by explicitly encoding a useful representation, i.e. the ability to model the natural clustering of the data (Bengio et al., 2013), which will also be needed for semi-supervised generation.

We favor the flow of higher-level global information through $z_2$ by extending the generative model of a VAE with a discrete variable $y$ representing the choice of one out of $K$ different clusters in the data. The joint distribution $p_\theta(x, z_1, z_2)$ is computed by marginalizing over $y$:

$p_\theta(x, z_1, z_2) = \sum_y p_\theta(x|y, z_1)\, p_\theta(z_1|y, z_2)\, p_\theta(y|z_2)\, p(z_2)$ .

We call this model Cluster-aware Generative Model (CaGeM), see Figure 2 for a graphical representation. The introduced categorical distribution $p_\theta(y|z_2) = \text{Cat}(y; \pi_\theta(z_2))$ ($\pi_\theta(z_2)$ represents the class distribution) depends solely on $z_2$, which therefore needs to stay active for the model to be able to represent clusters in the data. We further add the dependence of $p_\theta(z_1|y, z_2)$ and $p_\theta(x|y, z_1)$ on $y$, so that they can now both also represent cluster-dependent information.

(a) Generative model

(b) Inference model
Figure 2: Generative model and inference model of a CaGeM with two stochastic layers (black and blue lines). The black lines alone represent a standard VAE.

3.1 Inference

As done for the VAE in (1), we can derive the ELBO for CaGeM by lower-bounding the log-likelihood:

$\log p_\theta(x) \geq \mathbb{E}_{q_\phi(y, z_1, z_2|x)}\left[\log \frac{p_\theta(x, y, z_1, z_2)}{q_\phi(y, z_1, z_2|x)}\right] = \mathcal{F}(\theta, \phi)$ .

We define the variational approximation over the latent variables of the model as

$q_\phi(y, z_1, z_2|x) = q_\phi(z_1|x)\, q_\phi(y|z_1)\, q_\phi(z_2|x, y, z_1)$ ,

where $q_\phi(z_1|x)$ and $q_\phi(z_2|x, y, z_1)$ are Gaussian distributions with diagonal covariance and $q_\phi(y|z_1)$ is a categorical distribution. In the inference network we then reverse all the dependencies among random variables in the generative model (the arrows in the graphical model in Figure 2). This results in a bottom-up inference network that performs a feature extraction that is fundamental for learning a good representation of the data. Starting from the data $x$ we construct higher levels of abstraction, first through the variables $z_1$ and $y$, and finally through the variable $z_2$, which includes the global information used in the generative model. In order to make the higher representation more expressive we add a skip-connection from $x$ to $z_2$, which is however not fundamental for improving the performance of the model.

With this factorization of the variational distribution $q_\phi(y, z_1, z_2|x)$, the ELBO can be written as

$\mathcal{F}(\theta, \phi) = \mathbb{E}_{q_\phi(y, z_1, z_2|x)}\left[\log \frac{p_\theta(x, y, z_1, z_2)}{q_\phi(y, z_1, z_2|x)}\right]$ .

We maximize $\mathcal{F}(\theta, \phi)$ by jointly updating, with stochastic gradient ascent, the parameters $\theta$ of the generative model and $\phi$ of the variational approximation. When computing the gradients, the summation over $y$ is performed analytically, whereas the intractable expectations over $z_1$ and $z_2$ are approximated by sampling. We use the reparameterization trick to reduce the variance of the stochastic gradients.
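The gradient computation just described can be sketched as follows: the expectation over the discrete $y$ is an exact weighted sum over the $K$ classes, while $z_1$ and $z_2$ are handled by sampling. In this illustrative NumPy fragment, the per-class log-density terms are assumed to have been precomputed from one reparameterized sample each.

```python
import numpy as np

def elbo_analytic_y(q_y, log_p_joint, log_q_z):
    """Single-sample ELBO estimate with the sum over y done analytically.

    q_y:         (K,) probabilities q(y|z1) from the inference network.
    log_p_joint: (K,) log p(x, y=k, z1, z2) evaluated at sampled z1, z2.
    log_q_z:     (K,) log q(z1, z2|x, y=k) at the same samples.
    """
    log_q_y = np.log(q_y + 1e-12)
    # E_q[log p - log q], with the categorical expectation computed exactly
    return float(np.sum(q_y * (log_p_joint - log_q_z - log_q_y)))
```

As a sanity check, when the per-class log-density terms of model and approximation cancel, the bound reduces to the entropy of the categorical distribution over $y$.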

4 Semi-Supervised Generation with CaGeM

In some applications we may have class label information for some of the data points in the training set. In the following we will show that CaGeM provides a natural way to exploit additional labelled data to improve the performance of the generative model. Notice that this semi-supervised generation approach differs from the more traditional semi-supervised classification task that uses unlabelled data to improve classification accuracies (Kingma et al., 2014; Maaløe et al., 2016; Salimans et al., 2016). In our case, in fact, it is the labelled data that supports the generative task. Nevertheless, we will see in our experiments that CaGeM also leads to competitive semi-supervised classification performance.

To exploit the class information, we first set the number of clusters $K$ equal to the number of classes $C$. We can now define two classifiers in CaGeM:

  1. In the inference network we can compute the class probabilities given the data, i.e. $q_\phi(y|x)$, by integrating out the stochastic variable $z_1$: $q_\phi(y|x) = \int q_\phi(y|z_1)\, q_\phi(z_1|x)\, \mathrm{d}z_1$

  2. Another set of class probabilities can be computed using the generative model. Given the posterior distribution $p_\theta(z_2|x)$ we have in fact

    $p_\theta(y|x) = \int p_\theta(y|z_2)\, p_\theta(z_2|x)\, \mathrm{d}z_2$ .

    The posterior over $z_2$ is intractable, but we can approximate it using the variational approximation $q_\phi(z_2|x)$, which is obtained by marginalizing out $y$ and the variable $z_1$ in the joint distribution $q_\phi(y, z_1, z_2|x)$:

    $q_\phi(z_2|x) = \sum_y \int q_\phi(z_1|x)\, q_\phi(y|z_1)\, q_\phi(z_2|x, y, z_1)\, \mathrm{d}z_1$ .

    While for the labels $y$ the summation can be carried out analytically, for the variable $z_1$ we use Monte Carlo integration. For each of the $C$ classes we will then obtain a different $z_2$ sample, with a corresponding weight given by $q_\phi(y|z_1)$. This therefore resembles a cascade of classifiers, as the class probabilities of the $p_\theta(y|z_2)$ classifier will depend on the probabilities of the $q_\phi(y|z_1)$ classifier in the inference model.
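The cascade described above can be sketched numerically; in this hypothetical fragment, one $z_2$ sample per class is assumed, and the generative classifier's outputs on those samples are stacked row-wise.

```python
import numpy as np

def cascade_class_probs(q_y, p_y_given_z2):
    """Approximate the generative class probabilities p(y|x).

    q_y:          (K,) weights q(y=k|z1) from the inference classifier.
    p_y_given_z2: (K, K) row k holds p(y|z2_k), where z2_k is the z2
                  sample drawn conditioned on class k.
    """
    probs = q_y @ p_y_given_z2      # mixture weighted by q(y|z1)
    return probs / probs.sum()      # renormalize for numerical safety
```

With an identity matrix for `p_y_given_z2`, the cascade simply reproduces the inference classifier's weights, which is the expected degenerate case.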

As our main goal is to learn representations that will lead to good generative performance, we interpret the classification of the additional labelled data as a secondary task that aids in learning a feature space that can be easily separated into clusters. We can then see this as a form of semi-supervised clustering (Basu et al., 2002), where we know that some data points belong to the same cluster and we are free to learn a data manifold that makes this possible.

The optimal features for the classification task could be very different from the representations learned for the generative task. This is why it is important not to update the parameters of the distributions over $z_1$, $z_2$ and $x$, in both the generative model and the inference model, using labelled data information. If this is not done carefully, the model could be prone to overfitting towards the labelled data. We define $\theta_y$ as the subset of $\theta$ containing the parameters in $p_\theta(y|z_2)$, and $\phi_y$ as the subset of $\phi$ containing the parameters in $q_\phi(y|z_1)$. $\theta_y$ and $\phi_y$ then represent the incoming arrows to $y$ in Figure 2. We update the parameters $\theta$ and $\phi$ jointly by maximizing the new objective

$\mathcal{I} = \sum_{x \in \mathcal{X}_u} \mathcal{F}(\theta, \phi) + \sum_{(x, y) \in \mathcal{X}_l} \left[\mathcal{F}(\theta, \phi) - \alpha \left(\mathcal{H}_p(\theta_y) + \mathcal{H}_q(\phi_y)\right)\right]$ ,

where $\mathcal{X}_u$ is the set of unlabelled training points, $\mathcal{X}_l$ is the set of labelled ones, and $\mathcal{H}_p$ and $\mathcal{H}_q$ are the standard categorical cross-entropies for the $p_\theta(y|x)$ and $q_\phi(y|x)$ classifiers respectively. Notice that we consider the cross-entropies only a function of $\theta_y$ and $\phi_y$, meaning that the gradients of the cross-entropies with respect to the parameters of the distributions over $z_1$, $z_2$ and $x$ will be 0, and will not depend on the labelled data (as needed when learning meaningful representations of the data to be used for the generative task). To match the relative magnitudes between the ELBO and the two cross-entropies we set $\alpha = \beta \frac{N_u + N_l}{N_l}$ as done in (Kingma et al., 2014; Maaløe et al., 2016), where $N_u$ and $N_l$ are the numbers of unlabelled and labelled data points and $\beta$ is a scaling constant.
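The weighting scheme for the combined objective can be sketched as follows; the value of the scaling constant `beta` is illustrative, not taken from the paper.

```python
def semi_supervised_objective(elbo_unlab, elbo_lab, ce_p, ce_q,
                              n_unlab, n_lab, beta=0.1):
    """Sketch of the objective I: summed ELBO over all data, minus the
    weighted cross-entropies of the two classifiers on labelled data.
    alpha = beta * (N_u + N_l) / N_l rescales the supervised signal so
    it is not drowned out when labelled data is scarce."""
    alpha = beta * (n_unlab + n_lab) / n_lab
    return elbo_unlab + elbo_lab - alpha * (ce_p + ce_q)
```

Because the cross-entropy gradients are restricted to the classifier parameters, in a full implementation the remaining parameters would only receive gradients from the ELBO terms.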

Figure 3: Visualizations from CaGeM-100 with a 2-dimensional latent space. The middle plot shows the latent space, from which we generate random samples (left) and class-conditional random samples (right) with a mesh grid (black bounding box). The relative placement of the samples in the scatter plot corresponds to a digit in the mesh grid.

5 Experiments

We evaluate CaGeM by computing the generative log-likelihood performance on the MNIST and OMNIGLOT (Lake et al., 2013) datasets. The model is parameterized by feed-forward neural networks ($\text{NN}$) and linear layers ($\text{Linear}$), so that, for Gaussian outputs, each collection of incoming edges to a node in Figure 2 is defined as:

$d = \text{NN}(x)$, $\quad \mu = \text{Linear}(d)$, $\quad \log \sigma^2 = \text{Linear}(d)$ .

For Bernoulli-distributed outputs we simply define a feed-forward neural network with a sigmoid activation function for the output. Between dense layers we use the rectified linear unit as non-linearity and batch-normalization (Ioffe & Szegedy, 2015). We only collect statistics for the batch-normalization during unlabelled inference. For the log-likelihood experiments we apply temperature on the KL-terms during the first 100 epochs of training (Bowman et al., 2015; Sønderby et al., 2016). The stochastic layers are parameterized by 2-layered feed-forward networks with respectively 1024 and 512 units in each layer. Training is performed using the Adam optimizer (Kingma & Ba, 2014) with an initial learning rate that is annealed during training. The experiments are implemented with Theano (Bastien et al., 2012), Lasagne (Dieleman et al., 2015) and Parmesan (a variational repository named parmesan on GitHub).
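The Gaussian parameterization above can be sketched in NumPy; the layer sizes, initialization, and the omission of batch-normalization are illustrative simplifications, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(n_in, n_out):
    """An illustrative linear layer with small random weights."""
    return {"W": 0.01 * rng.standard_normal((n_in, n_out)),
            "b": np.zeros(n_out)}

def apply_linear(layer, h):
    return h @ layer["W"] + layer["b"]

def gaussian_head(x, hidden, mu_head, log_var_head):
    """d = NN(x); mu = Linear(d); log sigma^2 = Linear(d), with a ReLU
    non-linearity between dense layers (batch-norm omitted here)."""
    d = np.maximum(0.0, apply_linear(hidden, x))
    return apply_linear(mu_head, d), apply_linear(log_var_head, d)
```

Each stochastic node in the graphical model gets its own pair of `Linear` heads on top of a shared feature extractor for its incoming edges.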

For both datasets we report unsupervised and semi-supervised permutation-invariant log-likelihood performance, and for MNIST we also report semi-supervised classification errors. The input data is dynamically binarized and the ELBO is evaluated by taking 5000 importance-weighted (IW) samples. We evaluate the performance of CaGeM with different numbers of labelled samples, referring to a model trained with N labelled samples as CaGeM-N. When used, the labelled data is randomly sampled evenly across the class distribution. All experiments across datasets are run with the same architecture.

6 Results

Table 1 shows the generative log-likelihood performance of different variants of CaGeM on the MNIST data set. We can see that the more labelled samples we use, the better the generative performance. Even though the results are not directly comparable, since CaGeM exploits a small fraction of supervised information, we find that using only 100 labelled samples (10 samples per class), the CaGeM-100 model achieves state-of-the-art log-likelihood performance on permutation-invariant MNIST with a simple 2-layered model. We also trained an ADGM-100 from Maaløe et al. (2016) (using the code supplied in the repository named auxiliary-deep-generative-models on GitHub) in order to make a fair comparison on generative log-likelihood in a semi-supervised setting, and it reached a lower performance. This indicates that models that are highly optimized for improving semi-supervised classification accuracy may be a suboptimal choice for generative modeling.

Non-Permutation Invariant
DRAW+VGP (Tran et al., 2016)
IAF VAE (Kingma et al., 2016)
VLAE (Chen et al., 2017)
Permutation Invariant
AVAE, L=2, IW=1 (Maaløe et al., 2016)
IWAE, L=2, IW=50 (Burda et al., 2015a)
LVAE, L=5, IW=10 (Sønderby et al., 2016)
VAE+VGP, L=2 (Tran et al., 2016)
DVAE (Rolfe, 2017)
CaGeM-0, L=2, IW=1, K=20
CaGeM-0, L=2, IW=1, K=10
CaGeM-20, L=2, IW=1
CaGeM-50, L=2, IW=1
CaGeM-100, L=2, IW=1
Table 1: Test log-likelihood for permutation-invariant and non-permutation-invariant MNIST. L, IW and K denote the number of stochastic layers (when translatable to the VAE), the number of importance-weighted samples used during inference, and the number of predefined clusters used.

CaGeM could further benefit from the use of non-permutation-invariant architectures suited for image data, such as the autoregressive decoders used by IAF VAE (Kingma et al., 2016) and VLAE (Chen et al., 2017). The fully unsupervised CaGeM-0 results show that by defining clusters in the higher stochastic units, we achieve better performance than the closely related IWAE (Burda et al., 2015a) and LVAE (Sønderby et al., 2016) models. It is finally interesting to see from Table 1 that CaGeM-0 performs well even when the number of clusters is different from the number of classes in the labelled data set.

In Figure 4 we show in detail how the performance of CaGeM increases as we add more labelled data points. We can also see that the ELBO tightens when adding more labelled information (cf. Sønderby et al., 2016).

Figure 4: Log-likelihood scores for CaGeM on MNIST with 0, 20, 50 and 100 labels with 1, 10 and 5000 IW samples.

The PCA plots of the $z_2$ variable of a VAE, CaGeM-0 and CaGeM-100 are shown in Figure 5. We see how CaGeMs encode clustered information into the higher stochastic layer. Since CaGeM-0 is unsupervised, it forms less class-dependent clusters compared to the semi-supervised CaGeM-100, which fits its latent space into 10 nicely separated clusters. Regardless of the labelled information added during inference, CaGeM manages to keep a large number of units active, whereas an LVAE with 2 stochastic layers activates far fewer.

The generative model in CaGeM enables both random samples, obtained by sampling the class variable $y$ and feeding it to the lower layers, and class-conditional samples, obtained by fixing $y$. Figure 3 shows the generation of MNIST digits from CaGeM-100 with a 2-dimensional $z_2$. The images are generated by applying a linearly spaced mesh grid within the latent space and performing random generations (left) and conditional generations (right). When generating samples in CaGeM, it is clear how the latent units $z_1$ and $z_2$ capture different modalities within the true data distribution, namely style and class.

Figure 5: PCA plots of the stochastic units $z_1$ and $z_2$ in a 2-layered model trained on MNIST. The colors correspond to the true labels.

Even though CaGeM was designed to optimize the semi-supervised generation task, the model can also be used for classification by using one of its classifiers. In Table 2 we show that the semi-supervised classification accuracies obtained with CaGeM are comparable to the performance of GANs (Salimans et al., 2016).

Labels
M1+M2 (Kingma et al., 2014) - - % ()
VAT (Miyato et al., 2015) - - %
CatGAN (Springenberg, 2015) - - % ()
SDGM (Maaløe et al., 2016) - - % ()
Ladder Network (Rasmus et al., 2015) - - % ()
ADGM (Maaløe et al., 2016) - - % ()
Imp. GAN (Salimans et al., 2016) % () % () % ()
CaGeM % % %
Table 2: Semi-supervised test error % benchmarks on MNIST for 20, 50, and 100 randomly chosen and evenly distributed labelled samples. Each experiment was run 3 times with different labelled subsets and the reported accuracy is the mean value.

The OMNIGLOT dataset consists of 50 different alphabets of handwritten characters, where each character is sparsely represented. In this task we use the alphabets as the cluster information, so that the latent representation should divide correspondingly. From Table 3 we see an improvement over other comparable VAE architectures (VAE, IWAE and LVAE); however, the performance is far from the ones reported for the autoregressive models (Kingma et al., 2016; Chen et al., 2017). This indicates that the alphabet information is not as strong as for a dataset like MNIST, as is also indicated by the classification accuracy of CaGeM-500. Samples from the model can be found in Figure 6.

Figure 6: Generations from CaGeM-500. (left) The input images, (middle) the reconstructions, and (right) random samples from the generative model.
VAE, L=2, IW=50 (Burda et al., 2015a)
IWAE, L=2, IW=50 (Burda et al., 2015a)
LVAE, L=5, FT, IW=10 (Sønderby et al., 2016)
RBM (Burda et al., 2015b)
DBN (Burda et al., 2015b)
DVAE (Rolfe, 2017)
CaGeM-500, L=2, IW=1
Table 3: Generative test log-likelihood for permutation invariant OMNIGLOT.

7 Discussion

As we have seen from our experiments, CaGeM offers a way to exploit the added flexibility of a second layer of stochastic units that stays active, as the modeling performance can greatly benefit from capturing the natural clustering of the data. Other recent works have presented alternative methods to mitigate the problem of inactive units when training flexible models defined by a hierarchy of stochastic layers. Burda et al. (2015a) used importance samples to improve the tightness of the ELBO, and showed that this new training objective helped in activating the units of a 2-layer VAE. Sønderby et al. (2016) trained Ladder Variational Autoencoders (LVAE) composed of up to 5 layers of stochastic units, using a top-down inference network that forces the information to flow into the higher stochastic layers. Contrary to the bottom-up inference network of CaGeM, the top-down approach used in LVAEs does not enforce a clear separation between the role of each stochastic unit, as shown by the fact that all of them encode some class information. Longer hierarchies of stochastic units unrolled in time can be found in the sequential setting (Krishnan et al., 2015; Fraccaro et al., 2016). In these applications the problem of inactive stochastic units appears when using powerful autoregressive decoders (Fraccaro et al., 2016; Chen et al., 2017), but is mitigated by the fact that new data information enters the model at each time step.

The discrete variable $y$ of CaGeM was introduced to define a better learnable representation of the data that helps in activating the higher stochastic layer. The combination of discrete and continuous variables for deep generative models was also recently explored by several authors. Maddison et al. (2016) and Jang et al. (2016) used a continuous relaxation of the discrete variables that makes it possible to efficiently train the model using stochastic backpropagation. The introduced Gumbel-Softmax variables trade off log-likelihood performance to avoid the computationally expensive marginalization over $y$. Rolfe (2017) presents a new class of probabilistic models that combines an undirected component consisting of a bipartite Boltzmann machine with binary units and a directed component with multiple layers of continuous variables.

Traditionally, semi-supervised learning applications of deep generative models such as Variational Auto-Encoders and Generative Adversarial Networks (Goodfellow et al., 2014) have shown that, whenever only a small fraction of labelled data is available, the supervised classification task can benefit from additional unlabelled data (Kingma et al., 2014; Maaløe et al., 2016; Salimans et al., 2016). In this work we consider the semi-supervised problem from a different perspective, and show that the generative task of CaGeM can benefit from additional labelled data. As a by-product of our model, we also obtain competitive semi-supervised classification results, meaning that CaGeM is able to share statistical strength between the generative and classification tasks.

When modeling natural images, the performance of CaGeM could be further improved using more powerful autoregressive decoders such as the ones in (Gulrajani et al., 2016; Chen et al., 2017). Also, an even more flexible variational approximation could be obtained using auxiliary variables (Ranganath et al., 2015; Maaløe et al., 2016) or normalizing flows (Rezende & Mohamed, 2015; Kingma et al., 2016).

8 Conclusion

In this work we have shown how to perform semi-supervised generation with CaGeM. We showed that CaGeM improves the generative log-likelihood performance over similar deep generative approaches by creating clusters for the data in its higher latent representations using unlabelled information. CaGeM also provides a natural way to refine the clusters using additional labelled information to further improve its modelling power.

Appendix A The Problem of Inactive Units

First consider a model $p_\theta(x)$ without latent units. We consider the asymptotic average properties, so we take the expectation of the log-likelihood over the (unknown) data distribution $p_{\text{data}}(x)$:

$\mathbb{E}_{p_{\text{data}}(x)}\left[\log p_\theta(x)\right] = \mathbb{E}_{p_{\text{data}}(x)}\left[\log p_{\text{data}}(x)\right] + \mathbb{E}_{p_{\text{data}}(x)}\left[\log \frac{p_\theta(x)}{p_{\text{data}}(x)}\right] = -\mathcal{H}(p_{\text{data}}) - KL\left[p_{\text{data}}(x)\,\|\,p_\theta(x)\right]$ ,

where $\mathcal{H}$ is the entropy of the distribution and $KL$ is the KL-divergence. The expected log-likelihood is then simply the baseline entropy of the data generating distribution minus the deviation between the data generating distribution and our model for the distribution.
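This entropy-plus-divergence decomposition can be verified numerically for a small discrete example:

```python
import numpy as np

p_data = np.array([0.5, 0.3, 0.2])    # "true" data-generating distribution
p_model = np.array([0.4, 0.4, 0.2])   # model distribution

expected_ll = np.sum(p_data * np.log(p_model))   # E_pdata[log p_model]
entropy = -np.sum(p_data * np.log(p_data))       # H(p_data)
kl = np.sum(p_data * np.log(p_data / p_model))   # KL(p_data || p_model)

# E_pdata[log p_model] = -H(p_data) - KL(p_data || p_model)
assert abs(expected_ll - (-entropy - kl)) < 1e-12
```

The check holds for any pair of distributions with full support, since the decomposition is an algebraic identity.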

For the latent variable model $p_\theta(x) = \int p_\theta(x|z)\, p(z)\, \mathrm{d}z$ the log-likelihood bound is:

$\log p_\theta(x) \geq \mathbb{E}_{q_\phi(z|x)}\left[\log \frac{p_\theta(x, z)}{q_\phi(z|x)}\right]$ .

We take the expectation over the data generating distribution and apply the same steps as above:

$\mathbb{E}_{p_{\text{data}}(x)}\left[\mathbb{E}_{q_\phi(z|x)}\left[\log \frac{p_\theta(x, z)}{q_\phi(z|x)}\right]\right] = -\mathcal{H}(p_{\text{data}}) - KL\left[p_{\text{data}}(x)\,\|\,p_\theta(x)\right] - \mathbb{E}_{p_{\text{data}}(x)}\left[KL\left[q_\phi(z|x)\,\|\,p_\theta(z|x)\right]\right]$ ,

where $p_\theta(z|x)$ is the (intractable) posterior of the latent variable model. This result shows that we pay an additional price (the last term) for using an approximation to the posterior.

The latent variable model can choose to ignore the latent variables, $p_\theta(x|z) = p_\theta(x)$. When this happens the expression falls back to the log-likelihood without latent variables. We can therefore get an (intractable) condition for when it is advantageous for the model to use the latent variables: the reduction in $KL\left[p_{\text{data}}(x)\,\|\,p_\theta(x)\right]$ obtained by using them must exceed the posterior-approximation penalty $\mathbb{E}_{p_{\text{data}}(x)}\left[KL\left[q_\phi(z|x)\,\|\,p_\theta(z|x)\right]\right]$. The model will use latent variables when the log-likelihood gain is so high that it can compensate for the loss we pay by using an approximate posterior distribution.

The above argument can also be used to understand why it is harder to get additional layers of latent variables to become active. For a two-layer latent variable model we use a variational distribution $q_\phi(z_1, z_2|x) = q_\phi(z_2|z_1)\, q_\phi(z_1|x)$ and decompose the log-likelihood bound using $p_\theta(x, z_1, z_2) = p_\theta(x|z_1)\, p_\theta(z_1|z_2)\, p(z_2)$:

$\mathbb{E}_{p_{\text{data}}(x)}\left[\mathbb{E}_{q_\phi(z_1, z_2|x)}\left[\log \frac{p_\theta(x, z_1, z_2)}{q_\phi(z_1, z_2|x)}\right]\right] = -\mathcal{H}(p_{\text{data}}) - KL\left[p_{\text{data}}(x)\,\|\,p_\theta(x)\right] - \mathbb{E}_{p_{\text{data}}(x)}\left[KL\left[q_\phi(z_1, z_2|x)\,\|\,p_\theta(z_1, z_2|x)\right]\right]$ .

Again this expression falls back to the one-layer model when $q_\phi(z_2|z_1) = p(z_2)$. So whether to use the second layer of stochastic units will depend upon the potential diminishing return in terms of log-likelihood relative to the extra KL-cost from the approximate posterior.

Acknowledgements

We thank Ulrich Paquet for fruitful feedback. The research was supported by the Danish Innovation Foundation and by NVIDIA Corporation with the donation of TITAN X GPUs. Marco Fraccaro is supported by Microsoft Research through its PhD Scholarship Programme.

References