1 Introduction
Variational Auto-Encoders (VAE) (Kingma, 2013; Rezende et al., 2014) and Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) have shown promising generative performance on data from complex high-dimensional distributions. Both approaches have spawned numerous related deep generative models, not only to model data points like those in a large unlabelled training data set, but also for semi-supervised classification (Kingma et al., 2014; Maaløe et al., 2016; Springenberg, 2015; Salimans et al., 2016). In semi-supervised classification a few points in the training data are endowed with class labels, and the plethora of unlabelled data helps improve a supervised classification model.
Could a few labelled training data points in turn improve a deep generative model? This reverse perspective, semi-supervised generation, is investigated in this work. Many real-life data sets contain a small amount of labelled data, but incorporating this partial knowledge in generative models is not straightforward, because of the risk of overfitting towards the labelled data. This overfitting can be avoided by finding a good scheme for updating the parameters, like the ones introduced in the models for semi-supervised classification (Kingma et al., 2014; Maaløe et al., 2016). However, optimizing a model towards optimal classification accuracy differs from optimizing it towards optimal generative performance. We introduce the Cluster-aware Generative Model (CaGeM), an extension of a VAE that improves generative performance by being able to model the natural clustering in the higher feature representations through a discrete variable (Bengio et al., 2013). The model can be trained fully unsupervised, but its performance can be further improved using labelled class information that helps in constructing well-defined clusters. A generative model with added labelled data information may be seen as parallel to how humans rely on abstract domain knowledge in order to efficiently infer a causal model from property induction with very few labelled observations (Tenenbaum et al., 2006).
Supervised deep learning models with no stochastic units are able to learn multiple levels of feature abstraction. In VAEs, however, the addition of more stochastic layers is often accompanied by a built-in pruning effect, so that the higher layers become disconnected and are therefore not exploited by the model (Burda et al., 2015a; Sønderby et al., 2016). As we will see, in CaGeM the possibility of learning a representation in the higher stochastic layers that can model clusters in the data drastically reduces this issue. This results in a model that is able to disentangle some of the factors of variation in the data and that extracts a hierarchy of features beneficial during the generation phase. Using only 100 labelled data points, we present state-of-the-art log-likelihood performance for permutation-invariant models on MNIST, and an improvement with respect to comparable models on the OMNIGLOT data set. While the main focus of this paper is semi-supervised generation, we also show that the same model is able to achieve competitive semi-supervised classification results.
2 Variational Autoencoders
A Variational Auto-Encoder (VAE) (Kingma, 2013; Rezende et al., 2014) defines a deep generative model for data $x$ that depends on a latent variable $z$ or a hierarchy of latent variables, e.g. $z = \{z_1, z_2\}$; see Figure 1a for a graphical representation. The joint distribution of the two-level generative model is given by

$p_\theta(x, z_1, z_2) = p_\theta(x|z_1)\, p_\theta(z_1|z_2)\, p(z_2)$

where $p(z_2) = \mathcal{N}(z_2; 0, I)$ and $p_\theta(z_1|z_2) = \mathcal{N}(z_1; \mu_\theta(z_2), \sigma^2_\theta(z_2))$ are Gaussian distributions with a diagonal covariance matrix and $p_\theta(x|z_1)$ is typically a parameterized Gaussian (continuous data) or Bernoulli distribution (binary data). The probability distributions of the generative model of a VAE are parameterized using deep neural networks whose parameters are denoted by $\theta$. Training is performed by optimizing the Evidence Lower Bound (ELBO), a lower bound to the intractable log-likelihood obtained using Jensen's inequality:

$\log p_\theta(x) = \log \int\!\!\int p_\theta(x, z_1, z_2)\, \mathrm{d}z_1 \mathrm{d}z_2 \geq \mathbb{E}_{q_\phi(z_1, z_2|x)}\left[\log \frac{p_\theta(x, z_1, z_2)}{q_\phi(z_1, z_2|x)}\right] = \mathcal{F}(\theta, \phi) \quad (1)$
The introduced variational distribution $q_\phi(z_1, z_2|x)$ is an approximation to the model's posterior distribution $p_\theta(z_1, z_2|x)$, defined with a bottom-up dependency structure where each variable of the model depends on the variable below in the hierarchy:

$q_\phi(z_1, z_2|x) = q_\phi(z_1|x)\, q_\phi(z_2|z_1)$
Similarly to the generative model, the mean and diagonal covariance of both Gaussian distributions defining the inference network $q_\phi$ are parameterized with deep neural networks that depend on parameters $\phi$; see Figure 1b for a graphical representation.
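As a concrete illustration, the one-sample ELBO estimator with reparameterized sampling can be sketched as follows. This is a toy stand-in, not the architecture used here: the encoder and decoder networks are replaced by fixed one-dimensional affine maps, chosen purely for illustration.

```python
import math
import random

def log_normal(x, mu, sigma):
    # log N(x; mu, sigma^2)
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def elbo_one_sample(x, rng):
    # Inference network (bottom-up): z1 ~ q(z1|x), then z2 ~ q(z2|z1),
    # both drawn with the reparameterization z = mu + sigma * eps.
    mu1, s1 = 0.5 * x, 1.0            # stand-in for the encoder network of z1
    z1 = mu1 + s1 * rng.gauss(0, 1)
    mu2, s2 = 0.5 * z1, 1.0           # stand-in for the encoder network of z2
    z2 = mu2 + s2 * rng.gauss(0, 1)
    # Generative model: p(z2) = N(0, 1), p(z1|z2), p(x|z1).
    log_p = (log_normal(z2, 0.0, 1.0)
             + log_normal(z1, 0.5 * z2, 1.0)
             + log_normal(x, z1, 1.0))
    log_q = log_normal(z1, mu1, s1) + log_normal(z2, mu2, s2)
    return log_p - log_q              # one-sample estimate of the ELBO

rng = random.Random(0)
estimate = sum(elbo_one_sample(1.3, rng) for _ in range(5000)) / 5000
print(estimate)  # a stochastic lower bound on log p(x = 1.3)
```

Averaging many one-sample estimates approximates the ELBO; in practice a single sample per data point suffices for stochastic gradient ascent.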
We can learn the parameters $\theta$ and $\phi$ by jointly maximizing the ELBO in (1) with stochastic gradient ascent, using Monte Carlo integration to approximate the intractable expectations and computing low-variance gradients with the reparameterization trick (Kingma, 2013; Rezende et al., 2014).

Inactive stochastic units
A common problem encountered when training VAEs with bottom-up inference networks is given by the so-called inactive units in the higher layers of stochastic variables (Burda et al., 2015a; Sønderby et al., 2016). In a 2-layer model, for example, VAEs often learn $q_\phi(z_2|z_1) = p(z_2)$, i.e. the variational approximation of $z_2$ uses no information coming from the data point $x$ through $z_1$. If we rewrite the ELBO in (1) as

$\mathcal{F}(\theta, \phi) = \mathbb{E}_{q_\phi}\left[\log \frac{p_\theta(x|z_1)\, p_\theta(z_1|z_2)}{q_\phi(z_1|x)}\right] - \mathbb{E}_{q_\phi(z_1|x)}\left[ KL\left[ q_\phi(z_2|z_1) \,\|\, p(z_2) \right] \right]$

we can see that $q_\phi(z_2|z_1) = p(z_2)$ represents a local maximum of our optimization problem where the KL-divergence term is set to zero and the information flows by first sampling $z_2$ in $q_\phi(z_2|z_1) = p(z_2)$ and then computing $p_\theta(z_1|z_2)$ ($z_2$ is therefore independent from $x$). Several techniques have been developed in the literature to mitigate the problem of inactive units, among which we find annealing of the KL term (Bowman et al., 2015; Sønderby et al., 2016) or the use of free bits (Kingma et al., 2016).
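KL annealing (the "warm-up" mentioned above) can be sketched as follows; the linear schedule and the 100-epoch warm-up period are illustrative choices, not a prescription from this paper.

```python
def kl_weight(epoch, warmup_epochs=100):
    # Linearly anneal the weight on the KL term from 0 to 1 during the
    # first `warmup_epochs` epochs, then keep it fixed at 1.
    return min(1.0, epoch / warmup_epochs)

def annealed_elbo(reconstruction, kl, epoch):
    # Early in training the objective focuses on reconstruction, making it
    # less likely that the optimizer switches off the stochastic layers.
    return reconstruction - kl_weight(epoch) * kl

print(kl_weight(0), kl_weight(50), kl_weight(200))  # 0.0 0.5 1.0
```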
Using ideas from Chen et al. (2017), we notice that the inactive units in a VAE with 2 layers of stochastic units can be justified not only as a poor local maximum, but also from the modelling point of view. Chen et al. (2017) give a bits-back coding interpretation of variational inference for a generative model of the form $p_\theta(x, z) = p_\theta(x|z)\, p(z)$, with data $x$ and stochastic units $z$. The paper shows that if the decoder $p_\theta(x|z)$ is powerful enough to explain most of the structure in the data (e.g. an autoregressive decoder), then it will be convenient for the model to set $q_\phi(z|x) = p(z)$, so as not to incur the extra optimization cost of $KL[q_\phi(z|x) \,\|\, p_\theta(z|x)]$. The inactive units in a 2-layer VAE can therefore be seen as caused by the flexible distribution $p_\theta(z_1|z_2)$ that is able to explain most of the structure in the data without using information from $z_2$. By making $q_\phi(z_2|z_1) = p(z_2)$, the model can avoid the extra cost of approximating the posterior over $z_2$. A more detailed discussion on the topic can be found in Appendix A.
It is now clear that if we want a VAE to exploit the power of additional stochastic layers, we need to define it so that the benefits of encoding meaningful information in $z_2$ are greater than the cost that the model has to pay. As we will discuss below, we achieve this by aiding the generative model to do representation learning.
3 Cluster-aware Generative Models
Hierarchical models parameterized by deep neural networks have the ability to represent very flexible distributions. However, in the previous section we have seen that the units in the higher stochastic layers of a VAE often become inactive. We will show that we can help the model exploit the higher stochastic layers by explicitly encoding a useful representation, i.e. the ability to model the natural clustering of the data (Bengio et al., 2013), which will also be needed for semi-supervised generation.
We favor the flow of higher-level global information through $z_2$ by extending the generative model of a VAE with a discrete variable $y$ representing the choice of one out of $K$ different clusters in the data. The joint distribution $p_\theta(x, z_1, z_2)$ is computed by marginalizing over $y$:

$p_\theta(x, z_1, z_2) = \sum_y p_\theta(x|z_1, y)\, p_\theta(z_1|z_2, y)\, p_\theta(y|z_2)\, p(z_2)$
We call this model Cluster-aware Generative Model (CaGeM); see Figure 2 for a graphical representation. The introduced categorical distribution $p_\theta(y|z_2) = \mathrm{Cat}(y; \pi_\theta(z_2))$ ($\pi_\theta$ represents the class distribution) depends solely on $z_2$, which therefore needs to stay active for the model to be able to represent clusters in the data. We further add the dependence of $x$ and $z_1$ on $y$, so that they can now both also represent cluster-dependent information.
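The marginalization over the discrete variable amounts to a log-sum-exp over $K$ per-cluster log-joint terms; a minimal sketch, with placeholder values standing in for the actual network outputs:

```python
import math

def log_sum_exp(vals):
    # Numerically stable log(sum(exp(v))), used to marginalize the
    # discrete cluster variable out of the joint distribution.
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals))

# Hypothetical per-cluster terms log[p(x|z1,y) p(z1|z2,y) p(y|z2) p(z2)]
log_joint_per_cluster = [-12.1, -9.8, -15.3]
log_joint = log_sum_exp(log_joint_per_cluster)
print(log_joint)  # slightly above the largest per-cluster term
```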
3.1 Inference
As done for the VAE in (1), we can derive the ELBO for CaGeM by lower-bounding the marginal log-likelihood

$\log p_\theta(x) = \log \sum_y \int\!\!\int p_\theta(x, y, z_1, z_2)\, \mathrm{d}z_1 \mathrm{d}z_2$
We define the variational approximation $q_\phi(y, z_1, z_2|x)$ over the latent variables of the model as

$q_\phi(y, z_1, z_2|x) = q_\phi(y|z_2, x)\, q_\phi(z_2|z_1, x)\, q_\phi(z_1|x)$

where the Gaussian factors $q_\phi(z_1|x)$ and $q_\phi(z_2|z_1, x)$ are defined as in the VAE, and $q_\phi(y|z_2, x)$ is a categorical distribution parameterized by a deep neural network.
In the inference network we then reverse all the dependencies among random variables in the generative model (the arrows in the graphical model in Figure 2). This results in a bottom-up inference network that performs a feature extraction that is fundamental for learning a good representation of the data. Starting from the data $x$ we construct higher levels of abstraction, first through the variables $z_1$ and $z_2$, and finally through the variable $y$, which includes the global information used in the generative model. In order to make the higher representation more expressive we add a skip-connection from $x$ to $z_2$, which is however not fundamental to improve the performance of the model.

With this factorization of the variational distribution $q_\phi(y, z_1, z_2|x)$, the ELBO can be written as

$\mathcal{F}(\theta, \phi) = \mathbb{E}_{q_\phi(y, z_1, z_2|x)}\left[\log \frac{p_\theta(x, y, z_1, z_2)}{q_\phi(y, z_1, z_2|x)}\right]$
We maximize $\mathcal{F}(\theta, \phi)$ by jointly updating, with stochastic gradient ascent, the parameters $\theta$ of the generative model and $\phi$ of the variational approximation. When computing the gradients, the summation over $y$ is performed analytically, whereas the intractable expectations over $z_1$ and $z_2$ are approximated by sampling. We use the reparameterization trick to reduce the variance of the stochastic gradients.
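The analytic summation over the discrete variable can be sketched as follows; the class probabilities and per-class terms below are illustrative placeholders for the network outputs.

```python
def expectation_over_y(q_y, f_per_class):
    # E_{q(y|.)}[f(y)] = sum_y q(y) f(y): exact, no sampling over y needed.
    return sum(q * f for q, f in zip(q_y, f_per_class))

q_y = [0.7, 0.2, 0.1]              # variational class probabilities, sum to 1
f = [-10.0, -12.0, -20.0]          # per-class ELBO terms (placeholders)
print(expectation_over_y(q_y, f))  # -11.4
```

Only the continuous variables then require Monte Carlo samples, which keeps the gradient estimator low-variance with respect to the discrete choice.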
4 Semi-Supervised Generation with CaGeM
In some applications we may have class label information for some of the data points in the training set. In the following we will show that CaGeM provides a natural way to exploit additional labelled data to improve the performance of the generative model. Notice that this semi-supervised generation approach differs from the more traditional semi-supervised classification task that uses unlabelled data to improve classification accuracies (Kingma et al., 2014; Maaløe et al., 2016; Salimans et al., 2016). In our case, in fact, it is the labelled data that supports the generative task. Nevertheless, we will see in our experiments that CaGeM also leads to competitive semi-supervised classification performance.
To exploit the class information, we first set the number of clusters $K$ equal to the number of classes. We can now define two classifiers in CaGeM: one in the inference network, $q_\phi(y|x)$, and one in the generative model, $p_\theta(y|x)$.
In the inference network we can compute the class probabilities given the data, i.e. $q_\phi(y|x)$, by integrating out the stochastic variables $z_1$ and $z_2$ from $q_\phi(y, z_1, z_2|x)$:

$q_\phi(y|x) = \int\!\!\int q_\phi(y|z_2, x)\, q_\phi(z_2|z_1, x)\, q_\phi(z_1|x)\, \mathrm{d}z_1 \mathrm{d}z_2$
Another set of class probabilities can be computed using the generative model. Given the posterior distribution $p_\theta(z_2|x)$ we have in fact

$p_\theta(y|x) = \int p_\theta(y|z_2)\, p_\theta(z_2|x)\, \mathrm{d}z_2$

The posterior over $z_2$ is intractable, but we can approximate it using the variational approximation $q_\phi(z_2|x)$, which is obtained by marginalizing out the class $y$ and the variable $z_1$ in the joint distribution $q_\phi(y, z_1, z_2|x)$:

$q_\phi(z_2|x) = \sum_y \int q_\phi(y, z_1, z_2|x)\, \mathrm{d}z_1$
While the summation over the labels $y$ can be carried out analytically, for the variables $z_1$ and $z_2$ we use Monte Carlo integration. For each of the $K$ classes we will then obtain a different sample, with a corresponding weight given by the class probabilities of the inference network. This therefore resembles a cascade of classifiers, as the class probabilities of the $p_\theta(y|x)$ classifier will depend on the probabilities of the $q_\phi(y|x)$ classifier in the inference model.
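A sketch of this cascade; all probabilities below are illustrative placeholders (in the model they would come from the inference network and from the generative classifier evaluated at the per-class samples):

```python
def normalize(ws):
    s = sum(ws)
    return [w / s for w in ws]

def generative_class_probs(q_y_given_x, p_y_given_z2_samples):
    # Weight the generative classifier's output for the sample drawn under
    # class k by the inference network's class probability q(y = k | x).
    K = len(q_y_given_x)
    probs = [0.0] * K
    for k, w in enumerate(q_y_given_x):
        for y in range(K):
            probs[y] += w * p_y_given_z2_samples[k][y]
    return normalize(probs)

q_y = [0.6, 0.3, 0.1]             # q(y|x) from the inference network
p_y_z2 = [[0.8, 0.1, 0.1],        # p(y|z2), z2 sampled for class 0
          [0.2, 0.7, 0.1],        # ... for class 1
          [0.1, 0.2, 0.7]]        # ... for class 2
print(generative_class_probs(q_y, p_y_z2))  # ≈ [0.55, 0.29, 0.16]
```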
As our main goal is to learn representations that will lead to good generative performance, we interpret the classification of the additional labelled data as a secondary task that aids in learning a feature space that can be easily separated into clusters. We can then see this as a form of semi-supervised clustering (Basu et al., 2002), where we know that some data points belong to the same cluster and we are free to learn a data manifold that makes this possible.
The optimal features for the classification task could be very different from the representations learned for the generative task. This is why it is important not to update the parameters of the distributions over $z_1$, $z_2$ and $x$, in both the generative model and the inference model, using labelled data information. If this is not done carefully, the model could be prone to overfit towards the labelled data. We define $\theta_y$ as the subset of $\theta$ containing the parameters in $p_\theta(y|z_2)$, and $\phi_y$ as the subset of $\phi$ containing the parameters in $q_\phi(y|z_2, x)$; $\theta_y$ and $\phi_y$ then represent the incoming arrows to $y$ in Figure 2. We update the parameters $\theta$ and $\phi$ jointly by maximizing the new objective

$\mathcal{I}(\theta, \phi) = \sum_{x \in \mathcal{U}} \mathcal{F}(\theta, \phi) - \beta \sum_{(x, y) \in \mathcal{L}} \left( \mathcal{H}_p(\theta_y) + \mathcal{H}_q(\phi_y) \right)$
where $\mathcal{U}$ is the set of unlabelled training points, $\mathcal{L}$ is the set of labelled ones, and $\mathcal{H}_p$ and $\mathcal{H}_q$ are the standard categorical cross-entropies for the $p_\theta(y|x)$ and $q_\phi(y|x)$ classifiers respectively. Notice that we consider the cross-entropies only a function of $\theta_y$ and $\phi_y$, meaning that the gradients of the cross-entropies with respect to the parameters of the distributions over $z_1$, $z_2$ and $x$ will be zero, and will not depend on the labelled data (as needed when learning meaningful representations of the data to be used for the generative task). To match the relative magnitudes between the ELBO and the two cross-entropies we set $\beta = \alpha \frac{N_u + N_l}{N_l}$ as done in (Kingma et al., 2014; Maaløe et al., 2016), where $N_u$ and $N_l$ are the numbers of unlabelled and labelled data points, and $\alpha$ is a scaling constant.
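A sketch of this objective; the counts, the scaling constant, and all probabilities below are illustrative placeholders:

```python
import math

def cross_entropy(true_class, probs):
    # Standard categorical cross-entropy for a single labelled point.
    return -math.log(probs[true_class])

def semi_supervised_objective(elbo_terms, labelled, alpha=0.1,
                              n_unlabelled=59900, n_labelled=100):
    # beta scales the cross-entropies to match the magnitude of the ELBO.
    beta = alpha * (n_unlabelled + n_labelled) / n_labelled
    ce = sum(cross_entropy(y, p_probs) + cross_entropy(y, q_probs)
             for y, p_probs, q_probs in labelled)
    return sum(elbo_terms) - beta * ce

elbos = [-95.0, -102.0]                        # per-point ELBOs (placeholders)
labelled = [(0, [0.9, 0.1], [0.8, 0.2])]       # (label, p(y|x), q(y|x))
print(semi_supervised_objective(elbos, labelled))
```

Because the cross-entropies only touch the classifier parameters, the labelled data steers the clustering without directly updating the generative distributions.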
5 Experiments
We evaluate CaGeM by computing the generative log-likelihood performance on the MNIST and OMNIGLOT (Lake et al., 2013) datasets. The model is parameterized by feed-forward neural networks (MLP) and linear layers (Linear), so that, for Gaussian outputs, each collection of incoming edges to a node in Figure 2 is defined as:

$d = \mathrm{MLP}(\cdot), \quad \mu = \mathrm{Linear}(d), \quad \log \sigma^2 = \mathrm{Linear}(d)$

For Bernoulli-distributed outputs we simply define a feed-forward neural network with a sigmoid activation function for the output. Between dense layers we use the rectified linear unit as non-linearity and batch normalization
(Ioffe & Szegedy, 2015). We only collect statistics for batch normalization during unlabelled inference. For the log-likelihood experiments we apply temperature on the KL terms during the first 100 epochs of training (Bowman et al., 2015; Sønderby et al., 2016). The stochastic layers are parameterized by 2-layered feed-forward networks with respectively 1024 and 512 units in each layer. Training is performed using the Adam optimizer (Kingma & Ba, 2014) with an initial learning rate that is annealed during training. The experiments are implemented with Theano (Bastien et al., 2012), Lasagne (Dieleman et al., 2015) and Parmesan (a variational repository named parmesan on GitHub).

For both datasets we report unsupervised and semi-supervised permutation-invariant log-likelihood performance, and for MNIST we also report semi-supervised classification errors. The input data is dynamically binarized and the ELBO is evaluated by taking 5000 importance-weighted (IW) samples. We evaluate the performance of CaGeM with different numbers of labelled samples, referred to as CaGeM-#labels. When used, the labelled data is randomly sampled evenly across the class distribution. All experiments across datasets are run with the same architecture.

6 Results
Table 1 shows the generative log-likelihood performance of different variants of CaGeM on the MNIST data set. We can see that the more labelled samples we use, the better the generative performance will be. Even though the results are not directly comparable, since CaGeM exploits a small fraction of supervised information, we find that using only 100 labelled samples (10 samples per class), the CaGeM-100 model achieves state-of-the-art log-likelihood performance on permutation-invariant MNIST with a simple 2-layered model. We also trained an ADGM-100 from Maaløe et al. (2016) (using the code supplied in the repository named auxiliary-deep-generative-models on GitHub) in order to make a fair comparison on generative log-likelihood in a semi-supervised setting, and reached a performance of . This indicates that models that are highly optimized for improving semi-supervised classification accuracy may be a suboptimal choice for generative modeling.
NonPermutation Invariant  

DRAW+VGP (Tran et al., 2016)  
IAF VAE (Kingma et al., 2016)  
VLAE (Chen et al., 2017)  
Permutation Invariant  
AVAE, L=2, IW=1 (Maaløe et al., 2016)  
IWAE, L=2, IW=50 (Burda et al., 2015a)  
LVAE, L=5, IW=10 (Sønderby et al., 2016)  
VAE+VGP, L=2 (Tran et al., 2016)  
DVAE (Rolfe, 2017)  
CaGeM0, L=2, IW=1, K=20  
CaGeM0, L=2, IW=1, K=10  
CaGeM20, L=2, IW=1  
CaGeM50, L=2, IW=1  
CaGeM100, L=2, IW=1 
CaGeM could further benefit from the use of non-permutation-invariant architectures suited for image data, such as the autoregressive decoders used by IAF VAE (Kingma et al., 2016) and VLAE (Chen et al., 2017). The fully unsupervised CaGeM-0 results show that by defining clusters in the higher stochastic units, we achieve better performance than the closely related IWAE (Burda et al., 2015a) and LVAE (Sønderby et al., 2016) models. It is finally interesting to see from Table 1 that CaGeM-0 performs well even when the number of clusters is different from the number of classes in the labelled data set.
In Figure 4 we show in detail how the performance of CaGeM increases as we add more labelled data points. We can also see that the ELBO tightens when adding more labelled information, as compared to and (Sønderby et al., 2016).
The PCA plots of the $z_2$ variable of a VAE, CaGeM-0 and CaGeM-100 are shown in Figure 5. We see how CaGeMs encode clustered information into the higher stochastic layer. Since CaGeM-0 is unsupervised, it forms less class-dependent clusters compared to the semi-supervised CaGeM-100, which fits its latent space into 10 nicely separated clusters. Regardless of the labelled information added during inference, CaGeM manages to activate a large number of units, as for CaGeM we obtain , while a LVAE with 2 stochastic layers obtains .
The generative model in CaGeM enables both random samples, by sampling the class variable $y$ and feeding it to the lower layers, and class-conditional samples, by fixing $y$. Figure 3 shows the generation of MNIST digits from CaGeM-100. The images are generated by applying a linearly spaced mesh grid within the latent space and performing random generations (left) and conditional generations (right). When generating samples in CaGeM, it is clear how the latent variables and the class variable $y$ capture different modalities within the true data distribution, namely style and class.
Although CaGeM was designed to optimize the semi-supervised generation task, the model can also be used for classification by using its classifier. In Table 2 we show that the semi-supervised classification accuracies obtained with CaGeM are comparable to the performance of GANs (Salimans et al., 2016).
Labels  

M1+M2 (Kingma et al., 2014)      % () 
VAT (Miyato et al., 2015)      % 
CatGAN (Springenberg, 2015)      % () 
SDGM (Maaløe et al., 2016)      % () 
Ladder Network (Rasmus et al., 2015)      % () 
ADGM (Maaløe et al., 2016)      % () 
Imp. GAN (Salimans et al., 2016)  % ()  % ()  % () 
CaGeM  %  %  % 
The OMNIGLOT dataset consists of 50 different alphabets of handwritten characters, where each character is sparsely represented. In this task we use the alphabets as the cluster information, so that the representation should divide correspondingly. From Table 3 we see an improvement over other comparable VAE architectures (VAE, IWAE and LVAE); however, the performance is far from the ones reported for the autoregressive models (Kingma et al., 2016; Chen et al., 2017). This indicates that the alphabet information is not as strong as for a dataset like MNIST. This is also indicated by the accuracy of CaGeM-500, reaching a performance of . Samples from the model can be found in Figure 6.
7 Discussion
As we have seen from our experiments, CaGeM offers a way to exploit the added flexibility of a second layer of stochastic units, which stays active as the modelling performance can greatly benefit from capturing the natural clustering of the data. Other recent works have presented alternative methods to mitigate the problem of inactive units when training flexible models defined by a hierarchy of stochastic layers. Burda et al. (2015a) used importance samples to improve the tightness of the ELBO, and showed that this new training objective helped in activating the units of a 2-layer VAE. Sønderby et al. (2016) trained Ladder Variational Autoencoders (LVAE) composed of up to 5 layers of stochastic units, using a top-down inference network that forces the information to flow in the higher stochastic layers. Contrary to the bottom-up inference network of CaGeM, the top-down approach used in LVAEs does not enforce a clear separation between the roles of the stochastic units, as proven by the fact that all of them encode some class information. Longer hierarchies of stochastic units unrolled in time can be found in the sequential setting (Krishnan et al., 2015; Fraccaro et al., 2016). In these applications the problem of inactive stochastic units appears when using powerful autoregressive decoders (Fraccaro et al., 2016; Chen et al., 2017), but is mitigated by the fact that new data information enters the model at each time step.

The discrete variable $y$ of CaGeM was introduced to be able to define a better learnable representation of the data, which helps in activating the higher stochastic layer. The combination of discrete and continuous variables for deep generative models was also recently explored by several authors. Maddison et al. (2016) and Jang et al. (2016) used a continuous relaxation of the discrete variables that makes it possible to efficiently train the model using stochastic backpropagation. The introduced Gumbel-Softmax variables make it possible to sacrifice log-likelihood performance in order to avoid the computationally expensive integration over $y$. Rolfe (2017) presents a new class of probabilistic models that combines an undirected component consisting of a bipartite Boltzmann machine with binary units and a directed component with multiple layers of continuous variables.
Traditionally, semi-supervised learning applications of deep generative models such as Variational Autoencoders and Generative Adversarial Networks (Goodfellow et al., 2014) have shown that, whenever only a small fraction of labelled data is available, the supervised classification task can benefit from additional unlabelled data (Kingma et al., 2014; Maaløe et al., 2016; Salimans et al., 2016). In this work we consider the semi-supervised problem from a different perspective, and show that the generative task of CaGeM can benefit from additional labelled data. As a by-product of our model, however, we also obtain competitive semi-supervised classification results, meaning that CaGeM is able to share statistical strength between the generative and classification tasks.

When modeling natural images, the performance of CaGeM could be further improved using more powerful autoregressive decoders such as the ones in (Gulrajani et al., 2016; Chen et al., 2017). Also, an even more flexible variational approximation could be obtained using auxiliary variables (Ranganath et al., 2015; Maaløe et al., 2016) or normalizing flows (Rezende & Mohamed, 2015; Kingma et al., 2016).
8 Conclusion
In this work we have shown how to perform semi-supervised generation with CaGeM. We showed that CaGeM improves the generative log-likelihood performance over similar deep generative approaches by creating clusters for the data in its higher latent representations using unlabelled information. CaGeM also provides a natural way to refine the clusters using additional labelled information to further improve its modelling power.
Appendix A The Problem of Inactive Units
First consider a model without latent units. We consider the asymptotic average properties, so we take the expectation of the log-likelihood over the (unknown) data distribution $p_{\mathrm{data}}(x)$:

$\mathbb{E}_{p_{\mathrm{data}}(x)}\left[\log p_\theta(x)\right] = -\mathcal{H}(p_{\mathrm{data}}(x)) - KL\left[p_{\mathrm{data}}(x) \,\|\, p_\theta(x)\right]$

where $\mathcal{H}(\cdot)$ is the entropy of the distribution and $KL[\cdot\|\cdot]$ is the KL-divergence. The expected log-likelihood is then simply the baseline entropy of the data-generating distribution minus the deviation between the data-generating distribution and our model for the distribution.
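The decomposition above can be checked numerically in a discrete toy case, since $\mathbb{E}_{p}[\log q] = -\mathcal{H}(p) - KL[p \,\|\, q]$ holds exactly; the two distributions below are arbitrary stand-ins:

```python
import math

# Discrete toy check of E_p[log q] = -H(p) - KL(p || q), the identity
# behind the expected log-likelihood decomposition above.
p = [0.5, 0.3, 0.2]   # stand-in for the data distribution
q = [0.4, 0.4, 0.2]   # stand-in for the model distribution

expected_log_q = sum(pi * math.log(qi) for pi, qi in zip(p, q))
entropy = -sum(pi * math.log(pi) for pi in p)
kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
print(abs(expected_log_q - (-entropy - kl)))  # ~0.0 (exact identity)
```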
For the latent variable model $p_\theta(x) = \int p_\theta(x|z)\, p(z)\, \mathrm{d}z$ the log-likelihood bound is:

$\log p_\theta(x) \geq \mathbb{E}_{q_\phi(z|x)}\left[\log \frac{p_\theta(x, z)}{q_\phi(z|x)}\right]$

We take the expectation over the data-generating distribution and apply the same steps as above:

$\mathbb{E}_{p_{\mathrm{data}}(x)}\left[\mathbb{E}_{q_\phi(z|x)}\left[\log \frac{p_\theta(x, z)}{q_\phi(z|x)}\right]\right] = -\mathcal{H}(p_{\mathrm{data}}(x)) - KL\left[p_{\mathrm{data}}(x) \,\|\, p_\theta(x)\right] - \mathbb{E}_{p_{\mathrm{data}}(x)}\left[KL\left[q_\phi(z|x) \,\|\, p_\theta(z|x)\right]\right]$

where $p_\theta(z|x)$ is the (intractable) posterior of the latent variable model. This result shows that we pay an additional price (the last term) for using an approximation to the posterior.
The latent variable model can choose to ignore the latent variables, $p_\theta(x|z) = p_\theta(x)$. When this happens the expression falls back to the log-likelihood without latent variables. We can therefore get an (intractable) condition for when it is advantageous for the model to use the latent variables:

$KL\left[p_{\mathrm{data}}(x) \,\|\, p_{\theta, \mathrm{no\,latent}}(x)\right] - KL\left[p_{\mathrm{data}}(x) \,\|\, p_{\theta, \mathrm{latent}}(x)\right] > \mathbb{E}_{p_{\mathrm{data}}(x)}\left[KL\left[q_\phi(z|x) \,\|\, p_\theta(z|x)\right]\right]$

The model will use latent variables when the log-likelihood gain is so high that it can compensate for the loss we pay by using an approximate posterior distribution.
The above argument can also be used to understand why it is harder to get additional layers of latent variables to become active. For a two-layer latent variable model we use a variational distribution $q_\phi(z_1, z_2|x) = q_\phi(z_2|z_1)\, q_\phi(z_1|x)$ and decompose the log-likelihood bound using $p_\theta(x, z_1, z_2) = p_\theta(x|z_1)\, p_\theta(z_1|z_2)\, p(z_2)$:

$\log p_\theta(x) \geq \mathbb{E}_{q_\phi(z_1|x)}\left[\log \frac{p_\theta(x|z_1)}{q_\phi(z_1|x)} + \mathbb{E}_{q_\phi(z_2|z_1)}\left[\log \frac{p_\theta(z_1|z_2)\, p(z_2)}{q_\phi(z_2|z_1)}\right]\right]$

Again this expression falls back to the one-layer model when $q_\phi(z_2|z_1) = p(z_2)$. So whether to use the second layer of stochastic units will depend upon the potential diminishing return in terms of log-likelihood relative to the extra cost from the approximate posterior.
Acknowledgements
We thank Ulrich Paquet for fruitful feedback. The research was supported by the Danish Innovation Foundation and by the NVIDIA Corporation with the donation of TITAN X GPUs. Marco Fraccaro is supported by Microsoft Research through its PhD Scholarship Programme.
References
 Bastien et al. (2012) Bastien, Frédéric, Lamblin, Pascal, Pascanu, Razvan, Bergstra, James, Goodfellow, Ian J., Bergeron, Arnaud, Bouchard, Nicolas, and Bengio, Yoshua. Theano: new features and speed improvements. In Deep Learning and Unsupervised Feature Learning, workshop at Neural Information Processing Systems, 2012.

 Basu et al. (2002) Basu, Sugato, Banerjee, Arindam, and Mooney, Raymond J. Semi-supervised clustering by seeding. In Proceedings of the International Conference on Machine Learning, 2002.
 Bengio et al. (2013) Bengio, Yoshua, Courville, Aaron, and Vincent, Pascal. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8), 2013.
 Bowman et al. (2015) Bowman, S.R., Vilnis, L., Vinyals, O., Dai, A.M., Jozefowicz, R., and Bengio, S. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349, 2015.
 Burda et al. (2015a) Burda, Yuri, Grosse, Roger, and Salakhutdinov, Ruslan. Importance Weighted Autoencoders. arXiv preprint arXiv:1509.00519, 2015a.

 Burda et al. (2015b) Burda, Yuri, Grosse, Roger, and Salakhutdinov, Ruslan. Accurate and conservative estimates of MRF log-likelihood using reverse annealing. In Proceedings of the International Conference on Artificial Intelligence and Statistics, 2015b.
 Chen et al. (2017) Chen, Xi, Kingma, Diederik P., Salimans, Tim, Duan, Yan, Dhariwal, Prafulla, Schulman, John, Sutskever, Ilya, and Abbeel, Pieter. Variational Lossy Autoencoder. In International Conference on Learning Representations, 2017.
 Dieleman et al. (2015) Dieleman, Sander, Schlüter, Jan, Raffel, Colin, Olson, Eben, Sønderby, Søren K, Nouri, Daniel, van den Oord, Aaron, and Battenberg, Eric. Lasagne: First release, August 2015.
 Fraccaro et al. (2016) Fraccaro, Marco, Sønderby, Søren Kaae, Paquet, Ulrich, and Winther, Ole. Sequential neural models with stochastic layers. In Advances in Neural Information Processing Systems. 2016.
 Goodfellow et al. (2014) Goodfellow, Ian, PougetAbadie, Jean, Mirza, Mehdi, Xu, Bing, WardeFarley, David, Ozair, Sherjil, Courville, Aaron, and Bengio, Yoshua. Generative adversarial nets. In Advances in Neural Information Processing Systems. 2014.
 Gulrajani et al. (2016) Gulrajani, Ishaan, Kumar, Kundan, Ahmed, Faruk, Ali Taiga, Adrien, Visin, Francesco, Vazquez, David, and Courville, Aaron. PixelVAE: A latent variable model for natural images. arXiv eprints, 1611.05013, November 2016.
 Ioffe & Szegedy (2015) Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of International Conference on Machine Learning, 2015.
 Jang et al. (2016) Jang, Eric, Gu, Shixiang, and Poole, Ben. Categorical reparameterization with gumbelsoftmax. arXiv preprint arXiv:1611.01144, 2016.
 Kingma & Ba (2014) Kingma, Diederik and Ba, Jimmy. Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980, 12 2014.
 Kingma et al. (2014) Kingma, Diederik P., Rezende, Danilo Jimenez, Mohamed, Shakir, and Welling, Max. SemiSupervised Learning with Deep Generative Models. In Proceedings of the International Conference on Machine Learning, 2014.
 Kingma et al. (2016) Kingma, Diederik P, Salimans, Tim, Jozefowicz, Rafal, Chen, Xi, Sutskever, Ilya, and Welling, Max. Improved variational inference with inverse autoregressive flow. In Advances in Neural Information Processing Systems. 2016.
 Kingma (2013) Kingma, Diederik P. and Welling, Max. Auto-Encoding Variational Bayes. arXiv preprint arXiv:1312.6114, 12 2013.
 Krishnan et al. (2015) Krishnan, Rahul G, Shalit, Uri, and Sontag, David. Deep Kalman filters. arXiv:1511.05121, 2015.
 Lake et al. (2013) Lake, Brenden M, Salakhutdinov, Ruslan R, and Tenenbaum, Josh. Oneshot learning by inverting a compositional causal process. In Advances in Neural Information Processing Systems. 2013.
 Maaløe et al. (2016) Maaløe, Lars, Sønderby, Casper K., Sønderby, Søren K., and Winther, Ole. Auxiliary Deep Generative Models. In Proceedings of the International Conference on Machine Learning, 2016.
 Maddison et al. (2016) Maddison, Chris J., Mnih, Andriy, and Teh, Yee Whye. The concrete distribution: A continuous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712, abs/1611.00712, 2016.
 Miyato et al. (2015) Miyato, Takeru, Maeda, Shinichi, Koyama, Masanori, Nakae, Ken, and Ishii, Shin. Distributional Smoothing with Virtual Adversarial Training. arXiv preprint arXiv:1507.00677, 7 2015.
 Ranganath et al. (2015) Ranganath, Rajesh, Tran, Dustin, and Blei, David M. Hierarchical variational models. arXiv preprint arXiv:1511.02386, 11 2015.
 Rasmus et al. (2015) Rasmus, Antti, Berglund, Mathias, Honkala, Mikko, Valpola, Harri, and Raiko, Tapani. Semisupervised learning with ladder networks. In Advances in Neural Information Processing Systems, 2015.
 Rezende et al. (2014) Rezende, Danilo J., Mohamed, Shakir, and Wierstra, Daan. Stochastic Backpropagation and Approximate Inference in Deep Generative Models. arXiv preprint arXiv:1401.4082, 04 2014.
 Rezende & Mohamed (2015) Rezende, Danilo Jimenez and Mohamed, Shakir. Variational Inference with Normalizing Flows. In Proceedings of the International Conference on Machine Learning, 2015.
 Rolfe (2017) Rolfe, Jason Tyler. Discrete variational autoencoders. In Proceedings of the International Conference on Learning Representations, 2017.
 Salimans et al. (2016) Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., and Chen, X. Improved techniques for training gans. arXiv preprint arXiv:1606.03498, 2016.
 Sønderby et al. (2016) Sønderby, Casper Kaae, Raiko, Tapani, Maaløe, Lars, Sønderby, Søren Kaae, and Winther, Ole. Ladder variational autoencoders. In Advances in Neural Information Processing Systems 29. 2016.
 Springenberg (2015) Springenberg, J.T. Unsupervised and semisupervised learning with categorical generative adversarial networks. arXiv preprint arXiv:1511.06390, 2015.
 Tenenbaum et al. (2006) Tenenbaum, Joshua B., Griffiths, Thomas L., and Kemp, Charles. Theorybased Bayesian models of inductive learning and reasoning. Trends in cognitive sciences, 10(7):309–318, July 2006.
 Tran et al. (2016) Tran, Dustin, Ranganath, Rajesh, and Blei, David M. Variational Gaussian process. In Proceedings of the International Conference on Learning Representations, 2016.