Scalable Deep Unsupervised Clustering with Concrete GMVAEs

by   Mark Collier, et al.
HubSpot, Inc.

Discrete random variables are natural components of probabilistic clustering models. A number of VAE variants with discrete latent variables have been developed. Training such methods requires marginalizing over the discrete latent variables, causing training time to scale linearly with the number of clusters. By applying a continuous relaxation to the discrete variables in these methods, we reduce training time complexity to be constant in the number of clusters used. We demonstrate that, in practice for one such method, the Gaussian Mixture VAE, the continuous relaxation has no negative effect on the quality of the clustering but provides a substantial reduction in training time, reducing training time on CIFAR-100 with 20 clusters from 47 hours to less than 6 hours.





1 Concrete GMVAE

Variational Autoencoders (VAEs) [6, 11] are popular latent variable probabilistic unsupervised learning methods, suitable for use with deep neural networks. The standard VAE formulation has a single continuous latent vector in its probabilistic model. However, traditional clustering models such as the Gaussian Mixture Model contain a discrete latent variable representing the cluster id. While one can perform clustering with a standard VAE, for example by first training the VAE and then running K-means on the inferred latent variables for each training example, it may be beneficial from both a modelling and a computational point of view to train end-to-end a VAE capable of clustering via a discrete latent variable.
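The two-stage baseline mentioned above (train a VAE, then run K-means on the inferred latents) can be sketched as follows. This is an illustrative stand-in, not the paper's code: the trained encoder is replaced by toy Gaussian latent codes, and `kmeans` is a plain numpy implementation.

```python
import numpy as np

def kmeans(Z, K, iters=50, seed=0):
    """Plain K-means on inferred latent codes Z of shape (N, D)."""
    rng = np.random.default_rng(seed)
    # Initialize centers at K distinct latent codes.
    centers = Z[rng.choice(len(Z), size=K, replace=False)]
    for _ in range(iters):
        # Assign each latent code to its nearest center.
        d = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # Recompute each center as the mean of its assigned codes.
        for k in range(K):
            if (labels == k).any():
                centers[k] = Z[labels == k].mean(axis=0)
    return labels, centers

# Toy stand-in for VAE latents: two well-separated Gaussian blobs.
rng = np.random.default_rng(1)
Z = np.concatenate([rng.normal(-5, 1, size=(50, 2)),
                    rng.normal(+5, 1, size=(50, 2))])
labels, _ = kmeans(Z, K=2)
```

In the two-stage setup, `Z` would instead hold the inferred latent mean for each training example; the point of the end-to-end GMVAE is to avoid this decoupling of representation learning and clustering.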

The Gaussian Mixture VAE (GMVAE) [2, 3] is one of a number of VAE variants with discrete latent variables which can be used for unsupervised clustering and semi-supervised learning [7]. The GMVAE defines the following generative model of the observed data x, with cluster label y and continuous latent z:

    p(y) = Cat(y; 1/K)
    p(z | y) = N(z; mu_theta(y), diag(sigma^2_theta(y)))
    p(x | z) = Bernoulli(x; mu_theta(z))

where mu_theta(y), sigma_theta(y) and mu_theta(z) are chosen to be deep neural networks. When x is not Bernoulli distributed, a different likelihood model is used. As is standard for VAEs, we introduce a factorized inference model q(y, z | x) = q(y | x) q(z | x, y) for the latents, whose components are also deep networks. The collection of networks is then trained end-to-end to maximize a single sample Monte Carlo estimate of the ELBO L(x), a lower bound on log p(x):

    L^(x) = sum_{y=1}^{K} q(y | x) [ log p(x | z^_y) + log p(z^_y | y) - log q(z^_y | x, y) ] - KL(q(y | x) || p(y))

with z^_y ~ q(z | x, y). Thus even the single sample Monte Carlo estimate of the ELBO requires a summation over all K possible settings of the cluster label y, making training time linear in the number of clusters used. When one wishes to use a large number of clusters, this linear training time complexity is prohibitive, especially when combined with the large datasets and deep neural networks the GMVAE is designed to be applied to.
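The O(K) cost of this estimator can be made concrete with a schematic implementation. The trained networks are replaced by hypothetical callables (`encode_y`, `encode_z`, `prior_z`, `log_lik` are illustrative stand-ins with fixed parameters, not the paper's code); the structure of the loop over y is the point.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_gauss(z, mu, sigma):
    """Log-density of a diagonal Gaussian."""
    return (-0.5 * np.log(2 * np.pi * sigma**2)
            - 0.5 * ((z - mu) / sigma) ** 2).sum()

def gmvae_elbo_marginalized(x, K, encode_y, encode_z, prior_z, log_lik):
    """Single-sample ELBO estimate with y marginalized out: O(K) work.

    encode_y(x)    -> q(y|x), shape (K,)
    encode_z(x, y) -> (mu, sigma) of q(z|x, y)
    prior_z(y)     -> (mu, sigma) of p(z|y)
    log_lik(x, z)  -> log p(x|z)
    """
    q_y = encode_y(x)
    elbo = 0.0
    for y in range(K):  # the O(K) loop the Concrete relaxation removes
        mu_q, sig_q = encode_z(x, y)
        z = mu_q + sig_q * rng.standard_normal(mu_q.shape)  # reparameterized sample
        mu_p, sig_p = prior_z(y)
        elbo += q_y[y] * (log_lik(x, z)
                          + log_gauss(z, mu_p, sig_p)
                          - log_gauss(z, mu_q, sig_q))
    # Closed-form KL(q(y|x) || Uniform(K)).
    elbo -= (q_y * np.log(q_y * K + 1e-12)).sum()
    return elbo

# Toy stand-ins: fixed parameters instead of trained networks.
D, K = 4, 3
x = rng.standard_normal(D)
val = gmvae_elbo_marginalized(
    x, K,
    encode_y=lambda x: np.full(K, 1.0 / K),
    encode_z=lambda x, y: (np.zeros(D) + y, np.ones(D)),
    prior_z=lambda y: (np.zeros(D) + y, np.ones(D)),
    log_lik=lambda x, z: log_gauss(x, z, np.ones(D)),
)
```

Each pass through the loop requires a forward pass of the inference and prior networks, so wall-clock training time grows with K.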

But the summation over cluster ids is only required because sampling from the categorical distribution q(y | x) is non-differentiable. To avoid the linear scaling of the training time complexity we could use a REINFORCE style gradient estimator [12], but such methods tend to provide high variance gradient estimates and correspondingly slow practical convergence. We instead propose the Concrete GMVAE, continuously relaxing q(y | x) using the Concrete (also known as Gumbel-Softmax) distribution [10, 4]. As q(y | x) is now a continuous approximation to the discrete Categorical distribution, sampling from it is differentiable using the reparameterization trick [6, 11, 10, 4]. It follows that the single sample Monte Carlo estimate of the ELBO can be obtained in a time independent of the number of clusters used:

    L^(x) = log p(x | z^) + log p(z^ | y^) + log p(y^) - log q(y^ | x) - log q(z^ | x, y^)    (7)

where y^ ~ q(y | x) and z^ ~ q(z | x, y^), which can be obtained through ancestral sampling. In practice, as has been found when applying VAEs to text [1, 13], we found that the direct use of eq. (7) leads to the network ignoring the latent variable by reducing the KL divergence to zero. We adopt a similar strategy to [1] and introduce a weight beta on the KL terms, which we anneal from zero to one during the course of training. Our MC estimate of the ELBO thus becomes:

    L^(x) = log p(x | z^) + beta [ log p(z^ | y^) + log p(y^) - log q(y^ | x) - log q(z^ | x, y^) ]

We find this modification to be sufficient to encourage use of the latent variable y.
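Reparameterized Concrete sampling can be sketched in a few lines. The `sample_concrete` helper is an illustrative stand-in, not the paper's implementation; the temperature 0.3 matches the value reported in appendix A.

```python
import numpy as np

def sample_concrete(logits, temperature, rng):
    """Reparameterized sample from the Concrete / Gumbel-Softmax distribution.

    Returns a point on the probability simplex that approaches a one-hot
    Categorical sample as temperature -> 0; the sample is a deterministic,
    differentiable function of logits given the Gumbel noise.
    """
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    scores = (logits + gumbel) / temperature
    scores = scores - scores.max()  # numerical stability
    y = np.exp(scores)
    return y / y.sum()

rng = np.random.default_rng(0)
logits = np.log(np.array([0.7, 0.2, 0.1]))
y_soft = sample_concrete(logits, temperature=0.3, rng=rng)  # used during training
y_hard = np.eye(3)[y_soft.argmax()]                          # test-time discretization
```

Because the noise enters additively, gradients flow through `logits`, so no summation over cluster ids is needed.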

2 Experiments

We test our Concrete GMVAE on the binarized MNIST and CIFAR-100 datasets [9, 8]. For MNIST we set K=10 clusters, one for each class. For CIFAR-100, in order to keep the standard GMVAE training time reasonable, we use K=20 clusters, one for each “super-class” in the dataset [8]. We anneal the KL weight beta from zero to one during training. Further experimental details and neural network architectures are given in appendix A. We note that at test time, for all models, we fully discretize q(y | x) by one-hot encoding its argmax.

We can see from table 1 that for both datasets there is no significant difference in the test set log-likelihood for the GMVAE and the Concrete GMVAE, i.e. our proposed method learns an equally good model of the data as the standard model. Without KL annealing we observed that the KL divergence was close to zero at convergence, and thus the model failed to learn to cluster the dataset. The computational advantage of the Concrete GMVAE is clear from the training times: training the Concrete GMVAE is approximately 4X and 8X faster than the standard GMVAE for K=10 and K=20 respectively. As K grows this gap would widen further.
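The exact annealing schedule used is not recoverable from this text; a linear ramp from zero to one is one common choice and is sketched here purely for illustration (`warmup_steps` is a hypothetical parameter).

```python
def kl_weight(step, warmup_steps):
    """Linear KL annealing: beta ramps from 0 to 1 over warmup_steps,
    then stays at 1. An illustrative schedule, not the paper's exact one."""
    return min(1.0, step / warmup_steps)

# beta multiplies the KL terms of the ELBO during training.
betas = [kl_weight(s, warmup_steps=1000) for s in (0, 500, 1000, 2000)]
```

Starting with beta near zero lets the decoder first learn to use the latents before the KL penalty can collapse q(y | x) toward the prior.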

Dataset     K    GMVAE                        Concrete GMVAE
                 Test LL    Training time     Test LL    Training time
MNIST       10   –          –                 –          –
CIFAR-100   20   –          47                –          <6

Table 1:

Comparison of Concrete GMVAE vs. standard GMVAE. Mean and standard deviation of test set log-likelihood and training time (in hours) over 11 training runs are shown. Training was done on an Amazon EC2 p3.2xlarge instance with 1 NVIDIA V100 Tensor Core GPU and 8 virtual CPUs.

These results demonstrate that the theoretical reduction in training time complexity from linear to constant scaling in K has, in practice, reduced training time substantially in a realistic training setup (see appendix A) with early stopping, etc. Despite the significant speedup in training time, our introduction of a continuous relaxation of the latent variable in the GMVAE has had no significant negative impact on test set log-likelihood. We hope that the introduction of the Concrete GMVAE will enable the application of GMVAEs to large scale problems and problems requiring a large number of clusters, where the required training time was previously considered prohibitive.


  • [1] S. R. Bowman, L. Vilnis, O. Vinyals, A. Dai, R. Jozefowicz, and S. Bengio (2016) Generating sentences from a continuous space. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pp. 10–21. Cited by: §1.
  • [2] N. Dilokthanakul, P. A. Mediano, M. Garnelo, M. C. Lee, H. Salimbeni, K. Arulkumaran, and M. Shanahan (2016) Deep unsupervised clustering with gaussian mixture variational autoencoders. arXiv preprint arXiv:1611.02648. Cited by: §1.
  • [3] Gaussian mixture VAE: lessons in variational inference, generative models, and deep nets. Note: 2019-03-23. Cited by: §1.
  • [4] E. Jang, S. Gu, and B. Poole (2016) Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144. Cited by: §1.
  • [5] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: Appendix 0.A.
  • [6] D. P. Kingma and M. Welling (2013) Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. Cited by: §1, §1.
  • [7] D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling (2014) Semi-supervised learning with deep generative models. In Advances in neural information processing systems, pp. 3581–3589. Cited by: §1.
  • [8] A. Krizhevsky and G. Hinton (2009) Learning multiple layers of features from tiny images. Technical report Citeseer. Cited by: §2.
  • [9] Y. LeCun, C. Cortes, and C. Burges (2010) MNIST handwritten digit database. AT&T Labs [Online]. Available: http://yann. lecun. com/exdb/mnist 2, pp. 18. Cited by: §2.
  • [10] C. J. Maddison, A. Mnih, and Y. W. Teh (2016) The concrete distribution: a continuous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712. Cited by: §1.
  • [11] D. J. Rezende, S. Mohamed, and D. Wierstra (2014) Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning, pp. 1278–1286. Cited by: §1, §1.
  • [12] R. J. Williams (1992) Simple statistical gradient-following algorithms for connectionist reinforcement learning. In Reinforcement Learning, pp. 5–32. Cited by: §1.
  • [13] Z. Yang, Z. Hu, R. Salakhutdinov, and T. Berg-Kirkpatrick (2017) Improved variational autoencoders for text modeling using dilated convolutions. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 3881–3890. Cited by: §1.

Appendix 0.A Experimental Details

For both datasets we train using the Adam optimizer [5] with an initial learning rate of , for up to 200 epochs, stopping training early if the validation set loss has not improved for 10 consecutive epochs. Each image is passed through a shared convolutional encoder with 20 5x5 filters with stride 1 followed by 40 5x5 filters with stride 1; each convolutional layer is followed by a 2x2 max pooling layer with stride 2 and the RELU activation function, and finally a fully-connected layer with 512 units and RELU activation. From this shared encoding of the raw image, we compute q(y | x) with two further fully-connected layers of 512 and 256 units, also with RELU activation. Whether sampling y or marginalizing it out, q(z | x, y) is computed by concatenating the output of the shared encoder with the one-hot encoded y and passing it through two fully-connected layers of 512 and 256 units with RELU activation. p(z | y) is implemented as a single fully-connected layer from the one-hot encoded y. p(x | z) is implemented with a single fully-connected layer followed by two layers of transposed convolutions mirroring those used in the shared encoder. The temperature of the Concrete distribution is set to 0.3 for both datasets throughout training.
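Assuming 'valid' (zero) padding, which the text does not specify, the spatial sizes through the described encoder can be traced as follows. This is a hypothetical shape check, not the paper's code.

```python
def conv_out(size, kernel, stride, padding=0):
    """Output spatial size of a square convolution or pooling layer."""
    return (size + 2 * padding - kernel) // stride + 1

def encoder_feature_size(image_size):
    """Trace spatial sizes through the described encoder stack,
    assuming no padding at each layer."""
    s = conv_out(image_size, kernel=5, stride=1)  # 20 filters, 5x5, stride 1
    s = conv_out(s, kernel=2, stride=2)           # 2x2 max pool, stride 2
    s = conv_out(s, kernel=5, stride=1)           # 40 filters, 5x5, stride 1
    s = conv_out(s, kernel=2, stride=2)           # 2x2 max pool, stride 2
    return 40 * s * s                             # flattened, feeds the FC-512 layer

flat_mnist = encoder_feature_size(28)  # MNIST: 28 -> 24 -> 12 -> 8 -> 4, so 40*4*4 = 640
flat_cifar = encoder_feature_size(32)  # CIFAR: 32 -> 28 -> 14 -> 10 -> 5, so 40*5*5 = 1000
```

Under this padding assumption, the flattened feature vector entering the 512-unit fully-connected layer has 640 entries for MNIST and 1000 for CIFAR-100.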