1 Problem statement
We define a heterogeneous dataset as a collection of objects, where each object is defined by $D$ attributes and these attributes correspond to either numerical (continuous or discrete) or nominal variables. We denote each object in the dataset as a $D$-dimensional vector $\mathbf{x}_n = [x_{n1}, \ldots, x_{nD}]$, where each attribute $x_{nd}$ corresponds to one of the following data types:
Numerical variables:
- Real-valued data, which takes values in the real line, i.e., $x_{nd} \in \mathbb{R}$.
- Positive real-valued data, which takes values in the positive real line, i.e., $x_{nd} \in \mathbb{R}_+$.
- (Discrete) count data, which takes values in the natural numbers, i.e., $x_{nd} \in \mathbb{N}$.
Nominal variables:
- Categorical data, which takes values in a finite unordered set, e.g., $x_{nd} \in$ {'blue', 'red', 'black'}.
- Ordinal data, which takes values in a finite ordered set, e.g., $x_{nd} \in$ {'never', 'sometimes', 'often', 'usually', 'always'}.
Additionally, we consider that a random set of entries in the data is incomplete, such that each object can potentially contain any combination of observed and missing attributes. Let $\mathcal{O}_n$ (respectively, $\mathcal{M}_n$) be the index set of observed (missing) attributes for the $n$-th data point, where $\mathcal{O}_n \cap \mathcal{M}_n = \emptyset$. Also, let $\mathbf{x}_n^o$ ($\mathbf{x}_n^m$) represent the sliced vector obtained by stacking the dimensions with index in $\mathcal{O}_n$ ($\mathcal{M}_n$). Figure 1(a) shows an example of an incomplete heterogeneous dataset, where we observe that the different attributes (or dimensions) in the data correspond to different types of numerical and nominal variables, and missing values appear ubiquitously across the data.
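To make the notation concrete, the following toy snippet (with purely illustrative attribute types and values, not taken from any of the datasets used later) represents such an incomplete heterogeneous dataset as a value matrix together with a binary mask of observed entries:

```python
# Toy incomplete heterogeneous dataset: rows are objects x_n, columns are attributes x_nd,
# NaN marks missing entries. Attribute types and values are made up for illustration.
import numpy as np

attribute_types = ["real", "positive", "count", "categorical", "ordinal"]
x = np.array([
    [ 0.3,    2.7,    4.0,    1.0,    2.0],
    [np.nan,  1.1,    0.0,    2.0, np.nan],
    [-1.2, np.nan,    7.0, np.nan,    0.0],
])
mask = ~np.isnan(x)            # mask[n, d] = True iff attribute d of object n is observed
x_tilde = np.nan_to_num(x)     # zero-filled version, used later as encoder input
```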

Diverging from common trends in the deep generative community, we consider databases that do not contain highly-structured homogeneous data; instead, each observed object is a set of scalar, mixed numerical and nominal attributes, and the underlying structure is in many cases mild. Since the dimensionality of these datasets can be relatively small (compared to images, for instance), we need to carefully design the generative model to avoid overfitting on the observed data, while keeping the model flexible enough to incorporate both heterogeneous data types and random patterns of missing data.
2 Revisited VAE
In this section, we show how to extend the vanilla VAE introduced in (Kingma and Welling, 2014) to handle incomplete and heterogeneous data.
2.1 Handling incomplete data
In a standard VAE, missing data affect both the generative (decoder) and the recognition (encoder) models. The ELBO is defined over the complete data, and it is not straightforward to decouple the missing entries from the rest of the data, particularly when these entries appear randomly in the dataset. To this end, we first propose to use the following factorization for the decoder (Figure 1(b)):
$$p(\mathbf{x}_n, \mathbf{z}_n) = p(\mathbf{z}_n) \prod_{d=1}^{D} p(x_{nd} \mid \mathbf{z}_n), \qquad (1)$$
where $\mathbf{z}_n$ is the latent $K$-dimensional vector representation of the object $\mathbf{x}_n$, and $p(\mathbf{z}_n) = \mathcal{N}(\mathbf{0}, \mathbf{I}_K)$. This factorization allows us to easily marginalize the missing attributes of each object out of the variational ELBO. We parametrize each likelihood $p(x_{nd} \mid \mathbf{z}_n)$ with the set of parameters $\boldsymbol{\gamma}_{nd} = h_d(\mathbf{z}_n)$, where $h_d(\cdot)$ is a DNN that transforms the latent variable $\mathbf{z}_n$ into the likelihood parameters $\boldsymbol{\gamma}_{nd}$.
Note that the above factorization of the likelihood allows us to separate the contributions of the observed data from missing data as
$$p(\mathbf{x}_n \mid \mathbf{z}_n) = \prod_{d \in \mathcal{O}_n} p(x_{nd} \mid \mathbf{z}_n) \prod_{d \in \mathcal{M}_n} p(x_{nd} \mid \mathbf{z}_n). \qquad (2)$$
The recognition model also needs to account for incomplete data, such that the distribution of the latent variable $\mathbf{z}_n$ only depends on the observed attributes $\mathbf{x}_n^o$, i.e.,
$$q(\mathbf{z}_n, \mathbf{x}_n^m \mid \mathbf{x}_n^o) = q(\mathbf{z}_n \mid \mathbf{x}_n^o) \prod_{d \in \mathcal{M}_n} p(x_{nd} \mid \mathbf{z}_n). \qquad (3)$$
The recognition model is graphically represented in Figure 1(c). Note that we need a recognition model that is flexible enough to handle any combination of observed and missing attributes. To this end, we propose an input drop-out recognition distribution whose parameters are the output of a DNN with input $\tilde{\mathbf{x}}_n$, such that
$$q(\mathbf{z}_n \mid \mathbf{x}_n^o) = \mathcal{N}\big(\boldsymbol{\mu}_q(\tilde{\mathbf{x}}_n), \boldsymbol{\Sigma}_q(\tilde{\mathbf{x}}_n)\big), \qquad (4)$$
where the input $\tilde{\mathbf{x}}_n$ is a $D$-length vector that resembles the original observed vector $\mathbf{x}_n$ but with the missing dimensions replaced by zeros, and $\boldsymbol{\mu}_q(\cdot)$ and $\boldsymbol{\Sigma}_q(\cdot)$ are parametrized DNNs with input $\tilde{\mathbf{x}}_n$ whose outputs determine the mean and the diagonal covariance matrix of (4). By setting the missing dimensions of $\tilde{\mathbf{x}}_n$ to zero, the contribution of the missing attributes to $\boldsymbol{\mu}_q$ and $\boldsymbol{\Sigma}_q$, and to their derivatives with respect to the network parameters, is zero.
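As an illustration, a minimal Keras-style sketch of such an input drop-out encoder could look as follows (layer sizes, names, and the use of tf.keras are our own choices, not the released implementation):

```python
import tensorflow as tf

class InputDropoutEncoder(tf.keras.Model):
    """Sketch of the recognition model in Eq. (4): missing entries are zeroed out."""
    def __init__(self, latent_dim, hidden_dim=64):
        super().__init__()
        self.hidden = tf.keras.layers.Dense(hidden_dim, activation="tanh")
        self.mu = tf.keras.layers.Dense(latent_dim)        # mean of q(z | x^o)
        self.log_var = tf.keras.layers.Dense(latent_dim)   # log of the diagonal covariance

    def call(self, x, mask):
        # mask[n, d] = 1 if attribute d of object n is observed, 0 otherwise.
        x_tilde = x * mask                                  # missing dimensions contribute nothing
        h = self.hidden(x_tilde)
        return self.mu(h), self.log_var(h)
```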
An alternative approach, proposed in (Vedantam et al., 2017), consists of exploiting the properties of Gaussian distributions in the linear factor analysis case (Williams and Nash, 2018) and extending them to non-linear models, designing a factorized recognition model $q(\mathbf{z}_n \mid \mathbf{x}_n^o) \propto p(\mathbf{z}_n) \prod_{d \in \mathcal{O}_n} q(\mathbf{z}_n \mid x_{nd})$, where $q(\mathbf{z}_n \mid x_{nd}) = \mathcal{N}\big(\boldsymbol{\mu}_d(x_{nd}), \boldsymbol{\Sigma}_d(x_{nd})\big)$, and therefore $q(\mathbf{z}_n \mid \mathbf{x}_n^o) = \mathcal{N}(\boldsymbol{\mu}_n, \boldsymbol{\Sigma}_n)$ with
$$\boldsymbol{\Sigma}_n^{-1} = \mathbf{I}_K + \sum_{d \in \mathcal{O}_n} \boldsymbol{\Sigma}_d^{-1}(x_{nd}), \qquad \boldsymbol{\mu}_n = \boldsymbol{\Sigma}_n \sum_{d \in \mathcal{O}_n} \boldsymbol{\Sigma}_d^{-1}(x_{nd})\, \boldsymbol{\mu}_d(x_{nd}). \qquad (5)$$
Note that, in contrast to our input drop-out recognition model, in this case we need to train an independent DNN per attribute $x_{nd}$, which not only results in a higher computational cost and a larger risk of overfitting, but also loses the ability of DNNs to amortize the inference of the parameters across attributes, and therefore across different missing data patterns.
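For concreteness, combining the per-attribute Gaussian factors of such a factorized model reduces to a precision-weighted average, as in the following NumPy sketch (diagonal covariances assumed; function and variable names are ours):

```python
import numpy as np

def combine_gaussian_factors(mus, variances, observed):
    """Product of the prior N(0, I) and the observed per-attribute Gaussian factors.

    mus, variances: arrays of shape (D, K) with per-attribute means and variances.
    observed: boolean array of shape (D,) indicating which attributes are observed.
    Returns the mean and diagonal covariance of q(z | x^o).
    """
    precision = 1.0 + (1.0 / variances[observed]).sum(axis=0)   # prior contributes unit precision
    mean = (mus[observed] / variances[observed]).sum(axis=0) / precision
    return mean, 1.0 / precision
```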
Given the above generative and recognition models, described respectively by (1) and (3), the ELBO of the marginal likelihood (computed only on the observed data $\mathbf{x}_n^o$) can be written as
$$\log p(\mathbf{x}_n^o) \geq \mathbb{E}_{q(\mathbf{z}_n \mid \mathbf{x}_n^o)}\Big[ \sum_{d \in \mathcal{O}_n} \log p(x_{nd} \mid \mathbf{z}_n) \Big] - \mathrm{KL}\big( q(\mathbf{z}_n \mid \mathbf{x}_n^o) \,\|\, p(\mathbf{z}_n) \big), \qquad (6)$$
where the first term of the ELBO corresponds to the reconstruction term of the observed data $\mathbf{x}_n^o$, and the Kullback–Leibler (KL) divergence in the second term penalizes the posterior $q(\mathbf{z}_n \mid \mathbf{x}_n^o)$ for deviating from the prior $p(\mathbf{z}_n)$. Note that the KL divergence can be computed in closed form (Kingma and Welling, 2014).
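A minimal sketch of this masked ELBO for the special case of Gaussian likelihoods (variable names are illustrative, not the paper's code) is:

```python
import math
import tensorflow as tf

def elbo_observed(x, mask, mu_x, log_var_x, mu_z, log_var_z):
    """ELBO of Eq. (6) with Gaussian likelihoods: the reconstruction term only sums over
    observed entries (mask = 1); the KL term is the closed-form Gaussian expression."""
    log_lik = -0.5 * (math.log(2.0 * math.pi) + log_var_x
                      + tf.square(x - mu_x) / tf.exp(log_var_x))
    reconstruction = tf.reduce_sum(mask * log_lik, axis=-1)     # only d in O_n contribute
    kl = 0.5 * tf.reduce_sum(tf.exp(log_var_z) + tf.square(mu_z) - 1.0 - log_var_z, axis=-1)
    return tf.reduce_mean(reconstruction - kl)
```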
Remark. This VAE for incomplete data can readily be used to estimate the missing values in the data as follows
$$\hat{\mathbf{x}}_n^m \sim p(\mathbf{x}_n^m \mid \mathbf{x}_n^o) \approx \int \prod_{d \in \mathcal{M}_n} p(x_{nd} \mid \mathbf{z}_n)\, q(\mathbf{z}_n \mid \mathbf{x}_n^o)\, d\mathbf{z}_n. \qquad (7)$$
The KL term in (6) promotes a missing-data recognition model that does not rely on the observed attributes, i.e., $q(\mathbf{z}_n \mid \mathbf{x}_n^o) \approx p(\mathbf{z}_n)$. When the data are highly structured (i.e., when the statistical dependencies among the attributes in the data are strong), the reconstruction term tends to dominate and therefore this situation is avoided. However, this might not be the case for non-structured heterogeneous data, for which the combination of a variety of likelihood reconstruction terms may result in overall reconstruction log-likelihoods that are comparable to the KL term during the optimization. In such cases, one could replace the standard Normal prior by a more structured prior that more easily captures the (sometimes weak) statistical dependencies in the data. We discuss this approach in more detail in Section 3.
2.2 Handling heterogeneous data
In contrast to homogeneous likelihood models, where the likelihood parameters can be directly captured by a joint DNN with shared weights across dimensions (for example, all the pixels in an image are often jointly modeled by a single convolutional DNN), parameter sharing in heterogeneous likelihood models is not straightforward. Interestingly, the factorized decoder in (1) can be used to easily accommodate a variety of likelihood functions, one per input attribute, where an independent DNN $h_d(\cdot)$ models the parameters $\boldsymbol{\gamma}_{nd}$ of every likelihood model $p(x_{nd} \mid \boldsymbol{\gamma}_{nd})$, as shown in Fig. 1(b).
Here we account for the numerical and nominal data types introduced in Section 1, for which we assume the following likelihood models:
1. Real-valued data. For real-valued data, we assume a Gaussian likelihood model, i.e.,
$$p(x_{nd} \mid \boldsymbol{\gamma}_{nd}) = \mathcal{N}\big(x_{nd};\, \mu_d(\mathbf{z}_n),\, \sigma_d^2(\mathbf{z}_n)\big), \qquad (8)$$
with $\boldsymbol{\gamma}_{nd} = \{\mu_d(\mathbf{z}_n), \sigma_d^2(\mathbf{z}_n)\}$, where the mean $\mu_d(\mathbf{z}_n)$ and the variance $\sigma_d^2(\mathbf{z}_n)$ are computed as the outputs of DNNs with input $\mathbf{z}_n$.
2. Positive real-valued data. For positive real-valued data, we assume a log-normal likelihood model, i.e.,
$$p(x_{nd} \mid \boldsymbol{\gamma}_{nd}) = \log\mathcal{N}\big(x_{nd};\, \mu_d(\mathbf{z}_n),\, \sigma_d^2(\mathbf{z}_n)\big), \qquad (9)$$
with $\boldsymbol{\gamma}_{nd} = \{\mu_d(\mathbf{z}_n), \sigma_d^2(\mathbf{z}_n)\}$, where the likelihood parameters $\mu_d(\mathbf{z}_n)$ and $\sigma_d^2(\mathbf{z}_n)$ (which correspond to the mean and variance of the variable's natural logarithm) are the outputs of DNNs with input $\mathbf{z}_n$.
3. Count data. For count data, we assume a Poisson likelihood model, i.e.,
$$p(x_{nd} \mid \boldsymbol{\gamma}_{nd}) = \mathrm{Poiss}\big(x_{nd};\, \lambda_d(\mathbf{z}_n)\big) = \frac{\lambda_d(\mathbf{z}_n)^{x_{nd}}\, e^{-\lambda_d(\mathbf{z}_n)}}{x_{nd}!}, \qquad (10)$$
with $\boldsymbol{\gamma}_{nd} = \lambda_d(\mathbf{z}_n)$, where the mean parameter $\lambda_d(\mathbf{z}_n)$ of the Poisson distribution corresponds to the output of a DNN with input $\mathbf{z}_n$.
4. Categorical data. For categorical data, codified using one-hot encoding, we assume a multinomial logit model such that the $R$-dimensional output of a DNN, $\mathbf{h}_d(\mathbf{z}_n) = [h_{d1}(\mathbf{z}_n), \ldots, h_{dR}(\mathbf{z}_n)]$, represents the vector of unnormalized probabilities, such that the probability of every category $r$ is given by
$$p(x_{nd} = r \mid \boldsymbol{\gamma}_{nd}) = \frac{\exp\big(h_{dr}(\mathbf{z}_n)\big)}{\sum_{q=1}^{R} \exp\big(h_{dq}(\mathbf{z}_n)\big)}. \qquad (11)$$
To ensure identifiability, we fix the value of the first component of $\mathbf{h}_d(\mathbf{z}_n)$ to zero.
5. Ordinal data. For ordinal data, codified using thermometer encoding,¹ we assume an ordinal logit model (McCullagh, 1980), in which the probability of each (ordinal) category is given by

$$p(x_{nd} = r \mid \boldsymbol{\gamma}_{nd}) = p(x_{nd} \leq r \mid \mathbf{z}_n) - p(x_{nd} \leq r - 1 \mid \mathbf{z}_n), \qquad (12)$$

with

$$p(x_{nd} \leq r \mid \mathbf{z}_n) = \frac{1}{1 + \exp\big(-(\theta_r(\mathbf{z}_n) - h_d(\mathbf{z}_n))\big)}. \qquad (13)$$

Here, the thresholds $\theta_1(\mathbf{z}_n) < \cdots < \theta_{R-1}(\mathbf{z}_n)$ divide the real line into $R$ regions and $h_d(\mathbf{z}_n)$ indicates the region (category) in which $x_{nd}$ falls. Therefore, the likelihood parameters are $\boldsymbol{\gamma}_{nd} = \{h_d(\mathbf{z}_n), \theta_1(\mathbf{z}_n), \ldots, \theta_{R-1}(\mathbf{z}_n)\}$, which we model as the outputs of a DNN. To guarantee that $\theta_1(\mathbf{z}_n) < \theta_2(\mathbf{z}_n) < \cdots < \theta_{R-1}(\mathbf{z}_n)$, we apply a cumulative sum function to the positive real-valued outputs of the network.

¹As an example, in an ordinal variable with three categories, the lowest value is encoded as (1, 0, 0), the middle value as (1, 1, 0), and the highest value as (1, 1, 1).
Moreover, for all the likelihood parameters that need to be positive, we use the softplus function $\mathrm{softplus}(x) = \log(1 + e^{x})$.
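The two less standard constructions above, fixing one categorical logit to zero and obtaining increasing ordinal thresholds through a cumulative sum of positive outputs, can be sketched in NumPy as follows (function names are ours, and the raw network outputs are placeholders):

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def categorical_probs(raw_logits):
    """Eq. (11): raw_logits has R-1 entries; the remaining logit is fixed to zero."""
    logits = np.concatenate([[0.0], raw_logits])
    e = np.exp(logits - logits.max())
    return e / e.sum()

def ordinal_probs(raw_outputs, h):
    """Eqs. (12)-(13): raw_outputs has R-1 entries; h is the scalar 'region' output."""
    theta = np.cumsum(softplus(raw_outputs))                 # theta_1 < ... < theta_{R-1}
    cdf = np.concatenate([[0.0], 1.0 / (1.0 + np.exp(-(theta - h))), [1.0]])
    return np.diff(cdf)                                      # p(x = r) = p(x <= r) - p(x <= r-1)
```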
Remark. The caveat of the generative model in Figure 1 is that we lose the ability of deep neural networks to capture correlations among data attributes by amortizing the parameters. An alternative would be to use the approach in (Suh and Choi, 2016), where categorical one-hot encoded variables are approximated by continuous variables using jitter noise (uniform on [0,1]). However, this approach does not allow combining different likelihood models or distinguishing categorical from ordinal data. In Section 3, we show how to overcome this limitation by using a hierarchical model.
Handling heterogeneous data ranges. Apart from different types of attributes, heterogeneous datasets commonly contain numerical attributes whose values correspond to completely different domains. For example, a dataset may contain the height of different individuals, with values on the order of meters, and also their income, which might reach tens or even hundreds of thousands of dollars per year. In order to learn the parameters of both the generative and the recognition models in Figure 1, one might rely on stochastic gradient descent, using at every iteration a minibatch estimate of the ELBO in (6).²
²Although here we use the standard ELBO for VAEs, tighter log-likelihood lower bounds, such as the one proposed in the importance weighted autoencoder (IWAE) (Burda et al., 2015), could also be applied.
However, the heterogeneous nature of the data and these differences in value ranges between continuous variables result in broadly different likelihood parameters (e.g., the mean of the height is much lower than the mean of the income), leading in practice to heterogeneous (and potentially unstable) gradient evaluations. To prevent the gradient evaluations of the ELBO from being dominated by a subset of attributes, we apply a batch normalization layer at the input of the recognition model for the numerical variables, and we apply the complementary batch denormalization at the output layer of the generative model to denormalize the likelihood parameters.
In particular, for real-valued variables, we shift and scale the input data to the recognition model to ensure that the normalized minibatch has zero mean and unit variance. These shift and scale normalization parameters, $m_d$ and $s_d$, are afterwards used to denormalize the likelihood parameters of the Gaussian distribution, i.e., the denormalized mean and variance are given by $s_d\,\mu_d(\mathbf{z}_n) + m_d$ and $s_d^2\,\sigma_d^2(\mathbf{z}_n)$, respectively. For positive real-valued variables, for which a log-normal model is used, we apply the same batch normalization at the encoder and denormalization at the decoder used for real-valued variables, but to the natural logarithm of the data instead of directly to the data. We note that count variables are not batch denormalized at the decoder, but a normalized transformation is used to feed the recognition network. With these batch normalization and denormalization layers at, respectively, the recognition and the generative models, we not only enforce more stable evaluations (free of numerical errors) of the gradients, but we also speed up the convergence of the optimization.
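A minimal NumPy sketch of this normalization/denormalization step for a real-valued attribute (variable names are ours) is:

```python
import numpy as np

def batch_normalize(x_col):
    """Shift and scale one numerical attribute of a minibatch to zero mean, unit variance."""
    m, s = x_col.mean(), x_col.std() + 1e-6
    return (x_col - m) / s, m, s

def denormalize_gaussian(mu_norm, var_norm, m, s):
    """If (x - m)/s ~ N(mu_norm, var_norm), then x ~ N(s * mu_norm + m, s**2 * var_norm)."""
    return s * mu_norm + m, s ** 2 * var_norm

# For positive real-valued (log-normal) attributes, the same transformation is applied to
# log(x) instead of x, as described above.
```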
Table 1: Summary of the HI-VAE generative model, recognition model, and ELBO, together with the parametrization of the likelihood models for real-valued, positive real-valued, count, categorical, and ordinal data.

3 The Heterogeneous-Incomplete VAE (HI-VAE)
In the previous section, we introduced a simple VAE architecture that handles incomplete and heterogeneous data. However, this approach might be too restrictive to capture complex and high-dimensional data. More specifically, we have assumed a standard Gaussian prior on the latent variables $\mathbf{z}_n$, which might be too restrictive according to the literature (Tomczak and Welling, 2018), and is particularly problematic when the final goal is to estimate missing values in unstructured datasets (refer to the discussion under (7)). Similarly, we have assumed a generative model that fully factorizes for every (heterogeneous) dimension in the data, losing the properties of an amortized generative model in which the different dimensions share the weights of a common DNN capturing the relationships between attributes (as CNNs capture correlations between pixels in an image). In this section, we overcome these limitations, and we remark that the models proposed in this paper are, in fact, compatible with current developments in the VAE literature.
In order to prevent the KL term in (6) from dominating the ELBO, thus penalizing rich posterior distributions for $\mathbf{z}_n$, we can impose structure on the latent variable representation through its prior distribution. We propose a Gaussian mixture prior (Dilokthanakul et al., 2016), such that
$$p(\mathbf{s}_n) = \mathrm{Cat}(\boldsymbol{\pi}) = \prod_{\ell=1}^{L} \pi_\ell^{\,s_{n\ell}}, \qquad (14)$$
$$p(\mathbf{z}_n \mid \mathbf{s}_n) = \mathcal{N}\big(\boldsymbol{\mu}_p(\mathbf{s}_n),\, \boldsymbol{\Sigma}_p(\mathbf{s}_n)\big), \qquad (15)$$
where $\mathbf{s}_n$ is a one-hot encoding vector representing the component in the mixture, i.e., the mean and the variance of the Gaussian that generates $\mathbf{z}_n$. For simplicity, we assume a uniform mixture with $\pi_\ell = 1/L$ for all $\ell$.
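Ancestral sampling from this mixture prior is straightforward; a small NumPy sketch (with hypothetical arrays of per-component means and diagonal variances) is:

```python
import numpy as np

def sample_mixture_prior(mu_p, var_p, n_samples, seed=0):
    """Sample (s_n, z_n) from Eqs. (14)-(15) with a uniform mixture over L components.

    mu_p, var_p: hypothetical arrays of shape (L, K) with component means and diagonal variances.
    """
    rng = np.random.default_rng(seed)
    L, K = mu_p.shape
    s = rng.integers(L, size=n_samples)                                 # s_n ~ Cat(1/L, ..., 1/L)
    z = mu_p[s] + np.sqrt(var_p[s]) * rng.normal(size=(n_samples, K))   # z_n ~ N(mu_p(s_n), Sigma_p(s_n))
    return s, z
```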
Moreover, to help the model accurately capture the statistical dependencies among heterogeneous attributes, we propose a hierarchical structure that allows different attributes to share network parameters (i.e., to amortize the generative model). More specifically, we introduce an intermediate homogeneous representation of the data, $\mathbf{Y}_n = [\mathbf{y}_{n1}, \ldots, \mathbf{y}_{nD}]$, which is jointly generated by a single DNN $\mathbf{g}(\cdot)$ with input $\mathbf{z}_n$. Then, the likelihood parameters of each attribute $d$ are the output of an independent DNN with inputs $\mathbf{y}_{nd}$ and $\mathbf{s}_n$, such that $\boldsymbol{\gamma}_{nd} = h_d(\mathbf{y}_{nd}, \mathbf{s}_n)$. Note that, in this hierarchical structure, the top level (from $\mathbf{z}_n$ to $\mathbf{Y}_n$) captures statistical dependencies among the attributes through the shared DNN $\mathbf{g}(\cdot)$, while the bottom level in the hierarchy (from $\mathbf{y}_{nd}$ and $\mathbf{s}_n$ to $x_{nd}$) accounts for heterogeneous likelihood models using the independent DNNs $h_d(\cdot)$.
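A Keras-style sketch of this hierarchical decoder (layer sizes and layout are assumptions, not the released implementation) could look as follows:

```python
import tensorflow as tf

class HIVAEDecoder(tf.keras.Model):
    """Shared DNN g(.) maps z_n to Y_n; per-attribute heads h_d map (y_nd, s_n) to gamma_nd."""
    def __init__(self, n_attrs, y_dim, params_per_attr):
        super().__init__()
        self.n_attrs, self.y_dim = n_attrs, y_dim
        self.g = tf.keras.layers.Dense(n_attrs * y_dim, activation="tanh")   # shared top level
        self.heads = [tf.keras.layers.Dense(p) for p in params_per_attr]      # one DNN per attribute

    def call(self, z, s):
        y = tf.reshape(self.g(z), (-1, self.n_attrs, self.y_dim))             # Y_n = g(z_n)
        return [head(tf.concat([y[:, d, :], s], axis=-1))                     # gamma_nd = h_d(y_nd, s_n)
                for d, head in enumerate(self.heads)]
```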
The resulting generative model, hereafter referred to as the Heterogeneous-Incomplete VAE (HI-VAE), is shown in Figure 2 and is formulated as indicated in Table 1, which also shows how the HI-VAE parametrizes the different likelihood models introduced in Section 2.2.³
³Other likelihood functions (e.g., a Gamma distribution) and data types (e.g., interval data using, e.g., a Beta distribution) can readily be incorporated.

4 Experiments
In this section, we first compare the performance of the HI-VAE to other methods in the literature for data completion tasks in heterogeneous data. Then, we focus on a classification task, where we evaluate the classification degradation due to performing mean imputation for the missing data in supervised models compared to using the fully generative HI-VAE, which does not require data imputation. An additional empirical comparison between the HI-VAE with an input drop-out encoder, a factorized encoder as in (5) to handle missing data, and a non-structured VAE with a simple Gaussian prior instead of a mixture model distribution at the latent space is provided in the Appendix. The code to reproduce all our experiments can be found in the following public repository https://github.com/probabilistic-learning/HI-VAE.
4.1 Missing data imputation
In our first experiment, we evaluate the performance of the proposed HI-VAE at imputing missing data. We use six different heterogeneous datasets from the UCI repository (Lichman, 2013), which vary both in the number of instances and attributes, as well as in the statistical data types of the attributes. The details of all these datasets are provided in the Appendix. For each dataset, we generate different incomplete datasets by removing a percentage of the data completely at random.
Model | Adult | Breast | DefaultCredit | Letter | Spam | Wine |
---|---|---|---|---|---|---|
Mean imputation | ||||||
MICE | ||||||
GLFM | ||||||
GAIN | ||||||
HI-VAE |
Table 2: Average and standard deviation of the imputation error for 20% missing data, evaluated exclusively over numeric variables.

Comparison. We compare the performance of the following methods for missing data imputation:
-
Mean Imputation: We use as baseline an algorithm that imputes the mean of each continuous attribute and the mode of each discrete attribute.
-
MICE: Multiple Imputation by Chained Equations (Azur et al., 2011), an iterative method that fits a series of supervised regression models in which each missing attribute is modeled conditionally upon the other variables in the data, including those imputed in previous rounds of the algorithm. We use the MICE implementation in the fancyimpute package (https://github.com/iskandr/fancyimpute), which in its current implementation only allows picking a single homogeneous regression model for all attributes, independently of whether they are numerical or nominal.
-
GLFM: General latent feature model for heterogeneous data (Valera et al., 2017b), which was initially introduced for table completion in heterogeneous datasets in (Valera and Ghahramani, 2014). This method handles all the numerical and nominal data types described in Section 1 and performs MCMC inference. We run 5000 iterations of the sampler using the available implementation in https://github.com/ivaleraM/GLFM.
-
GAIN: Generative adversarial network for missing data imputation (Yoon et al., 2018), which uses the MSE as a loss function for numerical variables and the cross-entropy for binary variables. We train GAIN using the network specifications and hyperparameters reported in (Yoon et al., 2018).
-
HI-VAE: The model introduced in Section 3, which we implement in TensorFlow using only one dense layer for all the parameters of the encoder and decoder of the HI-VAE. We set the dimensionality of $\mathbf{z}_n$, $\mathbf{y}_{nd}$ and $\mathbf{s}_n$ to 10, 5 and 10, respectively. The temperature parameter of the Gumbel-Softmax is annealed using a linearly decreasing function of the number of epochs (a sketch of such a schedule follows the list). We train our algorithms using minibatches of 1000 samples.
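As referenced in the HI-VAE bullet above, a linearly decreasing temperature schedule for the Gumbel-Softmax can be sketched as follows (the start and end temperatures are illustrative placeholders, since the exact values are not reproduced here):

```python
def gumbel_softmax_temperature(epoch, n_epochs, tau_start=1.0, tau_end=1e-3):
    """Linearly anneal the Gumbel-Softmax temperature over training epochs."""
    frac = min(epoch / max(n_epochs - 1, 1), 1.0)
    return tau_start + frac * (tau_end - tau_start)
```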
Imputation strategy. Once the HI-VAE model is trained, the imputation of missing data is performed in a three-step process: First, we compute the MAP estimate of the recognition model to obtain $\hat{\mathbf{s}}_n$ and $\hat{\mathbf{z}}_n$. With these MAP estimates, we evaluate the generative model, obtaining $\hat{\mathbf{y}}_{nd}$ and the likelihood parameters $\hat{\boldsymbol{\gamma}}_{nd}$ for every attribute. Finally, the imputed values are obtained by computing the mode of each distribution $p(x_{nd} \mid \hat{\boldsymbol{\gamma}}_{nd})$, where the computation of the mode depends on the likelihood model of the attribute. A further discussion on imputation methods is available in the Appendix.
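A minimal sketch of this procedure (with hypothetical `encoder` and `decoder` callables, and the modes of the likelihood models of Section 2.2) is:

```python
import numpy as np

def impute_missing(x_tilde, encoder, decoder):
    """Three-step imputation: MAP of (s_n, z_n), decode likelihood parameters, take the mode."""
    s_hat, z_hat = encoder(x_tilde)             # MAP estimates under the recognition model
    params = decoder(z_hat, s_hat)              # per-attribute likelihood parameters gamma_nd
    imputed = {}
    for d, p in params.items():
        if p["type"] == "real":
            imputed[d] = p["mean"]                          # mode of a Gaussian
        elif p["type"] == "positive":
            imputed[d] = np.exp(p["mean"] - p["var"])       # mode of a log-normal: exp(mu - sigma^2)
        elif p["type"] == "count":
            imputed[d] = np.floor(p["rate"])                # mode of a Poisson
        else:                                               # categorical / ordinal
            imputed[d] = int(np.argmax(p["probs"]))
    return imputed
```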
Imputation error. We compare the above models in terms of the average imputation error, computed as $\mathrm{AvgErr} = \frac{1}{D}\sum_{d=1}^{D} \mathrm{err}_d$, where we use the normalized root mean square error (NRMSE) for numerical variables, the accuracy error for categorical variables, and the displacement error for ordinal variables. See the Appendix for a precise definition of each error metric.
Model | Adult | Breast | DefaultCredit | Letter | Spam | Wine |
---|---|---|---|---|---|---|
Mean imputation | ||||||
MICE | ||||||
GLFM | ||||||
GAIN | ||||||
HI-VAE |

Table 3: Average and standard deviation of the imputation error for 20% missing data, evaluated exclusively over nominal variables.
% Missing | Model | Breast | DefaultCredit | Letter | Spam | Wine |
---|---|---|---|---|---|---|
0% | DLR | |||||
CVAE | ||||||
HIVAE | ||||||
10% | DLR | |||||
CVAE | ||||||
HIVAE | ||||||
50% | DLR | |||||
CVAE | ||||||
HIVAE |

Table 4: Classification results for deep logistic regression (DLR), CVAE, and HI-VAE with 0%, 10%, and 50% missing values in the input training data.
Results. Figure 3 summarizes the average imputation error (AvgErr) for each database as we vary the fraction of missing data. Observe that the proposed HI-VAE (with input drop-out encoder) presents the most robust results across all datasets. The second most robust model is the GLFM, which performs best on the small datasets (Breast and Wine). This might be explained by the fact that, while it accounts for mixed nominal and discrete data, it relies on Gibbs sampling for inference, which scales and mixes poorly for larger datasets. In contrast, MICE and GAIN⁴ are outperformed by the Mean-imputation baseline on several datasets, most likely because they do not account for different types of mixed nominal and numerical attributes.
⁴We would like to clarify that the reported results do not quite match those provided in (Yoon et al., 2018), despite using the code and the hyperparameters provided by the authors. For the sake of reproducibility, we will incorporate the GAIN implementation into our public repository.
A deeper understanding of the results in Figure 3 can be obtained by separately analyzing the error for numeric variables (real, positive, and count variables) in Table 2, and for nominal variables (categorical and ordinal variables) in Table 3. In both cases, we use 20% missing data. While for numeric variables HI-VAE achieves an error comparable to that of the other methods, it is in the imputation of nominal variables where HI-VAE achieves a remarkable gain, being the best-performing method in four out of six cases. These results demonstrate the superior ability of HI-VAE to exploit underlying correlations among the set of heterogeneous attributes. For a further discussion of the imputation for each type of nominal and numerical variable, refer to the Appendix. We note that we use the same HI-VAE configuration (i.e., DNN structure and number of latent variables) in all experiments and, therefore, further improvements could be achieved by cross-validating the structure of the HI-VAE for each database.
4.2 Predictive Task
Finally, although the HI-VAE is a fully unsupervised generative model, we evaluate its performance at solving a classification task: a multi-class classification problem for the Letter dataset (with 26 classes corresponding to the different letters) and a binary classification problem for the rest. We use 50% of the data for training, which for HI-VAE means that we remove 50% of the labels to train the generative model. Regarding the training data, we consider three different scenarios: the first assumes complete input attributes in the training set (no missing data), the second assumes 10% of missing values in the input training data, and the third assumes 50% of missing values. Here, we compare our HI-VAE with two supervised methods: deep logistic regression (DLR) and the conditional VAE (CVAE) in (Sohn et al., 2015). Since these supervised methods cannot handle missing data, we impute the mean of each attribute to the missing input values during training.

Results. Table 4 summarizes the results, where we observe that our HI-VAE method provides competitive results in all cases except for the Letter database. This may be due to the fact that we are using the same HI-VAE configuration for all datasets, independently of their complexity. Furthermore, note that HI-VAE provides the best results for both Wine and Breast, while showing less degradation with an increasing fraction of missing input data on DefaultCredit and Spam. These results show that a fully generative model might be preferred over a supervised model with imputed data.
5 Acknowledgments
The authors wish to thank Christopher K. I. Williams for fruitful discussions and helpful comments on the manuscript. Alfredo Nazabal would like to acknowledge the funding provided by the UK Government's Defence & Security Programme in support of the Alan Turing Institute. The work of Pablo M. Olmos is supported by the Spanish government MEC under grant TEC2016-78434-C3-3-R, by Comunidad de Madrid under grant IND2017/TIC-7618, and by the European Union (FEDER). Zoubin Ghahramani acknowledges support from the Alan Turing Institute (EPSRC Grant EP/N510129/1) and EPSRC Grant EP/N014162/1, and donations from Google and Microsoft Research. Isabel Valera is supported by the MPG Minerva Fast Track program. We also gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X Pascal GPU used for this research.
References
- Arjovsky et al. (2017) Martín Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. CoRR, abs/1701.07875, 2017. URL http://arxiv.org/abs/1701.07875.
- Arora and Zhang (2017) Sanjeev Arora and Yi Zhang. Do gans actually learn the distribution? an empirical study. CoRR, abs/1706.08224, 2017. URL http://arxiv.org/abs/1706.08224.
- Azur et al. (2011) Melissa J. Azur, Elizabeth A. Stuart, Constantine Frangakis, and Philip J. Leaf. Multiple imputation by chained equations: What is it and how does it work? International Journal of Methods in Psychiatric Research, 20(1):40–49, 3 2011. ISSN 1049-8931. doi: 10.1002/mpr.329.
- Bando et al. (2017) Yoshiaki Bando, Masato Mimura, Katsutoshi Itoyama, Kazuyoshi Yoshii, and Tatsuya Kawahara. Statistical speech enhancement based on probabilistic integration of variational autoencoder and non-negative matrix factorization. CoRR, abs/1710.11439, 2017. URL http://arxiv.org/abs/1710.11439.
- Burda et al. (2015) Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. arXiv preprint arXiv:1509.00519, 2015.
- Castrejon et al. (2016) Lluis Castrejon, Yusuf Aytar, Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Learning aligned cross-modal representations from weakly aligned data. In Computer Vision and Pattern Recognition (CVPR), 2016 IEEE Conference on. IEEE, 2016.
- Chen et al. (2016) Xi Chen, Diederik P. Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, and Pieter Abbeel. Variational lossy autoencoder. CoRR, abs/1611.02731, 2016. URL http://arxiv.org/abs/1611.02731.
- Dilokthanakul et al. (2016) Nat Dilokthanakul, Pedro A. M. Mediano, Marta Garnelo, Matthew C. H. Lee, Hugh Salimbeni, Kai Arulkumaran, and Murray Shanahan. Deep unsupervised clustering with gaussian mixture variational autoencoders. CoRR, 2016.
- Ganin et al. (2016) Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. J. Mach. Learn. Res., 17(1):2096–2030, January 2016. ISSN 1532-4435. URL http://dl.acm.org/citation.cfm?id=2946645.2946704.
- Gulrajani et al. (2017) Ishaan Gulrajani, Faruk Ahmed, Martín Arjovsky, Vincent Dumoulin, and Aaron C. Courville. Improved training of wasserstein gans. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 5769–5779, 2017. URL http://papers.nips.cc/paper/7159-improved-training-of-wasserstein-gans.
- Jang et al. (2016) Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. CoRR, abs/1611.01144, 2016. URL http://arxiv.org/abs/1611.01144.
- Kim et al. (2017) Taeksoo Kim, Moonsu Cha, Hyunsoo Kim, Jung Kwon Lee, and Jiwon Kim. Learning to discover cross-domain relations with generative adversarial networks. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1857–1865, International Convention Centre, Sydney, Australia, 06–11 Aug 2017. PMLR. URL http://proceedings.mlr.press/v70/kim17a.html.
- Kingma and Welling (2014) Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In International Conference on Learning Representations (ICLR). 2014.
- Kingma et al. (2014) Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3581–3589. Curran Associates, Inc., 2014. URL http://papers.nips.cc/paper/5352-semi-supervised-learning-with-deep-generative-models.pdf.
- Li et al. (2015) Yujia Li, Kevin Swersky, and Richard S. Zemel. Generative moment matching networks. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pages 1718–1727, 2015. URL http://jmlr.org/proceedings/papers/v37/li15.html.
- Lichman (2013) M. Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.
- Liu et al. (2017) Ming-Yu Liu, Thomas Breuel, and Jan Kautz. Unsupervised image-to-image translation networks. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 700–708. Curran Associates, Inc., 2017. URL http://papers.nips.cc/paper/6672-unsupervised-image-to-image-translation-networks.pdf.
- McCullagh (1980) Peter McCullagh. Regression models for ordinal data. Journal of the Royal Statistical Society. Series B (Methodological), 42(2):109–142, 1980. ISSN 00359246. URL http://www.jstor.org/stable/2984952.
- Mescheder et al. (2017) Lars M. Mescheder, Sebastian Nowozin, and Andreas Geiger. Adversarial variational bayes: Unifying variational autoencoders and generative adversarial networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pages 2391–2400, 2017. URL http://proceedings.mlr.press/v70/mescheder17a.html.
- Mroueh et al. (2017) Youssef Mroueh, Tom Sercu, and Vaibhava Goel. Mcgan: Mean and covariance feature matching GAN. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pages 2527–2535, 2017. URL http://proceedings.mlr.press/v70/mroueh17a.html.
- Nash and Williams (2017) Charlie Nash and Chris Williams. The shape variational autoencoder: A deep generative model of part-segmented 3d objects. Computer Graphics Forum, 36(5):1–12, 2017. doi: 10.1111/cgf.13240. URL https://onlinelibrary.wiley.com/doi/abs/10.1111/cgf.13240.
- Nowozin et al. (2016) Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-gan: Training generative neural samplers using variational divergence minimization. In Advances in Neural Information Processing Systems 29, pages 271–279, 2016.
- Rezende and Mohamed (2015) Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 1530–1538, Lille, France, 07–09 Jul 2015. PMLR. URL http://proceedings.mlr.press/v37/rezende15.html.
- Salimans et al. (2016) Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 2226–2234, 2016. URL http://papers.nips.cc/paper/6125-improved-techniques-for-training-gans.
- Sohn et al. (2015) Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning structured output representation using deep conditional generative models. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 3483–3491. Curran Associates, Inc., 2015.
- Suh and Choi (2016) Suwon Suh and Seungjin Choi. Gaussian copula variational autoencoders for mixed data. arXiv preprint arXiv:1604.04960, 2016.
- Taigman et al. (2016) Yaniv Taigman, Adam Polyak, and Lior Wolf. Unsupervised cross-domain image generation. CoRR, abs/1611.02200, 2016. URL http://arxiv.org/abs/1611.02200.
- Tolstikhin et al. (2017) Ilya O. Tolstikhin, Sylvain Gelly, Olivier Bousquet, Carl-Johann Simon-Gabriel, and Bernhard Schölkopf. Adagan: Boosting generative models. CoRR, abs/1701.02386, 2017. URL http://arxiv.org/abs/1701.02386.
- Tomczak and Welling (2018) Jakub M. Tomczak and Max Welling. VAE with a vampprior. In International Conference on Artificial Intelligence and Statistics, AISTATS 2018, 9-11 April 2018, Playa Blanca, Lanzarote, Canary Islands, Spain, pages 1214–1223, 2018.
- Vahdat et al. (2018) Arash Vahdat, William G. Macready, Zhengbing Bian, and Amir Khoshaman. Dvae++: Discrete variational autoencoders with overlapping transformations, 2018. URL http://arxiv.org/abs/1802.04920.
- Valera et al. (2017a) I. Valera, M. F. Pradier, M. Lomeli, and Z. Ghahramani. General latent feature models for heterogeneous datasets. arXiv preprint arXiv:1706.03779, 2017a.
- Valera and Ghahramani (2014) Isabel Valera and Zoubin Ghahramani. General table completion using a bayesian nonparametric model. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 981–989. Curran Associates, Inc., 2014.
- Valera et al. (2017b) Isabel Valera, Melanie F. Pradier, and Zoubin Ghahramani. General latent feature modeling for data exploration tasks. CoRR, abs/1707.08352, 2017b. URL http://arxiv.org/abs/1707.08352.
- Vedantam et al. (2017) Ramakrishna Vedantam, Ian Fischer, Jonathan Huang, and Kevin Murphy. Generative models of visually grounded imagination. arXiv preprint arXiv:1705.10762, 2017.
- Williams and Nash (2018) Christopher KI Williams and Charlie Nash. Autoencoders and probabilistic inference with missing data: An exact solution for the factor analysis case. arXiv preprint arXiv:1801.03851, 2018.
- Yang et al. (2017) Zichao Yang, Zhiting Hu, Ruslan Salakhutdinov, and Taylor Berg-Kirkpatrick. Improved variational autoencoders for text modeling using dilated convolutions. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 3881–3890, International Convention Centre, Sydney, Australia, 06–11 Aug 2017. PMLR. URL http://proceedings.mlr.press/v70/yang17d.html.
- Yoon et al. (2018) Jinsung Yoon, James Jordon, and Mihaela van der Schaar. Gain: Missing data imputation using generative adversarial nets. In Proceedings of the 35th International Conference on Machine Learning (ICML’18), Stockholm (Sweden), July 2018., 2018.
- Zhu et al. (2017) Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pages 2242–2251, 2017. doi: 10.1109/ICCV.2017.244. URL https://doi.org/10.1109/ICCV.2017.244.
Appendix
Error metrics.
We compare the above models in terms of the average imputation error, computed as $\mathrm{AvgErr} = \frac{1}{D}\sum_{d=1}^{D} \mathrm{err}_d$, where we use the following error metrics for each attribute, since the computation of the errors depends on the type of variable we are considering (a NumPy sketch of these metrics follows the list):
-
Normalized Root Mean Square Error (NRMSE) for numerical variables, i.e., the root mean square error between the true and imputed values of the missing entries of attribute $d$, normalized by the range of the attribute:
$$\mathrm{err}_d = \frac{\sqrt{\frac{1}{|\mathcal{M}^d|}\sum_{n \in \mathcal{M}^d} (x_{nd} - \hat{x}_{nd})^2}}{\max_n x_{nd} - \min_n x_{nd}}, \qquad (16)$$
where $\mathcal{M}^d$ denotes the set of objects for which attribute $d$ is missing.
-
Accuracy error for categorical variables, i.e., the fraction of imputed values that differ from the true category:
$$\mathrm{err}_d = \frac{1}{|\mathcal{M}^d|}\sum_{n \in \mathcal{M}^d} \mathbb{I}[\hat{x}_{nd} \neq x_{nd}]. \qquad (17)$$
-
Displacement error for ordinal variables, i.e., the average displacement (in number of categories) between the true and imputed ordinal values, normalized by the number of categories $R_d$:
$$\mathrm{err}_d = \frac{1}{|\mathcal{M}^d|}\sum_{n \in \mathcal{M}^d} \frac{|\hat{x}_{nd} - x_{nd}|}{R_d}. \qquad (18)$$
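A NumPy sketch of these three metrics, computed over the missing entries of one attribute (following the definitions above), is:

```python
import numpy as np

def nrmse(x_true, x_imp, x_range):
    """Eq. (16): RMSE over the missing entries, normalized by the range of the attribute."""
    return np.sqrt(np.mean((x_true - x_imp) ** 2)) / x_range

def accuracy_error(x_true, x_imp):
    """Eq. (17): fraction of wrongly imputed categories."""
    return np.mean(x_true != x_imp)

def displacement_error(x_true, x_imp, n_categories):
    """Eq. (18): average category displacement, normalized by the number of categories."""
    return np.mean(np.abs(x_true - x_imp)) / n_categories
```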
Databases characteristics.
We use six databases borrowed from the UCI repository https://archive.ics.uci.edu/ml/index.php. We summarize their main characteristics in Table 5.
Database | Objects | Attributes | # Real | # Positive | # Categorical | # Ordinal | # Count |
---|---|---|---|---|---|---|---|
Adult | 32561 | 12 | 0 | 3 | 6 | 1 | 2 |
Breast | 699 | 10 | 0 | 0 | 1 | 9 | 0 |
Default Credit | 30000 | 24 | 6 | 7 | 4 | 6 | 1 |
Letter | 20000 | 17 | 0 | 0 | 1 | 16 | 0 |
Spam | 4601 | 58 | 0 | 57 | 1 | 0 | 0 |
Wine | 6497 | 13 | 0 | 11 | 1 | 0 | 1 |
Imputation error per attribute.
We augment the experimental evaluation in Section 4.1 of the main document by illustrating the imputation error per attribute for a 20% fraction of missing data. It can be seen that HI-VAE is in general superior for imputing nominal variables (ordinal or categorical ones).

Variations on the HI-VAE construction.
In Figures 5 and 6 we compare three different approaches to implementing the HI-VAE generative model. We compare the HI-VAE with a mixture model prior distribution at the latent space and input drop-out (HI-VAE), which is the model we use in the main document, with a HI-VAE that uses the factorized model (5) of the main document to handle missing data (HI-VAE factorized), and a HI-VAE in which the latent variables in the generative model are Gaussian distributed, i.e., we do not use a mixture model at the latent space (HI-VAE Gaussian prior). We compare the results in terms of imputation errors (Figure 5) and in terms of test log-likelihood (Figure 6). The standard HI-VAE provides slightly better error performance for both the Default Credit and Wine datasets, and provides the best test log-likelihood on the Breast dataset.
Beyond a slight imputation improvement in some cases, HI-VAE has far fewer parameters than HI-VAE factorized (which has a different NN per missing dimension in the inference model) and, compared to HI-VAE Gaussian prior, the structure provided by the mixture model in the latent space naturally yields data clustering at the latent space, hence providing more discriminative data embeddings and a more interpretable generative model. For instance, in Figure 7, we show the latent space induced by the HI-VAE Gaussian prior and the HI-VAE for the Breast dataset, when both use a latent space dimension of 2 and there is 50% missing data. It can be observed that HI-VAE induces more separated and disentangled clusters.
HI-VAE imputation: sampling vs. mode.
Once we have trained the generative model, to impute missing data we can either sample from the generative model or use the inferred parameters of the output distribution, i.e., impute the mode of the inferred distribution (this is what we did in the main document). To illustrate the differences, we show in Figures 8 and 9 the goodness of fit provided by the HI-VAE and the GLFM for a positive real-valued variable and a categorical variable with 6 categories, both belonging to the Adult dataset. Specifically, we show (top row) the true distribution of the data together with the HI-VAE output distribution for the observed data and the HI-VAE output distribution for the missing values. We show results for HI-VAE using the mode of the distribution and for HI-VAE using one sample for imputation. We also show results for the GLFM. Further, in the bottom row we show the Q-Q plot for the positive real-valued variable and the confusion matrix for the categorical one. See the figure captions for more details. Note that, while both the HI-VAE and the GLFM result in a good fit of the positive variable (although the HI-VAE provides a smoother, and thus more realistic, distribution for the data), the GLFM fails at capturing the categorical variable: it assigns all the probability to a single category. These results are consistent with Table 3 in the paper, which demonstrates the superior ability of the HI-VAE to perform missing data imputation on nominal variables.




