Latent Gaussian Processes for Distribution Estimation of Multivariate Categorical Data

03/07/2015 ∙ by Yarin Gal, et al. ∙ University of Cambridge

Multivariate categorical data occur in many applications of machine learning. One of the main difficulties with these vectors of categorical variables is sparsity. The number of possible observations grows exponentially with vector length, but dataset diversity might be poor in comparison. Recent models have achieved significant improvements in supervised tasks with this data. These models embed observations in a continuous space to capture similarities between them. Building on these ideas we propose a Bayesian model for the unsupervised task of distribution estimation of multivariate categorical data. We model vectors of categorical variables as generated from a non-linear transformation of a continuous latent space. Non-linearity captures multi-modality in the distribution. The continuous representation addresses sparsity. Our model ties together many existing models, linking the linear categorical latent Gaussian model, the Gaussian process latent variable model, and Gaussian process classification. We derive inference for our model based on recent developments in sampling-based variational inference. We show empirically that the model outperforms its linear and discrete counterparts in imputation tasks of sparse data.


1 Introduction

Multivariate categorical data is common in fields ranging from language processing to medical diagnosis. Recently proposed supervised models have achieved significant improvements in tasks involving big labelled data of this form (see for example Bengio et al. (2006); Collobert & Weston (2008)). These models rely on information that had been largely ignored before: similarity between vectors of categorical variables. But what should we do in the unsupervised setting, when we face small unlabelled data of this form?

Medical diagnosis provides good examples of small unlabelled data. Consider a dataset composed of test results of a relatively small number of patients. Each patient has a medical record often composed of dozens of examinations, each taking one of several categorical test results. We might be faced with the task of deciding which tests a patient under examination needs to take, and which examination results could be deduced from the existing tests. This can be achieved with distribution estimation.

Several tools in the Bayesian framework could be used for this task of distribution estimation of unlabelled small datasets. The Dirichlet-Multinomial distribution and its extensions are examples of such tools. These rely on relative frequencies of categorical variables appearing with others, with the addition of various smoothing techniques. But when faced with long multivariate sequences, these models run into problems of sparsity. This occurs when the data consists of vectors of categorical variables with most configurations of categorical values not present in the dataset. In medical diagnosis this happens when there is a large number of possible examinations compared to a small number of patients.

Building on ideas used for big labelled discrete data, we propose a Bayesian model for distribution estimation of small unlabelled data. Existing supervised models for discrete labelled data embed the observations in a continuous space. This is used to find the similarity between vectors of categorical variables. We extend this idea to the small unlabelled domain by modelling the continuous embedding as a latent variable. A generative model is used to find a distribution over the discrete observations by modelling them as dependent on the continuous latent variables.

Following the medical diagnosis example, patient n would be modelled by a continuous latent variable x_n. For each examination d, the latent x_n induces a vector of probabilities (π_nd1, …, π_ndK), one probability for each possible test result k. A categorical distribution returns test result y_nd based on these probabilities, resulting in a patient's medical assessment y_n. We need to decide how to model the distribution over the latent space and the vectors of probabilities.

We would like to capture sparse multi-modal categorical distributions. A possible approach is to model the continuous representation with a simple latent space and a non-linear transformation of points in that space to obtain probabilities. In this approach we place a standard normal distribution prior on the latent space, and feed the output of a non-linear transformation of the latent space into a Softmax to obtain probabilities. We use sparse Gaussian processes (GPs) to transform the latent space non-linearly. Sparse GPs form a distribution over functions supported on a small number of points, with linear time complexity (Quiñonero-Candela & Rasmussen, 2005; Titsias, 2009). We use a covariance function that is able to transform the latent space non-linearly. We name this model the Categorical Latent Gaussian Process (CLGP). Using a Gaussian process with a linear covariance function instead recovers the linear Gaussian model (LGM, Marlin et al., 2011), which linearly transforms a continuous latent space to produce discrete observations.

The Softmax likelihood is not conjugate to our Gaussian prior, and integrating the latent variables against a Softmax distribution is intractable. A similar problem exists with LGMs. Marlin et al. (2011) solved this by using variational inference and various bounds for the likelihood in the binary case, or alternative likelihoods to the Softmax in the categorical case (Khan et al., 2012). Many bounds have been studied in the literature for the binary case: Jaakkola and Jordan's bound (Jaakkola & Jordan, 1997), the tilted bound (Knowles & Minka, 2011), piecewise linear and quadratic bounds (Marlin et al., 2011), and others. But for categorical data fewer bounds exist, since the multivariate Softmax is hard to approximate in high dimensions. The Bohning bound (Böhning, 1992) and Blei and Lafferty's bound (Blei & Lafferty, 2006) give poor approximations (Khan et al., 2012).

Instead, we use recent developments in sampling-based variational inference (Blei et al., 2012) to avoid integrating the latent variables with the Softmax analytically. Our approach takes advantage of this tool to obtain a simple yet powerful model and inference. We use Monte Carlo integration to approximate the non-conjugate likelihood, obtaining noisy gradients (Kingma & Welling, 2013; Rezende et al., 2014; Titsias & Lázaro-Gredilla, 2014). We then use learning-rate free stochastic optimisation (Tieleman & Hinton, 2012) to optimise the noisy objective. We leverage symbolic differentiation (Theano, Bergstra et al., 2010) to obtain simple and modular code (Python code for the model and inference is given in the appendix, and available at http://github.com/yaringal/CLGP).

We experimentally show the advantages of using non-linear transformations for the latent space. We follow the ideas of Paccanaro & Hinton (2001) and evaluate the models on relation embedding and relation learning. We then demonstrate the utility of the model in the real-world sparse data domain. We use a medical diagnosis dataset where data is scarce, comparing our model to discrete frequency based models. We use the estimated distribution for a task similar to the above, where we attempt to infer which test results can be deduced from the others. We compare the models on the task of imputation of raw data studying the effects of government terror warnings on political attitudes. We then evaluate the continuous models on a binary Alphadigits dataset composed of binary images of handwritten digits and letters, where each class contains a small number of images. We inspect the latent space embeddings and separation of the classes. Lastly, we evaluate the robustness of our inference, inspecting the Monte Carlo estimate variance over time.

2 Related Work

Figure 1: Relations between existing models and the model proposed in this paper (Categorical Latent Gaussian Process); the model can be seen as a non-linear version of the latent Gaussian model (left to right, Khan et al. (2012)), it can be seen as a latent counterpart to the Gaussian process classification model (back to front, Williams & Rasmussen (2006)), or alternatively as a discrete extension of the Gaussian process latent variable model (top to bottom, Lawrence (2005)).

Our model (CLGP) relates to some key probabilistic models (fig. 1). It can be seen as a non-linear version of the latent Gaussian model (LGM, Khan et al. (2012)) as discussed above. In the LGM we have a standard normal prior placed on a latent space, which is transformed linearly and fed into a Softmax likelihood function. The probability vector output is then used to sample a single categorical value for each categorical variable (e.g. medical test results) in a list of categorical variables (e.g. medical assessment). These categorical variables correspond to elements in a multivariate categorical vector. The parameters of the linear transformation are optimised directly within an EM framework. Khan et al. (2012) avoid the hard task of approximating the Softmax likelihood by using an alternative function (product of sigmoids) which is approximated using numerical techniques. Our approach avoids this cumbersome inference.

Our proposed model can also be seen as a latent counterpart to the Gaussian process classification model (Williams & Rasmussen, 2006), in which a Softmax function is again used to discretise the continuous values. The continuous valued outputs are obtained from a Gaussian process, which non-linearly transforms the inputs to the classification problem. Compared to GP classification, where the inputs are fully observed, in our case the inputs are latent. Lastly, our model can be seen as a discrete extension of the Gaussian process latent variable model (GPLVM, Lawrence, 2005). This model was proposed recently as a means of performing non-linear dimensionality reduction (a counterpart to linear principal component analysis (Tipping & Bishop, 1999)) and density estimation in continuous space.

3 A Latent Gaussian Process Model for Multivariate Categorical Data

We consider a generative model for a dataset Y with N observations (patients for example) and D categorical variables (different possible examinations). The d-th categorical variable in the n-th observation, y_nd, is a categorical variable that can take an integer value from 1 to K_d. For ease of notation, we assume all the categorical variables have the same cardinality, i.e. K_d ≡ K.

In our generative model, each categorical variable y_nd follows a categorical distribution, with probabilities given by a Softmax applied to weights f_nd = (f_nd1, …, f_ndK). Each weight f_ndk is the output of a nonlinear function f_dk evaluated at a Q-dimensional latent variable x_n: f_ndk = f_dk(x_n). To complete the generative model, we assign an isotropic Gaussian prior with standard deviation σ_x to the latent variables, and a Gaussian process prior to each of the nonlinear functions. We also consider a set of M auxiliary variables Z = (z_1, …, z_M), often called inducing inputs. These inputs lie in the latent space, with their corresponding outputs U lying in the weight space (together with the f_ndk). The inducing points are used as “support” for the function: evaluating the covariance function of the GP on these instead of the entire dataset allows us to perform approximate inference with O(M²N) time complexity instead of O(N³), where M is the number of inducing points and N is the number of data points (Quiñonero-Candela & Rasmussen, 2005).

The model is expressed as:

    x_nq ∼ N(0, σ_x²),                          (1)
    f_dk ∼ GP(0, K_d),                          (2)
    f_ndk = f_dk(x_n),                          (3)
    y_nd ∼ Softmax(f_nd1, …, f_ndK),            (4)

for n ∈ [N] (the set of naturals between 1 and N), d ∈ [D], k ∈ [K], q ∈ [Q], and where the Softmax distribution is defined as,

    Softmax(y = k; f) = exp(f_k − lse(f)),      (5)
    lse(f) = log Σ_{k'=1}^{K} exp(f_{k'}),      (6)

for k ∈ [K] and with f = (f_1, …, f_K).

Following our medical diagnosis example from the introduction, each patient n is modelled by a latent x_n; for each examination d, x_n induces a sequence of weights (f_nd1, …, f_ndK), one weight for each possible test result k, each following a Gaussian process; a Softmax returns test result y_nd based on these weights, resulting in a patient’s medical assessment y_n.
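The generative process above can be sketched numerically. The tanh network below is only a hypothetical stand-in for the GP-distributed functions f_dk, and the sizes (N, Q, D, K, hidden width 8) are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, Q, D, K = 5, 2, 3, 4    # patients, latent dims, examinations, results per test

X = rng.standard_normal((N, Q))                    # latent points x_n ~ N(0, I)

# Stand-in for the GP-distributed functions f_dk: a fixed random nonlinear map.
W1 = rng.standard_normal((D, Q, 8))
W2 = rng.standard_normal((D, 8, K))

Y = np.empty((N, D), dtype=int)
for d in range(D):
    F = np.tanh(X @ W1[d]) @ W2[d]                 # weights f_nd, shape (N, K)
    P = np.exp(F - F.max(1, keepdims=True))
    P /= P.sum(1, keepdims=True)                   # Softmax probabilities
    Y[:, d] = [rng.choice(K, p=p) for p in P]      # sampled test results y_nd
```

Each row of Y is one patient's vector of categorical test results, sampled from probabilities induced by that patient's latent location.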

Define F_d as the collection of weights (f_ndk) for the d-th variable, and define U_d as the corresponding inducing outputs. The joint distribution of (F_d, U_d), with the latent nonlinear functions f_dk marginalized under the GP assumption, is a multivariate Gaussian distribution. It is easy to verify that when we further marginalize the inducing outputs, we end up with a joint distribution of the form F_d ∼ N(0, K_d(X, X)). Therefore, the introduction of inducing outputs does not change the marginal likelihood of the data Y. The inducing outputs are used in the variational inference method in the next section, and the inducing inputs Z are treated as variational parameters.

We use the automatic relevance determination (ARD) radial basis function (RBF) covariance function for our model. ARD RBF is able to select the dimensionality of the latent space automatically and transform it non-linearly.
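A minimal sketch of an ARD RBF covariance function (parameter names are ours; jitter and variance handling are simplified). The per-dimension lengthscale is what allows automatic relevance determination: irrelevant latent dimensions are switched off by driving their lengthscale large:

```python
import numpy as np

def ard_rbf(A, B, lengthscales, variance=1.0):
    """ARD RBF covariance: k(a, b) = s2 * exp(-0.5 * sum_q ((a_q - b_q) / l_q)^2).

    A separate lengthscale l_q per latent dimension lets the model select the
    dimensionality of the latent space automatically."""
    diff = (A[:, None, :] - B[None, :, :]) / lengthscales
    return variance * np.exp(-0.5 * (diff ** 2).sum(-1))
```

Evaluated on a set of latent points against itself, this yields a symmetric positive semi-definite Gram matrix with the kernel variance on the diagonal.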

4 Inference

The marginal log-likelihood, also known as log-evidence, is intractable for our model due to the non-linearity of the covariance function of the GP and the Softmax likelihood function. We first describe a lower bound of the log-evidence (ELBO) by applying Jensen’s inequality with a variational distribution of the latent variables following Titsias & Lawrence (2010).

Consider a variational approximation to the posterior distribution of X, F, and U, factorized as follows:

    q(X, F, U) = q(X) p(F | X, U) q(U).      (7)

We can obtain the ELBO by applying Jensen’s inequality:

    log p(Y) = log ∫ p(X) p(U) p(F | X, U) p(Y | F) dX dU dF                          (8)
             ≥ ∫ q(X, F, U) log [ p(X) p(U) p(F | X, U) p(Y | F) / q(X, F, U) ] dX dU dF   (9–12)
             = −KL(q(X) ∥ p(X)) − KL(q(U) ∥ p(U))
               + Σ_{n=1}^{N} Σ_{d=1}^{D} E_{q(x_n) q(U) p(f_nd | x_n, U)} [ log Softmax(y_nd; f_nd) ]
             =: L                                                                      (13–14)

where

    p(f_dk | u_dk, X, Z) = N(f_dk; a_dk, B_d)      (15)

with

    a_dk = K_XZ K_ZZ⁻¹ u_dk,                        (16)
    B_d = K_XX − K_XZ K_ZZ⁻¹ K_ZX.                  (17)

Notice, however, that the integration over F in eq. 14 involves a nonlinear function (lse(·) from eq. 6) and is still intractable. Consequently, we do not have an analytical form for the optimal variational distribution of U, unlike in Titsias & Lawrence (2010). Instead of applying a further approximation/lower bound on L, we obtain better accuracy by following a sampling-based approach (Blei et al., 2012; Kingma & Welling, 2013; Rezende et al., 2014; Titsias & Lázaro-Gredilla, 2014), computing the lower bound and its derivatives with the Monte Carlo method.

Specifically, we draw samples of x_n, u_dk, and f_nd from q(x_n), q(u_dk), and p(f_nd | x_n, U) respectively, and estimate L with the sample average. Another advantage of using the Monte Carlo method is that we are not constrained to the limited choice of covariance functions for the GP that is otherwise required for an analytical solution in standard approaches to the GPLVM for continuous data (Titsias & Lawrence, 2010; Hensman et al., 2013).
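The sample-average idea can be illustrated in isolation. The sketch below estimates one expectation term E_{x ∼ N(m, s²)}[log Softmax(y; f(x))] by simple Monte Carlo; the function f_fn is a placeholder for the sampled GP weights, and the sanity check uses constant weights, for which the expectation is exactly log(1/2):

```python
import numpy as np

def log_softmax(f, k):
    # log Softmax(y = k; f), computed stably via the log-sum-exp trick
    mx = np.max(f)
    return f[k] - (mx + np.log(np.sum(np.exp(f - mx))))

def mc_estimate(y, m, s, f_fn, T=1000, seed=0):
    """Monte Carlo estimate of E_{x ~ N(m, s^2)}[log Softmax(y; f_fn(x))]."""
    rng = np.random.default_rng(seed)
    xs = m + s * rng.standard_normal(T)      # samples of the latent variable x
    return np.mean([log_softmax(f_fn(x), y) for x in xs])

# Constant weights give a uniform Softmax over two values, so the
# expectation is exactly log(1/2) regardless of x.
est = mc_estimate(y=0, m=0.0, s=1.0, f_fn=lambda x: np.array([0.0, 0.0]))
```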

We consider a mean field approximation for the latent points as in Titsias & Lawrence (2010) and a joint Gaussian distribution with the following factorisation for the inducing outputs:

    q(X) = Π_{n=1}^{N} Π_{q=1}^{Q} N(x_nq; m_nq, s_nq²),      (18)
    q(U) = Π_{d=1}^{D} Π_{k=1}^{K} N(u_dk; μ_dk, Σ_d),        (19)

where the covariance matrix Σ_d is shared for the same categorical variable d (remember that K is the number of values this categorical variable can take). The KL divergence terms in L can be computed analytically with the given variational distributions. The parameters we need to optimise over (note that their number can be reduced by transforming the latent space non-linearly to a second low-dimensional latent space, which is then transformed linearly to the weight space containing the points u_dk) include the hyper-parameters for the GP, the variational parameters for the inducing points (Z, μ_dk, Σ_d), and the means and standard deviations of the latent points (m_nq, s_nq).

4.1 Transforming the Random Variables

In order to obtain a Monte Carlo estimate of the gradients of L with low variance, a useful trick introduced in Kingma & Welling (2013) is to transform the random variables to be sampled so that the randomness does not depend on the parameters with respect to which the gradients are computed. We present the transformation of each variable to be sampled as follows.

Transforming X.

For the mean field approximation, the transformation for x_nq is straightforward:

    x_nq = m_nq + s_nq ε_nq^x,  ε_nq^x ∼ N(0, 1).      (20)
Transforming U.

The variational distribution of u_dk is a joint Gaussian distribution. Denote the Cholesky decomposition of Σ_d as L_d L_dᵀ. We can rewrite u_dk as

    u_dk = μ_dk + L_d ε_dk^u,  ε_dk^u ∼ N(0, I_M).     (21)

We optimize the lower triangular matrix L_d instead of Σ_d.

Transforming F.

Since the conditional distribution in Eq. 15 is factorized over d and k, we can define a new random variable for every f_dk, using the conditional mean and covariance from Eqs. 16 and 17:

    f_dk = a_dk + B_d^{1/2} ε_dk^f,  ε_dk^f ∼ N(0, I_N).      (22)

Notice that the transformation of the variables does not change the distribution of the original variables, and therefore does not change the KL divergence terms in Eq. 14.
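The effect of transformation (20) is easy to check directly: samples built from parameter-free noise still have the intended mean and standard deviation, while the noise ε itself is independent of the variational parameters (the values 1.5 and 0.4 below are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Variational parameters of one latent coordinate, x ~ N(m, s^2)
m, s = 1.5, 0.4

# Reparameterise: eps carries all the randomness and does not depend on
# (m, s), so a Monte Carlo objective built from x is differentiable in (m, s).
eps = rng.standard_normal(100_000)
x = m + s * eps
```

The empirical mean and standard deviation of x match m and s, confirming the transformation leaves the distribution unchanged.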

4.2 Lower Bound with Transformed Variables

Given the transformations we just defined, we can represent the lower bound as

    L = −KL(q(X) ∥ p(X)) − KL(q(U) ∥ p(U))
        + Σ_{n=1}^{N} Σ_{d=1}^{D} E_{ε^x, ε^u, ε^f} [ log Softmax(y_nd; f_nd(ε_n^x, ε^u, ε^f)) ]      (23–26)

where the expectation in the last line is with respect to the fixed distributions defined in Eqs. 20, 21 and 22. Each expectation term that involves the Softmax likelihood, denoted L_nd, can be estimated using Monte Carlo integration as

    L_nd ≈ (1/T) Σ_{i=1}^{T} log Softmax(y_nd; f_nd(ε_{n,i}^x, ε_i^u, ε_i^f))      (27–28)

with ε_{n,i}^x, ε_i^u, ε_i^f drawn from their corresponding distributions. Since these distributions do not depend on the parameters to be optimized, the derivatives of the objective function are now straightforward to compute with the same set of samples using the chain rule.

4.3 Stochastic Gradient Descent

We use gradient descent to find an optimal variational distribution. Gradient descent with noisy gradients is guaranteed to converge to a local optimum under certain conditions given a decreasing learning rate, but is hard to work with in practice. The initial values set for the learning rate influence the rate at which the algorithm converges, and misspecified values can cause it to diverge. For this reason new techniques have been proposed that handle noisy gradients well. Optimisers such as AdaGrad (Duchi et al., 2011), AdaDelta (Zeiler, 2012), and RMSPROP (Tieleman & Hinton, 2012) have been proposed, each handling the gradients slightly differently, all averaging over past gradients. Schaul et al. (2013) have studied the different techniques empirically, comparing them to one another on a variety of unit tests. They found that RMSPROP works better on many test sets compared to the other optimisers evaluated. We thus chose RMSPROP for our experiments.
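A minimal RMSPROP update, sketched on a noisy quadratic objective. The hyper-parameter values below are illustrative defaults, not the settings used in the paper:

```python
import numpy as np

def rmsprop(grad, w0, steps=2000, lr=0.01, decay=0.9, eps=1e-8):
    """Minimal RMSPROP: scale each step by a running RMS of past gradients,
    so no global learning-rate schedule needs to be tuned per problem."""
    w = np.array(w0, dtype=float)
    ms = np.zeros_like(w)
    for _ in range(steps):
        g = grad(w)
        ms = decay * ms + (1 - decay) * g ** 2   # running mean of squared grads
        w = w - lr * g / (np.sqrt(ms) + eps)
    return w

# Noisy gradient of a quadratic with minimum at w = [3, -1], mimicking a
# Monte Carlo gradient estimate.
rng = np.random.default_rng(0)
grad = lambda w: 2 * (w - np.array([3.0, -1.0])) + 0.1 * rng.standard_normal(2)
w_opt = rmsprop(grad, [0.0, 0.0])
```

Despite the gradient noise, the iterate settles close to the true minimum.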

A major advantage of our inference is that it is extremely easy to implement and adapt. The straightforward computation of derivatives through the expectation makes it possible to use symbolic differentiation. We use Theano (Bergstra et al., 2010) for the inference implementation, where the generative model is implemented as in Eqs. 20, 21 and 22, and the optimisation objective, evaluated on samples from the generative model, is given by Eq. 26.

5 Experimental Results

We evaluate the categorical latent Gaussian process (CLGP), comparing it to existing models for multivariate categorical distribution estimation. These include models based on a discrete latent representation (such as the Dirichlet-Multinomial), and models with a continuous latent representation and a linear transformation of the latent space (the latent Gaussian model; LGM, Khan et al., 2012). We demonstrate over-fitting problems with the LGM and inspect the robustness of our inference.

For the following experiments, both the linear and non-linear models were initialised with a 2D latent space. The mean values of the latent points, m_nq, were initialised at random following a standard normal distribution. The mean values of the inducing outputs, μ_dk, were initialised from a normal distribution with small standard deviation; this is equivalent to using a near-uniform initial distribution over the categorical values. We initialise the standard deviation of each latent point, s_nq, to 0.1, and initialise the length-scales of the ARD RBF covariance function to 0.1. We then optimise the variational distribution for 500 iterations. At every iteration we optimise the various quantities while holding q(X)’s variational parameters fixed, and then optimise q(X)’s variational parameters holding the other quantities fixed.

Our setting supports semi-supervised learning with partially observed data. The latents of partially observed points are optimised with the training set, and then used to predict the missing values. We assess the performance of the models using test-set perplexity (a measure of how much the model is surprised by the missing test values), defined as the exponential of the negative average log predicted probability.
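The metric is a direct transcription of its definition (our code, not the authors'): exponentiate the negative mean of the log probabilities the model assigns to the held-out true values.

```python
import numpy as np

def perplexity(pred_probs):
    """Test-set perplexity: the exponential of the negative average log
    probability assigned to the held-out (true) categorical values."""
    return float(np.exp(-np.mean(np.log(pred_probs))))
```

Predicting uniformly over K values gives perplexity K, and always predicting the true value with probability 1 gives the minimum perplexity of 1.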

(a) LGM
(b) CLGP
Figure 4: Density over the latent space for XOR as predicted by the linear model (top, LGM) and the non-linear model (bottom, CLGP). Each figure shows the probability as a function of the latent space location, for the first, second, and third digits in the triplet (left to right). The darker the shade of green, the higher the probability. In blue are the latents corresponding to the training points, in yellow are the latents corresponding to the four partially observed test points, and in red are the inducing points used to support the function.

5.1 Linear Models Have Difficulty with Multi-modal Distributions

We show the advantages of non-linear categorical distribution estimation compared to linear estimation, evaluating the CLGP against the linear LGM. We implement the latent Gaussian model using a linear covariance function in our model; we remove the KL divergence term in the lower bound following the model specification in Khan et al. (2012), and use our inference scheme described above. Empirically, the Monte Carlo inference scheme with the linear model results in the same test error on the ASES survey dataset (Inoguchi, 2008) as the piece-wise bound based inference scheme developed in Khan et al. (2012).

5.1.1 Relation Embedding – Exclusive Or

A simple example of relational learning (following Paccanaro & Hinton (2001)) can be used to demonstrate when linear latent space models fail. In this task we are given a dataset with example relations and the model is to capture the distribution that generated them. A non-linear dataset is constructed using the XOR (exclusive or) relation. We collect 25 positive examples of each assignment of the binary relation (triplets of the form (a, b, a XOR b), i.e. (0, 0, 0), (0, 1, 1), and so on). We then maximise the variational lower bound using RMSPROP for both the linear and non-linear models, with 20 samples for the Monte Carlo integration. We add four more triplets to the dataset, each with its third value missing. We evaluate the probabilities the models assign to each of the missing values (a task also known as imputation) and report the results.
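The dataset construction is simple to reproduce. The encoding of the four test triplets as pairs with a missing third value is our reading of the setup:

```python
import numpy as np

# 25 positive examples of each assignment of the XOR relation (a, b, a XOR b)
triplets = [(a, b, a ^ b) for a in (0, 1) for b in (0, 1)]
data = np.array(triplets * 25)            # 100 training triplets

# Four partially observed test triplets: (a, b) given, third value to impute
test_pairs = [(a, b) for a in (0, 1) for b in (0, 1)]
```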

We assessed the test-set perplexity repeating the experiment 3 times and averaging the results. The linear model obtained a test-set perplexity (with standard deviation) of , whereas the non-linear model obtained a test-set perplexity of . A perplexity of 1 corresponds to always predicting correctly.

During optimisation the linear model jumps between different local modes, trying to capture all four possible triplets (fig. 4). The linear model assigns probabilities to the missing values by capturing some of the triplets well, but cannot assign high probability to the others. In contrast, the CLGP model is able to capture all possible values of the relation. Sampling probability vectors from the latent variational posterior of both models, we get a histogram of the posterior distribution (fig. 8). As can be seen, the CLGP model fully captures the distribution whereas the linear model cannot.

(a) LGM
(b) CLGP
(c) Ground truth
Figure 8: Histogram of categorical values for XOR (encoded in binary for the 8 possible values) for samples drawn from the posterior of the latent space of the linear model (left, LGM), the non-linear model (middle, CLGP), and the data used for training (right).

5.1.2 Relation Learning – Complex Example

We repeat the previous experiment with a more complex relation. We generated 1000 strings from a simple probabilistic context free grammar with non-terminals , start symbol , terminals , and derivation rules:

where the probability of following a derivation rule is given inside square brackets. Following the start symbol and the derivation rules we generate strings. We add “start” and “end” symbols to each string, and then extract all triplets of consecutive symbols from these strings as training examples. This is common practice in text preprocessing in the natural language community. It allows us to generate relations from scratch by conditioning on the start prefix to generate the first symbol, then conditioning on the updated prefix, and continuing in this vein iteratively. Note that a simple histogram-based approach will do well on this dataset (and the previous dataset) as it is not sparse.
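A minimal sketch of the triplet extraction, assuming one start and one end padding symbol (the exact padding and symbol names are our assumptions):

```python
def to_triplets(strings, start="<s>", end="</s>"):
    """Pad each string with start/end symbols and extract every window of
    three consecutive symbols as one training example."""
    examples = []
    for s in strings:
        symbols = [start] + list(s) + [end]
        examples += [tuple(symbols[i:i + 3]) for i in range(len(symbols) - 2)]
    return examples
```

For example, the string "ab" yields the two training triplets (<s>, a, b) and (a, b, </s>).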

Split LGM CLGP
1
2
3
Table 1: Test-set perplexity on relational data. Compared are the linear LGM and the non-linear CLGP.

We compared the test-set perplexity of the non-linear CLGP (with 50 inducing points) to that of the linear LGM on these training inputs, repeating the same experiment set-up as before. We impute one missing value at random in each test string, using of the strings as a test set with 3 different splits. The results are presented in table 1. The linear model cannot capture this data well, and seems to get very confident about misclassified values (resulting in very high test-set perplexity).

5.2 Sparse Small Data

Split Baseline Multinomial Uni-Dir-Mult Bi-Dir-Mult LGM CLGP
1
2
3
Table 2: Test-set perplexity on the breast cancer dataset, predicting randomly missing categorical test results. The models compared are: Baseline – predicting uniform probability for all values, Multinomial – predicting the probability for a missing value based on its frequency, Uni-Dir-Mult – Unigram Dirichlet Multinomial with concentration parameter α, Bi-Dir-Mult – Bigram Dirichlet Multinomial with concentration parameter α, LGM, and the proposed model (CLGP).

We assess the model in the real-world domain of small sparse data. We compare the CLGP model to a histogram based approach, demonstrating the difficulty with frequency models for sparse data. We further compare our model to the linear LGM.

5.2.1 Medical Diagnosis

We use the Wisconsin breast cancer dataset, for which obtaining samples is a long and expensive process (Zwitter & Soklic, 1988). The dataset is composed of 683 data points, with 9 categorical variables taking values between 1 and 10 and an additional categorical variable taking the values 0 and 1 – about 2 × 10⁹ possible configurations. We use three quarters of the dataset for training and leave the rest for testing, averaging the test-set perplexity over three repetitions of the experiment. We use three different random splits of the dataset as the distribution of the data can be fairly different among splits. In the test set we randomly remove one of the 10 categorical values and test the models’ ability to recover that value. Note that this is a harder task than the usual use of this dataset for binary classification. We use the same model set-up as in the first experiment.

We compare our model (CLGP) to a baseline predicting uniform probability for all values (Baseline), a multinomial model predicting the probability of a missing value based on its frequency (Multinomial), a unigram Dirichlet Multinomial model with concentration parameter α (Uni-Dir-Mult), a bigram Dirichlet Multinomial model with concentration parameter α (Bi-Dir-Mult), and the linear continuous space model (LGM). More complicated frequency based approaches are possible, performing variable selection and then looking at frequencies of triplets of variables; these would be very difficult to work with in this sparse small data problem.
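For reference, the unigram Dirichlet-Multinomial predictive reduces to additive smoothing of the observed counts (the concentration value below is illustrative, not the one used in the experiments):

```python
import numpy as np

def dir_mult_predictive(counts, alpha=1.0):
    """Posterior predictive of a Dirichlet-Multinomial with symmetric
    concentration alpha: p(y = k) = (n_k + alpha) / (n + K * alpha)."""
    counts = np.asarray(counts, dtype=float)
    return (counts + alpha) / (counts.sum() + alpha * len(counts))

# A value unseen in training (count 0) still receives non-zero probability,
# which keeps the test-set perplexity finite.
p = dir_mult_predictive([5, 0, 3], alpha=1.0)
```

This is exactly why the smoothed baselines avoid the failure of the raw frequency model on values never observed in training.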

We evaluate the models’ performance using the test-set perplexity metric as before (table 2). As can be seen, the frequency based approaches obtain worse results than the continuous latent space approaches. The frequency model with no smoothing obtains infinite perplexity for split 2 because one of the test points has a value not observed before in the training set; using smoothing solves this. The baseline (predicting uniformly) obtains the highest perplexity on average. The linear model exhibits high variance for the last split, and in general has a higher perplexity standard deviation than the non-linear model.

(a) Example alphadigits
(b) Train & Test log perplexity
(c) LGM latent space
(d) CLGP latent space
Figure 13: Binary alphadigit dataset. (a) One example of each class. (b) Train and test log perplexity over 1000 iterations. (c, d) 2D latent embedding of the 36 alphadigit classes with LGM (c) and the CLGP model (d). Each color-marker combination denotes one class.

5.2.2 Terror Warning Effects on Political Attitude

We further compare the models on raw data collected in an experimental study of the effects of government terror warnings on political attitudes (from the START Terrorism Data Archive Dataverse, obtained from the Harvard Dataverse Network: thedata.harvard.edu/dvn/dv/start/faces/study/StudyPage.xhtml?studyId=71190). This dataset consists of 1282 cases and 49 variables. We used 1000 cases for training and 282 for testing. Of the 49 variables, we used the 17 corresponding to categorical answers to study questions (the other variables are subjects’ geographic information and an answer to an open question). The 17 variables take 5 or 6 possible values each, with some values missing.

Baseline Multinomial LGM CLGP
2.50 2.20
Table 3: Test-set perplexity for terror warning effects on political attitude. Compared are the linear LGM and the non-linear CLGP.

We repeat the same experiment set-up as before, with 6 latent dimensions, 100 inducing points, 5 samples to estimate the lower bound, and 1000 iterations of the optimiser. We compare a Baseline using a uniform distribution for all values, a Multinomial model using the frequencies of the individual variables, the linear LGM, and the CLGP. The results are given in table 3. On the training set, the CLGP obtained a training error of , and the LGM obtained a training error of . The linear model seems to over-fit to the data.

5.2.3 Handwritten Binary Alphadigits

We evaluate the performance of our model on a sparse dataset with a large number of dimensions. The dataset, Binary Alphadigits, is composed of binary images of 10 handwritten digits and 26 handwritten capital letters, each class with 39 images (obtained from http://www.cs.nyu.edu/~roweis/data.html). We resize each image and obtain a dataset of 1404 data points, each with 80 binary variables. We repeat the same experiment set-up as before with 2 latent dimensions for ease of visual comparison of the latent embeddings. Fig. 13(a) shows an example of each alphadigit class. Each class is randomly split into 30 training and 9 test examples. In the test set, we randomly remove a portion of the pixels and evaluate the prediction error.

Fig. 13(b) shows the training and test error in log perplexity for both models. LGM converges faster than CLGP but ends up with a much higher prediction error due to its limited modelling capacity with 2-dimensional latent variables. This is validated by the latent embeddings shown in figs. 13(c) and 13(d). Each color-marker combination denotes one alphadigit class. As can be seen, CLGP separates the different classes better than LGM even though the class labels are not used in training.

(a) LGM
(b) CLGP
Figure 16: Train and test log perplexities for LGM (left) and the CLGP model (right) for one of the splits of the breast cancer dataset. The train log perplexity of LGM decreases while the test log perplexity starts increasing at iteration 50.

5.3 Latent Gaussian Model Over-fitting

It is interesting to note that the latent Gaussian model (LGM) over-fits on the different datasets. It is possible to attribute this to the lack of regularisation over the linear transformation – the weight matrix used to transform the latent space into the Softmax weights is optimised without a prior. In all repetitions of the medical diagnosis experiment, for example, the model’s training log perplexity decreases while the test log perplexity starts increasing (see fig. 16 for the train and test log perplexities of split 1 of the breast cancer dataset). Note that even though the test log perplexity starts increasing, at its lowest point it is still higher than the final test log perplexity of the CLGP model. This is observed for all splits and all repetitions.

It is worth noting that we also compared the CLGP to the LGM on the ASES survey dataset (Inoguchi, 2008), used to assess the LGM in (Khan et al., 2012). We obtained a test perplexity of 1.98, compared to LGM’s 1.97, and concluded that the dataset is fairly linearly separable.

5.4 Inference Robustness

Figure 17: ELBO and standard deviation per iteration (on log scale) for the Alphadigits dataset.

Lastly, we inspect the robustness of our inference by evaluating the standard deviation of the Monte Carlo (MC) estimate as optimisation progresses. Fig. 17 shows the ELBO and the MC standard deviation per iteration (on a log scale) for the Alphadigit dataset. The estimator’s standard deviation seems to decrease as the approximating variational distribution fits the posterior, which makes the stochastic optimisation easier and speeds up convergence. A theoretical study of this behaviour would be of interest for future research.
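Such an estimate of the MC standard deviation can be obtained by repeating the single-sample ELBO evaluation. A minimal sketch, where `elbo_sample` is a hypothetical function returning one stochastic ELBO estimate:

```python
import numpy as np

def elbo_mc_stats(elbo_sample, n_samples=32):
    """Mean and standard error of the MC ELBO estimator over repeated draws."""
    draws = np.array([elbo_sample() for _ in range(n_samples)])
    return draws.mean(), draws.std(ddof=1) / np.sqrt(n_samples)
```

Logging these two quantities per iteration produces curves like those in fig. 17.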

6 Discussion and Conclusions

We have presented the first Bayesian model capable of capturing sparse multi-modal categorical distributions based on a continuous representation. The model ties together many existing models in the field, linking the linear and discrete latent Gaussian models to the non-linear, continuous-space Gaussian process latent variable model and to the fully observed discrete Gaussian process classification.

In future work we aim to address shortcomings of the current model, such as scalability and robustness. We plan to scale the model following research on GP scalability (Hensman et al., 2013; Gal et al., 2014). The robustness of the model depends on the sample variance in the Monte Carlo integration. As discussed in Blei et al. (2012), variance reduction techniques can help in the estimation of the integral, and methods such as the one developed in Wang et al. (2013) can effectively increase inference robustness.
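As a toy illustration of such variance reduction (this is the generic control-variate trick, not the specific estimator of Wang et al., 2013): subtracting a correlated quantity with known mean shrinks the variance of an MC estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)

f = x ** 2 + x   # integrand whose expectation E[f(x)] we estimate
h = x            # control variate with known mean E[h] = 0

# Optimal scaling coefficient estimated from the samples.
c = np.cov(f, h)[0, 1] / np.var(h)

plain_std = f.std()            # spread of the naive estimator's samples
cv_std = (f - c * h).std()     # spread after subtracting the control variate
```

Here subtracting `c * h` removes the linear part of `f`, leaving a lower-variance residual while the expectation is recovered by adding back `c * E[h] = 0`.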

References

  • Bengio et al. (2006) Bengio, Yoshua, Schwenk, Holger, Senécal, Jean-Sébastien, Morin, Fréderic, and Gauvain, Jean-Luc. Neural probabilistic language models. In Innovations in Machine Learning, pp. 137–186. Springer, 2006.
  • Bergstra et al. (2010) Bergstra, James, Breuleux, Olivier, Bastien, Frédéric, Lamblin, Pascal, Pascanu, Razvan, Desjardins, Guillaume, Turian, Joseph, Warde-Farley, David, and Bengio, Yoshua. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June 2010. Oral Presentation.
  • Blei & Lafferty (2006) Blei, David and Lafferty, John. Correlated topic models. Advances in neural information processing systems, 18:147, 2006.
  • Blei et al. (2012) Blei, David M, Jordan, Michael I, and Paisley, John W. Variational Bayesian inference with stochastic search. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pp. 1367–1374, 2012.
  • Böhning (1992) Böhning, Dankmar. Multinomial logistic regression algorithm. Annals of the Institute of Statistical Mathematics, 44(1):197–200, 1992.
  • Collobert & Weston (2008) Collobert, Ronan and Weston, Jason. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning, pp. 160–167. ACM, 2008.
  • Duchi et al. (2011) Duchi, John, Hazan, Elad, and Singer, Yoram. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159, 2011.
  • Gal et al. (2014) Gal, Yarin, van der Wilk, Mark, and Rasmussen, Carl E. Distributed variational inference in sparse Gaussian process regression and latent variable models. In Advances in Neural Information Processing Systems 27. 2014.
  • Hensman et al. (2013) Hensman, James, Fusi, Nicolo, and Lawrence, Neil D. Gaussian processes for big data. arXiv preprint arXiv:1309.6835, 2013.
  • Inoguchi (2008) Inoguchi, Takashi. Asia Europe survey (ASES): A multinational comparative study in 18 countries, 2001. In Inter-university Consortium for Political and Social Research (ICPSR), 2008.
  • Jaakkola & Jordan (1997) Jaakkola, Tommi S. and Jordan, Michael I. A variational approach to Bayesian logistic regression models and their extensions. In Proceedings of the Sixth International Workshop on Artificial Intelligence and Statistics, 1997.
  • Khan et al. (2012) Khan, Mohammad E, Mohamed, Shakir, Marlin, Benjamin M, and Murphy, Kevin P. A stick-breaking likelihood for categorical data analysis with latent Gaussian models. In International conference on Artificial Intelligence and Statistics, pp. 610–618, 2012.
  • Kingma & Welling (2013) Kingma, Diederik P and Welling, Max. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
  • Knowles & Minka (2011) Knowles, David A and Minka, Tom. Non-conjugate variational message passing for multinomial and binary regression. In Advances in Neural Information Processing Systems, pp. 1701–1709, 2011.
  • Lawrence (2005) Lawrence, Neil. Probabilistic non-linear principal component analysis with Gaussian process latent variable models. The Journal of Machine Learning Research, 6:1783–1816, 2005.
  • Marlin et al. (2011) Marlin, Benjamin M, Khan, Mohammad Emtiyaz, and Murphy, Kevin P. Piecewise bounds for estimating bernoulli-logistic latent Gaussian models. In ICML, pp. 633–640, 2011.
  • Paccanaro & Hinton (2001) Paccanaro, Alberto and Hinton, Geoffrey E. Learning distributed representations of concepts using linear relational embedding. Knowledge and Data Engineering, IEEE Transactions on, 13(2):232–244, 2001.
  • Quiñonero-Candela & Rasmussen (2005) Quiñonero-Candela, Joaquin and Rasmussen, Carl Edward. A unifying view of sparse approximate Gaussian process regression. The Journal of Machine Learning Research, 6:1939–1959, 2005.
  • Rezende et al. (2014) Rezende, Danilo J, Mohamed, Shakir, and Wierstra, Daan. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pp. 1278–1286, 2014.
  • Schaul et al. (2013) Schaul, Tom, Antonoglou, Ioannis, and Silver, David. Unit tests for stochastic optimization. arXiv preprint arXiv:1312.6055, 2013. URL http://arxiv.org/abs/1312.6055.
  • Tieleman & Hinton (2012) Tieleman, T. and Hinton, G. Lecture 6.5 - rmsprop, COURSERA: Neural networks for machine learning, 2012.
  • Tipping & Bishop (1999) Tipping, Michael E and Bishop, Christopher M. Probabilistic principal component analysis. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 61(3):611–622, 1999.
  • Titsias & Lawrence (2010) Titsias, Michalis and Lawrence, Neil. Bayesian Gaussian process latent variable model. In Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS), 2010.
  • Titsias & Lázaro-Gredilla (2014) Titsias, Michalis and Lázaro-Gredilla, Miguel. Doubly stochastic variational Bayes for non-conjugate inference. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pp. 1971–1979, 2014.
  • Titsias (2009) Titsias, Michalis K. Variational learning of inducing variables in sparse Gaussian processes. In Artificial Intelligence and Statistics 12, pp. 567–574, 2009.
  • Wang et al. (2013) Wang, Chong, Chen, Xi, Smola, Alex, and Xing, Eric. Variance reduction for stochastic gradient optimization. In Advances in Neural Information Processing Systems, pp. 181–189, 2013.
  • Williams & Rasmussen (2006) Williams, Christopher KI and Rasmussen, Carl Edward. Gaussian processes for machine learning. the MIT Press, 2(3):4, 2006.
  • Zeiler (2012) Zeiler, Matthew D. Adadelta: An adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
  • Zwitter & Soklic (1988) Zwitter, M. and Soklic, M. Breast cancer dataset. In University Medical Centre, Institute of Oncology, Ljubljana, Yugoslavia, 1988.

Appendix A Appendix

a.1 Code

The basic model and inference (without covariance matrix caching) can be implemented in 20 lines of Python and Theano for each categorical variable:

m = T.dmatrix('m')  # ... and other variables
X = m + s * randn(N, Q)              # sample the latent points
U = mu + L.dot(randn(M, K))          # sample the inducing outputs
Kmm = RBF(sf2, l, Z)                 # covariance between inducing inputs
Kmn = RBF(sf2, l, Z, X)              # cross-covariance
Knn = RBFnn(sf2, l, X)               # diagonal of the data covariance
KmmInv = T.matrix_inverse(Kmm)
A = KmmInv.dot(Kmn)
B = Knn - T.sum(Kmn * KmmInv.dot(Kmn), 0)
F = A.T.dot(U) + B[:, None]**0.5 * randn(N, K)   # sample F
S = T.nnet.softmax(F)
KL_U, KL_X = get_KL_U(), get_KL_X()
LS = T.sum(T.log(T.sum(Y * S, 1))) - KL_U - KL_X
LS_func = theano.function(['''inputs'''], LS)
dLS_dm = theano.function(['''inputs'''], T.grad(LS, m))  # and others
# ... and optimise LS with RMS-PROP
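The final comment above refers to RMSPROP (Tieleman & Hinton, 2012). A minimal sketch of that update rule, with illustrative default hyperparameters, ascending since LS is a lower bound to be maximised:

```python
import numpy as np

def rmsprop_step(param, grad, cache, lr=1e-3, decay=0.9, eps=1e-8):
    """One RMSPROP ascent step: scale the gradient by a running RMS of
    its recent magnitudes, held in `cache`."""
    cache = decay * cache + (1.0 - decay) * grad ** 2
    param = param + lr * grad / (np.sqrt(cache) + eps)
    return param, cache
```

In practice this step would be applied to each variational parameter (m, s, mu, L, Z and the kernel hyperparameters) using the gradients returned by `T.grad`.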