Bayesian Manifold Learning: The Locally Linear Latent Variable Model (LL-LVM)

Mijung Park, et al. (UCL), 10/24/2014

We introduce the Locally Linear Latent Variable Model (LL-LVM), a probabilistic model for non-linear manifold discovery that describes a joint distribution over observations, their manifold coordinates and locally linear maps conditioned on a set of neighbourhood relationships. The model allows straightforward variational optimisation of the posterior distribution on coordinates and locally linear maps from the latent space to the observation space given the data. Thus, the LL-LVM encapsulates the local-geometry preserving intuitions that underlie non-probabilistic methods such as locally linear embedding (LLE). Its probabilistic semantics make it easy to evaluate the quality of hypothesised neighbourhood relationships, select the intrinsic dimensionality of the manifold, construct out-of-sample extensions and to combine the manifold model with additional probabilistic models that capture the structure of coordinates within the manifold.


1 Introduction

Many high-dimensional datasets comprise points derived from a smooth, lower-dimensional manifold embedded within the high-dimensional space of measurements and possibly corrupted by noise. For instance, biological or medical imaging data might reflect the interplay of a small number of latent processes that all affect measurements non-linearly. Linear multivariate analyses such as principal component analysis (PCA) or multidimensional scaling (MDS) have long been used to estimate such underlying processes, but cannot always reveal low-dimensional structure when the mapping is non-linear (or, equivalently, the manifold is curved). Thus, there has been substantial recent interest in algorithms to identify non-linear manifolds in data.

Many more-or-less heuristic methods for non-linear manifold discovery are based on the idea of preserving the geometric properties of local neighbourhoods within the data, while embedding, unfolding or otherwise transforming the data to occupy fewer dimensions. Thus, algorithms such as locally linear embedding (LLE) and Laplacian eigenmaps attempt to preserve local linear relationships or to minimise the distortion of local derivatives [1, 2]. Others, like isometric feature mapping (Isomap) or maximum variance unfolding (MVU), preserve local distances, estimating global manifold properties by continuation across neighbourhoods before embedding to lower dimensions by classical methods such as PCA or MDS [3]. While generally hewing to this same intuitive path, the range of available algorithms has grown very substantially in recent years [4, 5].

However, these approaches do not define distributions over the data or over the manifold properties. Thus, they provide no measures of uncertainty on manifold structure or on the low-dimensional locations of the embedded points; they cannot be combined with a structured probabilistic model within the manifold to define a full likelihood relative to the high-dimensional observations; and they provide only heuristic methods to evaluate the manifold dimensionality. As others have pointed out, they also make it difficult to extend the manifold definition to out-of-sample points in a principled way [6].

An established alternative is to construct an explicit probabilistic model of the functional relationship between low-dimensional manifold coordinates and each measured dimension of the data, assuming that the functions instantiate draws from Gaussian-process priors. The original Gaussian process latent variable model (GP-LVM) required optimisation of the low-dimensional coordinates, and thus still did not provide uncertainties on these locations or allow evaluation of the likelihood of a model over them [7]; however, a recent extension exploits an auxiliary-variable approach to optimise a more general variational bound, thus retaining approximate probabilistic semantics within the latent space [8]. The stochastic-process model for the mapping functions also makes it straightforward to estimate the function at previously unobserved points, thus generalising out-of-sample with ease. However, the GP-LVM gives up on the intuitive preservation of local neighbourhood properties that underpins the non-probabilistic methods reviewed above. Instead, the expected smoothness or other structure of the manifold must be defined by the Gaussian-process covariance function, chosen a priori.

Here, we introduce a new probabilistic model over high-dimensional observations, low-dimensional embedded locations, and locally linear maps between the high- and low-dimensional spaces within each neighbourhood, such that each group of variables is Gaussian distributed given the other two. This locally linear latent variable model (LL-LVM) thus respects the same intuitions as the common non-probabilistic manifold discovery algorithms, while still defining a full-fledged probabilistic model. Indeed, variational inference in this model follows more directly and with fewer separate bounding operations than the sparse auxiliary-variable approach used with the GP-LVM. Thus, uncertainty in the low-dimensional coordinates and in the manifold shape (defined by the local maps) is captured naturally. A lower bound on the marginal likelihood of the model makes it possible to select between different latent dimensionalities and, perhaps most crucially, between different definitions of neighbourhood, thus addressing an important unsolved issue with neighbourhood-defined algorithms. Unlike existing probabilistic frameworks with locally linear models, such as mixtures of factor analysers (MFA)-based and local tangent space analysis (LTSA)-based methods [9, 10, 11], LL-LVM does not require an additional step to obtain a globally consistent alignment of the low-dimensional local coordinates. (This is also true of one previous MFA-based method [12], which finds model parameters and global coordinates by variational methods similar to our own.)

This paper is organised as follows. In section 2, we introduce our generative model, LL-LVM, for which we derive the variational inference method in section 3. We briefly describe the out-of-sample extension for LL-LVM and contrast LL-LVM with GP-LVM at the end of section 3. In section 4, we demonstrate the approach on several real-world problems.

Notation:

In the following, a diagonal matrix with entries taken from the vector v is written diag(v). The vector of ones is 1 and the identity matrix is I. The Euclidean norm of a vector v is ‖v‖, and the Frobenius norm of a matrix M is ‖M‖_F. The Kronecker delta is denoted by δ_ij (δ_ij = 1 if i = j, and δ_ij = 0 otherwise). The Kronecker product of matrices A and B is A ⊗ B. For a random vector w, we denote the normalisation constant in its probability density function by Z_w. The expectation of a random vector w with respect to a density q is ⟨w⟩_q.

2 The model: LL-LVM

Suppose we have N data points y_1, …, y_N, with y_i ∈ R^{d_y}, and a graph G on N nodes with edge set E_G. We assume that there is a low-dimensional (latent) representation of the high-dimensional data, with coordinates x_1, …, x_N, x_i ∈ R^{d_x}, d_x < d_y. It will be helpful to concatenate the vectors to form y = [y_1^⊤, …, y_N^⊤]^⊤ and x = [x_1^⊤, …, x_N^⊤]^⊤.

Our key assumption is that the mapping between high-dimensional data and low-dimensional coordinates is locally linear (Fig. 1). The tangent spaces are approximated by the pairwise differences Δ_{y,ij} = y_j − y_i and Δ_{x,ij} = x_j − x_i between the ith point and its neighbouring points j. The matrix C_i ∈ R^{d_y × d_x} at the ith point linearly maps between those tangent spaces as

Δ_{y,ij} ≈ C_i Δ_{x,ij}.   (1)
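As an illustrative aside (not from the paper), the following sketch checks the local-linearity intuition behind Eq. (1) on a known smooth embedding: for a Swiss-roll-style map, the analytic Jacobian at x_i plays the role of C_i, and differences to nearby high-dimensional points are well approximated by that single linear map applied to the low-dimensional differences.

```python
import numpy as np

rng = np.random.default_rng(0)

def swiss_roll(x):
    """Map 2D latent coordinates x = (t, h) to 3D observations."""
    t, h = x[..., 0], x[..., 1]
    return np.stack([t * np.cos(t), h, t * np.sin(t)], axis=-1)

def jacobian(x):
    """Analytic d_y x d_x Jacobian of the embedding at a latent point x."""
    t, h = x
    return np.array([[np.cos(t) - t * np.sin(t), 0.0],
                     [0.0, 1.0],
                     [np.sin(t) + t * np.cos(t), 0.0]])

x_i = np.array([3.0, 0.5])                        # latent coordinate of point i
x_j = x_i + 0.05 * rng.standard_normal((5, 2))    # five nearby latent neighbours

dx = x_j - x_i                                    # low-dimensional tangent differences
dy = swiss_roll(x_j) - swiss_roll(x_i)            # high-dimensional tangent differences
C_i = jacobian(x_i)                               # plays the role of C_i in Eq. (1)

approx = dx @ C_i.T                               # C_i applied to each Delta_x
print("max approximation error:", np.max(np.abs(dy - approx)))  # small for close neighbours
```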

Under this assumption, we aim to find the distribution over the linear maps C = (C_1, …, C_N) and the latent variables x that best describes the data likelihood given the graph G:

p(y | G, θ) = ∫∫ p(y, C, x | G, θ) dC dx.   (2)

The joint distribution can be written in terms of priors on C and x and the likelihood of y as

p(y, C, x | G, θ) = p(y | C, x, G, θ) p(C | G, θ) p(x | G, θ).   (3)

In the following, we highlight the essential components of the Locally Linear Latent Variable Model (LL-LVM). Detailed derivations are given in the Appendix.

Adjacency matrix and Laplacian matrix

The edge set E_G of G for the N data points specifies an N × N symmetric adjacency matrix η. We write η_ij for the (i, j)th element of η, which is 1 if y_i and y_j are neighbours and 0 if not (including on the diagonal). The graph Laplacian matrix is then L = diag(η 1) − η.
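A minimal sketch (assumptions: pure NumPy, k-nearest-neighbour construction, symmetrisation by taking the union of neighbourhoods; not the released implementation) of how the adjacency matrix η and Laplacian L just described can be built:

```python
import numpy as np

def knn_adjacency_and_laplacian(Y, k):
    """Build a symmetric k-NN adjacency matrix eta and Laplacian L = diag(eta 1) - eta.

    Y : (N, d_y) array of observations.
    """
    N = Y.shape[0]
    # pairwise squared Euclidean distances
    d2 = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)              # a point is not its own neighbour
    eta = np.zeros((N, N))
    nbrs = np.argsort(d2, axis=1)[:, :k]      # indices of the k nearest neighbours
    rows = np.repeat(np.arange(N), k)
    eta[rows, nbrs.ravel()] = 1.0
    eta = np.maximum(eta, eta.T)              # symmetrise: neighbour in either direction
    L = np.diag(eta.sum(axis=1)) - eta        # graph Laplacian
    return eta, L

# toy usage
Y = np.random.default_rng(1).standard_normal((50, 3))
eta, L = knn_adjacency_and_laplacian(Y, k=5)
print(eta.shape, L.shape, np.allclose(L, L.T))
```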

Prior on x

We assume that the latent variables are zero-centered with a bounded expected scale, and that latent variables corresponding to neighbouring high-dimensional points are close (in Euclidean distance). Formally, the log prior on the coordinates is then

log p(x | G, α) = −(α/2) Σ_i ‖x_i‖² − (1/2) Σ_{i,j} η_ij ‖x_i − x_j‖² − log Z_x,

where the parameter α controls the expected scale (larger α implies a smaller scale). This prior can be written as a multivariate normal distribution on the concatenated latent variables:

p(x | G, α) = N(x | 0, Π),   with   Π^{-1} = α I_{N d_x} + 2 L ⊗ I_{d_x}.
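The following sketch builds the prior precision over the concatenated coordinates under the form reconstructed above (Π^{-1} = α I + 2 L ⊗ I; treat this exact form as an assumption) and checks numerically that its quadratic form reproduces the pairwise penalty in the log prior:

```python
import numpy as np

def x_prior_precision(L, alpha, d_x):
    """Precision of the Gaussian prior on x = [x_1; ...; x_N] (length N*d_x)."""
    N = L.shape[0]
    return alpha * np.eye(N * d_x) + 2.0 * np.kron(L, np.eye(d_x))

# check: x^T Pi^{-1} x equals  alpha sum_i ||x_i||^2 + sum_ij eta_ij ||x_i - x_j||^2
rng = np.random.default_rng(0)
N, d_x, alpha = 6, 2, 0.5
eta = (rng.random((N, N)) < 0.4).astype(float)
eta = np.triu(eta, 1); eta = eta + eta.T               # symmetric, zero diagonal
L = np.diag(eta.sum(1)) - eta
X = rng.standard_normal((N, d_x))
x = X.reshape(-1)                                      # concatenated coordinates

quad = x @ x_prior_precision(L, alpha, d_x) @ x
penalty = alpha * np.sum(X ** 2) + np.sum(
    eta * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
print(np.allclose(quad, penalty))                      # True
```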

Figure 1: The locally linear map C_i for the ith data point transforms the tangent space at x_i in the low-dimensional space to the tangent space at the corresponding data point y_i in the high-dimensional space. A neighbouring data point is denoted by y_j and the corresponding latent variable by x_j.
Prior on C

We assume that the linear maps corresponding to neighbouring points are similar in terms of Frobenius norm (thus favouring a smooth manifold of low curvature). This gives

log p(C | G, ε) = −(ε/2) ‖Σ_i C_i‖²_F − (1/2) Σ_{i,j} η_ij ‖C_i − C_j‖²_F − log Z_C,   (4)

which is the log of a matrix normal density and so gives a matrix normal prior on the collected maps C; its precision (given explicitly in the Appendix) is determined by ε and the graph Laplacian L. In our implementation, we fix ε to a small value (ε sets the scale of the average linear map and ensures the prior precision matrix is invertible), since the magnitude of the product C_i Δ_{x,ij} is determined by optimising the hyper-parameter α above.
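A small sketch evaluating the penalty in Eq. (4) directly (the Frobenius-norm smoothness term plus the ε term on the summed maps; the ε term's exact form is an assumption consistent with the reconstruction above):

```python
import numpy as np

def c_prior_penalty(C, eta, eps):
    """Negative log prior on the local maps, up to the normaliser Z_C.

    C   : (N, d_y, d_x) array of local linear maps C_i.
    eta : (N, N) symmetric adjacency matrix.
    """
    mean_term = 0.5 * eps * np.sum(C.sum(axis=0) ** 2)        # penalises the summed (average) map
    diffs = C[:, None] - C[None, :]                           # C_i - C_j for all pairs
    smooth_term = 0.5 * np.sum(eta * np.sum(diffs ** 2, axis=(-2, -1)))
    return mean_term + smooth_term

rng = np.random.default_rng(2)
N, d_y, d_x = 8, 3, 2
eta = np.zeros((N, N)); idx = np.arange(N - 1)
eta[idx, idx + 1] = eta[idx + 1, idx] = 1.0                   # chain graph
C = rng.standard_normal((N, d_y, d_x))
print(c_prior_penalty(C, eta, eps=1e-3))
```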

Likelihood

Under the local-linearity assumption, we penalise the approximation error of Eq. (1), which yields the log likelihood

log p(y | C, x, G, V) = −(ε/2) ‖Σ_i y_i‖² − (1/2) Σ_{i,j} η_ij (Δ_{y,ij} − C_i Δ_{x,ij})^⊤ V^{-1} (Δ_{y,ij} − C_i Δ_{x,ij}) − log Z_y,   (5)

where Δ_{y,ij} = y_j − y_i and Δ_{x,ij} = x_j − x_i as in Eq. (1). (The first term centers the data and ensures the distribution can be normalised; it acts in a subspace orthogonal to that modelled by C and x, and so its value does not affect the resulting manifold model.) Thus, y is drawn from a multivariate normal distribution

p(y | C, x, G, V) = N(y | μ_y, Σ_y),

whose mean μ_y and covariance Σ_y are determined by C, x, V, ε and the graph Laplacian; explicit forms are given in the Appendix. For computational simplicity, we assume V^{-1} = γ I_{d_y}. The graphical representation of the generative process underlying the LL-LVM is given in Fig. 2.
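The core of Eq. (5) is the penalised approximation error of Eq. (1). A minimal sketch of that term, assuming V^{-1} = γ I_{d_y} as above and omitting the centering term and the normaliser:

```python
import numpy as np

def approximation_error_penalty(Y, X, C, eta, gamma):
    """0.5 * gamma * sum_ij eta_ij * ||(y_j - y_i) - C_i (x_j - x_i)||^2.

    Y : (N, d_y), X : (N, d_x), C : (N, d_y, d_x), eta : (N, N) adjacency.
    """
    dY = Y[None, :, :] - Y[:, None, :]            # dY[i, j] = y_j - y_i
    dX = X[None, :, :] - X[:, None, :]            # dX[i, j] = x_j - x_i
    pred = np.einsum('iab,ijb->ija', C, dX)       # C_i applied to each x-difference
    resid = dY - pred
    return 0.5 * gamma * np.sum(eta * np.sum(resid ** 2, axis=-1))

rng = np.random.default_rng(3)
N, d_y, d_x = 10, 3, 2
Y, X = rng.standard_normal((N, d_y)), rng.standard_normal((N, d_x))
C = rng.standard_normal((N, d_y, d_x))
eta = np.ones((N, N)) - np.eye(N)                 # fully connected toy graph
print(approximation_error_penalty(Y, X, C, eta, gamma=1.0))
```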

Figure 2: Graphical representation of the generative process in LL-LVM. Given a dataset, we construct a neighbourhood graph G. The distribution over the latent variables x is controlled by the graph G as well as the parameter α. The distribution over the linear maps C is also governed by the graph G. The latent variables x and the linear maps C together determine the data likelihood.

3 Variational inference

Our goal is to infer the latent variables (x, C) as well as the parameters θ in LL-LVM. We infer them by maximising the lower bound of the marginal likelihood of the observations,

log p(y | G, θ) ≥ ∫∫ q(x, C) log [ p(y, x, C | G, θ) / q(x, C) ] dx dC =: F(q(x, C), θ).   (6)

Following the common treatment for computational tractability, we assume the posterior over (x, C) factorises as q(x, C) = q(x) q(C) [13]. We maximise the lower bound w.r.t. q(x), q(C) and θ by the variational expectation maximization algorithm [14], which consists of (1) the variational expectation step for computing q(x) and q(C) by

q(x) ∝ exp( ⟨ log p(y, x, C | G, θ) ⟩_{q(C)} ),   (7)
q(C) ∝ exp( ⟨ log p(y, x, C | G, θ) ⟩_{q(x)} ),   (8)

then (2) the maximization step for estimating θ by θ̂ = argmax_θ F(q(x, C), θ).

Variational-E step

Computing q(x) from Eq. (7) requires rewriting the likelihood in Eq. (5) as a quadratic function in x,

⟨ log p(y | C, x, G, θ) ⟩_{q(C)} = −(1/2) x^⊤ A x + x^⊤ b + const,

where the constant collects all the terms from Eq. (5) that do not depend on x, and the matrix A and vector b are expectations under q(C); their explicit block forms are given in the Appendix. This likelihood term combined with the prior on x gives us the Gaussian posterior over x (i.e., solving Eq. (7)),

q(x) = N(x | μ_x, Σ_x),   with   Σ_x^{-1} = A + Π^{-1},   μ_x = Σ_x b.   (9)

Similarly, computing q(C) from Eq. (8) requires rewriting the likelihood in Eq. (5) as a quadratic function in c = vec(C),

⟨ log p(y | C, x, G, θ) ⟩_{q(x)} = −(1/2) c^⊤ Γ c + c^⊤ h + const,   (10)

where the constant collects all the terms from Eq. (5) that do not depend on C, and the matrix Γ and vector h are expectations under q(x) (explicit forms in the Appendix). This likelihood term combined with the prior on C gives us the Gaussian posterior over C (i.e., solving Eq. (8)),

q(C) = N(c | μ_c, Σ_c),   with   Σ_c^{-1} = Γ + Λ_C,   μ_c = Σ_c h,   (11)

where Λ_C denotes the prior precision of c. The expected values of x and C under these posteriors are given in the Appendix.
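Both E-step updates are instances of the standard Gaussian complete-the-square computation: a zero-mean Gaussian prior combined with an expected log-likelihood that is quadratic in the variable yields a Gaussian posterior. A generic sketch (the symbols A and b below are placeholders for the expectation terms whose explicit block forms are in the Appendix, not the paper's notation):

```python
import numpy as np

def gaussian_posterior(prior_precision, A, b):
    """Posterior N(mu, Sigma) for prior N(0, prior_precision^{-1}) combined with an
    expected log-likelihood of the form -0.5 z^T A z + b^T z + const."""
    post_precision = prior_precision + A
    Sigma = np.linalg.inv(post_precision)
    mu = Sigma @ b
    return mu, Sigma

# toy usage with random symmetric positive-definite matrices
rng = np.random.default_rng(4)
D = 5
M = rng.standard_normal((D, D)); prior_prec = M @ M.T + np.eye(D)
M = rng.standard_normal((D, D)); A = M @ M.T
b = rng.standard_normal(D)
mu, Sigma = gaussian_posterior(prior_prec, A, b)
print(mu.shape, Sigma.shape)
```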

Figure 3: A simulated example. A: data points drawn from a Swiss Roll in 3D. B: true latent points (x) in 2D used for generating the data. C: Posterior mean of x and D: posterior mean of C after 50 EM iterations, given the value of k chosen by maximising the lower bound across different k's. E: Average lower bounds as a function of k. Each point is an average across random seeds.
Variational-M step

We set the parameters θ = {α, γ} by maximising the lower bound F w.r.t. θ, which splits into two terms based on the dependence on each parameter: (1) the expected log-likelihood, which is maximised to update γ; and (2) the negative KL divergence between the prior and the posterior on x, which is maximised to update α. The update rules for each hyperparameter are given in the Appendix.

The full EM algorithm (an implementation is available from http://www.gatsby.ucl.ac.uk/resources/lllvm) starts with initial values of θ and q(C). In the E-step, given q(C), compute q(x) as in Eq. (9). Likewise, given q(x), compute q(C) as in Eq. (11). The parameters θ are then updated in the M-step by maximising Eq. (6). The two steps are repeated until the variational lower bound in Eq. (6) saturates. To give a sense of how the algorithm works, we visualise fitting results for a simulated example in Fig. 3. Using graphs constructed from the 3D observations with different values of k, we run our EM algorithm. The posterior means of x and C given the optimal k, chosen by the maximum lower bound, resemble the true manifolds in 2D and 3D spaces, respectively.
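A schematic of this EM loop, with the model-specific updates (Eqs. (9), (11) and the M-step rules) passed in as functions; the function names, initialisation and stopping tolerance are placeholders rather than the released implementation:

```python
def variational_em(y, graph, theta0, q_c0, estep_x, estep_c, mstep, lower_bound,
                   max_iter=100, tol=1e-6):
    """Alternate E-steps for q(x), q(C) and M-steps for theta until the bound saturates.

    estep_x(y, graph, q_c, theta) -> q_x              # Eq. (9)
    estep_c(y, graph, q_x, theta) -> q_c              # Eq. (11)
    mstep(y, graph, q_x, q_c)     -> theta            # M-step updates for alpha, gamma
    lower_bound(y, graph, q_x, q_c, theta) -> float   # Eq. (6)
    """
    theta, q_c, q_x = theta0, q_c0, None
    bounds = []
    for _ in range(max_iter):
        q_x = estep_x(y, graph, q_c, theta)       # E-step for the latent coordinates
        q_c = estep_c(y, graph, q_x, theta)       # E-step for the local linear maps
        theta = mstep(y, graph, q_x, q_c)         # M-step for the hyperparameters
        bounds.append(lower_bound(y, graph, q_x, q_c, theta))
        if len(bounds) > 1 and abs(bounds[-1] - bounds[-2]) < tol:
            break                                 # lower bound has saturated
    return q_x, q_c, theta, bounds
```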

Out-of-sample extension

In the LL-LVM model one can formulate a computationally efficient out-of-sample extension technique as follows. Given N data points y_1, …, y_N, the variational EM algorithm derived in the previous section converts the data into the approximate posteriors q(x) and q(C) over the corresponding latent variables and locally linear maps. Now, given a new high-dimensional data point y*, one can first find the neighbourhood of y* without changing the current neighbourhood graph. Then, it is possible to compute the distributions over the corresponding locally linear map C* and latent variable x* by simply performing the E-step for (x*, C*) given y*, freezing all other quantities at their fitted values.
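A sketch of the first step of this out-of-sample procedure: locating the neighbours of a new point y* among the training points without altering the existing graph (here using scikit-learn's nearest-neighbour search and random placeholder data); the subsequent E-step for (x*, C*) would then be run with the training posteriors held fixed.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(5)
Y_train = rng.standard_normal((400, 12))     # placeholder training observations
y_star = rng.standard_normal((1, 12))        # a new, unseen observation

k = 12
nn = NearestNeighbors(n_neighbors=k).fit(Y_train)
dist, nbr_idx = nn.kneighbors(y_star)        # indices of y*'s neighbours in the training set

# nbr_idx[0] defines the new point's row of the adjacency matrix;
# the E-step for (x*, C*) is then run with all training posteriors held fixed.
print(nbr_idx[0])
```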

Comparison to GP-LVM

A closely related probabilistic dimensionality reduction algorithm to LL-LVM is the GP-LVM [7]. The GP-LVM defines the mapping from the latent space to the data space using Gaussian processes. The likelihood of the observations Y_d (the vector formed by the dth element of all high-dimensional vectors) given the latent variables x is defined by p(Y_d | x) = N(Y_d | 0, K), where the (i, j)th element of the covariance matrix K is of the exponentiated quadratic form k(x_i, x_j) = σ_f² exp( −(1/2) Σ_q w_q (x_{i,q} − x_{j,q})² ) with smoothness-scale parameters {w_q} [8]. In LL-LVM, once we integrate out C from Eq. (5), we also obtain a Gaussian likelihood of y given x.

In contrast to the GP-LVM, however, the precision matrix of this conditional depends on the graph Laplacian matrix L, through the prior on C and through the quadratic form in Eq. (5). Therefore, in LL-LVM, the graph structure directly determines the functional form of the conditional precision.
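For reference, a sketch of an exponentiated quadratic (ARD) covariance of the kind used in the GP-LVM likelihood described above; the parameter names and the added noise term are illustrative assumptions.

```python
import numpy as np

def eq_ard_kernel(X, signal_var=1.0, ard_weights=None):
    """K[i, j] = signal_var * exp(-0.5 * sum_q w_q (x_iq - x_jq)^2)."""
    N, d_x = X.shape
    w = np.ones(d_x) if ard_weights is None else np.asarray(ard_weights)
    sq = np.sum(w * (X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return signal_var * np.exp(-0.5 * sq)

X = np.random.default_rng(6).standard_normal((5, 2))
K = eq_ard_kernel(X, signal_var=1.0, ard_weights=[1.0, 0.5])
beta = 10.0                                   # noise precision (commonly added in practice)
cov_y = K + np.eye(len(X)) / beta             # covariance of each data dimension
print(np.allclose(cov_y, cov_y.T))
```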

4 Experiments

Figure 4: Resolving short-circuiting problems using the variational lower bound. A: Visualisation of samples drawn from a Swiss Roll in 3D space. Points 28 (red) and 29 (blue) are close to each other (dotted grey) in 3D. B: Visualisation of the samples on the latent 2D manifold. The distance between points 28 and 29 is seen to be large. C: Posterior mean of x with/without short-circuiting the 28th and the 29th data points in the graph construction. LL-LVM achieves a higher lower bound when the shortcut is absent. The red and blue parts are mixed in the resulting estimate in 2D space (right) when there is a shortcut. The lower bound is obtained after 50 EM iterations.

4.1 Mitigating the short-circuit problem

Like other neighbour-based methods, LL-LVM is sensitive to misspecified neighbourhoods; the prior, likelihood, and posterior all depend on the assumed graph. Unlike other methods, LL-LVM provides a natural way to evaluate possible short-circuits using the variational lower bound of Eq. (6). Fig. 4 shows samples drawn from a Swiss Roll in 3D space (Fig. 4A). Two points, labelled 28 and 29, happen to fall close to each other in 3D, but are actually far apart on the latent (2D) surface (Fig. 4B). A k-nearest-neighbour graph might link these, distorting the recovered coordinates. However, evaluating the model without this edge (the correct graph) yields a higher variational bound (Fig. 4C). Although it is prohibitive to evaluate every possible graph in this way, the availability of a principled criterion to test specific hypotheses is of obvious value.

In the following, we demonstrate LL-LVM on two real datasets: handwritten digits and climate data.

4.2 Modelling USPS handwritten digits

Figure 5: USPS handwritten digit dataset described in section 4.2. A: Mean (solid) and one-standard-deviation interval (shading) of the variational lower bound across 10 different random starts of the EM algorithm for different values of k. The highest lower bound identifies the best choice of k. B: The posterior mean of x in 2D. Each digit is colour coded. On the right side are reconstructions of y for randomly chosen query points x. Using neighbouring y and the posterior means of C we can recover y successfully (see text). C: Fitting results by GP-LVM using the same data. D: ISOMAP and E: LLE (k = 40). Using the extracted 2D features, we evaluated a k-NN classifier for digit identity with 10-fold cross-validation (the same data divided into training and test sets). The classification error is shown in F. LL-LVM features yield comparably low error to GP-LVM and ISOMAP.

As a first real-data example, we test our method on a subset of samples of each digit from the USPS digit dataset, where each digit image is of size 16 × 16 (i.e., d_y = 256). We follow [7] and represent the low-dimensional latent variables in 2D (d_x = 2).

Fig. 5A shows variational lower bounds for different values of k, using different EM initialisations. The posterior mean of x obtained from LL-LVM using the best k is illustrated in Fig. 5B. Fig. 5B also shows reconstructions of one randomly-selected example of each digit, using its 2D coordinates as well as the posterior mean coordinates, tangent spaces and actual images of its closest neighbours. The reconstruction is based on the assumed tangent-space structure of the generative model (Eq. (5)): a query point is reconstructed from a neighbouring image and the neighbour's linear map applied to the difference in latent coordinates. A similar process could be used to reconstruct digits at out-of-sample locations. Finally, we quantify the relevance of the recovered subspace by computing the error incurred using a simple classifier to report digit identity from the 2D features obtained by LL-LVM and various competing methods (Fig. 5C-F). Classification with LL-LVM coordinates performs similarly to GP-LVM and ISOMAP, and outperforms LLE.
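A sketch of the quantitative comparison described above, assuming the 2D embeddings and digit labels are already available (random placeholders below); the classifier's number of neighbours is an illustrative choice.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(7)
Z = rng.standard_normal((500, 2))            # placeholder for 2D features from LL-LVM etc.
labels = rng.integers(0, 5, size=500)        # placeholder digit labels

clf = KNeighborsClassifier(n_neighbors=5)    # illustrative choice for the classifier
scores = cross_val_score(clf, Z, labels, cv=10)
print("10-fold CV classification error: %.3f" % (1.0 - scores.mean()))
```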

(a) 400 weather stations
(b) LLE
(c) LTSA
(d) ISOMAP
(e) GP-LVM
(f) LL-LVM
Figure 6: Climate modelling problem as described in section 4.3. Each example, corresponding to a weather station, is a 12-dimensional vector of monthly precipitation measurements. Using only these measurements, the projection obtained from the proposed LL-LVM recovers the topological arrangement of the stations to a large degree.

4.3 Mapping climate data

In this experiment, we attempted to recover 2D geographical relationships between weather stations from recorded monthly precipitation patterns. Data were obtained by averaging month-by-month annual precipitation records from 2005–2014 at 400 weather stations scattered across the US (see Fig. 6). (The dataset is made available by the National Climatic Data Center at http://www.ncdc.noaa.gov/oa/climate/research/ushcn/. We use the version 2.5 monthly data [15].) Thus, the data set comprised 400 12-dimensional vectors. The goal of the experiment is to recover the two-dimensional topology of the weather stations (as given by their latitude and longitude) using only these 12-dimensional climatic measurements. As before, we compare the projected points obtained by LL-LVM with several widely used dimensionality reduction techniques. For the graph-based methods LL-LVM, LTSA, ISOMAP, and LLE, we used 12-NN with Euclidean distance to construct the neighbourhood graph.
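A sketch of the neighbourhood-graph construction used by the graph-based methods in this experiment (12 nearest neighbours under Euclidean distance), using scikit-learn and random placeholder data in place of the precipitation vectors; the symmetrisation rule is an assumption.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(8)
Y = rng.standard_normal((400, 12))           # placeholder for 400 stations x 12 monthly values

A = kneighbors_graph(Y, n_neighbors=12, mode='connectivity', include_self=False)
eta = A.toarray()
eta = np.maximum(eta, eta.T)                 # make the adjacency symmetric
L = np.diag(eta.sum(axis=1)) - eta           # graph Laplacian used by LL-LVM
print(eta.sum(axis=1).min(), eta.sum(axis=1).max())
```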

The results are presented in Fig. 6. LL-LVM identified a more geographically accurate arrangement of the weather stations than the other algorithms. The fully probabilistic nature of LL-LVM and GP-LVM allowed these algorithms to handle the noise present in the measurements in a principled way. This contrasts with ISOMAP, which can be topologically unstable [16], i.e. vulnerable to short-circuit errors if the neighbourhood is too large. Perhaps coincidentally, LL-LVM also seems to respect local geography more fully in places than does GP-LVM.

5 Conclusion

We have demonstrated a new probabilistic approach to non-linear manifold discovery that embodies the central notion that local geometries are mapped linearly between manifold coordinates and high-dimensional observations. The approach offers a natural variational algorithm for learning, quantifies local uncertainty in the manifold, and permits evaluation of hypothetical neighbourhood relationships.

In the present study, we have described the LL-LVM model conditioned on a neighbourhood graph. In principle, it is also possible to extend LL-LVM so as to construct a distance matrix as in [17], by maximising the data likelihood. We leave this as a direction for future work.

Acknowledgments

The authors were funded by the Gatsby Charitable Foundation.

References

  • [1] S. T. Roweis and L. K. Saul. Nonlinear Dimensionality Reduction by Locally Linear Embedding. Science, 290(5500):2323–2326, 2000.
  • [2] M. Belkin and P. Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In NIPS, pages 585–591, 2002.
  • [3] J. B. Tenenbaum, V. Silva, and J. C. Langford. A Global Geometric Framework for Nonlinear Dimensionality Reduction. Science, 290(5500):2319–2323, 2000.
  • [4] L.J.P. van der Maaten, E. O. Postma, and H. J. van den Herik. Dimensionality reduction: A comparative review, 2008. http://www.iai.uni-bonn.de/~jz/dimensionality_reduction_a_comparative_review.pdf.
  • [5] L. Cayton. Algorithms for manifold learning. Univ. of California at San Diego Tech. Rep, pages 1–17, 2005. http://www.lcayton.com/resexam.pdf.
  • [6] J. Platt. Fastmap, MetricMap, and Landmark MDS are all Nyström algorithms. In Proceedings of the 10th International Workshop on Artificial Intelligence and Statistics, pages 261–268, 2005.
  • [7] N. Lawrence. Gaussian process latent variable models for visualisation of high dimensional data. In NIPS, pages 329–336, 2003.
  • [8] M. K. Titsias and N. D. Lawrence. Bayesian Gaussian process latent variable model. In AISTATS, pages 844–851, 2010.
  • [9] S. Roweis, L. Saul, and G. Hinton. Global coordination of local linear models. In NIPS, pages 889–896, 2002.
  • [10] M. Brand. Charting a manifold. In NIPS, pages 961–968, 2003.
  • [11] Y. Zhan and J. Yin. Robust local tangent space alignment. In NIPS, pages 293–301. 2009.
  • [12] J. Verbeek. Learning nonlinear image manifolds by global alignment of local linear models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(8):1236–1250, 2006.
  • [13] C. Bishop. Pattern recognition and machine learning. Springer New York, 2006.
  • [14] M. J. Beal. Variational Algorithms for Approximate Bayesian Inference. PhD thesis, Gatsby Unit, University College London, 2003.
  • [15] M. Menne, C. Williams, and R. Vose. The U.S. historical climatology network monthly temperature data, version 2.5. Bulletin of the American Meteorological Society, 90(7):993–1007, July 2009.
  • [16] Mukund Balasubramanian and Eric L. Schwartz. The isomap algorithm and topological stability. Science, 295(5552):7–7, January 2002.
  • [17] N. Lawrence. Spectral dimensionality reduction via maximum entropy. In AISTATS, pages 51–59, 2011.
  • [18] K. B. Petersen and M. S. Pedersen. The Matrix Cookbook, Nov. 2012. Version 20121115.

LL-LVM supplementary material

Notation

The vectorized version of a matrix A is vec(A). We denote an identity matrix of size m by I_m. Other notation is the same as in the main text.

Appendix A Matrix normal distribution

The matrix normal distribution generalises the standard multivariate normal distribution to matrix-valued variables. A matrix X ∈ R^{n×p} is said to follow a matrix normal distribution MN(X | M, U, V), with mean M ∈ R^{n×p}, row covariance U ∈ R^{n×n} and column covariance V ∈ R^{p×p}, if its density is given by

p(X | M, U, V) = exp( −(1/2) tr[ V^{-1} (X − M)^⊤ U^{-1} (X − M) ] ) / ( (2π)^{np/2} |V|^{n/2} |U|^{p/2} ).   (12)

If X ∼ MN(M, U, V), then vec(X) ∼ N(vec(M), V ⊗ U), a relationship we will use to simplify many expressions.
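A quick numerical check of this vec relationship (drawing matrix-normal samples as X = M + A Z B^⊤ with U = A A^⊤, V = B B^⊤, which is a standard construction; the sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(9)
n, p, S = 3, 2, 200_000
A = rng.standard_normal((n, n)); U = A @ A.T      # row covariance U = A A^T
B = rng.standard_normal((p, p)); V = B @ B.T      # column covariance V = B B^T
M = rng.standard_normal((n, p))

Z = rng.standard_normal((S, n, p))
X = M + A @ Z @ B.T                               # X ~ MN(M, U, V)
vecX = X.transpose(0, 2, 1).reshape(S, n * p)     # column-stacking vec of each sample

emp_cov = np.cov(vecX, rowvar=False)
err = np.abs(emp_cov - np.kron(V, U)).max() / np.abs(np.kron(V, U)).max()
print("relative deviation from V kron U:", err)   # small, up to Monte Carlo error
```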

Appendix B Matrix normal expressions of priors and likelihood

Recall the notation for the adjacency matrix η, the Laplacian L, the concatenated latent variables x and the collected maps C from the main text.

Prior on low dimensional latent variables

The log prior on the latent coordinates introduced in the main text is

log p(x | G, α) = −(α/2) Σ_i ‖x_i‖² − (1/2) Σ_{i,j} η_ij ‖x_i − x_j‖² − log Z_x,   (13)

which is the multivariate normal

p(x | G, α) = N(x | 0, Π),   with   Π^{-1} = α I_{N d_x} + 2 L ⊗ I_{d_x},   (14)

where L = diag(η 1) − η is known as a graph Laplacian. It follows that Σ_{i,j} η_ij ‖x_i − x_j‖² = 2 x^⊤ (L ⊗ I_{d_x}) x. The prior covariance can be rewritten as

Π^{-1} = α I_N ⊗ I_{d_x} + 2 L ⊗ I_{d_x}   (15)
       = (α I_N + 2 L) ⊗ I_{d_x},   (16)
so that   Π = (α I_N + 2 L)^{-1} ⊗ I_{d_x}.   (17)

By the relationship of a matrix normal and multivariate normal distributions described in section A, the equivalent prior for the matrix X = [x_1, …, x_N] ∈ R^{d_x × N}, constructed by reshaping x so that x = vec(X), is given by

X ∼ MN( 0, I_{d_x}, (α I_N + 2 L)^{-1} ).   (18)

Prior on locally linear maps

Recall that C = [C_1, …, C_N] ∈ R^{d_y × N d_x}, where each C_i ∈ R^{d_y × d_x}. We formulate the log prior on C as

log p(C | G, ε) = −(ε/2) ‖Σ_i C_i‖²_F − (1/2) Σ_{i,j} η_ij ‖C_i − C_j‖²_F − log Z_C
               = −(1/2) tr[ ((ε 1_N 1_N^⊤ + 2 L) ⊗ I_{d_x}) C^⊤ C ] − log Z_C.   (19)

In the first line, the first term imposes a constraint that the average of the C_i should not be too large. The second term encourages the locally linear maps of neighbouring points i and j to be similar in the sense of the Frobenius norm. Notice that the last line is in the form of the log of a matrix normal density with mean 0, MN(C | 0, I_{d_y}, Ω), where Ω is given by

Ω^{-1} = (ε 1_N 1_N^⊤ + 2 L) ⊗ I_{d_x}.   (20)

By the vec relationship of section A, the expression is equivalent to

vec(C) ∼ N( 0, Ω ⊗ I_{d_y} ).   (21)

In our implementation, we fix ε to a small value, since the magnitude of C and x can be controlled by the hyper-parameter α, which is optimized in the M-step.

Likelihood

We penalise the linear approximation error of the tangent spaces. Assume that the noise precision matrix is a scaled identity matrix, i.e., V^{-1} = γ I_{d_y}:

(22)
(23)

where

(24)
(25)
(26)
(27)
(28)

By completing the quadratic form in y, we can write down the likelihood as a multivariate Gaussian (the equivalent expression in terms of a matrix normal distribution follows from the relationship in section A). The covariance in Eq. (24) decomposes as:

(29)
(30)

By equating Eq. (22) with Eq. (29), we get the normalisation term

(31)
(32)
(33)
(34)

Therefore, the normalised log-likelihood can be written as

(35)

Convenient form for EM

For the EM derivation in the next section, it is convenient to write the exponent of the likelihood in terms of linear and quadratic functions in x and C, respectively. The linear terms can be written as a linear function in x or in C:

(36)
(37)

where

(38)
(39)

The quadratic terms can be written as a quadratic function of x or a quadratic function of C:

(40)
(41)

where the ith chunk of the corresponding vector is given by

(42)

and the ith chunk of the jth column of the corresponding matrix is given by

(43)

Appendix C Variational inference

In LL-LVM, the goal is to infer the latent variables (x, C) as well as to learn the hyper-parameters θ. We infer them by maximising the lower bound of the marginal likelihood of the observations y.

For computational tractability, we assume that the posterior over (x, C) factorizes as

q(x, C) = q(x) q(C),   (44)

where q(x) and q(C) are multivariate normal distributions.

We maximize the lower bound w.r.t.