Uniform Interpolation Constrained Geodesic Learning on Data Manifold

02/12/2020 ∙ by Cong Geng, et al.

In this paper, we propose a method to learn a minimizing geodesic within a data manifold. Along the learned geodesic, our method can generate high-quality interpolations between two given data samples. Specifically, we use an autoencoder network to map data samples into the latent space and perform interpolation via an interpolation network. We add prior geometric information to regularize our autoencoder for the convexity of representations, so that for any given interpolation approach the generated interpolations remain within the distribution of the data manifold. Before learning a geodesic, a proper Riemannian metric should be defined. Therefore, we induce a Riemannian metric by the canonical metric of the Euclidean space in which the data manifold is isometrically immersed. Based on this Riemannian metric, we introduce a constant speed loss and a minimizing geodesic loss to regularize the interpolation network to generate uniform interpolations along the learned geodesic on the manifold. We provide a theoretical analysis of our model and use image translation as an example to demonstrate the effectiveness of our method.


1 Introduction

Traditionally, manifold learning methods aim to infer latent representations that capture the intrinsic geometric structure of data; they are generally applied to dimensionality reduction and data visualization. Classical nonlinear manifold learning algorithms such as Isomap (Tenenbaum et al., 2000), locally linear embedding (Roweis and Saul, 2000) and local tangent space alignment (LTSA) (Zhang and Zha, 2003) preserve local structures in small neighborhoods and derive the intrinsic features of data manifolds. With the success of deep learning in approximating complex nonlinear functions, representing data manifolds with neural networks has received considerable interest and has been studied from multiple perspectives. Autoencoders (AE) (Kingma and Welling, 2014) and generative adversarial networks (GAN) (Goodfellow et al., 2014) are two notable generative neural network models for learning the geometric structure or the density/distribution of data on the manifold. Recent studies such as the adversarial autoencoder (AAE) (Makhzani et al., 2015) and the Wasserstein autoencoder (WAE) (Tolstikhin et al., 2018) try to inherit the merits of both AE and GAN while avoiding their disadvantages. Other manifold learning methods such as DIMAL (Pai et al., 2019) integrate deep learning into classical manifold embedding algorithms. These approaches open up the possibility of interpolation on the manifold.

Interpolation between data samples has attracted wide attention in the field because it achieves a smooth transition from one sample to another, for example in intermediate face generation and path planning, where a path-planning algorithm must determine intermediate milestones for a driver given a departure point and a destination. Several works in the literature focus on interpolation on a data manifold. Agustsson et al. (2019) propose to use distribution-matching transport maps to obtain latent space interpolations that preserve the prior distribution of the latent space while minimally changing the original operation. This generates interpolated samples of higher quality, but it lacks an encoder to map given data samples into the latent space. Several other works (Berthelot et al., 2019; Sainburg et al., 2018; Chen et al., 2019b) combine autoencoders with GANs to generate interpolated points on the data manifold. However, limited by the ability of the GAN, it is difficult for them to produce a convex latent representation when the input manifold has changing curvature, such as the swiss-roll or S-curve. To avoid this problem, Arvanitidis et al. (2018) and Chen et al. (2018) both observe that the magnification factor (Bishop et al., 1997) can be seen as a measure of the local distortion of the latent space, which helps the generated curves follow the regions of high data density in the latent space. These methods approximate geodesics so that the optimized curve remains within the data manifold. Furthermore, Yang et al. (2018) and Chen et al. (2018) discretize the curve into multiple points and add a penalty term to ensure that the geodesic remains within the data manifold. Chen et al. (2019a) instead find the shortest path in a finite graph of samples as the geodesic on the data manifold; this graph is built in the latent space using a binary tree data structure with edge weights based on Riemannian distances.

In this paper, we propose a method to capture the geometric structure of a data manifold and find smooth geodesics between data samples on it. Motivated by DIMAL (Pai et al., 2019), we combine our network framework with traditional manifold learning methods. In contrast to existing combinations of autoencoders with GANs, we add prior geometric information to represent the local geometry of the data manifold. Decoding the latent space is the major obstacle for autoencoders, because the curvature information of the data manifold is lost in the latent representations. We revisit classical manifold learning algorithms to obtain an approximation of the latent embeddings that preserves the local geometric structure of the data manifold. The encoder and decoder are then optimized towards the expected output with this approximation acting as a constraint. Using this method, we resolve the problems other algorithms face when the latent distribution is non-convex on manifolds such as the S-curve and swiss-roll: our method can unfold the data manifold into a flattened latent representation, and new test points can be mapped to the learned subspace without difficulty.

For manifold interpolation, we propose a geodesic learning method to establish smooth geodesics between data samples on the manifold. We introduce a constant speed loss and a minimizing geodesic loss to train a network to interpolate along geodesics. The Riemannian metric on the data manifold can be induced by the canonical metric of the Euclidean space in which the manifold is isometrically immersed, so the geodesics can be obtained with respect to this defined Riemannian metric. Existing methods cannot guarantee that the interpolation along a geodesic has constant speed. We parameterize the generated curve by a parameter $t$. Our constant speed loss achieves uniform interpolation by making $t$ an arc-length parameter of the interpolation curve. According to Riemannian geometry (Carmo, 1992), a curve is a geodesic if and only if its second derivative with respect to the parameter is orthogonal to the tangent space. Based on this theory, we propose a geodesic loss to force the interpolation network to generate points along geodesics. Furthermore, the geodesic connecting two points may not be unique, such as the geodesics on a sphere; however, geodesics are locally unique, and the minimizing geodesic is the minimal curve connecting two points. Therefore, we also discretize the curve into multiple equally-spaced sample points and minimize the summation of the Euclidean distances of adjacent points. By applying these losses to our interpolation network, we can generate uniformly interpolated points along the minimizing geodesics. Our major contributions include:

  • We propose a framework in which an autoencoder and an interpolation network are introduced to reconstruct the manifold structure. The autoencoder network promotes convex latent representations by incorporating prior geometric information, which guarantees that the generated samples remain within the distribution of the data manifold.

  • We propose a constant speed loss and a minimizing geodesic loss to generate a geodesic on the manifold given two endpoints. We parameterize each geodesic by an arc-length parameter $t$, such that we can generate points moving along the minimizing geodesic with constant speed and thus fulfill a uniform interpolation.

2 Preliminaries

Suppose $M$ is a Riemannian differentiable manifold. First we define the tangent vector and the tangent space (Carmo, 1992) on $M$. Let $\alpha : (-\varepsilon, \varepsilon) \to M$ be a differentiable function, called a differentiable curve in $M$. Suppose that $\alpha(0) = p \in M$, and let $\mathcal{D}$ be the set of functions on $M$ that are differentiable at $p$. Now we can define tangent vectors.

Definition 2.1 (Tangent vector) The tangent vector to the curve $\alpha$ at $t = 0$ is the function $\alpha'(0) : \mathcal{D} \to \mathbb{R}$ given by

$$\alpha'(0) f = \frac{d (f \circ \alpha)}{dt}\bigg|_{t=0}, \qquad f \in \mathcal{D}. \tag{1}$$

The set of all tangent vectors to $M$ at $p$ will be denoted by the tangent space $T_pM$. Let $\psi : U \subset \mathbb{R}^m \to M$ be a parametrization of $p$ on the manifold $M$ with coordinates $(u_1, \dots, u_m)$, and for $q \in U$ suppose $\psi(q) = p$. Then the tangent vector at $p$ of the $i$-th coordinate curve is defined as

$$\left(\frac{\partial}{\partial u_i}\right)_{\!p} f = \frac{\partial (f \circ \psi)}{\partial u_i}(q), \qquad f \in \mathcal{D}. \tag{2}$$

The set $\left\{ (\partial/\partial u_1)_p, \dots, (\partial/\partial u_m)_p \right\}$ spans the tangent space at $p$. If these tangent vectors are orthonormal to each other, they form an orthonormal basis of the tangent space.

Definition 2.2 (Geodesic) A parametrized curve $\gamma : I \to M$ is a geodesic at $t_0 \in I$ if $\frac{D}{dt}\!\left(\frac{d\gamma}{dt}\right) = 0$ at the point $t_0$; if $\gamma$ is a geodesic at $t$ for all $t \in I$, we say that $\gamma$ is a geodesic.

Here $\frac{D}{dt}$ is the covariant derivative of a vector field along $\gamma$ (Carmo, 1992), and $\frac{d\gamma}{dt}$ is the tangent vector at $t$. If $\gamma$ is a geodesic, then

$$\frac{d}{dt}\left\langle \frac{d\gamma}{dt}, \frac{d\gamma}{dt} \right\rangle = 2 \left\langle \frac{D}{dt}\frac{d\gamma}{dt}, \frac{d\gamma}{dt} \right\rangle = 0, \tag{3}$$

where $\langle \cdot, \cdot \rangle$ is a symmetric, bilinear and positive-definite inner product corresponding to the Riemannian metric. That is, the length $\left|\frac{d\gamma}{dt}\right|$ of the tangent vector is constant. In the following, we introduce two useful theorems. Chen and Li (2004) and Carmo (1992) provide the proofs of Theorem 2.1 and Theorem 2.2, respectively.

Theorem 2.1 Suppose $M$ is an $m$-dimensional Riemannian manifold isometrically immersed in the Euclidean space $\mathbb{R}^n$. A differentiable curve $\gamma : I \to M$ is a geodesic on the Riemannian manifold $M$ if and only if

$$\left( \frac{d^2 \gamma}{dt^2} \right)^{\!\top} = 0, \tag{4}$$

where $(\cdot)^{\top}$ represents the orthogonal projection from $\mathbb{R}^n$ onto the tangent space $T_{\gamma(t)} M$.

Theorem 2.2 If a piecewise differentiable curve $\gamma : [a, b] \to M$ (Carmo, 1992), with parameter proportional to arc length, has length less than or equal to the length of any other piecewise differentiable curve joining $\gamma(a)$ to $\gamma(b)$, then $\gamma$ is a geodesic.

A curve that minimizes the length between its endpoints is called a minimizing geodesic. The Hopf-Rinow theorem (Carmo, 1992) guarantees that if the manifold $M$ is complete, then any two points in $M$ are joined by at least one minimizing geodesic. Moreover, minimizing geodesics are non-branching, locally unique and almost everywhere unique.

3 Manifold Reconstruction

The original AE and some combinations of AEs and GANs can generate high-quality samples from specific latent distributions, but they fail to obtain convex embeddings on manifolds with changing curvature such as the swiss-roll or S-curve. Other works (Berthelot et al., 2019; Sainburg et al., 2018; Chen et al., 2019b) were proposed to generate interpolations within the distribution of real data by distinguishing interpolations from real samples, but they also generate unsatisfying samples on the above-mentioned manifolds. The encoding results can be seen in Fig. 1. The reason for this problem may be the insufficient ability of the GAN and of the AE's decoder. The discriminator of a GAN can only judge the similarity between the distributions of generated samples and real ones; it cannot estimate whether a generated sample lies on the data manifold. Furthermore, autoencoders tend to keep the curvature information of the data manifold in the latent representations in order to keep the network as simple as possible, so the latent embeddings also exhibit changing curvature, which may result in non-convex representations.

Figure 1: The encoding results of different methods for the swiss-roll. (a) The swiss-roll data manifold. (b) Encoding result of AAE (Makhzani et al., 2015). (c) Encoding result of ACAI (Berthelot et al., 2019). (d) Encoding result of GAIA (Sainburg et al., 2018). (e) Encoding result of a method motivated by Chen et al. (2019b), which employs a GAN on interpolated points in the latent space. (f) Encoding result of our method.
Figure 2: Framework of our method.

In our method, we add prior geometric information obtained by traditional manifold learning approaches to encode a convex latent representation. Traditional nonlinear manifold learning approaches such as Isomap, locally linear embedding (LLE) and local tangent space alignment (LTSA) are classical algorithms that obtain a convex embedding by unfolding the manifold. We apply them in our method by adding a regularization term to the autoencoder. The loss function of the autoencoder can be written as:

$$\mathcal{L}_{AE} = \| g(f(x)) - x \|^2 + \lambda \| f(x) - z \|^2, \tag{5}$$

where $x$ is an input sampled on the data manifold, $z$ represents the low-dimensional embedding of the input $x$ computed by a classical manifold learning algorithm, and $\lambda$ balances the two terms. The encoder $f$ and the decoder $g$ of the autoencoder are trained to minimize the loss $\mathcal{L}_{AE}$. With $z$ as an expected low-dimensional approximation, the encoder is pushed towards a convex latent representation, while the decoder is forced to learn the curvature information that is lost in the latent embeddings. The original autoencoder term $\| g(f(x)) - x \|^2$ brings the input back to itself. The behavior induced by this loss can be observed in Fig. 1: the swiss-roll is flattened onto a 2-dimensional latent space. For our experiments, we choose local tangent space alignment (LTSA) to compute the approximated low-dimensional embeddings that regularize the autoencoder, because of the local geometry it learns.
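As an illustration, the following is a minimal sketch of how the regularized autoencoder objective in Eq. (5) could be implemented; it is not the authors' code. The network sizes and the weight `lam` are assumptions, and the prior embedding is computed with scikit-learn's LTSA variant of `LocallyLinearEmbedding`.

```python
# Minimal sketch of Eq. (5): reconstruction loss plus a penalty pulling the
# encoding toward a precomputed LTSA embedding. Sizes/weights are illustrative.
import torch
import torch.nn as nn
from sklearn.manifold import LocallyLinearEmbedding

def ltsa_embedding(x_np, dim=2, k=10):
    """Precompute the prior geometric information z with classical LTSA."""
    return LocallyLinearEmbedding(n_components=dim, n_neighbors=k,
                                  method="ltsa").fit_transform(x_np)

encoder = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 2))   # f
decoder = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 3))   # g

def ae_loss(x, z_ltsa, lam=1.0):
    z = encoder(x)                                   # latent code f(x)
    x_rec = decoder(z)                               # reconstruction g(f(x))
    rec = ((x_rec - x) ** 2).sum(dim=1).mean()       # ||g(f(x)) - x||^2
    reg = ((z - z_ltsa) ** 2).sum(dim=1).mean()      # ||f(x) - z||^2
    return rec + lam * reg
```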

4 Geodesic Learning

In our model, we denote the data manifold by $M$. The Riemannian metric on $M$ can be obtained in the following way. We establish an inclusion (identity) mapping $i : M \to \mathbb{R}^n$ to immerse the manifold $M$ into the high-dimensional Euclidean space $\mathbb{R}^n$. The Riemannian metric on $M$ is then induced by the canonical metric of the Euclidean space, which guarantees that $i$ is an isometric immersion. Thus, to obtain a geodesic on the manifold $M$, we can use the Riemannian geometry of the submanifold and the characteristics of the isometric immersion $i$.

4.1 Interpolation Network

The method of interpolating in the latent space can vary across situations. The simplest interpolation is linear interpolation, $\phi(t) = (1-t)\, z_1 + t\, z_2$; this forces the encoder to map the manifold into a linear space. For geodesic learning, linear interpolation is not applicable in most situations. Yang et al. (2018) propose to use the restricted class of quadratic functions, and Chen et al. (2018) apply a neural network to parameterize the geodesic curves. Since our manifold reconstruction method can unfold the data manifold, we employ polynomial functions similar to Yang et al. (2018)'s approach for our interpolation network. Different from Yang et al. (2018), we employ cubic functions to parameterize $\phi(t)$ due to the diversity of latent representations. Therefore, a curve has four free parameter vectors, each of which is $d$-dimensional, where $d$ is the dimension of the latent coordinates. In practice, we train a geodesic curve that connects two pre-specified latent points $z_1$ and $z_2$, so the function is constrained to satisfy $\phi(0) = z_1$ and $\phi(1) = z_2$. We perform the optimization using gradient descent.
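For concreteness, here is a minimal sketch of one way such a constrained cubic curve could be parameterized; the exact parameterization used in the paper is not specified here, so the form below (a linear segment plus a correction that vanishes at both endpoints) is an assumption.

```python
# Minimal sketch: a cubic latent curve phi(t) with two learnable coefficient
# vectors a, b that automatically satisfies phi(0) = z1 and phi(1) = z2.
import torch

class CubicCurve(torch.nn.Module):
    def __init__(self, z1, z2):
        super().__init__()
        self.z1, self.z2 = z1, z2                       # fixed latent endpoints
        d = z1.shape[-1]
        self.a = torch.nn.Parameter(torch.zeros(d))     # free coefficient vectors
        self.b = torch.nn.Parameter(torch.zeros(d))

    def forward(self, t):
        # t: tensor of shape (N, 1) with values in [0, 1]
        line = (1 - t) * self.z1 + t * self.z2          # hits both endpoints
        bump = t * (1 - t) * (self.a + self.b * t)      # vanishes at t = 0 and 1
        return line + bump                              # degree-3 curve overall
```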

4.2 Constant Speed Loss

We can produce interpolations along a curve $\gamma(t) = g(\phi(t))$ on the manifold $M$ by decoding the output $\phi(t)$ of the interpolation network as $t$ goes from 0 to 1. The isometric immersion $i$ then maps the curve $\gamma$ on the manifold to a curve $i \circ \gamma$ in $\mathbb{R}^n$. Suppose $c_j$ is the $j$-th coordinate in the latent space, $j = 1, \dots, d$; then $\partial/\partial c_j$ is the tangent vector along the $j$-th coordinate curve in $M$. Then we get

$$\frac{d\gamma}{dt} = \sum_{j=1}^{d} \frac{d\phi_j(t)}{dt}\, \frac{\partial}{\partial c_j}, \tag{6}$$

where $\frac{d\gamma}{dt}$ is the tangent vector along the curve $\gamma$ at $t$, $i_*$ is the corresponding tangent mapping (Chen and Li, 2004), and $\phi_j(t)$ is the $j$-th component of $\phi(t)$. The length of the tangent vector can be defined as

$$\left| \frac{d\gamma}{dt} \right| = \sqrt{\left\langle i_* \frac{d\gamma}{dt},\; i_* \frac{d\gamma}{dt} \right\rangle}, \tag{7}$$

where $\langle \cdot, \cdot \rangle$ is the canonical inner product in the Euclidean space. Furthermore, we can deduce that

$$\left| \frac{d\gamma}{dt} \right| = \left\| \frac{d\, g(\phi(t))}{dt} \right\|, \tag{8}$$

where $\frac{d\, g(\phi(t))}{dt}$ denotes the gradient of the networks' output with respect to $t$, and $\|\cdot\|$ denotes the Frobenius norm of a matrix or the 2-norm of a vector. From Section 2 we know that if a curve is a geodesic, then the length of the tangent vector along the curve is constant. Thus, if $\gamma$ is a geodesic, because $i$ is an isometric immersion, we obtain

$$\left\| \frac{d\, g(\phi(t))}{dt} \right\| = \mathrm{const}, \qquad t \in [0, 1]. \tag{9}$$

For our constant speed loss, we sample $N$ points $t_1, \dots, t_N$ subject to the uniform distribution on $[0,1]$, and we require the corresponding lengths of the tangent vectors to be equal. So the constant speed loss is defined as

$$\mathcal{L}_{speed} = \left\| v - \bar{v}\, \mathbf{1}_N \right\|^2, \tag{10}$$

where

$$v = \left( \left\| \frac{d\, g(\phi(t))}{dt}\bigg|_{t_1} \right\|, \dots, \left\| \frac{d\, g(\phi(t))}{dt}\bigg|_{t_N} \right\| \right)^{\!\top}, \qquad \bar{v} = \frac{1}{N} \sum_{i=1}^{N} v_i, \tag{11}$$

and $\mathbf{1}_N$ denotes an all-one vector with $N$ entries.

The constant speed loss guarantees that the parameter $t$ is an arc-length parameter, which means that $t$ is proportional to the arc length of the curve $\gamma$. The derivative $\frac{d\, g(\phi(t))}{dt}$ can be viewed as the velocity at $t$ along $g(\phi(t))$, and $\left\| \frac{d\, g(\phi(t))}{dt} \right\|$ represents the magnitude of this velocity. Therefore, $g(\phi(t))$ has constant speed with respect to the parameter $t$. Equivalently, since $i$ is an isometric immersion, $\left\| \frac{d\, g(\phi(t))}{dt} \right\|$ is equal to the magnitude of the velocity at $t$ along $\gamma$. Thus $\gamma$ also has constant speed as $t$ moves from 0 to 1.
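A minimal sketch of the constant speed loss, assuming the `CubicCurve` and `decoder` objects from the sketches above: the speed $\| d\, g(\phi(t)) / dt \|$ is obtained by automatic differentiation, and the deviation from the mean speed is penalized as in Eq. (10).

```python
# Minimal sketch of Eq. (10): sample t_i ~ U[0, 1], measure the speed of the
# decoded curve at each t_i, and penalize deviations from the mean speed.
import torch

def constant_speed_loss(curve, decoder, n_samples=16):
    t = torch.rand(n_samples, 1, requires_grad=True)          # t_i ~ U[0, 1]
    x = decoder(curve(t))                                      # g(phi(t_i)), shape (N, n)
    # d g(phi(t)) / dt, assembled one output dimension at a time
    grads = [torch.autograd.grad(x[:, j].sum(), t, create_graph=True)[0]
             for j in range(x.shape[1])]
    velocity = torch.cat(grads, dim=1)                         # shape (N, n)
    speed = velocity.norm(dim=1)                               # v_i = ||d g(phi(t_i))/dt||
    return ((speed - speed.mean()) ** 2).sum()                 # ||v - v_bar 1_N||^2
```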

4.3 Minimizing Geodesic Loss

After guaranteeing that the output curve of our decoder has constant speed, we need the curve $\gamma$ to be a geodesic. Due to the non-uniqueness of geodesics on a manifold, we further require the geodesic to be minimizing. Theorem 2.1 in Section 2 provides the mathematical foundation.

Figure 3: Results of different interpolation methods on the semi-sphere dataset. (a) Result of linear interpolation on AAE (Makhzani et al., 2015). (b) Result of A&H (Arvanitidis et al., 2019). (c) Result of Chen et al. (2018)'s method. (d) Our result.
Figure 4: Results of different interpolation methods on the swiss-roll dataset. (a) Result of linear interpolation on AAE (Makhzani et al., 2015). (b) Result of GAIA (Sainburg et al., 2018). (c) Result of ACAI (Berthelot et al., 2019). (d) Result of a method motivated by Chen et al. (2019b), which employs a GAN on interpolated points in the latent space; the interpolation method is linear interpolation. (e) Our result.

According to Theorem 2.1, we can conclude that $\gamma$ is a geodesic if and only if $\frac{d^2\gamma}{dt^2}$ is orthogonal to the tangent space $T_{\gamma(t)}M$. Our encoder employs the local tangent space alignment (LTSA) algorithm to compute approximated low-dimensional embeddings. LTSA represents the local geometry of the manifold by the tangent space in a neighborhood of each data point. Therefore, our encoder maps a point $p$ on the manifold to its local coordinates. According to the definition of the tangent space in Riemannian geometry, the tangent vectors at $p$ along the local coordinate curves form a basis of its tangent space $T_pM$. Suppose $c = f(p)$ is the local coordinate of $p$ and $c_j$ is the $j$-th coordinate of $c$; then the tangent vector at $p$ of the $j$-th coordinate curve can be represented as $\frac{\partial g(c)}{\partial c_j}$. Then we can obtain:

$$\frac{\partial}{\partial c_j} = \frac{\partial g(c)}{\partial c_j} = \left( \frac{\partial g_1(c)}{\partial c_j}, \dots, \frac{\partial g_n(c)}{\partial c_j} \right)^{\!\top}, \qquad j = 1, \dots, d, \tag{12}$$

where $c$ is the local coordinate of $p$, $g_k$ is the $k$-th component of the decoder function $g$, and $\frac{\partial g(c)}{\partial c_j}$ denotes the gradient of the decoder's output with respect to $c_j$ at $c$. The tangent space can be obtained as $T_pM = \mathrm{span}\left\{ \frac{\partial g(c)}{\partial c_1}, \dots, \frac{\partial g(c)}{\partial c_d} \right\}$. From Theorem 2.1, we can deduce that if $\gamma$ is a geodesic, then

$$\left\langle \frac{d^2 \gamma}{dt^2},\; \frac{\partial g(c)}{\partial c_j} \right\rangle = 0, \qquad j = 1, \dots, d, \tag{13}$$

which is equivalent to

$$\left\langle \frac{d^2\, g(\phi(t))}{dt^2},\; \frac{\partial g(c)}{\partial c_j}\bigg|_{c = \phi(t)} \right\rangle = 0, \qquad j = 1, \dots, d, \tag{14}$$

where $\frac{d^2\, g(\phi(t))}{dt^2}$ denotes the second derivative of the decoder's output with respect to $t$ at $t$. Now we can readily define our geodesic loss as follows:

$$\mathcal{L}_{geo} = \sum_{i=1}^{N} \sum_{j=1}^{d} \left\langle a(t_i),\; \frac{\partial g(c)}{\partial c_j}\bigg|_{c = \phi(t_i)} \right\rangle^{2}, \tag{15}$$

where

$$a(t_i) = \frac{d^2\, g(\phi(t))}{dt^2}\bigg|_{t = t_i}. \tag{16}$$

For $a(t_i)$ in practice, we can approximate it by

$$a(t_i) \approx \frac{g(\phi(t_i + \Delta t)) - 2\, g(\phi(t_i)) + g(\phi(t_i - \Delta t))}{(\Delta t)^2}. \tag{17}$$
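A minimal sketch of the geodesic loss in Eqs. (15)-(17), again assuming the `CubicCurve` and `decoder` sketches above: the second derivative of $g(\phi(t))$ is approximated by central finite differences, and its component inside the tangent space spanned by the columns of the decoder Jacobian is penalized. The step size `dt` and treating the Jacobian as a constant are simplifying assumptions.

```python
# Minimal sketch of Eqs. (15)-(17): finite-difference acceleration of the
# decoded curve, projected onto the tangent space spanned by the columns of
# the decoder Jacobian at phi(t_i).
import torch

def geodesic_loss(curve, decoder, n_samples=16, dt=1e-2):
    t = torch.rand(n_samples, 1) * (1 - 2 * dt) + dt       # keep t +/- dt inside [0, 1]
    x_prev = decoder(curve(t - dt))
    x_mid = decoder(curve(t))
    x_next = decoder(curve(t + dt))
    accel = (x_next - 2 * x_mid + x_prev) / dt ** 2         # Eq. (17), shape (N, n)

    loss = x_mid.new_zeros(())
    for i in range(n_samples):
        z_i = curve(t[i:i + 1]).detach()                    # latent point phi(t_i)
        # Columns of the decoder Jacobian span the tangent space (Eq. (12));
        # treated as constants here for simplicity.
        jac = torch.autograd.functional.jacobian(decoder, z_i).squeeze()   # (n, d)
        loss = loss + (jac.t() @ accel[i]).pow(2).sum()     # squared tangential component
    return loss
```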

The geodesic loss and the constant speed loss jointly force the curve $\gamma$ to have zero acceleration; that is, $\gamma$ is a geodesic on the data manifold. But the geodesic connecting two points may not be unique, such as the geodesics on a sphere. According to Theorem 2.2 in Section 2, the minimizing geodesic is the curve of minimal length connecting two points. Thus our model uses the minimal-length characterization to guarantee that $\gamma$ is a minimizing geodesic. Different from the summation corresponding to the curve energy used by Yang et al. (2018) and the summation of the rates of change at interpolated points used by Chen et al. (2018), we simply approximate the curve length by the summation of the Euclidean distances between adjacent interpolated samples, which relaxes the choice of the number of samples. The minimizing loss that minimizes the curve length is:

$$\mathcal{L}_{min} = \sum_{i=0}^{N-1} \left\| g(\phi(t_{i+1})) - g(\phi(t_i)) \right\|, \qquad t_i = \frac{i}{N}, \tag{18}$$

where $\|\cdot\|$ denotes the Frobenius norm of an output matrix or the 2-norm of an output vector.
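A minimal sketch of the minimizing loss in Eq. (18), under the same assumptions as the sketches above: decode equally spaced points along the curve and sum the Euclidean distances of adjacent decoded samples as an estimate of the curve length.

```python
# Minimal sketch of Eq. (18): discretize the curve at equally spaced t_i and
# sum the Euclidean distances between adjacent decoded points.
import torch

def minimizing_loss(curve, decoder, n_points=32):
    t = torch.linspace(0.0, 1.0, n_points).unsqueeze(1)    # equally spaced t_i
    x = decoder(curve(t))                                   # g(phi(t_i)), shape (N, n)
    return (x[1:] - x[:-1]).norm(dim=1).sum()               # estimated curve length
```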

To summarize this part, the overall loss function of our interpolation network is:

$$\mathcal{L} = \lambda_1 \mathcal{L}_{speed} + \lambda_2 \mathcal{L}_{geo} + \lambda_3 \mathcal{L}_{min}, \tag{19}$$

where $\lambda_1$, $\lambda_2$ and $\lambda_3$ are the weights that balance the three losses. Under this loss constraint, we can generate interpolations that move along the minimizing geodesic with constant speed and thus fulfill a uniform interpolation. The framework of our method is shown in Fig. 2.
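Putting the pieces together, here is a minimal sketch of one optimization loop for Eq. (19); the loss weights, learning rate and number of steps are assumptions, and `x1`, `x2` denote the two given data samples as tensors.

```python
# Minimal sketch of Eq. (19): optimize the curve parameters under the combined
# constant speed, geodesic and minimizing losses (weights are illustrative).
import torch

z1, z2 = encoder(x1).detach(), encoder(x2).detach()    # latent endpoints f(x1), f(x2)
curve = CubicCurve(z1, z2)
optimizer = torch.optim.Adam(curve.parameters(), lr=1e-2)

for step in range(2000):
    loss = (1.0 * constant_speed_loss(curve, decoder)
            + 1.0 * geodesic_loss(curve, decoder)
            + 1.0 * minimizing_loss(curve, decoder))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```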

5 Experimental Results

In this section, we present experiments on geodesic generation and image translation to demonstrate the effectiveness of our method.

5.1 Geodesic Generation

First, we conduct experiments on 3-dimensional datasets, since their geodesics can be visualized more easily. We choose the semi-sphere and the swiss-roll as our data manifolds.

Semi-sphere dataset. We randomly sample 4,956 points subject to the uniform distribution on the semi-sphere. In Fig. 3, we compare our approach with other interpolation methods, i.e., AAE (Makhzani et al., 2015), A&H (Arvanitidis et al., 2019) and Chen et al. (2018)'s method. From Riemannian geometry, we know that the geodesic of a sphere under our defined Riemannian metric is a circular arc (or part of one) whose center is the center of the sphere. AAE cannot guarantee that the curve connecting two points is a geodesic. A&H can find the shortest path connecting the two corresponding reconstructed endpoints, but the endpoints are inconsistent with the original inputs due to the uncertainty of their VAE network and their stochastic Riemannian metric. Chen et al. (2018)'s method can generate interpolations along a geodesic, but it cannot fulfill a uniform interpolation. In Fig. 3, we observe that our method generates a uniform interpolation along a fairly accurate geodesic on the semi-sphere under the defined Riemannian metric.
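For reference, the ground-truth geodesic between two points on the unit sphere can be computed analytically by spherical linear interpolation; the helper below is a hypothetical illustration (not taken from the paper) of how such a reference arc could be generated for comparison.

```python
# Minimal sketch: analytic great-circle arc between two points on the unit
# sphere, traversed at constant speed (spherical linear interpolation).
# Assumes p and q are not parallel or antiparallel.
import numpy as np

def sphere_geodesic(p, q, n_points=20):
    p = p / np.linalg.norm(p)
    q = q / np.linalg.norm(q)
    omega = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))    # angle between endpoints
    t = np.linspace(0.0, 1.0, n_points)[:, None]
    return (np.sin((1 - t) * omega) * p + np.sin(t * omega) * q) / np.sin(omega)
```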

Swiss-roll dataset. We choose the swiss-roll to demonstrate the effectiveness of our method on manifolds with large curvature variations. We randomly sample 5,000 points subject to the uniform distribution on the swiss-roll manifold. We compare our approach with AAE (Makhzani et al., 2015), GAIA (Sainburg et al., 2018), ACAI (Berthelot et al., 2019) and the method that applies a GAN in the feature space proposed by Chen et al. (2019b). Fig. 4 shows the experimental results. We observe that all methods except ours fail to generate interpolations within the data manifold, because they cannot encode latent embeddings suitable for linear interpolation.

We conduct ablation studies on the semi-sphere dataset to investigate the effect of the different losses proposed in our method, including the constant speed loss, the geodesic loss and the minimizing loss. Fig. 5 presents the results obtained with different combinations of the losses. We observe that the constant speed loss promotes a uniform interpolation. Even without the geodesic loss, our network generates a path shorter than linear interpolation and close to the real geodesic; although the network converges quickly to a fairly satisfactory result, the accuracy of the geodesic is not optimal. When the geodesic loss is incorporated, the interpolated points are fine-tuned to move along an accurate geodesic. The estimated curve lengths shown in Table 1 support this observation. The curve trained with all three losses yields the best result, and its estimated length of 1.1028 is the closest to the length of the real geodesic, namely 1.0776.

Loss Distance
Linear 1.3110
1.4710
1.1571
1.1277
6.0460
1.1028
real geodesic 1.0776
Table 1: The estimated lengths of the interpolation curves trained with the different losses proposed in our method.
Figure 5: Ablation study of results trained with the different losses proposed in our method. (a) Results of linear interpolation. (b)-(e) Results trained with different combinations of the proposed losses. (f) Results trained with the total loss $\mathcal{L}$.
Figure 6: Results of linear interpolation and our interpolation method on the MNIST dataset. (a) Results of linear interpolation. (b) Results of our interpolation method.
Figure 7: Results of linear interpolation and our interpolation method on the CelebA dataset. (a) Results of linear interpolation. (b) Results of our interpolation method.

5.2 Image Translation

To further demonstrate our model's effectiveness in image translation, we choose the MNIST (LeCun, 1998) and CelebA (Liu et al., 2015) datasets. The MNIST dataset contains handwritten digits from 10 classes, and the CelebA dataset contains 202,599 face images. First, we employ the LTSA algorithm to compute the approximated low-dimensional embeddings used to train our autoencoder. Due to the computational limitations of our device, we randomly select 1,079 images from each dataset for LTSA. We then use these computed embeddings to train our autoencoder network. Next, for each dataset, we randomly select two images as the start point and end point on the data manifold, and we employ different interpolation methods to realize an image translation between these two images. For the MNIST dataset, we run a comparative experiment using linear interpolation and our approach. In Fig. 6, we observe that with linear interpolation only the start image and end image are well reconstructed; the quality of the interpolated images is so poor that sometimes we cannot even tell whether they are digit images. Our interpolation method fulfills a uniform interpolation with high visual quality. The difference arises because linear interpolation may make the interpolated points traverse highly uncertain regions of the manifold, whereas our interpolation method ensures that the generated curves follow the data manifold.

For the CelebA dataset, we also use the methods above to interpolate between two selected face images and show the results in Fig. 7. Linear interpolation fails to generate uniformly interpolated face images, i.e., the interpolated images tend to change suddenly rather than gradually. In contrast, our interpolation method reconstructs visually pleasing face images with a uniform translation. The image translation experiments on both datasets demonstrate the usefulness and validity of our model.

6 Conclusion

We propose a framework to explore the geometric structure of the data manifold. We apply an autoencoder to generate points on the desired manifold, and we propose a constant speed loss and a minimizing geodesic loss to generate a geodesic on the underlying manifold given two endpoints. Different from existing methods in which a geodesic is defined as the shortest path on a graph connecting data points, our model defines geodesics consistently with the definition of a geodesic in Riemannian geometry. As an application, we apply our model to image translation to fulfill a uniform interpolation along the minimizing geodesics.

References

  • E. Agustsson, A. Sage, R. Timofte, and L. Van Gool (2019) Optimal transport maps for distribution preserving operations on latent spaces of generative models. In Proceedings of the 7th International Conference on Learning Representations (ICLR). arXiv:1711.01970.
  • G. Arvanitidis, L. K. Hansen, and S. Hauberg (2018) Latent space oddity: on the curvature of deep generative models. In Proceedings of the 6th International Conference on Learning Representations (ICLR). arXiv:1710.11379.
  • G. Arvanitidis, S. Hauberg, P. Hennig, and M. Schober (2019) Fast and robust shortest paths on manifolds learned from data. In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS).
  • D. Berthelot, C. Raffel, A. Roy, and I. Goodfellow (2019) Understanding and improving interpolation in autoencoders via an adversarial regularizer. In Proceedings of the 7th International Conference on Learning Representations (ICLR).
  • C. M. Bishop, M. Svensén, and C. K. Williams (1997) Magnification factors for the SOM and GTM algorithms. In Proceedings 1997 Workshop on Self-Organizing Maps.
  • M. P. do Carmo (1992) Riemannian geometry. Birkhäuser.
  • N. Chen, F. Ferroni, A. Klushyn, A. Paraschos, J. Bayer, and P. van der Smagt (2019a) Fast approximate geodesics for deep generative models. In International Conference on Artificial Neural Networks.
  • N. Chen, A. Klushyn, R. Kurle, X. Jiang, J. Bayer, and P. van der Smagt (2018) Metrics for deep generative models. In Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS). arXiv:1711.01204.
  • W. Chen and X. Li (2004) Introduction to Riemann geometry: volume I. Peking University Press.
  • Y. Chen, X. Xu, Z. Tian, and J. Jia (2019b) Homomorphic latent space interpolation for unpaired image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2408–2416.
  • I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680.
  • D. P. Kingma and M. Welling (2014) Auto-encoding variational Bayes. In Proceedings of the 2nd International Conference on Learning Representations (ICLR). arXiv:1312.6114.
  • Y. LeCun (1998) The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/.
  • Z. Liu, P. Luo, X. Wang, and X. Tang (2015) Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 3730–3738.
  • A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, and B. Frey (2015) Adversarial autoencoders. arXiv preprint arXiv:1511.05644.
  • G. Pai, R. Talmon, A. Bronstein, and R. Kimmel (2019) DIMAL: deep isometric manifold learning using sparse geodesic sampling. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV). arXiv:1711.06011.
  • S. T. Roweis and L. K. Saul (2000) Nonlinear dimensionality reduction by locally linear embedding. Science 290 (5500), pp. 2323–2326.
  • T. Sainburg, M. Thielk, B. Theilman, B. Migliori, and T. Gentner (2018) Generative adversarial interpolative autoencoding: adversarial training on latent space interpolations encourage convex latent distributions. arXiv preprint arXiv:1807.06650.
  • J. B. Tenenbaum, V. De Silva, and J. C. Langford (2000) A global geometric framework for nonlinear dimensionality reduction. Science 290 (5500), pp. 2319–2323.
  • I. Tolstikhin, O. Bousquet, S. Gelly, and B. Schoelkopf (2018) Wasserstein auto-encoders. In Proceedings of the 6th International Conference on Learning Representations (ICLR). arXiv:1711.01558.
  • T. Yang, G. Arvanitidis, D. Fu, X. Li, and S. Hauberg (2018) Geodesic clustering in deep generative models. arXiv preprint arXiv:1809.04747.
  • Z. Zhang and H. Zha (2003) Nonlinear dimension reduction via local tangent space alignment. In International Conference on Intelligent Data Engineering and Automated Learning, pp. 477–481.