1 Introduction
One of the major open problems in machine learning is the development of theoretically sound methods for identifying and exploiting the low-dimensional structures that are often present in high-dimensional data. Such methodology would have applications not only in supervised learning, but also in visualization and nonlinear dimensionality reduction, semi-supervised learning, and manifold regularization.
Some initial steps have been made in this direction over the years under the rubric of "manifold learning methods." These nonlinear dimension reduction (NLDR) methods have permitted interesting theoretical analysis and allowed the field to move beyond linear dimension reduction. But the theoretical results have fallen short of characterizing the overall scope of the problem, including the similarities and differences of existing methods and their respective advantages. Consider, for example, the classical problem of finding nonlinear embeddings of a Swiss roll with a hole, shown in Figure 1. Of the methods shown, Local Tangent Space Alignment is clearly the best at recovering the underlying planar structure of the manifold, but there is no existing theoretical explanation of this fact. Nor are there answers to the natural questions of whether there are scenarios in which other methods would be better, why other methods perform worse, or whether the deficiencies can be corrected. The current paper aims to tackle some of these problems. Not only do we provide new characterizations of NLDR methods, but by correcting deficiencies of existing methods we are able to propose new methods in Laplace-Beltrami approximation theory.
We analyze the general class of manifold learning methods that we refer to as local, spectral methods. These methods construct a matrix using only information in local neighborhoods and take a spectral decomposition to find a nonlinear embedding. This framework includes the commonly used methods: Laplacian Eigenmaps (LE) (Belkin & Niyogi, 2003), Local Linear Embedding (LLE) (Roweis & Saul, 2000), Hessian LLE (HLLE) (Donoho & Grimes, 2003), Local Tangent Space Alignment (LTSA) (Zhang & Zha, 2004), and Diffusion Maps (DM) (Coifman & Lafon, 2006). We also consider several more recent improvements to these classical methods, including Low-Dimensional-Representation LLE (Goldberg & Ritov, 2008), Modified LLE (Zhang & Wang, 2007), and MALLER (Cheng & Wu, 2013). Outside of our scope are global methods that construct dense matrices encoding global pairwise information; these include multidimensional scaling, principal components analysis, Isomap (Tenenbaum et al., 2000), and maximum variance unfolding (Weinberger & Saul, 2006). We note in passing that these global methods have serious practical limitations, either in terms of a strong linearity assumption or computational intractability.
Our general approach proceeds by showing that the embeddings produced by convergent local methods are solutions of differential equations, with a set of boundary conditions induced by the method. The treatment of the boundary conditions is the critical novelty of our approach. This approach allows us to categorize the methods considered into three classes of differential operators and accompanying boundary conditions, as shown in Table 1. We are also able to delineate properties that allow comparisons of methods within a class; see Table 2. These theoretical results allow us to conclude, for example, that when the goal is to find an isometry-preserving, global coordinate chart, LTSA is best among the classical methods considered. We can also show that HLLE belongs to the same class and converges to the same limit; hence, the two can be used interchangeably as smoothness penalties. We also improve existing Laplace-Beltrami approximations. In particular, we give a Laplace-Beltrami approximation that is consistent as a smoothing penalty even when boundary conditions are not satisfied.
Our analysis is based on the following two insights. First, the matrices obtained from local, spectral methods can be seen as operators that return the bias from smoothing a function. This allows us to make a connection to the theory of linear smoothing in statistics. Second, as the neighborhood sizes for each local method decrease, we obtain a linear operator that evaluates infinitesimal differences in that neighborhood. In other words, we obtain convergence to a differential operator. Thus, the asymptotic behavior of a procedure can be characterized by the corresponding differential operator in the interior of the manifold and (crucially) by the boundary bias of the linear smoother.
Table 1. Classes of limiting differential operators and the boundary conditions they induce.

Boundary condition | 2nd-order penalty                      | 4th-order penalty
None               | Coefficient Laplacian                  | —
Neumann            | Diffusion Maps, Laplacian Eigenmaps    | —
Second-order       | (LDR, m)-LLE, LDR-LLE+, LLR-Laplacian  | HLLE, LTSA, LLR-Laplacian
Table 2. Properties of the smoother underlying each method.

Method                | Order of smoother | Converges & stable
Diffusion Maps        | ✗                 | ✓
Laplacian Eigenmaps   | ✓                 | ✓
LLE                   | ✗                 | ✗
LDR-LLE               | ✗                 | ✗
LDR-LLE+              | ✗                 | ✓
LLR-Laplacian         | ✗                 | ✓
HLLE                  | ✓                 | ✓
LTSA                  | ✓                 | ✓
Coefficient Laplacian | ✓                 | ✓
2 Preliminaries
We begin by introducing some basic mathematical concepts and providing a high-level overview of the relationship between NLDR and linear smoothing.
Local, spectral methods share a construction paradigm:

1. Choose a neighborhood N_i for each point x_i;
2. Construct a matrix M with M_ij ≠ 0 only if x_j is a neighbor of x_i;
3. Obtain a nonlinear embedding from an eigendecomposition or singular value decomposition of M.
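As a concrete illustration of these three steps, the following minimal NumPy sketch instantiates the paradigm in the style of Laplacian Eigenmaps (ε-ball neighborhoods, an unnormalized graph Laplacian, and an eigendecomposition); the bandwidth and toy data are arbitrary illustrative choices:

```python
import numpy as np

def laplacian_eigenmaps(X, h, d):
    """Three-step local spectral paradigm, instantiated in the style of
    Laplacian Eigenmaps: (1) epsilon-ball neighborhoods, (2) a matrix that
    is nonzero only between neighbors, (3) a spectral embedding."""
    # Step 1: neighborhoods from pairwise ambient-space distances.
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-D2 / h ** 2) * (D2 <= h ** 2)   # truncated Gaussian weights
    np.fill_diagonal(W, 0.0)
    # Step 2: matrix with M_ij != 0 only for neighboring points
    # (here the unnormalized graph Laplacian D - W).
    L = np.diag(W.sum(axis=1)) - W
    # Step 3: embedding from the bottom non-constant eigenvectors.
    eigvals, eigvecs = np.linalg.eigh(L)
    return eigvecs[:, 1:d + 1]

# toy usage: 60 random points in R^3, embedded in 2 dimensions
X = np.random.RandomState(0).rand(60, 3)
Y = laplacian_eigenmaps(X, h=0.7, d=2)
```

In a production implementation the neighborhood graph would be sparse and the bottom eigenvectors computed with an iterative solver, but the three-step structure is the same.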
Step 2, the construction of the matrix, can be seen to be equivalent to constructing a linear smoother. We use this equivalence to compare existing NLDR methods and to generate new ones by examining the asymptotic bias of the smoother on the interior and boundary of the manifold. In particular, we show that whenever the neighborhoods shrink in Step 1 as the number of points n → ∞, the matrix M converges to a differential operator on the interior of the manifold. Nonlinear dimensionality reduction methods thus solve an eigenproblem involving this differential operator. The solutions of the eigenproblem on a compact manifold are typically only well-separated and identifiable in the presence of boundary conditions. If M is symmetric and the smoother's bias has a different asymptotic rate on the boundary than in the interior, then it imposes a corresponding boundary condition for the eigenproblem. If M is not symmetric, then the boundary conditions can be determined from the boundary bias of M and its adjoint.
2.1 Linear smoothers
Consider the standard regression problem for data generated by the model y_i = f(x_i) + ε_i, where ε_i is a zero-mean error term. A regression model is a linear smoother if its fitted values take the form

ŷ = S y,   (1)

and the smoothing matrix S does not depend on the outcomes y. Examples of linear smoothers include linear least-squares regression on a design matrix, kernel and local polynomial regression, and ridge regression.
In the regression problem, one wishes to infer the conditional mean function given noisy observations y_i. For manifold learning problems, the goal is to learn noiseless global coordinates, but the corresponding outcomes are never observed. In both linear smoothing and manifold learning, the smoothing matrix makes no use of the response and simply maps rough functions into a space of smoother functions. The embedding learned by NLDR methods is a space of functions that are best reconstructed by the smoother. In other words, the smoother defines a form of autoencoder, and the learned space is a space of likely outputs from the autoencoder.
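The defining property can be made concrete with a small sketch: a Nadaraya-Watson kernel smoother, whose smoothing matrix is built from the design points alone and only afterwards applied to the noisy outcomes. The bandwidth and test function are illustrative choices:

```python
import numpy as np

def nadaraya_watson_matrix(x, h):
    """Smoothing matrix S of a Nadaraya-Watson kernel smoother.
    Note that S depends only on the design points x, never on the outcomes."""
    K = np.exp(-((x[:, None] - x[None, :]) / h) ** 2)
    return K / K.sum(axis=1, keepdims=True)   # rows sum to one

x = np.linspace(0.0, 1.0, 200)
rng = np.random.default_rng(0)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(x.size)
S = nadaraya_watson_matrix(x, h=0.05)
y_hat = S @ y                                 # the fit is linear in y
```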
2.2 NLDR constructions of linear smoothers
It is a simple but crucial fact that each of the NLDR methods that we consider constructs or approximates a matrix L = D(I − S), where D is a diagonal matrix and S is a linear smoother. Thus, L measures the bias of the smoother's prediction, weighted by D.
For example, Diffusion Maps with a Gaussian kernel constructs the Nadaraya-Watson smoother S = D⁻¹W, where D is the diagonal degree matrix and W is the kernel matrix. The constructed embedding consists of the right singular vectors corresponding to the smallest singular values of I − S. Laplacian Eigenmaps, using an unnormalized graph Laplacian, differs only by the reweighting D(I − S) = D − W, which ensures that the matrix is positive semidefinite.
2.3 Assumptions
The usual setting for linear smoothing is Euclidean space. To account for the manifold setting, we must make some regularity assumptions and demonstrate that calculations in Euclidean space approximate calculations on the manifold sufficiently closely.
We consider a smooth, compact, d-dimensional Riemannian manifold M with smooth boundary, embedded in an ambient Euclidean space. From this manifold, points x_1, …, x_n are drawn from a uniform distribution on the manifold. This uniformity assumption can be weakened to admit a smooth density and obtain density-weighted asymptotic results.
In the continuous case, we consider neighborhoods of radius h around each point for a given bandwidth h, where distance is Euclidean distance in the ambient space. When appropriate, we will take h → 0 as n → ∞. In the discrete case, denote by N_i the set of indices for neighbors of x_i in the point cloud. This neighborhood construction is only mildly restrictive, since kNN neighborhoods are asymptotically equivalent to metric-ball neighborhoods in the interior of the manifold when points are sampled uniformly, and the radius of kNN neighborhoods differs only on the boundary.
2.4 Local coordinates
Most existing methods, as well as the ones we construct, require access to a local coordinate system of the manifold dimension d at each point. This coordinate system is estimated using a local PCA or SVD at each point. Let X_i be the matrix of ambient-space coordinates for points in the neighborhood centered on x_i, and let the corresponding matrix centered on the neighborhood mean be given. The top d right singular vectors

(2)

give estimated tangent vectors at x_i. The top d rescaled left singular vectors project points in the neighborhood to the tangent space. The normal coordinates, corresponding to the geodesics traced out by the tangent vectors, agree closely with these tangent space coordinates. Specifically, by Lemma 6 in Coifman & Lafon (2006), the two coordinate systems differ by a homogeneous polynomial of degree three plus higher-order terms, whenever the coordinates lie in a ball of radius h. We adopt the convention that s refers to the normal coordinates at x for points in its neighborhood.
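A minimal sketch of this local SVD construction, under the assumption that the manifold dimension d is known:

```python
import numpy as np

def tangent_coordinates(X, i, h, d):
    """Local PCA at X[i]: an SVD of the centered h-neighborhood gives
    estimated tangent directions (top right singular vectors), and the
    projections onto them give tangent-space coordinates for the
    neighbors (the top rescaled left singular vectors)."""
    nbrs = np.where(((X - X[i]) ** 2).sum(axis=1) <= h ** 2)[0]
    Z = X[nbrs] - X[nbrs].mean(axis=0)       # center the neighborhood
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return nbrs, Z @ Vt[:d].T                # coordinates in tangent basis

# toy usage: a one-dimensional curve embedded in R^3
t = np.linspace(0.0, 1.0, 100)
X = np.column_stack([np.cos(t), np.sin(t), 0.1 * t])
nbrs, coords = tangent_coordinates(X, i=50, h=0.15, d=1)
```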
Our results rely on integration in normal coordinates over a ball of radius h in the distance metric of the ambient space. To account for the manifold curvature, the volume form and the neighborhood sizes must be accounted for in the integral. Lemma 7 in Coifman & Lafon (2006) further provides a Taylor expansion of the volume form for the Riemannian metric as one plus a homogeneous polynomial of degree two, up to higher-order terms. Likewise, distances in the ambient space and in normal coordinates differ by a homogeneous degree-two correction term, where s denotes normal coordinates about x and the distance on the left-hand side is with respect to the ambient space. Consequently, integrals of any homogeneous polynomial of degree one or two satisfy

(3)

where the integral on the right represents integration in R^d and the error term hides a smooth function that depends on the curvature of the manifold at x.
For the purposes of this paper, we do not account for error from estimation of the tangent space. Thus we assume that we have a sufficiently accurate estimate of the normal coordinates. Accounting for this estimation error is a natural direction for further work.
3 Analysis of existing methods
Each of the existing NLDR methods constructs a matrix L from a point cloud of n points with bandwidth h. The resulting nonlinear embeddings are obtained from the bottom eigenvectors of L or L*L. We consider L as a discrete approximation to an operator on functions on the manifold. We examine the limit operator constructed by each of the NLDR methods for a fixed bandwidth h, and show that as the bandwidth h → 0, a suitable rescaling of this operator converges to a differential operator. The exception is LLE, for which there is no well-defined limit. The stochastic convergence of the empirical constructions is not considered in this paper.
3.1 Taylor expansions and local polynomial bases
As described in Section 2.2, most existing NLDR methods can be expressed as L = D(I − S), where D is a multiplication operator corresponding to a diagonal matrix and S is a linear smoother. Denote by S_x the representer of the smoother at x, so that Sf(x) = ⟨S_x, f⟩; this function exists by the Riesz representation theorem. We say S is local when, for all x, the support of S_x is contained in a ball of radius h centered on x. We assume that S is a bounded operator (as unbounded operators are necessarily poor smoothers).
Consider S applied to a smooth function f. A Taylor expansion of f in normal coordinates s at x gives

f(y) = f(x) + ∇f(x)ᵀ s(y) + ½ s(y)ᵀ ∇²f(x) s(y) + O(h³),   (4)

where the error term holds since f has bounded derivatives due to the compactness of M. Here s denotes the normal coordinate map at x. From this it is clear that the asymptotic behavior of the smoother in a neighborhood of x is determined by its behavior on a basis of local polynomials. Furthermore, it can be well approximated by examining the behavior on only low-degree polynomials.
3.2 Convergence to a differential operator
Of particular interest is the case where the smoother operates locally on shrinking neighborhoods. In this case, the following theorem (proven in the supplementary material) is an immediate consequence of applying the sequence of smoothers to the Taylor expansion.
Theorem 1.
Let (S_h) be a sequence of linear smoothers where the support of each S_{h,x} is contained in a ball of radius h around x. Further assume that the bias in the residuals satisfies (I − S_h)f = O(h^m) for some integer m and all smooth f. Then, if h → 0 and h^{-m}(I − S_h) converges to a bounded linear operator, it must converge to a differential operator of order at most m on the interior of the domain.
As a result, each of the existing NLDR methods considered can be seen as a discrete approximation to the solutions of a differential equation Lf = λf subject to boundary conditions, where L is the limiting differential operator.
3.3 Laplacian and boundary behavior
The convergence of Laplacians constructed from point clouds to a weighted Laplace-Beltrami operator has been well studied by Hein et al. (2007) and Ting et al. (2010). In particular, the spectral convergence of Laplacians constructed using kernels has been studied by Belkin & Niyogi (2007), von Luxburg et al. (2008), and Berry & Sauer (2016). In the presence of a smooth boundary, the eigenproblem for the Diffusion Maps Laplacian that converges to the unweighted Laplace-Beltrami operator has been shown by Coifman & Lafon (2006) and Singer & Wu (2016) to impose Neumann boundary conditions. Specifically, it solves the differential equation

Δf = λf on M,   ∇f(x) · ν_x = 0 for x ∈ ∂M,   (5)

where ν_x is the vector normal to the boundary at x. This result is easily extended to other Laplacian constructions.
As the Diffusion Maps operator is the bias operator for a Nadaraya-Watson kernel smoother, and the unnormalized Laplacian is a rescaling of this bias, the Neumann boundary conditions can be derived from existing analyses for kernel smoothers. The boundary bias of the kernel smoother is of order h at a boundary point, while in the interior it is of order h² when points are sampled uniformly. This follows from noting that the first moment of a symmetric kernel is zero when the underlying distribution is uniform, and that the boundary introduces asymmetry in the normal direction ν. Hence, the operator must be scaled by h⁻² to obtain a nontrivial limit in the interior, but this scaling yields divergent behavior at the boundary when the Neumann boundary condition is not met. Since eigenfunctions cannot exhibit this divergent behavior, they must satisfy the boundary conditions. The significance of this result is given by the following simple theorem.
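This interior-versus-boundary rate difference is easy to verify numerically. The following sketch estimates the bias of a Nadaraya-Watson smoother for f(x) = x under a uniform design on [0, 1], at the boundary point 0 and at the interior point 1/2 (the sample size and bandwidth are arbitrary illustrative choices):

```python
import numpy as np

def nw_bias(x0, h, n=200_000, seed=0):
    """Monte Carlo estimate of the Nadaraya-Watson bias E[f_hat(x0)] - f(x0)
    for f(x) = x, with a Gaussian kernel and a uniform design on [0, 1]."""
    x = np.random.default_rng(seed).uniform(0.0, 1.0, n)
    w = np.exp(-((x - x0) / h) ** 2)       # kernel weights at x0
    return (w * x).sum() / w.sum() - x0    # smoother applied to y = x

h = 0.05
boundary_bias = abs(nw_bias(0.0, h))   # asymmetric neighborhood at the edge
interior_bias = abs(nw_bias(0.5, h))   # symmetric neighborhood: bias near 0
```

The boundary bias is on the order of h, while the interior bias vanishes up to Monte Carlo noise, matching the rate argument above.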
Theorem 2.
Let L be an operator imposing the Neumann boundary condition ∇f · ν = 0 on ∂M. Then, if there exists an isometry-preserving global coordinate chart, a spectral decomposition of L cannot recover the global coordinates, as they are not in the span of the eigenfunctions of L.
3.4 Local Linear Embedding
Local Linear Embedding (LLE) solves the boundary bias problem by explicitly canceling out the first two terms in the Taylor expansion. For a point cloud, LLE constructs a weight matrix whose rows sum to one and which reconstructs each point as an affine combination of its neighbors. Goldberg & Ritov (2008) and Ting et al. (2010) showed that these conditions are not sufficient to identify a limit operator: the behavior of LLE depends on a regularization term. We give a more refined analysis of the constrained ridge regression procedure used to generate the weights.
Let N_i be the neighborhood of x_i that excludes x_i itself, and let Z_i be the matrix of points in N_i centered on x_i. To reconstruct x_i from its neighbors, one solves the regression problem

min_w ‖x_i − Σ_{j ∈ N_i} w_j x_j‖²,   (6)

under the constraint that the weights sum to one, 1ᵀw = 1. Adding a ridge regression penalty and applying Lagrange multipliers gives weights proportional to (Z_i Z_iᵀ + αI)⁻¹ 1.
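A minimal sketch of this weight construction, with the sum-to-one constraint eliminated by solving against the all-ones vector and normalizing (the regularization scale alpha is an illustrative choice):

```python
import numpy as np

def lle_weights(X, i, nbrs, alpha=1e-3):
    """LLE reconstruction weights for X[i]: ridge-regularized least squares
    with the sum-to-one constraint handled via a Lagrange multiplier,
    i.e. solve (G + reg*I) w = 1 and normalize, where G is the local Gram
    matrix of the centered neighbors."""
    Z = X[nbrs] - X[i]                      # neighbors centered on x_i
    G = Z @ Z.T
    G = G + alpha * np.trace(G) * np.eye(len(nbrs))   # ridge term
    w = np.linalg.solve(G, np.ones(len(nbrs)))
    return w / w.sum()                      # enforce sum-to-one

# toy usage: reconstruct the origin from four symmetric neighbors in R^2
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
w = lle_weights(X, 0, [1, 2, 3, 4])
```

By symmetry the four weights are equal, and the origin is reconstructed exactly; note that without the ridge term the Gram matrix here is singular, illustrating why LLE's behavior hinges on the regularization.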
Consider the singular value decomposition Z_i = UΣVᵀ, and let U_⊥ be the orthogonal complement of U. This allows us to rewrite the weights as

(7)
(8)

In other words, the weights are the same as the constant weights used by a ball Laplacian, but with a correction term. The correction term in Eq. 8 depends only on the curvature of the manifold. One can see this by noting that the top d left singular vectors correspond to the points expressed in normal coordinates around x_i; as the neighborhood is symmetric around x_i, these directions contribute no first moment. The remaining left singular vectors correspond to directions of curvature of the manifold. From this, it is easy to see that LLE with ball neighborhoods is equivalent, in the interior of the manifold, to the ball Laplacian plus a correction term that reduces the penalty on the curvature of a function when it matches the curvature of the manifold.
3.5 Hessian LLE
Hessian LLE performs a local quadratic regression in each neighborhood. It then discards the locally linear portion in order to construct a functional that penalizes second derivatives. The Taylor expansion in Eq. 4 shows that the entries of the Hessian are coefficients of a local quadratic polynomial regression.
Given a basis of monomials of degree at most two in local coordinates for each point in a neighborhood, where the first d + 1 basis elements span the constant and linear functions, the entries of the estimated Hessian are the regression coefficients for the remaining, quadratic basis elements.
Rather than directly estimating the Hessian, HLLE performs the following approximation. Orthonormalize the basis of monomials using Gram-Schmidt and discard the first d + 1 vectors. By orthonormality, the regression coefficients in this basis are simply the inner products of the remaining basis vectors with the function values. We note that this approximation only recovers the Hessian in the interior of the manifold and when the underlying sampling distribution is uniform.
Given Hessian estimates for each point, the sum of their Frobenius norms is easily expressed as a quadratic form in the function values.
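A sketch of this estimator in a single neighborhood, using a QR factorization for the Gram-Schmidt step; by construction the coefficients vanish exactly on affine functions:

```python
import numpy as np

def hessian_coefficients(U, f_vals):
    """HLLE-style local Hessian estimate: build the monomial basis
    [1, linear, quadratic] in the local coordinates U (k x d), orthonormalize
    it by Gram-Schmidt (QR), discard the first d + 1 columns, and read off
    the quadratic coefficients as inner products with the function values."""
    k, d = U.shape
    cols = [np.ones(k)] + [U[:, a] for a in range(d)]
    cols += [U[:, a] * U[:, b] for a in range(d) for b in range(a, d)]
    Q, _ = np.linalg.qr(np.column_stack(cols))   # Gram-Schmidt step
    return Q[:, d + 1:].T @ f_vals               # quadratic-part coefficients

# toy usage: 50 points in a 2-d tangent coordinate patch
rng = np.random.default_rng(1)
U = rng.uniform(-1.0, 1.0, (50, 2))
c_affine = hessian_coefficients(U, 2 * U[:, 0] - U[:, 1] + 3)  # no curvature
c_quad = hessian_coefficients(U, U[:, 0] ** 2)                 # curvature
```

The affine function lands entirely in the discarded span, so its coefficients are zero, while the curved function produces a nonzero quadratic component.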
3.6 Local Tangent Space Alignment
Local tangent space alignment (LTSA) computes a set of global coordinates that can be aligned to local normal coordinates via local affine transformations.
Given the neighbors of each point expressed in local coordinates, LTSA finds global coordinates and affine alignment transformations that minimize the difference between the aligned global and local coordinates, with the means removed by a centering matrix.
Given the global coordinates, this is a least-squares regression problem for a fixed set of covariates, namely the local coordinates. Thus, the best linear predictions in each neighborhood are given by the hat matrix H_i for those covariates. The objective can be expressed using local operators as

(9)
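A sketch of the discrete construction: each neighborhood contributes the residual projection I − H_i of its local hat matrix, and the contributions are summed into a global alignment matrix (the neighborhood size k is an illustrative choice):

```python
import numpy as np

def ltsa_alignment_matrix(X, k, d):
    """Sketch of the LTSA construction: each point contributes the residual
    projection I - H_i of the hat matrix H_i for a regression on its
    centered local tangent coordinates (plus an intercept), and the
    per-neighborhood operators are summed into a global matrix."""
    n = len(X)
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    M = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D2[i])[:k]            # k nearest points (incl. i)
        Z = X[nbrs] - X[nbrs].mean(axis=0)
        U, _, _ = np.linalg.svd(Z, full_matrices=False)
        G = np.column_stack([np.ones(k) / np.sqrt(k), U[:, :d]])
        M[np.ix_(nbrs, nbrs)] += np.eye(k) - G @ G.T   # I - hat matrix
    return M

# toy usage: a straight line in R^2; constants and the line's parameter
# lie (numerically) in the null space of M
t = np.linspace(0.0, 1.0, 30)
M = ltsa_alignment_matrix(np.column_stack([t, 2 * t]), k=6, d=1)
```

On exactly linear data, constant and linear functions of the parameter lie in every local hat matrix's column span, so they are annihilated by M, which is why the bottom eigenvectors recover the flat coordinates.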
4 Equivalence of HLLE and LTSA
Although LTSA is derived from a very different objective than HLLE, we will show that they are asymptotically equivalent. This greatly strengthens a result of Zhang et al. (2018), which showed the equivalence of a modified version of HLLE to LTSA under a restrictive condition on the number of neighbors.
We show the asymptotic equivalence in two steps. First, we show that the HLLE and LTSA operators converge to the same differential operator by establishing convergence in the weak operator topology. Then, we give an argument that derives the boundary condition for both methods.
4.1 Continuous extension
Both HLLE and LTSA are composed of a sum of local projection operators on the point cloud. As we wish to study their limit behavior on smooth functions, we consider the continuous extension to the manifold. To form the continuous extension, the sum is replaced by an integral

(10)

where the measure is the natural volume measure on M, c is an appropriate normalizing constant, V_h is the volume of a ball of radius h in R^d, and the P_x are local operators. In the case of LTSA, P_x = I − H_x, where H_x is the projection onto functions that are linear in a local normal coordinate system on the neighborhood of x, rather than linear functions on a discrete set of points.
To identify the order of the differential operator that LTSA converges to, we simply need to find the scaling that yields a nontrivial limit. For the functional to converge nontrivially as h → 0, the component terms should also converge nontrivially. Since P_x is a projection onto the space orthogonal to linear functions, the leading term of P_x f is quadratic and of order h²; the squared norm in the objective is therefore of order h⁴, so the functional must be rescaled by h⁻⁴. The same argument holds for HLLE. Thus, both LTSA and HLLE yield fourth-order differential operators in the interior of M.
4.2 HLLE and LTSA difference is negligible
To show the equivalence of HLLE and LTSA, we show that the remainder term resulting from their difference is the bias term of an even higher-order smoother. As such, this bias term goes to zero faster than the HLLE and LTSA bias terms, which yields convergence in the weak operator topology. This gives the following theorem; proof details are given in the supplementary material.
Theorem 3.
The difference between the HLLE and LTSA operators vanishes as h → 0 in the weak operator topology on L²(M).
4.3 HLLE and LTSA boundary behavior
To establish that the limit operator has the same eigenfunctions, we must show that the boundary conditions that those eigenfunctions satisfy also match. We provide a theorem in which a single proof identifies the boundary condition for both methods. This boundary condition applies to the second derivatives of a function and admits linear functions, unlike the Neumann boundary condition imposed by graph-Laplacian-based methods.
Theorem 4.
The rescaled HLLE and LTSA functionals diverge at a boundary point as h → 0 for any smooth f unless

(11)

where ν is the tangent vector orthogonal to the boundary.
We outline the proof, providing the details in the supplementary material. The proof shows that the boundary bias of the LTSA smoother is of order h² at the boundary. This is of lower order than the h⁴ scaling required for the functional to converge.
Since locally linear and constant functions are in the null space by construction, only the behavior of the smoother on quadratic polynomials needs to be considered. The difficulty in the analysis arises from needing to average over multiple local regressions on different neighborhoods and from dealing with a non-product measure. By exploiting symmetry, the problem of a non-product measure is removed, and the multivariate regression problem can be reduced to a univariate linear regression with respect to the coordinate function in the direction orthogonal to the boundary. Due to the shape of the asymmetric neighborhood, the quadratic basis functions induce a nonzero conditional mean; since one of these terms enters with a negative sign, the coefficients can be set to cancel out the boundary bias, which yields the result.
4.4 Partial equivalence of HLLE and Laplacian
Although the Laplace-Beltrami operator induces a penalty on the gradient and appears fundamentally different from the Hessian penalty induced by HLLE and LTSA, they can be compared by squaring the Laplace-Beltrami operator to create a fourth-order differential operator that induces a slightly different penalty on the Hessian.
Consider the smoothness penalty obtained by squaring the Laplace-Beltrami operator, compared with the HLLE penalty:

∫_M (Δf)² dV = ⟨Δf, Δf⟩ = ⟨f, Δ²f⟩   versus   ∫_M ‖∇²f‖²_F dV,

where the second equality follows from the self-adjointness of the Laplace-Beltrami operator. These penalties are identical for one-dimensional manifolds. However, a slightly different penalty is induced on multidimensional manifolds. Given global coordinates x, y, a harmonic quadratic polynomial such as f(x, y) = xy is in the null space of the twice-iterated Laplacian but not in the null space of the HLLE penalty.
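This claim is easy to check numerically on flat R² for the example f(x, y) = xy: the (finite-difference) Hessian has zero trace, so Δf = 0 and hence Δ²f = 0, while the squared Frobenius norm of the Hessian equals 2:

```python
import numpy as np

def hessian_fd(f, x, y, eps=1e-4):
    """Finite-difference Hessian of a scalar function on R^2 at (x, y)."""
    fxx = (f(x + eps, y) - 2 * f(x, y) + f(x - eps, y)) / eps ** 2
    fyy = (f(x, y + eps) - 2 * f(x, y) + f(x, y - eps)) / eps ** 2
    fxy = (f(x + eps, y + eps) - f(x + eps, y - eps)
           - f(x - eps, y + eps) + f(x - eps, y - eps)) / (4 * eps ** 2)
    return np.array([[fxx, fxy], [fxy, fyy]])

H = hessian_fd(lambda x, y: x * y, 0.3, -0.7)
laplacian = np.trace(H)        # ~ 0: xy is harmonic, so Delta^2 (xy) = 0
frob2 = (H ** 2).sum()         # ~ 2: the Hessian penalty does not vanish
```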
5 Alternate constructions
We provide a few examples to illustrate how alternative constructions can address undesirable properties of existing methods, and the implications of those changes. In particular, we propose a convergent and more stable variation of LLE. We find that LLE and its variants behave similarly to a local linear regression smoother, replacing the Neumann boundary condition for Laplacians with a second-order boundary condition that admits linear functions. Furthermore, we generate a new Laplace-Beltrami approximation whose smoothing penalty does not impose boundary conditions; the resulting operator is shown not to have well-separated eigenfunctions.
5.1 Low-dimensional representation LLE+
LLE has the deficiency that the curvature of the manifold affects the resulting operator and smoothness penalty. In the worst case, when the regularization parameter is very small, the embedding of the manifold in the ambient space lies close to the null space of the resulting LLE operator. In other words, a spectral decomposition of the LLE operator may simply recover a linear transformation of the ambient space.
Low-dimensional representation LLE (LDR-LLE) (Goldberg & Ritov, 2008) is a modified version of LLE that removes some of the effect of the manifold's curvature by reconstructing each point based on its tangent space coordinates only. As this still cancels out the first two terms of a Taylor expansion, it is an approximation to the Laplace-Beltrami operator. The weights at each point are chosen to lie in the subspace spanned by the neighborhood points in the ambient space and orthogonal to the tangent space, and thus the method explicitly penalizes functions with curvature that matches the manifold. However, as the singular values for directions orthogonal to the tangent space decrease more quickly than those for the tangent space, the method is subject to numerical instability. As such, we classify LDR-LLE, like LLE, as being based on a second-order smoother. To compensate for this instability, LDR-LLE adds a regularization term.
We propose a further modification, LDR-LLE+, which removes the artificial restriction of the weights to the span of the points in the neighborhood and generates a first-order smoother. The weights are given by

(12)

where the columns of U_i are the top d left singular vectors of the neighborhood matrix after centering on x_i.
Another variant of LLE is modified LLE (mLLE) (Zhang & Wang, 2007). While LDR-LLE explicitly constructs a local smoother expressible in terms of linear functions of the tangent coordinates, together with a bias operator whose null space contains the linear functions, mLLE achieves a similar effect by adding vectors orthogonal to the tangent space to the bias operator in order to remove non-tangent directions from the null space.
5.2 Local Linear Regression
A simple alternative construction is to directly use a local linear regression (LLR) smoother to approximate the Laplace-Beltrami operator. Since only functions linear in the normal coordinates should be included in the null space of the bias operator, each local regression is performed on the projection of the neighborhood into its tangent space. Cheng & Wu (2013) proposed this linear smoother in the context of local linear regression on a manifold and noted its connection to the Laplace-Beltrami operator, without analyzing the convergence of the associated operator. Our contribution is determining the boundary conditions.
In particular, we study the boundary behavior of the operator (I − S*)(I − S), where S* is the adjoint of the smoother S. The resulting boundary conditions have the same form as those for LTSA, albeit with different constants, and admit linear functions as eigenfunctions.
Theorem 5.
Let S_h be the expected continuous extension of a local linear regression smoother with bandwidth h. Then there is some constant c such that, for any smooth f and any boundary point x, the rescaled bias converges as h → 0 only if

(13)

where ν is the vector normal to the boundary at x.
This result is obtained by examining the behavior of the adjoint, reducing it to the computation of the average residual over a set of univariate linear regressions. The result is that the adjoint has nonvanishing boundary bias when applied to a function violating the condition. Combining this with the existing analysis of the boundary bias of local linear regression by Ruppert & Wand (1994) gives divergence at the boundary when the boundary condition is not met. Interestingly, swapping the order in which the operators are applied yields a Dirichlet boundary condition.
We note that this procedure is similar to LDR-LLE. The row weights must sum to one to preserve constant functions, and each point's tangent coordinates are perfectly reconstructed, since the coordinate functions are linear functions of themselves. Empirically, we found that the two methods perform very similarly, as shown in Figure 1, but Local Linear Regression was more likely to exclude the quadratic interaction polynomial from its bottommost eigenfunctions.
5.3 Laplacian without boundary conditions
For existing Laplacian approximations, the smoothness penalty is only guaranteed to converge when the function satisfies the boundary conditions imposed by the operator. Furthermore, the only existing Laplace-Beltrami approximation that yields a positive semidefinite matrix and a nonnegative smoothness penalty is the graph Laplacian.
We derive a construction, the Coefficient Laplacian, that is both self-adjoint and guarantees convergence of the smoothness penalty for all functions. This is achieved by explicitly estimating the gradient with a local linear regression. For each neighborhood N_i, estimate normal coordinates with respect to x_i, and define

(14)
(15)

Thus, the construction yields the coefficients of a linear regression of the centered function values on the centered coordinates, without an intercept term, and the smoothness penalty converges as h → 0 and n → ∞ sufficiently slowly.
The resulting operator has desirable behavior when used as a smoothing penalty. However, its eigenfunctions are useless for nonlinear dimensionality reduction, for two reasons. First, the removal of boundary conditions yields an operator without a pure point spectrum. For example, in one dimension, the resulting operator solves the differential equation f″ = λf, which has solutions for all λ; thus, it is unclear which solutions are picked out by an eigendecomposition of the discrete approximations. Second, one can show that the rescaled operator does not converge uniformly at the boundary even when the functional converges. As a result, the eigenvectors of the discrete approximations are uninformative. The resulting deficiencies are illustrated in Figure 2.
6 Discussion
This paper explores only one aspect of the connection between manifold learning and linear smoothing, namely the analysis of manifold learning methods and ways to use that analysis to improve existing NLDR methods.
However, this connection can be exploited in many additional ways. For instance, the methods developed for manifold learning can be used for linear smoothing. In particular, the linear smoother associated with LTSA can be used as a smoother for nonparametric regression problems. A simple consequence of our analysis is that this smoother achieves a higher-order bias in the interior even though it only uses a first-order local linear regression; by contrast, local linear regression yields a bias of order h² in the interior. The connection can also be exploited for multiscale analyses. In the case of the Laplacian, the weighted bias operator is the infinitesimal generator of a diffusion, with the smoother defining its transition kernels, and can be used to generate diffusion wavelets (Coifman & Maggioni, 2006). Other bias operators can be substituted to generate orthogonal bases with local, compact support. Another possibility is to combine existing linear smoothers with manifold learning penalties to generate new smoothers and NLDR techniques, using the theory of generalized additive models (Buja et al., 1989).
References
 Belkin & Niyogi (2003) Belkin, M. and Niyogi, P. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, 2003.
 Belkin & Niyogi (2007) Belkin, M. and Niyogi, P. Convergence of Laplacian eigenmaps. In NIPS, 2007.
 Berry & Sauer (2016) Berry, T. and Sauer, T. Consistent manifold representation for topological data analysis. arXiv preprint arXiv:1606.02353, 2016.
 Buja et al. (1989) Buja, A., Hastie, T., and Tibshirani, R. Linear smoothers and additive models. The Annals of Statistics, 17(2):453–510, 1989.
 Cheng & Wu (2013) Cheng, M.Y. and Wu, H.T. Local linear regression on manifolds and its geometric interpretation. Journal of the American Statistical Association, 108(504):1421–1434, 2013.
 Coifman & Lafon (2006) Coifman, R. and Lafon, S. Diffusion maps. Applied and Computational Harmonic Analysis, 21(1):5–30, 2006.
 Coifman & Maggioni (2006) Coifman, R. and Maggioni, M. Diffusion wavelets. Applied and Computational Harmonic Analysis, 21(1):53–94, 2006.
 Donoho & Grimes (2003) Donoho, D. L. and Grimes, C. Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data. Proceedings of the National Academy of Sciences, 100(10):5591, 2003.
 Goldberg & Ritov (2008) Goldberg, Y. and Ritov, Y. LDR-LLE: LLE with low-dimensional neighborhood representation. Advances in Visual Computing, pp. 43–54, 2008.
 Hein et al. (2007) Hein, M., Audibert, J.Y., and von Luxburg, U. Graph Laplacians and their convergence on random neighborhood graphs. Journal of Machine Learning Research, 8:1325–1370, 2007.
 Roweis & Saul (2000) Roweis, S. T. and Saul, L. K. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323, 2000.
 Ruppert & Wand (1994) Ruppert, D. and Wand, M. Multivariate locally weighted least squares regression. The Annals of Statistics, 22(3):1346–1370, 1994.
 Singer & Wu (2016) Singer, A. and Wu, H.T. Spectral convergence of the connection Laplacian from random samples. Information and Inference: A Journal of the IMA, 6(1):58–123, 2016.
 Tenenbaum et al. (2000) Tenenbaum, J., De Silva, V., and Langford, J. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
 Ting et al. (2010) Ting, D., Huang, L., and Jordan, M. I. An analysis of the convergence of graph Laplacians. In ICML, 2010.

 von Luxburg et al. (2008) von Luxburg, U., Belkin, M., and Bousquet, O. Consistency of spectral clustering. Annals of Statistics, 36(2):555–586, 2008.
 Weinberger & Saul (2006) Weinberger, K. and Saul, L. Unsupervised learning of image manifolds by semidefinite programming. International Journal of Computer Vision, 70(1):77–90, 2006.
 Zhang et al. (2018) Zhang, S., Ma, Z., and Tan, H. On the equivalence of HLLE and LTSA. IEEE Transactions on Cybernetics, 48(2):742–753, 2018.
 Zhang & Wang (2007) Zhang, Z. and Wang, J. MLLE: Modified locally linear embedding using multiple weights. In NIPS, 2007.
 Zhang & Zha (2004) Zhang, Z. and Zha, H. Principal manifolds and nonlinear dimensionality reduction via tangent space alignment. SIAM Journal on Scientific Computing, 26(1):313–338, 2004.
Appendix A Convergence to differential operator
Theorem 6.
Let $\{L_n\}$ be a sequence of linear smoothers such that the support of $L_n$ at each point is contained in a ball of radius $h_n \to 0$. Further assume that the bias in the residuals is $O(h_n^m)$ for some integer $m$ and all sufficiently smooth functions. If the rescaled operators converge to a bounded linear operator as $n \to \infty$, then the limit must be a differential operator of order at most $m$ on the domain of smooth functions.
Proof.
Let $T$ be the limit of the rescaled smoothers. Let $p$ be any monomial of degree greater than $m$, and let $\phi$ be a smooth function equal to $1$ on a ball of radius $\epsilon$ around a point $x$. Then the rescaled residual of the pointwise product $p\phi$ vanishes at $x$ in the limit. Since $T$ is bounded, this vanishing is preserved, and the convergence is uniform over all such $x$. Thus, the behavior of $T$ is determined by its action on a basis of polynomials of degree at most $m$. Applying $T$ to a Taylor expansion with remainder gives the desired result. ∎
Appendix B Equivalence of HLLE and LTSA
Theorem 7.
The HLLE operator converges to a constant multiple of the LTSA operator as the neighborhood radius shrinks, in the weak operator topology of $L^2$ equipped with the $L^2$ norm.
Proof.
The local operators for HLLE can be described more succinctly as the difference of two linear smoothers. Let $H_i$ be the hat matrix for a local quadratic regression in the neighborhood $N_i$.
(16)  
(17) 
Let $L$ be the resulting operator. For any smooth function $f$,
(18) 
since quadratic terms and lower can be removed from the Taylor expansion. ∎
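The key step, that quadratic and lower-order terms drop out, can be illustrated on a single neighborhood. In this sketch (our own construction), $H_1$ and $H_2$ are the hat matrices of local linear and quadratic fits on a symmetric interval; the quadratic residual operator $I - H_2$ annihilates $t^2$, while the linear one does not:

```python
import numpy as np

# One neighborhood: points in a symmetric interval around 0.
t = np.linspace(-1, 1, 51)

def hat_matrix(degree):
    """Hat matrix of a polynomial least-squares fit on the neighborhood."""
    X = np.vander(t, degree + 1, increasing=True)
    return X @ np.linalg.solve(X.T @ X, X.T)

H1 = hat_matrix(1)        # local linear fit
H2 = hat_matrix(2)        # local quadratic fit
I = np.eye(len(t))

# (I - H2) annihilates all polynomials of degree <= 2 ...
r2 = (I - H2) @ t**2
# ... while (I - H1) leaves a nonzero residual on t^2.
r1 = (I - H1) @ t**2
```

Since both residual operators kill linear terms, their difference is governed by the quadratic and higher terms of a Taylor expansion, which is what drives the equivalence.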
Appendix C Boundary behavior of HLLE and LTSA
Theorem 8.
For any boundary point $x$ and smooth function $f$, the rescaled operator value at $x$ diverges as $h \to 0$ unless
(19) 
where the derivative is taken along the tangent vector orthogonal to the boundary.
To show this, we prove that the rescaling required for the functional to converge forces the value at the boundary to diverge when this condition is not met.
Fix a boundary point $x$, and assume $h$ is sufficiently small that there exists a normal coordinate chart at $x$ containing the neighborhood. Furthermore, choose the normal coordinates so that the first coordinate corresponds to the direction orthogonal to the boundary, pointing inwards.
Consider the behavior of the operator on a basis of polynomials expressed in this coordinate chart. Denote by $H_y$ the hat operator for a regression on linear and constant functions restricted to a ball of radius $h$ centered at $y$. Note that any polynomial of degree at most 1 is in the null space of every residual operator $I - H_y$ by construction. Likewise, by symmetry, any polynomial containing a coordinate of odd degree is in the null space. This leaves only the quadratic monomials $x_i^2$. Each local operator computes the residual at $x$ from regressing against linear functions in the neighborhood centered at $y$. The residual is $0$ for all linear functions, since they lie in the null space. By symmetry of the neighborhoods in the interior, the hat operators at different centers agree up to translation. In other words, in the interior of the manifold, averaging over the multiple regressions is equivalent to averaging over the residuals of a single regression. Since least-squares residuals always average to $0$, the averaged value is $0$ in the interior.
More intuitively, one wishes to evaluate the residual at $x$ averaged over all neighborhoods that include $x$. Because linear functions are in the null space, this depends only on the curvature of the response at $x$. Furthermore, since all interior neighborhoods have essentially identical shape, the hat matrices for every neighborhood are also identical, so the residual depends only on the offset of $x$ within the neighborhood. Averaging over the different neighborhoods is therefore equivalent to fixing a single regression and averaging its residuals over all offsets.
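The interior argument can be verified directly: with an intercept in the model, least-squares residuals sum to zero, so averaging the residual over all offsets in a symmetric neighborhood gives 0 even for the curvature term. A small sketch (ours, for illustration):

```python
import numpy as np

# Symmetric interior neighborhood and a linear fit to the curvature term t^2.
t = np.linspace(-1, 1, 101)
X = np.stack([np.ones_like(t), t], axis=1)
H = X @ np.linalg.solve(X.T @ X, X.T)   # hat matrix of the linear fit
residuals = t**2 - H @ t**2

# Individual residuals are nonzero, but averaging over offsets gives 0:
avg = residuals.mean()
```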
However, the boundary behavior is very different, since the shape of the neighborhoods changes. First, consider the quadratic polynomial in the direction orthogonal to the tangent space at the boundary. Since the residual is always evaluated at the boundary for a linear regression and the polynomial is convex, the residual is necessarily positive; Figure 4 provides an intuitive illustration of this. It follows that the averaged residual converges to some nonzero constant.
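The positivity of the boundary residual can be checked in one dimension: fit a line to $t^2$ on the one-sided neighborhood $[0, h]$ and evaluate the residual at the boundary point $t = 0$. A minimal sketch (our own illustration):

```python
import numpy as np

h = 0.1
t = np.linspace(0.0, h, 50)       # one-sided boundary neighborhood
y = t**2                          # convex quadratic in the normal direction

# Least-squares linear fit on the neighborhood.
X = np.stack([np.ones_like(t), t], axis=1)
b0, b1 = np.linalg.solve(X.T @ X, X.T @ y)

# Convexity forces the fitted line below the curve at the endpoint,
# so the residual of t^2 at the boundary t = 0 is strictly positive
# and scales like h^2.
residual_at_boundary = y[0] - b0
```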
For the polynomials $x_i^2$ with $i > 1$, we note that any linear regression on linear functions has zero coefficients on the tangential coordinates, by symmetry of the neighborhood. Thus, the prediction reduces to a univariate regression in the normal coordinate where the response is the conditional mean given the neighborhood. Integrating over slices of a sphere gives
(20) 
Since constant terms are removed by the regression, and since summing the residual function over a slice yields this conditional mean, the summed residuals are the same as those obtained for the corresponding univariate function.
Thus, for any boundary point, the rescaled value diverges unless the second derivatives satisfy the stated relation, which is equivalent to equation 19.
We checked this boundary condition by simulation on a manifold isomorphic to a rectangle. We took 10 estimated eigenfunctions and computed their Hessians at a point on the boundary. This yields a matrix of estimated second derivatives, of which we take the SVD. The distribution of the top, middle, and bottom singular values is shown in Figure 5. There is one singular value clearly close to 0 that represents the boundary condition. The average bottom right singular vector is given in Table 3 and shows fairly good correspondence to the theoretical calculations on a modestly fine grid.

Derivative  Estimated value
            0.935
            0.346
            0.00
            0.588
Predicted   0.882
Appendix D Boundary Behavior of Local linear regression
As the boundary bias of local linear regression is already well studied, existing results can help determine the boundary conditions for the resulting operator. However, as the local linear regression smoother is not self-adjoint, the behavior of the adjoint must also be determined. Let $x$ be a boundary point with normal coordinates chosen in a neighborhood containing it. We again choose the first coordinate to correspond to the tangent vector that is normal to the boundary and pointing inwards. We wish to evaluate the boundary behavior of the adjoint smoother applied to the Dirac delta at $x$. The main part of our proof is to evaluate this quantity.
Similar to the proof of Theorem 8, we first show that the problem can be reduced to a univariate linear regression. We do this through an orthogonal basis, and then use the usual regression equations to compute the value. It is easy to see by symmetry that there are functions which are orthogonal to each other and to the constant function in each of the tangential coordinates, and a similarly defined function in the normal coordinate orthogonal to these. To generate an orthonormal basis, we must orthogonalize with respect to the constant function as well; Gram–Schmidt gives a function orthogonal to the constant function. Thus, evaluating
(21)  
(22)  
(23)  
(24)  
(25) 
where the inner product is taken with respect to the sampling density on the neighborhood. Integrating over the neighborhood, the first two terms each have magnitude 1 and cancel each other out. The third term may be rewritten
(26)  
(27)  
(28) 
where unless .
It follows that
(30)  
(31)  
(32) 
By applying this to a Taylor expansion, one concludes that for any continuously differentiable function which is nonzero at the boundary, the quantity diverges as $h \to 0$.
By Theorem 2.2 in Ruppert & Wand (1994), a point within distance $h$ of the boundary has boundary bias of order $h^2$, with a constant determined by moments over the unit sphere cut along the plane orthogonal to the first coordinate. This bias is a linear function of the Hessian. Furthermore, all odd moments over the cut sphere in the directions parallel to the boundary are 0, and the corresponding second moments are all equal. The bias is therefore a linear function of the diagonal of the Hessian, with one coefficient for the normal direction and another shared by the tangential directions.
Thus, the boundary condition that functions must satisfy is
(33) 
Although it takes the same form as the HLLE boundary condition, the constants are different. However, we do not know of a reason to prefer one boundary condition over the other.
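The structure of this boundary bias can be checked empirically. In this sketch (our own construction, sampling a half-ball neighborhood of a boundary point), a noiseless local linear fit has essentially zero bias at the boundary for a linear response, but a nonzero bias of order $h^2$ when the response has curvature in the normal direction:

```python
import numpy as np

rng = np.random.default_rng(1)
# Sample a neighborhood of a boundary point: the half-ball {u : |u| <= h, u1 >= 0}.
h = 0.2
u = rng.uniform(-h, h, size=(20000, 2))
u = u[(np.linalg.norm(u, axis=1) <= h) & (u[:, 0] >= 0)]

def local_linear_bias(f):
    """Bias at the boundary point 0 of a (noiseless) local linear fit."""
    X = np.column_stack([np.ones(len(u)), u])
    beta = np.linalg.lstsq(X, f(u), rcond=None)[0]
    return beta[0] - f(np.zeros((1, 2)))[0]   # fitted minus true value at 0

bias_linear = local_linear_bias(lambda v: 3.0 * v[:, 0] + v[:, 1])  # no curvature
bias_curved = local_linear_bias(lambda v: v[:, 0] ** 2)             # normal curvature
```

Repeating this with the response $u_2^2$ would isolate the tangential coefficient; the two coefficients differ, which is the sense in which the constants here differ from those of HLLE.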