On the convergence of maximum variance unfolding

08/31/2012 ∙ by Ery Arias-Castro, et al.

Maximum Variance Unfolding is one of the main methods for (nonlinear) dimensionality reduction. We study its large sample limit, providing specific rates of convergence under standard assumptions. We find that it is consistent when the underlying submanifold is isometric to a convex subset, and we provide some simple examples where it fails to be consistent.


1 Introduction

One of the basic tasks in unsupervised learning, aka multivariate statistics, is that of dimensionality reduction. While the celebrated Principal Components Analysis (PCA) and Multidimensional Scaling (MDS) assume that the data lie near an affine subspace, modern approaches postulate that the data are in the vicinity of a submanifold. Many such algorithms have been proposed in the past decade, for example, ISOMAP (Tenenbaum et al., 2000), Local Linear Embedding (LLE) (Roweis and Saul, 2000), Laplacian Eigenmaps (Belkin and Niyogi, 2003), Manifold Charting (Brand, 2003), Diffusion Maps (Coifman and Lafon, 2006), Hessian Eigenmaps (HLLE) (Donoho and Grimes, 2003), Local Tangent Space Alignment (LTSA) (Zhang and Zha, 2004), Maximum Variance Unfolding (Weinberger et al., 2004), and many others, some reviewed in (Van der Maaten et al., 2008; Saul et al., 2006).

Although some variants exist, the basic setting is that of a connected domain $D \subset \mathbb{R}^d$ isometrically embedded in Euclidean space as a submanifold $M \subset \mathbb{R}^p$, with $d < p$. We are provided with data points $x_1, \dots, x_n$ sampled from (or near) $M$, and our goal is to output $y_1, \dots, y_n \in \mathbb{R}^d$ that can be isometrically mapped to (or close to) $x_1, \dots, x_n$.

A number of consistency results exist in the literature. For example, Bernstein et al. (2000) show that, with proper tuning, geodesic distances may be approximated by neighborhood graph distances when the submanifold is geodesically convex, implying that ISOMAP asymptotically recovers the isometry when $D$ is convex. When $D$ is not convex, it fails in general (Zha and Zhang, 2003). To justify HLLE, Donoho and Grimes (2003) show that the null space of the (continuous) Hessian operator yields an isometric embedding. See also (Ye and Zhi, 2012) for related results in a discrete setting. Smith et al. (2008) prove that LTSA is able to recover the isometry, but only up to an affine transformation. We also mention other results in the literature which show that, as the sample size increases, the output of the algorithm converges to an explicit continuous embedding. For instance, a number of papers analyze how well the discrete graph Laplacian based on a sample approximates the continuous Laplace-Beltrami operator on a submanifold (Belkin and Niyogi, 2005; von Luxburg et al., 2008; Singer, 2006; Hein et al., 2005; Giné and Koltchinskii, 2006; Coifman and Lafon, 2006), which is intimately related to the Laplacian Eigenmaps. However, such convergence results do not guarantee that the algorithm is successful at recovering the isometry when one exists. In fact, as discussed in detail by Goldberg et al. (2008) and Perrault-Joncas and Meila (2012), many of them fail in very simple settings.

In this paper, we analyze Maximum Variance Unfolding (MVU) in the large-sample limit. We are only aware of a very recent work of Paprotny and Garcke (2012), which establishes that, under the assumption that $D$ is convex, MVU recovers a distance matrix that approximates the geodesic distance matrix of the data. Our contribution is the following. In Section 2, we prove a convergence result, showing that the optimization problem that MVU solves converges (both in solution space and in value) to a continuous version defined on the whole submanifold. The basic assumption here is that the submanifold is compact. In Section 3, we derive quantitative convergence rates, with mild additional regularity assumptions. In Section 4, we consider the solutions to the continuum limit. When $D$ is convex, we prove that MVU recovers an isometry. We also provide examples of non-convex $D$ where MVU provably fails at recovering an isometry. We also prove that MVU is robust to noise, which Goldberg et al. (2008) show to be problematic for algorithms like LLE, HLLE and LTSA. Some concluding remarks are in Section 5.

2 From discrete MVU to continuum MVU

In this section we state and prove a qualitative convergence result for MVU. This result applies under only minimal assumptions and its proof is relatively transparent. What we show is that the (discrete) MVU optimization problem converges to an explicit continuous optimization problem as the sample size increases. The continuous optimization problem is amenable to scrutiny with tools from analysis and geometry, and that will enable us to better understand (in Section 4) when MVU succeeds, and when it fails, at recovering an isometry to a Euclidean domain when one exists.

Let us start by recalling the MVU algorithm (Weinberger and Saul, 2006; Weinberger et al., 2004, 2005). We are provided with data points $x_1, \dots, x_n \in \mathbb{R}^p$. Let $\|\cdot\|$ denote the Euclidean norm. Given $r > 0$, let $N_r$ be the (random) set defined by

$N_r = \{(i, j) : \|x_i - x_j\| \le r\}.$

Choosing a neighborhood radius $r > 0$, MVU solves the following optimization problem:

Discrete MVU

Maximize $\frac{1}{n^2} \sum_{i,j=1}^{n} \|y_i - y_j\|^2$ over $y_1, \dots, y_n \in \mathbb{R}^p$, (1)
subject to $\|y_i - y_j\| \le \|x_i - x_j\|$ for all $(i, j) \in N_r$. (2)
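For concreteness, here is a minimal sketch (ours, not from the paper) of how Discrete MVU can be solved in practice, via the standard semidefinite reformulation over the Gram matrix $K_{ij} = \langle y_i, y_j \rangle$. It assumes the cvxpy package with the SCS solver; the function name mvu_embed is hypothetical.

```python
# Minimal sketch of Discrete MVU (1)-(2) as an SDP, assuming cvxpy + SCS.
import numpy as np
import cvxpy as cp

def mvu_embed(X, r, dim=2):
    """Maximize the variance objective (1) subject to the local constraints (2)."""
    n = len(X)
    K = cp.Variable((n, n), PSD=True)        # Gram matrix of the embedding
    constraints = [cp.sum(K) == 0]           # center the embedding at the origin
    for i in range(n):
        for j in range(i + 1, n):
            d2 = float(np.sum((X[i] - X[j]) ** 2))
            if d2 <= r ** 2:                 # (i, j) in N_r
                # ||y_i - y_j||^2 = K_ii + K_jj - 2 K_ij <= ||x_i - x_j||^2
                constraints.append(K[i, i] + K[j, j] - 2 * K[i, j] <= d2)
    # With sum(K) == 0, the objective (1) equals (2/n) * trace(K).
    cp.Problem(cp.Maximize(cp.trace(K)), constraints).solve(solver=cp.SCS)
    w, V = np.linalg.eigh(K.value)           # spectral embedding of the Gram matrix
    return V[:, -dim:] * np.sqrt(np.maximum(w[-dim:], 0.0))
```

The top eigenvectors of the optimal Gram matrix give the embedded points $y_1, \dots, y_n$, as in classical MDS.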

When the data points are sampled from a distribution $\mu$ with support $M$, our main result in this section is to show that, when $M$ is sufficiently regular and $r = r_n \to 0$ sufficiently slowly, the discrete optimization problem converges to the following continuous optimization problem:

Continuum MVU

Maximize $\mathcal{E}(f) := \iint \|f(x) - f(x')\|^2\, \mu(dx)\, \mu(dx')$ (3)
subject to $f : M \to \mathbb{R}^p$ with $\|f\|_{\mathrm{Lip}} \le 1$, (4)

where $\|f\|_{\mathrm{Lip}}$ denotes the smallest Lipschitz constant of $f$. It is important to realize that the Lipschitz condition is with respect to the intrinsic metric $\delta$ on $M$ (i.e., the metric inherited from the ambient space $\mathbb{R}^p$), defined as follows: for $x, x' \in M$, let

$\delta(x, x') = \inf\{\operatorname{length}(\gamma) : \gamma \text{ a continuous path in } M \text{ joining } x \text{ and } x'\}. \quad (5)$

When $M$ is compact, the infimum in (5) is attained. In that case, $\delta(x, x')$ is the length of the shortest continuous path on $M$ starting at $x$ and ending at $x'$, and $(M, \delta)$ is a complete metric space, also called a length space in the context of metric geometry (Burago et al., 2001). Then $f$ is Lipschitz with $\|f\|_{\mathrm{Lip}} \le L$ if

$\|f(x) - f(x')\| \le L\, \delta(x, x') \quad \text{for all } x, x' \in M. \quad (6)$

For any $L > 0$, denote by $\mathcal{F}(L)$ the class of Lipschitz functions $f : M \to \mathbb{R}^p$ satisfying (6).
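As an aside (our illustration, not part of the analysis), the intrinsic metric (5) can be approximated from data by shortest-path distances in a neighborhood graph, which is the device underlying ISOMAP (Bernstein et al., 2000). A sketch with SciPy, on a sample from the unit circle, where $\delta$ is the arc-length distance; the function name graph_intrinsic_metric is hypothetical:

```python
# Approximate delta(x_i, x_j) by shortest paths in the r-neighborhood graph.
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import pdist, squareform

def graph_intrinsic_metric(X, r):
    D = squareform(pdist(X))        # pairwise Euclidean distances
    W = np.where(D <= r, D, 0.0)    # keep only edges of length <= r (0 = no edge)
    return shortest_path(W, method="D", directed=False)

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 1000)
X = np.c_[np.cos(theta), np.sin(theta)]
delta_hat = graph_intrinsic_metric(X, r=0.2)
gap = np.abs(theta[:, None] - theta[None, :])
delta_true = np.minimum(gap, 2 * np.pi - gap)   # arc-length distance on the circle
print("max approximation error:", np.abs(delta_hat - delta_true).max())
```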

One of the central conditions is that $M$ be sufficiently regular that the intrinsic metric on $M$ is locally close to the ambient Euclidean metric.

Regularity assumption. There is a non-decreasing function $\psi : (0, \infty) \to [1, \infty)$ such that $\psi(r) \to 1$ when $r \to 0^+$, and such that, for all $x, x' \in M$,

$\delta(x, x') \le \psi(\|x - x'\|)\, \|x - x'\|. \quad (7)$

This assumption is also central to ISOMAP. Bernstein et al. (2000) prove that it holds when $M$ is a compact, smooth and geodesically convex submanifold (e.g., without boundary). In Lemma 4, we extend this to compact, smooth submanifolds with smooth boundary, and to tubular neighborhoods of such sets. The latter allows us to study noisy settings.

Note that we always have

$\|x - x'\| \le \delta(x, x') \quad \text{for all } x, x' \in M. \quad (8)$
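For example, on the unit circle $M = S^1 \subset \mathbb{R}^2$ (a worked example of ours, included for illustration), two points at intrinsic distance $\delta(x, x') \in [0, \pi]$ satisfy $\|x - x'\| = 2 \sin(\delta(x, x')/2)$, so that

$\delta(x, x') = 2 \arcsin\Big(\frac{\|x - x'\|}{2}\Big) = \psi\big(\|x - x'\|\big)\, \|x - x'\|, \qquad \psi(t) := \frac{\arcsin(t/2)}{t/2},$

where $\psi$ is non-decreasing with $\psi(t) \to 1$ as $t \to 0^+$, so that (7) holds; and since $\sin(u) \le u$, (8) holds as well.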

Let $\mathcal{S}$ denote the set of functions that are solutions of Continuum MVU, and let $\mathcal{E}_n^\star$ and $\mathcal{E}^\star$ denote the values of Discrete MVU and Continuum MVU, respectively. We state the following qualitative result that makes minimal assumptions.

Theorem 1.

Let $\mu$ be a (Borel) probability distribution with support $M \subset \mathbb{R}^p$, which is connected, compact and satisfying (7), and assume that $x_1, x_2, \dots$ are sampled independently from $\mu$. Then, for $r = r_n \to 0$ sufficiently slowly, we have

$\mathcal{E}_n^\star \to \mathcal{E}^\star, \quad (9)$

and for any solution $(y_1, \dots, y_n)$ of Discrete MVU,

$\min_{f \in \mathcal{S}}\ \max_{1 \le i \le n} \|y_i - f(x_i)\| \to 0, \quad (10)$

almost surely as $n \to \infty$.

Thus Discrete MVU converges to Continuum MVU in the large sample limit, if $M$ satisfies the crucial regularity condition (7) and other mild assumptions. In Section 3, we provide explicit quantitative bounds for the convergence results (9) and (10), the latter at the very end, under some additional (though natural) assumptions. In Section 4, we focus entirely on Continuum MVU, with the goal of better understanding the functions that are solutions to that optimization problem. Because of (10), we know that the output of Discrete MVU converges in a strong sense to one of these functions.

The rest of the section is dedicated to proving Theorem 1. We divide the proof into several parts which we discuss at length, and then assemble to prove the theorem.

2.1 Coverings and graph neighborhoods

For $r > 0$, let $G_n(r)$ denote the undirected graph with nodes $x_1, \dots, x_n$ and an edge between $x_i$ and $x_j$ if $\|x_i - x_j\| \le r$. This is the $r$-neighborhood graph based on the data. It is essential that $G_n(r)$ be connected, for otherwise $\mathcal{E}_n^\star = \infty$ (a connected component of the embedded points may be translated arbitrarily far away), while $\mathcal{E}^\star$ is finite. The latter comes from the fact that, for any $f \in \mathcal{F}(1)$,

$\mathcal{E}(f) \le \iint \delta(x, x')^2\, \mu(dx)\, \mu(dx') \le \operatorname{diam}(M)^2,$

where we used (6) in the first inequality, and $\operatorname{diam}(M)$ is the intrinsic diameter of $M$, i.e.,

$\operatorname{diam}(M) = \sup_{x, x' \in M} \delta(x, x'). \quad (11)$

Recall that the only assumptions on $M$ made in Theorem 1 are that $M$ is compact, connected, and satisfies (7), and this implies that $\operatorname{diam}(M) < \infty$. Indeed, as a compact subset of $\mathbb{R}^p$, $M$ is bounded, hence $\sup_{x, x' \in M} \|x - x'\| < \infty$. Reporting this in (7) immediately implies that $\operatorname{diam}(M) < \infty$.
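As a quick numerical companion (ours, not from the paper), the connectivity of $G_n(r)$ is easy to check with SciPy; on a sample from the unit circle, small radii disconnect the graph, making Discrete MVU unbounded:

```python
# Check whether the r-neighborhood graph G_n(r) is connected.
import numpy as np
from scipy.sparse.csgraph import connected_components
from scipy.spatial.distance import pdist, squareform

def neighborhood_graph_connected(X, r):
    A = squareform(pdist(X)) <= r          # adjacency: edge iff ||x_i - x_j|| <= r
    np.fill_diagonal(A, False)
    n_comp, _ = connected_components(A, directed=False)
    return n_comp == 1

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
X = np.c_[np.cos(theta), np.sin(theta)]
for r in (0.01, 0.1, 0.5):
    print(r, neighborhood_graph_connected(X, r))
```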

That said, we ask more of $r_n$ than simply having $G_n(r_n)$ connected. For $\eta > 0$, define

$\Omega_n(\eta) = \big\{\forall x \in M,\ \exists\, i \le n : \|x - x_i\| \le \eta\big\}, \quad (12)$

which is the event that $\{x_1, \dots, x_n\}$ forms an $\eta$-covering of $M$.

Connectivity requirement. $r_n \to 0$ in such a way that

$\sum_{n \ge 1} \mathbb{P}\big(\Omega_n(r_n/4)^c\big) < \infty. \quad (13)$

Since $M$ is the support of $\mu$, there is always a sequence $(r_n)$ that satisfies the Connectivity requirement. To see this, for $\eta > 0$, let $z_1, \dots, z_{m(\eta)}$ be an $\eta$-packing of $M$ of maximal size $m(\eta)$, i.e., a maximal collection of points of $M$ such that $\|z_k - z_l\| > \eta$ for all $k \ne l$. Recall that a maximal $\eta$-packing is also an $\eta$-covering, and note that $m(\eta) < \infty$ by compactness of $M$. Let $p(\eta) = \min_{1 \le k \le m(\eta)} \mu(B(z_k, \eta))$. Since $M$ is the support of $\mu$, $\mu(B(x, \eta)) > 0$ for any $x \in M$ and any $\eta > 0$, where $B(x, \eta)$ denotes the Euclidean ball centered at $x$ and of radius $\eta$. Hence, $p(\eta) > 0$ for any $\eta > 0$. We have

$\mathbb{P}\big(\Omega_n(2\eta)^c\big) \le \sum_{k=1}^{m(\eta)} \mathbb{P}\big(\text{no } x_i \in B(z_k, \eta)\big) \le m(\eta)\, \big(1 - p(\eta)\big)^n.$

Let $\varepsilon_n = \inf\{\eta > 0 : m(\eta)\,(1 - p(\eta))^n \le n^{-2}\}$; the sequence $(n^{-2})$ is chosen here for the simplicity of the exposition, but more general sequences can be considered, as will become apparent at the end of the paragraph.

Since $m(\eta)\,(1 - p(\eta))^n \to 0$ as $n \to \infty$ for all $\eta > 0$, $\varepsilon_n \to 0$. To see this, let $\bar\eta$ exceed the Euclidean diameter of $M$, so that $m(\bar\eta) = 1$ and $p(\bar\eta) = 1$. Clearly, for all $n$, $m(\bar\eta)\,(1 - p(\bar\eta))^n = 0 \le n^{-2}$, which implies that the set of $\eta$ such that $m(\eta)\,(1 - p(\eta))^n \le n^{-2}$ is non-empty. In particular, for all $n$, we have $\varepsilon_n \le \bar\eta$. Now, let $\eta > 0$ be fixed. Since $m(\eta)\,(1 - p(\eta))^n \to 0$ geometrically fast, there exists an integer $n_0$ such that $m(\eta)\,(1 - p(\eta))^n \le n^{-2}$ for all $n \ge n_0$, so that $\varepsilon_n \le \eta$ for all $n \ge n_0$. Since $\eta$ is arbitrary, this proves that the sequence $(\varepsilon_n)$ converges to 0 as $n$ tends to infinity.

With such a choice of $\varepsilon_n$, we have $\mathbb{P}\big(\Omega_n(2\varepsilon_n)^c\big) \le n^{-2}$. Therefore, if we take $r_n = 8\,\varepsilon_n$, so that $r_n/4 = 2\varepsilon_n$, it satisfies the Connectivity requirement. In Section 3.2 we derive a quantitative bound on $r_n$ that guarantees (13) under additional assumptions. Note that the sequence $(n^{-2})$ in the definition of $\varepsilon_n$ can be replaced by any summable decreasing sequence.
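The maximal packing invoked above is easy to construct greedily, and maximality automatically makes it a covering; a short sketch (ours; greedy_packing is a hypothetical helper):

```python
# Build a maximal eta-packing greedily; by maximality it is also an eta-covering.
import numpy as np

def greedy_packing(X, eta):
    centers = []
    for x in X:                                  # keep x if far from all kept centers
        if all(np.linalg.norm(x - c) > eta for c in centers):
            centers.append(x)
    return np.array(centers)

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 1000)
X = np.c_[np.cos(theta), np.sin(theta)]
Z = greedy_packing(X, eta=0.2)
cover_radius = max(min(np.linalg.norm(x - z) for z in Z) for x in X)
print(len(Z), "centers; covering radius =", cover_radius)   # <= eta by maximality
```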

The rationale behind the requirement on $(r_n)$ is the same as in (Bernstein et al., 2000): it allows us to approximate each curve on $M$ with a path in $G_n(r_n)$ of nearly the same length. We utilize this in the following subsection.

2.2 Interpolation

Assuming that the sampling is dense enough that $\Omega_n(r/4)$ holds, we interpolate a set of vectors $y_1, \dots, y_n$ satisfying the constraints in (2) with a Lipschitz function $f : M \to \mathbb{R}^p$. Formally, we have the following.

Lemma 1.

Assume that $\Omega_n(r/4)$ holds. Then any vector $(y_1, \dots, y_n)$ satisfying the constraints in (2) is of the form $(f(x_1), \dots, f(x_n))$ for some $f \in \mathcal{F}(K_r)$, where $K_r$ is a constant satisfying $K_r \to 1$ as $r \to 0$.

We prove this result. The first step is to show that this is at all possible, in the sense that

$\|y_i - y_j\| \le K_r\, \delta(x_i, x_j) \quad \text{for all } 1 \le i, j \le n. \quad (14)$

This shows that the map $T$ defined by $T(x_i) = y_i$ for all $i$, is Lipschitz (for $\delta$ and the Euclidean metrics) with constant $K_r$. We apply a form of Kirszbraun’s Extension — (Lang and Schroeder, 1997, Th. B) or (Brudnyi and Brudnyi, 2012, Th. 1.26) — to extend $T$ to the whole of $M$ into $\mathbb{R}^p$.
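Kirszbraun's theorem is invoked non-constructively here. For intuition only (our illustration, not the paper's device), the McShane-Whitney formula provides an explicit Lipschitz extension coordinate by coordinate, at the cost of inflating the constant by a factor up to $\sqrt{p}$, a loss that Kirszbraun's theorem avoids for vector-valued maps:

```python
# McShane-Whitney extension, applied coordinatewise (illustration only).
import numpy as np

def mcshane_extend(delta_to_samples, Y, L):
    """Evaluate the extension at a point x of M.

    delta_to_samples: intrinsic distances delta(x, x_i), shape (n,)
    Y: values y_i at the sample points, shape (n, p)
    L: Lipschitz constant of the data with respect to delta
    """
    # Each coordinate f_j(x) = min_i ( y[i, j] + L * delta(x, x_i) ) is
    # L-Lipschitz and interpolates the data; stacking p coordinates gives a
    # map with Lipschitz constant at most L * sqrt(p).
    return np.min(Y + L * delta_to_samples[:, None], axis=0)
```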

Let us now turn to proving (14). The arguments are very similar to those in (Bernstein et al., 2000). If $\|x_i - x_j\| \le r$, then, by (8) and the constraints in (2),

$\|y_i - y_j\| \le \|x_i - x_j\| \le \delta(x_i, x_j),$

which implies that (14) holds for such a pair.

Now suppose that $\|x_i - x_j\| > r$. Let $\gamma$ be a path in $M$ connecting $x_i$ to $x_j$ of minimal length $\delta(x_i, x_j)$. Split $\gamma$ into $N$ arcs of length $r/2$ plus one arc of length $\ell < r/2$, so that

$\delta(x_i, x_j) = N\, \frac{r}{2} + \ell.$

Denote by $\xi_0 = x_i, \xi_1, \dots, \xi_N, \xi_{N+1} = x_j$ the extremities of the arcs along $\gamma$.

For $k = 1, \dots, N$, let $i_k$ be such that $\|\xi_k - x_{i_k}\| \le r/4$, and set $i_0 = i$ and $i_{N+1} = j$. On $\Omega_n(r/4)$, such indices exist for all $k$, so that, for $k = 0, \dots, N - 1$,

$\|x_{i_k} - x_{i_{k+1}}\| \le \|x_{i_k} - \xi_k\| + \|\xi_k - \xi_{k+1}\| + \|\xi_{k+1} - x_{i_{k+1}}\| \le \frac{r}{4} + \frac{r}{2} + \frac{r}{4} = r.$

Hence, because $(i_k, i_{k+1}) \in N_r$, the constraints in (2) give

$\|y_{i_k} - y_{i_{k+1}}\| \le \|x_{i_k} - x_{i_{k+1}}\| \le r.$

Similarly, for the last arc, recalling that $\ell < r/2$, we have $\|x_{i_N} - x_{i_{N+1}}\| \le r/4 + \ell + r/4 \le r$, and therefore

$\|y_{i_N} - y_{i_{N+1}}\| \le \|x_{i_N} - x_{i_{N+1}}\| \le \ell + \frac{r}{2}.$

Consequently, by the triangle inequality,

$\|y_i - y_j\| \le \sum_{k=0}^{N} \|y_{i_k} - y_{i_{k+1}}\|.$

We have $\delta(x_i, x_j) = N\, \frac{r}{2} + \ell$, and bounding each term of the sum as above and comparing with this identity yields a constant $K_r \to 1$ such that $\|y_i - y_j\| \le K_r\, \delta(x_i, x_j)$, and so (14) holds.

2.3 Bounds on the energy

We call $\mathcal{E}$ the energy functional. For a function $f : M \to \mathbb{R}^p$, let $\mathcal{E}_n(f) = \frac{1}{n^2} \sum_{i,j=1}^{n} \|f(x_i) - f(x_j)\|^2$. Assume that $\Omega_n(r/4)$ holds. Then Lemma 1 implies that any $(y_1, \dots, y_n)$ satisfying the constraints in (2) is equal to $(f(x_1), \dots, f(x_n))$ for some $f \in \mathcal{F}(K_r)$. Hence,

$\mathcal{E}_n^\star \le \sup_{f \in \mathcal{F}(K_r)} \mathcal{E}_n(f) = K_r^2\, \sup_{f \in \mathcal{F}(1)} \mathcal{E}_n(f). \quad (15)$

Recall the function $\psi$ introduced in (7), and assume that $r$ is small enough that $\psi(r) < \infty$. For $f \in \mathcal{F}(1)$, and for any $i, j$ such that $\|x_i - x_j\| \le r$, we have

$\|f(x_i) - f(x_j)\| \le \delta(x_i, x_j) \le \psi(\|x_i - x_j\|)\, \|x_i - x_j\|.$

Since the function $\psi$ is non-decreasing, $\psi(\|x_i - x_j\|) \le \psi(r)$, and so

$\|f(x_i) - f(x_j)\| \le \psi(r)\, \|x_i - x_j\|.$

Consequently, $(f(x_1)/\psi(r), \dots, f(x_n)/\psi(r))$ satisfies the constraints in (2), implying that

$\mathcal{E}_n^\star \ge \frac{1}{\psi(r)^2}\, \sup_{f \in \mathcal{F}(1)} \mathcal{E}_n(f). \quad (16)$

As a result of (15) and (16), we have

$\frac{1}{\psi(r)^2}\, \sup_{f \in \mathcal{F}(1)} \mathcal{E}_n(f) \le \mathcal{E}_n^\star \le K_r^2\, \sup_{f \in \mathcal{F}(1)} \mathcal{E}_n(f). \quad (17)$

We have

$\mathcal{E}^\star = \sup_{f \in \mathcal{F}(1)} \mathcal{E}(f),$

and applying the triangle inequality, we arrive at

$\Big|\sup_{f \in \mathcal{F}(1)} \mathcal{E}_n(f) - \mathcal{E}^\star\Big| \le \sup_{f \in \mathcal{F}(1)} \big|\mathcal{E}_n(f) - \mathcal{E}(f)\big|.$

Since $\|f(x) - f(x')\| \le \delta(x, x')$ for $f \in \mathcal{F}(1)$ and $\delta(x, x') \le \operatorname{diam}(M)$, we have

$\sup_{f \in \mathcal{F}(1)} \mathcal{E}_n(f) \le \operatorname{diam}(M)^2 \quad \text{and} \quad \mathcal{E}^\star \le \operatorname{diam}(M)^2,$

and we set

$\Delta_n := \sup_{f \in \mathcal{F}(1)} \big|\mathcal{E}_n(f) - \mathcal{E}(f)\big|. \quad (18)$

Consequently,

$\mathcal{E}^\star - \Delta_n \le \sup_{f \in \mathcal{F}(1)} \mathcal{E}_n(f) \le \mathcal{E}^\star + \Delta_n.$

Reporting this inequality in (17), on the event $\Omega_n(r/4)$ with $r$ small enough that $K_r^2 \le 2$, we have

$\big|\mathcal{E}_n^\star - \mathcal{E}^\star\big| \le c_r\, \operatorname{diam}(M)^2 + 2\, \Delta_n, \quad (19)$

where $c_r := \max\{K_r^2 - 1,\ 1 - \psi(r)^{-2}\} \to 0$ as $r \to 0$.

Finally, we show that $\mathcal{E}$ is continuous (in fact Lipschitz) on $\mathcal{F}(1)$ for the supnorm. For any $f$ and $g$ in $\mathcal{F}(1)$, and any $x$ and $x'$ in $M$, we have:

$\big|\, \|f(x) - f(x')\|^2 - \|g(x) - g(x')\|^2 \,\big| \le \big(\|f(x) - f(x')\| + \|g(x) - g(x')\|\big)\, \big(\|(f - g)(x)\| + \|(f - g)(x')\|\big) \le 4\, \operatorname{diam}(M)\, \|f - g\|_\infty.$

The first inequality is that of Cauchy-Schwarz. Hence,

$\big|\mathcal{E}(f) - \mathcal{E}(g)\big| \le 4\, \operatorname{diam}(M)\, \|f - g\|_\infty \quad (20)$

and

$\big|\mathcal{E}_n(f) - \mathcal{E}_n(g)\big| \le 4\, \operatorname{diam}(M)\, \|f - g\|_\infty. \quad (21)$
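As a sanity check (ours, not from the paper), the bound (21) is easy to probe numerically, here on the unit circle, whose intrinsic diameter is $\pi$, with $f$ the identity map and $g$ a small perturbation of it:

```python
# Numerical probe of the supnorm-Lipschitz bound (21) on the empirical energy.
import numpy as np

rng = np.random.default_rng(1)

def empirical_energy(Y):
    """E_n = (1/n^2) sum_{i,j} ||y_i - y_j||^2 = 2 mean||y||^2 - 2 ||mean y||^2."""
    return 2 * np.mean(np.sum(Y ** 2, axis=1)) - 2 * np.sum(np.mean(Y, axis=0) ** 2)

n = 500
theta = rng.uniform(0, 2 * np.pi, n)
F = np.c_[np.cos(theta), np.sin(theta)]                      # f = identity on S^1
G = F + 0.05 * np.c_[np.sin(3 * theta), np.cos(3 * theta)]   # nearby Lipschitz-type map
lhs = abs(empirical_energy(F) - empirical_energy(G))
rhs = 4 * np.pi * np.max(np.linalg.norm(F - G, axis=1))      # 4 diam(M) ||f - g||_inf
print(lhs, "<=", rhs)
```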

2.4 More coverings and the Law of Large Numbers

The last step is to show that the supremum of the empirical process (18) converges to zero. For this, we use a packing (covering) to reduce the supremum over $\mathcal{F}(1)$ to a maximum over a finite set of functions. We then apply the Law of Large Numbers to each difference in the maximization.

Fix $x_0 \in M$ and define

$\mathcal{F}_0(1) = \{f \in \mathcal{F}(1) : f(x_0) = 0\}.$

Note that $f \in \mathcal{F}(1)$ if, and only if, $f - f(x_0) \in \mathcal{F}_0(1)$, and $\Delta_n = \sup_{f \in \mathcal{F}_0(1)} |\mathcal{E}_n(f) - \mathcal{E}(f)|$ by the fact that for any function or vector $f$ and any constant $c$, we have

$\mathcal{E}(f + c) = \mathcal{E}(f) \quad \text{and} \quad \mathcal{E}_n(f + c) = \mathcal{E}_n(f).$

The reason to use $\mathcal{F}_0(1)$ is that it is bounded in supnorm. Indeed, for $f \in \mathcal{F}_0(1)$, we have

$\|f\|_\infty = \sup_{x \in M} \|f(x) - f(x_0)\| \le \sup_{x \in M} \delta(x, x_0) \le \operatorname{diam}(M).$

Let $N(\varepsilon)$ denote the covering number of $\mathcal{F}_0(1)$ for the supremum norm, i.e., the minimal number of balls of radius $\varepsilon$ that are necessary to cover $\mathcal{F}_0(1)$, and let $f_1, \dots, f_{N(\varepsilon)}$ be an $\varepsilon$-covering of $\mathcal{F}_0(1)$ of minimal size $N(\varepsilon)$. Since $\mathcal{F}_0(1)$ is equicontinuous and bounded, it is compact for the topology of the supremum norm by the Arzelà-Ascoli Theorem, so that $N(\varepsilon) < \infty$ for any $\varepsilon > 0$.

Fix $f \in \mathcal{F}_0(1)$, fix $\varepsilon > 0$, and let $k$ be such that $\|f - f_k\|_\infty \le \varepsilon$. By (20) and (21), we have

$\big|\mathcal{E}_n(f) - \mathcal{E}(f)\big| \le 8\, \operatorname{diam}(M)\, \varepsilon + \big|\mathcal{E}_n(f_k) - \mathcal{E}(f_k)\big|.$

Thus,

$\Delta_n \le 8\, \operatorname{diam}(M)\, \varepsilon + \max_{1 \le k \le N(\varepsilon)} \big|\mathcal{E}_n(f_k) - \mathcal{E}(f_k)\big|. \quad (22)$

The Law of Large Numbers (LLN) implies that, for any bounded $f$, $\mathcal{E}_n(f) \to \mathcal{E}(f)$ almost surely as $n \to \infty$. Indeed,

$\mathcal{E}_n(f) = \frac{2}{n} \sum_{i=1}^{n} \|f(x_i)\|^2 - 2\, \Big\|\frac{1}{n} \sum_{i=1}^{n} f(x_i)\Big\|^2 \longrightarrow 2 \int \|f\|^2\, d\mu - 2\, \Big\|\int f\, d\mu\Big\|^2 = \mathcal{E}(f),$

by the LLN applied to each term. Therefore, when $\varepsilon$ is fixed, the second term in (22) tends to zero almost surely, and since $\varepsilon$ is arbitrary, we conclude that

$\Delta_n \to 0, \quad \text{in probability}. \quad (23)$

2.5 Large deviations of the sample energy

To show an almost sure convergence in (23), we need to refine the bound on the supremum of the empirical process (18). For this, we apply Hoeffding’s Inequality for U-statistics (Hoeffding, 1963), which is a special case of (de la Peña and Giné, 1999, Thm. 4.1.8).

Lemma 2 (Hoeffding’s Inequality for U-statistics).

Let $h : \mathbb{R}^p \times \mathbb{R}^p \to \mathbb{R}$ be a bounded measurable map, and let $X_1, X_2, \dots$ be a sequence of i.i.d. random variables with values in $\mathbb{R}^p$. Assume that $h$ is symmetric, i.e., $h(x, x') = h(x', x)$, and that $a \le h \le b$, and let $\theta = \mathbb{E}\, h(X_1, X_2)$. Then, for all $t > 0$,

$\mathbb{P}\bigg(\binom{n}{2}^{-1} \sum_{1 \le i < j \le n} h(X_i, X_j) - \theta \ge t\bigg) \le \exp\bigg(-\frac{2 \lfloor n/2 \rfloor\, t^2}{(b - a)^2}\bigg).$

Let $f \in \mathcal{F}_0(1)$. To bound the deviations of $\mathcal{E}_n(f)$, we apply this result with $h(x, x') = \|f(x) - f(x')\|^2$, for which $\theta = \mathcal{E}(f)$. By construction, $h$ is symmetric. Since $f$ is Lipschitz with constant 1, for any $x$ and $x'$ in $M$, $\|f(x) - f(x')\| \le \delta(x, x')$, and $\delta(x, x') \le \operatorname{diam}(M)$. Hence $0 \le h \le \operatorname{diam}(M)^2$, and the difference between $\mathcal{E}_n(f)$ and the U-statistic above is of order $1/n$. Applying Lemma 2 (twice, to $h$ and $-h$), we deduce that, for any $t > 0$,

$\mathbb{P}\big(|\mathcal{E}_n(f) - \mathcal{E}(f)| \ge t\big) \le 2 \exp\bigg(-\frac{c\, n\, t^2}{\operatorname{diam}(M)^4}\bigg), \quad (24)$

for a numerical constant $c > 0$.

Using (24) in (22), coupled with the union bound, we get that

$\mathbb{P}\big(\Delta_n \ge 8\, \operatorname{diam}(M)\, \varepsilon + t\big) \le 2\, N(\varepsilon)\, \exp\bigg(-\frac{c\, n\, t^2}{\operatorname{diam}(M)^4}\bigg). \quad (25)$

Clearly, the RHS is summable in $n$ for every $\varepsilon$ and $t$ fixed, so the convergence in (23) happens in fact with probability one, that is,

$\Delta_n \to 0, \quad \text{almost surely}. \quad (26)$
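To see the exponential concentration behind (24)-(25) at work, here is a small Monte Carlo experiment (ours), with $f$ the identity map on the unit circle, for which $\mathcal{E}(f) = 2$ under the uniform distribution; the maximal deviation over repeated samples shrinks roughly like $1/\sqrt{n}$:

```python
# Monte Carlo illustration of the deviation bound (24) for the empirical energy.
import numpy as np

rng = np.random.default_rng(0)

def empirical_energy(Y):
    return 2 * np.mean(np.sum(Y ** 2, axis=1)) - 2 * np.sum(np.mean(Y, axis=0) ** 2)

for n in (50, 200, 800, 3200):
    devs = []
    for _ in range(200):
        theta = rng.uniform(0, 2 * np.pi, n)
        Y = np.c_[np.cos(theta), np.sin(theta)]   # f = identity, E(f) = 2
        devs.append(abs(empirical_energy(Y) - 2.0))
    print(n, max(devs))   # roughly halves each time n quadruples
```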

2.6 Convergence in value: proof of (9)

Assume $(r_n)$ satisfies the Connectivity requirement, and that $n$ is large enough that $\psi(r_n) < \infty$ and $K_{r_n}^2 \le 2$. When $\Omega_n(r_n/4)$ holds, by (19), we have

$\big|\mathcal{E}_n^\star - \mathcal{E}^\star\big| \le c_{r_n}\, \operatorname{diam}(M)^2 + 2\, \Delta_n,$

while when $\Omega_n(r_n/4)$ does not hold, since the energies are bounded by $\operatorname{diam}(M)^2$, we have

$\big|\mathcal{E}_n^\star - \mathcal{E}^\star\big| \le \operatorname{diam}(M)^2.$

Combining these inequalities, we deduce that

$\big|\mathcal{E}_n^\star - \mathcal{E}^\star\big| \le c_{r_n}\, \operatorname{diam}(M)^2 + \operatorname{diam}(M)^2\, 1_{\Omega_n(r_n/4)^c} + 2\, \Delta_n. \quad (27)$

Almost surely, the sum of the first two terms on the RHS tends to 0, by the fact that $c_r \to 0$ when $r \to 0$, and by (13) combined with the Borel-Cantelli lemma, since $(r_n)$ satisfies the Connectivity requirement. The third term tends to 0 by (26). Hence, (9) is established.

2.7 Convergence in solution: proof of (10)

Assume that $(r_n)$ satisfies the Connectivity requirement, and that $n$ is large enough that $K_{r_n} \le 2$. Let $(y_1^{(n)}, \dots, y_n^{(n)})$ denote any solution of Discrete MVU. When $\Omega_n(r_n/4)$ holds, there is $f_n \in \mathcal{F}(K_{r_n})$ such that $y_i^{(n)} = f_n(x_i)$ for all $i \le n$. Note that the existence of the interpolating function $f_n$ holds on $\Omega_n(r_n/4)$ for each fixed $n$, and that this does not imply the existence of an interpolating sequence $(f_n)_{n \ge 1}$. That said, for each $\omega$ in the event $\Omega_\infty := \liminf_n \Omega_n(r_n/4)$, there exists a sequence $(f_n)$ and an integer $n_0$ such that $y_i^{(n)} = f_n(x_i)$ for all $n \ge n_0$ and all $i \le n$, i.e., the sequence $(f_n)$ is interpolating a solution of Discrete MVU for all $n$ large enough. In addition, when $(r_n)$ satisfies the Connectivity requirement, (13) and the Borel-Cantelli lemma imply that $\mathbb{P}(\Omega_\infty) = 1$. Hence the event $\Omega_\infty$ holds with probability one.

In fact, without loss of generality, we may assume that $f_n \in \mathcal{F}_0(K_{r_n})$, since the energies are invariant under translation. Since $\mathcal{F}_0(2)$ is equicontinuous and bounded, it is compact for the topology of the supnorm by the Arzelà-Ascoli Theorem. Hence, any subsequence of $(f_n)$ admits a further subsequence that converges in supnorm. And since $\mathcal{F}_0(K)$ increases with $K$ and $K_{r_n} \to 1$, any accumulation point of $(f_n)$ is in $\mathcal{F}_0(1)$.

In fact, if we define $\Delta_n' := \sup_{f \in \mathcal{F}_0(2)} |\mathcal{E}_n(f) - \mathcal{E}(f)|$, then all the accumulation points of $(f_n)$ are in $\mathcal{S}$. Indeed, recalling that $f_n$ interpolates a solution, so that $\mathcal{E}_n(f_n) = \mathcal{E}_n^\star$, we have

$\mathcal{E}(f_n) = \mathcal{E}_n^\star + \big(\mathcal{E}(f_n) - \mathcal{E}_n(f_n)\big),$

with

$\big|\mathcal{E}(f_n) - \mathcal{E}_n(f_n)\big| \le \Delta_n' \to 0,$

by the analogue of (26) for $\mathcal{F}_0(2)$ (the proof applies verbatim), and

$\mathcal{E}_n^\star \to \mathcal{E}^\star,$

by (9), almost surely as $n \to \infty$. Hence, if $f$ is an accumulation point of $(f_n)$, say $f_{n_k} \to f$ in supnorm, by continuity of $\mathcal{E}$ on $\mathcal{F}_0(2)$ for the supnorm, we have $\mathcal{E}(f) = \lim_k \mathcal{E}(f_{n_k}) = \mathcal{E}^\star$, so that $f \in \mathcal{S}$.