Spectral Echolocation via the Wave Embedding

07/15/2016
by Alexander Cloninger, et al.
Yale University

Spectral embedding uses eigenfunctions of the discrete Laplacian on a weighted graph to obtain coordinates for an embedding of an abstract data set into Euclidean space. We propose a new pre-processing step of first using the eigenfunctions to simulate a low-frequency wave moving over the data and using both the position as well as the change in time of the wave to obtain a refined metric to which classical methods of dimensionality reduction can then be applied. This is motivated by the behavior of waves, symmetries of the wave equation and the hunting technique of bats. It is shown to be effective in practice and also works for other partial differential equations -- the method yields improved results even for the classical heat equation.

1. Introduction

Spectral embedding methods are based on analyzing Markov chains on a high-dimensional data set $\{x_1, \dots, x_n\} \subset \mathbb{R}^d$. There are a variety of different methods, see e.g. Belkin & Niyogi [1], Coifman & Lafon [2], Coifman & Maggioni [3], Donoho & Grimes [5], Roweis & Saul [8], Tenenbaum, de Silva & Langford [10], and Sahai, Speranzon & Banaszuk [11]. A canonical choice for the weights of the graph is to declare that the probability $p_{ij}$ to move from point $x_j$ to point $x_i$ is

$$p_{ij} = \frac{\exp\left(-\frac{1}{\varepsilon}\|x_i - x_j\|^2\right)}{\sum_{k=1}^{n} \exp\left(-\frac{1}{\varepsilon}\|x_i - x_k\|^2\right)},$$

where $\varepsilon > 0$ is a parameter that needs to be suitably chosen. This Markov chain can also be interpreted as a weighted graph that arises as the natural discretization of the underlying 'data-manifold'. Seminal results of Jones, Maggioni & Schul [6] justify considering the solutions of

$$-\Delta \phi_n = \lambda_n \phi_n$$

as measuring the intrinsic geometry of the weighted graph. Here we always assume Neumann boundary conditions whenever such a graph approximates a manifold; a concrete construction of the weighted graph and its eigenfunctions is sketched below.

Figure 1. Graphs that approximate smooth manifolds.
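The following is a minimal sketch of the construction just described: a Gaussian affinity graph and its first Laplacian eigenfunctions. The bandwidth `eps`, the symmetric normalization, and the function name are illustrative assumptions, not the paper's implementation.

    import numpy as np
    from scipy.spatial.distance import cdist

    def laplacian_eigenfunctions(X, eps, k):
        """X: (n, d) data set; returns the first k eigenpairs of a graph Laplacian."""
        W = np.exp(-cdist(X, X, 'sqeuclidean') / eps)   # Gaussian affinities w_ij
        D = W.sum(axis=1)                               # vertex degrees
        L = np.diag(D) - W                              # unnormalized graph Laplacian
        L_sym = L / np.sqrt(np.outer(D, D))             # D^{-1/2} L D^{-1/2} normalization
        evals, evecs = np.linalg.eigh(L_sym)            # ascending: lambda_1 <= ... <= lambda_n
        return evals[:k], evecs[:, :k]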

The cornerstone of spectral embedding is the realization that the map

$$x \mapsto (\phi_1(x), \phi_2(x), \dots, \phi_k(x))$$

can be used as an effective way of reducing the dimensionality. One useful explanation that is often given is to observe that the Feynman-Kac formula establishes a link between random walks on the weighted graph and the evolution of the heat equation. We observe that random walks have a tendency to be trapped in clusters and are unlikely to cross over bottlenecks and, simultaneously, that the evolution of the heat equation can be explicitly given as

$$[e^{t\Delta} f](x) = \sum_{n=1}^{\infty} e^{-\lambda_n t}\, \langle f, \phi_n \rangle\, \phi_n(x).$$

The exponential decay implies that the long-time dynamics is really governed by the low-lying eigenfunctions, which then have to be able to somehow reconstruct the random walks' inclination for getting trapped in clusters and should thus be able to reconstruct the clusters. We believe this intuition to be useful and our further exposition will be based on it.
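In code, this spectral representation of the heat evolution is a one-liner; the sketch below assumes `evals`, `evecs` come from the hypothetical eigensolver above.

    def heat_evolution(f, evals, evecs, t):
        """Evolve f under the heat equation: sum_n exp(-lambda_n t) <f, phi_n> phi_n."""
        coeffs = evecs.T @ f                  # spectral coefficients <f, phi_n>
        return evecs @ (np.exp(-evals * t) * coeffs)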

2. The Wave equation

2.1. Introduction.

Once the eigenfunctions of the Laplacian have been understood, they imply complete control over the Cauchy problem for the wave equation

$$u_{tt} = \Delta u, \qquad u(0, x) = f(x), \quad u_t(0, x) = g(x),$$

given by the eigenfunction expansion

$$u(t, x) = \sum_{n=1}^{\infty} \cos(\sqrt{\lambda_n}\, t)\, \langle f, \phi_n \rangle\, \phi_n(x) + \sum_{n=1}^{\infty} \frac{\sin(\sqrt{\lambda_n}\, t)}{\sqrt{\lambda_n}}\, \langle g, \phi_n \rangle\, \phi_n(x).$$

Throughout the rest of the paper, we will understand a solution of a wave equation as an operator of that form, which is meaningful both on smooth, compact manifolds equipped with the Laplace-Beltrami operator and on discrete weighted graphs equipped with the Graph Laplacian $\mathcal{L}$. A notable difference is the lack of decay associated with the contribution coming from higher eigenfunctions – this is closely related to the fact that the heat equation is highly smoothing while the wave equation merely preserves regularity. In one dimension, this is easily seen using d'Alembert's formula

$$u(t, x) = \frac{f(x+t) + f(x-t)}{2} + \frac{1}{2} \int_{x-t}^{x+t} g(z)\, dz,$$

implying that the translations $f(x+t)$ and $f(x-t)$ are particular solutions of the wave equation which preserve their initial roughness. However, the dynamics is still controlled by the low-lying eigenfunctions in a time-averaged sense: note that

$$\frac{1}{T} \int_0^T u(t, x)\, dt = \sum_{n=1}^{\infty} \left( \frac{1}{T} \int_0^T \cos(\sqrt{\lambda_n}\, t)\, dt \right) \langle f, \phi_n \rangle\, \phi_n(x) + \dots,$$

where the integrals decay as soon as $T \gg \lambda_n^{-1/2}$ since

$$\left| \frac{1}{T} \int_0^T \cos(\sqrt{\lambda_n}\, t)\, dt \right| \le \frac{1}{T \sqrt{\lambda_n}}.$$

Put differently, the average behavior over a certain time interval is much smoother than the instantaneous behavior. We will now prove that 'average' considerations within the framework of the wave equation allow us to reconstruct the classical distance used in spectral embedding: then, after seeing that 'average' considerations recover the known framework, we will investigate the behavior on shorter time-scales and use that as a way of deriving a finer approximation of the underlying geometry of the given data.
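The analogous wave evolution (zero initial velocity) and its time average look as follows; again a minimal sketch reusing the assumed `evals`, `evecs`, with the integral replaced by a Riemann sum.

    def wave_evolution(f, evals, evecs, t):
        """u(t) = sum_n cos(sqrt(lambda_n) t) <f, phi_n> phi_n  (zero initial velocity)."""
        coeffs = evecs.T @ f
        return evecs @ (np.cos(np.sqrt(np.maximum(evals, 0)) * t) * coeffs)

    def time_averaged_wave(f, evals, evecs, T, steps=200):
        """(1/T) * integral_0^T u(t) dt, approximated on an equispaced time grid."""
        ts = np.linspace(0, T, steps)
        return np.mean([wave_evolution(f, evals, evecs, t) for t in ts], axis=0)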

2.2. Recovering the spectral distance.

We start by defining the usual spectral distance between two elements $x, y$ w.r.t. the first $k$ eigenfunctions as

$$d_k(x, y) = \left( \sum_{n=1}^{k} (\phi_n(x) - \phi_n(y))^2 \right)^{1/2}.$$

Equivalently, this may be understood as the Euclidean distance of the embedding

$$\Phi_k : x \mapsto (\phi_1(x), \phi_2(x), \dots, \phi_k(x)).$$
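Computationally, with the truncated eigenvector matrix from the sketch above, $d_k$ is just the Euclidean distance between rows:

    def spectral_distance(evecs, i, j):
        """d_k between vertices i and j: Euclidean distance of their eigenfunction coordinates."""
        return np.linalg.norm(evecs[i, :] - evecs[j, :])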

Given the dynamical setup of a wave equation, there is another natural way of measuring distances. Given a point $x$, we define $u_x$ as the solution of

$$\partial_{tt} u = \Delta u, \qquad u(0, \cdot) = \delta_x, \quad \partial_t u(0, \cdot) = 0,$$

where $\delta_x$ is the Dirac function in the point $x$. The solution starts out centered at $x$ and then evolves naturally according to the wave equation. Since we are mainly interested in computational aspects, we will use $u_x^k$ to denote the projection of $u_x$ onto the first $k$ Laplacian eigenfunctions. It is natural to assume that if $x, y$ are close, then $u_x^k$ and $u_y^k$ should be fairly similar on most points of the domain for most of the time.

Figure 2. An evolving Dirac function and the overlap between two solutions.

We will now prove the Main Theorem stating that this notion fully recovers the spectral distance.

Theorem (The wave equation recovers the spectral distance).

Assume $G$ is connected (in the sense of $\lambda_2 > 0$). Then the average distance of the wave equations arising from Dirac measures placed in $x$ and $y$ allows us to reconstruct the spectral distance via

$$\lim_{T \to \infty} \frac{1}{T} \int_0^T \| u_x^k(t, \cdot) - u_y^k(t, \cdot) \|_{L^2}^2 \, dt = \frac{1}{2}\, d_k(x, y)^2.$$

Proof.

By definition, we have that

$$u_x^k(t, z) = \sum_{n=1}^{k} \cos(\sqrt{\lambda_n}\, t)\, \phi_n(x)\, \phi_n(z).$$

We explicitly have that

$$u_x^k(t, z) - u_y^k(t, z) = \sum_{n=1}^{k} \cos(\sqrt{\lambda_n}\, t)\, (\phi_n(x) - \phi_n(y))\, \phi_n(z).$$

Since the $\phi_n$ are orthonormal in $L^2$, the Pythagorean theorem applies and

$$\| u_x^k(t, \cdot) - u_y^k(t, \cdot) \|_{L^2}^2 = \sum_{n=1}^{k} \cos(\sqrt{\lambda_n}\, t)^2\, (\phi_n(x) - \phi_n(y))^2,$$

and, since $\lambda_n > 0$ for all $n \ge 2$, we easily see that

$$\lim_{T \to \infty} \frac{1}{T} \int_0^T \cos(\sqrt{\lambda_n}\, t)^2\, dt = \frac{1}{2},$$

and therefore

$$\lim_{T \to \infty} \frac{1}{T} \int_0^T \| u_x^k(t, \cdot) - u_y^k(t, \cdot) \|_{L^2}^2 \, dt = \frac{1}{2} \sum_{n=1}^{k} (\phi_n(x) - \phi_n(y))^2 = \frac{1}{2}\, d_k(x, y)^2. \qquad \square$$

Remark. If $G$ is not connected but has multiple connected components, then the argument shows

$$\lim_{T \to \infty} \frac{1}{T} \int_0^T \| u_x^k(t, \cdot) - u_y^k(t, \cdot) \|_{L^2}^2 \, dt = \sum_{\substack{n \le k \\ \lambda_n = 0}} (\phi_n(x) - \phi_n(y))^2 + \frac{1}{2} \sum_{\substack{n \le k \\ \lambda_n > 0}} (\phi_n(x) - \phi_n(y))^2,$$

since the contributions of the eigenfunctions with $\lambda_n = 0$ do not average out.
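A quick numerical sanity check of the Theorem, under the same assumed setup as the earlier sketches: by orthonormality, the squared wave distance at time $t$ is $\sum_n \cos(\sqrt{\lambda_n} t)^2 (\phi_n(x) - \phi_n(y))^2$, and its time average should approach $\tfrac{1}{2} d_k(x,y)^2$ on a connected graph.

    def check_theorem(evals, evecs, i, j, T=1000.0, steps=20000):
        """Compare the time-averaged squared wave distance with (1/2) d_k(i, j)^2."""
        ts = np.linspace(0, T, steps)
        diff = evecs[i, :] - evecs[j, :]          # phi_n(x) - phi_n(y)
        lam = np.maximum(evals, 0)
        avg = np.mean([(np.cos(np.sqrt(lam) * t) ** 2 * diff ** 2).sum() for t in ts])
        return avg, 0.5 * (diff ** 2).sum()       # the two values should nearly agree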

3. The Algorithm

If you want to see something, you send waves in its general direction, you don’t throw heat at it.

– attributed to Peter Lax

3.1. Spectral Echolocation.

The Theorem discussed in the preceding section suggests that we lose no information when using distances induced by the wave equation. The main underlying idea of our approach is that we naturally obtain additional information. We emphasize that the algorithm we describe here is not a dimension-reduction algorithm – instead, it can be regarded as a natural pre-processing step to enhance the effectiveness of spectral methods. Furthermore, it is more appropriate to speak of an entire family of algorithms: there are a variety of parameters and norms one could choose, and the optimal choice is not a priori clear.

Spectral Echolocation Algorithm.

  1. Input. A weighted graph $G = (V, E, w)$.

  2. Compute the first $k$ Laplacian eigenfunctions $\phi_1, \dots, \phi_k$ with eigenvalues $\lambda_1, \dots, \lambda_k$.

  3. Pick $m$ random points $x_1, \dots, x_m \in V$.

  4. Define $m$ functions $f_1, \dots, f_m$ as 'mollifications' of the indicator functions associated to the points. We propose taking the existing affinities given by the weights, $f_i(z) = w(x_i, z)$.

  5. Pick an attenuation parameter $\alpha \ge 0$. The projection of the solution of the attenuated wave equation with $f_i$ as initial datum onto the first $k$ eigenfunctions is

     $$u_i^k(t, z) = \sum_{n=1}^{k} e^{-\alpha t} \cos(\sqrt{\lambda_n}\, t)\, \langle f_i, \phi_n \rangle\, \phi_n(z).$$

  6. Define a new weight between any two points $x, y$ given by

     $$w_i(x, y) = \| u_i^k(\cdot, x) - u_i^k(\cdot, y) \|_{L^p[0,T]} + \beta\, \| \partial_t u_i^k(\cdot, x) - \partial_t u_i^k(\cdot, y) \|_{L^q[0,T]},$$

     where $\partial_t$ is the derivative in time, $\|\cdot\|_{L^p}$ and $\|\cdot\|_{L^q}$ are any norms on the space of continuous functions $C[0,T]$, and $\beta \ge 0$.

  7. Output. A distance $d(x, y)$ synthesized out of $w_1, \dots, w_m$, examples being

     $$d(x, y) = \min_{1 \le i \le m} w_i(x, y) \qquad \text{or} \qquad d(x, y) = \frac{1}{m} \sum_{i=1}^{m} w_i(x, y).$$

The underlying idea is quite simple: we start with various initial distributions of 'water' at rest. We want these initial configurations to be relatively smooth so as to avoid drastic shocks. Given such an initial configuration, we follow the evolution of the wave equation at our desired level of resolution (given by restricting to the first $k$ eigenfunctions). Points that are nearby should always have comparable levels of water as well as comparable levels of change in water level, and this is measured by the integral norms; a compact sketch of the whole procedure follows below. The exponentially decaying term $e^{-\alpha t}$ in the evolution of the solution comes from actually solving the attenuated wave equation, which further reduces high-frequency shocks and increases stability. As described above, setting $\alpha = 0$ and $p = 2$, squaring the norm, ignoring the derivative term completely ($\beta = 0$) and letting $T \to \infty$ recovers the original weights of the graph completely. In practice, we have found that other parameter choices yield better results; however, this is a purely experimental finding – identifying the best parameters and giving a theoretical justification for their success is still an open problem.
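The following is a compact end-to-end sketch of the procedure. The parameter defaults (`alpha`, `beta`, `p`, `q`, `T`, `steps`) are illustrative assumptions rather than the paper's choices, the $L^p$ norms in time are approximated by Riemann sums, and the dense pairwise tensors restrict this sketch to small graphs.

    def spectral_echolocation(W, k=50, m=10, alpha=0.1, beta=1.0, p=1, q=1,
                              T=2 * np.pi, steps=100, seed=0):
        """Refine a weighted graph W into a new distance matrix via wave echolocation."""
        rng = np.random.default_rng(seed)
        n = W.shape[0]
        D = W.sum(axis=1)
        evals, evecs = np.linalg.eigh((np.diag(D) - W) / np.sqrt(np.outer(D, D)))
        evals, evecs = np.maximum(evals[:k], 0), evecs[:, :k]
        ts = np.linspace(0, T, steps)
        dists = []
        for x in rng.choice(n, size=m, replace=False):
            c = evecs.T @ W[x]                     # coefficients of the mollified indicator f_i
            U = np.stack([evecs @ (np.exp(-alpha * t) * np.cos(np.sqrt(evals) * t) * c)
                          for t in ts])            # U[s, z] = u_i^k(t_s, z), attenuated wave
            dU = np.gradient(U, ts, axis=0)        # time derivative of the wave
            w_i = (np.abs(U[:, :, None] - U[:, None, :]) ** p).sum(axis=0) ** (1 / p) \
                + beta * (np.abs(dU[:, :, None] - dU[:, None, :]) ** q).sum(axis=0) ** (1 / q)
            dists.append(w_i)
        return np.min(dists, axis=0)               # or np.mean(dists, axis=0) for clustering

The returned matrix can then be fed to any standard spectral embedding in place of the original distances.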

4. Examples of Noisy Clustering

4.1. Parameters

We always consider the first $k$ eigenfunctions and $m = 10$ randomly chosen initial spots from which to send out waves. The attenuation factor $\alpha$ is kept fixed, and the time $T$ is chosen so that the first non-trivial eigenfunction performs one full oscillation, $T = 2\pi/\sqrt{\lambda_2}$. Together with the choice of $p$, $q$ and $\beta$, this uniquely defines the $m$ individually induced distances $w_1, \dots, w_m$; we always condense them into one distance using either

$$d(x, y) = \min_{1 \le i \le m} w_i(x, y) \qquad \text{or} \qquad d(x, y) = \frac{1}{m} \sum_{i=1}^{m} w_i(x, y).$$

Generally, continuous geometries benefit from taking the minimum because of increased smoothness, whereas clustering problems are better treated using the second type of combined distance.

4.2. Geometric Clusters with Erroneous Edges

A benefit of the refined wave echolocation metric is that, unlike heat, the transmission between two clusters does not simply depend on the number of edges but also on their topology. We consider two clusters in $\mathbb{R}^2$, each of which consists of 1000 points arranged in a unit disk; the two unit disks are well-separated, and the obstruction comes from a large number of random edges: specifically, every point is randomly connected to 4% of the points in the other cluster. Heat diffuses quickly between these two clusters due to the large number of inter-cluster connections. For this reason, the heat embedding of the data fails to separate the clusters (however, it does preserve some aspects of the topology, see Fig. 3). In contrast, the wave echolocation metric manages a clear separation of the objects.

Figure 3. Heat kernel embedding (left) and embedding of the wave echolocation metric (right). We used averaging across 10 starting positions.

4.3. Social Networks

Social networks are a natural application of spectral methods and mirror the synthetic example in Section 4.2. We examine spectral echolocation on the Facebook social circles dataset from [12], which consists of 4039 people in 10 friend groups. While there exist clear friend groups, edges within the clusters are still somewhat sparse, and there exist erroneous edges between clusters. One goal is to propagate friendship throughout the network and emphasize the clusters. Figure 4 shows the original affinity matrix, sorted by cluster number. We also compute the diffusion distance and the spectral echolocation distance and display the induced affinity matrix for both. Spectral echolocation not only compresses the inter-cluster distances, it also discovers weak similarity between different clusters that share a number of connections.

Figure 4. Original Facebook affinity matrix (left), heat kernel embedding affinity (center), and wave embedding affinity (right). The bottom row is a zoomed-in version of the top.

Substructures. Another natural goal is to detect small “friendship circles” within the larger network. These circles are based on the features that brought the group together (e.g. same university, same workplace). Overall there are 193 circles, though many contain only two or three people, and many of the larger circles are nowhere close to a dense network. We compare the average affinity within the 100 largest circles across several techniques. For both the standard heat kernel embedding and the wave embedding, we build a new graph between people based on whether two people are “10-times closer than chance”, i.e. whether

$$d(x, y) < \frac{1}{10}\, \mathbb{E}_{u,v}\, d(u, v);$$

a sketch of this construction follows after Figure 5.

We now compare the typical number of edges in each circle for the original data as well as the two embeddings – we observe a dramatic improvement. The results are displayed in Figure 5.

Figure 5. Number of friendship edges in original network (left), heat kernel embedding network (center), wave embedding network (right).
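The thresholding itself is a two-liner; the function name and the reading of “10-times closer than chance” as one tenth of the mean pairwise distance are our assumptions here.

    def closeness_graph(dist):
        """Connect pairs whose distance is 10x smaller than the mean pairwise distance."""
        mean_dist = dist[np.triu_indices_from(dist, k=1)].mean()
        return dist < mean_dist / 10.0        # boolean adjacency matrix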

5. Examples with Heterogeneous Dimensionality

5.1. Plane with Holes

We examine the behavior of waves in a porous medium. Figure 6 shows that the wave travels more quickly through the bridges (the wave speeds up while in a bottleneck) and gets caught in the intersections. The wave embedding preserves the topology of the data and emphasizes the holes.

Figure 6. Original data (left), heat kernel embedding (center), and wave embedding (right). We used the minimum distance across 10 starting positions.

5.2. Union of Manifolds

Another interesting property of the wave equation is that its evolution undergoes a dramatic change whenever the dimensionality of the manifold changes: waves are suddenly forced into a very narrow channel or – going the other direction – suddenly evolve in many different directions. We demonstrate this first in Figure 7. The data consists of two six-dimensional spheres connected by a one-dimensional line. The low-frequency eigenfunctions of the heat kernel travel from one end to the other without much recognition of the varying dimensionality. The wave embedding, however, creates a gap between the bridge and the two spheres, with the variation of the first non-trivial eigenfunction supported almost entirely on the bridge.

Figure 7. 6D spheres with 1D bridge (left), heat kernel embedding (center), and wave embedding (right). For this problem, we use the min distance across 10 starting positions.

5.3. Union of Manifolds with different dimensions

We also consider the same problem with spheres of different dimensions as in Figure 8. The data consists of a six-dimensional sphere and a three-dimensional sphere connected by a one-dimensional line. The figure displays the affinity matrices of the points, with the first block representing the six-dimensional sphere, the second block representing the three-dimensional sphere, and the third small block for the bridge. Notice that, in the heat kernel affinity, the bridge has affinity to more points in the lower dimensional sphere than the higher dimensional sphere. Also notice that the wave embedding separates the six-dimensional sphere much further from the bridge than the three-dimensional sphere.

Figure 8. 6D sphere in cluster 1 and 3D sphere in cluster 2 with 1D bridge (top left), wave embedding (top right), heat kernel affinity matrix (bottom left), and wave affinity matrix (bottom right). For this problem, we use the min distance across 10 starting positions.

Finally, we consider two six-dimensional spheres connected via a two-dimensional bridge in Figure 9. Specifically, we examine the local affinities of several points on the bridge. Notice that, for the wave equation, the affinities of points on the bridge are far from isotropic and clearly distinguish the direction the wave is traveling between the two spheres. Moreover, points on the bridge near the spheres have much lower affinity to points on the sphere than their heat kernel counterparts.

Figure 9. Neighborhoods of chosen points in 6D spheres–2D bridge example for wave embedding using averaging across 10 starting positions (top) and heat kernel affinity (bottom).

6. Comments and Remarks

6.1. Other partial differential equations

Spectral echolocation has two novel components:

  1. the evolution of a dynamical system on an existing weighted graph

  2. and the construction of a refined metric using information coming from the behavior of the dynamical system.

Our presentation has focused mainly on the case where the dynamical system is given by the wave equation; however, the method is not restricted to it. Let us quickly consider a general linear partial differential equation of the type

$$\frac{\partial}{\partial t} u(t, x) = D u(t, x),$$

where $D$ is an arbitrary differential operator. The Fourier transform in the space variable yields a separation of frequencies,

$$\frac{\partial}{\partial t} \widehat{u}(t, \xi) = p(\xi)\, \widehat{u}(t, \xi),$$

where $p(\xi)$ is the symbol of the differential operator at frequency $\xi$. This is a simple ordinary differential equation whose solution can be written down explicitly as

$$\widehat{u}(t, \xi) = e^{p(\xi) t}\, \widehat{u}(0, \xi),$$

and taking the inverse Fourier transform allows us to write the solution as

$$u(t, x) = \int_{\mathbb{R}} e^{p(\xi) t}\, \widehat{u}(0, \xi)\, e^{i x \xi}\, d\xi.$$

Differential equations for which this scheme is applicable include the classical heat equation ($p(\xi) = -\xi^2$) but also variants that include convolution with a sufficiently nice potential, the Airy equation ($p(\xi) = i\xi^3$) and, more generally, any sufficiently regular pseudo-differential operator. The crucial insight is that the abstract formulation via the Fourier transform has a direct analogue on weighted graphs: more precisely, given eigenfunctions $\phi_1, \dots, \phi_N$ associated to eigenvalues $\lambda_1, \dots, \lambda_N$, the natural 'frequency' associated to $\phi_n$ is, of course, $\sqrt{\lambda_n}$, and we may define the solution of

$$\frac{\partial}{\partial t} u = D u, \qquad u(0, \cdot) = f,$$

in the same way via

$$u(t, x) = \sum_{n=1}^{N} e^{p(\sqrt{\lambda_n})\, t}\, \langle f, \phi_n \rangle\, \phi_n(x).$$

Figure 10. Wave (top row), heat (second row), Airy (third row), and Schrödinger (bottom row).

Natural 'symbols' include heat, $p(\xi) = -\xi^2$, Airy, $p(\xi) = i\xi^3$, or Schrödinger, $p(\xi) = i\xi^2$. Naturally, the same analysis goes through for second-order (in time) equations of the type $u_{tt} = Du$, and our analysis of the attenuated wave equation above follows that scheme. The analysis of partial differential equations on graphs is still in its infancy, and our original motivation for using the wave equation is a number of desirable properties that seem uniquely suited for the task at hand: no dissipation of energy and finite speed of propagation. Numerical examples show that different symbols can induce very similar neighborhoods: we believe that this merits further study; in particular, a thorough theoretical analysis of the proposed family of algorithms is highly desirable.
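A sketch of this generic propagation scheme, reusing the assumed eigensolver from above; the three symbols are the ones named in the text, and complex values are expected for Airy and Schrödinger.

    def propagate(f, evals, evecs, t, symbol):
        """u(t) = sum_n exp(p(sqrt(lambda_n)) t) <f, phi_n> phi_n for a symbol p."""
        freq = np.sqrt(np.maximum(evals, 0))      # natural 'frequency' sqrt(lambda_n)
        return evecs @ (np.exp(symbol(freq) * t) * (evecs.T @ f))

    heat        = lambda xi: -xi ** 2             # p(xi) = -xi^2
    airy        = lambda xi: 1j * xi ** 3         # p(xi) = i xi^3
    schrodinger = lambda xi: 1j * xi ** 2         # p(xi) = i xi^2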

6.2. Special case: Heat equation

We want to emphasize that our approach is novel even when we choose to emulate classical heat propagation. This method can outperform the classical (unrefined) embedding via Laplacian eigenmaps even in relatively simple toy examples: we consider the classical 2D dumbbell example in Figure 11.

Figure 11. The dumbbell domain in our experiment.

This example has a small Cheeger constant due to the bottleneck, which means the first non-trivial eigenfunction will be essentially constant on the boxes and change rapidly on the bottleneck. This classical example illustrates well how the first nontrivial eigenfunction can be used as a classifier; the classical Laplacian eigenmap works spectacularly well here without any further modifications.

Figure 12. The values of the first nontrivial eigenfunction for classical Laplacian eigenmaps vs. the refined heat metric.

Figure 12 compares the first non-trivial eigenfunction of the Laplacian with the first non-trivial eigenfunction of the heat equation distance matrix. We observe that the refined heat metric yields a much better approximation to a function that is piecewise constant on the two boxes and allows for a more accurate reconstruction of the bridge. We also observe that its nontrivial eigenfunction is, to a remarkable degree, essentially constant on the two clusters, which further increases its value as a classifier.

7. Conclusions

Summary. We have presented a new pre-processing technique for classical dimensionality-reduction techniques based on spectral methods. The underlying new idea comes in three parts: (1) if one computes eigenfunctions of the Laplacian, then one might just as well use them to simulate the evolution of a partial differential equation on the existing weighted graph; (2) especially for physically meaningful dynamical systems such as the wave equation, one would expect points with high affinity to behave similarly throughout time; and (3) this motivates the construction of a refined metric extracting information from the behavior of the dynamical system.

The wave equation. We were originally motivated by a series of desirable properties of the wave equation on $\mathbb{R}^d$: preservation of regularity and finite speed of propagation. Recall that one of the fundamental differences between the heat equation and the wave equation is that solutions of the heat equation experience an instantaneous gain in smoothness while the wave equation merely preserves the smoothness of the initial datum (and sometimes not even that). Our main point is that this is not arbitrary but due to physical phenomena whose counterparts in the world of data can provide a refined measurement: the lack of regularity can be helpful! However, as we have shown, there are similar effects for most other partial differential equations, and theoretical justifications precise enough to distinguish between various dynamical systems are still missing – we believe this to be a fascinating open problem.

Refined metrics. Similarly, our way of refining metrics, either by taking the minimum or by compiling an average, is motivated by considerations (see also [9]) that are not specifically tuned to our use of partial differential equations – another fascinating open question is whether there is a more natural and attuned way of extracting this information.

Acknowledgement. The authors are grateful to Raphy Coifman for a series of fruitful discussions and helpful suggestions. A.C. is supported by an NSF Postdoctoral Fellowship #1402254, S.S. is supported by an AMS Simons Travel Grant and INET Grant #INO15-00038.

References

  • [1] M. Belkin and P. Niyogi, Laplacian Eigenmaps for Dimensionality Reduction and Data Representation, Neural Computation 15 (2003): 1373–1396.
  • [2] R. Coifman and S. Lafon, Diffusion maps. Appl. Comput. Harmon. Anal. 21 (2006), no. 1, 5–30.
  • [3] R. Coifman and M. Maggioni, Diffusion wavelets. Appl. Comput. Harmon. Anal. 21 (2006), no. 1, 53–94.
  • [4] G. David and A. Averbuch, Hierarchical data organization, clustering and denoising via localized diffusion folders. Appl. Comput. Harmon. Anal. 33 (2012), no. 1, 1–23.
  • [5] D. Donoho and C. Grimes, Hessian eigenmaps: locally linear embedding techniques for high-dimensional data. Proc. Natl. Acad. Sci. USA 100 (2003), no. 10, 5591–5596.
  • [6] P. Jones, M. Maggioni and R. Schul, Manifold parametrizations by eigenfunctions of the Laplacian and heat kernels. Proc. Natl. Acad. Sci. USA 105 (2008), no. 6, 1803–1808.
  • [7] P. Jones, M. Maggioni and R. Schul, Universal local parametrizations via heat kernels and eigenfunctions of the Laplacian. Ann. Acad. Sci. Fenn. Math. 35 (2010), no. 1, 131–174.
  • [8] S. Roweis and L. Saul, Nonlinear dimensionality reduction by locally linear embedding, Science 290 (2000) 2323–2326.
  • [9] S. Steinerberger, A Filtering Technique for Markov Chains with Applications to Spectral Embedding, Applied and Computational Harmonic Analysis , 40 (2016), 575–587.
  • [10] J. Tenenbaum, V. de Silva, J. Langford, A global geometric framework for nonlinear dimensionality reduction, Science 290 (2000) 2319–2323.
  • [11] T. Sahai, A. Speranzon, A. Banaszuk, Hearing the clusters of a graph: A distributed algorithm, Automatica 48 (2012) 15–24.
  • [12] J. McAuley and J. Leskovec, Learning to Discover Social Circles in Ego Networks, NIPS (2012).