Making Laplacians commute

07/19/2013 ∙ by Michael M. Bronstein, et al.

In this paper, we construct multimodal spectral geometry by finding a pair of closest commuting operators (CCO) to a given pair of Laplacians. The CCOs are jointly diagonalizable and hence have the same eigenbasis. Our construction naturally extends classical data analysis tools based on spectral geometry, such as diffusion maps and spectral clustering. We provide several synthetic and real examples of applications in dimensionality reduction, shape analysis, and clustering, demonstrating that our method better captures the inherent structure of multi-modal data.


1 Introduction

Spectral methods have proved to be an important and versatile tool in a wide range of problems in the fields of computer graphics, machine learning, pattern recognition, and computer vision. In computer graphics and geometry processing, classical signal processing methods based on frequency transforms were generalized to non-Euclidean spaces (Riemannian manifolds), where the eigenfunctions of the Laplace-Beltrami operator act as a non-Euclidean analog of the Fourier basis, allowing one to perform harmonic analysis on the manifold. Applications based on such approaches include shape compression [39], filtering [44], pose transfer [43, 60], symmetry detection [55], shape description [65, 33, 17, 46, 4], retrieval [15, 13], and correspondence [54, 30, 53].

In pattern recognition, one can think of the data as a low-dimensional manifold embedded into a high-dimensional space, whose local intrinsic structure is represented by the Laplace-Beltrami operator. In the discrete version, the manifold is represented as a graph and the Laplace-Beltrami operator as a graph Laplacian. Many problems thus boil down to finding the first few eigenfunctions of the Laplacian: for example, in spectral clustering [52], clusters are determined by the eigenfunctions corresponding to the smallest eigenvalues of the Laplacian; eigenmaps [8] and diffusion maps [26, 51] embed the manifold into a low-dimensional space using these eigenfunctions or the related heat operator; and diffusion metrics [26] measure the distances in this low-dimensional space. Other examples include spectral graph partitioning [28], spectral hashing [69], image segmentation [63], spectral correspondence, and shape analysis.

Multimodal spectral geometry. Many data analysis applications involve observations and measurements of data using different modalities, such as multimedia documents [6, 70, 59, 50], audio and video [40, 1, 62], images with different lighting conditions [5], or medical imaging modalities [16]. In shape analysis applications, it is important to be able to design compatible bases on multiple shapes, e.g. in order to transfer functions or vector fields from one shape to another [41].

While spectral methods have been extensively studied for a single data space (manifold), there have been relatively few attempts at a principled and systematic extension of spectral methods to multimodal settings involving multiple data spaces. In particular, problems of multimodal (or ‘multi-view’) clustering have gained increasing interest in the computer vision and pattern recognition community [27, 49, 66, 19, 42, 29]. Sindhwani et al. [64] used a convex combination of Laplacians in the ‘co-regularization’ framework. Manifold alignment approaches [35, 68, 67] consider multiple manifolds as a single space with ‘connections’ between points and try to find an aligned set of eigenvectors. A similar philosophy has been followed in the recent work of Eynard et al. [31], who proposed an extension of spectral methods to the multimodal setting by finding a common eigenbasis of multiple Laplacians by means of joint approximate diagonalization [18, 20, 21, 71, 72]. They also showed that many previous methods for multi-modal clustering can be developed as instances of the joint diagonalization framework. Kovnatsky et al. [41] used joint diagonalization in computer graphics and shape analysis problems.

Main contribution. In this paper, we study a class of methods we term closest commuting operators (CCO), which we show to be equivalent to joint diagonalization. However, one of the main drawbacks of joint diagonalization is that, when applied to Laplacians, it does not preserve their structure. With the CCO problem, on the other hand, we can restrict our search to the set of legal Laplacian matrices, thus finding closest commuting Laplacians with the same sparse structure rather than arbitrary matrices. We show that such optimization produces meaningful multimodal spectral geometric constructions.

The rest of the paper is organized as follows. In Section 2, we provide the background on spectral geometry of graphs. In Section 3, we formulate our CCO problem. For simplicity of the discussion, we consider undirected graphs with equal vertex sets and unnormalized Laplacians. We discuss the relation between joint diagonalization and closest commuting matrices and show that the two problems are equivalent. Section 4 is dedicated to numerical implementation. In Section 5, we discuss the generalization to the setting of different vertex sets using the notion of functional correspondence. Section 6 presents experimental results. Finally, Section 7 concludes the paper.

2 Background

2.1 Notation and definitions

Let $A, B$ be two real $n \times n$ matrices. We denote by $\|A\|_{\mathrm{F}}$ and $\|A\|_2$ the Frobenius and the operator norm (induced by the Euclidean vector norm) of $A$, respectively. We say that $A$ and $B$ commute if $AB = BA$, and call $[A, B] = AB - BA$ their commutator. If there exists a unitary matrix $U$ such that $U^\top A U$ and $U^\top B U$ are diagonal, we say that $A, B$ are jointly diagonalizable and call such $U$ the joint eigenbasis of $A$ and $B$. We denote by $\mathrm{diag}(A)$ a column vector containing the diagonal elements of matrix $A$, and by $\mathrm{diag}(a_1, \dots, a_n)$ a diagonal matrix containing on the diagonal the elements $a_1, \dots, a_n$. Furthermore, we use $\mathrm{Diag}(A)$ to denote a diagonal matrix obtained by setting to zero the off-diagonal elements of $A$.

2.2 Spectral geometry

Let us be given an undirected graph $G = (V, E)$ with vertices $V = \{1, \dots, n\}$ and weighted edges $E \subseteq V \times V$ with weights $w_{ij} \ge 0$. We say that vertices $i, j$ are connected if $(i, j) \in E$, or alternatively, $w_{ij} > 0$. The $n \times n$ matrix $W = (w_{ij})$ is called the adjacency matrix and

(1)  $L = \mathrm{diag}\Big(\textstyle\sum_{j} w_{1j}, \dots, \sum_{j} w_{nj}\Big) - W$

the (unnormalized) Laplacian of $G$. Since in an undirected graph $(i, j) \in E$ implies $(j, i) \in E$, the matrices $W$ and $L$ are symmetric. Consequently, $L$ admits the unitary eigendecomposition $L = \Phi \Lambda \Phi^\top$ with orthonormal eigenvectors $\Phi = (\phi_1, \dots, \phi_n)$ and real eigenvalues $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$, $0 = \lambda_1 \le \lambda_2 \le \dots \le \lambda_n$.
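For illustration, the construction above can be sketched in a few lines of NumPy (the toy graph and all names are illustrative, not from the paper):

```python
import numpy as np

def unnormalized_laplacian(W):
    """L = D - W, with D the diagonal degree matrix (row sums of W), as in (1)."""
    return np.diag(W.sum(axis=1)) - W

# Toy 3-vertex path graph with unit weights
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
L = unnormalized_laplacian(W)

# L is symmetric with zero row sums; eigh gives the unitary eigendecomposition
# L = Phi Lambda Phi^T with eigenvalues in ascending order, lam[0] = 0
lam, Phi = np.linalg.eigh(L)
```

The eigenvector corresponding to the zero eigenvalue is the constant vector, reflecting that the graph is connected.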

Spectral graph theory [24] studies the properties of a graph by analyzing the spectral properties of its Laplacian. It is closely related to the spectral geometry of Riemannian manifolds [11], of which graphs can be thought of as a discretization; the Laplacian matrix corresponds to the Laplace-Beltrami operator on a Riemannian manifold. In particular, spectral methods have been successfully applied in the fields of machine learning and shape analysis. We outline below the main spectral geometric constructions to which we will refer later in the paper.

Fourier analysis on graphs. Given a function defined on the vertices of the graph and represented as the $n$-dimensional column vector $f = (f_1, \dots, f_n)^\top$, we can decompose it in the orthonormal basis of the Laplacian eigenvectors using Fourier series,

$f = \sum_{k=1}^n \langle f, \phi_k \rangle \phi_k,$

or in matrix notation, $f = \Phi \Phi^\top f$.

Heat diffusion on graphs. Similarly to the standard heat diffusion equation, one can define a diffusion process on $G$, governed by the following PDE:

$\frac{\mathrm{d} f(t)}{\mathrm{d} t} = -L f(t),$

where the solution $f(t) = (f_1(t), \dots, f_n(t))^\top$ is the amount of heat at time $t$ at the vertices $1, \dots, n$. The solution of the heat equation is given by $f(t) = e^{-tL} f(0)$, and one can easily verify that it satisfies the heat equation and the initial condition $f(0)$. The matrix

$H^t = e^{-tL} = \Phi e^{-t\Lambda} \Phi^\top$

is called the heat operator (or the heat kernel) and can be interpreted as the ‘impulse response’ of the heat equation.
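The heat operator can be computed directly from the eigendecomposition of the Laplacian; a minimal NumPy sketch on a toy graph (illustrative names, not the paper's code):

```python
import numpy as np

# 3-vertex path graph
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
L = np.diag(W.sum(axis=1)) - W
lam, Phi = np.linalg.eigh(L)

def heat_operator(t):
    """H^t = Phi exp(-t Lambda) Phi^T, the 'impulse response' of the heat equation."""
    return Phi @ np.diag(np.exp(-t * lam)) @ Phi.T

f0 = np.array([1., 0., 0.])      # unit heat at vertex 0
ft = heat_operator(0.5) @ f0     # heat distribution at t = 0.5
```

Note that $H^0 = I$, the semigroup property $H^s H^t = H^{s+t}$ holds, and the total amount of heat is conserved (the constant vector is an eigenvector of $L$ with eigenvalue zero).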

Diffusion maps. Embeddings by means of the heat kernel have been studied by Bérard et al. [10] and Coifman et al. [26, 25]. In the context of non-linear dimensionality reduction, Belkin and Niyogi [8, 9] showed that finding a neighborhood-preserving $m$-dimensional embedding of the graph can be posed as the minimum eigenvalue problem,

(2)  $\min_{U \in \mathbb{R}^{n \times m}:\ U^\top U = I} \ \mathrm{trace}(U^\top L U),$

which has an analytic solution given by the first eigenvectors $\phi_1, \dots, \phi_m$ of the Laplacian (in practice the constant eigenvector is discarded), referred to as the Laplacian eigenmap. The neighborhood-preserving property of the eigenmaps is related to the fact that the smallest ‘low-frequency’ eigenvectors of the Laplacian vary smoothly on the vertices of the graph.

More generally, a diffusion map is given as a mapping of the form $i \mapsto \big(K(\lambda_1)\phi_{1i}, \dots, K(\lambda_n)\phi_{ni}\big)^\top$, where $K(\lambda)$ is some transfer function acting as a ‘low-pass filter’ on the eigenvalues [26, 25]. In particular, the setting $K(\lambda) = e^{-t\lambda}$ corresponds to the heat kernel embedding.
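A diffusion map is easy to sketch once the eigendecomposition is available; the following NumPy fragment (toy 4-cycle graph, heat-kernel transfer function, all names illustrative) embeds each vertex using the first non-constant eigenvectors:

```python
import numpy as np

# 4-cycle graph; Laplacian eigenvalues are 0, 2, 2, 4
W = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
L = np.diag(W.sum(axis=1)) - W
lam, Phi = np.linalg.eigh(L)          # ascending eigenvalues

def diffusion_map(m, t=1.0):
    """Embed vertices with the m smallest non-constant eigenvectors,
    scaled by the 'low-pass' transfer function K(lam) = exp(-t*lam)."""
    return Phi[:, 1:m + 1] * np.exp(-t * lam[1:m + 1])

Y = diffusion_map(2)                  # rows of Y = 2D embedded vertices
```

For the symmetric cycle, all embedded points lie at the same distance from the origin, reflecting the graph's symmetry.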

Diffusion distances. Coifman et al. [26, 25] defined the diffusion distance as a ‘cross-talk’ between the heat kernels,

(3)  $d_t^2(k, l) = \big\|H^t(\delta_k - \delta_l)\big\|_2^2 = \sum_{m=1}^n e^{-2t\lambda_m} (\phi_{mk} - \phi_{ml})^2,$

where $\delta_k$ denotes a unit impulse at vertex $k$. Intuitively, $d_t(k, l)$ measures the ‘reachability’ of vertex $l$ from $k$ by a heat diffusion of length $t$.
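The two equivalent forms of the diffusion distance (eigen-expansion vs. distance between heat-kernel rows) can be checked numerically; a minimal NumPy sketch on a toy path graph (illustrative names):

```python
import numpy as np

# 4-vertex path graph
W = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
L = np.diag(W.sum(axis=1)) - W
lam, Phi = np.linalg.eigh(L)

def diffusion_distance(i, j, t):
    """Eigen-expansion form: sqrt(sum_m exp(-2 t lam_m) (phi_m(i) - phi_m(j))^2)."""
    return np.sqrt(np.sum(np.exp(-2 * t * lam) * (Phi[i] - Phi[j]) ** 2))

# Equivalent form: Euclidean distance between rows of the heat operator H^t
H = Phi @ np.diag(np.exp(-1.0 * lam)) @ Phi.T   # t = 1
```

On the path, the diffusion distance between the endpoints exceeds that between neighbors, matching the ‘reachability’ intuition.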

Spectral clustering. Ng et al. [52] introduced a very efficient and robust clustering approach based on the observation that the multiplicity of the null eigenvalue of $L$ is equal to the number of connected components of the graph, and the corresponding eigenvectors act as indicator functions of these components. Embedding the data using the null eigenvectors of $L$ and then applying a standard clustering algorithm such as K-means was shown to produce significantly better results than clustering the high-dimensional data directly.
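The component-indicator property of the null eigenvectors can be verified in a few lines of NumPy (toy graph with two exactly disconnected components; for simplicity, exact row matching stands in for the K-means step):

```python
import numpy as np

# Graph with two connected components: {0, 1} and {2, 3}
W = np.zeros((4, 4))
W[0, 1] = W[1, 0] = 1.0
W[2, 3] = W[3, 2] = 1.0
L = np.diag(W.sum(axis=1)) - W
lam, Phi = np.linalg.eigh(L)

k = int(np.sum(lam < 1e-9))   # multiplicity of the null eigenvalue = #components
U = Phi[:, :k]                # null eigenvectors: constant on each component
# vertices in the same component have identical rows of U (up to rounding)
labels = np.unique(np.round(U, 6), axis=0, return_inverse=True)[1].ravel()
```

For graphs that are only weakly (rather than exactly) disconnected, the rows of $U$ are no longer identical, and a clustering step such as K-means is applied to them instead.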

2.3 Joint diagonalization

In many data analysis applications, we have multiple modalities or ‘views’ of the same data, which can be considered as graphs with different connectivities (sometimes referred to as multi-layered graphs [29]) with equal vertex sets and different weighted edges, with corresponding adjacency matrices $W_1, W_2$ and Laplacians $L_1, L_2$. We denote their respective eigenvalues by $\Lambda_i = \mathrm{diag}(\lambda_{i,1}, \dots, \lambda_{i,n})$, eigenvectors by $\Phi_i$, and the heat operators by $H_i^t = \Phi_i e^{-t\Lambda_i} \Phi_i^\top$, $i = 1, 2$.

The main question treated in this paper is how to generalize the spectral geometric constructions to such a setting, obtaining a single object such as a diffusion map or distance from multiple graphs. Eynard et al. [31] proposed constructing multimodal spectral geometry by finding a common orthonormal basis $\hat\Phi$ that approximately jointly diagonalizes the symmetric Laplacians $L_1, L_2$ by solving the optimization problem

(4)  $\min_{\hat\Phi^\top \hat\Phi = I} \ \mathrm{off}(\hat\Phi^\top L_1 \hat\Phi) + \mathrm{off}(\hat\Phi^\top L_2 \hat\Phi),$

where $\mathrm{off}(A) = \sum_{k \ne l} a_{kl}^2$ denotes the squared norm of the off-diagonal elements of a matrix. Minimization of (4) can be carried out using a Jacobi-type method referred to as JADE [21]. Kovnatsky et al. [41] proposed a more efficient approach for finding the first few joint approximate eigenvectors, representing them as linear combinations of the eigenvectors $\Phi_1$ and $\Phi_2$ of $L_1, L_2$.

The joint basis $\hat\Phi$ obtained in this way approximately diagonalizes the Laplacians, such that $\hat\Phi^\top L_i \hat\Phi \approx \hat\Lambda_i = \mathrm{Diag}(\hat\Phi^\top L_i \hat\Phi)$. The approximate matrices $\hat L_i = \hat\Phi \hat\Lambda_i \hat\Phi^\top$, obtained by setting to zero the off-diagonal elements of $\hat\Phi^\top L_i \hat\Phi$, are jointly diagonalizable. Eynard et al. [31] used the approximate joint eigenvectors $\hat\Phi = (\hat\phi_1, \dots, \hat\phi_n)$ and the average joint approximate eigenvalues $\bar\lambda_m = \frac{1}{2}(\hat\lambda_{1,m} + \hat\lambda_{2,m})$ to construct joint ‘heat kernels’

(5)  $\hat H^t = \hat\Phi e^{-t\bar\Lambda} \hat\Phi^\top, \qquad \bar\Lambda = \mathrm{diag}(\bar\lambda_1, \dots, \bar\lambda_n),$

and multimodal diffusion distances

(6)  $\hat d_t^2(k, l) = \sum_{m=1}^n e^{-2t\bar\lambda_m} (\hat\phi_{mk} - \hat\phi_{ml})^2.$
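The off-diagonal cost in (4) and its relation to commutativity can be checked numerically. The sketch below (illustrative, not JADE) constructs two commuting symmetric matrices from a shared eigenbasis and verifies that the eigenbasis of a generic linear combination jointly diagonalizes both, driving the cost of (4) to zero:

```python
import numpy as np

def off(A):
    """Squared Frobenius norm of the off-diagonal elements of A, as in (4)."""
    return np.sum(A ** 2) - np.sum(np.diag(A) ** 2)

rng = np.random.default_rng(0)
Q = np.linalg.qr(rng.standard_normal((5, 5)))[0]   # random orthogonal basis
L1 = Q @ np.diag(rng.random(5)) @ Q.T              # share the eigenbasis Q,
L2 = Q @ np.diag(rng.random(5)) @ Q.T              # hence L1 and L2 commute
_, Phi = np.linalg.eigh(L1 + 0.5 * L2)             # eigenbasis of a combination
```

For non-commuting Laplacians, no basis achieves zero cost, and (4) becomes a genuine approximate joint diagonalization problem.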

2.4 Relation between joint diagonalizability and commutativity

Joint diagonalizability of matrices is intimately related to their commutativity. It is well-known that two symmetric matrices are jointly diagonalizable iff they commute [38]. In [34], we extended this result to the approximate setting, showing that almost jointly diagonalizable matrices almost commute:

Theorem 2.1 (Glashoff-Bronstein 2013).

There exist functions $\delta_1(x), \delta_2(x)$ satisfying $\lim_{x \to 0} \delta_1(x) = \lim_{x \to 0} \delta_2(x) = 0$, such that for any two symmetric $n \times n$ matrices $A, B$ with $\|A\|_2, \|B\|_2 \le 1$,

$\delta_1\big(\mathrm{off}(A, B)\big) \le \big\|[A, B]\big\|_{\mathrm{F}} \le \delta_2\big(\mathrm{off}(A, B)\big),$

where $\mathrm{off}(A, B) = \min_{U^\top U = I} \mathrm{off}(U^\top A U) + \mathrm{off}(U^\top B U)$. Furthermore, the lower bound is tight.

On the other hand, it is known that almost commuting matrices are close to commuting matrices, e.g. in the following sense [45, 61, 48]:

Theorem 2.2 (Lin 1997).

There exists a function $\delta(\epsilon)$ satisfying $\lim_{\epsilon \to 0} \delta(\epsilon) = 0$ with the following property: If $A, B$ are two self-adjoint $n \times n$ matrices satisfying $\|A\|_2, \|B\|_2 \le 1$ and $\|[A, B]\|_{\mathrm{F}} \le \epsilon$, then there exists a pair of commuting matrices $\tilde A, \tilde B$ (i.e., $[\tilde A, \tilde B] = 0$) satisfying $\|A - \tilde A\|_{\mathrm{F}} \le \delta(\epsilon)$ and $\|B - \tilde B\|_{\mathrm{F}} \le \delta(\epsilon)$.

The combination of Theorems 2.1 and 2.2 implies that approximately jointly diagonalizable matrices are close to jointly diagonalizable matrices, and provides an alternative to the joint diagonalization approaches used in [31, 41]: instead of trying to approximately diagonalize the matrices $L_1, L_2$, we minimally modify them to make them commute and thus become jointly diagonalizable,

(7)  $\min_{\tilde L_1, \tilde L_2} \ \|L_1 - \tilde L_1\|_{\mathrm{F}}^2 + \|L_2 - \tilde L_2\|_{\mathrm{F}}^2 \quad \text{s.t.} \quad [\tilde L_1, \tilde L_2] = 0.$

Finally, the following result (an analogous theorem for the related problem of almost normal complex matrices is presented in [36], where it is attributed to [32] and [22]) provides an even stronger connection between problems (7) and (4):

Theorem 2.3.

Let $A, B$ be symmetric $n \times n$ matrices. Then,

$\min_{[\tilde A, \tilde B] = 0} \|A - \tilde A\|_{\mathrm{F}}^2 + \|B - \tilde B\|_{\mathrm{F}}^2 \;=\; \min_{U^\top U = I} \mathrm{off}(U^\top A U) + \mathrm{off}(U^\top B U).$

Proof.

Let $(\tilde A, \tilde B)$ denote a pair of commuting matrices, and $U$ a unitary matrix. Let $\hat U$ be the joint approximate eigenbasis of $A, B$ minimizing $\mathrm{off}(\hat U^\top A \hat U) + \mathrm{off}(\hat U^\top B \hat U)$. We further define the commuting pair

$\hat A = \hat U \, \mathrm{Diag}(\hat U^\top A \hat U) \, \hat U^\top, \qquad \hat B = \hat U \, \mathrm{Diag}(\hat U^\top B \hat U) \, \hat U^\top.$

Using the fact that the Frobenius norm is invariant under unitary transformations, we get the following sequence of inequalities:

(8)  $\min_{[\tilde A, \tilde B] = 0} \|A - \tilde A\|_{\mathrm{F}}^2 + \|B - \tilde B\|_{\mathrm{F}}^2 \;\le\; \|A - \hat A\|_{\mathrm{F}}^2 + \|B - \hat B\|_{\mathrm{F}}^2 \;=\; \min_{U^\top U = I} \mathrm{off}(U^\top A U) + \mathrm{off}(U^\top B U).$

Now suppose that $\tilde A$ and $\tilde B$ are the closest commuting matrices to $A, B$, minimizing $\|A - \tilde A\|_{\mathrm{F}}^2 + \|B - \tilde B\|_{\mathrm{F}}^2$. Commuting matrices are jointly diagonalizable by a unitary matrix that we denote by $\tilde U$. Since changing a zero term in a matrix to a non-zero term can only increase the Frobenius norm, we get

(9)  $\min_{U^\top U = I} \mathrm{off}(U^\top A U) + \mathrm{off}(U^\top B U) \;\le\; \mathrm{off}(\tilde U^\top A \tilde U) + \mathrm{off}(\tilde U^\top B \tilde U) \;\le\; \|A - \tilde A\|_{\mathrm{F}}^2 + \|B - \tilde B\|_{\mathrm{F}}^2. \qquad \square$

Since the left-hand side of (8) equals the right-hand side of (9), all inequalities in the proof of Theorem 2.3 turn out to be equalities, so we immediately get the following

Corollary 2.1.

Let $A, B$ be symmetric matrices.

1. Let $\hat U$ be the approximate joint eigenbasis of $A, B$ minimizing $\mathrm{off}(\hat U^\top A \hat U) + \mathrm{off}(\hat U^\top B \hat U)$. Then, $\hat A = \hat U \, \mathrm{Diag}(\hat U^\top A \hat U) \, \hat U^\top$ and $\hat B = \hat U \, \mathrm{Diag}(\hat U^\top B \hat U) \, \hat U^\top$ are the closest commuting matrices to $A, B$.

2. Let $\tilde A, \tilde B$ be the closest commuting matrices to $A, B$. Then, their joint eigenbasis $\tilde U$ satisfies $\mathrm{off}(\tilde U^\top A \tilde U) + \mathrm{off}(\tilde U^\top B \tilde U) = \min_{U^\top U = I} \mathrm{off}(U^\top A U) + \mathrm{off}(U^\top B U)$.

In other words, the joint approximate diagonalization problem (4) and the closest commuting matrices problem (7) are equivalent, and we can solve one by solving the other. However, the big advantage of (7) is that we have explicit control over the structure of the resulting matrices $\tilde A, \tilde B$, while in (4) this is impossible. In particular, when applied to Laplacian matrices, we cannot guarantee that the matrices obtained by approximate diagonalization of $L_1, L_2$ are legal Laplacians (see Figure 1).

In the following section, we solve problem (7) on the subset of Laplacian matrices and explore its application to the construction of multimodal spectral geometry.


Figure 1: Comparison of the result of the joint diagonalization (center) and closest commuting operator (right) problems applied to a pair of Laplacians (left). JADE does not preserve the sparse structure of the Laplacians. Even worse, the resulting matrices are not legal Laplacians, as their rows no longer sum to zero.

3 Problem formulation

Denote by $\mathcal{L}(E)$ the set of Laplacian matrices of an undirected graph $(V, E)$ with arbitrary nonnegative edge weights. Let us be given two undirected graphs $G_1 = (V, E_1)$ and $G_2 = (V, E_2)$ with adjacency matrices $W_1, W_2$ and Laplacians $L_1, L_2$, and let $\tilde G_i = (V, \tilde E_i)$ be new graphs, where the edges are defined either as $\tilde E_i = E_i$ (the connectivity of $\tilde G_i$ is identical to that of $G_i$), or as $\tilde E_1 = \tilde E_2 = E_1 \cup E_2$ (the connectivity of $\tilde G_i$ is a union of the edge sets of $G_1$ and $G_2$). We denote their respective adjacency matrices by $\tilde W_i$ and the Laplacians by $\tilde L_i$.

We are looking for such edge weights that $\tilde L_1$ and $\tilde L_2$ commute and are as close as possible to $L_1, L_2$:

(10)  $\min_{\tilde L_1 \in \mathcal{L}(\tilde E_1),\ \tilde L_2 \in \mathcal{L}(\tilde E_2)} \ \|L_1 - \tilde L_1\|_{\mathrm{F}}^2 + \|L_2 - \tilde L_2\|_{\mathrm{F}}^2 \quad \text{s.t.} \quad [\tilde L_1, \tilde L_2] = 0.$

Problem (10) is a version of problem (7) where the space of the matrices is restricted to valid Laplacians with the same structure as $L_1, L_2$. We call the Laplacians produced by solving (10) the closest commuting operators (CCO).

Since $\tilde L_1, \tilde L_2$ commute, they are jointly diagonalizable, i.e., we can find a single eigenbasis $\tilde\Phi$ such that $\tilde L_i = \tilde\Phi \tilde\Lambda_i \tilde\Phi^\top$, $i = 1, 2$. (Individual diagonalization of $\tilde L_1, \tilde L_2$ does not guarantee that the respective eigenvectors are identical, as the eigenvectors are defined up to a sign for matrices with simple spectrum, or, more generally, up to an isometry in the eigen-subspaces corresponding to eigenvalues with multiplicity greater than one. It therefore makes sense to jointly diagonalize $\tilde L_1, \tilde L_2$ using e.g. JADE even in this case, see [18].) W.r.t. this eigenbasis, we can write the heat operators

(11)  $\tilde H_i^t = \tilde\Phi e^{-t\tilde\Lambda_i} \tilde\Phi^\top,$

and diffusion distances

(12)  $\tilde d_{i,t}^2(k, l) = \sum_{m=1}^n e^{-2t\tilde\lambda_{i,m}} (\tilde\phi_{mk} - \tilde\phi_{ml})^2.$

3.1 Existence of CCOs

An important question is how far the CCOs $\tilde L_1, \tilde L_2$ can be from the original Laplacians $L_1, L_2$. We should stress that Lin’s Theorem 2.2 is not directly applicable to our problem (10): it guarantees that if $\|[L_1, L_2]\|_{\mathrm{F}} \le \epsilon$, there exist two arbitrary commuting matrices $\delta(\epsilon)$-close to $L_1, L_2$, while we are looking for two Laplacians with the same structure. The question is therefore whether there exists a version of Theorem 2.2 that holds for a subset of such matrices.

While answering this question is a subject for future theoretical research, we provide empirical evidence that almost-commuting Laplacians are close to commuting Laplacians. In our experiment shown in Figure 2, we generated pairs of random Laplacian matrices of various sizes, with random $k$-neighbor connectivity ($k$ random per vertex) and uniformly distributed weights. We consider two matrices ‘numerically commuting’ if the Frobenius norm of their commutator falls below a small fixed threshold. The behavior observed in Figure 2 suggests the following

Conjecture 3.1.

There exists a function $\delta(\epsilon)$ satisfying $\lim_{\epsilon \to 0} \delta(\epsilon) = 0$, such that for any two Laplacians $L_1, L_2$ with $\|[L_1, L_2]\|_{\mathrm{F}} \le \epsilon$ there exist commuting Laplacians $\tilde L_1, \tilde L_2$ with the same sparse structure satisfying $\|L_1 - \tilde L_1\|_{\mathrm{F}}, \|L_2 - \tilde L_2\|_{\mathrm{F}} \le \delta(\epsilon)$.

Stated differently, from Theorem 2.3 we know that the distance to the closest commuting matrices equals the minimum of the joint diagonalization cost (4). We conjecture that if the Laplacians almost commute, then the closest commuting Laplacians are close to the original ones.

A counterexample to Conjecture 3.1 would be a point in Figure 2 with small x-coordinate and large y-coordinate, which is not observed in our experiments. We leave the theoretical justification of this conjecture (or its disproval) for future work.


Figure 2: Numerical evidence that almost-commuting Laplacians are close to commuting Laplacians, obtained on random graphs of different size (shown by color) and connectivity.

4 Numerical optimization

In problem (10), we are looking for new Laplacians $\tilde L_1, \tilde L_2$. Let us denote by $\tilde w_{i,kl}$ the edge weights of the new graphs, $i = 1, 2$ (here, due to symmetry, $\tilde w_{i,kl} = \tilde w_{i,lk}$). We parametrize the new adjacency matrices as

(13)  $\tilde W_i = (\tilde w_{i,kl}), \qquad \tilde w_{i,kl} \ge 0, \quad \tilde w_{i,kl} = 0 \ \text{for} \ (k, l) \notin \tilde E_i.$

Then, we can rewrite (10) as

(14)  $\min_{\tilde w_1, \tilde w_2 \ge 0} \ \|L_1 - \tilde L_1(\tilde w_1)\|_{\mathrm{F}}^2 + \|L_2 - \tilde L_2(\tilde w_2)\|_{\mathrm{F}}^2 \quad \text{s.t.} \quad [\tilde L_1(\tilde w_1), \tilde L_2(\tilde w_2)] = 0.$

In practice, it is more convenient to solve an unconstrained formulation of problem (14),

(15)  $\min_{\tilde w_1, \tilde w_2 \ge 0} \ \|L_1 - \tilde L_1(\tilde w_1)\|_{\mathrm{F}}^2 + \|L_2 - \tilde L_2(\tilde w_2)\|_{\mathrm{F}}^2 + \alpha \big\|[\tilde L_1(\tilde w_1), \tilde L_2(\tilde w_2)]\big\|_{\mathrm{F}}^2,$

where $\alpha$ is a weight parameter.

The solution of problem (15) is carried out using standard first-order optimization techniques, where, to ensure that we obtain a legal Laplacian, the weights are projected onto the nonnegative interval after each iteration (i.e., negative weights are set to zero).

We differentiate the cost function in (15) w.r.t. the elements of the matrices $\tilde W_1, \tilde W_2$, out of which only the relevant elements (corresponding to the edges $\tilde E_1, \tilde E_2$) are used. The derivative of the distance terms in (15) is given by

$\frac{\partial}{\partial \tilde W_i} \|L_i - \tilde L_i\|_{\mathrm{F}}^2 = 2\big(\mathrm{diag}(\tilde L_i - L_i)\,\mathbf{1}^\top - (\tilde L_i - L_i)\big),$

where $\mathrm{diag}(\tilde L_i - L_i)\,\mathbf{1}^\top$ is an $n \times n$ matrix with equal columns (in our notation, $\mathrm{diag}(A)$ is a column vector containing the diagonal elements of $A$). The derivative of the commutator term w.r.t. the elements of the matrix $\tilde W_1$ is given by

$\frac{\partial}{\partial \tilde W_1} \big\|[\tilde L_1, \tilde L_2]\big\|_{\mathrm{F}}^2 = 2\big(\mathrm{diag}(G_1)\,\mathbf{1}^\top - G_1\big), \qquad G_1 = \big[[\tilde L_1, \tilde L_2], \tilde L_2\big],$

where $\mathrm{diag}(G_1)\,\mathbf{1}^\top$ is again a matrix with equal columns. By symmetry considerations, the derivative w.r.t. $\tilde W_2$ is obtained by exchanging the roles of $\tilde L_1$ and $\tilde L_2$, i.e., with $G_2 = \big[[\tilde L_2, \tilde L_1], \tilde L_1\big]$.
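The optimization above can be sketched in NumPy as a simple projected-gradient loop on random toy graphs. This is an illustrative simplification, not the conjugate-gradient implementation used in the experiments; the graph sizes, step size, and $\alpha$ are arbitrary choices:

```python
import numpy as np

def lap(W):
    """Unnormalized Laplacian L = D - W."""
    return np.diag(W.sum(axis=1)) - W

def comm(A, B):
    return A @ B - B @ A

def wgrad(G, mask):
    """Chain rule from a gradient G w.r.t. the full Laplacian matrix to the
    edge weights: weight w_kl contributes G_kk + G_ll - G_kl - G_lk."""
    g = np.diag(G)[:, None] + np.diag(G)[None, :] - G - G.T
    return np.where(mask, g, 0.0)

# Random pair of weighted graphs on a shared edge set
rng = np.random.default_rng(1)
n = 5
mask = np.triu(rng.random((n, n)) < 0.8, 1)
mask = mask | mask.T

def rand_w():
    W = np.triu(np.where(mask, rng.random((n, n)), 0.0), 1)
    return W + W.T

W1, W2 = rand_w(), rand_w()
L1, L2 = lap(W1), lap(W2)

alpha, step = 1.0, 1e-4                     # illustrative choices
Wa, Wb = W1.copy(), W2.copy()
for _ in range(3000):
    A, B = lap(Wa), lap(Wb)
    C = comm(A, B)
    GA = 2 * (A - L1) + alpha * 2 * comm(C, B)   # gradient of (15) w.r.t. A
    GB = 2 * (B - L2) + alpha * 2 * comm(A, C)   # gradient of (15) w.r.t. B
    Wa = np.maximum(Wa - step * wgrad(GA, mask), 0.0)  # project onto w >= 0
    Wb = np.maximum(Wb - step * wgrad(GB, mask), 0.0)

c0 = np.linalg.norm(comm(L1, L2))                # original commutator norm
c1 = np.linalg.norm(comm(lap(Wa), lap(Wb)))      # shrinks during optimization
```

Because the weights are only ever projected back onto the nonnegative orthant and the sparsity mask is fixed, the iterates remain legal Laplacians with the prescribed structure throughout.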

5 Generalizations

Our problem formulation (15) assumes that the two graphs have the same set of vertices $V$ and different edges $E_1, E_2$, having thus Laplacians of equal size $n \times n$. A more general setting is that of two graphs $G_1 = (V_1, E_1)$ and $G_2 = (V_2, E_2)$ with different sets of vertices and edges, $|V_1| = n_1$, $|V_2| = n_2$, and the corresponding Laplacians $L_1, L_2$ of size $n_1 \times n_1$ and $n_2 \times n_2$.

Our CCO problem can be extended to this setting using the notion of functional correspondence [53], expressed as an $n_2 \times n_1$ matrix $T_{12}$ transferring functions defined on $G_1$ to $G_2$, and an $n_1 \times n_2$ matrix $T_{21}$ going the other way around. In this setting, we can define an operator on the space of functions on $G_1$ by the composition $T_{21} \tilde L_2 T_{12}$ (or, equivalently, an operator on the space of functions on $G_2$ as $T_{12} \tilde L_1 T_{21}$). Our problem thus becomes

(16)  $\min_{\tilde L_1, \tilde L_2} \ \|L_1 - \tilde L_1\|_{\mathrm{F}}^2 + \|L_2 - \tilde L_2\|_{\mathrm{F}}^2 + \alpha \big\|[\tilde L_1,\, T_{21} \tilde L_2 T_{12}]\big\|_{\mathrm{F}}^2.$

We call the term $[\tilde L_1, T_{21} \tilde L_2 T_{12}]$ the generalized commutator of $\tilde L_1$ and $\tilde L_2$.

The functional correspondence can be assumed to be given, or found from a set of corresponding vectors as proposed by Ovsjanikov et al. [53]: given a set of functions $f_1, \dots, f_q$ on $G_1$ and corresponding functions $g_1, \dots, g_q$ on $G_2$ (such that $g_i \approx T_{12} f_i$), one can decompose $f_i$ and $g_i$ in the first $k$ eigenvectors of the corresponding Laplacians, yielding a system of equations

(17)  $X \, \Phi_1^\top f_i = \Phi_2^\top g_i, \qquad i = 1, \dots, q,$

where the $k \times k$ matrix $X$ translates Fourier coefficients between the bases $\Phi_1$ and $\Phi_2$. The correspondence can thus be represented as $T_{12} \approx \Phi_2 X \Phi_1^\top$. (A more general setting of finding the matrix $X$ when the order of the corresponding functions is unknown and outliers are present was discussed by Pokrass et al. [58].)
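The least-squares estimation of the correspondence matrix in (17) can be sketched in a few lines of NumPy. In the toy example below, random orthonormal bases stand in for the Laplacian eigenvectors, a random permutation plays the role of the ground-truth map, and all sizes and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, q = 20, 6, 12    # vertices, basis size, number of corresponding functions

# Stand-ins for the first k Laplacian eigenvectors of the two graphs
Phi1 = np.linalg.qr(rng.standard_normal((n, n)))[0][:, :k]
Phi2 = np.linalg.qr(rng.standard_normal((n, n)))[0][:, :k]

pi = rng.permutation(n)               # ground-truth point-to-point map
F1 = rng.standard_normal((n, q))      # functions f_i on G1 (columns)
F2 = F1[pi]                           # corresponding functions g_i on G2

# Fourier coefficients and the least-squares solution of X A = B, as in (17)
A, B = Phi1.T @ F1, Phi2.T @ F2
X = np.linalg.lstsq(A.T, B.T, rcond=None)[0].T
T12 = Phi2 @ X @ Phi1.T               # rank-k functional correspondence
```

The resulting $T_{12}$ is a rank-$k$ approximation of the correspondence; increasing $k$ and the number of function pairs $q$ improves the fit.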

6 Results

In this section, we demonstrate our CCO approach on several synthetic and real datasets coming from shape analysis, manifold learning, and pattern recognition problems. The leitmotif of all the experiments is, given two datasets representing similar objects in somewhat different ways, to reconcile the information of the two modalities producing a single consistent representation.

In all the experiments, we used unnormalized Laplacians (1) constructed with Gaussian weights. Optimization of (15) was performed using conjugate gradients with inexact Armijo linesearch [12], with the weight parameter $\alpha$ chosen in a dataset-dependent range. The edges were selected to preserve the connectivity of the original graphs. The information about the datasets, as well as approximate timing (cost of one evaluation of the cost function and its gradient, measured on a MacBook Air), is summarized in Table 1.


Dataset     n     m1     m2     T (sec)
Caltech     105   791    678    0.0116
Ring        140   149    149    0.0059
Circles     195   443    446    0.0125
Swissroll   400   866    877    0.0766
Man         500   915    922    0.1195
Reuters     600   10122  10669  1.2203

Table 1: Number of vertices (n), degrees of freedom (m1, m2), and computational time T of one evaluation of the cost function and its gradient on different datasets.

Circles: we used two graphs, shaped as four concentric circles containing 195 points and having different connectivity (Figure 3, left). The closest commuting Laplacians were found using the procedure described above, resulting in the graph weights shown in Figure 3 (right): the optimization performs a ‘surgery’, disconnecting the inconsistent connections and producing four connected components.

Ring: We used a ring and a cracked ring sampled at 140 points and connected using 4 nearest neighbors (Figure 4) to visualize the effect of our optimization on the resulting Laplacian eigenvectors. Figure 4 (top) shows the first few eigenvectors of the original Laplacians $L_1, L_2$: their structure differs dramatically. The CCO optimization cuts the connections in the first dataset, making the two rings topologically equivalent. Since the new Laplacians commute, they are jointly diagonalizable, and thus the new sets of eigenvectors are identical ($\tilde\Phi_1 = \tilde\Phi_2$, as shown in Figure 4, bottom).

Figure 5 shows the heat kernels computed on the original graphs (left) and after the optimization (right). For comparison, we also show the ‘joint’ heat kernels obtained using joint diagonalization of the original Laplacians computed with JADE [21]. The latter is not a valid heat operator, as it contains negative, albeit small, values (Figure 5, center).

Human shapes: We used two poses of the human shape from the TOSCA dataset [14], uniformly sampled at 500 points and connected using 5 nearest neighbors. The resulting graphs have different topology (the hands are connected or disconnected, compare Figure 6 top and bottom). We computed the heat diffusion distance according to (3), truncating the sum after the first few terms. Computing it on the original graphs (Figure 6, left) manifests the difference in the graph topology: the distance from the fingers of the left hand to those of the right hand differs dramatically, as in one graph one has to go through the upper part of the body, while in the other one can ‘shortcut’ across the hand connections. Our optimization disconnects these links (Figure 6, right), making the distance behave similarly in both cases. For comparison, we show the result of simultaneous diagonalization using JADE (Figure 6, center), where the distance is computed using joint approximate eigenvectors and average approximate joint eigenvalues as defined in (6).

Swiss rolls: We used two Swiss roll surfaces with slightly different embeddings and geometric noise, sampled at 400 points and connected using 4 nearest neighbors. Because of the different embeddings, the two graphs have different topology (the first one cylinder-like and the second one plane-like, see Figure 7 top left). As a result, the embedding of the two Swiss rolls into the plane using Laplacian eigenmaps differ dramatically (Figure 7, bottom left).

Performing our CCO optimization removes the topological noise making both graphs embeddable into the plane without self-intersections (Figure 7, top right). The resulting eigenmaps have correct topology and are perfectly aligned (bottom, right). For comparison, we show the joint diagonalization result (bottom, center).

Finally, in Figure 7 (bottom, right) we show optimization results obtained using a sparse set of pointwise correspondences from which a smooth functional correspondence was computed according to (17) and used in the generalized commutator in (16).

Caltech: We used the dataset from [31], containing 105 images belonging to 7 image classes (15 images per class) taken from the Caltech-101 dataset. The images were represented using bio-inspired features and PHOW features, used as two different modalities. We constructed the unnormalized Laplacian in each of the modalities using self-tuning weights, and computed the diffusion distance at a fixed scale between all pairs of images.

Figure 8 (left) shows the obtained diffusion distances. The CCO approach allows a significantly better distinction between image classes, which is manifested in higher ROC curves (Figure 8, right).

Multiview clustering: We reproduce the multi-view clustering experiment from [31], wherein we use the previously described Caltech dataset; a subset of the NUS dataset [23] containing images (represented by 64-dimensional color histograms) and their text annotations (represented by 1000-dimensional bags of words); the UCI Digits dataset [2, 47] represented using 76 Fourier coefficients and the 240 pixel averages in windows; and the Reuters dataset [3, 47] with the English and French languages used as two different modalities. The goal of the experiment is to use the data in two modalities to obtain a multi-modal clustering that performs better than each single modality.

We use the spectral clustering technique, consisting of first embedding the data in a low-dimensional space spanned by the first few eigenvectors, and then applying standard K-means. The embedding is generated by the eigenvectors of each of the Laplacians individually (unimodal), by the approximate joint eigenvectors obtained by JADE, and by the eigenvectors of the modified Laplacians produced by our CCO procedure. As a reference, we show the performance of the state-of-the-art multimodal non-negative matrix factorization (MultiNMF) method [47] for multi-view clustering. Table 2 shows the clustering performance of these different methods in terms of accuracy as defined in [7] and normalized mutual information (NMI).


Figure 3: Graphs and adjacency matrices of the original data (left) and CCO (right). Graph weights are shown with edge thickness and gray shades.


Figure 4: Eigenvectors of the original graph Laplacians (first and second rows) and the CCO (third and fourth rows). The eigenvectors of the CCO coincide, confirming that the new Laplacians are jointly diagonalizable. Graph weights are shown with edge thickness and gray shades. Eigenvectors are shown with a red-blue colormap.


Figure 5: Heat kernel at the point shown as a big circle, computed using the original graph Laplacians (left), joint diagonalization (middle), and CCO (right). Graph weights are shown with edge thickness and gray shades. Heat kernel values are shown with a red-blue colormap. JADE produces an invalid heat kernel, which has negative values.


Figure 6: Diffusion distance from the point on the left hand (shown as a big circle), computed using the original graph Laplacians (left), joint diagonalization with JADE (center), and CCO (right).


Figure 7: First row: Swiss rolls with different connectivity before (left) and after (right) optimization. Second row: Laplacian eigenmaps before (leftmost) and after optimization using different numbers of corresponding points (second to fourth columns: 100%, 7.7%, and 2% of correspondences). Color and lines show corresponding points.


Figure 8: Left: diffusion distances computed on the Caltech dataset using the two modalities independently, and their combination with joint diagonalization and the CCO method. Right: ROC curves showing the tradeoff between false positive and true positive rates as a function of a global threshold applied to the distance matrix (higher curves imply better discriminative power of the distance).

Accuracy / NMI

Dataset        Unimodal      MultiNMF      JADE          CCO
Caltech 105    77.1 / 75.3   —             82.9 / 83.0   90.5 / 93.4
NUS 145        82.1 / 76.9   76.7 / 78.4   77.9 / 75.5   86.9 / 84.4
Reuters 600    58.8 / 41.0   53.1 / 40.5   52.8 / 37.5   57.3 / 42.5
Digits 2000    83.4 / 82.2   86.1 / 78.1   84.5 / 84.0   90.5 / 85.7

Table 2: Clustering performance (accuracy / NMI, in %) on four datasets. For the unimodal column, the best-performing modality is shown. Since MultiNMF requires explicit coordinates of the data points, while the Caltech data is represented implicitly as kernels, we could not measure its performance on this dataset.

7 Conclusions

In this paper, we presented a novel approach for a principled construction of multimodal spectral geometry. Our approach is based on the observation that almost commuting matrices are close to commuting matrices, which, in turn, are jointly diagonalizable. We find closest commuting operators (CCOs) to a given pair of Laplacians, and use their eigendecomposition for multimodal spectral geometric constructions. We showed the application of our approach to several problems in pattern recognition and shape analysis.

We see several avenues to extend the work presented in this paper. First, our approach raises an open theoretical question: whether Huaxin Lin’s theorem [45] can be restricted to classes of special matrices, such as Laplacians.

Second, we considered only unnormalized graph Laplacians. Our approach can be extended to other graph Laplacians, as well as to discretizations of the Laplace-Beltrami operator on manifolds, such as the popular cotangent formula [57] for triangular meshes. More broadly, we can consider other Laplace-like operators [37], heat, wave [4], or general diffusion operators [25].

Third, while we used the $L_2$-norms in optimization problem (15), one can think of situations where the use of the sparsity-inducing $L_1$-norm can be advantageous. One such situation is dealing with point-wise topological noise, where one has to modify a few graph weights to perform ‘surgery’ on the edges.

Fourth, we considered only undirected graphs with symmetric Laplacians. An important task is to extend our method to directed graphs or combinations of directed and undirected graphs. From the theoretical standpoint, the latter should be possible, as indicated by the following result that builds on the work of Pearcy and Shields [56] regarding the commutator of two matrices where one is self-adjoint. We can thus find CCOs, one of which is symmetric and one is not.

Theorem 7.1.

If $A$ and $B$ are real matrices and $A$ is symmetric, then there are commuting real matrices $A'$ and $B'$ with $A'$ symmetric such that $\|A - A'\|$ and $\|B - B'\|$ tend to zero as $\|[A, B]\| \to 0$.

Proof.

The reader may check that the construction by Pearcy and Shields [56] produces real matrices that commute when applied to real almost-commuting matrices.

Now suppose $A = A^\top$. By the real version of Theorem 1 of [56], there exist commuting $A'$ and $B'$ with $A'$ symmetric, whose distances $\|A - A'\|$ and $\|B - B'\|$ are controlled by $\|[A, B]\|$. Therefore, the claim follows. $\square$

8 Acknowledgement

We are grateful to Davide Eynard for assistance with the clustering experiments. This research was supported by the ERC Starting Grant No. 307047 (COMET).

References

  • [1] X. Alameda-Pineda, V. Khalidov, R. Horaud, and F. Forbes. Finding audio-visual events in informal social gatherings. In Proc. ICMI, 2011.
  • [2] E. Alpaydin and C. Kaynak. Cascading classifiers. Kybernetika, 34(4):369–374, 1998.
  • [3] M. R. Amini, N. Usunier, and C. Goutte. Learning from multiple partially observed views-an application to multilingual text categorization. In Proc. NIPS, 2009.
  • [4] M. Aubry, U. Schlickewei, and D. Cremers. The wave kernel signature: a quantum mechanical approach to shape analysis. In Proc. Dynamic Shape Capture and Analysis, 2011.
  • [5] M. Bansal and K. Daniilidis. Joint spectral correspondence for disparate image matching. In Proc. CVPR, 2013.
  • [6] R. Bekkerman, R. El-Yaniv, and A. McCallum. Multi-way distributional clustering via pairwise interactions. In Proc. ICML, 2005.
  • [7] R. Bekkerman and J. Jeon. Multi-modal clustering for multimedia collections. In Proc. CVPR, 2007.
  • [8] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15:1373–1396, 2002.
  • [9] M. Belkin and P. Niyogi. Towards a theoretical foundation for Laplacian-based manifold methods. Journal of Computer and System Sciences, 74(8):1289–1308, 2008.
  • [10] P. Bérard, G. Besson, and S. Gallot. Embedding Riemannian manifolds by their heat kernel. Geometric & Functional Analysis, 4(4):373–398, 1994.
  • [11] M. Berger, P. Gauduchon, and E. Mazet. Le spectre d’une variété riemannienne. Springer, Berlin, 1971.
  • [12] D. P. Bertsekas. Nonlinear programming. Athena Scientific, 1999.
  • [13] A. M. Bronstein, M. M. Bronstein, L. J. Guibas, and M. Ovsjanikov. Shape google: Geometric words and expressions for invariant shape retrieval. Trans. Graphics, 30:1:1–1:20, 2011.
  • [14] A. M. Bronstein, M. M. Bronstein, and R. Kimmel. Numerical geometry of non-rigid shapes. Springer, 2008.
  • [15] M. M. Bronstein and A. M. Bronstein. Shape recognition with spectral distances. Trans. PAMI, 33(5):1065–1071, 2011.
  • [16] M. M. Bronstein, A. M. Bronstein, F. Michel, and N. Paragios. Data fusion through cross-modality metric learning using similarity-sensitive hashing. In Proc. CVPR, 2010.
  • [17] M. M. Bronstein and I. Kokkinos. Scale-invariant heat kernel signatures for non-rigid shape recognition. In Proc. CVPR, 2010.
  • [18] A. Bunse-Gerstner, R. Byers, and V. Mehrmann. Numerical methods for simultaneous diagonalization. SIAM J. Matrix Analysis Appl., 14(4):927–949, 1993.
  • [19] X. Cai, F. Nie, H. Huang, and F. Kamangar. Heterogeneous image feature integration via multi-modal spectral clustering. In Proc. CVPR, 2011.
  • [20] J.-F. Cardoso and A. Souloumiac. Blind beamforming for non-Gaussian signals. IEE Proceedings F - Radar and Signal Processing, 140(6):362–370, 1993.
  • [21] J.-F. Cardoso and A. Souloumiac. Jacobi angles for simultaneous diagonalization. SIAM J. Mat. Analysis Appl., 17:161–164, 1996.
  • [22] R. L. Causey. On closest normal matrices. PhD thesis, Department of Computer Science, Stanford University, 1964.
  • [23] T.-S. Chua, J. Tang, R. Hong, H. Li, Z. Luo, and Y.-T. Zheng. NUS-WIDE: A real-world web image database from National University of Singapore. In Proc. CIVR, 2009.
  • [24] F. R. K. Chung. Spectral graph theory. AMS, 1997.
  • [25] R. R. Coifman and S. Lafon. Diffusion maps. Applied and Computational Harmonic Analysis, 21:5–30, 2006.
  • [26] R. R. Coifman, S. Lafon, A. B. Lee, M. Maggioni, F. Warner, and S. Zucker. Geometric diffusions as a tool for harmonic analysis and structure definition of data: Diffusion maps. PNAS, 102(21):7426–7431, 2005.
  • [27] V.R. de Sa. Spectral clustering with two views. In Proc. ICML Workshop on Learning with Multiple Views, 2005.
  • [28] C. H. Q. Ding, X. He, H. Zha, M. Gu, and H. D. Simon. A min-max cut algorithm for graph partitioning and data clustering. In Proc. ICDM, 2001.
  • [29] X. Dong, P. Frossard, P. Vandergheynst, and N. Nefedov. Clustering on multi-layer graphs via subspace analysis on Grassmann manifolds. ArXiv:1303.2221, 2013.
  • [30] A. Dubrovina and R. Kimmel. Matching shapes by eigendecomposition of the Laplace-Beltrami operator. In Proc. 3DPVT, 2010.
  • [31] D. Eynard, K. Glashoff, M.M. Bronstein, and A.M. Bronstein. Multimodal diffusion geometry by joint diagonalization of Laplacians. ArXiv:1209.2295, 2012.
  • [32] R. Gabriel. Matrizen mit maximaler Diagonale bei unitärer Similarität. Journal für die reine und angewandte Mathematik, 307/308:31–52, 1979.
  • [33] K. Gebal, J. Andreas Bærentzen, H. Aanæs, and R. Larsen. Shape analysis using the auto diffusion function. Computer Graphics Forum, 28(5):1405–1413, 2009.
  • [34] K. Glashoff and M. M. Bronstein. Almost-commuting matrices are almost jointly diagonalizable. ArXiv:1305.2135, 2013.
  • [35] J. Ham, D. Lee, and L. Saul. Semisupervised alignment of manifolds. In Proc. Conf. Uncertainty in Artificial Intelligence, 2005.
  • [36] N. J. Higham. Matrix nearness problems and applications. In M. J. C. Gover and S. Barnett, editors, Applications of Matrix Theory. Oxford University Press, 1989.
  • [37] K. Hildebrandt, C. Schulz, C. von Tycowicz, and K. Polthier. Modal shape analysis beyond Laplacian. Computer Aided Geometric Design, 29(5):204–218, 2012.
  • [38] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, 1990.
  • [39] Z. Karni and C. Gotsman. Spectral compression of mesh geometry. In Proc. Computer Graphics and Interactive Techniques, 2000.
  • [40] E. Kidron, Y. Y. Schechner, and M. Elad. Pixels that sound. In Proc. CVPR, 2005.
  • [41] A. Kovnatsky, M. M. Bronstein, A. M. Bronstein, K. Glashoff, and R. Kimmel. Coupled quasi-harmonic bases. Computer Graphics Forum, 32:439–448, 2013.
  • [42] A. Kumar, P. Rai, and H. Daumé III. Co-regularized multi-view spectral clustering. In Proc. NIPS, 2011.
  • [43] B. Lévy. Laplace-Beltrami eigenfunctions towards an algorithm that “understands” geometry. In Proc. SMI, 2006.
  • [44] B. Lévy and R. H. Zhang. Spectral geometry processing. In SIGGRAPH Asia Course Notes, 2009.
  • [45] H. Lin. Almost commuting selfadjoint matrices and applications. Fields Institute Communications, 13:193–233, 1997.
  • [46] R. Litman, A. M. Bronstein, and M. M. Bronstein. Diffusion-geometric maximally stable component detection in deformable shapes. Computers & Graphics, 35(3):549 – 560, 2011.
  • [47] J. Liu, C. Wang, J. Gao, and J. Han. Multi-view clustering via joint nonnegative matrix factorization. In Proc. SDM, 2013.
  • [48] T. A. Loring and A. P. W. Sørensen. Almost commuting self-adjoint matrices - the real and self-dual cases. ArXiv:1012.3494, 2010.
  • [49] C. Ma and C.-H. Lee. Unsupervised anchor shot detection using multi-modal spectral clustering. In Proc. ICASSP, 2008.
  • [50] B. McFee and G. R. G. Lanckriet. Learning multi-modal similarity. JMLR, 12:491–523, 2011.
  • [51] B. Nadler, S. Lafon, R. R. Coifman, and I. G. Kevrekidis. Diffusion maps, spectral clustering and eigenfunctions of Fokker-Planck operators. In Proc. NIPS, 2005.
  • [52] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In Proc. NIPS, 2001.
  • [53] M. Ovsjanikov, M. Ben-Chen, J. Solomon, A. Butscher, and L. J. Guibas. Functional maps: A flexible representation of maps between shapes. Trans. Graphics, 31(4), 2012.
  • [54] M. Ovsjanikov, Q. Mérigot, F. Mémoli, and L. J. Guibas. One point isometric matching with the heat kernel. Computer Graphics Forum, 29(5):1555–1564, 2010.
  • [55] M. Ovsjanikov, J. Sun, and L. J. Guibas. Global intrinsic symmetries of shapes. Computer Graphics Forum, 27(5):1341–1348, July 2008.
  • [56] C. Pearcy and A. Shields. Almost commuting matrices. J. Functional Analysis, 33(3):332–338, 1979.
  • [57] U. Pinkall and K. Polthier. Computing discrete minimal surfaces and their conjugates. Experimental Mathematics, 2:15–36, 1993.
  • [58] J. Pokrass, A. M. Bronstein, M. M. Bronstein, P. Sprechmann, and G. Sapiro. Sparse modeling of intrinsic correspondences. Computer Graphics Forum, 32:459–468, 2013.
  • [59] N. Rasiwasia, J. Costa Pereira, E. Coviello, G. Doyle, G. R. G. Lanckriet, R. Levy, and N. Vasconcelos. A new approach to cross-modal multimedia retrieval. In Proc. ICM, 2010.
  • [60] G. Rong, Y. Cao, and X. Guo. Spectral mesh deformation. The Visual Computer, 24(7):787–796, 2008.
  • [61] M. Rørdam and P. Friis. Almost commuting self-adjoint matrices - a short proof of Huaxin Lin’s theorem. Journal für die reine und angewandte Mathematik, 479:121–132, 1996.
  • [62] A. Sharma, A. Kumar, H. Daume, and D. W. Jacobs. Generalized multiview analysis: A discriminative latent space. In Proc. CVPR, 2012.
  • [63] J. Shi and J. Malik. Normalized cuts and image segmentation. Trans. PAMI, 22:888–905, 2000.
  • [64] V. Sindhwani, P. Niyogi, and M. Belkin. A co-regularization approach to semi-supervised learning with multiple views. In Proc. ICML Workshop on Learning with Multiple Views, 2005.
  • [65] J. Sun, M. Ovsjanikov, and L. J. Guibas. A concise and provably informative multi-scale signature based on heat diffusion. Computer Graphics Forum, 28(5):1383–1392, 2009.
  • [66] W. Tang, Z. Lu, and I. S. Dhillon. Clustering with multiple graphs. In Proc. Data Mining, 2009.
  • [67] C. Wang and S. Mahadevan. Manifold alignment using Procrustes analysis. In Proc. ICML, 2008.
  • [68] C. Wang and S. Mahadevan. A general framework for manifold alignment. In Proc. Symp. Manifold Learning and its Applications, 2009.
  • [69] Y. Weiss, A. Torralba, and R. Fergus. Spectral hashing. In Proc. NIPS, 2008.
  • [70] J. Weston, S. Bengio, and N. Usunier. Large scale image annotation: learning to rank with joint word-image embeddings. Machine Learning, 81(1):21–35, 2010.
  • [71] A. Yeredor. Non-orthogonal joint diagonalization in the least-squares sense with application in blind source separation. Trans. Signal Processing, 50(7):1545–1553, 2002.
  • [72] A. Ziehe. Blind Source Separation based on Joint Diagonalization of Matrices with Applications in Biomedical Signal Processing. Dissertation, University of Potsdam, 2005.