Weighted Spectral Embedding of Graphs

09/28/2018 ∙ by Thomas Bonald, et al.

We present a novel spectral embedding of graphs that incorporates weights assigned to the nodes, quantifying their relative importance. This spectral embedding is based on the first eigenvectors of some properly normalized version of the Laplacian. We prove that these eigenvectors correspond to the configurations of lowest energy of an equivalent physical system, either mechanical or electrical, in which the weight of each node can be interpreted as its mass or its capacitance, respectively. Experiments on a real dataset illustrate the impact of weighting on the embedding.

1 Introduction

Many types of data can be represented as graphs. Edges may correspond to actual links in the data (e.g., users connected by some social network) or to levels of similarity induced from the data (e.g., users having liked a large common set of movies). The resulting graph is typically sparse in the sense that the number of edges is much lower than the total number of node pairs, which makes the data hard to exploit.

A standard approach to the analysis of sparse graphs consists in embedding the graph in some vectorial space of low dimension, typically much smaller than the number of nodes [15, 19, 4]. Each node is represented by some vector in the embedding space so that close nodes in the graph (linked either directly or through many short paths in the graph) tend to be represented by close vectors in terms of the Euclidean distance. Standard learning techniques can then be applied to this dense vectorial representation of the graph to recommend new links, rank nodes or find clusters of nodes, for instance [7, 3].

The most popular technique for graph embedding is based on the spectral decomposition of the graph Laplacian, each dimension of the embedding space corresponding to an eigenvector of the Laplacian matrix [5, 13, 2, 11, 17, 12]. This spectral embedding can be interpreted in terms of a random walk in the graph [10, 14]. In the full embedding space (including all eigenvectors), the square distance between two vectors is proportional to the mean commute time of the random walk between the two corresponding nodes: close nodes in the graph tend to be close in the embedding space. Viewing this random walk as the path followed by electrons in the corresponding electrical network, with nodes linked by resistors, these square distances can also be interpreted as the effective resistances between pairs of nodes [16].

In this paper, we address the issue of the spectral embedding of graphs including node weights. We shall see that existing spectral embedding techniques implicitly consider either unit weights or so-called internal node weights, depending on the Laplacian used in the spectral decomposition. We here consider the node weights as some additional information representing the relative importance of the nodes, independently of the graph structure. The weight of a node may reflect either its value, its multiplicity if each node represents a category of users or items, or the reliability of the associated data, for instance. Surprisingly, this notion of weight is common for vectorial data (see, e.g., the weighted version of the k-means clustering algorithm [8, 6, 9]) but not for graph data, where weights are typically assigned to edges but not to nodes, apart from those induced from the edges.

Our main contribution is a spectral embedding of the graph, which we refer to as the weighted spectral embedding, that incorporates the node weights. It is based on the spectral decomposition of some properly normalized version of the Laplacian. We prove that, when all eigenvectors are used, this embedding is equivalent to the regular spectral embedding shifted so that the origin is the center of mass of the embedding. In practice, only the first eigenvectors are included to get an embedding in low dimension. We show that these eigenvectors can be interpreted as the levels of lowest energy of a physical system, either a mechanical system where nodes are linked by springs (the edges) and have different masses (the node weights), or an electrical network where nodes are linked by resistors (the edges) and connected to the ground by capacitors with different capacitances (the node weights). In particular, the weighted spectral embedding cannot be derived from the regular spectral embedding in low dimension. Experiments confirm that these embeddings differ significantly in practice.

The weighted spectral embedding can also be interpreted in terms of a random walk in the graph, where nodes are visited in proportion to their weights. In the full embedding space (including all eigenvectors), the square distances between vectors are proportional to the mean commute times of this random walk between the corresponding nodes, as for unit weights. In fact, these mean commute times depend on the weights through their sum only, which explains why the pairwise distances of the embedding are independent of the weights, up to some multiplicative constant. This property is somewhat counter-intuitive as the mean hitting time of one node from another does depend on the weights. We shall explain this apparent paradox by some symmetry property of each equivalent physical system.

The rest of the paper is organized as follows. We first introduce the model and the notations. We then present the regular spectral embedding and its interpretation in terms of a random walk in the graph. The weighted version of this random walk is introduced in Section 4. Section 5 presents the weighted spectral embedding and extends the results known for the regular spectral embedding. The analogies with a mechanical system and an electrical network are described in Sections 6 and 7, respectively. Experiments on real data are presented in Section 8. Section 9 concludes the paper.

2 Model

We consider a connected, undirected graph of $n$ nodes, without self-loops. We denote by $A$ its adjacency matrix. In the absence of edge weights, this is a binary, symmetric matrix, with $A_{ij} = 1$ if and only if there is an edge between nodes $i$ and $j$. In the presence of edge weights, $A_{ij}$ is the weight of the edge between nodes $i$ and $j$, if any, and is equal to 0 otherwise.

Let $d = A1$, where $1$ is the $n$-dimensional vector of ones. The components $d_1,\ldots,d_n$ of the vector $d$ are equal to the actual node degrees in the absence of edge weights ($A$ is a binary matrix) and to the total weights of incident edges otherwise ($A$ is a non-negative matrix). We refer to $d_1,\ldots,d_n$ as the internal node weights.

Nodes are assigned positive weights $w_1,\ldots,w_n$ corresponding to their relative importances. These node weights are external parameters, independent of the graph. We denote by $w$ the vector $(w_1,\ldots,w_n)^T$.
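
To fix ideas, here is a minimal sketch in Python with NumPy (an illustrative toy example, not taken from the paper's code) of the objects defined above: the adjacency matrix $A$, the internal node weights $d = A1$, and an external weight vector $w$.

```python
import numpy as np

# Toy graph on 4 nodes (an illustrative example, not from the paper).
A = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])
n = A.shape[0]
d = A @ np.ones(n)   # internal node weights (here, the degrees)
w = np.ones(n)       # external node weights, e.g. unit weights

assert np.allclose(A, A.T) and np.all(d > 0)  # undirected, no isolated node
```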

3 Spectral embedding

We first present the regular spectral embedding, without taking the node weights into account. Let $D = \mathrm{diag}(d_1,\ldots,d_n)$ be the diagonal matrix of internal node weights. The Laplacian matrix is defined by

$$L = D - A.$$

This is a symmetric matrix. It is positive semi-definite on observing that:

$$\forall v \in \mathbb{R}^n,\quad v^T L v = \frac{1}{2}\sum_{i,j} A_{ij}(v_i - v_j)^2 \ge 0.$$

The spectral theorem yields

$$L = U \Lambda U^T, \qquad (1)$$

where $\Lambda = \mathrm{diag}(\lambda_1,\ldots,\lambda_n)$ is the diagonal matrix of eigenvalues of $L$, with $0 = \lambda_1 < \lambda_2 \le \cdots \le \lambda_n$, and $U = (u_1,\ldots,u_n)$ is the matrix of corresponding eigenvectors, with $u_1 = 1/\sqrt{n}$ and $U^T U = I$.

Spectral embedding.

Let $X = \sqrt{\Lambda^+}\, U^T$, where $\Lambda^+$ denotes the pseudo-inverse of $\Lambda$. The columns $x_1,\ldots,x_n$ of the matrix $X$ define an embedding of the nodes in $\mathbb{R}^n$, each dimension corresponding to an eigenvector of the Laplacian. Observe that the first component of each vector $x_i$ is equal to 0, reflecting the fact that the first eigenvector is not informative. Since $X1 = 0$, the centroid of the vectors $x_1,\ldots,x_n$ is the origin:

$$\sum_{i=1}^n x_i = 0. \qquad (2)$$

The Gram matrix of $X$ is the pseudo-inverse of the Laplacian:

$$X^T X = U \Lambda^+ U^T = L^+.$$
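
The following sketch (assuming the toy objects above; the function name spectral_embedding is ours, not the paper's) computes this embedding and checks the two identities: centroid at the origin and Gram matrix equal to $L^+$.

```python
import numpy as np

def spectral_embedding(A):
    """Regular spectral embedding X = sqrt(pinv(Lambda)) U^T (a sketch)."""
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A       # Laplacian L = D - A
    lam, U = np.linalg.eigh(L)           # eigenvalues in increasing order
    inv = np.zeros(n)
    inv[1:] = 1.0 / lam[1:]              # pseudo-inverse of Lambda (lambda_1 = 0)
    return np.sqrt(inv)[:, None] * U.T   # columns x_1, ..., x_n

X = spectral_embedding(A)
assert np.allclose(X.sum(axis=1), 0)     # centroid at the origin, as in (2)
L = np.diag(A.sum(axis=1)) - A
assert np.allclose(X.T @ X, np.linalg.pinv(L))   # Gram matrix = pinv(L)
```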

Random walk.

Consider a random walk in the graph where the transition rate from node $i$ to node $j$ is $A_{ij}$. Specifically, the walker stays at node $i$ an exponential time with parameter $d_i$, then moves from node $i$ to node $j$ with probability $A_{ij}/d_i$. This defines a continuous-time Markov chain with generator matrix $-L$ and uniform stationary distribution. The sequence of nodes visited by the random walk forms a discrete-time Markov chain with transition matrix $P = D^{-1}A$.

Hitting times.

Let $H_{ij}$ be the mean hitting time of node $j$ from node $i$. Observe that $H_{ii} = 0$. The following results, proved in [10, 14], will be extended to the weighted random walk in Section 4. We denote by $e_i$ the $n$-dimensional unit vector on component $i$.

Proposition 1

The matrix $H = (H_{ij})$ satisfies:

$$L H = 1 1^T - nI. \qquad (3)$$

Proposition 2

The solution to equation (3) with zero diagonal entries satisfies:

$$\forall i,j,\quad H_{ij} = n\left(\|x_j\|^2 - x_i^T x_j\right). \qquad (4)$$

Since $2\, x_i^T x_j = \|x_i\|^2 + \|x_j\|^2 - \|x_i - x_j\|^2$, we obtain:

$$H_{ij} = \frac{n}{2}\left(\|x_i - x_j\|^2 + \|x_j\|^2 - \|x_i\|^2\right).$$

We deduce the mean commute time between nodes $i$ and $j$:

$$H_{ij} + H_{ji} = n \|x_i - x_j\|^2.$$

In view of (2), the mean hitting time of node $j$ from steady state is:

$$\bar H_j = \frac{1}{n}\sum_{i=1}^n H_{ij} = n \|x_j\|^2.$$
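
As a numerical sanity check (a sketch under the same toy setup; the function name hitting_times is ours), the mean hitting times can also be obtained by a direct linear solve, one target node at a time, and compared with formula (4):

```python
import numpy as np

def hitting_times(A):
    """Mean hitting times H[i, j] of the rate-A_ij walk, by direct solve:
    for each target j, (L h)_i = 1 for i != j, with h_j = 0."""
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A
    H = np.zeros((n, n))
    for j in range(n):
        idx = [i for i in range(n) if i != j]
        H[idx, j] = np.linalg.solve(L[np.ix_(idx, idx)], np.ones(n - 1))
    return H

# Formula (4): H_ij = n (||x_j||^2 - x_i^T x_j), with G = X^T X.
G = X.T @ X
assert np.allclose(hitting_times(A), A.shape[0] * (np.diag(G)[None, :] - G))
```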

Cosine similarity.

Observe that:

$$H_{ij} + H_{ji} = \bar H_i + \bar H_j - 2n\, x_i^T x_j.$$

Let

$$\cos(i,j) = \frac{x_i^T x_j}{\|x_i\|\,\|x_j\|}.$$

This is the cosine-similarity between nodes $i$ and $j$. We have:

$$n\, x_i^T x_j = \sqrt{\bar H_i \bar H_j}\cos(i,j),$$

that is

$$\frac{H_{ij} + H_{ji}}{\bar H_i + \bar H_j} = 1 - \frac{2\sqrt{\bar H_i \bar H_j}}{\bar H_i + \bar H_j}\cos(i,j). \qquad (5)$$

Thus the cosine-similarity between any two vectors can be interpreted in terms of the mean commute time between the corresponding nodes relative to their mean hitting times.

4 Random walk with weights

In this section, we introduce a modified version of the random walk that takes the node weights into account. Recall that all weights are assumed positive. We denote by $\bar w = w_1 + \cdots + w_n$ the total node weight.

Random walk.

We modify the random walk as follows: the transition rate from node $i$ to node $j$ is now $A_{ij}/w_i$. Thus the walker stays at node $i$ an exponential time with parameter $d_i/w_i$, then moves from node $i$ to node $j$ with probability $A_{ij}/d_i$. Observe that the previously considered random walk corresponds to unit weights. We get a continuous-time Markov chain with generator matrix $-W^{-1}L$, with $W = \mathrm{diag}(w_1,\ldots,w_n)$, and stationary distribution $w/\bar w$.

Hitting times.

Let $H_{ij}$ be the mean hitting time of node $j$ from node $i$. Observe that $H_{ii} = 0$.

Proposition 3

The matrix $H$ satisfies:

$$L H = w 1^T - \bar w I. \qquad (6)$$

Proof. We have $(LH)_{jj} = -\sum_k A_{jk} H_{kj}$, while for all $i \ne j$,

$$(LH)_{ij} = d_i H_{ij} - \sum_k A_{ik} H_{kj} = w_i.$$

Thus the matrix $LH - w 1^T$ is diagonal. Since $1^T L = 0$, each column of $LH$ sums to zero, so that this diagonal matrix is $-\bar w I$.

Proposition 4

The solution to equation (6) with zero diagonal entries satisfies:

$$\forall i,j,\quad H_{ij} = \bar w\left(\|x_j\|^2 - x_i^T x_j\right) + (x_i - x_j)^T X w.$$

Proof. The matrix $L^+(w 1^T - \bar w I)$ satisfies equation (6), so the sought solution is of the form:

$$H = L^+(w 1^T - \bar w I) + 1 c^T,$$

for some $n$-dimensional vector $c$. Thus,

$$H_{ij} = x_i^T X w - \bar w\, x_i^T x_j + c_j.$$

Since $H_{jj} = 0$, we get:

$$c_j = \bar w \|x_j\|^2 - x_j^T X w.$$

Let $\bar x$ be the center of mass of the vectors $x_1,\ldots,x_n$:

$$\bar x = \frac{1}{\bar w}\sum_{i=1}^n w_i x_i = \frac{X w}{\bar w}.$$

Since $X w = \bar w\, \bar x$, we obtain:

$$\forall i,j,\quad H_{ij} = \bar w\, (x_j - \bar x)^T \big((x_j - \bar x) - (x_i - \bar x)\big). \qquad (7)$$

Thus the mean hitting times are the same as without node weights, up to the multiplicative constant $\bar w / n$ and the shift of the origin to the center of mass $\bar x$ of the vectors $x_1,\ldots,x_n$.

In particular, the mean commute time between any nodes $i$ and $j$ depends on the weights through their sum only:

$$H_{ij} + H_{ji} = \bar w \|x_i - x_j\|^2.$$

The mean hitting time of node $j$ from steady state is:

$$\bar H_j = \sum_{i=1}^n \frac{w_i}{\bar w} H_{ij} = \bar w \|x_j - \bar x\|^2.$$
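
These identities are easy to check numerically. In the sketch below (same toy setup as before; the function name weighted_hitting_times is ours), the hitting times of the weighted walk are computed by a direct solve and compared with formula (7):

```python
import numpy as np

def weighted_hitting_times(A, w):
    """Hitting times of the weighted walk (rates A_ij / w_i), by direct solve:
    for each target j, (L h)_i = w_i for i != j, with h_j = 0."""
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A
    H = np.zeros((n, n))
    for j in range(n):
        idx = [i for i in range(n) if i != j]
        H[idx, j] = np.linalg.solve(L[np.ix_(idx, idx)], w[idx])
    return H

# Formula (7), written with the centered embedding Xc = X - x_bar 1^T:
w = np.array([1., 2., 3., 4.])            # arbitrary positive weights
w_bar = w.sum()
x_bar = (X * w).sum(axis=1) / w_bar       # center of mass of the x_i
Xc = X - x_bar[:, None]
Gc = Xc.T @ Xc
assert np.allclose(weighted_hitting_times(A, w),
                   w_bar * (np.diag(Gc)[None, :] - Gc))
```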

Cosine similarity.

Let

$$\cos_w(i,j) = \frac{(x_i - \bar x)^T (x_j - \bar x)}{\|x_i - \bar x\|\,\|x_j - \bar x\|}.$$

This is the cosine-similarity between nodes $i$ and $j$, relative to the center of mass $\bar x$. We obtain as above:

$$\frac{H_{ij} + H_{ji}}{\bar H_i + \bar H_j} = 1 - \frac{2\sqrt{\bar H_i \bar H_j}}{\bar H_i + \bar H_j}\cos_w(i,j).$$

In particular, the relative mean commute times, as defined by (5), depend on the weights through the center of mass $\bar x$ of the vectors $x_1,\ldots,x_n$ only.

5 Weighted spectral embedding

We now introduce the weighted spectral embedding. The generator matrix $-W^{-1}L$ of the weighted random walk is not symmetric in general. Thus we consider the following normalized version of the Laplacian, we refer to as the weighted Laplacian:

$$L_w = W^{-1/2} L W^{-1/2}.$$

This matrix is symmetric, positive semi-definite. In the particular case where $w = d$, this is known as the symmetric normalized Laplacian.

The spectral theorem yields

$$L_w = V M V^T, \qquad (8)$$

where $M = \mathrm{diag}(\mu_1,\ldots,\mu_n)$ is the diagonal matrix of eigenvalues of $L_w$, with $0 = \mu_1 < \mu_2 \le \cdots \le \mu_n$, and $V = (v_1,\ldots,v_n)$ is the matrix of corresponding eigenvectors, with $v_1 = W^{1/2} 1 / \sqrt{\bar w}$ and $V^T V = I$.

We have the following explicit expression for the pseudo-inverse of the weighted Laplacian:

Proposition 5

The pseudo-inverse of $L_w$ is:

$$L_w^+ = W^{1/2}\left(I - \frac{1 w^T}{\bar w}\right) L^+ \left(I - \frac{w 1^T}{\bar w}\right) W^{1/2}. \qquad (9)$$

Proof. Let $B$ be the matrix defined by the right-hand side of (9). Using the fact that $L 1 = L^+ 1 = 0$ and $L L^+ = L^+ L = I - \frac{1 1^T}{n}$, we get:

$$L_w B = B L_w = I - v_1 v_1^T,$$

the projection onto the orthogonal complement of the null space of $L_w$, which proves that $B$ is the pseudo-inverse of $L_w$.

Generalized eigenvalue problem.

Let $S = W^{-1/2} V$. We have:

$$L S = W^{1/2} L_w V = W S M,$$

with $S^T W S = I$. Thus the columns $s_1,\ldots,s_n$ of the matrix $S$ are solutions to the generalized eigenvalue problem:

$$L s = \mu\, W s, \qquad (10)$$

with corresponding eigenvalues $\mu_1,\ldots,\mu_n$. The first column, associated to the eigenvalue $\mu_1 = 0$, satisfies $s_1 = 1/\sqrt{\bar w}$, while the others satisfy $w^T s_k = 0$.
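
In practice, the pairs $(\mu_k, s_k)$ can be computed either from the symmetric matrix $L_w$ or directly from the generalized problem (10). A sketch (toy setup as before; the function name weighted_spectrum is ours):

```python
import numpy as np
from scipy.linalg import eigh

def weighted_spectrum(A, w):
    """Solutions of L s = mu W s via the symmetric weighted Laplacian."""
    sqrt_w = np.sqrt(w)
    L = np.diag(A.sum(axis=1)) - A
    L_w = L / np.outer(sqrt_w, sqrt_w)   # W^{-1/2} L W^{-1/2}
    mu, V = np.linalg.eigh(L_w)          # 0 = mu_1 < mu_2 <= ... <= mu_n
    S = V / sqrt_w[:, None]              # S = W^{-1/2} V
    return mu, S

mu, S = weighted_spectrum(A, w)
# scipy solves the generalized eigenvalue problem (10) directly:
mu2, _ = eigh(np.diag(A.sum(axis=1)) - A, np.diag(w))
assert np.allclose(mu, mu2)
assert np.allclose(S.T @ np.diag(w) @ S, np.eye(len(w)))  # S^T W S = I
```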

Spectral embedding.

Let $Y = \sqrt{M^+}\, S^T = \sqrt{M^+}\, V^T W^{-1/2}$. The columns $y_1,\ldots,y_n$ of the matrix $Y$ define an embedding of the nodes, we refer to as the weighted spectral embedding. As for the regular spectral embedding, the first component of each vector $y_i$ is equal to 0. Since $Y w = 0$, the center of mass of the vectors $y_1,\ldots,y_n$ lies at the origin:

$$\sum_{i=1}^n w_i y_i = 0.$$

The Gram matrix of $Y$ is:

$$Y^T Y = W^{-1/2} V M^+ V^T W^{-1/2} = W^{-1/2} L_w^+ W^{-1/2}.$$

In view of Proposition 5,

$$Y^T Y = \left(I - \frac{1 w^T}{\bar w}\right) L^+ \left(I - \frac{w 1^T}{\bar w}\right) = (X - \bar x 1^T)^T (X - \bar x 1^T).$$

Thus the considered weighted spectral embedding is equivalent to the regular spectral embedding, shifted so that the origin is the center of mass $\bar x$. In particular, the distances between vectors in the embedding space can be interpreted in terms of mean hitting times of the random walk, as shown in Section 4.
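
The equivalence can be verified numerically. A sketch (assuming the toy objects and spectral_embedding above; the function name weighted_spectral_embedding is ours):

```python
import numpy as np

def weighted_spectral_embedding(A, w):
    """Weighted spectral embedding Y = sqrt(pinv(M)) V^T W^{-1/2} (a sketch)."""
    n = A.shape[0]
    sqrt_w = np.sqrt(w)
    L = np.diag(A.sum(axis=1)) - A
    mu, V = np.linalg.eigh(L / np.outer(sqrt_w, sqrt_w))
    inv = np.zeros(n)
    inv[1:] = 1.0 / mu[1:]               # pseudo-inverse of M (mu_1 = 0)
    return np.sqrt(inv)[:, None] * (V.T / sqrt_w[None, :])

Y = weighted_spectral_embedding(A, w)
assert np.allclose(Y @ w, 0)             # weighted center of mass at the origin
# Same Gram matrix as the shifted regular embedding, hence the same distances:
assert np.allclose(Y.T @ Y, Xc.T @ Xc)
```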

6 A mechanical system

Consider the mechanical system consisting of $n$ point particles of respective masses $w_1,\ldots,w_n$ sliding along a bar without friction. Particles $i$ and $j$ are linked by a spring satisfying Hooke's law with stiffness $A_{ij}$.

Eigenmodes.

Assume that the bar has a uniform circular motion with angular velocity $\omega$ around some fixed axis. We denote by $y_1,\ldots,y_n$ the locations of the particles along the bar, with the axis taken as the origin. By Newton's second law of motion, the system is in equilibrium if and only if

$$\forall i,\quad \sum_j A_{ij}(y_i - y_j) = w_i \omega^2 y_i,$$

that is

$$L y = \omega^2 W y.$$

If $y \ne 0$, then $y$ is a solution to the generalized eigenvalue problem (10) with eigenvalue $\mu = \omega^2$. We refer to these equilibrium states as the eigenmodes of the mechanical system.

The first eigenmode, $y \propto 1$ for $\omega = 0$, corresponds to the absence of motion. The other eigenmodes give the possible angular velocities of the system, equal to the square roots of the positive eigenvalues of $L_w$. Any such eigenmode satisfies $w^T y = 0$, meaning that the center of mass of the system is at the origin.

Potential energy.

The mechanical potential energy of the system in any state $y$ is:

$$E_P = \frac{1}{2}\sum_{i<j} A_{ij}(y_i - y_j)^2 = \frac{1}{2}\, y^T L y.$$

If $y$ is an eigenmode of the mechanical system with angular velocity $\omega$, this is equal to:

$$E_P = \frac{1}{2}\omega^2\, y^T W y.$$

As $J = y^T W y = \sum_i w_i y_i^2$ is the moment of inertia of the system and $\omega^2$ is the square of the angular velocity, this corresponds to the angular kinetic energy $\frac{1}{2} J \omega^2$. For a unit moment of inertia $J = 1$, we obtain $E_P = \mu/2$, so that the eigenvalues of the weighted Laplacian can be viewed as (twice) the levels of energy of the eigenmodes, for unit moments of inertia. The weighted spectral embedding reduced to the first eigenvectors of $L_w$ can then be interpreted as that induced by the eigenmodes of lowest potential energy of the mechanical system, for unit moments of inertia.

Dirichlet problem.

For any $i \ne j$, assume the positions of the point particles $i$ and $j$ are set to 1 and 0, respectively. Let $y$ be the vector of positions at equilibrium, in the absence of rotation. By Newton's first law of motion,

$$L y = \eta (e_i - e_j),$$

for some constant $\eta$ equal to the force exerted on both $i$ and $j$ (in opposite directions). This force does not depend on the masses of the point particles.

The solution to this Dirichlet problem is:

$$y = \eta L^+ (e_i - e_j) + c 1,$$

for some constant $c$. Using the fact that $y_i = 1$ and $y_j = 0$, we get:

$$\eta = \frac{1}{\|x_i - x_j\|^2},$$

and

$$\forall k,\quad y_k = \frac{(x_k - x_j)^T (x_i - x_j)}{\|x_i - x_j\|^2}.$$

Observe in particular that $y_k \in [0,1]$ for all nodes $k$. The mean hitting time of node $j$ from node $i$ by the random walk, given by (7), satisfies:

$$H_{ij} = \frac{\bar w\, \bar y}{\eta},$$

where $\bar y$ is the center of mass of the system:

$$\bar y = \frac{1}{\bar w}\sum_k w_k y_k.$$

By symmetry, exchanging the roles of $i$ and $j$ yields the positions $1 - y$ and the same force $\eta$, so we have:

$$H_{ji} = \frac{\bar w\,(1 - \bar y)}{\eta}.$$

We deduce the mean commute time between nodes $i$ and $j$:

$$H_{ij} + H_{ji} = \frac{\bar w}{\eta} = \bar w \|x_i - x_j\|^2.$$

This symmetry in the solutions to the Dirichlet problem explains why, unlike the mean hitting times, the mean commute times depend on the weights through their sum only. The mean commute time between two nodes of the graph is inversely proportional to the force exerted between the two corresponding point particles of the mechanical system, which is independent of the particle masses.
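
A sketch of this Dirichlet problem (toy setup as before; the function name dirichlet_force is ours), checking that the commute time equals $\bar w / \eta$ whatever the masses:

```python
import numpy as np

def dirichlet_force(A, i, j):
    """Equilibrium positions y (y_i = 1, y_j = 0) and the force eta,
    from L y = eta (e_i - e_j) and the pseudo-inverse of L (a sketch)."""
    n = A.shape[0]
    Lp = np.linalg.pinv(np.diag(A.sum(axis=1)) - A)
    e = np.zeros(n)
    e[i], e[j] = 1.0, -1.0
    eta = 1.0 / (e @ Lp @ e)          # = 1 / ||x_i - x_j||^2
    y = eta * (Lp @ e)
    return y - y[j], eta              # fix the additive constant: y_j = 0

y, eta = dirichlet_force(A, 0, 3)
H = weighted_hitting_times(A, w)
assert np.allclose(H[0, 3] + H[3, 0], w.sum() / eta)   # independent of masses
```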

7 An electrical network

Consider the electrical network induced by the graph, with a resistor of conductance $A_{ij}$ between nodes $i$ and $j$ and a capacitor of capacitance $w_i$ between node $i$ and the ground. The movement of electric charges in the network is precisely described by the random walk defined in Section 4.

Eigenmodes.

Let $y(t)$ be the vector of electric potentials at time $t$, starting from some arbitrary initial state $y(0)$. By Ohm's law,

$$\forall i,\quad w_i \frac{dy_i}{dt}(t) = \sum_j A_{ij}\big(y_j(t) - y_i(t)\big),$$

that is

$$W \frac{dy}{dt}(t) = -L y(t).$$

The solution to this differential equation is:

$$y(t) = e^{-W^{-1} L t}\, y(0).$$

If the initial state $y(0)$ is a solution to the generalized eigenvalue problem (10) with eigenvalue $\mu$, we obtain:

$$y(t) = e^{-\mu t}\, y(0).$$

Since the eigenvectors of $L_w$ form a basis of $\mathbb{R}^n$, any initial state can be written as a linear combination of solutions to the generalized eigenvalue problem, we refer to as the eigenmodes of the electrical network. The first eigenmode, $y \propto 1$ for $\mu = 0$, corresponds to a static system, without discharge. The other eigenmodes give the possible discharge rates of the system, equal to the positive eigenvalues of $L_w$. Any such eigenmode satisfies $w^T y = 0$, meaning that the total charge accumulated in the capacitors is null.
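
A sketch of the discharge dynamics (toy setup as before, reusing weighted_spectrum from Section 5), checking that an eigenmode decays exponentially at rate $\mu$:

```python
import numpy as np
from scipy.linalg import expm

def discharge(A, w, y0, t):
    """Potentials y(t) = exp(-W^{-1} L t) y(0) of the RC network (a sketch)."""
    L = np.diag(A.sum(axis=1)) - A
    return expm(-t * (L / w[:, None])) @ y0   # W^{-1} L = diag(1/w) L

mu, S = weighted_spectrum(A, w)
y0, t = S[:, 1], 0.5                          # second eigenmode
assert np.allclose(discharge(A, w, y0, t), np.exp(-mu[1] * t) * y0)
```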

Energy dissipation.

The energy dissipated per unit of time by the resistors for the vector of electric potentials $y$ is:

$$E_D = \sum_{i<j} A_{ij}(y_i - y_j)^2 = y^T L y.$$

If $y$ is an eigenmode of the electrical network with eigenvalue $\mu$, this is equal to

$$E_D = \mu\, y^T W y.$$

Observing that $\frac{1}{2}\, y^T W y$ is the electrical potential energy of the system, we deduce that $2\mu$ can be viewed as the energy dissipated by the resistors for a unit electric potential energy. In particular, the weighted spectral embedding reduced to the first eigenvectors of $L_w$ can be interpreted as that induced by the eigenmodes of lowest energy dissipation of the electrical network, for unit electric potential energies.

Dirichlet problem.

For any $i \ne j$, assume the electric potentials of nodes $i$ and $j$ are set to 1 and 0, respectively. Let $y$ be the vector of electric potentials at equilibrium. We have:

$$L y = \eta (e_i - e_j),$$

for some constant $\eta$ equal to the current flowing from node $i$ to node $j$. This Dirichlet problem is the same as for the mechanical system, and thus the same results apply. In particular, we get $y_k \in [0,1]$ for all nodes $k$, and the current from $i$ to $j$ is:

$$\eta = \frac{1}{\|x_i - x_j\|^2}.$$

This is the intensity of the current generated by a unit potential difference between $i$ and $j$. Its inverse is known as the effective resistance between $i$ and $j$. The mean hitting time of node $j$ from node $i$ by the random walk satisfies:

$$H_{ij} = \frac{Q}{\eta},$$

where $Q$ is the total charge accumulated in the capacitors:

$$Q = \sum_k w_k y_k = \bar w\, \bar y.$$

Observe that this expression for the mean hitting time can be seen as a consequence of Little's law, as $\eta$ is the intensity of positive charges entering the network and $Q$ the mean number of positive charges accumulated in the network.

By symmetry, the total charge accumulated in the capacitors when the respective electric potentials of $i$ and $j$ are set to 0 and 1 is:

$$Q' = \bar w - Q,$$

so that:

$$H_{ji} = \frac{\bar w - Q}{\eta}.$$

We deduce the mean commute time between nodes $i$ and $j$:

$$H_{ij} + H_{ji} = \frac{\bar w}{\eta} = \bar w \|x_i - x_j\|^2.$$

Again, the symmetry in the solutions to the Dirichlet problem explains why the mean commute times depend on the weights through their sum only. The mean commute time between two nodes of the graph is proportional to the effective resistance between the two corresponding points of the electrical network, which is independent of the capacitances.

8 Experiments

We now illustrate the results on a real dataset. The considered graph is that formed by the articles of Wikipedia for Schools (https://schools-wikipedia.org), a selection of articles of Wikipedia for children [18]. Specifically, we have extracted the largest connected component of this graph, considered as undirected. The resulting graph has 4,589 nodes (the articles) and 106,644 edges (the hyperlinks between these articles). The graph is undirected and unweighted. Both the dataset and the Python code used for the experiments are available online (https://github.com/tbonald/spectral_embedding).

Global clustering.

We first apply k-means clustering (the algorithm is k-means++ [1] with 100 random initializations) to the following embeddings of the graph, each in dimension $k$:

  • the regular spectral embedding, $X$, restricted to its rows $2,\ldots,k+1$, say $\hat X$;

  • the shifted spectral embedding, $\hat X - \bar x 1^T$, where $\bar x = \hat X w / \bar w$ and $1$ is the $n$-dimensional vector of ones;

  • the weighted spectral embedding, $Y$, restricted to its rows $2,\ldots,k+1$, say $\hat Y$.

Here the vector of weights $w$ is taken equal to $d$, the vector of internal node weights. In particular, the weighted spectral embedding is that following from the spectral decomposition of the normalized Laplacian, $D^{-1/2} L D^{-1/2}$. Observe that the first row of each embedding $X$ and $Y$ is equal to zero and thus discarded.

Each embedding is normalized so that nodes are represented by $k$-dimensional unit vectors. This is equivalent to considering the distance induced by the cosine similarity in the original embedding. In particular, the regular spectral embedding and the shifted spectral embedding give different clusterings.
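
A sketch of this pipeline (using scikit-learn's KMeans; the helper name cluster is ours and the dimension k is left as a parameter; the actual experimental code is available in the repository linked above):

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster(embedding, k, n_clusters=20):
    """Keep rows 2, ..., k+1, normalize nodes to unit vectors, run k-means."""
    Z = embedding[1:k + 1]                # drop the non-informative first row
    Z = Z / np.linalg.norm(Z, axis=0)     # unit vector for each node (column)
    return KMeans(n_clusters=n_clusters, n_init=100).fit_predict(Z.T)

# e.g., weighted spectral embedding with internal node weights w = d:
# labels = cluster(weighted_spectral_embedding(A, A.sum(axis=1)), k=...)
```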

Tables 1 and 2 show the top articles of the clusters found for each embedding, when the number of clusters is set to 20, together with the size of each cluster. The selected articles for each cluster correspond to the nodes of highest degree among those closest to the center of mass of the cluster in the embedding space, with unit weights for the regular spectral embedding and internal node weights for the shifted spectral embedding and the weighted spectral embedding.

Size Top articles
1113 Australia, Canada, North America, 20th century
326 UK, England, London, Scotland, Ireland
250 US, New York City, BBC, 21st century, Los Angeles
227 India, Japan, China, United Nations
218 Earth, Sun, Physics, Hydrogen, Moon, Astronomy
200 Mammal, Fish, Horse, Cattle, Extinction
200 France, Italy, Spain, Latin, Netherlands
198 Water, Agriculture, Coal, River, Antarctica
197 Germany, World War II, Russia, World War I
187 Mexico, Brazil, Atlantic Ocean, Argentina
185 Human, Philosophy, Slavery, Religion, Democracy
184 Plant, Rice, Fruit, Sugar, Wine, Maize, Cotton
177 Gold, Iron, Oxygen, Copper, Electron, Color
170 Egypt, Turkey, Israel, Islam, Iran, Middle East
159 English, 19th century, William Shakespeare, Novel
158 Africa, South Africa, Time zone, Portugal
157 Europe, Scientific classification, Animal, Asia
141 Washington, D.C., President of the United States
72 Dinosaur, Fossil, Reptile, Cretaceous, Jurassic
70 Paris, Art, Architecture, Painting, Hist. of painting
Table 1: Global clustering of Wikipedia for Schools by weighted spectral embedding.

The first observation is that the three embeddings are very different. The choice of the Laplacian (regular or normalized) matters, because the dimension $k$ is much smaller than the number of nodes (recall that the shifted spectral embedding and the weighted spectral embedding are equivalent when $k = n$). The second observation is that the weighted spectral embedding seems to better capture the structure of the graph. Apart from the first cluster, which is significantly larger than the others and thus may contain articles on very different topics, the other clusters look meaningful. The regular spectral embedding puts together articles as different as Meteorology, British English and Number, while the shifted embedding groups together articles about Sanskrit, Light and The Simpsons, which should arguably appear in different clusters.

Size Top articles
1666 Germany, India, Africa, Spain, Russia, Asia
526 Chordate, Binomial nomenclature, Bird, Mammal
508 Earth, Water, Iron, Sun, Oxygen, Copper, Color
439 Scotland, Ireland, Wales, Manchester, Royal Navy
300 New York City, Los Angeles, California, Jamaica
246 North America, German language, Rome, Kenya
195 English, Japan, Italy, 19th century
131 Language, Mass media, Library, Engineering, DVD
103 Jazz, Piano, Guitar, Music of the United States
91 Microsoft, Linux, Microsoft Windows, Algorithm
67 British monarchy, Bristol, Oxford, Paul of Tarsus
58 Tropical cyclone, Bermuda, Hurricane Andrew
49 Mathematics, Symmetry, Geometry, Algebra, Euclid
48 Fatty acid, List of vegetable oils, Biodiesel
47 Train, Canadian Pacific Railway, Denver, Colorado
38 Eye, Retina, Animation, Glasses, Lego, Retinol
27 Finance, Supply and demand, Stock, Accountancy
19 Meteorology, British English, Number, Vowel
17 Newcastle upon Tyne, Isambard Kingdom Brunel
14 Wikipedia, Jimmy Wales, Wikimedia Foundation
Size Top articles
452 Chordate, Binomial nomenclature, Bird, Mammal
446 UK, Europe, France, English language, Japan
369 Germany, Spain, Soviet Union, Sweden, Poland
369 Earth, Water, Iron, Sun, Oxygen, Copper, Color
353 India, Africa, Russia, New Zealand, River, Snow
328 Egypt, Greece, Middle Ages, Roman Empire, Nazism
286 United States, New York City, Petroleum, Finland
271 British Empire, 17th century, Winston Churchill
238 Time zone, Turkey, Portugal, Israel, Currency
217 Rice, Fruit, Bacteria, Wine, DNA, Flower, Banana
178 British monarchy, Bristol, Charles II of England
175 Internet, Hebrew language, Language, Mass media
156 Physics, Ancient Egypt, Science, Astronomy, Time
125 Atlantic Ocean, Morocco, Algeria, Barbados
124 Opera, Folk music, Elvis Presley, Bob Dylan
117 Agriculture, Ocean, Geology, Ecology, Pollution
112 Christianity, Switzerland, Judaism, Bible, Deity
108 South America, Pacific Ocean, Tourism, Colombia
94 Film, Sanskrit, Light, The Simpsons, Eye, Shark
71 Mining, Food, Geography, Engineering, Transport
Table 2: Global clustering of Wikipedia for Schools by regular (top) and shifted (bottom) spectral embeddings.

Selective clustering.

We are now interested in the clustering of some selection of the articles. We take the 667 articles in the People category. A naive approach consists in considering the subgraph induced by these nodes. But this is not satisfactory, as the similarity between articles in the People category depends strongly on their connections through articles that are not in this category. Indeed, the subgraph of nodes in the People category is not even connected. Our approach consists in assigning a multiplicative factor of 10 to the weights of the articles in the People category. Specifically, we set $w_i = 10\, d_i$ if article $i$ belongs to the People category, and $w_i = d_i$ otherwise. We compare the three previous embeddings for this vector of weights, as sketched below.
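
A sketch of this selective weighting (the boolean mask people and the helper name are hypothetical; the experimental code is in the repository):

```python
import numpy as np

def selective_weights(d, mask, factor=10.0):
    """Multiply the internal weights of the selected nodes by `factor`."""
    return np.where(mask, factor * d, d)

# people = ...  boolean mask over the nodes (assumed given)
# w = selective_weights(A.sum(axis=1), people)
# labels = cluster(weighted_spectral_embedding(A, w), k=...)
```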

Tables 3 and 4 show the top articles in the People category for the 20 clusters found with each embedding. The selected articles for each cluster correspond to the nodes of highest degree in the People category. We also indicate the number of articles of the People category in each cluster. Observe that the last cluster obtained with the regular spectral embedding has no article in the People category. Again, the impact of weighting is significant. The weighted spectral embedding, which is adapted to the People category, seems to better capture the structure of the graph. Except for the first cluster, which is much larger than the others, the clusters tend to group together articles of the People category that are closely related.

Count Top articles
228 William Shakespeare, Pope John Paul II
60 Jorge Luis Borges, Rabindranath Tagore
55 Elizabeth II of the UK, Winston Churchill, Tony Blair
49 Charles II of England, Elizabeth I of England
48 Ronald Reagan, Bill Clinton, Franklin Roosevelt
25 Alexander the Great, Genghis Khan, Muhammad
24 Adolf Hitler, Joseph Stalin, Vladimir Lenin
23 Napoleon I, Charlemagne, Louis XIV of France
22 Jesus, Homer, Julius Caesar, Virgil
22 Aristotle, Plato, Charles Darwin, Karl Marx
21 George W. Bush, Condoleezza Rice, Nelson Mandela
18 Elvis Presley, Paul McCartney, Bob Dylan
15 Isaac Newton, Galileo Galilei, Ptolemy
13 Albert Einstein, Gottfried Leibniz, Bertrand Russell
10 Pete Sampras, Boris Becker, Tim Henman
9 William Thomson, 1st Baron Kelvin, Humphry Davy
8 Carolus Linnaeus, James Cook, Gerald Durrell
8 Bill Gates, Richard Stallman, Ralph Nader
5 Floyd Mayweather Jr., Lucy, Lady Duff-Gordon
4 Vasco da Gama, Idit Harel Caperton, Reza Shah
Table 3: Selective clustering of Wikipedia for Schools by weighted spectral embedding.

Count Top articles
196 Carolus Linnaeus, Adolf Hitler, Jesus, Aristotle
95 Christina Aguilera, Andy Warhol, Auguste Rodin
90 Julius Caesar, Martin Luther King, Jr., Euclid
75 Tony Blair, Victoria of the UK, Charles II of England
45 Ronald Reagan, Franklin D. Roosevelt, Gerald Ford
28 Abraham Lincoln, George Washington, John Adams
25 Igor Stravinsky, Johann Sebastian Bach
24 Albert Einstein, Gottfried Leibniz, Isaac Newton
22 George W. Bush, Napoleon I of France, Plato
10 Pete Sampras, Boris Becker, Tim Henman, Pat Cash
10 Fanny Blankers-Koen, Rosa Parks, Donald Bradman
9 Alexander the Great, Frederick II of Prussia
9 Mahatma Gandhi, Buddha, Muhammad Ali Jinnah
8 Columba, Edwin of Northumbria, Macbeth of Scotland
5 Bill Clinton, Richard Stallman, Linus Torvalds
5 Elizabeth II of the UK, David Beckham, Wayne Rooney
5 Archbishop of Canterbury, Harold Wilson
4 Leonardo da Vinci, Neil Armstrong, Wright brothers
2 George III of the UK, Matthew Brettingham
0
Count Top articles
182 Adolf Hitler, William Shakespeare
113 Elizabeth II of the UK, George W. Bush
69 Tony Blair, Victoria of the UK, Elizabeth I of England
46 Ronald Reagan, Franklin Roosevelt, Jimmy Carter
31 Henry James, Igor Stravinsky, Ezra Pound
26 Paul McCartney, Bob Dylan, Edgar Allan Poe
25 Jesus, Charlemagne, Genghis Khan, Homer, Ptolemy
24 Albert Einstein, Gottfried Leibniz, Isaac Newton
23 Charles Darwin, Galileo Galilei, Nikola Tesla
19 Plato, John Locke, Max Weber, Friedrich Nietzsche
18 Rabindranath Tagore, Mahatma Gandhi, Buddha
17 Margaret Thatcher, David Cameron, George VI
16 Condoleezza Rice, Nelson Mandela, Gerald Ford
12 Dwight D. Eisenhower, Ernest Hemingway
11 Aristotle, Alexander the Great, Fred. II of Prussia
10 Pete Sampras, Boris Becker, Tim Henman
8 Muhammad, Norman Borlaug, Osama bin Laden
7 Bill Clinton, Bill Gates, Richard Stallman
5 Leonardo da Vinci, Neil Armstrong
5 Carolus Linnaeus, Christopher Columbus, Paul Kane
Table 4: Selective clustering of Wikipedia for Schools by regular (top) and shifted (bottom) spectral embeddings.

9 Conclusion

We have proposed a novel spectral embedding of graphs that takes node weights into account. We have proved that the dimensions of this embedding correspond to different configurations of an equivalent physical system, either mechanical or electrical, with node weights corresponding to masses or capacitances, respectively.

A practically interesting consequence of our work lies in the choice of the Laplacian when there is no information on the node weights other than the graph itself. Through weighted spectral embedding, we see that the two versions of the Laplacian, regular and normalized, correspond in fact to two notions of the relative importance of nodes, given respectively by unit weights and internal node weights.

References

  • [1] D. Arthur and S. Vassilvitskii. k-means++: The advantages of careful seeding. In Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete algorithms, pages 1027–1035. Society for Industrial and Applied Mathematics, 2007.
  • [2] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural computation, 15(6):1373–1396, 2003.
  • [3] M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, and P. Vandergheynst. Geometric deep learning: going beyond Euclidean data. IEEE Signal Processing Magazine, 34(4):18–42, 2017.
  • [4] H. Cai, V. W. Zheng, and K. Chang. A comprehensive survey of graph embedding: problems, techniques and applications. IEEE Transactions on Knowledge and Data Engineering, 2018.
  • [5] F. R. Chung. Spectral graph theory. American Mathematical Soc., 1997.
  • [6] I. S. Dhillon, Y. Guan, and B. Kulis. Kernel k-means: spectral clustering and normalized cuts. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 551–556. ACM, 2004.
  • [7] F. Fouss, A. Pirotte, J.-M. Renders, and M. Saerens. Random-walk computation of similarities between nodes of a graph with application to collaborative recommendation. IEEE Transactions on knowledge and data engineering, 19(3):355–369, 2007.
  • [8] Z. Huang. Extensions to the k-means algorithm for clustering large data sets with categorical values. Data mining and knowledge discovery, 2(3):283–304, 1998.
  • [9] K. Kerdprasop, N. Kerdprasop, and P. Sattayatham. Weighted k-means for density-biased clustering. In International Conference on Data Warehousing and Knowledge Discovery, pages 488–497. Springer, 2005.
  • [10] L. Lovász. Random walks on graphs. Combinatorics, Paul Erdos is eighty, 2:1–46, 1993.
  • [11] U. von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416, Dec. 2007.
  • [12] M. Newman. Spectral methods for community detection and graph partitioning. Phys. Rev. E, 88:042822, Oct 2013.
  • [13] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In Advances in Neural Information Processing Systems, pages 849–856, 2002.
  • [14] H. Qiu and E. R. Hancock. Clustering and embedding using commute times. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(11), 2007.
  • [15] A. Robles-Kelly and E. R. Hancock. A Riemannian approach to graph embedding. Pattern Recognition, 40(3):1042–1056, 2007.
  • [16] P. Snell and P. Doyle. Random walks and electric networks. Free Software Foundation, 2000.
  • [17] D. A. Spielman. Spectral graph theory and its applications. In Foundations of Computer Science, 2007. FOCS’07. 48th Annual IEEE Symposium on, pages 29–38. IEEE, 2007.
  • [18] R. West, J. Pineau, and D. Precup. Wikispeedia: An online game for inferring semantic distances between concepts. In IJCAI, 2009.
  • [19] S. Yan, D. Xu, B. Zhang, H.-J. Zhang, Q. Yang, and S. Lin. Graph embedding and extensions: A general framework for dimensionality reduction. IEEE transactions on pattern analysis and machine intelligence, 29(1):40–51, 2007.