Regularized Laplacian Estimation and Fast Eigenvector Approximation

10/08/2011 ∙ by Patrick O. Perry, et al. ∙ New York University and Stanford University

Recently, Mahoney and Orecchia demonstrated that popular diffusion-based procedures to compute a quick approximation to the first nontrivial eigenvector of a data graph Laplacian exactly solve certain regularized Semi-Definite Programs (SDPs). In this paper, we extend that result by providing a statistical interpretation of their approximation procedure. Our interpretation will be analogous to the manner in which ℓ_2-regularized or ℓ_1-regularized ℓ_2-regression (often called Ridge regression and Lasso regression, respectively) can be interpreted in terms of a Gaussian prior or a Laplace prior, respectively, on the coefficient vector of the regression problem. Our framework will imply that the solutions to the Mahoney-Orecchia regularized SDP can be interpreted as regularized estimates of the pseudoinverse of the graph Laplacian. Conversely, it will imply that the solution to this regularized estimation problem can be computed very quickly by running, e.g., the fast diffusion-based PageRank procedure for computing an approximation to the first nontrivial eigenvector of the graph Laplacian. Empirical results are also provided to illustrate the manner in which approximate eigenvector computation implicitly performs statistical regularization, relative to running the corresponding exact algorithm.


1 Introduction

Approximation algorithms and heuristic approximations are commonly used to speed up the running time of algorithms in machine learning and data analysis. In some cases, the outputs of these approximate procedures are “better” than the output of the more expensive exact algorithms, in the sense that they lead to more robust results or more useful results for the downstream practitioner. Recently, Mahoney and Orecchia formalized these ideas in the context of computing the first nontrivial eigenvector of a graph Laplacian [1]. Recall that, given a graph $G$ on $n$ nodes, or equivalently its Laplacian matrix $L$, the first nontrivial eigenvector of the Laplacian exactly optimizes the Rayleigh quotient, subject to the usual constraints. This optimization problem can equivalently be expressed as a vector optimization program with the objective function $x^T L x$, where $x$ is an $n$-dimensional vector, or as a Semi-Definite Program (SDP) with objective function $\mathrm{Tr}(L X)$, where $X$ is an $n \times n$ symmetric positive semi-definite matrix. This first nontrivial vector is, of course, of widespread interest in applications due to its usefulness for graph partitioning, image segmentation, data clustering, semi-supervised learning, etc. [2, 3, 4, 5, 6, 7].
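As a quick illustration of the equivalence between the vector and SDP formulations, the sketch below checks that the vector objective $x^T L x$ equals the SDP objective $\mathrm{Tr}(L X)$ at the rank-one matrix $X = x x^T$, and that the Rayleigh quotient of any unit vector is bounded below by the smallest eigenvalue; the random PSD matrix used as a stand-in for a Laplacian is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
L = A @ A.T                                    # a symmetric PSD stand-in for a Laplacian

x = rng.normal(size=5)
x /= np.linalg.norm(x)                         # a unit vector
X = np.outer(x, x)                             # the corresponding rank-one PSD matrix, Tr(X) = 1
print(np.isclose(x @ L @ x, np.trace(L @ X)))  # vector and SDP objectives agree

# The Rayleigh quotient of any unit vector is bounded below by the smallest eigenvalue.
lam_min = np.linalg.eigvalsh(L)[0]
us = rng.normal(size=(1000, 5))
us /= np.linalg.norm(us, axis=1, keepdims=True)
print(lam_min <= min(u @ L @ u for u in us))
```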

In this context, Mahoney and Orecchia asked the question: do popular diffusion-based procedures—such as running the Heat Kernel, performing a Lazy Random Walk, or computing the PageRank function—used to compute a quick approximation to the first nontrivial eigenvector of $L$ exactly solve some other regularized version of the Rayleigh quotient objective function? Understanding this algorithmic-statistical tradeoff is clearly of interest if one is interested in very large-scale applications, where performing statistical analysis to derive an objective and then calling a black-box solver to optimize that objective exactly might be too expensive. Mahoney and Orecchia answered the above question in the affirmative, with the interesting twist that the regularization is on the SDP formulation rather than on the usual vector optimization problem. That is, these three diffusion-based procedures exactly optimize a regularized SDP with objective function $\mathrm{Tr}(L X) + \frac{1}{\eta} F(X)$, for some regularization function $F(\cdot)$ to be described below, subject to the usual constraints.

In this paper, we extend the Mahoney-Orecchia result by providing a statistical interpretation of their approximation procedure. Our interpretation will be analogous to the manner in which ℓ_2-regularized or ℓ_1-regularized ℓ_2-regression (often called Ridge regression and Lasso regression, respectively) can be interpreted in terms of a Gaussian prior or a Laplace prior, respectively, on the coefficient vector of the regression problem. In more detail, we will set up a sampling model, whereby the graph Laplacian is interpreted as an observation from a random process; we will posit the existence of a “population Laplacian” driving the random process; and we will then define an estimation problem: find the inverse of the population Laplacian. We will show that the maximum a posteriori probability (MAP) estimate of the inverse of the population Laplacian leads to a regularized SDP, where the objective function is $\mathrm{Tr}(L X) + \frac{1}{\eta} F(X)$ and where the role of the penalty function $F(\cdot)$ is to encode prior assumptions about the population Laplacian. In addition, we will show that when $F(\cdot)$ is the log-determinant function, then the MAP estimate leads to the Mahoney-Orecchia regularized SDP corresponding to running the PageRank heuristic. Said another way, the solutions to the Mahoney-Orecchia regularized SDP can be interpreted as regularized estimates of the pseudoinverse of the graph Laplacian. Moreover, by Mahoney and Orecchia’s main result, the solution to this regularized SDP can be computed very quickly—rather than solving the SDP with a black-box solver, and rather than explicitly computing the pseudoinverse of the Laplacian, one can simply run the fast diffusion-based PageRank heuristic for computing an approximation to the first nontrivial eigenvector of the Laplacian $L$.

The next section describes some background. Section 3 then describes a statistical framework for graph estimation; and Section 4 describes prior assumptions that can be made on the population Laplacian. These two sections will shed light on the computational implications associated with these prior assumptions; but more importantly they will shed light on the implicit prior assumptions associated with making certain decisions to speed up computations. Then, Section 5 will provide an empirical evaluation, and Section 6 will provide a brief conclusion. Additional discussion is available in the Appendix.

2 Background on Laplacians and diffusion-based procedures

A weighted symmetric graph $G$ is defined by a vertex set $V$, an edge set $E \subseteq V \times V$, and a weight function $w : E \to \mathbb{R}_{+}$, where $w$ is assumed to be symmetric (i.e., $w(u,v) = w(v,u)$). In this case, one can construct a matrix, $L_0$, called the combinatorial Laplacian of $G$:

$$ L_0(u,v) = \begin{cases} -w(u,v) & \text{if } u \neq v, \\ d(u) - w(u,u) & \text{if } u = v, \end{cases} $$

where $d(u) = \sum_{v} w(u,v)$ is called the degree of $u$. By construction, $L_0$ is positive semidefinite. Note that the all-ones vector, often denoted $\mathbb{1}$, is an eigenvector of $L_0$ with eigenvalue zero, i.e., $L_0 \mathbb{1} = 0$. For this reason, $\mathbb{1}$ is often called the trivial eigenvector of $L_0$. Letting $D$ be a diagonal matrix with $D(v,v) = d(v)$, one can also define a normalized version of the Laplacian: $L = D^{-1/2} L_0 D^{-1/2}$. Unless explicitly stated otherwise, when we refer to the Laplacian of a graph, we will mean the normalized Laplacian.
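A short sketch of these two constructions, assuming the graph is given as a dense weighted adjacency matrix; the example matrix and helper names are illustrative.

```python
import numpy as np

def combinatorial_laplacian(W):
    """L0 = D - W, where D is the diagonal matrix of (weighted) degrees."""
    return np.diag(W.sum(axis=1)) - W

def normalized_laplacian(W):
    """L = D^{-1/2} L0 D^{-1/2}."""
    d = W.sum(axis=1)
    return np.diag(d**-0.5) @ combinatorial_laplacian(W) @ np.diag(d**-0.5)

# A small weighted graph on 4 nodes, used only for illustration.
W = np.array([[0.0, 2.0, 1.0, 0.0],
              [2.0, 0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0, 3.0],
              [0.0, 0.0, 3.0, 0.0]])

L0 = combinatorial_laplacian(W)
print(np.allclose(L0 @ np.ones(4), 0.0))              # the all-ones vector is trivial for L0
L = normalized_laplacian(W)
print(np.allclose(L @ (W.sum(axis=1) ** 0.5), 0.0))   # D^{1/2} 1 is trivial for the normalized Laplacian
```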

In many situations, e.g., to perform spectral graph partitioning, one is interested in computing the first nontrivial eigenvector of a Laplacian. Typically, this vector is computed “exactly” by calling a black-box solver; but it could also be approximated with an iteration-based method (such as the Power Method or Lanczos Method) or by running a random walk-based or diffusion-based method to the asymptotic state. These random walk-based or diffusion-based methods assign positive and negative “charge” to the nodes, and then they let the distribution of charge evolve according to dynamics derived from the graph structure. Three canonical evolution dynamics are the following:

Heat Kernel.

Here, the charge evolves according to the heat equation $\frac{\partial x_t}{\partial t} = -L x_t$. Thus, the vector of charges evolves as the heat-kernel matrix $H_t = \exp(-t L)$, where $t \geq 0$ is a time parameter, times an input seed distribution vector.

PageRank.

Here, the charge at a node evolves by either moving to a neighbor of the current node or teleporting to a random node. More formally, the vector of charges evolves as the PageRank matrix

$$ R_{\gamma} = \gamma \left( I - (1-\gamma) M \right)^{-1}, \qquad (1) $$

where $M$ is the natural random walk transition matrix associated with the graph and where $\gamma \in (0,1]$ is the so-called teleportation parameter, times an input seed vector.

Lazy Random Walk.

Here, the charge either stays at the current node or moves to a neighbor. Thus, if $M$ is the natural random walk transition matrix associated with the graph, then the vector of charges evolves as some power of $W_{\alpha} = \alpha I + (1-\alpha) M$, where $\alpha \in [0,1]$ represents the “holding probability,” times an input seed vector.

In each of these cases, there is a parameter ($t$, $\gamma$, and the number of steps of the Lazy Random Walk, respectively) that controls the “aggressiveness” of the dynamics and thus how quickly the diffusive process equilibrates; and there is an input “seed” distribution vector. Thus, e.g., if one is interested in global spectral graph partitioning, then this seed vector could be a vector with entries drawn from $\{-1, +1\}$ uniformly at random, while if one is interested in local spectral graph partitioning [8, 9, 10, 11], then this vector could be the indicator vector of a small “seed set” of nodes. See Appendix A for a brief discussion of local and global spectral partitioning in this context.
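As a concrete illustration of the three dynamics, the sketch below applies each of them to a seed vector on a small graph; the graph, the parameter values, and the helper names are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

def diffusion_operators(W, t=1.0, gamma=0.15, alpha=0.5, k=5):
    """Heat Kernel, PageRank, and Lazy Random Walk operators for a weighted adjacency matrix W."""
    d = W.sum(axis=1)
    n = len(d)
    M = W / d[:, None]                                    # natural random walk transition matrix (row-stochastic)
    L = np.eye(n) - np.diag(d**-0.5) @ W @ np.diag(d**-0.5)  # normalized Laplacian
    H_t = expm(-t * L)                                    # Heat Kernel, exp(-t L)
    R_gamma = gamma * np.linalg.inv(np.eye(n) - (1 - gamma) * M)   # PageRank, Eqn. (1)
    W_alpha = np.linalg.matrix_power(alpha * np.eye(n) + (1 - alpha) * M, k)  # Lazy Random Walk, k steps
    return H_t, R_gamma, W_alpha

# Small example: a 6-node path graph and a seed concentrated on node 0.
W = np.diag(np.ones(5), 1) + np.diag(np.ones(5), -1)
seed = np.zeros(6); seed[0] = 1.0
for name, Op in zip(["heat kernel", "PageRank", "lazy walk"], diffusion_operators(W)):
    print(name, np.round(Op @ seed, 3))
```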

Mahoney and Orecchia showed that these three dynamics arise as solutions to SDPs of the form

$$ \begin{array}{ll} \text{minimize} & \mathrm{Tr}(L X) + \frac{1}{\eta} F(X) \\ \text{subject to} & X \succeq 0, \quad \mathrm{Tr}(X) = 1, \end{array} \qquad (2) $$
where $F(\cdot)$ is a penalty function (shown to be the generalized entropy, the log-determinant, and a certain matrix $p$-norm, respectively [1]) and where $\eta$ is a parameter related to the aggressiveness of the diffusive process [1]. Conversely, solutions to the regularized SDP of (2) for appropriate values of $\eta$ can be computed exactly by running one of the above three diffusion-based procedures. Notably, as $\eta \to \infty$, the solution to the SDP of (2) approaches $X = v v^T$, where $v$ is the smallest nontrivial eigenvector of $L$. More generally and in this precise sense, the Heat Kernel, PageRank, and Lazy Random Walk dynamics can be seen as “regularized” versions of spectral clustering and Laplacian eigenvector computation. Intuitively, the function $F(\cdot)$ is acting as a penalty function, in a manner analogous to the $\ell_2$ or $\ell_1$ penalty in Ridge regression or Lasso regression, and by running one of these three dynamics one is implicitly making assumptions about the form of $F(\cdot)$. In this paper, we provide a statistical framework to make that intuition precise.
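To illustrate the regularization effect in the log-determinant case, the sketch below solves (2) with $F(X) = -\log\det X$ in closed form on the subspace orthogonal to the trivial eigenvector (the stationarity condition gives $X \propto (L + cI)^{-1}$ on that subspace, with $c$ chosen so that $\mathrm{Tr}(X) = 1$), and checks that the solution approaches $v v^T$ as $\eta$ grows. This is a numerical sketch under those stated assumptions, not the diffusion-based computation discussed in the text.

```python
import numpy as np

def logdet_sdp_solution(L, d, eta):
    """Solve min Tr(LX) - (1/eta) logdet(X), s.t. Tr(X) = 1, X >= 0,
    restricted to the subspace orthogonal to the trivial eigenvector D^{1/2} 1."""
    n = L.shape[0]
    u = np.sqrt(d); u /= np.linalg.norm(u)                 # trivial eigenvector of the normalized Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)
    keep = [i for i in range(n) if abs(eigvecs[:, i] @ u) < 0.5]   # drop the trivial direction
    lam, V = eigvals[keep], eigvecs[:, keep]
    # Stationarity: X = (1/eta) (L + c I)^{-1} on the nontrivial subspace; pick c by bisection so Tr(X) = 1.
    lo, hi = -lam.min() + 1e-12, 1e6
    for _ in range(200):
        c = 0.5 * (lo + hi)
        if np.sum(1.0 / (lam + c)) / eta > 1.0:
            lo = c
        else:
            hi = c
    mu = (1.0 / eta) / (lam + c)
    return V @ np.diag(mu) @ V.T

# Illustrative graph: 6-node path; as eta grows, the solution approaches v v^T.
W = np.diag(np.ones(5), 1) + np.diag(np.ones(5), -1)
d = W.sum(axis=1)
L = np.eye(6) - np.diag(d**-0.5) @ W @ np.diag(d**-0.5)
v = np.linalg.eigh(L)[1][:, 1]                             # smallest nontrivial eigenvector
for eta in [1.0, 10.0, 1000.0]:
    X = logdet_sdp_solution(L, d, eta)
    print(eta, np.linalg.norm(X - np.outer(v, v)))
```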

3 A statistical framework for regularized graph estimation

Here, we will lay out a simple Bayesian framework for estimating a graph Laplacian. Importantly, this framework will allow for regularization by incorporating prior information.

3.1 Analogy with regularized linear regression

It will be helpful to keep in mind the Bayesian interpretation of regularized linear regression. In that context, we observe $n$ predictor-response pairs in $\mathbb{R}^p \times \mathbb{R}$, denoted $(x_1, y_1), \ldots, (x_n, y_n)$; the goal is to find a vector $\beta$ such that $\beta^T x_i \approx y_i$. Typically, we choose $\beta$ by minimizing the residual sum of squares, i.e., $\mathrm{RSS}(\beta) = \sum_{i} (y_i - \beta^T x_i)^2$, or a penalized version of it. For Ridge regression, we minimize $\mathrm{RSS}(\beta) + \lambda \|\beta\|_2^2$; while for Lasso regression, we minimize $\mathrm{RSS}(\beta) + \lambda \|\beta\|_1$.

The additional terms in the optimization criteria (i.e., $\lambda \|\beta\|_2^2$ and $\lambda \|\beta\|_1$) are called penalty functions; and adding a penalty function to the optimization criterion can often be interpreted as incorporating prior information about $\beta$. For example, we can model $y_1, \ldots, y_n$ as independent random observations with distributions dependent on $\beta$. Specifically, we can suppose $y_i$ is a Gaussian random variable with mean $\beta^T x_i$ and known variance $\sigma^2$. This induces a conditional density for the vector $y = (y_1, \ldots, y_n)$:

$$ f(y \mid \beta) \propto \exp\!\left\{ -\tfrac{1}{2\sigma^2}\, \mathrm{RSS}(\beta) \right\}, \qquad (3) $$

where the constant of proportionality does not depend on $\beta$. Next, we can assume that $\beta$ itself is random, drawn from a distribution with density $p(\beta)$. This distribution is called a prior, since it encodes prior knowledge about $\beta$. Without loss of generality, the prior density can be assumed to take the form

$$ p(\beta) \propto \exp\{ -U(\beta) \}. \qquad (4) $$

Since the two random variables are dependent, upon observing $y$, we have information about $\beta$. This information is encoded in the posterior density, $f(\beta \mid y)$, computed via Bayes’ rule as

$$ f(\beta \mid y) \propto f(y \mid \beta)\, p(\beta). \qquad (5) $$

The MAP estimate of $\beta$ is the value that maximizes $f(\beta \mid y)$; equivalently, it is the value of $\beta$ that minimizes $-\log f(y \mid \beta) + U(\beta)$. In this framework, we can recover the solution to Ridge regression or Lasso regression by setting $U(\beta) \propto \|\beta\|_2^2$ or $U(\beta) \propto \|\beta\|_1$, respectively. Thus, Ridge regression can be interpreted as imposing a Gaussian prior on $\beta$, and Lasso regression can be interpreted as imposing a double-exponential prior on $\beta$.
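The following sketch checks this correspondence numerically for Ridge regression: the closed-form ridge solution coincides with the MAP estimate under a Gaussian prior with $U(\beta) = \frac{\lambda}{2\sigma^2}\|\beta\|_2^2$. The data-generating parameters and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma, lam = 50, 3, 0.5, 2.0

X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + sigma * rng.normal(size=n)

# Ridge regression: minimize RSS(beta) + lam * ||beta||_2^2  (closed form).
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# MAP estimate under a Gaussian prior: minimize
#   -log f(y|beta) + U(beta) = RSS(beta)/(2 sigma^2) + (lam/(2 sigma^2)) ||beta||_2^2,
# here by plain gradient descent; it has the same minimizer as ridge regression.
beta_map = np.zeros(p)
for _ in range(5000):
    grad = (-X.T @ (y - X @ beta_map) + lam * beta_map) / sigma**2
    beta_map -= 1e-3 * grad

print(np.allclose(beta_map, beta_ridge, atol=1e-4))
```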

3.2 Bayesian inference for the population Laplacian

For our problem, suppose that we have a connected graph $G$ with $n$ nodes; or, equivalently, that we have $L$, the normalized Laplacian of that graph. We will view this observed graph Laplacian, $L$, as a “sample” Laplacian, i.e., as a random object whose distribution depends on a true “population” Laplacian, $\mathcal{L}$. As with the linear regression example, this induces a conditional density for $L$, to be denoted $f(L \mid \mathcal{L})$. Next, we can assume prior information about the population Laplacian in the form of a prior density, $p(\mathcal{L})$; and, given the observed Laplacian, we can estimate the population Laplacian by maximizing its posterior density, $f(\mathcal{L} \mid L)$.

Thus, to apply the Bayesian formalism, we need to specify the conditional density of $L$ given $\mathcal{L}$. In the context of linear regression, we assumed that the observations followed a Gaussian distribution. A graph Laplacian is not just a single observation—it is a positive semidefinite matrix with a very specific structure. Thus, we will take $L$ to be a random object with expectation $\mathcal{L}$, where $\mathcal{L}$ is another normalized graph Laplacian. Although, in general, $\mathcal{L}$ can be distinct from $L$, we will require that the nodes in the population and sample graphs have the same degrees. That is, if $d = (d(1), \ldots, d(n))$ denotes the “degree vector” of the graph, and $D = \mathrm{diag}(d)$, then we can define

$$ \mathcal{X} = \left\{ A : A \succeq 0, \; A\, D^{1/2} \mathbb{1} = 0, \; \mathrm{rank}(A) = n-1 \right\}, \qquad (6) $$

in which case the population Laplacian and the sample Laplacian will both be members of $\mathcal{X}$. To model $L$, we will choose a distribution for positive semi-definite matrices analogous to the Gaussian distribution: a scaled Wishart matrix with expectation $\mathcal{L}$. Note that, although it captures the trait that $L$ is positive semi-definite, this distribution does not accurately model every feature of $L$. For example, a scaled Wishart matrix does not necessarily have ones along its diagonal. However, the mode of the density is at $\mathcal{L}$, a Laplacian; and for large values of the scale parameter, most of the mass will be on matrices close to $\mathcal{L}$. Appendix B provides a more detailed heuristic justification for the use of the Wishart distribution.
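A minimal sketch of this sampling model, under the assumption that the scaled Wishart is generated as an average of $m$ outer products of Gaussian vectors with covariance $\mathcal{L}$ (so that $\mathbb{E}[L \mid \mathcal{L}] = \mathcal{L}$); the population graph used here is illustrative.

```python
import numpy as np

def sample_scaled_wishart(pop_laplacian, m, rng):
    """Draw L ~ (1/m) * Wishart(pop_laplacian, m): average of m outer products
    of N(0, pop_laplacian) vectors, so that E[L] = pop_laplacian."""
    eigvals, eigvecs = np.linalg.eigh(pop_laplacian)
    eigvals = np.clip(eigvals, 0.0, None)                 # guard tiny negative round-off
    root = eigvecs @ np.diag(np.sqrt(eigvals))            # a square root of pop_laplacian
    Z = root @ rng.normal(size=(len(eigvals), m))         # columns are N(0, pop_laplacian) draws
    return (Z @ Z.T) / m

# Illustrative population Laplacian: normalized Laplacian of a 6-node cycle.
W = np.roll(np.eye(6), 1, axis=1) + np.roll(np.eye(6), -1, axis=1)
d = W.sum(axis=1)
pop = np.eye(6) - np.diag(d**-0.5) @ W @ np.diag(d**-0.5)

rng = np.random.default_rng(0)
for m in [10, 100, 10000]:
    L = sample_scaled_wishart(pop, m, rng)
    print(m, np.linalg.norm(L - pop))                     # error shrinks as the scale parameter grows
```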

To be more precise, let $m > 0$ be a scale parameter, and suppose that $L$ is distributed over $\mathcal{X}$ as a $\frac{1}{m}\mathrm{Wishart}(\mathcal{L}, m)$ random variable. Then, $\mathbb{E}[L \mid \mathcal{L}] = \mathcal{L}$, and $L$ has conditional density

$$ f(L \mid \mathcal{L}) \propto \frac{\exp\!\left\{ -\tfrac{m}{2}\, \mathrm{Tr}\!\left( L\, \mathcal{L}^{+} \right) \right\}}{|\mathcal{L}|_{+}^{m/2}}, \qquad (7) $$

where $|\cdot|_{+}$ denotes pseudodeterminant (product of nonzero eigenvalues) and $\mathcal{L}^{+}$ denotes the Moore-Penrose pseudoinverse of $\mathcal{L}$. The constant of proportionality depends only on $|L|_{+}$, $d$, $m$, and $n$; and we emphasize that the density is supported on $\mathcal{X}$. Eqn. (7) is analogous to Eqn. (3) in the linear regression context, with $\frac{1}{m}$, the inverse of the sample size parameter, playing the role of the variance parameter $\sigma^2$. Next, suppose we know that $\mathcal{L}$ is a random object drawn from a prior density $p(\mathcal{L})$. Without loss of generality,

$$ p(\mathcal{L}) \propto \exp\{ -U(\mathcal{L}) \}, \qquad (8) $$

for some function $U$, supported on a subset $\bar{\mathcal{X}} \subseteq \mathcal{X}$. Eqn. (8) is analogous to Eqn. (4) from the linear regression example. Upon observing $L$, the posterior distribution for $\mathcal{L}$ is

$$ f(\mathcal{L} \mid L) \propto f(L \mid \mathcal{L})\, p(\mathcal{L}), \qquad (9) $$

with support determined by $\bar{\mathcal{X}}$. Eqn. (9) is analogous to Eqn. (5) from the linear regression example. If we denote by $\hat{\mathcal{L}}$ the MAP estimate of $\mathcal{L}$, then it follows that $\hat{\mathcal{L}}$ is the solution to the program

$$ \begin{array}{ll} \text{minimize} & \mathrm{Tr}\!\left( L\, \mathcal{L}^{+} \right) + \log |\mathcal{L}|_{+} + \frac{2}{m}\, U(\mathcal{L}) \\ \text{subject to} & \mathcal{L} \in \bar{\mathcal{X}}. \end{array} \qquad (10) $$

Note the similarity with the Mahoney-Orecchia regularized SDP of (2). In particular, after the change of variables $X \propto \mathcal{L}^{+}$, the log-determinant term arising from the Wishart likelihood together with the penalty $U(\cdot)$ plays the role of the regularization term $\frac{1}{\eta} F(X)$, and the coefficient $\frac{2}{m}$ plays the role of the regularization parameter $\frac{1}{\eta}$ in the optimization criterion.

4 A prior related to the PageRank procedure

Here, we will present a prior distribution for the population Laplacian that will allow us to leverage the estimation framework of Section 3; and we will show that the MAP estimate of $\mathcal{L}$ for this prior is related to the PageRank procedure via the Mahoney-Orecchia regularized SDP. Appendix C presents priors that lead to the Heat Kernel and Lazy Random Walk in an analogous way; in both of these cases, however, the priors are data-dependent in the strong sense that they explicitly depend on the number of data points.

4.1 Prior density

The prior we will present will be based on neutrality and invariance conditions; and it will be supported on $\mathcal{X}$, i.e., on the subset of positive-semidefinite matrices that was the support set for the conditional density defined in Eqn. (7). In particular, recall that, in addition to being positive semi-definite, every matrix in the support set $\mathcal{X}$ has rank $n-1$ and satisfies $A\, D^{1/2} \mathbb{1} = 0$. Note that because the prior depends on the data (via the orthogonality constraint induced by $D$), this is not a prior in the fully Bayesian sense; instead, the prior can be considered as part of an empirical or pseudo-Bayes estimation procedure.

The prior we will specify depends only on the eigenvalues of the normalized Laplacian, or equivalently on the eigenvalues of the pseudoinverse of the Laplacian. Let $\mathcal{L}^{+} = \tau\, O \Lambda O^T$ be the spectral decomposition of the pseudoinverse of the normalized Laplacian $\mathcal{L}$, where $\tau \geq 0$ is a scale factor, $O$ is an orthogonal matrix, and $\Lambda = \mathrm{diag}(\lambda(1), \ldots, \lambda(n))$, where $\sum_{v} \lambda(v) = 1$. Note that the values $\lambda(v)$ are unordered and that the vector $\lambda = (\lambda(1), \ldots, \lambda(n))$ lies in the unit simplex. If we require that the distribution for $\lambda$ be exchangeable (invariant under permutations) and neutral ($\lambda(v)$ independent of the vector $\{\lambda(u)/(1 - \lambda(v)) : u \neq v\}$, for all $v$), then the only non-degenerate possibility is that $\lambda$ is Dirichlet-distributed with parameter vector $(\alpha, \ldots, \alpha)$ [12]. The parameter $\alpha$, to which we refer as the “shape” parameter, must satisfy $\alpha > 0$ for the density to be defined. In this case,

$$ p(\mathcal{L}) \propto p_{\tau}(\tau) \prod_{v} \lambda(v)^{\alpha - 1}, \qquad (11) $$

where $p_{\tau}(\cdot)$ is a prior for $\tau$. Thus, the prior weight on $\mathcal{L}$ only depends on $\tau$ and $\lambda$. One implication is that the prior is “nearly” rotationally invariant, in the sense that the prior weight is unchanged under orthogonal transformations that fix the trivial direction $D^{1/2}\mathbb{1}$.
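To see qualitatively how the shape parameter $\alpha$ controls the spectrum implied by this prior, the sketch below averages the order statistics of Dirichlet$(\alpha, \ldots, \alpha)$ draws over replicates, in the spirit of Figure 1(a); the dimensions and replicate counts are illustrative, not those used in the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, reps = 99, 500           # illustrative simplex dimension and number of replicates

for alpha in [0.1, 0.5, 1.0, 2.0, 10.0]:
    draws = rng.dirichlet([alpha] * dim, size=reps)                # rows lie on the unit simplex
    order_stats = np.sort(draws, axis=1)[:, ::-1].mean(axis=0)    # average sorted components
    print(f"alpha={alpha:5.1f}  top-3 order statistics: {np.round(order_stats[:3], 3)}")
```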

4.2 Posterior estimation and connection to PageRank

To analyze the MAP estimate associated with the prior of Eqn. (11) and to explain its connection with the PageRank dynamics, the following proposition is crucial.

Proposition 4.1.

Suppose the conditional likelihood for $L$ given $\mathcal{L}$ is as defined in (7) and the prior density for $\mathcal{L}$ is as defined in (11). Define $\hat{\mathcal{L}}$ to be the MAP estimate of $\mathcal{L}$. Then, $\hat{\Theta} = \hat{\mathcal{L}}^{+} / \mathrm{Tr}(\hat{\mathcal{L}}^{+})$ solves the Mahoney-Orecchia regularized SDP (2), with $F(X) = -\log\det(X)$ and $\eta$ as given in Eqn. (12) below.

Proof.

For $\mathcal{L}$ in the support set of the posterior, define $\tau = \mathrm{Tr}(\mathcal{L}^{+})$ and $\Theta = \mathcal{L}^{+}/\tau$, so that $\mathrm{Tr}(\Theta) = 1$. Further, $\mathcal{L} = \tau^{-1}\Theta^{+}$. Express the prior in the form of Eqn. (8) with function $U$ given by

$$ U(\mathcal{L}) = -(\alpha - 1) \log |\Theta|_{+} - \log p_{\tau}(\tau), $$

where, as before, $|\cdot|_{+}$ denotes pseudodeterminant. Using (9) and the relation $|\mathcal{L}|_{+} = \tau^{-(n-1)} |\Theta|_{+}^{-1}$, the posterior density for $\mathcal{L}$ given $L$ is

$$ f(\mathcal{L} \mid L) \propto \exp\!\left\{ -\tfrac{m\tau}{2}\, \mathrm{Tr}(L\,\Theta) + \left( \tfrac{m}{2} + \alpha - 1 \right) \log |\Theta|_{+} + g(\tau) \right\}, $$

where $g(\tau)$ collects the terms that depend only on $\tau$. Suppose $\hat{\mathcal{L}}$ maximizes the posterior likelihood. Define $\hat{\tau} = \mathrm{Tr}(\hat{\mathcal{L}}^{+})$ and $\hat{\Theta} = \hat{\mathcal{L}}^{+}/\hat{\tau}$. In this case, $\hat{\Theta}$ must minimize the quantity $\mathrm{Tr}(L\,\Theta) - \frac{1}{\hat{\eta}} \log |\Theta|_{+}$, where

$$ \hat{\eta} = \frac{m \hat{\tau}}{m + 2(\alpha - 1)}. \qquad (12) $$

Thus $\hat{\Theta}$ solves the regularized SDP (2) with $F(X) = -\log\det(X)$ and $\eta = \hat{\eta}$. ∎

Mahoney and Orecchia showed that the solution to (2) with $F(X) = -\log\det(X)$ is closely related to the PageRank matrix, $R_{\gamma}$, defined in Eqn. (1). By combining Proposition 4.1 with their result, we get that the MAP estimate of $\mathcal{L}$ satisfies the relation that $\hat{\mathcal{L}}^{+}$ is, up to a scale factor, a degree-based conjugation of $R_{\gamma}$ by $D^{1/2}$; conversely, $R_{\gamma}$ is a degree-rescaled version of $\hat{\mathcal{L}}^{+}$. Thus, the PageRank operator of Eqn. (1) can be viewed as a degree-scaled regularized estimate of the pseudoinverse of the Laplacian. Moreover, prior assumptions about the spectrum of the graph Laplacian have direct implications for the optimal teleportation parameter. Specifically, Mahoney and Orecchia’s Lemma 2 shows how $\eta$ is related to the teleportation parameter $\gamma$, and Eqn. (12) shows how the optimal $\eta$ is related to prior assumptions about the Laplacian.

5 Empirical evaluation

In this section, we provide an empirical evaluation of the performance of the regularized Laplacian estimator, compared with the unregularized estimator. To do this, we need a ground truth population Laplacian $\mathcal{L}$ and a noisily-observed sample Laplacian $L$. Thus, in Section 5.1, we construct a family of distributions for $\mathcal{L}$; importantly, this family will be able to represent both low-dimensional graphs and expander-like graphs. Interestingly, the prior of Eqn. (11) captures some of the qualitative features of both of these types of graphs (as the shape parameter $\alpha$ is varied). Then, in Section 5.2, we describe a sampling procedure for $L$ which, superficially, has no relation to the scaled Wishart conditional density of Eqn. (7). Despite this model misspecification, the regularized estimator outperforms the unregularized one for many choices of the regularization parameter $\eta$.

5.1 Ground truth generation and prior evaluation

The ground truth graphs we generate are motivated by the Watts-Strogatz “small-world” model [13]. To generate a ground truth population Laplacian, $\mathcal{L}$—equivalently, a population graph $G$—we start with a two-dimensional lattice of width $w$ and height $h$, and thus $n = wh$ nodes. Points in the lattice are connected to their four nearest neighbors, making adjustments as necessary at the boundary. We then perform a number of random edge swaps: for each swap, we choose two edges uniformly at random and then we exchange their endpoints. For example, if we sample edges $\{u_1, v_1\}$ and $\{u_2, v_2\}$, then we replace these edges with $\{u_1, v_2\}$ and $\{u_2, v_1\}$. Thus, when no swaps are performed, the graph is the original discretization of a low-dimensional space; and as the number of swaps increases to infinity, the graph becomes more and more like a uniformly chosen 4-regular graph (which is an expander [14] and which bears similarities with an Erdős-Rényi random graph [15]). Indeed, each edge swap is a step of the Metropolis algorithm toward a uniformly chosen random graph with a fixed degree sequence. For the empirical evaluation presented here, the lattice dimensions are held fixed; but the results are qualitatively similar for other values.
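A sketch of this generator, using an illustrative lattice size; the function names and the choice of which endpoints to exchange are assumptions for illustration (swaps that would create self-loops or duplicate edges are simply skipped here).

```python
import numpy as np

def small_world_laplacian(width, height, n_swaps, rng):
    """Grid graph with 4-nearest-neighbor edges, followed by random degree-preserving edge swaps."""
    nodes = {(i, j): i * height + j for i in range(width) for j in range(height)}
    edges = []
    for (i, j), u in nodes.items():
        for di, dj in [(1, 0), (0, 1)]:                       # right and down neighbors
            if (i + di, j + dj) in nodes:
                edges.append((u, nodes[(i + di, j + dj)]))
    for _ in range(n_swaps):
        a, b = rng.choice(len(edges), size=2, replace=False)
        (u1, v1), (u2, v2) = edges[a], edges[b]
        e1, e2 = tuple(sorted((u1, v2))), tuple(sorted((u2, v1)))   # exchange endpoints
        if len({u1, v1, u2, v2}) == 4 and e1 not in edges and e2 not in edges:
            edges[a], edges[b] = e1, e2
    n = width * height
    W = np.zeros((n, n))
    for u, v in edges:
        W[u, v] = W[v, u] = 1.0
    d = W.sum(axis=1)
    return np.eye(n) - np.diag(d**-0.5) @ W @ np.diag(d**-0.5)

L_pop = small_world_laplacian(10, 10, n_swaps=50, rng=np.random.default_rng(0))
```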

Figure 1: Analytical and empirical priors. (a) Dirichlet distribution order statistics versus the shape parameter; (b) spectrum of the inverse Laplacian as a function of the rewiring parameter.

Figure 1 compares the expected order statistics (sorted values) for the Dirichlet prior of Eqn. (11) with the expected eigenvalues of $\mathcal{L}^{+}$ for the small-world model. In particular, in Figure 1(a), we show the behavior of the order statistics of a Dirichlet distribution on the unit simplex with scalar shape parameter $\alpha$, as a function of $\alpha$. For each value of the shape $\alpha$, we generated a random Dirichlet vector, $\lambda$, with parameter vector $(\alpha, \ldots, \alpha)$; we computed the order statistics of $\lambda$ by sorting its components; and we repeated this procedure over many replicates and averaged the values. Figure 1(b) shows a corresponding plot for the ordered eigenvalues of $\mathcal{L}^{+}$. For each value of the number of edge swaps (normalized, here, by the number of edges $\mu$ in the graph), we generated the normalized Laplacian, $\mathcal{L}$, corresponding to the randomly edge-swapped grid; we computed the nonzero eigenvalues of $\mathcal{L}^{+}$; and we performed many replicates of this procedure and averaged the resulting eigenvalues.

Interestingly, the behavior of the spectrum of the small-world model as the number of edge swaps increases is qualitatively quite similar to the behavior of the Dirichlet prior order statistics as the shape parameter increases. In particular, note that for small values of the shape parameter $\alpha$ the first few order statistics are well-separated from the rest; and that as $\alpha$ increases, the order statistics become concentrated around their common mean. Similarly, when no edge swaps have been performed, the top two eigenvalues (corresponding to the width-wise and height-wise coordinates on the grid) are well-separated from the bulk; as the number of swaps increases, the top eigenvalues quickly merge into the bulk; and eventually, as the number of swaps goes to infinity, the distribution becomes very close to that of a uniformly chosen 4-regular graph.

5.2 Sampling procedure, estimation performance, and optimal regularization behavior

Figure 2: Regularization performance. Panels 2(a) through 2(e) plot the relative Frobenius norm error versus the (normalized) regularization parameter $\eta/\bar{\tau}$. Shown are plots for various values of the (normalized) number of edges sampled, $s/\mu$, and of the edge-swap parameter. Recall that the regularization parameter in the regularized SDP (2) is $1/\eta$, and thus smaller values along the X-axis correspond to stronger regularization. Panel 2(f) plots the optimal regularization parameter as a function of the sample proportion, for different fractions of edge swaps.

Finally, we evaluate the estimation performance of a regularized estimator of the graph Laplacian and compare it with an unregularized estimate. To do so, we construct the population graph $G$ and its Laplacian $\mathcal{L}$, for a given number of edge swaps, as described in Section 5.1. Let $\mu$ be the number of edges in $G$. The sampling procedure used to generate the observed graph $\hat{G}$ and its Laplacian $L$ is parameterized by the sample size $s$. (Note that this parameter is analogous to the Wishart scale parameter in Eqn. (7), but here we are sampling from a different distribution.) We randomly choose $s$ edges with replacement from $G$; and we define the sample graph $\hat{G}$ and corresponding Laplacian $L$ by setting the weight of each sampled edge equal to the number of times we sampled that edge. Note that the sample graph over-counts some edges in $G$ and misses others.
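The sketch below implements this edge-sampling step: it draws $s$ edges with replacement from the population edge list and builds the sample Laplacian from the multiplicity counts. The helper name and the edge-list representation are illustrative assumptions.

```python
import numpy as np

def sample_laplacian(edges, n, s, rng):
    """Sample s edges with replacement from the population edge list `edges`;
    the sampled weight of an edge is the number of times it was drawn."""
    counts = np.bincount(rng.integers(len(edges), size=s), minlength=len(edges))
    W = np.zeros((n, n))
    for (u, v), c in zip(edges, counts):
        W[u, v] += c
        W[v, u] += c
    d = W.sum(axis=1)
    d[d == 0] = 1.0                      # isolated nodes can occur in the sample; guard the scaling
    return np.eye(n) - np.diag(d**-0.5) @ W @ np.diag(d**-0.5)
```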

We then compute the regularized estimate $\hat{\Theta}_{\eta}$, up to a constant of proportionality, by solving (implicitly!) the Mahoney-Orecchia regularized SDP (2) with $F(X) = -\log\det(X)$. We define the unregularized estimate to be equal to the observed Laplacian, $L$. Given a population Laplacian $\mathcal{L}$, we define $\tau = \mathrm{Tr}(\mathcal{L}^{+})$ and $\Theta = \mathcal{L}^{+}/\tau$. We define $\hat{\tau}$, $\hat{\Theta}$, $\hat{\tau}_{\eta}$, and $\hat{\Theta}_{\eta}$ similarly to the population quantities, using the unregularized and regularized estimates. Our performance criterion is the relative Frobenius error $\|\hat{\Theta}_{\eta} - \Theta\|_F / \|\Theta\|_F$, where $\|\cdot\|_F$ denotes the Frobenius norm ($\|A\|_F = \sqrt{\mathrm{Tr}(A^T A)}$). Appendix D presents similar results when the performance criterion is the relative spectral norm error.

Figures 2(a), 2(b), and 2(c) show the regularization performance when the edge-swap parameter is set to an intermediate value, for three different values of the sampling proportion. In each case, the mean error and one standard deviation around it are plotted as a function of $\eta/\bar{\tau}$, as computed from 100 replicates; here, $\bar{\tau}$ is the mean value of $\hat{\tau}$ over all replicates. The implicit regularization clearly improves the performance of the estimator for a large range of $\eta$ values. (Note that the regularization parameter in the regularized SDP (2) is $1/\eta$, and thus smaller values along the X-axis correspond to stronger regularization.) In particular, when the data are very noisy, e.g., when the sampling proportion is small, as in Figure 2(a), improved results are seen only for very strong regularization; for intermediate levels of noise, e.g., $s/\mu = 1$, as in Figure 2(b) (in which case $s$ is chosen such that $G$ and $\hat{G}$ have the same number of edges, counting multiplicity), improved performance is seen for a wide range of values of $\eta$; and for low levels of noise, Figure 2(c) illustrates that improved results are obtained for moderate levels of implicit regularization. Figures 2(d) and 2(e) illustrate similar results for other values of the edge-swap parameter.

As when regularization is implemented explicitly, in all these cases we observe a “sweet spot” where there is an optimal value for the implicit regularization parameter. Figure 2(f) illustrates how the optimal choice of $\eta$ depends on parameters defining the population Laplacians and sample Laplacians. In particular, it illustrates how the optimal value of $\eta$ (normalized by $\bar{\tau}$) depends on the sampling proportion $s/\mu$ and the number of swaps per edge. Observe that as the sample size $s$ increases, the optimal $\eta/\bar{\tau}$ converges monotonically to a limiting value; and, further, that higher numbers of swaps per edge (corresponding to more expander-like graphs) correspond to higher values of the optimal $\eta/\bar{\tau}$. Both of these observations are in direct agreement with Eqn. (12).

6 Conclusion

We have provided a statistical interpretation for the observation that popular diffusion-based procedures to compute a quick approximation to the first nontrivial eigenvector of a data graph Laplacian exactly solve a certain regularized version of the problem. One might be tempted to view our results as “unfortunate,” in that it is not straightforward to interpret the priors presented in this paper. Instead, our results should be viewed as making explicit the implicit prior assumptions associated with making certain decisions (that are already made in practice) to speed up computations.

Several extensions suggest themselves. The most obvious might be to try to obtain Proposition 4.1 with a more natural or empirically-plausible model than the Wishart distribution; to extend the empirical evaluation to much larger and more realistic data sets; to apply our methodology to other widely-used approximation procedures; and to characterize when implicitly regularizing an eigenvector leads to better statistical behavior in downstream applications where that eigenvector is used. More generally, though, we expect that understanding the algorithmic-statistical tradeoffs that we have illustrated will become increasingly important in very large-scale data analysis applications.

References

  • [1] M. W. Mahoney and L. Orecchia. Implementing regularization implicitly via approximate eigenvector computation. In Proceedings of the 28th International Conference on Machine Learning, pages 121–128, 2011.
  • [2] D.A. Spielman and S.-H. Teng. Spectral partitioning works: Planar graphs and finite element meshes. In FOCS ’96: Proceedings of the 37th Annual IEEE Symposium on Foundations of Computer Science, pages 96–107, 1996.
  • [3] S. Guattery and G.L. Miller. On the quality of spectral separators. SIAM Journal on Matrix Analysis and Applications, 19:701–719, 1998.
  • [4] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000.
  • [5] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, 2003.
  • [6] T. Joachims. Transductive learning via spectral graph partitioning. In Proceedings of the 20th International Conference on Machine Learning, pages 290–297, 2003.
  • [7] J. Leskovec, K.J. Lang, A. Dasgupta, and M.W. Mahoney. Community structure in large networks: Natural cluster sizes and the absence of large well-defined clusters. Internet Mathematics, 6(1):29–123, 2009. Also available at: arXiv:0810.1355.
  • [8] D.A. Spielman and S.-H. Teng. Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems. In STOC ’04: Proceedings of the 36th Annual ACM Symposium on Theory of Computing, pages 81–90, 2004.
  • [9] R. Andersen, F.R.K. Chung, and K. Lang. Local graph partitioning using PageRank vectors. In FOCS ’06: Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science, pages 475–486, 2006.
  • [10] F.R.K. Chung. The heat kernel as the pagerank of a graph. Proceedings of the National Academy of Sciences of the United States of America, 104(50):19735–19740, 2007.
  • [11] M. W. Mahoney, L. Orecchia, and N. K. Vishnoi. A spectral algorithm for improving graph partitions with applications to exploring data graphs locally. Technical report. Preprint: arXiv:0912.0681 (2009).
  • [12] J. Fabius. Two characterizations of the Dirichlet distribution. The Annals of Statistics, 1(3):583–587, 1973.
  • [13] D.J. Watts and S.H. Strogatz. Collective dynamics of small-world networks. Nature, 393:440–442, 1998.
  • [14] S. Hoory, N. Linial, and A. Wigderson. Expander graphs and their applications. Bulletin of the American Mathematical Society, 43:439–561, 2006.
  • [15] B. Bollobas. Random Graphs. Academic Press, London, 1985.
  • [16] W.E. Donath and A.J. Hoffman. Lower bounds for the partitioning of graphs. IBM Journal of Research and Development, 17:420–425, 1973.
  • [17] M. Fiedler. A property of eigenvectors of nonnegative symmetric matrices and its application to graph theory. Czechoslovak Mathematical Journal, 25(100):619–633, 1975.
  • [18] M. Mihail. Conductance and convergence of Markov chains—a combinatorial treatment of expanders. In Proceedings of the 30th Annual IEEE Symposium on Foundations of Computer Science, pages 526–531, 1989.
  • [19] P. Yu. Chebotarev and E. V. Shamis. On proximity measures for graph vertices. Automation and Remote Control, 59(10):1443–1459, 1998.
  • [20] A. K. Chandra, P. Raghavan, W. L. Ruzzo, R. Smolensky, and P. Tiwari. The electrical resistance of a graph captures its commute and cover times. In Proceedings of the 21st Annual ACM Symposium on Theory of Computing, pages 574–586, 1989.

Appendix A Relationship with local and global spectral graph partitioning

In this section, we briefly describe the connections between our results and global and local versions of spectral partitioning, which were a starting point for this work.

The idea of spectral clustering is to approximate the best partition of the vertices of a connected graph into two pieces by using the nontrivial eigenvectors of a Laplacian (either the combinatorial or the normalized Laplacian). This idea has had a long history [16, 17, 2, 3, 4]. The simplest version of spectral clustering involves computing the first nontrivial eigenvector (or another vector with Rayleigh quotient close to that of the first nontrivial eigenvector) of $L$, and “sweeping” over that vector. For example, one can take that vector, call it $x$, and, for a given threshold $K$, define the two sets of the partition as

$$ C(x, K) = \{ v : x(v) \geq K \} \quad \text{and} \quad \bar{C}(x, K) = \{ v : x(v) < K \}. $$

Typically, this vector is computed by calling a black-box solver, but it could also be approximated with an iteration-based method (such as the Power Method or Lanczos Method) or a random walk-based method (such as running a diffusive procedure or PageRank-based procedure to the asymptotic state). Far from being a “heuristic,” this procedure provides a global partition that (via Cheeger’s inequality and for an appropriate choice of the threshold $K$) satisfies provable quality-of-approximation guarantees with respect to the combinatorial optimization problem of finding the best “conductance” partition in the entire graph [18, 2, 3].
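A minimal sketch of the sweep procedure: compute the first nontrivial eigenvector, sort the vertices by their entries, and among the induced threshold cuts return the one with the smallest conductance. The example graph, the conductance convention (cut weight over the smaller side's volume), and the helper names are illustrative assumptions.

```python
import numpy as np

def sweep_cut(W):
    """Sweep over the first nontrivial eigenvector of the normalized Laplacian
    and return the threshold cut with the smallest conductance."""
    d = W.sum(axis=1)
    L = np.eye(len(d)) - np.diag(d**-0.5) @ W @ np.diag(d**-0.5)
    x = np.linalg.eigh(L)[1][:, 1]               # first nontrivial eigenvector
    order = np.argsort(-x)                        # sweep from the largest entry downward
    best, best_set = np.inf, None
    for k in range(1, len(order)):
        S = order[:k]
        cut = W[np.ix_(S, np.setdiff1d(order, S))].sum()
        vol = min(d[S].sum(), d.sum() - d[S].sum())
        if cut / vol < best:
            best, best_set = cut / vol, set(S.tolist())
    return best_set, best

# Two triangles joined by a single edge: the sweep recovers the two triangles.
W = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    W[u, v] = W[v, u] = 1.0
print(sweep_cut(W))
```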

Although spectral clustering reduces a graph to a single vector—the smallest nontrivial eigenvector of the graph’s Laplacian—and then clusters the nodes using the information in that vector, it is possible to obtain much more information about the graph by looking at more than one eigenvector of the Laplacian. In particular, the elements of the pseudoinverse of the combinatorial Laplacian, $L_0^{+}$, give local (i.e., node-specific) information about random walks on the graph. The reason is that the pseudoinverse of the Laplacian is closely related to random walks on the graph. See, e.g., [19] for details. For example, it is known that the quantity $L_0^{+}(u,u) + L_0^{+}(v,v) - 2 L_0^{+}(u,v)$ is proportional to the commute time, a symmetrized version of the length of time before a random walker started at node $u$ reaches node $v$, whenever $u$ and $v$ are in the same connected component [20]. Similarly, the elements of the pseudoinverse of the normalized Laplacian give degree-scaled measures of proximity between the nodes of a graph. It is likely that the pseudoinverse of the normalized Laplacian has a probabilistic interpretation in terms of random walks on the graph, along the lines of our methodology, but we are not aware of any such interpretation. From this perspective, given a node $u$ and a cutoff value, $K$, we can define a local partition around node $u$ via $C(u, K) = \{ v : L_0^{+}(u,v) \geq K \}$. (Note that if $v$ is in $C(u, K)$, then $u$ is in $C(v, K)$; in addition, if the graph is disconnected, then there exists a $K$ such that $u$ and $v$ are in the same connected component iff $v \in C(u, K)$.) We call clustering procedures based on this idea local spectral partitioning.

Although the naïve way of performing this local spectral partitioning, i.e., to compute the pseudoinverse explicitly, is prohibitive for anything but very small graphs, these ideas form the basis for very fast local spectral clustering methods that employ truncated diffusion-based procedures to compute localized vectors with which to partition. For example, this idea can be implemented by performing a diffusion-based procedure with an input seed distribution vector localized on a node and then sweeping over the resulting vector. This idea was originally introduced in [8] as a diffusion-based operational procedure that was local in a very strong sense and that led to Cheeger-like bounds analogous to those obtained with the usual global spectral partitioning; and this was extended and improved by [9, 10]. In addition, an optimization perspective on this was provided by [11]. Although [11] is local in a weaker sense, it does obtain local Cheeger-like guarantees from an explicit locally-biased optimization problem, and it provides an optimization ansatz that may be interpreted as a “local eigenvector.” See [8, 9, 10, 11] for details. Understanding the relationship between the “operational procedure versus optimization ansatz” perspectives was the origin of [1] and thus of this work.

Appendix B Heuristic justification for the Wishart density

In this section, we describe a sampling procedure for $L$ which, in a very crude sense, leads approximately to a conditional Wishart density for $L$ given $\mathcal{L}$.

Let $G = (V, E)$ be a graph with vertex set $V$ and edge set $E \subseteq V \times V$ equipped with the equivalence relation $(u,v) \equiv (v,u)$. Let $w$ be an edge weight function, and let $\mathcal{L}_0$ and $\mathcal{L}$ be the corresponding combinatorial and normalized Laplacians. Let $D$ be a diagonal matrix with $D(v,v) = d(v) = \sum_u w(u,v)$, so that $\mathcal{L} = D^{-1/2} \mathcal{L}_0 D^{-1/2}$. Suppose the weights are scaled such that they sum to one over the (unordered) edges, so that $w$ defines a probability distribution over $E$. We refer to $w(u,v)$ as the population weight of edge $(u,v)$.

A simple model for the sample graph is as follows: we sample $m$ edges from $G$, randomly chosen according to the population weight function. That is, we see edges $e_1, e_2, \ldots, e_m$, where the edges are all drawn independently and identically such that the probability of seeing edge $(u,v)$ is determined by $w$:

$$ \mathbb{P}\{ e_i \equiv (u,v) \} = w(u,v). $$

Note that we will likely see duplicate edges and not every edge with a positive weight will get sampled. Then, we construct a weight function from the sampled edges, called the sample weight function, $\hat{w}$, defined such that

$$ \hat{w}(u,v) = \frac{1}{m} \sum_{i=1}^{m} \mathbb{1}\{ e_i \equiv (u,v) \}. $$

In turn, we construct a sample combinatorial Laplacian, $\hat{L}_0$, defined such that

$$ \hat{L}_0 = \sum_{(u,v)} \hat{w}(u,v)\, (e_u - e_v)(e_u - e_v)^T, $$

where $e_v$ is an indicator vector. Let $\hat{D}$ be a diagonal matrix such that $\hat{D}(v,v) = \sum_u \hat{w}(u,v)$, and define $L = \hat{D}^{-1/2} \hat{L}_0 \hat{D}^{-1/2}$. Letting $\mathbb{E}$ denote expectation with respect to the probability law of the sampled edges, note that $\mathbb{E}[\hat{w}] = w$, that $\mathbb{E}[\hat{L}_0] = \mathcal{L}_0$, and that $\mathbb{E}[\hat{D}] = D$. Moreover, the strong law of large numbers guarantees that as $m$ increases, these three quantities converge almost surely to their expectations. Further, Slutsky’s theorem guarantees that $\hat{D}^{-1/2} \hat{L}_0 \hat{D}^{-1/2}$ and $D^{-1/2} \hat{L}_0 D^{-1/2}$ converge in distribution to the same limit. We use this large-sample behavior to approximate the distribution of $L$ by the distribution of $D^{-1/2} \hat{L}_0 D^{-1/2}$. Put simply, we treat the degrees as known.

The distribution of $L$ is completely determined by the edge sampling scheme laid out above. However, the exact form for the density involves an intractable combinatorial sum. Thus, we appeal to a crude approximation for the conditional density $f(L \mid \mathcal{L})$. The approximation works as follows:

  1. For $i = 1, \ldots, m$, define $x_i$ such that

    $$ x_i = \pm (e_u - e_v) \quad \text{when } e_i \equiv (u,v), $$

    where the sign is chosen arbitrarily. Note that $\hat{L}_0 = \frac{1}{m} \sum_{i=1}^{m} x_i x_i^T$.

  2. Take the sign of $x_i$ to be random, equal to $+1$ or $-1$ with probability $\tfrac{1}{2}$ each. Approximate the distribution of $x_i$ by the distribution of a multivariate normal random variable, $z_i$, such that $x_i$ and $z_i$ have the same first and second moments.

  3. Approximate the distribution of $\hat{L}_0$ by the distribution of $\frac{1}{m} \sum_{i=1}^{m} z_i z_i^T$, where the $z_i$ are drawn independently.

  4. Use the asymptotic expansion above to approximate the distribution of $L$ by the distribution of $D^{-1/2} \left( \frac{1}{m} \sum_{i=1}^{m} z_i z_i^T \right) D^{-1/2}$.

The next two lemmas derive the distribution of $z_i$ and of $\frac{1}{m} \sum_{i} z_i z_i^T$ in terms of $\mathcal{L}_0$, allowing us to get an approximation for $f(L \mid \mathcal{L})$.

Lemma B.1.

With $x_i$ and $z_i$ defined as above, $\mathbb{E}[z_i] = 0$ and $\mathbb{E}[z_i z_i^T] = \mathcal{L}_0$.

Proof.

The random variable $z_i$ is defined to have the same first and second moments as $x_i$. The first moment vanishes since the random sign implies that $\mathbb{E}[x_i] = 0$. For the second moments, note that when $e_i \equiv (u,v)$,

$$ \mathbb{E}\!\left[ x_i x_i^T \mid e_i \equiv (u,v) \right] = (e_u - e_v)(e_u - e_v)^T. $$

Likewise, averaging over the edge distribution, $\mathbb{E}[x_i x_i^T] = \sum_{(u,v)} w(u,v)\, (e_u - e_v)(e_u - e_v)^T = \mathcal{L}_0$. ∎

Lemma B.2.

The random matrix

$$ A = \frac{1}{m} \sum_{i=1}^{m} z_i z_i^T $$

is distributed as a $\frac{1}{m}\mathrm{Wishart}(\mathcal{L}_0, m)$ random variable. This distribution is supported on the set of positive-semidefinite matrices with the same nullspace as $\mathcal{L}_0$. When $m \geq n - 1$, the distribution has a density on this space given by

$$ f(A \mid \mathcal{L}_0) \propto \frac{|A|_{+}^{(m-n)/2}}{|\mathcal{L}_0|_{+}^{m/2}} \exp\!\left\{ -\tfrac{m}{2}\, \mathrm{Tr}\!\left( A\, \mathcal{L}_0^{+} \right) \right\}, \qquad (13) $$

where the constant of proportionality depends only on $m$ and $n$, and where $|\cdot|_{+}$ denotes pseudodeterminant (product of nonzero eigenvalues).

Proof.

Since $m A$ is a sum of outer products of multivariate normal vectors, it is Wishart distributed (by definition). Suppose $\mathcal{L}_0 = V \Lambda V^T$, where $V$ is a matrix whose columns are the eigenvectors of $\mathcal{L}_0$ corresponding to its nonzero eigenvalues. Note that each $z_i$ lies in the column span of $V$, and that $\Lambda$ has full rank. Thus, the representation of $A$ in that basis has a density over the space of positive-semidefinite matrices whenever $m \geq n - 1$. The density of $A$ is then exactly equal to the expression in Eqn. (13), defined above. ∎

Using the previous lemma, the random variable $D^{-1/2} A\, D^{-1/2}$ has density

$$ f\!\left( D^{-1/2} A D^{-1/2} \right) \propto \frac{|A|_{+}^{(m-n)/2}}{|\mathcal{L}_0|_{+}^{m/2}} \exp\!\left\{ -\tfrac{m}{2}\, \mathrm{Tr}\!\left( D^{-1/2} A D^{-1/2} \cdot D^{1/2} \mathcal{L}_0^{+} D^{1/2} \right) \right\}, $$

where we have used that $\mathcal{L} = D^{-1/2} \mathcal{L}_0 D^{-1/2}$, and the constant of proportionality depends on $d$, $m$, and $n$. Then, if we approximate $D^{1/2} \mathcal{L}_0^{+} D^{1/2}$ by $\mathcal{L}^{+}$ and $|A|_{+}/|\mathcal{L}_0|_{+}$ by $|D^{-1/2} A D^{-1/2}|_{+}/|\mathcal{L}|_{+}$, then this is “approximately” the density of a $\frac{1}{m}\mathrm{Wishart}(\mathcal{L}, m)$ random variable. These last approximations are necessary because $\mathcal{L}_0$ and $\mathcal{L}$ are rank-degenerate.

To conclude, we do not want to overstate the validity of this heuristic justification. In particular, it makes three key approximations:

  1. the true degree matrix $D$ can be approximated by the observed degree matrix $\hat{D}$;

  2. the distribution of $x_i$, a sparse vector, is well approximated by that of $z_i$, a Gaussian (dense) vector;

  3. the quantities $D^{1/2} \mathcal{L}_0^{+} D^{1/2}$ and $|\mathcal{L}_0|_{+}$ can be replaced with $\mathcal{L}^{+}$ and $|\mathcal{L}|_{+}$.

None of these approximations hold in general, though as argued above, the first is plausible if $m$ is large relative to $n$. Likewise, since $\mathcal{L}_0$ and $\mathcal{L}$ are nearly full rank, the third approximation is likely not too bad. The biggest leap of faith is the second approximation. Note, e.g., that despite their first moments being equal, the second moments of $x_i x_i^T$ and $z_i z_i^T$ differ.

Appendix C Other priors and the relationship to Heat Kernel and Lazy Random Walk

There is a straightforward generalization of Proposition 4.1 to other priors. In this section, we state it, and we observe connections with the Heat Kernel and Lazy Random Walk procedures.

Proposition C.1.

Suppose the conditional likelihood for $L$ given $\mathcal{L}$ is as defined in (7) and the prior density for $\mathcal{L}$ is of the form

$$ p(\mathcal{L}) \propto |\mathcal{L}|_{+}^{m/2} \exp\!\left\{ -\tfrac{m}{2}\, g(\tau)\, F(\Theta) - h(\tau) \right\}, \qquad (14) $$

where $\tau = \mathrm{Tr}(\mathcal{L}^{+})$, $\Theta = \mathcal{L}^{+}/\tau$, and $g$ and $h$ are functions with $g > 0$ over the support of the prior. Define $\hat{\mathcal{L}}$ to be the MAP estimate of $\mathcal{L}$. Then, $\hat{\Theta} = \hat{\mathcal{L}}^{+}/\mathrm{Tr}(\hat{\mathcal{L}}^{+})$ solves the Mahoney-Orecchia regularized SDP (2), with the same $F(\cdot)$ as in the expression (14) for the prior and with

$$ \eta = \frac{\hat{\tau}}{g(\hat{\tau})}, $$

where $\hat{\tau} = \mathrm{Tr}(\hat{\mathcal{L}}^{+})$.

The proof of this proposition is a straightforward generalization of the proof of Proposition 4.1 and is thus omitted. Note that we recover the result of Proposition 4.1 by setting $F(X) = -\log\det(X)$ and choosing $g$ and $h$ appropriately. In addition, by choosing $F(\cdot)$ to be the generalized entropy or the matrix $p$-norm penalty of [1], we obtain variants of the Mahoney-Orecchia regularized SDP (2) with the corresponding regularization term $\frac{1}{\eta} F(X)$. By then combining Proposition C.1 with their result, we get that the MAP estimate of $\mathcal{L}$ is related to the Heat Kernel and Lazy Random Walk procedures, respectively, in a manner analogous to what we saw in Section 4 with the PageRank procedure. In both of these other cases, however, the prior is data-dependent in the strong sense that it explicitly depends on the number of data points; and, in addition, the priors for these other cases do not correspond to any well-recognizable parametric distribution.

Appendix D Regularization performance with respect to the relative spectral error

In this section, we present Figure 3, which shows the regularization performance for our empirical evaluation, when the performance criterion is the relative spectral norm error, i.e., $\|\hat{\Theta}_{\eta} - \Theta\|_2 / \|\Theta\|_2$, where $\|\cdot\|_2$ denotes the spectral norm of a matrix (which is the largest singular value of that matrix). See Section 5.2 for details of the setup. Note that these results are very similar to those for the relative Frobenius norm error that are presented in Figure 2.

Figure 3: Regularization performance, as measured with the relative spectral norm error, versus the (normalized) regularization parameter $\eta/\bar{\tau}$. Shown are plots for various values of the (normalized) number of edges sampled and of the edge-swap parameter. Recall that the regularization parameter in the regularized SDP (2) is $1/\eta$, and thus smaller values along the X-axis correspond to stronger regularization.