1 Introduction
Tikhonov denoising of graph signals [1, 2], semi-supervised learning for node classification in graphs [3], and post-sampling graph signal reconstruction [4] are a few examples falling in the following class of optimization problems. Given a graph signal $y \in \mathbb{R}^n$, where $y(i)$ is the value measured at node $i$ of a graph $\mathcal{G}$, one considers the problem
(1)   $\hat{x} = \operatorname*{argmin}_{z \in \mathbb{R}^n} \; q \|z - y\|^2 + z^\top L z$
where $L \in \mathbb{R}^{n \times n}$ is the Laplacian of $\mathcal{G}$ and $q > 0$ a parameter tuning the trade-off between a data-fidelity term (encouraging the solution to be close to $y$) and a smoothness term (encouraging the solution to vary slowly along any path of the graph). In Tikhonov denoising, $y$ is a noisy measurement of an underlying signal $x$ that one wants to recover. In semi-supervised learning (SSL), the entries of $y$ are known labels that one wants to propagate on the graph to classify all the nodes in different classes (see Section 4 for more details). This optimization problem admits an explicit solution:
(2)   $\hat{x} = q\,(qI + L)^{-1} y := K y,$
which requires the inversion of a regularized Laplacian $qI + L$, where $I \in \mathbb{R}^{n \times n}$ is the identity matrix. Computing $\hat{x}$ exactly costs $\mathcal{O}(n^3)$ elementary operations and becomes prohibitive as $n$ increases.

State of the art. For large $n$, the state of the art includes iterative methods such as conjugate gradient with preconditioning [5], where $qI + L$ is only accessed via matrix-vector multiplications of the form $(qI + L)z$, and polynomial approximation methods [6] that approximate $K$ by a low-order polynomial in $L$. Both classes of methods compute an approximation of $\hat{x}$ in time linear in $|\mathcal{E}|$, the number of edges of the graph.

Contribution. We introduce two Monte Carlo estimators of $\hat{x}$ based on random spanning forests (RSFs), that:

scale linearly in $|\mathcal{E}|$ and are thus practical on very large graphs;

can be implemented in a fully distributed fashion: as long as each node is able to communicate with its neighbours, the result can be obtained without centralized knowledge of the graph’s structure (see the implementation paragraph at the end of Section 3).
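For concreteness, the direct computation of Eq. (2) can be sketched on a toy graph. The following dense-matrix baseline (our own illustration, with names of our choosing; this is the computation that becomes intractable for large $n$, not the estimators of this paper):

```python
import numpy as np

def smooth_exact(W, y, q):
    """Tikhonov smoothing x_hat = q (qI + L)^{-1} y, with L = D - W (Eq. (2))."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W       # combinatorial Laplacian
    return q * np.linalg.solve(q * np.eye(n) + L, y)

# 4-node path graph and a noisy step signal
W = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
y = np.array([1.0, 0.9, 0.1, 0.0])
x_hat = smooth_exact(W, y, q=1.0)        # smoothed version of y
```

As a sanity check on the trade-off: when $q \to \infty$ the data-fidelity term dominates and $\hat{x} \to y$; when $q \to 0$ the smoothness term dominates and $\hat{x}$ tends to the constant vector whose entries are the mean of $y$.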
The Julia code implementing these estimators and reproducing this paper's results is available on the authors' website.¹

¹ www.gipsalab.fr/nicolas.tremblay/files/graphsmoothingviaRSF.zip
Structure of the paper. We provide the necessary notations and background in Section 2. In Section 3, we detail the two novel estimators before generalizing them to cases where $K$ is of the form $(Q+L)^{-1}Q$, with $Q$ a positive diagonal matrix. The experimental Section 4 gives an illustration on image denoising before showing how to use our estimators in the context of SSL. We conclude in Section 5.
2 Background
Notations and preliminary definitions. We consider undirected weighted graphs $\mathcal{G} = (\mathcal{V}, \mathcal{E}, W)$, where $\mathcal{V}$ is the set of $n$ nodes, $\mathcal{E}$ the set of edges, and $W \in \mathbb{R}^{n \times n}$ the symmetric weighted adjacency matrix. We denote by $d(i) = \sum_j W_{ij}$ the degree of node $i$, by $d \in \mathbb{R}^n$ the degree vector, and by $D = \mathrm{diag}(d)$ the diagonal degree matrix. In this paper, we consider the Laplacian matrix defined as $L = D - W$. $L$ is positive semi-definite (PSD) [7] and its eigenvalues can be ordered as $0 = \lambda_1 \leq \lambda_2 \leq \ldots \leq \lambda_n$. We also consider graph signals defined over the nodes: $y \in \mathbb{R}^n$, where $y(i)$ is the signal's value at node $i$. We denote by $\mathbf{1} \in \mathbb{R}^n$ the constant vector equal to $1$, and by $\delta_i$ the Dirac centered on node $i$: $\delta_i(j) = 1$ if $i = j$, and $0$ otherwise.

A tree of $\mathcal{G}$ is understood as a connected subgraph of $\mathcal{G}$ that does not contain cycles. A rooted tree of $\mathcal{G}$, i.e., a tree of $\mathcal{G}$ whose edges are all oriented towards one node called the root, is generically denoted by $\tau$. A rooted forest of $\mathcal{G}$, i.e., a set of disjoint rooted trees, is generically denoted by $\varphi$. In the following, we only consider rooted trees and rooted forests, and thus simply refer to them as trees and forests. Also, $\rho(\varphi)$ stands for the set of roots of the trees in $\varphi$. As such, $|\rho(\varphi)|$ is the total number of trees in $\varphi$. We define the function $r_\varphi : \mathcal{V} \to \rho(\varphi)$ such that $r_\varphi(i)$ is the root of the tree containing node $i$ in the forest $\varphi$. Node $i$ is said to be rooted in $r_\varphi(i)$.
Random spanning forests (RSFs). Let us recall that a spanning tree (resp. forest) of $\mathcal{G}$ is a tree (resp. forest) of $\mathcal{G}$ that spans all nodes in $\mathcal{V}$. Now, for a given graph $\mathcal{G}$, there exist in general many spanning trees (and even more spanning forests). Probabilists and computer scientists have been studying several distributions over spanning trees and forests in the past. A classical distribution over spanning trees is:
(3)   $P(\mathcal{T} = \tau) \propto \prod_{(i,j) \in \tau} W_{ij}.$
Sampling from this distribution yields a so-called uniform² spanning tree (UST), and can be efficiently performed by Wilson's algorithm [10] via loop-erased random walks. We focus here on the following parametrized distribution over spanning forests:

² This terminology comes from the fact that in unweighted graphs, this distribution reduces to the uniform distribution over all spanning trees.
(4)   $P(\Phi_q = \varphi) \propto q^{|\rho(\varphi)|} \prod_{(i,j) \in \varphi} W_{ij}.$
Sampling from this distribution yields a RSF $\Phi_q$ and is efficiently performed (in expected time of order $\mathcal{O}(n + |\mathcal{E}|/q)$) via a variant of Wilson's algorithm [11]. Many properties of $\Phi_q$ are known [11, 12]. For instance, the probability that node $i$ is rooted in node $j$ is (see Lemma 3.3 in [12]):

(5)   $P\left(r_{\Phi_q}(i) = j\right) = K_{ij}, \quad \text{with } K = q\,(qI + L)^{-1}.$
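To make the sampling procedure concrete, here is a minimal, non-optimized sketch (our own illustration; `sample_forest` is our name) of this variant of Wilson's algorithm for an unweighted graph with constant $q$. Each loop-erased random walk is interrupted at node $i$ with probability $q/(q + d(i))$, in which case $i$ becomes a root:

```python
import random

def sample_forest(adj, q, rng=random):
    """Sample a random spanning forest (distribution (4), unweighted case)
    via a variant of Wilson's algorithm. adj: list of neighbor lists.
    Returns root[i]: the root of the tree containing node i."""
    n = len(adj)
    in_forest = [False] * n
    nxt = [-1] * n                    # successor pointer along the walk
    root = [-1] * n
    for i in range(n):
        # loop-erased random walk from i; loops are erased implicitly
        # by overwriting nxt[] on revisits
        u = i
        while not in_forest[u]:
            if rng.random() < q / (q + len(adj[u])):
                in_forest[u] = True   # walk interrupted: u becomes a root
                root[u] = u
            else:
                nxt[u] = rng.choice(adj[u])
                u = nxt[u]
        # retrace the loop-erased path and attach it to the forest
        r = root[u]
        u = i
        while not in_forest[u]:
            in_forest[u] = True
            root[u] = r
            u = nxt[u]
    return root
```

As a consistency check, the expected number of roots equals $\mathrm{tr}(K)$, since each node is a root with probability $K_{ii}$ by Eq. (5).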
3 RSFbased estimators
Given a signal $y \in \mathbb{R}^n$, our goal is to compute $\hat{x} = K y$ without computing $K$ explicitly, that is, without inverting $qI + L$. We define two Monte Carlo estimators of $\hat{x}$, both based on RSFs.

A first estimator of $\hat{x}$. Let us consider a realisation of the RSF $\Phi_q$ and define the estimator $\tilde{x}$ as

(6)   $\forall i, \quad \tilde{x}(i) = y\left(r_{\Phi_q}(i)\right).$
Proposition 1.
$\tilde{x}$ is unbiased: $\mathbb{E}(\tilde{x}) = \hat{x}$. Moreover: $\mathbb{E}\left(\|\tilde{x} - \hat{x}\|^2\right) = \|y\|^2 - \|\hat{x}\|^2.$
Proof.
Let $i \in \mathcal{V}$. One has, using Eq. (5):

(7)   $\mathbb{E}\left(\tilde{x}(i)\right) = \sum_{j} P\left(r_{\Phi_q}(i) = j\right) y(j)$

(8)   $= \sum_{j} K_{ij}\, y(j) = \hat{x}(i).$

$\tilde{x}$ is thus unbiased. A similar calculation yields $\mathbb{E}(\tilde{x}(i)^2) = (K y_2)(i)$ where $y_2 := (y(1)^2, \ldots, y(n)^2)^\top$, such that the variance verifies: $\mathrm{Var}(\tilde{x}(i)) = (K y_2)(i) - \hat{x}(i)^2$. Summing over all nodes yields: $\mathbb{E}(\|\tilde{x} - \hat{x}\|^2) = \sum_i \mathrm{Var}(\tilde{x}(i)) = \mathbf{1}^\top K y_2 - \|\hat{x}\|^2$. Noticing that $\mathbf{1}$ is an eigenvector of $K$ with eigenvalue $1$ (as $\mathbf{1}$ is an eigenvector of $L$ with eigenvalue $0$), so that $\mathbf{1}^\top K y_2 = \mathbf{1}^\top y_2 = \|y\|^2$, ends the proof. ∎

An improved estimator of $\hat{x}$. A random forest $\varphi$ contains $|\rho(\varphi)|$ trees that we enumerate from $1$ to $|\rho(\varphi)|$. Consider $\mathcal{V}_t \subseteq \mathcal{V}$ the subset of nodes that are in the $t$-th tree. As $\varphi$ is a spanning forest, $\{\mathcal{V}_1, \ldots, \mathcal{V}_{|\rho(\varphi)|}\}$ is a partition of $\mathcal{V}$ (i.e., a set of disjoint subsets that cover $\mathcal{V}$). Let $t_\varphi : \mathcal{V} \to \{1, \ldots, |\rho(\varphi)|\}$ be the tree membership function that associates to each node the number of the tree to which it belongs. For instance, $|\mathcal{V}_{t_\varphi(i)}|$ is the size of the tree containing node $i$. We define a second estimator $\bar{x}$ as
(9)   $\forall i, \quad \bar{x}(i) = \frac{1}{|\mathcal{V}_{t_{\Phi_q}(i)}|} \sum_{j \in \mathcal{V}_{t_{\Phi_q}(i)}} y(j).$
Proposition 2.
$\bar{x}$ is unbiased: $\mathbb{E}(\bar{x}) = \hat{x}$. Moreover: $\mathbb{E}\left(\|\bar{x} - \hat{x}\|^2\right) = y^\top (K - K^2)\, y.$
Proof.
Let us denote by $\pi$ the function that takes as entry a forest $\varphi$ and outputs its associated partition $\pi(\varphi)$. Let us also define $s_{ij}(\pi(\varphi)) = 1/|\mathcal{V}_{t_\varphi(i)}|$ if $t_\varphi(i) = t_\varphi(j)$, and $0$ otherwise. We need Proposition 2.3 of [11]. Fixing a partition $\pi$ of $\mathcal{V}$, one has:

$P\left(r_{\Phi_q}(i) = j \mid \pi(\Phi_q) = \pi\right) = s_{ij}(\pi).$

In words, this means that given a partition, the distribution of the root within each part is uniform over that part. Also, note that $K_{ij}$ can be written:

(10)   $K_{ij} = P\left(r_{\Phi_q}(i) = j\right) = \mathbb{E}\left[ s_{ij}\left(\pi(\Phi_q)\right) \right].$

Given a RSF $\Phi_q$, define the matrix $S$ with entries $S_{ij} = s_{ij}(\pi(\Phi_q))$. Eq. (10) translates as: $\mathbb{E}(S) = K$. Thus, $\forall i$:

(11)   $\mathbb{E}\left(\bar{x}(i)\right) = \mathbb{E}\Big( \sum_j S_{ij}\, y(j) \Big)$

(12)   $= \sum_j K_{ij}\, y(j) = \hat{x}(i).$

The estimator $\bar{x}$ is thus unbiased. Note that one also has:

(13)   $S = S^\top = S^2,$

such that

(14)   $\mathbb{E}\left(\|\bar{x}\|^2\right) = \mathbb{E}\left(y^\top S^\top S\, y\right)$

(15)   $= \mathbb{E}\left(y^\top S\, y\right) = y^\top K\, y.$

Note that $\mathbb{E}(\|\bar{x} - \hat{x}\|^2) = \mathbb{E}(\|\bar{x}\|^2) - \|\hat{x}\|^2$, i.e., $\mathbb{E}(\|\bar{x} - \hat{x}\|^2) = y^\top K y - y^\top K^2 y = y^\top (K - K^2)\, y$, finishing the proof. ∎
Rao-Blackwellisation. Note that $\bar{x}$ is a Rao-Blackwellisation of $\tilde{x}$, where the sufficient statistic is the partition induced by the RSF. As such, $\bar{x}$ is necessarily an improvement over $\tilde{x}$. This improvement can also be observed in the variance equations of both propositions: as $K$ is PSD with eigenvalues between $0$ and $1$, one has: $y^\top (K - K^2)\, y \leq \|y\|^2 - \|\hat{x}\|^2$.
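Once a forest has been sampled, evaluating both estimators is cheap. The sketch below (our own illustration; the forest is represented simply by its root map $r_\varphi$, from which the partition is recovered, and the function name is ours) computes $\tilde{x}$ from Eq. (6) and $\bar{x}$ from Eq. (9):

```python
from collections import defaultdict

def rsf_estimators(root, y):
    """root[i]: root of the tree containing node i, for one sampled forest.
    Returns (x_tilde, x_bar): the root-value estimator (6) and the
    Rao-Blackwellised tree-average estimator (9)."""
    n = len(y)
    x_tilde = [y[root[i]] for i in range(n)]   # copy the value at the root
    trees = defaultdict(list)                  # partition induced by the forest
    for i in range(n):
        trees[root[i]].append(i)
    x_bar = [0.0] * n
    for members in trees.values():
        avg = sum(y[j] for j in members) / len(members)
        for i in members:                      # constant within each tree
            x_bar[i] = avg
    return x_tilde, x_bar

# a 5-node forest with two trees: {0,1} rooted at 0, and {2,3,4} rooted at 3
root = [0, 0, 3, 3, 3]
y = [1.0, 2.0, 3.0, 4.0, 5.0]
x_tilde, x_bar = rsf_estimators(root, y)
# x_tilde = [1.0, 1.0, 4.0, 4.0, 4.0]; x_bar = [1.5, 1.5, 4.0, 4.0, 4.0]
```

Both outputs are piecewise constant on the trees of the forest, which is the origin of the constant irregular patches visible in the denoising illustration of Section 4.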
Tikhonov denoising. Let $y = x + \varepsilon$ be a noisy measurement of a signal $x$ that one wishes to recover. Assuming that the measurement noise $\varepsilon$ is Gaussian with covariance $\gamma^2 I$, one can write (for instance for the second estimator):

$\mathbb{E}_{\Phi_q, \varepsilon}\left(\|\bar{x} - x\|^2\right) = \mathbb{E}_{\varepsilon}\left(\|\hat{x} - x\|^2\right) + x^\top g(L)\, x + \gamma^2\, \mathrm{tr}\left(g(L)\right),$

where $g(L) = K - K^2$ is a graph bandpass filter [1, 13] with frequency response $g(\lambda) = q\lambda / (q + \lambda)^2$. The second term depends on the noise level $\gamma$. The first term, however, depends on the original signal $x$ filtered by $g(L)$: the Fourier components of $x$ associated with graph frequencies around $q$ (maximizing $g$) are thus harder to denoise than the ones close to $0$ or $\lambda_n$.
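That the response $g(\lambda) = q\lambda/(q+\lambda)^2$ of $K - K^2$ is indeed bandpass, peaking at $\lambda = q$, follows from a one-line differentiation (our own verification):

```latex
g(\lambda) = \frac{q\lambda}{(q+\lambda)^2}, \qquad
g'(\lambda) = \frac{q(q+\lambda)^2 - 2q\lambda(q+\lambda)}{(q+\lambda)^4}
            = \frac{q\,(q-\lambda)}{(q+\lambda)^3},
```

so $g' > 0$ for $\lambda < q$ and $g' < 0$ for $\lambda > q$: the response is maximal at $\lambda = q$, where $g(q) = 1/4$, and vanishes at $\lambda = 0$ and as $\lambda \to \infty$.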
In practice. For a given $q$, $N$ independent RSFs are sampled, in expected time of order $\mathcal{O}(N(n + |\mathcal{E}|/q))$. Each RSF provides an independent estimate of $\hat{x}$, and all $N$ estimates are finally averaged.
Generalisation. Instead of estimating results of the form $\hat{x} = q(qI + L)^{-1} y$, one may need to estimate results of the form $\hat{x} = (Q + L)^{-1} Q\, y$, where $Q = \mathrm{diag}(q_1, \ldots, q_n)$ is a diagonal matrix with $q_i > 0$ for all $i$. In order to tackle this case, one considers the following distribution over forests:

(16)   $P(\Phi_Q = \varphi) \propto \prod_{i \in \rho(\varphi)} q_i \prod_{(i,j) \in \varphi} W_{ij},$
which generalizes the distribution of Eq. (4). One can efficiently sample from this distribution, also via a variant of Wilson's algorithm (see the next paragraph). The introduced estimators generalize naturally to this case. In fact, given a RSF $\Phi_Q$, their formulation is exactly the same, the sole difference stemming from the distribution from which the forest is drawn. Propositions 1 and 2 are still correct (proofs are omitted due to lack of space) for $K = (Q + L)^{-1} Q$ instead of $K = q(qI + L)^{-1}$.
Implementation. In a nutshell, an algorithm sampling from the distribution of Eq. (16): i/ adds a node $\Delta$ to the graph and connects it to each node $i$ with weight $q_i$; ii/ runs Wilson's RandomTreeWithRoot algorithm (based on loop-erased random walks; see Figure 1 of [10]) on this augmented graph to sample a UST rooted in $\Delta$; iii/ cuts the edges connected to $\Delta$, yielding a RSF $\varphi$. Then, the estimator $\tilde{x}$ associates to each node $i$ the value of $y$ at its root $r_\varphi(i)$, whereas the estimator $\bar{x}$ associates to each node the average of $y$ over all the nodes belonging to its tree. All these operations can be done in a distributed fashion: no centralized knowledge of the graph is needed. Also, once the RSF is sampled, the computations involved for the estimators are not only distributed but can also be made parallel (within each tree of the forest). To give an order of magnitude of computation times, for a random vector $y$ and $q = 1$, our naive Julia implementation of $\bar{x}$ runs on average in 35 (resp. 550) ms on a laptop, on two graphs of increasing size. These times are to be compared to the optimized built-in sparse matrix-vector multiplication, running in 6 (resp. 115) ms on the same graphs, which is the building block of both the conjugate gradient and the polynomial approximation methods mentioned in the introduction. Our methods are thus comparable to the state of the art in computation time.
4 Experiments
Illustration on an image. Fig. 1 shows a denoising example on a grayscale image. Constant irregular patches are observed on the realisation of $\tilde{x}$: they are the trees of the associated RSF realisation. Also, as expected, $\bar{x}$ converges faster than $\tilde{x}$ (as the number of sampled forests $N$ increases) for all values of $q$.
Illustration on SSL. The goal of SSL is to infer the label (or class) of all the nodes of a graph given a few pre-labelled nodes. Consider a partial labeling $\{y_k\}_{k=1,\ldots,C}$ of the nodes, where $C$ is the number of classes and $y_k(i)$ is equal to $1$ if node $i$ is a priori known to belong to class $k$, and $0$ otherwise. The objective is to find classification functions $\{\hat{x}_k\}_{k=1,\ldots,C}$ such that each $\hat{x}_k$ is on the one hand close to the labeling function $y_k$ and on the other hand smooth on the graph, with a trade-off given by a parameter $q$. Depending on the choice of Laplacian used to define this graph smoothness (in the following, $\sigma = 1$ corresponds to the combinatorial Laplacian, $\sigma = 1/2$ to the normalized Laplacian, $\sigma = 0$ to the random walk Laplacian), the explicit formulation of $\hat{x}_k$ can be written as (Prop. 2.2 of [3]):

$\hat{x}_k = q \left( qI + D^{-\sigma} L D^{\sigma - 1} \right)^{-1} y_k.$

Note that this can be rewritten, with $Q = qD$ and $z_k = D^{\sigma - 1} y_k$, as:

$\hat{x}_k = D^{1 - \sigma} (Q + L)^{-1} Q\, z_k.$

Finally, once all classification functions are computed, each node $i$ is classified in the class $c(i) = \operatorname*{argmax}_k \hat{x}_k(i)$.
One may use our estimators to solve the SSL classification task: for each $k$, i/ use the proposed estimators on the vector $z_k = D^{\sigma - 1} y_k$ to estimate $(Q + L)^{-1} Q\, z_k$; ii/ left-multiply the result by $D^{1 - \sigma}$ to obtain an estimate $\bar{x}_k$ of $\hat{x}_k$; iii/ once all $C$ functions have been estimated, classify each node $i$ in the class $\operatorname*{argmax}_k \bar{x}_k(i)$. In the following, the values of $\sigma$ and $q$ are fixed.
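The pipeline of steps i/-iii/ can be sketched end-to-end. For clarity, this toy version (our own, with names of our choosing) takes $\sigma = 1$, for which the pre- and post-multiplications by powers of $D$ reduce to the identity, and uses exact inversion in place of the forest estimates:

```python
import numpy as np

def ssl_classify(W, labels, n_classes, q=1.0):
    """Toy SSL by graph smoothing (exact inversion stands in for the forest
    estimators). labels[i] is the a priori class of node i, or -1 if unknown.
    Computes x_k = (Q + L)^{-1} Q y_k with Q = qD, then classifies by argmax_k."""
    n = W.shape[0]
    d = W.sum(axis=1)
    L = np.diag(d) - W                      # combinatorial Laplacian
    Q = q * np.diag(d)
    scores = np.zeros((n_classes, n))
    for k in range(n_classes):
        y_k = (np.asarray(labels) == k).astype(float)   # one-hot label function
        scores[k] = np.linalg.solve(Q + L, Q @ y_k)     # smoothed labels
    return scores.argmax(axis=0)            # class with highest smoothed score

# two triangles joined by a single edge; one labeled node per community
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    W[i, j] = W[j, i] = 1.0
labels = [0, -1, -1, -1, -1, 1]
classes = ssl_classify(W, labels, n_classes=2)
# classes: [0, 0, 0, 1, 1, 1] -- each triangle inherits its labeled node's class
```

Replacing the `np.linalg.solve` call by the forest-based estimate of $(Q + L)^{-1} Q\, y_k$ gives the randomized variant evaluated below.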
We illustrate this on the Stochastic Block Model (SBM): a random graph model with communities. Consider a SBM with $n$ nodes and two communities of equal size. We generate two scenarios: a well-structured (resp. fuzzy) setting with a high (resp. low) probability of intra-block connection and a lower probability of inter-block connection, which corresponds to a sparse graph with a higher (resp. lower) average degree. The following experiment is performed: i/ choose a small number of nodes randomly in each community to serve as a priori knowledge on the labels, and use them to define the two label functions $y_1$ and $y_2$; ii/ compute the two classification functions either via direct computation (the exact method) or via our proposed estimators; iii/ classify each node $i$ in the community $\operatorname*{argmax}_k \bar{x}_k(i)$. The performance of the community recovery is measured by the Adjusted Rand Index (ARI), a number between $0$ and $1$: the larger it is, the more accurate the recovery of the ground truth. Results are shown in Fig. 2. The estimator $\bar{x}$ matches the performance of the exact method after a sufficient number of forest realizations $N$. Also, the smaller the amount of prior knowledge and the fuzzier the block structure, the harder it is to match the exact performance. A closer look at the sampled forests shows that some trees do not contain any labeled nodes, thus failing to propagate the label information. This proof of concept could be improved in various ways to avoid this difficulty, going beyond the scope of this paper.
5 Conclusion
We provide an original and scalable method to estimate graph smoothing operations in large graphs. In future work, we will further explore this deep link between RSFs and Laplacian-based numerical linear algebra, and apply these ideas to more advanced graph signal processing operations.
References

[1] D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P. Vandergheynst, "The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains," IEEE Signal Processing Magazine, vol. 30, no. 3, pp. 83–98, May 2013.
[2] A. Sandryhaila and J. M. F. Moura, "Big data analysis with signal processing on graphs: Representation and processing of massive data sets with irregular structure," IEEE Signal Processing Magazine, vol. 31, no. 5, pp. 80–90, Sept. 2014.
[3] K. Avrachenkov, A. Mishenin, P. Gonçalves, and M. Sokol, "Generalized optimization framework for graph-based semi-supervised learning," in Proceedings of the 2012 SIAM International Conference on Data Mining, pp. 966–974.
[4] G. Puy, N. Tremblay, R. Gribonval, and P. Vandergheynst, "Random sampling of bandlimited signals on graphs," Applied and Computational Harmonic Analysis, 2016.
[5] Y. Saad, Iterative Methods for Sparse Linear Systems, vol. 82, SIAM, 2003.
[6] D. I. Shuman, P. Vandergheynst, and P. Frossard, "Chebyshev polynomial approximation for distributed signal processing," in Distributed Computing in Sensor Systems and Workshops (DCOSS), 2011 International Conference on, June 2011, pp. 1–8.
[7] F. R. K. Chung, Spectral Graph Theory, no. 92, American Mathematical Society, 1997.
[8] R. Agaev and P. Chebotarev, "Spanning forests of a digraph and their applications," Automation and Remote Control, vol. 62, no. 3, pp. 443–466, 2001, arXiv: math/0602061.
[9] S. Barthelmé, N. Tremblay, A. Gaudillière, L. Avena, and P.-O. Amblard, "Estimating the inverse trace using random forests on graphs," in Proc. GRETSI Symposium Signal and Image Processing, Lille, France, 2019, arXiv: 1905.02086.
[10] D. B. Wilson, "Generating random spanning trees more quickly than the cover time," in Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, 1996, pp. 296–303.
[11] L. Avena and A. Gaudillière, "Two applications of random spanning forests," Journal of Theoretical Probability, July 2017.
[12] L. Avena and A. Gaudillière, "Random spanning forests, Markov matrix spectra and well distributed points," arXiv:1310.1723 [math], Oct. 2013.
[13] N. Tremblay, P. Gonçalves, and P. Borgnat, "Chapter 11: Design of graph filters and filterbanks," in Cooperative and Graph Signal Processing, P. M. Djurić and C. Richard, Eds., pp. 299–324, Academic Press, 2018.