1 Introduction
The computation of currents and voltages in a resistive electrical network, besides being an interesting problem on its own, is a crucial primitive in many recently proposed optimization algorithms on weighted networks. Examples include the fast computation of maximum flows [5, 13], network sparsification [23], and the generation of random spanning trees [11].
Solving the electrical flow problem requires solving a system of linear equations, whose variables are the electrical voltages, or “potentials”, at the nodes of the network (equivalently, the currents traversing its edges). Performing this task can be computationally nontrivial, and is typically achieved in a centralized fashion.
At the same time, the ability to perform this task in a fully decentralized way is implicit in a number of biological systems by virtue of the electronic-hydraulic analogy [12], including the P. polycephalum slime mold [25, 3, 24] and ant colonies [16]. These organisms have been shown to implicitly solve the electrical flow problem in the process of forming food-transportation networks. This capability of biological systems naturally raises the following questions, which motivate our paper:

(Q1) Can this task be collectively accomplished by the network itself, if every node is an agent that follows an elementary protocol, and each agent can only interact with its immediate neighbours, otherwise possessing no knowledge of the underlying topology?

(Q2) In case of a positive answer to (Q1), what is the computational effort involved for the network, in terms of convergence time and communication overhead?
We address the two aforementioned questions by providing analytical bounds which are of interest for many bio-inspired multi-agent systems in swarm robotics and sensor networks (e.g. [20, 10]).
1.1 Our contribution
We propose two complementary, fully decentralized approaches to electrical flow computation on a weighted network. Towards question (Q1), we make the following contributions:

We consider a deterministic distributed process, based on Jacobi’s iterative method for solving linear systems. This process converges to a solution of Kirchhoff’s node potential equations. We bound the convergence rate of this process in terms of a graph-theoretic parameter of the network, its graph conductance.

Driven by a natural probabilistic interpretation of the aforementioned process, we further consider a randomized token diffusion process, implementing Monte Carlo sampling via independent random walks. This process also converges to a solution of Kirchhoff’s node potential equations but, differently from the deterministic algorithm, randomized token diffusion does not involve any arithmetic on real numbers: each agent/node simply maintains a counter of the number of random walks currently visiting it, from which a simple unbiased estimator of the node’s potential can be derived. We derive a bound on the convergence rate of this process in terms of another graph-theoretic parameter, the edge expansion.
With respect to question (Q2), while the strong connection between electrical flows and random walks has long been known and has been extensively investigated [7, 26], any effective exploitation of electrical flow computation crucially requires explicit bounds on the efficiency and accuracy of the algorithms under consideration. In this respect, besides establishing the correctness of the two algorithmic approaches above, our core contribution is to derive detailed bounds on their time and communication complexities in terms of fundamental combinatorial properties of the network.
Finally, our results highlight the algorithmic potential of classical models of opinion formation, as discussed in more detail in the next section.
1.2 Related work
We briefly review contributions that are most closely related to the spirit of this work.
Computing electrical flows.
The problem of computing voltages and currents of a given resistive network, that is, the question of solving Kirchhoff’s equations, is a classical example of solving a linear equation system with a Laplacian constraint matrix [29]. While Jacobi’s method is a well-known approach to the solution of a class of linear systems that subsumes Laplacian systems, its complexity analysis in the literature (for example, in [21, 28]) is generic, and does not exploit the additional matrix structure inherent in Laplacian systems. In our setting, existing results on the convergence of Jacobi’s method could be leveraged to prove convergence to a correct solution, but they would fail to provide explicit bounds on the convergence rate.
Electrical flows and random walks.
Relations between electrical quantities and statistical properties of random walks have been known for a long time, and are nicely discussed, for example, in Doyle and Snell [7], Lovász [15], and Levin, Peres and Wilmer [14]. In particular, in their monograph, Doyle and Snell point out the interpretation of the electrical current along an edge as the expected number of net traversals of the edge by a random walker that is injected at the source node and absorbed at the sink node of the flow. While our randomized token diffusion algorithm refers to the same underlying process, it crucially differs from the former in the interpretation of electrical current, as it links the electric potential of a node to the expected number of random walks currently visiting the node (see Section 4).
This connection has also been explored by Chandra et al. [4], who characterized the cover time of the random walk in terms of the maximal effective resistance of the network, as well as by Tetali [26], who characterized the hitting time of the random walk in terms of the effective resistance between source and sink. Tetali [26] also proved that the expected number of visits to a node by a random walker injected at the source and absorbed at the sink is related to the electrical potential of the node in a simple way. In principle, like the one discussed by Doyle and Snell, this characterization could be used as the basis for another random walk-based approach to the estimation of electric potentials, with essentially the same complexity as the method we propose. Still, this is a static characterization that, by itself, does not provide an iterative algorithm or error bound. On the contrary, our interpretation results in an estimator which depends solely on the number of tokens at each node and is thus entirely local, as opposed to previous methods, which entail tracking an event that depends on global properties of the network (such as the hitting time of a specific node or the absorption at the sink). Hence, we believe our proposed randomized diffusion process is more suitable to accommodate dynamic changes in the weights of the network when coupled with other processes, such as the Physarum dynamics [3, 16].
Electrical flows and complex systems.
Understanding how electrical flows are computed in a decentralized fashion can help explain the emergent behavior of certain social and biological systems. For example, foraging behaviors of the P. polycephalum slime mold [25, 3, 24] and of ant colonies [16] can both be formulated in terms of current-reinforced random walks, see Ma et al. [16]. In this respect, our results can be seen as a step towards a more thorough understanding of these complex biological processes at the microscopic scale. Moreover, the simple processes we propose shed new light on the computational properties of models of opinion dynamics in social networks [6, 1, 19]. In particular, the classical model of opinion formation proposed by DeGroot [6] essentially corresponds to the decentralized version of Jacobi’s iterative method presented in this paper (the only difference is the presence of two “special” agents, the source and the sink, whose behaviours slightly differ from the others, in that they exchange information with the exterior in the form of a current flow). This is a hint that opinion dynamics are extremely versatile processes, whose algorithmic potential is not completely understood.
Distributed optimization.
Since the electrical flow is one of minimum energy, the decentralized computation of electrical flows can be seen as an instance of distributed optimization, akin to the problems considered within the multi-agent framework introduced by [27] and of potential interest for the class of distributed constraint optimization problems considered, for example, in [18]. Although for other distributed problems it has been suggested that Laplacian-based approaches are not the most computationally effective [8], as mentioned in the previous paragraph, current-reinforced random walks are considered a feasible model, at least for certain biological systems [16, 9]. In this paper, we leverage the specific structure of electrical flows to prove the effectiveness of our decentralized solutions. In particular, the dependency of our convergence rates on the size of the network is polynomial, which cannot be claimed for other, more generic approaches to distributed optimization.
Finally, in the rather different context of social choice and agents with preferences (as opposed to our perspective, motivated by natural processes for network optimization), the authors of [22] investigate mechanisms for social choice on social networks as a weighted form of classical preference aggregation. One of the update processes they consider is loosely related to our Jacobi process.
1.3 Outline
The rest of this paper is organized as follows. In Section 2 we discuss some preliminaries about electrical networks and flows and set up the necessary notation and terminology. In Section 3 we describe and analyze the deterministic distributed algorithm based on Jacobi’s method for the solution of Kirchhoff’s equations. In Section 4 we propose and analyze our randomized token-diffusion method for the estimation of the electric potentials. We conclude by summarizing our findings in Section 5.
2 Preliminaries on electrical networks and notation
We consider a graph $G = (V, E)$, with node set $V$, edge set $E$, and positive edge weights representing electrical conductances. We also denote the weight of an edge by $w_{uv}$ if $u$ and $v$ are the endpoints of the edge; if no edge corresponds to the pair $\{u, v\}$, we set $w_{uv} = 0$. We use $n$ and $m$ to denote the number of nodes and edges, respectively, of the graph. Without loss of generality, we assume that $G$ is connected.
The (weighted) adjacency matrix of $G$ is the $n \times n$ matrix $A$ whose entry $A_{uv}$ is equal to $w_{uv}$ if $\{u, v\} \in E$, and to $0$ otherwise. The volume (or generalized degree) of a node $u$ is the total weight of the edges incident to it, and is denoted by $\operatorname{vol}(u)$. The generalized degree matrix is the diagonal matrix $D$ with $D_{uu} = \operatorname{vol}(u)$. The matrix $P = D^{-1} A$ is the transition matrix of $G$; we denote its eigenvalues (which are all real) by $1 = \lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_n \ge -1$. We use $\operatorname{vol}_{\min}$ and $\operatorname{vol}_{\max}$ to denote the smallest and largest volume, respectively, of the nodes of $G$. We adhere to standard linear algebra notation, and we reserve boldface type for vectors. We denote by $\mathbf{e}_i$ the $i$-th standard basis vector, that is, a vector whose entries are $0$ except for the $i$-th entry, which is $1$. With $\mathbf{0}$ and $\mathbf{1}$ we denote vectors with entries all equal to 0 and 1, respectively. In the next sections, we make use of the following fact.
Fact 1.
The transition matrix $P = D^{-1}A$ is similar to the symmetric matrix $N = D^{-1/2} A D^{-1/2}$ via conjugation by the matrix $D^{1/2}$. In particular, we have
$$N = D^{1/2}\, P\, D^{-1/2}.$$
Moreover, thanks to the fact that $N$ is symmetric, $N$ has $n$ orthonormal eigenvectors $\mathbf{w}_1, \dots, \mathbf{w}_n$, which correspond to eigenvectors $\mathbf{v}_i = D^{-1/2}\mathbf{w}_i$ of $P$ via the similarity transformation, for each $i$. Observe also that both $\mathbf{w}_i$ and $\mathbf{v}_i$, for each $i$, are associated to the same eigenvalue $\lambda_i$. The Laplacian matrix of $G$ is the matrix $L = D - A$. We denote by $0 = \mu_1 \le \mu_2 \le \dots \le \mu_n$ the eigenvalues of $L$. We often use the facts that $L = D(I - P)$ and that $L\mathbf{1} = \mathbf{0}$.
In our setting, one node $s$ of the graph acts as the source, and one node $t$ as the sink of the electrical flow. Kirchhoff’s equations for a network are then neatly expressed by the linear system
$$L\,\boldsymbol{\phi} = \mathbf{b},$$
where $\boldsymbol{\phi}$ is the unknown vector of electric potentials, and $\mathbf{b}$ is a vector such that $b_s = 1$, $b_t = -1$, and $b_u = 0$ if $u \neq s, t$. The electrical flow is easily obtained from the vector $\boldsymbol{\phi}$: the electrical flow along an edge $\{u, v\}$, in the direction from $u$ to $v$, equals $w_{uv}(\phi_u - \phi_v)$.
For a given weighted graph with a source and a sink, the electrical flow is uniquely defined. We remark however that Kirchhoff’s equations have infinitely many solutions, since electric potentials are defined up to a constant offset: if $L\boldsymbol{\phi} = \mathbf{b}$, then $L(\boldsymbol{\phi} + c\mathbf{1}) = \mathbf{b}$ for any constant $c$ (since $L\mathbf{1} = \mathbf{0}$). We call the (unique) solution such that the potential of the sink is zero the grounded solution to Kirchhoff’s equations.
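As a concrete illustration of the grounded solution, the following minimal Python sketch solves Kirchhoff’s equations on a toy unit-weight triangle with source $s$, middle node $a$, and sink $t$, by fixing the sink potential to zero, deleting the sink row and column of the Laplacian, and solving the remaining system. The instance and all identifiers are our own illustrative choices, not taken from the paper.

```python
# Toy instance: unit-weight triangle with nodes ordered (s, a, t); sink is t.
L = [[ 2.0, -1.0, -1.0],   # Laplacian L = D - A
     [-1.0,  2.0, -1.0],
     [-1.0, -1.0,  2.0]]
b = [1.0, 0.0, -1.0]       # unit current: injected at s, extracted at t

# Ground the sink: delete its row and column, keeping a 2x2 system.
Lg = [[L[0][0], L[0][1]],
      [L[1][0], L[1][1]]]
bg = [b[0], b[1]]

# Solve the 2x2 system by Cramer's rule.
det = Lg[0][0] * Lg[1][1] - Lg[0][1] * Lg[1][0]
phi_s = (bg[0] * Lg[1][1] - Lg[0][1] * bg[1]) / det
phi_a = (Lg[0][0] * bg[1] - bg[0] * Lg[1][0]) / det
phi = [phi_s, phi_a, 0.0]  # grounded solution: potential of the sink is 0
```

On this instance the grounded potentials come out as $(2/3, 1/3, 0)$, and one can check directly that the full system $L\boldsymbol{\phi} = \mathbf{b}$ is satisfied.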
The graph conductance (or bottleneck ratio) of graph $G$ is the constant
$$\Phi(G) = \min_{\emptyset \neq S \subset V :\, \operatorname{vol}(S) \le \operatorname{vol}(V)/2} \frac{w(S, V \setminus S)}{\operatorname{vol}(S)},$$
where $\operatorname{vol}(S)$ denotes the total volume of the nodes in $S$, and $w(S, V \setminus S)$ denotes the total weight of the edges crossing the cut $(S, V \setminus S)$. The edge expansion of $G$ is the constant
$$h(G) = \min_{\emptyset \neq S \subset V :\, |S| \le n/2} \frac{w(S, V \setminus S)}{|S|}.$$
The graph conductance is a number between $0$ and $1$, while the edge expansion is a number between $0$ and $\operatorname{vol}_{\max}$.
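On small graphs both quantities can be computed by exhaustive enumeration of cuts. The following Python sketch does so for a toy "barbell" graph, two unit-weight triangles joined by a bridge edge; the instance and all identifiers are our own illustrative choices.

```python
from itertools import combinations

# Barbell graph: triangles {0,1,2} and {3,4,5} joined by the bridge (2, 3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
nodes = range(6)
w = {}
for u, v in edges:
    w[u, v] = w[v, u] = 1.0                      # unit conductances
vol = {u: sum(w.get((u, v), 0.0) for v in nodes) for u in nodes}
total_vol = sum(vol.values())

def cut_weight(S):
    """Total weight of the edges crossing the cut (S, V \\ S)."""
    return sum(w[u, v] for (u, v) in edges if (u in S) != (v in S))

# Graph conductance: minimize cut weight over volume, vol(S) <= vol(V)/2.
conductance = min(
    cut_weight(S) / sum(vol[u] for u in S)
    for k in range(1, len(nodes))
    for S in map(set, combinations(nodes, k))
    if sum(vol[u] for u in S) <= total_vol / 2
)
# Edge expansion: minimize cut weight over cardinality, |S| <= n/2.
edge_expansion = min(
    cut_weight(S) / len(S)
    for k in range(1, len(nodes) // 2 + 1)
    for S in map(set, combinations(nodes, k))
)
```

Here the bottleneck is the bridge: the conductance is $1/7$ (cut weight $1$ over volume $7$ of one triangle) and the edge expansion is $1/3$.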
3 Jacobi’s method
The potentials $\boldsymbol{\phi}$ are a solution of the linear system
$$L\,\boldsymbol{\phi} = \mathbf{b}. \qquad (1)$$
A classic parallel iterative algorithm for solving such a system is Jacobi’s method [21, 28], which goes as follows. System (1) can be rewritten as $D(I - P)\boldsymbol{\phi} = \mathbf{b}$, which is equivalent to
$$\boldsymbol{\phi} = P\,\boldsymbol{\phi} + D^{-1}\mathbf{b}. \qquad (2)$$
The idea underlying Jacobi’s method is to introduce the related linear recurrence system
$$\boldsymbol{\phi}(t+1) = P\,\boldsymbol{\phi}(t) + D^{-1}\mathbf{b}. \qquad (3)$$
For any node $u$, (3) becomes
$$\phi_u(t+1) = \frac{\sum_{v} w_{uv}\,\phi_v(t) + b_u}{\sum_{v} w_{uv}}, \qquad (4)$$
where the sums range over all neighbors $v$ of $u$. Note that the denominator in (4) equals $\operatorname{vol}(u)$.
Indeed, from (2) we know that $\boldsymbol{\phi}$ is a fixed point of (4) whenever $L\boldsymbol{\phi} = \mathbf{b}$. This suggests running the following message-passing algorithm at every node (Algorithm 1).
Algorithm 1 does not specify an initial value for the potentials: any initial value can be used. There is also no explicit termination condition: terminating the algorithm sooner or later only affects the numerical error, as explained in the next subsection.
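For concreteness, here is a minimal Python simulation of the update (4) on a toy unit-weight triangle with source s and sink t, starting from the all-zero potential vector. The graph, the number of rounds, and all identifiers are our own illustrative choices; the sketch runs the rounds centrally rather than as actual message passing.

```python
# Unit-weight triangle; unit current enters at s and leaves at t.
nodes = ['s', 'a', 't']
w = {(u, v): 1.0 for u in nodes for v in nodes if u != v}
nbrs = {u: [v for v in nodes if v != u] for u in nodes}
vol = {u: sum(w[u, v] for v in nbrs[u]) for u in nodes}
b = {'s': 1.0, 'a': 0.0, 't': -1.0}

phi = {u: 0.0 for u in nodes}            # arbitrary initial potentials
for _ in range(200):                      # one Jacobi round per iteration
    phi = {u: (sum(w[u, v] * phi[v] for v in nbrs[u]) + b[u]) / vol[u]
           for u in nodes}

# At a fixed point, Kirchhoff's equations hold (up to numerical error):
residual = {u: vol[u] * phi[u] - sum(w[u, v] * phi[v] for v in nbrs[u]) - b[u]
            for u in nodes}
```

Potentials are determined up to an additive constant, so it is the differences that matter: on this instance $\phi_s - \phi_t$ converges to $2/3$, the effective resistance between $s$ and $t$.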
3.1 Correctness and rate of convergence
To study the convergence of Algorithm 1, fix any solution $\boldsymbol{\phi}$ of (1), and define the error at step $t$ as
$$\mathbf{e}(t) = \boldsymbol{\phi}(t) - \boldsymbol{\phi} = \mathbf{e}_{\perp}(t) + \mathbf{e}_{\parallel}(t), \qquad (5)$$
where $\mathbf{e}_{\perp}(t)$ is the projection of $\mathbf{e}(t)$ on the subspace orthogonal to $\mathbf{1}$, while $\mathbf{e}_{\parallel}(t)$ is the component of $\mathbf{e}(t)$ parallel to $\mathbf{1}$.
The reason we decompose $\mathbf{e}(t)$ as in (5) is that the solution of (1) is defined up to translation along $\mathbf{1}$, since $\mathbf{1}$ is in the kernel of $L$: any vector $\boldsymbol{\phi} + c\mathbf{1}$ satisfies (1) as well. Therefore, $\boldsymbol{\phi}(t)$ has converged to a solution of (1) as soon as $\boldsymbol{\phi}(t) = \boldsymbol{\phi} + c\mathbf{1}$ for some constant $c$, which implies that in (5) we do not care about the value of $\mathbf{e}_{\parallel}(t)$; we only care about $\mathbf{e}_{\perp}(t)$ having a small norm. This becomes clear by plugging the decomposition (5) into (3):
$$\mathbf{e}(t+1) = \boldsymbol{\phi}(t+1) - \boldsymbol{\phi} \stackrel{(a)}{=} P\,\mathbf{e}_{\perp}(t) + \mathbf{e}_{\parallel}(t), \qquad (6)$$
where in $(a)$ we used that for any transition matrix it holds $P\mathbf{1} = \mathbf{1}$, together with (2). From (6) it follows that $\mathbf{e}(t) = P^{t}\,\mathbf{e}(0)$.
By projecting, it also follows that
$$\mathbf{e}_{\perp}(t+1) \stackrel{(b)}{=} \left(P\,\mathbf{e}(t)\right)_{\perp} \stackrel{(c)}{=} \left(P\,\mathbf{e}_{\perp}(t)\right)_{\perp},$$
where in $(b)$ we used again that $P\mathbf{1} = \mathbf{1}$ and in $(c)$ we used that $P\,\mathbf{e}_{\parallel}(t)$ is parallel to $\mathbf{1}$ and thus vanishes under the projection. By repeating steps $(b)$ and $(c)$ above we can unroll the previous equation and get
$$\mathbf{e}_{\perp}(t) = \left(P^{t}\,\mathbf{e}_{\perp}(0)\right)_{\perp}. \qquad (7)$$
Recall Fact 1, which implies that $P^{t} = D^{-1/2} N^{t} D^{1/2}$, and that we can write
$$N^{t} = \sum_{i=1}^{n} \lambda_i^{t}\, \mathbf{w}_i \mathbf{w}_i^{\top}.$$
Combining the previous observations with (7), we get
$$\left\|\mathbf{e}_{\perp}(t)\right\| \le \sqrt{\frac{\operatorname{vol}_{\max}}{\operatorname{vol}_{\min}}}\; \lambda^{t}\, \left\|\mathbf{e}_{\perp}(0)\right\|, \qquad (8)$$
where we used the fact that the term of $N^{t}$ corresponding to $\lambda_1 = 1$ maps onto the span of $\mathbf{1}$ and thus vanishes under the projection, the submultiplicativity of the norm, the bounds $\|D^{1/2}\mathbf{x}\| \le \sqrt{\operatorname{vol}_{\max}}\,\|\mathbf{x}\|$ and $\|D^{-1/2}\mathbf{x}\| \le \|\mathbf{x}\|/\sqrt{\operatorname{vol}_{\min}}$, and where by definition $\lambda = \max(|\lambda_2|, |\lambda_n|)$.
3.2 Time and message complexity
In summary, the arguments in the previous subsection prove the following.
Theorem 3.1.
After $t$ rounds, the orthogonal component of the error of the solution produced by Algorithm 1 is reduced by a factor
$$\sqrt{\frac{\operatorname{vol}_{\max}}{\operatorname{vol}_{\min}}}\; \lambda^{t}, \qquad (9)$$
where $\lambda = \max(|\lambda_2|, |\lambda_n|)$ is the second largest absolute value of an eigenvalue of $P$. The message complexity per round is $O(m)$.
Proof.
The statement follows directly from the bound (8) derived in the previous subsection. ∎

Observe that in typical applications we have $\lambda = \lambda_2$. This is the case, for example, when one considers the lazy variant of a transition matrix in order to avoid pathological cases [14, Section 1.3]. The condition $\lambda = \lambda_2$ implies, in particular, that (9) in Theorem 3.1 can be bounded in terms of the graph conductance of the network (we remark that here the term conductance refers to the graph-theoretic notion also known as bottleneck ratio [14], and shall not be confused with the notion of electrical conductance in the theory of electrical networks [7]), since for any graph $G$ it holds $\lambda_2 \le 1 - \Phi(G)^2/2$ [14, Theorem 13.14].
4 A token diffusion method
Following a well-known analogy between electrical flows and random walks [7, 26, 4, 14], in this section we propose a random walk-based approach to approximating the electric potentials. The process is described by Algorithm 2. In each round, the algorithm starts $K$ new, mutually independent random walks at the source node. Each random walker (or token) moves one step during each round of the algorithm, until it reaches the sink node, where it is absorbed. The parameter $K$ controls the accuracy of the process.
Let $c_u(t)$ denote the number of tokens at vertex $u$ at the end of round $t$, when $K$ new independent random walks are started at the source in each round. Our estimator of the potential at node $u$ at time $t$ will be
$$\widehat{\phi}_u(t) = \frac{c_u(t)}{K \cdot \operatorname{vol}(u)}.$$
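The following Python sketch simulates the token diffusion process on a toy unit-weight triangle and evaluates the estimator; it assumes the estimator normalizes token counts by $K \cdot \operatorname{vol}(u)$, and the instance, the parameters K and T, and the random seed are our own illustrative choices.

```python
import random
random.seed(1)

# Unit-weight triangle {s, a, t}: K fresh tokens start at the source s each
# round; every token takes one random-walk step and is absorbed at the sink t.
nodes, s, t = ['s', 'a', 't'], 's', 't'
nbrs = {'s': ['a', 't'], 'a': ['s', 't'], 't': ['s', 'a']}
vol = {u: len(nbrs[u]) for u in nodes}        # unit weights: volume = degree
K, T = 3000, 40

tokens = []                                    # positions of all live tokens
for _ in range(T):
    tokens = [random.choice(nbrs[u]) for u in tokens]   # one step per token
    tokens = [u for u in tokens if u != t]              # absorb at the sink
    tokens += [s] * K                                   # inject fresh tokens

counts = {u: tokens.count(u) for u in nodes}
est = {u: counts[u] / (K * vol[u]) for u in nodes}     # potential estimate
```

On this instance the grounded potentials are $2/3$ at $s$, $1/3$ at $a$ and $0$ at $t$; in steady state the expected counts are $4K/3$ at $s$ and $2K/3$ at $a$, so the estimates concentrate around these values.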
We next show that in expectation, our estimator evolves following a recurrence that, though not identical, is very close to (4).
Lemma 4.1.
Consider Algorithm 2 with $K = 1$. Define $\boldsymbol{\pi}(t)$ inductively by
$$\boldsymbol{\pi}(0) = \mathbf{0}, \qquad (10)$$
$$\pi_u(t+1) = \frac{\sum_{v} w_{uv}\, \pi_v(t) + \mathbf{1}_{\{u = s\}}}{\operatorname{vol}(u)} \quad \text{for every non-sink node } u, \qquad (11)$$
while $\pi_u(t+1) = 0$ when $u$ is the sink. Then, for every time $t$ and for every node $u$ we have:
$$\mathbb{E}\left[c_u(t)\right] = \operatorname{vol}(u)\, \pi_u(t).$$
Proof.
The claim is proved by induction on $t$. It clearly holds when $t = 0$, since at that time there are no tokens and thus $\mathbb{E}[c_u(0)] = 0 = \operatorname{vol}(u)\,\pi_u(0)$. For $t \ge 1$, and for every non-sink node $u$:
$$\mathbb{E}\left[c_u(t+1) \mid \mathbf{c}(t)\right] = \sum_{v} P_{vu}\, c_v(t) + \mathbf{1}_{\{u = s\}},$$
where we used that each token at $v$ moves to $u$ with probability $P_{vu} = w_{uv}/\operatorname{vol}(v)$. Dividing both sides by $\operatorname{vol}(u)$, we obtain:
$$\frac{\mathbb{E}\left[c_u(t+1) \mid \mathbf{c}(t)\right]}{\operatorname{vol}(u)} = \frac{\sum_v w_{uv}\, \frac{c_v(t)}{\operatorname{vol}(v)} + \mathbf{1}_{\{u = s\}}}{\operatorname{vol}(u)}.$$
By recalling that $\widehat{\phi}_u(t) = c_u(t)/\operatorname{vol}(u)$ when $K = 1$ and by the law of iterated expectations we obtain:
$$\mathbb{E}\left[\widehat{\phi}_u(t+1)\right] = \frac{\sum_v w_{uv}\, \mathbb{E}\left[\widehat{\phi}_v(t)\right] + \mathbf{1}_{\{u = s\}}}{\operatorname{vol}(u)}. \qquad (12)$$
Recurrence (12) has the very same form as (11). In particular, from the inductive hypothesis we obtain $\mathbb{E}[\widehat{\phi}_u(t+1)] = \pi_u(t+1)$, that is, $\mathbb{E}[c_u(t+1)] = \operatorname{vol}(u)\,\pi_u(t+1)$. This completes the proof. ∎
The following corollary justifies our estimator in Algorithm 3 when $K \ge 1$:
Corollary 4.2.
Let
$$\widehat{\phi}_u(t) = \frac{c_u(t)}{K \cdot \operatorname{vol}(u)}$$
for every node $u$ and every time $t$. Then:
$$\mathbb{E}\left[\widehat{\phi}_u(t)\right] = \pi_u(t).$$
Proof.
By linearity of expectation, the process with $K$ walks per round is the superposition of $K$ independent copies of the process with $K = 1$, so the claim follows from Lemma 4.1. ∎

Note that the definition of $\boldsymbol{\pi}$ in Lemma 4.1 is akin to that of $\boldsymbol{\phi}$ in Equation (4). One might thus reasonably expect that, like $\boldsymbol{\phi}(t)$, $\boldsymbol{\pi}(t)$ also converges to a solution of Kirchhoff’s equations. Nevertheless, the two definitions are different, and establishing this requires a separate proof, which we give in Section 4.1.
That result will justify the interpretation of $\pi_u(t)$, and hence of the estimator $\widehat{\phi}_u(t)$, as an iterative approximation of the correct Kirchhoff potentials. Note that there are two sources of inaccuracy in this estimation. One is intrinsic to the iterative process, i.e., the rate with which $\boldsymbol{\pi}(t)$ converges to a solution of Kirchhoff’s equations; this will be the subject of Section 4.2. The second source of error is stochastic and reflects the accuracy of the estimator itself; it will be discussed in Section 4.3, where we show that for a large enough $K$, the estimator yields an accurate approximation of the potential with high probability, and not only in expectation.
4.1 Correctness of the token diffusion method
We can re-express the system (11) as
$$\boldsymbol{\pi}(t+1) = Q\, \boldsymbol{\pi}(t) + \mathbf{c}, \qquad (13)$$
where $Q$ is obtained from the transition matrix $P$ by zeroing out all entries on the row and column corresponding to the sink node. Likewise, $\mathbf{c}$ is obtained from $D^{-1}\mathbf{b}$ by zeroing out the entry corresponding to the sink node, so that its only nonzero entry is $1/\operatorname{vol}(s)$ at the source. We next prove that the spectral radius of $Q$ is strictly between $0$ and $1$. Using this fact, we prove that the token diffusion method converges to a feasible potential vector.
Lemma 4.3.
The spectral radius $\rho(Q)$ of $Q$ satisfies $0 < \rho(Q) < 1$. More precisely, $\rho(Q) = 1 - \mathbf{y}^{\top}\boldsymbol{\delta}$, where $\mathbf{y}^{\top}$ is the left Perron eigenvector of $Q$, normalized so that $\mathbf{y}^{\top}\mathbf{1} = 1$, and $\boldsymbol{\delta}$ is the vector of row gaps of $Q$ defined in the proof below.
Proof.
First of all, observe that $Q$ is diagonalizable: if we let $N_Q$ be the matrix obtained from $N$ by zeroing out the entries on the row and column corresponding to the sink node, then $Q = D^{-1/2} N_Q D^{1/2}$, and thus $Q$ is similar to the symmetric real matrix $N_Q$.
Moreover, $Q$ is nonnegative, and the Perron-Frobenius theorem for nonnegative matrices (for example, see [17, Section 8.3]) guarantees the existence of a nonnegative row vector $\mathbf{y}^{\top}$ such that $\mathbf{y}^{\top} Q = \rho(Q)\,\mathbf{y}^{\top}$. Without loss of generality, assume that $\mathbf{y}^{\top}\mathbf{1} = 1$.
Now observe that
$$\rho(Q) = \rho(Q)\, \mathbf{y}^{\top}\mathbf{1} = \mathbf{y}^{\top} Q\, \mathbf{1}.$$
Let $\delta_u = 1 - \sum_v Q_{uv} \ge 0$ be the nonnegative “gaps” between $1$ and the row sums of $Q$. Then we can continue,
$$\rho(Q) = \mathbf{y}^{\top} Q\, \mathbf{1} = \mathbf{y}^{\top}(\mathbf{1} - \boldsymbol{\delta}) = 1 - \mathbf{y}^{\top}\boldsymbol{\delta}, \qquad (14)$$
where $\boldsymbol{\delta} = (\delta_u)_u$ is the vector of row gaps, and for the last equality we used the normalization $\mathbf{y}^{\top}\mathbf{1} = 1$. Note that, since every row of $P$ sums to $1$, for $u$ different from the sink, $\delta_u$ is the same as $P_{u,\mathrm{sink}}$, that is, the probability that a token at node $u$ reaches the sink in one step.
We claim that the spectral radius is strictly positive. To see that, observe that, similarly to Fact 1, $Q$ is similar to $N_Q$ and thus shares the same eigenvalues. If the spectral radius were zero, $N_Q$ would be a null matrix, i.e., a matrix whose entries are all zeros (this follows by looking at the diagonalized form of $N_Q$). Since $Q$ is similar to $N_Q$, $Q$ would be a null matrix as well, which is clearly not the case for any nonempty graph.
Observe also that $\boldsymbol{\delta} \ge \mathbf{0}$ and therefore $\rho(Q) \le 1$. To show that $\rho(Q) < 1$, assume by contradiction that $\rho(Q) = 1$. Then, (14) implies $\mathbf{y}^{\top}\boldsymbol{\delta} = 0$, i.e., $y_u = 0$ whenever $\delta_u > 0$; in particular, $y_u = 0$ for each node $u$ adjacent to the sink. Continuing this argument would yield that $y_u = 0$ for each $u$ adjacent to a node adjacent to the sink (since the $u$-th entry of $\mathbf{y}^{\top} Q$ equals $\sum_v y_v Q_{vu} = y_u$, a sum of nonnegative terms), and so on. Since the original graph is connected, this contradicts the fact that $\mathbf{y}^{\top}\mathbf{1} = 1$. ∎
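As a numerical illustration of Lemma 4.3, one can estimate the spectral radius of the grounded matrix from the growth rate of the norm of $Q^k \mathbf{x}$. The sketch below does this for the toy unit-weight triangle, where the grounded transition matrix on the two non-sink nodes sends each to the other with probability 1/2; the instance and identifiers are our own illustrative choices.

```python
# Grounded transition matrix of the unit-weight triangle {s, a, t}: the sink
# row and column are removed; from s (or a) the walk moves to the other
# non-sink node with probability 1/2 and is absorbed otherwise.
Q = [[0.0, 0.5],
     [0.5, 0.0]]

def apply(M, x):
    """Matrix-vector product for small dense matrices."""
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]

# Power-iteration style estimate: ||Q^k x||^(1/k) approaches the spectral radius.
x, k = [1.0, 0.0], 20
for _ in range(k):
    x = apply(Q, x)
norm = sum(c * c for c in x) ** 0.5
rho_est = norm ** (1.0 / k)
```

The estimate lies strictly between 0 and 1, as Lemma 4.3 predicts; here it equals $1/2$, consistent with the lemma's formula, since both non-sink nodes are adjacent to the sink with row gap $1/2$.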
In the proof of next theorem, we make use of the following fact, which is analogous to Fact 1.
Fact 2.
The matrix $Q$ is similar to the matrix $N_Q$ obtained from $N$ by zeroing out its last column and row. In particular, $Q = D^{-1/2} N_Q D^{1/2}$, and $N_Q$ has orthonormal eigenvectors which correspond to eigenvectors of $Q$ via the similarity transformation, associated to the same eigenvalues.
Theorem 4.4.
As $t \to \infty$, the iterates $\boldsymbol{\pi}(t)$ of (13) converge to the grounded solution of Kirchhoff’s equations (1).
Proof.
Since $\rho(Q) < 1$, the matrix $I - Q$ is invertible, and its inverse can be expressed as
$$(I - Q)^{-1} = \sum_{k=0}^{\infty} Q^k.$$
If we recursively expand the updates we get, for any $t$,
$$\boldsymbol{\pi}(t) = \sum_{k=0}^{t-1} Q^k\, \mathbf{c}, \qquad (15)$$
where we used that in (13) $\boldsymbol{\pi}(0) = \mathbf{0}$. As $t \to \infty$, this yields
$$\boldsymbol{\pi}(\infty) = (I - Q)^{-1}\, \mathbf{c}, \qquad (16)$$
which shows that in the limit, the iterates satisfy the linear system $(I - Q)\,\boldsymbol{\pi}(\infty) = \mathbf{c}$ or, multiplying both sides by $D$,
$$D(I - Q)\,\boldsymbol{\pi}(\infty) = D\,\mathbf{c}. \qquad (17)$$
To conclude that $\boldsymbol{\pi}(\infty)$ also satisfies the original system (1), notice the following. Since the sink entry of $\boldsymbol{\pi}(\infty)$ is zero, the sink columns of the matrices play no role, and the two matrices $D(I - Q)$ and $L = D(I - P)$, as well as the two vectors $D\,\mathbf{c}$ and $\mathbf{b}$, effectively differ only in the row corresponding to the sink node: the difference of the sink rows of the two matrices is the sink row of the adjacency matrix $A$, while the sink entries of $D\,\mathbf{c}$ and $\mathbf{b}$ are $0$ and $-1$, respectively. In particular, $(L\,\boldsymbol{\pi}(\infty))_u = b_u$ for every non-sink node $u$. Therefore, using (17) and the fact that the columns of $L$ sum to zero,
$$\left(L\,\boldsymbol{\pi}(\infty)\right)_{\mathrm{sink}} = -\sum_{u \neq \mathrm{sink}} \left(L\,\boldsymbol{\pi}(\infty)\right)_u = -\sum_{u \neq \mathrm{sink}} b_u = -1 = b_{\mathrm{sink}},$$
which, after rearranging terms, together with (17) implies that
$$L\,\boldsymbol{\pi}(\infty) = \mathbf{b}.$$
This proves that $\boldsymbol{\pi}(\infty)$ is the grounded solution of (1). ∎
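The limit (16) can be checked numerically: iterating the grounded recurrence sums the geometric series of powers of $Q$ applied to $\mathbf{c}$ and should reproduce the grounded Kirchhoff potentials. A minimal Python sketch on the toy unit-weight triangle (all choices illustrative):

```python
# Grounded recurrence x <- Q x + c on the unit-weight triangle {s, a, t}.
Q = [[0.0, 0.5],
     [0.5, 0.0]]          # transition matrix with sink row/column removed
c = [0.5, 0.0]            # D^{-1} b with the sink entry zeroed: b_s / vol(s)

x = c[:]                  # first iterate, since the recurrence starts from 0
for _ in range(200):      # contraction: error shrinks by rho(Q) = 1/2 per step
    x = [sum(Q[i][j] * x[j] for j in range(len(x))) + c[i]
         for i in range(len(x))]
```

The fixed point is $(2/3, 1/3)$, which together with the grounded sink value $0$ is exactly the grounded solution of Kirchhoff’s equations for this instance.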
4.2 Convergence rate of token diffusion
In Section 4.1 we showed that the token diffusion system converges to one of the solutions of Kirchhoff’s equations. Moreover, its rate of convergence is dictated by the spectral radius of the matrix $Q$ (Theorem 4.4), which is similar to the original transition matrix $P$, except for the fact that all entries of the row and column corresponding to the sink are equal to $0$ in $Q$.
For simplicity of exposition, in the remainder we simply remove the row and column of $Q$ corresponding to the sink. Assume without loss of generality that the sink corresponds to the $n$-th row/column index. Given an $n \times n$ matrix $M$, consider the $(n-1) \times (n-1)$ matrix $M_{-}$, obtained from $M$ by grounding the $n$-th index, that is, removing the $n$-th row and $n$-th column. The next fact shows that this operation does not affect the spectral radius of $Q$.
Proposition 4.5.
Assume $M$ is an $n \times n$ matrix where each entry of the $n$-th row and of the $n$-th column is zero. Then:

1. for every eigenpair $(\lambda, \mathbf{x})$ of $M_{-}$ there is an eigenpair $(\lambda, (\mathbf{x}, 0))$ of $M$;

2. $(0, \mathbf{e}_n)$ is an eigenpair of $M$;

3. the spectral radius of $M$ and $M_{-}$ is the same.
Proof.
Point (1) follows from the assumption that the $n$-th column of $M$ is identically zero, and thus the $i$-th entry of $M(\mathbf{x}, 0)$ is equal to the $i$-th entry of $M_{-}\mathbf{x}$ for any $i < n$. Point (2) follows from $M\mathbf{e}_n = \mathbf{0}$. Point (3) is a direct consequence of the first two. ∎
Since $Q$ satisfies the hypothesis of Proposition 4.5, we can equivalently study the spectral radius of $Q_{-}$. To simplify (and with a slight abuse of) notation, in the remainder of this section we write $Q$ for $Q_{-}$.
Additional notation.
We denote by $G_{-}$ the graph obtained from $G$ by removing the sink node and its incident edges. We denote by $L_{-}$ the Laplacian matrix of $G_{-}$, so that $L_{-} = D_{-} - A_{-}$, with $A_{-}$ and $D_{-}$ respectively the adjacency and degree matrices of $G_{-}$. We also define $\widehat{L} = \widehat{D} - A_{-}$, where $\widehat{D}$ is the degree matrix of the original graph $G$ restricted to the non-sink nodes. Note that $\widehat{D}_{uu} \ge (D_{-})_{uu}$ for each $u$. On the other hand, $\widehat{L}$ is not a proper graph Laplacian, since $(\widehat{L}\mathbf{1})_u \neq 0$ for some $u$ (precisely, this happens whenever $u$ is adjacent to the sink in the original graph $G$). However, $\widehat{L}$ can be viewed as a perturbed Laplacian, since $\widehat{L} = L_{-} + \Delta$, where $\Delta = \widehat{D} - D_{-}$ is a nonnegative diagonal matrix whose $u$-th entry is the weight of the edge between $u$ and the sink.
The rate of convergence of the token diffusion process is dictated by $\rho(Q)$, the dominant eigenvalue of the matrix $Q$. Thanks to Fact 2, we can equivalently study the matrix