# Leader Election in Well-Connected Graphs

In this paper, we look at the problem of randomized leader election in synchronous distributed networks, with a special focus on message complexity. We provide an algorithm that solves the implicit version of leader election (where non-leader nodes need not be aware of the identity of the leader) in any general network with O(√n log^{7/2} n · t_mix) messages and in O(t_mix log^2 n) time, where n is the number of nodes and t_mix refers to the mixing time of a random walk in the network graph G. For several classes of well-connected networks (that have a large conductance, or alternatively a small mixing time, e.g. expanders, hypercubes, etc.), the above result implies extremely efficient (sublinear running time and messages) leader election algorithms. Correspondingly, we show that no substantial improvement over our algorithm is possible, by presenting an almost matching lower bound for randomized leader election. We show that Ω(√n/ϕ^{3/4}) messages are needed for any leader election algorithm that succeeds with probability at least 1 − o(1), where ϕ refers to the conductance of a graph. To the best of our knowledge, this is the first work that shows a dependence between the time and message complexity of leader election and the connectivity of the graph G, which is often characterized by the graph's conductance ϕ. Apart from the Ω(m) bound in [Kutten et al., J. ACM 2015] (where m denotes the number of edges of the graph), this work also provides one of the first non-trivial lower bounds for leader election in general networks.


## 1 Introduction

Leader election is one of the most classical and fundamental problems in the field of distributed computing, having applications in numerous problems relating to synchronization, resource allocation, reliable replication, load balancing, job scheduling (in master-slave environments), crash recovery, membership maintenance, etc. Computing a leader can be thought of as a form of symmetry breaking, where exactly one special node or process (denoted as the leader) is chosen to take some critical decisions.

Loosely speaking, the problem of leader election requires a set of nodes in a distributed network to elect a unique leader among themselves, i.e., exactly one node must output the decision that it is the leader. There are two well known variants of this problem (cf. [3, 27]), the explicit variant where at the end of the election process all the nodes are required to be aware of the identity of the leader and the implicit variant where the non-leader nodes need not be aware of the identity of the leader.

Compared to deterministic solutions that provide absolute guarantees for the election of a unique leader, randomized solutions guarantee unique leader election with high probability. However, this weakened guarantee is still sufficient for many practical applications. With an acceptable error probability, randomization can result in a significant reduction in time and message complexities. This is highly advantageous for large-scale distributed systems (e.g. P2P systems, overlay and sensor networks [35, 36, 40]), where scalability is an important issue. Furthermore, in anonymous networks, a randomized solution is often possible by randomly assigning unique identifiers to nodes (as done herein), whereas a corresponding deterministic solution is impossible (see [2]).

This paper focuses on studying the message complexity of implicit leader election in synchronous distributed networks. Here, we show the relationship between the graph connectivity (which is characterized by the graph's conductance ϕ) and the time and number of messages required for leader election. We provide an algorithm that solves implicit leader election in any general network with Õ(√n · t_mix) messages and in Õ(t_mix) time, where n is the number of nodes and t_mix refers to the mixing time of a random walk in the network graph G. (Throughout the paper, we use the Õ notation to hide polylogarithmic factors in n.) Correspondingly, we show that Ω(√n/ϕ^{3/4}) messages are needed for any leader election algorithm that succeeds with high probability, where ϕ refers to the graph's conductance. We also show that the knowledge of the network size n is critical to achieve the said message and time complexities, but surprisingly, knowledge of other graph properties, such as the conductance, mixing time, or diameter, is not needed.

Computing Model.  We model the network as a connected, undirected graph G = (V, E) with n nodes and m edges, where nodes communicate over the graph edges. We assume synchronous communication that follows the standard CONGEST model [32]. In each round, each node v can perform some local computation, which can involve accessing a private source of randomness. Additionally, each node v is allowed to send a message of size O(log n) bits through each edge incident on v. Nodes do not have predefined ids and there are no node or link failures.

Port Numbering Model.  We assume that the nodes know the network size n and wake up simultaneously at the beginning of the execution. Also, nodes are anonymous in the sense that they do not have unique IDs. (Our lower bound holds even if nodes start with unique IDs.) Each node chooses an id uniformly at random from within the range [1, n^4]. (This range guarantees that the chosen values are unique with high probability [27] (Chapter 4, page 72).) Each node v with degree d(v) has ports 1, …, d(v), over which it can send messages across undirected links to its neighbors; that is, each neighbor of v is the endpoint of exactly one of v's ports. Nodes only know the port numbers of their connections and are unaware of their neighbors' identities. We do not assume these mappings to be symmetric: in particular, it can happen that u is connected to v via port number i, whereas v is connected to u via port number j ≠ i.

Randomized Implicit Leader Election.  Every node of a given distributed network has a flag variable (a boolean variable) initialized to 0 and, after the process of election, with high probability (w.h.p.), exactly one node, called the leader, raises its flag by setting the flag variable to 1. An algorithm is said to solve leader election in T rounds if, within T rounds, the nodes elect a unique leader w.h.p., and none of the nodes send any more messages after T rounds.

Results.  In this paper, we present both upper and lower bounds for the problem of implicit leader election in general networks. We provide an algorithm that solves implicit leader election in any general network with O(√n log^{7/2} n · t_mix) messages and in O(t_mix log^2 n) time, where n is the number of nodes and t_mix refers to the mixing time of a random walk in the network graph G. If larger message sizes are allowed, the message and time complexities reduce further. This implies significantly faster and more efficient solutions (that are sub-linear in terms of message complexity) for leader election in several important classes of well-connected graphs that have a large conductance, or alternatively a small mixing time. For example, to solve implicit leader election in expander graphs (see [20] for applications), which have a mixing time of O(log n), it takes only Õ(1) time and Õ(√n) messages; in hypercube graphs, which have a mixing time of O(log n log log n), it likewise takes only Õ(1) time and Õ(√n) messages. The algorithm can also be used for solving the explicit variant of leader election by adding a broadcasting procedure, wherein the leader broadcasts its identity to all other nodes. For well-connected graphs, this breaks the Ω(m) lower bound given in [24] (where m denotes the number of edges of the graph) and nearly matches the lower bound for clique graphs [25] (as cliques have constant conductance).

We show that a dependence on the graph conductance is unavoidable, by presenting a message complexity lower bound of Ω(√n/ϕ^{3/4}) messages that holds for any leader election algorithm that succeeds with probability at least 1 − o(1). This nearly matches the upper bound, since we know that t_mix ⩽ Θ(1/ϕ^2) from [37].

By a similar analysis, we also provide lower bounds for other graph problems, like broadcast and spanning tree construction, in terms of the graph's conductance. Our lower bounds also apply to the LOCAL model [32], where there are no restrictions on the message size. Other than the Ω(m) bound in [24], to the best of our knowledge, this is the first non-trivial lower bound for randomized leader election in general networks. Also, ours is one of the first results to show the dependence of the time and message complexity of leader election on the connectivity of the graph G, which is often characterized by the graph's conductance ϕ.

Additionally, we show that the knowledge of the network size n is critical for our algorithm to succeed, by giving a corresponding message complexity lower bound that holds for all graphs if n is not known. Surprisingly, however, knowledge of other graph properties, like the conductance, mixing time, or diameter, is not needed.

Prior Works.  Leader election, being one of the fundamental paradigms in the theory of distributed computing, has been widely studied. The problem was first stated by Gérard Le Lann [26] in the context of token ring networks, and thereafter has been extensively studied for various types of networks, scenarios and communication models. For particular network topologies like token rings, meshes, tori, hypercubes and cliques, the problem of leader election has been well studied, resulting in specialized algorithms and lower bounds in terms of both time and message complexities (e.g. [10, 12, 39, 32, 27, 14, 1, 23, 25, 34] and references therein). In a seminal paper, Gallager, Humblet and Spira [16] provided a deterministic solution for general graphs by finding the minimum spanning tree of the graph in O(n log n) time and exchanging a total of O(m + n log n) messages. Thereafter, Awerbuch [5] provided an O(n)-round deterministic algorithm with a message complexity of O(m + n log n), where m refers to the total number of edges in the graph. Peleg [33] provided an O(D)-time optimal algorithm for leader election with a message complexity of O(mD), where D is the diameter of the graph. More recently, in [24], the authors provide an algorithm that requires only O(m) messages, though it could take arbitrary (albeit finite) time. There also exists a significant amount of literature (see [19, 13, 38, 9] and references therein) that provides solutions for leader election on fault-prone networks, with possible node or link failures.

The best known bounds for general graphs are as follows. In [24], Kutten et al. show that Ω(m) is the lower bound on messages and Ω(D) is the lower bound on time for any implicit leader election algorithm. They compare and contrast deterministic and randomized algorithms while trying to simultaneously achieve optimal time and message complexity for leader election. Unlike the deterministic case, where an algorithm cannot be simultaneously time and message optimal (e.g. in a cycle, any O(n)-time deterministic algorithm requires at least Ω(n log n) messages even when nodes know n [15]), they show that for the randomized case simultaneous optimality can be achieved in certain cases. In particular, to show that the bounds are tight, they give an algorithm that is message optimal but not time optimal, and an algorithm that is almost simultaneously optimal in both messages and time.

In [25], Kutten et al. show that, in terms of message complexity, there exists a gap between the implicit and the explicit variants of leader election. For the explicit variant, all nodes need to be informed of the identity of the leader, and as such Ω(n) messages is the obvious lower bound for all graphs. However, for the implicit version, the authors provide a sub-linear-bound algorithm on complete networks that runs in O(1) rounds and (w.h.p.) uses only O(√n log^{3/2} n) messages to elect a unique leader. Thereafter, they extend this algorithm to solve leader election on any connected graph in Õ(τ) time and with Õ(τ√n) messages, where τ is the mixing time of a random walk on G.

A key difference, however, is that they assume that every node in the graph knows the mixing time of the graph, which significantly simplifies the problem. An important challenge addressed by our algorithm is (effectively) estimating when a collection of random walks is well-enough mixed. While there is a recent result [29] that shows how nodes can quickly estimate the mixing time of the graph, their algorithm requires Ω(m) messages, where m is the total number of edges in the graph, and hence cannot be used for the purpose of achieving a small message complexity.

In the context of using random sampling for leader election, the work of [4] uses random walks to limit the impact of Byzantine nodes on electing an honest leader in dynamic networks.

## 2 Preliminaries

In this section, we describe some basic definitions and concepts that we make use of throughout the paper. First, we give a brief overview and the definition of graph conductance. Next, we describe some basic notation for random walks on a graph G, including the mixing time, and state the relationship between the mixing time and the conductance of G.

Conductance, in general, is a characterization of the bottleneck in communication of a graph. The notion of graph conductance was introduced by Sinclair [21]. For a given graph G = (V, E), a subset of nodes S ⊆ V and the cut (S, V∖S), we define E(S, V∖S) to be the subset of edges across the cut (S, V∖S), and the volume vol(S) = Σ_{v∈S} d(v), where d(v) refers to the degree of node v. The cut-conductance is defined as ϕ(S) = |E(S, V∖S)| / min(vol(S), vol(V∖S)). The conductance ϕ(G) of the graph is defined as the minimum of the cut-conductance across all possible cuts (S, V∖S), i.e., ϕ(G) = min_{S⊆V} ϕ(S). We simply write ϕ instead of ϕ(G) when the graph G is clear from the context.
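To make the definition concrete, the following sketch (plain Python; the adjacency-list representation and function names are ours, not from the paper) computes ϕ(S) and ϕ(G) by brute force over all cuts, which is feasible only for very small graphs.

```python
from itertools import combinations

def cut_conductance(adj, S):
    """phi(S) = |E(S, V\\S)| / min(vol(S), vol(V\\S)) for the cut (S, V\\S)."""
    T = set(adj) - S
    cross = sum(1 for u in S for v in adj[u] if v in T)   # edges across the cut
    vol = lambda X: sum(len(adj[u]) for u in X)           # sum of degrees
    return cross / min(vol(S), vol(T))

def conductance(adj):
    """phi(G): minimum cut-conductance over all non-trivial cuts."""
    nodes = list(adj)
    return min(cut_conductance(adj, set(S))
               for r in range(1, len(nodes))
               for S in combinations(nodes, r))

# 4-cycle: the best cut splits the cycle into two paths, crossing 2 of the
# 8 degree-endpoints on the smaller side, so phi = 2/4 = 1/2.
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
phi = conductance(cycle4)
```

The exponential scan over subsets is only for illustration; computing conductance exactly is hard in general, which is one reason bounds via the mixing time (Section 2) are useful.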

For a random walk on G, we define the node set V = {v_1, v_2, …, v_n} and an n × n transition matrix P of G. Each diagonal entry P_{ii} has the value 1/2; otherwise, for i ≠ j, P_{ij} has the value 1/(2d(v_i)) if there is an edge (v_i, v_j), and 0 otherwise. At a particular step, the entry P_{ij} gives the probability of a random walk moving from node v_i to node v_j. This exactly corresponds to a lazy random walk, wherein a random walk either stays at the current node with probability 1/2, or otherwise moves to a neighbor, each with probability 1/(2d(v_i)).

The distribution π_t, determined by P, represents the position of a random walk after t steps. If some node v_i starts a random walk, the initial distribution π_0 of the walk is an n-dimensional vector having all zeros except at index i, where it is 1. After the node has chosen to forward the random walk token, either to itself or to a random neighbor, the distribution of the walk (after one step) is given by π_1 = π_0 P, and in general we have π_t = π_0 P^t. For any connected graph G, the distribution will eventually converge to the stationary distribution π, which has entries π_i = d(v_i)/2m and satisfies π = πP. The mixing time t_mix(G) of an n-node graph G is defined as the minimum t such that, for each starting distribution π_0, ‖π_t − π‖_∞ ⩽ 1/n, where ‖·‖_∞ denotes the usual maximum norm on a vector. We simply write t_mix instead of t_mix(G) when the graph G is clear from the context.
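The lazy-walk dynamics above can be sketched directly (plain Python; the helper names are ours): build P, start π_0 at one node, and iterate π_{t+1} = π_t P until the distribution is within a chosen distance of the stationary distribution π_i = d(v_i)/2m. Note the sketch uses a loose threshold of 10^{-3} rather than the paper's 1/n over all starting distributions.

```python
def lazy_walk_matrix(adj):
    """P[i][i] = 1/2 and P[i][j] = 1/(2 d(v_i)) for each edge (v_i, v_j)."""
    nodes = sorted(adj)
    idx = {v: k for k, v in enumerate(nodes)}
    P = [[0.0] * len(nodes) for _ in nodes]
    for v in nodes:
        P[idx[v]][idx[v]] = 0.5
        for w in adj[v]:
            P[idx[v]][idx[w]] = 0.5 / len(adj[v])
    return P

def step(pi, P):
    """One step of the walk: pi' = pi P (row vector times matrix)."""
    n = len(P)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
P = lazy_walk_matrix(cycle4)
m = sum(len(nbrs) for nbrs in cycle4.values()) // 2
stationary = [len(cycle4[v]) / (2 * m) for v in sorted(cycle4)]  # d(v_i)/2m

pi, t = [1.0, 0.0, 0.0, 0.0], 0   # walk starts at node 0
while max(abs(a - b) for a, b in zip(pi, stationary)) > 1e-3:
    pi, t = step(pi, P), t + 1
# t now holds an empirical mixing time for this start node and threshold
```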

Note that the connectedness of a graph G (determined by its conductance ϕ) and the mixing time of a random walk on G are closely related. Better connectivity implies faster mixing and vice versa. There is a well-known result that formally relates the graph conductance to the mixing time (see [37]) as follows:

 Θ(1/ϕ) ⩽ t_mix ⩽ Θ(1/ϕ^2) (1)

## 3 A Leader Election Algorithm for Well-Connected Networks

In this section, we provide an algorithm that solves implicit leader election for any given graph G with a time complexity of O(t_mix log^2 n) and, more importantly, with a message complexity of O(√n log^{7/2} n · t_mix), where t_mix refers to the mixing time of a random walk on the graph G.

The proposed algorithm, in its initial phase, is similar to the algorithms given in [18, 25], where the initial objective is to reduce the number of competing nodes (contenders), while also ensuring that there exists at least one contender (w.h.p.). For this purpose, each node in the network graph G elects itself as a contender with a probability of (k log n)/n, where k is a sufficiently large constant. As such, the probability of no node electing itself as a contender is at most (1 − (k log n)/n)^n ⩽ n^{−k}; thus, implying that w.h.p. the number of contenders is nonzero.
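A quick simulation of the candidacy step (function names and the seed are ours, for illustration) shows why a contender exists w.h.p.: with p = k log n / n, the no-contender probability (1 − p)^n ⩽ n^{−k} is negligible.

```python
import math, random

def elect_contenders(n, k, rng):
    """Each node independently becomes a contender with probability k*log(n)/n."""
    p = min(1.0, k * math.log(n) / n)
    return [v for v in range(n) if rng.random() < p]

rng = random.Random(1)
n, k = 10_000, 4
counts = [len(elect_contenders(n, k, rng)) for _ in range(200)]
# Expected number of contenders is k*ln(n) ~ 37 per trial; the no-contender
# event has probability about n^{-k}, so it should never occur here.
empty_trials = sum(1 for c in counts if c == 0)
```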

Now, imagine a scenario in which each of these contenders contacts a set of nodes, which we refer to as the contender's target set. If the target set is large enough (say, of size Θ(√n log n)), then for any two contenders we can say that there would be a common/intersecting node that would have communicated with both contenders. Thereafter, the contenders can communicate via this intersecting node. If all contenders have a sufficiently large target set, then all contenders would be able to communicate with one another. We design our algorithm based on this idea.

First, we determine the minimum size of the target set needed to guarantee an intersection w.h.p. between the target sets of any two contenders. It can be easily shown with the birthday paradox argument that if any two contender nodes u and v each contact Θ(√n log n) random nodes, then w.h.p. there is at least one node that was chosen by both u and v. By the definition of mixing time, if a random walk has taken at least t_mix steps, then (for all practical purposes) its end point can be considered a random node. Therefore, each contender node can find Θ(√n log n) random nodes by performing that many independent random walks of length t_mix in parallel. The random walks essentially function as mechanisms for selecting/sampling "random" nodes, where the guarantee is that if the length of the walk is long enough, the choice is close to uniform. We might as well think of the random walks as a black box that returns a collection of random nodes. However, as nodes are not aware of the mixing time of the graph, this technique cannot be used directly to obtain random nodes.
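Under the idealization that a walk of length t_mix ends at a near-uniform node, the birthday-paradox claim is easy to check by simulation (the parameters and names below are ours):

```python
import math, random

rng = random.Random(7)
n = 10_000
size = math.ceil(math.sqrt(n) * math.log(n))   # Theta(sqrt(n) log n) samples

def target_set():
    # End points of "mixed" walks modeled as uniform samples (with replacement).
    return {rng.randrange(n) for _ in range(size)}

# Two target sets overlap in about size^2 / n ~ 85 nodes in expectation,
# so a complete miss is astronomically unlikely.
misses = sum(1 for _ in range(100) if not (target_set() & target_set()))
```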

Without knowledge of the mixing time, it is difficult (if not impossible) to obtain a set of nodes that are chosen uniformly at random by using random walks. Therefore, the major challenge reduces to correctly obtaining a set of possibly non-random nodes (as random walks might be of length less than the mixing time) that satisfies the required properties that we had hoped to achieve from a uniformly random chosen target set. One difficulty here lies in determining the ideal length of the random walks of each contender without knowledge of the mixing time. To deal with this, our algorithm uses a guess-and-double strategy: in each iteration, nodes guess a length ℓ for the random walk, perform random walks of the chosen length, and determine based on some criteria whether the length is sufficient; if not, the next iteration begins with double the previous estimate.
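The guess-and-double loop has the familiar shape sketched below; `long_enough` stands in for the paper's twofold stopping criterion (intersection plus distinctness) and is an assumption of this sketch.

```python
def find_sufficient_length(long_enough, start=1, cap=1 << 30):
    """Double the walk-length estimate until the stopping criterion holds.
    The total work over all iterations is a geometric sum, i.e. within a
    constant factor of the work done at the final estimate."""
    ell = start
    while not long_enough(ell):
        ell *= 2
        if ell > cap:                      # safety cap for the sketch only
            raise RuntimeError("no sufficient length found below cap")
    return ell
```

For instance, if walks of length at least 37 happen to suffice, the loop settles on 64, the first power of two not below 37.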

The critical part is to determine the criterion for which we can consider the length of the random walks to be sufficient. A natural solution would be to check if there are enough intersections in the target set (with target sets of other contenders). For example, if the target set of each contender had an intersection with target sets of all other contenders, all of them can communicate via the intersecting node(s). However, then we would require the knowledge of the exact number of contenders to determine termination, which is difficult to obtain with certainty. In fact, we show that an intersection with greater than half of the contenders is sufficient and obtainable.

Given such a criterion, it creates another challenge, as it might be the case that all the random walks do not terminate in the same round. For example, consider the case where a large number of contenders belong to the same locality of the graph and as a result they contact each other quickly, via their random walks. However, a few of the contenders do not belong to this neighborhood and are slightly far off from this locality. In this case, the target set of the locally placed contenders would belong to the same locality (and not be nearly randomly spread). As such, it would be difficult for the far flung contenders to make contact with any of the locally placed contender’s target sets, requiring much longer lengths of random walks than the mixing time. For this case, we would also need to guarantee that the random walks that terminate early are still easily discoverable.

To deal with the above challenges, we provide a twofold stopping criterion: first, we want to ensure that the end points of the random walks of a contender intersect with the random walks of at least half of the total number of contenders; second, we would also like to ensure that the end points of these random walks are sufficiently spread out, such that other random walks do not spend too much time discovering them. Another crucial part to consider is dealing with the congestion that might be caused by the information carried along the random walks.

Basically, the given randomized leader election algorithm can be divided into three major parts. First, a node makes a probabilistic decision determining its candidature, i.e., whether or not it becomes a contender. Then, in the second part, contenders guess and double the length of their random walks until the walks satisfy some required properties. Lastly, based on the information retrieved from the random walks, a node elects itself as the leader if it satisfies a certain winning condition.

We provide the following contender lemma which restricts the total number of possible contenders.

###### Lemma 1.

(Contender Lemma) With high probability, the number of contenders is in the range [(k log n)/2, 2k log n], where k is a sufficiently large constant.

###### Proof.

Since each node becomes a contender independently with probability (k log n)/n, we can apply two tail bounds to show concentration around the expected number of contenders, k log n. Let X be the number of contenders. By standard Chernoff bounds (Theorems 4.4 and 4.5 in [28]), we know that Pr(X ⩾ 2k log n) ⩽ e^{−(k log n)/3} and, similarly, Pr(X ⩽ (k log n)/2) ⩽ e^{−(k log n)/8}. For sufficiently large k, both of these bounds can be shown to hold with high probability, and hence the lemma follows by a simple union bound. ∎

Each contender node u creates β√n log n tokens and starts β√n log n random walks of length ℓ in parallel, where β is a sufficiently large constant. Each random walk is represented by a token ⟨id_u, ℓ⟩ (of O(log n) bits), where id_u represents the node's id, and ℓ represents the (remaining) length of the random walk. At each step of the random walk, ℓ is decremented by 1, until it finally becomes 0. We define the proxies of node u as the nodes where the random walks generated by u complete ℓ steps, where ℓ is either u's current or final guess of the length of the random walk. Two contender nodes are said to be adjacent if they share at least one proxy.
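The proxy and adjacency notions can be sketched as follows (our own modeling: the proxies of a contender are drawn uniformly, which is the idealization valid once the walk length reaches t_mix):

```python
import random

def proxies(n, num_walks, rng):
    """End points of num_walks mixed random walks, modeled as uniform nodes."""
    return {rng.randrange(n) for _ in range(num_walks)}

def adjacent(proxies_u, proxies_v):
    """Two contenders are adjacent iff their proxy sets intersect."""
    return bool(proxies_u & proxies_v)

rng = random.Random(3)
n = 10_000
num_walks = 1_000      # stands in for beta * sqrt(n) * log(n) in the algorithm
pu, pv = proxies(n, num_walks, rng), proxies(n, num_walks, rng)
# Expected overlap ~ num_walks^2 / n = 100 nodes, so adjacency holds w.h.p.
```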

The algorithm guarantees the following properties at the end of the execution:

• Intersection Property: A contender u satisfies the intersection property iff u is adjacent to at least k log n of the other contender nodes. Using Lemma 1, we see that any node which satisfies the intersection property is adjacent to greater than half of the total number of contenders (as the total number of contenders is at most 2k log n) w.h.p.

• Distinctness Property: A contender node u satisfies the distinctness property if (β√n log n)/2 of its proxies are distinct. For a particular guess of ℓ, a proxy w of a random walk r_i belonging to u is called a distinct proxy only if w is the end node of exactly one random walk belonging to u (from among the β√n log n many random walks belonging to u), i.e., no other random walk belonging to u ends at w. For any contender node, the spreading out of its random walks is characterized by the number of distinct proxies.

In the CONGEST model, due to the restriction on the message size, it is impossible to perform too many walks in parallel along an edge. We solve this issue by sending only one token together with the count of tokens that need to be sent by a particular contender (which is still only O(log n) bits), and not all the tokens themselves. Similarly, our algorithm also requires some id information (the set of ids of other contenders) to be sent along the random walks. We note that the maximum possible number of contenders is 2k log n w.h.p. (cf. Lemma 1). In the worst case, an intermediate node might have to deal with polylogarithmically many messages of size O(log n) each, introducing a corresponding maximum possible delay per step. To account for this delay, in the algorithm, we multiply the walk length by this upper-bound estimate, and use it to keep the execution of the algorithm in synchrony. We relegate the formal details of handling congestion to the proof of Lemma 12.
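The congestion workaround, replacing many identical tokens on an edge by one token plus a multiplicity, can be sketched as a per-edge aggregation step (names assumed):

```python
from collections import Counter

def aggregate_tokens(tokens):
    """Instead of forwarding k copies of the same token <id, ell> over an edge,
    forward the token once together with its count: one O(log n)-bit counter
    replaces k separate messages."""
    return Counter(tokens)             # {(origin_id, remaining_length): count}

batch = [("u", 5), ("u", 5), ("v", 5), ("u", 5)]
merged = aggregate_tokens(batch)       # 4 messages collapse to 2
```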

Consider contender nodes that are yet to satisfy the intersection and distinctness properties as active nodes; consequently, nodes that have already satisfied the said properties are considered inactive. That is, all nodes that will not double their estimate of ℓ are considered inactive. It is to be noted that all the active nodes are synchronized, and for all inactive nodes, the distance to their respective proxies is less than the current estimate of ℓ (of the active nodes).

###### Observation 2.

All inactive contender nodes satisfy both the intersection and the distinctness properties.

###### Lemma 3.

For any active contender node u, after the iteration where ℓ ⩾ t_mix, w.h.p. u satisfies both the intersection and distinctness properties. In fact, u has intersecting proxies with all of the other contenders (both active and inactive).

###### Proof.

Consider a set A consisting of all the active contenders and a set I of all the inactive contenders (contenders that decided not to double their estimate after some previous epoch). We prove the lemma using the following claims.

###### Claim 4.

Each contender node in A is adjacent to (has intersecting proxies with) all the other contender nodes, w.h.p.

###### Proof.

For a contender node u ∈ A, when ℓ ⩾ t_mix, u has β√n log n random proxies, obtained by running β√n log n independent random walks of length ℓ (the proxies are random by the definition of mixing time). For any contender node v ∈ I, since v satisfies the distinctness property, it has at least (β√n log n)/2 distinct proxies. The probability of non-intersection between this set of distinct proxies and the set of random proxies is, by a birthday-paradox style argument, at most (1 − (β log n)/(2√n))^{β√n log n} ⩽ e^{−(β^2 log^2 n)/2}. The statement is true for all pairs of nodes by taking a simple union bound. Thus, with high probability, each contender node in A has at least one common/intersecting proxy with any contender node in I.

Now, consider two different contenders u, v ∈ A, each of which has a set of β√n log n random proxies. Using similar arguments as above, it can be easily shown that each contender node in A has at least one common/intersecting proxy with every other contender node in A, w.h.p.

This implies that each contender in A has intersecting proxies with all the other contenders, both in A and I, and thus is adjacent to all the other contenders. ∎

###### Claim 5.

Each contender node in A satisfies the distinctness property, w.h.p., when ℓ ⩾ t_mix.

###### Proof.

To bound the number of distinct proxies, we name the β√n log n independent random walks of any contender u as r_1, r_2, …, r_{β√n log n}. After the random walks have taken t_mix steps, the probability that two of these random walks r_i and r_j do not share a proxy is at least 1 − 2/n (by the definition of mixing time). The probability that no other random walk of node u ends up at the same node as r_i is then obtained by taking a union bound:

 Pr(r_i has a distinct proxy) ⩾ 1 − β√n log n · (2/n) = 1 − (2β log n)/√n ⩾ 3/4 (2)

The above equation holds as n is sufficiently large, so that (2β log n)/√n ⩽ 1/4.

Let X_i be a binary random variable such that X_i = 1 when r_i has a distinct proxy (no other r_j ends at the proxy of r_i), and X_i = 0 when it does not. This implies (from above) that Pr(X_i = 1) ⩾ 3/4 and Pr(X_i = 0) ⩽ 1/4. We define another binary random variable Y_i such that Y_i = 1 with probability 3/4, otherwise Y_i = 0. Clearly, Pr(X_i = 1) is always at least Pr(Y_i = 1). Since each Y_i is independent of one another, E[Σ_i Y_i] = (3/4)β√n log n.

Thereafter, using a standard Chernoff bound, we show that w.h.p. Σ_i Y_i ⩾ (β√n log n)/2.

Since each X_i stochastically dominates over the corresponding Y_i, it implies that w.h.p. Σ_i X_i ⩾ (β√n log n)/2.

Therefore, we can say that with high probability at least (β√n log n)/2 of the proxies are distinct; equivalently, the number of non-distinct proxies of the contender is at most (β√n log n)/2, w.h.p. The statement holds over all contender nodes at the same time by taking a simple union bound.

Thus, when ℓ ⩾ t_mix, each contender would have found at least (β√n log n)/2 distinct proxies, therefore satisfying the distinctness property. ∎

The time complexity of the algorithm is determined by the following lemma.

###### Lemma 6 (Safety Lemma).

In O(t_mix log^2 n) time, w.h.p., all contender nodes satisfy both the intersection and the distinctness properties. Consequently, for the given algorithm, every node eventually stops, no later than O(t_mix log^2 n) time.

###### Proof.

Each contender u, in parallel, runs several random walks till it satisfies the intersection and distinctness properties. u begins with an initial estimate of ℓ = 1 and doubles ℓ each time, until the above conditions are satisfied. This is the standard guess-and-double strategy, and it does not increase the overall complexity by more than a constant factor of the maximum estimate. From Observation 2 and Lemma 3, we see that all contender nodes satisfy both the conditions w.h.p. when ℓ ⩾ t_mix. Since the algorithm runs each iteration for an upper-bound estimate of the congestion delay, i.e. to avoid congestion, the time required to satisfy both the intersection and distinctness properties is O(t_mix log^2 n). ∎

###### Lemma 7 (At least one leader).

After the iteration where the active nodes estimate ℓ ⩾ t_mix, if no node had elected itself as leader in any of the earlier rounds, at least one contender node elects itself as the leader.

###### Proof.

Consider the iteration j, where the active nodes estimate ℓ ⩾ t_mix. We look at the contender node with the highest id, say w. In iteration j, w can either be inactive or active, depending on whether it has stopped. (If w is inactive, we look at the iteration in which w became inactive.) We show that, in either case, if no other node has elected itself as the leader in any of the earlier rounds, then w becomes leader.

Suppose that w becomes inactive in iteration i, where i ⩽ j, and no other node has elected itself as the leader in any of the earlier iterations. By Observation 2, w satisfies both the intersection and the distinctness properties. Alternatively, it could be that w is active until iteration j, where the active nodes estimate ℓ ⩾ t_mix. Also in that case, Lemma 3 says that w satisfies the intersection and the distinctness properties. In either case, since w has the highest id among the contender nodes, w satisfies both the distinctness and the intersection properties, and since none of the other nodes has elected itself as the leader in any of the earlier rounds (implying that w has not received a winner message), w satisfies all the required conditions and becomes leader. ∎

###### Lemma 8 (At most one leader).

After the completion of the algorithm, at most one contender node elects itself as the leader.

###### Proof.

We prove the lemma by combining the following two claims:

###### Claim 9.

Two different nodes cannot elect themselves as the leader in an iteration of the algorithm.

###### Proof.

Suppose two nodes u and v elect themselves as the leader in the same iteration of the algorithm. We know by the description of the algorithm that any node that becomes the leader would first need to satisfy both the intersection and the distinctness properties. Therefore, both contenders u and v would have at least (β√n log n)/2 distinct proxies and would be adjacent to at least k log n contenders, i.e., more than half of the contenders. Recall that the sets of u and v (denoted by S_u resp. S_v) contain the ids of their adjacent contenders. Let w be a contender whose id is in the intersection of S_u and S_v. As both u and v are adjacent to more than half of the contenders, there must be at least one such node w.

Without loss of generality, assume that the id of is larger than the id of . Since , some proxy of must also have been a proxy of in this iteration. Similarly, since , some proxy of must also have been a proxy of in this iteration. Then, by the description of the algorithm, would obtain the ids of both and in the set , which it then disseminated to all its proxies. The proxies and both receive this information (the ids of and ), which is then forwarded to and as the sets and , respectively. This means that must have known about while checking the winning condition, and hence it knew that its id was not maximal, a contradiction. ∎
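The heart of this argument is a simple counting fact: any two sets that each contain more than half of all contenders must share an element. A minimal Python sketch of this pigeonhole step (the function and set names are ours, purely illustrative):

```python
def must_intersect(contenders, adj_u, adj_v):
    # Pigeonhole: if |adj_u| > n/2 and |adj_v| > n/2, then
    # |adj_u| + |adj_v| > n, so the two sets cannot be disjoint.
    n = len(contenders)
    if 2 * len(adj_u) > n and 2 * len(adj_v) > n:
        assert adj_u & adj_v, "majority sets must overlap"
        return True
    return False

contenders = set(range(10))
adj_u = {0, 1, 2, 3, 4, 5}   # 6 of 10 contenders
adj_v = {4, 5, 6, 7, 8, 9}   # 6 of 10 contenders
print(must_intersect(contenders, adj_u, adj_v))  # True
```

The shared contender here plays the role of the common neighbor through which the smaller-id contender learns of the larger id, as in the proof above.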

###### Claim 10.

If a node elects itself as the leader in some iteration , no other node can elect itself as the leader in any subsequent iteration.

###### Proof.

Suppose two nodes and elect themselves as the leader, and suppose that does so in iteration whereas does so in iteration . For this case, we show that when iteration begins, more than half of the contender nodes are aware that some node has become the leader. If any other contender node satisfies both the intersection and the distinctness properties, then it must have interacted with at least one of the nodes that is aware of the existence of a leader, and thereby also becomes aware of the leader. This means that must have known about the existence of a leader by receiving a winner message (either directly or indirectly), leading to a contradiction.

If becomes the leader in iteration , then it immediately sends a winner message to all its proxies, which is then immediately forwarded to all the other adjacent contenders (see Algorithm 2). The winner message reaches all the adjacent contenders of before the next iteration begins (as active contenders wait for time at the end of the random walk phase). Since has to satisfy both the intersection and the distinctness properties to meet the winning condition, the number of adjacent contenders of is , which in turn is greater than half of the total number of contenders. Any other contender node that also satisfies the intersection and the distinctness properties must have at least one proxy intersecting with at least one of the adjacent contender nodes of (by the pigeonhole principle). Any interaction with an adjacent contender node of is accompanied by an additional winner message notifying of the existence of a leader, leading to a contradiction. ∎

This completes the proof of Lemma 8. ∎

Combining Lemma 7 and Lemma 8, we obtain the following lemma, which establishes the correctness of the algorithm.

###### Lemma 11 (Unique Leader Lemma).

With high probability and in time, exactly one contender becomes the leader.

###### Lemma 12 (Message Complexity Lemma).

With high probability, the total number of messages sent by the above algorithm is at most . If a larger message size of is allowed, the total number of messages comes down to .

###### Proof.

To calculate the message complexity, we look at the various messages that are sent by the algorithm. Considering Algorithm 2, we observe that all the information is sent only along the random walks. The messages that are sent include the random walk tokens, the sets and , the Boolean and the winner messages. In each phase (iteration), the maximum number of steps taken by any of these messages is proportional to the estimate of the length of the random walk . The maximum possible estimate is (c.f. Lemma 6), and as this estimate is chosen in a guess-and-double style, which increases the overall complexity only by a constant factor over the maximum guess of a successful trial, the overall number of steps taken throughout the algorithm (without accounting for congestion) by any of these messages is as well.
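The guess-and-double argument bounds the total work by a geometric series: doubling the estimate until it suffices costs at most twice the final (successful) guess. A small sketch under our own naming:

```python
def guess_and_double_cost(required):
    # Guess-and-double: keep doubling the estimate until it reaches the
    # required walk length; the total work is a geometric series that is
    # bounded by twice the final (successful) guess.
    guess, total = 1, 0
    while guess < required:
        total += guess
        guess *= 2
    total += guess
    return guess, total

final, total = guess_and_double_cost(1000)
print(final, total)  # 1024 2047 -- total < 2 * final
```

This is why trying all failed guesses adds only a constant factor over the cost of the last guess alone.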

Individually, the Boolean and the winner messages are of bits and the random walk tokens are of bits. Since the ids of the contenders are of bits and the number of contenders is (c.f. Lemma 1), the sets and can be of size , as they can contain the ids of other contenders. This implies that an intermediate node might receive up to many -sized messages.

First, consider the case where messages of size are allowed to be sent over an edge. Each contender node ( many) initiates a total of messages, which backtrack after reaching the proxies, taking a total of steps. Additionally, the winner message also takes only many steps. As there would be no congestion, the message complexity here would be w.h.p. , which equals .

Now, we consider the standard model where message sizes are restricted to . First, during the execution of the random walk, a contender node does not send different tokens for each random walk, but rather sends only one token along with a count of the tokens that need to be sent along a particular path. For multiple instances of the variable originating from different proxies of the same contender, only the summation value is sent (which is ). Multiple messages coming from either the same or different nodes could possibly lead to congestion. For messages that have the same destination, we send only one distinct copy of the id information over a particular edge (i.e., we use a filtering-and-forwarding technique: once an intermediate node has sent the information to a particular destination, it does not send the same information to that destination again). For messages having different destinations, it is possible that many messages of size arrive at a particular intermediate node. Larger messages of bits would have to be broken down into -sized messages, i.e., we can assume that each -sized message contains the id of one node and some additional bits. The maximum possible delay at an intermediate node is . We note that we use the variable in the algorithm to deal with this possible delay. Hence, the number of messages sent in the worst case is w.h.p. , which equals . ∎
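The filtering-and-forwarding rule can be sketched as follows; `Intermediate` and its method names are hypothetical, not taken from the paper's pseudocode:

```python
class Intermediate:
    # Filtering-and-forwarding: forward a given contender id to a given
    # destination at most once; repeated copies are filtered out.
    def __init__(self):
        self.seen = set()

    def forward(self, contender_id, dest):
        if (contender_id, dest) in self.seen:
            return None  # already forwarded this id to this destination
        self.seen.add((contender_id, dest))
        return (contender_id, dest)

node = Intermediate()
out = [node.forward(7, 'v'), node.forward(7, 'v'), node.forward(8, 'v')]
print(out)  # [(7, 'v'), None, (8, 'v')]
```

The point of the filter is that the number of distinct (id, destination) pairs, not the number of raw message copies, bounds the traffic through an intermediate node.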

We conclude with the following theorem that combines the results of all the previous lemmas.

###### Theorem 13.

For any given graph that has a mixing time of , there exists an implicit leader election algorithm that succeeds w.h.p. in time and has a message complexity of , assuming that nodes know .

After finding the leader, we can use the well-known push-pull broadcast [22] to disseminate the id of the leader to all the other nodes, obtaining a solution for the explicit variant of leader election.

###### Corollary 14.

For any graph that has a conductance of and a mixing time of , there exists an explicit leader election algorithm that succeeds w.h.p. in time and has a message complexity of , assuming that nodes know and there are no failures.

###### Proof.

The corollary follows by appending a simple push-pull broadcast procedure [17] at the end of the implicit leader election algorithm. The push-pull broadcast takes time and messages. From equation 1, we know that , which implies . Therefore, the running time of leader election dominates the running time of the broadcast. ∎
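For concreteness, here is a hedged sketch of generic push-pull rumor spreading (adjacency lists as a dict); this illustrates the general technique of [17]/[22], not the paper's exact procedure:

```python
import random

def push_pull_rounds(adj, source):
    # Each round, every node contacts a uniformly random neighbor;
    # informed nodes push the rumor, uninformed nodes pull it.
    informed = {source}
    rounds = 0
    while len(informed) < len(adj):
        contacts = {u: random.choice(adj[u]) for u in adj}
        newly = set()
        for u, v in contacts.items():
            if u in informed:
                newly.add(v)      # push
            elif v in informed:
                newly.add(u)      # pull
        informed |= newly
        rounds += 1
    return rounds

# Complete graph on 16 nodes: the rumor typically spreads in O(log n) rounds.
adj = {u: [v for v in range(16) if v != u] for u in range(16)}
print(push_pull_rounds(adj, 0) >= 1)
```

On well-connected graphs (large conductance), this style of gossip informs all nodes quickly, which is why the broadcast cost is dominated by the leader election itself.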

## 4 Lower bounds

In this section, we prove lower bounds for implicit leader election by showing that there exists a class of graphs with conductance on which any leader election algorithm that succeeds with probability requires messages in expectation. We also derive corollaries that lower bound the total number of messages required for other graph problems such as broadcast and spanning tree construction.

###### Theorem 15.

Suppose there is a randomized leader election algorithm that succeeds with probability in -node networks where each node has a unique ID and knows the network size . Then, for every , where , there exists a graph of nodes and conductance such that the algorithm requires messages in expectation.

Throughout the proof of Theorem 15, we assume that nodes start without unique ids. However, since nodes have knowledge of the network size , it is straightforward to generate unique IDs with high probability. Hence we can use the same reduction as [11] (Sec. 3, paragraph “Unique IDs vs Anonymous”) to remove this assumption and show that our result holds even when nodes are equipped with unique ids.

### 4.1 The lower bound graph(s)

Graphs and .  We start out by describing the construction of the graph that we use to prove the message complexity lower bound. For any given and such that , we create the graph that has a total number of nodes and a conductance . In this regard, we also define a parameter .

We first construct a super-node graph with super-nodes, and later derive the graph from . The graph is created as a random regular graph (as in [8],[7]) where each super-node has a degree . See Figure 1. For the purpose of analysis, since it does not change our bounds, we assume that both and are integers.

Let and be the vertex set and the edge set of the graph . To create the graph from , each super-node is replaced with a clique of nodes. For each edge of between two super-nodes, say and , a corresponding edge is created in between a (previously unchosen) node chosen randomly from the clique and a (previously unchosen) node chosen randomly from the clique . As each super-node has exactly edges connected to it, each clique contains such chosen nodes (called external-edged nodes). An edge between two nodes belonging to the same clique is called an intra-clique edge, whereas an edge between nodes belonging to different cliques is called an inter-clique edge. To maintain uniform node degrees of exactly , two intra-clique edges are removed: one between any two of the external-edged nodes, and the other between the remaining two external-edged nodes. See Figure 2. Thus, every clique of contains two types of nodes: internal-edged nodes, which have only intra-clique edges, and external-edged nodes, which have intra-clique edges and one inter-clique edge.
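The clique-replacement step can be sketched as below. This is a simplified illustration only: it parameterizes the clique size, assumes each super-node's degree is at most the clique size, and omits the final removal of two intra-clique edges that restores uniform degrees.

```python
import random

def build_G(H_edges, clique_size):
    # Each super-node of H becomes a clique; every super-edge of H is
    # wired between previously unchosen random nodes of the two cliques.
    nodes = lambda s: [(s, i) for i in range(clique_size)]
    supers = {s for e in H_edges for s in e}
    edges = set()
    for s in supers:  # intra-clique edges
        cl = nodes(s)
        edges |= {(a, b) for i, a in enumerate(cl) for b in cl[i + 1:]}
    unused = {s: list(range(clique_size)) for s in supers}
    for s, t in H_edges:  # inter-clique edges on fresh random ports
        a = (s, unused[s].pop(random.randrange(len(unused[s]))))
        b = (t, unused[t].pop(random.randrange(len(unused[t]))))
        edges.add((a, b))
    return edges

E = build_G([(0, 1), (1, 2)], clique_size=4)
print(len(E))  # 3 cliques * 6 intra-clique edges + 2 inter-clique edges = 20
```

Each endpoint of an inter-clique edge is drawn from the still-unused ports of its clique, mirroring the "previously unchosen" requirement in the construction above.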

Based on the construction of , there exists a one-to-one mapping between the nodes of the super-node graph and the cliques in the graph .

Graph .  We define the clique communication graph as a graph whose vertex set is equivalent to the vertex set of the super-node graph , whose elements we simply call cliques. An edge exists in from clique to iff a message is sent on the inter-clique edge from some node in to a node in . Note that the edge set of can grow over the course of the algorithm. For the purpose of our analysis, we only keep track of the first message sent on an inter-clique edge, and so we treat as a simple graph.

### High-Level Overview of the Lower Bound Proof

We begin by showing that the conductance of the constructed lower bound graph is . Then, we assume towards a contradiction that there exists an algorithm that solves implicit leader election on the graph by sending at most messages in expectation, where . From the construction of , this implies , as and . Next, we show that any algorithm that sends at most messages in expectation on is likely to find at most inter-clique edges. Then, given that only inter-clique edges can be used for communication in the clique communication graph , we show that the random variables representing the states of the resulting connected components (in ) are nearly independent of one another. We leverage this “near independence” to show that the algorithm elects either no leader or more than one leader with constant probability (similarly to identically distributed and fully independent indicator random variables), resulting in a contradiction. We formalize this overview in the remainder of this section.

###### Lemma 16.

The conductance of the graph is with high probability.

###### Proof.

First, we define an optimal cut of a graph as the cut that determines the minimum cut-conductance of the graph, and hence also determines the conductance of the graph. We prove the lemma by using the following claim.

###### Claim 17.

The optimal cut of the graph does not pass through any of the cliques, i.e., the set of edges cut by the optimal cut comprises only inter-clique edges.

###### Proof.

Assume for the sake of contradiction that a given cut of the graph , with cut-conductance , is the optimal cut of . We show that if intersects (passes through) any clique, the conductance can always be reduced to such that by moving a group of nodes from one side of the cut to the other. This contradicts our assumption of being the optimal cut and thereby proves the claim. If , we compare this cut to the middle cut of the graph, which cuts the graph into two equal parts and does not pass through any cliques. A simple calculation shows that this middle cut's conductance is .

We refer to the total volume of the graph (the sum of all node degrees) as . The side of the cut that has the (initially) lower volume is called the min side of the cut, and the side (initially) having the larger volume is called the max side. If the volumes of both sides are equal, then we arbitrarily assign one side as min and the other as max. For the given cut , we consider , where denotes the set of cut edges (edges with end nodes on either side of ) and denotes the volume of the min side of the cut. Note that, since node degrees are uniform, each node contributes towards the volume, and thus the side having the smaller number of nodes has the minimum volume.
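As a concrete reference point, the quantity being optimized is the cut-conductance: the number of cut edges divided by the smaller of the two side volumes. A small sketch with our own function name:

```python
def cut_conductance(edges, side_a):
    # phi(S) = (# cut edges) / min(vol(S), vol(V \ S)), where the volume
    # of a side is the sum of the degrees of its nodes.
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    cut = sum(1 for u, v in edges if (u in side_a) != (v in side_a))
    vol_a = sum(deg[u] for u in side_a)
    vol_b = sum(deg.values()) - vol_a
    return cut / min(vol_a, vol_b)

# 4-cycle split into opposite pairs: 4 cut edges, each side has volume 4.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(cut_conductance(edges, {0, 2}))  # 1.0
```

The case analysis below tracks exactly how the numerator (cut edges) and the denominator (min-side volume) of this ratio change as nodes are moved across the cut.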

We look at the cliques that are cut by : the side of a clique that has nodes is labeled the minority side, and the side that has nodes is labeled the majority side. If both sides have exactly nodes, the labeling is done arbitrarily. Note that whenever we say min/max side we refer to the sides of the cut of the entire graph, whereas minority/majority side refers to the cut sides of the particular clique under consideration.

Consider any clique that is divided by the cut . Let there be nodes in the minority side of the clique and nodes in the majority side, where . To show that we can obtain a lower conductance, we always move the nodes from the minority side of the clique to the opposite side of the cut (except in Case 3, where moving the nodes would cause the volume of the min side of the cut to become ; in that case, nodes are moved from the majority side of the clique to the other side of the cut). After the nodes are moved, we show in all cases that the maximum possible conductance after the move, , is less than the conductance before the move, , giving us a contradiction. Observe that the denominator is always greater than , since in no case are all the nodes moved out of the eventual min side.

For each case (except Case 3), when nodes are moved, say there are internal-edged nodes and external-edged nodes such that . Each of the internal-edged nodes in the minority side was previously connected to nodes in the majority side (due to the clique edges), and each of the external-edged nodes in the minority side was previously connected to at least nodes in the majority side (the case that maximizes ). Therefore, comparing with , the reduction in the number of cut edges is at least . The total possible number of external-edged nodes is at most (as the super-node graph was -regular); therefore, in this case . Also, by our choice of (and the range of ), we see that . We consider the following cases for a clique while moving nodes from the minority side to the opposite side of the cut.

Case 1: The nodes move from the min side of the cut to the max side.
Moving the nodes reduces the volume of the min side of the cut by .

$$\phi_{new} \;=\; \frac{C-\big(k(n^{\epsilon}-k)-k^2\big)}{V-kn^{\epsilon}} \;\leqslant\; \frac{C-kn^{\epsilon}+k^2+4}{V-kn^{\epsilon}}$$
$$\phi_K-\phi_{new} \;\geqslant\; \frac{C}{V}-\frac{C-kn^{\epsilon}/6}{V-kn^{\epsilon}} \;=\; \frac{kn^{\epsilon}(V/6-C)}{V(V-kn^{\epsilon})} \;=\; \frac{Vkn^{\epsilon}(1/6-\phi_K)}{V(V-kn^{\epsilon})}$$

As is , , which implies that .
Case 2: The nodes move from the max side of the cut to the min side.
Case 2A: The min side's volume still remains after the nodes are moved.
Moving the nodes into the min side of the cut increases its volume by .

$$\phi_{new} \;=\; \frac{C-k(n^{\epsilon}-k)+k^2}{V+kn^{\epsilon}} \;\leqslant\; \frac{C-kn^{\epsilon}+k^2+4}{V+kn^{\epsilon}}$$

The number of cut edges strictly decreases compared to before, while the volume increases; therefore, it is clear that .
Case 2B: The min side's volume becomes due to the moved nodes.
As a result of the nodes moving over, the max side now becomes the side with the lower volume, namely , where is the total volume of the graph.

$$\phi_{new} \;=\; \frac{C-k(n^{\epsilon}-k)+k^2}{V_{total}-V-kn^{\epsilon}} \;\leqslant\; \frac{C-kn^{\epsilon}+k^2+4}{V_{total}-V-kn^{\epsilon}}$$

As (because was initially the volume of the side having the lower volume among the two sides of the cut) and as is , we see that for the terms in the numerator, and , i.e., , which implies that .
Case 3: In a special case, moving the nodes from the minority side of the clique to the majority side results in the volume of the min side of the cut becoming . (This is the case where the min side of the cut contains only the minority side of a clique.) Instead of moving nodes from the minority side of the clique to the majority side, nodes are moved from the majority side to the minority side (such that the min side of the cut then consists of exactly one clique). In contrast to all the previous cases, here the nodes move from the majority side of the clique to the minority side. Note that this movement cannot increase the volume of the min side of the cut to a value . Say there are internal-edged and external-edged nodes in the majority side such that . Each of the internal-edged nodes in the majority side was previously connected to nodes in the minority side (due to the clique edges), and each of the external-edged nodes in the majority side was previously connected to at least nodes in the minority side (the case that maximizes ). Therefore, comparing with , the reduction in the number of cut edges is at least . The total possible number of external-edged nodes is at most (as the super-node graph was -regular); therefore, in this case . Also, by our choice of (and the range of ), we see that . Moving the nodes into the min side of the cut increases its volume by .

$$\phi_{new} \;=\; \frac{C-k(n^{\epsilon}-k)+k^2}{V+n^{\epsilon}(n^{\epsilon}-k)} \;\leqslant\; \frac{C-kn^{\epsilon}+k^2+4}{V+n^{\epsilon}(n^{\epsilon}-k)}$$

The number of cut edges strictly decreases compared to before, while the volume increases; therefore, it is clear that .

Since in each case we obtain a conductance of whenever the cut passes through a clique, we conclude that the optimal cut does not pass through any of the cliques. ∎

Now, we give a one-to-one correspondence between the cuts on the super-node graph and the cuts on that do not pass through any cliques: consider any cut of and the identical cut on such that, if cuts an edge in the graph , then cuts the edge of that was created on behalf of while constructing from (refer to the construction of described at the beginning of Section 4).

Note that this also creates a one-to-one correspondence between the respective cut-conductances, such that . Clearly, the number of cut edges across the cuts and remains the same in either case. Let the volume of the smaller side of the cut be . Since each super-node has degree , the total number of super-nodes in the smaller side of the cut equals . As described earlier, while constructing from , each super-node is replaced by a clique of nodes, each of degree , where the degree is adjusted to include the inter-clique edges by removing intra-clique edges. Therefore, the volume of the smaller side of the corresponding cut of the graph is . Thus, if the cut-conductance determined by in is , the cut-conductance given by the cut in is .

It immediately follows from this correspondence that if is the optimal cut determining the conductance of the graph , then its corresponding identical cut is the cut determining the conductance of . From [7], we know that for sufficiently large , almost every random regular graph of degree has constant conductance, which implies that w.h.p. the conductance of is . ∎

### 4.2 Distinct parts remain disjoint

Recall that we assume , and assume towards a contradiction that the algorithm sends at most messages in expectation. In this section, we show that the parts of the network in which nodes might be initiating the exploration of their neighborhoods evolve independently, in the sense that they are likely to never communicate.

Let random variable give the number of messages sent by the algorithm and let random variable give the number of messages sent by the nodes in clique .

###### Lemma 18.

Without receiving any messages, if a clique sends a message (we slightly abuse notation by saying that a clique sends a message when, in fact, some node in performs the sending action) over an inter-clique edge, then it follows that the nodes in have sent at least messages in expectation, i.e., .

###### Proof.

Recall from the construction of the super-node graph that the inter-clique ports are assigned uniformly at random among all available ports of . Any clique has a total of ports, out of which only ports belong to inter-clique edges. Moreover, the nodes are unaware of their neighbors' identities; in particular, the four nodes holding an inter-clique port are unaware of this fact. First, observe that if a clique sends more than messages before sending its first inter-clique message, the lemma is vacuously true. Otherwise, given that no messages were received via an inter-clique edge, at any point before the first inter-clique message is sent, there are at least ports among the nodes in over which no message has been sent yet, and each of them is equally likely to connect to an inter-clique edge. Thus, the probability that a message is sent over an inter-clique edge for the first time (in clique ) is at most .
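A quick simulation of this port-probing argument (with hypothetical parameters: 100 ports, of which 4 are inter-clique). A standard fact about uniformly random orderings is that the expected number of ordinary ports probed before the first of k special ports among P is (P − k)/(k + 1):

```python
import random

def probes_until_interclique(total_ports, inter_ports, trials=2000):
    # Ports are hidden and probed in a uniformly random order; count how
    # many ports are used before the first inter-clique port is hit.
    hits = 0
    inter = set(range(inter_ports))  # wlog ports 0..k-1 are inter-clique
    for _ in range(trials):
        order = random.sample(range(total_ports), total_ports)
        hits += next(i for i, p in enumerate(order) if p in inter)
    return hits / trials

# Expectation is (P - k) / (k + 1); for P = 100, k = 4 that is 19.2.
avg = probes_until_interclique(100, 4)
print(abs(avg - 19.2) < 5)  # True (with overwhelming probability)
```

With ports per clique and only a handful of inter-clique ports, this expectation is what forces many messages before the first inter-clique edge is found.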

Therefore, in expectation, the number of messages sent by any clique before sending its first inter-clique message is at least , which is . ∎

In the rest of the proof, we will analyze the probability that certain subgraphs of the clique communication graph (see Section 4.1) contain a leader node. We first state a crucial consequence of Lemma 18 in the language of clique communication graphs:

###### Lemma 19.

With probability , the clique communication graph contains at most edges.

###### Proof.

Let , for a sufficiently large constant , and suppose towards a contradiction that contains at least edges with constant probability .

For each clique that is in a non-singleton connected component in the clique communication graph , we define its first edge as the first inter-clique edge over which its nodes have sent (or received) a message to (from) another clique. Let be the set of cliques that have first edges. Since each clique can connect to at most other cliques in , we have . We have

 (3)

Since the number of messages required for discovering an inter-clique edge of is independent of the event , it holds that

Applying Lemma 18 for each , we obtain from (3) that

By choosing sufficiently large, we obtain a contradiction to the assumption of sending at most messages in expectation. ∎

Spontaneous Cliques. Since we consider randomized algorithms, we assume that each node is equipped with a random bit string of infinite length. If a clique has no incoming edges in throughout the execution, i.e., it does not receive any messages from nodes in other cliques, then the actions and state transitions of its nodes depend exclusively on the supplied random bit strings. In particular, by inspecting these random bit strings, we can determine whether the nodes in will send messages across any inter-clique edges of . This motivates us to call spontaneous if some node in eventually sends an outgoing message assuming that no node ever receives an incoming message (as per its initial random string). (Note that it may not actually send an outgoing message, because it may first receive a message from some node in another clique.) We use the notation to denote the connected component of a clique in , and note that can grow over time.
Disjoint Components. We define to be the event that, at any point in the algorithm's execution, each connected component in contains at most one spontaneous clique, and each non-singleton connected component contains exactly one. We show that this is in fact likely to occur. The next lemma summarizes the main result of this subsection:

###### Lemma 20.

Event occurs with probability .

###### Proof.

From Lemma 19, we know that with probability , the clique communication graph contains at most edges, for some fixed constant . Also note that the only way to violate event is for two connected components to merge, each of which initially had only one spontaneous clique. Clearly, each non-singleton connected component contains at least one spontaneous clique. We show that, conditioned on the event that has edges, the probability that a connected component selects an inter-clique edge to a subgraph , which can be either another non-singleton connected component or a spontaneous clique (that may still be a singleton), is quite low. Denote this event by . Let and be the number of open ports of and , respectively. Then, it holds that

since by assumption. This shows that two non-singleton connected components do not combine with probability at least . Considering that there are at most possible edges in the clique graph (see Lemma 19), we conclude that occurs with probability at least

### 4.3 Bounding the dependencies between connected components.

So far, we have shown that connected components are likely to remain disjoint throughout the execution. However, we cannot directly argue that this implies a small probability of electing a leader, since the conditioning on event restricts the evolution of a given connected component, as we explain in more detail below.

We view the execution of the algorithm as a sequence of steps performed by cliques, where a step involves either an update to a clique's state (defined below) or the sending of a message. Note that a step here is different from a round, as there may be simultaneous actions at cliques happening in the same round; for the analysis, we fix an arbitrary order on such simultaneous events.

We define the state of clique in as follows: (1) it is empty if is not spontaneous; otherwise, (2) it consists of the local states of the nodes that are part of the connected component of in . In this notation, sending a message between two nodes of the same clique corresponds to a local update of the clique's state.

Formally, we use the notation to denote the state of clique after steps and define to be the collective state of all the cliques after steps. By inspecting , we can derive whether there is a leader in one of the cliques of the connected component of in .

For the rest of the proof, we assume that all connected components remain disjoint throughout the execution, i.e., event occurs (see Lemma 20).

Let be a collection of states after step for all the cliques, and suppose that represents a state in which holds; formally, the event has nonzero probability conditioned on . We use the notation to refer to the state of the clique in the collection of states . If the cliques' eventual states were completely independent, then we would have . Note that the conditioning on can introduce dependencies between the event that some clique transitions to a given state and the states of other cliques, and thus we cannot assume that this equality holds. However, we prove that any possible dependency due to event