Selecting a Leader in a Network of Finite State Machines

05/15/2018 · by Yehuda Afek, et al.

This paper studies a variant of the leader election problem under the stone age model (Emek and Wattenhofer, PODC 2013) that considers a network of n randomized finite automata with very weak communication capabilities (a multi-frequency asynchronous generalization of the beeping model's communication scheme). Since solving the classic leader election problem is impossible even in more powerful models, we consider a relaxed variant, referred to as k-leader selection, in which a leader should be selected out of at most k initial candidates. Our main contribution is an algorithm that solves k-leader selection for bounded k in the aforementioned stone age model. On (general topology) graphs of diameter D, this algorithm runs in Õ(D) time and succeeds with high probability. The assumption that k is bounded turns out to be unavoidable: we prove that if k = ω (1), then no algorithm in this model can solve k-leader selection with a (positive) constant probability.


1 Introduction

Many distributed systems rely on the existence of one distinguishable node, often referred to as a leader. Indeed, the leader election problem is among the most extensively studied problems in distributed computing [GHS83, Awe87, LL90, AM94]. Leader election is not confined to digital computer systems though as the dependency on a unique distinguishable node is omnipresent in biological systems as well [KN93, SCW05, KFQS10]. A similar type of dependency exists also in networks of man-made micro- and even nano-scale sub-microprocessor devices [DGS15].

The current paper investigates the task of electing a leader in networks operating under the stone age (SA) model [EW13] that provides an abstraction for distributed computing by nodes that are significantly inferior to modern computers in their computation and communication capabilities. In this model, the nodes are controlled by randomized finite automata and can communicate with their network neighbors using a fixed message alphabet based on a weak communication scheme that can be viewed as an asynchronous extension of the set broadcast (SB) communication model of [HJK15] (a formal definition of our model is provided in Sec. 1.1).

Since the state space of a node in the SA model is fixed and does not grow with the size of the network, SA algorithms are inherently uniform, namely, the nodes are anonymous and lack any knowledge of the network size. Unfortunately, classic impossibility results state that leader election is hopeless in these circumstances (even under stronger computational models): Angluin [Ang80] proved that uniform algorithms cannot solve leader election in a network with success probability 1; Itai and Rodeh [IR90] extended this result to algorithms that are allowed to fail with a bounded probability.

Thus, in the distributed systems that interest us, leader election cannot be solved by the nodes themselves and some “external help” is necessary. This can be thought of as an external symmetry breaking signal that only one node is supposed to receive. Symmetry breaking signals are actually quite common in reality and can come in different shapes and forms. A prominent example of such external signaling occurs during the development process of multicellular organisms, when ligand molecules flow through a cellular network in a certain direction, hitting one cell before the others and triggering its differentiation [Sla09].

But what if the symmetry breaking signal is noisy and might be received by a handful of nodes? Is it possible to detect that several nodes received this signal? Can the system recover from such an event or is it doomed to operate with multiple leaders instead of one?

In this paper, we study the k-leader selection problem, where at most k (and at least 1) nodes are initially marked as candidates, out of which exactly one should be selected. On top of the relevance of this problem to the aforementioned questions, it is also motivated by the following application. Consider scenarios where certain nodes, including the leader, may get lost during the network deployment process, e.g., a sensor network whose nodes are dropped from an airplane. In such scenarios, one may wish to produce k candidate leaders with the purpose of increasing the probability that at least one of them survives; a k-leader selection algorithm should then be invoked to ensure that the network has exactly one leader when it becomes operational.

The rest of the paper is organized as follows. In Sec. 1.1, we provide a formal definition of the distributed computing model used in the paper. Our results are summarized in Sec. 1.2 and some additional related literature is discussed in Sec. 1.3. A k-leader selection algorithm, which constitutes our main technical contribution, is presented in Sec. 2, whereas Sec. 3 provides some negative results.

1.1 Model

The distributed computing model considered in this paper follows the stone age (SA) model of Emek and Wattenhofer [EW13]. Under this model, the communication network is represented by a finite connected undirected graph G = (V, E) whose nodes are controlled by randomized finite automata with state space Q, message alphabet Σ, and transition function δ whose role is explained soon.

Each node v of degree d is associated with d input ports (or simply ports), one port for each neighbor u of v in G, holding the last message received from u at v. The communication model is defined so that when node v sends a message σ ∈ Σ, the same message is delivered to all its neighbors; when (a copy of) this message reaches a neighbor u, it is written into the port of u associated with v, overwriting the previous message in this port. Node v’s (read-only) access to its own ports is very limited: for each message type σ ∈ Σ, it can only distinguish between the case where σ is not written in any port and the case where it is written in at least one port.

The execution is event driven with an asynchronous scheduler that schedules the aforementioned message delivery events as well as node activation events. (The only assumption we make on the event scheduling is FIFO message delivery: a message sent by node v at time t is written into the corresponding port of its neighbor u before any message sent by v at a later time t' > t.) When node v is activated, the transition function δ determines (in a probabilistic fashion) its next state and the next message to be sent based on its current state and the current content of its ports. Formally, the pair (q', σ') of next state and next message is chosen uniformly at random from δ(q, A), where q is the current state and A ⊆ Σ is defined so that σ ∈ A if and only if σ is written in at least one of v’s ports.

To complete the definition of the randomized finite automata, one has to specify the set Q_in ⊆ Q of initial states that encode the node’s input, the set Q_out ⊆ Q of output states that encode the node’s output, and the initial message σ_0 ∈ Σ written in the ports when the execution begins. SA algorithms are required to have termination detection, namely, every node must eventually decide on its output and this decision is irrevocable.
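To make the port-reading restriction concrete, the following minimal Python sketch (ours, not part of the paper) simulates a single activation of a node: the node observes only which message types are currently written in its ports and samples its next state and outgoing message from the transition function. The alphabet, states, and the particular function delta below are hypothetical placeholders.

```python
import random

# A minimal sketch (not the paper's code) of one activation of a node in the SA
# model with bounding parameter b = 1: the node sees only WHICH message types
# are present in its ports, not how many, and samples (next state, next message)
# from the transition function delta.

def delta(state, present):
    """Hypothetical transition function: maps (state, frozenset of observed
    message types) to a list of equally likely (next_state, next_message) pairs."""
    if "grow" in present:
        return [("active", "echo")]
    return [("idle", "sigma0"), ("active", "grow")]

class Node:
    def __init__(self):
        self.state = "idle"                 # some initial state from Q_in
        self.ports = {}                     # neighbor -> last message received from it

    def observe(self):
        # b = 1 semantics: only the set of message types written in the ports is visible
        return frozenset(self.ports.values())

    def activate(self):
        options = delta(self.state, self.observe())
        self.state, outgoing = random.choice(options)   # uniform choice over delta(q, A)
        return outgoing                     # the same message goes to all neighbors

node = Node()
node.ports = {"u1": "grow", "u2": "grow"}   # two ports with the same type look like one
print(node.activate())
```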

Following the convention in message passing distributed computing (cf. [Pel00]), the run-time of an asynchronous SA algorithm is measured in terms of time units scaled to the maximum of the time it takes to deliver any message and the time between any two consecutive activations of a node. Refer to [EW13] for a more detailed description of the SA model.

The crux of the SA model is that the number of states in Q and the size of the message alphabet Σ are constants independent of the size (and any parameter) of the graph G. Moreover, a node cannot distinguish between its ports and, in general, its degree may be (much) larger than |Q| and |Σ|.

Weakening the Communication Assumptions.

The model defined in the current paper is a restriction of the model of [EW13], where the algorithm designer could choose an additional constant bounding parameter b ≥ 1, providing the nodes with the capability to count the number of ports holding message σ up to b. In the current paper, the bounding parameter is set to b = 1. This model choice can be viewed as an asynchronous multi-frequency variant of the beeping communication model [CK10, AAB11].

Moreover, in contrast to the existing SA literature, the communication graph assumed in the current paper may include self-loops of the form (v, v), which means, in accordance with the definition of the SA model, that node v admits a port that holds the last message received from v itself. Using the terminology of the beeping model literature (see, e.g., [AAB11]), the assumption that the communication graph is free of self-loops corresponds to sender-side collision detection, whereas lifting this assumption means that a node may not necessarily distinguish its own transmitted message from those of its neighbors.

It turns out that self-loops have a significant effect on the power of SA algorithms. Indeed, while a SA algorithm that solves the maximal independent set (MIS) problem with probability 1 is presented in [EW13] under the assumption that the graph is free of self-loops, we prove in Sec. 3 that if the graph is augmented with self-loops, then no SA algorithm can solve this problem with a bounded failure probability. To distinguish between the original model of [EW13] and the one considered in the current paper, we hereafter refer to the latter as the restricted SA model.

1.2 Results

Throughout, the number of nodes and the diameter of the graph are denoted by n and D, respectively. We say that an event occurs with high probability (whp) if its probability is at least 1 − n^(−c) for an arbitrarily large constant c. Our main technical contribution is cast in the following two theorems.

Theorem 1.1.

For any constant k, there exists a restricted SA algorithm that solves the k-leader selection problem in Õ(D) time whp. (The asymptotic Õ(·) notation may hide polylogarithmic factors.)

Theorem 1.2.

If the upper bound k on the number of candidates may grow as a function of n, then there does not exist a SA algorithm (operating on graphs with no self-loops) that solves the k-leader selection problem with a failure probability bounded away from 1.

We emphasize that the failure probability of the algorithm promised in Thm. 1.1 (i.e., the probability that the algorithm selects multiple leaders or that it runs for more than Õ(D) time) is inverse polynomial in n even though each individual node does not (and cannot) possess any notion of n — to a large extent, this, together with the termination detection requirement, captures the main challenge in designing the promised algorithm. (If we aim for a failure probability inverse polynomial in k, rather than n, and we do not insist on termination detection, then the problem is trivially solved by the algorithm that simply assigns a random ID from a set whose size is polynomial in k to each candidate and then eliminates a candidate if it encounters an ID larger than its own.) The theorem assumes that k is a constant and hides the dependency of the algorithm’s parameters on k. A closer look at its proof reveals that our algorithm uses local memory and messages whose size (in bits) depends on k. Thm. 1.2 asserts that the dependence of these parameters on k is unavoidable. Whether this dependence can be improved remains an open question.

1.3 Additional Related Literature

As mentioned earlier, the SA model was introduced by Emek and Wattenhofer in [EW13] as an abstraction for distributed computing in networks of devices whose computation and communication capabilities are far weaker than those of a modern digital computer. Their main focus was on distributed problems that can be solved in sub-diameter (specifically, polylogarithmic) time, including MIS, tree coloring, coloring bounded degree graphs, and maximal matching. This remained the case also in [EU16], where Emek and Uitto studied SA algorithms for the MIS problem in dynamic graphs. In contrast, the current paper considers the k-leader selection problem — an inherently global problem that requires Ω(D) time.

Computational models based on networks of finite automata have been studied for many years. The best known such model is the extensively studied cellular automata that were introduced by Ulam and von Neumann [Neu66] and became popular with Martin Gardner’s Scientific American column on Conway’s game of life [Gar70] (see also [Wol02]).

Another popular model that considers a network of finite automata is the population protocols model, introduced by Angluin et al. [AAD06] (see also [AR09, MCS11]), where the network entities communicate through a sequence of atomic pairwise interactions controlled by a fair (adversarial or randomized) scheduler. This model provides an elegant abstraction for networks of mobile devices with proximity derived interactions and it also fits certain types of chemical reaction networks [Dot14]. Some work on population protocols augments the model with a graph defined over the population’s entities so that the pairwise interactions are restricted to graph neighbors, thus enabling some network topology to come into play. However, for the kinds of networks we are interested in, the fundamental assumption of sequential atomic pairwise interactions may provide the population protocol with an unrealistic advantage over weaker message passing variants (including the SA model) whose communication schemes do not enable a node to interact with its individual neighbors independently. Furthermore, population protocols are typically required to eventually converge to a correct output and are allowed to return arbitrary (wrong) outputs beforehand, a significantly weaker requirement than the termination detection requirement considered in this paper.

The neat amoebot model introduced by Dolev et al. [DGRS13] also considers a network of finite automata in a (hexagonal) grid topology, but in contrast to the models discussed so far, the particles in this network are augmented with certain mobility capabilities, inspired by the amoeba contraction-expansion movement mechanism. Since its introduction, this model was successfully employed for the theoretical investigation of self-organizing particle systems [SOP14, DGR14, DGR15, DGS15, DGR16, CDRR16, DDG18], especially in the context of programmable matter.

Leader election is arguably the most fundamental problem in distributed systems coordination and has been extensively studied from the early days of distributed computing [GHS83, FL87]. It is synonymous in most models with the construction of a spanning tree — another fundamental problem in distributed computing — where the root is typically the leader. Leader election has many applications including deadlock detection, choosing a key/password distribution center, and implementing a distributed file system manager. It also plays a key role in tasks requiring a reliable centralized coordinating node, e.g., Paxos and Raft, where leader election is used for consensus — yet another fundamental distributed computing problem, strongly related to leader election. Notice that in our model, leader selection does not (and cannot) imply a spanning tree, but it does imply consensus.

Angluin [Ang80] proved that uniform algorithms cannot break symmetry in a ring topology with success probability 1. Following this classic impossibility result, many symmetry breaking algorithms (with and without termination detection) that relax some of the assumptions in [Ang80] were introduced [AAHK86, ASW88, IR90, SS94, AM94]. Itai and Rodeh [IR90] were the first to design randomized leader election algorithms with bounded failure probability in a ring topology, assuming that the nodes know n. Schieber and Snir [SS94] and Afek and Matias [AM94] extended their work to arbitrary topology graphs.

2 Algorithm for k-Leader Selection

In this section, we present our algorithm and establish Thm. 1.1. We start with some preliminary definitions and assumptions presented in Sec. 2.1. Sec. 2.2 and 2.3 are dedicated to the basic subroutines on which our algorithm relies. The algorithm itself is presented in Sec. 2.4, where we also establish its correctness. Finally, in Sec. 2.5, we analyze the algorithm’s run-time.

2.1 Preliminaries

As explained in Sec. 1.1, the execution in the SA (and restricted SA) model is controlled by an asynchronous scheduler. One of the contributions of [EW13] is a SA synchronizer implementation (cf. the α-synchronizer of Awerbuch [Awe85]). Given a synchronous SA algorithm A whose execution progresses in fully synchronized rounds (with simultaneous wake-up), the synchronizer generates a valid (asynchronous) SA algorithm A' whose execution progresses in pulses such that the actions taken by A' in pulse t are identical to those taken by A in round t. (We emphasize the role of the assumption that when the execution begins, the ports hold the designated initial message σ_0. Based on this assumption, a node can “sense” that some of its neighbors have not been activated yet, hence synchronization can be maintained right from the beginning.) The synchronizer is designed so that the asynchronous algorithm A' has the same bounding parameter (b = 1 in the current paper) and asymptotic run-time as the synchronous algorithm A.

Although the model considered by Emek and Wattenhofer [EW13] assumes that the graph has no self-loops, it is straightforward to apply their synchronizer to graphs that do include self-loops, hence it can work also in our model. Consequently, in what follows, we restrict our attention to synchronous algorithms. Specifically, we assume that the execution progresses in synchronous rounds t = 1, 2, …, where in round t, each node
(1) receives the messages sent by its neighbors in round t − 1;
(2) updates its state; and
(3) sends a message to its neighbors (the same message to all neighbors).

Since we make no effort to optimize the size of the messages used by our algorithm, we assume hereafter that the message alphabet Σ is identical to the state space Q and that each node simply sends its current state to its neighbors at the end of every round. Nevertheless, for clarity of exposition, we sometimes describe the algorithm in terms of sending designated messages, recalling that this simply means that the states of the nodes encode these messages.
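The following toy Python loop (ours) illustrates this synchronous abstraction under the b = 1 restriction: every node’s outgoing message is its entire state, and what a node receives in a round is merely the set of distinct states its neighbors sent in the previous round. The update rule passed in is a hypothetical placeholder.

```python
# A toy simulation (ours) of the synchronous round structure: in round t, each
# node reads the set of states its neighbors sent in round t-1, updates its own
# state, and "sends" the new state to all of its neighbors.

def run_rounds(adj, states, update, rounds):
    """adj: node -> list of neighbors; states: node -> initial state;
    update: (own_state, frozenset_of_neighbor_states) -> new_state (hypothetical rule)."""
    inbox = {v: frozenset() for v in adj}          # nothing has been delivered before round 1
    for _ in range(rounds):
        new_states = {v: update(states[v], inbox[v]) for v in adj}
        # the message a node sends is simply its new state, delivered in the next round
        inbox = {v: frozenset(new_states[u] for u in adj[v]) for v in adj}
        states = new_states
    return states

# usage: a rule that spreads the "marked" state along a 3-node path
adj = {0: [1], 1: [0, 2], 2: [1]}
init = {0: "marked", 1: "idle", 2: "idle"}
rule = lambda s, nbrs: "marked" if s == "marked" or "marked" in nbrs else "idle"
print(run_rounds(adj, init, rule, 3))              # all three nodes end up "marked"
```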

To avoid a cumbersome presentation, our algorithm’s description does not get down to the resolution of the state space Q and transition function δ. It is straightforward though to implement our algorithm as a randomized finite automaton, adhering to the model presented in Sec. 1.1. In this regard, at the risk of stating the obvious, we remind the reader that if c is a constant, then a finite automaton supports arithmetic operations modulo c.

In the context of the k-leader selection problem, we use the verb withdraw when referring to a node that ceases to be a candidate.

2.2 The Ball Growing Subroutine

We present a generic ball growing subroutine in a graph G with at most k candidates. The subroutine is initiated at (all) the candidates, not necessarily simultaneously, through designated signals discussed later on. During its execution, some candidates may withdraw; in the context of this subroutine, we refer to the surviving candidates as roots.

The ball growing subroutine assigns a level variable λ(v) to each node v. A path (v_0, v_1, …, v_j) in G is called incrementing if λ(v_{i+1}) = λ(v_i) + 1 for every 0 ≤ i < j. The set of nodes reachable from a root r via an incrementing path is referred to as the ball of r, denoted by B(r). We design this subroutine so that the following lemma holds.

Lemma 2.1.

Upon termination of the ball growing subroutine,
(1) every incrementing path is a shortest path (between its endpoints) in G;
(2) every root belongs to exactly one ball (its own); and
(3) every non-root node belongs to at least one ball.

Intuition spotlight: A natural attempt to design the ball growing subroutine is to grow a breadth first search tree around candidate c, layer by layer, so that a node at distance d from c is assigned the level variable d. This is not necessarily possible though when multiple candidates exist: What happens if the ball growing processes of different candidates reach a node in the same round? What happens if these ball growing processes reach several adjacent nodes in the same round? If we are not careful, these scenarios may lead to incrementing paths that are not shortest paths and even to cyclic incrementing paths. Things become even more challenging considering the weak communication capabilities of the nodes that may prevent them from distinguishing between the ball growing processes of different candidates.

The ball growing subroutine is implemented under the restricted SA model by disseminating ⟨grow(ℓ)⟩ messages, ℓ being a level value, throughout the graph. Consider a candidate c and let t_c be the round in which it is signaled to invoke the ball growing subroutine. If c receives a ⟨grow⟩ message in some round t ≤ t_c, then c withdraws and subsequently follows the protocol like any other non-root node; otherwise, c becomes a root in round t_c. If t_c is even (resp., odd), then c assigns λ(c) ← 0 (resp., λ(c) ← 1) and sends a ⟨grow(λ(c))⟩ message.

Consider a non-root node v and let t_v be the first round in which it receives a ⟨grow⟩ message. Notice that v may receive several ⟨grow(ℓ)⟩ messages with different arguments ℓ in round t_v — let A be the set of all such arguments ℓ. Node v assigns λ(v) ← ℓ + 1 and sends a ⟨grow(λ(v))⟩ message at the end of round t_v, where ℓ is chosen to be any integer in A that satisfies:
(i) a condition guaranteeing membership in the ball of some root; and
(ii) a condition ruling out “indirect” joins that could violate the shortest path property.
This completes the description of the ball growing subroutine. Refer to Fig. 1 for an illustration.

Figure 1: The result of a ball growing process invoked at candidates A, B, and C in three different rounds. The level variables are depicted by the numbers written inside the nodes and the balls are depicted by the dashed curves. The boundary nodes appear with a gray background. The induced DAG is depicted by the oriented edges.

Intuition spotlight: Condition (i) ensures that v joins the ball of some root r. By condition (ii), nodes do not join “indirectly” (this could have led to incrementing paths that are not shortest paths).
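To make the mechanics tangible, here is a small centralized Python simulation (ours) of the ball growing process on a toy graph. Since the exact formulation of conditions (i) and (ii) is not spelled out above, the simulation substitutes a simplified stand-in — each newly reached node adopts the smallest level it received plus one — which suffices to illustrate the parity trick of the roots and the resulting level/ball structure; it is not claimed to be the paper’s rule.

```python
# Toy, centralized simulation (ours) of the ball growing process. The choice
# "smallest received level" below is a simplified stand-in for the paper's
# conditions (i) and (ii); everything else follows the description above.

def ball_growing(adj, invocation_round):
    """adj: node -> list of neighbors; invocation_round: candidate -> signaling round."""
    level = {}
    pending = dict(invocation_round)      # candidates that have not been signaled yet
    frontier = {}                         # grow(l) messages sent at the end of the previous round
    t = min(pending.values())
    while pending or frontier:
        # grow messages sent in round t-1 are delivered in round t
        delivered = {}
        for u, l in frontier.items():
            for v in adj[u]:
                if v not in level:
                    delivered.setdefault(v, set()).add(l)
        newly = {}
        # a candidate signaled in round t becomes a root only if it has heard no grow message
        for c in [c for c, r in pending.items() if r == t]:
            del pending[c]
            if c in level or c in delivered:      # otherwise it withdraws and acts like a non-root
                continue
            level[c] = t % 2                      # parity trick: 0 if t is even, 1 if t is odd
            newly[c] = level[c]
        # non-root nodes hearing grow for the first time pick a level and forward it
        for v, args in delivered.items():
            if v not in newly:
                level[v] = min(args) + 1          # simplified stand-in choice
                newly[v] = level[v]
        frontier = newly
        t += 1
    return level

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(ball_growing(adj, {0: 2, 3: 3}))            # candidate 0 signaled in round 2, candidate 3 in round 3
```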

Proof of Lem. 2.1.

Consider a (root or non-root) node v and let t_v be the round in which v starts its active participation in the ball growing process. More formally, if v is a root (i.e., it is a candidate signaled to invoke the ball growing subroutine strictly before receiving any ⟨grow⟩ message), then t_v is the round in which it is signaled; otherwise, t_v is the first round in which v receives a ⟨grow⟩ message. The following properties are established by (simultaneous) induction on the rounds:


  • In any round t ≥ t_v, the variable λ(v) is even if and only if t_v is even.

  • In any round t ≥ t_v, node v has a neighbor u with λ(u) = λ(v) − 1 if and only if v is not a root.

  • In any round t ≥ t_v, node v belongs to ball B(r) for some root r.

  • In any round t ≥ t_v, if v ∈ B(r) for some root r, then the incrementing path(s) that realize this relation are shortest paths in the graph.

  • If v ∈ B(r) for some root r and v ≠ r, then v is not a root.

  • The total number of different arguments ℓ in the ⟨grow(ℓ)⟩ messages sent during a single round is at most k.

  • Every non-root node v finds a valid value to assign to λ(v) in round t_v.

The assertion follows. ∎

Observation 2.2.

If t_0 is the earliest round in which the ball growing process is initiated at some candidate, then the process terminates by round t_0 + O(D).

Boundary Nodes.

We will see in Sec. 2.4 that our algorithm detects candidate multiplicity by identifying the existence of multiple balls in the graph. The key notion in this regard is the following one (see Fig. 1): Node v is said to be a boundary node if
(1) v ∈ B(r) ∩ B(r') for two distinct roots r ≠ r'; or
(2) v ∈ B(r) for some root r and there exists a neighbor u of v such that u ∉ B(r).

Observation 2.3.

If the graph has multiple roots, then every ball includes at least one boundary node.

Node v is said to be a locally observable boundary node if it has a neighbor u such that |λ(u) − λ(v)| ≥ 2. Notice that by Lem. 2.1, there cannot be a ball that includes both v and u since then, at least one of the incrementing paths that realize these inclusions is not a shortest path. Therefore, a locally observable boundary node is in particular a boundary node.
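Assuming, as above, that messages encode the senders’ level variables, the locally observable test is a one-line predicate. The tiny Python helper below (ours) also illustrates, on the level assignment computed by the earlier toy simulation, that a node can be a boundary node without being locally observable.

```python
# A small helper (ours) for the locally observable boundary test: a node checks
# whether some neighbor's level differs from its own by at least 2.

def is_locally_observable_boundary(v, adj, level):
    return any(abs(level[u] - level[v]) >= 2 for u in adj[v])

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
level = {0: 0, 1: 1, 2: 2, 3: 1}     # output of the toy ball growing run above
print([v for v in adj if is_locally_observable_boundary(v, adj, level)])
# prints []: node 2 lies in two balls (a boundary node) yet is not locally observable
```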

The Directed Acyclic Graph.

Given two adjacent nodes u and v, we say that u is a child of v and that v is a parent of u if λ(u) = λ(v) + 1; a childless node is referred to as a leaf. This induces an orientation on a subset of the edges, say, from parents to their children (up the incrementing paths), thus introducing a directed graph whose edge set is an oriented version of a subset of the edges of G (see Fig. 1). Lem. 2.1 guarantees that this directed graph is acyclic (so, it is a directed acyclic graph, abbreviated DAG) and that it spans all nodes in G. Moreover, the sources and sinks of the DAG are exactly the roots and leaves of the ball growing subroutine, respectively, and the source-to-sink distances in the DAG are upper-bounded by the diameter D of G.

We emphasize that the in-degrees and out-degrees in the DAG are unbounded. Nevertheless, the simplifying assumption that the messages sent by the nodes encode their local states, including the level variables (see Sec. 2.1), ensures that a node can distinguish between messages received from its children, messages received from its parents, and messages received from nodes that are neither children nor parents.
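The orientation is entirely determined by the level variables, as the short Python sketch below (ours) shows for the toy assignment used earlier: it derives each node’s children and parents and lists the leaves. Names and data are illustrative only.

```python
# A sketch (ours) of how the level variables induce the parent/child orientation
# of the DAG: u is a child of v (edge v -> u) whenever level[u] == level[v] + 1;
# childless nodes are the leaves.

def orient(adj, level):
    children = {v: [u for u in adj[v] if level[u] == level[v] + 1] for v in adj}
    parents = {v: [u for u in adj[v] if level[u] == level[v] - 1] for v in adj}
    leaves = [v for v in adj if not children[v]]
    return children, parents, leaves

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
level = {0: 0, 1: 1, 2: 2, 3: 1}
children, parents, leaves = orient(adj, level)
print(children)   # {0: [1], 1: [2], 2: [], 3: [2]}
print(leaves)     # [2]
```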

2.3 Broadcast and Echo over the DAG

The assignment of level variables by the ball growing subroutine and the child-parent relations these variables induce provide a natural infrastructure for broadcast and echo (B&E) over the aforementioned DAG so that the broadcast (resp., echo) process progresses up (resp., down) the incrementing paths. These are implemented based on ⟨broadcast⟩ and ⟨echo⟩ messages as follows.

The broadcast subroutine is initiated at (all) the roots, not necessarily simultaneously, through designated signals discussed later on, and a root becomes broadcast ready upon receiving such a signal. A non-root node becomes broadcast ready in the first round in which it receives ⟨broadcast⟩ messages from all its parents. A (root or non-root) node v that becomes broadcast ready in round t_B(v) keeps sending ⟨broadcast⟩ messages throughout the round interval [t_B(v), t'_B(v)], where t'_B(v) is defined to be the first round (strictly) after t_B(v) in which
(i) v receives ⟨broadcast⟩ messages from all its children; and
(ii) v does not receive a ⟨broadcast⟩ message from any of its parents.
(Notice that conditions (i) and (ii) are satisfied vacuously for the leaves and roots, respectively.)

The echo subroutine is implemented in a reversed manner: It is initiated at (all) the leaves, not necessarily simultaneously, after their role in the broadcast subroutine ends, so that a leaf v becomes echo ready in round t'_B(v). A non-leaf node becomes echo ready in the first round in which it receives ⟨echo⟩ messages from all its children. A (leaf or non-leaf) node v that becomes echo ready in round t_E(v) keeps sending ⟨echo⟩ messages throughout the round interval [t_E(v), t'_E(v)], where t'_E(v) is defined to be the first round (strictly) after t_E(v) in which
(i) v receives ⟨echo⟩ messages from all its parents; and
(ii) v does not receive an ⟨echo⟩ message from any of its children.
(Notice that conditions (i) and (ii) are satisfied vacuously for the roots and leaves, respectively.)
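The following simplified Python trace (ours) computes, for the toy DAG derived earlier, the rounds in which the nodes become broadcast ready and echo ready; it ignores the exact sending windows and the auxiliary conditions discussed below, and assumes all roots are signaled in round 0 with one-round message delays. Its only purpose is to show broadcast readiness propagating from the roots toward the leaves and echo readiness propagating back.

```python
# Simplified trace (ours) of broadcast/echo readiness over a DAG given by
# children/parents maps: a non-root becomes broadcast ready one round after its
# last parent; a non-leaf becomes echo ready one round after its last child.

def be_rounds(children, parents):
    roots = [v for v in parents if not parents[v]]
    leaves = [v for v in children if not children[v]]
    b_ready = {v: 0 for v in roots}                       # all roots signaled in round 0 (assumption)
    while len(b_ready) < len(parents):
        for v in parents:
            if v not in b_ready and all(p in b_ready for p in parents[v]):
                b_ready[v] = max(b_ready[p] for p in parents[v]) + 1
    e_ready = {v: b_ready[v] for v in leaves}             # leaves echo once their broadcast role ends
    while len(e_ready) < len(children):
        for v in children:
            if v not in e_ready and all(c in e_ready for c in children[v]):
                e_ready[v] = max(e_ready[c] for c in children[v]) + 1
    return b_ready, e_ready

children = {0: [1], 1: [2], 2: [], 3: [2]}
parents = {0: [], 1: [0], 2: [1, 3], 3: []}
print(be_rounds(children, parents))
```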

Lemma 2.4.

The following properties hold for every B&E process:


  • Rounds t_B(v), t'_B(v), t_E(v), and t'_E(v) exist and satisfy t_B(v) ≤ t'_B(v) ≤ t_E(v) ≤ t'_E(v) for every node v.

  • If node u is reachable from node v in the DAG, then t_B(v) ≤ t_B(u) and t_E(u) ≤ t_E(v).

  • If t_1 is the latest round in which the process is initiated at some root, then the process terminates by round t_1 + O(D).

Proof.

Follows since the oriented graph is a DAG and all paths in it are shortest paths in G. ∎

Auxiliary Conditions.

In the aforementioned implementation of the broadcast (resp., echo) subroutine, being broadcast (resp., echo) ready is both a necessary and a sufficient condition for a node to start sending ⟨broadcast⟩ (resp., ⟨echo⟩) messages. In Sec. 2.4, we describe variants of this subroutine in which being broadcast (resp., echo) ready is a necessary, but not necessarily sufficient, condition and the node starts sending ⟨broadcast⟩ (resp., ⟨echo⟩) messages only after additional conditions, referred to later on as auxiliary conditions, are satisfied.

Acknowledged Ball Growing.

As presented in Sec. 2.2, the ball growing subroutine propagates from the roots to the leaves. To ensure that a root r is signaled when the construction of its ball has finished (cf. termination detection), r initiates a B&E process one round after it invokes the ball growing subroutine. The valid operation of this process is guaranteed since the ball growing process propagates at least as fast as the B&E process. We call the combined subroutine acknowledged ball growing.

2.4 The Main Algorithm

Our k-leader selection algorithm consists of two phases executed repeatedly in alternation:


  • the detection phase, which detects the existence of multiple candidates whp; and

  • the elimination phase, in which all candidates but one withdraw with at least constant probability.

Starting with a detection phase, the algorithm executes the phases in alternation until the first detection phase that does not detect candidate multiplicity. Each node maintains a phase variable that indicates its current phase.
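Before diving into the distributed implementation, the following centralized Python caricature (ours) captures the intended control flow from a bird’s-eye view: elimination steps are repeated until a detection step finds a single surviving candidate. It deliberately ignores the whp failure of detection, message passing, and timing; the priority-space size used below is an illustrative choice, not taken from the paper.

```python
import random

# A centralized caricature (ours) of the phase alternation: keep running
# elimination phases until a detection phase would report a single candidate.
# In each elimination phase every surviving candidate draws a random priority
# and only the candidates holding the maximum priority survive.

def k_leader_selection(candidates, priority_space):
    survivors = list(candidates)
    while len(survivors) > 1:                                    # detection: multiplicity found
        prios = {c: random.randrange(priority_space) for c in survivors}
        top = max(prios.values())
        survivors = [c for c in survivors if prios[c] == top]    # the rest withdraw
    return survivors[0]                                          # detection: single candidate left

print(k_leader_selection(["A", "B", "C"], priority_space=9))     # 9 is an illustrative size
```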

The two phases follow a similar structure: The (surviving) candidates start by initiating an acknowledged ball growing process. Among its other “duties”, this ball growing process is responsible for updating the phase variables of the nodes: a node whose phase variable equals i that receives a message from a node whose phase variable equals i + 1 sets its own phase variable to i + 1. When updating the phase variable to i + 1, the node ceases to participate in phase i, resetting all of its phase-i variables. Recalling the definition of the ball growing subroutine (see Sec. 2.2), this means in particular that if a candidate whose phase variable equals i receives a ⟨grow⟩ message from a node whose phase variable equals i + 1, then it withdraws and subsequently follows the protocol like any other non-root node.

Intuition spotlight: The ball growing process of phase i + 1 essentially “takes control” over the graph and “forcibly” terminates phase i (at nodes where it did not terminate already). We design the algorithm to ensure that at any point in time, there is at most one value of i for which there is an ongoing ball growing process in the graph (otherwise, we may get to undesired situations such as all candidates withdrawing).

Upon termination of the acknowledged ball growing process, the roots run a prescribed number of back-to-back B&E iterations, initiating the broadcast process of the next B&E iteration one round after the echo process of the previous B&E iteration terminates (the choice of this number will become clear soon). Each node maintains a variable that stores its current B&E iteration. This variable is initialized to 0 during the acknowledged ball growing process (considered hereafter as B&E iteration 0) and incremented subsequently from i to i + 1 when the node becomes broadcast ready in B&E iteration i + 1 (see Sec. 2.3). A phase ends when the echo process of the last B&E iteration terminates.

The iteration variables may differ across the graph and, to keep the B&E iterations in synchrony, we augment the B&E subroutines with the following auxiliary conditions (see Sec. 2.3): A node in B&E iteration i does not start to send ⟨broadcast⟩ (resp., ⟨echo⟩) messages as long as it has a non-child (resp., non-parent) neighbor whose B&E iteration is smaller than i. (This can be viewed as imposing the α-synchronizer of [Awe85] on the B&E iterations of the balls.) We emphasize that this includes neighbors that are neither children nor parents.

For the sake of the next observation, we globally map the B&E iterations to sequence numbers so that the B&E iterations of the first phase (which is a detection phase) are mapped to sequence numbers 0, 1, 2, …, respectively, the B&E iterations of the second phase (which is an elimination phase) are mapped to the subsequent sequence numbers, respectively, and so on. Let seq(v) be a variable (defined only for the sake of the analysis) indicating the sequence number of node v’s current B&E iteration.

Observation 2.5.

For every two roots r and r', we have |seq(r) − seq(r')| ≤ 1.

We say that round t is i-dirty (resp., (i + 1)-dirty) if some node whose phase variable equals i (resp., i + 1) sends a ⟨grow⟩ message in round t; the round is said to be clean if it is neither i-dirty nor (i + 1)-dirty. Obs. 2.5 implies that if some root has advanced sufficiently far into the B&E iterations of phase i in round t, then every other root has already passed the ball growing stage of phase i as well, hence the ball growing process of this phase has already ended and the ball growing process of the next phase has not yet started.

Corollary 2.6.

Let t and t' be some i-dirty and (i + 1)-dirty rounds, respectively. If t < t' (resp., t' < t), then there exists some t < t'' < t' (resp., t' < t'' < t) such that round t'' is clean.

2.4.1 The Detection Phase

In the detection phase, the nodes test for candidate multiplicity in the graph. If the graph contains a single candidate c, then the algorithm terminates upon completion of this phase and c is declared to be the leader. Otherwise, certain boundary nodes (see Sec. 2.2) realize whp that multiple balls exist in their neighborhoods and signal the roots that they should proceed to the elimination phase (rather than terminate the algorithm) upon completion of the current detection phase. This signal is carried by ⟨ERR⟩ messages delivered from the boundary nodes to the roots of their balls down the incrementing paths in conjunction with the ⟨echo⟩ messages of the (subsequent) B&E iterations.

For the actual candidate multiplicity test, once all nodes in the (inclusive) neighborhood of a node v participate in the detection phase, node v checks if it is a locally observable boundary node and triggers an ⟨ERR⟩ message delivery if it is. As the name implies, this check can be performed (locally) under the restricted SA model assuming that the messages sent by the nodes encode their local states, including the level variables.

Intuition spotlight: Although every locally observable boundary node is a boundary node, not all boundary nodes are locally observable: a node may belong to several different balls or two adjacent nodes with the same level variable may belong to different balls. For these kinds of scenarios, randomness is utilized to break symmetry between the candidates and identify (some of) the boundary nodes.

Consider some root r upon termination of the acknowledged ball growing subroutine and recall that at this stage, r runs back-to-back B&E iterations. In each round of these B&E iterations, r picks some symbol x uniformly at random (and independently of all other random choices) from a sufficiently large (yet constant size) symbol space X and sends a ⟨sym(x)⟩ message. This can be viewed as a random symbol stream that r generates, round by round, and sends to its children.

The random symbol streams are disseminated throughout the balls and utilized by the nodes (the boundary nodes in particular) to test for candidate multiplicity. For clarity of exposition, it is convenient to think of a node that does not send a ⟨sym(·)⟩ message as if it sends a ⟨sym(⊥)⟩ message for the default symbol ⊥. The mechanism in charge of disseminating the symbol streams up the incrementing paths works as follows: If a non-root node v receives ⟨sym(x)⟩ messages with the same argument x from all its parents at the beginning of round t, then v sends a ⟨sym(x)⟩ message at the end of round t; in all other cases, v sends a ⟨sym(⊥)⟩ message.

Throughout this process, each node v verifies that
(1) all ⟨sym(·)⟩ messages sent by v’s parents in round t carry the same argument; and
(2) any ⟨sym(·)⟩ message sent by a neighbor u of v with λ(u) = λ(v) in round t carries the same argument as the ⟨sym(·)⟩ message that v sends in round t (this is checked by v in round t + 1).
If any of these two conditions does not hold, then v triggers an ⟨ERR⟩ message delivery. A root that completes all B&E iterations of the detection phase without receiving any ⟨ERR⟩ message terminates the algorithm and declares itself as the leader.
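The per-round behavior of a non-root node under this mechanism can be summarized by two tiny functions, sketched below in Python (ours); the symbol space, the default symbol, and the function names are illustrative, and the exact round at which each check fires is simplified.

```python
import random

# A per-round sketch (ours) of the symbol-stream mechanism: a root emits a fresh
# random symbol each round; a non-root forwards a symbol only if all of its
# parents agree, and otherwise forwards the default symbol; a node raises ERR if
# its parents disagree or if a same-level neighbor posted a symbol different from
# the one the node itself sent in the previous round.

SYMBOLS = "abcdefgh"          # a sufficiently large, constant-size symbol space (illustrative)
DEFAULT = "_"

def root_symbol():
    return random.choice(SYMBOLS)

def forward(parent_symbols):
    vals = set(parent_symbols)
    return vals.pop() if len(vals) == 1 else DEFAULT

def check(parent_symbols, same_level_symbols, my_previous_symbol):
    if len(set(parent_symbols)) > 1:
        return "ERR"                                  # condition (1): parents disagree
    if any(s != my_previous_symbol for s in same_level_symbols):
        return "ERR"                                  # condition (2): same-level neighbor disagrees
    return "OK"

print(forward(["a", "a"]), forward(["a", "b"]))       # 'a' and the default symbol
print(check(["a", "a"], ["a", "b"], "a"))             # ERR: a same-level mismatch is detected
```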

Intuition spotlight: Since the aforementioned random tests should detect candidate multiplicity whp (i.e., with error probability inverse polynomial in n) and since the size of the symbol space from which the random symbol streams are generated is bounded, it follows that the length of the random symbol streams must be Ω(log n). How can we ensure that if the nodes cannot count beyond some constant?

To ensure that the random symbol stream is sufficiently long, we augment the echo subroutine invoked during one designated B&E iteration of the detection phase with one additional auxiliary condition, referred to as the geometric auxiliary condition: Consider some node v in this designated B&E iteration and suppose that it becomes echo ready (for this B&E iteration) in round t_E(v). Then, v tosses a fair coin in each subsequent round and does not start sending ⟨echo⟩ messages before the first round in which the coin comes up, say, heads. This completes the description of the detection phase.
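The point of the geometric auxiliary condition is that each node delays its echo by an independent geometric number of rounds, and the maximum of n such delays is Θ(log n) whp; a node needs no counter for this, only a fair coin. The short Python experiment below (ours) illustrates the concentration of the maximum delay; it is a numerical illustration, not part of the analysis.

```python
import math
import random

# Illustration (ours) of the geometric auxiliary condition: every node delays for
# an independent Geometric(1/2) number of rounds, so the maximum delay over n
# nodes concentrates around log2(n), stretching the designated B&E iteration
# (and hence the random symbol streams) to Omega(log n) rounds whp.

def geometric_delay():
    rounds = 0
    while random.random() < 0.5:      # keep delaying while the fair coin allows it
        rounds += 1
    return rounds

for n in (100, 10_000, 1_000_000):
    longest = max(geometric_delay() for _ in range(n))
    print(n, longest, round(math.log2(n), 1))
```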

Lemma 2.7.

If multiple roots start a detection phase, then all of them receive an ⟨ERR⟩ message before completing their (respective) B&E iterations whp.

Intuition spotlight: The proof’s outline is as follows. We use the geometric auxiliary condition to argue that there exists some root that spends Ω(log n) rounds in the designated B&E iteration whp. Employing Obs. 2.5, we conclude that the random symbol stream generated by every root is Ω(log n)-long whp. Conditioned on that, we prove that for every root, there exists some boundary node that triggers an ⟨ERR⟩ message delivery whp and that the corresponding ⟨ERR⟩ message is delivered to that root before the phase ends.

Proof of Lem. 2.7.

Fix some detection phase. For a root r, let R(r) be the number of rounds that r spends in the B&E iterations of this phase. We first argue that R(r) = Ω(log n) for all roots r whp. To that end, let X_v be the number of rounds in which node v is prevented from sending its ⟨echo⟩ messages in the designated B&E iteration due to the geometric auxiliary condition and notice that this auxiliary condition is designed so that X_v is a geometric random variable with parameter 1/2. Therefore, whp there exists some node v with X_v = Ω(log n).

Condition hereafter on the event that X_v = Ω(log n) for some node v, namely, v is prevented from sending its ⟨echo⟩ messages (in the designated B&E iteration) for Ω(log n) rounds. Let r be a root such that v ∈ B(r). By the definition of auxiliary conditions, the designated B&E iteration of r takes Ω(log n) rounds. Obs. 2.5 guarantees that by the time r starts the subsequent B&E iteration, every other root must have already started the designated B&E iteration (of this detection phase). Moreover, no root can complete the B&E iterations of this phase before r finishes the designated B&E iteration. We conclude that every root spends Ω(log n) rounds in the B&E iterations of this phase, thus establishing the argument.

Let S_r be the prefix of the random symbol stream generated by root r during all but the last round of the B&E iterations of this phase (the reason for this missing round is explained soon), and let L = min_r |S_r|. We have just shown that L = Ω(log n) whp.

The assertion is established by proving that if multiple roots exist in the graph and |S_r| = Ω(log n) for all of them, then for every root r, there exists some node that triggers an ⟨ERR⟩ message delivery early enough whp. Indeed, if the ⟨ERR⟩ message delivery is triggered early enough, then an ⟨ERR⟩ message is delivered to r with the ⟨echo⟩ messages of the last B&E iteration at the latest, thus r does not terminate the algorithm at the end of this detection phase and, by the union bound, this holds simultaneously for all roots whp.

To that end, recall that every node sends a ⟨sym(·)⟩ message with some symbol in every round of the detection phase. In the scope of this proof, we say that a node posts the symbol stream x_1, …, x_j in rounds t + 1, …, t + j if x_i is the argument of the ⟨sym(·)⟩ message sent by that node in round t + i for every 1 ≤ i ≤ j.

Consider some root r and let w be a boundary node in B(r) that minimizes the distance to r. If w is locally observable, then it triggers an ⟨ERR⟩ message delivery (deterministically) already when its entire neighborhood participates in the detection phase, so assume hereafter that w is not locally observable. Let P be an incrementing r-to-w path and denote the length of P by d. Taking t_0 to be the round in which r starts posting S_r, recall that r posts S_r in rounds t_0 + 1, …, t_0 + |S_r|. The choice of w ensures that all nodes of P other than w are not boundary nodes, therefore if d ≥ 1 (i.e., if w ≠ r), then the node that precedes w along P — denote it by u — posts S_r in rounds t_0 + d, …, t_0 + d + |S_r| − 1. Moreover, by the definition of S_r, specifically, by discarding its last round, all these rounds belong to the current detection phase and the dissemination and verification mechanisms are still active at w and its neighbors throughout them.

If w belongs to multiple balls, which necessarily means that w is not a root and hence d ≥ 1 (see Lem. 2.1), then w has another parent u' such that u' ∈ B(r') for some root r' ≠ r. The probability that u' posts S_r in rounds t_0 + d, …, t_0 + d + |S_r| − 1 is at most |X|^(−|S_r|). Otherwise, if w belongs only to ball B(r), then all its parents post S_r in rounds t_0 + d, …, t_0 + d + |S_r| − 1 (this holds vacuously if d = 0 and w has no parents), thus w posts S_r in rounds t_0 + d + 1, …, t_0 + d + |S_r|. Since w is a non-locally observable boundary node (that belongs exclusively to ball B(r)), it must have a neighbor z with λ(z) = λ(w) such that z ∉ B(r). The probability that z posts S_r in rounds t_0 + d + 1, …, t_0 + d + |S_r| is at most |X|^(−|S_r|) as well. Therefore, the probability that w does not trigger an ⟨ERR⟩ message delivery early enough is upper-bounded by |X|^(−|S_r|), which completes the proof since |S_r| = Ω(log n) and since the symbol space X can be chosen to be a sufficiently large constant. ∎

2.4.2 The Elimination Phase

In the elimination phase, each candidate picks a priority uniformly at random (and independently) from a totally ordered priority space; a candidate whose priority is (strictly) smaller than the maximum priority picked is withdrawn. Taking the priority space to be of size polynomial in k, it follows by standard balls-in-bins arguments that the probability that exactly one candidate picks the maximum priority, which implies that exactly one candidate survives, is at least a positive constant (in fact, it tends to 1 as the size of the priority space grows).
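A quick Monte Carlo sanity check of this balls-in-bins claim is given below in Python (ours); the choice of a priority space of size k·k is illustrative only.

```python
import random

# Monte Carlo check (ours) of the balls-in-bins claim: with k candidates drawing
# priorities uniformly from a space of size k*k (an illustrative choice), the
# probability that the maximum priority is drawn by exactly one candidate is a
# constant bounded away from 0 and grows toward 1.

def unique_max_probability(k, space, trials=100_000):
    hits = 0
    for _ in range(trials):
        prios = [random.randrange(space) for _ in range(k)]
        hits += prios.count(max(prios)) == 1
    return hits / trials

for k in (2, 4, 8, 16):
    print(k, round(unique_max_probability(k, k * k), 3))
```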

Intuition spotlight: The priorities of the candidates are disseminated in the graph so that a candidate withdraws if it encounters a priority larger than its own. This is implemented on top of the ball growing subroutine invoked at the beginning of the elimination phase so that the ball growing process of root r “consumes” the ball of root r' if r’s priority is larger than r'’s, eventually reaching r' and instructing it to withdraw. The structure of the phase (specifically, the B&E iterations that follow the ball growing process) guarantees that only roots holding the maximum priority reach the end of the phase (without being withdrawn).

We augment the ball growing subroutine invoked at the beginning of the elimination phase with the following mechanism: When a candidate is signaled to invoke the ball growing subroutine (so that it becomes a root), it appends its priority to the ⟨grow⟩ messages it sends. A non-root node that joins the ball of r records r’s priority in a designated variable. A (root or non-root) node that receives a ⟨grow⟩ message carrying a priority (strictly) larger than the one it currently records behaves as if this is the first ⟨grow⟩ message it receives in this phase. In particular, it resets all the variables of this phase and (re-)joins a ball from scratch. If the node is a root, then it also withdraws.

Notice that Obs. 2.5 still holds for the aforementioned augmented implementation of the ball growing subroutine. Therefore, when a root r reaches the last B&E iterations of the phase, all other roots are already past their own ball growing stage, which means that there are no “active” ball growing processes in the graph, that is, the current round is clean (of ⟨grow⟩ messages). Since a candidate whose priority is not maximal is certain to be withdrawn by some ⟨grow⟩ message appended with a larger priority, we obtain the following observation.

Observation 2.8.

If a root completes its B&E iterations in an elimination phase, then, with at least constant probability, no other candidates exist in the graph.

2.5 Run-Time

The correctness of our algorithm follows from Lem. 2.7 and Obs. 2.8. To establish Thm. 1.1, it remains to analyze the algorithm’s run-time.

The first thing to notice in this regard is that the geometric auxiliary condition does not slow down the designated B&E iteration of the detection phase by more than an O(log n) factor whp. Combining Obs. 2.2 with Lem. 2.4, we can prove by induction on the phases that the i-th phase ends by round O(i · (D + log n)) whp (where the hidden constant depends on k), which is Õ(i · D) assuming that k is fixed. The analysis is completed due to Obs. 2.8, ensuring that the algorithm terminates after O(log n) elimination phases whp.

3 Negative Results

We now turn to establish some negative results that demonstrate the necessity of the assumption that k is bounded. Our attention in this section is restricted to SA and restricted SA algorithms operating under a fully synchronous scheduler on two graph families: the simple paths P_n on n nodes, and the same paths augmented with a self-loop at every node.

The main lemma established in this section considers the k-candidate binary consensus problem, a version of the classic binary consensus problem [FLP85]. In this problem, each node v gets a binary input i(v) ∈ {0, 1} and returns a binary output o(v) ∈ {0, 1} under the following two constraints: (1) all nodes return the same output; and (2) if the nodes return output b, then there exists some node v such that i(v) = b. In addition, at most k (and at least 1) nodes are initially marked as candidates (thus distinguished from the rest of the nodes). We emphasize that the marked candidates do not affect the validity of the output. Since a k-leader selection algorithm clearly implies a k-candidate binary consensus algorithm, Theorem 1.2 is established by proving Lemma 3.1. Note that the proof of this lemma is based on a probabilistic indistinguishability argument, similar to those used in many distributed computing negative results, starting with the classic result of Itai and Rodeh [IR90].

Lemma 3.1.

If the upper bound k on the number of candidates may grow as a function of n, then there does not exist a SA algorithm that solves the k-candidate binary consensus problem on the simple paths P_n with a failure probability bounded away from 1.

Proof.

Assume by contradiction that there exists such an algorithm ALG and let Σ denote its message alphabet. For b ∈ {0, 1}, consider the execution of ALG on an instance that consists of a two-node path, where node v_1 is a candidate, node v_2 is not a candidate, and both nodes get input b. By definition, there exist a constant t_b, a constant q_b > 0, and a message sequence σ_b = (σ_b,1, …, σ_b,t_b) such that when ALG runs on this instance, with probability at least q_b, node v_2 reads message σ_b,j in its (single) port in round j for every 1 ≤ j ≤ t_b and outputs b at the end of round t_b.

Now, consider the path P_n for some sufficiently large n (whose value will be determined later on) and consider a subgraph of P_n, referred to as a b-gadget, that consists of sufficiently many (as a function of t_b) contiguous nodes of the underlying path, all of which receive input b. Moreover, the nodes are marked as candidates in an alternating fashion so that if a node is a candidate, then the next node along the path is not a candidate, constrained by the requirement that the first node of the gadget is a candidate (and the second is not). The key observation is that when ALG runs on P_n, with probability at least a positive constant that depends only on q_b and t_b, the central candidate and non-candidate of the b-gadget read the messages prescribed by σ_b in (all) their ports in each round 1 ≤ j ≤ t_b and output b at the end of round t_b, independently of the random bits of the nodes outside the b-gadget.

We now define a mixed gadget to be a subgraph of P_n that consists of a 0-gadget appended to a 1-gadget, so, in total, the mixed gadget is a (sub)path that contains a constant number of nodes, a constant number of which are candidates. Following the aforementioned observation, when ALG runs on P_n, with some positive constant probability p, some nodes in the mixed gadget output 0 and others output 1; we refer to this (clearly invalid) output as a failure event of the mixed gadget.

Since t_0, t_1, q_0, and q_1 are constants that depend only on ALG, the gadget sizes and the probability p are also constants that depend only on ALG, and thus the number of candidates in a mixed gadget is also a constant that depends only on ALG. Take N to be an arbitrarily large constant. If n is sufficiently large, then we can embed N pairwise disjoint mixed gadgets in P_n. Indeed, these N mixed gadgets account for a total of O(N) candidates and, recalling that N and the number of candidates per gadget are constants, this number is smaller than k for sufficiently large n. When ALG runs on P_n, each of these N mixed gadgets fails with probability at least p (independently). Therefore, the probability that all nodes return the same binary output is at most (1 − p)^N. The assertion follows since this expression tends to 0 as N → ∞, which is obtained as n → ∞. ∎

The proof of Lem. 3.1 essentially shows that no SA algorithm can distinguish between the two-node path and a long path with a bounded failure probability. When the path is augmented with self-loops, we can use a very similar line of arguments to show that no restricted SA algorithm can distinguish between a single node with a self-loop and a long path augmented with self-loops with a bounded failure probability. This allows us to establish the following lemma that should be contrasted with the SA MIS algorithm of [EW13] that works on general topology graphs (with no self-loops) and succeeds with probability 1.

Lemma 3.2.

There does not exist a restricted SA algorithm that solves the MIS problem on the paths augmented with self-loops with a failure probability bounded away from 1.

References

  • [AAB11] Yehuda Afek, Noga Alon, Ziv Bar-Joseph, Alejandro Cornejo, Bernhard Haeupler, and Fabian Kuhn. Beeping a maximal independent set. In Proceedings of International Symposium on Distributed Computing (DISC), pages 32–50, 2011.
  • [AAD06] Dana Angluin, James Aspnes, Zoë Diamadi, Michael J. Fischer, and René Peralta. Computation in networks of passively mobile finite-state sensors. Distributed Computing, 18(4):235–253, 2006.
  • [AAHK86] Karl R. Abrahamson, Andrew Adler, Lisa Higham, and David G. Kirkpatrick. Probabilistic solitude verification on a ring. In Proceedings of ACM Symposium on Principles of Distributed Computing (PODC), pages 161–173, 1986.
  • [AM94] Yehuda Afek and Yossi Matias. Elections in anonymous networks. Inf. Comput., 113(2):312–330, 1994.
  • [Ang80] Dana Angluin. Local and global properties in networks of processors (extended abstract). In Proceedings of ACM SIGACT Symposium on Theory of Computing (STOC), pages 82–93, 1980.
  • [AR09] James Aspnes and Eric Ruppert. An Introduction to Population Protocols, pages 97–120. Springer Berlin Heidelberg, Berlin, Heidelberg, 2009.
  • [ASW88] Hagit Attiya, Marc Snir, and Manfred K. Warmuth. Computing on an anonymous ring. J. ACM, 35(4):845–875, 1988.
  • [Awe85] Baruch Awerbuch. Complexity of network synchronization. J. ACM, 32(4):804–823, 1985.
  • [Awe87] Baruch Awerbuch. Optimal distributed algorithms for minimum weight spanning tree, counting, leader election, and related problems. In Proceedings of ACM SIGACT Symposium on Theory of Computing (STOC), pages 230–240, 1987.
  • [CDRR16] Sarah Cannon, Joshua J. Daymude, Dana Randall, and Andréa W. Richa. A Markov chain algorithm for compression in self-organizing particle systems. In Proceedings of ACM Symposium on Principles of Distributed Computing (PODC), pages 279–288, 2016.
  • [CK10] Alejandro Cornejo and Fabian Kuhn. Deploying wireless networks with beeps. In Proceedings of International Symposium on Distributed Computing (DISC), pages 148–162, 2010.
  • [DDG18] Joshua J. Daymude, Zahra Derakhshandeh, Robert Gmyr, Alexandra Porter, Andréa W. Richa, Christian Scheideler, and Thim Strothmann. On the runtime of universal coating for programmable matter. Natural Computing, 17(1):81–96, 2018.
  • [DGR14] Zahra Derakhshandeh, Robert Gmyr, Andréa W. Richa, Christian Scheideler, Thim Strothmann, and Shimrit Tzur-David. Infinite object coating in the amoebot model. CoRR, abs/1411.2356, 2014.
  • [DGR15] Zahra Derakhshandeh, Robert Gmyr, Andréa W. Richa, Christian Scheideler, and Thim Strothmann. An algorithmic framework for shape formation problems in self-organizing particle systems. In Proceedings of International Conference on Nanoscale Computing and Communication (NANOCOM), pages 21:1–21:2, 2015.
  • [DGR16] Zahra Derakhshandeh, Robert Gmyr, Andréa W. Richa, Christian Scheideler, and Thim Strothmann. Universal shape formation for programmable matter. In Proceedings of ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), pages 289–299, 2016.
  • [DGRS13] Shlomi Dolev, Robert Gmyr, Andréa W. Richa, and Christian Scheideler. Ameba-inspired self-organizing particle systems. CoRR, abs/1307.4259, 2013.
  • [DGS15] Zahra Derakhshandeh, Robert Gmyr, Thim Strothmann, Rida Bazzi, Andréa W. Richa, and Christian Scheideler. Leader election and shape formation with self-organizing programmable matter. In Proceedings of International Conference on DNA Computing and Molecular Programming (DNA), pages 117–132, 2015.
  • [Dot14] David Doty. Timing in chemical reaction networks. In Proceedings of ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 772–784, 2014.
  • [EU16] Yuval Emek and Jara Uitto. Dynamic networks of finite state machines. In Proceedings of International Colloquium on Structural Information and Communication Complexity (SIROCCO), pages 19–34, 2016.
  • [EW13] Yuval Emek and Roger Wattenhofer. Stone age distributed computing. In Proceedings of ACM Symposium on Principles of Distributed Computing (PODC), pages 137–146, 2013.
  • [FL87] Greg N. Frederickson and Nancy A. Lynch. Electing a leader in a synchronous ring. J. ACM, 34(1):98–115, 1987.
  • [FLP85] Michael J. Fischer, Nancy A. Lynch, and Michael S. Paterson. Impossibility of distributed consensus with one faulty process. J. ACM, 32(2):374–382, 1985.
  • [Gar70] M. Gardner. The fantastic combinations of John Conway’s new solitaire game ‘life’. Scientific American, 223(4):120–123, 1970.
  • [GHS83] Robert G. Gallager, Pierre A. Humblet, and Philip M. Spira. A distributed algorithm for minimum-weight spanning trees. ACM Trans. Program. Lang. Syst., 5(1):66–77, 1983.
  • [HJK15] Lauri Hella, Matti Järvisalo, Antti Kuusisto, Juhana Laurinharju, Tuomo Lempiäinen, Kerkko Luosto, Jukka Suomela, and Jonni Virtema. Weak models of distributed computing, with connections to modal logic. Distributed Computing, 28(1):31–53, 2015.
  • [IR90] Alon Itai and Michael Rodeh. Symmetry breaking in distributed networks. Inf. Comput., 88(1):60–87, 1990.
  • [KFQS10] Jennie J. Kuzdzal-Fick, David C. Queller, and Joan E. Strassmann. An invitation to die: initiators of sociality in a social amoeba become selfish spores. Biology letters, 6(6):800–802, 2010.
  • [KN93] Laurent Keller and Peter Nonacs. The role of queen pheromones in social insects: queen control or queen signal? Animal Behaviour, 45(4):787–794, 1993.
  • [LL90] Ivan Lavallée and Christian Lavault. Spanning tree construction for nameless networks. In Proceedings of International Workshop on Distributed Algorithms (WDAG), pages 41–56, 1990.
  • [MCS11] Othon Michail, Ioannis Chatzigiannakis, and Paul G. Spirakis. New Models for Population Protocols. Synthesis Lectures on Distributed Computing Theory. Morgan & Claypool Publishers, 2011.
  • [Neu66] John Von Neumann. Theory of Self-Reproducing Automata. University of Illinois Press, Champaign, IL, USA, 1966.
  • [Pel00] David Peleg. Distributed Computing: A Locality-sensitive Approach. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2000.
  • [SCW05] Joanna M. Setchell, Marie Charpentier, and E. Jean Wickings. Mate guarding and paternity in mandrills: factors influencing alpha male monopoly. Animal Behaviour, 70(5):1105–1120, 2005.
  • [Sla09] Jonathan M.W. Slack. Essential developmental biology. John Wiley & Sons, 2009.
  • [SOP14] NSF workshop on self-organizing particle systems (SOPS). http://sops2014.cs.upb.de/, 2014.
  • [SS94] Baruch Schieber and Marc Snir. Calling names on nameless networks. Inf. Comput., 113(1):80–101, 1994.
  • [Wol02] Stephen Wolfram. A New Kind of Science. Wolfram Media Inc., Champaign, Illinois, United States, 2002.