Population Protocols for Graph Class Identification Problems

11/09/2021
by   Hiroto Yasumi, et al.

In this paper, we focus on graph class identification problems in the population protocol model. A graph class identification problem aims to decide whether a given communication graph is in the desired class (e.g., whether the given communication graph is a ring graph). Angluin et al. proposed graph class identification protocols for directed graphs with designated initial states under global fairness [Angluin et al., DCOSS 2005]. We consider graph class identification problems for undirected graphs under various assumptions, such as initial states of agents, fairness of the execution, and initial knowledge of agents. In particular, we focus on lines, rings, k-regular graphs, stars, trees, and bipartite graphs. With designated initial states, we propose graph class identification protocols for k-regular graphs and trees under global fairness, and a graph class identification protocol for stars under weak fairness. Moreover, we show that, even if agents know the number of agents n, there is no graph class identification protocol for lines, rings, k-regular graphs, trees, or bipartite graphs under weak fairness. On the other hand, with arbitrary initial states, we show that there is no graph class identification protocol for lines, rings, k-regular graphs, stars, trees, or bipartite graphs.

1 Introduction

Background and Motivation

The population protocol model is an abstract model for low-performance devices, introduced by Angluin et al. [4]. In this model, a network, called a population, consists of multiple devices called agents. These agents are anonymous (i.e., they do not have identifiers) and move unpredictably (i.e., they cannot control their movements). When two agents approach each other, they can communicate and update their states (this communication is called an interaction). Through a sequence of interactions, the system carries out a computation. This model has various applications, such as sensor networks used to monitor wild birds and molecular robot networks [20].

In this paper, we study the computability of graph properties of communication graphs in the population protocol model. Concretely, we focus on graph class identification problems, which aim to decide whether the communication graph is in the desired graph class. In most distributed systems, it is essential to understand properties of the communication graph in order to design efficient algorithms. Indeed, in the population protocol model, efficient protocols have been proposed for restricted communication graphs (e.g., ring graphs and regular graphs) [1, 6, 13, 14]. In the population protocol model, the computability of graph properties was first considered in [3]. In [3], Angluin et al. proposed various graph class identification protocols for directed graphs with designated initial states under global fairness. Concretely, Angluin et al. proposed graph class identification protocols for directed lines, directed rings, directed stars, and directed trees. Moreover, they proposed graph class identification protocols for other classes of graphs, such as 1) graphs whose degree is bounded by a constant, 2) graphs containing a fixed subgraph, 3) graphs containing a directed cycle, and 4) graphs containing a directed cycle of odd length. However, some questions remain open, such as “What is the computability for undirected graphs?” and “How do other assumptions (e.g., initial states, fairness) affect the computability?” In this paper, we answer these questions. That is, we clarify the computability of graph class identification problems for undirected graphs under various assumptions, such as initial states of agents, fairness of the execution, and initial knowledge of agents.

We remark that some protocols in [3] for directed graphs can be easily extended to undirected graphs with designated initial states under global fairness (see Table 1). Concretely, the graph class identification protocols for directed lines, directed rings, and directed stars can be easily extended to protocols for undirected lines, undirected rings, and undirected stars, respectively. In addition, a graph class identification protocol for bipartite graphs can be deduced from the protocol that decides whether a given graph contains a directed cycle of odd length. This is because, if we replace each edge of an undirected non-bipartite graph with two opposite directed edges, the resulting directed graph always contains a directed cycle of odd length. On the other hand, the graph class identification protocol for directed trees does not work for undirected trees because the protocol relies on the property that the in-degree (resp., out-degree) of each agent is at most one on an out-directed tree (resp., an in-directed tree). Note that agents can identify trees if they can determine that the graph contains no cycle. However, the graph class identification protocol for graphs containing a directed cycle cannot be used to identify a (simple) cycle in undirected graphs. This is because, if we replace an undirected edge with two opposite directed edges, the two directed edges form a directed cycle of length two.

Our Contributions

In this paper, we clarify the computability of graph class identification problems for undirected graphs under various assumptions. A summary of our results is given in Table 1. We propose a graph class identification protocol for trees with designated initial states under global fairness. This protocol works with a constant number of states even if no initial knowledge is given. Moreover, under global fairness, we also propose a graph class identification protocol for k-regular graphs with designated initial states. On the other hand, under weak fairness, we show that there exists no graph class identification protocol for lines, rings, k-regular graphs, stars, trees, or bipartite graphs even if an upper bound of the number of agents is given. Moreover, in the case where the exact number of agents is given, we propose a graph class identification protocol for stars and prove that there exists no graph class identification protocol for lines, rings, k-regular graphs, trees, or bipartite graphs. With arbitrary initial states, we prove that there is no protocol for lines, rings, k-regular graphs, stars, trees, or bipartite graphs.

In this paper, because of space constraints, we omit the details of protocols (see the full version in the appendix).

Related Works

In the population protocol model, researchers have studied various fundamental problems such as leader election [2, 11, 15, 18], counting [7, 8, 9], and majority [5, 10, 17]. In [1, 6, 13, 14], researchers proposed efficient protocols for such fundamental problems on restricted communication graphs. More concretely, Angluin et al. proposed a protocol that constructs a spanning tree on regular graphs [6]. Chen et al. proposed self-stabilizing leader election protocols for ring graphs [13] and regular graphs [14]. Alistarh et al. showed that protocols for complete graphs (including the leader election protocol, the majority protocol, etc.) can be simulated efficiently on regular graphs [1].

For the graph class identification problem, Chatzigiannakis et al. studied the solvability of deciding properties of directed graphs in the mediated population protocol model [12], which is an extension of the population protocol model. In this model, a communication link (on which agents interact) has a state. Agents can read and update the state of the communication link when they interact on that link. In [12], they proposed graph class identification protocols for some classes of graphs, such as 1) graphs whose degree is bounded by a constant, 2) graphs in which the degree of each agent is at least a given constant, 3) graphs containing an agent whose in-degree is greater than its out-degree, and 4) graphs containing a directed path with at least a given number of edges. Since Chatzigiannakis et al. proposed protocols for the mediated population protocol model, the protocols do not work in the population protocol model. As impossibility results, they showed that there is no graph class identification protocol that decides whether the given directed graph has both directed edges between some pair of agents, or whether the given directed graph is weakly connected.

As another perspective on communication graphs, Michail and Spirakis proposed the network constructors model, which is an extension of the mediated population protocol model [19]. The network constructors model aims to construct a desired graph on the complete communication graph by using communication links with two states. Each communication link is either active or inactive, and initially all communication links are inactive. By activating and deactivating communication links, a protocol in this model constructs a desired communication graph that consists of the agents and the activated communication links. In [19], they proposed protocols that construct spanning lines, spanning rings, spanning stars, and regular graphs. Moreover, by relaxing the restriction on the number of states, they proposed a protocol that constructs a large class of graphs.

[Table 1 omitted. Caption: The number of states needed to solve the graph class identification problems, where n is the number of agents and N is an upper bound of the number of agents. The columns list the graph properties (line, ring, bipartite, tree, k-regular, star), and the rows list the combinations of initial states (designated or arbitrary), fairness (global or weak), and initial knowledge (n, N, or none), together with the corresponding state complexity or unsolvability. Entries marked * are contributions of this paper; entries marked † are deduced from Angluin et al. [3].]

2 Definitions

2.1 Population Protocol Model

A communication graph of a population is represented by a simple undirected connected graph G = (V, E), where V represents the set of agents, and E is the set of edges (containing neither multi-edges nor self-loops) that represent the possibility of an interaction between two agents (i.e., two agents u and v can interact only if (u, v) ∈ E holds).

A protocol P = (Q, Y, γ, δ) consists of a finite set Q of possible states of agents, a finite set Y of output symbols, an output function γ : Q → Y, and a set δ of transitions from Q × Q to Q × Q. Output symbols in Y represent the outputs computed according to the purpose of the protocol. The output function γ maps a state of an agent to an output symbol in Y. Each transition in δ is denoted by (p, q) → (p', q'). This means that, when an agent in state p interacts with an agent in state q, their states become p' and q', respectively. We call the former agent an initiator and the latter agent a responder. When u and v interact as an initiator and a responder, respectively, we simply say that u interacts with v. A transition (p, q) → (p', q') is null if both p' = p and q' = q hold. We omit null transitions in the descriptions of protocols. A protocol is deterministic if, for any pair of states (p, q) ∈ Q × Q, exactly one transition (p, q) → (p', q') exists in δ. We consider only deterministic protocols in this paper.
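
To make this formalism concrete, the following Python sketch represents a deterministic protocol as a transition table and applies a single interaction. This is our own illustration, not a protocol from this paper; the toy states, outputs, and names are assumptions chosen only for the example.

# A minimal sketch of the formalism above: a protocol (Q, Y, gamma, delta)
# stored as Python data, and one interaction applying (p, q) -> (p', q').
from typing import Dict, Tuple

State = str
Q = {"L", "F"}                                        # toy states: leader / follower
Y = {"yes", "no"}                                     # output symbols
gamma: Dict[State, str] = {"L": "yes", "F": "no"}     # output function

# delta: unlisted pairs are treated as null transitions (p, q) -> (p, q).
delta: Dict[Tuple[State, State], Tuple[State, State]] = {
    ("L", "L"): ("L", "F"),                           # two leaders meet: one survives
}

def step(p: State, q: State) -> Tuple[State, State]:
    """One interaction between an initiator in state p and a responder in state q."""
    return delta.get((p, q), (p, q))                  # default: null transition

print(step("L", "L"))    # ('L', 'F')
print(step("L", "F"))    # ('L', 'F') -- a null transition, states unchanged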

A configuration C represents a global state of a population, defined as a vector of the states of all agents. The state of agent a in configuration C is denoted by s(a, C). Moreover, when C is clear from the context, we simply use s(a) to denote the state of agent a. A transition from configuration C to configuration C' is denoted by C → C', and means that configuration C' is obtained from configuration C by a single interaction between two agents. For two configurations C and C', if there exists a sequence of configurations C = C_0, C_1, ..., C_m = C' such that C_i → C_{i+1} holds for every i (0 ≤ i < m), we say that C' is reachable from C, denoted by C ⇒ C'.

An execution of a protocol is an infinite sequence of configurations C_0, C_1, C_2, ... where C_i → C_{i+1} holds for every i (i ≥ 0). An execution is weakly-fair if, for any pair of adjacent agents u and v, u interacts with v and v interacts with u infinitely often. (We use this definition only for the lower bounds under weak fairness. For the upper bound, we use a slightly weaker assumption: we show that our proposed protocol for weak fairness works if, for any adjacent agents u and v, u and v interact infinitely often, i.e., it is possible that, in every interaction between u and v, u becomes the initiator and v never becomes the initiator. Note that, in the protocol, if a transition (p, q) → (p', q') exists, the symmetric transition (q, p) → (q', p') also exists.) An execution is globally-fair if, for each pair of configurations C and C' such that C → C', C' occurs infinitely often whenever C occurs infinitely often. Intuitively, global fairness guarantees that, if a configuration C occurs infinitely often, then any interaction that is possible in C also occurs infinitely often. Then, if C occurs infinitely often, every C' satisfying C → C' occurs infinitely often, and we can deduce that every C'' satisfying C' → C'' also occurs infinitely often. Overall, under global fairness, if a configuration C occurs infinitely often, then every configuration reachable from C also occurs infinitely often.
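
As a non-normative illustration, the sketch below generates a finite prefix of an execution by repeatedly choosing an initiator uniformly at random and a responder among its neighbors; on a finite population, such uniformly random scheduling yields a globally-fair execution with probability 1. The toy graph, protocol, and names are our own assumptions.

# A sketch of an execution: random pairwise interactions on a communication graph.
import random
from typing import Dict, List, Tuple

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]    # complete graph on 4 agents
neighbors: Dict[int, List[int]] = {}
for u, v in edges:
    neighbors.setdefault(u, []).append(v)
    neighbors.setdefault(v, []).append(u)

# Toy protocol: the leader-merging transition from the previous sketch.
delta: Dict[Tuple[str, str], Tuple[str, str]] = {("L", "L"): ("L", "F")}

def step(p: str, q: str) -> Tuple[str, str]:
    return delta.get((p, q), (p, q))

config: Dict[int, str] = {a: "L" for a in neighbors}        # designated initial states
random.seed(0)
for _ in range(1000):                                       # a finite prefix of an execution
    u = random.choice(list(neighbors))                      # initiator
    v = random.choice(neighbors[u])                         # adjacent responder
    config[u], config[v] = step(config[u], config[v])

print(config)    # on this small complete graph, almost surely a single "L" remains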

In this paper, we consider three possibilities for the initial knowledge of agents: the number of agents n, an upper bound N of the number of agents, and no knowledge. Note that a protocol depends on this initial knowledge. When we explicitly state that an integer n is given as the number of agents, we write the protocol as P(n). Similarly, when we explicitly state that an integer N is given as an upper bound of the number of agents, the protocol is denoted by P(N).

2.2 Graph Properties and Graph Class Identification Problems

We define the graph properties treated in this paper as follows (a centralized sketch of these checks, for intuition only, is given after the list):

  • A graph G satisfies property tree if there is no cycle in G.

  • A graph G satisfies property k-regular if the degree of every agent in G is k.

  • A graph G satisfies property star if G is a tree with one internal agent and n − 1 leaves.

  • A graph G = (V, E) satisfies property bipartite if V can be divided into two disjoint independent sets V1 and V2 (i.e., every edge in E connects an agent in V1 with an agent in V2).

  • A graph G = (V, E) satisfies property line if the agents can be ordered v1, v2, ..., vn such that E = {(vi, vi+1) | 1 ≤ i ≤ n − 1}.

  • A graph G satisfies property ring if the degree of every agent in G is 2.
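
For intuition only, the following centralized Python sketch checks these properties directly on a given simple connected undirected graph. This is, of course, not a population protocol but merely a specification of what the distributed protocols must decide; the representation and helper names are our own assumptions.

# Centralized checks of the graph properties above, for a connected graph
# represented as an adjacency mapping {agent: set_of_neighbors}.
from typing import Dict, Set

Graph = Dict[int, Set[int]]

def num_edges(g: Graph) -> int:
    return sum(len(nbrs) for nbrs in g.values()) // 2

def is_tree(g: Graph) -> bool:
    # A connected graph has no cycle iff it has exactly |V| - 1 edges.
    return num_edges(g) == len(g) - 1

def is_k_regular(g: Graph, k: int) -> bool:
    return all(len(nbrs) == k for nbrs in g.values())

def is_star(g: Graph) -> bool:
    # One internal agent adjacent to all n - 1 others, which are leaves.
    return is_tree(g) and max(len(nbrs) for nbrs in g.values()) == len(g) - 1

def is_bipartite(g: Graph) -> bool:
    # 2-color the connected graph by a simple traversal.
    start = next(iter(g))
    color = {start: 0}
    stack = [start]
    while stack:
        u = stack.pop()
        for v in g[u]:
            if v not in color:
                color[v] = 1 - color[u]
                stack.append(v)
            elif color[v] == color[u]:
                return False
    return True

def is_line(g: Graph) -> bool:
    # A connected graph is a line (path) iff it is a tree with maximum degree 2.
    return is_tree(g) and all(len(nbrs) <= 2 for nbrs in g.values())

def is_ring(g: Graph) -> bool:
    # A connected graph is a ring iff every agent has degree 2.
    return is_k_regular(g, 2)

For example, is_ring({0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}) returns True, while is_tree returns False for the same graph.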

Consider an arbitrary graph property. The corresponding identification problem aims to decide whether a given communication graph satisfies the property. In the identification problem, the output set is Y = {yes, no}. Recall that the output function γ maps a state of an agent to an output symbol in Y (i.e., yes or no). A configuration C is stable if C satisfies the following condition: there exists r ∈ {yes, no} such that 1) every agent outputs r in C, and 2) for every configuration C' such that C ⇒ C', every agent outputs r in C'.

An execution C_0, C_1, C_2, ... solves the identification problem if it includes a stable configuration C_t that satisfies the following conditions.

  1. If the given graph satisfies the graph property, every agent outputs yes in C_t.

  2. If the given graph does not satisfy the graph property, every agent outputs no in C_t.

A protocol solves the identification problem if every possible execution of the protocol solves the identification problem.

3 Graph Class Identification Protocols

3.1 Tree Identification Protocol with No Initial Knowledge under Global Fairness

In this section, we give a tree identification protocol (hereinafter referred to as “TI protocol”) with 18 states and designated initial states under global fairness.

The basic strategy of the protocol is as follows. First, agents elect one leader token, one right token, and one left token. Agents carry these tokens over the graph by interactions, as if each token moves freely on the graph. After the election, agents repeatedly execute a trial to detect a cycle by using the tokens. A trial starts when two adjacent agents hold the right token and the left token, respectively. During the trial, these two agents keep holding the right token and the left token, respectively. To detect a cycle, agents use the right token and the left token as a single landmark. The right token and the left token correspond to the right side and the left side of the landmark, respectively. If agents can carry the leader token from the right side of the landmark to the left side of the landmark without passing through the landmark, the trial succeeds. Clearly, when the trial succeeds, there is a cycle. In this case, the agent with the leader token recognizes the success of the trial and decides that there is a cycle and thus the given graph is not a tree. Then, the decision is conveyed to all agents via the leader token, and thus all agents decide that the given graph is not a tree. Initially, all agents think that the given graph is a tree. Hence, unless a trial succeeds, all agents continue to think that the given graph is a tree. Therefore, the protocol solves the problem.
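
The graph-theoretic content of a trial can be stated centrally as follows (our own restatement, for intuition only): for a landmark edge (l, r), the leader token can be carried from the r side to the l side without crossing the landmark exactly when l and r remain connected after removing that edge, i.e., exactly when the edge lies on a cycle; and the graph contains a cycle iff some edge lies on a cycle. A minimal Python sketch of this check, with assumed names:

# Does the graph have a cycle?  Equivalently: is there an edge (l, r) such that
# l and r are still connected after removing that edge?
from typing import Dict, List

def connected_without_edge(neighbors: Dict[int, List[int]], l: int, r: int) -> bool:
    """Is r reachable from l when the edge (l, r) is removed?"""
    stack, seen = [l], {l}
    while stack:
        u = stack.pop()
        for v in neighbors[u]:
            if (u, v) in ((l, r), (r, l)):
                continue                       # never cross the landmark edge itself
            if v == r:
                return True
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return False

def has_cycle(neighbors: Dict[int, List[int]]) -> bool:
    return any(connected_without_edge(neighbors, u, v)
               for u in neighbors for v in neighbors[u] if u < v)

path3 = {0: [1], 1: [0, 2], 2: [1]}
ring4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(has_cycle(path3), has_cycle(ring4))      # False True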

Before we explain the details of the protocol, we first introduce the variables of each agent.

  • A token variable, initialized to a right token, represents the token held by the agent. If its value is not the no-token value, the agent holds a token. There are three types of tokens: a leader token (with four variants), a left token (with two variants), and a right token (with two variants); one additional value represents holding no token. We describe the variants in detail later.

  • A decision variable, initialized to the value indicating a tree, represents the decision about whether the given graph is a tree. If the variable indicates a tree, the agent outputs yes (i.e., it decides that the given graph is a tree); otherwise, the agent outputs no (i.e., it decides that the given graph is not a tree).

The protocol uses 18 states because the token variable takes 9 values and the decision variable takes 2 values.

1:
2:, , , , , , , , : Token held by the agent, initialized to .
3:, : Decision of the tree, initialized to .
4:when agent interacts with agent  do
5:{ The election of tokens }
6:     if ,  then
7:         
8:     else if ,  then
9:         
10:     else if , , ,  then
11:         ,
12:         
13:{ Movement of tokens }
14:     else if   then
15:         if , , ,  then
16:              
17:         end if
18:         if  for  then
19:              
20:         else if   then
21:              
22:         end if
23:          *
24:{ Decision }
25:     else if   then
26:         ,
27:         
28:     else if   then
29:         ,
30:         
31:     else if   then
32:         ,
33:         
34:     else if   then
35:         ,
36:         
37:
38:* means that and exchange values.
39: Continued on the next page
Algorithm 1 A TI protocol (1/2)
40:     else if   then
41:         if , , , for  then
42:              ,
43:         end if
44:         if  for and  then
45:              
46:         end if
47:         if  for  then
48:              
49:         end if
50:         
51:     end if
52:end 
Algorithm 1 A TI protocol (2/2)

From now on, we explain the details of the protocol. The protocol is given in Algorithm 1 (split into two parts).

Election of three tokens (lines 2–8)

Initially, each agent has a right token. When two agents with right tokens interact, the agents change one of the tokens to a left token (lines 2–3). When two agents with left tokens interact, the agents change one of the tokens to a leader token (lines 4–5). When two agents with leader tokens interact, the agents delete one of the tokens (lines 6–7). As we explain later, agents carry a token on a graph by interactions as if a token moves freely on the graph. Thus, by the above behaviors, eventually agents elect one right token, one left token, and one leader token.
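
As an illustration only, the following Python sketch simulates this election phase with simplified movement rules: tokens hop to empty neighbors and, as a simplification of ours, tokens of different types swap places so that they can pass one another. The token names (R, L, D) and function names are assumptions.

# Simulation sketch of the token election: every agent starts with a right
# token; right + right -> one becomes left, left + left -> one becomes leader,
# leader + leader -> one token is deleted.  Other meetings move/swap tokens.
import random
from collections import Counter

RIGHT, LEFT, LEADER, NONE = "R", "L", "D", "-"

def elect_tokens(neighbors, seed=0, steps=100_000):
    rng = random.Random(seed)
    token = {a: RIGHT for a in neighbors}          # designated initial states
    agents = list(neighbors)
    for _ in range(steps):
        u = rng.choice(agents)
        v = rng.choice(neighbors[u])
        if token[u] == token[v] == RIGHT:
            token[v] = LEFT                        # one right token becomes a left token
        elif token[u] == token[v] == LEFT:
            token[v] = LEADER                      # one left token becomes a leader token
        elif token[u] == token[v] == LEADER:
            token[v] = NONE                        # delete one leader token
        else:
            token[u], token[v] = token[v], token[u]   # move / swap tokens
    return Counter(t for t in token.values() if t != NONE)

ring4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(elect_tokens(ring4))    # expected: one right, one left, and one leader token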

In the cycle detection part, we only describe the behaviors after agents complete the token election (i.e., after agents have elected one right token, one left token, and one leader token). However, in this protocol, agents may make a wrong decision before they complete the token election. Agents overcome this problem by the following behaviors.

  • Agents behave as if the leader token carries the decision, and agents follow that decision. Concretely, when an agent moves the leader token to an adjacent agent by an interaction, the receiving agent copies the decision of the agent that held the leader token. Since the leader token moves freely on the graph, eventually all agents follow the decision of the leader token.

  • When two agents with leader tokens interact and delete one of the tokens, they reset the decision associated with the remaining leader token. That is, the agent holding the remaining leader token resets its decision variable to the value indicating a tree (line 8).

Note that the final leader token is elected by an interaction between agents with leader tokens (i.e., the last interaction of this election part occurs between agents with leader tokens). By this interaction, the elected leader token resets its decision to the value indicating a tree. Hence, the decision of the leader token indicates a tree just after agents complete the token election, and all agents follow the decision of the leader token. Thus, because agents correctly detect a cycle after the token election (we show this later), agents are not affected by any earlier wrong decision.

Movement of tokens (lines 9–18)

When an agent having a token interacts with an agent having no token, the agents move the token (lines 9–18). Concretely, the token moves by the behavior of line 18. In lines 10–12, the decision of the leader token is conveyed. We explain the behavior of lines 13–17 after the explanation of the trial of the cycle detection.

The trial of the cycle detection (lines 19–43)

In this paragraph, we show that, by a trial of the cycle detection, agents correctly detect a cycle after agents complete the token election. To begin with, we explain the start of the trial. To start the trial, agents place the left token and the right token next to each other. To distinguish between a moving token and a placed token, we use a trial mode. Agents regard right and left tokens in a trial mode as placed tokens. Thus, when agents place the right token and the left token, agents make the right token and the left token transition to the trial mode. An token (resp., an token) represents the right token (resp., the left token) in the trial mode. An token (resp., an token) represents the right token (resp., the left token) in a non-trial mode.

An image of the start of the trial is shown in Figure 1. Figures 1(a) and 1(b) show the behavior such that agents make the left token and the right token transition to the trial mode. First, an agent with an token changes an token to an token by an interaction (Figure 1(a)), where the token represents the default leader token. By the interaction, the agents exchange their tokens and the token transitions to an token, where the token represents the leader token next to the token. This behavior appears in lines 19–21. Then, an agent with the token changes the right token to a trial mode by an interaction (Figure 1(b)). By the interaction, the agents exchange their tokens. Thus, since the token represents the leader token next to the token, agents place an token next to the token by the interaction. Hence, by the interaction, agents place the tokens in the following order: the token, the token, the leader token (Figure 1(c)). Moreover, by the interaction, the token transitions to an token, where the token represents the leader token trying to detect a cycle. This behavior appears in lines 22–24. When agents place all tokens as shown in Figure 1(c), a trial of the cycle detection starts.

From now on, we explain the main behavior of the cycle detection (Figures 2 and 3). Let (resp., ) be the agent having the token (resp., the token). Let (resp., ) be the set of agents adjacent to (resp., ). Let and . In a trial, agents try to carry the leader token from an agent in to an agent in without using the edge between and .

First, we explain the case where a trial succeeds (Figure 2). In the trial, agents carry the token while the token and the token are placed at and , respectively. Concretely, if the following procedure occurs, the trial succeeds.

  1. Agents carry the token from an agent in to an agent in without using the edge between and (Figure 2(c)).

  2. An agent having the token interacts with agent having the token. By the interaction, agents exchange their tokens and the token transitions to an token (Figure 2(d)). In addition, by the interaction, agents confirm that the token was placed at while agents move the leader token to an agent in . The token represents the leader token that confirmed it. This behavior appears in lines 25–27.

  3. Agent having the token interacts with agent having the token (Figure 2(e)). By the interaction, agents confirm that the token was placed at while agents move the leader token to an agent in . This behavior appears in lines 28–30.

Clearly, if there is no cycle, agents do not perform this procedure. Thus, if agents perform this procedure, an agent with the leader token decides that there is a cycle and thus the given graph is not a tree (Figure 2(f)). Concretely, the agent with the leader token changes its to (line 30).

Next, we explain the case where a trial fails (Figure 3). There are three cases where the trial fails: (1) An agent having the or token fails to interact with the right token, (2) an agent having the or token fails to wait for the leader token, and (3) an agent having the token fails to interact with an agent having the token. Case (1) is that an agent having the token (resp., the token) interacts with an agent that does not have the token (resp., the token). Figure 3(A-1) and (B-1) shows an example of case (1). By the interaction, agents make the token transition to the token (lines 9–17 and 31–43). If agents make the token transition to the token, the condition in line 22 is never satisfied in the trial. If agents make the token transition to the token, the condition in line 28 is never satisfied in the trial. Case (2) is that an agent having an token (resp., an token) interacts with an agent that does not have the token (resp., the token). Figure 3(A-2) and (B-2) shows an example of case (2). By the interaction, agents make the token (resp., the token) transition to an token (resp., an token) by the behavior of lines 13–14 or 31–37, and thus the condition in line 25 or 28 is never satisfied in the trial. Case (3) is that an agent having the token interacts with an agent having a token that is not the token. Figure 3(A-3) and (B-3) shows an example of case (3). By the interaction, agents make the token transition to the token (lines 31–43). If agents make the token transition to the token, the condition in line 25 is never satisfied in the trial.

Agents have infinitely many chances to perform a trial. This is because agents can make the leader token, the left token, and the right token transition back to their default (non-trial) variants from any configuration (lines 9–18 and 31–43). Hence, by global fairness, eventually agents make the left and right tokens transition to the trial mode on a cycle, and then the leader token finds the cycle. Thus, if there is a cycle, eventually a trial succeeds.

By the behaviors of the trial, since the decision of the leader token indicates a tree just after agents complete the token election, the decision of the leader token converges to the correct value. Since eventually all agents follow the decision of the leader token, all agents correctly decide whether the given graph is a tree.

Figure 1: An image of the start of the trial
Figure 2: An image of the success of the trial
Figure 3: Images of the failure of the trial

Correctness

First of all, if the number of agents is less than 3, clearly no leader token is generated in Algorithm 1. Hence, in this case, the decision variable of each agent keeps its initial value, which indicates a tree. Thus, since a connected graph with fewer than three agents is always a tree, each agent makes a correct decision in this case. From now on, we consider the case where the number of agents is at least 3.

To begin with, we define some notions for the numbers of the leader, left, and right tokens as follows:

Definition 1.

The number of agents with or tokens is denoted by . The number of agents with or tokens is denoted by . The number of agents with , , , or tokens is denoted by .

Next, we define a configuration where agents complete the token election.

Definition 2.

For an execution , , , we say that agents complete the token election at if , , or holds in , and , , and hold in .

From now on, we show that agents eventually complete the token election and that, for the agent holding the leader token, the decision variable indicates a tree just after the election.

Lemma 3.

For any globally-fair execution , , , there is a configuration at which agents complete the token election.

In , there exists an agent that has an token and is . Moreover, in any configuration after , , , and hold.

Proof.

Consider a globally-fair execution , , , . From the pseudocode, when an agent having a leader token interacts with an agent having no leader token, agents move the leader token. Similarly, when an agent having a left token (resp., a right token) interacts with an agent having no left token (resp., no right token), agents move the token. Only if an agent having a leader token interacts with an agent having a leader token, the number of leader tokens decreases. Similarly, only if an agent having a left token (resp., a right token) interacts with an agent having a left token (resp., a right token), the number of the tokens decreases. These imply that, from global fairness, if there are two or more tokens of the same type (leader, left, or right), eventually adjacent agents have the tokens and then they interact.

Hence, from global fairness, because there is no behavior to increase , continues to decrease as long as holds, by the behavior of lines 2–3. Thus, after some configuration, holds and the behavior of lines 2–3 does not occur. After that, because there is no behavior to increase except for the behavior of lines 2–3, continues to decrease as long as holds, by the behavior of lines 4–5. Thus, after some configuration, holds and the behavior of lines 4–5 does not occur. After that, because there is no behavior to increase except for the behavior of lines 4–5, continues to decrease as long as holds, by the behavior of lines 6–7. Thus, after some configuration, holds. Hence, there exists a configuration such that , , and hold after and , , or holds in .

If holds, agents execute lines 6–8 of the pseudocode at transition because only the behavior of lines 6–8 decreases the number of leader tokens. For an agent with the leader token, the leader token transitions to an token and transitions to when agents execute lines 6–8. If holds, agents execute lines 4–5 of the pseudocode at transition , and the first leader token is generated by this transition (and hence the leader token is the token and for an agent with the token). These imply that, in , there exists an agent that has the token and is . Therefore, the lemma holds. ∎

From Lemma 3, after agents complete the token election, only one leader token remains. From now on, by the decision of the leader token we mean the decision variable of the agent that currently holds the leader token.

From the pseudocode, the decision of the leader token is conveyed to all agents. This implies that, after the decision of the leader token converges, the decision of each agent also converges to the same value as the decision of the leader token. Hence, from now on, we show that the decision of the leader token converges to the value indicating a non-tree (resp., a tree) if there is a cycle (resp., no cycle) on the graph.

First, we show that the decision of the leader token converges to the value indicating a tree if there is no cycle on the graph.

Lemma 4.

For any globally-fair execution, if there is no cycle on the given communication graph, the decision of the leader token converges to the value indicating a tree.

Proof.

The decision of the leader token transitions to the value indicating a non-tree only if agents execute lines 28–30. From Lemma 3, when agents complete the token election, the decision of the leader token transitions to the value indicating a tree. Thus, for the purpose of contradiction, we assume that, for a globally-fair execution with a graph containing no cycle, agents execute lines 28–30 after agents complete the token election. From now on, let us consider the configurations after agents complete the token election. We first prove that, to execute lines 28–30, agents must execute the following procedure.

  1. By executing lines 19–21, an token and an token are generated.

  2. By executing lines 22–24, an token and an token are generated.

  3. By executing lines 25–27, an token is generated.

  4. Agents execute lines 28–30.

From now, we show why agents execute the above procedure to execute lines 28–30. To execute lines 28–30, an token is required (line 28). Recall that, when agents complete the token election, the leader token is . Hence, to generate an token, agents need to execute lines 25–27 (i.e., the item 3 of the procedure is necessary). This is because the behavior of lines 25–27 is the only way to generate an token. To execute lines 25–27, an token is required (line 25). To generate an token, agents need to execute lines 22–24 (i.e., the item 2 of the procedure is necessary) because the behavior of lines 22–24 is the only way to generate an token. Similarly, to execute lines 22–24, an token is required (line 22), and, to generate an token, agents need to execute lines 19–21 (i.e., the item 1 of the procedure is necessary) because the behavior of lines 19–21 is the only way to generate an token.

In the procedure, agents may perform the behaviors of some items multiple times by resetting the leader token to a token (e.g., agents may perform the behaviors of items 1, 2, 3, and 4 after performing the behaviors of items 1 and 2). However, we can observe that agents finally execute a procedure such that agents perform the behavior of each item only once in the procedure. From now on, we consider only such a procedure.

From the pseudocode, to execute lines 28–30, the following three conditions should hold during the procedure. Note that, after agents complete the token election, , , and hold.

  • After executing lines 19–21, an agent having an token does not interact with other agents until the agent interacts with an agent having an token (i.e., the agent interacts only when agents execute lines 22–24). Otherwise, agents make the token transition to an token and cannot execute lines 22–24 (i.e., the item 2 of the procedure cannot be executed).

  • After executing lines 19–21, an agent having an token does not interact with other agents until the agent interacts with an agent having an token (i.e., the agent interacts only when agents execute lines 25–27). Otherwise, agents make the token transition to an token and cannot execute lines 25–27 (i.e., the item 3 of the procedure cannot be executed).

  • After executing lines 22–24, an agent having an token does not interact with other agents until the agent interacts with an agent having an token (i.e., the agent interacts only when agents execute lines 28–30). Otherwise, agents make the token transition to an token and cannot execute lines 28–30 (i.e., the item 4 of the procedure cannot be executed).

From items 1 and 2, an token exists next to an token when agents execute lines 22–24. Hence, from the pseudocode, an token and an token are next to each other just after agents execute lines 22–24. In addition, an token and the token are also next to each other just after agents execute lines 22–24. To execute lines 25–27, an agent having the token must interact with the agent having the token without meeting the agent having the token. Furthermore, the agent having the token must not interact with other agents until agents execute lines 28–30. By the assumption, since agents execute lines 28–30, there are two paths from the agent having the token to the agent having the token just after agents execute lines 22–24. One of the paths is the path via the agent having the token. The other is the path without passing through the agent having the token. Therefore, there is a cycle in . This is a contradiction. ∎

Next, we show that the decision of the leader token converges to the value indicating a non-tree if there is a cycle on the graph.

Lemma 5.

For any globally-fair execution, if there is a cycle on the given communication graph, the decision of the leader token converges to the value indicating a non-tree.

Proof.

Consider a globally-fair execution with a graph containing a cycle. In , let be a configuration such that occurs infinitely often. From Lemma 3, eventually agents complete the token election and thus occurs infinitely often after agents complete the token election.

Clearly, each condition in lines 2–8 is not satisfied after . Thus, from the pseudocode, a token moves by any interaction (except for null transitions) after . This implies that tokens can move freely on after . Hence, from global fairness, a configuration such that all tokens are on a cycle occurs. Moreover, there exists a configuration such that is reachable from and , , and tokens are on the cycle in . This is because occurs if the following behaviors occur from .

  1. Making and tokens: If an agent having an (or ) token and an agent having an (or ) token can interact in , they interact and then an token and an token are generated by the behavior of lines 31–43. Otherwise, since the left token and the right token are on a cycle in (and hence an agent with the token has at least two edges), each agent having the token can interact with an agent having no token. In the case, an agent having an token (resp., an token) interacts with an agent having no token, and an token (resp., an token) is generated.

  2. Making an token: If the leader token is an token or an token, an agent having the token interacts with an agent having an token or no token. As a result, an token is generated. If the leader token is an token, the token moves to an agent that is on a cycle and is adjacent to an agent with an token (or an token). Then, an agent having the token interacts with an agent having the token (or the token) and then an token is generated.

There exists a configuration such that the configuration is reachable from and, on a cycle, an agent having an token is adjacent to an agent having an token in the configuration. This is because tokens can move freely on a graph. In the configuration, agents can execute lines 19–21. If agents execute lines 19–21, the configuration transitions to a configuration such that , , and tokens exist on a cycle. From the configuration, the token can move to an agent next to an agent with the token while an agent with the token and an agent with the token do not interact with any agent. This is because they are on a cycle and the token can move along the cycle. Then, an agent having the token can interact with an agent having the token, and then agents execute lines 22–24. Such behavior causes a configuration such that , , and tokens are on a cycle. From the configuration, the token can move to an agent next to an agent having the token while an agent having the token and an agent having the token do not interact with any agent. This is because they are on a cycle and the token can move along the cycle. Then, an agent having the token can interact with an agent having the token and agents can execute lines 25–27. After that, an agent having an token can interact with an agent having the token and agents can execute lines 28–30. Hence, from global fairness, since each of these configurations occurs infinitely often, agents execute lines 28–30 infinitely often, and thus agents assign the value indicating a non-tree to the decision of the leader token infinitely often. Although the decision of the leader token transitions to the value indicating a tree if agents execute line 8, agents do not execute line 8 after the token election. Therefore, the lemma holds. ∎

From Lemmas 3, 4, and 5, we prove the following theorem.

Theorem 6.

Algorithm 1 solves the tree identification problem. That is, there exists a protocol with a constant number of states and designated initial states that solves the tree identification problem under global fairness.

Proof.

From Lemma 3, there is a configuration at which agents complete the token election and in which the decision of the leader token indicates a tree. Hence, from Lemmas 4 and 5, if there is a cycle (resp., no cycle) in the given communication graph, the decision of the leader token converges to the value indicating a non-tree (resp., a tree). From the pseudocode, since each token can move freely on the graph, the decision of each agent converges to the same value as the decision of the leader token. Thus, if there is a cycle (resp., no cycle) in the given communication graph, the decision of each agent converges to the value indicating a non-tree (resp., a tree). Therefore, the theorem holds. ∎

3.2 k-regular Identification Protocol with Knowledge of an Upper Bound of the Number of Agents under Global Fairness

In this subsection, we give a k-regular identification protocol (hereinafter referred to as “RI protocol”) with designated initial states under global fairness. In this protocol, an upper bound N of the number of agents is given. We also show that the protocol solves the problem if the exact number of agents n is given instead.

From now on, we explain the basic strategy of the protocol. First, agents elect a leader token. In this protocol, agents with leader tokens leave some information at agents. To keep only the information that is left after completion of the election, we introduce the level of an agent. If an agent at some level has the leader token, we say that the leader token is at that level. Agents with leader tokens leave information together with their levels. Before agents complete the election of leader tokens, agents keep increasing their levels (we explain later how the level increases), and agents discard information with smaller levels when they increase their levels. When agents complete the election of leader tokens, the agent with the leader token is the only agent that has the largest level. Then, all agents eventually converge to that level. Hence, since agents discard information with smaller levels, agents virtually discard any information that was left before agents complete the election. From now on, we consider configurations after agents elect a leader token and discard any outdated information.

Now, we explain how the protocol solves the k-regular identification problem by using the leader token. Concretely, each agent examines whether its degree is at least k, and whether its degree is at least k + 1. If an agent confirms that its degree is at least k but does not confirm that its degree is at least k + 1, then the agent concludes that its degree is exactly k. Each agent examines whether its degree is at least k as follows (the check for k + 1 is analogous): an agent v with the leader token checks whether v can interact with k different agents. To check this, agent v marks adjacent agents and counts how many agents it has marked. Concretely, when agent v having the leader token interacts with an agent u, agent v marks agent u by making u transition to a marked state. Agent v counts how many times v interacts with an agent in a non-marked state (hereinafter referred to as a “non-marked agent”). If agent v having the leader token interacts with k non-marked agents successively, v decides that it can interact with k different agents (i.e., its degree is at least k).
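
A simplified, centralized Python sketch of this marking idea follows (our own illustration; the names are assumptions, and the real protocol's handling of failed checks, levels, and the k + 1 case is omitted): the agent holding the leader token marks the neighbors it meets and concludes that its degree is at least k once it has met k distinct non-marked neighbors.

# Simplified marking-based degree check: the agent holding the leader token
# marks the neighbors it meets and concludes "degree >= k" once it has met
# k distinct non-marked neighbors.
import random

def degree_at_least(neighbors, holder, k, seed=0, max_interactions=10_000):
    rng = random.Random(seed)
    marked = set()                                 # neighbors already marked
    for _ in range(max_interactions):
        u = rng.choice(neighbors[holder])          # interact with a random neighbor
        if u not in marked:
            marked.add(u)                          # mark u so it is not counted twice
            if len(marked) >= k:
                return True                        # k distinct neighbors met
    return False                                   # not confirmed within the budget

ring4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(degree_at_least(ring4, holder=0, k=2))       # True: agent 0 has degree 2
print(degree_at_least(ring4, holder=0, k=3))       # False: agent 0 has only 2 neighbors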

If an agent confirms that its degree is at least k, the agent stores this information locally. To do this, we introduce a variable at each agent a: this variable, initialized to false, represents whether the degree of agent a is at least k. If the variable is true, agent a has confirmed that its degree is at least k. When agent a confirms that its degree is at least k, it stores this information by making the variable transition from false to true.

Next, we show how agents decide whether the graph is k-regular. In this protocol, the agent with the leader token first decides whether the graph is k-regular, and then the decision is conveyed to all agents by the leader token. We use a decision variable at each agent, which represents the decision about the k-regular property: one value means that the agent decides the graph is k-regular, and the other means that it decides the graph is not k-regular. Whenever the agent with the leader token makes its degree variable transition to true, it also makes its decision transition to the value indicating a k-regular graph. If the agent with the leader token finds an agent whose degree variable is still false or whose degree is at least k + 1, the decision of the leader token is reset. Note that, since all agents follow the decision of the leader token, this behavior practically resets the decision of each agent. If there is such an agent, the agent with the leader token eventually finds it, since the leader token moves freely on the graph. Hence, if the graph is not k-regular, the decision of the leader token (i.e., the decision variable of the agent holding the leader token) is reset infinitely often. On the other hand, if the graph is k-regular, eventually the degree variable of each agent transitions from false to true. Let us consider a configuration in which the degree variable of every agent other than some agent a is true and that of a is false. After this configuration, when agent a makes its degree variable transition to true, agent a holds the leader token (i.e., the decision of the leader token transitions to the value indicating a k-regular graph). Hence, since there is no longer an agent whose degree variable is false or whose degree is at least k + 1, the decision of the leader token is never reset afterwards, and thus it converges to the value indicating a k-regular graph. Thus, since agents convey the decision of the leader token to all agents, eventually all agents make a correct decision.

Before we explain the details of the protocol, we first introduce the other variables of each agent.

  • A leader/mark variable represents the state related to the leader token and to marked agents. If its value is one of the leader values, the agent has a leader token; these leader values also record how many different non-marked agents the agent has interacted with (i.e., how many of its edges it has confirmed so far). Two further values indicate that the agent has no leader token, one of them additionally indicating that the agent is marked by another agent.

  • A level variable, initialized to 0, represents the level of the agent.

The protocol uses the stated number of states because the number of values taken by the leader/mark variable and by the level variable is bounded as above, and the number of values taken by the other variables (the degree variable and the decision variable) is constant.

Now, we explain the details of the protocol. The protocol is given in Algorithm 2.

1:
2:, , , , , : States for a leader token and marked agents, initialized to .
3:, , , , : States for the level of agent , initialized to .
4:, : States representing whether the degree of agent is at least , initialized to .
5:, : Decision of the -regular graph, initialized to .
6:when agent interacts with agent do
7: The behavior when agents have the same level
8:     if  then
9:{ The election of leader tokens }
10:         if  (, , , , then
11:              
12:              ,
13:              
14:              
15:{ Decision and movement of the token }
16:         else if  (, , , , then
17:              ,
18:         else if  (, , , , then
19:              ,
20:              
21:         else if   then
22:              ,
23:              if  then
24:                  
25:                  
26:              end if
27:{ Reset of of the leader token (the degree of agent is at least ) }
28:         else if   then
29:              ,
30:              
31:         end if
32:{ Reset of of the leader token ( or is ) }
33:         if   then
34:              ,
35:         end if
36: The behavior when agents have different levels
37:     else if  then
38:         
39:         
40:         
41:     end if
42:end
Algorithm 2 A RI protocol

The election of leader tokens with levels (lines 2–7 and 26–30 of the pseudocode)

Initially, each agent has a leader token and the level of each agent is 0. If two agents with leader tokens at the same level interact, the agents delete one of the leader tokens and increase the level of the agent with the remaining leader token by one (lines 2–5). Moreover, the information stored at the agent with the remaining leader token is reset (lines 6–7). Next, we consider the case where two agents at different levels interact. If an agent at the larger level interacts with an agent at the smaller level, the agent at the smaller level updates its level to the larger level (regardless of possession of the leader token). This behavior appears in lines 26–27. Furthermore, at the interaction, the agent at the smaller level resets its stored information (line 28) and deletes its leader token if it has one (line 29). We can observe that there is a level to which all agents converge, because agents update their levels only by the above behaviors and there is no behavior that increases the number of leader tokens. Since an agent at the largest level updates its level only if the agent has the leader token, there is an agent with the leader token at the largest level in any configuration. Thus, since each agent converges to the same level and the leader token moves freely among agents at the same level (we show this movement behavior later), eventually agents elect a single leader token by the above behaviors.
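
A minimal simulation sketch of this leveled election follows (our own simplification, omitting the degree and decision variables; the names and the movement rule for the leader token are assumptions consistent with the description above).

# Simulation sketch of the leveled leader election: all agents start as leaders
# at level 0; two same-level leaders merge (one is deleted, the survivor's level
# increases); an agent at a smaller level adopts the larger level, drops its
# leader token, and (in the real protocol) would discard its stored information.
import random

def elect_leader(neighbors, seed=0, steps=100_000):
    rng = random.Random(seed)
    leader = {a: True for a in neighbors}          # designated initial states
    level = {a: 0 for a in neighbors}
    agents = list(neighbors)
    for _ in range(steps):
        u = rng.choice(agents)
        v = rng.choice(neighbors[u])
        if level[u] == level[v]:
            if leader[u] and leader[v]:
                leader[v] = False                  # delete one leader token...
                level[u] += 1                      # ...and raise the survivor's level
            elif leader[u] != leader[v]:
                # the leader token moves freely among agents at the same level
                leader[u], leader[v] = leader[v], leader[u]
        elif level[u] > level[v]:
            level[v] = level[u]                    # smaller level adopts the larger level
            leader[v] = False                      # and drops its leader token, if any
        else:
            level[u] = level[v]
            leader[u] = False
    return leader, level

ring4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
leader, level = elect_leader(ring4)
print(sum(leader.values()), set(level.values()))   # typically: 1 leader, one common level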

Agents at the largest level delete the leader token only by the behavior of lines 3–7. This implies that, if at least two agents at the largest level have the leader token, eventually agents at the largest level with the leader tokens interact and then the largest level is updated. Hence, only one leader token c