On the Necessary Memory to Compute the Plurality in Multi-Agent Systems

We consider the Relative-Majority Problem (also known as Plurality), in which, given a multi-agent system where each agent is initially provided an input value out of a set of k possible ones, each agent is required to eventually compute the input value with the highest frequency in the initial configuration. We consider the problem in the general Population Protocols model in which, given an underlying undirected connected graph whose nodes represent the agents, edges are selected by a globally fair scheduler. The state complexity that is required for solving the Plurality Problem (i.e., the minimum number of memory states that each agent needs in order to solve the problem) has been a long-standing open problem. The best protocol so far for the general multi-valued case requires polynomial memory: Salehkaleybar et al. (2015) devised a protocol that solves the problem by employing O(k 2^k) states per agent, and they conjectured their upper bound to be optimal. On the other hand, under the strong assumption that agents initially agree on a total ordering of the initial input values, Gasieniec et al. (2017) provided an elegant logarithmic-memory plurality protocol. In this work, we refute Salehkaleybar et al.'s conjecture by providing a plurality protocol which employs O(k^11) states per agent. Central to our result is an ordering protocol, of independent interest, which allows us to leverage the plurality protocol by Gasieniec et al. We also provide an Ω(k^2)-state lower bound on the memory necessary to solve the problem, proving that the Plurality Problem cannot be solved within the mere memory necessary to encode the output.

1 Introduction

Consider a network of people, where each person supports one opinion from a set of possible opinions. There is also a scheduler who decides in each round which pair of neighbors can interact. The goal is to eventually reach an agreement on the opinion with the largest number of supporters, i.e., the plurality opinion (or the majority opinion when there are only two opinions). Here, eventually means at an unspecified moment in time, which the agents are not necessarily aware of (i.e., global termination is not required [25]).

The main resource we are interested in minimizing is the state complexity of each node:

How many different states does each person need to go through during such a computation?

This voting task is known as the Plurality Problem (or as the Voting Problem) in the asynchronous Population Protocols model [1, 24]. For k = 2, the problem is well understood: each person needs to maintain two bits in order for the people to elect the opinion of the majority [9, 21], regardless of the network size n, and the problem cannot be solved with a single bit [21].

However, the state complexity of the problem for general k has so far remained elusive: a clever protocol by Salehkaleybar et al. [24], called DMVR, shows how to solve the problem with O(k 2^k) states per person. They conjectured the DMVR protocol to be optimal:

“We conjecture that the DMVR protocol is an optimal solution for majority voting problem, i.e. at least states are required for any possible solution.”

On the other hand, under the assumption that agents initially agree on a total ordering of the initial input values, [17] provides an elegant plurality protocol which makes use of only a polynomial number of states. It has remained rather unclear, however, whether this assumption can be removed in order to achieve a polynomial number of states for the general Plurality Problem as well.

1.1 Related Work

Progress towards understanding the inherent computational complexity for a multi-agent system to achieve certain tasks has been largely empirical in nature. More recently, deeper insights have been offered by analytical studies of some coordination problems [23]. In this regard, understanding the amount of memory necessary for a multi-agent system to solve a computational problem is a fundamental issue, as it constrains the simplicity of the individual agents which make up the system [22]. Several research areas, such as Chemical Reaction Networks [13] and Programmable Matter [18], investigate the design of computing systems composed of elementary units; in this regard, a high memory requirement for a computational problem constitutes a prohibitive barrier to its feasibility in such systems.

The Plurality Problem (also known as Plurality Consensus Problem in Distributed Computing), is an extensively studied problem in many areas of distributed computing, such as population protocols [1, 8, 9, 21, 24], fixed-volume Chemical Reaction Networks [13, 27], asynchronous Gossip protocols [5, 6, 10, 15, 16], Statistical Physics [12] and Mathematical Biology [7, 11, 20, 26].

In the Population Protocols model, the memory is usually measured in terms of the number of states (state complexity) rather than the number of bits, following the convention for abstract automata [19]. In the context of the Plurality Problem, for k = 2, the protocols of [9, 21] require four states per node (two bits), and in [21] it is shown that the problem cannot be solved with 2 states (a single bit). For general k, the protocol of [24] uses O(k 2^k) states per node, and the only lower bound known so far has been the trivial Ω(k), as each node/agent needs at least k distinct states to specify its own opinion (which comes from a set of size k). Under the crucial assumption that agents initially agree on a representation of the input values as distinct integers, [17] provides an elegant solution to the Plurality Problem which employs a polynomial number of states only.

1.2 Our Results

In this work we refute the conjecture of [24] by devising a general ordering protocol which allows the agents to agree on a mapping of the initial input values to the integers 1, …, k, thus satisfying the assumption of the protocol by [17]. We further show how to adapt the plurality protocol by [17] in a way that allows us to couple its execution in parallel with the ordering protocol such that, once the ordering protocol has converged to the aforementioned mapping, the execution of the plurality protocol is also eventually consistent with the provided ordering of colors. We emphasize that agents are not required to detect when the protocol terminates; this is indeed easily shown to be impossible under the general assumption of a fair scheduler. The resulting plurality protocol makes use of O(k^11) states per agent.

Theorem 1.2

There is a population protocol which solves the Plurality Problem under a globally fair scheduler, employing O(k^11) states per agent.

Furthermore, we prove that Ω(k^2) states per node are necessary (Theorem 2).

1.2.1 Insights on the Ordering Problem.

The main idea for solving the ordering problem is to have some agents form a linked list, where each node is a single agent representing one of the initial colors. The fairness property of the scheduler allows for an adversarial kind of asynchronicity in how agents' interactions take place. Because of this distributed nature of the problem, the (temporary) creation of multiple linked lists cannot be avoided. Thus, it is necessary to devise a way to eliminate multiple linked lists whenever more than one of them is detected. We achieve this goal by having agents from one of the linked lists leave it; also, as soon as these leaving agents interact with their successor or predecessor in their former list, they force them to leave the list as well, thus propagating the removal process until the entire list gets destroyed.

On the other hand, in order to form the linked list, the simple idea of having removed agents append themselves to an existing linked list does not work. One of the issues with this naive approach is that a free agent may interact with the last agent of a list which is in the process of being destroyed, while the removal process has not yet reached that agent. Our approach to resolve this latter issue consists of, firstly, forcing the destruction process of a linked list to start from the first agent of the list, and secondly, forcing free agents to attach to an existing list by climbing it up from its first agent and appending themselves to its end once they have traversed it all. This way, by the time there is only one first agent of a linked list (we call such agents root agents), we can be sure that all the free agents must follow the linked list starting from the root agent, thus avoiding extending incomplete linked lists.

1.3 Model and Basic Definitions

1.3.1 Population Protocols

In this work, we consider the communication model of Population Protocols [1]: the multi-agent system is represented by a connected graph of n nodes/agents, where each node implements a finite state machine over a finite set of states. The communication in this model proceeds in discrete steps. We remark that asynchronous continuous-time models with Poisson transition rates can always be mapped to a discrete-time model [14].

At each time step, an (oriented) edge is chosen by a certain scheduler, and the two endpoint nodes interact. Furthermore, there is a transition function that, given the ordered pair of states of the two interacting nodes, returns their new states. We call configuration, and denote it by C_t, the vector whose entries are the agents' states after t time steps. We say that a configuration C' is reachable from a configuration C if there exists a sequence of edges such that, if we start from C and we let the nodes interact according to that sequence, the resulting configuration is C'.

In recent works, the scheduler in this model is typically assumed to be probabilistic: the edge that is selected at each step is determined by a probability distribution over the edges. The most general scheduler studied is the fair scheduler [2], which guarantees the following global fairness property [3, 4]: a scheduler is said to be globally fair iff, whenever a configuration C appears infinitely often in an infinite execution, any configuration reachable from C also appears infinitely often. Some of our results hold for an even weaker version of scheduler, which satisfies the weak fairness property [4, 17]: a scheduler is said to be weakly fair iff any edge appears infinitely often in the sequence of activations. (Formally, the globally fair scheduler is not a special case of the weak one since, if the activation of an edge does not lead to a different configuration, it can be ignored under a globally fair scheduler; however, if such useless activations are ignored, it is easy to see that the globally fair scheduler is a special case of the weak one.) Note that any probabilistic scheduler which selects every edge with positive probability is a globally fair scheduler, in the sense that the global fairness property holds with probability 1. Indeed, the fairness condition for a scheduler may be viewed as an attempt to capture, in a general way, useful probability-1 properties in a probability-free model [2]. This is crucially the case when correctness is required to be deterministic (i.e., the probability of failure should be 0) [21, 24].

We emphasize that our theoretical results concern the existence of certain times in the execution of the protocols for which some given properties hold, but no general time upper bound is provided, since a fair scheduler can typically delay some edge activation arbitrarily.
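To make the model concrete, the following minimal Python sketch simulates a generic population protocol under a uniformly random scheduler (which, as remarked above, is globally fair with probability 1). The transition function used in the example is a placeholder illustrating the interface, not one of the protocols of this paper.

import random
from typing import Callable, Dict, Hashable, List, Tuple

State = Hashable
Delta = Callable[[State, State], Tuple[State, State]]

def simulate(edges: List[Tuple[int, int]],
             init: Dict[int, State],
             delta: Delta,
             steps: int = 10_000,
             seed: int = 0) -> Dict[int, State]:
    # Run `steps` pairwise interactions under a uniformly random scheduler:
    # at each step an edge and an orientation are drawn uniformly at random,
    # and `delta` maps the ordered pair of endpoint states to their new states.
    rng = random.Random(seed)
    config = dict(init)                      # configuration: agent id -> state
    for _ in range(steps):
        u, v = rng.choice(edges)
        if rng.random() < 0.5:               # random orientation of the chosen edge
            u, v = v, u
        config[u], config[v] = delta(config[u], config[v])
    return config

# Placeholder transition: the initiator overwrites the responder's state.
example_delta: Delta = lambda q_u, q_v: (q_u, q_u)

print(simulate([(0, 1), (1, 2), (2, 3)],
               {0: "red", 1: "blue", 2: "blue", 3: "red"},
               example_delta, steps=100))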

1.3.2 k-Plurality Problem.

Let G be a network of n agents, such that each agent initially supports a value from a set of possible values of size k. We refer to the input values as colors. For each color c, denote by V_c the set of agents supporting color c; we further denote by c_u the input color of agent u. We say that a population protocol solves the k-Plurality Problem if it reaches a configuration C such that, in C and in any configuration reachable from it, the agents agree on the color with the greatest number of supporters in the initial configuration. More formally, there is an output function γ such that, for any configuration C' reachable from C and any agent u, γ applied to u's state in C' equals the plurality color. If the relative majority is not unique, the agents should reach agreement on any one of the plurality colors.

In this work, we focus on solving the k-Plurality Problem under a fair scheduler with the goal of optimizing the state complexity, i.e., the number of states employed by each agent.

We emphasize that we do not assume any non-trivial lower bound on the support of the initial plurality color compared to the other colors, nor that the agents know the size n of the network, or that they know in advance the number of colors k. We do not make any assumption on the underlying graph other than connectedness. We remark that the analysis of our protocol in Theorem 1.2 holds for strongly connected directed graphs; however, for the sake of simplicity, we restrict ourselves to the original setting by [23].

Crucially, motivated by real-world scenarios such as DNA computing and biological protocols, we do not even assume that the nodes initially agree on a binary representation of the colors: they are only able to recognize whether two colors are equal and to memorize them. This latter assumption separates the polynomial state complexity of [17] from the exponential state complexity of [24].
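As a sanity check on the definition, the following helper (an illustration, not part of any protocol) computes the set of plurality colors of an initial configuration and verifies that a given snapshot of the agents' outputs agrees on one of them; the eventual-stabilization requirement is of course not captured by a single snapshot.

from collections import Counter
from typing import Dict, Hashable, Set

def plurality_colors(initial_colors: Dict[int, Hashable]) -> Set[Hashable]:
    # All colors attaining the maximum number of supporters in the initial configuration.
    counts = Counter(initial_colors.values())
    top = max(counts.values())
    return {c for c, m in counts.items() if m == top}

def outputs_are_valid(initial_colors: Dict[int, Hashable],
                      outputs: Dict[int, Hashable]) -> bool:
    # The agents must agree on a single color, and it must be a plurality color.
    agreed = set(outputs.values())
    return len(agreed) == 1 and agreed <= plurality_colors(initial_colors)

print(outputs_are_valid({1: "a", 2: "a", 3: "b"}, {1: "a", 2: "a", 3: "a"}))  # True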

2 Lower Bound on the State Complexity

Since the agents need at least to be able to distinguish their initial colors from each other, the trivial lower bound of k states follows. In this section, we show that Ω(k^2) states are in fact necessary.

Theorem 2

Any protocol for the k-Plurality Problem requires at least Ω(k^2) memory states per agent.

Proof

The high-level idea is to employ an indistinguishability argument. That is, we show that, for any protocol with too few states, there must be two initial configurations, C_1 and C_2, with different plurality colors, such that a common configuration is reachable from both of them. Therefore, the protocol must fail in at least one of these two initial configurations.

Let P be a protocol that solves the plurality consensus problem with k initial colors, let γ be the output function of P, and let s be the number of states of P. We start by observing that, by the pigeonhole principle, there must be some color c* such that the number of states that γ maps to c* is at most s/k. For any initial configuration C and color c, let n_c(C) be the number of agents in C with initial color c.

For an odd integer n, let 𝒞 be the set of all initial configurations C on n agents such that the plurality color of C is c* and, for any color c ≠ c*, n_c(C) is an even number. Given that, for the sake of the lower bound, we can assume a complete topology, a configuration in 𝒞 is determined by the numbers n_c(C), so the number of configurations in 𝒞 equals the number of ways to put the corresponding pairs of balls into the k − 1 bins given by the colors other than c*; for fixed k, this quantity grows as n^Θ(k).

For each C in 𝒞, since the plurality color in C is c*, P will reach a configuration in which γ maps all agents to c*. The number of such possible configurations is at most the number of ways to put n balls into the bins given by the states that γ maps to c*, hence into at most s/k bins; for a sufficiently large n, this is at most n^O(s/k). Observe that, if s is sufficiently smaller than k^2, for sufficiently large n the upper bound on the number of possible final configurations is less than the lower bound on the size of 𝒞. Therefore, there must be two distinct initial configurations C_1, C_2 in 𝒞 and a configuration C* in which all agents are mapped to c*, such that C* is reachable from both C_1 and C_2, by some activation sequences σ_1 and σ_2 respectively.

By the definition of 𝒞, we have the following observation (Observation 2): for any two distinct configurations in 𝒞, there exists a color c ≠ c* whose number of supporters is not the same in the two configurations. Let ĉ be the color obtained from Observation 2 when applied to C_1 and C_2; without loss of generality, assume that n_ĉ(C_1) < n_ĉ(C_2). Let D be an initial configuration whose agents all have initial color ĉ. Let us consider the two initial configurations C_1 ∪ D and C_2 ∪ D. Observe that the number of agents of D can be chosen so that the plurality color in C_1 ∪ D is still c*, while the plurality color in C_2 ∪ D is now ĉ. Since σ_1 and σ_2 are possible initial sequences of interactions in C_1 ∪ D and C_2 ∪ D respectively (they never involve the agents of D), both C_1 ∪ D and C_2 ∪ D can reach the same configuration, namely the one that agrees with C* on the original agents and leaves the agents of D in their common initial state. Therefore, a protocol using fewer states than the claimed bound can fail to distinguish between the initial configurations C_1 ∪ D and C_2 ∪ D. Hence, P fails to solve the problem on at least one initial configuration.
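The comparison at the heart of the proof is between two stars-and-bars quantities. The snippet below is only a numerical illustration: the stars-and-bars helper is standard, while the concrete parameters n, k and the state budget s, as well as the exact way they enter the two counts, are assumptions matching the sketch above rather than the paper's exact expressions.

from math import comb

def multisets(balls: int, bins: int) -> int:
    # Stars and bars: ways to place `balls` indistinguishable balls into `bins` bins.
    return comb(balls + bins - 1, bins - 1)

k, n = 12, 100_001          # hypothetical number of colors and (odd) number of agents
s = 3 * k                   # hypothetical state budget, well below k^2

initial_lower_bound = multisets((n - 1) // 2, k - 1)   # ~ n^Θ(k) initial configurations
final_upper_bound = multisets(n, max(1, s // k))       # ~ n^O(s/k) possible final configurations
print(initial_lower_bound > final_upper_bound)         # True: pigeonhole forces a collision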

3 Upper Bound on the State Complexity

In the following, we present a protocol that solves the problem with polynomial state complexity; we prove that O(k^11) states per agent suffice. The protocol proposed by Gasieniec et al. [17] solves the problem using a polynomial number of states, under the hypothesis that agents agree on a way to represent each color with a distinct label of O(log k) bits.

First, we present a protocol that constructs such a shared labeling for the input colors (Theorem 3.1). Then, we combine these two protocols to design a new protocol that solves the k-Plurality Problem (Theorem 1.2).

3.1 Protocol for the Ordering Problem

In the Ordering Problem, each agent u initially obtains its input color c_u from a set of possible colors of size k. The goal of the agents is to eventually agree on a bijection between the set of the possible input colors and the integers 1, …, k. In other words, each agent u eventually gets a label d_u, such that for any two agents u and v, d_u = d_v iff c_u = c_v. We want to solve the Ordering Problem by means of a protocol which uses as few states as possible.

A weakly fair scheduler activates pairs of agents to interact. We consider the underlying topology of possible interactions to be a complete directed graph; we show how to remove this assumption in Section 5 (General Graphs).

In this section, we prove the following theorem.

Theorem 3.1

There is a population protocol which solves the Ordering Problem under a weakly fair scheduler, employing O(k^4) states per agent.

We refer the reader to Section 1.2.1 (Insights on the Ordering Problem) in the Introduction for an overview of the main ideas behind the protocol.

Memory Organization. The state of each agent u encodes the following information:

  1. c_u, the initial color, which never changes.

  2. d_u, the desired value (the label that u currently holds for its color), stored in O(log k) bits.

  3. l_u, a bit indicating whether or not u is a leader.

  4. r_u, a bit indicating whether or not u is a root.

  5. pre_u, a color from the set of input colors. If l_u is set and u is on a linked list, then pre_u is the color of the agent preceding u on the linked list. Otherwise, pre_u is set to be c_u.

  6. suc_u, a color from the set of input colors. If u is on a linked list, suc_u is the color of the agent succeeding u on the linked list (or c_u if u is the last agent in the linked list). Otherwise, suc_u is the color of the agent whom u is following on a linked list, to reach the end of that linked list, or c_u if u is not following a linked list yet.

Thus, the number of states used is at most the product of the ranges of the six fields above, i.e., k · O(k) · 2 · 2 · k · k = O(k^4).

Definitions. An agent u is called a leader iff l_u is set. A leader u is called a root iff r_u is set. A leader u is called isolated iff u is not a root and pre_u = c_u.

A linked list of i links is a sequence of i + 1 leaders u_0, u_1, …, u_i such that only u_0 is a root and, for every 0 ≤ j < i, the successor field of u_j holds the color of u_{j+1} and the predecessor field of u_{j+1} holds the color of u_j. A linked list is said to be consistent iff none of its agents' information changes by any sequence of further activations, except possibly the successor field of the last agent on the linked list.

An isolated agent u is a good agent iff suc_u is either c_u or the color of one of the agents of a consistent linked list.

Initialization. Before the execution of the protocol, each agent u sets l_u = r_u = 1 and pre_u = suc_u = c_u (the desired value d_u is initially unassigned).
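For concreteness, the agent state described above can be packed into a small Python record; the field names follow Listing 1 below, while the concrete default values (in particular, reading 0 as the unassigned desired value) are an assumption of this sketch.

from dataclasses import dataclass

@dataclass
class Agent:
    c: int            # initial color, never changes
    d: int = 0        # desired value (label); 0 is used here as "unassigned" (assumption)
    l: int = 1        # leader bit: initially every agent is a leader
    r: int = 1        # root bit: initially every agent is a root
    pre: int = -1     # color of the predecessor on the linked list; own color if none
    suc: int = -1     # color of the successor / followed agent; own color if none

    def __post_init__(self):
        # The predecessor and successor fields start at the agent's own color
        # (the "no pointer" sentinel used by the protocol).
        if self.pre == -1:
            self.pre = self.c
        if self.suc == -1:
            self.suc = self.c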

Transition Function. Let us suppose that two agents a and b interact, with a initiating the interaction. The transition function that updates their states is given by the following Python code, where the clear function isolates an agent.

def clear(u):
    # Isolate agent u: it stops being a root, its desired value is reset,
    # and its predecessor/successor fields point back to its own color.
    u.r = u.d = 0
    u.pre = u.suc = u.c

def delta(a, b):
    # Transition function for the ordered interaction (a, b); a initiates.
    if a.c == b.c:
        if a.l and b.l and (not a.r or b.r):
            a.l = a.r = 0                                   # Rule 1
        elif a.l and not b.l:
            b.d = a.d                                       # Rule 2
    elif a.c != b.c and a.l and b.l:
        if a.r and b.r:
            clear(a)                                        # Rule 3
        elif not a.r and a.pre == a.c and a.suc == b.c and b.pre == b.c:
            clear(a)                                        # Rule 4
        elif a.r and not b.r:
            if a.suc == a.c or (a.suc == b.c and (b.pre != a.c or b.d != 1)):
                b.d = 1                                     # Rule 5
                a.suc = b.suc = b.c
                b.pre = a.c
            elif a.pre == a.c:
                b.suc = a.suc                               # Rule 6
        elif not a.r and not b.r:
            if a.pre != a.c and a.suc == b.c and b.pre != a.c:
                a.suc = a.c                                 # Rule 7
            elif b.pre == a.c and (a.pre == a.c or a.suc != b.c):
                clear(b)                                    # Rule 8
            elif a.pre != a.c and a.suc == b.c and b.pre == a.c and a.d + 1 != b.d:
                a.suc = a.c                                 # Rule 9
                clear(b)
            elif a.pre == a.c and a.suc == b.c:
                if b.suc != b.c:
                    a.suc = b.suc                           # Rule 10
                else:
                    a.d = b.d + 1                           # Rule 11
                    b.suc = a.suc = a.c
                    a.pre = b.c
Listing 1: Protocol for the Ordering Problem.

As seen above, there are 11 rules. The rules are defined for directed pair interactions, but can easily be modified to handle the undirected-interaction case.
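Assuming the Agent record above and the clear/delta functions of Listing 1 are defined in the same module, the following driver is a rough way to exercise the protocol under a uniformly random scheduler on a complete directed graph; it merely reports the desired values of the surviving leaders, without asserting convergence within a fixed number of steps.

import random

def run_ordering(colors, steps=200_000, seed=0):
    # One agent per entry of `colors`; every ordered pair of agents can interact.
    rng = random.Random(seed)
    agents = [Agent(c) for c in colors]
    for _ in range(steps):
        a, b = rng.sample(agents, 2)    # ordered pair (a, b): a initiates
        delta(a, b)
    # Tentative label assigned to each color by its surviving leader.
    return {u.c: u.d for u in agents if u.l}

print(run_ordering([1, 1, 2, 2, 2, 3, 3]))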

3.1.1 Proof of Theorem 3.1.

We now prove the correctness of the protocol given in Listing 1. We have the following.

Lemma 1

After some number of activations, in each nonempty set of agents supporting the same initial color, only one agent is a leader, and among all leaders only one is a root. After such a configuration is reached, the leader and root bits of the agents never change again.

Proof

The protocol never changes a leader or root bit from False to True. When two leaders with the same color interact, one of them clears its leader bit, due to Rule 1 (notice that the direction of interaction is relevant here). Therefore, the number of leaders decreases until no two leaders have the same color, after which no leader bit of any agent ever changes. Afterwards, when two roots interact, they now have different colors and only one of them remains a root, due to Rule 3. Furthermore, note that when two leaders interact where one of them is a root, the one who remains a leader is also a root, due to Rule 1. Hence, we conclude that there is always a root, and after some number of interactions the root must be unique, after which no root bit of any agent ever changes.

Let t_1 be the number of activations described in Lemma 1. Let L be the set of leaders after t_1 activations and let ρ denote the unique root. We now prove, by induction on i, that for any integer 1 ≤ i ≤ |L| − 1, after some number of activations, there is a consistent linked list of i links whose agents belong to L.

From now on, we may refer to a leader by its color. Observe that, since there is only one root and no two leaders have the same color, any linked list that exists after t_1 activations is a consistent one.

Base case i = 1. If, after t_1 activations, the root ρ does not have a successor (i.e., its successor field holds its own color), then as soon as ρ interacts with another leader, it makes the other one its successor, due to Rule 5. Otherwise, as soon as ρ interacts with the leader whose color is held in its successor field, by Rule 5 we can be sure that they form a consistent linked list of 1 link.

Induction step. Suppose that i ≥ 1 and, after some number of activations, a consistent linked list of i links exists. Let u_1, u_2, …, u_i denote the agents succeeding the root u_0 = ρ on the linked list, respectively. Suppose that the successor field of u_i does not hold u_i's own color, and let w denote the leader whose color it holds. Consider the first interaction between u_i and w after that point. After such an interaction, if w's predecessor field holds the color of u_i and d_w = d_{u_i} + 1, we have a consistent linked list of i + 1 links; otherwise, Rule 7 or Rule 9 executes and the successor field of u_i is set back to u_i's own color. We now assume the latter is the case.

We prove the following.

Lemma 2

Suppose that some number of activations has passed, and ρ = u_0, u_1, …, u_i form a consistent linked list of i links with i < |L| − 1, and also the successor field of u_i holds u_i's own color. Then, after some more activations, a good agent exists.

Proof (Proof of Lemma 2)

If a good agent already exists, the claim is proved. Therefore, we assume that no good agent exists. Define R to be the set of leaders which do not belong to the linked list. Let S be the set of agents of R which are isolated and whose successor field does not hold their own color. It follows from the hypothesis that R is nonempty. First, we prove the lemma assuming that all agents of R are isolated. Then, we prove the other cases by induction on the size of S.

Case S = R (every agent of R is isolated). Let u be an agent of R. Since u is not a good agent, its successor field holds the color of another agent of R; let v denote that agent, and note that, being isolated, v's predecessor field holds its own color. Consider the first moment from now on in which u and v interact. If one of u or v has become a good agent in the meantime, we are done. Otherwise, Rule 4 executes and u is cleared. Thus, after some number of activations, u is a good agent.

We now use induction on the size of S to prove the remaining cases.

Base case S = ∅. Consider an agent u_1 of R. Let u_2 be the leader whose color is held in the predecessor field of u_1. If the predecessor field of u_2 does not hold u_2's own color, let u_3 be the leader whose color it holds, and so on. We repeat this process until we reach some agent u_j such that either the predecessor field of u_j holds u_j's own color, or u_j = u_{j'} for some j' < j. Since S = ∅, in the first case u_j does not belong to R, and u_{j−1} is cleared by the time it and u_j have interacted, due to Rule 8. Note that the only way S gets new members is that an agent becomes cleared, which already implies the existence of a good agent. Otherwise, u_j = u_{j'} for some j' < j, which means that we run into a cycle when following the predecessor fields of the agents. In particular, there will be a pair of agents on this cycle such that, when they interact (if they have not already been cleared by that time), Rule 8 or Rule 9 executes and an agent is cleared. Therefore, after some number of activations, an agent is cleared and a good agent exists.

Induction step. Suppose that S is nonempty and the statement holds for all smaller sizes of S. We show that it also holds for the current size. Again, we repeat the process described in the base case. This time, we stop at an agent w if any of the following holds: i) the predecessor field of w holds w's own color and w does not belong to R, ii) w has already been visited (we run into a cycle), or iii) w belongs to S.

The first two cases follow from the same argument as in the base case. In the third case, let w' be the agent that reached w in the process, and suppose that w' and w interact at some later time. If an agent has been cleared by that time, then we have a good agent. Otherwise, if no agent has been cleared in the meantime and, by that time, w is no longer in S, then the size of S has been reduced by at least 1, and the claim follows by the induction hypothesis. Otherwise, by Rule 8, the interaction between w and w' clears w'. Thus, eventually a good agent exists.

Let g be a good agent, whose existence is guaranteed by Lemma 2. The only activations that change the state of g are its interactions with the leader whose color is held in its successor field (or with the root ρ, when the successor field of g holds g's own color). If that leader is not the last agent of the linked list, the successor field of g is updated to hold the color of the next agent on the list (Rule 10, or Rule 6 in the interaction with the root). Therefore, after at most |L| such activations, g interacts with the last agent on the linked list and, since the predecessor field of g holds g's own color, g is added to the linked list by Rule 11 (provided that the linked list has not already increased its size by attaching another good agent to it). Therefore, after some number of activations, a consistent linked list of i + 1 links is formed, concluding the induction.

We have thus proved that, after some number of activations, there is a consistent linked list that includes all agents from L. Let w be the last agent on the linked list. Rule 7 ensures that, after some activations, the successor field of w holds w's own color. Also, after some activations, all non-leader agents copy the assigned value of their leader (Rule 2). Afterwards, the whole system stabilizes and no agent changes its state, concluding the proof of the theorem.

As a final remark, notice that, for an agent u, there may be sequences of edge activations that lead the assigned label d_u to temporarily reach large values before stabilizing. We thus assume that the variable d_u overflows when exceeding the largest number it can store, and gets set back to its smallest value. Notice that its range is guaranteed to be large enough to store the final labels, which never exceed k. It is straightforward to verify that this latter assumption does not affect our analysis above. ∎

4 Plurality Protocol with O(k^11) States

We now come back to the original problem by proving the following result.

Theorem 4

There is a population protocol which solves the k-Plurality Problem under a weakly fair scheduler, when the underlying graph is complete, employing O(k^11) states per agent.

Recall that, initially, each agent u obtains its initial color c_u from a set of possible colors of size k. For the sake of simplicity, in this section we consider the underlying topology of possible interactions to be a complete graph; we show how to remove this assumption in Section 5 (General Graphs), thus proving Theorem 1.2. A weakly fair scheduler activates pairs of agents to interact. The goal is for all agents to agree on the plurality color, using as few states as possible.

4.0.1 Main Intuition behind the Combined Protocol.

The protocol proposed by Gasieniec et al. [17] solves this problem under the hypothesis that each color is denoted by a never-changing label of O(log k) bits, such that each bit is either -1 or 1, rather than the more standard 0 and 1. We adopt the same notation and assume that the ordering protocol stores its labels in this format. The idea is to run both protocols, the ordering protocol and the protocol of [17], in parallel and, whenever for an agent u the label it uses in the protocol of [17] and its ordering label d_u are not equal, to ensure that after some activations u can be safely reset: we then set its label in the protocol of [17] to be d_u and reinitialize its remaining variables according to the initialization of [17].

Notice that, since every agent is required to eventually learn the label of the plurality color, each agent also stores a color that corresponds to that label.

Memory Organization. The state of each agent u encodes the following information:

  1. the fields of the ordering protocol, as described in Section 3.1 (some of them suitably renamed to avoid clashes with the variables of the protocol of [17]),

  2. the variables of the protocol of [17], as described in that work, and

  3. a color from the set of input colors, which holds the agent's current guess for the relative majority color.

The number of states used is at most O(k^11).

Definitions. An agent is called unstable iff the label it uses in the protocol of [17] differs from its ordering label. For each i not exceeding the number of bits of the labels, and for each i-bit number b with bit values either -1 or 1, let us define B_b to be the set of all agents such that the first i bits of their label are equal to b.
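The prefix sets B_b can be made concrete with a small helper; in the sketch below, labels are represented as tuples of +1/-1 values (an illustrative encoding, not necessarily the paper's).

from collections import defaultdict
from typing import Dict, List, Tuple

def prefix_groups(labels: Dict[str, Tuple[int, ...]], i: int) -> Dict[Tuple[int, ...], List[str]]:
    # Group agent ids by the first i bits of their (+1/-1)-valued label.
    groups: Dict[Tuple[int, ...], List[str]] = defaultdict(list)
    for agent, lab in labels.items():
        groups[lab[:i]].append(agent)
    return dict(groups)

print(prefix_groups({"u": (1, -1, 1), "v": (1, -1, -1), "w": (-1, 1, 1)}, 2))
# {(1, -1): ['u', 'v'], (-1, 1): ['w']}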

Initialization. Before the execution of the protocol, for each agent u, the variables of the ordering protocol are initialized according to its initialization rule. Note that, instead of all bits set to 0, the initial label has all bits set to -1. Moreover, we set the label used in the protocol of [17] equal to the ordering label, and initialize the remaining variables of [17] according to its own initialization. The majority-color variable is initially set to the agent's own color.

Transition Function. We now define the transition function of the combined protocol. Let us suppose that two agents u and v are activated, with u initiating the interaction. Recall the transition function of the ordering protocol (Listing 1) and the transition function of the protocol of [17].

First, the values related to the ordering protocol are updated according to its transition function. If u or v holds the label of the winning color in the protocol of [17] (as described in that work), the other agent sets its majority-color variable to the winner's color. Afterwards,

  1. If and , we update the values related to according to .

  2. If , let , and let . Let and be analogously defined. If , we set and initialize and according to initialization rule of . Otherwise, let be if , otherwise. For each , if and share the same -bit prefix, we have that

    1. If , set and ,

    2. Otherwise, update and according to .

    If , we update the array and, if needed, we propagate the changes as in .

4.0.2 Proof of Theorem 4.

First, we prove that, for each i and for each b that is an i-bit number with bit values equal to either -1 or 1, the two invariants of the protocol of [17] hold for the set B_b.

The interactions in which states are updated according to the protocol of [17] satisfy the invariants due to the correctness of that protocol. The other interactions can be divided into the following two cases.

  1. For any agent , if changes and the change includes a bit from the -bit prefix of , we know that by definition of the protocol. Therefore, if is in before the change, the same value is subtracted from both sides of the first invariant. Otherwise, if is in after the change, the same value is added to both sides of the first invariant. Moreover, since after the reinitialization of it is still the case that , the invariants hold.

  2. If and are two agents such that and changes simultaneously, then we know that by definition of the protocol and, because of the second invariant, either and , or and . In both cases, remains unchanged after is set to and is set to , so the first invariant holds. It is immediate to check that the second invariant still holds as well.

We proved that the two invariants hold throughout the execution of the protocol. It follows from the correctness of the ordering protocol that, after some number of activations, for each agent u the ordering label d_u doesn't change anymore.

Lemma 3

After some number of activations, for each agent u, the label that u uses in the protocol of [17] equals its ordering label d_u.

By the definition of the protocol, since the ordering label of each agent remains unchanged after those activations, it is obvious that the label used in the protocol of [17] remains unchanged as well. Thus, from the correctness of [17], it follows that the whole system eventually stabilizes and every agent knows the label of the plurality color. Furthermore, when an agent u interacts with an agent v holding the winning color label, u sets its majority-color variable to c_v. Otherwise, if u has the winning color itself, as soon as it is activated it sets its majority-color variable to c_u (if it is not set already). Therefore, it only remains to prove Lemma 3.

4.0.3 Proof of Lemma 3.

Suppose that the activations after which the ordering labels no longer change (as guaranteed above) have passed. Since the d values of the agents remain unchanged from this point on, by the definition of unstable it immediately follows that the number of unstable agents never increases.

Hence, to conclude the proof it suffices to prove the following fact.

Fact 4.0.3

Suppose that, after some number of activations have passed, an unstable agent still exists. Then, after some additional number of activations, the number of unstable agents decreases.

To see why Fact 4.0.3 holds, suppose that some number of activations have passed and is an unstable agent. Let . Since the protocol does not change and for all , the size of never increases. We prove Fact 4.0.3 by induction on .

Base case . As soon as is activated, it will set to and thus the number of unstable agents will decrease.

Induction step. Suppose , and for all , after some activations either or the number of unstable agents decreases. Let be an integer, and let denote the -bit prefix of . Let be the set of all agents such that is unstable and . does not let agents in interact in , but any two agents from can interact with each other. It can easily be seen that the two invariants hold for agents in in . After some interactions, we can distinguish the following cases: i) an unstable agent in becomes stable, ii) an unstable agent becomes stable and is added to , or iii) the agents in in stabilize.

In the latter case, suppose without loss of generality that . By the first invariant of on , we know that there will be another agent such that . As soon as and interact, the protocol ensures that after the interaction, . Thus, after some number of activations, either the number of unstable agents decreases or the size of decreases. Hence, Fact 4.0.3 follows by the induction hypothesis, and the proof of Lemma 3 is completed. ∎

5 General Graphs

The protocol of Section 4 works on complete directed graphs, but it can be easily modified to work on complete undirected graphs. We now present a protocol which works on arbitrary undirected connected graphs, under a globally fair scheduler, and finally prove our main result, Theorem 1.2.

5.0.1 Plurality Protocol on General Graphs.

The idea is that, whenever a pair of agents is activated, the two agents can swap their updated states. This way, the agents effectively travel on the nodes of the underlying graph and possibly interact with other agents that were not initially adjacent.

Therefore, let us define the transition function of the new protocol as the transition function of the protocol of Section 4, modified so that, after each interaction, the two interacting agents swap their updated states. The initialization of the new protocol is the same as that of the protocol of Section 4.
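In the functional view of the transition function (a map from ordered pairs of states to pairs of states, as in Section 1.3.1), the modification amounts to composing the original transition function with a swap of the two resulting states; a minimal sketch:

from typing import Callable, Tuple, TypeVar

Q = TypeVar("Q")

def with_swap(delta: Callable[[Q, Q], Tuple[Q, Q]]) -> Callable[[Q, Q], Tuple[Q, Q]]:
    # General-graph variant: after the interaction, the two endpoints exchange their
    # updated states, so states effectively travel along the edges of the graph.
    def swapped(q_a: Q, q_b: Q) -> Tuple[Q, Q]:
        new_a, new_b = delta(q_a, q_b)
        return new_b, new_a
    return swapped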

Proof (Proof of Theorem 1.2)

Let G be any connected graph. Let C_0, C_1, … be an infinite sequence of configurations obtained by running the protocol on G under a globally fair scheduler, where C_0 is the initial configuration. Since the number of possible states is finite, the number of possible configurations is also finite. Therefore, there exists a configuration C that appears infinitely often in the sequence.

For all distinct pairs of agents u and v, let P_{u,v} be a series of edges forming a path from u to v, and suppose that the edges of P_{u,v} get activated first in the order in which they appear in the path, and then in reverse order. Let σ_{u,v} be the concatenation of such edge activations. If we activate edges according to σ_{u,v}, then the state of u travels along the path (possibly interacting with some other agents), until it interacts with v, and then travels back to its position. Therefore, the sequence of activations σ_{u,v} ensures that the pair of agents u and v interact with each other at least once. If we keep activating edges according to the sequences σ_{u,v}, for each pair of agents u and v, then, starting from C, each pair of agents interacts infinitely often.

Remark that a globally fair scheduler is also a weakly fair one. By the correctness of the protocol of Section 4 under a weakly fair scheduler (Theorem 4), by repeating the mentioned edge activation sequences starting from C, a stable configuration will be reached (a configuration in which all agents know the initial plurality color, and their guess remains correct thereafter). Therefore, a stable configuration is reachable from C. By the definition of a globally fair scheduler, since C is reached infinitely often, a stable configuration is eventually reached. ∎

References

  • [1] Dana Angluin, James Aspnes, Zoë Diamadi, Michael J. Fischer, and René Peralta. Computation in networks of passively mobile finite-state sensors. Distributed Computing, 18(4):235–253, 2006.
  • [2] Dana Angluin, James Aspnes, David Eisenstat, and Eric Ruppert. The computational power of population protocols. Distributed Computing, 20(4):279–304, November 2007.
  • [3] James Aspnes, Joffroy Beauquier, Janna Burman, and Devan Sohier. Time and Space Optimal Counting in Population Protocols. In 20th International Conference on Principles of Distributed Systems (OPODIS 2016), volume 70 of Leibniz International Proc. in Informatics (LIPIcs), pages 13:1–13:17, Dagstuhl, Germany, 2017.
  • [4] Joffroy Beauquier, Janna Burman, Simon Clavière, and Devan Sohier. Space-Optimal Counting in Population Protocols. In Proc. of the 29th International Symposium on Distributed Computing - Volume 9363, DISC 2015, pages 631–646, New York, NY, USA, 2015. Springer-Verlag New York, Inc.
  • [5] Luca Becchetti, Andrea E. F. Clementi, Emanuele Natale, Francesco Pasquale, and Riccardo Silvestri. Plurality consensus in the gossip model. In Proc. of the 26th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2015, pages 371–390, 2015.
  • [6] Luca Becchetti, Andrea E. F. Clementi, Emanuele Natale, Francesco Pasquale, and Luca Trevisan. Stabilizing consensus with many opinions. In Proc.of the 27th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2016, pages 620–635, 2016.
  • [7] Ohad Ben-Shahar, Shlomi Dolev, Andrey Dolgin, and Michael Segal. Direction election in flocking swarms. Ad Hoc Networks, 12:250–258, 2014.
  • [8] F. Benezit, P. Thiran, and M. Vetterli. The Distributed Multiple Voting Problem. IEEE Journal of Selected Topics in Signal Proc., 5(4):791–804, August 2011.
  • [9] Florence Bénézit, Patrick Thiran, and Martin Vetterli. Interval consensus: From quantized gossip to voting. In Proc. of the IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2009, pages 3661–3664, 2009.
  • [10] Stephen Boyd, Arpita Ghosh, Balaji Prabhakar, and Devavrat Shah. Randomized Gossip Algorithms. IEEE/ACM Trans. Netw., 14(SI):2508–2530, June 2006.
  • [11] Iain D. Couzin, Jens Krause, Nigel R. Franks, and Simon A. Levin. Effective leadership and decision-making in animal groups on the move. Nature, 433(7025):513–516, 2005.
  • [12] David A. Levin and Yuval Peres. Markov Chains and Mixing Times. American Mathematical Society, Providence, RI, 1st edition, December 2008.
  • [13] David Doty. Timing in chemical reaction networks. In Proc. of the 25th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2014, pages 772–784, 2014.
  • [14] Robert Elsässer, Tom Friedetzky, Dominik Kaaser, Frederik Mallmann-Trenn, and Horst Trinker. Brief Announcement: Rapid Asynchronous Plurality Consensus. In Proc. of the ACM Symposium on Principles of Distributed Computing, PODC ’17, pages 363–365, New York, NY, USA, 2017. ACM.
  • [15] Mohsen Ghaffari and Johannes Lengler. Tight analysis for the 3-majority consensus dynamics. CoRR, abs/1705.05583, 2017.
  • [16] Mohsen Ghaffari and Merav Parter. A polylogarithmic gossip algorithm for plurality consensus. In Proc. of the 2016 ACM Symposium on Principles of Distributed Computing, PODC 2016, pages 117–126, 2016.
  • [17] Leszek Gasieniec, David Hamilton, Russell Martin, Paul G. Spirakis, and Grzegorz Stachowiak. Deterministic Population Protocols for Exact Majority and Plurality. In LIPIcs-Leibniz International Proc. in Informatics, volume 70, 2016.
  • [18] Robert Gmyr, Kristian Hinnenthal, Irina Kostitsyna, Fabian Kuhn, Dorian Rudolph, and Christian Scheideler. Shape Recognition by a Finite Automaton Robot. In 43rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2018), volume 117 of Leibniz International Proc. in Informatics (LIPIcs), pages 52:1–52:15, Dagstuhl, Germany, 2018.
  • [19] Markus Holzer and Martin Kutrib. Descriptional and computational complexity of finite automata—A survey. Information and Computation, 209(3):456–470, March 2011.
  • [20] Q. Ma, A. Johansson, A. Tero, T. Nakagaki, and D. J. T. Sumpter. Current-reinforced random walks for constructing transport networks. Journal of The Royal Society Interface, 10(80):20120864–20120864, December 2012.
  • [21] George B. Mertzios, Sotiris E. Nikoletseas, Christoforos L. Raptopoulos, and Paul G. Spirakis. Determining majority in networks with local interactions and very small local memory. Distributed Computing, 30(1):1–16, 2017.
  • [22] Valentina Pitoni. Memory management with explicit time in resource-bounded agents. In Proc. of the 32nd AAAI Conference on Artificial Intelligence, New Orleans, Louisiana, USA, February 2-7, 2018, 2018.
  • [23] Bijan Ranjbar-Sahraei, Haitham Bou Ammar, Daan Bloembergen, Karl Tuyls, and Gerhard Weiss. Theory of Cooperation in Complex Social Networks. In Proc. of the 28th AAAI Conference on Artificial Intelligence, AAAI’14, pages 1471–1477, Québec City, Québec, Canada, 2014. AAAI Press.
  • [24] Saber Salehkaleybar, Arsalan Sharif-Nassab, and S. Jamaloddin Golestani. Distributed voting/ranking with optimal number of states per node. IEEE Trans. Signal and Information Processing over Networks, 1(4):259–267, 2015.
  • [25] Nicola Santoro. Design and Analysis of Distributed Algorithms. Wiley-Interscience, 1st edition, April 2006.
  • [26] David J.T. Sumpter, Jens Krause, Richard James, Iain D. Couzin, and Ashley J.W. Ward. Consensus decision making by fish. Current Biology, 18(22):1773 – 1777, 2008.
  • [27] Oleg N. Temkin, Andrew V. Zeigarnik, and D. G. Bonchev. Chemical Reaction Networks: A Graph-Theoretical Approach. CRC Press, Boca Raton, FL, 1st edition, August 1996.