1 Introduction
The model of population protocols originates from the seminal paper of Angluin et al. [4]. This model provides tools for the formal analysis of pairwise interactions between simple indistinguishable entities referred to as agents. The agents are equipped with limited storage, communication and computation capabilities. When two agents engage in a direct interaction, their memory content is assessed and their states are modified according to a predefined transition function that forms an integral part of the population protocol. In the probabilistic variant of population protocols adopted in this paper, in each step the random scheduler selects a pair of agents uniformly at random. In this variant, in addition to state utilisation, one is also interested in the running time of the proposed solutions. In more recent work on population protocols the focus is on parallel time, defined as the total number of pairwise interactions leading to the solution divided by the size (in our case the number n of agents) of the population. The parallel time also provides a good estimate of the number of interactions each agent was involved in throughout the computation process.
Unless stated otherwise we assume that any protocol starts in the initial configuration where all agents are in the same predefined initial state. A population protocol terminates with success if the whole population eventually stabilises, i.e., it arrives at and stays indefinitely in the final configuration of states reflecting the desired property of the solution.
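The uniform random scheduler described above is straightforward to simulate. The sketch below is our own illustration (the function names and rule encoding are not taken from the paper): it applies uniformly random ordered pairwise interactions and, as an example, runs the one-way epidemic rule used repeatedly later in the paper.

```python
import random

def run_scheduler(states, delta, steps, rng):
    """Apply `steps` uniformly random pairwise interactions to `states`.
    `delta` maps an ordered pair (initiator, responder) of states to the
    pair after the interaction; pairs not in `delta` are null interactions."""
    n = len(states)
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)   # the scheduler draws an ordered pair
        key = (states[i], states[j])
        if key in delta:
            states[i], states[j] = delta[key]
    return states

# One-way epidemic (I, S) -> (I, I): a single informed agent infects the
# whole population in roughly 2 ln n parallel time in expectation.
rng = random.Random(1)
n = 1000
states = ["I"] + ["S"] * (n - 1)
run_scheduler(states, {("I", "S"): ("I", "I")}, steps=60 * n, rng=rng)
```

Here 60n interactions correspond to parallel time 60, a generous margin over the roughly 2 ln n ≈ 14 expected for n = 1000.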
1.1 Constructors
While in the standard population protocol model the population of agents remains unstructured, in the model introduced in [18] and adopted in this paper any two agents may become connected by establishing a physical (edge) connection between them. The two connected agents may later choose to drop their connection when they meet again. In this way the agents can self-organise into desired temporary or more permanent structures. Such distributed and dynamically structured systems based on population protocols are called network constructors, or simply constructors [18].
Note that the expected number of interactions to fix (establish or modify) a particular connection is Θ(n²), as each pair of agents is selected uniformly at random and a specific pair is drawn with probability 2/(n(n-1)). This cost often dominates the time complexity of constructor protocols. There are, however, exceptions where the focus is not on one very specific connection but on arbitrary connections randomly selected from a large group of available ones. A good example is the construction of almost balanced trees from [11].
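The quadratic cost of fixing one specific connection follows directly from the uniform choice of pairs: a fixed pair is drawn with probability 2/(n(n-1)), so the waiting time is n(n-1)/2 interactions in expectation. A toy experiment (our own, not code from the paper) confirms this:

```python
import random

def wait_for_pair(n, a, b, rng):
    """Count scheduler steps until the specific (unordered) pair {a, b} is drawn."""
    steps = 0
    while True:
        steps += 1
        i, j = rng.sample(range(n), 2)
        if {i, j} == {a, b}:
            return steps

rng = random.Random(7)
n, trials = 30, 1000
mean = sum(wait_for_pair(n, 0, 1, rng) for _ in range(trials)) / trials
expected = n * (n - 1) / 2   # 1 / Pr[{0, 1} drawn] = 435 for n = 30
```

The empirical mean lands close to the analytic value 435, illustrating why any protocol step that must hit one concrete edge costs Θ(n) parallel time.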
The model When possible we will use capital letters to denote the states of the agents. In order to accommodate management of edge connections, we extend the transition function so that every rule is of the following type
The first two terms on both sides of the rule refer to the states of the interacting agents before and after the interaction. The third term, before and after the interaction, indicates the status of the connection between the two agents, i.e., whether the interacting agents are connected or not.
Note that the state of an agent can be more complex, e.g., represented as a tuple with several components, where only some components may change during an interaction. In such compound cases, but also when the states are denoted by longer terms or numbers, we will use a vector representation with brackets.
One of the central problems in network constructors is the formation of a stable line (or ring) comprising all agents. When no leader is initially assumed, the fastest previously known protocol for a stable line construction is due to [18]. Until now, no bounds were known for efficient ring construction. In this paper we propose the fastest possible protocols for line and ring construction, while preserving constant space utilisation in agents.
Our clock, line and ring constructions are always correct and they stabilise with high probability (whp), which we define as follows. Let c be a universal constant referring to the reliability of our protocols. We say that an event occurs with negligible probability if it occurs with probability at most n^{-c}, and an event occurs with high probability if it occurs with probability at least 1 - n^{-c}. This estimate is of an asymptotic nature, i.e., we assume n is large enough to validate the results. Similarly, we say that an algorithm succeeds with high probability if it succeeds with probability at least 1 - n^{-c}. When we refer to a probability of failure different from n^{-c}, we state the relevant probability bound explicitly.
Our protocols make heavy use of Chernoff bounds and the new tail bounds for sums of geometric random variables derived in [17]. We refer to this new bound as the Chernoff-Janson bound.

1.2 Related work
One of the main tools used in this paper refers to the central problem of leader election. In this problem the final configuration comprises a single agent in the leader state and all other agents in the follower state. The leader election problem has received greater attention in recent years in the context of population protocols. In particular, the results from [10, 14] laid the foundation for the proof that leader election cannot be solved in sublinear time by agents utilising a fixed number of states [13]. In further work [3], Alistarh and Gelashvili studied the relevant upper bound, where they proposed a new leader election protocol stabilising faster at the cost of a larger number of states per agent.
In very recent work, Alistarh et al. [1] consider a more general trade-off between the number of states used by agents and the time complexity of stabilisation. In particular, the authors provide a separation argument distinguishing between slowly stabilising population protocols which utilise fewer states and rapidly stabilising protocols with more states per agent. This result nicely coincides with another fundamental observation by Chatzigiannakis et al. [9], which states that population protocols utilising fewer states are limited to semilinear predicates, while the availability of more states admits computation of symmetric predicates. Further developments include also a protocol which elects the leader fast both whp and in expectation [8]. The number of states was later reduced by Alistarh et al. in [2] and by Berenbrink et al. in [7] through the application of two types of synthetic coins.
In more recent work, Gąsieniec and Stachowiak reduce the memory utilisation while preserving the time complexity whp [16]. We also know that the high probability guarantee can be traded for faster leader election in expectation, see [15]. This upper bound was further reduced to the optimal expected time by Berenbrink et al. in [6]. In fact, the main open problem is to establish whether one can elect a single leader in optimal time whp while preserving the optimal number of states.
The protocols and methods discussed in this paper are closely related to the concept of phase clocks. The term and the first analysis of a leader based phase clock were given by Angluin et al. in [5]. Further extensions, including a junta based clock and nested clocks counting longer times whp, can be found in [16]. In very recent work [12], Doty et al. propose and analyse constant resolution clocks used as the main engine in optimal majority computation protocols.
1.3 Our results
Our main contribution is a new constant space clock allowing (in the model with edge connections) to count parallel time whp. This clock is used to confirm the conclusion of the slow leader election protocol. The selected leader is then used to construct a line and a ring of agents in the optimal parallel time whp. We also propose and analyse a second clock, based on the selected leader, which operates within the same time. Please note that this clock is universal, i.e., it can be used in population protocols with and without edges. Thanks to periodic application of the second clock, one can monitor efficient construction of lines (and rings, discussed in the Appendix due to lack of space). All our new protocols use the optimal constant space and operate whp. With the exception of replication, our protocols are also optimal with respect to the time complexity. They are also always correct, meaning that with negligible probability they may operate longer, however, they never terminate with a wrong answer.
2 Two clocks and leader election
In order to compute a single leader in the population we execute two protocols simultaneously. We run the slow (naive) leader election protocol, which identifies a unique leader whp, simultaneously with the new matching based clock (discussed below) which counts the required time whp. When this clock concludes, the remaining leader progresses to further stages in which we compute the line and the ring, and discuss the line replication protocol.
The transition rules of the considered protocols follow.
Slow leader election
where the first state represents a (remaining) leader candidate, and the second stands for a follower or a free agent. It is known that such a naive leader election protocol operates within the required time whp.
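The behaviour of the naive protocol is easy to check empirically. In the sketch below (our own illustration, with our own names) two interacting leaders demote one of them; summing the geometric waiting times gives an expected interaction count of sum_{k=2..n} n(n-1)/(k(k-1)) = n(n-1)(1 - 1/n), i.e. roughly n² interactions or linear parallel time.

```python
import random

def slow_leader_election(n, rng):
    """All agents start as leaders; the rule (L, L) -> (L, F) demotes one of
    two interacting leaders. Returns interactions until one leader remains."""
    states = ["L"] * n
    leaders, steps = n, 0
    while leaders > 1:
        steps += 1
        i, j = rng.sample(range(n), 2)
        if states[i] == "L" and states[j] == "L":
            states[j] = "F"
            leaders -= 1
    return steps

rng = random.Random(3)
n = 200
mean_steps = sum(slow_leader_election(n, rng) for _ in range(20)) / 20
# Analytically: sum_{k=2..n} n(n-1)/(k(k-1)) = n(n-1)(1 - 1/n) ~ n^2
```

The dominant cost is the final meeting of the last two leaders, a geometric waiting time with mean about n²/2 interactions.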
Matching based clock The proposed matching based clock assumes the constructor model in which the transition function recognises whether two agents are connected by an edge or not. The agents begin in the predefined initial state. When two agents in this state interact, they get connected and enter the counting stage, in which their counters are initially set to the minimum value and eventually reach the maximum value. Note that these counters can go either up or down, depending on which rule of the transition function is used during an interaction. Note also that the number of agents present in the counting stage is always even. The counting stage protocol guarantees that the counters of all agents which enter this stage reach the maximum value in the claimed time, see Theorem 1. In the next interaction between two connected agents holding the maximum value, the connection is removed and the states are updated to indicate the end of the counting stage.
The rules of the transition function used in the counting stage are as follows:
Initialisation
Timid counting

For all connected and

For all disconnected
Maximum level epidemic
Conclude and disconnect
Leader based clock We allocate separate constant memory to host the states of the leader based clock. This allows us to run the actions of the two clocks simultaneously and independently. The followers in the leader based clock start with the counter set to 0, while the leader has its own dedicated state. Note that an agent's leader based clock state is initiated as soon as the agent concludes its participation in the matching based clock. The timid counting rules now refer to the interactions with the leader
Timid counting

Leader interactions, where

Nonleader interactions, where
One can show that the two clocks have the same asymptotic time performance, see Section 3 for the relevant details. Note that the leader based clock can be used independently of the presence of edges in the population. In particular, this clock can be used to count the time required to remove all edges used in the matching based clock, as well as the time needed to form the line and the ring of all agents.
Periodic leader based clock One can extend the functionality of the leader based clock such that it paces multiple rounds, each long enough to accommodate some more complex process, e.g., line replication.
The extension uses three consecutive stages 0, 1 and 2, where each stage corresponds to one full execution of the leader based clock, and all three stages form a single round. Thus any round starts in stage 0, on whose conclusion the maximum level epidemic signal is accompanied by the message that stage 0 is concluded. Note that such a message is distributed rapidly by one-way epidemic whp, and when this happens all agents proceed to stage 1. Note that the signal to start stage 1 remains in the system throughout the whole stage, but it will be wiped out by the signal announcing the beginning of stage 2. Analogously, on the conclusion of stage 2, the signal announcing stage 0 (and the new round) wipes out the previous message. This way, after at most the delay caused by the epidemic, all agents run the clock in the same stage whp.
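The wipe-out behaviour of the three stages reduces to a one-line update rule. The encoding below is our own sketch, not notation from the paper: an agent adopts an incoming stage only if it is the successor, modulo 3, of its own.

```python
def adopt_stage(own, other):
    """Stage update for the periodic leader based clock: only a signal for
    the next stage (mod 3) overwrites the current one, so a stale signal
    from an earlier stage cannot pull an agent backwards."""
    return other if other == (own + 1) % 3 else own

# A stage-1 signal overwrites stage 0, a stray stage-0 signal is ignored
# by stage 1, and stage 0 wipes out stage 2 to begin the next round.
```

With three stages, the successor test is unambiguous: a leftover signal is always exactly one stage behind some agents, never one ahead, so no agent ever moves backwards.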
3 The analysis
In this section we analyse the time complexity of the two clocks from Section 2. We will prove the following theorem towards the end of this section, but first we need a collection of lemmas.
Theorem 1.
In either of the clocks the time in which state appears for the first time is whp.
We first focus on the matching based leaderless clock and later extend the reasoning to the leader based clock.
Let us define the edge collector problem, in which one is asked to collect all edges of a given matching of a given cardinality. The process concludes when the random scheduler generates all edges of the matching via pairwise interactions of agents in the population.
Lemma 1.
For any cardinality the time complexity of the edge collector problem is . In addition, the time needed to collect the last edges (of the matching) is at least whp.
Proof.
The probability of collecting an edge when k edges are still to be collected is 2k/(n(n-1)), and in turn the expected number of interactions needed to collect the next edge is n(n-1)/(2k). Thus the expected time to collect all remaining edges is the corresponding harmonic sum.
By the Chernoff-Janson bound this time also holds whp. Using the same bound for the last edges results in a time exceeding the claimed lower bound whp. ∎
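The harmonic sum in the proof can be checked numerically. In the sketch below (our own encoding: matching edge e joins agents 2e and 2e+1) the empirical number of interactions agrees with sum_{k=1..m} n(n-1)/(2k).

```python
import random

def expected_collector_interactions(n, m):
    """Expected interactions until every edge of a fixed matching of
    cardinality m has been generated by the scheduler: when k edges remain,
    a useful interaction occurs with probability 2k / (n(n-1))."""
    return sum(n * (n - 1) / (2 * k) for k in range(1, m + 1))

def simulate_collector(n, m, rng):
    """Agents 2e and 2e+1 form matching edge e; count scheduler steps until
    every edge of the matching has been drawn at least once."""
    remaining, steps = set(range(m)), 0
    while remaining:
        steps += 1
        i, j = rng.sample(range(n), 2)
        if i // 2 == j // 2:                 # the pair is a matching edge
            remaining.discard(i // 2)
    return steps

rng = random.Random(5)
n, m = 40, 20
empirical = sum(simulate_collector(n, m, rng) for _ in range(50)) / 50
```

For n = 40 and m = 20 the analytic expectation is (n(n-1)/2) H_20 ≈ 2806 interactions; note the last few edges dominate the sum, matching the lower bound part of the lemma.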
Lemma 2.
All matching edges are formed within the stated time, both in expectation and whp.
Proof.
The probability of an interaction forming the (i+1)-st edge, when i edges are already present, is (n-2i)(n-2i-1)/(n(n-1)), as both interacting agents must still be unpaired. So the number of interactions forming the (i+1)-st edge has a geometric distribution with the expected value n(n-1)/((n-2i)(n-2i-1)). Thus the expected time of forming all edges is the corresponding sum over i. By the Chernoff-Janson bound this time also holds whp. ∎
The following lemma refers to early interactions of the matching based clock.
Lemma 3.
After time at least agents are paired in edges whp.
Proof.
Assume that so far we have formed i edges. The probability that in an interaction the (i+1)-st edge is formed is (n-2i)(n-2i-1)/(n(n-1)), so the expected number of interactions forming the (i+1)-st edge is n(n-1)/((n-2i)(n-2i-1)). Thus the expected number of interactions needed to form the first batch of edges satisfies the corresponding partial sum.
By the Chernoff-Janson bound (for large enough n) this process requires at most the claimed number of interactions whp, which is equivalent to the claimed parallel time. ∎
There exists a positive constant for which the following lemma holds.
Lemma 4.
In a time window of size where any edge in the matching is used in at most interactions whp.
Proof.
By the union bound, the probability that an edge is subject to at least the stated number of interactions within the time window does not exceed the above expression, and this value is negligible for n big enough. ∎
Lemma 5.
In time window of size where , there are at most edge interactions whp.
Proof.
The probability that a given interaction is an edge interaction is proportional to the number of edges present. Thus in the time window of the given size the expected number of edge interactions follows accordingly. By the Chernoff bound the number of edge interactions is at most the stated value whp. ∎
Depending on the context, and for clarity of presentation, in what follows we will use the notions of a counter and a level interchangeably.
Lemma 6.
Assume that integer . Since time and for as long as at least one level is present in the clock, there is a subpopulation of at least agents residing on levels whp. Also during this time no agent reaches level whp.
Proof.
As we proved in Lemma 3, during the initial time at least agents enter the clock with state whp. Some of these agents could also relocate to the higher levels. By Lemma 5 applied to the initial time period there are at most of the latter whp. Thus in the time interval level is the host of at least agents constantly residing at this level whp. Also by Lemma 4 no agent gets to level whp.
The proof is done by induction on . Assume before time there are at least agents on levels . We will prove that the thesis of the lemma also holds before time whp.
We notice first that during period all agents which entered the clock are at least once on level whp. And indeed during this period an agent avoids interactions with agents on levels with probability at most
Thus, in this period, any agent which entered the clock goes to level at most whp. And Lemma 4 guarantees that no agent reaches level during period .
In order to prove the first thesis of the lemma we consider two cases.
In the first case in time there are at least agents on levels not exceeding . By Lemma 5 in this time period at most such agents can increase their level whp, and in turn, in time there are at least agents on levels .
In the second case in time the number of agents on levels at most is between and .
Let be the set of agents belonging to the levels above in time . If in time the number of agents on levels smaller than is bigger than , then by Lemma 5 the probability that in time window this number drops below is negligible. Consider any set with agents residing at levels smaller than and estimate how many agents from set interact with them. For as long as agents from do not interact with , the probability of interaction between an unused (not in contact with agents) agent in and some agent in is at least . Any such interaction increases the number of agents on levels not exceeding . Consider a sequence of zeroes and ones in which position is one (1) if and only if either

interaction is between an unused agent in with some agent in if there are more than unused agents in

if this number is smaller than value 1 is drawn with a fixed probability .
By the Chernoff bound the probability that this sequence has fewer than the required number of ones is negligible, and this sequence has fewer ones only when the number of agents moved to levels not exceeding is smaller than required. Also by Lemma 5 during period at most other agents may increase their level beyond whp. So in this subcase the number of agents on levels not exceeding increases in period by at least
If in time the number of agents on levels below is smaller than , then the probability of an interaction between such an agent and an agent in is at most . Any such interaction increases the number of agents on levels not exceeding . By the Chernoff bound the probability that this number of interactions exceeds in is negligible. Thus in this subcase the probability that the number of agents on levels at most exceeds is negligible.
We need the following two claims.
Claim 1: During period there are at most agents located at levels which increment their level whp.
Indeed, for as long as there are at most agents on levels not greater than , the probability that such an agent interacts as the initiator with a clock agent is at most . Such an interaction increments the level of this clock agent with probability at most . We prove that the probability of at least this many increments is negligible. Consider a sequence of zeroes and ones in which position is one if and only if either

interaction increments initiator’s level and there are at most agents on levels not greater than

if this number is greater than value 1 is drawn with a fixed probability .
By the Chernoff bound this sequence has fewer than the stated number of ones (1s) whp. On the other hand we have at most agents on levels at most whp. Thus whp at most agents on levels not exceeding can increment their levels while acting as initiators. Analogously we can prove that whp at most agents on levels not exceeding can increment their levels while acting as responders. So altogether at most agents on levels increment their levels during the period whp.
Claim 2: During period there are at least interactions between agents on level and those residing on levels higher than whp.
For as long as there are at most agents on levels at most , at least agents are on levels higher than . The probability of an interaction of such agents with an agent on level is at least . Any such interaction increases the number of agents on levels not exceeding . Consider a sequence of zeroes and ones in which position is one (1) if and only if either

there are at most agents on levels not greater than and interaction increases the number of such agents

the number of agents on levels not exceeding is greater than and value 1 is drawn with a fixed probability .
By Chernoff bound this sequence has more than ones (1s) whp. On the other hand we have at most agents on levels at most whp. Thus whp at least agents on levels exceeding can reduce their levels to at most during period while acting as initiators.
Because of both Claims 1 and 2 after time there are at least more agents with state than in time . This proves that in time there are at least agents with state . ∎
Lemma 7.
The time in which the first agent achieves level is larger than whp.
Proof.
Consider the first time at which there are no agents available at the lower levels. By Lemma 6, during the preceding period there are at least the stated number of agents on the given level or lower, and a number of matching edges are present at this time. Thus, from this time on, the remaining agents have to increment their levels one by one. This is done by collecting (interacting via) the edges adjacent to them. By Lemma 1 this takes at least the stated time. This process has to be repeated over consecutive levels, during which no agent reaches the final state whp. ∎
Lemma 8.
The first agent moves to level in time whp.
Proof.
The total time to initiate all edges is bounded whp by Lemma 2. If the first agent achieves the top level earlier, the lemma remains true. If this is not the case, the time is determined by the collection of all edges, which needs to be repeated once per level, resulting in the claimed total time. ∎
Now we are ready to prove the thesis of Theorem 1. The thesis for the matching based clock follows directly from Lemmas 7 and 8. The thesis for the leader based clock can be proved by a sequence of lemmas almost identical to Lemmas 6, 7 and 8. In the analogue of Lemma 6 we can consider followers instead of edges. This is because Lemma 2 ensures that the time counted by the matching based clock is long enough to form all edges whp. Note also that the agents are initiated at the starting level of the leader based clock quickly whp by the epidemic resulting in the dismantling of the matching based clock, and in turn we can use this shorter initial time in the analogue of Lemma 6.
4 Fast formation of lines
Line formation We define and analyse a new optimal line formation protocol which operates in the optimal time whp, while utilising a constant number of extra states (not mixed with those of other protocols, including the clocks). The protocol is preceded by leader election, confirmed by the matching based clock. When this happens, the periodic leader based clock starts running together with the following line formation protocol, based on the two main rules defined below.
Form head and tail
This rule creates the initial head and the tail of the newly formed line. Note that since the line formation process uses separate memory, the leader of the leader based clock remains in the leadership state, i.e., the head state is used solely in the line formation protocol.
Extend the line
This rule extends the current line by the addition of an extra agent at the head end of the line.
Theorem 2.
The line formation protocol operates in time whp.
Proof.
The probability of an interaction adding the next agent to the line, when i agents are already on the line, is 2(n-i)/(n(n-1)), as the head must meet one of the n-i remaining free agents. So the number of interactions needed to add the next agent has a geometric distribution with the expected value n(n-1)/(2(n-i)). Thus the expected time of forming the line is the corresponding sum.
By the Chernoff-Janson bound this time also holds whp. ∎
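The geometric-sum argument of the proof can be replayed numerically. In this toy sketch (our own encoding) agent 0 plays the head and recruits the free agents one by one, and the empirical count matches the analytic sum over the geometric expectations.

```python
import random

def line_formation_interactions(n, rng):
    """Count interactions until the head (agent 0) has recruited every free
    agent; agents 0..on_line-1 are on the line, the rest are still free."""
    on_line, steps = 2, 0          # the first rule creates a head and a tail
    while on_line < n:
        steps += 1
        i, j = rng.sample(range(n), 2)
        if 0 in (i, j) and max(i, j) >= on_line:   # head meets a free agent
            on_line += 1
    return steps

rng = random.Random(11)
n = 50
empirical = sum(line_formation_interactions(n, rng) for _ in range(30)) / 30
# Analytic expectation: sum over j = 1..n-2 of n(n-1) / (2j)
analytic = sum(n * (n - 1) / (2 * j) for j in range(1, n - 1))
```

As in the edge collector problem, the sum is dominated by its last terms: recruiting the final free agent alone costs about n²/2 interactions in expectation.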
In order to make the line formation protocol always correct, we need some backup rules for the unlikely case of desynchronisation, when two or more leaders survive to the line formation stage. In such a case we need to continue leader elimination.
We also need a rule for the case when a leader meets an already formed head.
Finally, we have to dismantle excessive lines if two or more lines are formed. This is done using an extra state which dismantles the line edge by edge, starting from the head.
5 Fast formation of rings
Ring formation After the line formation is completed, we need one extra round, controlled by the periodic leader based clock, to close the ring.
Close the ring
Theorem 3.
The ring formation protocol operates in time whp.
Proof.
In the first round (clocked by the periodic leader based clock) the ring formation protocol forms a line, and in the second the head of the line connects with the tail, where both rounds operate within the claimed time. ∎
In order to make the ring formation protocol always correct, we need extra backup rules (on top of those for line formation) for the unlikely case of desynchronisation, when two or more leaders survive, and in turn two or more lines (possibly already closed into rings or cross-connected) are formed. Indeed, when two heads closed in two different rings meet, one of them accepts the dismantling role.
When it later meets the tail in its own ring, it opens the ring into a line to be dismantled.
When the head of a ring meets the head of a line, the ring head adopts the state meaning that the ring has to be disconnected, and the line head starts dismantling the line.
When the head of a ring meets a leader or a free agent, the ring head adopts the same state as before, and the other agent becomes or stays free.
Finally, when the head of a ring in this state meets the tail of the same ring, the ring is replaced by a line still open to accept new nodes until the end of the round clocked by the periodic leader based clock.
6 Line replication
In this section we consider a raw line replication mechanism allowing multiple replications of one or more lines of agents. When the replication process starts, a small number of agents belong to some relatively short lines (sequences) of agents. Each such line has a head (the first agent on the line) and a tail (the last agent on the line). We refer to all agents between the head and the tail of a line as regular agents. We also assume that each agent on a line carries one bit (0 or 1) of information, so in fact one can interpret each line as a binary sequence which carries one or more messages. In our presentation we focus on the replication of a single line, but our protocol can be applied simultaneously to many lines with the same or different content.
The agents utilise a constant number of states organised in triplets
where
 Role

is either the head of the line , the tail of the line or a regular internal agent .
 B

refers to the combined notation for the bit of information stored at a given position on the line. Please note that this location is computed modulo a small constant, counting from the head's position: the second agent has the next position, the third the following one, the fourth the one after, etc. Note that while this enumeration is limited, it allows an agent to distinguish between its two neighbours on the line, one on the way to the head and the other towards the tail. Finally, we also denote the sole value of the bit without its location.
 Buffer

is used to carry either a bit or control messages. When an agent is not currently involved in any action, its buffer is in the neutral state.
In the replicated (old) line, when the buffer is empty but the agent supports transfer towards the head, the buffer is in a dedicated state. Similarly, a distinct value marks a buffer occupied by a bit moving towards the head.
In the newly formed (new) line, when the buffer is empty but the agent supports transfer towards the tail, the buffer is in another dedicated state. Similarly, a distinct value marks a buffer occupied by a bit moving towards the tail. We also distinguish a further value which indicates that a new node is expected at the current tail of the line.
For example, when a regular (internal) agent at a given position is neutral, i.e., it is not currently involved in the replication process, its state is set accordingly.
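The triplet layout can be captured by a small record type. The sketch below is our own encoding: the labels 'H', 'T', 'R', the modulus 3 for positions, and the use of None for the neutral buffer are assumptions for illustration, not notation fixed by the paper.

```python
from dataclasses import dataclass
from typing import Optional, Union

@dataclass
class LineAgent:
    """One agent of a line in the (Role, B, Buffer) layout described above."""
    role: str                                 # 'H' head, 'T' tail, 'R' regular
    bit: int                                  # the stored information bit, 0 or 1
    pos: int                                  # position from the head, modulo 3
    buffer: Optional[Union[int, str]] = None  # None encodes the neutral buffer

# A neutral regular agent carrying bit 1 at (assumed) position 2 mod 3:
agent = LineAgent(role="R", bit=1, pos=2)
```

Keeping the position modulo a small constant is what keeps the state space constant while still letting an agent tell its two neighbours apart.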
(R1) Start of the line replication The process begins when the head of a line in the neutral state meets a free agent. This interaction is governed by the following rule:
When this rule is applied, in the old line a signal (pipeline all bits towards the head) is created, and in the new line a signal means await further instructions, i.e., either to add a new agent or to conclude the replication process.
In what follows, we first explain how the information (the sequence of bits) is transferred from the old line onto the new one. We later discuss how the new line is being built simultaneously.
(R2) Create a bit message When the pipelining signal arrives at an agent and the agent is neutral, the corresponding bit message is placed in the buffer of the latter.
A similar action is taken at the tail agent in the neutral state.
The use of these two rules allows the request to pipeline all bit messages towards the head to propagate along the line. The next two rules explain how the bit messages are moved towards the head.
(R3) Move a nontail message towards
Note that when the bit message is moved the request for further bit messages remains in the agent.
(R4) Move the tail message towards
Note that when the tail message is moved, the neutrality of the tail agent is restored. Eventually, thanks to the final transfer of the tail message, all buffers in the old line are reset to the neutral state. The role of the old line in the replication process concludes when this message is moved to the new line.
The following two rules govern transfer of messages between the old and the new line.
(R5) Transfer a nontail message to the head of the new line
Note that during such transfer the direction of the message is changed towards the tail of the new line.
(R6) Transfer the tail message to the head of the new line
As indicated earlier, due to the transfer of the tail message the neutrality of the new line is restored. In addition the two lines get disconnected and the old line is now ready to replicate again.
Finally, we show how the new line is constructed with the help of bit messages arriving from the old line. Recall that the buffer message at the current end of the new line indicates that the line can be still extended.
(R7) Move a nontail message towards (nonexistent) tail
After this move the buffer in the agent expects further messages.
(R8) Move the tail message towards (nonexistent) tail
In this case the neutrality of the agent is restored.
When there is no room for a message coming from the head of the new line, yet another agent has to be added. This is done in two stages. We first request the addition of a new agent using a dedicated extension message.
(R9) Request extension via message (nontail message )
and a specific rule requesting extension beyond the head of the new line.
And when this message is already present the new agent is added from the pool of free agents.
(R10) Extend the new line
Note that after this rule is applied, the newly added agent still awaits its bit message. The new bit message arrives with the help of the following two rules.
(R11) Arrival of a nontail bit message
As a nontail bit has arrived, the line will still be extended, which is denoted by the messages (expect more bit messages) in the agent and (expect further extension). The situation is different when the tail bit message arrives.
(R12) Arrival of the tail bit message
After this rule is applied, the neutrality at the tail end of the new line is restored. Note, however, that since the neutrality of the agents closer to the head of this line was restored earlier, the front of the new line can already be involved in the next line replication process. But since we use different messages for the transfers in the old and the new lines, the two simultaneously run processes will not interrupt one another.
We conclude with the following theorem.
Theorem 4.
The population protocol based on rules R1-R12 is a correct line replication protocol.
Proof.
We argue first about the correctness of the replication protocol for the old line, where

The actions of the tail node are governed by rules R2 and R4. The first rule creates the bit message and the second moves this message towards the head of the line, restoring the neutrality of the tail agent.

The actions of a regular node also require rule R3, which supports the movement of multiple nontail bit messages towards the head.

The actions of the head are more complex. The process begins with the application of rule R1, which encapsulates three different actions: connecting to a free agent, adding it as the head of the new line, and replicating the head's bit in the newly created head. The transfer of nontail bit messages to the new line is managed by rules R3 and R5, and of the tail message by rules R4 and R6, where after the application of the latter the old line concludes the replication process.
For the full cycles of rules used by the agents, see Figure 1.
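The old-line cycle above can be sketched as a synchronous toy simulation. This is a deliberate simplification: deterministic sweeps stand in for the random scheduler, all bits are treated uniformly (in the protocol the tail bit is a special message), and the function and data layout are ours, not the paper's.

```python
# Toy model of the old-line rules R2-R6: bits pipeline towards the head
# (index 0), which hands them over to the new line in line order.
def replicate_old_line(bits):
    """Return the bits delivered by the head, in arrival order."""
    n = len(bits)
    buf = [None] + bits[1:]       # R2-style emission: each agent holds its bit
    delivered = []                # bits handed over by the head, in line order
    while len(delivered) < n - 1:
        # R3/R4: every in-transit message moves one step towards the head
        for i in range(1, n):
            if buf[i] is not None and buf[i - 1] is None:
                buf[i - 1], buf[i] = buf[i], None
        # R5/R6: the head passes its current message on to the new line
        if buf[0] is not None:
            delivered.append(buf[0])
            buf[0] = None
    return delivered
```

Because messages cannot overtake one another in single-slot buffers, the head receives the bits in line order, which is what allows the new line to be built head-first.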
The new line formation requires a different organisation of states and transitions. Note that all agents introduced to the line must originate from the pool of free agents; see Figure 2.

The formation of the tail agent requires only two rules: R10 to add the new agent and R12 to equip this agent with its bit.

The situation with the regular nodes is more complex, as they have to accept their own bit (rule R11), add an additional agent (rules R9 and R10), and keep pipelining non-tail bit messages (rule R7) until the tail bit message arrives (rule R8), finally establishing the neutrality of the agent (rule R8, or rule R12 when the agent is the neighbour of the tail agent).

Rule R1 creates the head of the new line, rules R9 and R10 add a new agent, and rules R5 and R7 pipeline non-tail bit messages in the direction of the tail node until the tail bit message arrives (rule R6), after which the neutrality of the tail node is restored (rule R8).
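Putting both sides together, the net effect of one replication cycle is that the new line carries exactly the bit string of the old one. A minimal sketch of this net effect (the function name is ours):

```python
# Net effect of a full replication cycle (rules R1-R12): the new line
# ends up carrying the same bit string as the old one.
def replicate(bits):
    new_line = [bits[0]]          # R1: the new head copies the old head's bit
    for b in bits[1:]:            # R2-R8 deliver bits to the head in order;
        new_line.append(b)        # R9-R12 extend the new line and install them
    return new_line
```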
∎
With respect to the time complexity, it is relatively easy to observe that the proposed replication protocol requires at most parallel time as the transfer of each bit message requires parallel time in expectation. As all transfers are done (pipelined) simultaneously, we would like to claim that this process concludes faster in expectation. However, we can only prove this bound, for which we need the following theorem.
Theorem 5.
The aforementioned raw line replication protocol can be amended to operate in parallel time whp.
Proof.
We can amend the states of the agents such that two bit messages, the main one and the newly arrived one, can be stored in the buffer simultaneously. Note that this does not violate the assumed constant space limit. The bit-message pipelining process works in synchronised rounds, where each round lasts a prescribed parallel time paced by the periodic leader-based clock, see Section 2. The pace of this clock is set to accommodate a single addition of an agent to the new line and a constant number of interactions along each edge of the line.
A round begins when the first stage of the clock is initiated. During this stage a new agent is added towards the end of the new line, if such a need arises. During the second stage, the main bit message currently present in the buffer of an agent moves to the neighbouring agent (where it resides as the newly arrived bit message) on the line towards the destination (the head in the old line or the current tail in the new one). In the third and last stage, the newly arrived bit message (if any) is rebranded as the main bit message, making it ready for further transfer in the next round. Note that since all bit messages had enough time (by a coupon collector argument) to move one position closer to their destination, such rebranding is safe. Finally, after sufficiently many rounds (the tail bit message has to traverse the largest number of edges) all bit messages originating in the old line reach their destinations, which translates to the claimed parallel time bound with high probability. ∎
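The three-stage round just described can be sketched as follows, assuming two message slots per agent (the main and the newly arrived buffer). Stage 1 (agent addition) is elided by fixing the line length, the destination consuming its message is our modelling choice, and all names are ours.

```python
# Sketch of one synchronised round of the amended protocol with two
# message slots per agent; index 0 plays the role of the destination.
def run_round(main, arrived, delivered):
    n = len(main)
    # the destination consumes the message that reached it (our choice)
    if main[0] is not None:
        delivered.append(main[0])
        main[0] = None
    # stage 2: each main message moves into the neighbour's 'arrived' slot
    for i in range(1, n):
        if main[i] is not None and arrived[i - 1] is None:
            arrived[i - 1], main[i] = main[i], None
    # stage 3: newly arrived messages are rebranded as main messages
    for i in range(n):
        if arrived[i] is not None and main[i] is None:
            main[i], arrived[i] = arrived[i], None

main = [None, 'a', 'b', 'c']      # one message per non-destination agent
arrived = [None] * 4
delivered = []
for _ in range(5):                # every message advances one position per round
    run_round(main, arrived, delivered)
```

One can check that with the two-slot buffers every in-transit message moves exactly one position per round, which is the property the rebranding argument relies on.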
One can further amend this protocol to obtain a fully amended replication protocol with buffers of size 1. In this protocol an agent withholds its actions until two rounds after its neighbour (towards the destination) has released its bit message. When this happens, the message originating in this agent moves along the line (towards its destination) by one position in each round. Since all moving bit messages remain at distance at least two at the end of each round, no buffer conflicts are observed. And since the last bit message starts moving after a linear number of rounds and then reaches its destination within a linear number of rounds, the total time complexity of the fully amended replication protocol follows whp.
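The staggered release schedule of the fully amended protocol can be sketched as follows. The concrete schedule (agent i releases at round 2(i-1)) and the function name are our illustrative choices; the assertion checks the buffer-of-size-1 invariant, i.e. that no two messages ever occupy the same agent.

```python
# Sketch of the fully amended protocol's release schedule on a line of
# n agents; messages travel to agent 0 and never wait once released.
def fully_amended_rounds(n):
    """Count rounds until every message reaches agent 0.

    Agent i (1 <= i < n) releases its message at round 2*(i-1), i.e. two
    rounds after its neighbour towards the destination (our schedule).
    """
    start = {i: 2 * (i - 1) for i in range(1, n)}
    pos, delivered, t = {}, [], 0
    while len(delivered) < n - 1:
        for i, s in list(start.items()):
            if s == t:                # the message enters its home buffer
                pos[i] = i
                del start[i]
        for i in list(pos):
            pos[i] -= 1               # one hop towards agent 0 per round
            if pos[i] == 0:
                delivered.append(i)
                del pos[i]
        # buffers of size 1 suffice: no two messages share an agent
        assert len(set(pos.values())) == len(pos)
        t += 1
    return t
```

Under this schedule the simulation finishes in 3n - 5 rounds, linear in n, consistent with the round-count argument above.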
Theorem 6.
The fully amended line replication protocol operates whp in time .
We conclude with the following corollary.
Corollary 1.
The raw line replication protocol operates in time whp.
Proof.
We say that one configuration dominates another iff the dominating configuration can be obtained from the dominated one through a finite number of steps (rule applications) of the raw replication protocol.
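Using a relation symbol of our own choosing, and under the reading that the dominating configuration is the one further along in the computation, the definition can be written as:

```latex
% Domination (the symbol \succeq is ours): C dominates C' iff C can be
% reached from C' by finitely many applications of rules R1-R12.
C \succeq C'
\quad\iff\quad
\exists\, k \ge 0:\;
C' \xrightarrow{\;k \text{ rule applications}\;} C .
```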
We first recall that the fully amended replication protocol operates in synchronised rounds, see the proof of Theorem 6. Note also that the application of a single round preserves the domination relationship between arbitrary configurations, i.e., if one of two configurations entering a round dominates the other, the respective configurations at the conclusion of the round satisfy the same domination relation. We also observed earlier that during each round of the (fully) amended protocol each message moves one position closer to its destination. Thus, since the configurations of the raw replication protocol always dominate the corresponding configurations of the fully amended one, and the time complexity bound of Theorem 6 holds for the latter, we conclude that the same bound applies to the raw protocol. ∎
References
 [1] Time-space trade-offs in population protocols. In Proc. SODA 2017, pp. 2560–2579.
 [2] Space-optimal majority in population protocols. In Proc. SODA 2018, pp. 2221–2239.
 [3] Polylogarithmic-time leader election in population protocols. In Proc. ICALP 2015, pp. 479–491.
 [4] Computation in networks of passively mobile finite-state sensors. In Proc. PODC 2004, pp. 290–299.
 [5] (2008) Fast computation by population protocols with a leader. Distributed Comput. 21(3), pp. 183–199.
 [6] Optimal time and space leader election in population protocols. In Proc. STOC 2020, pp. 119–129.
 [7] Simple and efficient leader election. In Proc. SOSA 2018, OASIcs, Vol. 61, pp. 9:1–9:11.
 [8] Brief announcement: population protocols for leader election and exact majority with O(log n) states and O(log n) convergence time. In Proc. PODC 2017, pp. 451–453.
 [9] (2011) Passively mobile communicating machines that use restricted space. Theor. Comput. Sci. 412(46), pp. 6469–6483.
 [10] Speed faults in computation by chemical reaction networks. In Proc. DISC 2014, pp. 16–30.
 [11] (2021) On the distributed construction of stable networks in polylogarithmic parallel time. Information 12(6), pp. 254–266.
 [12] (2021) A time and space optimal stable population protocol solving exact majority. To appear at FOCS 2021. CoRR abs/2106.10201.
 [13] Stable leader election in population protocols requires linear time. In Proc. DISC 2015, pp. 602–616.
 [14] Timing in chemical reaction networks. In Proc. SODA 2014, pp. 772–784.
 [15] Almost logarithmic-time space optimal leader election in population protocols. In Proc. SPAA 2019, pp. 93–102.
 [16] (2021) Enhanced phase clocks, population protocols, and fast space optimal leader election. J. ACM 68(1), pp. 2:1–2:21.
 [17] (2018) Tail bounds for sums of geometric and exponential variables. Statistics and Probability Letters 135(1), pp. 1–6.
 [18] (2016) Simple and efficient local codes for distributed stable network construction. Distributed Computing 29(3), pp. 207–237.