# Robust Comparison in Population Protocols

There has recently been a surge of interest in the computational and complexity properties of the population model, which assumes n anonymous, computationally bounded nodes, interacting at random, and attempting to jointly compute global predicates. In particular, a significant amount of work has gone towards investigating majority and consensus dynamics in this model: assuming that each node is initially in one of two states X or Y, determine which state had higher initial count. In this paper, we consider a natural generalization of majority/consensus, which we call comparison. We are given two baseline states, X_0 and Y_0, present in any initial configuration in fixed, possibly small counts. Importantly, one of these states has higher count than the other: we will assume |X_0| > C |Y_0| for some constant C. The challenge is to design a protocol which can quickly and reliably decide which of the baseline states X_0 and Y_0 has higher initial count. We propose a simple algorithm solving comparison: the baseline algorithm uses O(log n) states per node, and converges in O(log n) (parallel) time, with high probability, to a state where the whole population votes on opinions X or Y at rates proportional to the initial |X_0| vs. |Y_0| concentrations. We then describe how such output can be used to solve comparison. The algorithm is self-stabilizing, in the sense that it converges to the correct decision even if the relative counts of baseline states X_0 and Y_0 change dynamically during the execution, and leak-robust, in the sense that it can withstand spurious faulty reactions. Our analysis relies on a new martingale concentration result which relates the evolution of a population protocol to its expected (steady-state) analysis; this result should be broadly applicable in the context of population protocols and opinion dynamics.



## 1 Introduction

Population protocols are a model of distributed computation in which a set of simple agents, or nodes, modeled as identical state machines, cooperate to jointly compute predicates over the system’s initial state. A distinguishing feature is that agents have no control over their interaction pattern: they interact in pairs, chosen by an external scheduler. A common assumption, which we will also adopt in this paper, is that the interaction schedule is uniformly random across all possible node pairs.

Since its introduction by Angluin, Aspnes, Diamadi, Fischer, and Peralta [AAD06], this model has become a popular way of modeling distributed computation in various settings, from animal populations, to wireless networks, and chemical reaction networks. Significant attention has been given to the computational power of population protocols [AAER07, CMN11], as well as to determining the complexity thresholds for fundamental problems, such as leader election and majority [ER18].

One classic example of the algorithmic power of population protocols is the three-state approximate majority algorithm. Discovered independently by [AAE08, PVV09], this simple dynamics has been implemented in synthetic DNA [CDS13], and has been linked to the cell cycle, a fundamental biological process [CCN12]. In brief, the majority problem assumes that all agents are initially in one of the states A or B, and the task is to converge on a consensus decision as to which one of the two had higher initial count. This is done via the following simple set of interactions:

 A + B → C + C,   A + C → A + A,   and   B + C → B + B.

Intuitively, if the two “strong” opinions A and B interact, then both agents move to the “undecided” state C, while either of the strong opinions A or B turns an undecided agent to its side. Angluin et al. [AAE08] showed that this simple algorithm has surprisingly strong properties: it converges to the correct majority decision with high probability (w.h.p.; throughout this paper, high probability means probability at least 1 − 1/n^c, for a constant c ≥ 1), as long as the initial difference between the two states is ω(√(n log n)), in time that is poly-logarithmic in n, and it can even withstand a bounded number of Byzantine failures.
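To make the dynamics concrete, here is a minimal Python simulation of the three-state protocol under the uniform random scheduler. The state names A, B, C, the seed, and all parameters are illustrative; this is a sketch, not the paper's artifact.

```python
import random

def approximate_majority(a, b, c=0, seed=0):
    """Simulate the three-state dynamics A+B -> C+C, A+C -> A+A,
    B+C -> B+B by tracking state counts under the uniform random
    scheduler.  Returns (winner, parallel_time): the consensus state
    and the number of interactions divided by the population size."""
    rng = random.Random(seed)
    n = a + b + c
    counts = {'A': a, 'B': b, 'C': c}
    interactions = 0
    while counts['A'] != n and counts['B'] != n:
        # sample an ordered pair of distinct agents, state-wise
        states = list(counts)
        x = rng.choices(states, weights=[counts[s] for s in states])[0]
        y = rng.choices(states, weights=[counts[s] - (s == x) for s in states])[0]
        if {x, y} == {'A', 'B'}:          # two strong opinions annihilate
            counts['A'] -= 1; counts['B'] -= 1; counts['C'] += 2
        elif x != y:                      # a strong opinion recruits an undecided C
            w = x if x != 'C' else y
            counts[w] += 1; counts['C'] -= 1
        interactions += 1
    winner = 'A' if counts['A'] == n else 'B'
    return winner, interactions / n
```

With a clear initial majority (here 700 vs. 300), runs converge to that majority essentially always, in a small number of parallel time units.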

Reference [ADK17] considered a related robust detection problem, in which nodes aim to determine if a distinct detectable state is present or absent from the population. This state may appear or disappear during the execution, so the algorithm should be self-stabilizing, in the sense that nodes should always converge to the correct answer given the current configuration. Moreover, the authors require that the algorithm be robust to leaks [TWS15], which are roughly defined as low-probability faulty reactions in which any state implemented by the algorithm may appear spuriously. (Leaks are meant to model the impact of the laws of chemistry on the algorithm execution, which might for instance reverse reactions with some small probability. We detail the definition of leaks and their impact on the execution in the model section.) The robust detection protocol proposed in [ADK17] satisfies both these requirements. Reference [DK18] considers the same problem, with two main results. First, they show lower bounds on the number of states per node required by any self-stabilizing protocol for detection, as a function of the desired convergence time. Second, they show that detection can in fact be solved with very few states, by a protocol which leverages oscillatory dynamics as a building block, but does not stabilize, as some states may keep oscillating between very small and large counts.

#### The Robust Comparison Problem.

In this paper, we consider a natural joint generalization of the majority and robust detection tasks, which we call robust comparison. In this task, we are given two baseline states, X_0 and Y_0, present in any initial configuration in possibly small (logarithmic) counts. Importantly, one of these states has higher count than the other: we assume that |X_0| > C |Y_0| for some constant C. The goal is to design a protocol which can quickly decide which of these baseline states has higher count. The protocol should be self-stabilizing, in the sense that it converges to the correct decision even if the relative counts of the baseline states X_0 and Y_0 change during the execution, and robust, in the sense that it should be resistant to leaks.

To our knowledge, the comparison problem has not been considered at this level of generality before. The classic majority problem is a static, one-shot special instance of comparison, in which the two initial states together cover the entire population, and we wish to determine which one has higher initial count. At the same time, robust detection can be seen as a special case of robust comparison in which one of the baseline states has zero count; robust comparison itself can be seen as a dynamic version of consensus/opinion dynamics, in which the correct output value can change dynamically during the execution.

#### Contribution.

This paper proposes a simple and general algorithm solving robust comparison in population protocols, providing strong concentration bounds on its convergence using a new analysis technique.

#### The Algorithm.

Our algorithm, called PopComp, uses O(log n) states per node, stabilizes to the correct answer in O(log n) parallel time from any initial configuration, and is robust to leaks. It works as follows. Assume some agents are in baseline states X_0 and Y_0, whose counts we wish to compare. The interaction rules are such that the counts of those two states never change, since their relationship is what we need to determine. Without loss of generality, we assume |X_0| > |Y_0| in the following. The algorithm implements sequences of “detector” states X_1, …, X_s and Y_1, …, Y_s, where s is a parameter, as well as a neutral state N.

The intuitive role of the indexed strong X_i and Y_i states is to measure how long the interaction chain is between the current agent and an X_0 or Y_0 state at any given point. For example, any node which interacts directly with X_0 will move to state X_1, and symmetrically, any node which interacts directly with Y_0 will move to state Y_1. The key interaction is between a node in state X_i or Y_i and a node of higher index j > i. In this case, the former agent is part of a shorter interaction chain, moving to state X_{i+1} (resp. Y_{i+1}), while the latter agent re-sets the length of its chain to one more than the former's, moving to X_{i+1} (resp. Y_{i+1}) as well. We obtain reactions of the type:

 ∀ s > j > i:   X_i + X_j → X_{i+1} + X_{i+1},   Y_i + Y_j → Y_{i+1} + Y_{i+1},   (1)
 X_i + Y_j → X_{i+1} + X_{i+1},   and   Y_i + X_j → Y_{i+1} + Y_{i+1}.   (2)

Notice that O(log n) is a natural upper bound for the length of an interaction chain, since every agent is O(log n) hops away from X_0 or Y_0, with high probability. One key observation is that we can reliably use the relative sizes of these interaction chains to distinguish between the baseline states. We leverage this observation as follows. We cap the maximum level at s. Nodes continue to increase their level or re-set it to a previous one, according to Equations (1) and (2), as long as its value is at most s. As soon as the length of the chain would increase past s, agents move to the neutral state N, at which point they stop influencing other agents in terms of their choice. A neutral agent can become non-neutral only if it interacts with an X_i or Y_i agent with i < s, in which case it re-sets the length of its chain to i + 1.

#### Analysis.

As is often the case in population protocols, this algorithm is intuitive; however, its recursive structure requires a very careful analysis. A natural first approach would be a “steady-state” analysis, in which one writes out the expected counts of agents of every type and the relationships between them, assuming stable counts. One then solves this system of constraints in order to determine the expected counts at “equilibrium.” However, at best, this approach yields expected bounds on the state counts, and cannot characterize the concentration of state counts at a given point in the execution. In particular, in the case of our algorithm, since consecutive level counts are highly correlated, characterizing their concentration is challenging—if not impossible—using known techniques. Linking steady-state behavior with algorithm dynamics is known to be generally difficult when analyzing population protocols, and for some algorithms, e.g. [DV12, ADK17], only a steady-state analysis is provided.

We introduce a new approach to circumvent this limitation, based on two technical ideas. The first is that, even though the state counts at various levels are correlated, their evolution roughly follows a super-martingale-type behavior, with “noise” due to the natural variability of state counts at previous levels. (See Section 5.1 for a detailed walk-through.) A tempting approach would be to apply a Bernstein-type martingale concentration inequality, e.g. [BLM13], to characterize the concentration of state counts around their expectation. However, known results do not apply to our setting, for instance due to the presence of the noise term.

We overcome this problem by proving a new customized concentration bound, which should be of independent interest. In particular, this result allows us to bound the influence of variability at previous levels, and prove concentration for each of the level counts. In brief, we obtain that, if the base level counts |X_0| and |Y_0| are separated by a large enough multiplicative constant C, then the counts at the last level will be separated by another multiplicative constant, w.h.p. Moreover, this result allows us to show fast convergence: level counts will recover to concentrate close to their expected mean in poly-logarithmic parallel time. In turn, this result opens up several extensions.

#### Extensions.

The first extension boosts the probability that an agent identifies the correct output state, from the constant one postulated by the previous result to near-certainty. This is achieved via a general sampling/approximate-counting mechanism, which has each agent use additional states to sample the population and determine the majority state with higher confidence.

As a second interesting extension, we exhibit a non-trivial space-time trade-off for variants of this protocol. For instance, we exhibit two protocol variants which employ fewer states, at the cost of correspondingly higher convergence time. These protocols show that it is possible to perform comparison in sub-linear time using less than logarithmic states per agent. The analysis of these variants also involves the application of our concentration theorem.

Finally, we show that our algorithm is leak-robust, in the sense that it can withstand spurious reactions which create or delete arbitrary states, which are common in real-world implementations [TWS15]. Again, this property follows by simply applying the concentration theorem with modified parameters to account for faulty reactions.

## 2 Related Work

Our work is part of a wider research effort studying consensus/majority dynamics in population protocols. For algorithms with exact/deterministic correctness guarantees, tight or almost-tight space-time trade-offs are now known, thanks to recent progress [DV12, MNRS17, AAE17, AAG18, BEF18a, BEF18b]. In brief, there is evidence that the logarithmic space and time complexity thresholds are tight for exact majority [AAG18]. At the same time, constant-state solutions with fast convergence (but no stabilization) are known for both approximate and exact majority [KU18].

By contrast, the complexity of approximate solutions, which may converge to the wrong answer with some probability, and that of dynamic ones, where the input may change during the execution, is not well understood. For the former approach, this may be in part because the classic three-state approximate majority protocol [AAE08] unifies several desirable properties: fast convergence, robustness to Byzantine faults, and an optimal state space size.

In this paper, we generalize the approximate majority problem to the case where the two initial states have fixed, small counts, and the goal of the other agents is to determine which baseline state/signal is more populous/stronger. The references technically closest to ours are the recent works on detection dynamics [ADK17, DK18], which we have covered in the previous section. In relation to this work, we note that the algorithm we analyze is a generalization of the detection dynamics considered by [ADK17]: in particular, if we merged the X and Y states, we would obtain an algorithm similar to the basic version of PopComp.

We make several significant contributions relative to the latter reference. First, we consider a more general problem, which is closer to consensus dynamics than to detection/rumor-spreading. Second, we provide a much more accurate and technically challenging analysis. Specifically, [ADK17] only provides an expected-value analysis for the detection problem. In contrast, we are able to provide strong concentration bounds for comparison, which can be further boosted via additional mechanisms, and we provide a thorough exploration of time-space trade-offs for this problem. Moreover, we note that the strong concentration bounds on opinion dynamics we present are required for the analysis of the comparison protocol. In addition, our analysis introduces a powerful and novel generalized Bernstein-type inequality, which should be a useful addition to the analysis toolbox of population dynamics.

## 3 System Model and Problem Statement

#### Population Protocols.

A population protocol is a distributed system with n nodes, also called molecules or agents. Nodes execute a deterministic state machine with states from a finite set Λ, whose size may be a function of n. Nodes are anonymous, so agents in the same state are identical and interchangeable. Consequently, the state of the system at any point is characterized by the number of nodes in each state with non-zero count. Formally, a configuration is a function c : Λ → ℕ, where c(s) represents the number of agents in state s. Nodes interact in pairs, according to an outside entity called the scheduler. In this paper, we will assume a uniform random scheduler, which picks every possible interaction pair uniformly at random; this corresponds to having a well-mixed solution.

An algorithm, also known as a population protocol, is defined as follows. We define the set I of all allowed initial configurations of the protocol for n agents, a finite set of output symbols O, a transition function δ : Λ × Λ → Λ × Λ, and an output function γ : Λ → O. The system starts in one of the initial configurations c_0 ∈ I (clearly, ∑_{s ∈ Λ} c_0(s) = n), and each agent keeps updating its local state following interactions with other agents, according to the transition function δ. The execution proceeds in steps, where in each step a new pair of agents is selected uniformly at random from the set of all pairs. Each of the two agents updates its state according to the function δ.
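The execution model above can be sketched as a small simulation harness. The function below is a generic illustration of the uniform random scheduler acting on a configuration; the state names and the transition function passed in are placeholders, not code from the paper.

```python
import random

def run(config, delta, steps, seed=0):
    """Execute a population protocol under the uniform random scheduler.

    config: the configuration, a dict mapping each state in Lambda to its
            count (states with zero count may be omitted).
    delta:  the transition function, mapping an ordered pair of states to
            the pair of successor states.
    Runs `steps` pairwise interactions and returns the final configuration."""
    rng = random.Random(seed)
    for _ in range(steps):
        # draw an ordered pair of two distinct agents, state-wise
        states = [s for s, k in config.items() if k > 0]
        a = rng.choices(states, weights=[config[s] for s in states])[0]
        config[a] -= 1
        states = [s for s, k in config.items() if k > 0]
        b = rng.choices(states, weights=[config[s] for s in states])[0]
        config[b] -= 1
        a2, b2 = delta(a, b)              # both agents update their states
        config[a2] = config.get(a2, 0) + 1
        config[b2] = config.get(b2, 0) + 1
    return config
```

As a usage example, a one-way epidemic (an informed agent informs its partner) spreads to essentially the whole population within a logarithmic number of parallel time units.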

#### Time, Space, and Stabilization.

Our basic notion of time counts the number of interactions until some given predicate holds on the entire population. Parallel time is defined as the total number of pairwise interactions divided by the number of nodes n. We measure space as the number of states which can be implemented by each node. We say that a population protocol is self-stabilizing [AAFJ08] if it is guaranteed to converge, from any initial configuration, to a set of output configurations which satisfy a given predicate, and for which every extension also satisfies the given predicate. The parallel time to reach those output configurations is the stabilization time.

#### Leaks and Robustness.

We now recall the definition of leak reactions (leaks), following [ADK17]. Given the above, any population protocol can be specified as a sequence of transition rules of the form

 X+Y→Z+T.

Given the set of such transitions defining a protocol, reference [ADK17] partitions protocol states into catalytic states, which never change count following any reaction: for instance, state K is catalytic if it only participates in reactions of the type K + Z → K + T, where Z and T are arbitrary. By contrast, non-catalytic states can change their count, for instance by being created or transformed by the protocol into other states. In a nutshell, leaks are spurious reactions which can consume and create arbitrary non-catalytic species, from other non-catalytic species. Leaks are induced by the basic laws of chemistry. For instance, by the law of reversibility, every interaction has some (low) probability of being reversed; by the law of catalysis, every catalytic reaction can also occur in the absence of the catalyst state. In practice, leaks can cause any molecule type implemented by the algorithm to appear spuriously during its execution, with some low probability.

More formally, a leak is a reaction of the type Z → T, where Z and T denote arbitrary non-catalytic states. For generality, in the following we will assume that the exact leak reactions are chosen adversarially, but that their rate, that is, their probability of occurring at a given moment, is upper bounded by a fixed parameter β. An algorithm which maintains its correctness guarantees in spite of leaks is called leak-robust [ADK17]. Notice that protocols such as the four-state exact majority algorithm [DV12] are not leak-robust, since the correctness of their output crucially depends on having exact molecule counts throughout the execution.
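The leak model can be illustrated with a small helper that perturbs a configuration. The rate `beta`, the species names, and the random choice standing in for the adversary are all illustrative assumptions; only non-catalytic species are touched, matching the definition above.

```python
import random

def maybe_leak(config, beta, non_catalytic, rng):
    """With probability beta, fire one leak reaction: consume a molecule of
    one non-catalytic species present in `config` and create a molecule of
    another non-catalytic species.  The choice is made at random here as a
    stand-in for the adversary; catalytic species are never affected.
    Returns True iff a leak fired."""
    if rng.random() >= beta:
        return False                      # no leak at this step
    present = [s for s in non_catalytic if config.get(s, 0) > 0]
    if not present:
        return False
    consumed = rng.choice(present)        # adversary picks what to consume...
    created = rng.choice(non_catalytic)   # ...and what to create spuriously
    config[consumed] -= 1
    config[created] = config.get(created, 0) + 1
    return True
```

In a simulation, such a step would be interleaved with the scheduled interactions; note that a leak preserves the total molecule count but may create species that the protocol never legitimately produced at that point.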

## 4 The PopComp Robust Comparison Algorithm

### 4.1 The Baseline Algorithm

In this section, we present the baseline variant of the algorithm, which ensures a constant separation between the two states, in favor of the more numerous one, with high probability. In the next section, we will build on this algorithm to boost the fraction of nodes which correctly identify the majority.

#### Algorithm Description.

Each node’s state is either X_i or Y_i, for a level 0 ≤ i ≤ s, or N, where s is a level parameter whose value is specified later in the analysis. The intuition is that the states X_1, …, X_s correspond to the answer X with decreasing “confidence” (symmetrically for the Y_i states), and N is a neutral state (it roughly corresponds to the lowest-confidence X and Y states being merged). We call a molecule strong if its state is not N. The states change according to the following rules:

 For all 1 ≤ i ≤ s:
     X_0 + X_i → X_0 + X_1,   X_0 + Y_i → X_0 + X_1,
     Y_0 + X_i → Y_0 + Y_1,   Y_0 + Y_i → Y_0 + Y_1.
 For all 1 ≤ i < j ≤ s:
     X_i + X_j → X_{i+1} + X_{i+1},   X_i + Y_j → X_{i+1} + X_{i+1},
     Y_i + Y_j → Y_{i+1} + Y_{i+1},   Y_i + X_j → Y_{i+1} + Y_{i+1}.
 For all 1 ≤ i < s:
     X_i + N → X_{i+1} + X_{i+1},   Y_i + N → Y_{i+1} + Y_{i+1}.

The idea is that the states of the molecules are used to spread information about the number of initial molecules in states X_0 and Y_0, which never change, among all other molecules, while approximately maintaining the ratio |X_0| / |Y_0|. This is done via the confidence levels X_1, …, X_s (resp. Y_1, …, Y_s). A molecule decreases its confidence by one during each reaction, but it spreads its information to the less confident molecule in the reaction. When the confidence passes the threshold s, the molecule moves to the neutral state N. We will show that the number of molecules in consecutive levels roughly doubles at every level, with high probability. We present an experimental illustration of this intuition in Figure 1.
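One plausible reading of these rules can be sketched as a single transition function. The encoding of states as (kind, level) pairs, the tie-breaking at equal levels, and the exact handling of the neutral state at the cap are our assumptions, not the paper's precise specification.

```python
def delta(a, b, s):
    """One PopComp interaction (a sketch of one reading of the rules).

    States are pairs (kind, level) with kind 'X' or 'Y' and level in 0..s,
    or the neutral state ('N', None), treated here as level infinity.  The
    agent closer to its baseline (smaller level) wins: baseline agents
    (level 0) never change and recruit the partner to level 1; otherwise
    both agents adopt the winner's kind at level i+1, turning neutral when
    the chain would exceed s.  Equal levels favor the first agent."""
    INF = float('inf')
    la = a[1] if a[0] != 'N' else INF
    lb = b[1] if b[0] != 'N' else INF
    if la == INF and lb == INF:           # N + N -> N + N
        return a, b
    kind, i = (a[0], la) if la <= lb else (b[0], lb)
    if i == 0:                            # X0 + Z -> X0 + X1 (Y0 symmetric)
        rec = (kind, 1)
        return (a, rec) if la <= lb else (rec, b)
    if i + 1 > s:                         # chain would exceed s: go neutral
        return ('N', None), ('N', None)
    return (kind, i + 1), (kind, i + 1)   # Xi + Zj -> Xi+1 + Xi+1, i < j
```

Plugged into a uniform-random scheduler, this function reproduces the level dynamics described above.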

#### Guarantees.

The precise analysis of this algorithm is presented in Sections 5 and 5.3, and results in the following theorem.

###### Theorem 1.

For s = O(log n) and baseline counts such that |X_0| > C |Y_0|, the algorithm stabilizes in O(log n) parallel time to a configuration in which the number of strong X molecules exceeds the number of strong Y molecules by a constant factor, with high probability.

### 4.2 Algorithm Extensions

#### Boosting Precision.

The algorithm described in the previous subsection ensures constant separation: roughly, we can guarantee, with a proper choice of parameters, that at least a constant fraction of all molecules have the correct output, and at most a smaller constant fraction have the wrong output. Now we describe a way of amplifying this correctness guarantee. We describe it with respect to our algorithm, but the transformation is generic and would apply to any comparison algorithm.

Assume that, in addition to their state, molecules are equipped with a counter that contains an integer value in the interval [−m, m], where m is a parameter. The counter is increased by one if a molecule reacts with a strong molecule of type X, and decreased by one if it reacts with a strong molecule of type Y. If a molecule reacts with a molecule in state N, the counter remains unchanged. The output function maps all states with a positive counter to output X, and all states with a negative counter to Y.
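The counter dynamics can be sketched as follows; the saturation bound m, the state encoding, and the handling of a zero counter in the output map are illustrative assumptions.

```python
def update_counter(counter, partner_kind, m):
    """One boosting step for the output counter (sketch; m bounds the
    counter range).  The counter saturates at +/- m, moves up on meeting
    a strong X molecule, down on a strong Y molecule, and ignores
    neutral partners."""
    if partner_kind == 'X':
        return min(counter + 1, m)
    if partner_kind == 'Y':
        return max(counter - 1, -m)
    return counter                        # partner in state N: no change

def output(counter):
    """Output map: a positive counter votes X, a non-positive one votes Y
    (how zero is mapped is arbitrary; here it votes Y)."""
    return 'X' if counter > 0 else 'Y'
```

Because strong X molecules outnumber strong Y molecules after stabilization, the counter behaves like a biased, saturating random walk, which is what the stationary-distribution argument below exploits.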

Note that, when the confidence levels stabilize in the baseline algorithm, the counter should function similarly to a random walk biased towards the majority. More precisely, it is biased towards X if the strong X molecules outnumber the strong Y molecules, and vice versa. Because a constant fraction of the population consists of strong molecules, each agent reacts with enough strong molecules, and therefore the random walk should quickly converge to its stationary distribution. The stationary distribution will give us the estimate that only a small fraction of molecules have the wrong counter sign in expectation.

There are two ways of implementing the above dynamics. The first method has every molecule participate in the counting process. This requires multiplying the number of states by the counter range, but has the advantage that every molecule participates in the output. Second, similarly to [GP16], one can split the population initially into two roughly equal-size parts. The first half implements the original algorithm, while the second half consists of molecules implementing the random-walk counter. The counter states are thus added to, rather than multiplied with, the level states, but with the disadvantage that a constant fraction of all molecules do not produce any output at all.

The above construction ensures that the algorithm stabilizes, at the cost of a modest increase in state count, to a configuration where at most a small fraction of molecules have the incorrect output. The proof is provided in Section 6.

In Section 7, we explore different variants of this algorithm which trade off a lower state space for higher convergence time. Interestingly, we will show that there exist variants with significantly fewer states per node which still converge in sub-linear time. Since these variants require a more careful re-definition of the protocol, we present them separately in the corresponding section. Following [DK18], we obtain the following lower bound on the time-space complexity trade-offs for detection/comparison:

###### Theorem 2 ([Dk18], corollary of Theorem 4.1).

Any protocol that solves detection in T parallel time by convergence to a stationary distribution requires a number of states per node that grows as T decreases, up to some absolute positive constant.

Our protocol gives a trade-off for comparison (and thus detection) in which the state count decreases as the allowed convergence time grows, leaving an exponential gap between the lower and upper bounds.

## 5 Analysis of the Baseline Algorithm

In this section we will focus on the concentration properties of |X_i| and |Y_i|, the numbers of molecules of type X_i and Y_i, respectively, for each level i. Intuitively, given the initial counts of X_0 and Y_0, the argument (i) establishes upper and lower bounds on the level counts in the “steady state” of the protocol, (ii) shows that the protocol concentrates around those bounds, and (iii) shows that concentration occurs quickly.

#### Notation.

Denote by |X_i| and |Y_i| the counts of molecules in states X_i and Y_i, and let x_i = |X_i|/n and y_i = |Y_i|/n denote the corresponding fractions. Also denote R_i = |X_i| + |Y_i|, and r_i = R_i/n.

To specify the value of some variable after a precise number of interactions, we add (t) after the variable: i.e., r_i(t) denotes the probability that a molecule chosen uniformly at random after t steps of the protocol is of type X_i or Y_i.

### 5.1 Warm-Up: Tightly Bounding Total Level Counts, and a Concentration Theorem

The goal of this section is to develop some of the intuition behind the analysis, as well as some preliminary results, by providing bounds on the joint count at each level i, denoted by R_i = |X_i| + |Y_i|. We begin with the observation that if we replace all states X_i and Y_i with U_i in the algorithm, then the interaction rules become:

 For all 1 ≤ i ≤ s:   U_0 + U_i → U_0 + U_1.
 For all 1 ≤ i < j ≤ s:   U_i + U_j → U_{i+1} + U_{i+1},

with the neutral state N handled as before.

Note that this closely matches the detection dynamics of [ADK17]: intuitively, in this case, we are not trying to compare the counts of two species, but instead trying to detect the presence of a single species in the initial solution. We note that the analysis in [ADK17] only provides expected bounds on the species count at every level. Thus, the preliminary results of this section illustrate our analysis technique by tightening the bounds for this detection algorithm to characterize concentration. In turn, concentration bounds are essential to analyze the behavior of the comparison dynamics we consider.

Let R_i = |X_i| + |Y_i| for any level i. We begin by introducing some auxiliary variables ~r_i, for each level i, which are intuitively the steady-state (expected) values to which the level fractions should converge in the limit. Let also ~R_i = n ~r_i. Note that defining the values ~r_i directly in terms of the convergence of the process can be difficult, so instead we will directly provide an operational definition for them. More precisely, we define these values recursively as follows:

 ~r_0 = r_0 = (|X_0| + |Y_0|)/n   and   ~r_{i+1} = 1 − (1 − ~r_i)² = ~r_i (2 − ~r_i),   ∀ s > i ≥ 0,

where the recurrence follows from the observation that an agent ends up at level i + 1 iff, in its last interaction, at least one of the two interacting agents was at level i. We can expand this recursion to obtain the following estimates for these level counts.
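The recursion is easy to evaluate numerically. The following sketch also checks the closed form ~r_i = 1 − (1 − r_0)^{2^i} implied by it (since 1 − ~r_{i+1} = (1 − ~r_i)²), and the near-doubling of consecutive levels while the values are small.

```python
def steady_state_levels(r0, s):
    """Predicted steady-state fractions ~r_0 .. ~r_s from the recursion
    ~r_{i+1} = 1 - (1 - ~r_i)^2 = ~r_i (2 - ~r_i)."""
    r = [r0]
    for _ in range(s):
        r.append(r[-1] * (2 - r[-1]))
    return r
```

For a small base fraction such as r_0 = 0.001, the sequence roughly doubles per level until it saturates towards 1, matching the intuition given for the algorithm.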

###### Observation 1.

For any 0 ≤ i ≤ s, it holds that ~r_i = 1 − (1 − r_0)^{2^i}. In particular, we have ~r_i ≤ min(2^i r_0, 1), and the values ~r_i roughly double from one level to the next while they remain small.

Our goal will be to provide a concentration bound matching the level counts to these steady-state values. Broadly, our setup is as follows. We will fix a level index c and a time t, such that, at this time, the level counts at levels up to c are well-concentrated around their means ~R_0, …, ~R_c, with high probability. Then, we will show that there exists a time t′ > t, such that, with high probability, the level count at level c + 1 is concentrated around its own predicted mean ~R_{c+1}. More precisely, let us fix a level c and a time t, and assume that there exists a constant ξ_c such that |r_c(t) − ~r_c| ≤ ξ_c ~r_c, with high probability. We will proceed to prove that there exists a constant ξ_{c+1} and a time t′ such that |R_{c+1}(t′′) − ~R_{c+1}| ≤ ξ_{c+1} ~R_{c+1} for any t′′ ≥ t′, given a sufficiently large time interval.

The argument will begin by analyzing the evolution of the level count R_{c+1} at time t. In particular, denote by Δ_1(t), Δ_2(t), Δ_3(t), Δ_4(t) the indicator variables for the following events at step t, which govern the evolution of R_{c+1}:

• Δ_1(t) = 1 iff the first reacting agent was from level c + 1,

• Δ_2(t) = 1 iff the second reacting agent was from level c + 1,

• and Δ_3(t) = 1 (resp. Δ_4(t) = 1) iff the first (resp. second) reacting agent moves to level c + 1, which happens when at least one of the reacting agents was from level c.

We obtain the following recurrence on the expected value of R_{c+1}:

 E[R_{c+1}(t+1) | t] = E[R_{c+1}(t) − Δ_1(t) − Δ_2(t) + Δ_3(t) + Δ_4(t) | t]
   = R_{c+1}(t) − 2 r_{c+1}(t) + 2[1 − (1 − r_c(t))²]
   = (1 − 2/n) R_{c+1}(t) + (2/n) A(t),

where we define A(t) = n[1 − (1 − r_c(t))²]. Second, we bound the variance by direct calculation:

 Var[R_{c+1}(t+1) | t] = Var[R_{c+1}(t+1) − R_{c+1}(t) | t] = Var[Δ_3(t) + Δ_4(t) − Δ_1(t) − Δ_2(t) | t]
   ≤ 4 (r_{c+1}(t) + r_{c+1}(t) + (1 − (1 − r_c(t))²) + (1 − (1 − r_c(t))²)) = 8 R_{c+1}(t)/n + 8 A(t)/n.

Finally, we use the induction hypothesis to bound the deviation of A(t) from ~R_{c+1}, with high probability, as

 |A(t) − ~R_{c+1}| = |n[1 − (1 − r_c(t))²] − n(1 − (1 − ~r_c)²)| ≤ n ξ_c ~r_c (2 − ~r_c) ≤ 2 ξ_c ~R_{c+1}.

#### The Concentration Theorem.

We now take a step back, and examine the claims we have already proven, and their relationship to our target. We wish to obtain a concentration bound on the level count R_{c+1}(t) in terms of its predicted steady-state value ~R_{c+1}. We have a handle on the expected value of R_{c+1}(t+1) and on its variance, but these values critically depend on the quantity A(t). At the same time, we also have a strong probabilistic bound on how much A(t) can vary, by the last inequality. A natural candidate for establishing a concentration bound on R_{c+1} would be to recognize that it has super-martingale behavior, and to apply a Bernstein-type inequality for its concentration around its mean. However, it is hard to see how to apply such a result to our setting, in particular due to the presence of the “noise” term A(t). Fortunately, we are able to prove the following concentration result instead.

###### Theorem 3.

Fix parameters a ≥ 0 and ε with 0 < ε < 1, and the population size n. Further, fix constants c_1, c_2 > 0. Let t denote time, and let A(t) and B(t) be stochastic processes such that for all time steps t the following hold:

1. |A(t) − a| ≤ εa,

2. E[B(t+1) | t] = (1 − c_2/n) B(t) + (c_2/n) A(t),

3. Var[B(t+1) | t] ≤ c_1 (B(t) + A(t))/n,

4. |B(t+1) − B(t)| ≤ c_1.

Then there exists an interval length T such that the following holds with high probability:

 |B(t′) − a| ≤ εa + O(c_1 √(a log n) + log n),

for any t′ ≥ t + T.

The proof of this result is technical, and is deferred to the Appendix. To complete our exposition, notice that this result closely matches our set of previous derivations for R_{c+1}, while relation (1) holds w.h.p. for the previous level as part of the induction step. More precisely, we can follow the above derivations and plug in B(t) = R_{c+1}(t), A(t) = n[1 − (1 − r_c(t))²], a = ~R_{c+1}, ε = 2ξ_c, c_2 = 2, and c_1 = 8, to obtain the following concentration result on the level counts after a sufficiently long time has passed.

###### Lemma 1.

Fix a level index c + 1, an initial time t_0, and let T be a sufficiently large time interval. Fix a constant ξ_c and assume that, for any step t ≥ t_0, the level count R_c(t) is always ξ_c-concentrated around ~R_c, that is, |R_c(t) − ~R_c| ≤ ξ_c ~R_c. Then there exists a constant ξ_{c+1} such that, for any t ≥ t_0 + T, with high probability, |R_{c+1}(t) − ~R_{c+1}| ≤ ξ_{c+1} ~R_{c+1}.

Finally, we unroll the recursion for a fixed level c, and obtain that the following concentration bound holds after a given point in time. Note that level zero is always perfectly concentrated around ~R_0 = |X_0| + |Y_0|.

###### Corollary 1.

Given a level and a fixed initial time , there exists an absolute constant and a time interval length such that, for any , it holds with high probability that .

### 5.2 Step Two: Analyzing the Comparison Process

We now proceed to analyze the core of our comparison algorithm. We leave aside the voting amplification component, which we analyze separately in Section 6. The strategy is a more complex version of the one from the previous section: we derive bounds on the level counts of states and , for each state in turn. We will focus on the derivation for , since the case of is symmetric.

Let , and , for every level . We begin by defining the estimate values around which the level counts should concentrate in the steady state:

$$\tilde{x}_0 = x_0 = \frac{|X_0|}{n}, \qquad \tilde{x}_{i+1} = \tilde{x}_i \cdot \bigl(2 - \tilde{r}_i - \tilde{r}_{i-1}\bigr);$$
$$\tilde{y}_0 = y_0 = \frac{|Y_0|}{n}, \qquad \tilde{y}_{i+1} = \tilde{y}_i \cdot \bigl(2 - \tilde{r}_i - \tilde{r}_{i-1}\bigr).$$

These values are computed by following the recursion suggested by the steady-state analysis: for an agent to end up in state , it needs either to be in state and act as the first reagent in an interaction with any of , or to act as the second reagent in an interaction with any of . We unroll the recursion to obtain a well-informed guess as to the values around which these variables should concentrate.
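As a sanity check, the short Python sketch below unrolls this recursion numerically. Since the definition of the steady-state values $\tilde{r}_i$ is not restated here, the code makes the illustrative assumption that $\tilde{r}_i = \tilde{x}_i + \tilde{y}_i$ (with $\tilde{r}_{-1} = 0$); note that regardless of how $\tilde{r}_i$ is defined, both chains are scaled by the identical multiplier $(2 - \tilde{r}_i - \tilde{r}_{i-1})$, so the ratio $\tilde{x}_i / \tilde{y}_i$ is preserved across levels, which the code verifies.

```python
# Unroll the steady-state recursion
#   x_{i+1} = x_i * (2 - r_i - r_{i-1}),  y_{i+1} = y_i * (2 - r_i - r_{i-1}).
# ASSUMPTION: we take r_i = x_i + y_i and r_{-1} = 0 purely for
# illustration; the ratio-preservation check below holds for ANY choice
# of r_i, because both chains are scaled by the identical multiplier.
x0, y0 = 0.01, 0.003   # illustrative initial densities |X_0|/n, |Y_0|/n
levels = 6

xs, ys = [x0], [y0]
r_prev = 0.0
for i in range(levels):
    r_i = xs[i] + ys[i]        # assumed definition of r_i
    mult = 2.0 - r_i - r_prev  # common level-i multiplier
    xs.append(xs[i] * mult)
    ys.append(ys[i] * mult)
    r_prev = r_i

# The x/y ratio is invariant across levels.
for xi, yi in zip(xs, ys):
    assert abs(xi / yi - x0 / y0) < 1e-9
print([round(v, 4) for v in xs])
```

The multiplier stays close to 2 while the densities are small, so the estimates roughly double per level until the population saturates, which matches the amplification role the levels play in the protocol.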

###### Observation 2.

There is . It can be verified by induction that .

The rest of this section will be dedicated to proving the following concentration result on the level counts. We will show:

###### Lemma 2.

Let be a level index and let be a sufficiently large step. Assume that during all steps it holds that for with defined as in Corollary 1, and that for some . Then, for , there exists a value such that, with high probability, it holds that

###### Proof.

Fix a level index and time , such that, at this time, the level counts at levels are well-concentrated around their means , with high probability. We show that there exists a time , such that, with high probability, the level count at level is concentrated around its own predicted mean . Fix a level and a time , and assume that there exists a constant such that , with high probability. We will proceed to prove that there exists a constant and a time such that given a sufficiently large time interval .

The argument will begin by analyzing the evolution of the level counts at time . We define as indicator variables for the following events at step :

• iff the first reacting agent was from ;

• iff the second reacting agent was from ;

• iff the first reacting agent was from and second reacting agent had a level , or the first reacting agent had a level and the second reacting agent was from ;

• iff the first reacting agent had level and the second reacting agent was from , or the first reacting agent was from and the second reacting agent had level .

Notice that these events cover all the cases in which the count of might change in this step. As before, the plan is to set up an application of the Concentration Theorem (Theorem 3) for the random variable . For this, we characterize its mean and variance at step , assuming that the counts at the previous levels are well-behaved, which we may safely do by the induction step. By careful calculation, we obtain:

$$\begin{aligned}
\mathbb{E}[X_{c+1}(t+1)\mid t] &= \mathbb{E}\bigl[X_{c+1}(t) - \Delta'_1(t) - \Delta'_2(t) + \Delta'_3(t) + \Delta'_4(t)\mid t\bigr]\\
&= X_{c+1}(t) - 2\,x_{c+1}(t) + 2\,x_c(t)\bigl[2 - r_c(t) - r_{c-1}(t)\bigr]\\
&= \Bigl(1 - \frac{2}{n}\Bigr)X_{c+1}(t) + \frac{2}{n}A'(t),
\end{aligned}$$

where we defined . Further, we have:

$$\begin{aligned}
\operatorname{Var}[X_{c+1}(t+1)\mid t] &= \operatorname{Var}[X_{c+1}(t+1) - X_{c+1}(t)\mid t] = \operatorname{Var}\bigl[\Delta'_3(t) + \Delta'_4(t) - \Delta'_1(t) - \Delta'_2(t)\mid t\bigr]\\
&\le 4\bigl(\operatorname{Var}[\Delta'_1(t)\mid t] + \operatorname{Var}[\Delta'_2(t)\mid t] + \operatorname{Var}[\Delta'_3(t)\mid t] + \operatorname{Var}[\Delta'_4(t)\mid t]\bigr)\\
&\le 4\bigl(\mathbb{E}[\Delta'_1(t)^2\mid t] + \mathbb{E}[\Delta'_2(t)^2\mid t] + \mathbb{E}[\Delta'_3(t)^2\mid t] + \mathbb{E}[\Delta'_4(t)^2\mid t]\bigr)\\
&= 4\bigl(\mathbb{E}[\Delta'_1(t)\mid t] + \mathbb{E}[\Delta'_2(t)\mid t] + \mathbb{E}[\Delta'_3(t)\mid t] + \mathbb{E}[\Delta'_4(t)\mid t]\bigr)\\
&\le 4\bigl(x_{c+1}(t) + x_{c+1}(t) + x_c(t)[2 - r_c(t) - r_{c-1}(t)] + x_c(t)[2 - r_c(t) - r_{c-1}(t)]\bigr)\\
&= \frac{8X_{c+1}(t)}{n} + \frac{8A'(t)}{n}.
\end{aligned}$$

Another careful upper bound argument yields that

$$\begin{aligned}
|A'(t) - \tilde{X}_{c+1}| &= n\cdot\bigl|x_c(t)[2 - r_c(t) - r_{c-1}(t)] - \tilde{x}_c[2 - \tilde{r}_c - \tilde{r}_{c-1}]\bigr|\\
&\le n\cdot\bigl|x_c(t)[2 - r_c(t) - r_{c-1}(t)] - x_c(t)[2 - \tilde{r}_c - \tilde{r}_{c-1}]\bigr|\\
&\quad + n\cdot\bigl|x_c(t)[2 - \tilde{r}_c - \tilde{r}_{c-1}] - \tilde{x}_c[2 - \tilde{r}_c - \tilde{r}_{c-1}]\bigr|\\
&\le n\,x_c(t)\bigl(\varepsilon\tilde{r}_c + \varepsilon\tilde{r}_{c-1}\bigr) + n\,\varepsilon_c\,\tilde{x}_c[2 - \tilde{r}_c - \tilde{r}_{c-1}]\\
&= \varepsilon\,n\,x_c(t)\bigl(\tilde{r}_c + \tilde{r}_{c-1}\bigr) + \varepsilon_c\,\tilde{X}_{c+1}\\
&\le \varepsilon\,n(1 + \varepsilon_c)\,\tilde{x}_c\bigl(\tilde{r}_c + \tilde{r}_{c-1}\bigr) + \varepsilon_c\,\tilde{X}_{c+1}\\
&\le \varepsilon(1 + \varepsilon_c)\,\tilde{X}_{c+1}\,\frac{\tilde{r}_c + \tilde{r}_{c-1}}{2 - \tilde{r}_c - \tilde{r}_{c-1}} + \varepsilon_c\,\tilde{X}_{c+1}\\
&\le \Bigl(\frac{2\varepsilon\tilde{r}_c}{1 - \tilde{r}_c} + \varepsilon_c\Bigr)\tilde{X}_{c+1}.
\end{aligned}$$

At this point, we have enough data to invoke Theorem 3, which guarantees that after steps we have

$$|X_{c+1}(T) - \tilde{X}_{c+1}| \;\le\; \Bigl(\frac{2\varepsilon\tilde{r}_c}{1 - \tilde{r}_c} + \varepsilon_c\Bigr)\tilde{X}_{c+1} + O\Bigl(\sqrt{\tilde{X}_{c+1}\log n}\Bigr).
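The drift computed above says that, in expectation, the level count contracts toward its target at rate 2/n per interaction. Treating the target as frozen at its steady-state estimate, the deterministic fixed-point iteration below illustrates the resulting geometric convergence: after order n log n steps the bias term is polynomially small, leaving only the concentration error of Theorem 3. The values of `n` and the target are illustrative assumptions.

```python
import math

# Deterministic skeleton of the drift derived above:
#   E[X(t+1) | t] = (1 - 2/n) * X(t) + (2/n) * A'
# with A' frozen at its steady-state estimate. The bias |X(t) - A'|
# shrinks by a factor (1 - 2/n) per step, i.e. geometrically.
n = 1_000           # illustrative population size
A_tilde = 300.0     # illustrative steady-state estimate
X = 0.0             # start far from the fixed point

steps = int(n * math.log(n))   # Theta(n log n) steps suffice
for _ in range(steps):
    X = (1 - 2 / n) * X + (2 / n) * A_tilde

# (1 - 2/n)^(n ln n) is roughly n^(-2), so the remaining bias is tiny.
assert abs(X - A_tilde) < 1.0
print(f"X after {steps} steps: {X:.6f}")
```

In parallel time (n interactions per time unit), this corresponds to the O(log n) convergence claimed for the protocol, with the stochastic fluctuations around this deterministic skeleton controlled by Theorem 3.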

We can then iterate this result to obtain the separation result for the proportion of agents supporting either opinion:

###### Theorem 4.

Let be a sufficiently large time interval. Assume that and . For appropriately chosen constants , if , then the total count of agents holding opinion “X,” formally , will satisfy , with high probability.

###### Proof.

Consider the minimal parameter such that . For this value, it will hold that and . By Corollary 1, after a time interval of length all values satisfy for a constant that can be made arbitrarily close to 0 (the cost is traded off against the constant hidden in ). After that time, we repeatedly apply Lemma 2 for the first levels of . The guarantee for opinions is that , where

$$\varepsilon' = \sum_{i\le d'}\frac{2\varepsilon\tilde{r}_i}{1 - \tilde{r}_i} + O\Biggl(\sum_{i\le d'}\sqrt{\frac{\log n}{\tilde{X}_i}}\Biggr).

We note that, by the geometric progression of the sum (since only a constant number of terms satisfy ):

$$\sum_{i\le d}\frac{2\varepsilon\tilde{r}_i}{1 - \tilde{r}_i} \;\le\; \sum_{i\le d}100\,\varepsilon\tilde{r}_i = \Theta(\varepsilon),

and since we have that , the second term is also an arbitrarily small constant; hence is itself a constant that can be made arbitrarily small. We then observe that , and since can be chosen to be large and to be small, this is at least for some . ∎
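The geometric-sum step in this proof can be checked numerically. Assuming, purely for illustration, level estimates that decay geometrically and stay at or below 49/50 (so that 2/(1 − r̃_i) ≤ 100 holds term by term; in the proof, the constantly many levels with larger values are handled separately), the sketch verifies the displayed inequality and that the whole sum is on the order of ε.

```python
# Numerical check of the geometric-sum bound used above:
#   sum_i 2*eps*r_i / (1 - r_i) <= sum_i 100*eps*r_i = Theta(eps),
# which holds term by term whenever r_i <= 49/50, i.e. 2/(1 - r_i) <= 100.
# ASSUMPTION: r_i = 0.9 * (1/2)^i is an illustrative geometric sequence;
# the constantly many levels with larger r_i are handled separately
# in the proof.
eps = 0.01
d = 30
r = [0.9 * 0.5 ** i for i in range(d + 1)]

lhs = sum(2 * eps * ri / (1 - ri) for ri in r)
rhs = sum(100 * eps * ri for ri in r)

assert all(ri <= 49 / 50 for ri in r)       # term-wise condition
assert lhs <= rhs                           # the displayed inequality
assert rhs <= 100 * eps * 0.9 / (1 - 0.5)   # geometric sum => Theta(eps)
print(f"lhs={lhs:.4f}, rhs={rhs:.4f}")
```

The point of the bound is that the accumulated relative error over all levels remains a single small constant factor of ε, rather than growing with the number of levels d.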

### 5.3 Bootstrapping convergence time

We now show how to bootstrap on the results of the previous section, and prove convergence within interactions, shaving off the additional logarithmic factors. We employ a generic technique which leverages the facts that (i) each of the processes we analyzed mixes quickly, and (ii) the effects of the different sources can be separated and analyzed independently. As a result, we can show that the overall process mixes quickly (convergence is as fast as mixing). But first, we need to rephrase two technical results from [ADK17], adapting them to the bi-chromatic setting, in which there are two possible initial states.

First, for an agent of a type or we denote , and if is in a state then we define . We also talk about a color of a type of , denoted