A single fertilized mouse egg and human egg develop into organisms with vastly different numbers of cells. How do cellular mechanisms regulate the number of cells in complex biological systems? In order to function in adverse environments, organisms must maintain stability and be able to recover from unplanned circumstances. For example, a lizard that loses its tail can grow a new one, and internal organs require mechanisms to recover from cell loss caused by injury or from excessive cell proliferation due to development or disease. But how do individual cells know how to respond in order to reestablish the desired population size?
Regulation of population size may be achieved through a combination of internal programs running within each cell and intercellular communication. One approach could be for individual cells to count the population using a distributed protocol. But an interesting question is how to control the population size if each cell lacks sufficient memory to count. Understanding the mechanisms for regulating population size in an adversarial environment, in light of memory constraints of individual agents, is a natural computational question.
In this work, we study the problem of robustly maintaining a stable population size from the perspective of distributed computing. We introduce a new coordination question that we call the population stability problem. Consider a population of agents with the ability to replicate and self-destruct. How can such distributed systems detect and recover from adversarial deletions and insertions of agents so as to maintain the desired population size?
Our focus is on systems that consist of huge numbers of agents, where each agent individually has very limited memory and communication capability and can directly communicate with only a few other agents in the system. We model communication using a synchronous variant of the population model of Angluin et al. [AADFP06, AAE07]. In each round, a constant fraction of agents is matched at random and can exchange messages, where the matched agents are chosen independently in each round. Population size must be maintained in this setting in the presence of an adversary that observes the entire state of the system and can continually delete or insert agents. We describe the model in more detail below.
The population stability problem augments a growing body of work that uses the language and ideas of distributed computing to model biological systems consisting of a collection of resource-constrained components that collectively accomplish complex tasks. Naturally, we do not claim direct relevance of our results to biological systems due to potential modeling differences. Regardless, the population stability problem makes sense in any system consisting of individual components with the ability to reproduce.
Our main contributions are as follows.
A New Problem in Distributed Computing: The Population Stability Problem.
We introduce a new problem in distributed computing. A population of memory-constrained agents (i.e. processors with the ability to reproduce and self-destruct) is subjected to adversarial attacks. Whereas many attacks can be envisioned, we consider a worst-case adversary that can delete or insert agents at a bounded rate. The goal is to maintain a stable population size within a small multiplicative factor of the original size. This problem appears fundamentally different from the classical problems of distributed computing, such as consensus, leader election, majority, common coin flipping, or computing general functions of the joint state of the parties.
Models for Communication and the Adversary.
The communication model we consider is a synchronous variant of the population model of [AADFP06, AAE07]. That model was designed to represent sensor networks consisting of very limited mobile agents with no control over their own movement and whose goal is to compute some function of their inputs or evaluate a property of the system. Whereas [AADFP06] assumed that pairs of agents can communicate via pairwise interactions as scheduled by a uniformly random matching process, we assume in addition that agents are synchronized and interact with one another in rounds. Within each round, at least a constant fraction of agents participates in pairwise interactions, again scheduled uniformly at random. The agents additionally have the ability to self-destruct and to reproduce by producing a second identical copy of themselves.
It is clear that we cannot allow the adversary to delete most or all of the agents in a single round, since maintaining a stable population size in the presence of such an adversary is impossible. Consequently, we give the adversary a bounded budget of alterations to perform in each round, where an alteration consists of removing, inserting, or modifying the memory of a single agent. We allow the adversary to observe the memory contents of every agent before determining its alterations for a round. Both the adversary's budget and the fraction of agents matched per round are parameters of the model. The model is described in detail in Section 2.
Protocol for Population Stability.
We present a protocol with three-bit messages, requiring only a small number of states (and hence very little memory) per agent, that tolerates a bounded number of worst-case insertions or deletions in each round. Formally, our main theorem is the following.
Theorem 1. Fix positive constants, one of which is a lower bound on the fraction of agents that is matched in each round. Then there exists a population stability protocol with three-bit messages and a small number of states per agent guaranteeing that if the adversary inserts or deletes a bounded number of agents in each round, then with all but negligible probability the population will remain within a small multiplicative factor of the target size for any polynomial number of rounds.
The main idea employed in our construction is to color the agents with the values 0 and 1 in such a way that information about the population size is encoded in the distribution of colors. That is, given a set of agents with assigned colors in {0,1}, consider the distribution specified by choosing an agent at random and observing its color. We are able to assign colors to agents in such a way that approximate population size is encoded in the variance of this distribution. Subsequently, each individual agent locally computes a very weak estimator of whether the variance is too large or too small, and makes a local decision of whether to reproduce or self-destruct. Although each individual agent’s estimate is noisy, we show that in the aggregate, the local decisions are globally able to maintain a stable population size even in the presence of a powerful adversary.
1.2 Discussion and extensions
The first obstacle we must confront in designing a protocol for population stability is the memory constraint. Each agent does not have sufficient memory to store a unique identifier or to count up to the population target. Yet collectively, the agents must maintain a good approximation of the target and must individually decide whether to replicate if the population is too low or to self-destruct if the population is too high. Making their task even more challenging, these memory-constrained agents must correct deviations in the size of the population and make their decisions while the adversary is acting.
A second major challenge is that the adversary is very powerful. Although the number of agents inserted or deleted in each round is bounded, the adversary can observe the complete state of the system (that is, it has the ability to read the memory contents of every agent) before deciding on its actions, and may insert agents of arbitrary initial state. These capabilities present difficulties for many standard techniques and abstractions used in distributed computing. For instance, many protocols are based on leader election, where a single leader processor is chosen to direct or facilitate the task at hand. However, since our adversary is able to observe the state of every agent, the adversary can simply wait for a leader to be chosen and then delete it. Furthermore, even without observing the internal memory of agents, the adversary could insert many additional agents that are all identical in state to the leader. Indeed, since agents can be in one of only a bounded number of distinct states and the adversary is able to insert multiple agents per round, the adversary can even insert many copies of every possible agent type in each round. Consequently any approach that relies on the existence of agents of unique or extremely special state, such as leader election, seems doomed to failure. This appears to render ineffectual a large part of the distributed computing toolbox.
Recall that we allow the adversary to insert new agents with arbitrary initial state. Starting from that internal state, we assume that the inserted agents execute the same protocol as honest agents. We could instead consider an even stronger adversary that inserts agents running arbitrary malicious protocols specifying their subsequent behavior. However, the population stability problem as described above is clearly impossible in the face of this stronger adversary, since our model does not include the ability to destroy agents that do not cooperate. A malicious agent can simply ignore all interactions with other agents and replicate itself at every opportunity. Such malicious agents would quickly replicate themselves out of control, rapidly exceeding the population target.
One may consider a different model that allows agents not only to self-destruct but also to remove other agents they encounter. In such a setting, our protocol can be extended to achieve population stability even if the adversary is allowed to insert agents that execute arbitrary malicious programs, as long as there is a bound on how frequently malicious agents can replicate and an agent is able to detect when it encounters an agent whose program is different from its own. However, that setting is not the focus of this work.
Correction and detection.
The objective in the population stability problem is to maintain a population size that is close to a target value , and to correct the population if it deviates too far from this target. A closely related problem is that of simply detecting whether the population has deviated too far from the target or whether it has exceeded some threshold, objectives that seem very similar to the problem of approximate counting [Mor78]. It would be natural to try to solve our problem by first detecting whether the population is too large or too small and then correcting appropriately. With a weaker adversary that can only delete agents but not insert additional ones (and additionally is oblivious to the internal states and coin flips of agents), this approach can be made to work using approximate counting techniques. In the adversarial model considered here, constructing approximate counters and detecting changes in population size are interesting open questions.
In this work we study a synchronous model, where all agents communicate and perform updates in rounds. As has commonly been the case across distributed computing, it is natural to study a new problem first in a synchronous setting to distill key ideas and techniques before adding additional complications in an asynchronous setting.
We note, however, that synchronization is far from an unreasonable assumption in biological systems. Indeed, many multicellular systems do achieve synchrony, either through regular external stimuli such as sunlight or through chemical control mechanisms. For instance, heart cells maintained in culture are able to achieve a high degree of synchronization of their rhythmic contractions [JMT87cardiology]. Neuronal cells too exhibit highly synchronized behavior: even when grown in culture, specialized neuronal cells show the capacity to synchronize the release of particular hormones at regular time intervals [EMCW92gnrh]. Bacterial populations have also been shown to be capable of producing coordinated oscillations [MO'S09molecularclock, DMTH10bacteria].
Nonetheless, an extremely natural and interesting question is how to solve the population stability problem in a setting without synchrony or with only partial synchrony. For instance, one could consider a setting where agents have clocks that have bounded drift relative to one another. Related to this is the typical random scheduler setting of population protocols [AAE07], in which a single pair of agents at a time is chosen to interact and update state. By a concentration argument, this process allows agents to maintain clocks that do not drift too quickly relative to one another.
While the construction in this paper requires synchrony, there are known techniques in the population protocol literature for maintaining approximate synchronization in a non-synchronous setting; see, for instance, the recent work of [AAG18]. A natural extension is to determine whether our techniques can be combined with such synchronizers to achieve population stability in these settings.
Alternate communication models.
Another very interesting question is to explore the population stability problem under a different communication model. In this paper, pairs of agents that communicate are chosen independently at random in each round. Alternatively, one could consider settings in which the neighbors of an agent are consistent over time, perhaps reflecting underlying geometric constraints.
One natural approach is to use a fixed sparse communication graph, for instance an expander. However, modeling problems arise in determining how connectivity changes upon agent replication, insertion, or deletion. In various settings along these lines, it is straightforward for the adversary to disconnect the communication graph and consequently to violate population stability. An alternate approach could be to associate agents with points in Euclidean space, and to allow each agent to communicate with a small number of the nearest other agents.
Population stability in the high-memory setting.
We note that in the absence of memory constraints, there is a trivial protocol both for approximate counting and for the population stability problem if the adversary can only delete and not insert new agents. Each agent simply flips coins to generate a random identifier, which is unique with high probability. For an interval of rounds, each agent broadcasts the set of identifiers it has received so far. With high probability, all identifiers are unique and agents receive the identifiers of every agent that was alive throughout the interval, so each agent learns a close approximation of the population size and can decide whether to self-destruct, replicate, or neither. However, this protocol relies heavily on agents having very large memory, and does not yield an approach to solve the problem in the low-memory setting.
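To make this high-memory baseline concrete, here is a small Python sketch of identifier gossip under random matchings. The identifier width, population size, and number of rounds are illustrative choices, not values from the paper.

```python
import random

def census_by_gossip(n, rounds):
    """Sketch of the trivial high-memory protocol: each agent draws a
    random identifier (unique w.h.p.) and agents merge their sets of
    known identifiers whenever they are matched.  After enough rounds,
    the size of an agent's set approximates the population size."""
    ids = [random.getrandbits(64) for _ in range(n)]
    known = [{i} for i in ids]
    for _ in range(rounds):
        # One round: a uniformly random perfect matching of the agents.
        order = list(range(n))
        random.shuffle(order)
        for a, b in zip(order[::2], order[1::2]):
            merged = known[a] | known[b]
            known[a] = known[b] = merged
    return [len(s) for s in known]

random.seed(0)
counts = census_by_gossip(200, rounds=20)
# After ~log(n) rounds of merging, most agents know almost all identifiers,
# so each count is close to the true population size of 200.
```

Since known sets roughly double in size each round, a logarithmic number of rounds suffices for every agent's count to approach the true size.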
A note about success probability.
Theorem 1 states that our protocol maintains a stable population for any polynomial number of rounds with all but negligible probability. Recall that a function is negligible if it tends to zero faster than the inverse of any polynomial. That is, a function ν(n) is negligible if for every constant c > 0 we have ν(n) < n^{-c} for all sufficiently large n.
Throughout this paper, we use the phrases “with high probability” and “with overwhelming probability” to mean with probability at least 1 − ν(n) for some negligible function ν.
1.3 Technical overview
1.3.1 Preliminary attempts
We first describe two preliminary attempts at protocols for the population stability problem which, while unsound, will provide useful intuition toward the design of our actual protocol.
Attempt 1: non-interactive leader election
As a first attempt in the low-memory setting, consider the following approach, which is based on an idea from [AAEGR17]. Each agent flips a biased coin that comes up 1 with small probability, where outcome 1 means that the agent is a leader. For a number of rounds, each agent sends to any agent it encounters the bit 0 if its coin was 0 and it has not received the message 1, and the bit 1 if its coin was 1 or it has received a 1 from another agent. In the absence of an adversary, this allows every agent to learn whether a 1 was obtained in any of the initial coin flips. The probability of this event differs noticeably depending on whether the population is too small or too large. After repeating to amplify the signal, with high probability the agents can detect whether the population is too small or too large and can replicate or self-destruct accordingly. A protocol of this form can be shown to work in the presence of an adversary that can only delete and not insert agents, and is additionally oblivious to the coin flips made by the agents. However, in the adversary model considered here, with insertion as well as deletion and full knowledge of the states of agents, the protocol fails. The adversary can either insert an agent with coin value 1 in each phase, or else identify the agent or agents with coin value 1 and selectively remove them. Consequently the adversary can cause the population to grow or shrink arbitrarily.
This attempt highlights a fundamental difficulty in designing protocols in our adversarial model. The protocol relied on a non-interactive strategy related to leader election, where the presence or absence of a leader could be used to infer the approximate size of the population. However, as we have discussed above, the use of a special state held by one or only a few agents (in this case, agents with coin value 1) provides the adversary with an easy avenue of attack, namely the deletion of agents of that state or the insertion of many additional agents of that state. Consequently, constructions of this flavor seem to have little promise in this adversarial setting.
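The signal that Attempt 1 relies on can be checked numerically. The sketch below estimates the probability that at least one leader exists, assuming for illustration a coin bias of 1/N for a hypothetical target N; the paper leaves the bias unspecified.

```python
import random

def leader_signal(n, p, trials=5000):
    """Estimate Pr[at least one of n agents flips a 1] when each coin has
    bias p.  With p = 1/N (our illustrative choice) this probability
    differs noticeably between a too-small and a too-large population,
    which is exactly the signal Attempt 1 floods through the population."""
    return sum(any(random.random() < p for _ in range(n))
               for _ in range(trials)) / trials

random.seed(2)
N = 100                                # hypothetical population target
low = leader_signal(N // 2, 1 / N)     # about 1 - e^(-1/2), roughly 0.39
high = leader_signal(2 * N, 1 / N)     # about 1 - e^(-2), roughly 0.86
```

The gap between the two probabilities is what the amplification step exploits; the attack in the text works precisely by inserting or deleting the rare coin-value-1 agents that generate this signal.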
Attempt 2: independent coloring
As a next attempt, consider the simple protocol in which each agent flips a fair coin, receiving at random a color in {0,1}. Each agent compares the colors of the next two agents it encounters: if the colors are equal, the agent splits, and otherwise it self-destructs. Observe that if an agent encounters the same agent twice, the colors must be the same, while if an agent encounters two different agents, the colors are independently random. Consequently the probability of splitting is slightly larger than one half, and hence slightly larger than the probability of self-destructing, so this protocol would cause the population to increase slightly over time. To compensate, modify the protocol so that when the colors are equal the agent splits only with some probability slightly less than one, chosen as a function of the population target, while an agent that sees unequal colors still self-destructs. (Another perspective on why the protocol without this modification cannot maintain a stable population is that the protocol run by each agent would have no dependence on the population target. Consequently, if agents are added or removed, whether by the adversary or as a result of random drift, the protocol behaves as if those agents had been there to begin with and will not correct the deviation.)
Now the population will stay the same size in expectation if its current size equals the target, will decrease in expectation if it exceeds the target, and will increase in expectation if it falls below the target. Qualitatively, this is exactly the behavior that we want. However, tending to correct itself in expectation is insufficient for maintaining a stable population. In fact, despite the weak bias toward correcting drifts in the population, the signal is overwhelmed by the noise, and the size of the population under this protocol will behave very much like a random walk. Even in the absence of an adversary, this protocol will cause the population to drift extremely far from its initial size.
In some sense the protocol we have just outlined behaves even worse than the empty protocol, in that it fails to maintain a stable population when there is no adversary at all. However, the protocol does have one intriguing feature. It entirely lacks any “special” agent types for the adversary to exploit. Consequently, if we could design a more sophisticated protocol along these lines that could maintain a stable population in the absence of an adversary, we might hope that it could do the same even in the presence of an adversary.
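A quick simulation illustrates the random-walk behavior of Attempt 2. This is a simplified sketch (the correction step is omitted, which does not change the qualitative drift); the population size, round count, and number of runs are arbitrary choices.

```python
import random
import statistics

def attempt2_round(n):
    """One round of Attempt 2, simplified: every agent draws a fresh fair
    coin, compares it with the coin of a uniformly random agent, splits
    on a match, and self-destructs on a mismatch."""
    colors = [random.randrange(2) for _ in range(n)]
    survivors = 0
    for i in range(n):
        j = random.randrange(n)
        if colors[i] == colors[j]:
            survivors += 2   # the agent and its new copy both survive
    return survivors

random.seed(1)
finals = []
for _ in range(15):          # 15 independent runs of the protocol
    n = 500
    for _ in range(100):     # 100 rounds each
        n = attempt2_round(n)
    finals.append(n)
spread = statistics.pstdev(finals)
# The runs end far apart: the population size random-walks instead of
# staying concentrated near the initial size of 500.
```

The per-round change has standard deviation on the order of the square root of the population, so over many rounds the dispersion across runs grows large, exactly the random-walk behavior described above.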
1.3.2 Overview of our protocol
We now describe our actual protocol. At a very high level, the idea behind our protocol is as follows. Through a coloring process which we will discuss below, agents are colored with the colors 0 and 1. After the agents are colored, they execute a step called the evaluation phase, in which each agent decides whether to reproduce or self-destruct. The coloring process and evaluation phase are then repeated indefinitely. We will refer to each iteration of the coloring process followed by the evaluation phase as an epoch.
During the evaluation phase, each agent that is matched with another agent in this round compares its own color with the color of its neighbor (i.e. the agent to which it is matched). If the two agents have the same color, then the agent will replicate itself with some probability. If the two agents have different colors, then the agent will self-destruct. Note that if the coloring process consisted of every agent tossing its own coin, then this would be essentially the same as Attempt 2 above. We will instead employ a more structured coloring process that results in agent colors that are not generated independently at random.
The coloring process will consist of two phases, a (noninteractive) leader selection phase and a recruitment phase. In the leader selection phase, a small fraction of the agents will become “leaders” (that is, each agent independently becomes a leader with some small probability), and each leader will choose a random color in {0,1}. In the recruitment phase, each leader will induce the coloring of a number of uncolored agents with its own color. We note that the leader will not directly encounter each of these agents, but will directly color some agents which in turn will color other agents. To begin with, each leader activates the first inactive agent that it encounters, sharing its color with the new agent. Each colored agent is subsequently responsible for recruiting inactive agents in the same manner, forming a shallow recruitment tree. By delegating the coloring in this manner, the recruitment process can be performed in a small number of rounds. (In order to achieve constant message size, our full protocol will be slightly different and will use additional rounds for the recruitment process.)
For each leader, at the end of the recruitment phase, a set of agents will have obtained a color based on the original coin toss of that leader. We will refer to these agents as the cluster associated with the leader, and we will sometimes describe agents as belonging to the same cluster or to different clusters. At the end of the recruitment phase and the beginning of the evaluation phase, a constant fraction of the population will have been colored, having been recruited into the clusters of the various leaders, roughly half with color 0 and the other half with color 1.
Consider a particular agent in the evaluation phase. If the agent meets another agent from the same cluster, then they necessarily have the same color. If the agent meets an agent from a different cluster, then their colors are independently random. Consequently, the probability that the two agents have the same color is slightly greater than one half, by an amount that shrinks as the number of clusters, and hence the population, grows. We can choose the splitting probability so that the expected change in population is zero when the population equals the target. When the population exceeds the target the expected change is negative, and when it falls below the target the expected change is positive. Moreover, unlike in Attempt 2 above, which behaved similarly to a random walk, here the effect is strong enough to maintain the population in a small interval around the target value, with all but negligible probability.
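The sign of this drift can be illustrated with a small Monte Carlo sketch. The cluster size, the matching model, and the split probability below are our own illustrative simplifications, not the paper's exact parameters.

```python
import random

def epoch_drift(n, target, cluster_size=8, trials=100000):
    """Average per-agent population change in one simplified evaluation
    phase.  The number of clusters k scales with the population, and two
    matched agents share a color with probability 1/2 + 1/(2k).  The
    split probability gamma is tuned so the drift is zero at the target."""
    k = max(1, n // cluster_size)
    k_target = max(1, target // cluster_size)
    q_target = 0.5 + 0.5 / k_target       # match probability at the target
    gamma = (1 - q_target) / q_target     # split probability on a match
    delta = 0
    for _ in range(trials):
        # Same cluster with prob 1/k (colors agree); otherwise fair coins.
        same = random.random() < 1 / k or random.random() < 0.5
        if same:
            if random.random() < gamma:
                delta += 1                # split: population grows by one
        else:
            delta -= 1                    # mismatch: agent self-destructs
    return delta / trials

random.seed(7)
d_low = epoch_drift(80, target=160)      # too small: drift is positive
d_mid = epoch_drift(160, target=160)     # at target: drift is near zero
d_high = epoch_drift(320, target=160)    # too large: drift is negative
```

The restoring force here is what distinguishes the clustered coloring from Attempt 2: the match probability now depends on the population size, so the tuned split probability pushes the population back toward the target.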
Moreover, the adversary can do little to influence the result of the protocol. The number of leaders selected is proportional to the population size, and the standard deviation of the number of leaders of each color is on the order of the square root of the number of leaders. Consequently, an adversary that can insert or delete only a bounded number of agents can do little to influence the distribution of colors. Even if the adversary selectively inserts or deletes agents that are leaders and have a particular color, the effect of the adversary on the distribution of colors is dominated by the random deviation of the sampling process. Unlike in Attempt 1 above, there are many leaders, and so the adversary is unable to delete enough of them or to insert enough additional leaders to overwhelm the protocol. As we will show, the adversary cannot cause substantial deviations in the size of the population.
Note that one strategy the adversary may attempt is inserting agents that do not know the correct round number within the epoch. In the protocol described so far, there is no mechanism for detecting and correcting this, and so over many rounds, adversarial insertions may lead to a population of agents attempting to execute different portions of the protocol. In order to address this, agents can exchange which round they are in, and self-destruct upon encountering an agent that is in a different round of the epoch. This results in the self-destruction of any agent with the wrong round number as soon as it encounters an agent with the correct round number. A corresponding number of correct agents are also destroyed, but we will show that the number of correct agents removed in this manner is sufficiently small.
As discussed briefly in Section 1.1, we can think of this protocol as encoding the population size in the variance of a distribution and then sampling from this distribution to obtain a weak estimate of the variance. Since the variance of the fraction of successes in many independent Bernoulli trials decreases as the number of trials increases, if the number of leaders is larger, then the fraction of colored agents with color 1 will be more closely concentrated around one half. On the other hand, if the number of leaders is smaller, then we expect the fraction of colored agents with color 1 to be farther from one half. Since the expected number of leaders is proportional to the current size of the population, an approximation to the population size is encoded in the fraction of agents of each color. Consider the distribution obtained by selecting an agent at random and reading its color. Comparing the colors of two agents serves as a very weak estimate of the variance of this distribution, while aggregating the results of many agents’ individual choices of whether to replicate or self-destruct serves to amplify the accuracy of this estimate.
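This variance encoding can be observed directly in a small experiment: with k cluster leaders flipping independent fair coins, the fraction of clusters colored 1 has variance 1/(4k). The cluster counts below are arbitrary illustrative values.

```python
import random
import statistics

def color_fraction_variance(k, epochs=20000):
    """Each epoch, k cluster leaders flip independent fair coins; record
    the fraction of clusters colored 1.  The variance of this fraction is
    1/(4k), so observing it reveals the number of clusters, which is
    proportional to the population size."""
    fracs = [sum(random.randrange(2) for _ in range(k)) / k
             for _ in range(epochs)]
    return statistics.pvariance(fracs)

random.seed(3)
v_small = color_fraction_variance(10)    # about 1/40
v_large = color_fraction_variance(100)   # about 1/400
# Fewer clusters (a smaller population) leave more variance in the color
# distribution; more clusters concentrate it around one half.
```

Each agent's single color comparison is a one-sample probe of this variance; aggregating millions of such probes across the population recovers the signal despite the noise in each one.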
Achieving constant-size messages.
The protocol described above involves messages nearly as large as the entire memory of an agent. We now outline how to modify the protocol to use constant-size messages. The only large components of the messages described so far are the current round in the epoch and the depth in the recruitment tree, each of which requires more than a constant number of bits to encode.
The current round in the epoch is sent to prevent the adversary from confusing the protocol by inserting agents with the wrong round number. However, rather than sending the exact round, we will instead send the single bit specifying whether or not the agent is currently in the evaluation round. If an agent is entering the evaluation round and its neighbor is not, then both agents will self-destruct. We can show that this suffices to maintain the invariant that a large majority of the agents are in the same round of the epoch.
The depth of the recruitment tree is sent to allow each leader to induce the recruitment of the correct number of agents. Note that we cannot simply recruit for a fixed number of rounds, since recruiting agents may encounter other agents that are already colored and cannot recruit them. However, we can slow down the recruitment process so that the depth in the recruitment tree is determined as a function of the round number. To do this, we divide the recruitment phase into subphases consisting of multiple rounds each. (We only require that the subphases be sufficiently long, so a modest subphase length suffices.) In a single subphase, a recruiting agent will recruit only the first inactive agent it sees, even if it encounters many inactive agents in the subphase. This allows agents to determine their depth in the recruitment tree based on the round in which they were recruited. Since a subphase consists of many rounds, we will show that an inactive agent will be encountered with high probability.
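The point that depth is implied by timing can be captured in a one-line helper. The names and the exact phase layout here are our illustration, not the protocol's specification.

```python
def depth_from_round(round_in_epoch, subphase_len, recruit_start):
    """If recruitment is divided into subphases of subphase_len rounds and
    each recruiting agent recruits at most one agent per subphase, then an
    agent recruited during subphase d sits at depth d + 1 of the
    recruitment tree.  The depth therefore never needs to be transmitted:
    it is implied by the round number at which recruitment happened."""
    subphase = (round_in_epoch - recruit_start) // subphase_len
    return subphase + 1

# An agent recruited in the first subphase is a direct child of a leader:
d1 = depth_from_round(0, subphase_len=5, recruit_start=0)    # depth 1
# One recruited ten rounds after recruitment begins is two levels deeper:
d2 = depth_from_round(12, subphase_len=5, recruit_start=2)   # depth 3
```

This is exactly the saving that removes the depth field from messages: the bits that would have encoded the depth are replaced by the shared round counter.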
This yields a population control protocol with constant-size messages, since messages consist of four binary values, namely an agent’s color, whether or not it is active, whether or not it is recruiting, and whether or not it is currently in the evaluation round. In the analysis we will see how to achieve the same result with only three-bit messages.
1.4 Related work
Population Protocols. The population protocol model was introduced by Angluin et al. [AADFP04, AADFP06]. In this model a collection of agents, which are modeled by finite state machines, move around unpredictably and have pairwise interactions. The original definition considers a worst-case environment/scheduler, while later formulations [AAE07] consider the case where each interaction occurs between a pair of agents chosen uniformly at random. In a population protocol, agents start with an initial configuration, and the goal is to jointly compute a function of this input. Previous works have sought to identify the class of functions that can be computed in such a model [AAER07], and the tradeoffs between the resources needed to do so (e.g. [AAEGR17]). In these works, the agents are always active throughout the execution of the protocol.
Another line of work expands the population model to the case in which agents can crash or undergo transient failures that corrupt their states. Delporte-Gallet et al. [DFGR06] consider a setting in which agents must compute a function of their inputs in the presence of such failures. They construct a compiler that takes as input a protocol that works in the failure-free model, and outputs a protocol that works in the presence of failures as long as modifying a small number of inputs does not change the function output. Angluin et al. [AAFJ08] incorporated the notion of self-stabilization into the population protocol model, giving self-stabilizing protocols for some classical problems such as leader election and token passing. They focus on the goal of stably maintaining some property such as having a unique leader or a legal coloring of the communication graph.
Unlike these works, in our work agents have the ability to reproduce and self-destruct, and the goal of maintaining a consistent population size must be carried out in the presence of an adversary with the corresponding capability to insert and delete agents.
Approximate Counting. The problem of maintaining the population size of a collection of memory-constrained agents is related to the problem of counting items when the available memory of the agents is too small to store an exact count. Approximate counters were introduced by Morris [Mor78] as a technique to approximate a count of up to n items using only O(log log n) bits of memory. Techniques for approximate counting in the population model were developed in [ABBS16, AAEGR17].
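For concreteness, here is a minimal Python sketch of Morris's counter; the number of counters and the increment total below are arbitrary.

```python
import random

class MorrisCounter:
    """Morris's approximate counter [Mor78]: store only an exponent x,
    increment it with probability 2^-x, and estimate the count as
    2^x - 1.  The estimate is unbiased, and x fits in O(log log n) bits
    when counting up to n."""
    def __init__(self):
        self.x = 0

    def increment(self):
        # Increment the exponent with probability 2^-x.
        if random.random() < 2.0 ** -self.x:
            self.x += 1

    def estimate(self):
        return 2 ** self.x - 1

random.seed(5)
counters = [MorrisCounter() for _ in range(300)]
for c in counters:
    for _ in range(1000):
        c.increment()
mean_estimate = sum(c.estimate() for c in counters) / 300
# A single counter is noisy, but the average of many independent counters
# lands close to the true count of 1000.
```

The variance of a single estimate is large, which is why practical uses average several counters or accept a constant-factor approximation; the memory saving is the point.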
A sequence of works by Di Luna et al. [LBBC14, LBBC14b] considers the problem of estimating the size of a network of agents that communicate according to a dynamic connection graph, in the presence of an adversary that can add and remove edges in the graph.
Cellular Automata. Cellular automata were proposed by von Neumann [Neu] as a model to reason about artificial self-reproduction. A cellular automaton consists of a regular grid of cells, each assuming one of a finite number of states. Over time, the states of cells change according to some fixed rule (e.g. a mathematical function) that determines the new state of each cell in terms of the current state of the cell and the states of the cells in its neighborhood. Conway’s Game of Life [GameofLife] is a cellular automaton that works with a simpler set of rules than von Neumann’s, and was shown to be Turing-complete by Berlekamp, Conway and Guy [BCG]. Cook [Cook] proved that rule 110 (a binary, one-dimensional cellular automaton) is Turing-complete.
Our setting is crucially different from the cellular automaton setting, since agents in our model do not simply change state but can be deleted from the system during the computation. Another difference between our setting and the cellular automaton setting is that we consider an adversarial model whereas in cellular automata cells deterministically change state. In some sense, in Conway’s game, death “plays by the rules” while in our game death is sudden and unpredictable.
Dynamic Environments. A recent work of Goldreich and Ron [GR17] considers environments that evolve according to a fixed local rule. They define an environment as a collection of small components of a large system which interact at a local level, and change state according to a fixed rule. As an example, they focus on the model of a two-dimensional cellular automaton. They ask how many queries a global observer must make about local components in order to test whether the evolution of the environment obeys a fixed known rule or to predict the state of the system at a given time and location. Although their work seems very different from ours, it bears some intellectual similarity in seeking information about a global property of the system from local information. However, whereas in [GR17] a global observer who can query a limited number of individual cells asks “does the global system obey a specific evolution rule,” in our case individual agents need to decide locally what they should do to maintain the overall global property of population size.
Self-Stabilization. Also related to our question is the self-stabilization problem introduced by Dijkstra [Dj74]. Given a system that starts in an arbitrary state, the goal of a stabilization algorithm is to eventually converge to the correct state. In this setting, however, deletion of system components is not considered. Super-stabilization [DH97] is the problem of achieving self-stabilization in dynamic networks, that is, networks where nodes are dynamically added and removed. While this setting is closer to ours, super-stabilization algorithms make additional assumptions about the system, such as that each node in the system is uniquely identified.
Distributed Algorithms Explaining Ant Colony Behaviors. A single ant has very limited communication and processing power, yet collectively a colony of ants can perform complex tasks such as consensus decision-making, leader election, and navigation. In [CDLN14] Cornejo et al. give a mathematical model for the problem of task allocation in ant colonies, and propose a very efficient protocol for this task. One of the main goals of their paper is to provide a formal model enabling the comparison of the various task allocation algorithms proposed in the biology literature. Similarly, in [GMRL15] Ghaffari et al. use techniques from distributed computing theory to gain insight into the ant colony house-hunting problem, in which a set of agents must identify potential nests, evaluate the quality of candidates, and reach consensus in a distributed manner.
2 The Population Stability Problem
As discussed above, the population stability problem is concerned with a system of agents with bounded memory and the ability to reproduce and self-destruct. The system is subjected to adversarial attacks that delete or insert processors. The objective is to maintain a stable population size despite these adversarial attacks. In this section we give a formal description of the problem.
The population stability problem is parameterized by the initial number of agents , the number of distinct memory states each agent can be in, the number of alterations the adversary is allowed to make in each round (where an alteration consists of removing or inserting a processor), a value specifying how tightly concentrated the population size must remain around the target value , and a value specifying a lower bound on the fraction of processors that are matched with other processors in each round. Below we describe each of these components of the problem.
We note that one could consider separate parameters for the number of adversarial deletions and insertions, allowing the adversary to make a different number of each. In this paper we will consider both to be bounded by a single parameter .
We consider agents with bounded memory. Each agent can be in one of possible states, so we can think of agents as having bits of memory. Agents can communicate by message-passing as specified by the connectivity structure of the system, which will be discussed further below. In our setting we will have , so each individual agent has insufficient memory to count the total population, to possess a unique ID, or to address messages to a particular recipient. Each agent has the ability to flip unbiased coins, to split into two identical agents, or to self-destruct. That is, in any round an agent may choose to split into two daughter agents that both inherit specified state from the parent agent, or it may decide to delete itself from the system.
As discussed above, we consider a synchronous version of the population model of Angluin et al. [AADFP06, AAE07], where we assume the existence of a global clock. We assume that the pairs of agents that are able to communicate in each round are selected by choosing a random matching of at least a fraction of surviving agents. We think of the parameter as a constant (e.g. ). That is, each agent is matched with at most one other agent (which we call its neighbor) in each round, and there is no consistency from round to round, since connectivity in different rounds is determined by independently sampling random matchings. The schedule of these matchings is unknown to the adversary in advance.
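A round's communication pattern in this model can be sketched as follows; `sample_matching` and `gamma` (the matched fraction) are our own illustrative names:

```python
import random

def sample_matching(agents, gamma):
    """Independently in each round: draw a uniformly random matching
    covering at least a gamma fraction of the surviving agents.
    Agents left out of the matching have no neighbor this round."""
    k = int(gamma * len(agents))
    k -= k % 2  # a matching needs an even number of endpoints
    chosen = random.sample(agents, k)  # random subset in random order
    return [(chosen[i], chosen[i + 1]) for i in range(0, k, 2)]
```

Since a fresh matching is drawn every round, no agent can rely on meeting the same neighbor twice.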
We consider a worst-case, computationally unbounded adversary that can arbitrarily choose which agents to delete in each round and can insert agents with arbitrary state in each round. The adversary also can observe the entire history of agent interactions, including the memory contents of every agent. While the initial state of inserted agents is determined by the adversary, the newly inserted agents are assumed to follow the protocol (that is, the agents introduced by the adversary do not behave maliciously).
The goal in the population stability problem is to maintain a number of agents within a small interval around the initial population size . That is, initially the system consists of agents. In each round some agents may be removed or inserted by the adversary, and some agents may decide to replicate or to self-destruct. Let denote the number of agents in the system after the th round. The adversary wins in a given round if at the end of the round the population size lies outside the prescribed interval. We say that a protocol is a population stability protocol if for any polynomial number of rounds and any adversary, the probability that the adversary wins in one of those rounds is negligible.
3 The protocol
We now provide a formal specification of the main protocol (Algorithm 1). Agents continually run the protocol throughout their lifetime. We will think of time as partitioned into epochs of rounds. Each epoch consists of three phases: the leader selection phase, the recruitment phase, and the evaluation phase. The recruitment phase consists of subphases, each consisting of roughly rounds. (More generally, we want for any .) The first and last subphases will each be shorter by one round to account for the leader selection and evaluation phases. We will elaborate on each of these phases below.
Recall that agents have the ability to toss coins, to send and receive a message upon encountering another agent, to reproduce by splitting into two identical copies of themselves, and to self-destruct. These capabilities are denoted by the following functions. One command flips an unbiased coin; agent splitting and death are implemented by corresponding split and die commands. Finally, the send command transmits a message to the neighboring agent in the present round, if any, simultaneously receiving a message in response. (Recall that an agent’s neighbor is the other agent it is randomly matched with in this round and that matchings in each round are independent and uniformly random.) If the agent is unmatched in this round and has no neighboring agent, then the return value is a null message. In the protocol below, messages consist of four boolean values. Upon receiving a message, the value of each of these variables can be accessed by name. For a null message, we follow the convention that each of these components also takes the null value.
The main variables describing the state of an agent are a round counter and four boolean variables governing activation, color, recruitment, and the evaluation phase. A further variable stores the most recent message received, consisting of four boolean values. An additional variable is not necessary for the protocol itself but is used in the analysis. We emphasize that these variables are local to a single agent, so different agents have independent copies of each variable. Initially, at the onset of the system, all variables of every agent are set to zero.
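For concreteness, the local state described above might be laid out as follows; all field names here are hypothetical, since the protocol's own identifiers do not appear in the text:

```python
from dataclasses import dataclass

@dataclass
class AgentState:
    """Hypothetical per-agent state: each agent holds an independent
    copy, and every field starts at zero/False at system onset."""
    round_in_epoch: int = 0     # position within the current epoch
    active: bool = False        # activated as a leader or recruit?
    color: bool = False         # color assigned on activation
    recruiting: bool = False    # still seeking a recruit this subphase?
    in_eval: bool = False       # in the evaluation phase?
    last_msg: tuple = (False, False, False, False)  # last 4-bit message
```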
The variable keeps track of which round it is within the epoch. The variable is incremented modulo after each round, ensuring that agents begin and end each phase and each epoch at the same time.
The variables and specify whether an agent has been activated and colored, as well as the color of the agent. In the first round of each epoch, some of the agents will designate themselves as leaders and will become active, choosing at random a color , while the rest of the agents remain inactive. During the recruitment phase, additional agents will become active and will receive colors in , as inherited from a leader. The value of variable is only relevant for active agents.
The variable specifies whether or not an active agent is trying to recruit in the present subphase. Each active agent should recruit only one additional agent in a single subphase, so this variable specifies whether or not the agent is still looking for an inactive agent to recruit in the subphase.
The variable specifies whether the agent is currently in the evaluation phase, which is true exactly when .
Finally, the variable specifies the number of additional followers an active agent is tasked with recruiting directly, which is the logarithm of the total number of agents that should be activated as a result of the given agent. When a new leader first becomes active, it sets , indicating that it is tasked with recruiting a total of agents. Each time a new agent is recruited, the value of is decremented and shared with the newly recruited agent, indicating that each of the two is responsible for recruiting only half of the total. For instance, after the first time a leader recruits another agent, both agents will have , and so each of the two agents subsequently is responsible for recruiting only agents. In this way a leader can induce the recruitment of agents in roughly a logarithmic number of rounds. Although the variable is not used by the algorithm itself, we will refer to it in the analysis.
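The halving-budget scheme can be checked with a small simulation (ours, not the protocol's code): an agent with budget b recruits one agent per subphase, and both continue with budget b - 1.

```python
def recruitment_tree_size(k):
    """Number of active agents after k subphases when one leader
    starts with budget k and every positive-budget agent recruits
    one new agent per subphase, both sides decrementing."""
    budgets = [k]  # just the leader
    for _ in range(k):
        recruits = [b - 1 for b in budgets if b > 0]  # new agents inherit b - 1
        budgets = [max(b - 1, 0) for b in budgets] + recruits
    return len(budgets)
```

Each subphase doubles the set of positive-budget agents, so a leader with budget k yields 2^k active agents after k subphases.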
We first present the main procedure run by each agent in every round. Each agent first exchanges messages with its neighboring agent in this round, if any. The agent then performs a consistency check on its own state and the state of its neighbor. Then, depending on the value of the round variable modulo , the program calls the appropriate subroutine for the corresponding phase.
We now describe the subroutine that exchanges messages with the neighboring agent, if any. An agent simply computes the indicator value of whether it is in the evaluation phase, and sends this information along with its activation state, color, and recruiting status. It receives a corresponding message from its neighbor, or the value if it has no neighbor in this round.
We now describe the leader selection subroutine, which comprises the first round of each epoch. This is an entirely non-interactive process in which each agent becomes a leader with some fixed probability by tossing its own coins, independently of every other agent. With overwhelming probability, if the total number of agents is , the number of leaders chosen in this phase will be . Each newly activated leader chooses a random color in and is tasked with recruiting agents and assigning them this color.
The leader selection phase, as well as the evaluation phase below, requires the ability to flip biased coins with bias , in particular , where bias refers to the probability of the coin having value 1 (so that a fair coin has bias 1/2). Recall that we assume only the ability to toss unbiased coins. We now give a simple procedure to obtain the desired bias using only bits of memory, assuming that is an even integer. More generally, we show how to obtain bias for any integer using bits of memory. We note that it is sufficient to toss coins and report 1 if they all landed heads and 0 otherwise. This requires counting to , which can be done with memory.
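The construction in this paragraph amounts to the following sketch (function name ours): flip fair coins until one lands tails or b heads have been seen, using only a counter up to b.

```python
import random

def biased_coin(b):
    """Simulate a coin of bias 2^(-b) from fair coins, using a
    counter from 0 to b, i.e. about log b bits of working memory."""
    count = 0
    while count < b:
        if random.random() < 0.5:  # fair coin: heads
            count += 1
        else:
            return 0  # any tails means the biased coin shows 0
    return 1  # b heads in a row: probability 2^(-b)
```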
We now describe the recruitment phase, which is the second phase executed by each agent in every epoch and is the main source of interaction in the protocol. The phase lasts for rounds, consisting of subphases each of length rounds. As discussed above, during this phase each leader is tasked with finding inactive agents (i.e. with ) and coloring each of them with the color of the leader. We note again that the leader will not directly meet each of these inactive agents; rather, this is done by propagation, where the leader activates some agents and each of these activates additional agents. In each subphase each active agent will attempt to recruit a single inactive agent, which will then start to recruit in the following subphase. Since there are subphases, if each attempt to recruit is successful, then a single leader will result in the activation of a total of agents.
The final phase of the algorithm is the evaluation phase, which occurs on the last round of each epoch. In this phase, each matched active agent compares its color to that of its neighbor and makes a decision of whether to replicate itself or to self-destruct.
Finally, we give the subroutine invoked at the very beginning of each round, which performs a consistency check on the round values of the agent and its neighbor. In the absence of adversarial insertions this subroutine is unnecessary, since agents will always have the correct round value in the epoch. However, it is necessary if the adversary is allowed to insert agents with an incorrect round value. If left to increase unchecked, the presence of many agents with different round values would interfere with the operation of the protocol. We prevent this by causing agents to self-destruct as soon as they encounter an agent with a different round value. However, implementing this exactly would require -bit messages, since agents would need to exchange their round numbers. To avoid this, we instead have agents exchange only an indicator variable for whether or not they are in the evaluation phase. An agent will self-destruct if it is in the evaluation phase and its neighbor is not, or if its neighbor is in the evaluation phase and it is not. This process deletes a small number of agents with the correct round number along with agents with the incorrect round number. We will show in the analysis below that this ensures that there are few agents with the incorrect round number and that only a small number of agents with the correct round number will self-destruct as a result of this procedure.
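The one-bit check can be sketched as follows (names ours): matched agents compare only their evaluation-phase indicators, and any mismatched pair self-destructs.

```python
def purge_out_of_sync(rounds, epoch_len, eval_round, pairs):
    """Return indices of agents that self-destruct this round: both
    members of any matched pair whose one-bit 'in evaluation phase'
    indicators disagree, a cheap proxy for full round agreement."""
    dead = set()
    for i, j in pairs:
        if (rounds[i] % epoch_len == eval_round) != \
           (rounds[j] % epoch_len == eval_round):
            dead.update((i, j))
    return dead
```

Note that this deletes a few in-sync agents too (those matched with out-of-sync ones), while two out-of-sync agents in the same wrong round survive each other; both effects are what the bookkeeping lemmas bound.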
4 Analysis
In this section we prove the main theorem.
Let be positive constants, where is a lower bound on the fraction of agents that is matched in each round. Then Algorithm 1 is a population stability protocol using states per agent and three-bit messages (a straightforward implementation of the protocol described above would use more states and four-bit messages; we describe below how to achieve the improved bounds stated here), guaranteeing that if the adversary inserts and deletes at most agents in each round, then with all but negligible probability the population will remain between and for any polynomial number of rounds.
For ease of presentation, the version of the protocol described above uses four-bit messages. However, we can reduce the message size to three bits, as follows. If the agent is in the evaluation phase (i.e. ) then the message must contain the values and , but need not contain . If , then the message must contain the value but not if and the value but not if . Consequently, the desired information can be encoded in only three bits.
For the memory requirements, bits are needed to store the round variable, and the other variables stored by each agent consist of eight boolean values. The invocations of the biased coin-flipping subroutine require bits of local memory. However, the subroutine is only invoked in two rounds of each epoch, the leader selection round and the evaluation round. Consequently, using additional indicator bits to specify whether an agent is in each of those two rounds, the memory used to store the round variable can be used as the helper memory for the subroutine, and so additional memory is not necessary. For , the total number of states is therefore . However, it suffices to have for any , and so we can reduce the number of states to .
Roadmap of the proof
We must show that the population size will remain close to the target value . We will do this by means of two key steps.
The first step is to show that in any single epoch the population size will be relatively stable. That is, we show that the population in the middle or end of an epoch will not be too much larger or smaller than the population at the beginning of the epoch. We achieve this by showing that irrespective of the adversary’s actions, with overwhelming probability the number of agents of each color in the evaluation phase will be concentrated around one-sixteenth of the total number of agents. This step is formalized in Lemmas 6 and 7 in Section 4.2.
The second step is to show that the population size will tend to correct itself if it has deviated too far from the target value . More precisely, we show that if the population is far from , then in expectation the population at the end of the phase will be substantially closer to than the population at the start of the phase. This step highlights a key tension of the proof, namely the difficulty of analyzing a system that contains both random components in the matching schedule and worst-case components in the adversary’s insertions and deletions. Indeed, it is not even clear a priori what it means to discuss expectation in a system with a worst-case adversary. This step is formalized in Lemma 8 in Section 4.3.
Putting these two steps together will enable us to conclude the proof of the theorem. Consider the first epoch in which the population size lies outside the interval . Since the population size does not deviate too much in a single epoch, the population at the start of the next epoch will be close to . But for each epoch in which the population remains outside the interval , in expectation the change in population will be in the direction of the target value . Considering the next epochs, a Chernoff-Hoeffding bound then implies that with overwhelming probability the population will return to the interval . Since the population does not change much in each epoch, it follows further that the population will remain inside the interval during these epochs. We will conclude that with all but negligible probability, the population will remain in the interval for any polynomial number of rounds.
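The two-step argument can be illustrated with a toy random walk (entirely our own abstraction, not the protocol itself): per-epoch changes are bounded, and carry a bias toward the target whenever the population leaves the inner interval.

```python
import random

def simulate_drift(start, target, band, drift, noise, epochs):
    """Toy model: each epoch the population moves by a bounded random
    amount, plus a correction of expected size `drift` toward `target`
    whenever it lies outside [target - band, target + band]."""
    pop, worst = start, abs(start - target)
    for _ in range(epochs):
        step = random.randint(-noise, noise)  # bounded per-epoch deviation
        if pop > target + band:
            step -= drift  # expected correction downward
        elif pop < target - band:
            step += drift  # expected correction upward
        pop += step
        worst = max(worst, abs(pop - target))
    return pop, worst
```

Even with drift smaller than the per-epoch noise, excursions beyond the band die out geometrically, mirroring the Chernoff-Hoeffding argument above.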
We now outline the remainder of the section. In Section 4.1 we prove some preliminary lemmas about the protocol. These lemmas provide us with invariants that we will need in order to prove the two key steps above. In particular, we show that nearly every agent knows the correct round in the epoch, that at least half of the agents are inactive (i.e. have ) at any point in the execution of the protocol, and that any leader selected in the first round of the epoch will succeed in recruiting a full cluster of size unless either the adversary deletes some agent in that cluster or an agent with the wrong round number interferes with the recruitment.
In Section 4.2 we prove the first of the key steps, showing that with high probability the population size does not deviate too much in a single epoch. In Section 4.3 we prove the second of the key steps, showing that if the population has drifted too far from the target value, then in expectation the population will correct itself. Finally, in Section 4.4 we conclude the proof of the main theorem.
4.1 Bookkeeping lemmas
We will first prove several bookkeeping lemmas guaranteeing that with overwhelming probability, certain invariants continue to hold throughout the execution of the protocol. We will subsequently use these invariants to prove stronger statements that will enable us to conclude the correctness of the protocol.
Recall that during the protocol, agents keep track of the current round within the epoch, that is, the round number modulo . However, the adversary is empowered to insert new agents into the system with arbitrary initial state, and in particular may insert agents with the incorrect round number. Our first lemma provides a bound on the number of agents with the incorrect round number. Recall that is a lower bound on the fraction of parties in each round that are matched. We can assume that , since the desired statement is stronger for smaller .
For any , suppose that the population size remains above for the first rounds of the protocol. Then there exists some negligible function such that conditioning on this event, with probability , all but of the agents will have the same value for variable in each of these rounds.
We prove this by induction on the round number. Initially all agents have . Assume for induction that at round all but of the agents have variable . We will show that with high probability, the same statement holds at the start of round . The adversary may add an additional agents in each of these rounds, for a total of agents. Note that each agent with the wrong round value may split at most once in this epoch of rounds, when it reaches its evaluation phase. If there are agents which differ from the majority value for variable , then the probability of such an agent being matched with another such agent in its evaluation phase is at most . It follows that with high probability, the number of agents with the wrong round value that split during this epoch is at most . Consequently at any point in this epoch of rounds, the number of agents with the wrong round value will be at most . Each agent with the wrong round value at the start of the majority evaluation phase (i.e. round ) has probability at least of being matched with an agent starting the evaluation phase, so with all-but-negligible probability, at most of these agents will not be matched with an agent with different value and will survive the round. It follows that at the start of round , at most agents have value different from . Consequently, by induction we have that with overwhelming probability, the number of agents with variable in any round is at most . Since we showed above that the number of agents with the wrong round number that can be added during the epoch is small with overwhelming probability, the lemma follows. ∎
With high probability, if the population is in the interval at the start of an epoch, at any point in the epoch, at most of the agents have .
Let be the population at the start of the epoch. With all but negligible probability the number of leaders chosen will be by a Chernoff-Hoeffding bound. Each leader may induce the activation of at most total agents. During the epoch, the adversary may insert an additional agents, which may each induce the activation of total agents. Consequently at any point in the epoch, the number of active agents will be at most . Prior to the evaluation step, the adversary can have killed at most agents. By the previous lemma, at most agents at the start of the epoch can have the wrong round value, and at most additional such agents can be introduced during the protocol, so at most agents can be killed in the consistency-check procedure. Consequently the population throughout the epoch until the evaluation phase will be at least . The conclusion follows. ∎
Suppose the population is in the interval at the start of an epoch. Then with high probability, in the last round of the epoch, every active agent entering the evaluation phase that was not inserted by the adversary during this epoch will have .
By Lemma 4, at most half of the agents are active in each round, so the probability of encountering an inactive agent in each round is at least . With overwhelming probability, in any sequence of steps an agent will encounter an inactive agent and will be able to recruit it. Applying a union bound, we have that in each cycle of steps, each of the active agents attempting to recruit will be successful in finding an inactive agent to recruit. Consequently each agent will be able to recruit the desired number of additional agents, and the lemma follows. ∎
4.2 Bounded deviation
In this section we show that with high probability, the population size does not change by too much in any single epoch.
Let be the population at the start of an epoch. With high probability, the number of agents with each color at the start of the evaluation phase will be .
Let be the population at the start of the epoch. With all but negligible probability, the number of leaders selected with color at the beginning of the epoch will be , and similarly for the number of leaders selected with color . In the absence of adversarial deletions, each leader will recruit followers with the same coin value, inducing the presence of agents of each color by the final round of the epoch.
The adversary may insert or delete agents over the course of the epoch. Each inserted agent can induce the activation of at most additional agents, and similarly each removed agent could have activated up to additional agents. Additionally, by Lemma 3, at most agents will be removed in the consistency-check procedure upon encountering an agent with a different round value. Overall the actions of the adversary can affect the number of agents of each color by at most . It follows that despite adversarial action, the number of agents of each color at the start of the evaluation phase will be with all but negligible probability. ∎
With all but negligible probability, if the population is in the interval at the start of an epoch, the population will have deviated by at most by the end of the epoch.
Let be the population at the start of the epoch. By the previous lemma, at the start of the evaluation phase the number of agents with color will be with all but negligible probability, and likewise the number of agents with color will be . We condition on these events. Since the adversary can insert at most agents in each of the rounds for a total of , and no other new agents with the correct round value can be produced until the evaluation phase, Lemma 3 implies that the total number of agents at the start of the evaluation phase is at most . Similarly, since the adversary can have directly removed at most agents, and at most agents with the correct round value may have been removed as a result of the consistency-check procedure after encountering an agent with a different round value, it follows that for , the total number of agents at the start of the evaluation phase is .
Consequently the communication graph for the evaluation phase is a random matching of size . Sample such a matching by first choosing a set of left vertices and a set of right vertices, and then associating corresponding vertices on the left and right. Let and . With high probability the number of left (respectively, right) vertices with color will be , and similarly for vertices of color .
It follows that with all but negligible probability, the number of left-vertices of color that split after being matched with a right-vertex of color is for each . Similarly, the number of left-vertices of color that self-destruct after being matched with a right-vertex of color is . Consequently, with all but negligible probability, noting that , we have that the change in population during the evaluation phase is
4.3 Correcting population drift
In this section we show that if the population has drifted too far from the target value , in expectation it will tend to correct itself.
If the population is in the interval at the start of an epoch, then for any adversarial strategy, in expectation the population will increase by by the end of the epoch. If the population is in the interval at the start of an epoch, then in expectation the population will decrease by at least by the end of the epoch.
The behavior of the system during the evaluation phase depends on the distribution of coin values of active agents in this phase. We would like to argue that each pair of clusters has its colors assigned by independent, fair coin flips. This is clearly true in the absence of an adversary. However, in our setting, adversarial insertions and deletions can bias the joint distribution of the colors of a pair of agents in different clusters. (For instance, in the first round of the epoch, the adversary can insert additional leaders all with color , or can delete several leaders that have color .) This difficulty arises because we allow the adversary to observe the internal memory of all agents, including the results of coin tosses. Nonetheless, we will argue that the adversary’s influence is limited, and that for most pairs of agents in different clusters, we can think of the joint distribution of colors as unbiased and uniform. We will label clusters as honest or adversarial, where the colors of a pair of honest clusters can be regarded as independently sampled, and no assumption is made on the colors of the adversarial clusters.
Consider any fixed adversarial strategy. Note that the adversary can insert or delete a total of no more than agents during the epoch, and consequently can influence no more than this many clusters. Any strategy of the adversary during this epoch can be emulated by deferring any deletions of colored agents (with the correct value) to the beginning of the evaluation phase, deleting instead an inactive agent. Recall that by Lemma 5, each cluster not affected by the adversary will consist of agents at the beginning of the evaluation phase. At the beginning of the evaluation phase, we allow this new adversary not only to delete the specified agent, but also to set for any subset of the other agents in the cluster. Note that any attack that could be accomplished by the original adversary can still be carried out by this new adversary that defers all of its deletions of colored agents to the evaluation phase but is subsequently allowed to modify up to agents in different clusters.
Since we now defer deletions of colored agents to the beginning of the evaluation phase, each cluster induced by a leader selected in the first round of the epoch will have its full complement of members, and consequently these clusters are indistinguishable except for their color. Consequently, as long as the adversary can modify agents in clusters of each color, it is irrelevant which specific clusters of each color are modified. Consider an arbitrary indexing of the agents before the first round of the epoch, and consider the first leaders chosen in this round. With all but negligible probability, this set will contain at least agents of each color , and so the strategy of the adversary can be carried out by manipulating only the clusters of agents in this set. Let the clusters induced by these agents be the adversarial clusters, along with the clusters induced by any agents inserted by the adversary, and let the remaining clusters be the honest clusters.
We have thus reduced to a setting with the desired property that each honest cluster has size , and the coin flips of any two honest clusters are independent. However, we are now dealing with a modified adversary that can affect a larger overall number of agents. Let be the number of agents at the start of the evaluation phase. By Lemma 6, with all but negligible probability the number of agents in honest clusters is , and the number of agents in clusters influenced by the adversary is .
Pick a random pair of vertices at the start of the evaluation phase. Then with probability both agents will belong to honest clusters, with probability one agent will belong to an honest cluster and the other to an adversarial cluster, and with probability both will belong to adversarial clusters.
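Treating the two endpoints of a random matched pair as independent draws, the three pair types occur with probabilities $h^2$, $2h(1-h)$, and $(1-h)^2$, where $h$ is the fraction of agents in honest clusters. A minimal sketch (the value 0.99 is an arbitrary placeholder for the honest fraction established above):

```python
def pair_type_probabilities(honest_fraction):
    # Treat both endpoints of a random matched pair as independent
    # draws; return P[both honest], P[one honest, one adversarial],
    # and P[both adversarial].
    h = honest_fraction
    return h * h, 2.0 * h * (1.0 - h), (1.0 - h) * (1.0 - h)

hh, ha, aa = pair_type_probabilities(0.99)  # placeholder fraction
```

When the adversarial fraction is small, the mixed and all-adversarial terms are first- and second-order small respectively, which is what lets the honest-pair drift dominate in the accounting below.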
A pair of vertices belonging to honest clusters will have the same color if they belong to the same cluster, and independently random colors if they belong to different clusters. Consequently the probability that such a pair of vertices will have the same color is . It follows that the expected change in population resulting from the matching of a pair of vertices belonging to honest clusters is
Recalling that is a fixed constant, for this quantity is , and for this quantity is negative, with magnitude .
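The drift computation for honest pairs can be made concrete under an assumed matching rule. The rule below (equal colors: both agents replicate; unequal colors: both self-destruct) and the cluster counts are illustrative assumptions, not the protocol's actual rule or parameters; the point is only that the same-color probability exceeds 1/2 by a term coming from same-cluster pairs, which controls the sign of the drift:

```python
def p_same_color(num_clusters, cluster_size):
    # Two distinct agents drawn from honest clusters: same-cluster
    # pairs always share a color; different-cluster pairs share a
    # color with probability exactly 1/2 (independent fair coins).
    n = num_clusters * cluster_size
    p_same_cluster = (cluster_size - 1) / (n - 1)
    return p_same_cluster + (1.0 - p_same_cluster) / 2.0

def expected_change_per_match(p_same, gain=2, loss=2):
    # ASSUMED illustrative rule (not necessarily the protocol's):
    # matched agents with equal colors each replicate (+2 agents);
    # unequal colors cause both to self-destruct (-2 agents).
    return p_same * gain - (1.0 - p_same) * loss
```

Under this sketch, a same-color probability of exactly 1/2 gives zero expected drift, while the same-cluster excess makes the honest-pair drift strictly positive, matching the qualitative behavior described above.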
The honest clusters consist of nearly the same number of agents of each color ( of each). It follows that the expected change in population resulting from the matching of a vertex in an honest cluster and a vertex in an adversarial cluster has magnitude . Since we make no assumption about the distribution of colors in the adversarial clusters, the matching of two vertices in adversarial clusters may change the population by as much as .
Consider a random pair of vertices at the start of the evaluation phase. For , the expected change in population resulting from matching this pair of vertices is . Since the number of matched pairs of vertices in the evaluation phase is , it follows from linearity of expectation that the expected change in population during the evaluation phase is . Similarly, for we have that the expected change in population during the evaluation phase is . The adversary can delete or insert only agents during the epoch, and Lemma 3 implies that agents will self-destruct during the procedure over the course of the epoch, so the other terms dominate, and the conclusion follows. ∎
4.4 Putting everything together
We now show that if the population size leaves the interval during an epoch, with high probability it will return to this interval during one of the next few epochs. With this final lemma, we then conclude the proof of the theorem.
Lemma 9. Consider an epoch in which the population has drifted outside the interval . With all but negligible probability, the population will once again be in the interval at the start of one of the next epochs.
Assume to the contrary, and let epoch 0 denote the first epoch after the population has left the interval . For concreteness, suppose the population has dropped below . By Lemma 7, with all but negligible probability the population is still above . For , let
be the random variable denoting the difference in population between the start of epoch and the start of epoch , and let . By Lemma 7, with all but negligible probability, each random variable is bounded in the range . Since by assumption the population is below at the start of each epoch, Lemma 8 implies that . It follows by a Chernoff-Hoeffding bound that with all but negligible probability, for any constant , , so with all but negligible probability we have that and . Consequently the population at the start of epoch exceeds . The argument is identical when the population has exceeded . ∎
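The Chernoff-Hoeffding bound invoked here is, in its standard form, Hoeffding's inequality: for independent random variables $X_1, \dots, X_T$ with $X_i \in [a_i, b_i]$ almost surely and $S = \sum_{i=1}^{T} X_i$,

```latex
\Pr\bigl[\,\lvert S - \mathbb{E}[S] \rvert \ge t\,\bigr]
  \;\le\; 2\exp\!\left(\frac{-2t^2}{\sum_{i=1}^{T}(b_i - a_i)^2}\right).
```

Since the per-epoch population changes here are influenced by an adaptive adversary and so need not be fully independent, the bounded-difference martingale variant (Azuma-Hoeffding), which has the same exponential form, is the natural tool; the per-epoch bounds from Lemma 7 supply the required range on each increment.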
We now conclude the proof of Theorem 8.
Proof of Theorem 8.
Consider any polynomial , and suppose that for some adversarial strategy the population deviates from the interval in rounds with non-negligible probability. It follows that for some pair of epochs , with non-negligible probability the population deviates from the interval for the first time in epoch after deviating from the interval for the last time in epoch . We condition on this event. Lemma 7 implies that until epoch , with all but negligible probability the population will deviate in each epoch by at most . But then Lemma 9 implies that with all but negligible probability the population will return to the interval within epochs, which is a contradiction. Consequently the population will remain between and with high probability for any polynomial number of rounds, as desired. ∎