Fundamental Performance Limitations for Average Consensus in Open Multi-Agent Systems

04/14/2020 · Charles Monnoyer de Galland et al. · Université catholique de Louvain

We derive fundamental performance limitations for intrinsic average consensus problems in open multi-agent systems, which are systems subject to frequent arrivals and departures of agents. Each agent holds a value, and the objective of the agents is to collaboratively estimate the average of the values of the agents presently in the system. Algorithms solving such problems in open systems can never converge because of the permanent variations in the composition, size, and objective pursued by the agents of the system. We provide lower bounds on the expected Mean Square Error of averaging algorithms in open systems of fixed size. Our derivation is based on the analysis of an algorithm that achieves optimal performance on some model of communication within the system. We obtain a generic bound that depends on the properties of such communication schemes, and instantiate that result for all-to-one and one-to-one communication models. A comparison between those bounds and algorithms implementable with those models is then provided to highlight their validity.

I Introduction

Multi-agent systems are a powerful tool that allows modelling and solving a wide variety of problems in various domains. Those include learning or tracking in sensor networks [ApMAS:WSN, ApMAS:TrackingWSN], vehicle coordination [ApMAS:vehicle] or formation control through consensus [ApMas:Consensus_based_formation_control:CDC2019], social phenomena such as the study of opinion dynamics [ApMAS:CTAvgOpinionDynamics, ApMAS:Opinion&Confidence], and many more. Whereas some often cited properties of multi-agent systems are their flexibility, their scalability and their robustness, most results about those systems concern asymptotic properties and assume that their composition remains unchanged throughout the whole process. This apparent contradiction is justified if the process and the arrivals and departures of agents take place at different time scales. However, both the characteristic duration of a process and the probability for an agent to leave or enter the system tend to increase with the system size. Hence, arrivals and departures of agents cannot be neglected in the study of large systems. Moreover, some processes are slow or chaotic by nature, and communications can be difficult or can naturally happen at a time scale similar to that of the arrivals and departures. Think, for instance, of vehicles sharing a stretch of road for some time before heading to different destinations in multi-vehicle systems.

We call “open multi-agent systems” systems subject to arrivals and departures of agents. Such configurations raise many challenges, as those repeated arrivals and departures imply important differences in analyses and algorithm design. Results for closed systems indeed do not easily extend to open ones, see e.g. [OMAS:Gossip, OMAS:MAX]. First, frequent arrivals and departures influence the size and state of the system, which vary as agents get in and out, making their analysis challenging. Moreover, in some scenarios, those incessant perturbations can impact the objective pursued by the agents, in addition to the system itself. It can then be required to design algorithms able to deal with a variable objective, which can prevent them from achieving the usual convergence.

I-A State of the art

Most results around multi-agent systems stand for closed systems, and those do not easily extend to open ones; there is little analysis of open multi-agent systems. Analyses of social phenomena considering arrivals and departures were performed in [OMAS:sociophysics], but those are mostly simulation-based. Convergence in open systems was also studied for the median consensus problem in [OMAS:Median_consensus:CDC2019], where the median was shown to be tracked with bounded error under some conditions, and for Gossip interactions in [OMAS:Gossip] through size-independent descriptors whose evolution was shown to be described by a dynamical system asymptotically converging to some steady state. Algorithm design has been explored for MAX-consensus with arrivals and departures in [OMAS:MAX] through additional variables, where the performance was measured by the probability for the estimates to converge if the system closes. Some applications also considered openness, such as the THOMAS architecture designed to maintain connectivity in P2P networks [ApOMAS:OpenP2P], or VTL for autonomous cars to deal with intersections [ApOMAS:VTL].

However, efficiently studying the performance of algorithms, and the analysis of open systems in general, remains a challenge. Since agents cannot instantaneously react to perturbations, and since the objective they pursue may be varying, converging to an exact result is not achievable. An alternative way to measure the performance of algorithms consists in ensuring that the error remains within some range, as the usual convergence is no longer a relevant criterion. Hence, a first step towards understanding open systems is the derivation of fundamental performance limitations, i.e., lower bounds on the performance that can be achieved by algorithms. Those limitations can then be used as a quality criterion for algorithm design in open systems, and highlight the main bottlenecks that could arise in their analysis.

I-B Contribution

We establish fundamental performance limitations for average consensus problems (namely, collaboratively estimating the average of the values owned by the agents presently in the system at some time) in open multi-agent systems. We assume information is exchanged within the system according to some communication mechanism allowing the agents to build an estimate. Moreover, in this first work we focus on open systems of fixed size by assuming that each departure is instantaneously followed by an arrival, so that the system only undergoes replacements and its size remains constant; see Section II for a formal definition. We first derive performance limitations for a generic model of the information exchange in Section III. Instantiating this model then allows defining a proper bound for any algorithm that can be implemented with it. In Section IV, we compare our bounds with the performance of the well-known Gossip algorithm [Avg:Gossip] for solving average consensus in open systems. We define two models of the information exchange that allow implementing it in order to instantiate the bound previously derived. Those models thus yield valid quality criteria for the performance of the Gossip algorithm, and for any other algorithm that can be implemented with them.

Since consensus problems are commonly building blocks for various multi-agent complex applications (such as decentralized optimization [DO:dualAvg] or several control applications [ApMAS:vehicle]), we expect the techniques we use for deriving those limitations to be extendable to more advanced tasks on open multi-agent systems. Moreover, the results presented here can be used as a starting point for the analysis of such more advanced applications.

A preliminary version of these results was presented in [OMAS:CDC2019:FPL_intrAVG]. Those results focused on the steady state, under the assumption that the system had been running for an infinitely long time, for strictly pairwise communication mechanisms, and were based on relaxations of the constraints on a given pairwise communication model. The main new contributions with respect to these preliminary results are the following. Firstly, this work provides a better bound for pairwise communication schemes (i.e., the Gossip model in Section IV-B) in comparison with the Infection model presented in [OMAS:CDC2019:FPL_intrAVG], since it relies on constraints closer to what is really observed with such a scheme. Moreover, the bounds we derive are valid at all times as well as asymptotically, whereas the preliminary results only considered the steady state. More generally, instead of using bounds on probability functions to relax the constraints defining a communication model, our approach in this work relies on the definition of knowledge sets (see Section II-C), which represent the information made potentially available to an agent through such a model: those allow for simpler and clearer proofs, and lead to a general result that can be instantiated to suit any type of communication model.

II Problem statement

II-A System description

We consider a system of initially $n$ agents labelled from $1$ to $n$. Each agent $i$ holds an i.i.d. constant value $x_i$ randomly drawn from some distribution of constant known mean, assumed here to be zero without loss of generality, and with variance $\sigma^2$. The system is initialized at time $t = 0$, where the initial agents are given their values. Starting from there, each agent leaves the system at random times following a Poisson clock of rate $\lambda_r$, and a new agent then enters the system with its own intrinsic value randomly drawn from the same distribution. Those events constitute replacements of agents, and the system is thus assumed to have a constant size $n$.

Let $\mathcal N(t)$ be the set of the labels of the agents in the system at time $t$ (with thus $|\mathcal N(t)| = n$): the goal of the agents is to collaboratively estimate the average of the values of the agents presently in the system:

$\bar x(t) = \frac{1}{n} \sum_{i \in \mathcal N(t)} x_i.$    (1)

For this purpose, we assume each agent $i$ holds an estimate $y_i(t)$ to evaluate the average $\bar x(t)$. The usual convergence is not achievable in our open system subject to replacements of agents because of the time-varying nature of the objective due to replacements, and of the time it takes for agents to react to perturbations. Hence, we study an alternative way of measuring the performance of algorithms building such an estimate through the analysis of the Mean Square Error

$C(t) = \frac{1}{n} \sum_{i \in \mathcal N(t)} \big( y_i(t) - \bar x(t) \big)^2.$    (2)

The aim of this work is to derive lower bounds on the performance of algorithms estimating the average as defined above that are implementable in the setting described in this section. We will specifically focus on the particular case of algorithms implementable in a pairwise interaction setting, allowing for instance the implementation of Gossip interactions [Avg:Gossip].
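To fix ideas, the following minimal sketch (not from the paper; the names n, lambda_r and sigma are our own notation for the size, the replacement rate and the standard deviation) simulates the replacement process and shows how the objective $\bar x(t)$ jumps at every replacement, which is precisely why the usual convergence cannot be expected.

```python
import numpy as np

# Sketch of the open system of Section II-A: n agents, each replaced at the
# times of a Poisson clock of rate lambda_r, values i.i.d. with zero mean.
rng = np.random.default_rng(0)
n, lambda_r, sigma, horizon = 10, 1.0, 1.0, 1.0

x = rng.normal(0.0, sigma, n)          # intrinsic values x_i
t = 0.0
while True:
    # next replacement in the whole system: superposition of n Poisson clocks
    t += rng.exponential(1.0 / (n * lambda_r))
    if t > horizon:
        break
    i = rng.integers(n)                # the entering agent inherits label i
    x[i] = rng.normal(0.0, sigma)      # fresh value, other variables erased
    print(f"t = {t:.2f}: agent {i} replaced, new average = {x.mean():+.3f}")
```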

II-B Simplifying assumptions

Our goal is to derive lower bounds on the performance of algorithms implementable in our setting. This can be done by studying an algorithm that achieves optimal performance under more favorable conditions than what our setting actually allows. For this purpose, we assume agents have an extended knowledge about the system, and we consider several simplifying assumptions.

We assume the agents know the size of the system $n$, the labels of the agents, and the distribution defining the values they hold (although we will see later that they do not need to know the variance $\sigma^2$ to obtain our results). They also know the dynamics of the system, namely the rate $\lambda_r$ defining the replacement events, as well as any parameter related to the way they exchange information with each other. Moreover, we assume they have access to a unique identifier that they can use, to a common universal time, and that they have unlimited memory.

We assume that when an agent replaces another agent $i$, it inherits the label $i$ (which is equivalent to simply assuming that it knows the label of the agent it replaces). A replacement can then be modelled as the (random) selection of a new value for agent $i$, and the erasure of all other variables it holds. The value $x_i(t)$ thus becomes a time-varying value that is constant between replacements.

These simplifying assumptions allow restating the MSE defined in (2), i.e., the criterion describing the performance of an algorithm implementable in the above setting, as follows:

$C(t) = \frac{1}{n} \sum_{i = 1}^{n} \big( y_i(t) - \bar x(t) \big)^2,$    (3)

where we also redefine $\bar x(t) = \frac{1}{n} \sum_{i = 1}^{n} x_i(t)$. In the sequel, we derive fundamental performance limitations by computing lower bounds on the expected value of $C(t)$.

II-C Knowledge sets

In our analysis, we use the notion of knowledge set to model the information potentially available to an agent at some time with a given model of communication. As an example, we provide in Definition 1 the description of the knowledge set of an agent $i$ performing Gossip interactions, noted $\mathcal K_i^G(t)$.

Definition 1 (Gossip knowledge set).

When an agent $i$ enters the system at time $t_0$ with the value $x_i(t_0)$, through a replacement or at the beginning of the process, its knowledge set is initialized as follows:

$\mathcal K_i^G(t_0) = \big\{ (x_i(t_0), t_0) \big\}.$    (4)

Moreover, each time agent $i$ interacts with an agent $j$ at time $t$, it updates its set as follows:

$\mathcal K_i^G(t) = \mathcal K_i^G(t^-) \cup \mathcal K_j^G(t^-) \cup \big\{ (i, j, t) \big\}.$    (5)

The description above means that, at its arrival in the system, an agent only knows itself; then at each interaction with another agent, they share and learn everything they know. The last term of the union in (5), namely $\{(i, j, t)\}$, is used to keep track of the interaction between agents $i$ and $j$ at time $t$; one can then deduce from (4) whether an entry of the set corresponds to an arrival. This knowledge set actually depicts the total amount of information agents can gather through pairwise interactions. It allows, among others, implementing the Gossip interactions.
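As a concrete illustration (our own sketch, with an assumed record layout that is not taken from the paper), the knowledge sets of Definition 1 can be represented as plain Python sets on which interactions act by union:

```python
# Gossip knowledge sets as Python sets.  Records are tuples: a value record
# ("value", agent, time, x) created at arrival, and an interaction record
# ("meet", i, j, time) appended at each pairwise exchange, cf. (4)-(5).
def init_knowledge(agent, t, value):
    # On arrival, an agent only knows its own value.
    return {("value", agent, t, value)}

def gossip_interaction(K_i, K_j, i, j, t):
    # Agents share everything they know and record the interaction.
    merged = K_i | K_j | {("meet", i, j, t)}
    return set(merged), set(merged)

K1 = init_knowledge(1, 0.0, 0.3)
K2 = init_knowledge(2, 0.0, -0.5)
K1, K2 = gossip_interaction(K1, K2, 1, 2, 0.7)
print(sorted(K1) == sorted(K2), len(K1))   # True 3: both agents hold 3 records
```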

A graphical representation of the evolution of some knowledge sets is provided in Fig. 1. In particular, Fig. 1(a) depicts the evolution of one possible realization of the Gossip knowledge sets of three agents subject to two consecutive interactions and then one replacement. Fig. 1(b) depicts the evolution of an alternative knowledge set in a similar situation, namely the Ping knowledge set (see Section IV-A, Definition 3 for a formal definition).

(a) Gossip knowledge set (Definition 1).
(b) Ping knowledge set (Definition 3, Section IV-A).
Fig. 1: Example of the evolution of a realization of both the Gossip knowledge set and the Ping knowledge set with three agents. (a): Evolution of the Gossip knowledge set (Definition 1) subject to two consecutive interactions followed by one replacement. (b): Evolution of the Ping knowledge set (Definition 3, Section IV-A) subject to two consecutive Ping updates followed by one replacement. The Ping knowledge set can be an alternative to the Gossip knowledge set in a pairwise interactions scheme. In that case, notice that it contains the Gossip knowledge set, and thus allows the computation of anything that is feasible with the Gossip knowledge set.

Standard results in computer science guarantee that knowledge sets as defined above contain all the information potentially available to an agent, and thus all the information that can be used by an algorithm in a pairwise interactions scheme. In particular, the result of any (deterministic) algorithm at any time can be computed based solely on the knowledge set. See e.g. [MISC:ViewsInAGraph_Part1, MISC:ViewsInAGraph_Julien] for a similar tool, namely views in a graph, which model the information available to a node about a graph obtained by exchanging messages.

We aim not only at studying the performance of the Gossip interactions, but also at deriving lower bounds on the performance of any algorithm for a given model of communication. This has two main interests: (i) generalizing the results to various communication schemes, and (ii) making those results directly reusable for different problems. For that purpose, we need a more general formulation of the knowledge set applicable to different communication schemes. This generic formulation is provided in Definition 2. It is defined on a collection of events that depend on the communication model, denoted by a generic superscript. For the Gossip knowledge set defined previously, those events are replacements and pairwise exchanges of information.

Definition 2 (Knowledge set function).

The knowledge set function $\mathcal K_i^{\mathcal M}(t)$ is a function defined for an agent $i$ at some time $t$, from a collection of events (depending on the communication model $\mathcal M$) to a set of information made potentially available through those events, such that one realization of $\mathcal K_i^{\mathcal M}(t)$ contains at least information of the type $(x_j(s), s)$, and satisfies the two conditions below.

For any agents $i, j$ and any times $t' \le t$:

  1. $x_j(t)$ and $\mathcal K_i^{\mathcal M}(t')$ are conditionally independent given $R_j(t', t)$, and also given $\bar B_j(t')$,

    where $R_j(t', t)$ denotes the event of a replacement of agent $j$ between times $t'$ and $t$, and $\bar B_j(t')$ the event that no information about agent $j$ lies in $\mathcal K_i^{\mathcal M}(t')$ (with $B_j(t')$ its complementary event).

  2. $P\big( R_j(t', t) \mid \mathcal K_i^{\mathcal M}(t) \big)$ is entirely characterized by the age of the most recent information about agent $j$ lying in $\mathcal K_i^{\mathcal M}(t)$,

    where we define the age of the most recent information about an agent $j$ lying in $\mathcal K_i^{\mathcal M}(t)$ (if it exists) as follows:

    $a_j(t) = t - \max\big\{ s \le t : (x_j(s), s) \in \mathcal K_i^{\mathcal M}(t) \big\}.$    (6)

The first property states that if a replacement of an agent happened, then the new value of that agent is independent of any knowledge prior to that replacement. Formally, an agent's value at some time and a knowledge set of any agent at an anterior time are conditionally independent given an event of replacement of that agent in between. The same holds if there is no information about that agent in the knowledge set: the value of that agent and any prior knowledge are then independent. This ensures that (i) no information can be deduced from the previous values held by an agent, and (ii) the absence of such information does not teach anything either, no matter the amount of time spent with no information. This property certifies the i.i.d. nature of the values held by the agents. Indeed, allowing the value of an agent to depend on its previous values is an explicit contradiction with that property, and allows agents to learn more than what our model actually allows. Similarly, if the value held by an agent depends on the time spent since the last information about it, then it becomes possible to improve the estimation of the average based only on that time, and possibly on the absence of such information. This also contradicts the way we modelled our system.

The second property ensures that, provided a knowledge set at some time, the probability for an agent to have been replaced since an anterior time is entirely characterized by the age of the most recent information about that agent in the set. That probability is thus also independent of the agent’s label and value. Satisfying this property legitimates the Poisson clock defining the replacement times, and prevents situations where each agent would have its own replacement rate for instance. Moreover, it avoids scenarios where information is ignored or deleted for reasons related to the observed value or label, which would weaken the quality of the estimation of the average. Furthermore, it provides an expression for the probability of replacement of an agent given a knowledge set.
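For concreteness (using the replacement rate $\lambda_r$, the notation introduced in Section II-A), under the Poisson replacement clocks the probability appearing in the second property is expected to take the memoryless form

$P\big( \text{agent } j \text{ replaced during the last } a \text{ time units} \mid a_j(t) = a \big) = 1 - e^{-\lambda_r a},$

which is the expression we rely on in the illustrative sketches further below.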

III General lower bound

In this section, we obtain a generic lower bound on the expected Mean Square Error given by (3) that is valid for any model of communication, and thus for any instantiation of a knowledge set as defined in Definition 2.

Provided a communication model, one can instantiate a corresponding knowledge set $\mathcal K^{\mathcal M}$. Then, for a random collection of events following that model, the age $a_j(t)$ of the most recent information defined by equation (6) in Definition 2 is a random variable. This random variable is well defined only if there exists information about an agent $j$ in the knowledge set $\mathcal K_i^{\mathcal M}(t)$. Hence, we call “pseudo-CDF” of $a_j(t)$ the function $F(a) = P\big( a_j(t) \le a,\ B_j(t) \big)$, because it allows $\lim_{a \to \infty} F(a) < 1$ when there exists no information about $j$ in $\mathcal K_i^{\mathcal M}(t)$ with nonzero probability (namely the event $\bar B_j(t)$ in Definition 2). Similarly, we call “pseudo-PDF” of $a_j(t)$ the function $f(a) = \frac{\mathrm{d}}{\mathrm{d}a} F(a)$.
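As an illustration (our own example, not taken from the paper): if information about agent $j$ reaches agent $i$ at the times of a Poisson clock of rate $\lambda_c$ started at $t = 0$, then

$F(a) = 1 - e^{-\lambda_c a} \ \text{for } a \le t, \qquad F(a) = 1 - e^{-\lambda_c t} \ \text{for } a > t,$

so that $\lim_{a \to \infty} F(a) = 1 - e^{-\lambda_c t} < 1$: the missing mass $e^{-\lambda_c t}$ is exactly the probability that no information about $j$ is available, i.e., $P(\bar B_j(t))$.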

We now provide the generic lower bound valid for any instantiation of a knowledge set in Theorem 1.

Theorem 1.

For any algorithm implementable on a knowledge set $\mathcal K^{\mathcal M}$, there holds

(7)

where $C(t)$ refers to the MSE defined in equation (3).

The result above is also valid for any instantiation of (7) where the pseudo-PDF $f$ is replaced by another pseudo-PDF that bounds it (i.e., such that it defines a random variable smaller than $a_j(t)$ in the usual stochastic order [MISC:UsualStochasticOrder]). See [OMAS:CDC2019:FPL_intrAVG] for an example application of this in the preliminary version of this result.

Corollary 1 particularizes equation (7) if the communication model does not make any distinction between the agents, and if we assume the system has been running for a long time.

Corollary 1.

With a knowledge set such that the pseudo-PDF is given by a common function $f$ for all agents, there holds

(8)

In addition, when $t \to \infty$, there holds

(9)

The use of the limit inferior in the corollary above is justified by the possibility that the limit does not exist as $t \to \infty$. The remainder of this section is devoted to proving Theorem 1.

III-A Optimal estimate description

Proposition 1 below shows that an optimal estimate is obtained by choosing the expected value of the average provided a knowledge set. The performance of any algorithm implementable on this knowledge set is thus by definition lower bounded by that of this estimate. Notice however that the implementation of an algorithm producing this optimal estimate is likely to be challenging or impossible.

Proposition 1.

For a knowledge set $\mathcal K_i^{\mathcal M}(t)$, let

$\hat y^*(t) = \mathbb E\big[ \bar x(t) \mid \mathcal K_i^{\mathcal M}(t) \big].$    (10)

Then, for any algorithm implementable on $\mathcal K_i^{\mathcal M}(t)$ producing an estimate $\hat y(t)$, there holds

$\mathbb E\big[ (\hat y(t) - \bar x(t))^2 \big] \ge \mathbb E\big[ (\hat y^*(t) - \bar x(t))^2 \big].$    (11)

Therefore, we refer to $\hat y^*(t)$ as the “optimal estimate”.

Proof.

For a realization $\kappa$ of $\mathcal K_i^{\mathcal M}(t)$, there holds

$\hat y^*(t) = \mathbb E[\bar x(t) \mid \kappa] = \arg\min_{c \in \mathbb R} \mathbb E\big[ (\bar x(t) - c)^2 \mid \kappa \big].$

Hence, the result $\hat y(t)$ of any other algorithm satisfies by definition

$\mathbb E\big[ (\bar x(t) - \hat y(t))^2 \mid \kappa \big] \ge \mathbb E\big[ (\bar x(t) - \hat y^*(t))^2 \mid \kappa \big].$

The relation above is true for any realization $\kappa$. Therefore, for any algorithm, there holds

$\mathbb E\big[ (\bar x(t) - \hat y(t))^2 \big] \ge \mathbb E\big[ (\bar x(t) - \hat y^*(t))^2 \big],$

which concludes the proof. ∎

III-B Decomposition of the contributions

The next proposition builds on the description of the optimal estimate in Proposition 1 to highlight a convenient property of the system, which allows reducing the analysis of $\mathbb E[C(t)]$ to that of the MSE of estimation of a single agent's value.

Proposition 2.

For a knowledge set with the optimal estimate (10), the expected MSE reduces to

(12)
Proof.

The estimate (10) can be rewritten as follows:

$\hat y^*(t) = \frac{1}{n} \sum_{j=1}^{n} \mathbb E\big[ x_j(t) \mid \mathcal K_i^{\mathcal M}(t) \big].$

Hence, it follows that

$\mathbb E\big[ (\hat y^*(t) - \bar x(t))^2 \big] = \frac{1}{n^2}\, \mathbb E\Bigg[ \Big( \sum_{j=1}^{n} \big( \mathbb E[ x_j(t) \mid \mathcal K_i^{\mathcal M}(t) ] - x_j(t) \big) \Big)^2 \Bigg].$

The cross-product terms of the squared sum are then nullified due to the absence of correlation between the agents' values, leading to the conclusion. ∎

III-C Single error analysis

From Proposition 2, we only need to analyze the MSE of estimation of a single agent's value to characterize $\mathbb E[C(t)]$. Let us define the following criterion:

$C_j(t) = \big( \mathbb E[ x_j(t) \mid \mathcal K_i^{\mathcal M}(t) ] - x_j(t) \big)^2,$    (13)

where the dependence of $C_j(t)$ on the agent $i$ holding the knowledge set is omitted. The expected value of $C_j(t)$ is the MSE we need to analyze. We will need the following lemma, proved in Appendix A.

Lemma 1.

Let $X, Y$ be two i.i.d. zero-mean random variables of variance $\sigma^2$, and define another random variable $Z$ as follows:

$Z = X$ with probability $1 - p$, and $Z = Y$ with probability $p$,    (14)

so that the event related to the probability $p$ is independent of $X$ and $Y$. The estimator $\hat z$ that minimizes $\mathbb E\big[ (Z - \hat z)^2 \big]$ provided the value of $X$ is given by

$\hat z = (1 - p)\, X,$    (15)

and the error is then given by

$\mathbb E\big[ (Z - \hat z)^2 \big] = \sigma^2\, p\, (2 - p).$    (16)
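A quick Monte Carlo check of the lemma as reconstructed above (our own sketch, not part of the paper):

```python
import numpy as np

# Empirically verify that the estimator (1-p)*X attains MSE sigma^2 * p * (2-p)
# when Z equals X with probability 1-p and an independent copy Y otherwise.
rng = np.random.default_rng(1)
sigma, p, m = 2.0, 0.3, 200_000
X = rng.normal(0.0, sigma, m)
Y = rng.normal(0.0, sigma, m)
Z = np.where(rng.random(m) < p, Y, X)
empirical = np.mean((Z - (1 - p) * X) ** 2)
print(empirical, sigma**2 * p * (2 - p))   # the two numbers should be close
```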

The following proposition provides an expression for the expected value of $C_j(t)$.

Proposition 3.

For a knowledge set with the optimal estimate (10), there holds

(17)

where $f$ denotes the pseudo-PDF of $a_j(t)$.

Proof.

Let us characterize $\mathbb E[C_j(t)]$.

As in Definition 2, denote by $\bar B_j$ the event that there is no information about agent $j$ in a realization of $\mathcal K_i^{\mathcal M}(t)$, and by $B_j$ its complementary event.

The random variable $a_j(t)$ is defined only if there is such information (i.e., in the event $B_j$). There holds then

$\mathbb E[C_j(t)] = P(\bar B_j)\, \mathbb E\big[ C_j(t) \mid \bar B_j \big] + \int_0^{t} f(a)\, \mathbb E\big[ C_j(t) \mid a_j(t) = a \big]\, \mathrm{d}a,$

where we remind that $f$ is the pseudo-PDF of $a_j(t)$.

In the event $\bar B_j$, the first property of Definition 2 guarantees that the MSE is given by $\sigma^2$.

In the event $B_j$, the value $x_j(t)$ given $\mathcal K_i^{\mathcal M}(t)$ is

$x_j(t) = X$ with probability $1 - p$, and $x_j(t) = Y$ with probability $p$,

where $X$ is the most recent value of agent $j$ available in the knowledge set, $Y$ is the random value taken by agent $j$ if a replacement happened, and $Y$ is thus a random variable independent of $X$ following the same distribution. The probability $p$ comes from the second property of Definition 2, and denotes the probability for a replacement of agent $j$ (that is independent of the value that it can hold) to happen.

Lemma 1 can then be applied to obtain

Moreover, there holds by definition of

Combining everything together then gives

which ultimately leads to the conclusion. ∎

This last proposition concludes the development to obtain the result (7) from Theorem 1. It is thus a lower bound on the performance achieved by any algorithm that can be implemented with an instantiation of $\mathcal K^{\mathcal M}$ through a given communication model. This result aims at being as general as possible. In the sequel, we provide several ways to instantiate it with models that allow the implementation of the Gossip interactions in order to derive lower bounds on their performance. More generally, one can instantiate this bound with any specific communication model (and thus any instantiation of $\mathcal K^{\mathcal M}$) by defining the corresponding probability function $f$ and injecting it in (7).
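The sketch below shows how such an instantiation can be evaluated numerically. It reflects our reading of Lemma 1 combined with Proposition 3, under the assumption (introduced above) that the replacement probability over an age $a$ is $1 - e^{-\lambda_r a}$, so that the optimal single-agent error becomes $\sigma^2 \big( 1 - \int_0^t f(a)\, e^{-2 \lambda_r a}\, \mathrm{d}a \big)$; the exact constants of the paper's bound (7) may differ.

```python
import numpy as np

def single_agent_error(f, lambda_r, sigma, t, steps=200_000):
    # Numerically evaluate sigma^2 * (1 - int_0^t f(a) exp(-2*lambda_r*a) da)
    # for a given pseudo-PDF f of the age of the most recent information.
    a = np.linspace(0.0, t, steps)
    da = a[1] - a[0]
    integral = np.sum(f(a) * np.exp(-2.0 * lambda_r * a)) * da
    return sigma**2 * (1.0 - integral)

# Example: exponential ages of rate lambda_c (the Ping model of Section IV-A).
lambda_r, lambda_c, sigma = 1.0, 5.0, 1.0
f_ping = lambda a: lambda_c * np.exp(-lambda_c * a)
print(single_agent_error(f_ping, lambda_r, sigma, t=50.0))
# As t -> infinity this tends to sigma^2 * 2*lambda_r / (lambda_c + 2*lambda_r).
```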

IV Instantiations of the bound

In this section, we provide two instantiations of the lower bound on $\mathbb E[C(t)]$ given in Theorem 1 by defining two models of communication: the first model relies on the presence of a central unit and on the absence of memory erasure at replacements, and leads to a conservative bound; the second one relies on pairwise exchanges of information, and leads to a tighter bound. We then provide a comparison between those two bounds and the performance of the Gossip interactions, which can be implemented on the corresponding models.

Iv-a First model: Ping updates

We consider an instantiation of the knowledge set such that, at random times following a Poisson clock of rate $\lambda_c$, an agent obtains the exact state of the whole system at that time. We call such events “Ping updates”. Moreover, we assume that agents do not forget what they know at replacements.

Definition 3 (Ping knowledge set).

At the beginning of the process at time $t_0 = 0$, when agent $i$ is assigned the value $x_i(t_0)$, a realization of $\mathcal K_i^P(t_0)$ is initialized as follows:

(18)

At a replacement of agent $i$ at time $t$, a new value $x_i(t)$ is assigned to $i$ and its knowledge set updates as follows:

(19)

Each time an agent $i$ performs a Ping update at time $t$, its knowledge set updates as follows:

(20)

A graphical representation of the evolution of one possible realization of this knowledge set for three agents subject to two Ping updates followed by a replacement is provided in Fig. 1(b) in Section II-C. Moreover, one can show that it satisfies the two properties of knowledge set functions stated in Definition 2.

This model, to which we refer as the “Ping model”, corresponds to the existence of a central unit that perfectly knows the state of the system at all times. A Ping update then corresponds to a request performed by an agent to that central unit: at the time it performs this request, an agent thus exactly knows the value of the average $\bar x(t)$.

Since information circulates instantaneously at a Ping update, and since memory losses are neglected, the only actual challenge to handle with the Ping model is the possibility for an agent to have been replaced since the last update, and thus the presence of outdated information. The bound derived with this model is thus conservative.

To apply Theorem 1 and obtain our bound, we first need to compute the pseudo-PDF $f$ of this model.

Proposition 4.

With the Ping model defined by the knowledge set $\mathcal K^P$, the age $a_j(t)$ admits the following pseudo-PDF for any $a \in [0, t]$:

$f(a) = \lambda_c\, e^{-\lambda_c a}.$    (21)
Proof.

With the Ping model, the age of the most recent information about an agent held by another one is the age of the last time the latter performed a Ping update. Since those happen according to a Poisson clock of rate $\lambda_c$, there holds

$P\big( a_j(t) > a \big) = e^{-\lambda_c a} \quad \text{for any } a \in [0, t].$

The conclusion follows from $f(a) = \frac{\mathrm{d}}{\mathrm{d}a} \big( 1 - e^{-\lambda_c a} \big)$. ∎

Since the pseudo-PDF obtained in Proposition 4 depends neither on the agents nor on the time $t$, the lower bound on $\mathbb E[C(t)]$ can be derived by applying Corollary 1.

Theorem 2.

For any algorithm that can be implemented on $\mathcal K^P$, there holds

(22)
Proof.

The conclusion follows from the instantiation of the bound in Corollary 1 with the pseudo-PDF $f$ from Proposition 4. The bound is then obtained from the integration that follows:

A few algebraic operations conclude the proof. ∎
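For reference, under our reading of the bound (with the replacement rate $\lambda_r$ and the Ping rate $\lambda_c$, notation ours), the integral appearing in the proof evaluates to

$\int_0^{t} \lambda_c\, e^{-\lambda_c a}\, e^{-2 \lambda_r a}\, \mathrm{d}a = \frac{\lambda_c}{\lambda_c + 2 \lambda_r} \Big( 1 - e^{-(\lambda_c + 2 \lambda_r) t} \Big),$

which tends to $\lambda_c / (\lambda_c + 2 \lambda_r)$ as $t \to \infty$, leaving an asymptotic error term proportional to $2 \lambda_r / (\lambda_c + 2 \lambda_r) = 2 / (\rho + 2)$ with the rate ratio $\rho = \lambda_c / \lambda_r$.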

Taking the limit as $t \to \infty$, or similarly by applying the second result of Corollary 1, we get the following corollary.

Corollary 2.

For any algorithm that can be implemented on $\mathcal K^P$, there holds

(23)
Fig. 2: Time-dependent bound derived with the Ping model (22) with 10 agents. The blue line is the MSE at the initialization of the system ($t = 0$), the red line is the bound obtained when $t \to \infty$ (23), and the dashed black lines are the bounds obtained at some times in between.

We depict in Fig. 2 the bound (22) in terms of the rate ratio $\rho = \lambda_c / \lambda_r$ (i.e., the average number of Ping updates experienced by an agent before leaving the system) for 10 agents at different times after initializing the system, until it converges as $t \to \infty$. When the system is initialized, the only information available to the agents is their own value, and the MSE is exactly the initialization error, no matter the rate ratio. As time passes, this MSE tends to be maintained if the communications are rather rare ($\rho \ll 1$), since the amount of available information tends to remain the same. By contrast, the MSE tends to decay to zero as the communications become relatively more frequent ($\rho \gg 1$), with the system behaving progressively more like a closed system as $\rho$ increases. This behavior is enhanced as time passes, to ultimately converge to a time-independent bound as $t \to \infty$.

Interestingly, the only dependence of the bound on the size of the system lies in the first factor of (22), namely the error at the initialization of the system. The decay of that MSE is then entirely defined by the communication and replacement rates. As $t \to \infty$, that decay is characterized only by the rate ratio $\rho$, i.e., the expected number of Ping updates experienced by an agent before leaving the system.

IV-B Second model: pairwise interactions

We consider a second instantiation of the knowledge set which relies on strictly pairwise interactions happening according to Poisson clocks, so that on average a given agent interacts $\lambda_c$ times per unit of time. At those interactions, agents are assumed to exchange everything they know.

This actually corresponds to the Gossip knowledge set provided in Definition 1 in Section II-C. A graphical representation is also provided in Fig. 1(a) for a realization of this knowledge set for three agents subject to two interactions followed by a replacement. One can show that it satisfies the two properties of knowledge set functions stated in Definition 2.

This model raises more challenges than the Ping model does. In addition to handling outdated information from unknown replacements, it accounts for memory losses as well as for the propagation time of information, which is not instantaneously transmitted to all the agents when two agents interact. The bound that we derive with this model is thus tighter than that derived with the Ping model, and is also expected to be closer to the performance achieved by most realistic algorithms implementable in a one-to-one communication scheme.

To apply Theorem 1 and obtain our bound, we first need to compute the pseudo-PDF $f$ of this model.

Proposition 5.

With the Gossip model defined by the knowledge set $\mathcal K^G$, the age $a_j(t)$ admits the following pseudo-PDF for any $a \in [0, t]$:

(24)

where , and where is a tridiagonal matrix with

  • ;

  • ;

  • .

Proof.

The pseudo-CDF is defined as

Denote by $N_a$ the number of agents holding at least one piece of information about the given agent that is more recent than $t - a$ at time $t$; then there holds

The factor comes from the absence of distinction between the agents during the interactions and replacements.

The value $N_a$ is constant between events, and potentially modified at replacements and interactions. Indeed, this value

  • increases by one through interactions (when an agent holding such information interacts with one that does not);

  • decreases by one through replacements (of agents holding such information).

The value $N_a$ thus follows a continuous-time Markov chain, and more precisely a birth-death process.

Denote such that for , where the dependence on is omitted. Then, from the birth-death Markov process properties, there holds

with the transition rate matrix of the process defined as the tridiagonal matrix such that

  • ;

  • ;

  • .

The solution of this ODE system is given by

with

Re-injecting the above expression into that of the pseudo-CDF leads to the following

where . One has then

which concludes the proof. ∎

It is interesting to notice that this is exactly the behavior of an SIS infection process, where the spreading disease is the information about a given agent. New infections occur when an agent knowing that information interacts with one that does not, and healing happens through replacements of agents that know the information.
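The birth-death computation used in the proof can be sketched numerically as follows. The transition rates below are illustrative placeholders (our assumption, mimicking the SIS description above); the exact rates of Proposition 5 are not reproduced here.

```python
import numpy as np
from scipy.linalg import expm

n, lambda_c, lambda_r = 10, 5.0, 1.0

def birth(k):
    # assumed rate: an agent holding the information meets one that does not
    return lambda_c * k * (n - k) / (n - 1)

def death(k):
    # assumed rate: an agent holding the information is replaced
    return lambda_r * k

# Tridiagonal generator Q of the continuous-time Markov chain on k = 0, ..., n.
Q = np.zeros((n + 1, n + 1))
for k in range(n + 1):
    if k < n:
        Q[k, k + 1] = birth(k)
    if k > 0:
        Q[k, k - 1] = death(k)
    Q[k, k] = -Q[k].sum()

p0 = np.zeros(n + 1)
p0[1] = 1.0                       # initially a single agent holds the information
for a in (0.1, 0.5, 1.0, 2.0):
    p = p0 @ expm(Q * a)          # distribution of k after an age a
    print(f"a = {a}: P(k = 0) = {p[0]:.3f}, E[k] = {p @ np.arange(n + 1):.2f}")
```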

Since the pseudo-PDF obtained in Proposition 5 depends neither on the agents nor on the time $t$, the lower bound on $\mathbb E[C(t)]$ can be derived by applying Corollary 1.

Theorem 3.

For any algorithm that can be implemented on $\mathcal K^G$, there holds

(25)

where , and where is the tridiagonal matrix such that , and .

Proof.

We inject the pseudo-PDF $f$ from Proposition 5 into equation (8) from Corollary 1 in order to derive the bound. The integral then becomes

Using the commutativity between and , there holds

One can show that as long as , then is invertible, and one has

Injecting that result into (8) concludes the proof. ∎
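Our reading of the matrix integral invoked in this proof (with the tridiagonal generator denoted $Q$ and the replacement rate $\lambda_r$, notation ours) is the standard identity

$\int_0^{\infty} e^{Q a}\, e^{-2 \lambda_r a}\, \mathrm{d}a = \int_0^{\infty} e^{(Q - 2 \lambda_r I) a}\, \mathrm{d}a = \big( 2 \lambda_r I - Q \big)^{-1},$

which is valid as long as every eigenvalue of $Q$ has real part strictly smaller than $2 \lambda_r$, so that $2 \lambda_r I - Q$ is indeed invertible.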

Taking the limit as $t \to \infty$, or similarly by applying the second result of Corollary 1, we get the following corollary.

Corollary 3.

For any algorithm that can be implemented on $\mathcal K^G$, there holds

(26)

The following corollary provides an analytic approximation of (26). It gets more accurate as the rate ratio $\rho$ increases, and as the size of the system $n$ decreases. Ultimately, as the approximation gets more accurate, it becomes a lower bound on (26), and thus a valid bound on the performance of algorithms implementable on $\mathcal K^G$.

Corollary 4.

For any algorithm that can be implemented on $\mathcal K^G$, there holds

(27)

where , and where is some known polynomial term in .

Proof.

The proof is available in Appendix B. ∎

Fig. 3: Time-dependent bound derived with the Gossip model (25) with 10 agents. The dotted blue line is the MSE at the initialization of the system, the plain red line is the bound (26) obtained when $t \to \infty$, the thick yellow dashed line is the approximation (27) of that bound, and the black dashed lines are the bounds at some times in between.

We depict in Fig. 3 the bound (25) in terms of the rate ratio $\rho$ (i.e., the expected number of interactions experienced by an agent before leaving the system) in the same settings as for the Ping model: 10 agents at different times from initialization until convergence to a time-independent bound as $t \to \infty$. The same preliminary observations as for the Ping model hold. The MSE equals the initialization error at $t = 0$, and remains around that value for small values of $\rho$. By contrast, the MSE decays to zero as $\rho$ gets large and as the behavior of the system becomes that of a closed system.

However, the size of the system here also has an impact on the decay of the MSE, in contrast to what was observed with the Ping model. This can be attributed to the propagation time of the information within the system, and to some extent to the memory losses at replacements, which are accounted for by the Gossip model and not by the Ping model. Hence, the bound tends to be less conservative than with the Ping model.

We also depict the approximation (27) from Corollary 4 of the bound, which is expected to get more accurate for large values of $\rho$ as $t \to \infty$. With 10 agents, it appears that the approximation becomes satisfyingly accurate once the rate ratio is large enough.

IV-C Gossip interactions performance analysis

In this section, we provide a comparison between the performance of the Gossip interactions and the bounds derived in the two previous sections. The Gossip interactions algorithm is the simplest version of Gossip averaging [Avg:Gossip]: an agent takes its own value as initial estimate at its arrival in the system, and each time two agents $i$ and $j$ interact, their estimates update as follows:

$y_i(t) = y_j(t) = \frac{y_i(t^-) + y_j(t^-)}{2}.$    (28)

Since Gossip interactions rely on pairwise interactions, they can by definition be implemented on $\mathcal K^G$. Similarly, a Ping update as defined in Section IV-A contains all the information currently in the system, including that necessary to perform Gossip interactions. Hence, by assuming those updates happen each time two agents interact, Gossip interactions can also be implemented on $\mathcal K^P$ (with the Ping rate equal to the interaction rate). The bounds derived with those models are thus valid for the performance of the Gossip interactions. To avoid confusion, we refer to the second bound (in a pairwise interactions scheme) as the “SIS bound”.
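A minimal sketch of the kind of simulation used for this comparison (our own code and notation; the exact rates and event counts of the reported experiments are not reproduced here):

```python
import numpy as np

# Gossip averaging (28) in the open system of Section II-A: replacements at
# per-agent rate lambda_r, pairwise interactions tuned by lambda_c; an entering
# agent restarts from its own value as estimate.
rng = np.random.default_rng(2)
n, lambda_r, lambda_c, sigma = 10, 1.0, 5.0, 1.0
x = rng.normal(0.0, sigma, n)    # intrinsic values
y = x.copy()                     # Gossip estimates

t, horizon, errors = 0.0, 200.0, []
total_rate = n * (lambda_r + lambda_c)
while t < horizon:
    t += rng.exponential(1.0 / total_rate)
    if rng.random() < lambda_r / (lambda_r + lambda_c):
        i = rng.integers(n)                          # replacement of agent i
        x[i] = rng.normal(0.0, sigma)
        y[i] = x[i]
    else:
        i, j = rng.choice(n, size=2, replace=False)  # pairwise interaction
        y[i] = y[j] = (y[i] + y[j]) / 2.0            # update rule (28)
    errors.append(np.mean((y - x.mean()) ** 2))

print("empirical MSE of the Gossip interactions:", np.mean(errors[len(errors) // 2 :]))
```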

Fig. 4: Performance of the Gossip interactions (dotted blue line) obtained through simulations, compared with both the Ping bound (plain red line) and the SIS bound (yellow dashed line), for several values of the system size $n$. The simulation was performed over a large number of events (either replacements or communications) and averaged over several realizations.

In Fig. 4, we compare both the Ping and SIS bounds with the performance of the simulated Gossip interactions for several numbers of agents. We provide those bounds in a situation where we assume the system has been running for a long time. The bounds that are depicted are thus the bounds (23) and (26), respectively from Corollaries 2 and 3.

As expected from the derivation of the bounds, it appears that the SIS bound (25) is tighter than the Ping one (22), since it is related to a more constrained communication scheme. More precisely, the SIS bound stands for the exact communication model on which the Gossip interactions are defined. It is thus closer to the actual performance of the Gossip interactions than the Ping bound.

Nevertheless, there is still an important gap between the performance depicted by both bounds and that observed for the simulated Gossip interactions. Interestingly, the size of the system has barely any impact on the performance of the Gossip interactions, especially for small values of $\rho$. By contrast, the bounds depict a smaller MSE as the system size increases. This is highlighted by Fig. 5, which depicts the SIS bound and the performance of the Gossip interactions with respect to the size of the system for several values of $\rho$. This highlights how knowing the size of the system (which is assumed to be the case for the bounds, but is not for the Gossip interactions) can impact the performance of algorithms. Indeed, the MSE depicted by the bounds is scaled by the size of the system, whereas that size has no influence on the MSE of the Gossip interactions.

Fig. 5: Performance of the Gossip interactions (plain line) and of the SIS bound (25) (dashed line) with respect to the size of the system for several values of the rate ratio $\rho$.

Nonetheless, the behavior of the performance of the Gossip interactions is qualitatively well captured by both bounds. This is surprising, especially because the Gossip interactions algorithm is particularly naive: it relies on a single variable, and makes use neither of identifiers nor of any information related to the openness of the system or to the distribution of the intrinsic values of the agents, whereas our bounds do. This questions the usefulness of such parameters in the design of efficient algorithms, and the exact impact they have on performance.

V Conclusion

In this work, we analyzed open multi-agent systems, where agents can join and leave the system during the process. We highlighted several challenges arising from that property, in particular the impossibility for algorithms to converge due to the variations of size, state and objective experienced by the system, and to the time it takes for information to propagate.

We focused on the derivation of lower bounds on the performance of algorithms trying to perform average consensus in open multi-agent systems of constant size. Those were obtained through the analysis of the Mean Square Error of an estimate that achieves optimal performance on some communication model defining the way information is exchanged within the system. We derived a general expression for that bound that depends on the communication model that is considered. This bound was then instantiated with two such models to obtain two bounds valid for each of these models. Those were then compared with the performance of the Gossip interactions algorithm that can be implemented on the corresponding models.

This work intends to open the way for many further works. The methodology used for deriving our results can be applied to more complex objectives (including decentralized optimization [DO:dualAvg] or formation control [ApMas:Consensus_based_formation_control:CDC2019]), or to more structured or restrictive communication models for the interactions between the agents. Moreover, investigating the actual impact of parameters that were assumed to be known in the derivation of our bounds is an interesting follow-up: in particular, the impact of identifiers is questionable in view of the performance of an anonymous algorithm compared to our bounds. Finally, extending our analysis to continuous-time algorithms proves to be challenging, as our derivation relies on essentially discrete-time ideas.

References