Among the most cited properties of multi-agent systems are their flexibility, their scalability, and their robustness. Yet, most results on multi-agent systems concern asymptotic properties under the convenient assumption that their composition remains unchanged. The increasing size of such systems challenges this assumption, as it implies slower processes and higher probabilities of arrivals and departures, making those non-negligible. The assumption is also challenged by the chaotic nature of some systems, where communications can be difficult, or happen at a time-scale comparable to that of the arrivals and departures: in collaborative multi-vehicle systems, for instance, vehicles may share a stretch of road before heading to different destinations.
All these reasons motivate the emerging study of open multi-agent systems. Results about closed systems do not easily extend to open ones: repeated arrivals and departures imply important differences in the design and analysis of such systems, and raise several challenges, see e.g. [6, 7]. First, with the frequent arrivals and departures of agents, the size of an open system changes over time, making the analysis of its state challenging. Moreover, incessant perturbations impact not only the state but, in some cases, also the objective pursued by the agents of the system: algorithms then have to adapt to that variable objective, making their design challenging, and the usual notion of convergence cannot be achieved anymore.
I-A State of the art
There is little theoretical analysis of open multi-agent systems, as most multi-agent results rely on the assumption that systems are closed. Some results have nevertheless considered arrivals and departures, such as the simulation-based analyses of social phenomena performed in . It has also been shown in  that systems subject to gossip interactions in an open setting can be analyzed through size-independent descriptors, whose evolution, described by a dynamical system, is shown to asymptotically converge to some steady states. Moreover, algorithm design has been explored in  for MAX-consensus problems with arrivals and departures through additional variables, where performance was measured by the probability for the estimate to eventually converge if the system closes. Similarly, the THOMAS architecture was designed to maintain connectivity in P2P networks . Openness was also considered in applications such as VTL (virtual traffic lights) for autonomous cars to deal with intersections .
Nevertheless, up to now, efficiently studying the performance of algorithms and analyzing open systems remains a challenge in general. As agents cannot instantaneously react to perturbations, obtaining an exact result is often out of reach. Hence, the goal should be to remain near a time-varying objective, and the usual notion of convergence is no longer relevant to study since it cannot be achieved. A step towards understanding open systems is the derivation of fundamental performance limitations: lower bounds on the performance that any algorithm can possibly achieve. Through those limitations, one can obtain a quality criterion for algorithm design in open systems, but also get a better understanding of the possible bottlenecks that could arise.
We establish fundamental performance limitations in open multi-agent systems for intrinsic averaging consensus problems (i.e. estimating the average of all intrinsic values owned by the agents present in the system at that time), where information is exchanged between agents through random pairwise communications. In this analysis, we focus on systems of fixed size: we assume that each departure of an agent is instantaneously followed by an arrival, so that only replacements occur and the system size remains constant, see Section II for a formal definition. The approach used for deriving the fundamental limitations is detailed in Section III, and consists of evaluating the performance of an algorithm that is provably optimal under favorable settings where agents have access to more resources than decentralized problems typically allow. This performance depends on a complex distribution related to information propagation. Bounds on this distribution were obtained by considering relaxed conditions for the information exchange, namely the Ping model in Section IV and the Infection model in Section V, to properly define the performance limitations.
Since consensus problems are commonly a building block for various complex multi-agent applications (such as decentralized optimization  or several control applications ), we expect the techniques we use for deriving those limitations to be extendable to more advanced tasks on open multi-agent systems.
II Problem statement
II-A System description
We consider a set of n agents, labelled from 1 to n. Each agent i holds an intrinsic value x_i, drawn i.i.d. from a distribution which we assume without loss of generality to have zero mean and unit variance. This value remains constant except at a replacement, which we model as the complete erasure of the agent’s memory and the attribution of a new value drawn from the same distribution. These replacements happen according to a Poisson clock defined by an individual replacement rate λ_r, so that on average nλ_r replacements happen in the whole system per unit of time.
Agents interact via pairwise communications: every pair of agents interacts at random times determined by a Poisson clock with rate λ_c/(n−1), so that each agent is involved in communications at rate λ_c. There are thus on average nλ_c/2 communications taking place in the system per unit of time. Agents are deterministic and have unlimited memory and computation power, and there is no restriction on the size or nature of the messages they can exchange. They have a unique identifier that they can use, and they have access to a common universal time. The way we model replacements as a memory erasure also means agents know the correspondence between the agents having left and those having replaced them, since they share the same label. We assume moreover that they know the number of agents n, the parameters λ_r and λ_c, and the distribution of the x_i (although we will see that the results would be the same if they knew only the expected value of x_i, assumed here to be 0). Agents thus have access to significantly more information than in many works on multi-agent systems, but our lower bounds on the algorithm performance will of course also apply to these more usual situations, since our setting allows implementing any algorithm that could be implemented under more restrictive conditions.
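Under this reconstructed notation (n agents, per-agent replacement rate λ_r, per-agent communication rate λ_c), the event process described above can be sketched as follows; the function name and the pairwise-rate normalization λ_c/(n−1) are assumptions of this sketch, not taken from the original text:

```python
import random

def simulate(n=20, lam_r=0.1, lam_c=1.0, horizon=100.0, seed=0):
    """Event-driven sketch of the open system: each agent is replaced at rate
    lam_r, and each of the n*(n-1)/2 pairs communicates at rate lam_c/(n-1),
    i.e. each agent communicates at rate lam_c.  Returns the chronological
    list of events."""
    rng = random.Random(seed)
    # superposition of all Poisson clocks: n*lam_r replacements + n*lam_c/2 comms
    total_rate = n * lam_r + n * lam_c / 2.0
    t, events = 0.0, []
    while True:
        t += rng.expovariate(total_rate)  # time to next event
        if t > horizon:
            return events
        if rng.random() < n * lam_r / total_rate:
            events.append(("replace", t, rng.randrange(n)))
        else:
            i, j = rng.sample(range(n), 2)  # uniform pair <=> equal pairwise rates
            events.append(("comm", t, i, j))
```

With the default parameters one expects about n·λ_r·horizon = 200 replacements and n·λ_c·horizon/2 = 1000 communications per run.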
In intrinsic averaging, agents try to estimate the average of the intrinsic values of the agents present in the system at that time, denoted x̄(t). For that purpose, every agent i maintains its own estimate x̂_i(t) of that average based on its knowledge about the system, and updates it in continuous time. Under ideal conditions, one would expect that estimate to equal x̄(t).
This is not achievable in open systems because of the variable objective and the delays in transmitting information through the system, leading to the absence of usual convergence. We thus need a quantitative measure of the performance of an algorithm solving this problem in open systems, and choose the classical Mean Square Error (MSE) of the estimates of all the agents at time t, presented in equation (1).
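The display of equation (1) is missing from this version; based on the definitions above, the criterion is presumably the mean square error over the agents, which in our reconstructed notation (with x̂_i(t) agent i's estimate and x̄(t) the current average; a reconstruction, not the original display) would read

```latex
C(t) \;=\; \mathbb{E}\!\left[\frac{1}{n}\sum_{i=1}^{n}\bigl(\hat{x}_i(t)-\bar{x}(t)\bigr)^{2}\right].
```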
In this analysis, we focus on the steady-state error: we assume the system has been running since t = −∞, so that the effect of initial conditions has vanished, and the error can be considered independent of time (one can verify that our processes remain well defined). We thus derive fundamental performance limitations for all algorithms that can be implemented in our setting as a (time-invariant) lower bound on (2).
II-C Preliminary notions
Before stating our first result, we need to introduce certain notions related to the information available to an agent when computing its estimate, and to the age of its information.
We first formalize in the next definition all the information about events and values to which an agent could possibly have access, and which can thus influence its estimate x̂_i(t).
At its arrival at time t, an agent i whose value is set to x_i owns a knowledge set K_i(t) such that K_i(t) = {(i, x_i, t)}. If agents i and j interact at time t, then K_i(t) = K_j(t) = K_i(t⁻) ∪ K_j(t⁻) ∪ {(i, j, x_i(t), x_j(t), t)}.
The last term of the union defining an interaction denotes that i and j have interacted at time t and confirms their values at that time. (This confirmation is included for simplicity, but is actually redundant, as the values at that time could be obtained from K_i(t⁻) and K_j(t⁻).) In other words, the knowledge sets after an interaction consist of the union of the two knowledge sets, to which are added the information about the interaction and the confirmation of the values. Standard results in distributed computation show that any estimate that an agent i can compute in our setting can actually be computed based only on K_i(t) and the time t.
Observe that K_i(t) may contain several values for a same agent j, obtained at different times. However, we will see later that only the most recent known values about the other agents are needed to build the estimate x̂_i(t).
The most recent value known by agent i about agent j at time t is denoted x_ij, where t_ij is the time at which that most recent value was first obtained. We denote the age of that information by T_ij = t − t_ij.
By convention, if no value about j lies in K_i(t), we set x_ij to 0 and the corresponding age to ∞. Moreover, the estimate of an agent about itself is always correct, and the corresponding age of the information is 0.
In steady state, and due to the symmetry between the agents, the distribution of T_ij is independent of i, j and t. Hence, all agents share a common cdf and pdf for that variable, respectively denoted F and f. Moreover, with a small abuse of language, we say that another pdf f̄ bounds f when the corresponding cdf satisfies F̄(t) ≥ F(t) for all t, i.e., when the corresponding random variable is dominated by T_ij in the usual stochastic order .
III Bound in terms of the information spreading mechanism
We present in Theorem 1 a general lower bound on (2) depending on the way information spreads in the system. To properly define a bound, we will thus need to find relaxations of that information spreading mechanism to instantiate the expression.
The bound derived in the above theorem is obtained by studying an optimal algorithm in the setting of Section II (Section III-A). It relies on the decomposition of the contributions of all the agents to the estimation of the average, to obtain the MSE conditional on some knowledge (Sections III-B, III-C and III-D), and then on the derivation of a steady state through the analysis of the way information spreads in the system (Sections III-E and III-F).
III-A Optimal algorithm definition
We first build an algorithm that is optimal for solving intrinsic averaging based on a knowledge set.
Conditional on the knowledge set K_i(t), algorithm (4) is optimal since it minimizes the MSE:
Hence, since any other algorithm only depends on K_i(t) and t, the estimate of any such algorithm is such that
Considering the expected value over all knowledge sets K_i(t), it follows:
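The displays of this subsection did not survive extraction; based on the conditional-expectation argument and the per-agent estimates derived in Section III-C, a plausible reconstruction of the optimal estimator (4), in the notation of Section II-C, is

```latex
\hat{x}_i(t)
\;=\; \mathbb{E}\bigl[\,\bar{x}(t)\,\big|\,\mathcal{K}_i(t)\bigr]
\;=\; \frac{1}{n}\sum_{j=1}^{n} e^{-\lambda_r T_{ij}}\, x_{ij},
```

where K_i(t), T_ij and x_ij respectively denote the knowledge set, the age of the most recent information, and the most recent known value.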
III-B Individual contribution
We show in the following proposition that expression (5) conveniently reduces to the analysis of a single agent.
With the optimal algorithm (4), one has
It follows that the error given K_i(t) is written
The absence of correlation between the agents’ values finally allows the cross-product terms of the squared sum to vanish, yielding the final expression.
III-C Single agent estimate
We can then explicitly write an expression for the estimate of the value of a single agent j.
Hence, the estimate only depends on the most recent information about the agent (note that the time-dependence of the variables is removed to lighten the notations).
The most recent information agent i knows about agent j is given by Definition 2, and i has no information about whether j was replaced since then. Hence, denoting by R the event that j has been replaced since that information was obtained, and by R̄ the event that it has not,

x̂_ij = E[x_j | K_i(t)] = P(R) E[x_j | R] + P(R̄) E[x_j | R̄].

By definition, E[x_j | R] = 0 and E[x_j | R̄] = x_ij. Then, from Poisson properties, P(R̄) = e^{−λ_r T_ij}, and the conclusion follows from the definition of the age of the most recent information: x̂_ij = e^{−λ_r T_ij} x_ij.
The result above also holds when an agent estimates its own value or that of an agent for which it has no information, leading respectively to x̂_ii = x_i and to x̂_ij = 0.
III-D Individual error
The MSE when estimating the value of a single agent with algorithm (4), conditional on the knowledge set K_i(t), is
and is thus entirely characterized by the age of the most recent information about that agent.
Denoting by R the event of at least one replacement of the agent during the age of the information and by R̄ the event of no replacement, we can develop (8) with a case-by-case analysis:
We develop each term to obtain the final result:
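In our reconstructed notation (unit variance, optimal estimate e^{−λ_r T} x_ij), that final result for the individual error given an age T works out to 1 − e^{−2λ_r T}. A minimal Monte Carlo check of this reconstructed expression, assuming Gaussian values for concreteness:

```python
import math, random

def individual_error_mc(lam_r=0.5, T=1.0, trials=200_000, seed=1):
    """Empirical MSE of the optimal single-agent estimate p * x_old with
    p = exp(-lam_r * T): the agent is replaced during the age interval
    (receiving a fresh zero-mean unit-variance value) with probability 1 - p."""
    rng = random.Random(seed)
    p = math.exp(-lam_r * T)  # probability of no replacement during the age T
    se = 0.0
    for _ in range(trials):
        x_old = rng.gauss(0.0, 1.0)                    # value known with age T
        x_now = x_old if rng.random() < p else rng.gauss(0.0, 1.0)
        se += (x_now - p * x_old) ** 2                 # optimal estimate: p * x_old
    return se / trials
```

The empirical value should match the reconstructed theory 1 − e^{−2λ_r T} up to Monte Carlo noise.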
This last result allows us to write the following:
III-E Global expected value
We have developed an expression for the error in terms of the age of the most recent information about the other agents in (10). We can now obtain an expression for criterion (2) by computing the expected value of that result.
The MSE defined in (2) is given by
where we recall that F and f denote the common cdf and pdf of the age of the information introduced in Section II-C. The MSE is thus entirely characterized by the distribution of the age of the information, and knowing f leads to a proper bound.
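With the reconstructed unit-variance individual error 1 − e^{−2λ_r t} from Section III-D, the resulting criterion can be evaluated numerically for any candidate age pdf; a sketch in which the function name and the (n−1)/n² normalization are reconstructions, not the original display:

```python
import math

def bound_from_age_pdf(pdf, n, lam_r, t_max=200.0, steps=200_000):
    """Numerically evaluate (n-1)/n^2 * E[1 - exp(-2*lam_r*T)] for an age pdf,
    via the trapezoidal rule on [0, t_max] (assumes negligible mass beyond)."""
    h = t_max / steps
    acc = 0.0
    for k in range(steps + 1):
        t = k * h
        w = 0.5 if k in (0, steps) else 1.0  # trapezoid end-point weights
        acc += w * pdf(t) * (1.0 - math.exp(-2.0 * lam_r * t))
    return (n - 1) / n**2 * acc * h
```

For instance, feeding it an exponential age pdf of rate λ_c reproduces the closed form obtained for the Ping model of Section IV.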
III-F Relaxation of the communication process
The previous result is actually that of Theorem 1 where the pdf used is exactly f. Since this pdf turns out to be hard to compute, we show in the next proposition that a pdf bounding f as defined in Section II-C leads to a proper lower bound on (11), which concludes the development of Theorem 1. In the next two sections, we will then present two possible relaxations of the information spreading mechanism that will both lead to an appropriate pdf to instantiate the bound.
Given some value I = ∫₀^∞ g(t) dF(t), where F is a cdf and g is a positive non-decreasing function, then for any other cdf F̄ satisfying F̄(t) ≥ F(t) for all t, one has ∫₀^∞ g(t) dF̄(t) ≤ I.
By defining Δ(t) = F̄(t) − F(t), with Δ(t) ≥ 0, one has, after integration by parts,

∫₀^∞ g(t) dF̄(t) − ∫₀^∞ g(t) dF(t) = −g(0)Δ(0) − ∫₀^∞ Δ(t) g′(t) dt

thanks to the properties of the cumulative distribution functions. Using then the positivity of g, g′ and Δ, the conclusion is direct.
IV Strong assumption: Ping model
IV-A Assumption description
The first relaxation we consider relies on a strong simplification of the communication mechanism. We assume that each time it communicates, an agent acquires all the information about all the agents presently in the system, and never forgets it, even at replacements. We refer to this assumption as the Ping model, in reference to the ping software used to test the reachability of machines in a network. The most recent value known by agent i about agent j at time t is then the value held by j at the time of the very last communication in which agent i was involved, and the age of that information is the time elapsed since then.
In steady state, the age of the most recent information about j held by i, noted T^P_ij, follows the pdf f^P(t) = λ_c e^{−λ_c t}, which bounds f as defined in Section II-C.
The random variable T^P_ij is the time elapsed since the last communication involving agent i, which by definition of the communication process occurs according to a Poisson process of rate λ_c. Its cdf is then given by F^P(t) = 1 − e^{−λ_c t}, and the pdf f^P is obtained by differentiation. Moreover, T^P_ij is the minimal value that T_ij can take, since no information about j held by i can be more recent than i’s last communication. Hence F^P(t) ≥ F(t) for all t, and f^P bounds f.
The performance of any algorithm in the setting defined in Section II is bounded from below as
Applying Theorem 1 with the pdf f^P, the bound is given by performing the integration
A few algebraic operations lead to the conclusion.
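Under the same reconstructed notation, carrying out that integration with f^P(t) = λ_c e^{−λ_c t} gives E[1 − e^{−2λ_r T}] = 2λ_r/(λ_c + 2λ_r), so the bound would read as below. This closed form is a hedged reconstruction of (14), consistent with the two limits discussed next, not a quote of the paper's expression:

```python
def ping_bound(n, lam_r, lam_c):
    """Reconstructed Ping-model bound: (n-1)/n^2 * 2*lam_r / (lam_c + 2*lam_r).
    The second factor depends only on the ratio lam_c/lam_r."""
    return (n - 1) / n**2 * 2.0 * lam_r / (lam_c + 2.0 * lam_r)
```

As λ_c → 0 the expression tends to (n−1)/n², and it decreases monotonically to 0 as λ_c/λ_r grows.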
Several observations on the evolution of the error in terms of the ratio λ_c/λ_r arise from Fig. 1. When communications are rather rare (λ_c ≪ λ_r), the error tends to (n−1)/n², i.e. the error that an agent would obtain by considering only itself in the estimation. As the communications become more frequent, the second factor of (14) makes the bound decrease, ultimately bringing it to 0 as λ_c/λ_r → ∞, which is the error to which estimates converge in a closed system.
Observe that λ_c is the communication rate for a single agent. Hence, the bound actually separates into two factors: (n−1)/n², and a second factor that only depends on the ratio λ_c/λ_r, which characterizes the expected number of interactions involving one agent before it is replaced.
V Weaker assumption: Infection model
V-A Assumption description
For concision, we provide an informal description of the assumption. The random variable T_ij denoting the age of the most recent value agent i knows about agent j can also be interpreted as the time it took for that information to reach i. The main difficulty in computing the corresponding pdf then lies in the information losses through replacements. Hence, for this second relaxation, we assume that there are no more replacements along the travel of the information, or equivalently that there is no more memory erasure at replacements. This relaxation reduces to assuming the information spreads as an infection process without healing, and we refer to it as the Infection model. The most recent value is then that of the time at which the infection started being emitted from agent j, and the age of that information is the time it took for agent i to be infected.
In steady state, the pdf of the corresponding age T^I_ij, noted f^I, bounds f as defined in Section II-C.
Considering one realization of the communication process, the age T_ij is determined by a sequence of communication events carrying the information from j to i without it suffering any replacement. Since that sequence always also exists with the Infection model, T^I_ij ≤ T_ij, and the claim follows.
In steady state, the age T^I_ij of the most recent information agent i has about agent j with the Infection model follows the cdf

F^I(t) = Σ_{k=1}^{n} ((k−1)/(n−1)) p_k(t),

where p(t) = (p_1(t), …, p_n(t)) is defined by an ODE system:

ṗ_k(t) = (λ_c/(n−1)) [(k−1)(n−k+1) p_{k−1}(t) − k(n−k) p_k(t)], k = 1, …, n,

such that p_1(0) = 1 and p_k(0) = 0 for k > 1.
Let I(t) be the set containing the labels of the infected agents at time t, |I(t)| its size, and denote p_k(t) = P(|I(t)| = k). Then, one has F^I(t) = P(i ∈ I(t)). From the symmetry between the agents, and since they are all interchangeable, one has

P(i ∈ I(t)) = E[(|I(t)| − 1)/(n − 1)] = Σ_{k=1}^{n} ((k−1)/(n−1)) p_k(t).

Finally, the ODE system defining p(t) is a standard result from continuous-time Markov chain theory.
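The birth process above can be integrated numerically; a sketch assuming a pairwise rate λ_c/(n−1), so that with k infected agents a new one is infected at rate k(n−k)λ_c/(n−1) (the function name and the explicit-Euler scheme are choices of this sketch):

```python
def infection_cdf(n, lam_c, t, steps=20_000):
    """Integrate the reconstructed birth process: p_k(t) is the probability
    that k agents are infected, and F^I(t) = sum_k p_k(t) * (k-1)/(n-1).
    Explicit Euler; step count must keep h * max_rate small."""
    beta = lam_c / (n - 1)       # assumed pairwise communication rate
    p = [0.0] * (n + 1)
    p[1] = 1.0                   # the infection starts at the emitting agent
    h = t / steps
    for _ in range(steps):
        # flow[k] = probability flux from "k infected" to "k+1 infected"
        flow = [beta * k * (n - k) * p[k] for k in range(n + 1)]
        for k in range(1, n + 1):
            p[k] += h * (flow[k - 1] - flow[k])
    return sum(p[k] * (k - 1) / (n - 1) for k in range(1, n + 1))
```

The returned value is non-decreasing in t, starts near 0, and tends to 1 once the infection has almost surely covered the whole system.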
Equation (16) defines a linear system ṗ(t) = A p(t), where A gathers the transition rates above, whose solution is given by p(t) = e^{At} p(0). It follows from (15) that the cdf F^I and the pdf f^I are then obtained as linear functions of e^{At} p(0) and of its time derivative.
Injecting that result into the integral from Theorem 1, which we will see is well defined, one has
Since the two matrices involved commute, the integral reduces to
Since the matrix involved has strictly negative eigenvalues, it is invertible, leading to
The expression below is equivalent to (17).
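The step from the integral to the matrix inverse relies on the identity ∫₀^∞ e^{At} dt = −A⁻¹ for a matrix A with strictly negative eigenvalues. A numerical sanity check of that identity, using a diagonal A for simplicity (so that e^{At} is elementwise):

```python
import math

def integral_expm_diag(eigvals, t_max=200.0, steps=200_000):
    """Trapezoidal approximation of the diagonal of int_0^inf exp(A t) dt for a
    diagonal A with strictly negative eigenvalues; the identity predicts
    -1/ev on each diagonal entry (i.e. the diagonal of -A^{-1})."""
    h = t_max / steps
    out = [0.0] * len(eigvals)
    for k in range(steps + 1):
        w = 0.5 if k in (0, steps) else 1.0  # trapezoid end-point weights
        for idx, ev in enumerate(eigvals):
            out[idx] += w * math.exp(ev * k * h)
    return [h * v for v in out]
```

For eigenvalues −0.5 and −2, the identity predicts diagonal entries 2 and 0.5.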
Corollary 1 presents an algebraic expression for the bound (17) from Theorem 1, and is proven in Appendix A. Detailed empirical explorations suggest that . Using results on bounds in numerical integration, we can then find a compact bound on that expression, and derive the following empirical expression bounding it from below.
Fig. 3 compares the bounds derived with the Ping and Infection models, and the expression (20) based on the Infection bound, in logarithmic scale. The first observation is the improvement of both bounds obtained from the Infection model, which relies on a weaker relaxation, in comparison with the Ping model. That improvement is even more apparent when the rate ratio λ_c/λ_r increases, since the impact of the corresponding factor grows. In particular, the expression (20) is actually exactly that of the bound from the Ping model multiplied by a factor that grows logarithmically with λ_c/λ_r when that ratio is large. This could relate to the fact that information propagates in one hop in the Ping model, whereas it needs time to travel through intermediate agents with the Infection model.
Observe also in Fig. 3 that the bound follows two distinct regimes: when λ_c/λ_r is very small, it has very little impact on the bound, which stays around (n−1)/n²; whereas beyond some threshold, the bound decays as the ratio grows.
Finally, we have compared our bound with the results of the simplest version of average gossip : agents take their own value as initial estimate, and whenever two agents communicate, both their estimates become the average of their current estimates.
This algorithm computes the average in a closed system. Observe that it is consistent with our setting presented in Section II, and that we did not make any adaptation to the open character of the system. Fig. 3 compares the performance of algorithm (21) with the bound from the Infection model (17). It is interesting to notice that even though there is a gap between the curves, the behavior of the error in terms of λ_c/λ_r is well captured by the bound, especially when that ratio gets large enough. Interestingly, the algorithm relies on only one variable, and makes use neither of identifiers nor of any other information provided to the agents as described in Section II, while our bound is valid for algorithms potentially using this information. Hence one could wonder about the precise impact of that information and the efficiency gains it allows.
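This comparison can be reproduced with a short simulation; the following is a sketch under our reconstructed notation (pairwise clocks of rate λ_c/(n−1), replacements at rate λ_r erasing both value and estimate), not the authors' experimental setup:

```python
import random

def open_gossip_mse(n=20, lam_r=0.1, lam_c=1.0, horizon=500.0, seed=2):
    """Simulate gossip averaging (21) in the open system: pairwise averaging of
    estimates, replacements resetting both value and estimate.  Returns the
    MSE averaged over event times in the second half of the run."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in range(n)]   # intrinsic values
    est = list(x)                                  # initial estimates
    total_rate = n * lam_r + n * lam_c / 2.0
    t, mse_acc, samples = 0.0, 0.0, 0
    while t < horizon:
        t += rng.expovariate(total_rate)
        if rng.random() < n * lam_r / total_rate:
            a = rng.randrange(n)
            x[a] = rng.gauss(0.0, 1.0)             # replacement: fresh value...
            est[a] = x[a]                          # ...and erased memory
        else:
            i, j = rng.sample(range(n), 2)
            est[i] = est[j] = (est[i] + est[j]) / 2.0  # gossip update (21)
        if t > horizon / 2:                        # steady-state samples only
            avg = sum(x) / n
            mse_acc += sum((e - avg) ** 2 for e in est) / n
            samples += 1
    return mse_acc / samples
```

The resulting empirical MSE should sit above the lower bounds derived earlier while following the same qualitative dependence on λ_c/λ_r.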
We considered in this paper the possibility of arrivals and departures of agents in the study of multi-agent systems, and highlighted several challenges arising from this property. In particular, it prevents algorithms from converging for many problems, making their analysis challenging.
We focused on the analysis of the performance of intrinsic averaging algorithms for open systems of constant size, for which we derived fundamental performance limitations as lower bounds on the steady-state Mean Square Error of the estimation. This was done by studying the performance of an algorithm that is optimal in a context more favorable to the agents than what is usually assumed in multi-agent systems, and then relied on relaxations of the information spreading mechanism in the system. Two such relaxations were studied, leading respectively to a conservative but readable bound, and to a tighter but more complex one.
Possible extensions include the study of more complicated problems (e.g. decentralized optimization ), more structured constraints on the inter-agent communications, and the consideration of variable-size systems. More fundamentally, our bounds rely purely on limitations of the information propagation, and are valid even if the agents have strong knowledge of the system and use identifiers or heterogeneous algorithms. Stronger lower bounds can be obtained under more restrictive assumptions, e.g. by enriching the Infection model from Section V with healing. Yet, it would be interesting to investigate which factors significantly impact the bound: in particular, an anonymous algorithm was already shown in Section V-C to exhibit performance not too different from our bound, questioning the impact of the agents’ anonymity.
- J. Predd, S. Kulkarni, and H. V. Poor, “Distributed learning in wireless sensor networks,” IEEE Signal Processing Magazine, vol. 23, pp. 56–69, 2006.
- R. Olfati-Saber and N. F. Sandell, “Distributed tracking in sensor networks with limited sensing range,” in Proceedings of the 2008 American Control Conference, pp. 3157–3162, 2008.
- Y. Cao, W. Yu, W. Ren, and G. Chen, “An overview of recent progress in the study of distributed multi-agent coordination,” IEEE Transactions on Industrial Informatics, vol. 9, 2013.
- V. Blondel, J. M. Hendrickx, and J. Tsitsiklis, “Continuous-time average-preserving opinion dynamics with opinion-dependent communications,” SIAM Journal on Control and Optimization, vol. 48, 2010.
- R. Hegselmann and U. Krause, “Opinion dynamics and bounded confidence models, analysis and simulation,” Journal of Artificial Societies and Social Simulation, vol. 5, 2002.
- J. M. Hendrickx and S. Martin, “Open multi-agent systems: Gossiping with deterministic arrivals and departures,” in Proceedings of the 55th IEEE Conference on Decision and Control, pp. 1094–1101, 2016.
- M. Abdelrahim, J. M. Hendrickx, and W. M. Heemels, “Max-consensus in open multi-agent systems with gossip interactions,” in Proceedings of the 56th IEEE Conference on Decision and Control, pp. 4753–4758, 2017.
- P. Sen and B. K. Chakrabarti, Sociophysics: An Introduction. Oxford University Press, 2013.
- A. Giret, V. Julián, M. Rebollo, E. Argente, C. Carrascosa, and V. Botti, “An open architecture for service-oriented virtual organizations,” pp. 118–132, 2010.
- O. K. Tonguz, “Red light, green light—no light: Tomorrow’s communicative cars could take turns at intersections,” IEEE Spectrum, vol. 55, pp. 24–29, 2018.
- J. C. Duchi, A. Agarwal, and M. J. Wainwright, “Dual averaging for distributed optimization: Convergence analysis and network scaling,” IEEE Transactions on Automatic Control, vol. 57, 2012.
- A. Müller and D. Stoyan, Comparison Methods for Stochastic Models and Risks. Chichester: Wiley, 2002.
- S. Boyd, A. Ghosh, B. Prabhakar, and D. Shah, “Randomized gossip algorithms,” IEEE/ACM Transactions on Networking, vol. 14, pp. 2508–2530, 2006.