1 Introduction
Many modern distributed communication networks, such as ad hoc wireless, sensor, and mobile networks, as well as overlay and peer-to-peer (P2P) networks, are inherently dynamic (they suffer from a high rate of connections and disconnections) and bandwidth-constrained. Hence, understanding the possibilities and limitations of distributed computation in dynamic networks has been a major goal in recent years.
In this paper, we study the fundamental problem of information spreading on (synchronous) dynamic networks. This problem was analyzed for static networks by Topkis [39] and was studied on dynamic networks in particular by Kuhn, Lynch, and Oshman [32]. In the information spreading problem (also called gossip or token dissemination), there are $k$ pieces of information (tokens) that are initially present at some nodes, and the problem is to disseminate the tokens to all the nodes in the network, under the bandwidth constraint that only one token can go through an edge per round. This problem is a fundamental primitive for distributed computing; indeed, solving gossip, where each node starts with exactly one token, allows any function of the initial states of the nodes to be computed, assuming the nodes know $n$ [32].
The dynamic network models that we consider in this paper allow a worst-case adversary, known as strongly adaptive, that can choose any communication links among the nodes for each round, with the only constraint being that the resulting communication graph be connected in each round; this adversary can choose the links with knowledge of the tokens that any node can send in that round as well as of its random choices (in one of the results, we also consider an oblivious adversarial model). Our adversarial models are closely related to those adopted in recent studies (e.g., see [8, 16, 22, 26, 32, 37]). We distinguish two variants of the basic model, depending on whether nodes communicate by local broadcast (i.e., a node always sends the same message to all its neighbors) or whether we allow nodes to do unicast communication (i.e., nodes can possibly send different messages to different neighbors in the same round). For more information on the model, we refer to Section 1.3. We note that most of the prior work (e.g., [26, 32, 37]) only considered communication by local broadcast.
The focus of the present paper is on token-forwarding algorithms, which do not manipulate tokens in any way other than storing, copying, and forwarding them. Token-forwarding algorithms are simple and easy to implement and have been widely studied (e.g., see [35, 38]). The paper investigates the message complexity of token-forwarding algorithms for information spreading. Message complexity, i.e., the total number of messages sent by all nodes during the course of an algorithm, is an important performance measure. It directly relates to the cost of communication, which is a dominant cost in many real-world settings (e.g., it is correlated with energy and power consumption in wireless networks). While information spreading in dynamic networks has been studied intensively in recent years, almost all of the existing work (e.g., [5, 8, 22, 26, 30, 32]) focuses solely on the time (round) complexity of distributed algorithms. (However, some works that focus on time complexity imply bounds on messages; see, e.g., [14, 18, 21].) The currently best algorithms for information spreading in adversarial dynamic networks often have a high message complexity and, in many cases, a high time complexity as well. In contrast, in this paper, we are interested in the amortized message complexity of information spreading, i.e., the average message cost of spreading $k$ tokens (when $k$ is large) in a dynamic network. To the best of our knowledge, this aspect has not been studied in prior works on information spreading in dynamic networks (cf. Section 1.2).
In any $n$-node static network, a simple token-forwarding algorithm that pipelines token transmissions up a rooted spanning tree, and then broadcasts them down the tree, completes gossip in $O(n+k)$ rounds [38], which is clearly asymptotically tight because the diameter of the network might be $\Theta(n)$ and because every node has to receive $k$ different tokens. In fact, $O(n+k)$ rounds are even sufficient if in each round, each node forwards an arbitrary not-yet-forwarded token to each of its neighbors [39]. In a dynamic network, it is known that under a strongly adaptive adversary and if the communication is via local broadcast, the $O(n+k)$ bound cannot be achieved; Dutta et al. [26] (see also [30]) showed that $\Omega(nk/\log n)$ rounds are necessary. This bound is essentially tight (up to a logarithmic factor), since one can easily achieve an upper bound of $O(nk)$ by flooding. We do not know any tight bounds on the time complexity for unicast communication.
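As an illustrative sanity check (ours, not from the paper), the pipelining argument can be simulated on a worst-case static topology, a path, with all $k$ tokens starting at one end; the simulation completes in exactly $(n-1)+(k-1)$ rounds, matching the $O(n+k)$ bound. The function name and setup are our own.

```python
# Illustrative simulation: pipelined token forwarding on a static path of
# n nodes, with all k tokens starting at node 0. Each round, every node
# forwards its smallest-ID token not yet forwarded to its right neighbor.
def pipeline_rounds(n: int, k: int) -> int:
    """Rounds until every node holds all k tokens on a directed path."""
    have = [set() for _ in range(n)]        # tokens known by each node
    have[0] = set(range(k))
    sent = [set() for _ in range(n)]        # tokens each node already forwarded
    rounds = 0
    while any(len(h) < k for h in have):
        rounds += 1
        msgs = []
        for v in range(n - 1):
            cand = have[v] - sent[v]
            if cand:
                t = min(cand)               # forward smallest unsent token
                msgs.append((v + 1, t))
                sent[v].add(t)
        for (u, t) in msgs:                 # deliver after all sends (synchronous)
            have[u].add(t)
    return rounds

# A path has diameter n-1, so (n-1)+(k-1) rounds are needed; pipelining matches.
assert pipeline_rounds(5, 3) == 6
```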
With regard to messages, we are interested in the amortized (average) message complexity of spreading a token. In a static network, one can first build a spanning tree and then use the spanning tree edges to disseminate the tokens to all nodes. Building the tree can take as many as $\Theta(m)$ messages in graphs with $m$ edges [34]. (This bound holds in the KT0 model, where nodes do not have initial knowledge of their neighbors' IDs. In the KT1 model, where each node initially knows the IDs of its neighbors, it is possible to build a spanning tree with $\tilde{O}(n)$ messages [31]. Note that this distinction is not very important in the amortized setting in a static network, since in both cases the amortized message complexity is $O(n)$ if $k \ge m/n$. In the dynamic setting, we essentially assume the KT1 model for unicast communication, whereas for broadcast communication the distinction is not important; see Section 1.3 for more details.) Disseminating the tokens over the tree then takes $O(m + nk)$ messages overall, or $O(m/k + n)$ amortized messages per token. If $k$ is sufficiently large, say at least $m/n$ (there are natural applications where $k$ is large, e.g., if all nodes have tokens to broadcast or if some node has a stream of messages as, for example, in audio/video transmissions), then the above bound gives $O(n)$ amortized messages per token, which is optimal, since each node has to receive the token. On the other hand, for dynamic networks, the situation is far less clear. In the case of local broadcast communication (where each local broadcast is counted as one message, which is reasonable especially in the context of wireless networks where nodes communicate by local broadcast), an $O(n^2)$ amortized message upper bound per token is straightforward to obtain by flooding (each node broadcasts each token for $O(n)$ rounds). For unicast communication (cf. Section 1.3), again an $O(n^2)$ amortized upper bound is easy to obtain (each node sends each token at most once to each other node; note that for unicast communication, each message to a neighbor is counted as one message). In both cases, nontrivial lower bounds are not known. Thus, the central question that we seek to address in this work is whether one can achieve $o(n^2)$, or even asymptotically optimal $O(n)$, amortized message complexity when $k$ is large (for both the local broadcast and the unicast settings). We note that prior works (including [5, 8, 22, 26, 30, 32]) do not address this question.
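The amortized accounting for the static spanning-tree approach can be made concrete with a small back-of-envelope helper (our own illustration; it simply evaluates the $(m + k(n-1))/k$ expression implied by the discussion above):

```python
# Back-of-envelope amortized message cost of the static spanning-tree scheme
# discussed above (illustrative only, not code from the paper).
def static_spanning_tree_amortized(n: int, m: int, k: int) -> float:
    """Build a spanning tree (Theta(m) msgs), then send each of the k tokens
    over the n-1 tree edges: (m + k*(n-1)) messages total, divided by k."""
    return (m + k * (n - 1)) / k

# For k >= m/n, the amortized cost is O(n), which is optimal since every
# token must reach all n nodes.
n, m = 1000, 100_000
assert static_spanning_tree_amortized(n, m, k=m // n) <= 2 * n
```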
1.1 Our Main Results
In the local broadcast setting, we give a negative answer to the above question and show that with a strongly adaptive adversary, the quadratic (in $n$) amortized message complexity bound of the naive algorithm is essentially necessary (cf. Section 2). This “bad” bound for local broadcast is a motivation for considering the (more challenging) unicast setting. For the unicast setting, we study how the message complexity behaves as a function of the number of dynamic changes in the network. To facilitate this, we introduce a new and natural complexity measure for analyzing dynamic networks called adversary-competitive message complexity (cf. Definition 1.3). While the adversary is free to change the topology arbitrarily from round to round, this measure intuitively makes the adversary pay some price for every connection and reconnection, and we allow an algorithm a “free” communication budget of comparable size. This measure has a natural real-world motivation. For example, in real-world communication networks, due to the actions of the lower-layer link protocol (which is responsible for establishing the connection when a physical link comes up), one can assume that whenever a new edge is created, some information is exchanged anyhow by the link layer. Thus, it is reasonable to assume that there is some cost to be paid for establishing or re-establishing a link (say, after the link has been down for a while). Our new measure formalizes this intuition.
Under the new complexity measure (defined formally in Section 1.3), we show that if $k$ is sufficiently large, we obtain an optimal amortized message complexity of $O(n)$ (cf. Section 3). In case the dynamic network topology satisfies some natural additional properties, we also show that the algorithm terminates in a near-optimal number of rounds. We present two algorithms in this setting, depending on how the tokens are initially distributed: (1) a single-source case, where all the tokens start at the same node, and (2) a multi-source case, where the initial token distribution is arbitrary.
When the number of tokens is not very large, say $k = n$ (i.e., gossip), the $O(n)$ amortized bound above does not hold. In this setting, we are able to show a subquadratic (in $n$) amortized message complexity under an oblivious adversary, which is the same as the worst-case adversary except that it is oblivious to the random choices made by the algorithm and to the execution history (cf. Section 3.2.2). Our algorithm is randomized and is based on random walks.
Our analysis of unicast communication under the adversary-competitive model is a main contribution of this paper. We believe that the adversary-competitive model can be a useful alternative to the current models for analyzing various other important problems, such as leader election and agreement in dynamic networks (see, e.g., [6, 7]).
Our work raises several key open questions that are discussed in Section 4.
1.2 Related Work and Comparison
Information spreading (or dissemination) in networks is a fundamental problem in distributed computing with a rich literature. The problem is generally well-understood on static networks, both for interconnection networks [35] and for general networks [4, 36, 38]. In particular, the gossip problem can be solved in $O(n+k)$ rounds on any $n$-node static network [39]. There are also several papers on broadcasting, multicasting, and related problems in static heterogeneous and wireless networks (e.g., see [3, 12, 13, 17]).
Dynamic networks have been studied extensively over the past three decades. Early studies focused on the dynamics that arise when edges or nodes fail (but generally did not consider edges/nodes recovering from failures). A number of fault models, varying according to the extent and nature (e.g., probabilistic vs. worst-case) of faults, and the resulting dynamic networks have been analyzed (e.g., see [4, 36]). There are several studies that constrain the rate at which changes occur or assume that the network eventually stabilizes (e.g., see [1, 25, 27]).
To address highly unpredictable network dynamics, models with stronger adversaries have been studied by [8, 32, 37] and others; see the recent survey of [16] and the references therein. Unlike prior models on dynamic networks, these models and ours do not assume that the network eventually stops changing; the algorithms are required to work correctly and terminate even in networks that change continually over time.
The model of [26, 30, 32] allows for a much stronger adversary than the ones considered in past work [9, 10, 11]. In particular, the work of [26] (also see [30]) showed that every token-forwarding information spreading algorithm that uses local broadcast for communication under a strongly adaptive adversary (the same as considered in this paper; cf. Section 2) requires $\Omega(nk/\log n)$ rounds to complete. The survey of [33] summarizes recent work on dynamic networks (see also the early works of [19, 20]).
Recent work of [28, 29] presents information spreading algorithms based on network coding [2]. As mentioned earlier, one of their important results is that the gossip problem in the adversarial model of [32] can be solved using network coding in $O(n+k)$ rounds, assuming the token sizes are sufficiently large ($\Omega(n \log n)$ bits).
It is important to note that all the above results deal with the time complexity of information spreading in dynamic networks (i.e., the number of rounds needed) and not with the message complexity. The focus here is on the amortized message complexity of spreading $k$ tokens. We note that there is an important difference between the two measures. In particular, algorithms with efficient time complexity need not be message-efficient and vice versa, and hence prior time-complexity results do not directly imply the results of this paper. Indeed, one can exchange up to $\Theta(n^2)$ messages (in a graph with $\Theta(n^2)$ edges) in just one round, and since one needs at least $\Omega(n)$ rounds for information spreading (in the worst case), the total message complexity can be as high as $\Omega(n^3)$ (for unicast). In other words, a message-efficient algorithm can take a longer time while exchanging fewer messages in total, e.g., by sending messages only along a few edges and/or by using silence. However, as we show in Section 2, the amortized message complexity lower bound (even) for local broadcast (where a node’s local broadcast to all its neighbors is counted as just one message) is close to the worst possible, i.e., quadratic in $n$ up to polylogarithmic factors. The proof of this lower bound is inspired by the time complexity lower bound of [26], although the two proofs differ in their details. The “bad” lower bound for local broadcast motivates considering unicast communication, which is the main focus of this paper. It is important to point out another difference between amortized time complexity and amortized message complexity. While the amortized time complexity can be as low as $O(D)$ (where $D$ is the network diameter, which can be much smaller than $n$), the amortized message complexity is trivially at least $\Omega(n)$, since a token has to reach all the nodes.
There has not been much progress on improving the time complexity (total or amortized) of information spreading in dynamic networks (both for unicast and local broadcast) under the oblivious adversary model in general networks, although prior works [5, 22] have achieved improved (subquadratic in $n$) total time complexity under additional assumptions on the dynamic network model (these are different from what is considered here). In particular, the work of [22] considers a dynamic network and presents an information spreading algorithm that can have subquadratic time complexity under some restricted conditions, e.g., when the dynamic mixing time (defined in [22]) is small. The work of [22] does not address amortized message complexity at all, and the result in the oblivious adversary setting of this paper does not follow from the results of [22]. Both papers use techniques based on random walks (which are very useful in the oblivious setting), originally developed in [23, 24], but the algorithms are quite different.
While the work of [26] adopts the strongly adaptive adversarial model with local broadcast communication (we adopt the same model here for local broadcast communication; cf. Section 2), and the work of [5, 22] adopts the oblivious adversary model (we also adopt the oblivious model here for unicast communication in Section 3.2.2), a novel aspect of this paper is the introduction of a new communication cost model that measures the communication cost of an algorithm as a function of the amount of topological change that occurs in a given execution, together with a new message complexity measure called adversary-competitive message complexity (Section 1.3). A main contribution of this paper is showing that under this new complexity measure, one can obtain an amortized message complexity for unicast communication that is significantly better than the worst-case bound of $O(n^2)$. Our new measure is inspired by and related to the notion of resource-competitive algorithms [15], although the details are different; that measure, however, was not designed for adversaries in the context of dynamic networks.
1.3 Dynamic Network, Communication, and Cost Model
In the following, we formally define the dynamic network model, the
communication models we consider, as well as the way in which we
measure the communication cost (or message complexity) of a given
token dissemination algorithm.
Dynamic Network Model: We model the network as a synchronous dynamic graph with a fixed set of nodes $V$. Nodes communicate in synchronous rounds, where round $r \ge 1$ starts at time $r-1$ and ends at time $r$. For any integer $r \ge 1$, we use $G_r = (V, E_r)$ to denote the graph of round $r$. Throughout, we use $n = |V|$ to denote the number of nodes and $m_r = |E_r|$ to denote the number of edges in round $r$. For convenience, we define $E_0 := \emptyset$, and thus $G_0$ is the empty graph $(V, \emptyset)$. For every $r \ge 1$, we call $E_r^+ := E_r \setminus E_{r-1}$ the set of edges inserted in round $r$, and we call $E_r^- := E_{r-1} \setminus E_r$ the set of edges removed in round $r$.
In order to always allow progress when globally broadcasting a message, we assume that each graph $G_r$ is connected for all $r \ge 1$.
We sometimes also need the property that every edge which gets inserted remains in the graph for at least a given number of rounds. For an integer $T \ge 1$, we call a dynamic graph $T$-edge stable if for every $r \ge 1$ and every edge $e \in E_r^+$, we have $e \in E_s$ for all rounds $s \in \{r, \dots, r+T-1\}$. Hence, after it appears, every edge remains in the graph for at least $T$ consecutive rounds. Note that every dynamic graph is $1$-edge stable.
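A minimal sketch (ours, not from the paper) of the bookkeeping implied by these definitions: computing the inserted and removed edge sets of each round and checking $T$-edge stability for a short execution given as a list of per-round edge sets:

```python
# Helper sketch for the model above: given a round-indexed list of edge sets
# E[0..R] with E[0] = empty, compute inserted/removed edges and check
# T-edge stability. (Names E_plus/E_minus follow the definitions above.)
def edge_events(E):
    E_plus  = [set()] + [E[r] - E[r - 1] for r in range(1, len(E))]
    E_minus = [set()] + [E[r - 1] - E[r] for r in range(1, len(E))]
    return E_plus, E_minus

def is_T_edge_stable(E, T):
    """Every inserted edge must stay for at least T consecutive rounds."""
    E_plus, _ = edge_events(E)
    R = len(E) - 1
    for r in range(1, R + 1):
        for e in E_plus[r]:
            # the edge must be present in rounds r..r+T-1 (within the execution)
            if any(e not in E[s] for s in range(r, min(r + T - 1, R) + 1)):
                return False
    return True

E = [set(), {(1, 2)}, {(1, 2), (2, 3)}, {(2, 3)}]   # toy 3-round execution
assert is_T_edge_stable(E, 1) and is_T_edge_stable(E, 2)
assert not is_T_edge_stable(E, 3)                   # (1,2) lived only 2 rounds
```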
We assume that the dynamic topology is provided by a worst-case
adversary. There are adversaries of different strengths, depending on
the capability of adaptively reacting to random choices of a given
algorithm. In this paper, we distinguish between a strongly adaptive adversary and an oblivious adversary. The strongly adaptive adversary knows the algorithm’s randomness of the current round in order to determine the dynamic topology for that round. (In comparison, a weakly adaptive adversary only knows the algorithm’s randomness up to the round before the current round.) The oblivious adversary is oblivious to any randomness used by the algorithm and to any decision made by the algorithm, i.e., it has to commit to the sequence of network topologies before the execution of a distributed algorithm starts. Note that for deterministic algorithms, both adversaries are the same.
Communication Model: Throughout the paper, we assume that each node has a unique $O(\log n)$-bit identifier and that in each round, each node can send messages containing a constant number of tokens and $O(\log n)$ additional bits to its neighbors.
We distinguish different modes of communication, depending on whether the message exchange among
neighbors is based on local broadcast or on unicast.
1. Local Broadcast Communication:
In each round $r$, each node $u$ can locally broadcast a message, which is received by all neighbors of $u$. Node $u$ learns the set of its neighbors in round $r$ when receiving the round-$r$ messages from them.
2. Unicast Communication:
At the beginning of each round $r$, each node $u$ is informed about the IDs of its neighbors in round $r$. Node $u$ can then send a different message to each neighbor.
Note that if the neighborhood information is not available instantaneously, it can be obtained by exchanging messages. As a consequence, in a $2$-edge stable dynamic graph, the known-neighborhood and unknown-neighborhood settings are equivalent up to a constant number of extra messages per inserted edge.
Communication Cost: The communication cost of a protocol
is measured by its message complexity, i.e., by the total
number of messages sent by all the nodes throughout the whole
execution.
Definition 1.1 (Message Complexity).
The message complexity of a distributed algorithm is the total number of messages sent in a worstcase execution. If communication is by local broadcast, each local broadcast by some node counts as one message. If communication is by unicast, messages to different neighbors are counted separately.
The main focus of this article is to study the message complexity of solving the token dissemination problem.
Definition 1.2 (Token Dissemination Problem).
For some positive integer $k$, $k$ distinct tokens are initially placed at some nodes in the network. The goal is to disseminate all the tokens to all the nodes in the network.
As discussed in Section 1, we are particularly interested in understanding to what extent dynamic topology changes affect the communication cost of token dissemination. We thus consider a cost model that measures the communication cost of an algorithm as a function of the number of topological changes. We formally define the number of topological changes of an execution $\gamma$ as the total number of edges that are inserted throughout the execution, i.e., for an $R$-round execution with dynamic graph $G_1, \dots, G_R$, we have $C(\gamma) := \sum_{r=1}^{R} |E_r^+|$. (Note that since we assume that at time $0$ we start with an empty graph, the total number of edge deletions is always upper bounded by the total number of edge insertions. Hence we only count the edge insertions and not the edge deletions.) The following definition captures the notion that for each dynamic change caused by the dynamic network adversary, a distributed algorithm is allowed to send a given number of messages “for free”.
Definition 1.3 (AdversaryCompetitive Message Complexity).
Given a parameter $\beta \ge 1$, we say that a distributed algorithm has $\beta$-adversary-competitive message complexity $F$ if for every execution $\gamma$, the total message complexity of the algorithm is upper bounded by $F + \beta \cdot C(\gamma)$.
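The quantities in this definition are easy to compute for a concrete execution trace. The following sketch (our illustration; `F` and `beta` stand for the algorithm-specific bound and parameter, and the numbers are made up) counts the topological changes $C(\gamma)$ and checks the adversary-competitive budget:

```python
# Sketch: count the topological changes C(gamma) of an execution (given as a
# list of per-round edge sets with E[0] empty) and check the
# beta-adversary-competitive bound of Definition 1.3.
def num_changes(E):
    """C(gamma) = total number of edge insertions over the execution."""
    return sum(len(E[r] - E[r - 1]) for r in range(1, len(E)))

def within_budget(messages_sent, F, beta, E):
    return messages_sent <= F + beta * num_changes(E)

E = [set(), {(1, 2), (2, 3)}, {(2, 3)}, {(1, 3), (2, 3)}]
assert num_changes(E) == 3          # two insertions in round 1, one in round 3
assert within_budget(10, F=8, beta=1, E=E)
```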
To capture the progress of an algorithm, one way is to count how many new tokens the nodes have received so far.
Definition 1.4 (Token Learning).
A token learning $(u, t, r)$ is an event that occurs in some $R$-round execution if and only if node $u$ receives token $t$ for the first time in round $r$, where $1 \le r \le R$. Then, we say $u$ learns $t$ in round $r$.
Based on the above definition, if each of the $k$ tokens is initially given to exactly one of the $n$ nodes, it is trivial that $k(n-1)$ token learnings must occur during any algorithm execution solving token dissemination.
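This count can be verified on a toy execution. The sketch below (ours; it floods whole token sets and ignores the one-token-per-edge bandwidth constraint, since only learning events are counted) floods $k$ tokens from a single node over a static path and counts token learnings:

```python
# Sanity-check sketch of the k(n-1) bound: flood k tokens (all starting at
# node 0) over a static path and count token-learning events.
def count_token_learnings(n: int, k: int) -> int:
    have = [set(range(k))] + [set() for _ in range(n - 1)]
    learnings = 0
    while any(len(h) < k for h in have):
        new = [set() for _ in range(n)]
        for v in range(n):                   # local broadcast to both neighbors
            for u in (v - 1, v + 1):
                if 0 <= u < n:
                    new[u] |= have[v]
        for u in range(n):
            gained = new[u] - have[u]
            learnings += len(gained)         # first receipt of each token
            have[u] |= gained
    return learnings

assert count_token_learnings(4, 3) == 3 * (4 - 1)   # k(n-1) learnings
```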
2 Local Broadcast Model
Before we go to the unicast setting, which is the main focus of this paper, we present a tight (up to polylogarithmic factors) quadratic (in $n$) lower bound on the amortized message complexity of disseminating $k$ tokens in the local broadcast setting.
We assume that each of the $k$ tokens can initially be given to an arbitrary subset of the nodes, with the only restriction that the nodes initially have at most $k/2$ tokens on average. We further assume that $k$ is at most polynomially large in $n$. Our lower bound is an extension of the $\Omega(nk/\log n)$ time complexity lower bound developed by Dutta et al. in [26], which was slightly generalized and simplified in [30]. The main idea of that lower bound is as follows. If initially each token is given to each node independently with a constant probability, the lower bound shows that in each round of any token dissemination algorithm execution, a strongly adaptive adversary can enforce that in total at most $O(\log n)$ tokens are learned by the nodes. Because by the end of an execution, the nodes together need to learn $\Omega(nk)$ tokens (each node needs to learn the tokens it does not know initially), this directly implies the time complexity lower bound. Here, we adapt the technique of the lower bound of [26, 30] to show that in any round with at most $O(n/\log n)$ broadcasting nodes (throughout this section, we call a node that performs a local broadcast in some round $r$ a broadcasting node in round $r$), a strongly adaptive adversary can prevent any new tokens from being learned. Because the nodes together need to learn $\Omega(nk)$ tokens, together with the $O(\log n)$ upper bound on the number of tokens learned in a single round, this implies that a strongly adaptive adversary can force any token dissemination algorithm to require at least $\Omega(nk/\log n)$ rounds with at least $\Omega(n/\log n)$ broadcasting nodes each. This leads to an overall message complexity of at least $\Omega(n^2 k / \log^2 n)$.

To prove our lower bound, we mostly use the notation of [30]. Let $\mathcal{T}$ denote the set of $k$ tokens, and for each node $v$, let $K_v(r)$ be the set of tokens that node $v$ knows by time $r$. In each round $r$, let $t_v(r)$ denote the token broadcast by node $v$ if $v$ is a broadcasting node in round $r$. If $v$ is not a broadcasting node in round $r$, we define $t_v(r) := \bot$. Note that a strongly adaptive adversary can determine the dynamic graph topology of round $r$ after each node has chosen the token to locally broadcast. Generally, a collection of pairs $(v, t_v(r))$, where $v \in V$ and $t_v(r) \in \mathcal{T} \cup \{\bot\}$, is called a token assignment $A_r$.
In addition, the adversary determines a token set $S_v \subseteq \mathcal{T}$ for each node $v$. The sets $S_v$ are just used for the analysis. Informally, one can think of $S_v$ as an additional set of tokens that node $v$ knows at time $0$. Formally, we do not assume that node $v$ knows the tokens in $S_v$ initially, but whenever $v$ learns a token from $S_v$, we do not count this as progress (i.e., for node $v$, we only count how many tokens from $\mathcal{T} \setminus S_v$ it has learned). To formally measure the progress, we define a potential function $\Phi(r) := \sum_{v \in V} |K_v(r) \setminus S_v|$. Recall that we assume that initially, on average, each node knows at most $k/2$ tokens, i.e., $\sum_{v \in V} |K_v(0)| \le nk/2$. The adversary chooses the sets $S_v$ in such a way that $\Phi(0)$ is at most a constant fraction of $nk$ below the final value. In order to solve the token dissemination problem, the potential has to grow to $\sum_{v \in V} |\mathcal{T} \setminus S_v|$. The choice of the sets $S_v$ therefore guarantees that the potential needs to grow by at least $\Omega(nk)$ throughout the execution of a token dissemination protocol.
To study the growth of the potential function, the following notion is used. A (potential) edge $e = \{u, v\}$ is called free in round $r$ if and only if the communication over $e$ does not contribute to $\Phi$, i.e., $e$ is free if and only if $t_u(r) \in K_v(r-1) \cup S_v$ and $t_v(r) \in K_u(r-1) \cup S_u$. Otherwise, the edge is called non-free. When determining the topology of round $r$, a strongly adaptive adversary can always add all free edges to the graph without causing any increase of the potential function. If after adding all free edges the graph has $c$ connected components, the adversary needs to add $c - 1$ additional non-free edges in order to make the graph connected. The potential function can then grow by at most $2(c-1)$, because over each of these additional edges, one token can be learned in each direction. In [26, 30], it is shown using a probabilistic method that the sets $S_v$ can be chosen such that the potential needs to grow by $\Omega(nk)$ and such that in each round, the graph induced by only the free edges has at most $O(\log n)$ connected components. Every algorithm therefore needs at least $\Omega(nk/\log n)$ rounds for the potential to grow to its final value.
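The adversary's free-edge computation can be sketched as follows (our illustration; `K`, `S`, and `t` model the knowledge sets, analysis sets, and token assignment of the analysis above, with `None` standing for $\bot$):

```python
# Sketch of the adversary's free-edge computation: given knowledge sets K,
# analysis sets S, and a token assignment t (None for non-broadcasting nodes),
# find the free edges and count the connected components they induce.
def free_edges(nodes, K, S, t):
    free = set()
    for u in nodes:
        for v in nodes:
            if u < v:
                u_ok = t[u] is None or t[u] in K[v] | S[v]
                v_ok = t[v] is None or t[v] in K[u] | S[u]
                if u_ok and v_ok:
                    free.add((u, v))        # communication adds no potential
    return free

def num_components(nodes, edges):
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in nodes})

nodes = [1, 2, 3]
K = {1: {'a'}, 2: set(), 3: set()}
S = {1: set(), 2: {'a'}, 3: set()}
t = {1: 'a', 2: None, 3: None}              # only node 1 broadcasts token 'a'
F = free_edges(nodes, K, S, t)
assert (2, 3) in F                           # two silent nodes: always free
assert (1, 2) in F                           # 'a' is in S[2]: no progress
assert (1, 3) not in F                       # node 3 would learn 'a'
assert num_components(nodes, F) == 1
```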
The following lemma from [30] shows that if each token is added to each set $S_v$ independently with probability $3/4$, then with constant probability, adding all free edges reduces the number of components to $O(\log n)$ in all rounds.
Lemma 2.1.
(Lemma 1 of [30]) If each set $S_v$ contains each token independently with probability $3/4$, then with probability at least $1/2$, for all rounds $r$ and all possible token assignments $A_r$ in round $r$, the graph induced by all free edges in round $r$ has at most $O(\log n)$ connected components.
We next show that if the number of broadcasting nodes is small, adding all free edges leaves only one connected component. For a constant $\varepsilon > 0$, we define a token assignment $A_r$ to be sparse if at most $\varepsilon n / \log n$ of the nodes are broadcasting nodes (i.e., for at most $\varepsilon n / \log n$ nodes $v$, we have $t_v(r) \ne \bot$).
Lemma 2.2.
There is a constant $\varepsilon > 0$ such that if each set $S_v$ contains each token independently with probability $3/4$, then with probability at least $1 - 2^{-\Omega(n)}$, for all rounds $r$ and all possible sparse token assignments $A_r$, the graph induced by all free edges in round $r$ consists of a single connected component.
Proof.
We first bound the failure probability for a fixed sparse token assignment $A_r$. The claim of the lemma will then follow by a union bound over all the possible sparse token assignments. Let $B$ denote the set of broadcasting nodes, i.e., the nodes $v$ for which $t_v(r) \ne \bot$. Further, let $A := V \setminus B$ and note that $|A| \ge n - \varepsilon n / \log n$. Clearly, all the edges among the nodes in $A$ are free. It is therefore sufficient to show that for each node in $B$, there is a free edge connecting it to a node in $A$. Then, all the free edges induce a connected graph over all the nodes (also see Figure 1).

Consider an edge $\{b, w\}$, where $b \in B$, $w \in A$, and $b$ is locally broadcasting token $t_b(r)$. Edge $\{b, w\}$ is free in round $r$ (in fact, in every round) if $t_b(r) \in S_w$. This happens with probability $3/4$ (independently for every node $w \in A$). The probability that $b$ has no free edge to some node in $A$ is thus at most $(1/4)^{|A|}$. Thus, the probability that there exists at least one node in $B$ that has no free edge to $A$ is at most $|B| \cdot (1/4)^{|A|}$. Considering a union bound over all $\binom{n}{|B|}$ ways to choose a set $B$ of at most $\varepsilon n / \log n$ nodes and all at most $k^{|B|}$ ways to choose the tokens to be sent out by these nodes, the probability that there exists a sparse token assignment for which there is a node in $B$ that has no free edge to $A$ can therefore be upper bounded by

$$\binom{n}{\varepsilon n / \log n} \cdot k^{\varepsilon n / \log n} \cdot \frac{\varepsilon n}{\log n} \cdot \left(\frac{1}{4}\right)^{n - \varepsilon n / \log n} \;\le\; 2^{-\Omega(n)},$$

where the inequality holds for a sufficiently small constant $\varepsilon > 0$ because $k$ is at most polynomial in $n$. Hence, with probability at least $1 - 2^{-\Omega(n)}$, for each possible sparse token assignment (and for each round), each node in $B$ has a free edge to some node in $A$. ∎
Theorem 2.3.
In any always-connected dynamic network, if initially each node on average knows at most half of the $k$ tokens, the amortized message complexity of solving the token dissemination problem against a strongly adaptive adversary is at least $\Omega(n^2 / \log^2 n)$ in the local broadcast communication model.
Proof.
Using the probabilistic method, we show that the adversary can choose the sets $S_v$ such that at time $0$, the potential still needs to grow by $\Omega(nk)$, and such that for every possible strategy of the algorithm, the adversary can choose the graph of each round such that (1) the graph is connected, (2) the number of connected components after adding all free edges is at most $O(\log n)$, and (3) if there are at most $\varepsilon n / \log n$ broadcasting nodes, for a sufficiently small constant $\varepsilon > 0$, the free edges induce a connected graph. The theorem then follows because (a) the potential needs to grow by $\Omega(nk)$ in order to solve the token dissemination problem and (b) the potential increase per round is always at most $O(\log n)$, and it is $0$ if the number of broadcasting nodes is less than $\varepsilon n / \log n$.

To apply the probabilistic method, we let each set $S_v$ contain each token independently with probability $3/4$. First note that by a standard Chernoff argument, the probability that $|S_v|$ is much larger than $3k/4$ is exponentially small in $k$, and thus the probability that $\sum_{v \in V} |\mathcal{T} \setminus S_v| = o(nk)$ is also exponentially small in $k$. Further, by Lemma 2.1 and Lemma 2.2, for every round $r$ and every token assignment $A_r$, the graph induced by all the free edges has the following two properties with probability at least $1/2 - 2^{-\Omega(n)}$: (1) it contains at most $O(\log n)$ connected components, and (2) it is connected over all the nodes if there are at most $\varepsilon n / \log n$ broadcasting nodes. This shows that (for sufficiently large $n$) there is a way to choose the sets $S_v$ such that the potential needs to grow by $\Omega(nk)$, the potential increase per round is at most $O(\log n)$, and if there are at most $\varepsilon n / \log n$ broadcasting nodes, the potential increase is $0$. The claim of the theorem follows. ∎
3 Unicast Model
We want to solve the token dissemination problem where the $k$ tokens are initially distributed (arbitrarily) over the network, and the goal is to disseminate all the tokens to all the nodes with as few messages as possible. It turns out to be easier to first consider a special instance, called the Single-Source Case, where all the tokens are initially located at a single source node. We use the Single-Source Algorithm (Section 3.1) as a subroutine to solve the more general Multi-Source Case (Section 3.2).
3.1 Single Source Node
Consider the token dissemination problem in which all $k$ tokens are initially given to a single source node. We now present a deterministic algorithm that solves this problem against a strongly adaptive adversary with $1$-adversary-competitive (total) message complexity of $O(nk)$ (cf. Def. 1.3). In other words, if the algorithm is provided with a budget that equals the number of topological changes, then for sufficiently large $k$, the amortized message complexity to disseminate the $k$ tokens is linear in $n$. Note that even in a static graph, the cost to disseminate a single token is $\Omega(n)$. Hence, if the number of tokens is at least linear in $n$, the amortized message complexity is asymptotically best possible. Before we present the algorithm and its analysis, consider the following definitions.
Definition 3.1 (Complete and Incomplete Node).
We say that a node $v$ is complete at time $t$ if it has all $k$ tokens at this time. Otherwise, $v$ is incomplete.
Definition 3.2 (Bridge Node).
In each round, any incomplete node that has a complete neighbor is called a bridge node for that round.
3.1.1 Single-Source Unicast Algorithm
The source node fixes an arbitrary order of the tokens and assigns integer to its token as its token ID. In the algorithm, only complete nodes send tokens during an execution. To this end, each complete node announces its completeness to its neighbors. In each round, each incomplete node sends token requests to (some of) its complete neighbors. Then, in the very next round, each complete node sends the requested tokens back to the requesting nodes, provided it is still connected to them. Although the general idea is simple, a careful strategy is needed to avoid redundant communication.
Each complete node informs each node about its completeness at most once, by remembering which nodes it has informed before. Each node also remembers all the complete nodes that have informed it about their completeness. Each incomplete node chooses which of its complete neighbors to send token requests to, based on a priority defined by the following categorization of its adjacent edges.
Consider an edge such that is incomplete and is complete. Then is called new in round if the edge was inserted at the beginning of round or . Edge is called contributive if it is not new, but a new token has been sent over it between the last insertion of the edge and the end of round , i.e., it has contributed to the dissemination. Otherwise, if is neither new nor contributive, it is called idle in round .
Based on the above definitions, if has missing tokens, it creates token requests, one for each missing token. Then, assigns exactly one distinct token request to each of the new edges (if any). Afterwards, if there are still unassigned token requests left, assigns exactly one request to each of the idle edges (if any). Finally, does the same for the contributive edges. Note that, as each edge gets at most one assigned token request, some token requests may remain unassigned in the current round. At the end, sends the assigned token requests over the corresponding edges in round .
Note that to categorize an adjacent edge , an incomplete node may need to know whether it learns a token over in round or not. However, if sends a token request over in round , and , then knows that it learns a token over in round . Moreover, to avoid sending redundant token requests, node needs to know whether it learns some requested token in round or not. Node knows the token requests it sent over its adjacent edges in round ; then, knowing its adjacent edges in round and the fact that complete nodes immediately respond to requests, knows which tokens it learns in round . The pseudocode is given in Algorithm 1.
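The priority rule above (new edges first, then idle, then contributive, with at most one distinct request per edge) can be sketched as follows. This is a minimal illustrative sketch, not the paper's Algorithm 1; the function name, the edge-type labels as dictionary keys, and the data layout are assumptions for exposition.

```python
def assign_requests(missing_tokens, edges_by_type):
    """Assign at most one distinct token request per adjacent edge,
    preferring new edges, then idle edges, then contributive edges.
    Returns a dict mapping each chosen edge to the token it requests."""
    pending = list(missing_tokens)   # one request per missing token
    assignment = {}                  # edge -> requested token ID
    for edge_type in ("new", "idle", "contributive"):
        for edge in edges_by_type.get(edge_type, []):
            if not pending:          # all requests assigned this round
                return assignment
            assignment[edge] = pending.pop(0)
    return assignment                # leftover requests stay unassigned

# Example: three missing tokens, one new edge, two idle, one contributive.
# The new edge and the two idle edges absorb all three requests, so the
# contributive edge carries none this round.
result = assign_requests([1, 2, 3],
                         {"new": ["e1"], "idle": ["e2", "e5"],
                          "contributive": ["e3"]})
```

Each unassigned request is simply retried in a later round, which matches the description above: an edge carries at most one request per round, and requests outnumbering usable edges wait.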
3.1.2 Analysis
First, let us argue the message complexity of the algorithm. Then, we show that with a natural stability assumption the time complexity is also small.
Theorem 3.1.
Given tokens to disseminate in a dynamic network against a strongly adaptive adversary, the Single-Source Unicast Algorithm has 1-adversary-competitive message complexity of .
Proof.
There are three different types of messages sent by nodes during the algorithm execution: (1) tokens, (2) completeness announcements, and (3) token requests. Each node sends the request for each distinct token to at most one neighbor in a round. If the connection to that complete neighbor remains for the very next round, then the requested token is successfully received by the node, and the node stops sending this token request. Therefore, each distinct token is received by each node once, and hence there are at most sent messages of type 1 throughout the execution.
Each of the nodes informs at most other nodes about its completeness throughout the execution. Since each node avoids informing the same node more than once, at most messages of type 2 are sent throughout the execution.
It remains to show that the number of sent messages of type 3 is at most during the execution. In each round in which a token request is sent by some node, a new token is received in the next round unless the edge is removed. Therefore, the number of token requests sent at any time is at most plus the number of edge deletions. The first term comes from the fact that there are tokens and each token is received by at most nodes, each token once. Furthermore, since we assume that the initial graph is empty, the number of edge deletions is upper bounded by . ∎
In the following, we argue that with a natural stability assumption, the algorithm disseminates all the tokens and terminates fast. The following two lemmas show that prioritization of sending token requests over different edge types ensures fast dissemination.
Definition 3.3 (Futile Round).
Round is a futile round, if no token request is sent over a contributive edge in round , and no token learning occurs in rounds and .
Lemma 3.2.
Let be an arbitrary futile round in any execution of the Single-Source Unicast Algorithm on a 3-edge stable dynamic network. Then, if there exist bridge nodes in round , at least idle edges are removed at the end of round .
Proof.
First, let us show that every bridge node has an adjacent idle edge in round . If there exists a new edge in round , then due to the 3-edge stability property and the higher priority of sending requests on new edges, a token is learned in at least one of rounds or . Hence, there exists no new edge in round . Now, for the sake of contradiction, assume that there exists a bridge node in round that does not have an adjacent idle edge. Since cannot have an adjacent new edge either, it must have at least one contributive edge. Therefore, sends a request over at least one of its contributive edges in round , contradicting the assumption that is a futile round.
Since every bridge node has an idle edge and no new edge, due to the mentioned priority rules, a bridge node sends a request over at least one of its idle edges. Since no new token is learned in round , the idle edge carrying a request must be removed. Hence, from each bridge node at least one idle edge is removed at the end of round . ∎
Lemma 3.3.
In any execution of the Single-Source Unicast Algorithm on a 3-edge stable node dynamic network, there are at most futile rounds until the last token request is sent.
Proof.
Let us first argue that it is not possible for a new edge to become idle. For any round , consider an arbitrary new edge , where is complete and is incomplete. Then in round , either is contributive or is complete. This is because the only case in which does not send a token request over in rounds or is when sends all its remaining token requests over its other new edges in rounds or . In that case, due to edge stability, receives its requested tokens by the end of round and becomes complete. Otherwise, sends a token request over in round or , and hence becomes contributive by the end of round .
Then, the only case in which an edge becomes idle in round is when both endpoints are incomplete in round and only one of them becomes complete in round . Since each node becomes complete only once, the number of ’s idle edges never increases throughout the execution after ’s completion.
Now consider an arbitrary futile round, where the largest number of idle edges of any complete node in a futile round is . Hence, there exist at least bridge nodes in that round. Thus, by Lemma 3.2, at least idle edges are removed at the end of that futile round. As a result, there cannot be any idle edges, and hence any futile rounds, after futile rounds have occurred. This shows that the number of futile rounds until the last token request is sent is at most . ∎
Theorem 3.4.
Given tokens to disseminate, if the dynamic graph is 3-edge stable, the Single-Source Unicast Algorithm terminates in rounds, and all the nodes receive all the tokens.
Proof.
Consider any time during an arbitrary execution of the Single-Source Algorithm that has not yet terminated. Let denote the number of token learnings in . Let us show that the number of periods of two consecutive rounds in in which no token is learned is at most . This leads to a running time of for the algorithm.
Let and be two arbitrary consecutive rounds in in which no token is learned. Hence, there is no new edge in round ; otherwise, a token would have been learned in round or due to the 3-edge stability property and the higher priority of sending token requests on new edges. Then, there are two possibilities:

Case 1: At least one contributive edge carries a token request in round . Since no token is learned in round , the edge must be removed by the adversary at the end of round . Therefore, we can map one of the removed contributive edges to round . Doing so, for any such round , a distinct token learning in is mapped to (namely, one of the token learnings that happened on the removed contributive edge after its last insertion). Therefore, since this gives a one-to-one mapping between such rounds and a subset of token learnings in , the number of such rounds (i.e., ) is at most the number of token learnings in .

Case 2: No contributive edge carries a token request in round . Therefore, round is a futile round. Then, by Lemma 3.3, the number of such rounds (i.e., round ) is at most throughout the execution.
∎
3.2 Multiple Source Nodes
Let us consider a more general case where the tokens are initially given to more than one source node. Assume that there are source nodes such that for , is initially given tokens. Hence, in total tokens need to be disseminated.
3.2.1 Strongly Adaptive Adversary
To solve this problem against a strongly adaptive adversary, we present a deterministic algorithm with message complexity. It extends the Single-Source Unicast Algorithm and has the same running time under the same stability assumption (i.e., 3-edge stability). However, it has a higher message complexity than the Single-Source Unicast Algorithm, since each node needs to announce its completeness with respect to different source nodes to the other nodes in its neighborhood throughout the algorithm execution.
Since there is more than one source node, we need to include the intended source node in the definitions of Section 3.1.
So we say a node is complete with respect to source node if it has received all the tokens originating at .
Similarly, a node is called a bridge node with respect to source node , if it is an incomplete node with respect to and is connected to a node which is complete with respect to .
Multi-Source Unicast Algorithm
The algorithm considers a priority over the dissemination of tokens from different sources.
To do so, in each round, all nodes give the highest priority to the dissemination of the tokens from the minimum-ID known source node whose dissemination is not yet complete.
In the sequel, we explain the details of implementing this idea.
Initially, each source node fixes an arbitrary order of its tokens and assigns a token identifier containing its own ID and an integer (i.e., ) to its token. Moreover, we assume that each source node becomes complete with respect to itself at time . To avoid redundant communication, each node keeps some information about the execution history by constantly updating the following sets: is the set of all nodes that have been informed by about ’s completeness with respect to ; is the set of nodes that have informed about their completeness with respect to ; and is the set of all source nodes with respect to which is complete. Then each node performs the following three tasks in parallel in each round of the execution: (1) for each edge , if there is any source node such that and , it picks the minimum such and sends a “completeness announcement with respect to ” to ; (2) for each edge , if received a request for token from in the previous round, it sends to ; (3) node picks the minimum such that and , and then, regarding sending token requests, acts as in the Single-Source Unicast Algorithm, as if that were the only source in the network.
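The three per-round tasks can be sketched as follows. This is a hedged, illustrative sketch only: the class `NodeState`, the field names (`complete_wrt`, `informed`, `known_sources`), and the message-tuple format are hypothetical names introduced here, not identifiers from the paper's pseudocode; task 3's request logic (which mirrors the single-source algorithm) is abbreviated to picking the target source.

```python
from dataclasses import dataclass, field

@dataclass
class NodeState:
    """Per-node bookkeeping sets, as described in the text (names assumed)."""
    complete_wrt: set = field(default_factory=set)   # sources this node is complete w.r.t.
    informed: dict = field(default_factory=dict)     # source -> neighbors already told
    known_sources: set = field(default_factory=set)  # all source IDs known to this node

def round_tasks(u, neighbors, requests_received):
    """One round of the multi-source algorithm at node u (sketch)."""
    messages = []
    # Task 1: for each neighbor, announce completeness w.r.t. the
    # minimum-ID source it has not been told about yet.
    for v in neighbors:
        eligible = sorted(s for s in u.complete_wrt
                          if v not in u.informed.get(s, set()))
        if eligible:
            s = eligible[0]
            u.informed.setdefault(s, set()).add(v)
            messages.append(("announce", v, s))
    # Task 2: answer token requests received in the previous round.
    for (v, token) in requests_received:
        messages.append(("token", v, token))
    # Task 3: pick the minimum-ID known source u is still incomplete
    # w.r.t.; requests for its tokens then follow the single-source rules.
    incomplete = sorted(s for s in u.known_sources if s not in u.complete_wrt)
    target_source = incomplete[0] if incomplete else None
    return messages, target_source
```

The `informed` sets are exactly what bounds the type-2 message count in Theorem 3.5: each (node, source, neighbor) announcement happens at most once.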
Theorem 3.5.
To disseminate tokens that are initially distributed among source nodes, the Multi-Source Unicast Algorithm has a 1-adversary-competitive message complexity of .
Proof.
The message complexity argument for the Multi-Source Unicast Algorithm closely follows the proof of Theorem 3.1. As before, we consider the three different types of messages sent throughout the algorithm execution: (1) tokens, (2) completeness announcements, and (3) token requests. The number of messages of types 1 and 3 is exactly the same as when running the Single-Source Unicast Algorithm. However, the number of messages of type 2 differs. When running the Single-Source Unicast Algorithm, each node needs to inform any other node in its neighborhood about its completeness only once throughout the algorithm execution; the reason is that there is only one source node, and each node achieves completeness only with respect to that source. When running the Multi-Source Unicast Algorithm, however, each node becomes complete with respect to different source nodes. Therefore, each node may need to announce its completeness with respect to each of the source nodes to every other node in its neighborhood throughout the algorithm execution, which leads to messages in total. As a result, messages of type 1, messages of type 2, and messages of type 3 prove the 1-adversary-competitive message complexity of for the Multi-Source Unicast Algorithm. ∎
Theorem 3.6.
Given tokens to disseminate, if the dynamic graph is 3-edge stable, the Multi-Source Unicast Algorithm terminates in rounds, and all the nodes receive all the tokens.
Proof.
Theorem 3.4 states that when all the tokens are initially given to one source node, running the Single-Source Unicast Algorithm completes token dissemination in at most rounds. The Multi-Source Unicast Algorithm guarantees that the minimum-ID source node whose token dissemination is not yet complete runs the Single-Source Unicast Algorithm without any interference until its token dissemination is complete. This is guaranteed by having all nodes give the highest priority to the token dissemination of the minimum-ID source node with incomplete token dissemination.
Therefore, if the Single-Source Unicast Algorithm solves token dissemination in rounds for some constant , then the token dissemination of the first minimum-ID source node is complete after rounds, the second one after the next rounds, and so on. Hence, the whole running time is , where .
∎
3.2.2 Oblivious Adversary
When the ratio of the number of disseminated tokens to the number of source nodes is large enough, i.e., , the algorithm presented in Section 3.2.1 has an efficient linear amortized message complexity. However, in the case of, say, source nodes and tokens to be disseminated, the amortized message complexity of the algorithm would be by Theorem 3.5. In this section, we focus on instances with a large number of source nodes, where tokens in total are distributed arbitrarily among the source nodes. Assume that the number of source nodes and the total number of tokens are initially known to the nodes. We then show that by weakening the adversary from an adaptive one to an oblivious one, a better amortized message complexity can be achieved when the ratio is small. Hence, in the sequel, we assume that and .
The key idea is to efficiently reduce the number of source nodes and then simply run the MultiSourceUnicast algorithm for this smaller set of sources. Hence, the algorithm runs in two phases. In the first phase, a (small) subset of nodes is chosen as new source nodes, and all the tokens are efficiently sent to these new source nodes. Let us call the new source nodes centers. Then, in the second phase, the MultiSourceUnicast algorithm is executed with the centers as the source nodes.
Let us now explain the first phase in detail. If the number of source nodes is less than , nothing is done in the first phase and the second phase starts right away by running the Multi-Source Unicast algorithm (considering all the source nodes as centers). Therefore, in the sequel, let us assume that the number of source nodes is more than . We aim to reduce the number of source nodes from to , where the parameter denoting the number of centers will be determined later. The centers then own all the tokens at the end of the first phase.
Each node independently marks itself as a center with probability . Therefore, in expectation, there are centers. Then, each token owned by a source node (which is not marked as a center) needs to reach some center. The tokens owned by one source node may reach different centers; however, each token is owned by exactly one center at the end of the first phase. To obtain this new token assignment, each of these tokens performs a random walk (in parallel) until it reaches a center. Once a token reaches a center, it stops there and the center owns the token. Since, in expectation, there are uniformly random centers among the nodes, any fixed set of distinct nodes contains at least one center with high probability (w.h.p.). That is, each random-walk token has to visit distinct nodes to guarantee that it hits a center w.h.p. For this, we apply a known random walk visit bound (see Lemma 3.7 below) for the dynamic setting [22].
To perform the desired random walks, we construct a virtual regular multigraph by adding an appropriate number of self-loops to the network in each round. To do so, for any round , each node with degree in the graph adds virtual self-loops as its adjacent virtual edges. Note that a random walk step over a virtual edge is not counted in the message complexity, but it does increase the time complexity. Due to the assumed bandwidth restriction (i.e., congestion) on the actual edges, not all tokens necessarily perform a random walk step in each round. Therefore, we say a token is active in a round when it performs a random walk step, whether it traverses an actual or a virtual edge; otherwise, we say that the token is passive. Consider as a predefined degree threshold. We call a node with degree larger than a high-degree node; otherwise, it is a low-degree node. Recall that a high-degree node must have at least one center among its neighbors with high probability.
Consider an arbitrary low-degree node with degree , and let be the set of tokens at node at the beginning of round . Node processes each token in as follows. With probability , token traverses a self-loop, i.e., it remains at node . With probability , chooses one of its adjacent edges uniformly at random, and if has not yet sent any token over in round , token is sent over . Therefore, a token at a low-degree node might be passive in a round because of congestion on the edges. Now consider a high-degree node with degree in round . Then, w.h.p., node has at least one center among its neighbors. To each of its neighboring centers, sends one of the tokens owned by node (if any) at the beginning of round . Since the number of ’s neighboring centers might be less than the number of tokens at node , not all tokens at node are necessarily sent to the neighboring centers in round . Therefore, a token at is passive until it is either sent to one of ’s neighboring centers, or the degree of drops below the threshold and the token resumes its random walk. In this way, a token keeps walking until it reaches a center. The pseudocode is given in Algorithm 2.
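One round of the walk at a low-degree node, including the self-loop regularization and the one-token-per-edge congestion rule, can be sketched as follows. This is an illustrative sketch under stated assumptions, not the paper's Algorithm 2: `d_max` stands for the degree of the virtual regular multigraph, the function name and interface are hypothetical, and the high-degree-node case (forwarding directly to neighboring centers) is omitted.

```python
import random

def step_tokens(tokens, actual_edges, d_max, rng=random):
    """Route each token held by a low-degree node for one round.
    With probability (d_max - d)/d_max a token takes a virtual self-loop
    (stays put, costing no message); otherwise it picks a uniformly random
    actual edge, but at most one token may cross each actual edge per
    round, so losers of that race are passive this round.
    Returns {token: edge it traversed, or None if it stayed/was passive}."""
    d = len(actual_edges)
    used = set()   # actual edges already carrying a token this round
    moves = {}
    for t in tokens:
        if rng.random() < (d_max - d) / d_max:
            moves[t] = None                  # virtual self-loop step
        else:
            e = rng.choice(actual_edges)
            if e in used:
                moves[t] = None              # passive: edge congested
            else:
                used.add(e)
                moves[t] = e                 # actual step (counts as a message)
    return moves

# With d = d_max there are no self-loops, so every token tries an actual
# edge; with a single edge, only one of two tokens can cross per round.
moves = step_tokens(["t1", "t2"], ["e1"], 1)
```

Only the `moves[t] = e` branch contributes to the message complexity, which is why the analysis below counts actual steps separately from the walk's total length.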
Analysis. Consider the random walk of an arbitrary token in the given dynamic graph . As explained in the algorithm description, token is not necessarily active in all rounds throughout the algorithm execution. Let denote the (not necessarily consecutive) subsequence of such that is active in each and every graph in . In each graph in (except the last one), token is sent from a node to a node such that is a low-degree node. Therefore, all the nodes visited by in have actual degree at most .
Lemma 3.7 (Lemma 6.7 in [22]).
Let be a regular dynamic graph controlled by an oblivious adversary. Let denote the number of visits of a random walk to vertex by time , given that the random walk started at node . could be zero or a positive number. Then for any nodes , and for all , where is the (dynamic) mixing time of , , for any constant .
The above lemma holds for any random walk on an arbitrary graph sequence provided by an oblivious adversary; we refer to [22] for more details. It states that a random walk of length on a regular dynamic graph visits at least , i.e., , distinct nodes with high probability (for ). Since only token traversals over actual edges increase the message complexity, to analyze the worst-case message complexity we only consider (in view of Lemma 3.7) the upper bound on the actual degree of all the nodes visited by , which is . For to perform actual steps, the walk takes at least steps w.h.p. on the constructed regular multigraph (using a standard Chernoff bound). Therefore, by Lemma 3.7, visits distinct nodes. As discussed earlier, for to visit a center during its walk w.h.p., it is enough that it visits at least distinct nodes. Thus, we get by setting and . This implies that each token performs a random walk of length at least to guarantee that it reaches a center w.h.p. Since this holds for an arbitrary random-walk token w.h.p., by a union bound, it also holds for all the tokens.
The following theorem shows that by setting the parameters properly, the desired message complexity is achieved.
Theorem 3.8.
There is an algorithm with message complexity to disseminate tokens from source nodes in a dynamic network, in which the topology is controlled by an oblivious adversary. Hence, the amortized message complexity of the algorithm is .
Proof.
In the first phase, at most tokens perform random walks of (actual) steps each to reach some center. Note that this excludes the message cost of the self-loop (virtual) edges. Therefore, the first phase costs messages. In the second phase, we run the Multi-Source Unicast algorithm with source nodes. Therefore, by Theorem 3.5, the message complexity of the second phase is . Thus, the total message complexity is . Parameter is sublinear in , and . Hence, is larger than , and consequently . The message complexity is . To fix the parameter , let us optimize the sum as follows.
Thus, the total message complexity is .
Therefore, the amortized message complexity to disseminate tokens is
∎
The following table highlights the amortized message cost for different sizes of the token set. Recall that, by our assumptions, and , and always.
Number of disseminated tokens ()  Amortized message complexity 

Remark. As mentioned before, if there are fewer than source nodes, the Multi-Source Unicast algorithm is executed. It is a deterministic algorithm, and hence works properly against an oblivious adversary. The total message cost of the Multi-Source Unicast Algorithm is (cf. Theorem 3.5). Therefore, the amortized message complexity is , which is upper bounded by , since the number of tokens is always larger than the number of source nodes, i.e., . Therefore, when the number of source nodes is less than , the Multi-Source Unicast algorithm is more efficient.
Now let us analyze the running time of the algorithm. Since there are tokens in total and at least source nodes, a source node may have as many as tokens to disseminate in the beginning. Further, since the dynamic graph is regular, as many as random walks from each node can be performed in parallel with at most congestion over an edge. The reason is that if each node starts random walks in parallel, then in expectation each edge carries at most walks (from both ends) in each round, and hence there is at most congestion over an edge with high probability. Therefore, performing random walks (corresponding to the tokens from a source node) in parallel incurs at most delay per step w.h.p. Another reason for a delay in the random walk of a token is that the token is at a high-degree node in some round and the number of neighboring centers is less than the number of tokens at that node in that round. Note that the number of such rounds is at most , since in each such (delay) round at least one token is sent to a center.
Since the length of the random walks, including virtual steps (the virtual steps are counted towards the running time of the algorithm), is (assuming the worst-case actual degree for the running time), the total time of the first phase is rounds. Since the second phase is an execution of the Multi-Source Unicast algorithm, it takes time under the additional natural condition that the dynamic graph is 3-edge stable, as follows from Theorem 3.6. Hence, the total running time of phase 1 and phase 2 is rounds. The time bound becomes , as and .
4 Conclusion and Open Problems
We studied the message complexity of information spreading in dynamic networks. While time complexity has been studied more intensely, understanding the message complexity in various dynamic network models is likely to shed light on the time complexity as well. Several open questions arise from our work. One key question is that we do not have tight bounds on the amortized message complexity of unicast under the strongly adaptive adversary (when not charging the adversary for topological changes); the only known bounds are the trivial upper and lower bounds.
A contribution of our work is introducing the adversary-competitive message complexity, which is useful for studying algorithmic performance in dynamic networks as a function of the dynamism. We were able to show an optimal amortized message bound for unicast in this model for both the single-source and multi-source settings, when the number of tokens is large. However, when the number of tokens is small (say ) and they start from multiple sources (an important special case is one token starting from each node), we do not have a good bound. We were only able to show a amortized bound under a weaker (oblivious) adversary. Improving this bound for the oblivious adversary, or showing a nontrivial bound for the strongly adaptive adversary, is an interesting open problem. In the case of the oblivious adversary, we assumed the number of source nodes and the number of tokens as inputs; it would be nice to relax these assumptions. Also, developing efficient protocols for dynamic networks that perform well under the adversary-competitive measure for various problems is an interesting research goal.
References
 [1] Y. Afek, B. Awerbuch, and E. Gafni. Applying static network protocols to dynamic networks. In IEEE FOCS, pages 358–370, 1987.
 [2] R. Ahlswede, N. Cai, S. Li, and R. Yeung. Network information flow. Transactions on Information Theory, 46(4):1204–1216, 2000.
 [3] N. Alon, A. Bar-Noy, N. Linial, and D. Peleg. A lower bound for radio broadcast. Computer and System Sciences, 43(2):290–298, 1991.
 [4] H. Attiya and J. Welch. Distributed Computing: Fundamentals, Simulations and Advanced Topics. John Wiley Interscience, 2004.
 [5] J. Augustine, C. Avin, M. Liaee, G. Pandurangan, and R. Rajaraman. Information spreading in dynamic networks under oblivious adversaries. In DISC, pages 399–413, 2016.
 [6] J. Augustine, G. Pandurangan, and P. Robinson. Fast byzantine agreement in dynamic networks. In PODC, pages 74–83, 2013.
 [7] J. Augustine, G. Pandurangan, and P. Robinson. Fast byzantine leader election in dynamic networks. In DISC, pages 276–291, 2015.
 [8] C. Avin, M. Koucký, and Z. Lotker. How to explore a fast-changing world (cover time of a simple random walk on evolving graphs). In ICALP, pages 121–132, 2008.
 [9] B. Awerbuch, P. Berenbrink, A. Brinkmann, and C. Scheideler. Simple routing strategies for adversarial systems. In IEEE FOCS, pages 158–167, 2001.
 [10] B. Awerbuch, A. Brinkmann, and C. Scheideler. Anycasting in adversarial systems: Routing and admission control. In ICALP, pages 1153–1168, 2003.
 [11] B. Awerbuch and F. T. Leighton. Improved approximation algorithms for the multicommodity flow problem and local competitive routing in dynamic networks. In ACM STOC, pages 487–496, 1994.
 [12] A. Bar-Noy, S. Guha, J. Naor, and B. Schieber. Message multicasting in heterogeneous networks. SIAM Journal on Computing, 30(2):347–358, 2000.
 [13] R. Bar-Yehuda, O. Goldreich, and A. Itai. On the time-complexity of broadcast in radio networks: an exponential gap between determinism and randomization. In ACM PODC, pages 98–108, 1987.
 [14] H. Baumann, P. Crescenzi, and P. Fraigniaud. Parsimonious flooding in dynamic graphs. Distributed Computing, 24(1):31–44, 2011.
 [15] M. A. Bender, J. T. Fineman, M. Movahedi, J. Saia, V. Dani, S. Gilbert, S. Pettie, and M. Young. Resource-competitive algorithms. SIGACT News, 46(3):57–71, 2015.
 [16] A. Casteigts, P. Flocchini, W. Quattrociocchi, and N. Santoro. Time-varying graphs and dynamic networks. CoRR, abs/1012.0009, 2010. Short version in ADHOC-NOW 2011.
 [17] A. Clementi, A. Monti, and R. Silvestri. Distributed multibroadcast in unknown radio networks. In ACM PODC, pages 255–264, 2001.
 [18] A. E. F. Clementi, P. Crescenzi, C. Doerr, P. Fraigniaud, M. Isopi, A. Panconesi, F. Pasquale, and R. Silvestri. Rumor spreading in random evolving graphs. In Proc. of 21st Annual European Symposium on Algorithms (ESA), pages 325–336, 2013.
 [19] A. E. F. Clementi, C. Macci, A. Monti, F. Pasquale, and R. Silvestri. Flooding time in edge-Markovian dynamic graphs. In Proc. of the 27th Annual ACM Symposium on Principles of Distributed Computing (PODC), pages 213–222, 2008.
 [20] A. E. F. Clementi, F. Pasquale, A. Monti, and R. Silvestri. Communication in dynamic radio networks. In Proc. of the 26th Annual ACM Symposium on Principles of Distributed Computing (PODC), pages 205–214, 2007.
 [21] A. E. F. Clementi and R. Silvestri. Parsimonious flooding in geometric random-walks. J. Comput. Syst. Sci., 81(1):219–233, 2015.
 [22] A. Das Sarma, A. R. Molla, and G. Pandurangan. Distributed computation in dynamic networks via random walks. Theoretical Computer Science, 581:45–66, 2015.
 [23] A. Das Sarma, D. Nanongkai, G. Pandurangan, and P. Tetali. Efficient distributed random walks with applications. In PODC, pages 201–210, 2010.
 [24] A. Das Sarma, D. Nanongkai, G. Pandurangan, and P. Tetali. Distributed random walks. J. ACM, 60(1):2:1–2:31, 2013.
 [25] S. Dolev. Self-stabilization. MIT Press, 2000.
 [26] C. Dutta, G. Pandurangan, R. Rajaraman, Z. Sun, and E. Viola. On the complexity of information spreading in dynamic networks. In ACM-SIAM SODA, 2013.
 [27] E. Gafni and D. Bertsekas. Distributed algorithms for generating loop-free routes in networks with frequently changing topology. IEEE Transactions on Communications, 1981.
 [28] B. Haeupler. Analyzing network coding gossip made easy. In ACM STOC, pages 293–302, 2011.
 [29] B. Haeupler and D. Karger. Faster information dissemination in dynamic networks via network coding. In ACM PODC, pages 381–390, 2011.
 [30] B. Haeupler and F. Kuhn. Lower bounds on information dissemination in dynamic networks. In DISC, pages 166–180, 2012.
 [31] V. King, S. Kutten, and M. Thorup. Construction and impromptu repair of an MST in a distributed network with o(m) communication. In PODC, pages 71–80, 2015.
 [32] F. Kuhn, N. Lynch, and R. Oshman. Distributed computation in dynamic networks. In STOC, pages 513–522, 2010.
 [33] F. Kuhn and R. Oshman. Dynamic networks: Models and algorithms. SIGACT News, 42(1), 2011.
 [34] S. Kutten, G. Pandurangan, D. Peleg, P. Robinson, and A. Trehan. On the complexity of universal leader election. Journal of ACM, 62(1):7:1–7:27, 2015.
 [35] F. T. Leighton. Introduction to Parallel Algorithms and Architectures: Arrays, Trees, and Hypercubes. Morgan Kaufmann, 1991.
 [36] N. A. Lynch. Distributed Algorithms. Morgan Kaufmann, 1996.
 [37] R. O’Dell and R. Wattenhofer. Information dissemination in highly dynamic graphs. In DIALM-POMC, pages 104–110, 2005.
 [38] D. Peleg. Distributed computing: a locality-sensitive approach. SIAM, Philadelphia, PA, USA, 2000.
 [39] D. M. Topkis. Concurrent broadcast for information dissemination. IEEE Transactions on Software Engineering, 11(10):1107–1112, 1985.