A Computational Approach to Organizational Structure

06/14/2018 ∙ by Bernhard Haeupler, et al. ∙ Carnegie Mellon University

An organizational structure defines how an organization arranges and manages its individuals. Because individuals in an organization take time to both perform tasks and communicate, organizational structure must take both computation and communication into account. Sociologists have studied organizational structure from an empirical perspective for decades, but their work has been limited by small sample sizes. By contrast, we study organizational structure from a computational and theoretical perspective. Specifically, we introduce a model of organizations that involves both computation and communication, and captures the spirit of past sociology experiments. In our model, each node in a graph starts with a token, and at any time can either perform computation to merge two tokens in t_c time, or perform communication by sending a token to a neighbor in t_m time. We study how to schedule computations and communications so as to merge all tokens as quickly as possible. As our first result, we give a polynomial-time algorithm that optimally solves this problem on a complete graph. This result characterizes the optimal graph structure---the edges used for communication in the optimal schedule---and therefore informs the optimal design of large-scale organizations. Moreover, since pre-existing organizations may want to optimize their workflow, we also study this problem on arbitrary graphs. We demonstrate that our problem on arbitrary graphs is not only NP-hard but also hard to approximate within a multiplicative 1.5 factor. Finally, we give an O(n · OPT/t_m)-approximation algorithm for our problem on arbitrary graphs.

1 Introduction

Large-scale organizations accomplish otherwise prohibitively difficult tasks by distributing work. For example, companies distribute tasks among employees. Moreover, not only do organizations delegate work, but they also determine how members communicate with one another—both in how they choose to structure themselves and in how they instruct employees to utilize this structure [32, 14]. Considering how these two facets of organizations—computation and communication—interact is paramount to ensuring organizational efficiency. For instance, taking both into account enables powerful optimizations such as pipelining, a technique that parallelizes computation and communication [37].

Sociologists have studied how organizational structure—both how an organization is arranged and how an organization is managed—impacts the performance of organizations for decades [19, 58, 56]. An especially influential experiment was conducted by sociologists Alex Bavelas and Harold Leavitt [43, 6, 7]. In this experiment, groups of five participants were placed in one of four communication patterns, depicted in Figure 1. Each participant was given a set of five distinct items chosen from a set of six total items. The goal of all participants was to determine which item all five participants had in common by communicating via written notes over the specified communication pattern and repeatedly taking set intersections. Each group was evaluated based on the total time it took for the participants to complete the task. Notably, both the time it took participants to communicate messages and the time it took participants to perform computation on the messages they received counted towards the total time taken by the solution.

(a) Chain (line)
(b) Y
(c) Circle
(d) Wheel (star)
Figure 1: Communication patterns.

Many other sociologists have further studied organizational structure through empirical experimentation [66, 12, 33, 62, 63, 29, 64, 67, 28, 68, 65, 49, 50, 47, 15, 16, 41, 42]. Many of these studies observe a strong pattern in which “more peripheral members of a network [channeled] information to the most central node …who then decided what the correct answer was” [10].

However, sociology experiments on organizational structure suffer from two serious shortcomings. First, these empirical studies are limited in size by physical and logistical constraints. For example, the Bavelas-Leavitt experiment studied groups of only five participants, and most subsequent experiments did not consider groups of more than five people [12, 33, 62, 63, 29, 64, 67, 28, 68, 65, 49, 50, 47, 15, 16, 41, 42]. These small sample sizes have led to reproducibility issues and generally inconclusive experiments as observed by Burgess [12]: “Unfortunately, [this research] has not produced consistent and cumulative results. Indeed, the results are contradictory as well as inconclusive”. Second, although “these works initiated a plethora of empirical research… [they] were not accompanied by much theoretical development” [13]. Moreover, the few theoretical results that have been given [6] are not only rudimentary but also old and so fail to capitalize on modern advances in theoretical computer science.

In this paper, we provide a modern algorithmic perspective on organizational structure. Our main research question is twofold:

  1. How should one construct a large-scale organization?

  2. How should an organization coordinate communication and computation?

We present a model of organizations, directly inspired by the Bavelas-Leavitt experiment, which takes both communication and computation into account. With this model we give a formal treatment of the tendency of members of networks to channel information to centrally located nodes. Examining organizational structure through a computational lens—with tools from approximation algorithms—allows us to develop a formal understanding of organizational structure. Moreover, our results apply to organizations of any size.

1.1 Our Model and Problem

The Token Network Model.

We now introduce our model, the Token Network model. A Token Network is given by an undirected graph G = (V, E) together with parameters t_c and t_m, which describe the time it takes nodes to do computation and communication, respectively. (We assume throughout this paper that t_c and t_m are positive integers.)

Time proceeds in synchronous rounds during which nodes can compute on or communicate atomic tokens. Specifically, in any given round a node is either busy or not busy. If a node is not busy and has at least one token, it can communicate: any node that does so is busy for the next t_m rounds, at the end of which it passes one of its tokens to a neighbor in G. If a node is not busy and has two or more tokens, it can compute: any node that does so is busy for the next t_c rounds, at the end of which it combines (a.k.a. aggregates) two of its tokens into a single new token. (Throughout this paper, we assume for ease of exposition that the smaller of t_c and t_m evenly divides the larger of the two.) At a high level, this means that communication takes t_m rounds and computation takes t_c rounds.

The Token Computation Problem.

We use our Token Network model to give a formal treatment of the tendency of members of networks to channel information to centrally located nodes. In particular, we study the Token Computation problem. Given an input Token Network, an algorithm for the Token Computation problem must output a schedule which directs each node when to compute and when and with whom to communicate. A schedule is valid if, after the schedule is run on the input Token Network where every node begins with a single token, there is a single token in the entire network; i.e., there is one node that has aggregated all the information in the network. We refer to the number of rounds a schedule takes as its length, and we measure the quality of an algorithm by the length of the schedules that it outputs. So as to leave no ambiguity as to the precise problem that we are solving, we give a more formal definition of the Token Computation problem in Appendix A, though we note that this formalization is exactly what one would expect.
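
To make the model and the validity condition concrete, the following sketch simulates a Token Network and checks whether a given schedule is valid. It is an illustration only: the names (Action, run_schedule, is_valid) are ours, tokens are represented as sets of the vertices whose singleton tokens they contain, and a schedule is given explicitly as a list of timed actions.

from dataclasses import dataclass

@dataclass
class Action:
    node: int            # node performing the action
    start: int           # first round of the action (rounds are 1-indexed)
    kind: str            # "compute" or "send"
    target: int = None   # receiving neighbor, used only when kind == "send"

def run_schedule(n, edges, t_c, t_m, actions):
    # Tokens are frozensets of the vertices whose singleton tokens they contain;
    # every node starts with its own singleton token, usable from round 1.
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    tokens = {v: [(frozenset([v]), 1)] for v in range(n)}   # (token, usable-from round)
    busy_until = {v: 0 for v in range(n)}
    for a in sorted(actions, key=lambda a: a.start):
        assert a.start > busy_until[a.node], "node is busy"
        ready = [i for i, (_, avail) in enumerate(tokens[a.node]) if avail <= a.start]
        if a.kind == "compute":
            assert len(ready) >= 2, "computation needs two tokens"
            s2 = tokens[a.node].pop(ready[1])[0]
            s1 = tokens[a.node].pop(ready[0])[0]
            # the node is busy for t_c rounds; the merged token is usable afterwards
            tokens[a.node].append((s1 | s2, a.start + t_c))
            busy_until[a.node] = a.start + t_c - 1
        else:
            assert a.target in adj[a.node], "can only send to a neighbor"
            assert len(ready) >= 1, "communication needs a token"
            tok = tokens[a.node].pop(ready[0])[0]
            # the node is busy for t_m rounds; the token reaches the neighbor afterwards
            tokens[a.target].append((tok, a.start + t_m))
            busy_until[a.node] = a.start + t_m - 1
    return [tok for held in tokens.values() for tok, _ in held]

def is_valid(n, edges, t_c, t_m, actions):
    left = run_schedule(n, edges, t_c, t_m, actions)
    return len(left) == 1 and left[0] == frozenset(range(n))

# Example: a path on three nodes with t_c = 1 and t_m = 2.
example = [Action(0, 1, "send", 1), Action(2, 1, "send", 1),
           Action(1, 3, "compute"), Action(1, 4, "compute")]
print(is_valid(3, [(0, 1), (1, 2)], 1, 2, example))   # True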

Discussion of Modeling Choices.

Our Token Network model and the Token Computation problem are designed to formally capture the challenges articulated by Bavelas and Leavitt of an organization collectively accomplishing a task. In particular, combining tokens can be understood as applying some commutative, associative function to the private inputs of all organization members. For instance, summing up private inputs, taking a minimum of private inputs, or computing the intersection of input sets in an organization can all be cast as instances of the Token Computation problem. Computing the intersection of input sets is, in fact, exactly the setting studied by the Bavelas-Leavitt experiment. Additionally, unlike most models of distributed computing, we assume that both the computation and the communication by members take time, as time to compute over received messages was factored into that experiment.

We consider how a centralized agent can coordinate an organization rather than a distributed model both because it reflects how organizations operate in practice—managers coordinate employees—and because it was an intended object of study of the Bavelas-Leavitt experiment—“there is an understandable urge to proceed with the business of coordination” [6]. We assume that the computation time is the same for every operation and that the output of a computation is roughly the size of each of the inputs as a simplifying assumption; we note, however, that the sets computed over in the Bavelas-Leavitt experiment stayed roughly the same size, so this assumption, too, is motivated by the Bavelas-Leavitt experiment.

Note that unlike the Bavelas-Leavitt experiment, in the Token Computation problem we only require a single arbitrary node in the network to compute the final answer. We purposefully make this choice: even if we were to require that all nodes in a network know the final answer, once one node has aggregated all the information in the network, it is well understood how to spread said value in the network [60, 40, 35]. Moreover, we feel that this sort of workflow, in which only a single node must determine the answer, captures the structure of companies—often, lower-level employees perform tasks and propagate information up a network, but they may not always be told the final result after higher-level management integrates their input. We allow nodes to receive information from multiple neighbors in each round to reflect the structure of the Bavelas-Leavitt experiment. Lastly, it is not hard to see that our assumption that a node can send its token to only a single neighbor, rather than multiple copies of its token to multiple neighbors, is without loss of generality: one can easily modify a schedule in which nodes send multiple copies to neighbors into one of equal length in which a node only ever sends one token per round.

1.2 Results, Discussion and Techniques

We now give a high-level description of our results for solving the Token Computation problem.

Optimal Schedule on Complete Graphs (Section 4).

We begin by considering how to construct the optimal schedule for the complete graph for given values of t_c and t_m. The principal challenge in constructing such a schedule is formalizing how to optimally pipeline computation and communication and showing that any valid schedule needs at least as many rounds as one’s constructed schedule. We overcome this challenge by showing how to modify a given optimal schedule into an efficiently computable one in a way that preserves its pipelining structure. Specifically, we show that one can always modify a valid optimal schedule into another valid optimal schedule with a well-behaved recursive form. We show that this well-behaved schedule can be computed in polynomial time. Even stronger, we show that the edges over which communication takes place in this schedule induce a tree which generalizes Fibonacci trees [34]. It is important to emphasize, then, that this result has implications beyond producing the optimal schedule for a complete graph; it shows how one ought to construct a network over which both computation and communication occur (if one had the freedom to include any edge), thereby answering the first of our two main research questions.

Hardness Results for Arbitrary Graphs (Section 5).

We next consider the hardness of producing good schedules efficiently for arbitrary graphs for given values of t_c and t_m. The challenge in demonstrating the hardness of our problem is that the optimal schedule for an arbitrary graph does not have well-behaved structure. Our insight here is that by forcing a single node to do a great deal of computation we can impose structure on the optimal schedule in a way that makes the optimal schedule reflect the minimum dominating set of the graph. Using this insight, we show that an approximation algorithm is the best one can hope for in our problem by demonstrating that the problem is NP-complete. We then show that no polynomial-time algorithm can produce a schedule of length within a multiplicative factor of 1.5 of the optimal schedule unless P = NP. This result shows that one can only coordinate a pre-existing network over which computation and communication occur so well.

Approximately-Optimal Schedule for Arbitrary Graphs (Section 6).

Given that an approximation algorithm is the best one can hope for, we next give an algorithm which in polynomial time produces an approximately-optimal Token Computation schedule. Our algorithm is based on the simple observation that after O(log n) repetitions of pairing off nodes with tokens, having one node in each pair route a token to the other node in the pair and then having every node compute, there will be a single token in the network. The difficulty in this approach lies in showing that one can route pairs of tokens in a way that is competitive with the length of the optimal schedule. We show that by considering the paths in G traced out by tokens sent by the optimal schedule, we can get a concrete hold on the optimal schedule. Specifically, we show that a polynomial-time algorithm based on our observation produces, with high probability (meaning with probability at least 1 - 1/poly(n) henceforth), a valid schedule whose length is bounded in terms of OPT, the length of the optimal schedule. Using an easy bound on OPT, this can be roughly interpreted as an O(n · OPT/t_m)-approximation algorithm. This result shows how one can coordinate a pre-existing network over which both computation and communication occur fairly well.
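
The halving observation underlying this algorithm can be sketched as follows. The sketch is schematic and uses names of our own choosing; in particular, route_and_merge stands in for the routing subroutine that the analysis is actually about, namely moving one token of each pair to its partner and merging it there.

import math
import random

def aggregate_by_pairing(token_holders, route_and_merge):
    # token_holders: the nodes currently holding a token.
    # route_and_merge(a, b): routes a's token to b, merges there, and returns b.
    repetitions = 0
    holders = list(token_holders)
    while len(holders) > 1:
        random.shuffle(holders)                     # pair off the holders arbitrarily
        survivors = [route_and_merge(holders[i], holders[i + 1])
                     for i in range(0, len(holders) - 1, 2)]
        if len(holders) % 2 == 1:                   # an unpaired holder keeps its token
            survivors.append(holders[-1])
        holders = survivors
        repetitions += 1
    # each repetition roughly halves the number of tokens, so the loop runs
    # at most ceil(log2 n) times
    assert repetitions <= math.ceil(math.log2(max(2, len(token_holders))))
    return holders[0] if holders else None

# toy usage with a "merge" that simply keeps the second endpoint of each pair
print(aggregate_by_pairing(list(range(10)), lambda a, b: b))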

We believe that these results motivate interesting empirical directions. Previous sociology experiments have been performed in networks with decentralized coordination. Running an experiment similar to that of Bavelas and Leavitt using our optimal networks with a centralized authority would measure to what degree optimal network structure and a centralized authority expedite the time groups take to solve tasks.

Lastly, while we have thus far framed the applications of our work in terms of organizations of people, they apply just as well to distributed systems of computers. In particular, our results give algorithms for scheduling computation and communication on networks of computers. Furthermore, it is not hard to see that when t_c = 0, or when t_m = 0, our problem is trivially solvable in polynomial time. However, we show hardness for a case where both t_c and t_m are nonzero, which gives a formal sense in which computation and communication cannot be considered in isolation, as assumed in previous models of distributed computation.

2 Related Work

Though we have reviewed the primary related work in sociology in Section 1, problems related to ours have been studied in both theoretical and applied computer science and economics. We describe some of this related work here and discuss additional related work in Appendix B.

2.1 Related Theoretical Work

The previous theoretical work which most closely resembles our own is a line of work in centralized algorithms for scheduling information dissemination [60, 40, 35]. In this problem, an algorithm is given a graph and a model of distributed communication, and must output a schedule that instructs nodes how to communicate in order to spread some information. For instance, in one setting an algorithm must produce a schedule which, when run, broadcasts a message from one node to all other nodes in the graph. The fact that these problems consider spreading information is complementary to the way in which we consider consolidating it. However, we note that computation plays no role in these problems, in contrast to our Token Computation problem.

Of these prior models of communication, the model which is most similar to our own is the telephone broadcast model. In this model, in each round a node can “call” another node to transmit information or receive a call from a single neighbor. Previous results have established hardness of approximation for broadcasting in this model [21], as well as logarithmic and sublogarithmic approximation algorithms for broadcasting [22]. The two notable differences between this model and our own are (1) in our model nodes can receive information from multiple neighbors in a single round (see above for the justification of this assumption) and (2) again, in our model computation takes a non-negligible amount of time. Note, then, that even in the special case when t_c = 0, our model does not generalize the telephone broadcast model; as such we do not immediately inherit prior hardness results from the telephone broadcast problem. Relatedly, (1) and (2) preclude the possibility of an easy reduction from our problem to the telephone broadcast problem.

In the setting of radio networks, researchers have examined the Convergecast scheduling problem, which resembles the aggregation aspect of the Token Computation problem [23, 24]. However, this work does not consider the cost of computation. Additionally, researchers have studied algorithms for the general problem of multi-level aggregation, which shares many of the same applications as our work [9, 11, 52]. Again, these works do not consider the cost of computation, and, moreover, they only examine the online setting in which requests for information aggregation arrive over time.

2.2 Related Applied Work

There is a significant body of related work in resource-aware scheduling, sensor networks, and high-performance computing that considers both the relative costs of communication and computation, often bundled together in an energy cost. However, these studies have been largely empirical rather than theoretical, and much of the work considers distributed algorithms.

The most closely related applied work is in the high-performance computing space, where many researchers have studied the AllReduce problem. In this problem one must compute a function of the inputs of all nodes in a network and then distribute the result to all other nodes in the network [59, 27]. However, while there has been significant research on communication-efficient AllReduce algorithms, there has been relatively little work that explicitly considers the cost of computation, and even less work that considers the construction of optimal topologies for efficient distributed computation. Researchers have empirically evaluated the performance of different models of communication [51, 39] and have proven (trivial) lower bounds for communication without considering computation [53, 54]. Indeed, to the best of our knowledge, the extent to which these works consider computation is through an additive penalty that consists of a multiplicative factor times the size of all inputs at all nodes, as in [36]; crucially, this penalty is the same for any schedule and cannot be reduced via intelligent scheduling. Therefore, there do not seem to exist theoretical results for efficient algorithms that consider both the cost of communication and computation.

3 Definitions and Notation

Before moving on to our formal results, we introduce the definitions we use in the remainder of the paper.

Definition 1 (Idle).

A node is idle in round r if it is not computing or communicating in round r.

Definition 2 (Contains).

A token a contains token b if a = b or a was created by combining two tokens, one of which contains b. For shorthand we write b ∈ a to mean that a contains b.

Definition 3 (Singleton token).

A singleton token is a token that only contains itself; i.e., it is a token with which a node started. We let σ_v be the singleton token with which vertex v starts and refer to σ_v as v’s singleton token.

Definition 4 (Size of a token).

The size of a token is the number of singleton tokens it contains.

Definition 5 (Terminus).

Let σ_f be the last token of a valid schedule S; i.e., σ_f contains all singleton tokens. The terminus of S is the node at which σ_f is formed by a computation.

4 Optimal Algorithm for Complete Graphs

In this section we provide an optimal polynomial-time algorithm for the Token Computation problem on a complete graph. The schedule output by our algorithm ultimately only uses the edges of a particular tree, and so, although we reason about our algorithm in a fully connected graph, in reality our algorithm works equally well on said tree. This result, then, informs the design of an optimal organization structure.

4.1 Binary Trees (Warmup)

We build intuition by considering a natural solution to Token Computation on the complete graph: naive aggregation on a rooted binary tree. In this schedule, nodes do computations and communications in lock-step. In particular, consider the schedule which alternates the following two operations on a fixed binary tree until only a single node with tokens remains: (1) every non-root node that has a token sends its token to its parent in the binary tree; (2) every node with at least two tokens performs one computation. Once only one node has any tokens, that node performs computations until only a single token remains. After roughly log_2 n iterations of this schedule, the root of the binary tree is the only node with any tokens, and thereafter it only performs computation for the remainder of the schedule. However, this schedule does not efficiently pipeline communication and computation: after each iteration of (1) and (2), the root of the tree gains an extra token. Therefore, after i repetitions of this schedule, the root has roughly i tokens. In total, then, this schedule aggregates all tokens after essentially log_2 n · (t_m + 2t_c) rounds. See Figure 2.

Figure 2: The naive aggregation schedule on a binary tree (panels (a)–(d) show successive rounds). Tokens are represented by blue diamonds; a red arrow from one node to another means that the first sends its token to the second; and a double-ended blue arrow between two tokens means that they are combined at their node. Notice that the root gains an extra token every t_m + t_c rounds.
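
The build-up at the root is easy to see numerically. The sketch below simulates the naive lock-step schedule described above on a complete binary tree (tracking token counts only; t_c and t_m enter only through the length of each iteration) and prints how many tokens the root holds after each iteration. The function name and the choice of a complete binary tree are ours.

def naive_binary_tree_aggregation(depth, t_c, t_m):
    # Complete binary tree with nodes 1 .. 2^(depth+1) - 1 in heap order; node 1 is the root.
    n = 2 ** (depth + 1) - 1
    tokens = {v: 1 for v in range(1, n + 1)}
    rounds, iteration = 0, 0
    while sum(1 for v in tokens if tokens[v] > 0) > 1:
        iteration += 1
        # (1) every non-root node with a token sends one token to its parent (t_m rounds)
        sent = {v: 0 for v in tokens}
        for v in range(2, n + 1):
            if tokens[v] > 0:
                tokens[v] -= 1
                sent[v // 2] += 1
        for v in tokens:
            tokens[v] += sent[v]
        # (2) every node with at least two tokens performs one computation (t_c rounds)
        for v in tokens:
            if tokens[v] >= 2:
                tokens[v] -= 1
        rounds += t_m + t_c
        print("iteration", iteration, "root holds", tokens[1], "tokens")
    # the root then merges its remaining surplus, one computation at a time
    rounds += (tokens[1] - 1) * t_c
    return rounds

print(naive_binary_tree_aggregation(depth=4, t_c=1, t_m=1))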

For certain values of t_c and t_m, we can speed up naive aggregation on the binary tree by pipelining the computations of the root with the communications of other nodes in the network. In particular, consider the schedule for a fixed binary tree, for the case when t_c ≤ t_m, in which every non-root node does exactly what it does in the naive schedule but the root always computes. Since the root is always computing in this schedule, even as other nodes are sending, it does not build up a surplus as in the naive schedule. Thus, this schedule aggregates all tokens after essentially log_2 n · (t_m + t_c) rounds when t_c ≤ t_m.

However, as we will show, binary trees are not optimal even when they pipeline computation at the root and t_c ≤ t_m. In the remainder of this section, we generalize this pipelining intuition to arbitrary values of t_c and t_m and formalize how to show a schedule is optimal.

Figure 3: The aggregation schedule on a binary tree where the root pipelines its computations (panels (a)–(d) show successive rounds). Again, tokens are represented by blue diamonds; a red arrow from one node to another means that the first sends its token to the second; and a double-ended blue arrow between two tokens means that they are combined at their node. Notice that the root will never have more than 3 tokens when this schedule is run.

4.2 Complete Graphs

We now describe our optimal polynomial-time algorithm for complete graphs. This algorithm produces a schedule which greedily aggregates on a particular tree, T*. In order to describe this tree, we first introduce the tree T_k(t_c, t_m). This tree can be thought of as the largest tree for which greedy aggregation aggregates all tokens in k rounds given computation cost t_c and communication cost t_m. We will overload notation and let T_k denote T_k(t_c, t_m) for some fixed values of t_c and t_m. Let the root of a tree be the node in that tree with no parents. Also, given a tree T with root r and a tree T′, we define T ⊕ T′ as T but where r also has T′ as an additional subtree. We define T_k as follows: for k < t_c + t_m, T_k is a single node; for k ≥ t_c + t_m, T_k = T_{k−t_c} ⊕ T_{k−t_c−t_m}.

We give an example of T_k for particular values of t_c and t_m in Figure 4. Notice that for t_c = t_m = 1 the tree T_k is just a Fibonacci tree [34] of recursion depth k; T_k, then, can be thought of as a generalization of Fibonacci trees. (This echoes the fact that only graphs containing binomial heaps have ⌈log_2 n⌉-length telephone broadcast schedules [21].)

Figure 4: An example of the tree T_k for fixed values of t_c and t_m.

Since an input to the Token Computation problem consists of n nodes, and not a desired number of rounds, we define k*(n) to be the minimum value of k such that T_k has at least n nodes. We again overload notation and let k* denote k*(n). Formally, k* = min{k : |T_k| ≥ n}.

We let T* denote T_{k*}. For ease of presentation, throughout this section we assume that |T*| = n. (If |T*| > n, then we could always “hallucinate” extra nodes where appropriate.)
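
A quick way to get a feel for these trees is to compute their sizes directly from the recursive definition above. The helper names below (tree_size, k_star) are ours; note the Fibonacci numbers appearing when t_c = t_m = 1.

from functools import lru_cache

def tree_size(t_c, t_m):
    # |T_k| from the recursive definition: a single node for k < t_c + t_m,
    # and |T_k| = |T_{k - t_c}| + |T_{k - t_c - t_m}| otherwise.
    @lru_cache(maxsize=None)
    def size(k):
        if k < t_c + t_m:
            return 1
        return size(k - t_c) + size(k - t_c - t_m)
    return size

def k_star(n, t_c, t_m):
    # the minimum k such that T_k has at least n nodes
    size, k = tree_size(t_c, t_m), 0
    while size(k) < n:
        k += 1
    return k

fib_like = tree_size(1, 1)
print([fib_like(k) for k in range(10)])   # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
print(k_star(100, t_c=1, t_m=2))          # smallest k with |T_k| >= 100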

The schedule produced by our algorithm will simply perform greedy aggregation on T*. We now formally define greedy aggregation and establish its runtime on the tree T_k.

Definition 6 (Greedy Aggregation).

Given a tree rooted at a node r, let the greedy aggregation schedule be defined as follows. In the first round, every leaf sends its token to its parent. In subsequent rounds we do the following. If a node is not busy and has at least two tokens, it performs a computation. If a non-root node is not busy, has exactly one token, and has received a token from every child in previous rounds, it forwards its token to its parent.

Lemma 7.

Greedy aggregation on T_k terminates within k rounds.

Proof.

We will show by induction on k that greedy aggregation results in the root of T_k having a token of size |T_k| after k rounds. The base cases of k < t_c + t_m are trivial, as nothing needs to be combined. For the inductive step, applying the inductive hypothesis and using the recursive structure of our graph tells us that T_{k−t_c} has a token of size |T_{k−t_c}| at its root in k − t_c rounds, and the child T_{k−t_c−t_m} has a token of size |T_{k−t_c−t_m}| at its root in k − t_c − t_m rounds. Therefore, by the definition of greedy aggregation, the root of T_{k−t_c−t_m} sends its token of size |T_{k−t_c−t_m}| to the root of T_{k−t_c} at time k − t_c − t_m, which means the root of T_k can compute a token of size |T_{k−t_c}| + |T_{k−t_c−t_m}| = |T_k| by round k, as desired. ∎
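
Lemma 7 is easy to check mechanically. The sketch below builds T_k from the recursive definition above (as we have reconstructed it) and simulates greedy aggregation round by round, returning the round in which the root first holds the single full-size token; all names are illustrative.

from itertools import count

def build_tree(k, t_c, t_m, ids):
    # Returns (root, children) for T_k: a single node when k < t_c + t_m, and
    # otherwise T_{k - t_c} with T_{k - t_c - t_m} hung off its root.
    if k < t_c + t_m:
        v = next(ids)
        return v, {v: []}
    root, children = build_tree(k - t_c, t_c, t_m, ids)
    sub_root, sub_children = build_tree(k - t_c - t_m, t_c, t_m, ids)
    children.update(sub_children)
    children[root].append(sub_root)
    return root, children

def greedy_aggregation_rounds(k, t_c, t_m):
    root, children = build_tree(k, t_c, t_m, count())
    parent = {c: p for p, kids in children.items() for c in kids}
    n = len(children)
    tokens = {v: [1] for v in children}      # sizes of the tokens currently held at each node
    busy_until = {v: 0 for v in children}
    heard_from = {v: set() for v in children}
    arrivals = {}                            # round -> list of (node, token size, sending child or None)
    for r in range(1, 100 * (k + 2)):
        for node, size, child in arrivals.pop(r, []):
            tokens[node].append(size)
            if child is not None:
                heard_from[node].add(child)
        if tokens[root] == [n] and not arrivals and all(not tokens[v] for v in children if v != root):
            return r - 1                     # all tokens were aggregated by the end of round r - 1
        for v in children:
            if busy_until[v] >= r:
                continue
            if len(tokens[v]) >= 2:          # compute: merge two tokens over t_c rounds
                merged = tokens[v].pop() + tokens[v].pop()
                arrivals.setdefault(r + t_c, []).append((v, merged, None))
                busy_until[v] = r + t_c - 1
            elif v != root and len(tokens[v]) == 1 and heard_from[v] >= set(children[v]):
                size = tokens[v].pop()       # forward the lone token to the parent over t_m rounds
                arrivals.setdefault(r + t_m, []).append((parent[v], size, v))
                busy_until[v] = r + t_m - 1
    raise RuntimeError("greedy aggregation did not terminate")

for k in range(3, 12):
    assert greedy_aggregation_rounds(k, t_c=1, t_m=2) <= k   # within k rounds, as Lemma 7 states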

See Figure 5 for an illustration of the optimal schedule length as a function of the number of nodes for specific values of t_c and t_m. Furthermore, notice that T_k and T* are constructed in such a way that greedy aggregation pipelines computation and communication. We now state our optimal algorithm, which simply outputs the greedy aggregation schedule on T*, and give the main theorem of this section.

Figure 5: An illustration of the optimal schedule length for different sized trees, for two settings of t_c and t_m. The solid red line is the number of rounds taken by greedy aggregation with pipelining on a binary tree; the dashed blue line is the number of rounds taken by greedy aggregation on T*; and the dotted green line is a trivial lower bound. Note that though we illustrate the trivial lower bound, as we prove, the true lower bound is given by the number of rounds taken by greedy aggregation on T*.
Input: K_n, t_c, t_m
Output: A schedule for Token Computation on K_n with parameters t_c and t_m
Arbitrarily embed T* into K_n
return Greedy aggregation schedule on T* embedded in K_n
Algorithm 1 OptComplete(K_n, t_c, t_m)
Theorem 8.

Given a complete graph K_n on n vertices and any t_c and t_m, OptComplete optimally solves Token Computation on the Token Network (K_n, t_c, t_m) in polynomial time.

To show that Theorem 8 holds, we first note that OptComplete trivially runs in polynomial time (proof given in Appendix C). Therefore, we focus on showing that greedy aggregation on T* optimally solves the Token Computation problem on K_n. We start by defining C(k) as follows.

Definition 9.

Let C(k) be the number of nodes in the maximum-size complete graph on which one can solve the Token Computation problem in at most k total rounds. Equivalently, C(k) is the size of the largest token produced by a schedule of length k in a sufficiently large complete graph.

We show that C(k) = |T_k|, thereby showing that greedy aggregation on T* is optimal. Roughly, we do this by showing that C(k) and |T_k| satisfy the same recurrence.

First notice that the base case of C(0) = 1 holds trivially.

Lemma 10.

For all t_c and t_m we have that C(k) = 1 for k < t_c + t_m.

Proof.

That C(k) = 1 for k < t_c + t_m follows immediately from the fact that if k < t_c + t_m there are not enough rounds to send and combine a token, and so the Token Computation problem can only be solved on a graph with one node. ∎

We now show that for the recursive case k ≥ t_c + t_m, C(k) is always at least C(k − t_c) + C(k − t_c − t_m), which is the recurrence that defines |T_k|.

Lemma 11.

For all t_c and t_m we have that C(k) ≥ C(k − t_c) + C(k − t_c − t_m) for k ≥ t_c + t_m.

Proof.

Suppose k ≥ t_c + t_m. Let S_1 be the optimal schedule on the complete graph of C(k − t_c) nodes with terminus v_1 and let S_2 be the optimal schedule on the complete graph of size C(k − t_c − t_m) with corresponding terminus v_2. Now consider the following solution on the complete graph of C(k − t_c) + C(k − t_c − t_m) nodes. Run S_1 and S_2 in parallel on C(k − t_c) and C(k − t_c − t_m) nodes respectively, and once S_2 has completed, forward the token at v_2 to v_1 and, once it arrives, have v_1 perform one computation. This is a valid schedule which takes at most k rounds to solve Token Computation on C(k − t_c) + C(k − t_c − t_m) nodes. Thus, we have that C(k) ≥ C(k − t_c) + C(k − t_c − t_m) for k ≥ t_c + t_m. ∎

It remains to show that this bound on the recursion is tight. To do so, we case on whether t_c ≥ t_m or t_c < t_m.

4.2.1 The Case t_c ≥ t_m

In this case, where computation takes at least as much time as communication, it is straightforward to show that C(k) follows the same recurrence as |T_k|.

Lemma 12.

When t_c ≥ t_m, it holds that C(k) = C(k − t_c) + C(k − t_c − t_m) for k ≥ t_c + t_m.

Proof.

Suppose that t_c ≥ t_m and k ≥ t_c + t_m. By Lemma 11, it is sufficient to show that C(k) ≤ C(k − t_c) + C(k − t_c − t_m). Consider the optimal solution given k rounds. The last action performed by any node must have been a computation that combines two tokens, σ_1 and σ_2, at the terminus u because, in an optimal schedule, any further communication of the last token increases the length of the schedule. We now consider three cases.

  • In the first case, σ_1 and σ_2 were both created at u. Because σ_1 and σ_2 could not both have been created at time k − t_c, one of them must have been created at time k − 2t_c at the latest. This means that C(k) ≤ C(k − t_c) + C(k − 2t_c) ≤ C(k − t_c) + C(k − t_c − t_m), where the last step uses t_c ≥ t_m.

  • In the second case, exactly one of σ_1 or σ_2 (without loss of generality, σ_1) was created at u. This means that σ_2 must have been sent to u at latest at time k − t_c − t_m. This means that C(k) ≤ C(k − t_c) + C(k − t_c − t_m).

  • In the last case, neither σ_1 nor σ_2 was created at u. This means that both must have been sent to u at the latest at time k − t_c − t_m. This means that C(k) ≤ 2 · C(k − t_c − t_m) ≤ C(k − t_c) + C(k − t_c − t_m).

Thus, in all cases we have C(k) ≤ C(k − t_c) + C(k − t_c − t_m). ∎

4.2.2 The Case t_c < t_m

We now consider the case in which communication is more expensive than computation, t_c < t_m. One might hope that the same sort of simple case-wise analysis as used when t_c ≥ t_m would prove the desired result when t_c < t_m. However, it is not difficult to see that the preceding analysis breaks down for this case. Thus, we have to do significantly more work to show that C(k) ≤ C(k − t_c) + C(k − t_c − t_m). In particular, we have to establish some structure on the schedule which solves Token Computation on the complete graph of C(k) nodes in k rounds that will allow us to show that C(k) ≤ C(k − t_c) + C(k − t_c − t_m) when t_c < t_m. We do this by considering an arbitrary schedule and then performing successive modifications to this schedule that do not affect the validity or length of the schedule. In particular, we leverage the following insights—illustrated in Figure 6—to perform these successive modifications.

  • Combining insight: Suppose node u has two tokens, σ_1 and σ_2, in round r, and sends σ_1 to node v in round r. Node u can just as well aggregate σ_1 and σ_2, treat this aggregation as it treats σ_2 in the original schedule, and v can just pretend that it receives σ_1 in round r + t_m. That is, v can “hallucinate” that it has token σ_1. Note that this insight crucially leverages the fact that t_c < t_m, since otherwise the performed computation would not finish before round r + t_m.

  • Shortcutting insight: Suppose node u sends a token to node v in round r and node v sends a token to node w in a round in [r, r + t_m). Node u can “shortcut” node v and send to w directly, and v can just not send.

(a) Combining insight
(b) Shortcutting insight
Figure 6: An illustration of the shortcutting and combining insights. Here, tokens are denoted by blue diamonds, and hallucinated tokens are denoted by striped red diamonds. As before, a red arrow from one node to another means that the first sends its token to the second, and a double-ended blue arrow between two tokens means that they are combined at their node. Notice that which nodes have tokens, and when they have them, are the same under both modifications (though in the combining insight, a node is only hallucinating that it has a token).

Using these insights, we establish the following structure on a schedule which solves Token Computation on the complete graph of C(k) nodes in k rounds when t_c < t_m.

Lemma 13.

When t_c < t_m, for all k, there exists a schedule S* of length at most k that solves Token Computation on the complete graph of C(k) nodes such that the terminus of S*, call it u*, never communicates and every computation performed by u* involves a token that contains u*’s singleton token, σ_{u*}.

Proof.

Let S be some arbitrary schedule of length at most k which solves Token Computation on the complete graph of C(k) nodes; we know that such a schedule exists by definition of C(k). We first show how to modify S into another schedule which not only also solves Token Computation on this graph in at most k rounds, but which also satisfies the following four properties.

  1. A node v only sends a token at time t if at time t it has exactly one token, for every v and t;

  2. if v sends in round t then v does not receive any tokens in rounds t, …, t + t_m − 1, for every v and t;

  3. if v sends in round t then v is idle during rounds t + t_m, …, k, for every v and t;

  4. the terminus never communicates.

Achieving property (1).

Consider an optimal schedule S. We first show how to modify S to a schedule of at most k rounds that solves Token Computation on the complete graph of C(k) nodes and satisfies property (1). We use our combining insight here. Suppose that (1) does not hold for S; i.e., a node u sends a token σ to node v at time t and has at least one other token, say σ′, at time t. We modify S as follows. At time t, node u combines σ and σ′ into a token which it then performs operations on (i.e., computes and sends) as it does to σ′ in the original schedule. Moreover, node v pretends that it receives token σ at time t + t_m: any round in which a node is scheduled to compute on or communicate σ, it now simply does nothing; nodes that were meant to receive σ do the same. It is easy to see that by repeatedly applying the above procedure to every node whenever it sends while it has more than one token, we can reduce the number of tokens every node has whenever it sends to at most one. The total runtime of this schedule is no greater than that of S, namely at most k, because t_c < t_m. Moreover, it clearly still solves Token Computation on the complete graph of C(k) nodes. Call the resulting schedule S_1.

Achieving properties (1) and (2).

Now, we show how to modify S_1 into a schedule S_2 such that properties (1) and (2) both hold. Again, S_2 is of length at most k and solves Token Computation on the complete graph of C(k) nodes. We use our shortcutting insight here. Suppose that (2) does not hold for S_1; i.e., there exists a node v that receives a token from node u while sending another token to node w. We say that node u is bothering node v in round r if node u communicates a token to v in round r, and node v communicates a token to some node w in a round in [r, r + t_m). Say any such pair (u, v) is a bothersome pair. Furthermore, given a pair of nodes (u, v) and round r such that node u is bothering node v in round r, let the resolution of (u, v) in round r be the modification in which u sends its token directly to the node to which v sends its token. Note that each resolution does not increase the length of the optimal schedule because, by the definition of bothering, this will only serve as a shortcut; w will receive a token from u at the latest in the same round it would have received a token from v in the original schedule, and nodes v and w can pretend that they received tokens from u and v, respectively. However, it may now be the case that node u ends up bothering node w. We now show how to repeatedly apply resolutions to modify S_1 into a schedule in which no node bothers another in any round.

Consider the graph B_r where the vertices are the nodes of the network and there exists a directed edge (u, v) if node u is bothering node v in round r in schedule S_1. First, consider cycles in B_r. Note that, for any round r in which B_r has a cycle, we can create a schedule in which no nodes in any cycle in B_r send their tokens in round r; rather, they remain idle this round and pretend they received the token they would have received under S_1. Clearly, this does not increase the length of the optimal schedule and removes all cycles in round r. Furthermore, this does not violate property (1) because fewer nodes send tokens in round r, and no new nodes send tokens in round r.

Therefore, it suffices to consider an acyclic, directed graph B_r. Now, for each round r, we repeatedly apply resolutions until no node bothers any other node during that round. Note that for every round r, each node can only be bothering at most one other node because nodes can only send one message at a time. This fact, coupled with the fact that B_r is acyclic, means that B_r is a DAG where nodes have out-degree at most 1. It is not hard to see that repeatedly applying resolutions to a node which bothers another node will decrease the number of edges in B_r by 1. Furthermore, because there are n total nodes in the network, the number of resolutions needed for any node at time r is at most n.

Furthermore, repeatedly applying resolutions for each round r, in increasing order of r, results in a schedule with no bothersome pairs at any time that still satisfies property (1), and so the resulting schedule S_2 satisfies properties (1) and (2). Since each resolution did not increase the length of the schedule we also have that S_2 is of length at most k. Lastly, S_2 clearly still solves Token Computation on the complete graph of C(k) nodes.

Achieving properties (1) - (3).

Now, we show how to modify S_2 into a schedule S_3 which satisfies properties (1), (2), and (3). We use our shortcutting insight here as well as some new ideas. Given S_2, we show by induction over t from k down to 1, where k bounds the length of an optimal schedule, that we can modify S_2 such that if a node finishes communicating in round t (i.e., begins communicating in round t − t_m + 1), it remains idle in rounds t + 1, …, k in the modified optimal schedule. The base case of t = k is trivial: if a node finishes communicating in round k, it must remain idle in all subsequent rounds because the entire schedule is of length at most k.

Suppose there exists a node v that finishes communicating in round t but is not idle in some round after t in S_2; furthermore, let round t′ be the first round after t in which node v is not idle. By property (1), node v must have sent its only token away by round t, and therefore node v must have received at least one other token after round t but before round t′. We now case on the type of action node v performs in round t′.

  • If node v communicates in round t′, it must send a token it received after time t but before round t′. Furthermore, as this is the first round after t in which v is not idle, v cannot have performed any computation on this token, and by the inductive hypothesis, v must remain idle from round t′ + t_m on. Therefore, v receives a token from some node u and then forwards this token to some node w at time t′. One can modify this schedule such that u sends directly to w instead of sending to v.

  • If node v computes in round t′, consider the actions of node v after round t′. Either v eventually performs a communication after some number of computations, after which point it is idle by the inductive hypothesis, or v only ever performs computations from time t′ on.

    In round t′, v must combine two tokens it received after time t, by property (1). Note that two distinct nodes must have sent the two tokens to v because, by the inductive hypothesis, each node that sends after round t remains idle for the remainder of the schedule. Therefore, the nodes u_1 and u_2 that sent the two tokens to v must have been active at times t_1 ≤ t_2, after which they remain idle for the rest of the schedule. Call the tuple (u_1, u_2, v) a switchable triple. We can modify the schedule to make v idle at round t′ by picking the node u_1 that first sent to v and treating it as v while the original v stays idle for the remainder of the schedule. In particular, we can modify S_2 such that, without loss of generality, u_2 sends its token to u_1 and u_1 performs the computation that v originally performed in S_2. Note that this now ensures that v will be idle in round t′ and does not increase the length of the schedule, as u_1 takes on the role of v. Furthermore, node u_1’s new actions do not violate the inductive hypothesis: either u_1 only ever performs computations after time t_1, or it eventually communicates and thereafter remains idle.

    We can repeat this process for all nodes that are not idle after performing a communication in order to produce a schedule in which property (3) is satisfied.

First, notice that these modifications do not change the length of the schedule: in the first case w can still pretend that it receives the token at the same time as before even though it now receives it in an earlier round, and in the second case u_1 takes on the role of v at the expense of no additional round overhead. Also, it is easy to see that the modified schedule still solves Token Computation on the complete graph of C(k) nodes.

We now argue that the above modifications preserve (1) and (2). First, notice that the modifications we do for the first case do not change when any nodes send and so (1) is satisfied. In the second case, because we switch the roles of nodes, we may potentially add a send for a node. However, note that we only require a node to perform an additional send when it is part of a switchable triple (u_1, u_2, v), and u_1 takes on the role of v in the original schedule from time t_1 on. However, because S_2 satisfies (1), u_1 was about to send its only token away and therefore only had one token upon receipt of the token from u_2. Therefore, because u_1 performs the actions that v performs in S_2 from time t_1 on, and because at the time of the computation both u_1 (in the modified schedule) and v (in the original schedule) have exactly two tokens, (1) is still satisfied by S_3. Next, we argue that (3) is a strictly stronger condition than (2). In particular, we show that since S_3 satisfies (3) it also satisfies (2). Suppose for the sake of contradiction that S_3 satisfies (3) but not (2). Since (2) is not satisfied, there must exist some node v that sends in some round t to, say, node w, but receives a token while sending. By (3) it then follows that v is idle in all rounds after its send completes. However, v also holds the token it received. Therefore, in the last round, two distinct nodes have tokens, one of which is idle in all remaining rounds; this contradicts the fact that S_3 solves Token Computation. Thus, S_3 must also satisfy (2).

Achieving properties (1) - (4).

It is straightforward to see that S_3 also satisfies property (4). Indeed, by property (3), if the terminus ever sends in round t, then the terminus must remain idle during rounds t + t_m, …, k, meaning it must be idle in the final round of the schedule, which contradicts the fact that in this round the terminus performs a computation. Therefore, S_3 satisfies properties (1) - (4), and we know that there exists an optimal schedule in which the terminus is always either computing or idle.

Achieving the final property.

We now argue that we can modify S_3 into another optimal schedule S* such that every computation done at the terminus u* involves a token that contains the original singleton token σ_{u*} that started at the terminus. Suppose that in S_3, u* performs a computation that does not involve σ_{u*}. Take the first instance in which u* combines tokens σ_1 and σ_2, neither of which contains σ_{u*}, in round t. Because this is the first computation that does not involve a token containing σ_{u*}, both σ_1 and σ_2 must have been communicated to the terminus by round t at the latest.

Consider the earliest time t″ at which u* computes a token σ″ that contains all of σ_1, σ_2, and σ_{u*}. We now show how to modify S_3 into S* such that u* computes a token at time t″ that contains all of σ_1, σ_2, and σ_{u*} and is at least the size of σ″, by having nodes swap roles in the schedule between times t and t″. Furthermore, because the rest of the schedule remains the same after time t″, this implies that S* uses at most as many rounds as S_3, and therefore that S* uses at most k rounds.

The modification is as follows. At time t, instead of having u* combine tokens σ_1 and σ_2, have u* combine one of them (without loss of generality, σ_1) with the token containing σ_{u*}. Now, continue executing S_3 but substitute σ_2 for the token containing σ_{u*} from round t on; this is a valid substitution because u* possesses σ_2 at time t. In round t″, u* computes a token that contains all of σ_1, σ_2, and σ_{u*}; the difference from the previous schedule is that the new schedule has one fewer violation of the desired property, i.e., one fewer round in which u* computes on two tokens, neither of which contains σ_{u*}.

We repeat this process for every step in which the terminus does not compute on a token containing σ_{u*}, resulting in a schedule S* in which the terminus is always combining a communicated token with a token containing its own singleton token. Note that these modifications do not affect properties (1) - (4) because they do not affect the sending actions of any node, and therefore S* still satisfies properties (1) - (4). It easily follows, then, that S* solves Token Computation on the complete graph of C(k) nodes in at most k rounds. Thus, S* is a schedule of length at most k that solves Token Computation on the complete graph of C(k) nodes in which every computation the terminus does is on two tokens, one of which contains σ_{u*}, and, by (4), the terminus never communicates. ∎

Having shown that the schedule corresponding to C(k) can be modified to satisfy a nice structure when t_c < t_m, we can conclude our recursive bound on C(k).

Lemma 14.

When t_c < t_m, it holds that C(k) = C(k − t_c) + C(k − t_c − t_m) for k ≥ t_c + t_m.

Proof.

Suppose k ≥ t_c + t_m. We begin by applying Lemma 13 to show that C(k) ≤ C(k − t_c) + C(k − t_c − t_m). Let u* be the terminus of the schedule S* using at most k rounds as given in Lemma 13. By Lemma 13, in every round of S* the terminus u* is either idle, computing on a token that contains σ_{u*}, or busy because it is performing such a computation. Notice that it follows that every token produced by a computation at u* contains σ_{u*}.

Now consider the last token produced by our schedule. Call this token σ_f. By definition of the terminus, σ_f must be produced by a computation performed by u*, combining two tokens, say σ_1 and σ_2, beginning in round k − t_c at the latest. Since every computation that u* does combines two tokens, one of which contains σ_{u*}, without loss of generality let σ_1 contain σ_{u*}.

We now bound the size of σ_1 and σ_2. Since σ_1 exists in round k − t_c, we know that it is of size at most C(k − t_c). Now consider σ_2. Since every token produced by a computation at u* contains σ_{u*}, and σ_2 does not contain σ_{u*}, it follows that σ_2 must either be a singleton token that originates at a node other than u*, or was produced by a computation at another node. Either way, σ_2 must have been sent to u*, who then performed a computation on σ_2 beginning in round k − t_c at the latest. It follows that σ_2 exists in round k − t_c − t_m, and so is of size no more than C(k − t_c − t_m).

Since the size of σ_f just is the size of σ_1 plus the size of σ_2, we conclude that σ_f is of size no more than C(k − t_c) + C(k − t_c − t_m). Since S* solves Token Computation on a complete graph of size C(k), we have that σ_f is of size C(k), and so we conclude that C(k) ≤ C(k − t_c) + C(k − t_c − t_m) for k ≥ t_c + t_m when t_c < t_m.

Lastly, since C(k) ≥ C(k − t_c) + C(k − t_c − t_m) for k ≥ t_c + t_m by Lemma 11, we conclude that C(k) = C(k − t_c) + C(k − t_c − t_m) for k ≥ t_c + t_m when t_c < t_m. ∎

4.2.3 Putting Both Cases Together

The preceding lemmas have established that the recursive structure of C(k) is the same as that of |T_k| in both cases. We now prove Theorem 8.

Proof of Theorem 8.

On a high level, we argue that the greedy aggregation schedule on T* combines all n tokens in k* rounds and is therefore optimal. Combining Lemma 10, Lemma 12, and Lemma 14 we have the following recurrence on C(k) for all t_c and t_m: C(k) = 1 for k < t_c + t_m, and C(k) = C(k − t_c) + C(k − t_c − t_m) for k ≥ t_c + t_m.

However, notice that this is just the recurrence which defines |T_k|. Thus, for all k we have that C(k) = |T_k|, and by Lemma 7, the greedy aggregation schedule on T_k terminates within k rounds.

Thus, the greedy aggregation schedule on T_k solves Token Computation on the complete graph with |T_k| nodes in k rounds, and is therefore an optimal solution for that graph. Since k* is the smallest k for which T_k has at least n nodes, greedy aggregation on T* is optimal for K_n, and so OptComplete optimally solves Token Computation on K_n. Finally, as stated above, a polynomial runtime is trivial. ∎

5 Hardness of the Token Computation Problem

We now consider the Token Computation problem on arbitrary graphs. In this section we demonstrate that not only is Token Computation NP-complete, but no polynomial-time algorithm approximates Token Computation to a factor better than 1.5.

5.1 NP-Completeness

As a warmup for our hardness of approximation result, and to introduce some of the techniques we use for our hardness of approximation, we begin with our proof that the decision version of Token Computation is NP-complete. An instance of the decision version of Token Computation is given by an instance of Token Computation and a candidate number of rounds ℓ. An algorithm must decide if there exists a schedule that solves Token Computation in at most ℓ rounds.

We reduce from k-dominating set.

Definition 15 (k-dominating set).

An instance of k-dominating set consists of a graph G = (V, E) and an integer k; the decision problem is to decide whether there exists D ⊆ V with |D| ≤ k such that for all v ∈ V ∖ D there exists d ∈ D such that {v, d} ∈ E.

Recall that k-dominating set is NP-complete.

Lemma 16 (Garey and Johnson [25]).

k-dominating set is NP-complete.

Given an instance (G, k) of k-dominating set, we would like to transform G into another graph G′ in polynomial time such that G has a k-dominating set iff there exists a Token Computation schedule of some particular length for G′ for some values of t_c and t_m.

We begin by describing the intuition behind the transformation we use, R. Any schedule on graph G in which every node only performs a single communication and which aggregates all tokens down to at most k tokens corresponds to a k-dominating set of G; in particular, those nodes that do computation form a k-dominating set of G. If we had a sufficiently short schedule which aggregated all tokens down to k tokens, then we could recover a k-dominating set from our schedule. However, our problem aggregates down to only a single token, not k tokens. Our crucial insight here is that by structuring our graph such that a single node, u, must perform a great deal of computation, u must be the terminus of any short schedule. The fact that u must be the terminus and do a great deal of computation, in turn, forces any short schedule to aggregate all tokens in G down to at most k tokens at some point, giving us a k-dominating set.

Formally, R is as follows. R takes as input a graph G = (V, E) and a value k and outputs G′. G′ has G as a sub-graph and in addition has an auxiliary node u, where u is connected to all v ∈ V; u is also connected to Δ(G) dangling nodes d_1, …, d_{Δ(G)}, where Δ(G) is the max degree of G, along with a special dangling node d*. Thus, G′ has |V| + Δ(G) + 2 nodes. See Figure 7.
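
As an illustration, the sketch below builds G′ from an adjacency-list representation of G, following the construction as reconstructed above (one auxiliary node adjacent to all of V, Δ(G) dangling nodes, and one special dangling node, all adjacent only to the auxiliary node). The function name and node labels are ours, and the exact number of dangling nodes is our reading of the construction.

def construct_G_prime(adj):
    # adj: dict mapping each vertex of G to the set of its neighbors in G
    delta = max(len(neighbors) for neighbors in adj.values())
    g_prime = {v: set(neighbors) for v, neighbors in adj.items()}
    aux = "u"
    g_prime[aux] = set(adj)              # the auxiliary node is adjacent to every vertex of G
    for v in adj:
        g_prime[v].add(aux)
    dangling = [("d", i) for i in range(delta)] + [("d", "*")]   # Delta(G) dangling nodes plus the special one
    for d in dangling:
        g_prime[d] = {aux}               # dangling nodes are adjacent only to the auxiliary node
        g_prime[aux].add(d)
    return g_prime

# Example: G is a triangle; G' has 3 + Delta(G) + 2 = 7 nodes.
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(len(construct_G_prime(triangle)))   # 7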

Figure 7: An example of R applied to a given graph G. Nodes and edges added by R are dashed and in blue.

We now prove that the optimal Token Computation schedule on G′ can be upper bounded as a function of the size of the minimum dominating set of G.

Lemma 17.

Let D be a minimum dominating set of G and let k = |D|. The optimal Token Computation schedule on G′ = R(G, k) is of length at most 2t_m + (Δ(G) + k + 1) · t_c.

Proof.

We know that there is a dominating set D of G of size k. Let f map any given v ∈ V to a node in D that dominates it. We argue that Token Computation requires at most 2t_m + (Δ(G) + k + 1) · t_c rounds on G′. Roughly, we solve Token Computation by first aggregating at D and at u, and then aggregating everything at u.

In more detail, in stage 1 of the schedule, every v ∈ V ∖ D sends its token to f(v), every dangling node sends its token to u, and d* sends its token to u. This takes t_m rounds. In stage 2, nodes do the following in parallel. Each w ∈ D computes until it has a single token and sends the result to u; this takes at most Δ(G) · t_c + t_m rounds, since w holds at most Δ(G) + 1 tokens. Meanwhile, u combines the tokens it received from the dangling nodes, together with its own token, into a single token; this takes at most (Δ(G) + 1) · t_c rounds. Thus, stage 2, when done in parallel, takes at most (Δ(G) + 1) · t_c + t_m rounds. At this point u has at most k + 1 tokens and no other node in G′ has a token. In stage 3, u computes until it has only a single token, which takes at most k · t_c rounds.

In total the number of rounds used by this schedule is at most t_m + (Δ(G) + 1) · t_c + t_m + k · t_c = 2t_m + (Δ(G) + k + 1) · t_c. Thus, the total number of rounds used by the optimal Token Computation schedule on G′ is at most this quantity. ∎

Next, we show that any valid Token Computation schedule on G′ that has at most two serialized sends corresponds to a dominating set of size bounded by the length of the schedule.

Lemma 18.

Given and a Token Computation schedule for where , ,