I Introduction
In the future, robots with heterogeneous onboard capabilities will team up to complete complex tasks such as:

Image Covering: How can swarms of tiny to large robots collaboratively map (image-cover) an unknown environment, such as an earthquake-hit building? [mcguire2019minimal]

Area Monitoring: How can swarms of ground and air robots collaboratively monitor an environment to detect rare events, such as fires in a forest? [kumar2004robot]

Target Tracking: How can a large-scale distributed network of air and space vehicles coordinate its motions to track multiple evading targets over a large area? [corah2021scalable]
The robots' heterogeneous capabilities (speed, size, onboard cameras, etc.) offer tremendous advantages in all the aforementioned tasks: for example, in the image covering scenario, the tiny robots offer the advantage of agility, being able to navigate narrow spaces in earthquake-hit buildings, while the larger robots offer the advantage of reliability, being able to carry larger and higher-resolution cameras for longer.
TABLE I: Resource-Aware Distributed Decision-Making: A Paradigm

Computations per Agent  | Proportional to the size of the agent's action set
Communication Rounds    | Proportional to the number of agents
Memory per Message      | Length of a real number or an action
Communication Topology  | Directed and even disconnected
Suboptimality Guarantee | Captures the trade-off of centralization vs. decentralization
But heterogeneity in capabilities also implies heterogeneity in onboard resources: for example, tiny robots have limited computation, communication, and memory resources [mcguire2019minimal]. Thus, mere distributed communication among the robots is insufficient for the success of their tasks. Instead, a holistic, resource-aware distributed collaboration is needed that respects each robot's onboard capacity for computation, communication, and memory storage.
Current algorithms, such as consensus-based algorithms, are insufficient to meet the need for resource-awareness: they achieve distributed communication but at the expense of communication, computation, and memory overloads. Hence, robots with limited onboard resources, such as the tiny (27 grams) Crazyflies, cannot afford to use these algorithms [mcguire2019minimal]. Also, real-time performance is compromised by the latency caused by the computation and communication overloads.
In this paper, we shift focus from the current distributed-communication-only optimization paradigm to the resource-aware distributed optimization paradigm in Table I.
We focus on scenarios where the robots' tasks are captured by an objective function that is monotone and "doubly" submodular [crama1989characterization, foldes2005submodularity], a diminishing-returns property. Such functions appear in tasks of image covering [corah2018distributed] and vehicle deployment [downie2022submodular], among others. The aforementioned tasks then require the robots to distributively solve
\[
\max_{a_i \in \mathcal{V}_i,\ \forall i \in \mathcal{N}}\ f\big(\{a_i\}_{i \in \mathcal{N}}\big), \tag{1}
\]
where $\mathcal{N}$ is the set of agents/robots, $a_i$ is agent $i$'s action, $\mathcal{V}_i$ is agent $i$'s set of available actions (e.g., motion primitives), and $f$ is the objective function (e.g., the total area covered by the agents' cameras at the current time step). In online settings, the robots may need to solve a new version of eq. (1) at each time step (e.g., in a receding-horizon fashion).
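To make the setting concrete, here is a minimal sketch of eq. (1) solved by brute force on a tiny hypothetical instance (the footprints, `action_sets`, and `f` below are illustrative, not from the paper):

```python
from itertools import product

# Hypothetical tiny instance of problem (1): each agent i picks one action
# from its action set V_i; f counts the distinct grid points covered.
action_sets = {
    1: [frozenset({(0, 0), (0, 1)}), frozenset({(1, 0), (1, 1)})],
    2: [frozenset({(0, 1), (0, 2)}), frozenset({(2, 0), (2, 1)})],
}

def f(actions):
    """Monotone submodular objective: number of distinct points covered."""
    covered = set()
    for footprint in actions:
        covered |= footprint
    return len(covered)

# Brute force over the cross-product of the action sets (only viable when
# the instance is tiny; the paper's point is exactly to avoid this).
best = max(product(*action_sets.values()), key=f)
print(f(best))  # 4: the two agents pick non-overlapping footprints
```

The cross-product grows exponentially with the number of agents, which is why the paper pursues distributed near-optimal selection instead.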
Related Work. Problem 1 is NP-hard, being combinatorial, even when $f$ is monotone and submodular [Feige:1998:TLN:285055.285059]. It has been actively researched over the last 40 years in the optimization, control, and operations research literature [fisher1978analysis, crama1989characterization, Feige:1998:TLN:285055.285059, foldes2005submodularity, krause08efficient, calinescu2011maximizing, wang2015accelerated, atanasov2015decentralized, mirzasoleiman2016distributed, ramalingam2017efficient, roberts2017submodular, sviridenko2017optimal, gharesifard2017distributed, grimsman2018impact, corah2018distributed, corah2019distributed, du2020jacobi, robey2021optimal, rezazadeh2021distributed, mokhtari2018decentralized, konda2021execution, downie2022submodular, chen2022higher, corah2021scalable]. Although near-optimal approximation algorithms have been achieved, they focus on distributed communication only, instead of a holistically resource-aware optimization per Table I. For example, the sequential greedy [fisher1978analysis] and its variants [sviridenko2017optimal, gharesifard2017distributed, grimsman2018impact, corah2018distributed, konda2021execution] require increasing memory storage (the agents act sequentially, and each agent passes the information about all previous agents to the next). Further, the consensus-based distributed algorithms [mokhtari2018decentralized, robey2021optimal, rezazadeh2021distributed], although they achieve distributed communication, require excessive computations and communication rounds per agent. No current algorithm has suboptimality guarantees for even disconnected communication networks, nor captures the trade-off of centralization, for global near-optimality, vs. decentralization, for near-minimal onboard resource requirements.
Contributions. We shift focus to holistically resource-aware distributed optimization. Along with novel definitions and theory, we provide the first algorithm balancing the trade-off of centralization, for global near-optimality, vs. decentralization, for near-minimal onboard resource requirements.
1. Resource-Aware Optimization Paradigm. Our first contribution is the definition of a resource-aware distributed algorithm. The definition is summarized in Table I.
2. Resource-Aware Algorithm. We introduce RAG, the first resource-aware distributed algorithm for eq. (1) (Section III). According to Table I: (i) RAG has near-minimal computation, communication, and memory requirements (Section IV). (ii) Each agent can afford to run RAG by adjusting the size of its neighborhood, even if that means selecting actions in complete isolation. (iii) RAG enjoys a suboptimality bound that captures the trade-off of centralization, for global near-optimality, vs. decentralization, for near-minimal onboard resource requirements (Section V). All agents can independently decide their contribution to the approximation performance by choosing the size of their neighborhood, accounting at the same time for their onboard resources. RAG is the first algorithm to quantify this trade-off.
3. Centralization of Information. We introduce the notion of Centralization of Information among non-Neighbors (COIN) to quantify RAG's suboptimality. COIN captures the information overlap between an agent and its non-neighbors.
4. Evaluation in Robotic Applications. We evaluate RAG in simulated scenarios of image covering with mobile robots (Section VI). We first compare RAG with the state of the art. Then, we evaluate the trade-off of centralization vs. decentralization with respect to RAG's performance. To enable the comparison with the state of the art, we assume undirected and connected networks. RAG demonstrates superior or comparable performance, requiring, e.g., (i) orders of magnitude less computation time than the state-of-the-art consensus-based algorithm in [robey2021optimal], (ii) an order of magnitude fewer communication rounds than the consensus algorithm in [robey2021optimal], and comparable communication rounds to the greedy in [konda2021execution], and (iii) the least memory (e.g., less than the consensus algorithm in [robey2021optimal]). Still, RAG has the best approximation performance.
II Distributed Submodular Maximization:
A Multi-Robot Decision-Making Perspective
We define the Distributed Submodular Maximization problem of this paper (Problem 1). We use the following notation:

$\mathcal{G} = (\mathcal{N}, \mathcal{E})$ is a communication network with nodes $\mathcal{N}$ and edges $\mathcal{E}$. Nodes represent agents (e.g., robots), and edges represent communication channels.

$\mathcal{N}_i^{-} \triangleq \{j \in \mathcal{N} : (j, i) \in \mathcal{E}\}$, for all $i \in \mathcal{N}$; i.e., $\mathcal{N}_i^{-}$ is the set of in-neighbors of $i$.

$\mathcal{N}_i^{+} \triangleq \{j \in \mathcal{N} : (i, j) \in \mathcal{E}\}$, for all $i \in \mathcal{N}$, given graph $\mathcal{G}$; i.e., $\mathcal{N}_i^{+}$ is the set of out-neighbors of $i$. If $\mathcal{G}$ is undirected, then $\mathcal{N}_i^{-} = \mathcal{N}_i^{+}$, for all $i \in \mathcal{N}$.

$\operatorname{dist}(i, j)$ is a distance metric between an agent $i$ and an agent $j$; e.g., $\operatorname{dist}(i, j)$ may be the Euclidean distance between $i$ and $j$, when $i$ and $j$ are robots in the space.

$\mathcal{V}_{\mathcal{A}} \triangleq \prod_{i \in \mathcal{A}} \mathcal{V}_i$, given a collection of sets $\{\mathcal{V}_i\}_{i \in \mathcal{A}}$; i.e., $\mathcal{V}_{\mathcal{A}}$ is the cross-product of the sets in the collection;

$f(s \mid \mathcal{A}) \triangleq f(\mathcal{A} \cup \{s\}) - f(\mathcal{A})$, given a set function $f$, a set $\mathcal{A}$, and an element $s$; i.e., $f(s \mid \mathcal{A})$ is the marginal gain in $f$ for adding $s$ to $\mathcal{A}$.
The following preliminary framework is also required.
Agents. The terms "agent" and "robot" are used interchangeably in this paper. $\mathcal{N}$ is the set of all robots. The robots cooperate towards a task, such as image covering. $\mathcal{V}_i$ is a discrete set of actions available to each robot $i \in \mathcal{N}$. For example, in image covering, $\mathcal{V}_i$ may be the set of motion primitives that robot $i$ can execute to move in the environment.
Communication Network. The communication network among the robots may be directed and even disconnected. If $(j, i) \in \mathcal{E}$, then a communication channel exists from robot $j$ to robot $i$: $i$ can receive, store, and process information from $j$. The set of all robots that can send information to $i$ is $\mathcal{N}_i^{-}$, i.e., $i$'s in-neighborhood. The set of all robots that can receive information from $i$ is $\mathcal{N}_i^{+}$, i.e., $i$'s out-neighborhood.
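The directed communication model above can be sketched as follows (a hypothetical helper for illustration; `neighborhoods`, `agents`, and `edges` are our names, not the paper's):

```python
# A sketch of the directed communication model: an edge (j, i) means robot i
# can receive from robot j. In- and out-neighborhoods follow directly.

def neighborhoods(agents, edges):
    in_n = {i: {j for (j, k) in edges if k == i} for i in agents}
    out_n = {i: {k for (j, k) in edges if j == i} for i in agents}
    return in_n, out_n

# Directed and even disconnected network: agent 3 communicates with no one.
agents = [1, 2, 3]
edges = {(1, 2), (2, 1)}
in_n, out_n = neighborhoods(agents, edges)
print(in_n[2], out_n[1], in_n[3])  # {1} {2} set()
```

Note that a disconnected agent simply has empty neighborhoods; the algorithm of Section III still runs for it, in complete isolation.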
Remark 1 (Resource-aware in-neighborhood selection based on information overlap).
In this paper, $\mathcal{N}_i^{-}$ has been implicitly decided by robot $i$ given $i$'s onboard resources and based on a distance metric capturing the information overlap between $i$ and any robot $j$ within communication range. Particularly, $\operatorname{dist}(i, j)$ is considered to increase as the information overlap drops. E.g., in image covering, since each camera's field of view is finite, the distance metric can be proportional to the physical distance of the robots. When $\operatorname{dist}(i, j)$ is sufficiently large, such that the fields of view of robots $i$ and $j$ stop overlapping, the robots know that their information is non-overlapping, and may stop exchanging information to conserve onboard resources.
Remark 2 (Resource-aware out-neighborhood selection).
In this paper, $\mathcal{N}_i^{+}$ has been implicitly decided by robot $i$ given robot $i$'s onboard resources.
Objective Function. The robots coordinate their actions to maximize an objective function. In tasks such as image covering, target tracking, and persistent monitoring, typical objective functions are covering functions [corah2018distributed, robey2021optimal, downie2022submodular, corah2021scalable]. Intuitively, these functions capture how much area/information is covered given the actions of all robots. They satisfy the properties defined below (Definition 1 and Definition 2).
Definition 1 (Normalized and Non-Decreasing Submodular Set Function [fisher1978analysis]).
A set function $f : 2^{\mathcal{V}} \to \mathbb{R}$ is normalized and non-decreasing submodular if and only if

$f(\emptyset) = 0$;

$f(\mathcal{A}) \leq f(\mathcal{B})$, for any $\mathcal{A} \subseteq \mathcal{B} \subseteq \mathcal{V}$;

$f(s \mid \mathcal{A}) \geq f(s \mid \mathcal{B})$, for any $\mathcal{A} \subseteq \mathcal{B} \subseteq \mathcal{V}$ and $s \in \mathcal{V} \setminus \mathcal{B}$.
Normalization holds without loss of generality. In contrast, monotonicity and submodularity are intrinsic to the function. Intuitively, if $f$ captures the area covered by a set of activated cameras, then the more cameras are activated, the more area is covered; this is the non-decreasing property. Also, the marginal gain in covered area from activating a camera drops when more cameras are already activated; this is the submodularity property.
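The three conditions of Definition 1 can be checked exhaustively on a small covering function (a hypothetical instance for illustration; `coverage` and `f` are our names):

```python
from itertools import combinations

# Hypothetical ground set of "cameras", each covering some points; f(A) is
# the number of points covered by the cameras in A (a covering function).
coverage = {"a": {1, 2}, "b": {2, 3}, "c": {3, 4, 5}}
ground = set(coverage)

def f(A):
    return len(set().union(*(coverage[s] for s in A)))

def subsets(S):
    return [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

# Normalization: f(empty set) = 0.
assert f(frozenset()) == 0
# Non-decreasing: A subset of B implies f(A) <= f(B).
# Submodular: the marginal gain f(s | A) = f(A ∪ {s}) - f(A) shrinks as A grows.
for A in subsets(ground):
    for B in subsets(ground):
        if A <= B:
            assert f(A) <= f(B)
            for s in ground - B:
                gain_A = f(A | {s}) - f(A)
                gain_B = f(B | {s}) - f(B)
                assert gain_A >= gain_B
print("covering function passes all three checks")
```

The exhaustive loops are only feasible because the ground set is tiny; they merely illustrate the definitions.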
Definition 2 (2nd-Order Submodular Set Function [crama1989characterization, foldes2005submodularity]).
$f : 2^{\mathcal{V}} \to \mathbb{R}$ is 2nd-order submodular if and only if
\[
f(s \mid \mathcal{A}) - f(s \mid \mathcal{A} \cup \{t\}) \;\geq\; f(s \mid \mathcal{A} \cup \mathcal{B}) - f(s \mid \mathcal{A} \cup \mathcal{B} \cup \{t\}), \tag{2}
\]
for any disjoint $\mathcal{A}, \mathcal{B} \subseteq \mathcal{V}$ ($\mathcal{A} \cap \mathcal{B} = \emptyset$) and $s, t \in \mathcal{V} \setminus (\mathcal{A} \cup \mathcal{B})$ with $s \neq t$.
The 2nd-order submodularity is another property intrinsic to the function. Intuitively, if $f$ captures the area covered by a set of cameras, then the marginal gain of the marginal gains drops when more cameras are already activated.
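Under the same kind of covering function, 2nd-order submodularity can also be checked exhaustively; the inequality below is one standard formalization of the "marginal gain of the marginal gains" property, written here as an assumption rather than a quote of the paper's displayed condition:

```python
from itertools import combinations

# Same hypothetical covering function as before: f(A) = points covered by A.
coverage = {"a": {1, 2}, "b": {2, 3}, "c": {3, 4, 5}}
ground = set(coverage)

def f(A):
    return len(set().union(*(coverage[s] for s in A)))

def gain(s, A):  # marginal gain f(s | A)
    return f(A | {s}) - f(A)

def subsets(S):
    return [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

# One formalization of 2nd-order submodularity: the "marginal of the
# marginal" diminishes, i.e., for A subset of B and s, t outside B,
#   f(s|A) - f(s|A ∪ {t})  >=  f(s|B) - f(s|B ∪ {t}).
for A in subsets(ground):
    for B in subsets(ground):
        if A <= B:
            for s in ground - B:
                for t in ground - B - {s}:
                    drop_A = gain(s, A) - gain(s, A | {t})
                    drop_B = gain(s, B) - gain(s, B | {t})
                    assert drop_A >= drop_B
print("covering function is 2nd-order submodular on this instance")
```

For covering functions the check always passes: both sides equal the part of the overlap of $s$'s and $t$'s footprints not already covered, which only shrinks as the base set grows.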
Problem Definition. In this paper, we focus on:
Problem 1 (Distributed Submodular Maximization).
Each robot $i \in \mathcal{N}$ independently selects an action $a_i \in \mathcal{V}_i$, upon receiving information from and about its in-neighbors only, such that the robots' actions solve
\[
\max_{a_i \in \mathcal{V}_i,\ \forall i \in \mathcal{N}}\ f\big(\{a_i\}_{i \in \mathcal{N}}\big), \tag{3}
\]
where $f$ is a normalized, non-decreasing submodular, and 2nd-order submodular set function.
Problem 1 requires each robot $i$ to independently choose an action $a_i$ to maximize the global objective $f$, based only on local communication and information. Particularly, if after a number of communication rounds (i.e., iterations of information exchange) robot $i$ decides to select an action $a_i$, then $a_i$ is the output of a decision algorithm, to be found in this paper, that uses only the information robot $i$ has received from and about its in-neighbors.
Assumption 1.
Each robot $i$ may receive the action of an in-neighbor $j \in \mathcal{N}_i^{-}$ as information, and, then, robot $i$ can locally compute the marginal gain of any of its own candidate actions $a_i \in \mathcal{V}_i$.
The assumption is common across the distributed optimization algorithms in the literature [gharesifard2017distributed, grimsman2018impact, mokhtari2018decentralized, robey2021optimal, downie2022submodular, du2020jacobi, rezazadeh2021distributed, corah2018distributed, konda2021execution, liu2021distributed, corah2019distributed, corah2021scalable]. In practice, robot $j$ needs to transmit to robot $i$, along with $j$'s action, all other information required so that $i$ can locally compute the marginal gains of any of its own candidate actions.
In Section III we introduce RAG, an algorithm that runs locally on each robot $i$, playing the role of this decision algorithm.
Assumption 2.
The communication network is fixed during each execution of the algorithm.
That is, we assume no communication failures from the moment the algorithm starts until its end. Still, $\mathcal{G}$ can be dynamic across consecutive time steps, when Problem 1 is applied in a receding-horizon fashion (Remark 3).
Remark 3 (Receding-Horizon Control, and the Need for Minimal Communication and Computation).
Image covering, target tracking, and persistent monitoring are dynamic tasks that require the robots to react across consecutive time steps. Then, Problem 1 must be solved in a receding-horizon fashion [camacho2013model]. This becomes possible only if the time interval between any two consecutive time steps can contain the number of communication rounds required to solve Problem 1. Thus, for faster reaction, the number of communication rounds must be smaller, and so must the computation effort per round. Otherwise, real-time performance will be compromised by latency.
III Resource-Aware distributed Greedy (RAG) Algorithm
We present the Resource-Aware distributed Greedy (RAG) algorithm, the first resource-aware algorithm for Problem 1.
RAG's pseudocode is given in Algorithm 1. The algorithm requires only local information exchange, among only neighboring robots and about only neighboring robots. Each agent $i$ starts by selecting its action with the largest marginal gain (lines 3–4). Then, instead of exchanging information with all other agents, $i$ exchanges information only with its in-neighbors and out-neighbors (line 5). Afterwards, $i$ checks whether it achieves the largest marginal gain among its in-neighbors only, i.e., whether $i$ is the "best agent" among its in-neighbors, instead of comparing with all other agents in the network (line 6). If yes, then $i$ finalizes the selected action (line 7) and lets only its out-neighbors know its selection (line 8). Otherwise, $i$ receives the action(s) of the agent(s) that selected action(s) at this iteration (line 10), and continues to the next iteration (lines 9–15). Notably, multiple agents may select actions at the same iteration, or none at all.
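The walkthrough above can be sketched as a centralized simulation of RAG's per-agent rule (a sketch, not the authors' implementation of Algorithm 1; tie-breaking by agent index is our assumption, added to guarantee progress):

```python
# Centralized simulation of RAG's per-agent rule: in each iteration, an
# undecided agent computes its best marginal gain given the actions already
# selected by its decided in-neighbors, and commits if that gain beats the
# gains of all its undecided in-neighbors.

def rag(agents, in_neighbors, action_sets, f):
    chosen = {}                                   # agent -> committed action
    while len(chosen) < len(agents):
        undecided = [i for i in agents if i not in chosen]
        gains = {}
        for i in undecided:                       # lines 3-5: gain exchange
            context = [chosen[j] for j in in_neighbors[i] if j in chosen]
            base = f(context)
            best = max(action_sets[i], key=lambda a: f(context + [a]) - base)
            gains[i] = (f(context + [best]) - base, best)
        for i in undecided:                       # lines 6-8: local decision
            rivals = [j for j in in_neighbors[i] if j in gains]
            if all(gains[i][0] > gains[j][0]
                   or (gains[i][0] == gains[j][0] and i < j) for j in rivals):
                chosen[i] = gains[i][1]           # commit; inform out-neighbors
    return chosen

# Toy run: coverage objective over point "footprints" of candidate actions.
footprint = {"A": {1, 2}, "B": {2, 3}, "C": {4, 5}}
f = lambda acts: len(set().union(set(), *(footprint[a] for a in acts)))
print(rag([1, 2], {1: [2], 2: [1]}, {1: ["A", "C"], 2: ["B"]}, f))
```

Each while-iteration corresponds to two communication rounds (gains, then actions), and an agent with no undecided in-neighbors always commits, so the loop terminates.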
IV Computation, Communication, and Memory Requirements of RAG
TABLE II: Comparison of RAG with the state of the art.

Continuous Domain
Method | Du et al. [du2020jacobi] | Robey et al. [robey2021optimal] | Rezazadeh and Kia [rezazadeh2021distributed] | RAG (this paper)
Computations per Agent |
Communication Rounds |
Memory per Message |
Communication Topology | connected, undirected | connected, undirected | connected, undirected | even disconnected, directed
Suboptimality Guarantee |

Discrete Domain
Method | Corah and Michael [corah2018distributed] | Liu et al. [liu2021distributed] | Konda et al. [konda2021execution] | RAG (this paper)
Computations per Agent |
Communication Rounds |
Memory per Message |
Communication Topology | fully connected | connected, directed | connected, undirected | even disconnected, directed
Suboptimality Guarantee |
We present RAG's computation, communication, and memory requirements. The requirements are in accordance with the paradigm in Table I, and are summarized in Table II.
We use the additional notation:

$\ell_{\mathbb{R}}$ and $\ell_{\mathcal{V}}$ are the lengths of a message containing a real number or an action, respectively.

$d(\mathcal{G})$ is the diameter of a network $\mathcal{G}$, i.e., the longest shortest path between any pair of nodes in $\mathcal{G}$ [mesbahiBook];

$|\mathcal{A}|$ is the cardinality of a discrete set $\mathcal{A}$.
Proposition 1 (Computation Requirements).
Each agent $i$ performs at most $|\mathcal{V}_i|\,(|\mathcal{N}_i^{-}| + 1)$ function evaluations during RAG.
Each agent $i$ needs to re-evaluate the marginal gains of all its candidate actions $a_i \in \mathcal{V}_i$ every time an in-neighbor selects an action. Therefore, $i$ performs $|\mathcal{V}_i|\,(|\mathcal{N}_i^{-}| + 1)$ evaluations in the worst case (and $|\mathcal{V}_i|$ evaluations in the best case).
Proposition 2 (Communication Requirements).
RAG's number of communication rounds is at most $2\,|\mathcal{N}|$.
Each iteration of RAG requires two communication rounds: one for the marginal gains, and one for the actions. Also, RAG requires $|\mathcal{N}|$ iterations in the worst case (when only one agent selects an action at each iteration). All in all, RAG requires at most $2\,|\mathcal{N}|$ communication rounds.
Proposition 3 (Memory Requirements).
RAG's largest inter-agent message length is $\max(\ell_{\mathbb{R}}, \ell_{\mathcal{V}})$, i.e., the larger of the length of a real number and the length of an action.
Any inter-agent message in RAG contains either a marginal gain (a real number) or an action. Thus, a message's length is either $\ell_{\mathbb{R}}$ or $\ell_{\mathcal{V}}$. The total onboard memory requirement of each agent $i$ is proportional to the number of messages received per round, i.e., to $|\mathcal{N}_i^{-}|$.
Remark 5 (Near-Minimal Resource Requirements).
RAG has near-minimal computation, communication, and memory requirements, in accordance with Table I. (i) Computations per Agent: the number of computations per agent is indeed proportional to the size of the agent's action set and, in particular, decreases as the in-neighborhood $\mathcal{N}_i^{-}$ shrinks (Proposition 1). The number of computations would have been minimal if it were instead $|\mathcal{V}_i|$, since that is the cost for agent $i$ to compute its best action in $\mathcal{V}_i$. (ii) Communication Rounds: the number of communication rounds is indeed proportional to the number of agents and, in particular, is at most $2\,|\mathcal{N}|$. The number is near-minimal since, in the worst case of a line communication network, a number of rounds proportional to $|\mathcal{N}|$ is required for information to travel between the two most distant agents. (iii) Memory per Message: the length per message is indeed equal to the length of a real number or of an action.
Besides, each agent can afford to run RAG, in accordance with Table I, by adjusting the size of its in- and out-neighborhoods $\mathcal{N}_i^{-}$ and $\mathcal{N}_i^{+}$. E.g., by decreasing the size of $\mathcal{N}_i^{-}$: (i) agent $i$'s computation effort decreases, since the effort is proportional to $|\mathcal{N}_i^{-}|$ (Proposition 1); (ii) the per-round communication effort decreases, since the total number of communication rounds remains at most $2\,|\mathcal{N}|$ (Proposition 2) but the number of received messages per round decreases (RAG's lines 5–6); and (iii) the onboard memory-storage requirements decrease, since the inter-agent message length remains constant (Proposition 3) but the number of received messages per round decreases (RAG's lines 5–6).
Remark 6 (vs. State-of-the-Art Resource Requirements).
RAG has comparable or superior computation, communication, and memory requirements vs. the state of the art. The comparison is summarized in Table II.
Context and notation in Table II. We divide the state of the art into algorithms that optimize (i) indirectly in the continuous domain, employing the multilinear extension [calinescu2011maximizing], a continuous representation of the set function [robey2021optimal, du2020jacobi, rezazadeh2021distributed], and (ii) directly in the discrete domain [corah2018distributed, konda2021execution, liu2021distributed]. The continuous-domain algorithms employ consensus-based techniques [robey2021optimal, rezazadeh2021distributed] or algorithmic game theory [du2020jacobi], and require the computation of the multilinear extension's gradient. The computation is achieved via sampling; the sample size takes the values reported in [du2020jacobi], [robey2021optimal], and [rezazadeh2021distributed] in numerical evaluations with 10 or fewer agents. The computations per agent and communication rounds reported for [du2020jacobi] are based on the numerical evaluations therein, since a theoretical quantification is missing in [du2020jacobi] and it seems nontrivial to derive one as a function of the problem parameters. Further, all continuous-domain algorithms' resource requirements depend on additional problem-dependent parameters (such as Lipschitz constants, the diameter of the domain set of the multilinear extension, and a bound on the gradient of the multilinear extension), which we keep implicit here via big-O notation. The sample size also determines the approximation performance of the respective algorithms.
Computations. Konda et al. [konda2021execution] rank best, requiring the fewest computations per agent. RAG ranks 2nd best. The continuous-domain algorithms require a higher number of computations, growing with the sample size.
Communication. For undirected networks, Konda et al. [konda2021execution] and RAG rank best, requiring the same number of communication rounds in the worst case; but RAG is also valid for directed networks. For appropriate parameter choices, Corah and Michael [corah2019distributed] may require fewer communication rounds, but a preprocessing step with a fully connected network is required in [corah2019distributed]. The remaining algorithms require a significantly higher number of communication rounds.
Memory. RAG ranks best when an action's message length is at most that of a real number; otherwise, it ranks after Robey et al. [robey2021optimal] and Rezazadeh and Kia [rezazadeh2021distributed], which then rank best (tie).
V Approximation Guarantee of RAG: Centralization vs. Decentralization Perspective
We present RAG's suboptimality bound (Theorem 1). In accordance with Table I, the bound quantifies the trade-off of centralization, for global near-optimality, vs. decentralization, for near-minimal onboard resource requirements.
We introduce the notion of Centralization of Information among non-Neighbors to quantify the bound.
We also use the notation:

$\bar{\mathcal{N}}_i \triangleq \mathcal{N} \setminus (\mathcal{N}_i^{-} \cup \{i\})$ is the set of agents beyond the in-neighborhood of $i$ (see Fig. 1), i.e., $i$'s non-neighbors;

$a_{\mathcal{A}} \triangleq \{a_i\}_{i \in \mathcal{A}}$ is the tuple of the agents' actions in a set $\mathcal{A} \subseteq \mathcal{N}$;

$a^{\star}$ is an optimal solution to Problem 1, i.e., a maximizer of eq. (3);

$a^{\text{RAG}}$ is RAG's output, collecting all agents' selected actions;

$\operatorname{dist}(i, \bar{\mathcal{N}}_i) \triangleq \min_{j \in \bar{\mathcal{N}}_i} \operatorname{dist}(i, j)$, for any agent $i$ and distance metric $\operatorname{dist}$; i.e., the distance between an agent $i$ and its non-neighborhood is the minimum distance between $i$ and a non-neighbor $j \in \bar{\mathcal{N}}_i$.
V-A Centralization of Information: A Novel Quantification
We use the notion of Centralization Of Information among non-Neighbors (COIN) to bound RAG's suboptimality.
Definition 3 (Centralization Of Information among non-Neighbors (COIN)).
Consider a communication network $\mathcal{G}$, an agent $i \in \mathcal{N}$, and a set function $f$. Then, agent $i$'s Centralization Of Information among non-Neighbors is defined as
\[
\operatorname{COIN}_i \;\triangleq\; \max_{a_i \in \mathcal{V}_i,\; a_{\bar{\mathcal{N}}_i} \in \mathcal{V}_{\bar{\mathcal{N}}_i}} \Big[\, f(\{a_i\}) + f(a_{\bar{\mathcal{N}}_i}) - f(\{a_i\} \cup a_{\bar{\mathcal{N}}_i}) \,\Big]. \tag{4}
\]
If $f$ were an entropy, then $\operatorname{COIN}_i$ would look like the mutual information between the information collected by agent $i$'s action $a_i$ and the information collected by the actions $a_{\bar{\mathcal{N}}_i}$ of agent $i$'s non-neighbors. $\operatorname{COIN}_i$, in particular, is computed over the best such actions (hence the maximization in eq. (4)).
$\operatorname{COIN}_i$ captures the centralization of information within the context of a multi-agent network for information acquisition: $\operatorname{COIN}_i = 0$ if and only if $i$'s information is independent from its non-neighbors', i.e., if and only if the information is decentralized between agent $i$ and its non-neighbors $\bar{\mathcal{N}}_i$.
Computing $\operatorname{COIN}_i$ can be NP-hard [Feige:1998:TLN:285055.285059], or even impossible, since agent $i$ may be unaware of its non-neighbors' actions; but upper-bounding it can be easy. In this paper, we focus on upper bounds that depend on $\operatorname{dist}(i, \bar{\mathcal{N}}_i)$, quantifying how fast the information overlap between $i$ and its non-neighbors decreases the further away they are from $i$; i.e., how quickly information becomes decentralized beyond agent $i$'s neighborhood, the further away the non-neighborhood is. For an image covering task, we obtain such a bound next.
Remark 7 (Distance-Based Upper Bounds for COIN: Image Covering Example).
Consider a toy image covering task where each agent carries a camera with a circular field of view of radius $r_s$ (Fig. 2(a)). Consider that each agent $i$ has fixed its in-neighborhood, i.e., its communication range $c$ is just below the corresponding $\operatorname{dist}(i, \bar{\mathcal{N}}_i)$, that is, $c = \operatorname{dist}(i, \bar{\mathcal{N}}_i) - \epsilon$,
for some, possibly arbitrarily small, $\epsilon > 0$; e.g., in Fig. 2(b) the non-neighbor on the left of agent $i$ is just outside the boundary of $i$'s communication range. Then, $\operatorname{COIN}_i$ is equal to the overlap of the fields of view of agent $i$ and its non-neighbors, assuming, for simplicity, that the overlap remains the same across two consecutive moves. Since the number of agent $i$'s non-neighbors may be unknown, an upper bound to $\operatorname{COIN}_i$ is the gray ring area in Fig. 2(b), obtained by assuming infinitely many non-neighbors around agent $i$, located just outside the boundary of $i$'s communication range. That is,
\[
\operatorname{COIN}_i \;\leq\; \pi r_s^2 - \pi \big(\max(c - r_s,\, 0)\big)^2. \tag{5}
\]
The upper bound in eq. (5), as a function of the communication range $c$, is shown in Fig. 2(c). As expected, it tends to zero for increasing $c$, equivalently, for increasing $\operatorname{dist}(i, \bar{\mathcal{N}}_i)$. Particularly, when $c \geq 2\,r_s$, the fields of view of robot $i$ and of each of the non-neighboring robots are non-overlapping, and, thus, $\operatorname{COIN}_i = 0$.
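The ring-area bound can be sketched as follows; the closed form below is our reconstruction from the geometric description (disk fields of view of radius `r_s`, infinitely many non-neighbors just outside communication range `c`), not a quote of the paper's displayed formula:

```python
import math

def coin_upper_bound(c, r_s):
    """Ring-area bound on COIN_i (a reconstruction from the description):
    non-neighbors sit just outside the communication range c, each with a
    disk field of view of radius r_s, so their fields of view can overlap
    agent i's disk only in the ring beyond radius c - r_s."""
    inner = max(c - r_s, 0.0)
    if inner >= r_s:                 # c >= 2 r_s: fields of view are disjoint
        return 0.0
    return math.pi * (r_s**2 - inner**2)

# The bound decreases monotonically in c and vanishes once c >= 2 r_s.
for c in (0.5, 1.0, 1.5, 2.0, 3.0):
    print(c, round(coin_upper_bound(c, r_s=1.0), 3))
```

As in Fig. 2(c), increasing the communication range drives the bound, and hence the suboptimality penalty, toward zero.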
In Section V-B, we show that RAG enables each agent $i$ to choose its in-neighborhood (equivalently, its non-neighborhood) to balance both its onboard resources and its contribution to a near-optimal approximation performance, as the latter is captured by $\operatorname{COIN}_i$.
Remark 8 (vs. Pairwise Redundancy [corah2018distributed]).
$\operatorname{COIN}_i$'s definition generalizes the notion of pairwise redundancy between two agents $i$ and $j$, introduced by Corah and Michael [corah2018distributed]. The notion was introduced in the context of parallelizing the execution of the sequential greedy [fisher1978analysis], by ignoring the edges between pairs of agents in an a priori fully connected network. A comparison of the resulting parallelized greedy in [corah2018distributed] with RAG is found in Table II. Besides, the pairwise redundancy captures the mutual information between the two agents $i$ and $j$ only; whereas $\operatorname{COIN}_i$ captures the mutual information between an agent and all its non-neighbors, directly capturing the decentralization of information across the network.
V-B Approximation Guarantee of RAG
Theorem 1 (Approximation Performance of RAG).
RAG selects actions $a^{\text{RAG}}$ such that each agent decides using information from its in-neighbors only, and
\[
f\big(a^{\text{RAG}}\big) \;\geq\; \frac{1}{2}\Big(f\big(a^{\star}\big) - \sum_{i \in \mathcal{N}} \operatorname{COIN}_i\Big). \tag{6}
\]
Remark 9 (Centralization vs. Decentralization).
RAG's suboptimality bound in Theorem 1 captures the trade-off of centralization, for global near-optimality, vs. decentralization, for near-minimal onboard resource requirements:

Near-optimality requires a large $\mathcal{N}_i^{-}$, i.e., centralization: the larger $\mathcal{N}_i^{-}$ is, the larger $\operatorname{dist}(i, \bar{\mathcal{N}}_i)$ is. Thus, the smaller is the information overlap between $i$ and its non-neighbors $\bar{\mathcal{N}}_i$, i.e., the smaller $\operatorname{COIN}_i$ is, resulting in increased near-optimality for RAG.

Near-minimal onboard resource requirements require instead a small $\mathcal{N}_i^{-}$, i.e., decentralization: the smaller $\mathcal{N}_i^{-}$ is, the less the computation per agent (Proposition 1), as well as the less the per-communication-round communication and memory-storage effort, since the number of received messages per round decreases.
RAG covers the spectrum from fully centralized (all agents communicate with all others) to fully decentralized (all agents communicate with none). RAG enjoys the suboptimality guarantee in Theorem 1 throughout the spectrum, capturing the trade-off of centralization vs. decentralization. When fully centralized, RAG matches the suboptimality bound of the classical greedy [fisher1978analysis], since then $\operatorname{COIN}_i = 0$ for all $i \in \mathcal{N}$, i.e., $\sum_{i \in \mathcal{N}} \operatorname{COIN}_i = 0$. For a centralized algorithm, the best possible bound is that in [sviridenko2017optimal].
Remark 10 (vs. State-of-the-Art Approximation Guarantees).
RAG is the first algorithm to quantify the trade-off of centralization vs. decentralization, per Table I. RAG enables each agent to independently decide the size of its in-neighborhood to balance near-optimality, which requires larger in-neighborhoods, i.e., centralization, and onboard resources, which require smaller in-neighborhoods, i.e., decentralization. Instead, the state of the art tunes near-optimality via a globally known hyperparameter, without accounting for the balance of smaller vs. larger neighborhoods at the agent level, independently for each agent.
Moreover, the continuous methods [robey2021optimal, du2020jacobi, rezazadeh2021distributed] achieve their suboptimality bounds in probability, and the achieved value is in expectation. Instead, RAG's bound is deterministic, involving the exact value of the selected actions.
VI Evaluation in Image Covering with Robots
We evaluate RAG in simulated scenarios of image covering with mobile robots (Fig. 2). We first compare RAG with the state of the art (Section VI-A; see Table III). Then, we evaluate the trade-off of centralization vs. decentralization with respect to RAG's performance (Section VI-B; see Fig. 3).
We performed all simulations in Python 3.9.7, on a MacBook Pro with the Apple M1 Max chip and 32 GB RAM.
Our code will become available via a link herein.
VI-A RAG vs. the State of the Art
We compare RAG with the state of the art in simulated scenarios of image covering (Fig. 2). To enable the comparison, we set up undirected and connected communication networks (RAG is valid for directed and even disconnected networks, but the state-of-the-art methods are not). RAG demonstrates superior or comparable performance (Table III).
Simulation Scenario. We consider 50 instances of the setup in Fig. 2(a). Without loss of generality, each agent has a fixed communication range, a fixed sensing radius, and the action set {"forward", "backward", "left", "right"}, where each action moves the agent by 1 point. The agents seek to maximize the number of covered points.
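The scenario can be sketched as follows (illustrative sensing radius and grid, since the scenario's exact values are not reproduced in this text):

```python
# Sketch of the image-covering setup: agents on an integer grid choose one
# motion primitive, then cover all integer points within sensing radius r_s.
# The radius and positions below are illustrative, not the paper's values.

MOVES = {"forward": (0, 1), "backward": (0, -1), "left": (-1, 0), "right": (1, 0)}

def covered_points(positions, moves, r_s=2.0):
    covered = set()
    for (x, y), m in zip(positions, moves):
        dx, dy = MOVES[m]
        cx, cy = x + dx, y + dy
        rng = range(-int(r_s), int(r_s) + 1)
        covered |= {(cx + u, cy + v) for u in rng for v in rng
                    if u * u + v * v <= r_s * r_s}
    return len(covered)

# Two nearby agents: moving apart covers more points than moving together.
apart = covered_points([(0, 0), (3, 0)], ["left", "right"])
together = covered_points([(0, 0), (3, 0)], ["right", "left"])
print(apart, together)  # 26 18
```

The gap between the two joint actions is exactly the overlap that submodularity penalizes, which is what the compared algorithms try to avoid.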
Compared Algorithms. We compare RAG with the methods by Robey et al. [robey2021optimal] and Konda et al. [konda2021execution] since, among the state of the art, they achieve top performance for at least one resource requirement and/or for their suboptimality guarantee (Table II); the method by Corah and Michael [corah2018distributed] may achieve fewer communication rounds, yet it requires a fully connected communication network. To ensure the method in [robey2021optimal] achieves a sufficient number of covered points, we set the sample size as in [robey2021optimal], and the number of communication rounds to 100.
Results. The results are reported in Table III, and mirror the theoretical comparison in Table II. RAG demonstrates superior or comparable performance, requiring (i) orders of magnitude less computation time than the state-of-the-art consensus-based algorithm in [robey2021optimal], and comparable computation time to the state-of-the-art greedy algorithm in [konda2021execution], (ii) an order of magnitude fewer communication rounds than the consensus algorithm in [robey2021optimal], and comparable communication rounds to the greedy algorithm in [konda2021execution], and (iii) the least memory (e.g., less than the consensus algorithm in [robey2021optimal]). Still, RAG achieves the best approximation performance.
Method                     | Robey et al. [robey2021optimal] | Konda et al. [konda2021execution] | RAG (this paper)
Total Computation Time (s) | 1434.11 | 0.02    | 0.05
Communication Rounds       | 100     | 13.44   | 7.76
Peak Total Memory (MB)     | 290.43  | 181.07  | 167.07
Total Covered Points       | 1773.54 | 1745.18 | 1816.4
VI-B The Trade-Off of Centralization vs. Decentralization
We demonstrate the trade-off of centralization vs. decentralization with respect to RAG's performance.
Simulation Scenario. We consider the same setup as in Section VI-A, yet with an increasing communication range. That is, the communication network starts fully disconnected (fully decentralized) and becomes fully connected (fully centralized). The communication range is assumed the same for all robots, for simplicity.
Results. The results are reported in Fig. 3. When a higher communication range results in more in-neighbors, then (i) each agent executes more iterations of RAG before selecting an action, resulting in increased (i.a) computation time and (i.b) communication rounds (each iteration of RAG corresponds to two communication rounds; see RAG's lines 5 and 11), and (ii) each agent needs more onboard memory for information storage and processing. In contrast, with more in-neighbors, each agent coordinates more centrally and, thus, the total number of covered points increases (for each agent $i$, $\operatorname{COIN}_i$ becomes smaller till it vanishes, and the theoretical suboptimality bound increases to that of the classical greedy). All in all, Fig. 3 captures the trade-off of centralization, for global near-optimality, vs. decentralization, for near-minimal onboard resource requirements. For an increasing communication range, the required onboard resources increase, but so do the total covered points. To balance the trade-off, the communication range may be set to an intermediate value.
VII Conclusion
Summary. We made a first step toward enabling resource-aware distributed intelligence among heterogeneous agents. We are motivated by complex tasks taking the form of Problem 1, such as image covering. We introduced a resource-aware optimization paradigm (Table I) and presented RAG, the first resource-aware algorithm. RAG is the first algorithm to quantify the trade-off of centralization, for global near-optimality, vs. decentralization, for near-minimal onboard resource requirements. To capture the trade-off, we introduced the notion of Centralization of Information among non-Neighbors (COIN). We validated RAG in simulated scenarios of image covering, demonstrating its superiority.
Future Work. RAG assumes synchronous communication. Besides, the communication topology has to be fixed and failure-free across communication rounds (Assumption 2). Our future work will extend RAG beyond these limitations. We will also consider multi-hop communication and, correspondingly, quantify the trade-off of near-optimality vs. resource-awareness based on the depth of the multi-hop communication. We will further extend our results to any submodular function (instead of only "doubly" submodular ones).
Further, we will leverage our prior work [tzoumas2022robust] to extend RAG to the attack-robust setting, against robot removals.
Acknowledgements
We thank Robey et al. [robey2021optimal] and Konda et al. [konda2021execution] for sharing with us the code of their numerical evaluations. Additionally, we thank Hongyu Zhou of the University of Michigan for providing comments on the paper.