Resource-Aware Distributed Submodular Maximization: A Paradigm for Multi-Robot Decision-Making

04/15/2022
by   Zirui Xu, et al.
University of Michigan

We introduce the first algorithm for distributed decision-making that provably balances the trade-off of centralization, for global near-optimality, vs. decentralization, for near-minimal on-board computation, communication, and memory resources. We are motivated by the future of autonomy that involves heterogeneous robots collaborating in complex tasks, such as image covering, target tracking, and area monitoring. Current algorithms, such as consensus algorithms, are insufficient to fulfill this future: they achieve distributed communication only, at the expense of high communication, computation, and memory overloads. A shift to resource-aware algorithms is needed: algorithms that can account for each robot's on-board resources, independently. We provide the first resource-aware algorithm, Resource-Aware distributed Greedy (RAG). We focus on maximization problems involving monotone and "doubly" submodular functions, a diminishing-returns property. RAG has near-minimal on-board resource requirements. Each agent can afford to run the algorithm by adjusting the size of its neighborhood, even if that means selecting actions in complete isolation. RAG has provable approximation performance, where each agent can independently determine its contribution. All in all, RAG is the first algorithm to quantify the trade-off of centralization, for global near-optimality, vs. decentralization, for near-minimal on-board resource requirements. To capture the trade-off, we introduce the notion of Centralization Of Information among non-Neighbors (COIN). We validate RAG in simulated scenarios of image covering with mobile robots.


I Introduction

In the future, robots with heterogeneous on-board capabilities will team up to complete complex tasks such as:

  • Image Covering: How can swarms of tiny to large robots collaboratively map (image cover) an unknown environment, such as an earthquake-hit building? [mcguire2019minimal]

  • Area Monitoring: How can swarms of ground and air robots collaboratively monitor an environment to detect rare events, such as fires in a forest? [kumar2004robot]

  • Target Tracking: How can a large-scale distributed network of air and space vehicles coordinate its motions to track multiple evading targets over a large area? [corah2021scalable]

The robots’ heterogeneous capabilities (speed, size, on-board cameras, etc.) offer tremendous advantages in all aforementioned tasks: for example, in the image covering scenario, the tiny robots offer the advantage of agility, being able to navigate narrow spaces in earthquake-hit buildings; and the larger robots offer the advantages of reliability, being able to carry larger and higher-resolution cameras, for longer.

Resource-Aware Distributed Decision-Making: A Paradigm
Computations per Agent: proportional to the size of the agent's action set
Communication Rounds: proportional to the number of agents
Memory per Message: length of a real number or an action
Communication Topology: directed and even disconnected
Suboptimality Guarantee: gracefully balances the trade-off of centralization vs. decentralization
TABLE I: Resource-Aware Distributed Optimization Paradigm. We define resource-awareness in distributed optimization across 5 performance pillars. The pillars define an algorithm that (i, per the first 3 pillars) has minimal computation, communication, and memory-storage requirements, and is affordable by any agent once the agent chooses a small enough neighborhood to match its resources; (ii, per the 4th pillar) is applicable even to disconnected communication topologies, which is required when agents lack, or are diminished of, on-board resources for information exchange; and (iii, per the 5th pillar) has a suboptimality guarantee that balances the trade-off of centralization, for global near-optimality, vs. decentralization, for near-minimal on-board resource requirements.

But heterogeneity in capabilities also implies heterogeneity in on-board resources: for example, tiny robots have limited computation, communication, and memory resources [mcguire2019minimal]. Thus, mere distributed communication among the robots is insufficient for the success of their tasks. Instead, a holistic, resource-aware distributed collaboration is needed, one that respects each robot's on-board capacity for computation, communication, and memory storage.

Current algorithms, such as consensus-based algorithms, are insufficient to meet the need for resource-awareness: they achieve distributed communication but at the expense of communication, computation, and memory overloads. Hence, robots with limited on-board resources, such as the tiny (27 grams) Crazyflies, cannot afford to use the algorithms [mcguire2019minimal]. Also, real-time performance is compromised by the latency caused by the computation and communication overloads.

In this paper, we shift focus from the current distributed-communication only optimization paradigm to the resource-aware distributed optimization paradigm in Table I.

We focus on scenarios where the robots' tasks are captured by an objective function that is monotone and "doubly" submodular [crama1989characterization, foldes2005submodularity], a diminishing-returns property. Such functions appear in tasks of image covering [corah2018distributed] and vehicle deployment [downie2022submodular], among others. Then, the aforementioned tasks require the robots to distributively solve

$\max_{a_i \in \mathcal{V}_i,\; i \in \mathcal{N}} \; f\left(\{a_i\}_{i \in \mathcal{N}}\right),$   (1)

where $\mathcal{N}$ is the set of agents/robots, $a_i$ is agent $i$'s action, $\mathcal{V}_i$ is agent $i$'s set of available actions (e.g., motion primitives), and $f$ is the objective function (e.g., the total area covered by the agents' cameras at the current time step). In online settings, the robots may need to solve a new version of eq. 1 at each time step (e.g., in a receding-horizon fashion).
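To make eq. 1 concrete, the following is a minimal Python sketch of a coverage instance of the problem; the map cells, action names, and the two-robot setup are illustrative assumptions, not taken from the paper:

```python
# Illustrative (hypothetical) instance of eq. 1: two robots, each choosing one
# action; the objective f counts the map cells covered by the chosen actions.
from itertools import product

COVERAGE = {                        # action -> set of covered map cells
    "a1": {1, 2, 3}, "a2": {3, 4},  # robot 1's action set V_1
    "b1": {3, 4, 5}, "b2": {6},     # robot 2's action set V_2
}
ACTION_SETS = [["a1", "a2"], ["b1", "b2"]]

def f(actions):
    """Objective of eq. 1: number of cells covered by the union of actions."""
    return len(set().union(*[COVERAGE[a] for a in actions]))

def marginal_gain(a, chosen):
    """f(a | A) = f(A U {a}) - f(A): the gain of adding action a to set A."""
    return f(list(chosen) + [a]) - f(chosen)

# Exhaustive solution of eq. 1 for this tiny instance (NP-hard in general).
best = max(product(*ACTION_SETS), key=f)
print(best, f(best))  # ('a1', 'b1') covers 5 cells
```

The exhaustive search above scales exponentially in the number of robots, which is why the paper pursues distributed greedy selection instead.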

Related Work. Problem 1 is NP-hard, being combinatorial, even when $f$ is monotone and submodular [Feige:1998:TLN:285055.285059]. It has been actively researched over the last 40 years in the optimization, control, and operations research literature [fisher1978analysis, crama1989characterization, Feige:1998:TLN:285055.285059, foldes2005submodularity, krause08efficient, calinescu2011maximizing, wang2015accelerated, atanasov2015decentralized, mirzasoleiman2016distributed, ramalingam2017efficient, roberts2017submodular, sviridenko2017optimal, gharesifard2017distributed, grimsman2018impact, corah2018distributed, corah2019distributed, du2020jacobi, robey2021optimal, rezazadeh2021distributed, mokhtari2018decentralized, konda2021execution, downie2022submodular, chen2022higher, corah2021scalable]. Although near-optimal approximation algorithms have been achieved, they focus on distributed communication only, instead of a holistically resource-aware optimization per Table I. For example, the sequential greedy [fisher1978analysis] and its variants [sviridenko2017optimal, gharesifard2017distributed, grimsman2018impact, corah2018distributed, konda2021execution] require increasing memory storage (agents act sequentially, and each agent passes the information about all previous agents to the next). Further, the consensus-based distributed algorithms [mokhtari2018decentralized, robey2021optimal, rezazadeh2021distributed], although they achieve distributed communication, require excessive computations and communication rounds per agent. No current algorithm has suboptimality guarantees for even disconnected communication networks, nor captures the trade-off of centralization, for global near-optimality, vs. decentralization, for near-minimal on-board resource requirements.

Contributions. We shift focus to holistically resource-aware distributed optimization. Along with novel definitions and theory, we provide the first algorithm balancing the trade-off of centralization, for global near-optimality, vs. decentralization, for near-minimal on-board resource requirements.

1. Resource-Aware Optimization Paradigm. Our first contribution is the definition of a resource-aware distributed algorithm. The definition is summarized in Table I.

2. Resource-Aware Algorithm. We introduce the first resource-aware distributed algorithm for eq. 1 (Section III). According to Table I: (i) RAG has near-minimal computation, communication, and memory requirements (Section IV). (ii) Each agent can afford to run RAG by adjusting the size of its neighborhood, even if that means selecting actions in complete isolation. (iii) RAG enjoys a suboptimality bound that captures the trade-off of centralization, for global near-optimality, vs. decentralization, for near-minimal on-board resource requirements (Section V). All agents can independently decide their contribution to the approximation performance by choosing the size of their neighborhood, and accounting at the same time for their on-board resources. RAG is the first algorithm to quantify this trade-off.

3. Centralization of Information. We introduce the notion of Centralization of Information among non-Neighbors (COIN) to quantify RAG’s suboptimality. COIN captures the information overlap between an agent and its non-neighbors.

4. Evaluation on Robotic Application. We evaluate RAG in simulated scenarios of image covering with mobile robots (Section VI). We first compare RAG with the state of the art. Then, we evaluate the trade-off of centralization vs. decentralization with respect to RAG's performance. To enable the comparison with the state of the art, we assume undirected and connected networks. RAG demonstrates superior or comparable performance, requiring, e.g., (i) orders of magnitude less computation time than the state-of-the-art consensus-based algorithm in [robey2021optimal], (ii) an order of magnitude fewer communication rounds than the consensus algorithm in [robey2021optimal], and comparable communication rounds to the greedy in [konda2021execution], and (iii) the least memory (e.g., less than the consensus algorithm in [robey2021optimal]). Still, RAG achieves the best approximation performance.

II Distributed Submodular Maximization:
A Multi-Robot Decision-Making Perspective

We define the Distributed Submodular Maximization problem of this paper (Problem 1). We use the following notation:

  • $\mathcal{G} = (\mathcal{N}, \mathcal{E})$ is a communication network with nodes $\mathcal{N}$ and edges $\mathcal{E}$. Nodes represent agents (e.g., robots), and edges represent communication channels.

  • $\mathcal{N}_i^{\text{in}} \triangleq \{j : (j, i) \in \mathcal{E}\}$, for all $i \in \mathcal{N}$; i.e., $\mathcal{N}_i^{\text{in}}$ is the set of in-neighbors of $i$.

  • $\mathcal{N}_i^{\text{out}} \triangleq \{j : (i, j) \in \mathcal{E}\}$, for all $i \in \mathcal{N}$, given graph $\mathcal{G}$; i.e., $\mathcal{N}_i^{\text{out}}$ is the set of out-neighbors of $i$. If $\mathcal{G}$ is undirected, then $\mathcal{N}_i^{\text{in}} = \mathcal{N}_i^{\text{out}}$, for all $i \in \mathcal{N}$.

  • $\operatorname{dist}(i, j)$ is a distance metric between an agent $i$ and an agent $j$; e.g., $\operatorname{dist}(i, j)$ may be the Euclidean distance between $i$ and $j$, when $i$ and $j$ are robots in the space.

  • $\prod_{\mathcal{V} \in \mathcal{C}} \mathcal{V}$, given a collection $\mathcal{C}$ of sets; i.e., the cross-product of the sets in $\mathcal{C}$;

  • $f(a \mid \mathcal{A}) \triangleq f(\mathcal{A} \cup \{a\}) - f(\mathcal{A})$, given a set function $f$, a set $\mathcal{A}$, and an element $a$; i.e., $f(a \mid \mathcal{A})$ is the marginal gain in $f$ for adding $a$ to $\mathcal{A}$.

The following preliminary framework is also required.

Agents. The terms "agent" and "robot" are used interchangeably in this paper. $\mathcal{N}$ is the set of all robots. The robots cooperate towards a task, such as image covering. $\mathcal{V}_i$ is the discrete set of actions available to each robot $i \in \mathcal{N}$. For example, in image covering, $\mathcal{V}_i$ may be the set of motion primitives that robot $i$ can execute to move in the environment.

Communication Network. The communication network $\mathcal{G}$ among the robots may be directed and even disconnected. If $(j, i) \in \mathcal{E}$, then a communication channel exists from robot $j$ to robot $i$: $i$ can receive, store, and process the information from $j$. The set of all robots that can send information to $i$ is $\mathcal{N}_i^{\text{in}}$, i.e., $i$'s in-neighborhood. The set of all robots that can receive information from $i$ is $\mathcal{N}_i^{\text{out}}$, i.e., $i$'s out-neighborhood.

Remark 1 (Resource-aware in-neighborhood selection based on information overlap).

In this paper, $\mathcal{N}_i^{\text{in}}$ has been implicitly decided by robot $i$, given $i$'s on-board resources and based on a distance metric capturing the information overlap between $i$ and any robot $j$ within communication range. Particularly, $\operatorname{dist}(i, j)$ is considered to increase as the information overlap drops. E.g., in image covering, since each camera's field of view is finite, the distance metric can be proportional to the physical distance of the robots. When $\operatorname{dist}(i, j)$ is sufficiently large, such that the fields of view of robots $i$ and $j$ stop overlapping, the robots know that their information is non-overlapping, and may stop exchanging information to conserve on-board resources.

Remark 2 (Resource-aware out-neighborhood selection).

In this paper, $\mathcal{N}_i^{\text{out}}$ has been implicitly decided by robot $i$, given robot $i$'s on-board resources.

Objective Function. The robots coordinate their actions to maximize an objective function. In tasks, such as image covering, target tracking, and persistent monitoring, typical objective functions are the covering functions [corah2018distributed, robey2021optimal, downie2022submodular, corah2021scalable]. Intuitively, these functions capture how much area/information is covered given the actions of all robots. They satisfy the properties defined below (Definition 1 and Definition 2).

Definition 1 (Normalized and Non-Decreasing Submodular Set Function [fisher1978analysis]).

A set function $f : 2^{\mathcal{V}} \mapsto \mathbb{R}$ is normalized and non-decreasing submodular if and only if

  • $f(\emptyset) = 0$ (normalization);

  • $f(\mathcal{A}) \leq f(\mathcal{B})$, for any $\mathcal{A} \subseteq \mathcal{B} \subseteq \mathcal{V}$ (non-decreasing);

  • $f(a \mid \mathcal{A}) \geq f(a \mid \mathcal{B})$, for any $\mathcal{A} \subseteq \mathcal{B} \subseteq \mathcal{V}$ and $a \in \mathcal{V} \setminus \mathcal{B}$ (submodularity).

Normalization holds without loss of generality. In contrast, monotonicity and submodularity are intrinsic to the function. Intuitively, if $f$ captures the area covered by a set of activated cameras, then the more cameras are activated, the more area is covered; this is the non-decreasing property. Also, the marginal gain in covered area caused by activating a camera drops when more cameras are already activated; this is the submodularity property.

Definition 2 (2nd-order Submodular Set Function [crama1989characterization, foldes2005submodularity]).

$f$ is 2nd-order submodular if and only if

$f(a \mid \mathcal{A}) - f(a \mid \mathcal{A} \cup \mathcal{B}) \;\geq\; f(a \mid \mathcal{A} \cup \mathcal{C}) - f(a \mid \mathcal{A} \cup \mathcal{B} \cup \mathcal{C}),$   (2)

for any disjoint $\mathcal{A}, \mathcal{B}, \mathcal{C} \subseteq \mathcal{V}$ ($\mathcal{A} \cap \mathcal{B} = \mathcal{A} \cap \mathcal{C} = \mathcal{B} \cap \mathcal{C} = \emptyset$) and $a \in \mathcal{V}$.

2nd-order submodularity is another property intrinsic to the function. Intuitively, if $f$ captures the area covered by a set of cameras, then the marginal gain of the marginal gains also drops when more cameras are already activated.
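The properties of Definitions 1 and 2 can be verified numerically on a small coverage function; the cameras and covered cells below are illustrative, and the 2nd-order check uses a three-disjoint-set form of the diminishing marginal-of-marginal-gains inequality:

```python
# Numerically check, for a small (hypothetical) coverage function, that it is
# normalized, non-decreasing, submodular, and 2nd-order submodular.
from itertools import combinations

CELLS = {"s1": {1, 2}, "s2": {2, 3}, "s3": {3, 4, 5}}  # camera -> covered cells
GROUND = list(CELLS)

def f(A):
    return len(set().union(*[CELLS[a] for a in A]))

def gain(a, A):  # marginal gain f(a | A)
    return f(set(A) | {a}) - f(A)

subsets = [set(c) for r in range(len(GROUND) + 1)
           for c in combinations(GROUND, r)]
assert f(set()) == 0                  # normalization
for A in subsets:
    for B in subsets:
        if A <= B:
            assert f(A) <= f(B)       # non-decreasing
            for a in GROUND:
                if a not in B:        # submodularity: gains diminish
                    assert gain(a, A) >= gain(a, B)
# 2nd-order submodularity: the marginal of the marginal gain also diminishes.
for A in subsets:
    for B in subsets:
        for C in subsets:
            if not (A & B or A & C or B & C):
                for a in GROUND:
                    assert (gain(a, A) - gain(a, A | B)
                            >= gain(a, A | C) - gain(a, A | B | C))
print("all properties hold")
```

Coverage functions pass all three checks, which is why they are the running example for Problem 1.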

Problem Definition. In this paper, we focus on:

Problem 1 (Distributed Submodular Maximization).

Each robot $i \in \mathcal{N}$ independently selects an action $a_i \in \mathcal{V}_i$, upon receiving information from and about the in-neighbors $\mathcal{N}_i^{\text{in}}$ only, such that the robots' actions solve

$\max_{a_i \in \mathcal{V}_i,\; i \in \mathcal{N}} \; f\left(\{a_i\}_{i \in \mathcal{N}}\right),$   (3)

where $f$ is a normalized, non-decreasing submodular, and 2nd-order submodular set function.

Problem 1 requires each robot $i$ to independently choose an action $a_i$ to maximize the global objective $f$, based only on local communication and information. Particularly, if robot $i$ decides to select an action $a_i$ after $r$ communication rounds (i.e., iterations of information exchange), then $a_i$ depends only on the information that $i$ has received from its in-neighbors up to round $r$, for a decision algorithm to be found in this paper.

Assumption 1.

Each robot $i$ may receive the action $a_j$ of an in-neighbor $j \in \mathcal{N}_i^{\text{in}}$ as information, and, then, robot $i$ can locally compute the marginal gain of any of its own actions $a \in \mathcal{V}_i$.

The assumption is common across the distributed optimization algorithms in the literature [gharesifard2017distributed, grimsman2018impact, mokhtari2018decentralized, robey2021optimal, downie2022submodular, du2020jacobi, rezazadeh2021distributed, corah2018distributed, konda2021execution, liu2021distributed, corah2019distributed, corah2021scalable]. In practice, robot $j$ needs to transmit to robot $i$, along with $j$'s action, all other information required so that $i$ can locally compute the marginal gains of any of its own candidate actions.

In Section III we introduce RAG, an algorithm that runs locally on each robot $i$, playing the role of the decision algorithm above.

Assumption 2.

The communication network $\mathcal{G}$ is fixed during each execution of the algorithm.

That is, we assume no communication failures from the start of the algorithm until its end. Still, $\mathcal{G}$ can be dynamic across consecutive time steps $t$, when Problem 1 is applied in a receding-horizon fashion (Remark 3).

Remark 3 (Receding-Horizon Control, and Need for Minimal Communication and Computation).

Image covering, target tracking, and persistent monitoring are dynamic tasks that require the robots to react across consecutive time steps $t$. Then, Problem 1 must be solved in a receding-horizon fashion [camacho2013model]. This becomes possible only if the time interval between any two consecutive steps $t$ and $t+1$ can contain the number of communication rounds required to solve Problem 1. Thus, for faster reaction, the smaller the number of communication rounds must be, and the smaller the computation effort per round must be. Otherwise, real-time performance is compromised by latency.

III Resource-Aware distributed Greedy (RAG) Algorithm

Input: Agent $i$'s action set $\mathcal{V}_i$; in-neighbors set $\mathcal{N}_i^{\text{in}}$; out-neighbors set $\mathcal{N}_i^{\text{out}}$; normalized, non-decreasing submodular, and 2nd-order submodular set function $f$.
Output: Agent $i$'s action $a_i$.
1:  $\mathcal{S}_i \leftarrow \emptyset$; $\mathcal{A}_i \leftarrow \emptyset$; $a_i \leftarrow \emptyset$; // $\mathcal{S}_i$ stores the agents in $\mathcal{N}_i^{\text{in}}$ that have selected an action; $\mathcal{A}_i$ stores their selected actions; $a_i$ stores agent $i$'s selected action
2:  while $a_i = \emptyset$ do
3:     $\bar{a}_i \leftarrow \arg\max_{a \in \mathcal{V}_i} f(a \mid \mathcal{A}_i)$;
4:     $g_i \leftarrow f(\bar{a}_i \mid \mathcal{A}_i)$;
5:     transmit $g_i$ to each agent $j \in \mathcal{N}_i^{\text{out}} \setminus \mathcal{S}_i$ and receive $g_j$ from each $j \in \mathcal{N}_i^{\text{in}} \setminus \mathcal{S}_i$;
6:     if $g_i \geq g_j$, for all $j \in \mathcal{N}_i^{\text{in}} \setminus \mathcal{S}_i$, then
7:        $a_i \leftarrow \bar{a}_i$; // $i$ selects action $\bar{a}_i$
8:        transmit $a_i$ to each agent $j \in \mathcal{N}_i^{\text{out}}$; // $i$ has the best action across $\mathcal{N}_i^{\text{in}} \cup \{i\}$
9:     else
10:       denote by $\mathcal{T}_i$ the set of agent(s) in $\mathcal{N}_i^{\text{in}}$ that selected action(s) in this iteration;
11:       receive $a_j$ from each agent $j \in \mathcal{T}_i$;
12:       $\mathcal{S}_i \leftarrow \mathcal{S}_i \cup \mathcal{T}_i$;
13:       $\mathcal{A}_i \leftarrow \mathcal{A}_i \cup \{a_j\}_{j \in \mathcal{T}_i}$;
14:    end if
15: end while
16: return $a_i$.
Algorithm 1 Resource-Aware distributed Greedy (RAG).

We present the Resource-Aware distributed Greedy (RAG) algorithm, the first resource-aware algorithm for Problem 1.

RAG's pseudo-code is given in Algorithm 1. The algorithm requires only local information exchange, among only neighboring robots and about only neighboring robots. Each agent $i$ starts by selecting its action with the largest marginal gain (lines 3–4). Then, instead of exchanging information with all other agents, $i$ exchanges information only with its in-neighbors and out-neighbors (line 5). Afterwards, $i$ checks whether it is the "best agent" among its in-neighbors only, instead of comparing with all other agents in the network (line 6). If yes, then $i$ selects its candidate action (line 7), and lets only its out-neighbors know of its selection (line 8). Otherwise, $i$ receives the action(s) from the in-neighboring agent(s) that just selected action(s) in this iteration (line 10), and continues to the next iteration (lines 9–15). Notably, multiple agents may select actions in the same iteration, and in some iterations none may.
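The per-iteration logic described above can be sketched as a synchronous simulation; the network, action sets, objective, and the id-based tie-breaking below are illustrative assumptions (Algorithm 1 is the authoritative statement):

```python
# Synchronous simulation of RAG's logic: each undecided agent proposes its best
# remaining marginal gain; it commits when its gain is at least that of every
# undecided in-neighbor. Tie-breaking by agent id is an assumption made here to
# guarantee progress; the scenario is illustrative.
COVERAGE = {
    "a1": {1, 2}, "a2": {2, 3},
    "b1": {2, 3}, "b2": {4},
    "c1": {4, 5}, "c2": {1},
}
ACTIONS = {0: ["a1", "a2"], 1: ["b1", "b2"], 2: ["c1", "c2"]}  # V_i per agent
IN_NBRS = {0: {1}, 1: {0, 2}, 2: {1}}  # in-neighborhoods (undirected line)

def f(actions):
    return len(set().union(*[COVERAGE[a] for a in actions]))

def gain(a, chosen):
    return f(chosen | {a}) - f(chosen)

def rag():
    selected = {}  # agent -> committed action
    while len(selected) < len(ACTIONS):
        proposals = {}
        for i in ACTIONS:
            if i in selected:
                continue
            # Actions already selected by i's in-neighbors (cf. Assumption 1).
            known = {selected[j] for j in IN_NBRS[i] if j in selected}
            best = max(ACTIONS[i], key=lambda a: gain(a, known))
            proposals[i] = (gain(best, known), -i, best)
        for i, (g, tie, a) in proposals.items():
            rivals = [proposals[j][:2] for j in IN_NBRS[i] if j in proposals]
            if all((g, tie) >= r for r in rivals):
                selected[i] = a  # i is best among its undecided in-neighbors
    return selected

print(rag())  # e.g., {0: 'a1', 2: 'c1', 1: 'b1'}, covering cells 1-5
```

Selecting greedily against only the in-neighbors' committed actions is what keeps both computation and communication local; shrinking `IN_NBRS` trades approximation quality for on-board resources.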

Remark 4 (Directed and Disconnected Communication Topology).

RAG is valid for directed and even disconnected communication topologies, in accordance with the paradigm of Table I. For example, if $\mathcal{N}_i^{\text{in}} = \mathcal{N}_i^{\text{out}} = \emptyset$ in Algorithm 1, then agent $i$ is completely disconnected from the network.

IV Computation, Communication, and Memory Requirements of RAG

Continuous Domain
Method | Communication Topology
Du et al. [du2020jacobi] | connected, undirected
Robey et al. [robey2021optimal] | connected, undirected
Rezazadeh and Kia [rezazadeh2021distributed] | connected, undirected
RAG (this paper) | even disconnected, directed

Discrete Domain
Method | Communication Topology
Corah and Michael [corah2018distributed] | fully connected
Liu et al. [liu2021distributed] | connected, directed
Konda et al. [konda2021execution] | connected, undirected
RAG (this paper) | even disconnected, directed

RAG's remaining entries (both domains): computations per agent $|\mathcal{V}_i|\,(|\mathcal{N}_i^{\text{in}}| + 1)$ (Proposition 1); at most $2\,|\mathcal{N}|$ communication rounds (Proposition 2); memory per message $\max\{l_r, l_a\}$ (Proposition 3); suboptimality guarantee per Theorem 1.
TABLE II: RAG vs. State of the Art. The state of the art is divided into algorithms that optimize (i) in the continuous domain, employing a continuous representation of $f$ [calinescu2011maximizing], and (ii) in the discrete domain. The continuous-domain algorithms need to compute the continuous representation's gradient via sampling; $K$ denotes the sample size used in the numerical evaluations, with 10 or fewer agents, of [du2020jacobi], [robey2021optimal], and [rezazadeh2021distributed].

We present RAG's computation, communication, and memory requirements. The requirements are in accordance with the paradigm in Table I, and are summarized in Table II.

We use the following additional notation:

  • $l_r$ and $l_a$ are the lengths of a message containing a real number or an action, respectively.

  • $\operatorname{diam}(\mathcal{G})$ is the diameter of a network $\mathcal{G}$, i.e., the longest shortest path between any pair of nodes in $\mathcal{G}$ [mesbahiBook];

  • $|\mathcal{V}|$ is the cardinality of a discrete set $\mathcal{V}$.

Proposition 1 (Computation Requirements).

Each agent $i$ performs at most $|\mathcal{V}_i|\,(|\mathcal{N}_i^{\text{in}}| + 1)$ function evaluations during RAG.

Each agent $i$ needs to re-evaluate the marginal gains of all $a \in \mathcal{V}_i$ every time an in-neighbor selects an action. Therefore, $i$ performs $|\mathcal{V}_i|\,(|\mathcal{N}_i^{\text{in}}| + 1)$ evaluations in the worst case (and $|\mathcal{V}_i}|$ evaluations in the best case, when no in-neighbor selects before $i$).

Proposition 2 (Communication Requirements).

RAG's number of communication rounds is at most $2\,|\mathcal{N}|$.

Each iteration of RAG requires two communication rounds: one for marginal gains, and one for actions. Also, RAG requires $|\mathcal{N}|$ iterations in the worst case (when only one agent selects an action at each iteration). All in all, RAG requires at most $2\,|\mathcal{N}|$ communication rounds.

Proposition 3 (Memory Requirements).

RAG's largest inter-agent message length is $\max\{l_r, l_a\}$.

Any inter-agent message in RAG contains either a marginal gain, i.e., a real number, or an action. Thus, the message's length is either $l_r$ or $l_a$. The total on-board memory requirement for each agent $i$ is proportional to $|\mathcal{N}_i^{\text{in}}|\,\max\{l_r, l_a\}$, since $i$ stores at most one gain or action per in-neighbor.

Remark 5 (Near-Minimal Resource Requirements).

RAG has near-minimal computation, communication, and memory requirements, in accordance with Table I. (i) Computations per Agent: the number of computations per agent $i$ is indeed proportional to the size of the agent's action set, in particular $|\mathcal{V}_i|\,(|\mathcal{N}_i^{\text{in}}| + 1)$, i.e., decreasing as $|\mathcal{N}_i^{\text{in}}|$ decreases. The number of computations would have been minimal if instead it were $|\mathcal{V}_i|$, since that is the cost for agent $i$ to compute its best action in $\mathcal{V}_i$. (ii) Communication Rounds: the number of communication rounds is indeed proportional to the number of agents, in particular at most $2\,|\mathcal{N}|$. The number is near-minimal since, in the worst case of a line communication network, $|\mathcal{N}| - 1$ communication rounds are required for information to travel between the most distant agents. (iii) Memory per Message: the length per message is indeed equal to the length of a real number or of an action.

Besides, each agent $i$ can afford to run RAG, in accordance with Table I, by adjusting the size of its in- and out-neighborhoods $\mathcal{N}_i^{\text{in}}$ and $\mathcal{N}_i^{\text{out}}$. E.g., by decreasing the size of $\mathcal{N}_i^{\text{in}}$: (i) agent $i$'s computation effort decreases, since the effort is proportional to the size of $\mathcal{N}_i^{\text{in}}$ (Proposition 1); (ii) the per-round communication effort decreases, since the total number of communication rounds remains at most $2\,|\mathcal{N}|$ (Proposition 2) but the number of received messages per round decreases (RAG's lines 5–6); and (iii) the on-board memory-storage requirement decreases, since the inter-agent message length remains constant (Proposition 3) but the number of received messages per round decreases (RAG's lines 5–6).

Remark 6 (vs. State-of-the-Art Resource Requirements).

RAG has comparable or superior computation, communication, and memory requirements, vs. the state of the art. The comparison is summarized in Table II.

Context and notation in Table II. We divide the state of the art into algorithms that optimize (i) indirectly in the continuous domain, employing the multi-linear extension [calinescu2011maximizing], a continuous representation of the set function $f$ [robey2021optimal, du2020jacobi, rezazadeh2021distributed], and (ii) directly in the discrete domain [corah2018distributed, konda2021execution, liu2021distributed]. The continuous-domain algorithms employ consensus-based techniques [robey2021optimal, rezazadeh2021distributed] or algorithmic game theory [du2020jacobi], and require the computation of the multi-linear extension's gradient. The computation is achieved via sampling; in Table II, $K$ denotes that sample size, which takes method-specific values in the numerical evaluations, with 10 or fewer agents, of [du2020jacobi], [robey2021optimal], and [rezazadeh2021distributed]. The computations per agent and communication rounds reported for [du2020jacobi] are based on the numerical evaluations therein, since a theoretical quantification is missing in [du2020jacobi] and deriving one as a function of the problem parameters seems non-trivial. Further, the resource requirements of all continuous-domain algorithms depend on additional problem-dependent parameters (such as Lipschitz constants, the diameter of the domain set of the multi-linear extension, and a bound on the gradient of the multi-linear extension), which we keep implicit in the asymptotic notation. A globally known hyper-parameter (see Remark 10) determines the approximation performance of the respective algorithms.

Computations. Konda et al. [konda2021execution] rank best, with $|\mathcal{V}_i|$ computations per agent. RAG ranks second-best, with $|\mathcal{V}_i|\,(|\mathcal{N}_i^{\text{in}}| + 1)$. The continuous-domain algorithms require a higher number of computations, proportional to the sample size $K$ or more.

Communication. For undirected networks, Konda et al. [konda2021execution] and RAG rank best, requiring the same number of communication rounds in the worst case; but RAG is also valid for directed networks. For appropriate tuning, Corah and Michael [corah2019distributed] may require fewer communication rounds, but [corah2019distributed] requires a pre-processing step with a fully connected network. The remaining algorithms require a significantly higher number of communication rounds.

Memory. RAG ranks best when $l_a \leq l_r$; otherwise, it ranks after Robey et al. [robey2021optimal] and Rezazadeh and Kia [rezazadeh2021distributed], which then rank best (tie).

V Approximation Guarantee of RAG: Centralization vs. Decentralization Perspective

We present RAG’s suboptimality bound (Theorem 1). In accordance with Table I, the bound quantifies the trade-off of centralization, for global near-optimality, vs. decentralization, for near-minimal on-board resource requirements.

We introduce the notion of Centralization of Information among non-Neighbors to quantify the bound.

We also use the following notation:

  • $\bar{\mathcal{N}}_i \triangleq \mathcal{N} \setminus (\mathcal{N}_i^{\text{in}} \cup \{i\})$ is the set of agents beyond the in-neighborhood of $i$ (see Fig. 1), i.e., $i$'s non-neighbors;

  • $\mathcal{A}_{\bar{\mathcal{N}}_i}$ is the agents' actions in $\bar{\mathcal{N}}_i$;

  • $\mathcal{A}^{\star} \in \arg\max_{a_i \in \mathcal{V}_i,\; i \in \mathcal{N}} f(\{a_i\}_{i \in \mathcal{N}})$, i.e., $\mathcal{A}^{\star}$ is an optimal solution to Problem 1;

  • $\mathcal{A}^{\text{RAG}}$ is RAG's output for all agents;

  • $\operatorname{dist}(i, \bar{\mathcal{N}}_i) \triangleq \min_{j \in \bar{\mathcal{N}}_i} \operatorname{dist}(i, j)$, for any agent $i$ and distance metric $\operatorname{dist}$; i.e., the distance between an agent $i$ and its non-neighborhood is the minimum distance between $i$ and a non-neighbor $j \in \bar{\mathcal{N}}_i$.

V-A Centralization of Information: A Novel Quantification

We use the notion of Centralization Of Information among non-Neighbors (COIN) to bound RAG's suboptimality.

Definition 3 (Centralization Of Information among non-Neighbors (COIN)).

Consider a communication network $\mathcal{G} = (\mathcal{N}, \mathcal{E})$, an agent $i \in \mathcal{N}$, and a set function $f$. Then, agent $i$'s Centralization Of Information among non-Neighbors is defined as

$\text{COIN}_i \;\triangleq\; \max_{a_i \in \mathcal{V}_i,\;\; \mathcal{A}_{\bar{\mathcal{N}}_i} \in \prod_{j \in \bar{\mathcal{N}}_i} \mathcal{V}_j} \; \left[\, f(\{a_i\}) + f(\mathcal{A}_{\bar{\mathcal{N}}_i}) - f(\{a_i\} \cup \mathcal{A}_{\bar{\mathcal{N}}_i}) \,\right].$   (4)

Fig. 1: Venn-diagram definition of the set $\bar{\mathcal{N}}_i$, given an agent $i$.
Fig. 2: Image Covering Setup. (a) An image covering scenario in a point map: the stars are the agents' locations, the circles are the agents' sensing ranges, and the dots are covered points. (b) Agent $i$ and its non-neighbors, i.e., the agents beyond agent $i$'s communication range; the sensing range of each agent, defining the area of its field of view, is also depicted. (c) $\text{COIN}_i$'s upper bound for increasing communication range of agent $i$; $r$ is the agents' sensing range.

If $f$ were entropy, then $\text{COIN}_i$ would look like the mutual information between the information collected by agent $i$'s action $a_i$ and the information collected by the actions $\mathcal{A}_{\bar{\mathcal{N}}_i}$ of agent $i$'s non-neighbors. $\text{COIN}_i$, in particular, is computed over the best such actions (hence the maximization in eq. 4).

$\text{COIN}_i$ captures the centralization of information within the context of a multi-agent network for information acquisition: $\text{COIN}_i = 0$ if and only if agent $i$'s information is independent from its non-neighbors', i.e., if and only if the information is decentralized between agent $i$ and its non-neighbors $\bar{\mathcal{N}}_i$.

Computing $\text{COIN}_i$ can be NP-hard [Feige:1998:TLN:285055.285059], or even impossible, since agent $i$ may be unaware of its non-neighbors' actions; but upper-bounding it can be easy. In this paper, we focus on upper bounds that depend on $\operatorname{dist}(i, \bar{\mathcal{N}}_i)$, quantifying how fast the information overlap between $i$ and its non-neighbors decreases the further away $\bar{\mathcal{N}}_i$ is from $i$; i.e., how quickly information becomes decentralized beyond agent $i$'s neighborhood. For an image covering task, we obtain such a bound next.
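For a tiny instance where the non-neighbors' action sets are known, the overlap quantity behind COIN (the set-function analogue of mutual information, maximized over the candidate actions) can be computed by brute force; the coverage sets and action names below are illustrative:

```python
# Brute-force overlap computation behind COIN for a small coverage instance:
# the maximum, over agent i's actions a_i and the non-neighbors' joint actions
# A, of f({a_i}) + f(A) - f({a_i} U A). Exhaustive, hence exponential in the
# number of non-neighbors -- which is why easy upper bounds are preferred.
from itertools import product

COVERAGE = {
    "a1": {1, 2, 3}, "a2": {4, 5},  # agent i's candidate actions
    "x1": {3, 4},    "x2": {9},     # non-neighbor 1's actions
    "y1": {5, 6},    "y2": {10},    # non-neighbor 2's actions
}

def f(actions):
    return len(set().union(*[COVERAGE[a] for a in actions]))

def coin(own_actions, non_neighbor_action_sets):
    best = 0
    for a_i in own_actions:
        for A in product(*non_neighbor_action_sets):
            overlap = f([a_i]) + f(A) - f([a_i, *A])
            best = max(best, overlap)
    return best

print(coin(["a1", "a2"], [["x1", "x2"], ["y1", "y2"]]))  # prints 2
```

Here the worst-case overlap of 2 cells comes from action "a2" clashing with "x1" and "y1"; if no candidate actions shared cells, the value would be 0, i.e., fully decentralized information.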

Remark 7 (Distance-Based Upper Bounds for COIN: Image Covering Example).

Consider a toy image covering task where each agent carries a camera with a round field of view of radius $r$ (Fig. 2(a)). Consider that each agent $i$ has fixed its in-neighborhood $\mathcal{N}_i^{\text{in}}$, i.e., its communication range is just below the corresponding $\operatorname{dist}(i, \bar{\mathcal{N}}_i)$, that is, equal to $\operatorname{dist}(i, \bar{\mathcal{N}}_i) - \epsilon$ for some, possibly arbitrarily small, $\epsilon > 0$; e.g., in Fig. 2(b) the non-neighbor on the left of agent $i$ is just outside the boundary of $i$'s communication range. Then, $\text{COIN}_i$ is equal to the overlap of the fields of view of agent $i$ and its non-neighbors, assuming, for simplicity, that the bound remains the same across two consecutive moves. Since the number of agent $i$'s non-neighbors may be unknown, an upper bound to $\text{COIN}_i$ is the gray ring area in Fig. 2(b), obtained assuming an infinite number of non-neighbors around agent $i$, located just outside the boundary of $i$'s communication range. That is,

$\text{COIN}_i \;\leq\; \pi r^2 - \pi \left(\operatorname{dist}(i, \bar{\mathcal{N}}_i) - r\right)^2,$   (5)

for $r \leq \operatorname{dist}(i, \bar{\mathcal{N}}_i) \leq 2r$. The upper bound in eq. 5 as a function of the communication range is shown in Fig. 2(c). As expected, it tends to zero for increasing communication range, equivalently, for increasing $\operatorname{dist}(i, \bar{\mathcal{N}}_i)$. Particularly, when $\operatorname{dist}(i, \bar{\mathcal{N}}_i) \geq 2r$, the fields of view of robot $i$ and of each of the non-neighboring robots are non-overlapping, and, thus, $\text{COIN}_i = 0$.
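A sketch of the distance-based bound of Remark 7, under the geometric reading assumed here: round fields of view of radius r, non-neighbors no closer than the communication range c, and overlap therefore confined to the outer ring of agent i's field of view:

```python
# Ring-area upper bound on COIN_i (in covered-area units) for sensing radius r
# and communication range c, under the geometric assumptions of Remark 7:
# overlap with any non-neighbor is confined to the part of i's field of view
# at distance >= c - r from i, and vanishes once c >= 2r.
import math

def coin_ring_bound(c, r):
    if c >= 2 * r:
        return 0.0           # fields of view can no longer overlap
    inner = max(c - r, 0.0)  # triangle inequality: overlap only beyond c - r
    return math.pi * (r ** 2 - inner ** 2)

r = 1.0
for c in (0.5, 1.0, 1.5, 2.0):
    print(f"communication range {c:.1f}: COIN bound {coin_ring_bound(c, r):.3f}")
```

As in Fig. 2(c), the bound is non-increasing in the communication range and hits zero at c = 2r, the point beyond which agent i's information is fully decentralized from its non-neighbors'.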

In Section V-B, we show that RAG enables each agent $i$ to choose its in-neighborhood $\mathcal{N}_i^{\text{in}}$ (equivalently, its non-neighborhood $\bar{\mathcal{N}}_i$) to balance both its on-board resources and its contribution to a near-optimal approximation performance, as the latter is captured by $\text{COIN}_i$.

Remark 8 (vs. Pairwise Redundancy [corah2018distributed]).

$\text{COIN}_i$'s definition generalizes the notion of pairwise redundancy between two agents $i$ and $j$, introduced by Corah and Michael [corah2018distributed]. The notion was introduced in the context of parallelizing the execution of the sequential greedy [fisher1978analysis], by ignoring the edges between pairs of agents in an a priori fully connected network. The comparison of the resulting parallelized greedy in [corah2018distributed] with RAG is found in Table II. Besides, the pairwise redundancy captures the mutual information between the two agents $i$ and $j$ only; whereas $\text{COIN}_i$ captures the mutual information between an agent and all its non-neighbors, capturing directly the decentralization of information across the network.

V-B Approximation Guarantee of RAG

Theorem 1 (Approximation Performance of RAG).

RAG selects actions $\mathcal{A}^{\text{RAG}} = \{a_i\}_{i \in \mathcal{N}}$, with $a_i \in \mathcal{V}_i$ for all $i \in \mathcal{N}$, such that

$f\left(\mathcal{A}^{\text{RAG}}\right) \;\geq\; \frac{1}{2}\left(f\left(\mathcal{A}^{\star}\right) - \sum_{i \in \mathcal{N}} \text{COIN}_i\right).$   (6)
Remark 9 (Centralization vs. Decentralization).

RAG’s suboptimality bound in Theorem 1 captures the trade-off of centralization, for global near-optimality, vs. decentralization, for near-minimal on-board resource requirements:

  • Near-optimality requires a large in-neighborhood, i.e., centralization: the larger the in-neighborhood is, the larger the communication range is. Thus, the smaller the information overlap between an agent and its non-neighbors is, i.e., the smaller the COIN term is, resulting in an increased near-optimality for RAG.

  • Minimal on-board resource requirements require instead a small in-neighborhood, i.e., decentralization: the smaller the in-neighborhood is, the less the computation per agent (Proposition 1), as well as the less the per-communication-round communication and memory-storage effort, since the number of received messages per round decreases.

RAG covers the spectrum from fully centralized (all agents communicate with all others) to fully decentralized (all agents communicate with none). RAG enjoys the suboptimality guarantee in Theorem 1 throughout the spectrum, capturing the trade-off of centralization vs. decentralization. When fully centralized, RAG matches the suboptimality bound of the classical greedy [fisher1978analysis], since then every agent has an empty non-neighborhood and its COIN term vanishes. For a centralized algorithm, the best possible bound is the one in [sviridenko2017optimal].
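The spectrum above can be sketched with a minimal neighborhood-restricted greedy. This is our simplified illustration of the general pattern, not RAG's exact pseudocode; `gain` stands in for a marginal-gain oracle, and all names are ours:

```python
def neighborhood_greedy(agents, actions_of, gain, in_neighbors):
    """Each agent, in a fixed order, picks the action with the largest
    marginal gain given only the choices already made by its in-neighbors.
    Full in-neighborhoods recover the classical sequential greedy; empty
    ones make every agent select in complete isolation."""
    chosen = {}
    for i in agents:
        context = [chosen[j] for j in in_neighbors[i] if j in chosen]
        chosen[i] = max(actions_of[i], key=lambda a: gain(a, context))
    return chosen
```

With a coverage-style gain, shrinking in-neighborhoods lets agents pick mutually redundant actions, mirroring the role of the COIN term in the bound above.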

Remark 10 (vs. State-of-the-Art Approximation Guarantees).

RAG is the first algorithm to quantify the trade-off of centralization vs. decentralization, per Table I. RAG enables each agent to independently decide the size of its in-neighborhood to balance near-optimality, which requires larger in-neighborhoods, i.e., centralization, against on-board resources, which require smaller in-neighborhoods, i.e., decentralization. Instead, the state of the art tunes near-optimality via a globally known hyper-parameter, without accounting for the balance of smaller vs. larger neighborhoods at the agent level, independently for each agent.

Moreover, the continuous methods [robey2021optimal, du2020jacobi, rezazadeh2021distributed] achieve the suboptimality bound in probability, and the achieved value is in expectation. Instead, RAG's bound is deterministic, involving the exact value of the selected actions.

In the image covering scenario of Remark 7, RAG's suboptimality guarantee gradually degrades with increased decentralization: as an agent's communication range decreases, its in-neighborhood becomes smaller and its COIN term increases (Fig. 2(b)), causing RAG's suboptimality guarantee to degrade.

VI Evaluation in Image Covering with Robots

We evaluate RAG in simulated scenarios of image covering with mobile robots (Fig. 2). We first compare RAG with the state of the art (Section VI-A; see Table III). Then, we evaluate the trade-off of centralization vs. decentralization with respect to RAG’s performance (Section VI-B; see Fig. 3).

We performed all simulations in Python 3.9.7, on a MacBook Pro with the Apple M1 Max chip and 32 GB of RAM.

Our code will become available via a link herein.

VI-A RAG vs. State of the Art

We compare RAG with the state of the art in simulated scenarios of image covering (Fig. 2). To enable the comparison, we set up undirected and connected communication networks (RAG is valid for directed and even disconnected networks but the state-of-the-art methods are not). RAG demonstrates superior or comparable performance (Table III).

Simulation Scenario. We consider 50 instances of the setup in Fig. 2(a). Without loss of generality, each agent has a fixed communication range, a fixed sensing radius, and the action set {“forward”, “backward”, “left”, “right”}, each action moving the agent by 1 point. The agents seek to maximize the number of covered points.
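As a concrete, simplified model of this scenario, the snippet below sketches the covered-points objective and the four moves; the grid size and sensing radius are illustrative assumptions, not the paper's values:

```python
# Simplified model of the image covering objective (illustrative values):
# agents on an integer grid cover every grid point within their sensing
# radius, and the objective counts the distinct covered points.
def covered_points(positions, r_sense, grid):
    covered = set()
    for (ax, ay) in positions:
        for (px, py) in grid:
            if (px - ax) ** 2 + (py - ay) ** 2 <= r_sense ** 2:
                covered.add((px, py))
    return len(covered)

# The four actions, each moving an agent by 1 point.
MOVES = {"forward": (0, 1), "backward": (0, -1), "left": (-1, 0), "right": (1, 0)}

def apply_move(pos, move):
    dx, dy = MOVES[move]
    return (pos[0] + dx, pos[1] + dy)
```

This objective is monotone (adding an agent never removes covered points) and submodular (a new agent's marginal coverage shrinks as more points are already covered), matching the problem class the paper targets.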

Compared Algorithms. We compare RAG with the methods by Robey et al. [robey2021optimal] and Konda et al. [konda2021execution] since, among the state of the art, they achieve top performance for at least one resource requirement and/or for their suboptimality guarantee (Table II); the method by Corah and Michael [corah2018distributed] may achieve fewer communication rounds, yet it requires a fully connected communication network. To ensure the method in [robey2021optimal] achieves a sufficient number of covered points, we set the sample size as is also set in [robey2021optimal], and the number of communication rounds to 100.

Results. The results are reported in Table III, and mirror the theoretical comparison in Table II. RAG demonstrates superior or comparable performance, requiring (i) orders of magnitude less computation time vs. the state-of-the-art consensus-based algorithm in [robey2021optimal], and comparable computation time vs. the state-of-the-art greedy algorithm in [konda2021execution], (ii) an order of magnitude fewer communication rounds vs. the consensus algorithm in [robey2021optimal], and comparable communication rounds vs. the greedy algorithm in [konda2021execution], and (iii) the least memory (e.g., about 40% less than the consensus algorithm in [robey2021optimal]). At the same time, RAG achieves the best coverage performance.

Method                     | Robey et al. [robey2021optimal] | Konda et al. [konda2021execution] | RAG (this paper)
Total Computation Time (s) | 1434.11                         | 0.02                              | 0.05
Communication Rounds       | 100                             | 13.44                             | 7.76
Peak Total Memory (MB)     | 290.43                          | 181.07                            | 167.07
Total Covered Points       | 1773.54                         | 1745.18                           | 1816.4

TABLE III: RAG vs. State of the Art. Averaged performance over the 50 image covering instances of the setup in Fig. 2(a), involving 10 robots.

VI-B The Trade-Off of Centralization vs. Decentralization

We demonstrate the trade-off of centralization vs. decentralization, with respect to RAG’s performance.

Simulation Scenario. We consider the same setup as in Section VI-A, yet with the communication range increasing from zero until the network becomes fully connected. That is, the communication network starts from being fully disconnected (fully decentralized) and becomes fully connected (fully centralized). The communication range is assumed the same for all robots, for simplicity.

Results. The results are reported in Fig. 3. When a higher communication range results in more in-neighbors, then (i) each agent executes more iterations of RAG before selecting an action, resulting in increased (i-a) computation time and (i-b) communication rounds (each iteration of RAG corresponds to a fixed number of communication rounds; see RAG's lines 5 and 11), and (ii) each agent needs more on-board memory for information storage and processing. In contrast, with more in-neighbors, each agent coordinates more centrally, and, thus, the total covered points increase (for each agent, the COIN term becomes smaller till it vanishes, and the theoretical suboptimality bound increases to its fully centralized value). All in all, Fig. 3 captures the trade-off of centralization, for global near-optimality, vs. decentralization, for near-minimal on-board resource requirements. For increasing communication range, the required on-board resources increase, but so do the total covered points. To balance the trade-off, the communication range may be set to an intermediate value.
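The growth of per-agent communication load with the communication range can be sketched as follows; the random positions, range values, and function names are our illustrative assumptions:

```python
import random

def avg_in_neighbors(positions, r_comm):
    """Average in-neighborhood size when agent j is an in-neighbor of agent i
    iff their distance is at most the communication range (symmetric ranges
    assumed, as in the simulation scenario)."""
    n = len(positions)
    total = 0
    for i, (xi, yi) in enumerate(positions):
        for j, (xj, yj) in enumerate(positions):
            if i != j and (xi - xj) ** 2 + (yi - yj) ** 2 <= r_comm ** 2:
                total += 1
    return total / n

random.seed(0)
pts = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(10)]
sizes = [avg_in_neighbors(pts, r) for r in (0.0, 5.0, 15.0)]
```

As the range sweeps from zero to a value covering the whole workspace, the average in-neighborhood grows from 0 (fully disconnected) to n - 1 (fully connected), and with it the number of messages each agent must receive, store, and process per round.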

Fig. 3: Centralization vs. Decentralization: Resource requirements and coverage performance of RAG for increasing communication range, in an image covering scenario with 10 robots.

Vii Conclusion

Summary. We made a first step toward enabling a resource-aware distributed intelligence among heterogeneous agents. We are motivated by complex tasks taking the form of Problem 1, such as image covering. We introduced a resource-aware optimization paradigm (Table I) and presented RAG, the first resource-aware algorithm. RAG is the first algorithm to quantify the trade-off of centralization, for global near-optimality, vs. decentralization, for near-minimal on-board resource requirements. To capture the trade-off, we introduced the notion of Centralization of Information among non-Neighbors (COIN). We validated RAG in simulated scenarios of image covering, demonstrating its superiority.

Future Work. RAG assumes synchronous communication. Besides, the communication topology has to be fixed and failure-free across communication rounds (Assumption 2). Our future work will extend RAG beyond these limitations. We will also consider multi-hop communication and, correspondingly, quantify the trade-off of near-optimality vs. resource-awareness based on the depth of the multi-hop communication. We will also extend our results to any submodular function (instead of “doubly” submodular functions only).

Further, we will leverage our prior work [tzoumas2022robust] to extend RAG to the attack-robust case, against robot removals.

Acknowledgements

We thank Robey et al. [robey2021optimal] and Konda et al. [konda2021execution] for sharing with us the code of their numerical evaluations. Additionally, we thank Hongyu Zhou of the University of Michigan for providing comments on the paper.

References