
Desperate Times Call for Desperate Measures: Towards Risk-Adaptive Task Allocation

Multi-robot task allocation (MRTA) problems involve optimizing the allocation of robots to tasks. MRTA problems are known to be challenging when tasks require multiple robots and the team is composed of heterogeneous robots. These challenges are further exacerbated when we need to account for uncertainties encountered in the real world. In this work, we address coalition formation in heterogeneous multi-robot teams with uncertain capabilities. We specifically focus on tasks that require coalitions to collectively satisfy certain minimum requirements. Existing approaches to uncertainty-aware task allocation either maximize expected pay-off (risk-neutral approaches) or improve worst-case or near-worst-case outcomes (risk-averse approaches). Within the context of our problem, we demonstrate the inherent limitations of unilaterally ignoring or avoiding risk and show that these approaches can in fact reduce the probability of satisfying task requirements. Inspired by models that explain foraging behaviors in animals, we develop a risk-adaptive approach to task allocation. Our approach adaptively switches between risk-averse and risk-seeking behavior in order to maximize the probability of satisfying task requirements. Comprehensive numerical experiments conclusively demonstrate that our risk-adaptive approach outperforms risk-neutral and risk-averse approaches. We also demonstrate the effectiveness of our approach using a simulated multi-robot emergency response scenario.


I Introduction

Multi-robot systems have to deal with various sources of uncertainty when operating in the real world. As such, we require models and approaches that account for such uncertainty when coordinating a team of robots. This need has inspired considerable recent effort aimed at developing risk-aware approaches to multi-robot coordination that explicitly account for different forms of uncertainty (see [21] for a comprehensive survey). Indeed, such risk-aware approaches have been shown to be significantly more successful than approaches that ignore uncertainty.

In this work, we address multi-robot task allocation (MRTA) problems that involve uncertainty. Within MRTA, we focus on the single-task robots, multi-robot tasks, instantaneous assignment (ST-MR-IA) problem in heterogeneous multi-robot teams (see [4, 6] for detailed treatments of the various categories of MRTA). The ST-MR-IA problem is also referred to as the coalition formation problem. While there are various sources of uncertainty, we focus on uncertainty in robots' capabilities. Such uncertainty arises either due to potential failures or due to abstracting large teams of robots into a small number of groups (e.g., [9, 11]).

Existing approaches to risk-based task allocation fall into one of two categories. First, risk-neutral approaches focus on the expected value of pay-off or cost (e.g., [10, 3]). Second, risk-averse approaches avoid worst-case and near-worst-case outcomes (e.g., [7, 11]).

In this work, we argue that neither ignoring nor avoiding risk might be sufficient for a certain class of task allocation problems. Specifically, we focus on task allocation problems that require each coalition to satisfy certain minimum capability-based requirements associated with the assigned task. Examples of such minimum requirements involve capabilities such as collective payload, fuel level, and specialized equipment. As such, falling short of these requirements would result in categorical task failure. In such scenarios, we show that it might be necessary to resort to riskier solutions when faced with dire circumstances.

Our view of risk management is inspired by a rich body of work on risk-sensitive foraging behavior in animals (e.g., [2]). This literature demonstrates that animals prefer to forage under safer conditions (with low variance in available food) when they are able to meet their caloric needs. However, if such safe sources of food fail to meet their energy demands, they resort to risk-prone foraging strategies with costlier worst-case outcomes. Indeed, it has been shown that this adaptive behavior is optimal in the sense that it minimizes the probability of starvation [13].

Inspired by adaptive animal behavior, we formalize and develop a risk-adaptive approach to task allocation and coalition formation. Our approach is capable of autonomously choosing between safer and riskier options. In contrast to maximizing a stochastic pay-off, our approach solves a constrained optimization problem to explicitly optimize the probability of meeting or surpassing minimum requirements.

We evaluate our approach using detailed numerical evaluations and simulated robot experiments on the Robotarium [16] simulator. In each of the experiments, we compared our risk-adaptive approach against three baselines: random, risk-neutral, and risk-averse allocation approaches. The results conclusively demonstrate the benefits of a risk-adaptive approach over the baselines in terms of task success rates.

In summary, our core contributions include:

  • A formalism for risk-based task allocation that acknowledges the benefit of risk-seeking behavior when safer options are unlikely to satisfy minimum requirements.

  • A risk-adaptive task allocation algorithm that autonomously switches between risk-seeking and risk-averse behavior to better satisfy task requirements.

Related work

MRTA problems are typically categorized along three dimensions: i) single-task (ST) vs. multi-task (MT) robots, ii) single-robot (SR) vs. multi-robot (MR) tasks, and iii) instantaneous assignment (IA) vs. time-extended assignment (TA) [4, 6]. Our work falls under the category of ST-MR-IA (also called coalition formation), which is known to be NP-hard. While there is a large body of work associated with the various categories of MRTA, we limit our discussion to approaches focused on coalition formation.

Coalition formation has been tackled by a wide variety of approaches. Notable examples include auction-based methods that rely on effective bidding mechanisms (e.g., [5, 17]), utility-based methods that attempt to jointly maximize the total utility (e.g., [15, 8]), and more recent trait-based approaches that attempt to satisfy trait requirements associated with each task. Our approach falls under the category of trait-based methods, which do not assume knowledge of the utility of assigning each robot or coalition to each task. Instead, we allow task requirements to be specified in terms of the capabilities necessary to perform the task.

The methods discussed so far do not account for the various sources of uncertainty that a multi-robot system might face in the real world. Recent attempts have focused on explicitly accounting for such uncertainty [21]. Existing approaches to risk-based task allocation are either risk-neutral or risk-averse. Risk-neutral approaches focus on the expected value of pay-off or cost [10, 3]. Risk-averse approaches take variance into account and try to avoid worst-case outcomes, potentially leading to highly conservative allocations.

Recent work has demonstrated that risk-averse methods can be made less conservative by considering more nuanced measures of risk (e.g., mean-variance [19, 11] and conditional value at risk (CVaR) [7, 12]) that allow for a user-specified level of risk. However, these methods require the user to predetermine the desired risk tolerance (e.g., the regularizer in mean-variance optimization or the risk parameter in VaR and CVaR). This explicit and a priori specification of risk tolerance places these approaches at a static point on the spectrum from risk-averse to risk-seeking, irrespective of the current context. In contrast, our approach adaptively determines where to fall on this spectrum depending on the context, as determined by the task requirements and the availability of resources. As such, our approach adapts to the particulars of the problem, producing riskier or more conservative allocations depending on what will maximize the probability of task success. Further, unlike most existing approaches that optimize a single-dimensional pay-off variable, we can handle multi-dimensional requirements.

Finally, note that most existing risk-aware task allocation approaches are limited to single-robot tasks [7, 3], homogeneous agents [10], or both [18, 12]. To the best of our knowledge, our work represents the first attempt to solve risk-aware coalition formation in heterogeneous teams.

II Modeling Framework

To provide context, we first introduce our basic modeling principles, which are adapted from our prior work [11].

II-A Species

Consider a team of heterogeneous robots. We take a group modeling approach [1] and model the team of robots as being composed of $S$ species (i.e., robot types). Examples of such species include a group of UAVs and a group of ground vehicles. By utilizing such an aggregate model at the level of robot types, we gain computational efficiency over alternative approaches that model each robot individually.

II-B Traits

When modeling traits (i.e., capabilities), we take into account the fact that robots within a particular species may not share identical traits. For instance, not all UAVs will share the same speed or carrying capacity. As such, we model the $U$ traits of the $s$-th species as $q_s \sim \mathcal{N}(\mu_s, \Sigma_s)$, where $\mu_s \in \mathbb{R}^U$ and $\Sigma_s \in \mathbb{R}^{U \times U}$ are the expected trait vector and the corresponding diagonal covariance matrix, indicating that each trait of the $s$-th species is an independent Gaussian random variable. Taken together, the traits of the entire team are denoted by the $S \times U$ stochastic species-trait matrix $Q$, with $\bar{Q} = \mathbb{E}[Q]$ containing the expected values. Specifically, the $(s,u)$-th element of $\bar{Q}$ denotes the expected value of the $u$-th trait of the $s$-th species. Similarly, the variances associated with each trait of each species are contained in the matrix $\Sigma_Q \in \mathbb{R}^{S \times U}$; the $(s,u)$-th element of $\Sigma_Q$ denotes the variance of the $u$-th trait of the $s$-th species.

II-C Tasks

Let the team be tasked with solving $M$ concurrent tasks, each with its own set of trait requirements denoted by $y_m^* \in \mathbb{R}^U$. To successfully complete the tasks, the team has to form $M$ coalitions such that each coalition collectively meets or surpasses the corresponding task's trait requirements. The trait requirements for all the tasks can be represented by a task requirements matrix $Y^* \in \mathbb{R}^{M \times U}$, whose $m$-th row is $y_m^*$.

II-D Agent Assignment

The assignment of agents from the $s$-th species across the $M$ tasks is denoted by $x_s \in \mathbb{Z}_{\geq 0}^{M}$. Thus, the assignment of the whole team across the $M$ tasks can be described using the assignment matrix $X \in \mathbb{Z}_{\geq 0}^{M \times S}$, whose $(m,s)$-th element denotes the number of agents of the $s$-th species assigned to the $m$-th task.

II-E Trait Aggregation

Finally, the aggregation of the various traits assigned across all the tasks is denoted by the stochastic trait distribution matrix $Y \in \mathbb{R}^{M \times U}$, and can be computed as

$Y = X Q$  (1)

Note that $Y$ is composed of Gaussian random variables (one set for each task) due to the fact that $Q$ is composed of Gaussian random variables (one set for each species). Thus, the expected value of $Y$ is given by

$\bar{Y} = \mathbb{E}[Y] = X \bar{Q}$  (2)

and the variance of each element of $Y$ is given by

$\Sigma_Y = (X \circ X)\,\Sigma_Q$  (3)

where $\circ$ denotes the Hadamard (element-wise) product.
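To make the aggregation concrete, the following is a minimal numpy sketch (ours, not the authors' implementation; the team sizes and trait values are illustrative) that computes the mean and element-wise variance of the trait aggregation in (1)-(3) for a given assignment.

```python
import numpy as np

# Illustrative sizes: S species, U traits, M tasks (values are placeholders).
S, U, M = 3, 2, 2

Q_mean = np.array([[5.0, 1.0],     # expected traits of species 1
                   [1.0, 4.0],     # expected traits of species 2
                   [2.0, 2.0]])    # expected traits of species 3
Q_var = np.array([[0.5, 0.2],      # per-trait variances (diagonal covariances)
                  [0.3, 1.0],
                  [0.4, 0.4]])

# Assignment matrix X: rows are tasks, columns are species (robot counts).
X = np.array([[2, 0, 1],
              [0, 3, 1]])

Y_mean = X @ Q_mean               # Eq. (2): expected trait aggregation per task
Y_var = (X * X) @ Q_var           # Eq. (3): element-wise variance via Hadamard product

print(Y_mean)  # expected aggregated traits, one row per task
print(Y_var)   # variance of each aggregated trait
```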

III Risk-Adaptive Task Allocation

In this section, we introduce the notion of risk-adaptive task allocation. We begin by considering the trait requirements associated with all the tasks. Let the minimum trait requirements associated with the $m$-th task be given by $y_m^*$, the $m$-th row of $Y^*$. Thus, the probability of successfully performing the $m$-th task is given by

$P(y_m \succeq y_m^*)$  (4)

where $y_m$ denotes the $m$-th row of $Y$ and $\succeq$ denotes the element-wise greater-than-or-equal-to operator. Thus, the success of each task is given by a multivariate normal cumulative distribution function.
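Under the diagonal-covariance model above, the aggregated traits of a task are independent Gaussians, so the multivariate CDF in (4) factorizes into a product of univariate normal tail probabilities. The sketch below (our illustration, with made-up numbers) evaluates this probability for a single task.

```python
import numpy as np
from scipy.stats import norm

def task_success_prob(y_mean_m, y_var_m, y_req_m):
    """P(y_m >= y_m*) element-wise, assuming independent Gaussian traits
    (diagonal covariance), so the multivariate CDF factorizes."""
    std = np.sqrt(np.maximum(y_var_m, 1e-12))            # guard against zero variance
    per_trait = 1.0 - norm.cdf(y_req_m, loc=y_mean_m, scale=std)
    return float(np.prod(per_trait))                     # Eq. (4)

# Example: one task needing 10 units of trait 1 and 3 units of trait 2.
print(task_success_prob(np.array([11.0, 2.5]),
                        np.array([1.5, 0.8]),
                        np.array([10.0, 3.0])))
```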

Fig. 1: Two scenarios with different choices of coalitions that result in different stochastic aggregations of capabilities (blue and orange curves). In each scenario, the minimum trait requirement is depicted by a red circle, and the area of each shaded region denotes the corresponding probability of satisfying the task requirement.

III-A Illustrative Example

To illustrate the benefits of a risk-adaptive approach, let us consider an example task that requires a coalition of robots that can collectively satisfy a single-dimensional trait requirement, such as payload or fuel. Without loss of generality, let us analyze two options for the coalition with different aggregate traits. Indeed, given the probabilistic nature of our capabilities model, the aggregate trait of each coalition represents a probability distribution. When a safer (orange) option exists that can satisfy the trait requirement in expectation (as in Fig. 1, left), our risk-adaptive approach would prefer it, behaving similarly to risk-neutral or risk-averse approaches. In contrast, when neither coalition can satisfy the trait requirement in expectation (as in Fig. 1, right), our approach would adaptively choose the riskier option (blue), as it maximizes the chances of satisfying the minimum requirement.

III-B Rationale

The analysis of animal foraging behavior in [13] can be easily extended to explain why a risk-adaptive strategy improves the probability of success in (4).

Consider a potential allocation such that the expected value of the resulting trait aggregation satisfies the desired trait requirements (i.e., $\bar{y}_m \succeq y_m^*$). Under this circumstance, it is clear that the probability of success in (4) can increase only if the variance of the aggregation decreases. Thus, our risk-adaptive approach operates in a risk-averse regime when $\bar{y}_m \succeq y_m^*$, as it prefers allocations with smaller variances if their expected values are similar. This observation further explains the choices in Fig. 1 (left).

Similarly, consider a potential allocation such that the expected value of the trait aggregation fails to satisfy the desired trait requirements (i.e., $\bar{y}_m \nsucceq y_m^*$). Under this circumstance, it is clear that the probability of success in (4) can increase only if the variance of the aggregation increases. Thus, our risk-adaptive approach operates in a risk-seeking regime when $\bar{y}_m \nsucceq y_m^*$, as it prefers allocations with larger variances if their expected values are similar. In contrast, risk-averse approaches will continue to prefer smaller variances, as they optimize for worst-case outcomes. As a result, risk-averse approaches will inadvertently decrease the probability of success. This observation further explains the choices in Fig. 1 (right).
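The following quick numerical check (our example, not from the paper) illustrates both regimes for a single trait with requirement r = 10: increasing the standard deviation lowers the success probability when the mean already exceeds r, and raises it when the mean falls short.

```python
from scipy.stats import norm

def p_success(mean, std, req):
    # P(X >= req) for X ~ N(mean, std^2)
    return 1.0 - norm.cdf(req, loc=mean, scale=std)

req = 10.0
# Mean exceeds the requirement: smaller variance is better (risk-averse regime).
print(p_success(12.0, 1.0, req))   # ~0.977
print(p_success(12.0, 4.0, req))   # ~0.691
# Mean falls short of the requirement: larger variance is better (risk-seeking regime).
print(p_success(8.0, 1.0, req))    # ~0.023
print(p_success(8.0, 4.0, req))    # ~0.309
```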

III-C Constrained Optimization

Given the model for task success, we turn to the problem of optimizing the probability of success. Note that the example from III-A focuses on a single task. Our problem consists of forming coalitions for $M$ tasks when provided a fixed number of agents from each species. Thus, we simultaneously optimize the chances of satisfying the requirements for all $M$ tasks.

We cast our risk-adaptive task allocation problem in the form of the following max-min optimization problem

$\max_{X} \ \min_{m \in \{1, \dots, M\}} \ P(y_m \succeq y_m^*)$  (5)

$\text{s.t.} \quad X^\top \mathbf{1}_M \leq N$  (6)

$\qquad\ \ X \in \mathbb{Z}_{\geq 0}^{M \times S}$  (7)

where $N \in \mathbb{Z}_{\geq 0}^{S}$ is a vector of the number of agents in each species and $\mathbf{1}_M$ is the $M$-dimensional vector of ones. An alternative strategy would be to replace the objective function in (5) with the average or sum of the individual task probabilities. However, such an objective function would not discourage disproportionately different success probabilities across tasks, resulting in a skewed allocation of robots to tasks and unintended prioritization.

Note that the optimization problem in (5)-(7) represents a considerably challenging nonlinear constrained integer program. In this work, we approximately solve this problem by relaxing the integer constraint in (7) and replacing it with the constraint $X \in \mathbb{R}_{\geq 0}^{M \times S}$. Further, given the non-convex nature of the objective function, we employ a global optimization technique that performs a scatter search to provide multiple initial conditions for a local nonlinear program solver [14]. Finally, we convert every element of the optimized assignment matrix into an integer while ensuring that the constraint in (6) is satisfied.

In practice, we initialize the allocation matrix using a risk-neutral solution. As such, the global optimization attempts to improve the probability of success when possible by resorting to riskier options when appropriate. If safer options exist, our approach will choose allocations that are similar to those of risk-neutral or risk-averse approaches.
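For illustration, the sketch below shows one way the relaxed max-min problem (5)-(7) could be set up with scipy rather than the MATLAB GlobalSearch pipeline used in the paper; the variable names, the single local solve, the simple feasible initial guess, and the crude rounding step are our assumptions.

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint
from scipy.stats import norm

def min_task_success(x_flat, Q_mean, Q_var, Y_req, M, S):
    """Minimum over tasks of P(y_m >= y_m*), assuming independent Gaussian traits."""
    X = x_flat.reshape(M, S)
    Y_mean = X @ Q_mean
    Y_std = np.sqrt((X * X) @ Q_var + 1e-9)
    per_trait = 1.0 - norm.cdf(Y_req, loc=Y_mean, scale=Y_std)
    return per_trait.prod(axis=1).min()

def risk_adaptive_allocation(Q_mean, Q_var, Y_req, N):
    M, S = Y_req.shape[0], Q_mean.shape[0]
    # Budget constraint (6): for each species, robots assigned across tasks <= N.
    A = np.kron(np.ones((1, M)), np.eye(S))          # sums each species over tasks
    budget = LinearConstraint(A, lb=0, ub=N)
    x0 = np.tile(N / M, (M, 1)).ravel()              # simple feasible initial guess
    res = minimize(lambda x: -min_task_success(x, Q_mean, Q_var, Y_req, M, S),
                   x0, bounds=[(0, None)] * (M * S), constraints=[budget],
                   method='trust-constr')            # single local solve; the paper
                                                     # uses multi-start global search
    X = np.floor(res.x.reshape(M, S)).astype(int)    # crude rounding back to integers
    return X
```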

IV Experiments

We evaluated our approach with two experiments: 1) numerical simulations using teams of varying size, trait distributions, and task requirements, and 2) a simulated robot experiment in the Robotarium multi-robot testbed simulator [16] that samples robots from a given trait distribution in order to complete two example tasks. Across all experiments, we used MATLAB's GlobalSearch function to approximately solve the optimization problem in (5)-(7) (as detailed in Section III-C) as well as the baselines' optimization problems. All experiments were conducted on a 2.6 GHz 6-core Intel i7 processor (source code available here). On average, the optimization took approximately 0.4 seconds for each baseline and 3.0 seconds for our method. The difference in computation time is due to the fact that, unlike the baselines, our approach solves a non-convex problem.

IV-A Baselines

In all of our experiments, we compared the performance of our method with that of the following three baselines:

1. Random baseline allocates the available agents uniformly at random across all the tasks.

2. Risk-neutral baseline allocates agents such that the expected trait aggregation satisfies the trait requirements. This baseline is similar in spirit to existing approaches that focus on expected pay-off (e.g., [10, 3]). To this end, it solves the following optimization problem

$\min_{X} \ \| Y^* - X \bar{Q} \|_F \quad \text{subject to (6)-(7)}$

where $\|\cdot\|_F$ denotes the Frobenius norm.

3. Risk-averse baseline allocates agents such that worst-case or near-worst-case outcomes are avoided. This baseline is similar in spirit to existing approaches that rely on mean-variance optimization (e.g., [11, 19]), as it solves the following optimization problem

$\min_{X} \ \| Y^* - X \bar{Q} \|_F + \lambda \, \| (X \circ X)\,\Sigma_Q \|_F \quad \text{subject to (6)-(7)}$

where $\lambda$ is a regularization coefficient.

Similar to our proposed risk-adaptive approach, the optimization problems associated with both the risk-neutral and risk-averse baselines were solved approximately by relaxing the integer constraint and utilizing MATLAB's GlobalSearch function to ensure fair comparisons. Further, we utilized sequential quadratic programming (SQP) as the local solver within the global optimization for all algorithms, with the same cap on the maximum number of iterations.
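The two baseline objectives, under the same relaxation, can be written compactly as follows; this is our reading of the baselines (the exact formulations and the value of the regularization coefficient in the paper may differ).

```python
import numpy as np

def risk_neutral_objective(X, Q_mean, Y_req):
    # || Y* - X Q_mean ||_F : match the requirements in expectation.
    return np.linalg.norm(Y_req - X @ Q_mean, ord='fro')

def risk_averse_objective(X, Q_mean, Q_var, Y_req, lam=1.0):
    # Mean-variance style objective: expected mismatch plus a variance penalty
    # weighted by the regularization coefficient lam (lambda).
    variance_term = np.linalg.norm((X * X) @ Q_var, ord='fro')
    return risk_neutral_objective(X, Q_mean, Y_req) + lam * variance_term
```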

IV-B Numerical Simulations

We first analyzed the performance of our method and that of the baselines using numerical simulations. To this end, we simulated 100 independent coalition formation problems, each involving a fixed number of species and traits, and three tasks. To generate heterogeneous teams, we ensured that each species had a dominant trait (i.e., a higher expected value than its other traits). Simulation of such dominant traits is motivated by the fact that real-world robots are often optimized for a few attributes while trading off others (e.g., speed vs. payload). On average, the variance of the dominant trait is smaller than that of the non-dominant traits. During each simulation run, the parameters of the robot trait distribution ($\bar{Q}$ and $\Sigma_Q$), the number of robots per species ($N$), and the task trait requirements ($Y^*$) are uniformly randomly sampled from the ranges described in Table I (a generation sketch follows the table).

Parameter (each sampled uniformly at random)
Dominant trait mean
Non-dominant trait mean
Dominant trait variance
Non-dominant trait variance
# robots per species
TABLE I: Design parameter sampling ranges
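A sketch of how one such random instance could be generated is given below; the sampling ranges are illustrative placeholders and are not the values from Table I.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_instance(S=3, U=3, M=3):
    """Sample one coalition formation problem with one dominant trait per species.
    All ranges below are illustrative, not the paper's Table I values."""
    Q_mean = rng.uniform(1.0, 3.0, size=(S, U))       # non-dominant trait means
    Q_var = rng.uniform(0.5, 1.5, size=(S, U))        # non-dominant trait variances
    for s in range(S):
        d = s % U                                     # index of the dominant trait
        Q_mean[s, d] = rng.uniform(6.0, 10.0)         # higher expected value
        Q_var[s, d] = rng.uniform(0.1, 0.5)           # smaller variance on average
    N = rng.integers(5, 10, size=S)                   # robots available per species
    Y_req = rng.uniform(5.0, 15.0, size=(M, U))       # task trait requirements
    return Q_mean, Q_var, N, Y_req
```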

For each run, we measure the performance of each algorithm by computing the success probability for each task, given by $P(y_m \succeq y_m^*)$, where $y_m$ denotes the traits aggregated by the candidate algorithm for the $m$-th task. Given that the distributions are Gaussian, this metric measures the actual probability of satisfying the task requirements when utilizing a particular allocation, rather than providing an approximate success rate based on Monte Carlo simulations. We report both i) the individual task success probabilities for all the tasks, and ii) the minimum task success probability (computed over the three tasks) in Figs. 2 and 3, respectively.

Fig. 2: Probability of success for all tasks (300 data points per method).
Fig. 3: Minimum probability of success computed across tasks (100 data points per method).

From Figs. 2 and 3, we can see that our risk-adaptive method generally outperformed all the baselines in satisfying the trait requirements. This is due to the fact that our approach adaptively chooses between risk-averse and risk-seeking behavior based on the particular allocation problem. Further, thanks to the max-min optimization, our approach ensures that the chances of success for all tasks are jointly improved. This claim is supported by the considerably lower variance in task success probability across all tasks achieved by our risk-adaptive approach (see Fig. 2).

We observed that the random baseline exhibited the largest variance in individual task success probabilities. This is because the random baseline is more likely to unevenly assign the robots to tasks such that the requirements associated with a subset of the tasks are fulfilled with near-certainty. However, this usually comes at the cost of almost certainly failing to meet the requirements of the remaining tasks. From Fig. 3, we can see that the random baseline's minimum task success probability per trial was near zero for several instances.

When looking at the aggregate performance across all 100 runs, we find that the risk-neutral and risk-averse baselines performed similarly to each other. However, for any given problem instance, these two baselines may not necessarily perform similarly. This is due to the fact that, while avoiding risk might be very helpful in some situations, it might be too conservative in others. Further, the performance of these two baselines is influenced by factors such as the variances of the trait distributions and the regularization coefficient $\lambda$. However, given the adaptive nature of our approach, it always performs similarly to or better than the baselines (in terms of the minimum task success probability) for any given instance of the problem.

In summary, it is evident that our risk-adaptive approach has considerably higher chances of satisfying task requirements compared to approaches that either ignore or avoid risk altogether. These observations are to be expected given that our risk-adaptive approach explicitly maximizes the chances of satisfying trait requirements. As explained in III-B, this incentivizes the algorithm to adaptively switch between its risk-averse and risk-seeking regimes.

IV-C Robotarium Simulations

In the second round of experiments, we considered a multi-robot scenario to illustrate the benefits of our approach. We developed an emergency response scenario in the Robotarium simulator [16] in which we sample robot capabilities from specified distributions. Our scenario involved a firefighting task and a debris removal task (see Fig. 4). These tasks were to be completed by a heterogeneous team composed of two species: Species 1 had 6 robots and Species 2 had 9 robots. Each robot had two traits: i) water-carrying capacity and ii) payload capacity. The distribution of each robot capability (in arbitrary units) was modelled as a Gaussian with species-specific mean and variance.

Fig. 4: A snapshot of the simulated emergency response task.

The robots assigned to each task must work together to collectively complete their task. The debris removal task requires 11 units of strength (payload) and the firefighting task requires 14 units of water. More formally, these values define the task requirements matrix $Y^*$, with one row per task and one column per trait.

Note that, if the coalition assigned to the debris removal task does not have the cumulative payload capacity to move the debris, the task fails. Similarly, if the robots assigned to the firefighting task do not have enough water to douse the flames, the fire burns on.

Using these parameters, we obtained the following allocations computed using each of the methods by solving the corresponding optimization program.

where R-N and R-A refer to the risk-neutral and risk-averse baselines, respectively. Note that we do not specify the random baseline’s assignment matrix, as it would change with every run.

To evaluate the approaches on this scenario, we generated 10,000 instances of the scenario. In each instance, we sampled the robots' traits based on the distribution parameters defined above. We measured the performance of the allocations computed by each approach in terms of task success rates (i.e., no. of successful completions / 10,000). We measured both individual task success rates as well as a combined task success rate that required both tasks to be completed successfully. We report these success rates for each approach in Fig. 5. As one would expect, the random baseline performs worse than all other approaches. Further, the risk-neutral and risk-averse approaches outperform each other at different tasks, resulting in similar combined performance. Finally, we can see that our risk-adaptive method successfully completed both tasks at a much higher rate than the baselines, as it is more likely to fulfill the corresponding trait requirements.
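The success-rate evaluation can be reproduced in spirit with a Monte Carlo sketch like the one below. Only the requirement values (11 units of payload, 14 units of water) and the team sizes (6 and 9 robots) come from the text; the species parameters, trait ordering, and candidate allocation are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed trait ordering: [water capacity, payload capacity]; values are illustrative.
Q_mean = np.array([[3.0, 1.0],    # Species 1 (6 robots)
                   [1.0, 2.0]])   # Species 2 (9 robots)
Q_std = np.array([[0.5, 0.3],
                  [0.4, 0.6]])

# Task requirements: firefighting needs 14 units of water, debris removal 11 of payload.
Y_req = np.array([[14.0, 0.0],
                  [0.0, 11.0]])

# A candidate allocation X (rows: tasks, columns: species) -- illustrative only.
X = np.array([[4, 2],
              [2, 7]])

def success_rates(X, trials=10_000):
    fire_ok = debris_ok = both_ok = 0
    for _ in range(trials):
        # Sample each robot's traits and aggregate them per task.
        Y = np.zeros_like(Y_req)
        for m in range(X.shape[0]):
            for s in range(X.shape[1]):
                traits = rng.normal(Q_mean[s], Q_std[s], size=(X[m, s], 2))
                Y[m] += traits.sum(axis=0)
        fire = Y[0, 0] >= Y_req[0, 0]
        debris = Y[1, 1] >= Y_req[1, 1]
        fire_ok += int(fire)
        debris_ok += int(debris)
        both_ok += int(fire and debris)
    return fire_ok / trials, debris_ok / trials, both_ok / trials

print(success_rates(X))
```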

Fig. 5: Individual and combined task success rates.

V Conclusion

We introduced a novel framework for risk-adaptive task allocation that maximizes the probability of satisfying minimum trait requirements instead of maximizing expected pay-off or avoiding worst-case outcomes. Using this framework, we demonstrated that it is necessary to seek risk in order to satisfy requirements when safer options do not meet requirements in expectation. Through numerical simulations and robot experiments, we have shown that our adaptive method indeed results in considerably higher probability of task success. A key limitation of our framework is that we approximately solve our optimization problem using a black-box optimization technique. Further investigation is necessary to leverage any inherent structures of the optimization problem, such as sub-modularity [10, 20].

References

  • [1] S. V. Albrecht and P. Stone (2018) Autonomous agents modelling other agents: a comprehensive survey and open problems. Artificial Intelligence 258, pp. 66–95.
  • [2] T. Caraco, S. Martindale, and T. S. Whittam (1980) An empirical demonstration of risk-sensitive foraging preferences. Animal Behaviour 28 (3), pp. 820–830.
  • [3] S. Choudhury, J. K. Gupta, M. J. Kochenderfer, D. Sadigh, and J. Bohg (2020) Dynamic multi-robot task allocation under uncertainty and temporal constraints. arXiv preprint arXiv:2005.13109.
  • [4] B. P. Gerkey and M. J. Matarić (2004) A formal analysis and taxonomy of task allocation in multi-robot systems. The International Journal of Robotics Research 23 (9), pp. 939–954.
  • [5] J. Guerrero and G. Oliver (2003) Multi-robot task allocation strategies using auction-like mechanisms. Artificial Research and Development in Frontiers in Artificial Intelligence and Applications 100, pp. 111–122.
  • [6] G. A. Korsah, A. Stentz, and M. B. Dias (2013) A comprehensive taxonomy for multi-robot task allocation. The International Journal of Robotics Research 32 (12), pp. 1495–1512.
  • [7] C. Nam and D. A. Shell (2017) Analyzing the sensitivity of the optimal assignment in probabilistic multi-robot task allocation. IEEE Robotics and Automation Letters 2 (1), pp. 193–200.
  • [8] L. E. Parker and F. Tang (2006) Building multirobot coalitions through automated task solution synthesis. Proceedings of the IEEE 94 (7), pp. 1289–1305.
  • [9] A. Prorok, M. A. Hsieh, and V. Kumar (2017) The impact of diversity on optimal control policies for heterogeneous robot swarms. IEEE Transactions on Robotics 33 (2), pp. 346–358.
  • [10] A. Prorok (2019) Redundant robot assignment on graphs with uncertain edge costs. In Distributed Autonomous Robotic Systems, Springer Proceedings in Advanced Robotics, pp. 313–327.
  • [11] H. Ravichandar, K. Shaw, and S. Chernova (2020) STRATA: unified framework for task assignments in large teams of heterogeneous agents. Autonomous Agents and Multi-Agent Systems 34 (2), pp. 38.
  • [12] V. Sharma, M. Toubeh, L. Zhou, and P. Tokekar (2020) Risk-aware planning and assignment for ground vehicles using uncertain perception from aerial vehicles. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
  • [13] D. W. Stephens (1981) The logic of risk-sensitive foraging preferences. Animal Behaviour 29 (2), pp. 628–629.
  • [14] Z. Ugray, L. Lasdon, J. Plummer, F. Glover, J. Kelly, and R. Martí (2007) Scatter search and local NLP solvers: a multistart framework for global optimization. INFORMS Journal on Computing 19 (3), pp. 328–340.
  • [15] L. Vig and J. A. Adams (2006) Multi-robot coalition formation. IEEE Transactions on Robotics 22 (4), pp. 637–649.
  • [16] S. Wilson, P. Glotfelter, L. Wang, S. Mayya, G. Notomista, M. Mote, and M. Egerstedt (2020) The Robotarium: globally impactful opportunities, challenges, and lessons learned in remote-access, distributed control of multirobot systems. IEEE Control Systems Magazine 40 (1), pp. 26–44.
  • [17] B. Xie, S. Chen, J. Chen, and L. Shen (2018) A mutual-selecting market-based mechanism for dynamic coalition formation. International Journal of Advanced Robotic Systems 15 (1), Article 1729881418755840.
  • [18] F. Yang and N. Chakraborty (2017) Algorithm for optimal chance constrained linear assignment. In IEEE International Conference on Robotics and Automation (ICRA), pp. 801–808.
  • [19] F. Yang and N. Chakraborty (2018) Algorithm for optimal chance constrained knapsack problem with applications to multi-robot teaming. In IEEE International Conference on Robotics and Automation (ICRA), pp. 1043–1049.
  • [20] L. Zhou and P. Tokekar (2020) An approximation algorithm for risk-averse submodular optimization. In Algorithmic Foundations of Robotics XIII, pp. 144–159.
  • [21] L. Zhou and P. Tokekar (2021) Multi-robot coordination and planning in uncertain and adversarial environments. Current Robotics Research, pp. 14.