Partial Replanning for Decentralized Dynamic Task Allocation

In time-sensitive and dynamic missions, multi-UAV teams must respond quickly to new information and objectives. This paper presents a dynamic decentralized task allocation algorithm for allocating new tasks that appear online during the solving of the task allocation problem. Our algorithm extends the Consensus-Based Bundle Algorithm (CBBA), a decentralized task allocation algorithm, allowing for the fast allocation of new tasks without a full reallocation of existing tasks. CBBA with Partial Replanning (CBBA-PR) enables the team to trade off between convergence time and increased coordination by resetting a portion of their previous allocation at every round of bidding on tasks. By resetting the last tasks allocated by each agent, we are able to ensure the convergence of the team to a conflict-free solution. CBBA-PR can be further improved by reducing the team size involved in the replanning, which reduces the communication burden and runtime of CBBA-PR. Finally, we validate the faster convergence and improved solution quality of CBBA-PR in multi-UAV simulations.


Nomenclature

N_u = number of robots
N_t = number of tasks
I = set of robots
J = set of tasks
L_t = maximum length of a path
D = network diameter
j* = new task
b_i = bundle of agent i
p_i = path of agent i
y_i = list of winning bids
z_i = list of winning agents
J_r = subset of tasks reset in a replan
x* = optimal assignment in the centralized greedy solution

1 Introduction

The use of UAVs and UGVs in large teams has become increasingly desired and viable as robot hardware has decreased in size and cost. Likewise, there is increasing interest in solving larger, more complex missions that require multi-agent teams to accomplish a varied number of tasks. Decentralized algorithms have allowed planners to scale with larger team sizes, amortizing computation and communication across the robot team. In addition, decentralized algorithms, which rely only on peer-to-peer communication, can be used in environments without a communication infrastructure or in environments with constrained centralized communication. For example, a team of UAVs operating in foreign terrain may not have access to the classic communication infrastructure that one may be accustomed to in local settings, especially for missions utilizing airspace or underwater environments. Likewise, in an adversarial setting, where opponents may look to target a central planner, decentralized algorithms provide robustness to single-point failures caused by a central planner or communication infrastructure.

This article investigates the decentralized dynamic task allocation problem, in which a team of robots must respond to new tasks that appear during the mission, integrating each new task into its existing allocations. This is in contrast to the static task allocation problem, which assumes that all the tasks are known before the team executes the task allocation solver. The problem is similar to other NP-hard problems such as the Dynamic Vehicle Routing Problem (D-VRP) or the Dial-A-Ride Problem [1], where online requests occur during the operation of the vehicles, requiring new locations to be visited. In addition, we specifically seek a decentralized algorithm that relies only on peer-to-peer communication to ensure robustness and scalability.

In a centralized setting, such as those studied in the operations research and logistics communities, solvers have been developed that provide heuristics for searching the space of solutions in the dynamic vehicle routing problem. Refs. [2] and [3] provide excellent reviews of dynamic VRP solutions. The first group of approaches is to periodically replan, rerunning the static task allocation solver at predetermined time epochs, as in the ant colony algorithm [4]. The second group of approaches is to continuously generate plans to create a shared pool of possible solutions, from which a solution can be adapted when a new customer arrives. These algorithms include the adaptive memory algorithm [5] and genetic algorithms [6]; however, they rely on a centralized memory or global situational awareness. In [7, 8], the genetic algorithm is extended to multiple UAVs; however, these approaches fail to be fully decentralized, as a central planner is still required.

As for fully decentralized algorithms, most have focused on convex optimization or task assignment where the task score functions are independent. Ref. [9] successfully decentralizes cooperative optimization by reaching consensus on sub-gradients; however, it optimizes a convex score function with continuous decision variables. Ref. [10] introduced a decentralized version of the Hungarian algorithm for task assignment, but requires that the task scores be independent. Ref. [11] presents an online solver by enforcing strict task swapping, but again relies on the task assignment problem where scores are independent. As for a decentralized planner for the combinatorial optimization in task allocation, Ref. [12] introduces the CBBA algorithm, which can provide an approximate solution to the vehicle routing problem when all the tasks are introduced at the beginning of the algorithm. This article extends the work in [12] to adapt to new tasks while maintaining solution quality and convergence.

In this work, we propose CBBA with Partial Replanning (CBBA-PR), which quickly allocates the new task by reallocating only a subset of tasks. Whereas the static decentralized solver Consensus-Based Bundle Algorithm (CBBA) requires a full re-solving of the original task allocation problem to allocate a new task, CBBA-PR allows for a partial replanning of the existing allocations. This is achieved by enabling agents to partially reset their allocations between rounds of auctioning. We show that this partial resetting strategy still converges to a conflict-free solution. In addition, the amount of resetting can be chosen to achieve a desired response time for the system. In doing so, the team has the flexibility to allow for little coordination but quick response, or vice versa. We also present a method for choosing a subset of robots to participate in the reallocation process. Finally, we validate the convergence and solution quality improvements of CBBA-PR compared to the baseline CBBA approach.

The remainder of this paper is structured as follows. In Section II, we state the dynamic task allocation problem and describe the Consensus-Based Bundle Algorithm, on which we build in this paper. In Section III, we describe and analyze CBBA’s existing approaches to allocating a new task. Section IV presents the main algorithm: CBBA with Partial Replanning, a resetting approach that guarantees quicker allocation of the new task. Section V reports simulation results that show improvements in convergence and solution quality. Finally, in Section VI we provide concluding thoughts and future directions.

2 Decentralized Task Allocation: Consensus-Based Bundle Algorithm (CBBA)

2.1 Problem Statement

The goal of the static task allocation problem is to allocate a set of N_t tasks to N_u agents to arrive at a conflict-free assignment of tasks to robots. Generally, each agent can be assigned up to L_t tasks, which can represent either a physical limitation or a planning horizon for the agent. The decentralized task assignment can then be formulated as an optimization:

max_{x, p} Σ_{i=1}^{N_u} Σ_{j=1}^{N_t} c_{ij}(p_i) x_{ij}

subject to:

Σ_{j=1}^{N_t} x_{ij} ≤ L_t, ∀ i ∈ I
Σ_{i=1}^{N_u} x_{ij} ≤ 1, ∀ j ∈ J
x_{ij} ∈ {0, 1}

where x_{ij} = 1 if agent i is assigned to task j and x_i is a vector of length N_t with the assignment of all tasks in J. The variable-length vector p_i represents the path for agent i, which is a list of the tasks assigned to agent i in order of execution. The current length of the path is |p_i| and is not allowed to be longer than the path constraint L_t.

In the dynamic scenario, a new task j* arrives during or at the end of the task allocation process. Now the agents must allocate a total of N_t + 1 tasks. We denote the new set of tasks J' = J ∪ {j*}, the new paths p'_i, and the new decision variables x'_{ij}. The team must now solve the same optimization as above over the enlarged task set J'.

2.2 Consensus-Based Bundle Algorithm (CBBA)

The Consensus-Based Bundle Algorithm [12] is a decentralized auction-based algorithm designed to solve the static task allocation problem, where all the tasks are known at the beginning. The algorithm alternates between two main phases: the bundle building phase and the consensus phase. In the bundle building phase, the agents iteratively generate a list of tasks to service by bidding on the marginal increase in score for each task. In the consensus phase, the agents resolve differences in their understanding of the winners of each task. Before proceeding, we define five lists used in the running of CBBA:

  1. A path, p_i, is a list of tasks allocated to agent i, in the order by which agent i will service the tasks.

  2. A corresponding bundle, b_i, is the list of tasks allocated to agent i in the order by which agent i bid on each task, i.e. task b_{im} is added before b_{in} if m < n. The size of b_i, denoted |b_i|, cannot exceed the maximum path length L_t, and an empty bundle is denoted b_i = ∅.

  3. A list of winning agents z_i, where each element z_{ij} indicates who agent i believes is the winner of task j, for all tasks in J. If agent i believes that no one is the winner of task j, then z_{ij} = ∅.

  4. A corresponding list of winning bids y_i, where y_{ij} is agent i’s belief of the highest bid on task j by winner z_{ij}, for all j in J. If agent i believes that no one is the winner of task j, then y_{ij} = 0.

  5. A list of timestamps s_i, where each element s_{ik} represents the timestamp of the last information that agent i received about a neighboring agent k, either directly or indirectly.
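The five lists above can be grouped into a small per-agent state object. Below is a minimal Python sketch in the spirit of the paper's Python simulator; the class name, the `-1` sentinel for "no known winner", and the list-based layout are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field

UNASSIGNED = -1  # stand-in for the paper's "no known winner" symbol


@dataclass
class CBBAState:
    """Local CBBA bookkeeping for one agent (names are illustrative)."""
    agent_id: int
    num_tasks: int
    path: list = field(default_factory=list)    # tasks in execution order
    bundle: list = field(default_factory=list)  # tasks in auction (bid) order
    winners: list = None                        # winners[j]: believed winner of task j
    bids: list = None                           # bids[j]: believed highest bid on task j
    timestamps: dict = field(default_factory=dict)  # last info time per neighbor

    def __post_init__(self):
        if self.winners is None:
            self.winners = [UNASSIGNED] * self.num_tasks
        if self.bids is None:
            self.bids = [0.0] * self.num_tasks


state = CBBAState(agent_id=0, num_tasks=5)
```

Keeping the bundle (auction order) separate from the path (execution order) matters later: resets must operate in bundle order, not path order.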

2.2.1 Phase 1: Bundle Building

Unlike algorithms that enumerate every possible allocation of tasks for agent i, in CBBA the agents greedily bid on a bundle of tasks. In the bundle building phase (Algorithm 1), an agent determines the task that will yield the maximum increase in marginal score when inserted into its previous path. If this score is larger than the current winning bid for that task, agent i will add the task to its bundle. This process is repeated until the agent can no longer add tasks to its path, concluding by updating its lists of winners and bids, z_i and y_i.

Algorithm 1 CBBA Phase 1: Bundle Build
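The greedy bidding loop of the bundle building phase can be sketched as follows. This is a rough illustration, not the authors' code: `score_fn`, the `-1` "no winner" sentinel, and the list-based state are assumptions of this sketch. `score_fn(path, j)` is assumed to return the best marginal score and insertion index for task j, with diminishing marginal gains.

```python
def bundle_build(agent_id, path, bundle, winners, bids, score_fn, max_len):
    """Greedy bundle building in the spirit of CBBA Phase 1 (a sketch).

    winners/bids hold this agent's current belief of each task's winner and
    highest bid; score_fn(path, j) -> (marginal_score, insert_index) gives the
    best insertion of task j into the current path.
    """
    while len(bundle) < max_len:
        best = None  # (score, task, insert_index)
        for j in range(len(bids)):
            if j in bundle:
                continue
            s, idx = score_fn(path, j)
            # Bid only if this agent can outbid the believed team winner.
            if s > bids[j] and (best is None or s > best[0]):
                best = (s, j, idx)
        if best is None:
            break  # no remaining task on which this agent can outbid the team
        s, j, idx = best
        path.insert(idx, j)   # path keeps execution order
        bundle.append(j)      # bundle keeps auction order
        bids[j] = s
        winners[j] = agent_id


# Toy usage: three tasks whose marginal value halves with each path slot.
values = [10.0, 6.0, 8.0]

def score_fn(path, j):
    return values[j] * (0.5 ** len(path)), len(path)

path, bundle = [], []
winners, bids = [-1, -1, -1], [0.0, 0.0, 0.0]
bundle_build(0, path, bundle, winners, bids, score_fn, max_len=2)
```

Note that the successive winning bids (10.0, then 4.0) are decreasing, which is the diminishing-marginal-gains property the convergence analysis relies on.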

2.2.2 Phase 2: Consensus

In the second phase of CBBA, each agent communicates its updated lists z_i and y_i to its neighboring agents, and the agents resolve any conflicts in their beliefs of the winners. An important aspect of this process is that if two neighbors disagree on a specific task located at position n̄ in an agent’s bundle, the agent is required to reset not only the disputed task b_{i n̄} but also any tasks located in the bundle after n̄:

y_{i, b_{in}} = 0,  z_{i, b_{in}} = ∅,  ∀ n ≥ n̄    (1)

where b_{in} denotes the n-th entry of bundle b_i. The resetting of subsequent tasks is necessary for the proper convergence of CBBA, as the bids for those subsequent tasks (n > n̄) were made assuming a bundle containing the reset task b_{i n̄}.
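The consensus-phase reset rule described above can be sketched in a few lines; the function name and the `-1` "no winner" sentinel are illustrative assumptions of this sketch.

```python
def reset_bundle_from(bundle, path, winners, bids, n_bar):
    """Sketch of the reset rule: when the task at bundle position n_bar is
    disputed, release it and every later bundle entry, since those later bids
    were computed assuming the disputed task remained in the bundle."""
    for j in bundle[n_bar:]:
        winners[j] = -1   # -1 plays the role of "no known winner"
        bids[j] = 0.0
        path.remove(j)    # bundle order differs from path order in general
    del bundle[n_bar:]


# Usage: tasks 3, 1, 2 were bid on in that order; task at position 1 is disputed.
bundle = [3, 1, 2]
path = [1, 3, 2]
winners = [-1, 0, 0, 0]
bids = [0.0, 5.0, 4.0, 9.0]
reset_bundle_from(bundle, path, winners, bids, 1)
```

After the call, only task 3 (the earliest-auctioned, highest-bid task) survives in both the bundle and the path; tasks 1 and 2 return to the auction.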

2.3 Convergence of CBBA

Along with providing a procedure for decentralized allocation, Choi et al. were able to show that CBBA converges in N_t D rounds of communication, where D is the network diameter, and that CBBA arrives at the same result as a centralized sequential greedy algorithm (SGA). In addition, they showed that for submodular value functions, the sequential greedy solution achieves 50% of the optimal score. To prove convergence and optimality of the algorithm, CBBA requires that the score function has diminishing marginal gains (DMG). This leads to decreasing scores within an agent’s own bundle (y_{i,b_{i1}} ≥ y_{i,b_{i2}} ≥ …), a characteristic of the bidding that also leads to CBBA’s convergence. They show in Lemmas 1 and 2 of [12] that during the running of CBBA the team sequentially agrees on the SGA solution. Specifically, after nD rounds of communication, the team will agree on the first n tasks allocated by a sequential greedy allocation. Also, the bids for those tasks will be the greedy-optimal bids, and the agents will remain in agreement on those scores for the duration of the task allocation.

3 Bundle Resetting in Consensus-Based Bundle Algorithm

The Consensus-Based Bundle Algorithm was originally intended for static task allocation, in that it guarantees convergence when the tasks are known initially. The authors of [13] proposed that in dynamic settings, when information is outdated or there is a large change in situational awareness, the team should re-solve the new task allocation problem by rerunning CBBA. The shortcoming of this approach, however, is that in missions with a large number of tasks N_t and a network diameter D, the response time will be N_t D rounds of communication, even for a single new task. In addition, a full re-solving of CBBA ignores the fact that the team had already arrived at a conflict-free solution, wasting the computation and communication used to allocate the original tasks.

For a quick response, one could instead allow no replanning at all, forbidding any resetting of an agent’s previous allocation. This approach, which we will call CBBA with No Bundle Reset, appeared in the original version of CBBA [12], having the Bundle Build process begin each round with the previous bundle b_i and path p_i intact. The advantage of CBBA with No Bundle Reset is that the convergence of the algorithm is virtually unaffected by the new task. For example, in the case where the team has already reached convergence on the original tasks, the agents will never consider reallocating their existing tasks and simply bid on inserting the new task j* into their existing bundles. By effectively bidding only on j* and not allowing any bidding on other tasks in their paths, the team is able to reach agreement very quickly, in D rounds of communication. While it is beyond the scope of this paper to provide quality guarantees for the no-reset solution, intuitively it is clear that a no-reset solution provides very little flexibility to the robot team in allocating j*. For example, in a highly constrained system where many robots are at capacity or only a few robots can service specific tasks, then only those robots under capacity and with the ability to service j* will be considered for it. In these constrained scenarios, robot teams will need to reset their previous allocations to consider the new task.

A later addition to CBBA was to begin the Bundle Build process by fully resetting the previous allocations, b_i = ∅ and p_i = ∅ [14]. This approach, CBBA with Full Bundle Reset, gives the agents maximum flexibility in allocating the new task, in that they are not bound by their previous allocations. While a full bundle reset increases team coordination, one possible shortcoming of any bundle-resetting approach is that it may no longer guarantee convergence for the original task allocation problem, as the algorithm introduces additional resetting at each round of Bundle Build.

Claim: If all tasks are known at the beginning of CBBA, both CBBA with Full Bundle Reset and CBBA with No Bundle Reset arrive at the SGA solution in N_t D rounds of communication.

Proof: CBBA’s convergence to the centralized sequential greedy algorithm’s (SGA) solution relies on the fact that at some time t_n the team will agree on the first n tasks in the SGA solution and then subsequently agree on this solution for the rest of time (Lemma 1 [12]). The authors use induction to show that the team will first agree on the highest-valued task (the first task allocated in the greedy solution) and, after nD rounds of communication, will agree on the first n tasks in the SGA solution (Lemma 2 [12]). In the case of a full reset at the beginning of Bundle Build, we need to show that the reset will not break Lemma 1, i.e. that if the team agrees on the first n SGA tasks, they will continue to agree on those tasks for t > t_n. First, denote the list of agreed SGA tasks at time t_n as J*_n and the SGA winners of those tasks as z*. Note that, according to Lemma 1, at time t_n all agents are in agreement on the bids for the first n SGA tasks:

y_{ij} = y*_j,  ∀ i ∈ I, j ∈ J*_n    (2)

As such, at time t_n, agent i will have a bundle whose first n_i entries are agreed-on SGA tasks, where n_i is the number of tasks in J*_n that are assigned to agent i by the SGA solution. The rest of the bundle will consist of other tasks from J that may or may not be in consensus with the rest of the team. At time t > t_n, when the agent resets its bundle at the beginning of Bundle Build, it will begin greedily choosing tasks from J to add to its now-empty bundle. However, when agent i calculates its own bid on a task in J*_n whose SGA winner is not i, agent i will always be outbid by the current team winner, since the team winners’ bids are greedily optimal. Instead, agent i will first re-assign itself any of the tasks in J*_n that have i as the SGA winner, since those tasks will have the highest bids for agent i by definition, as they are the centralized sequential greedy bids. As a result, after the full bundle reset the agent rebuilds the first n_i tasks of its previous bundle. This means that even with a full bundle reset, Lemma 1 and Lemma 2 hold, and thus convergence to the SGA is guaranteed in N_t D rounds of communication.

We have just shown that a full reset and no reset converge to the same solution; however, when a new task is introduced, these two approaches diverge in terms of solutions and convergence guarantees. First, in the proof above, the full reset converged to the sequential greedy solution because the Bundle Build process rebuilds the first part of its previous bundle, even after fully resetting its allocation. However, if a new task j* is now considered in the building process, agent i is not guaranteed to rebuild its previous bundle. In fact, it may be the case that the sequential greedy solution for the N_t + 1 tasks will be completely different from the solution to the original static task allocation problem. Thus a full reset may result in a completely new allocation, requiring a full N_t D rounds of communication, even for a single new task. In summary, CBBA’s existing approaches to allocating a new task are either to allow a full rerunning of CBBA (full reset), requiring N_t D rounds of communication, or a quick consensus on a winner for the new task, without allowing any reallocation of the existing tasks (no reset).

4 CBBA with Partial Replanning (CBBA-PR)

4.1 Partial Resetting of Local Bundles

(a) Initial bundles and new task. (b) Each agent resets the lowest tasks in its bundle. (c) The team converges to modified allocations.
Figure 1: Dynamic task allocation using CBBA-PR, by partially resetting the last tasks in each agent’s bundle at the beginning of Bundle Build. The reset tasks are chosen to be the last tasks auctioned into the bundle (not the last in the physical path order) to ensure convergence of CBBA-PR.

To better trade off coordination against speed of convergence, we propose CBBA with Partial Replan (CBBA-PR), which enables each agent to reallocate a portion of its existing allocation at each round of CBBA. In CBBA-PR, each agent resets part of its bundle at the beginning of Bundle Build, releasing the lowest-bid tasks from its previous bundle. The number of reset tasks can be chosen by the team depending on the amount of replanning or response speed that is necessary. For example, in the case where new tasks are frequently appearing and the team wants to converge before another new task arrives, the number may be chosen to be very small. On the other hand, if the new tasks are particularly high-valued, the team can allow for more coordination by selecting a larger number of tasks to reset. Furthermore, the amount of resetting may change during the running of CBBA. If the new task arrives early in the team’s allocation of the original tasks, the team may allow for more resetting, while if the team has already converged on all original tasks, it may limit the amount of resetting so as not to waste the computation spent on the original tasks.

An important requirement for the tasks chosen for resetting is that they must be the lowest-bid tasks in each agent’s respective bundle. This ensures the convergence of CBBA, for if tasks are reset in any other order (randomly chosen or by maximum bid), CBBA will not have diminishing-valued bids, and the team will not converge to a conflict-free solution. Rather, if the agents reset only the lowest-bid tasks in each bundle, we can reuse Lemmas 1 and 2 to prove that the team sequentially agrees on a conflict-free solution.
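The local reset step above can be sketched as follows. The function name and `-1` sentinel are illustrative assumptions of this sketch; the key point is that the reset suffix is taken in bundle (auction) order, which in general differs from path (execution) order.

```python
def partial_local_reset(bundle, path, winners, bids, n_reset):
    """CBBA-PR local reset sketch: before each Bundle Build round, release the
    last n_reset tasks in *bundle* order. Under diminishing marginal gains these
    are the agent's lowest bids; bids on later bundle entries were computed
    assuming the earlier entries, so only a suffix may be safely released."""
    keep = max(len(bundle) - n_reset, 0)
    for j in bundle[keep:]:
        winners[j] = -1   # -1 plays the role of "no known winner"
        bids[j] = 0.0
        path.remove(j)
    del bundle[keep:]


# Usage: task 1 was auctioned last (lowest bid), though it is also last in the
# path here; note bundle order [2, 0, 1] differs from path order [0, 2, 1].
bundle = [2, 0, 1]
path = [0, 2, 1]
winners = [0, 0, 0]
bids = [6.0, 3.0, 9.0]
partial_local_reset(bundle, path, winners, bids, n_reset=1)
```

Only task 1 (bid 3.0, the last auctioned) is released; the earlier, higher-bid tasks 2 and 0 stay allocated, preserving the diminishing-bid structure.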

4.2 Improving on the Convergence of CBBA-PR

One limitation of the local partial reset strategy is that while average convergence will generally be better than a full reset, we cannot guarantee that worst-case performance will improve. For example, if an agent has only one task to reset, and that task happens to be the first task in the centralized SGA solution, a full replan may occur. However, if the team has converged on the original tasks before j* arrives, then we can guarantee worst-case performance of n_r D rounds, where n_r is the total number of tasks reset by the team. In this scenario, the team can choose the n_r lowest-bid tasks from across the entire team. Since the team has already reached consensus on the original centralized greedy solution, those lowest bids will in fact correspond to the last tasks allocated by the SGA. Since the higher-bid tasks will remain allocated after the partial reset, the team is guaranteed to converge within n_r D rounds of communication.

In this procedure, CBBA with Partial Team Replan (Algorithm 3), when a new task appears, each agent sorts the final bid array y_i, enabling the agents to identify the n_r lowest-bid SGA tasks. Any agent with such a task in its previous bundle will reset the task by removing it from b_i and p_i and resetting the values in y_i and z_i. By doing so, the team gains increased coordination from reallocating existing tasks while still guaranteeing convergence within n_r D rounds, where n_r can be chosen to fit the team’s desired response time. In addition, if only a subset of the team is chosen to participate in the replanning, the team can reuse the known assignments in z_i to specifically reset tasks that were assigned to agents in the subteam, ensuring that none of the reset tasks are "wasted" on agents that are not participating in the replan. Conversely, the team can choose a combination of reset tasks and a desired subteam, reusing y_i and z_i to achieve replanning within a desired convergence time. With this subteam and subtask selection, the team can choose between a large subteam with few tasks per robot to reallocate or a small subteam with robots fully resetting their previous allocations. In general, the ideal mix of subteam size and number of reset tasks for a given scenario will depend on the mission characteristics.

Algorithm 2 CBBA-PR with Partial Local Replan (Fixed Bundle Size)
Algorithm 3 CBBA with Partial Team Replan
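The task-selection step of the team replan can be sketched as follows. This is an illustrative sketch, not the authors' code; the function names and `-1` sentinel are assumptions.

```python
def team_reset_tasks(bids, n_reset):
    """Pick the n_reset lowest-bid tasks team-wide. After convergence every
    agent holds the same winning-bid list, so each agent computes the same
    reset set without extra communication."""
    order = sorted(range(len(bids)), key=lambda j: bids[j])
    return set(order[:n_reset])


def apply_team_reset(reset_set, bundle, path, winners, bids):
    """Each agent clears its belief for every reset task and drops any it
    holds. Because the team had converged, the lowest bids are the last tasks
    auctioned into each bundle, so the surviving bundle prefixes stay valid."""
    for j in reset_set:
        winners[j] = -1
        bids[j] = 0.0
        if j in bundle:
            bundle.remove(j)
            path.remove(j)


# Usage: five converged tasks; this agent holds tasks 0 and 1.
bids = [9.0, 2.0, 7.0, 4.0, 1.0]
reset = team_reset_tasks(bids, 2)        # the two lowest bids: tasks 4 and 1
bundle, path = [0, 1], [1, 0]
winners = [0, 0, 2, 3, 1]
apply_team_reset(reset, bundle, path, winners, bids)
```

Here the agent releases task 1 (which it held) and clears its belief about task 4 (held by another agent), leaving its high-bid task 0 allocated.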

5 Results

5.1 Simulation

Figure 2: Simulation of eight robots allocating tasks; allocated tasks are colored corresponding to the assigned robot. A new task (green star) appears sequentially, and tasks are released (black, filled circles) until all are allocated.

A UAV task allocation simulator was created to validate the convergence and solution quality of the various replanning strategies. The simulator is implemented in Python and allows for varying communication conditions, dynamic robot movements, and newly appearing tasks. CBBA with Partial Replan is run locally on multiple instances of the Robot class, and the Simulator only facilitates message passing between agents and the revealing of new tasks to the team. We implement a vehicle routing scenario where UAVs must visit task locations. In these experiments, we use a time-discounted scoring function:

c_{ij}(p_i) = λ^{τ_{ij}(p_i)} c̄_{ij}    (3)

where λ is the time-discount factor, c̄_{ij} is the static reward of task j for agent i, and τ_{ij}(p_i) is the time it takes to service task j along path p_i. We run 100 Monte Carlo simulations where the initial tasks are placed at random locations. Once the team converges on an initial solution, a new task arrives that must be allocated by the team. This process is repeated 8 times for a total arrival of 8 tasks. For each simulation scenario, the setting is saved so that multiple strategies can be run and compared. Figure 2 shows an example simulation, where initially a new task appears (top left), then tasks are reset, and a final allocation is reached (bottom right). Note the significant changes and disagreement during the replanning phase, since the team is resetting a subset of previous tasks while allocating the new task.
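The time-discounted path score of Eq. (3) can be sketched as follows. Straight-line travel at constant speed, and the particular function and parameter names, are assumptions of this sketch, not necessarily the simulator's implementation.

```python
import math


def path_score(positions, start, path, rewards, lam=0.9, speed=1.0):
    """Time-discounted score of a path in the spirit of Eq. (3): each task's
    static reward is discounted by lam ** (arrival time along the path)."""
    t, cur, total = 0.0, start, 0.0
    for j in path:
        t += math.dist(cur, positions[j]) / speed  # travel time to task j
        total += (lam ** t) * rewards[j]           # discounted static reward
        cur = positions[j]
    return total


# Usage: two tasks one unit apart along the x-axis, heavy discounting.
positions = {0: (1.0, 0.0), 1: (2.0, 0.0)}
rewards = {0: 10.0, 1: 10.0}
score = path_score(positions, (0.0, 0.0), [0, 1], rewards, lam=0.5)
```

With lam = 0.5, the first task (reached at t = 1) earns 5.0 and the second (t = 2) earns 2.5, illustrating why earlier-serviced tasks dominate the bids.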

5.2 Comparing Convergence

We compare the number of rounds of CBBA required for the team to agree on a conflict-free solution, using the four strategies outlined above: no bundle reset, partial local bundle reset, partial team reset, and full bundle reset. In both cases of partial resetting, the team initially resets a total of n_r tasks, where the difference lies in resetting a fixed number from each bundle (local reset) or choosing the n_r lowest-bid tasks from the entire team (team reset). We first compare the team’s convergence for the initial static allocation of tasks in Figure 3 (left) and then, in Figure 3 (right), the final team convergence time after a new task appears. In the static allocation, all four strategies have equal convergence times, as expected from the theory. When a new task is introduced and needs to be allocated by the team, all four strategies require additional rounds of CBBA, ranging from no reset, with the least bidding, to a full reset, which requires the most rounds of CBBA. Between the local and team resetting, the local reset performs worse, in some simulations requiring the same number of rounds as a full reset. This is expected, as the worst case can only be guaranteed when the lowest team-wide tasks are chosen for resetting. However, on average, the local bundle reset does perform faster than a full reset, suggesting that there is still a speedup from a partial local bundle reset.

5.3 Comparing Solution Quality

Figure 3: Convergence time for the initial static allocation (left), before the new task arrives, is the same for all four replan strategies. When a new task arrives (right), the number of rounds on average and in the worst case is highest for a full-reset replan and shortest for the no-reset strategy. Choosing the n_r lowest-bid tasks to reset for a global replan converges faster than resetting a fixed number of tasks in each bundle and provides intermediate performance as a whole.
Figure 4: Performance of partial replanning compared to no replanning, measured by score increase after allocating 8 new tasks. Partial replan improves the score quality, nearing the performance of the full-replan baseline.

To understand the performance gains of partial replanning, we compare the replan strategies to the full-reset strategy. While the full reset is not an optimal solution, we use it as a baseline for "best" performance since it does carry CBBA’s 50% approximation guarantee and intuitively has the highest level of coordination. The performance of each algorithm is measured by the increase in team score caused by servicing the new tasks, computed after all the new tasks are allocated. Figure 4 shows the performance of both no reset (top) and partial reset (bottom) in an unconstrained setting, i.e. without a binding path-length constraint. As expected, the no reset and partial reset perform worse than the baseline full reset; however, the faster partial-reset algorithm outperforms no resetting and generally performs closer to a full reset. Note that the high variance in solution quality is due to the full reset itself being suboptimal, owing to its greedy nature. However, in more constrained settings, where the number of feasible solutions is smaller, partial and full resets will more consistently outperform no-reset approaches.

6 Conclusion

In this work, we presented a dynamic task allocation algorithm that trades off the team’s response time against solution quality. By resetting the lowest-bid tasks from previous rounds of CBBA, the team is able to achieve fast convergence while still coordinating with other agents. In addition, if all original tasks are already allocated, the team can achieve faster guaranteed convergence by selecting the team-wide lowest-bid tasks, reducing the number of tasks reallocated and the number of agents involved. Finally, simulations showed that the team could in fact converge faster than by fully re-solving the task allocation problem, while obtaining better solutions than with no coordination. This framework, trading off the time to re-solve the problem with new information, can be explored in other areas of optimization and planning. In addition, future work may include responding to other dynamics in the environment, such as the addition and loss of robots, outdated information, and time-varying task information.

Acknowledgments

This work was supported by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate Fellowship (NDSEG) Program, and Lockheed Martin. Thanks to Dr. Golnaz Habibi for the valuable insights.

References

  • Psaraftis [1980] Psaraftis, H. N., “A Dynamic Programming Solution to the Single Vehicle Many-to-Many Immediate Request Dial-a-Ride Problem,” Transp. Sci., Vol. 14, No. 2, 1980, pp. 130–154. doi:10.1287/trsc.14.2.130, URL http://pubsonline.informs.org/doi/abs/10.1287/trsc.14.2.130.
  • Pillac et al. [2013] Pillac, V., Gendreau, M., Guéret, C., and Medaglia, A. L., “A review of dynamic vehicle routing problems,” Eur. J. Oper. Res., Vol. 225, No. 1, 2013, pp. 1–11. doi:10.1016/j.ejor.2012.08.015, URL http://dx.doi.org/10.1016/j.ejor.2012.08.015.
  • Ritzinger et al. [2016] Ritzinger, U., Puchinger, J., and Hartl, R. F., “A survey on dynamic and stochastic vehicle routing problems,” Int. J. Prod. Res., Vol. 54, No. 1, 2016, pp. 215–231. doi:10.1080/00207543.2015.1043403, URL http://dx.doi.org/10.1080/00207543.2015.1043403.
  • Montemanni et al. [2005] Montemanni, R., Ch, R., Gambardella, L. M., Rizzoli, A. E., and Donati, A. V., “Ant Colony System for a Dynamic Vehicle Routing Problem,” J. Comb. Optim., Vol. 10, 2005, pp. 327–343. doi:10.1007/s10878-005-4922-6.
  • Ichoua et al. [2007] Ichoua, S., Gendreau, M., and Potvin, J.-Y., “Planned Route Optimization For Real-Time Vehicle Routing,” Dyn. Fleet Manag., Vol. 38, 2007, pp. 1–18. doi:10.1007/978-0-387-71722-7_1.
  • van Hemert and Poutré [2004] van Hemert, J. I., and Poutré, J. A. L., “Dynamic Routing Problems with Fruitful Regions: Models and Evolutionary Computation,” Parallel Probl. Solving from Nat. VIII, 2004, pp. 690–699. doi:10.1007/b100601.
  • Edison and Shima [2008] Edison, E., and Shima, T., “Genetic Algorithm for Cooperative UAV Task Assignment and Path Optimization,” AIAA Guid. Navig. Control Conf. Exhib., No. August, 2008. doi:10.2514/6.2008-6317, URL http://arc.aiaa.org/doi/10.2514/6.2008-6317.
  • Guangtong et al. [2018] Guangtong, X., Li, L., Long, T., Wang, Z., and Cai, M., “Cooperative Multiple Task Assignment Considering Precedence Constraints Using Multi-Chromosome Encoded Genetic Algorithm,” 2018 AIAA Guid. Navig. Control Conf., No. January, 2018, pp. 1–9. doi:10.2514/6.2018-1859, URL https://arc.aiaa.org/doi/10.2514/6.2018-1859.
  • Nedic and Ozdaglar [2009] Nedic, A., and Ozdaglar, A., “Distributed Subgradient Methods for Multi-Agent Optimization,” IEEE Trans. Automat. Contr., Vol. 54, No. 1, 2009, pp. 48–61. doi:10.1109/TAC.2008.2009515, URL http://ieeexplore.ieee.org/document/4749425/.
  • Chopra et al. [2017] Chopra, S., Notarstefano, G., Rice, M., and Egerstedt, M., “A Distributed Version of the Hungarian Method for Multirobot Assignment,” IEEE Trans. Robot., 2017, pp. 1–16. doi:10.1109/TRO.2017.2693377, URL http://ieeexplore.ieee.org/document/7932518/.
  • Liu and Shell [2013] Liu, L., and Shell, D. A., “An anytime assignment algorithm: From local task swapping to global optimality,” Auton. Robots, Vol. 35, No. 4, 2013, pp. 271–286. doi:10.1007/s10514-013-9351-2.
  • Choi et al. [2009] Choi, H. L., Brunet, L., and How, J. P., “Consensus-based decentralized auctions for robust task allocation,” IEEE Trans. Robot., Vol. 25, No. 4, 2009, pp. 912–926. doi:10.1109/TRO.2009.2022423, URL http://dx.doi.org/10.1109/TRO.2009.2022423.
  • Johnson et al. [2011] Johnson, L., Ponda, S., Choi, H.-L., and How, J., “Asynchronous Decentralized Task Allocation for Dynamic Environments,” Infotech@Aerosp. 2011, American Institute of Aeronautics and Astronautics, Reston, Virginia, 2011, pp. 1–12. doi:10.2514/6.2011-1441, URL http://arc.aiaa.org/doi/10.2514/6.2011-1441.
  • Johnson et al. [2012] Johnson, L., Choi, H. L., Ponda, S., and How, J. P., “Allowing non-submodular score functions in distributed task allocation,” IEEE Conf. Decis. Control, , No. 1, 2012, pp. 4702–4708. doi:10.1109/CDC.2012.6425867.