1 Introduction
In many robotics applications, for example, aircraft-towing vehicles [Morris et al. 2016], warehouse and office robots [Wurman et al. 2008, Veloso et al. 2015], game characters [Ma et al. 2017d], and other multi-robot systems [Ma et al. 2017a], robots need to finish tasks that have deadlines. For example, in applications that require long-term autonomy for a team of robots, it is important to move as many robots as possible from a dangerous area to a shelter area before a disaster occurs in inclement or adversarial conditions.
One aspect of the problem, namely Multi-Agent Path Finding (MAPF), is to plan collision-free paths for multiple agents in known environments from their given start vertices to their given goal vertices [Ma and Koenig 2017]. The objective is to minimize the sum of the arrival times of the agents or the makespan. MAPF is NP-hard to solve optimally [Yu and LaValle 2013b] and even to approximate within a small constant factor for makespan minimization [Ma et al. 2016b]. It can be solved with reductions to other well-studied combinatorial problems [Surynek 2015, Surynek et al. 2016, Yu and LaValle 2013a, Erdem et al. 2013] and with dedicated optimal [Standley and Korf 2011, Goldenberg et al. 2014, Sharon et al. 2013, Wagner and Choset 2015, Sharon et al. 2015, Boyarski et al. 2015, Felner et al. 2018], bounded-suboptimal [Barer et al. 2014, Cohen et al. 2016], and suboptimal MAPF algorithms [Silver 2005, Sturtevant and Buro 2006, Wang and Botea 2011, Luna and Bekris 2011, de Wilde et al. 2013], as described in several surveys [Ma et al. 2016a, Felner et al. 2017]. MAPF has recently been generalized in different directions [Ma and Koenig 2016, Hönig et al. 2016a, Ma et al. 2016a, Hönig et al. 2016b, Ma et al. 2017b, Ma et al. 2017c], but none of these generalizations captures an important characteristic of many applications, namely the need to meet deadlines. A MAPF variant, GTAPF, assigns tasks with deadlines to agents but does not directly maximize the number of agents that can finish their tasks by the deadlines [Nguyen et al. 2017].
We thus formalize Multi-Agent Path Finding with Deadlines (MAPF-DL). The objective is to maximize the number of agents that can reach their given goal vertices from their given start vertices within a given deadline, without colliding with each other. Since none of the existing results transfers directly to MAPF-DL, we first show that MAPF-DL is NP-hard to solve optimally. We then present two families of algorithms for solving MAPF-DL. The first family is based on a reduction of MAPF-DL to a flow problem and a subsequent compact integer linear programming formulation of the resulting reduced abstracted multi-commodity flow network. The second family is based on novel combinatorial search algorithms. We introduce three search-based MAPF-DL algorithms and conduct systematic experiments to compare them on a number of MAPF-DL instances. The results show that all algorithms scale well to large problem instances, but each one dominates the others in different scenarios. We study their pros and cons and provide a set of guidelines for identifying when each one should be used.
2 Multi-Agent Path Finding with Deadlines
In this section, we define MAPF-DL formally and prove its computational hardness. We then present an optimal MAPF-DL algorithm based on integer linear programming (ILP).
2.0.1 Problem Definition
We formalize MAPF-DL as follows: We are given a deadline T (a time step), a finite undirected graph G = (V, E), and M agents a_1, ..., a_M. Each agent a_i has a start vertex s_i ∈ V and a goal vertex g_i ∈ V. In each time step, each agent either moves to an adjacent vertex or stays at its current vertex. Without loss of generality, each agent can reach its goal vertex within T time steps in the absence of other agents. Let x_i(t) be the vertex occupied by agent a_i at time step t. We call an agent a_i successful iff it occupies its goal vertex at the deadline, that is, x_i(T) = g_i. A plan consists of a path (x_i(0), ..., x_i(T)) assigned to each successful agent a_i that satisfies the following conditions: (1) x_i(0) = s_i [each successful agent starts at its start vertex]; and (2) (x_i(t), x_i(t+1)) ∈ E or x_i(t+1) = x_i(t) for all 0 ≤ t < T [each successful agent always either moves to an adjacent vertex or does not move]. Each unsuccessful agent is removed at time step zero, and the plan thus contains no path assigned to it.¹ We define a collision between two different successful agents a_i and a_j to be either a vertex collision (a_i, a_j, v, t) iff x_i(t) = x_j(t) = v [both successful agents occupy the same vertex simultaneously] or an edge collision (a_i, a_j, u, v, t) iff x_i(t) = x_j(t+1) = u and x_i(t+1) = x_j(t) = v [both successful agents traverse the same edge simultaneously in opposite directions]. A solution is a plan without collisions.

¹ Depending on the application, the unsuccessful agents can be removed at time step zero, wait at their start vertices, or move out of the way of the successful agents. We choose the first option in this paper. If the unsuccessful agents are not removed, they can obstruct other agents. However, our proof of NP-hardness does not depend on this assumption, and our MAPF-DL algorithms can be adapted to other assumptions.
The objective of MAPF-DL is to maximize the number of successful agents, that is, the number of paths in the solution, or, equivalently, to minimize the number of unsuccessful agents. The cost of a plan is thus the number of unsuccessful agents. It can also be defined as the sum of the costs of all agents, since a Boolean cost can be defined for each agent where each successful agent incurs cost 0 and each unsuccessful agent incurs cost 1. Obviously, every MAPF-DL instance has a trivial solution in which all agents are unsuccessful, with cost equal to the total number of agents.
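To make these definitions concrete, the success, movement, and collision conditions can be checked mechanically. The following sketch (our own illustration, not code from the paper; all names are ours) validates a plan represented as a map from each successful agent to its vertex sequence indexed by time step:

```python
def is_solution(paths, starts, goals, deadline, edges):
    """Check that a plan (agent -> list of deadline+1 vertices) is a valid
    MAPF-DL solution for its successful agents. `edges` is a set of
    undirected edges given as vertex pairs."""
    for i, path in paths.items():
        # condition (1): start at the start vertex; success: goal at deadline
        if path[0] != starts[i] or path[deadline] != goals[i]:
            return False
        # condition (2): every step moves along an edge or waits
        for t in range(deadline):
            u, v = path[t], path[t + 1]
            if u != v and (u, v) not in edges and (v, u) not in edges:
                return False
    agents = list(paths)
    for a in range(len(agents)):
        for b in range(a + 1, len(agents)):
            p, q = paths[agents[a]], paths[agents[b]]
            for t in range(deadline + 1):
                if p[t] == q[t]:
                    return False  # vertex collision
                if t < deadline and p[t] == q[t + 1] and p[t + 1] == q[t]:
                    return False  # edge collision (swap)
    return True

def plan_cost(paths, num_agents):
    """Cost of a plan: the number of unsuccessful (removed) agents."""
    return num_agents - len(paths)
```

For example, on the path graph 0–1–2 with deadline 2, the plan that routes one agent 0→1→2 and removes a second agent is a solution of cost 1.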
2.0.2 Intractability
Theorem 1.
It is NP-hard to compute a MAPF-DL solution with the maximum number of successful agents.
The proof of the theorem reduces the 3,3-SAT problem [Tovey 1984], an NP-complete version of the Boolean satisfiability problem, to MAPF-DL. The reduction is similar to the one used for proving the NP-hardness of approximating the optimal makespan for MAPF [Ma et al. 2016b]. It constructs a MAPF-DL instance with a fixed deadline that has a zero-cost solution iff the given 3,3-SAT instance is satisfiable. Also see our preliminary work [Ma et al. 2018].
2.0.3 ILP-Based MAPF-DL Algorithm
Our ILP-based MAPF-DL algorithm first reduces MAPF-DL to the maximum (integer) multi-commodity flow problem, similar to the reductions of MAPF and of a MAPF variant, TAPF, to multi-commodity flow problems [Yu and LaValle 2013a, Ma and Koenig 2016]. It then encodes the latter problem as a compact integer linear programming (ILP) formulation on a reduced abstracted multi-commodity flow network. See our preliminary work [Ma et al. 2018] for more details on this algorithm.
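While the exact reduced network is deferred to [Ma et al. 2018], the flavor of the reduction can be sketched on an ordinary time-expanded network. With binary variables $x^{i}_{u,v,t}$ equal to 1 iff agent $a_i$ moves from vertex $u$ at time step $t$ to vertex $v$ at time step $t+1$ (where $u = v$ encodes waiting), one plausible formulation reads as follows; the notation is ours and omits the abstractions that make the paper's formulation compact:

```latex
\begin{align*}
\max\;        & \sum_{i} \sum_{u} x^{i}_{u,g_i,T-1}
              && \text{number of successful agents}\\
\text{s.t.}\; & \sum_{v} x^{i}_{s_i,v,0} \le 1
              && \text{each agent routes at most one unit of flow}\\
              & \sum_{u} x^{i}_{u,v,t} = \sum_{w} x^{i}_{v,w,t+1}
              && \text{flow conservation at } (v, t+1)\\
              & \sum_{i} \sum_{u} x^{i}_{u,v,t} \le 1
              && \text{no vertex collisions}\\
              & x^{i}_{u,v,t} + x^{j}_{v,u,t} \le 1 \quad (i \ne j,\; u \ne v)
              && \text{no edge collisions}\\
              & x^{i}_{u,v,t} \in \{0, 1\} \quad \text{only if } (u,v) \in E \text{ or } u = v
\end{align*}
```

An unsuccessful agent simply routes no flow, so the objective counts exactly the successful agents.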
3 Search-Based MAPF-DL Algorithms
In this section, we present a spectrum of optimal combinatorial search algorithms for solving MAPF-DL: Conflict-Based Search with Deadlines (CBS-DL), an adapted version of Conflict-Based Search (CBS) [Sharon et al. 2015]; Death-Based Search (DBS), which reasons about sets of successful agents; and Meta-Agent DBS (MA-DBS), which combines the advantages of CBS-DL and DBS.
3.1 CBS-DL
(Standard) CBS is a two-level MAPF algorithm that minimizes the sum of the arrival times of all agents at their goal vertices. CBS-DL is an adaptation of CBS for MAPF-DL. Algorithm 1 shows its high-level search. Lines in red are used only by MA-DBS (presented in Section 3.3). CBS-DL uses the same framework as CBS but uses the number of unsuccessful agents as its cost. On the high level, CBS-DL performs a best-first search to resolve collisions among the agents and thus builds a constraint tree (CT). Each CT node contains a set of constraints and a plan that obeys these constraints. CBS-DL always expands the CT node whose plan has the smallest cost. The root CT node has no constraints. CBS-DL performs a low-level search to find a path for each agent (without any constraints). The plan of the root CT node thus contains paths for all agents, and its cost is zero. When CBS-DL expands a CT node, it checks whether the plan of the node has any collisions. If not, the node is a goal node and CBS-DL terminates successfully. Otherwise, CBS-DL chooses a collision to resolve and generates two child CT nodes that inherit all constraints and the plan from their parent. If the collision to resolve is a vertex collision, CBS-DL adds to one child node a vertex constraint that prohibits the first of the two colliding agents from occupying the collision vertex at the collision time step and similarly adds a vertex constraint for the second agent to the other child node. If the collision to resolve is an edge collision, CBS-DL adds to one child node an edge constraint that prohibits the first agent from traversing the collision edge in its direction at the collision time step and similarly adds an edge constraint for the second agent to the other child node. For each child CT node, CBS-DL performs a low-level search for the newly constrained agent to compute a new path from its start vertex to its goal vertex within the deadline that obeys the constraints of the child CT node relevant to that agent and replaces the agent's old path with the new path returned by the low-level search (it deletes the old path if no path is returned). CBS-DL then updates the cost of the child CT node accordingly and inserts it into OPEN.
On the low level, CBS-DL performs an A* search to find a path for the agent from its start vertex to its goal vertex, pruning all nodes whose time step exceeds the deadline. If it finds a path from the start vertex to the goal vertex that obeys the constraints imposed by the high level and occupies the goal vertex exactly at the deadline, it returns the path for the agent and cost 0. Otherwise, it returns no path and cost 1.
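A minimal sketch of this low-level search (our own illustration; plain breadth-first search stands in for A*, which changes only efficiency since the space of (vertex, time step) pairs is unweighted) is:

```python
from collections import deque

def low_level_search(start, goal, deadline, adj, vertex_cons, edge_cons):
    """Find a path (a list of deadline+1 vertices) that starts at `start`,
    occupies `goal` exactly at the deadline, and obeys the high-level
    constraints: vertex_cons is a set of forbidden (vertex, t) pairs and
    edge_cons a set of forbidden (from, to, t) moves arriving at time t.
    Returns (path, 0) on success and (None, 1) otherwise."""
    parents = {}
    seen = {(start, 0)}
    frontier = deque([(start, 0)])
    while frontier:
        v, t = frontier.popleft()
        if t == deadline:
            if v != goal:
                continue
            path = [v]  # reconstruct the path backwards from (goal, deadline)
            while (v, t) in parents:
                v, t = parents[(v, t)]
                path.append(v)
            return path[::-1], 0
        for w in adj[v] + [v]:  # move to an adjacent vertex or wait
            nxt = (w, t + 1)
            if nxt in seen or nxt in vertex_cons or (v, w, t + 1) in edge_cons:
                continue
            seen.add(nxt)
            parents[nxt] = (v, t)
            frontier.append(nxt)
    return None, 1  # no such path exists: the agent is unsuccessful
```

The timing convention for edge constraints (indexed here by the arrival time step) is a choice we make for this sketch; the paper's convention may differ.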
3.1.1 Theoretical Analysis
We now prove that CBS-DL is complete and optimal.
Lemma 1.
CBS-DL generates only finitely many CT nodes.

Proof. The constraint added to a child CT node is different from the constraints of its parent CT node since the paths of the parent CT node do not obey it. The depth of the (binary) CT is finite because no path is longer than the deadline and thus only finitely many different vertex and edge constraints exist. ∎
Lemma 2.
Whenever CBS-DL chooses a CT node for expansion and the plan of the node has no collisions, CBS-DL terminates with a solution of finite cost.

Proof. The cost of the CT node is the cost of its plan, which is bounded by the total number of agents. ∎
Lemma 3.
The plan of a CT node has the largest possible number of paths (one for each successful agent) that obey its constraints.

Proof. The statement holds for the root CT node because its plan contains one path for each agent (since each agent can reach its goal vertex within the deadline in the absence of other agents). Assume that the statement holds for the parent CT node of any child CT node. When CBS-DL updates the plan of the child CT node, it changes the path of only one agent, namely the newly constrained one, by performing a low-level search with the constraints of the child CT node (including the newly added constraint). Therefore, CBS-DL correctly updates the path of this agent, and the statement also holds for the child CT node due to the induction assumption and the fact that the child CT node inherits the paths of all other agents from its parent CT node. ∎
Lemma 4.
CBS-DL chooses CT nodes for expansion in nondecreasing order of their costs.

Proof. CBS-DL performs a best-first search, and the cost of a parent CT node is at most the cost of any of its child CT nodes: the plan of a child CT node contains at most as many paths as the plan of its parent CT node because (1) the plan of a CT node contains the largest possible number of paths (one for each successful agent) that obey its constraints according to Lemma 3, and thus (2) the set of all plans that obey the constraints of the child CT node is a subset of the set of all plans that obey the constraints of the parent CT node (since the child CT node inherits all constraints of its parent). ∎
Lemma 5.
The cost of a CT node is at most the cost of any solution that obeys its constraints.

Proof. The cost of the CT node is the cost of its plan, which in turn is the minimum among the costs of all plans that obey its constraints according to Lemma 3, which in turn is at most the cost of any solution that obeys its constraints, since every solution that obeys its constraints is also a plan that obeys its constraints. ∎
Theorem 2.
CBS-DL is complete and optimal.

Proof. A solution always exists, for example, the trivial one in which all agents are unsuccessful. Now assume that the cost of an optimal solution is c* and, for a proof by contradiction, that CBS-DL does not terminate with a solution of cost c*. Then, whenever CBS-DL chooses a CT node with cost at most c* for expansion, its plan has collisions (because otherwise CBS-DL would correctly terminate with a solution of cost at most c* according to Lemma 2, since it chooses CT nodes for expansion in nondecreasing order of their costs according to Lemma 4). Pick an arbitrary optimal solution. A CT node whose constraints the optimal solution obeys has cost at most c* according to Lemma 5. The root CT node is such a node since the optimal solution trivially obeys its (empty) set of constraints. Whenever CBS-DL chooses such a CT node for expansion, its plan has collisions (as shown directly above, since its cost is at most c*). CBS-DL thus generates the child CT nodes of this parent CT node, the constraints of at least one of which the optimal solution obeys and which CBS-DL thus inserts into OPEN with cost at most c*. Since CBS-DL chooses CT nodes for expansion in nondecreasing order of their costs according to Lemma 4, it chooses infinitely many CT nodes with costs at most c*, which contradicts Lemma 1. ∎
3.2 Death-Based Search
Death-Based Search (DBS) is also a two-level algorithm. Conceptually, instead of imposing vertex or edge constraints on agents, DBS marks individual agents as unsuccessful and then searches for a minimum set of unsuccessful agents necessary to produce a solution.
We define a group of agents to be consistent iff all agents in it can simultaneously be successful, that is, the sub-MAPF-DL instance restricted to the agents in the group has a zero-cost solution (an empty group is consistent). This condition is verified by a special call to CBS-DL with the given deadline, which reports that the condition holds if all agents in the group are successful or reports that the condition does not hold once CBS-DL expands a CT node with nonzero cost (that is, at least one agent in the group is not successful).
On the high level, DBS performs a best-first search on the death tree (DT). Each DT node N contains a set N.live of disjoint groups of live agents (agents that have not been declared unsuccessful) and a cost N.cost equal to the number of agents that have been declared unsuccessful. Algorithm 2 shows the high-level search of DBS. The root DT node contains one group of live agents per agent, each group containing a single unique agent, and its cost is zero. DBS chooses the DT node N with the smallest cost N.cost and checks whether all groups in its set N.live are consistent. If N.live contains a single consistent group, the DT node is a goal node, and DBS returns the zero-cost solution for this group. If all (more than one) groups in N.live are consistent, DBS merges the two smallest groups in N.live into a new group and adds a child DT node whose set contains all groups in N.live but with the two merged groups replaced by their union. Otherwise, there is an inconsistent group in N.live, and we know that at least one agent in it must be declared unsuccessful, forcing a split. In this case, DBS adds one child DT node per agent in the inconsistent group, where each of these child DT nodes declares its own unique agent unsuccessful, and its cost is thus one larger than that of its parent.
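The high-level loop of DBS can be sketched as follows (our own simplified skeleton: the consistency oracle, implemented in the paper by a call to CBS-DL, is passed in as a function, and only the optimal number of unsuccessful agents is returned):

```python
import heapq
import itertools

def dbs_cost(agents, consistent):
    """Best-first search on the death tree (DT). Each DT node is a tuple
    (cost, tie, live), where live is a list of disjoint groups (frozensets)
    of live agents and cost is the number of agents declared unsuccessful."""
    tie = itertools.count()  # tie-breaker so heapq never compares lists
    heap = [(0, next(tie), [frozenset([a]) for a in agents])]
    while heap:
        cost, _, live = heapq.heappop(heap)
        bad = next((g for g in live if not consistent(g)), None)
        if bad is None:
            if len(live) <= 1:
                return cost  # a single consistent group: goal node
            # all groups consistent: merge the two smallest groups
            a, b = sorted(live, key=len)[:2]
            child = [g for g in live if g not in (a, b)] + [a | b]
            heapq.heappush(heap, (cost, next(tie), child))
        else:
            # inconsistent group: declare one of its agents unsuccessful
            rest = [g for g in live if g is not bad]
            for agent in bad:
                smaller = bad - {agent}
                child = rest + ([smaller] if smaller else [])
                heapq.heappush(heap, (cost + 1, next(tie), child))
```

The oracle used by the paper is monotone (every supergroup of an inconsistent group is inconsistent), which is what makes the first goal node reached by this best-first order optimal.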
3.2.1 Other Versions of DBS
DBS could have started with a root DT node whose set contains only a single group of all agents, which does not require merging groups of live agents but results in a larger branching factor for the root DT node. DBS could have chosen different groups to merge, which might result in an inconsistent group of larger size. Whenever DBS splits a parent DT node, it could have generated child DT nodes whose sets contain only consistent additional groups (and thus possibly declare more than one additional agent unsuccessful per child DT node), which requires a procedure that can efficiently determine all consistent subgroups of the inconsistent group of agents and might result in a larger branching factor. In this paper, we chose to present the version that is the easiest to understand and analyze.
3.2.2 Theoretical Analysis
We now prove that DBS is complete and optimal.
Lemma 6.
DBS generates only finitely many DT nodes.

Proof. The branching factor of a DT node is bounded by the number of agents. When we consider each DT node in a downward traversal of any branch of the DT from the root DT node, its set contains either one group fewer (when merging two groups) or one agent fewer (when declaring an agent unsuccessful) than that of its parent DT node. Its set is thus different from the sets of all its ancestor DT nodes. Therefore, the depth of the DT is also finite since there are only finitely many possible sets of disjoint groups of agents. ∎
Lemma 7.
Whenever DBS chooses a DT node for expansion whose set contains a single consistent group of live agents, DBS correctly terminates with a solution of finite cost.

Proof. Its cost is the number of agents that have been declared unsuccessful, which is bounded by the total number of agents. ∎
Lemma 8.
DBS chooses DT nodes for expansion in nondecreasing order of their costs.
Theorem 3.
DBS is complete and optimal.

Proof. A solution always exists, for example, the trivial one in which all agents are unsuccessful. Now assume that the cost of an optimal solution is c* and, for a proof by contradiction, that DBS does not terminate with a solution of cost c*. Then, whenever DBS chooses a DT node with cost at most c* for expansion, its set does not contain a single consistent group (because otherwise DBS would correctly terminate with a solution of cost at most c* according to Lemma 7, since it chooses DT nodes for expansion in nondecreasing order of their costs according to Lemma 8). Pick an arbitrary optimal solution, and let U be its set of unsuccessful agents. Trivially, a DT node that has declared only agents in a subset of U unsuccessful has cost at most c* = |U|. The root DT node is such a node since it has not declared any agents unsuccessful. Whenever DBS chooses such a DT node for expansion, its set does not contain a single consistent group (as shown directly above, since its cost is at most c*). Its set thus contains (1) more than one consistent group or (2) an inconsistent group (in which case the DT node has declared only agents in a strict subset of U unsuccessful). In case (1), DBS generates the only child DT node of this parent DT node, which has declared the same agents unsuccessful as the parent DT node and which DBS thus inserts into OPEN with cost at most c*. In case (2), DBS generates the child DT nodes of this parent DT node, at least one of which has still declared only agents (including one additional agent) in a subset of U unsuccessful and which DBS thus inserts into OPEN with cost at most c*. Since DBS chooses DT nodes for expansion in nondecreasing order of their costs according to Lemma 8, it chooses infinitely many DT nodes with costs at most c*, which contradicts Lemma 6. ∎
3.3 Meta-Agent DBS
CBS may perform poorly when an environment contains many possible, but colliding, paths for the agents, since the size of the CT is exponential in the number of collisions resolved. On the other hand, DBS may perform poorly for MAPF-DL if conflicting agents are not added to the same group early in the search. We thus combine the power of CBS for weakly coupled agents and the power of DBS for identifying unsuccessful agents in a tightly coupled subset of agents using the Meta-Agent CBS framework [Sharon et al. 2015], which results in a new optimal MAPF-DL algorithm, called Meta-Agent DBS (MA-DBS).
MA-DBS is a two-level algorithm: It uses the high-level search of CBS-DL on the high level and DBS on the low level. Algorithm 1 shows its high-level search. MA-DBS is similar to CBS-DL but also keeps track, in a conflict matrix, of the number of times each pair of (simple) agents has collided during the search so far. Whenever MA-DBS considers a collision between two (meta-)agents because two simple agents in them collide, it increases the corresponding entry of the conflict matrix by one. If the number of collisions between two agents exceeds a user-defined merge threshold B, MA-DBS merges them into a composite meta-agent before expanding the CT node. Since DBS is used as the low-level search that finds a plan for a meta-agent without any internal collisions among the (simple) agents in the meta-agent, a CT node only needs to store external constraints resulting from (external) collisions between (simple) agents in different meta-agents. Therefore, when MA-DBS decides to merge two meta-agents, it updates the constraints of the CT node accordingly. It then calls DBS to find new paths (without internal collisions) for all agents in the new meta-agent subject to the constraints of the CT node relevant to it (by solving a MAPF-DL instance restricted to the agents in the meta-agent) and updates the plan and cost of the CT node according to the new paths returned by DBS. Then, instead of expanding the CT node, MA-DBS inserts it back into OPEN. When MA-DBS generates a new child CT node, it likewise calls DBS to find an optimal solution for the constrained meta-agent that obeys the constraints of the child CT node.
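The merge bookkeeping can be sketched as follows (our own illustration; the class and method names are not from the paper):

```python
from collections import defaultdict

class ConflictMatrix:
    """Count collisions between pairs of simple agents and decide when two
    meta-agents should be merged."""
    def __init__(self, merge_threshold):
        self.B = merge_threshold
        self.counts = defaultdict(int)

    def record(self, i, j):
        """Called whenever simple agents i and j collide."""
        self.counts[frozenset((i, j))] += 1

    def should_merge(self, meta_i, meta_j):
        """Merge two meta-agents iff the total number of recorded collisions
        between their simple agents exceeds the threshold B."""
        total = sum(self.counts[frozenset((a, b))]
                    for a in meta_i for b in meta_j)
        return total > self.B
```

With merge threshold 0, as in MA-DBS(0), a single collision between two agents already triggers a merge, which is why that variant behaves most like DBS.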
3.3.1 Theoretical Analysis
Lemmas 1 and 2 hold for MA-DBS without change. Since the low-level search of MA-DBS, namely DBS, returns the maximum number of paths for a meta-agent that obey the constraints of a CT node, Lemma 3 also holds for MA-DBS because (1), when MA-DBS updates the plan of a CT node after a merge, the resulting plan contains the maximum number of paths for the new meta-agent together with the original paths of the other agents, and (2), when it updates the plan of a child CT node, the resulting plan contains the maximum number of paths for the constrained meta-agent and inherits the paths of the other agents from the plan of the parent CT node, and thus the induction argument of the proof of Lemma 3 applies. Consequently, Lemmas 4 and 5 also hold for MA-DBS.
Theorem 4.
MADBS is complete and optimal.
4 Experiments
In this section, we describe our experimental results on a 2.50 GHz Intel Core i5-2450M laptop with 6 GB RAM. We tested six optimal MAPF-DL algorithms: the ILP-based algorithm, CBS-DL, DBS, and MA-DBS with merge thresholds 0, 10, and 100 (labeled MA-DBS(0), MA-DBS(10), and MA-DBS(100), respectively). The ILP-based algorithm uses CPLEX V12.7.1 [IBM 2011] as the ILP solver. We experimented on instances where the start and goal vertices of each agent are placed randomly so that the distance between them is close to the deadline. An instance becomes much easier to solve if this distance is much smaller than the deadline (since there is more leeway to plan a path for the agent). Specifically, we use three sets of randomly generated MAPF-DL instances with different numbers of agents (varied from 10 to 100 in increments of 10), labeled SMALL, MEDIUM, and LARGE, on 4-neighbor 2D grids of increasing size with deadlines 50, 100, and 150, respectively. The cells in each grid are blocked independently at random with 20% probability each. We generate 50 MAPF-DL instances for each number of agents in each set. The start and goal vertices of each agent are randomly placed at distance 48, 49, or 50 for SMALL, 98, 99, or 100 for MEDIUM, and 148, 149, or 150 for LARGE. Each algorithm is given a time limit of 60 seconds per instance. We did not run an algorithm for a given number of agents if it solved none of the 50 instances for a smaller number of agents.
The SMALL domain. Figure 1 (top left) plots the success rates (numbers of instances solved within the time limit, divided by 50) for all algorithms for SMALL. ILP has the highest success rates, and they start to drop only at 50 agents. The success rates of the search-based algorithms start to drop at 30 agents. DBS and MA-DBS(0) have the highest success rates among the search-based algorithms. Figure 1 (top right) plots the average running times over all 50 instances, where 60 seconds are counted for each instance that is not solved. The data points in the chart are therefore lower bounds on the running times when not all instances are solved. ILP performs the best. Finally, the table in Figure 1 reports the average running times over those "easy" instances that are solved by all six algorithms (it also reports the numbers of such instances but omits the rows where no instance is solved by all algorithms). The best entry in each row is shown in bold. The search-based algorithms use less time than ILP to solve these "easy" instances. CBS-DL, MA-DBS(10), and MA-DBS(100) use the least time and outperform ILP by up to a factor of 6. In some cases, the running times are smaller for larger numbers of agents because fewer (and "easier") instances are solved by all algorithms.
The MEDIUM domain. Figure 2 reports the same statistics for MEDIUM in the same format as for SMALL. Figure 2 (top left) plots the success rates. ILP has the highest success rates for small numbers of agents but the lowest success rates for large numbers of agents. MA-DBS(10) performs the best for large numbers of agents. Figure 2 (top right) plots the average running times over all 50 instances. ILP has the longest running times. MA-DBS(10) performs the best in general. Finally, the table in Figure 2 reports the average running times over the instances that are solved by all six algorithms. ILP performs the worst. CBS-DL has the smallest running times and outperforms ILP by up to a factor of 7.
The LARGE domain. Figure 3 reports the same statistics for LARGE in the same format as for SMALL. Figure 3 (top left) plots the success rates. CBS-DL, MA-DBS(10), and MA-DBS(100) have the best success rates. ILP has the worst success rates. Figure 3 (top right) plots the average running times over all 50 instances. MA-DBS(10) performs the best. ILP performs the worst. The table in Figure 3 reports the average running times over the instances that are solved by all six algorithms. MA-DBS(10) and CBS-DL perform the best (very close to each other) and outperform ILP by up to a factor of 9.
Summary of Experimental Results. For the same number of agents, SMALL has higher agent density, more tightly coupled agents, and shorter planning horizons than MEDIUM and LARGE. ILP outperforms the search-based algorithms for SMALL because the size of the ILP formulation is small. As the deadline and grid size increase, the size of the ILP formulation and the running time required to solve it increase significantly.
Among the search-based algorithms, on the other hand, there seems to be a spectrum with DBS and CBS-DL at the two extremes. DBS has higher success rates than CBS-DL for SMALL, while CBS-DL has significantly higher success rates than DBS for MEDIUM and LARGE. CBS-DL uses much less time than DBS on the instances that are solved by all algorithms. MA-DBS seems to balance between CBS-DL and DBS: (a) MA-DBS(0) is more similar to DBS because it merges agents into meta-agents more frequently, which can result in a large meta-agent containing many agents that must be handled by DBS on the low level; and (b) MA-DBS(10) and MA-DBS(100) are more similar to CBS-DL because they merge agents less frequently and their searches mostly remain within the CBS-DL framework.
5 Conclusions and Future Work
We formalized MAPF-DL, a new variant of MAPF. Theoretically, we proved that MAPF-DL is NP-hard to solve optimally. We presented two families of optimal MAPF-DL algorithms, one based on an ILP formulation and one based on combinatorial search techniques. Our experimental results show that each of them performs best in different scenarios. We suggest the following future directions: (1) develop and compare new MAPF-DL algorithms, for example, A*-, ASP-, and SAT-based algorithms; (2) study important generalizations of MAPF-DL (for example, when agents have different priorities) more deeply; (3) study the combinatorial difference between MAPF-DL and MAPF; and (4) explore different merge criteria for MA-DBS.
References
[Barer et al. 2014] M. Barer, G. Sharon, R. Stern, and A. Felner. Suboptimal variants of the conflict-based search algorithm for the multi-agent pathfinding problem. In SoCS, pages 19–27, 2014.
[Boyarski et al. 2015] E. Boyarski, A. Felner, R. Stern, G. Sharon, D. Tolpin, O. Betzalel, and S. E. Shimony. ICBS: Improved conflict-based search algorithm for multi-agent pathfinding. In IJCAI, pages 740–746, 2015.
[Cohen et al. 2016] L. Cohen, T. Uras, T. K. S. Kumar, H. Xu, N. Ayanian, and S. Koenig. Improved solvers for bounded-suboptimal multi-agent path finding. In IJCAI, pages 3067–3074, 2016.
[de Wilde et al. 2013] B. de Wilde, A. W. ter Mors, and C. Witteveen. Push and rotate: Cooperative multi-agent path planning. In AAMAS, pages 87–94, 2013.
[Erdem et al. 2013] E. Erdem, D. G. Kisa, U. Oztok, and P. Schueller. A general formal framework for pathfinding problems with multiple agents. In AAAI, pages 290–296, 2013.
[Felner et al. 2017] A. Felner, R. Stern, S. E. Shimony, E. Boyarski, M. Goldenberg, G. Sharon, N. R. Sturtevant, G. Wagner, and P. Surynek. Search-based optimal solvers for the multi-agent pathfinding problem: Summary and challenges. In SoCS, pages 29–37, 2017.

[Felner et al. 2018] A. Felner, J. Li, E. Boyarski, H. Ma, L. Cohen, T. K. S. Kumar, and S. Koenig. Adding heuristics to conflict-based search for multi-agent path finding. In ICAPS, 2018.
[Goldenberg et al. 2014] M. Goldenberg, A. Felner, R. Stern, G. Sharon, N. R. Sturtevant, R. C. Holte, and J. Schaeffer. Enhanced partial expansion A*. Journal of Artificial Intelligence Research, 50:141–187, 2014.
[Hönig et al. 2016a] W. Hönig, T. K. S. Kumar, L. Cohen, H. Ma, H. Xu, N. Ayanian, and S. Koenig. Multi-agent path finding with kinematic constraints. In ICAPS, pages 477–485, 2016.
[Hönig et al. 2016b] W. Hönig, T. K. S. Kumar, H. Ma, N. Ayanian, and S. Koenig. Formation change for robot groups in occluded environments. In IROS, pages 4836–4842, 2016.
[IBM 2011] IBM. IBM ILOG CPLEX Optimization Studio CPLEX User's Manual, 2011.
[Luna and Bekris 2011] R. Luna and K. E. Bekris. Push and Swap: Fast cooperative path-finding with completeness guarantees. In IJCAI, pages 294–300, 2011.
[Ma and Koenig 2016] H. Ma and S. Koenig. Optimal target assignment and path finding for teams of agents. In AAMAS, pages 1144–1152, 2016.
[Ma and Koenig 2017] H. Ma and S. Koenig. AI buzzwords explained: Multi-agent path finding (MAPF). AI Matters, 3(3):15–19, 2017.
[Ma et al. 2016a] H. Ma, S. Koenig, N. Ayanian, L. Cohen, W. Hönig, T. K. S. Kumar, T. Uras, H. Xu, C. Tovey, and G. Sharon. Overview: Generalizations of multi-agent path finding to real-world scenarios. In IJCAI-16 Workshop on Multi-Agent Path Finding, 2016.
[Ma et al. 2016b] H. Ma, C. Tovey, G. Sharon, T. K. S. Kumar, and S. Koenig. Multi-agent path finding with payload transfers and the package-exchange robot-routing problem. In AAAI, pages 3166–3173, 2016.
[Ma et al. 2017a] H. Ma, W. Hönig, L. Cohen, T. Uras, H. Xu, T. K. S. Kumar, N. Ayanian, and S. Koenig. Overview: A hierarchical framework for plan generation and execution in multi-robot systems. IEEE Intelligent Systems, 32(6):6–12, 2017.
[Ma et al. 2017b] H. Ma, T. K. S. Kumar, and S. Koenig. Multi-agent path finding with delay probabilities. In AAAI, pages 3605–3612, 2017.
[Ma et al. 2017c] H. Ma, J. Li, T. K. S. Kumar, and S. Koenig. Lifelong multi-agent path finding for online pickup and delivery tasks. In AAMAS, pages 837–845, 2017.
[Ma et al. 2017d] H. Ma, J. Yang, L. Cohen, T. K. S. Kumar, and S. Koenig. Feasibility study: Moving non-homogeneous teams in congested video game environments. In AIIDE, pages 270–272, 2017.
[Ma et al. 2018] H. Ma, G. Wagner, A. Felner, J. Li, T. K. S. Kumar, and S. Koenig. Multi-agent path finding with deadlines: Preliminary results. In AAMAS, 2018.
[Morris et al. 2016] R. Morris, C. Pasareanu, K. Luckow, W. Malik, H. Ma, S. Kumar, and S. Koenig. Planning, scheduling and monitoring for airport surface operations. In AAAI-16 Workshop on Planning for Hybrid Systems, 2016.
[Nguyen et al. 2017] V. Nguyen, P. Obermeier, T. C. Son, T. Schaub, and W. Yeoh. Generalized target assignment and path finding using answer set programming. In IJCAI, pages 1216–1223, 2017.
[Sharon et al. 2013] G. Sharon, R. Stern, M. Goldenberg, and A. Felner. The increasing cost tree search for optimal multi-agent pathfinding. Artificial Intelligence, 195:470–495, 2013.
[Sharon et al. 2015] G. Sharon, R. Stern, A. Felner, and N. R. Sturtevant. Conflict-based search for optimal multi-agent pathfinding. Artificial Intelligence, 219:40–66, 2015.
[Silver 2005] D. Silver. Cooperative pathfinding. In AIIDE, pages 117–122, 2005.
[Standley and Korf 2011] T. S. Standley and R. E. Korf. Complete algorithms for cooperative pathfinding problems. In IJCAI, pages 668–673, 2011.
[Sturtevant and Buro 2006] N. R. Sturtevant and M. Buro. Improving collaborative pathfinding using map abstraction. In AIIDE, pages 80–85, 2006.
[Surynek et al. 2016] P. Surynek, A. Felner, R. Stern, and E. Boyarski. Efficient SAT approach to multi-agent path finding under the sum of costs objective. In ECAI, pages 810–818, 2016.
[Surynek 2015] P. Surynek. Reduced time-expansion graphs and goal decomposition for solving cooperative path finding sub-optimally. In IJCAI, pages 1916–1922, 2015.
[Tovey 1984] C. A. Tovey. A simplified NP-complete satisfiability problem. Discrete Applied Mathematics, 8:85–90, 1984.
[Veloso et al. 2015] M. Veloso, J. Biswas, B. Coltin, and S. Rosenthal. CoBots: Robust symbiotic autonomous mobile service robots. In IJCAI, pages 4423–4429, 2015.
[Wagner and Choset 2015] G. Wagner and H. Choset. Subdimensional expansion for multirobot path planning. Artificial Intelligence, 219:1–24, 2015.
[Wang and Botea 2011] K. Wang and A. Botea. MAPP: A scalable multi-agent path planning algorithm with tractability and completeness guarantees. Journal of Artificial Intelligence Research, 42:55–90, 2011.
[Wurman et al. 2008] P. R. Wurman, R. D'Andrea, and M. Mountz. Coordinating hundreds of cooperative, autonomous vehicles in warehouses. AI Magazine, 29(1):9–20, 2008.
[Yu and LaValle 2013a] J. Yu and S. M. LaValle. Planning optimal paths for multiple robots on graphs. In ICRA, pages 3612–3617, 2013.
[Yu and LaValle 2013b] J. Yu and S. M. LaValle. Structure and intractability of optimal multi-robot path planning on graphs. In AAAI, pages 1444–1449, 2013.