Introduction
Finding a shortest path in a graph is a classical problem in computer science with numerous applications, including robot motion planning, digital entertainment, and logistics. A* [Hart, Nilsson, and Raphael 1968] and Dijkstra's algorithm [Dijkstra 1959] are well-known methods for solving this kind of task. Finding a path becomes more challenging in the presence of dynamic obstacles that move through the environment, blocking vertices or prohibiting movement between some of them at specific time ranges. A solution in such a scenario is a plan that consists of a sequence of actions, where an action is either to move from one vertex to an adjacent one, or to wait at a vertex for some time. Applying textbook A* or Dijkstra's algorithm in this setting is not straightforward. First, the set of actions available at a vertex depends on the current time, due to the dynamic obstacles. Second, there are potentially infinitely many wait actions, depending on how long one chooses to wait.
To address this problem, the Safe Interval Path Planning (SIPP) algorithm was introduced [Phillips and Likhachev 2011]. SIPP computes, for the vertices of the graph, a set of safe intervals in which it is possible to occupy them without colliding with the dynamic obstacles. Then, it runs an A* search in a different graph in which each vertex represents a pair of a vertex in the original graph and a safe time interval. SIPP is complete and returns optimal solutions. It has been successfully applied in a range of domains, including robot motion planning and multi-agent path finding [Araki et al. 2017; Andreychuk et al. 2019; Cohen et al. 2019]. In such applications, a common requirement is to trade off solution optimality in order to obtain a solution faster. To control this trade-off, we explore bounded-suboptimal versions of SIPP. A bounded-suboptimal algorithm accepts a parameter w ≥ 1 and returns a solution whose cost is at most w times the cost of an optimal solution. Since SIPP is based on A*, it is natural to apply the same frameworks used for creating a bounded-suboptimal A*. However, the SIPP search space has certain properties that prevent a straightforward application of frameworks such as WA* [Pohl 1970]. To address this, Narayanan, Phillips, and Likhachev (2012) proposed a bounded-suboptimal SIPP implementation.
In this work we revisit the assumptions of this algorithm and propose two alternative bounded-suboptimal SIPP algorithms: one based on the WA* framework but allowing node re-expansions (see description later), and another based on the focal search framework [Pearl and Kim 1982]. We analyze these algorithms and compare them experimentally. The results show that each algorithm has its strengths and weaknesses, and the choice of which algorithm should be used depends mainly on the value of w.
Problem Statement
Consider a mobile agent that navigates in an environment represented by a weighted graph. The vertices of the graph correspond to the locations the agent may occupy, and the edges correspond to allowed transitions. When at a vertex, the agent can either wait for an arbitrary amount of time, or move to an adjacent vertex along a graph edge. For the sake of simplicity we neglect inertial effects and assume that the agent moves with constant speed, such that the duration of a move action equals the weight of the corresponding edge.
A plan is a sequence of consecutive actions that move the agent from a start vertex to a goal vertex. The cost of a plan is the sum of the durations of its constituent actions. There exist dynamic obstacles that move in the environment and block certain vertices and edges for predefined time intervals, preventing the agent from occupying or moving through them. A plan is called valid if it avoids colliding with all dynamic obstacles. The path planning with dynamic obstacles problem is the problem of finding a valid plan for given start and goal locations.
An optimal solution to this problem is a lowest-cost valid plan from start to goal. The suboptimality of a solution is the ratio between its cost and the cost of an optimal solution. In this work we are interested in finding solutions whose suboptimality is bounded by a given scalar w ≥ 1. E.g., by setting w to 1.1 we aim to find a solution whose cost is no more than 10% over the cost of an optimal solution.
Background
A*
A* is a heuristic search algorithm for finding a path in a state space represented as a graph. It maintains two lists of states, OPEN and CLOSED. OPEN contains all states that were generated but not expanded and CLOSED contains all previously expanded states. Initially, CLOSED is empty and OPEN contains only the initial state. Every state s is associated with two values: g(s), which is the cost of the lowest-cost path found so far from the initial state to s, and h(s), which is a heuristic estimate of the cost of the lowest-cost path from s to a goal. In every iteration the state with the minimal f(s) = g(s) + h(s) is popped out of OPEN and expanded. Expanding a state means generating its successors, which are the state's neighbors in the state space, and inserting them into OPEN. A* halts when it expands a goal state. Note that if a state s generates a state s' that was already generated, then we must check if g(s') can be lowered by reaching s' via s. If this happens and s' is no longer in OPEN, then it must be reinserted into OPEN. Consequently, a node may be re-expanded multiple times. In fact, in some extreme cases the number of expanded states can be exponential in the size of the state space [Martelli 1977]. A* has several attractive properties that are relevant for this paper. First, if h is admissible, that is, it always outputs a lower bound on the value it estimates, then A* guarantees finding an optimal solution (if one exists). Second, if the heuristic is consistent, then A* will never re-expand a state. A heuristic is consistent if for every pair of states s and s' such that s generates s' it holds that h(s) ≤ c(s, s') + h(s'), where c(s, s') is the cost of the edge from s to s'.
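The A* loop described above, including the re-insertion of states whose g-value improves, can be sketched as follows. This is a minimal illustration, not the paper's implementation; `neighbors` (yielding successor/edge-cost pairs) and `h` are assumed callbacks.

```python
import heapq
import itertools

def astar(start, goal, neighbors, h):
    """Minimal A* sketch. neighbors(s) yields (successor, edge_cost)
    pairs and h(s) is the heuristic estimate from s to the goal."""
    g = {start: 0.0}
    tie = itertools.count()              # tie-breaker so states are never compared
    open_list = [(h(start), next(tie), start)]
    while open_list:
        f, _, s = heapq.heappop(open_list)
        if f > g[s] + h(s):
            continue                     # stale OPEN entry: a cheaper path was found
        if s == goal:
            return g[s]                  # cost of the lowest-cost path to the goal
        for s2, c in neighbors(s):
            if s2 not in g or g[s] + c < g[s2]:
                g[s2] = g[s] + c         # improved path: (re)insert s2 into OPEN
                heapq.heappush(open_list, (g[s2] + h(s2), next(tie), s2))
    return None                          # OPEN is empty: no path exists
```

Rather than deleting superseded OPEN entries, the sketch drops them lazily when popped, which is a common heapq idiom.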
SIPP is a modification of A* for the path planning with dynamic obstacles problem. The core idea of SIPP is to group consecutive time moments into time intervals and to associate every vertex with one or more safe intervals. A safe interval for a vertex is "a contiguous period of time … during which there is no collision and it is in collision one timestep prior and one timestep after the period" [Phillips and Likhachev 2011]. SIPP performs an A* search over a state space in which a state s is a tuple (v, interval), where v is a vertex in the underlying graph and interval is a safe interval for v. Note that states with the same vertex but different, non-overlapping time intervals might exist in the search space. g(s) in SIPP is the earliest time an agent can reach v within the designated safe interval. SIPP uses g(s) to compute the set of successors of s. I.e., when a state s generates a state s', SIPP first tries to commit a move at time g(s), that is, without any waiting at v. If such a transition results in a collision with a dynamic obstacle, then SIPP augments the move with a wait action of minimal duration. That is, the agent is planned to wait at v no longer than is needed to avoid the collision. Therefore, g(s') is set to the resulting earliest arrival time at v', composed of g(s), the minimal wait duration, and the cost of the traversed edge, where the wait duration is computed taking dynamic obstacles into account.^1

^1 The method to compute the wait duration is domain dependent.
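The successor computation just described can be sketched as follows. This is a simplified illustration that handles vertex collisions only and assumes `edges` and `safe_intervals` lookup tables (hypothetical names); the full algorithm also checks collisions on edges.

```python
def successors(v, interval, arrival, edges, safe_intervals):
    """Sketch of SIPP successor generation (vertex collisions only).
    interval is the (lb, ub) safe interval of v, arrival is the g-value
    of the current state, edges[v] lists (neighbor, cost) pairs, and
    safe_intervals[u] is a sorted list of (lb, ub) intervals of vertex u."""
    _, ub = interval
    for v2, c in edges[v]:
        earliest = arrival + c               # leave immediately, no waiting
        latest = ub + c                      # wait at v until its interval ends
        for lo, hi in safe_intervals[v2]:
            t = max(earliest, lo)            # add the minimal wait needed, if any
            if t <= min(latest, hi):
                yield (v2, (lo, hi)), t      # t is the g-value of the successor
```

Each successor pairs a neighbor vertex with one of its safe intervals, and its g-value is the earliest arrival time inside that interval.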
h(s) in SIPP estimates the cost of the lowest-cost plan to move the agent from v to a goal state. SIPP requires h to be both admissible and consistent. Beyond these changes, SIPP uses regular A*: it extracts from OPEN the state with the lowest f-value and it halts when a goal has been expanded.
SIPP shares several of the desirable properties of A*. It guarantees finding a solution if one exists (otherwise, it reports failure) and this solution is optimal. Both completeness and optimality rely on the following property: when SIPP expands a state s = (v, interval), g(s) is the earliest possible arrival time at v within that interval (Theorem 2 in [Phillips and Likhachev 2011]). This property guarantees that the set of successors of every generated state is maximal (Theorem 1 in [Phillips and Likhachev 2011]). Consequently, expanding a state with the goal vertex means the lowest-cost plan to it has been found and all relevant states have been generated.
Bounded-Suboptimal SIPP
One of the well-known ways to make A* bounded-suboptimal is to use an inflated heuristic function during the search. WA* [Pohl 1970] is a prominent example of this approach. WA* is similar to A* except that it chooses for expansion a node that minimizes f_w(s) = g(s) + w · h(s), where w is the desired suboptimality bound. When a goal state is expanded, WA* is guaranteed to have found a solution whose suboptimality is at most w.
Unlike A*, when WA* expands a state s it may be that g(s) is not the cost of the lowest-cost path to s. Thus, a state may be expanded more than once. Nevertheless, if h is consistent, WA* is guaranteed to find a solution within the desired suboptimality bound even without re-expanding a single state [Likhachev, Gordon, and Thrun 2004].
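The change from A* to WA* can be sketched as a one-line change of the OPEN priority, with an optional flag for the no-re-expansion policy just mentioned. This is an illustrative sketch, not the paper's implementation; `neighbors` and `h` are assumed callbacks.

```python
import heapq
import itertools

def weighted_astar(start, goal, neighbors, h, w=1.5, reexpand=False):
    """WA* sketch: identical to A* except that OPEN is ordered by
    f_w(s) = g(s) + w * h(s). With reexpand=False (and a consistent h),
    closed states are never expanded again, which still preserves the
    w-suboptimality bound [Likhachev, Gordon, and Thrun 2004]."""
    g = {start: 0.0}
    tie = itertools.count()          # tie-breaker so states are never compared
    open_list = [(w * h(start), next(tie), start)]
    closed = set()
    while open_list:
        _, _, s = heapq.heappop(open_list)
        if s == goal:
            return g[s]
        if s in closed and not reexpand:
            continue                 # skip duplicates: no re-expansions allowed
        closed.add(s)
        for s2, c in neighbors(s):
            if s2 not in g or g[s] + c < g[s2]:
                g[s2] = g[s] + c
                heapq.heappush(open_list, (g[s2] + w * h(s2), next(tie), s2))
    return None
```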
Weighted SIPP (WSIPP)
Let Weighted SIPP (WSIPP) be SIPP that uses an inflated heuristic like WA*. WSIPP is not guaranteed to find a bounded-suboptimal solution, or in fact any solution, if it does not re-expand states. For an example, see Figure 1.^2 A dynamic obstacle moves between two of the vertices, arriving at the second one at time 10 and staying there forever; thus that vertex has a single safe interval, which ends at time 10, while every other state has a single unbounded safe interval. After the first expansion two states are generated, and the one with the lower inflated f-value is chosen for expansion even though it does not lie on the optimal plan. Expanding it generates the next state with a g-value higher than the earliest possible arrival time. When that state is expanded it has no successors, because with this inflated g-value the goal cannot be reached within the closing safe interval. To find a solution, we must expand the state that was passed over and then re-expand its successor to update its g-value to 6.

^2 The rectangles above the vertices are explained later.
Weighted SIPP with Duplicate States (WSIPP_d)
The creators of SIPP identified this problem and proposed the following. When the initial state is expanded, we create two copies of every state it generates: an optimal copy, which is prioritized in OPEN according to g(s) + h(s), and a suboptimal copy, which is prioritized in OPEN according to g(s) + w · h(s). Throughout the search, whenever an optimal copy is expanded, we again create two copies of every state it generates. We refer to this algorithm as WSIPP with Duplicate States (WSIPP_d). WSIPP_d preserves the desirable property of WA*: it guarantees returning a bounded-suboptimal solution while avoiding re-expansions. That is, every copy of a state is expanded at most once.
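The duplication step above can be sketched with a small helper (`push_copies` is a hypothetical name, not from the paper):

```python
import heapq
import itertools

def push_copies(open_list, tie, state, g_val, h_val, w):
    """WSIPP_d sketch: each generated state is inserted into OPEN twice --
    an optimal copy prioritized by g + h and a suboptimal copy prioritized
    by g + w*h. The label records which copy an entry is; only optimal
    copies trigger further duplication when expanded."""
    heapq.heappush(open_list, (g_val + h_val, next(tie), state, "optimal"))
    heapq.heappush(open_list, (g_val + w * h_val, next(tie), state, "suboptimal"))
```

Note that for a state with h = 0 both copies receive the same priority, which hints at the overhead discussed later for w close to one.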
Weighted SIPP with Re-expansions (WSIPP_r)
Interestingly, the creators of SIPP did not explore the possibility of allowing unlimited re-expansions instead of duplicating each state. That is, if a state in CLOSED is generated with a lower g-value, then it is reinserted into OPEN. Thus, every state will eventually be re-expanded with the minimal g-value that guarantees finding a solution. We call this algorithm WSIPP with Re-expansions (WSIPP_r).
WSIPP_r is complete and is guaranteed to find a bounded-suboptimal solution, but the number of re-expansions it performs can be exponential [Martelli 1977]. Nevertheless, in many domains the number of re-expansions is manageable [Sepetnitsky, Felner, and Stern 2016].
In fact, WSIPP_r may expand fewer states than WSIPP_d. Consider Figure 1. The rectangles over the vertices show the states expanded by WSIPP_d, where the optimal and suboptimal copies of each state are marked by a solid and a dashed line, respectively; the values inside the rectangles are g-values. After the initial expansion, WSIPP_d first expands a chain of suboptimal copies, updating g-values along the way, then switches to the optimal copies once their f-values become the lowest in OPEN, and finally switches back to the suboptimal copies to reach the goal, expanding both copies of several states on the way. In the same setting WSIPP_r reaches the goal with fewer expansions and no re-expansions at all.
Focal SIPP
Focal Search [Pearl and Kim 1982] is a framework for bounded-suboptimal search that maintains a sublist of OPEN, called FOCAL. FOCAL is the set of every state s in OPEN for which f(s) ≤ w · f_min, where f_min is the smallest f-value over all states in OPEN. In every iteration, a state is chosen from FOCAL that minimizes a secondary heuristic h_F. The latter does not have to be consistent or even admissible. A well-known secondary heuristic is the number of hops-to-the-goal [Wilt and Ruml 2014], which ignores edge costs. Focal Search terminates when a goal state is in FOCAL or when OPEN is empty.
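The selection rule of Focal Search can be sketched as follows (an illustration only; `pop_from_focal` is a hypothetical helper, and OPEN entries are assumed to be (f, tie, state) tuples):

```python
import heapq

def pop_from_focal(open_list, h_focal, w):
    """Focal Search sketch: FOCAL contains every OPEN entry whose f-value
    is within w * f_min, and the entry minimizing the secondary heuristic
    h_focal is expanded next. A real implementation maintains FOCAL
    incrementally; it is rebuilt here for clarity."""
    f_min = open_list[0][0]                      # OPEN is a heap ordered by f
    focal = [e for e in open_list if e[0] <= w * f_min]
    best = min(focal, key=lambda e: h_focal(e[2]))
    open_list.remove(best)                       # O(n); fine for a sketch
    heapq.heapify(open_list)
    return best
```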
We use the name Focal SIPP (FocalSIPP) to refer to the SIPP version that uses Focal Search instead of A* and allows unlimited re-expansions. FocalSIPP is complete and guarantees finding a bounded-suboptimal solution. However, due to re-expansions its runtime may be exponential in the number of states in the search space, just like WSIPP_r.
Discussion
WSIPP_d, WSIPP_r, and FocalSIPP are all guaranteed to return a bounded-suboptimal solution. However, the cost of the solution they return may differ, since it can be any value between the optimal cost and w times that cost. Perhaps more interesting, though, is the expected runtime of each algorithm for a given value of w.
Consider setting w to be very close to one. In this case, WSIPP_d is expected to perform poorly, since two copies of every generated state are introduced and both are likely to have nearly the same priority in OPEN, since g(s) + h(s) and g(s) + w · h(s) are very close if w is close to one, especially at the beginning of the search. In contrast, WSIPP_r is expected to perform well, as it is almost equivalent to A*, which is known to expand the minimal number of states [Dechter and Pearl 1985]. On the other hand, consider the behavior of FocalSIPP for very large values of w. In such a case almost all states in OPEN will be in FOCAL and FocalSIPP basically performs a greedy best-first search towards a goal, which is specifically designed to reach a goal state quickly. Thus, we expect that in these cases FocalSIPP will work well. The experimental results below confirm these expectations.
Experimental Results
We evaluated the considered algorithms on a range of different grid maps: an empty map, a map containing 10 rectangular obstacles that resemble a warehouse (Warehouse), a map composed of square rooms connected by passages (Rooms), and a game map.^3 Grid connectivity varied from 8-connected to 32-connected. Each map was populated with 250 dynamic obstacles that move between random cells. 100 different pairs of start and goal locations for an agent were chosen randomly on each map. Two action models were considered: one that assumes only translations (denoted "w/o rotations") and one that also considers rotations (denoted "w rotations"), i.e., if moving from one cell to the other requires aligning the heading, the agent has to rotate, which takes time. Moving speed was 1 cell per 1 time unit; rotation was performed at a fixed angular speed. We used the Euclidean distance, scaled properly with the agent's speed, as the heuristic. The secondary heuristic for FocalSIPP, h_F, was set to be the length of the shortest path to the goal ignoring all dynamic obstacles and edge costs, which we computed offline. The suboptimality bound w was varied over a range of values.

^3 The Rooms map and the game map (den520) were taken from the movingai repository [Sturtevant 2012]. Our code can be found at https://github.com/PathPlanning/SuboptimalSIPP.
In each run we measured the algorithm's runtime and solution cost, relative to SIPP, and the number of re-expansions. While WSIPP_d prohibits re-expansions, it can generate two copies of the same state; if both of them were expanded, we count this as a re-expansion.
Fig. 2 shows the results of our experiments on two representative setups: Warehouse w/o rotations (32-connected) and Rooms w rotations (32-connected). We chose to present results for only these two setups due to space limitations. These specific maps represent two possible "types" of worlds: Warehouse is a relatively open environment populated with isolated obstacles (similar to city maps, empty maps, etc.), while Rooms is a corridor-like environment with a large number of narrow passages (similar to mazes, indoor maps, etc.). The heuristic we used is relatively accurate for Warehouse but not for Rooms, allowing us to show the impact of heuristic accuracy. Moreover, in the w rotations model, the heuristic is even less accurate.
In general, we observed similar trends across all the considered domains and action models. These trends, supported by Fig. 2, are the following: (1) WSIPP_r is better for small values of w that are close to 1, (2) FocalSIPP is better or the same for higher w, and (3) WSIPP_d is better or the same for mid-range w. For mid-range suboptimality bounds the results also show a notable "spike" in the number of re-expansions and runtime for FocalSIPP. We hypothesize that this occurs because the bound is large enough to cause finding suboptimal paths to generated nodes but not large enough for bounded-suboptimal solutions to be found without re-expanding these nodes.
Figure 3 shows the trade-off between solution cost and runtime obtained by WSIPP_d, WSIPP_r, and FocalSIPP. Each data point represents the results of an algorithm with a specific w value, where the x value is the average runtime and the y value is the solution suboptimality. The results again show the same trends as above. FocalSIPP produces solutions the fastest, but their cost is the worst. WSIPP_r is preferable if one wants a solution very close to optimal, while WSIPP_d is more suitable for mid-range suboptimality.
Summary
We explored three bounded-suboptimal versions of SIPP and analyzed their pros and cons. Experimental evaluation in different settings shows that the previously proposed bounded-suboptimal SIPP, WSIPP_d, is frequently outperformed by the other algorithms, e.g., by WSIPP_r, which runs WA* but allows re-expansions. An appealing direction of future research is to explore more sophisticated techniques for bounded-suboptimal SIPP, as well as to explore its applications in anytime planning and multi-agent path finding.
Acknowledgments
This work was partially funded by RFBR (project 18-37-20032). Anton Andreychuk is supported by the "RUDN University Program 5-100". Roni Stern is supported by ISF grant #210/17.
References

[Andreychuk et al. 2019] Andreychuk, A.; Yakovlev, K.; Atzmon, D.; and Stern, R. 2019. Multi-agent pathfinding with continuous time. In International Joint Conference on Artificial Intelligence (IJCAI), 39–45.
[Araki et al. 2017] Araki, B.; Strang, J.; Pohorecky, S.; Qiu, C.; Naegeli, T.; and Rus, D. 2017. Multi-robot path planning for a swarm of robots that can both fly and drive. In IEEE International Conference on Robotics and Automation (ICRA), 5575–5582.
[Cohen et al. 2019] Cohen, L.; Uras, T.; Kumar, T. S.; and Koenig, S. 2019. Optimal and bounded-suboptimal multi-agent motion planning. In Symposium on Combinatorial Search (SoCS).
[Dechter and Pearl 1985] Dechter, R., and Pearl, J. 1985. Generalized best-first search strategies and the optimality of A*. Journal of the ACM 32(3):505–536.
[Dijkstra 1959] Dijkstra, E. W. 1959. A note on two problems in connexion with graphs. Numerische Mathematik 1(1):269–271.
[Hart, Nilsson, and Raphael 1968] Hart, P. E.; Nilsson, N. J.; and Raphael, B. 1968. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics 4(2):100–107.
[Likhachev, Gordon, and Thrun 2004] Likhachev, M.; Gordon, G. J.; and Thrun, S. 2004. ARA*: Anytime A* with provable bounds on suboptimality. In Advances in Neural Information Processing Systems (NIPS), 767–774.
[Martelli 1977] Martelli, A. 1977. On the complexity of admissible search algorithms. Artificial Intelligence 8(1):1–13.
[Narayanan, Phillips, and Likhachev 2012] Narayanan, V.; Phillips, M.; and Likhachev, M. 2012. Anytime safe interval path planning for dynamic environments. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 4708–4715.
[Pearl and Kim 1982] Pearl, J., and Kim, J. H. 1982. Studies in semi-admissible heuristics. IEEE Transactions on Pattern Analysis and Machine Intelligence 4(4):392–399.
[Phillips and Likhachev 2011] Phillips, M., and Likhachev, M. 2011. SIPP: Safe interval path planning for dynamic environments. In IEEE International Conference on Robotics and Automation (ICRA), 5628–5635.
[Pohl 1970] Pohl, I. 1970. Heuristic search viewed as path finding in a graph. Artificial Intelligence 1(3–4):193–204.
[Sepetnitsky, Felner, and Stern 2016] Sepetnitsky, V.; Felner, A.; and Stern, R. 2016. Repair policies for not reopening nodes in different search settings. In Symposium on Combinatorial Search (SoCS), 81–88.
[Sturtevant 2012] Sturtevant, N. R. 2012. Benchmarks for grid-based pathfinding. IEEE Transactions on Computational Intelligence and AI in Games 4(2):144–148.
[Wilt and Ruml 2014] Wilt, C. M., and Ruml, W. 2014. Speedy versus greedy search. In Symposium on Combinatorial Search (SoCS).