The FastMap Algorithm for Shortest Path Computations

We present a new preprocessing algorithm for embedding the nodes of a given edge-weighted undirected graph into a Euclidean space. In this space, the Euclidean distance between any two nodes approximates the length of the shortest path between them in the given graph. Later, at runtime, a shortest path between any two nodes can be computed using A* search with the Euclidean distances as heuristic estimates. Our preprocessing algorithm, dubbed FastMap, is inspired by the Data Mining algorithm of the same name and runs in near-linear time. Hence, FastMap is orders of magnitude faster than competing approaches that produce a Euclidean embedding using Semidefinite Programming. Our FastMap algorithm also produces admissible and consistent heuristics and therefore guarantees the generation of optimal paths. Moreover, FastMap works on general undirected graphs for which many traditional heuristics, such as the Manhattan Distance heuristic, are not always well defined. Empirically too, we demonstrate that the FastMap heuristic is competitive with other state-of-the-art heuristics like the Differential heuristic.


Introduction and Related Work

Shortest path computations commonly occur in the inner procedures of many AI programs. In video games, for example, a large fraction of CPU cycles is spent on shortest path computations [Uras and Koenig2015]. Many other tasks in AI, including motion planning [LaValle2006], temporal reasoning [Dechter2003], and decision making [Russell and Norvig2009], also involve finding and reasoning about shortest paths. While Dijkstra’s algorithm [Dijkstra1959] can be used to compute shortest paths in polynomial time, speeding up shortest path computations allows one to solve the aforementioned tasks faster. One way to do that is to use A* search with an informed heuristic [Hart et al.1968].

A perfect heuristic is one that returns the true distance between any two nodes in a given graph. A* with such a heuristic and proper tie-breaking is guaranteed to expand nodes only on a shortest path between the given start and goal nodes. In general, computing the perfect heuristic between two nodes is as hard as computing the shortest path between them. Hence, A* search benefits from a perfect heuristic only if it is computed offline. However, precomputing all pairwise distances is not only time-intensive but also requires a prohibitive $O(n^2)$ memory, where $n$ is the number of nodes. The memory requirements for storing all-pairs shortest paths data can be somewhat addressed through compression [Botea and Harabor2013, Strasser et al.2015].

Existing methods for preprocessing a given graph (without precomputing all pairwise distances) can be grouped into the following categories: Hierarchical abstractions that yield suboptimal paths have been used to reduce the size of the search space by abstracting groups of nodes [Botea et al.2004, Sturtevant and Buro2005]. More informed heuristics [Björnsson and Halldórsson2006, Cazenave2006, Sturtevant et al.2009] focus A* searches better, resulting in fewer expanded states. Hierarchies can also be used to derive heuristics during the search [Leighton et al.2008, Holte et al.1994]. Dead-end detection and other pruning methods [Björnsson and Halldórsson2006, Goldenberg et al.2010, Pochter et al.2010] identify areas of the graph that do not need to be searched to find shortest paths. Search with contraction hierarchies [Geisberger et al.2008] is an optimal hierarchical method, where every level of the hierarchy contains only a single node. It has been shown to be efficient on road networks but seems to be less efficient on graphs with higher branching factors, such as grid-based game maps [Storandt2013]. N-level graphs [Uras and Koenig2014], constructed from undirected graphs by partitioning the nodes into levels, also allow for significant pruning during the search.

A different approach that does not rely on preprocessing of the graph is to use some notion of a geometric distance between two nodes as a heuristic estimate of the distance between them. One such heuristic for gridworlds is the Manhattan Distance heuristic. (In a 4-neighbor 2D gridworld, for example, the Manhattan Distance between two cells $(x_1, y_1)$ and $(x_2, y_2)$ is $|x_1 - x_2| + |y_1 - y_2|$; generalizations exist for 8-neighbor 2D and 3D gridworlds.) For many gridworlds, A* search using the Manhattan Distance heuristic outperforms Dijkstra’s algorithm. However, in complicated 2D/3D gridworlds like mazes, the Manhattan Distance heuristic may not be sufficiently informed to focus A* searches effectively. Another issue associated with Manhattan Distance-like heuristics is that they are not well defined for general graphs. (Henceforth, whenever we refer to a graph, we mean an edge-weighted undirected graph unless stated otherwise.) For a graph that cannot be conceived in a geometric space, there is no closed-form formula for a “geometric” heuristic for the distance between two nodes because there are no coordinates associated with them.
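To make the closed-form nature of these heuristics concrete, the following is a minimal sketch of the Manhattan and Octile Distance computations for gridworld cells; the function names and the (x, y)-tuple representation of cells are our own choices, not part of any standard API.

    # Closed-form gridworld heuristics; cells are (x, y) integer tuples.

    def manhattan_distance(a, b):
        # Heuristic for 4-neighbor gridworlds.
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def octile_distance(a, b):
        # Heuristic for 8-neighbor gridworlds with unit-cost straight moves
        # and sqrt(2)-cost diagonal moves.
        dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
        return max(dx, dy) + (2 ** 0.5 - 1) * min(dx, dy)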

For a graph that does not already have a geometric embedding in Euclidean space, a preprocessing algorithm can be used to generate one. As described before, at runtime, A* search would then use the Euclidean distance between any two nodes in this space as an estimate for the distance between them in the given graph. One such approach is Euclidean Heuristic Optimization (EHO) [Rayner et al.2011]. EHO guarantees admissibility and consistency and therefore generates shortest paths. However, it requires solving a Semidefinite Program (SDP). SDPs can be solved in polynomial time [Vandenberghe and Boyd1996]. EHO leverages additional structure to solve them in cubic time. Still, a cubic preprocessing time limits the size of the graphs that are amenable to this approach.

The Differential heuristic is another state-of-the-art approach that has the benefit of a near-linear runtime. However, unlike the approach in [Rayner et al.2011], it does not produce an explicit Euclidean embedding. In the preprocessing phase of the Differential heuristic approach, some nodes of the graph are chosen as pivot nodes. The distances between each pivot node and every other node are precomputed and stored [Sturtevant et al.2009]. At runtime, the heuristic between two nodes $a$ and $b$ is given by $|d(a, p) - d(p, b)|$, where $p$ is a pivot node and $d(\cdot, \cdot)$ is the precomputed distance (with multiple pivots, the maximum over the pivots is used). The preprocessing time is linear in the number of pivots times the size of the graph. The required space is linear in the number of pivots times the number of nodes, although a more succinct representation is presented in [Goldenberg et al.2011]. Similar preprocessing techniques are used in Portal-Based True Distance heuristics [Goldenberg et al.2010].
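The following is a minimal sketch of this scheme, not the original implementation; the graph is assumed to be a dictionary mapping each node to a dictionary of neighbors and edge weights, and all identifiers are our own.

    import heapq

    def dijkstra(graph, source):
        # Single-source shortest path distances; graph maps node -> {neighbor: weight}.
        dist = {source: 0.0}
        frontier = [(0.0, source)]
        while frontier:
            d, u = heapq.heappop(frontier)
            if d > dist.get(u, float("inf")):
                continue  # stale heap entry
            for v, w in graph[u].items():
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    heapq.heappush(frontier, (d + w, v))
        return dist

    def preprocess_differential(graph, pivots):
        # Precompute and store the distances from each pivot to every node.
        return [dijkstra(graph, p) for p in pivots]

    def differential_heuristic(pivot_dists, a, b):
        # max over pivots p of |d(a, p) - d(p, b)|; admissible and consistent.
        return max(abs(d[a] - d[b]) for d in pivot_dists)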

In this paper, we present a new preprocessing algorithm, called FastMap, that produces an explicit Euclidean embedding while running in near-linear time. It therefore has the benefits of the small preprocessing time of the Differential heuristic approach and of producing an embedding from which a heuristic between two nodes can be quickly computed using a closed-form formula. FastMap is inspired by the data mining algorithm of the same name [Faloutsos and Lin1995] and is orders of magnitude faster than SDP-based approaches for producing Euclidean embeddings. FastMap also produces admissible and consistent heuristics and therefore guarantees the generation of shortest paths.

The FastMap heuristic has several advantages: First, it is defined for general (undirected) graphs. Second, we observe empirically that, in gridworlds, A* using the FastMap heuristic runs faster than A* using the Manhattan or Octile distance heuristics. A* using the FastMap heuristic runs equally fast or faster than A* using the Differential heuristic, with the same memory resources. The (explicit) Euclidean embedding produced by FastMap also has representational benefits like recovering the underlying manifolds of the graph and/or visualizing them. Moreover, we observe that the FastMap and Differential heuristics have complementary strengths and can be easily combined to generate an even more informed heuristic.

The Origin of FastMap

The FastMap algorithm [Faloutsos and Lin1995] was introduced in the data mining community for automatically generating geometric embeddings of abstract objects. For example, if we are given objects in the form of long DNA strings, multimedia datasets such as voice excerpts or images, or medical datasets such as ECGs or MRIs, there is no geometric space in which these objects can be naturally visualized. However, there is often a well-defined distance function between every pair of objects. For example, the edit distance (the minimum number of insertions, deletions or substitutions needed to transform one string into another) between two DNA strings is well defined even though an individual DNA string cannot be conceptualized in geometric space.

Clustering techniques, such as the $k$-means algorithm, are well studied in machine learning [Alpaydin2010] but cannot be applied directly to domains with abstract objects because they assume that objects are described as points in geometric space. FastMap revives their applicability by first creating a Euclidean embedding for the abstract objects that approximately preserves the pairwise distances between them. Such an embedding also helps to visualize the abstract objects, for example, to aid physicians in identifying correlations between symptoms from medical records.

The data mining FastMap gets as input a complete undirected edge-weighted graph $G = (V, E)$. Each node $v_i \in V$ represents an abstract object $O_i$. Between any two nodes $v_i$ and $v_j$ there is an edge with weight $D(O_i, O_j)$ that corresponds to the given distance between objects $O_i$ and $O_j$. A Euclidean embedding assigns to each object $O_i$ a $\kappa$-dimensional point $p_i \in \mathbb{R}^\kappa$. A good Euclidean embedding is one in which the Euclidean distance between any two points $p_i$ and $p_j$ closely approximates $D(O_i, O_j)$.

One of the early approaches for generating such an embedding is based on the idea of multi-dimensional scaling (MDS) [Torgerson1952]. Here, the overall distortion of the pairwise distances is measured in terms of the “energy” stored in “springs” that connect each pair of objects. MDS, however, requires $O(N^2)$ time, where $N$ is the number of objects, and hence does not scale well in practice. On the other hand, FastMap [Faloutsos and Lin1995] requires only linear time. Both methods embed the objects in a $\kappa$-dimensional space for a user-specified $\kappa$.

FastMap works as follows: In the very first iteration, it heuristically identifies the farthest pair of objects $O_a$ and $O_b$ in linear time. It does this by initially choosing a random object $O_b$ and then choosing $O_a$ to be the farthest object away from $O_b$. It then reassigns $O_b$ to be the farthest object away from $O_a$. Once $O_a$ and $O_b$ are determined, every other object $O_i$ defines a triangle with sides of lengths $d_{ai} = D(O_a, O_i)$, $d_{ab} = D(O_a, O_b)$ and $d_{ib} = D(O_i, O_b)$. Figure 1 shows this triangle. The sides of the triangle define its entire geometry, and the projection of $O_i$ onto the line $O_a O_b$ is given by $x_i = (d_{ai}^2 + d_{ab}^2 - d_{ib}^2)/(2 d_{ab})$. FastMap sets the first coordinate of $p_i$, the embedding of object $O_i$, to $x_i$. In particular, the first coordinate of $p_a$ is $x_a = 0$ and of $p_b$ is $x_b = d_{ab}$. Computing the first coordinates of all objects takes only linear time since the distance between any two objects $O_i$ and $O_j$ for $i, j \notin \{a, b\}$ is never computed.

Figure 1: The three sides of a triangle define its entire geometry. In particular, $x_i = (d_{ai}^2 + d_{ab}^2 - d_{ib}^2)/(2 d_{ab})$.
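For completeness, the projection formula can be recovered from the law of cosines applied to the triangle in Figure 1 (a one-step derivation in the notation above):

$$d_{ib}^2 = d_{ai}^2 + d_{ab}^2 - 2 x_i d_{ab} \quad \Longrightarrow \quad x_i = \frac{d_{ai}^2 + d_{ab}^2 - d_{ib}^2}{2 d_{ab}}.$$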

In the subsequent iterations, the same procedure is followed for computing the remaining coordinates of each object. However, the distance function is adapted for different iterations. For example, after the first iteration, the first coordinates of $O_a$ and $O_b$ are $0$ and $d_{ab}$, respectively. Because these coordinates fully explain the true distance between them, from the second iteration onwards, the rest of $O_a$’s and $O_b$’s coordinates should be identical. Intuitively, this means that the second iteration should mimic the first one on a hyperplane that is perpendicular to the line $O_a O_b$. Figure 2 explains this intuition. Although the hyperplane is never constructed explicitly, its conceptualization implies that the distance function for the second iteration should be changed in the following way: $D'(O_i', O_j')^2 = D(O_i, O_j)^2 - (x_i - x_j)^2$. Here, $O_i'$ and $O_j'$ are the projections of $O_i$ and $O_j$, respectively, onto this hyperplane, and $D'(\cdot, \cdot)$ is the new distance function.

Figure 2: Shows a geometric conceptualization of the recursive step in FastMap. In particular, $D'(O_i', O_j')^2 = D(O_i, O_j)^2 - (x_i - x_j)^2$.
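The following is a compact sketch of the data mining FastMap loop as described above; it is our own rendering, with D standing for the given pairwise distance function. The residual distance on the hyperplane is maintained implicitly through the already-computed coordinates rather than by constructing the hyperplane.

    import random

    def fastmap_embed(objects, D, kappa):
        # Embed abstract objects into R^kappa so that Euclidean distances
        # approximate D; D(i, j) must be computable for any pair of objects.
        coords = {o: [] for o in objects}

        def residual(i, j):
            # D'^2 = D^2 minus the part explained by earlier coordinates.
            d2 = D(i, j) ** 2 - sum((a - b) ** 2 for a, b in zip(coords[i], coords[j]))
            return max(d2, 0.0) ** 0.5

        for _ in range(kappa):
            # Heuristically find the farthest pair (O_a, O_b) in linear time.
            b = random.choice(objects)
            a = max(objects, key=lambda o: residual(b, o))
            b = max(objects, key=lambda o: residual(a, o))
            d_ab = residual(a, b)
            if d_ab == 0.0:
                break
            # Projection of each object onto the line O_a O_b (Figure 1).
            new_x = {o: (residual(a, o) ** 2 + d_ab ** 2 - residual(o, b) ** 2)
                        / (2 * d_ab) for o in objects}
            for o in objects:
                coords[o].append(new_x[o])
        return coords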

FastMap for Shortest Path Computations

In this section, we provide the high-level ideas for how to adapt the data mining FastMap algorithm to shortest path computations. In the shortest path computation problem, we are given a non-negative edge-weighted undirected graph $G = (V, E)$ along with a start node $s$ and a goal node $g$. As a preprocessing technique, we can embed the nodes of $G$ in a Euclidean space. As A* searches for a shortest path from $s$ to $g$, it can use the Euclidean distance from any node $v$ to $g$ as a heuristic value for $v$. The number of node expansions of A* search depends on the informedness of the heuristic which, in turn, depends on the ability of the embedding to preserve the pairwise distances.
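For concreteness, here is a minimal A* sketch that consumes such an embedding at runtime; all identifiers are our own, and with the embedding produced by Algorithm 1 below, the $L_1$ distance would replace the Euclidean distance in h.

    import heapq

    def astar(graph, embedding, start, goal):
        # graph maps node -> {neighbor: weight}; embedding maps node -> coordinate list.
        def h(v):
            # Euclidean distance between embedded points.
            return sum((a - b) ** 2 for a, b in zip(embedding[v], embedding[goal])) ** 0.5

        g = {start: 0.0}
        parent = {start: None}
        frontier = [(h(start), start)]
        while frontier:
            f, u = heapq.heappop(frontier)
            if f > g[u] + h(u):
                continue  # stale heap entry; a better path to u was found
            if u == goal:
                path = []
                while u is not None:
                    path.append(u)
                    u = parent[u]
                return list(reversed(path)), g[goal]
            for v, w in graph[u].items():
                if g[u] + w < g.get(v, float("inf")):
                    g[v] = g[u] + w
                    parent[v] = u
                    heapq.heappush(frontier, (g[v] + h(v), v))
        return None, float("inf")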

The idea is to view the nodes of $G$ as the objects to be embedded in Euclidean space. As such, the data mining FastMap algorithm cannot directly be used for generating an embedding in linear time: it assumes that the distance between two objects $O_i$ and $O_j$ can be computed in constant time, independent of the number of objects, whereas computing the distance between two nodes of a graph depends on the size of the graph. Another problem is that the Euclidean distances may not satisfy important properties such as admissibility or consistency. Admissibility guarantees that A* finds shortest paths while consistency additionally allows A* to avoid re-expansions of nodes.

The first issue of having to retain (near-)linear time complexity can be addressed as follows: In each iteration, after we identify the farthest pair of nodes $n_a$ and $n_b$, the distances $d(n_a, v)$ and $d(n_b, v)$ need to be computed for all other nodes $v$. Computing these two distances for any single node can no longer be done in constant time but requires $O(|E| + |V| \log |V|)$ time instead [Fredman and Tarjan1984]. However, since we need to compute these distances for all nodes, computing two shortest path trees, rooted at $n_a$ and $n_b$, yields all necessary distances. The complexity of doing so is also $O(|E| + |V| \log |V|)$, which is only linear in the size of the graph (unless $|E| = O(|V|)$, in which case the complexity is near-linear in the size of the input because of the $\log |V|$ factor). The amortized complexity for computing $d(n_a, v)$ and $d(n_b, v)$ for any single node $v$ is therefore near-constant.

The second issue of having to generate a consistent (and thus admissible) heuristic is formally addressed in Theorem 1. The idea is to use $L_1$ distances instead of $L_2$ distances in each iteration of FastMap. The mathematical properties of the $L_1$ distance can be used to prove that admissibility and consistency hold irrespective of the dimensionality of the embedding.

Input: $G = (V, E)$, $K_{max}$ and $\epsilon$.
Output: $K$ and $[p_v]_1, \ldots, [p_v]_K$ for all $v \in V$.
1       $w_1(e) \leftarrow w(e)$ for all $e \in E$; $K \leftarrow 1$;
2       while $K \le K_{max}$ do
3             Let $G_K = (V, E)$ with edge weights $w_K$;
4             $(n_a, n_b) \leftarrow$ GetFarthestPair($G_K$);
5             Compute shortest path trees rooted at $n_a$ and $n_b$ on $G_K$ to obtain $d_K(n_a, v)$, $d_K(n_b, v)$ and $d_K(n_a, n_b)$ for all $v \in V$;
6             if $d_K(n_a, n_b) < \epsilon$ then
7                   Break;
8             end
9             for each $v \in V$ do
10                  $[p_v]_K \leftarrow (d_K(n_a, v) + d_K(n_a, n_b) - d_K(v, n_b))/2$;
11            for each edge $(u, v) \in E$ do
12                  $w_{K+1}(u, v) \leftarrow w_K(u, v) - |[p_u]_K - [p_v]_K|$;
13            end
14            $K \leftarrow K + 1$;
15      end
Algorithm 1 Shows the FastMap algorithm. $G = (V, E)$ is a non-negative edge-weighted undirected graph; $K_{max}$ is the user-specified upper bound on the dimensionality; $\epsilon$ is a user-specified threshold; $K \le K_{max}$ is the dimensionality of the computed embedding; $p_v$ is the Euclidean embedding of node $v \in V$. Line 10 is the $L_1$ analogue of the formula for $x_i$ from Figure 1.

Algorithm 1 presents the data mining FastMap algorithm adapted to the shortest path computation problem. The input is an edge-weighted undirected graph $G = (V, E)$ along with two user-specified parameters $K_{max}$ and $\epsilon$. $K_{max}$ is the maximum number of dimensions allowed in the Euclidean embedding; it bounds the amount of memory needed to store the Euclidean embedding of any node. $\epsilon$ is the threshold that marks a point of diminishing returns when the distance between the farthest pair of nodes becomes negligible. The output is an embedding $p_v \in \mathbb{R}^K$ (with $K \le K_{max}$) for each node $v \in V$.

The algorithm maintains a working graph $G' = (V, E)$ initialized to $G$. The nodes and edges of $G'$ are always identical to those of $G$ but the weights on the edges of $G'$ change with every iteration. In each iteration, the farthest pair $(n_a, n_b)$ of nodes in $G'$ is heuristically identified in near-linear time (line 4). The coordinate $[p_v]_K$ of each node $v$ is computed using a formula similar to that for $x_i$ in Figure 1. However, that formula is modified to $[p_v]_K = (d_K(n_a, v) + d_K(n_a, n_b) - d_K(v, n_b))/2$ to ensure admissibility and consistency of the heuristic (line 10). In each iteration, the weight of each edge $(u, v)$ is decremented to resemble the update rule for $D'(\cdot, \cdot)$ in Figure 2 (line 12). However, that update rule is modified to $w_{K+1}(u, v) = w_K(u, v) - |[p_u]_K - [p_v]_K|$ to use the $L_1$ distances instead of the $L_2$ distances.

The method GetFarthestPair($G'$) (line 4) computes shortest path trees on $G'$ a small constant number of times, denoted by $\tau$. It therefore runs in near-linear time. In the first iteration, we assign $n_b$ to be a random node. A shortest path tree rooted at $n_b$ is computed to identify the farthest node from it. $n_a$ is assigned to be this farthest node. In the next iteration, a shortest path tree rooted at $n_a$ is computed to identify the farthest node from it. $n_b$ is reassigned to be this farthest node. Subsequent iterations follow the same switching rule for $n_a$ and $n_b$. The final assignments of nodes to $n_a$ and $n_b$ are returned after $\tau$ iterations. This entire process of starting from a randomly chosen node can be repeated a small constant number of times. (In our experiments, this constant is set to the same value as $\tau$.)
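Putting the pieces together, the following is a hedged Python rendering of Algorithm 1 and GetFarthestPair as described above. Every identifier is our own, and the value of NUM_SWITCHES is illustrative only; the text specifies merely a small constant.

    import heapq
    import random

    def dijkstra(weights, source):
        # Shortest path tree rooted at source; weights maps node -> {neighbor: weight}.
        dist = {source: 0.0}
        frontier = [(0.0, source)]
        while frontier:
            d, u = heapq.heappop(frontier)
            if d > dist.get(u, float("inf")):
                continue  # stale heap entry
            for v, w in weights[u].items():
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    heapq.heappush(frontier, (d + w, v))
        return dist

    NUM_SWITCHES = 4  # illustrative; the text only calls for a small constant

    def get_farthest_pair(weights, nodes):
        # Heuristic farthest pair: repeatedly switch to the farthest node found.
        a = b = random.choice(nodes)
        for _ in range(NUM_SWITCHES):
            dist = dijkstra(weights, b)
            a, b = b, max(nodes, key=lambda v: dist.get(v, 0.0))
        return a, b

    def fastmap_graph(graph, k_max, eps):
        # graph maps node -> {neighbor: non-negative weight} (undirected, symmetric).
        nodes = list(graph)
        weights = {u: dict(nbrs) for u, nbrs in graph.items()}  # working copy, w_1 = w
        coords = {v: [] for v in nodes}
        for _ in range(k_max):
            a, b = get_farthest_pair(weights, nodes)
            da, db = dijkstra(weights, a), dijkstra(weights, b)   # line 5
            d_ab = da.get(b, 0.0)
            if d_ab < eps:                                        # line 6
                break
            for v in nodes:                                       # line 10
                coords[v].append((da.get(v, 0.0) + d_ab - db.get(v, 0.0)) / 2.0)
            for u in nodes:                                       # line 12
                for v in weights[u]:
                    shaved = weights[u][v] - abs(coords[u][-1] - coords[v][-1])
                    weights[u][v] = max(shaved, 0.0)  # clamp floating-point drift
        return coords

    def fastmap_heuristic(coords, u, v):
        # Consistent L1 heuristic between two embedded nodes.
        return sum(abs(x - y) for x, y in zip(coords[u], coords[v]))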

Figure 3 shows the working of our algorithm on a small gridworld example.

Figure 3: Illustrates the working of FastMap. (a) shows a 4-neighbor gridworld with obstacles in black. (b) shows the graphical representation of (a) with the original unit weights on the edges. (c) shows the identified farthest pair of nodes. (d) shows two numbers in each cell representing the distances from $n_a$ and $n_b$, respectively. (e) shows the first coordinate produced for each cell. (f) shows the new edge weights for the next iteration. (g), (h) and (i) correspond to (c), (d) and (e), respectively, in the second iteration. (j) shows the produced 2D embedding.

Proof of Consistency

In this section, we prove the consistency of the FastMap heuristic. Since consistency implies admissibility, this also proves that A* with the FastMap heuristic returns shortest paths. We use the following notation in the proofs: $w_i(u, v)$ is the weight on the edge between nodes $u$ and $v$ in the $i$th iteration; $d_i(u, v)$ is the distance between nodes $u$ and $v$ in the $i$th iteration (using the weights $w_i$); $p_u$ is the vector of coordinates produced for node $u$, and $[p_u]_i$ is its $i$th coordinate (the $i$th iteration sets the value of $[p_u]_i$); $h_K(u, v) = \sum_{j=1}^{K} |[p_u]_j - [p_v]_j|$ is the FastMap heuristic between nodes $u$ and $v$ after $K$ iterations. Note that $d_1(u, v)$ is the distance between $u$ and $v$ in the given graph, that is, $d_1(u, v) = d(u, v)$. We also define $h_0(u, v) = 0$. In the following proofs, we use the facts that $[p_u]_i = (d_i(n_a, u) + d_i(n_a, n_b) - d_i(u, n_b))/2$ (line 10) and $w_{i+1}(u, v) = w_i(u, v) - |[p_u]_i - [p_v]_i|$ (line 12), where $n_a$ and $n_b$ are the farthest pair of nodes identified in the $i$th iteration.

Lemma 1.

For all $u$, $v$ and $i$: $d_i(u, v) \ge 0$.

Proof.

We prove by induction that in any iteration $i$, $w_i(u, v) \ge 0$ for all $(u, v) \in E$. Thus, the weight of each edge in the $i$th iteration is non-negative and therefore $d_i(u, v) \ge 0$ for all $u$ and $v$. For the base case, $w_1(u, v) = w(u, v) \ge 0$. We assume that $w_i(u, v) \ge 0$ for all $(u, v) \in E$ and show that $w_{i+1}(u, v) \ge 0$. Let $n_a$ and $n_b$ be the farthest pair of nodes identified in the $i$th iteration. From lines 10 and 12, $w_{i+1}(u, v) = w_i(u, v) - |[p_u]_i - [p_v]_i|$. To show that $w_{i+1}(u, v) \ge 0$ we show that $|[p_u]_i - [p_v]_i| \le w_i(u, v)$. From the triangle inequality, for any node $t$, $|d_i(t, u) - d_i(t, v)| \le d_i(u, v)$. Therefore, $|d_i(n_a, u) - d_i(n_a, v)| \le d_i(u, v)$ and $|d_i(u, n_b) - d_i(v, n_b)| \le d_i(u, v)$. This means that $|[p_u]_i - [p_v]_i| = \frac{1}{2} |(d_i(n_a, u) - d_i(n_a, v)) - (d_i(u, n_b) - d_i(v, n_b))| \le d_i(u, v)$. Therefore, $|[p_u]_i - [p_v]_i| \le d_i(u, v) \le w_i(u, v)$, since the distance between the endpoints of an edge is at most the weight of that edge. This concludes the proof since $w_{i+1}(u, v) = w_i(u, v) - |[p_u]_i - [p_v]_i| \ge 0$. ∎

Lemma 2.

For all $u$, $v$ and $i$: $d_{i+1}(u, v) \le d_i(u, v) - |[p_u]_i - [p_v]_i|$.

Proof.

Let $\pi = \langle u = t_0, t_1, \ldots, t_m = v \rangle$ be the shortest path from $u$ to $v$ in iteration $i$. By definition, $d_i(u, v) = \sum_{j=1}^{m} w_i(t_{j-1}, t_j)$ and $d_{i+1}(u, v) \le \sum_{j=1}^{m} w_{i+1}(t_{j-1}, t_j)$. From line 12, $w_{i+1}(t_{j-1}, t_j) = w_i(t_{j-1}, t_j) - |[p_{t_{j-1}}]_i - [p_{t_j}]_i|$. Therefore, $d_{i+1}(u, v) \le d_i(u, v) - \sum_{j=1}^{m} |[p_{t_{j-1}}]_i - [p_{t_j}]_i|$. This concludes the proof since $\sum_{j=1}^{m} |[p_{t_{j-1}}]_i - [p_{t_j}]_i| \ge |[p_u]_i - [p_v]_i|$ by the triangle inequality. ∎

Lemma 3.

For all $u$, $v$, $i$ and $K \ge i$: $d_i(u, v) \ge \sum_{j=i}^{K} |[p_u]_j - [p_v]_j|$.

Proof.

We prove the lemma by induction on $K - i$. The base case, $K - i = 0$, is implied by Lemmas 1 and 2: $d_i(u, v) \ge d_{i+1}(u, v) + |[p_u]_i - [p_v]_i| \ge |[p_u]_i - [p_v]_i|$. We assume that the claim holds for $K - i = m - 1$ and show that it holds for $K - i = m$. Applying the inductive assumption to iteration $i + 1$, we get $d_{i+1}(u, v) \ge \sum_{j=i+1}^{K} |[p_u]_j - [p_v]_j|$. Lemma 2 shows that $d_i(u, v) \ge d_{i+1}(u, v) + |[p_u]_i - [p_v]_i|$. Substituting for $d_{i+1}(u, v)$, we get $d_i(u, v) \ge \sum_{j=i}^{K} |[p_u]_j - [p_v]_j|$, which concludes the proof. ∎

Theorem 1.

The FastMap heuristic is consistent.

Proof.

For all $u$, $v$, $g$ and $K$: From Lemma 3 (with $i = 1$), we know $h_K(u, v) = \sum_{j=1}^{K} |[p_u]_j - [p_v]_j| \le d_1(u, v) = d(u, v)$. From the triangle inequality of the $L_1$ norm, we know $h_K(u, g) \le h_K(u, v) + h_K(v, g)$. Put together, for any edge $(u, v)$, we have $h_K(u, g) \le w(u, v) + h_K(v, g)$. Finally, $h_K(g, g) = 0$. ∎

Theorem 2.

The informedness of the FastMap heuristic increases monotonically with the number of dimensions.

Proof.

This theorem follows from the fact that, for any two nodes $u$ and $v$, $h_{K+1}(u, v) = h_K(u, v) + |[p_u]_{K+1} - [p_v]_{K+1}| \ge h_K(u, v)$. ∎

Experimental Results

We performed experiments on many benchmark maps from [Sturtevant2012]. Figure 4 presents representative results. The FastMap heuristic (FM) and the Differential heuristic (DH) with equal memory resources (the dimensionality of the Euclidean embedding for FM matches the number of pivots in DH) are compared against each other. In addition, we include the Octile heuristic (OCT) as a baseline, which also uses a closed-form formula for the computation of its heuristic values.

We observe that, as the number of dimensions increases, (a) FM and DH perform better than OCT; (b) the median number of expanded nodes when using the FM heuristic decreases (which is consistent with Theorem 2); and (c) the median absolute deviation (MAD) of the number of expanded nodes when using the FM heuristic decreases. When FM’s MADs are high, this variability can possibly be exploited in future work using Rapid Randomized Restart strategies.

FastMap also gives us a framework for identifying a point of diminishing returns with increasing dimensionality. This happens when the distance between the farthest pair of nodes stops being “significant”. For example, such a point is observed in Figure 4(f). (The relevant distances between the farthest pair of nodes are the ones computed on line 5 of Algorithm 1.)

In mazes, such as in Figure 4(g), A* using the DH heuristic outperforms A* using the FM heuristic. This leads us to believe that FM provides good heuristic guidance in domains that can be approximated with a low-dimensional manifold. This observation also motivates us to create a hybrid FM+DH heuristic by taking the maximum of the two heuristics, as sketched after Table 1. Some relevant results are shown in Table 1. We use FM($K$) to denote the FM heuristic with $K$ dimensions and DH($K$) to denote the DH heuristic with $K$ pivots. For the results in Table 1, all heuristics have equal memory resources. We observe that the number of node expansions of A* using the FM(5)+DH(5) heuristic is always second best compared to A* using the FM(10) heuristic and A* using the DH(10) heuristic. On one hand, this decreases the percentages of instances on which it expands the least number of nodes (as seen in the second row of Table 1). But, on the other hand, its median number of node expansions is not far from that of the best technique in each breakdown.

Map ‘lak503d’           FM-WINS 570         DH-WINS 329         FM+DH-WINS 101
                        Med      MAD        Med      MAD        Med      MAD
FM(10)                  261      112        465      319        2,222    1,111
DH(10)                  358      215        278      156        885      370
FM(5)+DH(5)             303      160        323      170        610      264

Map ‘brc300d’           FM-WINS 846         DH-WINS 147         FM+DH-WINS 7
                        Med      MAD        Med      MAD        Med      MAD
FM(10)                  205      105        285      149        894      472
DH(10)                  217      119        200      129        277      75
FM(5)+DH(5)             206      105        267      135        249      73

Map ‘maze512-32-0’      FM-WINS 382         DH-WINS 507         FM+DH-WINS 111
                        Med      MAD        Med      MAD        Med      MAD
FM(10)                  1,649    747        11,440   9,861      33,734   13,748
DH(10)                  3,107    2,569      2,859    2,194      8,156    4,431
FM(5)+DH(5)             2,685    2,091      3,896    2,992      7,439    4,247
Table 1: Shows the median and MAD numbers of A* node expansions for different maps using three different heuristics with equal memory resources on random instances. FM($K$) denotes the FastMap heuristic with $K$ dimensions, DH($K$) denotes the Differential heuristic with $K$ pivots and FM($K_1$)+DH($K_2$) is a combined heuristic which takes the maximum of a $K_1$-dimensional FastMap heuristic and a $K_2$-pivot Differential heuristic. The results are split into bins according to winners (along with their number of wins).
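The combination itself is a pointwise maximum. The following is a minimal sketch under the data structures of the earlier snippets (coords from the FastMap embedding, pivot_dists from the Differential heuristic preprocessing); the function name is ours. Taking the maximum of consistent heuristics preserves consistency.

    def fm_plus_dh(coords, pivot_dists, u, v):
        # FM(K1)+DH(K2): maximum of the L1 FastMap heuristic and the
        # Differential heuristic.
        fm = sum(abs(x - y) for x, y in zip(coords[u], coords[v]))
        dh = max(abs(d[u] - d[v]) for d in pivot_dists)
        return max(fm, dh)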
Figure 4: Shows empirical results on maps from Bioware’s Dragon Age: Origins: (a) is map ‘lak503d’; (d) is map ‘brc300d’; and (g) is map ‘maze512-32-0’. In (b), the x-axis shows the number of dimensions for the FastMap heuristic (or the number of pivots for the Differential heuristic). The y-axis shows the number of instances on which each technique expanded the least number of nodes. Each instance has randomly chosen start and goal nodes. (c) shows the median number of expanded nodes across all instances. Vertical error bars indicate the MADs. The figures in the second and third rows follow the same order. In the legends, “FM” denotes the FastMap heuristic, “DH” denotes the Differential heuristic and “OCT” denotes the Octile heuristic.

Conclusions

In this paper, we presented a near-linear-time preprocessing algorithm, called FastMap, for producing a Euclidean embedding of a general edge-weighted undirected graph. At runtime, the distances in this embedding are used as heuristic estimates by A* for shortest path computations. We proved that the FastMap heuristic is admissible and consistent, thereby guaranteeing the generation of shortest paths. FastMap produces its embedding in near-linear time, which is significantly faster than competing approaches for producing Euclidean embeddings with optimality guarantees that run in cubic time. We also showed that it is competitive with other state-of-the-art heuristics computed with near-linear preprocessing time. However, FastMap has the combined benefits of requiring only near-linear preprocessing time and producing explicit Euclidean embeddings that try to recover the underlying manifolds of the given graphs.

Acknowledgments

The research at USC was supported by NSF under grant numbers 1724392, 1409987, and 1319966. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the sponsoring organizations, agencies or the U.S. government.

References

  • [Alpaydin2010] Ethem Alpaydin. Introduction to Machine Learning. The MIT Press, 2nd edition, 2010.
  • [Björnsson and Halldórsson2006] Yngvi Björnsson and Kári Halldórsson. Improved heuristics for optimal path-finding on game maps. In Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 2006.
  • [Botea and Harabor2013] Adi Botea and Daniel Harabor. Path planning with compressed all-pairs shortest paths data. In Proceedings of the 23rd International Conference on Automated Planning and Scheduling, 2013.
  • [Botea et al.2004] Adi Botea, Martin Müller, and Jonathan Schaeffer. Near optimal hierarchical path-finding. Journal of Game Development, 1, 2004.
  • [Cazenave2006] Tristan Cazenave. Optimizations of data structures, heuristics and algorithms for path-finding on maps. In Proceedings of the IEEE Symposium on Computational Intelligence and Games, 2006.
  • [Dechter2003] Rina Dechter. Constraint processing. The Morgan Kaufmann Series in Artificial Intelligence. Elsevier, 2003.
  • [Dijkstra1959] Edsger W. Dijkstra. A note on two problems in connexion with graphs. Numerische Mathematik, 1(1), 1959.
  • [Faloutsos and Lin1995] Christos Faloutsos and King-Ip Lin. Fastmap: A fast algorithm for indexing, data-mining and visualization of traditional and multimedia datasets. In Proceedings of the ACM SIGMOD International Conference on Management of Data, 1995.
  • [Fredman and Tarjan1984] Michael Fredman and Robert Tarjan. Fibonacci heaps and their uses in improved network optimization algorithms. Proceedings of the 25th Annual Symposium on Foundations of Computer Science, 1984.
  • [Geisberger et al.2008] Robert Geisberger, Peter Sanders, Dominik Schultes, and Daniel Delling. Contraction hierarchies: Faster and simpler hierarchical routing in road networks. In Proceedings of the 7th International Conference on Experimental Algorithms, 2008.
  • [Goldenberg et al.2010] Meir Goldenberg, Ariel Felner, Nathan Sturtevant, and Jonathan Schaeffer. Portal-based true-distance heuristics for path finding. In Proceedings of the 3rd Annual Symposium on Combinatorial Search, 2010.
  • [Goldenberg et al.2011] Meir Goldenberg, Nathan Sturtevant, Ariel Felner, and Jonathan Schaeffer. The compressed differential heuristic. In Proceedings of the 25th AAAI Conference on Artificial Intelligence, 2011.
  • [Hart et al.1968] Peter E. Hart, Nils J. Nilsson, and Bertram Raphael. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems, Science, and Cybernetics, SSC-4(2), 1968.
  • [Holte et al.1994] Robert Holte, Chris Drummond, M. B. Perez, Robert Zimmer, and Alan Macdonald. Searching with abstractions: A unifying framework and new high-performance algorithm. In Proceedings of the 10th Canadian Conference on Artificial Intelligence, 1994.
  • [LaValle2006] Steven LaValle. Planning Algorithms. Cambridge University Press, New York, NY, USA, 2006.
  • [Leighton et al.2008] Michael Leighton, Wheeler Ruml, and Robert Holte. Faster optimal and suboptimal hierarchical search. In Proceedings of the 1st International Symposium on Combinatorial Search, 2008.
  • [Pochter et al.2010] Nir Pochter, Aviv Zohar, Jeffery Rosenschein, and Ariel Felner. Search space reduction using swamp hierarchies. In Proceedings of the 24th AAAI Conference on Artificial Intelligence, 2010.
  • [Rayner et al.2011] Chris Rayner, Michael Bowling, and Nathan Sturtevant. Euclidean heuristic optimization. In Proceedings of the 25th AAAI Conference on Artificial Intelligence, 2011.
  • [Russell and Norvig2009] Stuart J. Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall, 2009.
  • [Storandt2013] Sabine Storandt. Contraction hierarchies on grid graphs. In Proceedings of KI: the 36th Annual German Conference on Artificial Intelligence, 2013.
  • [Strasser et al.2015] Ben Strasser, Adi Botea, and Daniel Harabor. Compressing optimal paths with run length encoding. Journal of Artificial Intelligence Research, 54:593–629, 2015.
  • [Sturtevant and Buro2005] Nathan Sturtevant and Michael Buro. Partial pathfinding using map abstraction and refinement. In Proceedings of the 20th AAAI Conference on Artificial Intelligence, 2005.
  • [Sturtevant et al.2009] Nathan Sturtevant, Ariel Felner, Max Barrer, Jonathan Schaeffer, and Neil Burch. Memory-based heuristics for explicit state spaces. In Proceedings of the 21st International Joint Conference on Artificial Intelligence, 2009.
  • [Sturtevant2012] Nathan Sturtevant. Benchmarks for grid-based pathfinding. Transactions on Computational Intelligence and AI in Games, 4(2), 2012.
  • [Torgerson1952] Warren S. Torgerson. Multidimensional scaling: I. theory and method. Psychometrika, 17(4), 1952.
  • [Uras and Koenig2014] Tansel Uras and Sven Koenig. Identifying hierarchies for fast optimal search. In Proceedings of the 28th AAAI Conference on Artificial Intelligence, 2014.
  • [Uras and Koenig2015] Tansel Uras and Sven Koenig. Subgoal graphs for fast optimal pathfinding. In Steve Rabin, editor, Game AI Pro 2: Collected Wisdom of Game AI Professionals, chapter 15. A K Peters/CRC Press, 2015.
  • [Vandenberghe and Boyd1996] Lieven Vandenberghe and Stephen Boyd. Semidefinite programming. SIAM Review, 38, 1996.