L_1 Shortest Path Queries in Simple Polygons

09/20/2018
by Sang Won Bae, et al.
Utah State University

Let P be a simple polygon of n vertices. We consider two-point L_1 shortest path queries in P. We build a data structure of O(n) size in O(n) time such that, given any two query points s and t, the length of an L_1 shortest path from s to t in P can be computed in O(log n) time, or in O(1) time if both s and t are vertices of P, and an actual shortest path can be output in additional time linear in the number of edges of the path. To achieve this result, we propose a mountain decomposition of simple polygons, which may be interesting in its own right. Most importantly, our approach is much simpler than the previous work on this problem.


1 Introduction

Let P be a simple polygon of n vertices. We consider two-point L_1 shortest path queries in P, where path lengths are measured in the L_1 metric. The problem is to build a data structure so that, given any two query points s and t, an L_1 shortest path between s and t, or only its length, can be computed quickly.

If the Euclidean metric is instead used to measure path lengths, Guibas and Hershberger [8] built a data structure of O(n) space in O(n) time that can answer each two-point Euclidean shortest path query in O(log n) time. It is well known that a Euclidean shortest path in P is also an L_1 shortest path [11]. Therefore, using the data structure of [8], one can answer each two-point L_1 shortest path query in O(log n) time. However, since the data structure of [8] is designed specifically for the Euclidean metric, both the data structure and the query algorithm are quite involved. Indeed, problems in the L_1 metric are usually easier than their counterparts in the Euclidean metric. For such a fundamental problem in computational geometry, it is desirable to have a simpler approach to the two-point L_1 shortest path query problem.

In this paper, we present such a data structure of O(n) space that can be built in O(n) time. With the data structure, given any two query points s and t, we can compute the length of an L_1 shortest path from s to t in P in O(log n) time. Further, if s and t are both vertices of P, then the query time becomes only O(1). An actual shortest path can be output in additional time linear in the number of edges of the path.

Our method has several advantages over [8]. First, our method is much simpler. Second, if both s and t are vertices of P, then the query time of our algorithm is only O(1), while the data structure of [8] still needs O(log n) query time. In addition, using our techniques, we can obtain the following result: given a set S of m points in P, we can build a data structure of O(n + m) size, in O(n + m log n) time (the extra term comes from locating the points of S in our decomposition), such that each two-point L_1 shortest path query can be answered in O(1) time for any two query points of S.

Given a point s in P as a single source, a Euclidean shortest path map for s in P can be built in O(n) time [9], and the map is usually used for answering single-source shortest path queries (i.e., s is fixed and only t is a query point). Again, since a Euclidean shortest path in P is also an L_1 shortest path, the Euclidean shortest path map of s is also an L_1 shortest path map. As a by-product of our techniques, we present another, simpler way to compute an L_1 shortest path map for s in O(n) time.

Many previous results, e.g., [1, 2, 6], resort to the corresponding Euclidean data structures [8, 9] in order to answer L_1 shortest path queries in simple polygons. With our simpler (and, when both query points are vertices of P, faster) solutions, the algorithms in those previous works can be simplified as well.

1.1 Our Techniques

If P is a simple rectilinear polygon, in which every edge is parallel to either the x- or the y-axis, then Schuierer [15] gave an O(n)-size data structure that can be built in O(n) time such that, given any two query points s and t in P, a shortest rectilinear s-t path can be found in O(log n) time; if both s and t are vertices of P, then the query time becomes O(1).

Our approach follows a scheme similar to that of [15], but extends the result from simple rectilinear polygons to arbitrary simple polygons. Specifically, a main geometric structure used in [15] is the histogram partition, which is commonly used for solving problems on simple rectilinear polygons, e.g., see [14]. We generalize this concept to arbitrary simple polygons and develop a so-called mountain decomposition. The mountain decomposition may be interesting in its own right and may find other applications on simple polygons as well.

The rest of the paper is organized as follows. In Section 2, we introduce the mountain decomposition of P. In Section 3, we show how an L_1 shortest path map can easily be obtained from the mountain decomposition. In Section 4, we solve the two-point L_1 shortest path query problem. Section 5 presents some applications of our new results.

In the following, unless otherwise stated, a shortest path refers to an L_1 shortest path in P, and path lengths are measured in the L_1 metric. For ease of exposition, we make the general position assumption that no two vertices of P have the same x- or y-coordinate (otherwise we could slightly perturb the input to achieve this).

2 The Mountain Decomposition

In this section, we introduce a decomposition of P, which we call the mountain decomposition; it generalizes the histogram partition of simple rectilinear polygons [15]. We first introduce some notation and concepts.

2.1 Preliminaries

For any two points p and q in P, let pq denote the line segment joining p and q, and let |pq| denote its L_1 length. We use d(p, q) to denote the length of an L_1 shortest path from p to q in P. For convenience, we sometimes write |pq| for d(p, q) when d(p, q) equals the L_1 length of the segment pq (which may not be entirely contained in P); this happens exactly when p and q can be joined by an xy-monotone path in P.
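As a tiny concrete reference for this notation (our illustration, not from the paper), the helper below computes the L_1 length of a segment; it is a lower bound for d(p, q), with equality whenever P contains an xy-monotone p-q path.

```python
def l1(p, q):
    """L_1 length of the segment pq; a lower bound for d(p, q)."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

# For p = (0, 0) and q = (3, 2): |pq| = 5, and d(p, q) >= 5,
# with equality whenever P contains an xy-monotone p-q path.
print(l1((0, 0), (3, 2)))  # 5
```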

The horizontal trapezoidal decomposition of a simple polygon is a decomposition of the polygon into cells obtained by extending a horizontal segment from each vertex into the interior until both of its ends hit the boundary [5]. The vertical trapezoidal decomposition is defined analogously. Both decompositions can be computed in linear time [5].

Next, we introduce a new concept, namely, mountains. A polygonal chain is x-monotone if its intersection with every vertical line is connected; it is y-monotone if its intersection with every horizontal line is connected. A simple polygon M is called an upward mountain if the following conditions are satisfied (e.g., see Fig. 1): (1) the leftmost and rightmost vertices of M divide the boundary of M into two x-monotone chains; (2) the lower chain has a horizontal edge, called the base; (3) the lower chain has at most one edge to the left of the base, which has negative slope and is called the left-wing; (4) the lower chain has at most one edge to the right of the base, which has positive slope and is called the right-wing. By this definition, the lower chain of M has at most three edges, and we call the lower chain the bottom of M. If we rotate an upward mountain by 180 degrees, it becomes a downward mountain. Similarly, rightward and leftward mountains are obtained by rotating an upward mountain by 90 degrees clockwise and counterclockwise, respectively.
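For concreteness, the following sketch (ours, not the authors' code) tests the upward-mountain conditions for a polygon given in counterclockwise order, assuming exact coordinates and unique leftmost and rightmost vertices.

```python
def is_upward_mountain(poly):
    """Check the upward-mountain conditions for a polygon given as a list of
    (x, y) vertices in counterclockwise order: two x-monotone chains, one
    horizontal base edge on the lower chain, and at most one wing of the
    required slope on each side of the base."""
    n = len(poly)
    lo = min(range(n), key=lambda i: poly[i][0])          # leftmost vertex
    hi = max(range(n), key=lambda i: poly[i][0])          # rightmost vertex
    # In CCW order, walking from the leftmost to the rightmost vertex
    # traverses the lower chain; the remaining part is the upper chain.
    lower = [poly[(lo + k) % n] for k in range(((hi - lo) % n) + 1)]
    upper = [poly[(hi + k) % n] for k in range(((lo - hi) % n) + 1)]

    def x_monotone(chain):
        xs = [p[0] for p in chain]
        return all(a <= b for a, b in zip(xs, xs[1:])) or \
               all(a >= b for a, b in zip(xs, xs[1:]))

    if not (x_monotone(lower) and x_monotone(upper)):
        return False                                      # condition (1)
    edges = list(zip(lower, lower[1:]))
    if len(edges) > 3:
        return False                                      # bottom has <= 3 edges
    base = [i for i, (a, b) in enumerate(edges) if a[1] == b[1]]
    if len(base) != 1:
        return False                                      # exactly one horizontal base
    b = base[0]
    if b > 1 or len(edges) - 1 - b > 1:
        return False                                      # at most one wing per side
    for i, (a, c) in enumerate(edges):
        if i < b and not c[1] < a[1]:
            return False                                  # left-wing: negative slope
        if i > b and not c[1] > a[1]:
            return False                                  # right-wing: positive slope
    return True

# A triangle with a horizontal bottom edge is a (special) upward mountain.
print(is_upward_mountain([(0, 0), (4, 0), (2, 3)]))        # True
```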

Consider an upward mountain M. Let p and q be two points in M with q on the base of M. We call the following path from p to q the canonical path, denoted by π(p, q) (e.g., see Fig. 2): from p, move vertically down until reaching the bottom of M, and then move along the bottom to reach q. Observe that π(p, q), which has at most three edges, is both x- and y-monotone, and thus it is an L_1 shortest path from p to q with length equal to |pq|.

Figure 1: Illustrating an upward mountain. The bottom is shown with thick (red) segments, and the horizontal edge is the base.
Figure 2: Illustrating the canonical path in an upward mountain.
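A small sketch (ours) of how a canonical path and its length could be computed from the bottom chain of an upward mountain; since the path is xy-monotone, its length is simply the L_1 distance between its endpoints. The point p is assumed to lie inside the mountain.

```python
def canonical_path(p, q, bottom):
    """Canonical path from p to a point q on the base of an upward mountain.

    bottom: the bottom chain as a list of 2 to 4 points ordered left to
    right (left-wing, base, right-wing as applicable).  The path drops
    vertically from p to the bottom and then follows the bottom to q; it
    has at most three edges and is xy-monotone, so its L_1 length equals
    |p - q| in the L_1 metric."""
    # Find the bottom edge vertically below p and the foot of the drop.
    for a, b in zip(bottom, bottom[1:]):
        if min(a[0], b[0]) <= p[0] <= max(a[0], b[0]):
            t = 0.0 if a[0] == b[0] else (p[0] - a[0]) / (b[0] - a[0])
            foot = (p[0], a[1] + t * (b[1] - a[1]))
            break
    path = [p, foot]
    # Walk along the bottom from the foot towards q, collecting bends.
    step = 1 if q[0] >= foot[0] else -1
    for v in (bottom if step == 1 else bottom[::-1]):
        if (v[0] - foot[0]) * step > 0 and (q[0] - v[0]) * step > 0:
            path.append(v)
    path.append(q)
    length = abs(p[0] - q[0]) + abs(p[1] - q[1])   # xy-monotone => L_1 distance
    return path, length

# p above the base, q on the base of a mountain with both wings.
bottom = [(0, 2), (1, 0), (4, 0), (5, 1)]          # left-wing, base, right-wing
print(canonical_path((2, 3), (3.5, 0), bottom))
```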

For a horizontal or vertical line segment e in P, we say that a point p is perpendicularly visible to e if the line through p and orthogonal to e intersects e and the segment connecting p to the intersection point lies in P (e.g., see Fig. 3); we call the intersection point the orthogonal projection of p onto e.

Figure 3: The point p is perpendicularly visible to the segment e, and q is the orthogonal projection of p onto e.

2.2 The Mountain Decomposition

Let P be the given simple polygon of n vertices, and let ∂P denote the boundary of P. In the following, we introduce a mountain decomposition of P.

Our decomposition starts with an arbitrary horizontal line segment e* contained in P whose two endpoints lie on ∂P. The segment e* partitions P into two sub-polygons: one locally above e* and the other locally below e*; let P_u and P_d denote the former and the latter, respectively. We only describe how to decompose P_u; the decomposition of P_d is done analogously. Roughly speaking, the decomposition partitions P_u into mountains. The details are given below.

If P_u is a triangle, which is a special mountain, then we are done with the decomposition of P_u, and we call e* the base of the triangle. We use M(e*) to denote this triangle.

If P_u is not a triangle, let M(e*) be the maximal mountain in P_u whose base is e* (e.g., see Fig. 4). Specifically, M(e*) is defined as follows. Without loss of generality, assume that a and b are the left and right endpoints of e*, respectively. If a is in the relative interior of an edge of P, then define a' to be the upper endpoint of that edge; otherwise, a is a vertex of P, only one of whose two adjacent vertices belongs to P_u, and we let a' refer to that vertex. In either case, a' is the next clockwise vertex of P_u from a. The left-wing of M(e*) is aa' if aa' has negative slope; otherwise, M(e*) has no left-wing. Similarly, let b' be the next counterclockwise vertex of P_u from b. The right-wing of M(e*) is bb' if bb' has positive slope; otherwise, M(e*) has no right-wing. The wings (if any) and the base e* together constitute the bottom of M(e*). Let v be a vertex of P_u not on the bottom of M(e*) such that v is vertically visible to the bottom (i.e., there is a point w on the bottom such that vw is vertical and lies in P_u) and the two edges of P_u incident to v are on the same side of the vertical line through v. Let v' be the first point of ∂P_u hit by the vertically upward ray from v. We call the segment vv' a window. Note that vv' lies in P_u and divides P_u into two sub-polygons: one contains e* and the other does not; let P(vv') denote the latter sub-polygon. See Fig. 4. In addition, for each endpoint c of the bottom of M(e*), if the vertically upward ray from c lies locally inside P_u, then cc' is also a window, where c' is the first point of ∂P_u hit by the ray, and we define P(cc') similarly. The mountain M(e*) is the sub-polygon of P_u obtained by excluding P(w) for all such windows w, and the windows defined above are considered to be the windows of M(e*). If M(e*) has no windows, then it is equal to P_u and we are done with the decomposition; otherwise, for each window w, we decompose the sub-polygon P(w) recursively with respect to w (so the window w will become the base of a mountain in P(w)).

Figure 4: Illustrating the mountain M(e*), the gray region. M(e*) has three windows, shown as dotted vertical segments.
Figure 5: Illustrating the mountain decomposition of P_u, i.e., the sub-polygon of P above e*.

We use D_u to denote the resulting decomposition of P_u after the above decomposition process finishes (e.g., see Fig. 5).
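The recursive structure of the construction can be sketched as follows (our illustration; `extract` is a hypothetical stand-in for the trapezoid-traversal routine of Lemma 1 below, which reports the windows of the maximal mountain and the sub-polygons behind them).

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Point = Tuple[float, float]
Segment = Tuple[Point, Point]

@dataclass
class Cell:
    """One mountain of the decomposition D_u; also a node of the tree T_u."""
    base: Segment
    windows: List[Segment] = field(default_factory=list)
    parent: Optional["Cell"] = None
    children: List["Cell"] = field(default_factory=list)
    depth: int = 0

def decompose(region, base, extract, parent=None):
    """Build the mountain decomposition of `region` with respect to `base`.

    `extract(region, base)` is a placeholder for the trapezoid traversal of
    Lemma 1: it must return the windows of the maximal mountain with base
    `base`, together with a function mapping each window w to the
    sub-polygon P(w) cut off behind w.  The recursion mirrors the text:
    each window becomes the base of a child cell."""
    cell = Cell(base=base, parent=parent,
                depth=0 if parent is None else parent.depth + 1)
    if parent is not None:
        parent.children.append(cell)
    windows, behind = extract(region, base)
    for w in windows:
        cell.windows.append(w)
        decompose(behind(w), w, extract, cell)
    return cell   # root of (this part of) the mountain decomposition tree
```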

Remark.

Observe that the mountain M(e*) contains at least one vertex of P_u other than the two endpoints of e*. Indeed, if M(e*) has at least one wing, then this is obviously true because the upper endpoint of the wing is such a vertex. Otherwise, one can verify that at least one vertex of P_u is vertically visible to e* and is thus in M(e*). This property is crucial for bounding the combinatorial size of D_u later in Lemma 1, and it is also the reason we introduce wings for mountains.

The mountain decomposition D_u of P_u induces a natural tree structure, called the mountain decomposition tree and denoted by T_u. Each cell of D_u corresponds to a node of T_u. Consider any cell C of D_u, which is a mountain. C has a base e, which is also a window of another mountain unless e = e*. If e = e*, then we call C the root cell of D_u; the root cell corresponds to the root of T_u. For each window w of C, C has a child subtree corresponding to the mountain decomposition of P(w). Note that each leaf of T_u corresponds to a mountain with no windows.

In the following, we sometimes use the cells of D_u and the nodes of T_u interchangeably. Following the tree terminology, for each cell C of D_u, we use parent cell to refer to the cell of D_u corresponding to the parent node of C in T_u; ancestor and descendant cells are defined similarly.

Lemma 1

The combinatorial size of D_u is O(n), and D_u can be computed in O(n) time.

Proof

To see that the size of D_u is O(n), recall that, as remarked above, each cell C of D_u contains at least one vertex of P_u other than the endpoints of its base. Observe that each vertex of P_u lies on the boundary of at most two cells of D_u. Since P_u has O(n) vertices, the size of D_u is O(n).

In the following, we present an O(n)-time algorithm to compute D_u. We make use of the vertical trapezoidal decomposition VD(P_u) and the horizontal trapezoidal decomposition HD(P_u) of P_u; both can be computed in O(n) time [5].

Initially, we need to compute the mountain M(e*) (if P_u is not a triangle) and its windows. Let B denote the bottom of M(e*), which can be determined in constant time given the segment e*. Observe that M(e*) is exactly the union of the cells of VD(P_u) that properly intersect B (i.e., the intersection contains more than one point). Thus, starting from one endpoint of B, M(e*) can be computed by traversing along B in O(|M(e*)|) time, where |M(e*)| is the combinatorial size of M(e*). The same traversal also identifies the windows of M(e*).

Let w be any window of M(e*). Note that one endpoint of w must be a vertex of P_u; let v denote that vertex (e.g., see Fig. 4, in which w = vv'). Without loss of generality, assume that the sub-polygon P(w) is locally on the left side of w. Let M(w) denote the mountain in P(w) with base w, and let B(w) denote the bottom of M(w), which can be determined in constant time since w is known. Observe that M(w) is the union of the cells of HD(P_u) that properly intersect B(w), restricted to lie on the left side of w. Hence, starting from an endpoint of B(w), M(w) can be computed by traversing along B(w). In this way, the mountain decomposition of the sub-polygon P(w) can be computed recursively.

For the time analysis, notice that the time for computing M(w) is linear in the number of cells of the trapezoidal decomposition intersecting w and the wings of M(w) (if any). Because the wings are on ∂P_u, the total number of trapezoidal cells intersecting the wings of M(w), over all windows w in the entire algorithm, is O(n). On the other hand, each trapezoidal cell intersecting a window can be visited at most twice in the entire algorithm (i.e., for computing M(w) for at most two windows w, which lie in the same mountain facing each other). Hence, the total time for computing D_u is O(n). ∎

The mountain decomposition D_d of P_d is built in the same way, and it induces a mountain decomposition tree T_d. Consequently, D_d is also of size O(n) and can also be computed in O(n) time. Let D denote the decomposition of the whole polygon P induced by D_u and D_d. Note that D can be considered the mountain decomposition of P with respect to the chosen line segment e*, and D is uniquely determined once e* is fixed.

3 The Shortest Path Map and Single-Source Shortest Path Queries

In this section, based on the mountain decomposition, we present a simple way to construct an L_1 shortest path map for a fixed source point s in P. A shortest path map for s is a decomposition of P that encodes L_1 shortest paths from s to all other points of P.

Let e* be the maximal horizontal line segment through s that lies in P. We compute the mountain decomposition D of P with respect to e*, and define P_u, P_d, T_u, and T_d in the same way as before.

Consider any cell C of D. Without loss of generality, we assume C is in D_u. We next define the anchor a(C) for C, which is a point on the base of C and will be used for computing shortest paths from s to all points in C. If C is the root cell (i.e., the base of C is e*), then s is on the base of C and we define a(C) to be s. Otherwise, let e be the base of C. According to our mountain decomposition, e is a window of the parent cell C' of C in D_u (e.g., see Fig. 6). Let v be the endpoint of e closer to the bottom of C'. By our way of defining windows, e is orthogonal to the base of C' and v must be a vertex of P. We define the anchor a(C) to be v. Recall the definition of canonical paths in a mountain: because a(C) lies on the base of C, each point t in C has a canonical path π(t, a(C)) from t to a(C), which is a shortest path of length |t a(C)|. The following lemma explains why we need the anchors.

Figure 6: Illustrating the definition of the anchor a(C) of a cell C.
Figure 7: Illustrating the proof of Lemma 2. The dashed (blue) path is a shortest path from s to t. The dash-dotted (red) path is the canonical path from x to y in C', which must contain the anchor a(C).
Lemma 2

Let C be a cell of D. For any point t in C, there is a shortest s-t path passing through the anchor a(C) and containing the canonical path π(t, a(C)).

Proof

If C is the root cell, then since s is on the base of C and a(C) = s, the lemma statement obviously holds.

In the following, we assume that C is not the root cell. This means that C has a parent cell C'. Let e be the base of C and let e' be the base of C'. Let τ be any shortest path from s to t. Since the base e separates C from e*, and thus separates t from s, τ must intersect e at a point x. See Fig. 7. If C' is the root cell, then since s lies on e', τ must intersect e'. Otherwise, e' separates t from s and thus τ also intersects e'. In either case, let y be an intersection between τ and e'. Since the canonical path π(x, y) in C' is a shortest path from x to y, by replacing the portion of τ between x and y by π(x, y), we can obtain another shortest s-t path τ' that contains π(x, y). Notice that, according to the definition of windows and the definition of a(C), the anchor a(C) must be on π(x, y); see Fig. 7. Thus, the shortest path τ' passes through a(C).

Further, since the canonical path π(t, a(C)) is a shortest path from t to a(C), if we replace the portion of τ' between t and a(C) by π(t, a(C)), then we obtain a shortest path from s to t that satisfies the lemma. ∎

Consider any point t in a cell C of D. Based on Lemma 2, a shortest path from s to t can be found as follows. First, we connect t to a(C) by the canonical path π(t, a(C)) in C. If C is the root cell, then a(C) = s and we are done. Otherwise, a(C) lies on the boundary of the parent cell C' of C, and we connect a(C) to a(C') by the canonical path π(a(C), a(C')) in C'. We repeat this process until we reach s; the path thus obtained is a shortest s-t path.

Further, if we store the shortest path length dist(C) = d(s, a(C)) at each cell C of D, then, once we know the cell C of D containing a query point t, we can obtain d(s, t) = dist(C) + |t a(C)| in O(1) time and output an actual shortest path in time linear in the number of edges of the path. Therefore, the decomposition D acts as an L_1 shortest path map for the fixed source point s. Determining the cell of D containing a query point t can be done in O(log n) time by a point location data structure [7, 13] (after O(n)-time preprocessing), or in O(1) time if t is a vertex of P (after we associate each vertex with a cell of D containing it during preprocessing).
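A minimal sketch of the resulting length query, assuming each cell stores its anchor a(C) and the value dist(C) = d(s, a(C)) under the field names anchor and dist (our names, not the paper's):

```python
from collections import namedtuple

Cell = namedtuple("Cell", "anchor dist")   # the fields we assume are stored

def l1(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def query_length(t, cell):
    """d(s, t) = d(s, a(C)) + |t - a(C)|, by Lemma 2: some shortest s-t path
    goes through the anchor a(C) of the cell C containing t, and the
    canonical path from t to a(C) is xy-monotone."""
    return cell.dist + l1(t, cell.anchor)

# Toy usage: t lies in a cell whose anchor is at (2, 0) with d(s, a(C)) = 7.
print(query_length((3.5, 1.0), Cell(anchor=(2, 0), dist=7)))   # 9.5
```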

Theorem 3.1

Let P be a simple polygon with n vertices and let s be a fixed source point in P. After O(n)-time preprocessing, given any query point t, we can compute the length of an L_1 shortest s-t path in O(log n) time, or in O(1) time if t is a vertex of P. An actual shortest path can be reported in additional time linear in the number of edges of the path.

Proof

The preprocessing phase is done as follows. We build the mountain decomposition D of P with respect to the horizontal segment e* through s, as described in Lemma 1. Then, we build a point location data structure on D in additional O(n) time [7, 13]. Also, for every vertex v of P, we associate it with a cell of D such that v lies on the boundary of that cell.

Next, for each cell C of D that is not a root cell, we store a shortest path from its anchor a(C) to the anchor a(C') of its parent cell C'. Since a(C) is also in C', we can simply store the canonical path π(a(C), a(C')) in C', which has at most three edges. Finally, using the tree structure of D and the anchor-to-anchor shortest paths, we can compute and store the values dist(C) = d(s, a(C)) for all cells C of D in O(n) time in a top-down manner. All of this can be done in O(n) time using O(n) storage.

Given a query point t, we can find the cell C of D containing t in O(log n) time using the point location structure, or in O(1) time if t is a vertex of P. Then, by Lemma 2, adding the length |t a(C)| of the canonical path π(t, a(C)) to dist(C) yields d(s, t), while an actual shortest path from s to t can be obtained by concatenating π(t, a(C)) with the stored anchor-to-anchor paths for C and all its ancestors up to the root cell. Therefore, the theorem follows. ∎
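The values dist(C) themselves can be filled in by a single top-down pass, since the anchor-to-anchor canonical path inside the parent cell is xy-monotone; a sketch under the same assumed field names:

```python
from collections import deque
from types import SimpleNamespace as NS

def l1(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def fill_distances(root):
    """Compute dist(C) = d(s, a(C)) for every cell, top-down.

    Assumes each cell object has fields `anchor` and `children` (our
    names).  The root cell's anchor is s itself, and for any other cell
    the anchor-to-anchor canonical path inside the parent cell is
    xy-monotone, so its length is the L_1 distance between the anchors."""
    root.dist = 0.0
    queue = deque([root])
    while queue:
        c = queue.popleft()
        for child in c.children:
            child.dist = c.dist + l1(child.anchor, c.anchor)
            queue.append(child)

# Toy usage with two cells: the root (anchor = s) and one child.
s = (0.0, 0.0)
root = NS(anchor=s, children=[])
child = NS(anchor=(3.0, 2.0), children=[])
root.children.append(child)
fill_distances(root)
print(child.dist)   # 5.0
```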

4 Two-Point Shortest Path Queries

In this section, we use the mountain decomposition to answer two-point L_1 shortest path queries. Let D be the mountain decomposition of P with respect to a line segment e*, as discussed in Section 2.

We begin by introducing parent points and projections.

4.1 Parent Points and Projections

We first define the parent point of any cell C of D, which is a point on the base of the parent cell of C. If C is a root cell, then its parent point is undefined. Otherwise, let C' be the parent cell of C, and let e and e' be the bases of C and C', respectively. The parent point of C, denoted by p(C), is defined as the first point encountered on e' when we traverse from x to y along their canonical path in C', where x is any point on e and y is any point on e' (e.g., see Fig. 8). In other words, if the endpoint of e closer to the bottom of C' is perpendicularly visible to e' (recall that e is orthogonal to e'), then p(C) is the orthogonal projection of that endpoint onto e'; otherwise, p(C) is the endpoint of e' closer to e.

Figure 8: Illustrating the parent points p(C_1) and p(C_2) of two cells C_1 and C_2.
Figure 9: Illustrating the definition of the projection of p on e'. The cell that contains p is C.
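A minimal sketch (ours) of the two cases in the definition of the parent point, assuming the parent cell is an upward mountain (horizontal base below a vertical window); the predicate perp_visible is a hypothetical stand-in for the perpendicular-visibility test.

```python
def parent_point(window, base, perp_visible):
    """Parent point p(C) for a cell whose (vertical) base `window` is a
    window of a parent cell with horizontal base `base`.

    window, base: ((x1, y1), (x2, y2)) segments; `perp_visible(pt)` tells
    whether the vertical segment from pt down to `base` stays inside the
    parent cell.  All names are ours; the two branches mirror the two
    cases in the definition above."""
    v = min(window, key=lambda pt: pt[1])          # endpoint closer to the bottom
    (bx1, by), (bx2, _) = base
    lo, hi = min(bx1, bx2), max(bx1, bx2)
    if lo <= v[0] <= hi and perp_visible(v):
        return (v[0], by)                          # orthogonal projection of v on the base
    # otherwise: the endpoint of the base closer to the window
    return (bx1, by) if abs(bx1 - v[0]) < abs(bx2 - v[0]) else (bx2, by)

# Toy usage: window from the vertex (4, 2) up to (4, 4); parent base y = 0 on [0, 6].
print(parent_point(((4, 2), (4, 4)), ((0, 0), (6, 0)), perp_visible=lambda pt: True))
# -> (4, 0)
```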

Consider any cell C of D and any point p in C. Without loss of generality, we assume C is in D_u. Let C' be any cell of D_u that is an ancestor of C or C itself, and let e' be the base of C'. We define the C'-projection of p on e', denoted by pr(p, C'), as follows.

Definition 1
  1. If C' = C, then define pr(p, C') to be the first point encountered on e' when we traverse from p to q along their canonical path in C, where q is any point on e'.

  2. If C' is a proper ancestor of C (e.g., see Fig. 9), then define pr(p, C') to be the parent point p(C''), where C'' is the child of C' that is an ancestor of C or C itself (C'' is C if C' is the parent of C).

Observe that if C' ≠ C and C' is an ancestor of C, then the C'-projections of all points of C coincide at the single point p(C''), where C'' is the child of C' that is an ancestor of C or C itself. That is, all points in C have the same C'-projection on e'.

The following two lemmas justify our definitions of projections and parent points; the notation follows the definitions above.

Lemma 3

Let C be any cell of D and let C' be an ancestor of C in T_u, or C itself, whose base is e'. For any point p in C and any point q on e', there exists a shortest path from p to q that contains pr(p, C'), and thus d(p, q) = d(p, pr(p, C')) + d(pr(p, C'), q). Hence, pr(p, C') is the point on e' whose shortest path length to p is minimum.

Proof

If C' = C, then, based on the definition of pr(p, C), one can easily verify that the lemma holds.

Figure 10: Illustrating the proof of Lemma 3. The dashed (blue) path is a shortest path from p to q, which crosses e'' at a point z.

Suppose that C' ≠ C, and let C'' be the child of C' that is an ancestor of C (C'' is C if C' is the parent of C). Let e'' be the base of C''. Let τ be any shortest path from p to q. Note that τ must intersect e'' at a point z (e.g., see Fig. 10). By the definition of the parent point of C'', the concatenation of the canonical path π(z, p(C'')) in C' and the line segment from p(C'') to q along e' must be a shortest path from z to q. Thus, replacing the portion of τ between z and q by this concatenation results in another shortest path from p to q. By definition, we have pr(p, C') = p(C'') in this case. Therefore, the new path contains pr(p, C'), and d(p, q) = d(p, pr(p, C')) + d(pr(p, C'), q). The lemma thus follows. ∎

Lemma 4

Let C be any cell of D and let C' be an ancestor of C in T_u or C itself. Also, let C = C_1, C_2, ..., C_k = C' be the sequence of cells along the path from C to C' in T_u. Then, for any point p in C and any point q on the base e_k of C', there exists a shortest path from p to q that passes through the projections pr(p, C_1), pr(p, C_2), ..., pr(p, C_k) in this order, where e_i denotes the base of C_i for each i. More precisely, the path is the concatenation of the canonical paths π(p, pr(p, C_1)), π(pr(p, C_1), pr(p, C_2)), ..., π(pr(p, C_{k-1}), pr(p, C_k)), and the line segment from pr(p, C_k) to q.

Proof

Let τ be the path from p to q described in the lemma. By Lemma 3, it suffices to show that the portion of τ between p and pr(p, C_k) is a shortest path from p to pr(p, C_k); let τ' denote that portion. We prove this by induction on k.

If k = 1, then C' = C and τ' = π(p, pr(p, C_1)). By the definition of pr(p, C), it is easy to verify that τ' is a shortest path from p to pr(p, C_1).

Suppose that k > 1. Then, by the inductive hypothesis, the subpath of τ' from p to pr(p, C_{k-1}) is a shortest path. Let v be the endpoint of e_{k-1} closer to the bottom of C_k. We claim that there exists a shortest path from p to pr(p, C_k) that contains v; this can be proved by an analysis similar to the proof of Lemma 3, and we omit the details. On the other hand, by Lemma 3, there is a shortest path from p to v that contains pr(p, C_{k-1}). Also, by definition, e_{k-1} is a window of C_k. Hence, v is in C_k, and the canonical path π(v, pr(p, C_k)) is a shortest path from v to pr(p, C_k). Together, the above implies that the concatenation of a shortest path from p to pr(p, C_{k-1}), a shortest path from pr(p, C_{k-1}) to v, and π(v, pr(p, C_k)) is a shortest path from p to pr(p, C_k). By the definition of pr(p, C_k) and the definition of canonical paths, the concatenation of the last two pieces is exactly the canonical path π(pr(p, C_{k-1}), pr(p, C_k)), and thus the whole path is exactly τ'. This proves that τ' is a shortest path from p to pr(p, C_k). The lemma thus follows. ∎

4.2 The Data Structure

Here we describe our data structure for two-point shortest path queries and how to build it in the preprocessing phase.

Our data structure is based on the mountain decomposition D with respect to an arbitrary maximal horizontal segment e* in P. Let P_u, P_d, D_u, D_d, T_u, and T_d be defined as before. We store and maintain the following auxiliary values and structures at each cell C of D_u (and symmetrically of D_d); a record-type sketch of these fields is given right after the list:

  • p(C): the parent point of C if C is not the root; undefined if C is the root.

  • pr*(C): the common e*-projection of the points of C on e*. If C is not the root, then all points in C have the same e*-projection; if C is the root, then pr*(C) is undefined.

  • d(C): the shortest path length from p(C) to e*.

  • dep(C): the depth of C in T_u, that is, the number of edges on the path from C to the root of T_u.

  • TD(C): the horizontal trapezoidal decomposition of C if the base of C is horizontal, or the vertical trapezoidal decomposition of C if the base of C is vertical.

  • T(C): the rooted tree corresponding to TD(C), in which each trapezoid of TD(C) corresponds to a node, two adjacent trapezoids of TD(C) are joined by an edge, and the trapezoid incident to the base of C is the root of T(C).
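As a purely illustrative picture of what is stored per cell, the following record type mirrors the bullets above; the field names are ours.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Point = Tuple[float, float]

@dataclass
class CellRecord:
    """Per-cell information kept by the two-point query structure
    (field names are ours; they correspond one-to-one to the bullets above)."""
    base: Tuple[Point, Point]         # the base segment of the cell (from the decomposition)
    parent_point: Optional[Point]     # p(C); None at the root
    root_projection: Optional[Point]  # pr*(C), the common e*-projection; None at the root
    dist_to_root: float               # d(C): shortest path length from p(C) to e*
    depth: int                        # dep(C): depth of C in T_u / T_d
    trapezoids: object                # TD(C): trapezoidal decomposition of the cell
    trapezoid_tree: object            # T(C): its dual tree, rooted at the base trapezoid
    children: List["CellRecord"] = field(default_factory=list)
```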

All these elements can be computed in linear time, as shown in the following lemma.

Lemma 5

Our data structure described above can be built in time.

Proof

As described in Lemma 1, the mountain decomposition D with respect to e* can be computed in linear time. We next describe how to compute the auxiliary information stored at each cell C of D_u; the cells of D_d are handled analogously.

The parent point p(C) can easily be found in O(1) time by examining the parent of C, and the depths dep(C) of all cells in T_u can be computed in O(n) total time by a top-down traversal of T_u.

For computing pr*(C), observe that pr*(C) is equal to the parent point p(C_1), where C_1 is the child of the root that is an ancestor of C in T_u. This implies that pr*(C) = p(C_1) holds for all nodes C in the subtree of T_u rooted at C_1, for each child C_1 of the root of T_u. Thus, we can compute pr*(C) for all cells C of D_u in O(n) time.

Note that if C is not a child of the root, then we have d(C) = d(C') + d(p(C), p(C')), where C' is the parent of C (here d(p(C), p(C')) = |p(C) p(C')|, realized by the canonical path between them); otherwise, if C is a child of the root, then p(C) lies on e*, so d(C) = 0. Therefore, the values d(C) for all nodes can be computed in O(n) time in a top-down manner.

To compute TD(C) and its corresponding tree structure T(C), we run the algorithm of [5] on C, which takes time linear in the number of vertices of C. Since the total complexity of all cells of D_u and D_d is O(n), summing this cost over all cells results in O(n) time. ∎

When processing a two-point query, we will need certain operations on the above trees to be performed efficiently. For this purpose, we do some additional preprocessing, taking linear time overall. We preprocess T_u in O(n) time such that each level ancestor query can be answered in O(1) time [4]; i.e., given a node C of T_u and a value k with k ≤ dep(C), where dep(C) is the depth of C in T_u, the query asks for the ancestor of C at depth k. In addition, we preprocess T_u in O(n) time such that, given any two nodes of T_u, their lowest common ancestor can be found in O(1) time [3, 10]. Similarly, we preprocess T_d for both level ancestor and lowest common ancestor queries in O(n) time. We also preprocess T(C), for all cells C of D, for lowest common ancestor queries in the same way, which takes O(n) time in total.
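The paper cites linear-size, constant-query-time structures for level ancestor [4] and lowest common ancestor [3, 10]. As a simpler stand-in for experimentation, here is a binary-lifting sketch (ours) with O(n log n) preprocessing and O(log n) queries.

```python
import math

class AncestryIndex:
    """Binary-lifting index supporting level-ancestor and LCA queries.

    This is a simpler substitute for the O(n)-size, O(1)-query structures
    cited in the text; it is enough to prototype the query algorithm."""

    def __init__(self, parent, depth):
        # parent[v] = parent of node v (parent[root] = -1); depth[v] = its depth.
        n = len(parent)
        self.depth = depth
        LOG = max(1, math.ceil(math.log2(max(2, n))))
        up = [parent[:]]
        for _ in range(1, LOG):
            prev = up[-1]
            up.append([prev[prev[v]] if prev[v] != -1 else -1 for v in range(n)])
        self.up = up

    def level_ancestor(self, v, d):
        """Ancestor of v at depth d (requires d <= depth[v])."""
        k = self.depth[v] - d
        for j, table in enumerate(self.up):
            if (k >> j) & 1:
                v = table[v]
        return v

    def lca(self, u, v):
        if self.depth[u] < self.depth[v]:
            u, v = v, u
        u = self.level_ancestor(u, self.depth[v])
        if u == v:
            return u
        for table in reversed(self.up):
            if table[u] != table[v]:
                u, v = table[u], table[v]
        return self.up[0][u]

# Path 0-1-2 plus a branch 1-3: lca(2, 3) = 1, level_ancestor(2, 0) = 0.
idx = AncestryIndex(parent=[-1, 0, 1, 1], depth=[0, 1, 2, 2])
print(idx.lca(2, 3), idx.level_ancestor(2, 0))   # 1 0
```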

The following lemma is a consequence of our data structure, which will be used as a subroutine of our query algorithm described later.

Lemma 6

Let C be a cell of D and let C' be an ancestor of C with base e'. Given any point p in C, the C'-projection pr(p, C') of p on e' and the value d(p, pr(p, C')) can be computed in O(1) time using our data structure, provided that both C and C' are known. Further, an actual shortest path from p to pr(p, C') can be output in time linear in the number of edges of the path.

Proof

If C' = C, then the lemma trivially holds. In the following, we assume C' ≠ C. Without loss of generality, we assume that C is a cell of D_u, so C' is a cell of D_u as well.

We first show that pr(p, C') can be computed in O(1) time. By definition, pr(p, C') is the parent point p(C''), where C'' is the child of C' that is an ancestor of C or C itself. Since p(C'') is stored at C'', once we know C'' in T_u, pr(p, C') can be obtained immediately. To this end, we use a level ancestor query as follows.

Since both C and C' are known, the depth dep(C') can be obtained in O(1) time, as it is stored at C'. It is easy to see that C'' is the ancestor of C at depth dep(C') + 1. Therefore, by a level ancestor query, C'' can be located in T_u in O(1) time, after which pr(p, C') = p(C'') is obtained as well.

Next, we show how to compute d(p, pr(p, C')) in O(1) time. Let e be the base of C. We can obtain the C-projection pr(p, C) of p on e in O(1) time, since it is the foot of the canonical path of p in C. Notice that, by Lemmas 3 and 4, a shortest path from p to e* passes through pr(p, C), p(C), and pr(p, C') = p(C'') in this order, and that d(p(C), e*) = d(C) and d(p(C''), e*) = d(C''), because C' is an ancestor of C. Hence, we have

d(p, pr(p, C')) = d(p, e*) - d(C'') = |p pr(p, C)| + |pr(p, C) p(C)| + d(C) - d(C'').

Due to our data structure, d(C) is stored at C and d(C'') is stored at C''. Also, C'' has already been determined above. Hence, both d(C) and d(C'') can be found in O(1) time. Further, by the definition of canonical paths, d(p, pr(p, C)) = |p pr(p, C)|; by the definition of parent points, d(pr(p, C), p(C)) = |pr(p, C) p(C)|. Clearly, both quantities can be computed in O(1) time, because p(C) is stored at C. Therefore, d(p, pr(p, C')) can be computed in O(1) time.

Finally, to find a shortest path from p to pr(p, C'), we apply the construction described in Lemma 4. Since the cells on the path from C to C' in T_u can be retrieved by following parent pointers, the time for reporting the path is linear in the number of edges of the path. ∎
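A sketch of the Lemma 6 computation under our assumed field names (see the record type in Section 4.2); the identity used is the one displayed in the proof above, and level_ancestor stands for the level-ancestor structure of [4] (or the binary-lifting stand-in sketched earlier).

```python
def l1(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def base_projection(p, cell):
    """pr(p, C): foot of the canonical path of p on the (horizontal) base
    of C.  For a vertical base, swap the roles of x and y."""
    (x1, y), (x2, _) = cell.base
    lo, hi = min(x1, x2), max(x1, x2)
    return (min(max(p[0], lo), hi), y)

def dist_to_ancestor_base(p, cell, anc, level_ancestor):
    """pr(p, C') and d(p, pr(p, C')) for an ancestor cell C' = anc of C = cell.

    Uses only stored O(1) fields (our names): cell.base, cell.parent_point,
    cell.dist_to_root (= d(C), the distance from p(C) to e*), cell.depth,
    plus one level-ancestor query to fetch C'', the child of C' on the
    path to C.  Returns the projection and the distance."""
    if anc is cell:
        q = base_projection(p, cell)
        return q, l1(p, q)
    mid = level_ancestor(cell, anc.depth + 1)      # the cell C''
    proj = mid.parent_point                        # pr(p, C') = p(C'')
    foot = base_projection(p, cell)                # pr(p, C)
    dist = (l1(p, foot) + l1(foot, cell.parent_point)
            + cell.dist_to_root - mid.dist_to_root)
    return proj, dist
```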

4.3 Processing Queries

In the following, we describe how to process a two-point query. Given two query points s and t in P, we first determine two cells C_s and C_t of D such that s is in C_s and t is in C_t. To be precise when s or t lies on a common edge of two adjacent cells, we choose C_s and C_t as follows. If s (resp., t) lies on a window, then among the two cells sharing that window we let C_s (resp., C_t) be the parent cell; if either s or t lies on e*, then we choose C_s and C_t so that the two cells lie on the same side of e*; otherwise, the two cells C_s and C_t with s in C_s and t in C_t are uniquely determined.

Before discussing how to find C_s and C_t, we first describe how to compute the distance d(s, t) and an actual shortest path between s and t, provided that both C_s and C_t are known. We distinguish two cases depending on whether s and t are separated by e* (i.e., whether they are in the two different sub-polygons of P divided by e*).

We assume that the data structure described above has been built in the preprocessing phase. The following lemma handles the case where s and t are separated by e*.

Lemma 7

Given two points s and t in P, suppose that s and t are separated by e*. After the O(n)-time preprocessing described above, the distance d(s, t) can be computed in O(1) time, and an actual shortest path between s and t can be output in time linear in the number of edges of the path, provided that both C_s and C_t are known.

Proof

Since s and t are separated by e*, C_s and C_t are also separated by e*. Without loss of generality, we assume that C_s is a cell of D_u and C_t is a cell of D_d.

Let x and y be the e*-projections of s and t on e*, respectively, i.e., x = pr(s, R_u) and y = pr(t, R_d), where R_u and R_d are the root cells of T_u and T_d. We first show that there exists a shortest s-t path that is a concatenation of three parts: a shortest path from s to x, the line segment xy, and a shortest path from y to t.

Let τ be a shortest path from s to t. Clearly, τ must intersect e* at a point z. By Lemma 3, there is a shortest path from s to z that contains x, and there is a shortest path from t to z that contains y. Hence, if we replace the portion of τ between s and z by the former path and the portion of τ between z and t by the latter path, we obtain another shortest path τ' from s to t. Observe that if we traverse along τ' from s to t, then we encounter the points s, x, z, y, and t in this order. Since x, z, and y are all on e*, τ' is a concatenation of a shortest path from s to x, the line segment xy, and a shortest path from y to t.

To compute the length d(s, t), according to the above discussion, we have d(s, t) = d(s, x) + |xy| + d(y, t). By Lemma 6, x, y, d(s, x), and d(y, t) can all be computed in O(1) time. Hence, d(s, t) can be computed in O(1) time.

To output a shortest s-t path, by Lemma 6, a shortest path from s to x (resp., from t to y) can be computed in time linear in the number of edges of the path. According to the above discussion, the concatenation of these two paths along with the segment xy is a shortest s-t path. Thus, a shortest s-t path can be output in time linear in the number of edges of the path. ∎
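A sketch of the Lemma 7 length query, reusing the Lemma 6 routine (here passed in as a callable with the level-ancestor index already bound, e.g. via functools.partial); all names are ours.

```python
def l1(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def two_point_separated(s, cell_s, t, cell_t, root_s, root_t,
                        dist_to_ancestor_base):
    """d(s, t) when s and t lie on opposite sides of the splitting segment e*.

    cell_s / cell_t are the cells containing s and t, root_s / root_t the
    root cells of their respective trees T_u and T_d, and
    dist_to_ancestor_base(p, cell, root) is the Lemma 6 routine (see the
    earlier sketch).  By Lemma 7, some shortest path crosses e* along the
    segment joining the two e*-projections x and y, so the three pieces
    can simply be summed."""
    x, ds = dist_to_ancestor_base(s, cell_s, root_s)
    y, dt = dist_to_ancestor_base(t, cell_t, root_t)
    return ds + l1(x, y) + dt
```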

Next we discuss the case where s and t are not separated by e*. Without loss of generality, we assume that both of them are in P_u, so both C_s and C_t are cells of D_u. In this case, we make use of the lowest common ancestor query on T_u. Let C_a be the lowest common ancestor of C_s and C_t in T_u.

We define two points s' and t' as follows. If C_s = C_a, then s' = s; otherwise, define s' to be the projection of s onto the window of C_a that separates C_s from C_t, i.e., s' = pr(s, C_1), where C_1 is the child of C_a that is an ancestor of C_s or C_s itself. Analogously, if C_t = C_a, then t' = t; otherwise, define t' to be the projection of t onto the window of C_a that separates C_t from C_s.
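A sketch of this first step of the same-side case: locating C_a and the points s' and t'. The remaining piece, the distance between s' and t' within the single cell C_a, is not sketched here. All names are ours; lca and level_ancestor query the tree T_u, and projection(p, cell, anc) returns the anc-projection of p (the first component of the Lemma 6 routine).

```python
def split_points(s, cell_s, t, cell_t, lca, level_ancestor, projection):
    """Locate the lowest common ancestor cell C_a of the two cells and the
    points s', t' defined above (used in Lemma 8)."""
    c_a = lca(cell_s, cell_t)
    s_prime = s if cell_s is c_a else projection(
        s, cell_s, level_ancestor(cell_s, c_a.depth + 1))
    t_prime = t if cell_t is c_a else projection(
        t, cell_t, level_ancestor(cell_t, c_a.depth + 1))
    return c_a, s_prime, t_prime
```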

Due to Lemma 3, we obtain the following lemma.

Lemma 8

There exists a shortest s-t path that consists of the following three parts: a shortest path from s to s', a shortest path from s' to t', and a shortest path from t' to t.

Proof

Let τ be any shortest path from s to t. We first show that there exists a shortest s-t path that passes through s'. If C_s = C_a, then