1 Introduction
Computing shortest paths is one of the most fundamental and well-studied algorithmic problems, with numerous applications in various fields. In the data structure version of the problem, the goal is to preprocess a graph into a compact representation such that the distance (or a shortest path) between any pair of vertices can be retrieved efficiently. Such data structures are called distance oracles. Distance oracles are useful in applications ranging from navigation, geographic information systems and logistics, to computer games, databases, packet routing, web search, computational biology, and social networks. The topic has been studied extensively; see for example the survey by Sommer [42] for a comprehensive overview and references.
The two main measures of efficiency of a distance oracle are the space it occupies and the time it requires to answer a distance query. To appreciate the tradeoff between these two quantities, consider two naïve oracles. The first stores all pairwise distances in a table, and answers each query in constant time using table lookup. The second only stores the input graph, and runs a shortest path algorithm over the entire graph upon each query. Neither of these oracles is adequate even for moderately large graphs. The first consumes too much space, and the second is too slow in answering queries. A third quantity of interest is the preprocessing time required to construct the oracle. Since computing the data structure is done offline, this quantity is often considered less important. However, one approach to dealing with dynamic networks is to recompute the entire data structure quite frequently, which is only feasible when the preprocessing time is reasonably small.
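The two extremes can be made concrete with a short sketch. The following Python snippet (illustrative only; the names `TableOracle` and `OnlineOracle` are ours) contrasts the all-pairs table with the store-the-graph approach:

```python
import heapq

def dijkstra(graph, s):
    # graph: {u: [(v, w), ...]} adjacency lists with non-negative weights
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

class TableOracle:
    """Theta(n^2) space, O(1) query: precompute all pairwise distances."""
    def __init__(self, graph):
        self.table = {u: dijkstra(graph, u) for u in graph}
    def query(self, u, v):
        return self.table[u].get(v, float('inf'))

class OnlineOracle:
    """Linear space, slow query: store the graph, run Dijkstra per query."""
    def __init__(self, graph):
        self.graph = graph
    def query(self, u, v):
        return dijkstra(self.graph, u).get(v, float('inf'))
```

The first trades quadratic space for constant query time; the second trades linear space for a full shortest path computation per query.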
One way to design oracles with small space is to consider approximate distance oracles (allowing a small stretch in the distance output). However, it turns out that one cannot get both small stretch and small space. In their seminal paper, Thorup and Zwick [44] showed that, assuming the girth conjecture of Erdős [17], there exist dense graphs for which any oracle with stretch less than 2k+1 must have size Ω(n^{1+1/k}). Pǎtraşcu and Roditty [41] showed that even sparse graphs with Õ(n) edges do not have distance oracles with stretch better than 2 and subquadratic space, conditioned on a widely believed conjecture on the hardness of set intersection. To bypass these impossibility results one can impose additional structure on the graph. In this work we follow this approach and focus on distance oracles for planar graphs.
Distance oracles for planar graphs.
The importance of studying distance oracles for planar graphs stems from several reasons. First, distance oracles for planar graphs are ubiquitous in real-world applications such as geographical navigation on road networks [42] (road networks are often theoretically modeled as planar graphs even though they are not quite planar due to tunnels and overpasses). Second, shortest paths in planar graphs exhibit a remarkable wealth of structural properties that can be exploited to obtain efficient oracles. Third, techniques developed for shortest paths problems in planar graphs often carry over (because of intricate and elegant connections) to maximum flow problems. Fourth, planar graphs have proved to be an excellent sandbox for the development of algorithms and techniques that extend to broader families of graphs.
As such, distance oracles for planar graphs have been extensively studied. Work on exact distance oracles for planar graphs started in the 1990s with oracles requiring S space and answering queries in O(n²/S) time for any S ∈ [n, n²] [15, 1]. Note that this range includes the two trivial approaches mentioned above. Over the past three decades, many other works presented exact distance oracles for planar graphs with increasingly better space/query-time tradeoffs [15, 1, 11, 20, 39, 37, 6, 12, 22]. Figure 1 illustrates the advancements in the space/query-time tradeoffs over the years. Until recently, no distance oracles with subquadratic space and polylogarithmic query time were known. Cohen-Addad et al. [12], inspired by Cabello’s use of Voronoi diagrams for the diameter problem in planar graphs [7], provided the first such oracle. The currently best known tradeoff [22] is an oracle with S space and Õ(n^{5/2}/S^{3/2}) query time for any S ∈ [n, n^{5/3}]. Note that all known oracles with nearly linear (i.e., Õ(n)) space require time roughly √n to answer queries.
The holy grail in this area is to design an exact distance oracle for planar graphs with both linear space and constant query time. It is not known whether this goal can be achieved. We do know, however (for nearly twenty years now), that approximate distance oracles can get very close. For any fixed ε > 0 there are (1+ε)-approximate distance oracles that occupy nearly-linear space and answer queries in polylogarithmic, or even constant, time [43, 29, 28, 26, 23, 45, 9]. However, the main question, whether an approximate answer is the best one can hope for or whether exact distance oracles for planar graphs with linear space and constant query time exist, has remained wide open.
Our results and techniques.
In this paper we approach the optimal tradeoff between space and query time for reporting exact distances. We design exact distance oracles that require almost-linear space and answer distance queries in polylogarithmic time. Specifically, given a planar graph with n vertices, we show how to construct in subquadratic time a distance oracle admitting any of the following space, query-time tradeoffs (see Theorem 7 and Corollary 8 for the exact statements).

, for any constant ;

, for any constant ;

.
Voronoi diagrams.
The main tool we use to obtain this result is point location in Voronoi diagrams on planar graphs. The concept of Voronoi diagrams has been used in computational geometry for many years (cf. [2, 13]). We consider graphical (or network) Voronoi diagrams [34, 19]. At a high level, a graphical Voronoi diagram with respect to a set S of sites is a partition of the vertices into parts, called Voronoi cells, where the cell of a site s contains all vertices that are closer (in the shortest path metric) to s than to any other site in S. Graphical Voronoi diagrams have been studied and used quite extensively, most notably in applications in road networks (e.g., [40, 16, 25, 14]).
Perhaps the most elementary operation on Voronoi diagrams is point location. Given a point (vertex) v, one wishes to efficiently determine the site s such that v belongs to the Voronoi cell of s. Cohen-Addad et al. [12], inspired by Cabello’s [7] breakthrough use of Voronoi diagrams in planar graphs, suggested a way to perform point location that led to the first exact distance oracle for planar graphs with subquadratic space and polylogarithmic query time. A simpler and more efficient point location mechanism for Voronoi diagrams in planar graphs was subsequently developed in [22]. In both oracles, the space efficiency stems from the fact that the size of the representation of a Voronoi diagram is proportional to the number of sites rather than to the total number of vertices.
To obtain our result, we add two new ingredients to the point location mechanism of [22]. The first is the use of what might be called external Voronoi diagrams. Unlike previous constructions, instead of working with Voronoi diagrams for every piece in some partition (division) of the graph, we work with many overlapping Voronoi diagrams, representing the complements of such pieces. This is analogous to the use of external dense distance graphs in [5, 37]. This approach alone leads to the oracle presented in Section 3. The obstacle to pushing this approach further is that the point location mechanism consists of auxiliary data for each piece, whose size is proportional to the size of the complement of the piece, which is Θ(n), rather than to the much smaller size of the piece itself. We show that this problem can be mitigated by using recursion, and by storing much less auxiliary data at a coarser level of granularity. This approach is made possible by a more modular view of the point location mechanism, which we present in Section 2, along with other preliminaries. The proof of our main space/query-time tradeoffs is given in Section 4, and the algorithm to efficiently construct these oracles is given in Section 5.
2 Preliminaries
In this section we review the main techniques required for describing our result. Throughout the paper we consider as input a weighted directed planar graph G, embedded in the plane. (We use the terms weight and length for edges and paths interchangeably throughout the paper.) We use n to denote the number of vertices in G; since planar graphs are sparse, the number of edges of G is O(n) as well. The dual of a planar graph G is another planar graph G* whose vertices correspond to the faces of G and vice versa. Arcs of G* are in one-to-one correspondence with arcs of G; there is an arc from a vertex f* to a vertex g* of G* if and only if the corresponding faces f and g of G are to the left and right of the corresponding arc of G, respectively.
We assume that the input graph has no negative-length cycles. We can transform the graph in a standard way [38] so that all edge weights are nonnegative and distances are preserved. With another standard transformation we can guarantee, in O(n) time, that each vertex has constant degree and that the graph is triangulated, while distances are preserved and without asymptotically increasing the size of the graph. We further assume that shortest paths are unique; this can be ensured by a deterministic perturbation of the edge weights [18]. Let d(u, v) denote the distance from a vertex u to a vertex v in G.
Multiple-source shortest paths.
The multiple-source shortest paths (MSSP) data structure [30] represents all shortest path trees rooted at the vertices of a single face f in a planar graph using a persistent dynamic tree. It can be constructed in O(n log n) time, requires O(n log n) space, and can report the distance between any vertex of f and any other vertex in the graph in O(log n) time. We note that it can be augmented to also return the first edge of this path within the same complexities. The authors of [22] show that it can be further augmented (within the same complexities) such that, given two vertices u and w and a vertex r of f, it can return in O(log n) time whether u is an ancestor of w in the shortest path tree rooted at r, as well as whether u occurs before w in a preorder traversal of this tree.
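As a point of reference, the following naive Python stand-in exposes the same interface (distances, ancestor tests) by simply storing one explicit shortest path tree per face vertex; it uses far more space than MSSP and is meant only to make the interface concrete, not to reproduce Klein's persistent-tree construction. The class name is ours.

```python
import heapq

class NaiveMSSP:
    """Stand-in for MSSP: one explicit shortest path tree per face vertex.
    Uses O(|face| * n) space versus the O(n log n) of the real structure."""
    def __init__(self, graph, face_vertices):
        self.trees = {}
        for s in face_vertices:
            dist, parent = {s: 0}, {s: None}
            pq = [(0, s)]
            while pq:
                d, u = heapq.heappop(pq)
                if d > dist[u]:
                    continue  # stale heap entry
                for v, w in graph.get(u, []):
                    if d + w < dist.get(v, float('inf')):
                        dist[v], parent[v] = d + w, u
                        heapq.heappush(pq, (d + w, v))
            self.trees[s] = (dist, parent)

    def distance(self, s, v):
        return self.trees[s][0].get(v, float('inf'))

    def is_ancestor(self, s, u, v):
        # walk parent pointers from v; the real structure answers in O(log n)
        parent = self.trees[s][1]
        while v is not None:
            if v == u:
                return True
            v = parent.get(v)
        return False
```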
Separators and recursive decompositions.
Miller [35] showed how to compute, in a biconnected triangulated planar graph with n vertices, a simple cycle of size O(√n) that separates the graph into two subgraphs, each with at most 2n/3 vertices. Simple cycle separators can be used to recursively separate a planar graph until the pieces have constant size. The authors of [32] show how to obtain a complete recursive decomposition tree T of G in O(n) time. T is a binary tree whose nodes correspond to subgraphs of G (pieces), with the root being all of G and the leaves being pieces of constant size. We identify each piece with the node representing it in T. We can thus abuse notation and write P ∈ T. An r-division [21] of a planar graph, for r ∈ [1, n], is a decomposition of the graph into O(n/r) pieces, each of size O(r), such that each piece has O(√r) boundary vertices, i.e., vertices that belong to some separator along the recursive decomposition used to obtain the piece. Another desired property of an r-division is that the boundary vertices lie on a constant number of faces of the piece (holes). For every r larger than some constant, an r-division with few holes is represented in the decomposition tree of [32]. In fact, it is not hard to see that if the original graph is triangulated then all vertices of each hole of a piece are boundary vertices. Throughout the paper, to avoid confusion, we use “nodes” when referring to T and “vertices” when referring to G. We denote the boundary vertices of a piece P by ∂P. We refer to non-boundary vertices as internal. We assume for simplicity that each hole is a simple cycle. Non-simple cycles do not pose a significant obstacle, as we discuss at the end of Section 4.
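The bookkeeping behind such divisions can be illustrated on a toy case. The sketch below recursively halves a k × k grid graph (for grids, cutting along a middle row or column is a valid O(√n) separator, so it stands in for Miller's cycle separator); pieces are rectangles of vertex ranges, and boundary tracking is omitted from this sketch.

```python
# Toy recursive decomposition of a k x k grid graph into pieces of size <= r.
def decompose(rows, cols, r):
    """Return pieces as (row-range, col-range) rectangles of size <= r."""
    if len(rows) * len(cols) <= r:
        return [(rows, cols)]
    if len(rows) >= len(cols):      # cut the longer side in half
        mid = len(rows) // 2
        halves = [(rows[:mid], cols), (rows[mid:], cols)]
    else:
        mid = len(cols) // 2
        halves = [(rows, cols[:mid]), (rows, cols[mid:])]
    out = []
    for rs, cs in halves:
        out += decompose(rs, cs, r)
    return out

# A 16x16 grid (n = 256) with r = 16 yields O(n/r) = 16 pieces of size <= 16.
pieces = decompose(list(range(16)), list(range(16)), 16)
```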
It is shown in [32, Theorem 3] that, given a geometrically decreasing sequence of numbers (r_m, r_{m-1}, …, r_1), where r_1 is a sufficiently large constant, the ratio of consecutive numbers is bounded by a constant, and r_m = n, we can obtain the r_i-divisions for all i in O(n) time in total. For convenience we define the only piece in the r_m-division to be G itself. These divisions satisfy the property that a piece in the r_{i-1}-division is a (not necessarily strict) descendant (in T) of a piece in the r_i-division for each i. This ancestry relation between the pieces of the r_i-divisions can be captured by a tree called the recursive r-division tree.
The boundary vertices of a piece P that lie on a hole h of P separate the graph into two subgraphs G_1 and G_2 (the cycle h is in both subgraphs). One of these two subgraphs is enclosed by the cycle and the other is not. Moreover, P is a subgraph of one of these two subgraphs, say G_1. We then call G_2 the outside of hole h with respect to P and denote it by out_h(P). In the sections where we assume that the boundary vertices of each piece lie on a single hole that is a simple cycle, the outside of this hole with respect to P is out_h(P), and to simplify notation we denote it by P^out.
Additively weighted Voronoi diagrams.
Let H be a directed planar graph with real edge lengths and no negative-length cycles. Assume that all faces of H are triangles except, perhaps, a single face h. Let S be the set of vertices that lie on h. The vertices of S are called sites. Each site s has a weight w(s) associated with it. The additively weighted distance between a site s and a vertex v, denoted by d_w(s, v), is defined as w(s) plus the length of the shortest s-to-v path in H.
Definition 1.
The additively weighted Voronoi diagram of S, denoted VD(S, w), within H is a partition of the vertices of H into pairwise disjoint sets, one set for each site s ∈ S. The set Vor(s), which is called the Voronoi cell of s, contains all vertices in H that are closer (w.r.t. d_w(·, ·)) to s than to any other site in S.
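Additively weighted Voronoi cells can be computed with one multi-source Dijkstra run in which each site is seeded at its additive weight. This is a generic sketch (the function name and the heap-order tie-breaking are ours), not the construction used by the oracle, which never builds cells vertex by vertex.

```python
import heapq

def voronoi_cells(graph, site_weights):
    """Additively weighted graphical Voronoi diagram: seed each site s at
    weight w(s) and run one Dijkstra; each vertex is claimed by the site of
    smallest additively weighted distance. Ties fall to heap order here,
    whereas the paper perturbs weights to make the assignment unique."""
    dist, owner = {}, {}
    pq = [(w, s, s) for s, w in site_weights.items()]
    heapq.heapify(pq)
    while pq:
        d, s, u = heapq.heappop(pq)
        if u in dist:
            continue  # u already claimed by a closer site
        dist[u], owner[u] = d, s
        for v, w in graph.get(u, []):
            if v not in dist:
                heapq.heappush(pq, (d + w, s, v))
    return owner, dist
```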
There is a dual representation VD*(S, w) (or simply VD*) of a Voronoi diagram VD(S, w). Let H* be the planar dual of H. Let VD_0 be the subgraph of H* consisting of the duals of edges (u, v) of H such that u and v are in different Voronoi cells. Let VD_1 be the graph obtained from VD_0 by contracting edges incident to degree-2 vertices one after another until no degree-2 vertices remain. The vertices of VD_1 are called Voronoi vertices. A Voronoi vertex f* ≠ h* is dual to a face f such that the vertices of H incident to f belong to three different Voronoi cells. We call such a face trichromatic. Each Voronoi vertex f* stores, for each vertex u incident to f, the site s such that u ∈ Vor(s). Note that h* (i.e., the dual vertex corresponding to the face h to which all the sites are incident) is a Voronoi vertex. Each face of VD_1 corresponds to a cell Vor(s). Hence there are at most |S| faces in VD_1. By sparsity of planar graphs, and by the fact that the minimum degree of a vertex in VD_1 is 3, the complexity (i.e., the number of vertices, edges, and faces) of VD_1 is O(|S|). Finally, we define VD* to be the graph obtained from VD_1 after replacing the node h* by multiple copies, one for each occurrence of h* as an endpoint of an edge in VD_1. It was shown in [22] that VD* is a forest, and that, if all vertices of h are sites and the additive weights are such that each site is in its own nonempty Voronoi cell, then VD* is a ternary tree. See Fig. 2 (also used in [22]) for an illustration.
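The degree-2 contraction that turns VD_0 into VD_1 can be sketched in isolation. The routine below splices degree-2 vertices out of an abstract adjacency structure, shrinking a long path down to its endpoints just as the contraction shrinks VD_0 down to a structure whose size depends only on the number of branching vertices; self-loops and some parallel-edge cases are glossed over in this sketch.

```python
def contract_degree2(adj):
    """Contract edges incident to degree-2 vertices (as in building VD_1
    from VD_0): splice each such vertex out, merging its incident edges."""
    adj = {u: list(vs) for u, vs in adj.items()}  # work on a copy
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            if u not in adj or len(adj[u]) != 2:
                continue
            a, b = adj[u]
            if a == u or b == u:
                continue  # skip self-loops in this sketch
            adj[a] = [x for x in adj[a] if x != u] + [b]
            adj[b] = [x for x in adj[b] if x != u] + [a]
            del adj[u]
            changed = True
    return adj
```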
Point location in Voronoi diagrams.
A point location query for a vertex v in a Voronoi diagram VD asks for the site s of VD such that v ∈ Vor(s), and for the additive distance from s to v. Gawrychowski et al. [22] described a data structure supporting efficient point location, which is captured by the following theorem.
Theorem 2 ([22]).
Given an sized MSSP data structure for with sources , and after an time preprocessing of , point location queries can be answered in time .
We now describe this data structure. The data structure is essentially the same as in [22], but the presentation is a bit more modular. We will later adapt the implementation in Section 4.
Recall that H is triangulated (except the face h). For technical reasons, for each face f of H, we embed three new vertices inside f, one per vertex of f, and add a directed zero-length edge from each vertex of f to its embedded copy. The main idea is as follows. In order to find the Voronoi cell to which a query vertex v belongs, it suffices to identify an edge e* of VD* that is adjacent to the cell containing v. Given e* we can simply check which of its two adjacent cells contains v by comparing the distances from the corresponding two sites to v. The point location structure is based on a centroid decomposition of the tree VD* into connected subtrees, and on the ability to determine which of the subtrees is the one that contains the desired edge e*.
The preprocessing consists of just computing a centroid decomposition of VD*. A centroid of a tree is a node c such that removing c and replacing it with copies, one for each edge incident to c, results in a set of trees, each with at most half of the edges. A centroid always exists in a tree with at least one edge. In every step of the centroid decomposition of VD*, we work with a connected subtree of VD*. Recall that there are no nodes of degree 2 in VD*. If there are no nodes of degree 3, then the subtree consists of a single edge of VD*, and the decomposition terminates. Otherwise, we choose a centroid c, and partition the subtree into the three subtrees obtained by splitting c into three copies, one for each edge incident to c. Clearly, the depth of the recursive decomposition is O(log |S|). The decomposition can be computed in time nearly linear in |S| and can be represented, in O(|S|) space, as a ternary tree which we call the centroid decomposition tree. Each non-leaf node of the centroid decomposition tree corresponds to a centroid vertex, which is stored explicitly. We will refer to nodes of the centroid decomposition tree by their associated centroid. Each node also corresponds implicitly to the subtree of VD* of which it is the centroid. The leaves of the centroid decomposition tree correspond to single edges of VD*, which are stored explicitly.
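A minimal sketch of centroid decomposition (using the common vertex-based centroid, where every remaining component has at most half the nodes, rather than the edge-splitting variant described above) shows why the recursion depth is logarithmic:

```python
def centroid_depth(adj, start):
    """Depth of the centroid decomposition of the tree `adj` (dict of
    adjacency lists). Splitting at a centroid at least halves every
    component, so the returned depth is O(log n)."""
    removed = set()

    def component(u):
        comp, stack = {u}, [u]
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y not in removed and y not in comp:
                    comp.add(y)
                    stack.append(y)
        return comp

    def centroid(comp):
        u = next(iter(comp))
        parent, order = {u: None}, [u]
        for x in order:                      # BFS; order grows as we iterate
            for y in adj[x]:
                if y in comp and y != parent[x]:
                    parent[y] = x
                    order.append(y)
        size = {x: 1 for x in order}
        for x in reversed(order[1:]):
            size[parent[x]] += size[x]
        n, v = len(comp), u
        while True:                          # walk toward the heavy subtree
            heavy = next((y for y in adj[v] if y in comp
                          and parent.get(y) == v and size[y] > n // 2), None)
            if heavy is None:
                return v
            v = heavy

    def rec(u):
        comp = component(u)
        c = centroid(comp)
        removed.add(c)
        return 1 + max((rec(y) for y in adj[c]
                        if y in comp and y not in removed), default=0)

    return rec(start)
```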
Point location queries for a vertex v in the Voronoi diagram VD are answered by employing the procedure PointLocate (Algorithm 1), which takes as input the Voronoi diagram, represented by the centroid decomposition of VD*, and the vertex v. This in turn calls the recursive procedure HandleCentroid (Algorithm 2), where the centroid argument is the root centroid in the centroid decomposition tree of VD*.
We now describe the procedure HandleCentroid. It gets as input the Voronoi diagram, represented by its centroid decomposition tree, the centroid node c in the centroid decomposition tree that should be processed, and the vertex v to be located. It returns the site s such that v ∈ Vor(s), and the additive distance from s to v. If c is a leaf of the centroid decomposition, then its corresponding subtree of VD* is the single edge e* we are looking for (Lines 1–6). Otherwise, c is a non-leaf node of the centroid decomposition tree, so it corresponds to a node in the tree VD*, which is also a vertex f* of the dual of H. Thus, f is a face of H. Let the vertices of f be y_1, y_2, and y_3. We obtain s_i, the site such that y_i ∈ Vor(s_i), from the representation of VD* (Line 7). (Recall that s_i ≠ s_j for i ≠ j since f is a trichromatic face.) Next, for each i, we retrieve the additive distance from s_i to v. Let s be the site among them with minimum additive distance to v, and let y be the corresponding vertex of f.
If v is an ancestor of the node ŷ in the shortest path tree rooted at s, then v ∈ Vor(s) and we are done. Otherwise, handling c consists of finding the child of c in the centroid decomposition tree whose corresponding subtree contains e*. It is shown in [22] that identifying this child amounts to determining a certain left/right relationship between v and ŷ. (Recall that ŷ is the artificial vertex embedded inside the face f with an incoming 0-length edge from y.) We next make this notion more precise.
Definition 3.
For a tree T and two nodes u and w of T, such that neither is an ancestor of the other, we say that u is left of w if the preorder number of u is smaller than the preorder number of w. Otherwise, we say that u is right of w.
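Definition 3 reduces to comparing preorder (and subtree-exit) numbers, which a single DFS provides. A sketch, with our own function names:

```python
def preorder_numbers(adj, root):
    """Assign preorder entry and subtree-exit numbers by an iterative DFS."""
    pre, post, time = {}, {}, 0
    stack = [(root, None, False)]
    while stack:
        u, p, done = stack.pop()
        if done:
            post[u] = time          # all of u's subtree has been numbered
            continue
        pre[u] = time
        time += 1
        stack.append((u, p, True))
        for v in reversed(adj[u]):  # reversed so children pop in list order
            if v != p:
                stack.append((v, u, False))
    return pre, post

def relation(pre, post, u, w):
    """Classify u relative to w as in Definition 3:
    'ancestor', 'descendant', 'left', or 'right'."""
    if pre[u] <= pre[w] and post[w] <= post[u]:
        return 'ancestor'
    if pre[w] <= pre[u] and post[u] <= post[w]:
        return 'descendant'
    return 'left' if pre[u] < pre[w] else 'right'
```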
It is shown in [22, Section 4] that, given the left/right/ancestor relationship of v and ŷ, one can determine, in constant time, the child of c in the centroid decomposition tree containing e*. We call the procedure that does this NextCentroid. Having found the next centroid in Line 15, HandleCentroid moves on to handle it recursively.
3 An space query oracle
We state the following result from [22] that we use in the oracle presented in this section.
Theorem 4 ([22]).
For a planar graph of size , there is an sized data structure that answers distance queries in time .
For clarity of presentation, we first describe our oracle under the assumption that the boundary vertices of each piece in the division of the graph lie on a single hole and that each such hole is a simple cycle. Multiple holes and nonsimple cycles do not pose any significant complications; we explain how to treat pieces with multiple holes that are not necessarily simple cycles, separately.
Data Structure.
We obtain an r-division of G. The data structure consists of the following for each piece P of the division:

The space, querytime distance oracle of Theorem 4. These occupy space in total.

Two MSSP data structures, one for P and one for P^out, both with sources the nodes of ∂P. The MSSP data structure for P requires O(r log r) space, while the one for P^out requires O(n log n) space. The total space required for the MSSP data structures is O((n²/r) log n), since there are O(n/r) pieces.

For each node u of P:

VD_u[P], the dual representation of the Voronoi diagram for P, with sites the nodes of ∂P and additive weights the distances from u to these nodes in G;

VD_u[P^out], the dual representation of the Voronoi diagram for P^out, with sites the nodes of ∂P and additive weights the distances from u to these nodes in G.
The representation of each Voronoi diagram occupies O(√r) space and hence, since each vertex belongs to a constant number of pieces, all Voronoi diagrams require O(n√r) space.

Query.
We obtain a piece P of the division that contains u. Let us first suppose that v ∈ P. We have to consider both the case that the shortest u-to-v path crosses ∂P and the case that it does not. If it does cross, we retrieve this distance by performing a point location query for v in the Voronoi diagram VD_u[P]. If the shortest u-to-v path does not cross ∂P, the path lies entirely within P. We thus retrieve the distance by querying the exact distance oracle of Theorem 4 stored for P. The answer is the minimum of the two returned distances. The time required is given by Theorems 4 and 2. Else, v ∉ P and the shortest path from u to v must cross ∂P. The answer can thus be obtained by a point location query for v in the Voronoi diagram VD_u[P^out], within the time bound of Theorem 2. The pseudocode of the query algorithm is presented below as procedure SimpleDist (Algorithm 3).
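The two-case structure of the query can be mimicked with brute-force distances. In this sketch both the internal oracle and the point location query are replaced by explicit Dijkstra computations, so it only illustrates the correctness of the decomposition, not the efficiency; all names are ours.

```python
import heapq

def dijkstra(graph, s):
    dist, pq = {s: 0}, [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def query(u, v, boundary, graph, piece_graph):
    """Either the shortest u-to-v path stays inside the piece (first term),
    or it crosses the boundary at some last boundary vertex b (second term).
    The real oracle replaces the first term by an internal oracle query and
    the minimization by a single Voronoi point location query."""
    inside = dijkstra(piece_graph, u).get(v, float('inf'))
    du = dijkstra(graph, u)
    crossing = min((du.get(b, float('inf'))
                    + dijkstra(graph, b).get(v, float('inf'))
                    for b in boundary), default=float('inf'))
    return min(inside, crossing)
```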
Dealing with holes.
The data structure has to be modified as follows.

For each hole h of P, two MSSP data structures, one for P and one for out_h(P), both with sources the nodes of ∂P that lie on h.

For each node u of P, for each hole h of P:

VD_u[P, h], the dual representation of the Voronoi diagram for P, with sites the nodes of ∂P that lie on h and additive weights the distances from u to these nodes in G;

VD_u[out_h(P)], the dual representation of the Voronoi diagram for out_h(P), with sites the nodes of ∂P that lie on h and additive weights the distances from u to these nodes in G.

As for the query, if v ∈ P we have to perform a point location query in VD_u[P, h] for each hole h of P. Else, v ∉ P and we have to perform a point location query in VD_u[out_h(P)] for the hole h of P such that v lies in out_h(P). We can afford to store the information required to identify this hole explicitly in balanced search trees.
We thus obtain the following result.
Theorem 5.
For a planar graph of size , there is an sized data structure that answers distance queries in time .
4 An oracle with almost optimal tradeoffs
In this section we describe how to obtain an oracle with almost optimal space/query-time tradeoffs. Consider the oracle of the previous section. The size of the representation we store for each Voronoi diagram is proportional to the number of sites, while the size of an MSSP data structure is roughly proportional to the size of the graph in which the Voronoi diagram is defined. Thus, the MSSP data structures that we store for the outsides of the pieces of the r-division are the reason that the oracle of the previous section requires superlinear space. Storing the Voronoi diagrams for the outside of each piece, however, is not a problem. For instance, if we could somehow afford to store the MSSP data structures for P and P^out of each piece P of an r-division using very little space, then plugging them into the data structure from the previous section would yield an oracle with small space and polylogarithmic query time.
We cannot hope to have such a compact MSSP representation. However, we can use recursion to get around this difficulty. We compute a recursive division, represented by a recursive r-division tree T. We store, for each piece P, the Voronoi diagrams for P^out. However, instead of storing the costly MSSP for the entire P^out, we store the MSSP data structure (and some additional information) just for the portion of P^out that belongs to the parent P′ of P in T. Roughly speaking, when we need to perform point location on a vertex of P^out that belongs to P′, we can use this MSSP information. When we need to perform point location on a vertex of P^out that does not belong to P′ (i.e., it is also in P′^out), we recursively invoke the point location mechanism for P′.
We next describe the details. For clarity of presentation, we assume that the boundary vertices of each piece lie on a single hole which is a simple cycle. We later explain how to remove these assumptions. In what follows, if a vertex in a Voronoi diagram can be assigned to more than one Voronoi cell, we assign it to the Voronoi cell of the site with largest additive weight. In other words, since we define the additive weights as distances from some vertex u, and since shortest paths are unique, we assign each vertex to the Voronoi cell of the last site on the shortest path from u to that vertex. In particular, this implies that there are no empty Voronoi cells, as every site belongs to its own cell. Thus VD* is a ternary tree (see [22]). We can make such an assignment by perturbing the additive weights to slightly favor sites with larger distances from u at the time the Voronoi diagram is constructed.
The data structure.
Consider a recursive r-division of G, with parameters to be specified later. Recall that our convention is that the coarsest division consists of G itself. For convenience, we define each vertex of G to be a boundary vertex of a singleton piece at the lowest level of the recursive division. Denote the set of pieces of the r_i-division by R_i. Let T denote the tree representing this recursive division (each singleton piece is attached as the child of an arbitrary lowest-level piece that contains its vertex).
We will handle distance queries between vertices that have the same parent in T separately, by storing these distances explicitly (this takes O(n) space because pieces at the lowest level have constant size).
The oracle consists of the following for each i, for each piece P ∈ R_i whose parent in T is P′:

If , two MSSP data structures for , with sources and , respectively.

If , for each boundary vertex of :


, the dual representation of the Voronoi diagram for with sites the nodes of , and additive weights the distances from to these nodes in ;

, the dual representation of the Voronoi diagram for with sites the nodes of , and additive weights the distances from to these nodes in ;

the coarse tree, which is the tree obtained from the shortest path tree rooted at u (the fine tree) by contracting any edge incident to a vertex not in ∂P′. Note that the left/right/ancestor relationship between vertices of ∂P′ in the coarse tree is the same as in the fine tree. Also note that each (coarse) edge of the coarse tree originates from a contracted path in the fine tree. We preprocess the coarse tree in time proportional to its size to allow for constant-time lowest common ancestor (LCA) and level ancestor queries [3, 4]. (An LCA query takes as input two nodes of a rooted tree and returns the deepest node of the tree that is an ancestor of both. A level ancestor query takes as input a node at some depth and an integer k, and returns the ancestor of that node at depth k.) In addition, for every (coarse) edge of the coarse tree, we store the first and last edges of the underlying path in the fine tree. We also store the preorder numbers of the vertices of the coarse tree.
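The LCA and level ancestor queries on the coarse tree can be sketched with standard binary lifting; this gives O(log n) queries rather than the O(1) of [3, 4], and `TreeAncestry` is our name for the sketch.

```python
class TreeAncestry:
    """Binary-lifting sketch of LCA and level ancestor queries on a rooted
    tree. Queries cost O(log n) here; [3, 4] achieve O(1) queries after
    linear-time preprocessing."""
    def __init__(self, parent, root):
        # `parent` maps each non-root node to its parent
        nodes = set(parent) | {root}
        self.depth = {root: 0}
        def d(u):
            if u not in self.depth:
                self.depth[u] = d(parent[u]) + 1
            return self.depth[u]
        for u in nodes:
            d(u)
        self.LOG = max(1, max(self.depth.values()).bit_length())
        # up[j][u] = the 2^j-th ancestor of u (the root lifts to itself)
        self.up = [{u: (parent[u] if u != root else root) for u in nodes}]
        for _ in range(1, self.LOG):
            prev = self.up[-1]
            self.up.append({u: prev[prev[u]] for u in nodes})

    def level_ancestor(self, u, target_depth):
        k = self.depth[u] - target_depth
        for j in range(self.LOG):
            if (k >> j) & 1:
                u = self.up[j][u]
        return u

    def lca(self, u, v):
        if self.depth[u] > self.depth[v]:
            u, v = v, u
        v = self.level_ancestor(v, self.depth[u])
        if u == v:
            return u
        for j in reversed(range(self.LOG)):
            if self.up[j][u] != self.up[j][v]:
                u, v = self.up[j][u], self.up[j][v]
        return self.up[0][u]
```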

Space. The space required to store the described data structure is . Part 1 of the data structure: At the division, we have pieces and for each of them, we store two MSSP data structures, each of size . Thus, the total space required for the MSSP data structures is . Part 2 of the data structure: At the division, we store, for each of the boundary nodes, the representation of two Voronoi diagrams and a coarse tree, each of size . The space for this part is thus .
Query.
Upon a query for the distance between vertices u and v in G, we pick the singleton piece {u} and call the procedure Dist presented below. In what follows we denote by P_i the ancestor of P in T that belongs to the r_i-division.
The procedure Dist (Algorithm 4) gets as input a piece P, a source vertex u and a destination vertex v. The output of Dist is a tuple (d, σ), where d is the distance from u to v in G and σ is a sequence of certain boundary vertices of the recursive division that occur along the recursive query. Let j be the smallest index such that v ∈ P_j. For every i ≥ j, the i'th element of the list σ is the last boundary vertex on the shortest u-to-v path that belongs to the boundary of P_i. As we shall see, the list σ enables us to obtain the information needed to determine the required left/right/ancestor relationships during the recursion. Dist works as follows.

If v belongs to the parent piece of P, Dist computes the required distance by taking the minimum of the distance from u to v returned by a query to the stored MSSP data structure and the distance returned by a (vanilla) point location query for v using the procedure PointLocate. The first query covers the case where the shortest path from u to v lies entirely within the parent piece, while the second one covers the complementary case. The time required in this case is that of one MSSP query and one point location query.

Otherwise, Dist performs a recursive point location query for v by calling the procedure ModifiedPointLocate.
Since Dist returns a list of boundary vertices as well as the distance, PointLocate and HandleCentroid must pass and augment these lists as well. The pseudocode for ModifiedPointLocate is identical to that of PointLocate, except that it calls ModifiedHandleCentroid instead of HandleCentroid; see Algorithm 5. In what follows, when any of the three procedures is called with respect to a piece of the r_i-division, we refer to this as an invocation of the procedure at level i.
The pseudocode of ModifiedHandleCentroid (Algorithm 6) is similar to that of the procedure HandleCentroid, except that when the site s such that v ∈ Vor(s) is found, s is prepended to the list. A more significant change in ModifiedHandleCentroid stems from the fact that, since we no longer have an MSSP data structure for all of P^out, we use recursive calls to Dist to obtain the distances from the sites to v in Lines 4 and 9. We highlight these changes in red in the pseudocode provided below (Algorithm 6). We next discuss how to determine the required left/right/ancestor relationships (Line 11) in the absence of MSSP information for the entire P^out.
Left/right/ancestor relationships in .
Let s be the site, among the three sites corresponding to a centroid c, that is closest to v with respect to the additive distances (see Lines 7–10 of Algorithm 6). Recall that y is the vertex of the centroid face that belongs to Vor(s), and that ŷ is the artificial vertex connected to y and embedded inside the face. In Line 11 of ModifiedHandleCentroid we have to determine whether v is an ancestor of ŷ in the shortest path tree rooted at s, and if not, whether the s-to-v path is left or right of the s-to-ŷ path. To avoid clutter, we omit subscripts in the following and denote by σ the sequence of boundary vertices returned by the recursion. To infer the relationship between the two paths we use the sites (boundary vertices) stored in the list σ. We prepend s to σ, and, if v is not already the last element of σ, we append v to σ and use a flag to denote that v was appended.
To be able to compare the s-to-v path with the s-to-ŷ path, we perform another recursive call to Dist (this call is implicit in Line 11). Let σ′ be the list of sites returned by this call. As above, we prepend s to σ′, and append ŷ if it is not already the last element. The intuition is that the lists σ and σ′ are coarse representations of the s-to-v and s-to-ŷ shortest paths, and that we can use these coarse representations to roughly locate the vertex at which the two paths diverge. The left/right relationship between the two paths is determined solely by the left/right relationship between the two paths at that vertex (the divergence point). We can use the local coarse tree information or the local MSSP information to infer this relationship.
Recall that s is a boundary vertex of a piece at some level of the recursive division. More generally, for any i, the i'th element of σ is a boundary vertex of P_i (except, possibly, when it was appended manually). To avoid the shift between the index of a site in the list and the level of the corresponding piece in the recursive division, we prepend empty elements to both lists, so that now, for any i, the i'th element of σ is a boundary vertex of P_i. Let k be the largest integer such that the k'th elements of σ and σ′ coincide. Note that k exists because s is the first vertex in both σ and σ′.
Observation 6.
The restriction of the shortest path from to in to the nodes of is identical to the path from to in .
We next analyze the different cases that might arise and describe how to correctly infer the left/right relationship in each case. An illustration is provided as Fig. 9.

k = |σ| or k = |σ′|. This corresponds to the case that in at least one of the lists there are no boundary vertices after the divergence point. We have a few cases depending on whether v and ŷ are in P_k or not.

If both v and ŷ are in P_k, then it must be that v and ŷ were appended to the sequences manually. In this case we can query the MSSP data structure for P_k with sources the boundary vertices of P_k to determine the left/right/ancestor relationship.

If neither v nor ŷ is in P_k, we can use the MSSP data structure stored for the outside of P_k with sources the nodes of ∂P_k. Let us assume without loss of generality that k = |σ|. We then define v′ to be v if v appears in the coarse tree, or the child of the root of the coarse tree that is an ancestor of v otherwise. (In the latter case we can compute v′ in constant time with a level ancestor query.) We then query the MSSP data structure for the relation between v′ and ŷ. Note that the relation between the s-to-v and s-to-ŷ paths is the same as the relation between the s-to-v′ and s-to-ŷ paths.

Else, one of v and ŷ is in P_k and the other is not. Let us assume without loss of generality that v is the one not in P_k. We can infer the left/right relation by looking at the circular order of the following edges: (i) the last fine edge on the root-to-v′ path, (ii) the first edge on the path to v, and (iii) the first edge on the path to ŷ, where v′ is defined as in Case 1(b). Edge (i) is stored with the coarse tree (note that it exists since v ∉ P_k). Edge (ii) can be retrieved from the MSSP data structure for the outside of P_k with sources the boundary vertices of P_k, and edge (iii) can be retrieved from the MSSP data structure for P_k with sources the boundary vertices of P_k.


. We first compute the LCA of and in .

If neither of is an ancestor of the other in we are done by utilising the preorder numbers stored in .

Else, is one of . We can assume without loss of generality that . Using a level ancestor query we find the child of in that is a (not necessarily strict) ancestor of . The to shortest path in is internally disjoint from ; i.e. it starts and ends at
