1 Introduction
Computing shortest paths is one of the most well-studied algorithmic problems. In the data structure version of the problem, the aim is to compactly store information about a graph such that the distance (or the shortest path) between any queried pair of vertices can be retrieved efficiently. Data structures supporting distance queries are called distance oracles. The two main measures of efficiency of a distance oracle are the space it occupies and the time it requires to answer a distance query. Another quantity of interest is the time required to construct the oracle.
In recent decades researchers have investigated the shortest path problem in graphs subject to failures or, more broadly, to changes. One such variant is the replacement paths problem. In this problem we are given a graph G and vertices s and t. The goal is to report the s-to-t distance for each possible failure of a single edge along the shortest s-to-t path. Another variant is that of constructing a distance oracle that answers u-to-v distance queries subject to edge or vertex failures (u, v and the set of failures are given at query time). Perhaps the most general of these variants is the fully dynamic distance oracle: a data structure that supports distance queries as well as updates to the graph, such as changes to edge lengths, edge insertions or deletions, and vertex insertions or deletions.
One obvious but important application of handling failures is in geographical routing. Further motivation for studying this problem originates from Vickrey pricing in networks [32, 22]; see [11] for a concise discussion of the relation between the problems. A long-studied generalization of the shortest path problem is the k shortest paths problem, in which not one but several shortest paths must be produced between a pair of vertices. This problem reduces to running k executions of a replacement paths algorithm, and has many applications itself [15].
In this paper we focus on these problems, and in particular on handling vertex failures in planar graphs. Observe that edge failures easily reduce to vertex failures. Indeed, replace each edge uv of G with a new dummy vertex x and appropriately weighted edges ux and xv; the failure of edge uv in G then corresponds to the failure of vertex x in the new graph. Note that this transformation does not depend on planarity. In sparse graphs, such as planar graphs, this transformation only increases the number of vertices by a constant factor. Also note that there is no such obvious reduction in the other direction that preserves planarity. In general graphs, one can replace each vertex v by two vertices v_in and v_out, assign to v_in (resp. v_out) all the edges incoming to (resp. outgoing from) v, and add a 0-length directed edge from v_in to v_out. The failure of vertex v in the original graph then corresponds to the failure of edge (v_in, v_out) in the new graph. However, this transformation does not preserve planarity.
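The edge-to-vertex reduction just described can be sketched as follows; the graph representation and function name are illustrative, not from the paper.

```python
def edges_to_vertex_failures(edges):
    """Replace each weighted edge (u, v, w) with a dummy vertex so that
    the failure of edge (u, v) corresponds to the failure of its dummy.
    The edge weight is placed on the first half; the second half gets 0."""
    new_edges = []
    dummy_of = {}  # maps an original edge to its dummy vertex
    for i, (u, v, w) in enumerate(edges):
        d = ("dummy", i)
        dummy_of[(u, v)] = d
        new_edges.append((u, d, w))
        new_edges.append((d, v, 0))
    return new_edges, dummy_of
```

The number of vertices grows by one per edge, so for sparse (e.g. planar) graphs the size increase is a constant factor, as noted above.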
1.1 Related Work
General Graphs.
Demetrescu et al. presented an O(n² log n)-size oracle answering single-failure distance queries in constant time [11]. Bernstein and Karger improved the construction time in [5]. Interestingly, Duan and Pettie, building upon this work, showed an O(n² log³ n)-size oracle that can report distances subject to two failures in O(log n) time [13]. Based on this oracle, they then easily obtain an oracle answering distance queries subject to any number of failures, with space and query time growing with the number of failures. Oracles that require less space for more than two failures have been proposed, such as the one presented in [33], but at the expense of significantly higher query time. Such oracles are unsatisfactory for planar graphs, where single-source shortest paths can be computed in linear or nearly linear time.
Planar Graphs.
Exact (failure-free) distance oracles for planar graphs have been studied extensively over the past three decades [12, 3, 9, 17, 30, 6, 10, 21]. The known space vs. query-time tradeoffs have been significantly improved very recently [21, 10]. The currently best known tradeoff is an oracle of size S that answers queries in time Õ(n^{3/2}/S) for any choice of S [21]. Note that all known oracles with nearly linear (i.e. Õ(n)) space require query time polynomial in n.
As for handling failures, the replacement paths problem (i.e. when both the source and the destination are fixed in advance) can be solved in nearly linear time [14, 27, 34]. For the single-source, single-failure version of the problem (i.e. when the source vertex is fixed at construction time, and the query specifies just the target and a single failed vertex), Baswana et al. [4] presented an oracle with nearly linear size and construction time that answers queries in polylogarithmic time. They then showed an oracle for the general single-failure problem (i.e. when the source, destination, and failed vertex are all specified at query time) that trades space for query time. They concluded the paper by asking whether it is possible to design a compact distance oracle for a planar digraph which can handle multiple vertex failures. We answer this question in the affirmative.
Fakcharoenphol and Rao, in their seminal paper [17], presented distance oracles that require O(n^{2/3} log^{7/3} n) and O(n^{4/5} log^{13/5} n) amortized time per update and query for non-negative and arbitrary edge-weight updates, respectively. (Though this is not mentioned in [17], the query time can be made worst case rather than amortized by standard techniques.) The space required by these oracles is O(n log n). Klein presented a similar data structure in [25] for the case where edge-weight updates are non-negative, requiring time O(n^{2/3} log^{5/3} n). Klein's result was extended in [23], where, assuming non-negativity of edge-weight updates, the authors showed how to handle edge deletions and insertions (not violating the planarity of the embedding), and in [24], where the authors showed how to handle negative edge-weight updates, all within the same time complexity. In fact, these results can all be combined and, along with a recent slight improvement on the running time of FR-Dijkstra [20], they yield a dynamic distance oracle that can handle any of the aforementioned edge updates and queries in Õ(n^{2/3}) time. We further extend these results by showing that vertex deletions and insertions can also be handled within the same time complexity. The main challenge lies in handling vertices of high degree.
For the case where one is willing to settle for approximate distances, Abraham et al. [2] gave a labeling scheme for undirected planar graphs with polylogarithmic-size labels, such that a (1+ε)-approximation of the distance between vertices u and v in the presence of vertex or edge failures can be recovered from the labels of u and v and the labels of the failed vertices in polylogarithmic time. They then use this labeling scheme to devise a fully dynamic (1+ε)-approximate distance oracle with nearly linear size and sublinear query and update times. (A fully dynamic distance oracle supports arbitrary edge and vertex insertions and deletions, and length updates.)
On the lower bounds side, it is known that an exact dynamic oracle requiring amortized time O(n^{1/2−ε}), for any constant ε > 0, for both edge-weight updates and distance queries, would refute the APSP conjecture, i.e. the conjecture that there is no truly subcubic combinatorial algorithm for solving the all-pairs shortest paths problem in weighted (general) graphs [1].
1.2 Our Results and Techniques
In this work we focus on distance queries subject to vertex failures in planar graphs. Our results can be summarized as follows.

We show how to preprocess a directed weighted planar graph G in Õ(n) time into an oracle of size O(n log n) that, given a source vertex u, a target vertex v, and a set X of k failing vertices, reports the length of a shortest u-to-v path in G ∖ X in Õ(√(kn)) time. See Lemma 9.

For at most k allowed failures, and for any choice of a tradeoff parameter r, we show how to construct an oracle whose size decreases and whose query time increases as r grows. See Theorem 11. For a single failure, this improves over the previously best known tradeoff of Baswana et al. [4] by polynomial factors in part of the range of the tradeoff. To the best of our knowledge, this is the first such tradeoff for more than one failure. See Fig. 3.

We extend the exact dynamic distance oracles mentioned in the previous section to also handle vertex insertions and deletions without changing their space and time bounds.
Our nearly-linear-space oracle that reports distances in the presence of failures in Õ(√(kn)) time is obtained by adapting a technique of Fakcharoenphol and Rao [17]. In a nutshell, a planar graph can be recursively decomposed using small cycle separators, such that the boundary of each piece in the decomposition (i.e. the vertices of a piece that also belong to other pieces in the decomposition) is a cycle with relatively few vertices. Instead of working with the given planar graph, one computes distances over its dense distance graph (DDG): a non-planar graph on the boundary vertices of the pieces which captures the distances between boundary vertices within each of the underlying pieces. Fakcharoenphol and Rao developed an efficient implementation of Dijkstra's algorithm on the DDG. This algorithm, nicknamed FR-Dijkstra, runs in time roughly proportional to the number of vertices of the DDG (i.e. boundary vertices), rather than in time proportional to the number of vertices in the planar graph. Roughly speaking, Fakcharoenphol and Rao show that to obtain distances from u to v with edge failures, it (roughly) suffices to consider just the boundary vertices of the pieces in the recursive decomposition that contain failed edges. Since pieces at the same level of the recursive decomposition are edge-disjoint, the total number of boundary vertices in all the required pieces is small. This oracle, supporting distance queries subject to a batch of edge-cost updates, leads to their dynamic distance oracle.
The difficulty in handling vertex failures is that a high-degree vertex v may be a boundary vertex of many (possibly Ω(n)) pieces in the recursive decomposition. Then, if v fails, one would have to consider too many pieces and too many boundary vertices. Standard techniques such as degree reduction by vertex splitting are inappropriate because when a vertex fails all its copies fail. To overcome this difficulty we define a variant of the dense distance graph which, instead of capturing shortest path distances between boundary vertices within a piece, only captures distances of paths that are internally disjoint from the boundary. We show that such distances can be computed efficiently, and that it then suffices to treat specially in the FR-Dijkstra computation (roughly) only pieces that contain a failed vertex as an internal, rather than a boundary, vertex. This leads to our nearly-linear-space oracle reporting distances in the presence of failures (item 1 above). See Section 3. Plugging the same technique into the existing dynamic distance oracles extends them to support vertex deletions (item 3 above). See Section 6.
Our main result, the space vs. query-time tradeoff (item 2 above), is obtained by a nontrivial combination of this technique with ideas from the recent static distance oracle presented in [21]; namely, by a combination of FR-Dijkstra on our variant of the DDG with r-divisions, external DDGs, and efficient point location in Voronoi diagrams. See Sections 4 and 5.
2 Preliminaries
In this section we review the main techniques required for describing our results. Throughout the paper we consider a weighted directed planar graph G, embedded in the plane. (We use the terms weight and length for edges and paths interchangeably throughout the paper.) We use n to denote the number of vertices in G. Since planar graphs are sparse, the number of edges is O(n) as well. For an edge uv, we say that u is its tail and v is its head. We denote by d_H(u, v) the distance from u to v in a graph H; if the reference graph is clear from the context we may omit the subscript. We assume that the input graph has no negative-length cycles. If it does, we can detect this in nearly linear time by computing single-source shortest paths from any vertex [31]. In the same time complexity, we can transform the graph in a standard way so that all edge weights are non-negative and shortest paths are preserved. We further assume that shortest paths are unique, as required for a result from [19] that we use; this can be ensured in nearly linear time by a deterministic perturbation of the edge weights [16]. Each original distance can be recovered from the corresponding distance in the transformed graph in constant time.
Separators and recursive decompositions in planar graphs.
Miller [28] showed how to compute a Jordan curve that intersects the graph at O(√n) vertices and separates it into two pieces with at most 2n/3 vertices each. Jordan curve separators can be used to recursively separate a planar graph until pieces have constant size. The authors of [26] show how to obtain a complete recursive decomposition tree T of G in O(n) time. T is a binary tree whose nodes correspond to subgraphs of G (pieces), with the root being all of G and the leaves being pieces of constant size. We identify each piece with the node representing it in T. We can thus abuse notation and write P ∈ T. An r-division [18] of a planar graph, for r ∈ [1, n], is a decomposition of the graph into O(n/r) pieces, each of size O(r), such that each piece has O(√r) boundary vertices, i.e. vertices incident to edges in other pieces. Another usually desired property of an r-division is that the boundary vertices lie on a constant number of faces of the piece (holes). For every r larger than some constant, an r-division with this property (i.e. few holes per piece) is represented in the decomposition tree T of [26]. Throughout the paper, to avoid confusion, we use "nodes" when referring to T and "vertices" when referring to G. We denote the boundary vertices of a piece P by ∂P. We refer to non-boundary vertices as internal.
Lemma 1 ([21]).
Each node in T corresponds to a piece P such that (i) each piece has O(1) holes, (ii) the number of vertices in a piece at depth ℓ in T is O(n/c₁^ℓ), for some constant c₁ > 1, (iii) the number of boundary vertices in a piece at depth ℓ in T is O(√n/c₂^ℓ), for some constant c₂ > 1.
We use the following well-known bounds (see e.g., [21]).
Proposition 2.
∑_{P∈T} |P| = O(n log n), and ∑_{P∈T} |∂P| = O(n).
We show the following bound that will be used in future proofs.
Proposition 3.
∑_{P∈T} |∂P|² = O(n log n).
Proof.
Let P₁, …, P_m be the pieces at the ℓ-th level of the decomposition. ∑ᵢ |Pᵢ| = O(n), since the pieces are edge-disjoint. We know by Lemma 1 that |∂Pᵢ| = O(√|Pᵢ|) for all i, and hence |∂Pᵢ|² = O(|Pᵢ|) for all i. It follows that ∑ᵢ |∂Pᵢ|² = O(n), and the claimed bound follows by summing over all O(log n) levels of T. ∎
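In symbols, the level-by-level argument of the proof reads (using the bound |∂P| = O(√|P|) implied by Lemma 1 and the edge-disjointness of pieces within a level):

```latex
\sum_{P \in T} |\partial P|^2
  \;=\; \sum_{\ell = 0}^{O(\log n)} \; \sum_{P \text{ at level } \ell} |\partial P|^2
  \;\le\; \sum_{\ell = 0}^{O(\log n)} \; \sum_{P \text{ at level } \ell} O(|P|)
  \;=\; \sum_{\ell = 0}^{O(\log n)} O(n)
  \;=\; O(n \log n).
```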
Dense distance graphs and FRDijkstra.
The dense distance graph of a piece P, denoted DDG(P), is a complete directed graph on the boundary vertices of P. Each edge (u, v) has weight d_P(u, v), equal to the length of the shortest u-to-v path in P. DDG(P) can be computed in Õ(|P| + |∂P|²) time using the multiple-source shortest paths (MSSP) algorithm [25, 7]. Over all pieces of the recursive decomposition this takes Õ(n) time in total and requires O(n log n) space by Proposition 2. We next give an interface, convenient for our purposes, for FR-Dijkstra [17], which is an efficient implementation of Dijkstra's algorithm on any union of DDGs. The algorithm exploits the fact that, due to planarity, certain submatrices of the adjacency matrix of a DDG satisfy the Monge property. (A matrix M satisfies the Monge property if, for all i < i′ and j < j′, M[i,j] + M[i′,j′] ≤ M[i,j′] + M[i′,j] [29].) The interface is specified in the following theorem, which was essentially proved in [17], with some additional components and details from [24, 31].
Theorem 4 ([17, 24, 31]).
A set of DDGs with N vertices in total (with multiplicities), each having at most M vertices, can be preprocessed in O(N log M) time and O(N) extra space in total. After this preprocessing, Dijkstra's algorithm can be run on the union of any subset of these DDGs with b vertices in total (with multiplicities) in O(b log² b) time, by relaxing edges in batches. Each such batch consists of edges that have the same tail.
The algorithm in the above theorem is called FR-Dijkstra. It is useful in computing distances in sublinear time, as demonstrated by the following lemma and corollary, which are a reformulation of ideas from [17] and are provided for completeness.
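As a small sanity check on the Monge property defined above, the condition can be verified for an explicit matrix by checking only adjacent rows and columns, which (by a standard telescoping argument) implies the condition for all index pairs. The matrices below are toy examples, not actual DDG submatrices.

```python
def is_monge(M):
    """Return True if M satisfies the Monge property:
    M[i][j] + M[i'][j'] <= M[i][j'] + M[i'][j] for i < i', j < j'.
    Checking adjacent row/column pairs suffices."""
    rows, cols = len(M), len(M[0])
    for i in range(rows - 1):
        for j in range(cols - 1):
            if M[i][j] + M[i + 1][j + 1] > M[i][j + 1] + M[i + 1][j]:
                return False
    return True
```

For instance, M[i][j] = (i − j)² is Monge, while its negation is not.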
Definition 1.
Let u be a vertex of G. A cone of u is the union of the following DDGs: (i) DDG(P_u), where P_u is a leaf piece in T containing u, with u considered a boundary vertex of P_u; (ii) for every (not necessarily strict) ancestor P of P_u, the DDG of the sibling of P.
Lemma 5.
Let v and w be two vertices in a cone of a vertex u. The v-to-w distance in G equals the v-to-w distance in this cone of u.
Proof.
Let A₁, …, A_m be the ancestors of P_u, ordered by decreasing depth in T (so A₁ = P_u and A_m is the root). Let Qᵢ be the sibling of Aᵢ in T. Let Cᵢ be DDG(P_u) ∪ DDG(Q₁) ∪ ⋯ ∪ DDG(Qᵢ). We will prove by induction that for any two vertices v, w of Cᵢ, the v-to-w distance in Aᵢ₊₁ equals the v-to-w distance in Cᵢ. This statement is trivially true for i = 0. Let us assume it is true for i − 1. Consider a v-to-w shortest path R in Aᵢ₊₁, where v and w are vertices of Cᵢ. Path R can be decomposed into maximal subpaths that are entirely contained in Aᵢ or Qᵢ and whose endpoints are in Cᵢ. For each such subpath we either have a path with the same length in Cᵢ₋₁, by the inductive assumption, or an edge of DDG(Qᵢ). This shows that the length of R is at least the v-to-w distance in Cᵢ. Since every edge of Cᵢ corresponds to some path in Aᵢ₊₁, the opposite inequality also holds, so the two quantities are equal. ∎
Corollary 6.
Let u and v be two distinct vertices in G, and let R be a shortest u-to-v path in G. If R is not fully contained in a single leaf piece, then we can compute the length of R by running FR-Dijkstra on the union of a cone of u and a cone of v. This takes O(√n log² n) time.
Voronoi diagrams with point location.
Let H be a directed planar graph with real edge lengths and no negative-length cycles. Let S be a set of vertices that lie on a single face of H; we call the elements of S sites. Each site s ∈ S has a weight ω(s) associated with it. The additively weighted distance between a site s and a vertex v, denoted by d_ω(s, v), is defined as ω(s) plus the length of the s-to-v shortest path in H.
Definition 2.
The additively weighted Voronoi diagram of S, denoted VD(S, ω), within H is a partition of the vertices of H into pairwise disjoint sets, one set Vor(s) for each site s ∈ S. The set Vor(s), which is called the Voronoi cell of s, contains all vertices of H that are closer (with respect to d_ω(·, ·)) to s than to any other site in S (assuming that the distances are unique). There is a dual representation of a Voronoi diagram as a planar graph with O(|S|) vertices and edges.
Theorem 7 ([21, 19]).
Given subsets S₁, …, S_m of ∂P, and additive weights ωᵢ for each Sᵢ, we can construct a data structure that supports the following (point location) queries: given i and a vertex v of P, report the site s in the additively weighted Voronoi diagram VD(Sᵢ, ωᵢ) such that v belongs to Vor(s), together with the distance d_{ωᵢ}(s, v).
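For intuition, the partition of Definition 2 can be computed by brute force over all vertices; the oracle instead uses the efficient point-location structure of Theorem 7. The function and parameter names below are illustrative.

```python
def voronoi_cells(dist, sites, weights, vertices):
    """Assign each vertex v to the site s minimizing the additively
    weighted distance weights[s] + dist(s, v); `dist` is any function
    returning the s-to-v shortest-path length."""
    cell = {}
    for v in vertices:
        cell[v] = min(sites, key=lambda s: weights[s] + dist(s, v))
    return cell
```

On a path 0–1–2–3–4 with sites 0 and 4 and zero additive weights, vertex 1 falls in the cell of 0 and vertex 3 in the cell of 4.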
3 Near-linear space data structure for any number of failures
In this section we show how to adapt the approach of [17] for dynamic distance oracles supporting cumulative edge changes to support distance queries with failed vertices. The main technical challenge is in dealing with failures of high-degree vertices, since such vertices may belong to many pieces at each level of the decomposition. For example, think of a failure of the central vertex in a wheel graph, which belongs to all the pieces in the recursive decomposition. Note that standard degree-reduction techniques such as vertex splitting are not useful, because when a vertex fails all its copies fail. This is in contrast with the situation when dealing only with edge-weight updates, since each edge can be in at most one piece per level. We circumvent this by defining and employing the strictly internal dense distance graph. The main intuition is that strictly internal DDGs enable us to handle pieces that only contain failed boundary vertices, i.e. do not contain any internal vertex that fails. Then, only pieces that contain internal failed vertices are "problematic". Note, however, that a vertex is internal in at most one piece per level of the decomposition.
Definition 3.
The strictly internal dense distance graph of a piece P is a complete directed graph on the boundary vertices of P. An edge (u, v) has weight equal to the length of the shortest u-to-v path in P that is internally disjoint from ∂P.
The sole difference from the standard definition is that in our case paths are not allowed to go through boundary vertices other than their endpoints. Observe that the shortest path in P between two vertices of ∂P is still represented in the strictly internal DDG of P, just not necessarily by a single edge as in DDG(P). This establishes the following lemma.
Lemma 8.
For any piece P and any two boundary vertices u, v ∈ ∂P, the u-to-v distance in P equals the u-to-v distance in the strictly internal dense distance graph of P.
We now discuss how to efficiently compute the strictly internal DDG of a piece P. We construct a planar graph P⁺ by creating a copy of P and incrementing the weight of each edge (u, v) such that u ∈ ∂P by a sufficiently large number M, exceeding any finite distance in P. The distances from the boundary vertices in P⁺ can be computed in Õ(|P| + |∂P|²) time using MSSP [25, 7]. Observe that any u-to-v path in P⁺ that starts at u ∈ ∂P and is internally disjoint from ∂P has exactly one edge whose tail is in ∂P, so its length is at least M and less than 2M, while any u-to-v path that has an internal vertex in ∂P has length at least 2M. Therefore, if the u-to-v distance in P⁺ is less than 2M, then it equals M plus the length of the shortest u-to-v path in P that is internally disjoint from ∂P; we thus set the weight of edge (u, v) in the strictly internal DDG to that distance minus M, and otherwise to an effectively infinite value. This completes the description of the computation of the strictly internal DDG. Note that since M exceeds every finite distance, edge weights of at least M in the role of infinity will never be used by any shortest path. Also note that it follows directly from the definition of the Monge property that subtracting M from each entry of a Monge matrix preserves the Monge property. Therefore, we can use strictly internal DDGs in FR-Dijkstra (Theorem 4) instead of standard DDGs.
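The weight-shift trick above can be sketched as follows. For clarity the sketch runs plain Dijkstra from each boundary vertex rather than MSSP, and the graph representation and names are illustrative; M is assumed larger than any finite distance in the piece.

```python
import heapq

def strictly_internal_distances(adj, boundary, M):
    """adj: {v: [(w, length), ...]}. Add M to every edge whose tail is a
    boundary vertex; a boundary-to-boundary walk that is internally
    disjoint from the boundary then pays M exactly once, and any walk
    with an internal boundary vertex pays at least 2M."""
    shifted = {
        v: [(w, l + (M if v in boundary else 0)) for (w, l) in nbrs]
        for v, nbrs in adj.items()
    }
    result = {}
    for s in boundary:
        dist = {s: 0}
        pq = [(0, s)]
        while pq:
            d, v = heapq.heappop(pq)
            if d > dist.get(v, float("inf")):
                continue  # stale queue entry
            for w, l in shifted[v]:
                if d + l < dist.get(w, float("inf")):
                    dist[w] = d + l
                    heapq.heappush(pq, (d + l, w))
        # Keep only distances in [M, 2M): internally boundary-disjoint paths.
        result[s] = {t: dist[t] - M for t in boundary
                     if t != s and M <= dist.get(t, float("inf")) < 2 * M}
    return result
```

For example, with boundary {a, c, d}, the path a→b→c survives (b is internal and non-boundary), while a→d→c is discarded because it passes through the boundary vertex d internally.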
Preprocessing.
We compute a complete recursive decomposition tree T of G in O(n) time, as discussed in Section 2. We compute the strictly internal DDG of each non-leaf piece and preprocess it as in FR-Dijkstra. By Proposition 2, Theorem 4 and the above discussion, the time and space complexities are Õ(n) and O(n log n), respectively.
Query.
Upon query (u, v, X), for each w ∈ {u, v} ∪ X we arbitrarily choose a leaf piece P_w containing w, and run FR-Dijkstra on the union of the following DDGs, which we denote by D (inspect Fig. 4 for an illustration):

For each w ∈ {u, v}, the DDG of P_w with w regarded as a boundary vertex. This can be computed on the fly in constant time since the size of the leaf piece is constant.

For each w ∈ {u, v}, for each ancestor P of P_w (including P_w itself), the strictly internal DDG of the sibling of P, if that sibling does not contain any internal (i.e. non-boundary) vertex that is in X.

For each x ∈ X, the DDG of P_x. This can be computed on the fly in constant time since the size of the leaf piece is constant.

For each x ∈ X, for each ancestor P of P_x (including P_x itself), the strictly internal DDG of the sibling of P, if that sibling does not contain any internal vertex that is in X.
We can identify these DDGs by traversing the parent pointers from each P_w, for w ∈ {u, v} ∪ X, and marking all the nodes that have an internal failed vertex; this takes time proportional to the number of traversed nodes. We make one small but crucial change to FR-Dijkstra: we do not relax edges whose tail is a failed vertex. This guarantees that, although failed vertices might appear in the graph on which FR-Dijkstra is invoked, the u-to-v shortest path computed by FR-Dijkstra does not contain any failed vertices. We therefore obtain the following lemma.
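The collection of pieces described in items 1–4 above can be sketched as follows. The decomposition-tree interface (a Node with parent/children pointers and the set of vertices internal to its piece) is an assumption made for illustration, not the paper's actual representation.

```python
class Node:
    def __init__(self, name, internal=(), children=()):
        self.name = name
        self.internal = set(internal)  # vertices internal to this piece
        self.children = list(children)
        self.parent = None
        for c in self.children:
            c.parent = self

def pieces_for_query(leaf_of, sources, failed):
    """Collect the pieces whose DDGs FR-Dijkstra is run on: the chosen
    leaf of every vertex in sources + failed, plus, for every ancestor
    of such a leaf, its sibling -- unless the sibling contains an
    internal failed vertex (such pieces are handled recursively)."""
    blocked = set()  # nodes containing an internal failed vertex
    for x in failed:
        node = leaf_of[x]
        while node is not None:
            if x in node.internal:
                blocked.add(node)
            node = node.parent
    pieces = []
    for w in list(sources) + list(failed):
        node = leaf_of[w]
        pieces.append(node)  # leaf DDG, computed on the fly
        while node.parent is not None:
            for sib in node.parent.children:
                if sib is not node and sib not in blocked:
                    pieces.append(sib)
            node = node.parent
    return pieces
```

Note that pieces may appear with multiplicity, mirroring the "with multiplicities" accounting of Theorem 4.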
Lemma 9.
There exists a data structure of size O(n log n), which can be constructed in Õ(n) time, and can answer the following queries in Õ(√(|X|·n)) time: given vertices u and v, and a set X of failing vertices, report the length of a shortest u-to-v path in G that avoids the vertices of X.
Proof.
We have already discussed the space occupied by the oracle and the time required to build it. It remains to analyze the query algorithm.
Correctness. First, it is easy to see that no edge of any of the DDGs in D represents a path containing a vertex of X, unless that vertex is an endpoint of the edge. The latter case does not affect the correctness of the algorithm, since in FR-Dijkstra we do not relax edges whose tail is a failed vertex. Hence, the algorithm never computes a distance corresponding to a path going through a failed vertex.
It remains to show that the shortest u-to-v path in G ∖ X is represented in D. For this, by Corollary 6, it suffices to prove that for each piece P in the cone of u (and similarly in the cone of v), either the DDG of P belongs to D, or D contains enough information to reconstruct the distances in P subject to the failures during FR-Dijkstra. In the latter case we say that P is represented in D. Note that, for any piece P, P is represented in D if the DDGs of its two children in T are represented in D. (This follows by an argument identical to the one used in the proof of Lemma 5.) If P contains no internal failed vertex then its strictly internal DDG is in D by point 2 or 4 above. We next consider the case that P does contain some failed vertex as an internal vertex; then P is an ancestor of P_x for some x ∈ X. To show that P is represented in D, we prove that for any failed vertex x, the DDG of any non-root ancestor of P_x in T is represented in D.
We proceed by the minimal counterexample method. For any x ∈ X, the DDG of P_x is in D, since it is computed on the fly in point 3. Let A be the deepest node in T that is a strict ancestor of P_x for some x ∈ X and whose DDG is not represented in D. One of A's children must also be an ancestor of (or equal to) some P_x, and by the choice of A its DDG is represented in D. Let the other child of A be B. If B is an ancestor of (or equal to) some P_y with y ∈ X, then the DDG of B is also represented in D by the choice of A. Otherwise, B does not contain any internal failed vertex, and hence its strictly internal DDG is in D by point 2 or 4. In either case, the DDGs of both children of A are represented in D, so the DDG of A is also represented in D, a contradiction.
Time complexity. Let k = |X| + 2 and consider an (n/k)-division of G in T. The pieces of this division have O(√(kn)) boundary vertices in total, and this is known to also be an upper bound on the total number of boundary vertices (with multiplicities) of ancestors of pieces in this division (cf. the discussion after Corollary 5.1 in [21]).
Recall that we have chosen a leaf piece P_w for each vertex w ∈ {u, v} ∪ X. Each piece (other than the P_w's) whose DDG belongs to D is a sibling of an ancestor of some P_w. This implies that each w contributes the DDGs of at most two pieces per level of the decomposition. Let R_w be the ancestor of P_w that is in the division. For each w, we only need to bound the total number of boundary vertices of the pieces it contributes that are descendants of R_w, since we have already bounded the total size of the rest. We do so by applying Lemma 1 to the subtree of T rooted at each R_w. (The extra boundary vertices we start with do not alter the analysis of this lemma, as that many boundary vertices are anyway introduced by the first separation of R_w.) It yields a bound of O(√(n/k)) for each R_w. Summing over all k pieces we obtain the upper bound O(√(kn)).
FR-Dijkstra runs in time proportional, up to a polylogarithmic factor, to the total number of vertices of the DDGs in D, and hence the time complexity follows. ∎
Remark. By using existing techniques (cf. [24, Section 5.4]), we can report an actual shortest path in time proportional to the number of its edges, up to a factor depending polylogarithmically on n and linearly on the maximum degree of a vertex of G. (This remark also applies to the dynamic distance oracle presented in Section 6. However, it does not apply to the oracles presented in Section 4, where we use a different modification of DDGs for which we cannot afford to store the MSSP data structures that would allow us to return the actual shortest paths efficiently.)
4 Tradeoffs
In this section we describe a tradeoff between the size of the oracle and the query time. We first define another useful modification of dense distance graphs.
Definition 4.
The strictly external dense distance graph of pieces P₁, …, P_j is a complete directed graph on the union of the boundary vertices of the Pᵢ's. An edge (u, v) has weight equal to the length of the shortest u-to-v path in G that avoids the internal vertices of all of the Pᵢ's.
Strictly external DDGs can be preprocessed using Theorem 4, together with strictly internal DDGs, so that we can perform efficient Dijkstra computations on any union of strictly internal and strictly external DDGs.
The number of pieces in an r-division is at most cn/r for some constant c. For convenience, we define a parameter that encapsulates the dependency on the number k of failures, and use it throughout this section.
4.1 The case of a single failure
For ease of presentation we first describe an oracle that can handle just a single failure. We prove the following lemma, which is a restricted version of our main result, Theorem 11.
Lemma 10.
For any r ∈ [1, n], there exists a data structure, with size, construction time, and query time depending on n and r, that can answer the following queries: given vertices u, v, and x, report the length of a shortest u-to-v path that avoids x.
We first perform the precomputations of Section 3. We also obtain an r-division of G from T in O(n) time. Let us denote the pieces of this division by R₁, …, R_m.
Warm up.
We first show how to get an oracle with the desired query time for a single failure, using the approach of Section 3 but with more space. For each triplet of pieces in the r-division we store the strictly external DDG of the triplet. Given u, v, and x lying in pieces R_u, R_v, and R_x of the division, respectively, we consider the DDGs that allow us to represent the distances subject to the failure for each of u, v, x as in Section 3 (i.e. the DDGs in items 2 and 4 in Section 3 are only taken for ancestors that are descendants of the corresponding piece of the division). We then run FR-Dijkstra on these DDGs along with the strictly external DDG of the triplet (R_u, R_v, R_x), not relaxing edges whose tail is x if encountered.
Main Idea for reducing the space complexity.
Instead of storing information for triplets of pieces, we will store more information, but just for pairs. Given u, v, and x, we show how to compute the answer relying on the information stored for the pair of pieces R_u and R_x containing u and x, respectively. We first compute the distances from u to the boundary vertices of these pieces using FR-Dijkstra, as in the warm up above. We then identify an appropriate piece Q in T that contains v, and contains neither u nor x. Exploiting the fact that distances within Q remain unchanged when x fails, we employ Voronoi diagrams with point location for the piece Q, adapting ideas from [21].
Additional Preprocessing.
For each pair of pieces (R_a, R_b) of the r-division we compute and store the following:

The strictly external DDG of the pair (R_a, R_b).

Let C be a separator in the recursive decomposition, separating a piece P into two subpieces P′ and P″, such that P″ contains neither R_a nor R_b. For each boundary vertex b of R_a and of R_b, for each hole h of P″, we compute and store a Voronoi diagram with the point location data structure for P″, with sites the boundary vertices of P″ that lie on h, and additive weights the distances from b to these sites.
We now show that the space required is within the claimed bound. The space required for the preprocessed internal and external dense distance graphs follows from Theorem 4. We next analyze the space required for storing the Voronoi diagrams. We consider all pairs of pieces of the division, and for each boundary vertex of each such pair we store, in the worst case, a Voronoi diagram for each of the O(1) holes of each sibling of the nodes on the root-to-R_a and root-to-R_b paths in T. The total number of sites of all Voronoi diagrams we store for a pair of pieces can be upper bounded by noting that a piece at level ℓ of T has O(√n/c₂^ℓ) boundary vertices by Lemma 1. By Theorem 7, the space required to store a representation of a set of Voronoi diagrams, with the functionality allowing for efficient point location queries for a piece with sites a subset of its boundary vertices lying on a hole, is nearly linear in the total cardinality of these sets of sites. Summing over all holes of all pieces, and using Proposition 2, we obtain the bound on the total space required for all Voronoi diagrams.
We analyze the construction time in Section 5. The internal dense distance graphs can be computed in Õ(n) time. The external dense distance graphs and the additive weights can be computed within the claimed construction time; see Lemmas 13 and 12. We show in Lemma 14 that we can compute all required Voronoi diagrams in time nearly linear in the size of their representation described in Section 2.
Query.
If any two of u, v, x are in the same piece of the r-division, then we can use FR-Dijkstra taking into account just the two pieces of the division containing u, v, and x, similarly to the description in the warm up above. We therefore assume that no two of u, v, x are in the same piece of the division. We first retrieve a piece R_u of the division containing u (to support this, each vertex stores a pointer to some piece of the division that contains it). In the following we will need to check whether a vertex is in some particular piece of T. This can be done in logarithmic time by storing, for each piece in T, a binary search tree over the vertices in the piece. We then proceed as follows (inspect Fig. 7 for an illustration).

Following parent pointers from R_u in T, we find the highest ancestor P of R_u containing neither v nor x. Thus, the sibling P′ of P in T contains a vertex y ∈ {v, x}. We find a descendant of P′ that is in the r-division and contains y. We then find any piece of the division containing the other element of {v, x}. Note that, by the choice of P, this piece is not a descendant of P. Finding these pieces requires logarithmic time per level of T.

For each boundary vertex b computed in the previous step, for each hole h of Q, we perform a query to the Voronoi diagram stored for Q, h, and b, to get the distance from b to v in Q. The required distance is the minimum, over all b, of the distance from u to b plus the distance from b to v. Each Voronoi query takes polylogarithmic time.
We now argue the correctness of the query algorithm. Let R be a shortest u-to-v path that avoids x. Let b be the last vertex of R that belongs to the boundary of Q, and let h be the hole of Q on which b lies. The distance from u to b is computed by the FR-Dijkstra computation, while the distance from b to v in Q is obtained from the query to the Voronoi diagram stored for Q, h, and b. It is easy to see that we do not obtain any distance that does not correspond to an actual path avoiding x, and hence the correctness of the query algorithm follows.
4.2 Handling multiple failures
The warm-up approach of Section 4.1 can be trivially generalized to handle k failed vertices by considering (k+2)-tuples of pieces of the r-division. (We consider the elements of tuples to be unordered throughout.) This, however, increases the space substantially. We reduce the space by generalizing the main algorithm of Section 4.1.
Preprocessing.

We perform the precomputations of Section 3.

For each (k+1)-tuple of pieces of the r-division we compute and store the following:

The strictly external DDG of the tuple.

Let C be a separator in the recursive decomposition, separating a piece P into P′ and P″, such that P′ contains some piece of the tuple and none of the other pieces of the tuple is a subgraph of P″. For each boundary vertex b of the pieces of the tuple, for each hole h of P″, we store a Voronoi diagram with the point location data structure for P″, with sites the boundary vertices of P″ that lie on h, and additive weights the distances from b to these sites.

Query.
We first retrieve a piece R_u of the division containing u. We can again assume that no two elements of {u, v} ∪ X are in the same piece of the division, since otherwise we can answer the query by running FR-Dijkstra on the strictly external DDG of an appropriate tuple and the DDGs we add for each of the pieces in the tuple, following the algorithm of Section 3.
The algorithm is then essentially the same as that of Section 4.1.

We find the highest ancestor of R_u in T that does not contain v or any element of X, and retrieve a descendant of its sibling in the division that does contain such an element. We then identify a piece in the division for each remaining element. This requires logarithmic time per level of T.

We run FR-Dijkstra on the relevant DDGs, whose total size is bounded as in the proof of Lemma 9.

We perform point location queries to the stored Voronoi diagrams, each requiring polylogarithmic time.
We hence obtain the general tradeoff theorem.
Theorem 11.
For any integer k ≥ 1 and for any integer r ∈ [1, n], there exists a data structure, with size, construction time, and query time depending on n, r, and k, that can answer the following queries: given vertices u and v and a set X of at most k failing vertices, report the length of a shortest u-to-v path that avoids the vertices of X.
Remark. Our distance oracle can handle any number of failures that lie in at most k pieces of the r-division, within the same bounds. This follows by the same analysis as in the proof of Lemma 9: the DDGs we add for a piece containing multiple failures have the same total size as those added for a piece containing a single failure.