Geometric Algorithms with Limited Workspace: A Survey

06/15/2018
by   Bahareh Banyassady, et al.
Tohoku University

In the limited workspace model, we consider algorithms whose input resides in read-only memory and that use only a constant or sublinear amount of writable memory to accomplish their task. We survey recent results in computational geometry that fall into this model and that strive to achieve the lowest possible running time. In addition to discussing the state of the art, we give some illustrative examples and mention open problems for further research.


1 Introduction

Space usage has been a concern since the very early days of algorithm design. The increased availability of devices with limited memory or power supply (such as smartphones, drones, or small sensors), as well as the proliferation of new storage media for which write access is comparatively slow and may reduce the medium's lifetime (such as flash drives), has led to renewed interest in the subject. As a result, the design of algorithms for the limited workspace model has seen a significant rise in popularity in computational geometry over the last decade. In this setting, we typically have a large amount of data that needs to be processed. Although we may access the data in any way and as often as we like, write access to the main storage is limited and/or slow. Thus, we opt to use only higher-level memory (e.g., CPU registers) for intermediate data. Since the application areas of the devices mentioned above (sensors, smartphones, and drones) often handle large amounts of geographic, i.e., geometric, data, the scenario becomes particularly interesting from the viewpoint of computational geometry.

Motivated by these considerations, there have been numerous recent works developing algorithms for geometric problems that, in addition to the input, use only a small amount of memory. Furthermore, there has been research on time-space trade-offs, where the goal is to find the fastest algorithm for a given space budget. In the following, we give a broad overview of these results and of the current state of the field. We also provide examples to illustrate the main challenges that these algorithms must face and the techniques that have been developed to overcome them.

2 The Limited Workspace Model

Designing algorithms that require little working memory is a classic and well-known challenge in theoretical computer science. Over the years, it has been attacked from many different angles. In computational complexity theory, the complexity class LOGSPACE contains all algorithmic problems that can be solved with a workspace of a logarithmic number of bits [7, 40]. The research on LOGSPACE has led to several surprising insights [44, 58, 57], perhaps most recently the s-t-connectivity algorithm for undirected graphs by Reingold [56], an unexpected application of expander graphs. The focus in computational complexity is mainly on what can be done in principle in LOGSPACE; obtaining LOGSPACE-algorithms with a low running time is usually a secondary concern. Streaming algorithms are another classic area where the amount of writable memory is limited [53]. Here, the data items can be read only once (or a limited number of times), in an unknown order. In addition, the algorithm may maintain a sublinear amount of storage to accomplish its task. There are several results on geometric problems in the streaming model [45, 27], mostly dealing with problems concerning clustering and extent measures [4], but also with classic questions, such as computing convex hulls or low-dimensional linear programming [28]. The in-place model assumes that the input resides in a memory that can be read and written arbitrarily. The algorithm may use only a constant number of additional memory cells. This means that all complex data structures need to be encoded with the help of the input elements, severely restricting the algorithmic options at our disposal. There are geometric in-place algorithms for computing the convex hull or the Voronoi diagram of a given planar point set [26, 25, 29, 30]. In succinct data structures, the goal is to minimize the precise number of bits needed to represent the data, getting as close to the entropy bound as possible [54]. At the same time, one would like to retain the ability to support the desired data structure operations efficiently. In computational geometry, succinct data structures have been developed for classic problems like range searching, point location, or nearest neighbor search [43].

The present notion of a limited workspace algorithm, which constitutes the main focus of this survey, was introduced to the computational geometry community by Tetsuo Asano [8]. Initially, the model postulated a workspace that consists of a constant number of cells [12, 13]. Over the years, this was extended to also allow for time-space trade-offs. In the following, we describe the most general variant of the model.

The model is similar to the standard word RAM, in which the memory is organized as a sequence of cells that can be accessed in constant time via their addresses, each cell storing a single data word [48]. In contrast to the standard word RAM, the limited workspace model distinguishes two kinds of cells: (i) read-only cells that store the input; and (ii) read-write cells that constitute the algorithm’s workspace. A cell of the workspace can store either an integer of O(log n) bits, a pointer to some input cell, or a root of some polynomial of bounded degree that depends on a fixed number of input variables (for example, the intersection point of two lines, each passing through two input points). Here, n denotes the input size (measured in the number of cells).

Figure 1: The different types of memory cells available in the limited workspace model.

Typically, the output is larger than the workspace. Thus, we assume that the output is written sequentially to a dedicated write-only stream. Once a data item is written to the output stream, it cannot be accessed again by the algorithm; see Figure 1. How exactly the output should be structured usually depends on the algorithmic problem at hand. Our algorithms are typically deterministic, but there are also results that use randomization. Randomized algorithms in the limited workspace model may use an unlimited stream of random bits. However, these bits cannot be accessed arbitrarily: if the algorithm wishes to revisit previous random bits, it needs to store them in its workspace. (Refer to Goldreich’s book [40] for further discussion of randomness in the presence of space restrictions.) As usual, the running time of an algorithm on a given input is measured as the number of elementary operations that it performs. The space usage is counted as the number of cells in the workspace. Note that the input does not contribute to this count, but any other memory consumption does (such as memory implicitly allocated during recursion).
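To make the model concrete, here is a minimal sketch (in Python, not taken from the survey) of an algorithm in this spirit: the input is an immutable, read-only sequence, the output is produced as a write-only stream (a generator), and only a constant number of variables of workspace are kept. It sorts by repeated selection, trading time for space.

```python
def constant_workspace_sort(data):
    """Emit the elements of the read-only sequence `data` in
    nondecreasing order using O(1) cells of workspace.

    The input is never modified; the output is streamed and cannot
    be read back, matching the limited workspace model. Runs in
    O(n^2) time.
    """
    if not data:
        return
    cur = min(data)                 # one scan, O(1) workspace
    while True:
        # Output `cur` once per occurrence (another scan).
        for x in data:
            if x == cur:
                yield x
        # Smallest element strictly larger than `cur`, if any.
        nxt = min((x for x in data if x > cur), default=None)
        if nxt is None:
            return
        cur = nxt
```

For example, `list(constant_workspace_sort((3, 1, 2, 1)))` produces `[1, 1, 2, 3]` without ever writing to the input.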

To bring our model into perspective, we compare it with the related models. Unlike the typical viewpoint from computational complexity theory, our goal is to find the best running time that can be achieved with a given space budget. In contrast to streaming algorithms, we may read the input repeatedly and with random access. Unlike in-place algorithms, our input resides in read-only memory, and the workspace can potentially contain arbitrary data. When analyzing the space usage of an algorithm, we typically ignore constant factors and lower order terms, whereas these play a crucial role in succinct data structures.

Although the objective is to have algorithms that are fast and at the same time use little (ideally constant) workspace, it is normally not possible to achieve both goals simultaneously. Thus, the aim is to balance the two. Often, this results in a trade-off: as more cells of workspace become available, the running time decreases. Now, the precise relationship between the running time and the available space becomes the main focus of our attention. For many problems, this dependency is linear (i.e., by doubling the amount of workspace, the running time can be halved). However, recent research has uncovered a wide range of possible trade-offs that often interpolate smoothly between the best known results for constant workspace and for linear workspace.

Table 1 summarizes recent algorithms for geometric problems in the limited workspace model. In the rest of the survey, we will briefly discuss the problems that have been considered and the current state of the art.

Problem Running Time Space Source
shortest path in a tree [13]
all nearest larger neighbors [11]
sorting [10]
convex hull of a point set [35]
triangulation of a point set [12, 5]
Voronoi diagram/Delaunay triangulation [49, 20]
Voronoi diagrams of order 1 to K [20]
Euclidean minimum spanning tree [12, 19]
triangulation of a simple polygon [55, 6]
balanced partition of a simple polygon [55]
shortest path in a simple polygon [13, 42, 55]
triangulation of a monotone polygon [11]
visibility in a simple polygon [22]
k-visibility in a polygonal domain [16]
weak visibility in a simple polygon [1]
minimum link path in a simple polygon [1]
convex hull of a simple polygon [21]
convex hull of a simple polygon [21]
common tangents of two disjoint polygons [2, 3]
Table 1: A selection of problems and the best known running times in the limited workspace model. The O-notation has been omitted in the bounds. If the space usage is given as O(s), the parameter s may range from 1 to n (for some problems only over a restricted subrange). The running times for k-visibility and for the convex hull of a simple polygon have been simplified.

3 Point Sets in the Plane

The most basic problems in computational geometry concern point sets in the plane: convex hulls, triangulations, Voronoi diagrams, Euclidean minimum spanning trees, and related structures have captured the imagination of computational geometers for decades [24]. Naturally, these structures have also been an early focus in the study of the limited workspace model.

3.1 Convex Hulls

Asano et al. [12] observed that the h edges of the convex hull of a set of n points in the plane can be found in O(nh) time when O(1) cells of workspace are available, through a straightforward application of Jarvis’ classic gift-wrapping algorithm [46]. Darwish and Elmasry [35] extended this result to an asymptotically optimal time-space trade-off. For this, they developed a space-efficient heap data structure. The optimality of the trade-off is implied by a lower bound for the sorting problem due to Beame [23]. The algorithm by Darwish and Elmasry needs O(n²/s + n log s) time to output the edges of the convex hull, provided that O(s) cells of workspace are available. The underlying heap data structure is very versatile, and it can also be used to obtain time-space trade-offs for the sorting problem and for computing a triangulation of a planar point set [10, 49].
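The gift-wrapping approach can be sketched as follows (an illustrative Python version, not the authors' code): the point set is read-only, each hull edge is written to the output stream as soon as it is found, and only a constant number of variables are kept.

```python
def cross(o, a, b):
    """Orientation test: > 0 if o, a, b make a left turn."""
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def convex_hull_edges(pts):
    """Stream the convex hull edges of the read-only point
    sequence `pts` (Jarvis march): O(nh) time, O(1) workspace."""
    start = min(pts)              # lexicographically smallest point
    p = start
    while True:
        # Candidate for the next hull vertex: any point != p.
        q = pts[0] if pts[0] != p else pts[1]
        for r in pts:
            if r == p:
                continue
            c = cross(p, q, r)
            # Take r if it lies to the right of pq, or is
            # collinear with pq but farther from p.
            if c < 0 or (c == 0 and
                         (r[0]-p[0])**2 + (r[1]-p[1])**2 >
                         (q[0]-p[0])**2 + (q[1]-p[1])**2):
                q = r
        yield (p, q)              # write one edge to the output stream
        p = q
        if p == start:
            return
```

On the input `((0, 0), (1, 0), (1, 1), (0, 1), (0.4, 0.4))`, the generator reports the four hull edges in counterclockwise order, starting at the bottom-left vertex.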

3.2 Delaunay Triangulations and Voronoi Diagrams

Constant Workspace.

For computing the Delaunay triangulation and the Voronoi diagram of a given set S of n sites in the plane, Asano et al. [12] presented an O(n²)-time algorithm that uses O(1) cells of workspace under a general position assumption (i.e., no three sites of S lie on a common line and no four sites of S lie on a common circle). The Voronoi diagram of S, denoted VD(S), is obtained by classifying the points in the plane according to their nearest neighbor in S. For each site p in S, the open set of points in the plane with p as their unique nearest site in S is called the Voronoi cell of p and is denoted by VC(p). The Voronoi edge for two sites p, q in S consists of all points in the plane with p and q as their only two nearest sites. It is a (possibly empty) subset of the bisector B(p, q) of p and q, the line that contains all points with the same distance from p and from q. Finally, Voronoi vertices are the points in the plane that have exactly three nearest sites in S. It is well known that VD(S) has O(n) vertices, edges, and cells [24].

First, we explain how Asano et al. [12] find a single edge of a given cell of VD(S). Then, we repeatedly use this procedure to find all the edges of VD(S), using O(1) cells of workspace.

Lemma 3.1.

Given a site p in S and a ray r emanating from p that intersects VC(p), we can report an edge of VC(p) whose relative closure intersects r in O(n) time using O(1) cells of workspace.

Proof.

Among all bisectors B(p, q), for q in S \ {p}, we find a bisector B(p, q*) that intersects r closest to p. This can be done by scanning the sites of S while maintaining a closest bisector. The desired Voronoi edge e is a subset of B(p, q*). To find e, we scan S again. For each q in S \ {p, q*}, we compute the intersection between B(p, q) and B(p, q*). Each such intersection determines a piece of B(p, q*) not in e, namely the part of B(p, q*) that is closer to q than to p. The portion of B(p, q*) that remains after the scan is exactly e. Since the current piece of B(p, q*) in each step is connected, we must maintain at most two endpoints to represent the current candidate for e. The claim follows. ∎
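The two scans of this proof can be sketched as follows (an illustrative Python version with hypothetical names, assuming the ray actually intersects VC(p)). The candidate edge is represented by an interval of parameters along the bisector, so only O(1) values are stored.

```python
import math

def voronoi_edge_on_ray(sites, p, d):
    """Sketch of Lemma 3.1: given a site p of `sites` and a ray from
    p with direction d that intersects VC(p), report the edge of
    VC(p) met by the ray as (q*, endpoint, endpoint), where None
    encodes an unbounded end. Two scans, O(n) time, O(1) workspace."""
    # Scan 1: the bisector B(p, q*) hitting the ray closest to p.
    best_t, qstar = math.inf, None
    for q in sites:
        if q == p:
            continue
        m = (q[0]-p[0], q[1]-p[1])
        dm = d[0]*m[0] + d[1]*m[1]
        if dm <= 0:
            continue                 # this bisector misses the ray
        t = (m[0]*m[0] + m[1]*m[1]) / (2*dm)
        if t < best_t:
            best_t, qstar = t, q
    # Parametrize B(p, q*) as c + u*e, with c the midpoint of p, q*.
    c = ((p[0]+qstar[0])/2, (p[1]+qstar[1])/2)
    m = (qstar[0]-p[0], qstar[1]-p[1])
    e = (-m[1], m[0])                # direction of the bisector
    lo, hi = -math.inf, math.inf
    # Scan 2: clip the bisector against every halfplane
    # "closer to p than to q" (a linear constraint in u).
    for q in sites:
        if q in (p, qstar):
            continue
        w = (q[0]-p[0], q[1]-p[1])
        coef = 2*(e[0]*w[0] + e[1]*w[1])
        rhs = (q[0]**2 + q[1]**2 - p[0]**2 - p[1]**2
               - 2*(c[0]*w[0] + c[1]*w[1]))
        if coef > 0:
            hi = min(hi, rhs/coef)
        elif coef < 0:
            lo = max(lo, rhs/coef)
    point = lambda u: None if math.isinf(u) else (c[0]+u*e[0], c[1]+u*e[1])
    return qstar, point(lo), point(hi)
```

For the four corners of a square and the ray from (0, 0) in direction (1, 0), this reports the bisector with the site (2, 0) and the edge ending at the Voronoi vertex (1, 1), unbounded on the other side.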

Theorem 3.2.

Suppose we are given a set S of n planar sites in general position. We can find all the edges of VD(S) in O(n²) time using O(1) cells of workspace.

Proof.

We process each site p in S to find the edges on the boundary of VC(p). We choose r as the ray from p through an arbitrary site of S \ {p}. Then, r intersects VC(p). Using Lemma 3.1, we find an edge e of VC(p) that intersects r. We let r' be the ray from p through the left endpoint of e (if it exists), and we apply Lemma 3.1 again to find the left adjacent edge of e in VC(p). We repeat this to find further edges of VC(p), in counterclockwise order, until we return to e or until we find an unbounded edge of VC(p). In the latter case, we start again from the right endpoint of e (if it exists), and we find the remaining edges of VC(p), in clockwise order.

Since each Voronoi edge is incident to two Voronoi cells, we will encounter each edge twice. To avoid repetitions in the output, whenever we find an edge e of VC(p) that is shared with VC(q), we report e if and only if the index of p in the input is smaller than the index of q. Since VD(S) has O(n) edges, and since reporting one edge takes O(n) time and O(1) cells of workspace, the result follows. ∎

Time-Space Trade-Off.

Korman et al. [49] gave a randomized time-space trade-off for computing VD(S) that runs in expected time O((n²/s) log s + n log s log* s), provided that O(s) cells of workspace may be used. The algorithm is based on a space-efficient implementation of the Clarkson-Shor random sampling technique [33] that makes it possible to divide the problem into O(n/s) subproblems with O(s) sites each. All subproblems can then be handled simultaneously with the constant workspace method of Asano et al. [12], resulting in the desired running time.

More recently, Banyassady et al. [20] developed a better trade-off that is also applicable to Voronoi diagrams of higher order. Their algorithm is deterministic, and it computes the traditional Voronoi diagram as well as the farthest-site Voronoi diagram of S in O((n²/s) log s) time, using O(s) cells of workspace, for any parameter s ranging from 1 to n. The main idea is to obtain VD(S) by processing S in batches of Θ(s) sites each, using a special procedure to handle sites whose Voronoi cells have a large number of edges.

We now describe this algorithm in more detail. We assume that our algorithm has Θ(s) cells of workspace, for a given parameter s. This does not strengthen the model, and it simplifies the analysis of the algorithm, as we can assume that we can manipulate structures that require O(s) cells of space (such as a Voronoi diagram of O(s) sites) without a detailed analysis of the associated constants.

Lemma 3.3.

Suppose we are given a set Q of s sites in S, together with, for each p in Q, a ray r_p emanating from p that intersects VC(p). We can report, for each p in Q, an edge of VC(p) that intersects r_p, in total time O(n log s) using O(s) cells of workspace.

Proof.

The algorithm scans the sites of S twice. In the first scan, we find, for each p in Q, the bisector B_p that contains the desired edge e_p. In the second scan, we determine, for each p in Q, the exact portion of B_p that constitutes e_p.

We start by partitioning S into disjoint batches S_1, S_2, … of s consecutive sites in the input. Then, we compute VD(Q ∪ S_1) in O(s log s) time, using O(s) cells of workspace. For each p in Q, we find the edge of the cell of p in VD(Q ∪ S_1) that intersects r_p closest to p, and we store the bisector B_p that contains it. This takes O(s) time by a simple traversal of the diagram. Now, for each subsequent batch S_i, we repeat this procedure with VD(Q ∪ S_i), and we update B_p if the new diagram gives an edge that intersects r_p closer to p than B_p. After all batches have been scanned, the current bisector B_p is the desired final bisector, for each p in Q.

In the second scan, to find the portion of B_p that constitutes e_p, for p in Q, we again consider the batches of size s. As before, for each batch S_i, we compute VD(Q ∪ S_i), and for each p in Q, we find the portion of B_p inside the cell of p in VD(Q ∪ S_i). We update the endpoints of e_p to the intersection of the current e_p and this cell. After processing all batches, there is no site in S that is closer to a point of e_p than p. Thus, after the second scan, e_p is the edge of VC(p) that intersects r_p.

In total, we construct O(n/s) Voronoi diagrams, each with O(s) sites, in O(s log s) time each. The remaining work in each step of each scan is O(s). Thus, the total running time is O((n/s) · s log s) = O(n log s). At any time, we store O(s) sites in the workspace and a constant amount of information per site, including their Voronoi diagram. Thus, the space bound is not exceeded. ∎

The global time-space trade-off algorithm has two phases. In the first one, we scan S sequentially. During this scan, we keep a set Q of s sites from S in the workspace whose Voronoi cells we intend to find. In the beginning, Q consists of the first s sites from S, and we apply Lemma 3.3 to compute one edge of VD(S) on each cell VC(p), for p in Q. The starting ray r_p for each p in Q is constructed in the same way as in Theorem 3.2. After that, we update these rays to find the next edge on each VC(p), for p in Q, as in Theorem 3.2. Now, however, for each p in Q, we store the original ray in addition to the current ray. These two rays help us determine which edges on the boundary of VC(p) have been reported already. Whenever all edges for a site p have been found, we replace p with the next relevant site from S, and we say that p has been processed; see Figure 2.

Figure 2: Illustration of the algorithm of Banyassady et al. [20] after nine iterations of Lemma 3.3 for a set S of sites and a workspace of O(s) cells. The black segments are the edges of VD(S) that have already been found. The gray and the red sites represent, respectively, the sites that have been fully processed and those that are currently in the workspace.

Since (i) the Voronoi diagram of S has O(n) edges; (ii) in each iteration, we find s new edges; and (iii) each edge is encountered at most twice, it follows that after O(n/s) iterations of this procedure, fewer than s sites remain in Q. All other sites of S have been processed.

At this point, phase 1 of the algorithm ends (and the second one starts). During the execution of the first phase, we output only some of the Voronoi edges, according to the following rule: suppose we discover the edge e while scanning the site p in Q, and let q be the other site with e on its cell boundary. We output e only if either (i) q is a fresh site, i.e., q has not been processed yet and is not currently in Q (this can be tested using the index of the last site inserted into Q); or (ii) q is in Q and e has not yet been reported as an edge of VC(q) (this can be tested in O(log s) time with a binary search tree that contains all elements of Q, using the original and the current ray of q). In this way, each Voronoi edge is reported at most once.

Let Q' be the set of sites that have not been processed when the first phase of the algorithm ends. We cannot use Lemma 3.3 any longer, since it needs two passes over the input to find a single new edge for each site in Q', and the sites in Q' may have too many associated Voronoi edges. Each remaining Voronoi edge must be on the cell boundaries of two sites in Q'. Thus, in the second phase, we compute VD(Q') in O(s log s) time. Let E denote the set of its edges. Some edges of VD(Q') also occur in VD(S) (possibly in truncated form). To identify these edges, we proceed similarly to the second scan of Lemma 3.3: in each step, we compute the Voronoi diagram of Q' and a batch of s sites from S. For each edge e of VD(Q'), we check whether e occurs in this diagram (possibly in truncated form). If not, we set e to be empty; otherwise, we update the endpoints of e according to the truncated version. After all edges in E have been checked, we continue with the next batch of s sites from S. After processing all batches, the remaining non-empty edges in E are precisely the edges of VD(S) that are incident to two cells of sites in Q'. Some of these edges may have been reported in the first phase. We can identify each such edge in O(1) time by using the starting ray and the current ray of the two incident cells from the first phase. These rays are still available in the workspace, because Q' consists of those sites that were left over at the end of the first phase. We output the remaining edges in E.

Theorem 3.4.

Let S be a planar n-point set in general position, and let s ≤ n. We can report all edges of VD(S) in O((n²/s) log s) time using O(s) cells of workspace.

Proof.

Lemma 3.3 guarantees that the edges reported in the first phase are in VD(S). Also, conditions (i) and (ii) ensure that no edge is reported twice. Furthermore, in the second phase, no edge will be reported for a second time. Since Q' ⊆ S, an edge is incident to the cells of two sites in Q' if and only if the same edge (possibly in extended form) occurs in VD(Q'). Furthermore, for each edge of VD(Q'), we consider all sites of S, and we remove only the portions of the edge that cannot be present in VD(S).

Regarding the running time, the first phase requires O(n/s) invocations of Lemma 3.3 and O(n) tests of whether a Voronoi edge should be output. This takes O((n²/s) log s) time. The second phase does a single scan over S, and it computes a Voronoi diagram for each batch of s sites, which takes O(n log s) time in total. Thus, the total running time is O((n²/s) log s).

At each point, we store only O(s) sites (along with a constant amount of information attached to each site), the batch of s sites being processed, and the associated Voronoi diagram. All of this requires O(s) cells of workspace, as claimed. ∎

Banyassady et al. [20] also showed that these techniques work for Voronoi diagrams of higher order [14, 50]. More precisely, they obtained the first time-space trade-off for computing the family of all Voronoi diagrams of S from order 1 up to a given order K using O(s) cells of workspace. (The algorithm assumes that K is not too large relative to s: we need O(K) cells of workspace to represent a single feature of a Voronoi diagram of order K, so that the algorithm can handle up to s/K features of such a diagram with O(s) cells of workspace.) The algorithm is based on the simulated parallelization technique, i.e., there is one instance of the algorithm for each order j, for j = 1, …, K, where the instance for order j outputs the features of the order-j Voronoi diagram and produces the input needed by the instance for order j+1. The computational steps of the individual instances and the memory usage are coordinated in such a way as to make efficient use of the available workspace while avoiding an exponential blow-up in the running time that could be caused by a naive application of the technique.

Open Problem 1.

What is the best trade-off for computing higher-order Voronoi diagrams? Are there non-trivial lower bounds? Can we quickly compute a diagram of a given order without computing the diagrams of lower order? How can we compute Voronoi diagrams of even higher order when only O(s) words of workspace are available?

If the goal is to compute an arbitrary triangulation (rather than the Delaunay triangulation), Ahn et al. [5] gave a divide-and-conquer approach that is faster, using O(s) cells of workspace.

3.3 Euclidean Minimum Spanning Trees

Constant Workspace.

Let S be a set of n sites in the plane. The Euclidean minimum spanning tree of S, denoted EMST(S), is the minimum spanning tree of the complete graph on S, where each edge is weighted with the Euclidean distance between its endpoints. The task of computing EMST(S) was among the first problems to be considered in the limited workspace model. Asano et al. [12] provided an algorithm that reports the edges of EMST(S) by increasing length using O(n³) time and O(1) cells of workspace. This is still the fastest algorithm for the problem when constant workspace is available.

We now describe the algorithm in more detail. We assume that S is in general position, i.e., no three sites of S lie on a common line, no four sites of S lie on a common circle, and the edge lengths in the complete graph on S are pairwise distinct. Then, EMST(S) is unique. Given S, we can compute EMST(S) in O(n log n) time if O(n) cells of workspace are available [24]. It is well known that EMST(S) is a subgraph of the Delaunay triangulation of S, DT(S) [24]. Thus, it suffices to consider only the edges of DT(S) instead of the complete graph on S.
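For contrast, with unrestricted workspace the definition can be applied directly. The following Python sketch (illustrative, not from the survey) runs Kruskal's algorithm with a union-find structure on the complete Euclidean graph, which has Θ(n²) edges; running it on the O(n) edges of DT(S) instead is what yields the O(n log n) bound.

```python
from itertools import combinations

def emst_edges(sites):
    """Kruskal's algorithm on the complete Euclidean graph of
    `sites`. Uses Theta(n^2) workspace for the edge list -- the
    baseline that limited workspace algorithms avoid."""
    parent = {p: p for p in sites}
    def find(x):                            # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    dist2 = lambda p, q: (p[0]-q[0])**2 + (p[1]-q[1])**2
    tree = []
    for p, q in sorted(combinations(sites, 2), key=lambda e: dist2(*e)):
        rp, rq = find(p), find(q)
        if rp != rq:                        # pq joins two components
            parent[rp] = rq
            tree.append((p, q))
    return tree
```

For the triangle (0, 0), (4, 0), (2, 3), the two shorter edges are kept and the longest edge (0, 0)-(4, 0) is discarded.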

We apply Theorem 3.2 to construct DT(S) from the edges of VD(S) [12]. Since VD(S) and DT(S) are graph-theoretic duals, we can output all edges of DT(S) in O(n²) time with O(1) cells of workspace by adapting the algorithm in the proof of Theorem 3.2. Even more, we can implement a subroutine clockwiseNextDelaunayEdge that receives a Delaunay edge incident to a site p and finds the next clockwise Delaunay edge incident to p in O(n) time using O(1) cells of workspace.

By the bottleneck shortest path property of minimum spanning trees, a Delaunay edge pq is not in EMST(S) if and only if DT(S) contains a path between p and q consisting only of edges with length less than |pq| [36]. Let D_<|pq| be the subgraph of DT(S) with the edges of length less than |pq|. The subgraph D_≤|pq| is defined analogously, having all edges of DT(S) with length at most |pq|.
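The bottleneck criterion itself is easy to state in code. The following Python sketch (illustrative; it uses a plain BFS with linear workspace, so it is a correctness check rather than a limited workspace algorithm) tests whether a given Delaunay edge survives into the EMST:

```python
from collections import deque

def in_emst_bottleneck(delaunay_edges, p, q):
    """pq is NOT in EMST(S) iff p and q are connected using only
    Delaunay edges strictly shorter than |pq| (the bottleneck
    property). Plain BFS over D_<|pq|, linear workspace."""
    limit = (p[0]-q[0])**2 + (p[1]-q[1])**2
    adj = {}
    for u, v in delaunay_edges:
        if (u[0]-v[0])**2 + (u[1]-v[1])**2 < limit:   # edge of D_<|pq|
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
    seen, todo = {p}, deque([p])
    while todo:
        u = todo.popleft()
        if u == q:
            return False          # short path exists: pq is redundant
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                todo.append(v)
    return True                   # pq connects two components of D_<|pq|
```

On the triangle (0, 0), (4, 0), (2, 3), the longest edge fails the test (the two shorter edges connect its endpoints), while each shorter edge passes.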

Lemma 3.5.

Let S be a planar point set in general position and let pq be an edge of DT(S). Given pq, we can determine whether pq appears in EMST(S) in O(n²) time using O(1) cells of workspace.

Proof.

As explained above, we must determine whether there is a path between p and q in D_<|pq|. Since DT(S) is a plane graph, if p and q are connected in D_<|pq|, then there is a path between them that forms a face in D_<|pq| ∪ {pq} with pq on its boundary. Thus, we can check for the existence of such a path by walking from p along the boundary of the face of D_<|pq| that is intersected by pq and checking whether we encounter q.

In the first step, we find the clockwise next edge pa after pq that is incident to p and has length less than |pq|. This can be done by repeated invocations of clockwiseNextDelaunayEdge. In the next step, we find the clockwise next edge after ap that is incident to a and that has length less than |pq|, again by repeated calls to clockwiseNextDelaunayEdge. The walk continues, for at most O(n) steps, until we encounter the edge pq again. This can happen in two ways: (i) we see pq while enumerating the incident edges of q. Then, p and q are in the same component of D_<|pq|, and hence pq does not belong to EMST(S); or (ii) we see pq while again enumerating the incident edges of p. In this case, it follows that pq connects two connected components of D_<|pq|, and pq belongs to EMST(S). (Note that the walk may return to p first even though p and q lie in the same component of D_<|pq|. Thus, it is crucial that we continue the walk until pq is encountered for a second time.) See Figure 3 for an illustration.

Figure 3: Illustration of the algorithm for determining whether the edge pq is part of the EMST. (a) The walk starts from the edge pq. After 5 steps, it returns to p, but it does not see pq. Thus the walk continues until it encounters pq as an edge incident to q; hence pq is not in EMST(S). (b) A slightly modified instance in which p and q are in different components. The walk starts from pq, and after 7 steps it encounters pq as an edge incident to p. Thus, pq is in EMST(S).

This procedure generates a subset of the edges in D_≤|pq|, and each edge is generated at most twice, each time by a call to clockwiseNextDelaunayEdge that takes O(n) time. Thus, the total running time is O(n²). The space bound is immediate. ∎
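The walk of Lemma 3.5 can be sketched as follows (an illustrative Python version). Here a precomputed clockwise adjacency list stands in for the clockwiseNextDelaunayEdge subroutine, which the real algorithm recomputes on the fly in O(n) time with O(1) cells; beyond that oracle, the walk keeps only the current directed edge.

```python
def in_emst_walk(cw_adj, p, q):
    """Sketch of Lemma 3.5. `cw_adj[v]` lists the Delaunay
    neighbors of v in clockwise order (standing in for the
    clockwiseNextDelaunayEdge oracle). Walk the face of D_<|pq|
    intersected by pq until the edge pq is seen again:
    at q -> pq is redundant; at p -> pq belongs to the EMST."""
    dist2 = lambda a, b: (a[0]-b[0])**2 + (a[1]-b[1])**2
    limit = dist2(p, q)
    cur, prev = p, q                 # pretend we arrived at p along pq
    while True:
        nbrs = cw_adj[cur]
        i = nbrs.index(prev)
        for k in range(1, len(nbrs) + 1):
            r = nbrs[(i + k) % len(nbrs)]
            if cur == q and r == p:
                return False         # case (i): pq seen from q
            if cur == p and r == q:
                return True          # case (ii): pq seen again from p
            if dist2(cur, r) < limit:
                cur, prev = r, cur   # follow the next short edge
                break
```

On the Delaunay triangulation of the triangle (0, 0), (4, 0), (2, 3), the walk keeps the two short edges and rejects the longest one, in agreement with the bottleneck property.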

Theorem 3.6.

Given a set S of n sites in the plane in general position, we can output all edges of EMST(S) in O(n³) time using O(1) cells of workspace.

Proof.

The algorithm computes the edges of DT(S) using the adaptation of the algorithm in the proof of Theorem 3.2. Every time we detect a new Delaunay edge pq, we pause the computation of DT(S) and, using Lemma 3.5, we determine whether pq is in EMST(S). If so, we output pq (and do nothing otherwise). In either case, we resume the computation of DT(S) (pausing and resuming are possible because this subroutine uses only O(1) cells of space). Since DT(S) has O(n) edges, and since it takes O(n²) time to decide membership in EMST(S), the total time to determine the edges of EMST(S) is O(n³). Furthermore, by Theorem 3.2, the overhead for computing DT(S) is O(n²), which is negligible compared to the remainder of the algorithm. The space requirement is immediate from Theorem 3.2 and Lemma 3.5. ∎

Open Problem 2.

Can EMST(S) be found in o(n³) time using only constant workspace?

Time-Space Trade-Offs.

The running time of the algorithm from Theorem 3.6 is dominated by the O(n) calls to Lemma 3.5. Thus, the time-space trade-off from Theorem 3.4 does not immediately extend to computing EMST(S). Recently, Banyassady et al. [19] revisited the problem and provided a time-space trade-off, building on the ideas in Theorem 3.6. Their algorithm computes EMST(S) using O(s) cells of workspace, and its running time decreases as more workspace becomes available. It uses the workspace in two different ways: akin to Lemma 3.3, Θ(s) edges are checked in parallel for membership in EMST(S). Further, Banyassady et al. [19] introduce s-nets, a compact representation of planar graphs. Using s-nets, one can speed up Kruskal's MST algorithm by making better use of the limited workspace. (Although the spirit of the algorithm is the same, in [19], the walks are performed in the relative neighborhood graph of S instead of DT(S). This is critical, since DT(S) is not of bounded degree.) The s-net structure seems to be of independent interest, as it provides a compact way to represent planar graphs that could be exploited by other algorithms that deal with such graphs.

4 Triangulations, Partitions, and Shortest Paths in Polygons

Let P be a simple planar polygon with n vertices. When dealing with P, it is often useful to first compute a triangulation of P. We remind the reader of the terminology: a diagonal is a line segment between two vertices of P that passes through the interior of P. A triangulation of P is a maximal set of diagonals that do not cross each other. Any triangulation of P contains exactly n − 3 diagonals, and, famously and perhaps notoriously, a triangulation of P can be found in linear time when linear workspace is available [32].

In order to perform divide-and-conquer algorithms on P, we often would like to have a balanced partition of P: a chord is a line segment with endpoints on the boundary of P that passes through the interior of P. Given a parameter s, a balanced partition of P is a set of O(s) mutually non-crossing chords that partition P into subpolygons with O(n/s) vertices each. Ideally, the chords should be diagonals, but other choices, such as vertical segments, are acceptable as well. With linear workspace at our disposal, for any given s, a balanced partition of P can be obtained in linear time by first triangulating P and then traversing the dual graph of the triangulation while greedily selecting appropriate diagonals to serve as the chords of the partition.

A third classic problem on P is shortest path computation: given two points p and q inside P, find the unique (geodesic) shortest path from p to q that stays inside P. The standard algorithm for this problem triangulates P and then walks in the dual graph of the triangulation from the triangle containing p to the triangle containing q. During this walk, the algorithm maintains a simple funnel data structure and outputs the segments of the shortest path. This requires linear time and linear space (see, for example, Mitchell's survey [52]).

Thus, in traditional computational geometry, it appears that the most fundamental of the three problems is polygon triangulation. Once a subroutine for triangulating is at hand, the other two problems can be solved rather easily, with additional linear overhead. However, the study of the limited workspace model has revealed a more intricate web of relationships between these problems that we will now explain.

4.1 First Results in the Limited Workspace Model

Asano et al. [13] showed how to navigate a trapezoidation of P with O(1) cells of workspace, so that the trapezoids adjacent to any given trapezoid can be found in O(n) time (assuming that the vertices of P are in general position). They also gave a simple algorithm to output the path between two vertices of a tree that needs linear time and O(1) cells of workspace [13]. These two results together lead to a constant workspace algorithm for the shortest path problem in P that requires O(n²) time in theory and turns out to be quite efficient in practice [34]. Furthermore, Asano et al. provide two alternative constant workspace algorithms for shortest paths in polygons, based on constrained Delaunay triangulations [13] and on ray shooting [12], respectively. An experimental evaluation of these algorithms was conducted by Cleve and Mulzer [34]. It showed that the theoretical guarantees for the shortest path algorithms can also be observed in practice.

Open Problem 3.

How fast can we find shortest paths in polygonal domains in the limited workspace model?

4.2 Space-efficient Reductions

The initial study of Asano et al. [13] was quickly followed by another work of Asano et al. [9] that showed how to enumerate all triangles in a triangulation of P using O(n²) time and O(1) cells of workspace. They also proved the following: if one can compute a triangulation of P within a given time and workspace bound, then one can use a bottom-up approach to find diagonals that constitute a balanced partition of P into subpolygons, using essentially the same time and workspace. As they explain, such a balanced partition also leads to a time-space trade-off for the shortest path problem in P: given the diagonals of the partition, one can navigate through the subpolygons and compute a shortest path between any two vertices in P with little additional time and workspace. Thus, we have a reduction from shortest paths in polygons to polygon triangulation. At the time of Asano et al.’s work [9], the best general bound for triangulating P in the limited workspace model was quadratic, so that their result at first seemed useful only in a regime where P can be preprocessed and the balanced partition can be stored in the workspace for later use.

Aronov et al. [6] showed a connection between shortest paths and polygon triangulation that goes in the other direction. Given a shortest path algorithm as a black box, they use it to partition P recursively into smaller pieces that fit into the workspace. Then, each piece can be triangulated in the workspace using linear time and space [32]. Even though it may happen that the recursion runs out of space—in which case the algorithm needs to fall back to a brute force triangulation method—it turns out that if sufficiently many cells of workspace are at our disposal, the recursion always succeeds. The resulting running time is then dominated, up to constant factors, by the time needed to compute the shortest path between two points in a simple polygon with n vertices with the available workspace.

Thus, by combining the results of Asano et al. [9] and Aronov et al. [6], one can see that all three problems are equivalent in the limited workspace model: a triangulation can be used to partition P into balanced pieces [9]. Once we have the partition, we can compute the shortest path between any two points in P [9]. Finally, a shortest path subroutine allows us to find a triangulation of P in essentially the same time and space [6]. Thus, given a fast algorithm for any of the three problems, we can use the reductions to obtain algorithms for the other two problems that require essentially the same time and space (the transformations only add a small additive overhead).

4.3 Obtaining Genuine Trade-Offs

Even though the work of Asano et al. [9] gave a time-space trade-off for shortest paths in polygons, it did so at the expense of a preprocessing step that triangulates the polygon, a task for which they could only claim a quadratic-time algorithm, even when more cells of workspace were available. Thus, at SoCG 2014, Tetsuo Asano asked whether there is a more direct time-space trade-off for the shortest path problem that does not go through such a preprocessing step.

Soon after, Har-Peled [42] answered this question in the affirmative. He described a randomized algorithm that uses the violator space framework [38] and a new polygon decomposition technique [41] in order to compute the shortest path between any two points in P, with an expected running time that decreases as the number of available workspace cells grows. Har-Peled’s result, combined with the reductions of Asano et al. and Aronov et al., also gives algorithms for the other two problems that run in the same expected time, using the same workspace [6].

Very recently, Oh and Ahn [55] showed that a similar result can also be obtained with a deterministic algorithm. They explained how to find a trapezoidal decomposition of P deterministically within a comparable time-space trade-off. This decomposition, together with the method of Asano et al. [9], gives a deterministic algorithm for obtaining a balanced partition of P with chords, within the same trade-off. Again, the reductions yield algorithms with similar running times both for computing a triangulation of P and for shortest paths in P; see Table 1.

4.4 Special Cases

Special classes of polygons have also been studied. Asano and Kirkpatrick [11] showed how to triangulate a monotone polygon within a time-space trade-off. This trade-off is remarkable, since at the extreme, it decreases the required memory by a linear fraction while increasing the running time by only a logarithmic factor. Moreover, the running time improves gradually as the available space increases, matching the best bounds when linear workspace is available. The algorithm by Asano and Kirkpatrick proceeds through a reduction to the all nearest larger neighbors (ANLN) problem: given a sequence x1, …, xn of real numbers, find for each index i the closest indices j < i and k > i with xj > xi and xk > xi.

Open Problem 4.

Asano and Kirkpatrick showed that their algorithm is optimal in the constant workspace setting. Can the time-space trade-off be improved?

5 Other Problems in Simple Polygons

In addition to shortest paths and triangulations, problems concerning visibility, convex hulls, and common tangents of simple polygons have also been studied in the limited workspace model.

5.1 Visibility

A problem that has received particular attention in the limited workspace model is visibility in simple polygons. Visibility problems have played a major role in computational geometry for a long time; see [39] for an overview. The simplest problem in this family is as follows: given a simple polygon P, a point q in P, and an integer k ≥ 0, compute the set of points in P that are k-visible from q, i.e., the set of points p for which the connecting segment pq properly intersects the boundary of P at most k times. This set is called the k-visible region of q in P. Thus, if we interpret the polygon as the walls of a building, the 0-visible region of q is the set of points that can see q directly, without seeing through the walls. If we consider 2-visibility, we allow the segment to leave (and re-enter) P once, and so on.
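For intuition, deciding whether a single point is k-visible from q reduces to counting the proper crossings between the connecting segment and the boundary edges. The following naive linear-time Python test (our own illustration, assuming general position and both points inside the polygon) captures the definition:

```python
def orient(a, b, c):
    # twice the signed area of triangle (a, b, c)
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def properly_crosses(p, q, a, b):
    # segments pq and ab cross in their relative interiors
    # (general position assumed: no endpoint lies on the other segment)
    d1, d2 = orient(p, q, a), orient(p, q, b)
    d3, d4 = orient(a, b, p), orient(a, b, q)
    return d1 * d2 < 0 and d3 * d4 < 0

def is_k_visible(polygon, q, p, k):
    # p is k-visible from q iff segment pq properly crosses the boundary
    # of the polygon at most k times; O(n) time, O(1) extra workspace
    n = len(polygon)
    crossings = sum(properly_crosses(p, q, polygon[i], polygon[(i + 1) % n])
                    for i in range(n))
    return crossings <= k
```

Note that the actual algorithms discussed below compute the entire k-visible region, not just a single membership query, which is where the combinatorial difficulty lies.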

For k = 0, the k-visibility problem can be solved in linear time and space, using a classic algorithm [47]. For larger values of k, the problem can be solved in quadratic time [17]. (Note that the notion of k-visibility is slightly different in [17]: they consider all k-visible points in the plane and not just the points inside the polygon.) Several time-space trade-offs are known for computing the k-visible region. For 0-visibility, Barba et al. [22] presented an algorithm that requires O(1) cells of workspace and whose running time depends on the number of reflex vertices of P that appear in the 0-visible region of q. Their work also contains a time-space trade-off for finding the 0-visible region. More precisely, they describe a recursive method based on their constant workspace algorithm, with both deterministic and randomized expected running-time bounds that depend on the number of reflex vertices of P and on the amount of workspace, which may range from constant to linear [22]. Only slightly later, a superset of the authors [21] gave an improved algorithm for the 0-visibility problem that runs faster for specific combinations of the parameters. In fact, Barba et al. [21] discovered a much more general method for obtaining time-space trade-offs for a wide class of geometric algorithms that they call stack-based algorithms. See below for a more detailed description of this method.

Open Problem 5.

For constant workspace, the visibility region can be computed within the deterministic and randomized expected time bounds mentioned above. For linear workspace, it takes linear deterministic time. What happens in between? Partial answers are known. For example, using the compressed stack framework mentioned below, we can achieve linear time using O(n^ε) cells of workspace, for any fixed ε > 0. What is the smallest amount of workspace for which we can compute the visibility region in linear time?

For the general case of k ≥ 1, Bahoo et al. [16] provided a time-space trade-off that computes the k-visible region of q in P, where the workspace may range from constant to linear. The running time depends on the number of “critical” vertices of P, i.e., vertices where the k-visible region may change. (The actual trade-off is a bit more complicated, and we chose a simpler description for the sake of presentation.) This algorithm makes use of known time-space trade-offs for the selection problem [31] and requires a careful analysis of the combinatorics of the k-visible region of q in P.

Another notion of visibility was considered by Abrahamsen [1]. Given a simple polygon P and an edge e of P, the weak visibility region of e in P is the set of all the points inside P that are visible from at least one point on e. Abrahamsen developed constant workspace algorithms for edge-to-edge visibility. This leads to an output-sensitive constant workspace algorithm for finding the weak visibility region of an edge inside a given simple polygon. The result also gives a constant workspace algorithm for computing a minimum-link path between two points in a simple polygon with n vertices.

5.2 The Compressed Stack Framework

Barba et al. [21] provided a general method for stack-based algorithms in the limited workspace model. Intuitively, a deterministic incremental algorithm is stack-based if its main data structure takes the form of a stack. In addition to computing 0-visibility regions, classic examples from this category include the algorithms for computing the convex hull of a simple polygon by Lee [51] or for triangulating a monotone polygon by Garey et al. [37]. Applications to graphs of bounded treewidth are also known [18].

The general trade-off is obtained by using a compressed stack that explicitly stores only the parts of the stack that are needed for the current computation and that recomputes the remaining parts as they become necessary. Some delicate work goes into balancing the space required for the partial stack against the time needed for reconstructing the other parts. The upshot of applying this technique is as follows: given a stack-based algorithm that runs in linear time on inputs of size n and uses a stack of up to n cells, one can trade the stack space for running time, obtaining algorithms whose running time grows moderately as the available workspace shrinks. (Again, as above, the actual trade-off is more nuanced, but we simplified the statement to make it more digestible for the casual reader. More details can be found in the original paper [21].) An experimental evaluation of the framework was conducted by Baffier et al. [15].
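To convey the flavor of the technique, here is a toy Python illustration (heavily simplified compared to [21]; in particular, our reconstruction step replays the entire prefix with a full stack, whereas the real framework recurses so that the space bound is never exceeded). It runs a standard nearest-smaller-value scan while keeping only a constant-size block of the stack explicit and recomputing deeper cells on demand:

```python
def nearest_smaller(xs):
    # classic stack-based scan: for each x, the nearest smaller value to its left
    res, stack = [], []
    for x in xs:
        while stack and stack[-1] >= x:
            stack.pop()
        res.append(stack[-1] if stack else None)
        stack.append(x)
    return res

def nearest_smaller_compressed(xs, block=3):
    # same scan, but only the top `block` stack cells are stored explicitly;
    # deeper cells are recomputed on demand by replaying the input prefix
    def replay(prefix_len, keep):
        # rebuild the bottom `keep` stack cells after processing xs[:prefix_len];
        # a real compressed stack recurses here instead of using a full stack
        stack = []
        for x in xs[:prefix_len]:
            while stack and stack[-1] >= x:
                stack.pop()
            stack.append(x)
        return stack[:keep]
    res, top, hidden = [], [], 0      # hidden = number of non-explicit cells
    for i, x in enumerate(xs):
        while True:
            if not top and hidden:            # explicit block exhausted: refill
                full = replay(i, hidden)      # current stack = bottom cells
                top = full[-block:]
                hidden = len(full) - len(top)
            if top and top[-1] >= x:
                top.pop()
            else:
                break
        res.append(top[-1] if top else None)
        top.append(x)
        if len(top) > block:                  # forget the bottom of the block
            top.pop(0)
            hidden += 1
    return res
```

Because the stack content of a deterministic incremental algorithm is a function of the input prefix alone, the replay always reproduces exactly the cells that were discarded; the trade-off lies in how often such replays are triggered.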

Open Problem 6.

Can we modify the compressed stack framework to compress other structures (e.g., queues, deques, or trees)? If so, what additional applications follow from these techniques?

5.3 Common Tangents

The problem of finding the common tangents of two disjoint polygons can also be solved with a limited amount of workspace. Abrahamsen [2] showed that one can find the separating common tangents of two polygons with disjoint convex hulls in linear time and O(1) cells of workspace (if the convex hulls overlap, the algorithm reports that a separating common tangent does not exist). His algorithm is stunningly simple, consisting essentially of a single for-loop, but it requires a subtle analysis. In follow-up work, Abrahamsen and Walczak [3] changed two lines in Abrahamsen’s algorithm to show that the outer common tangents of two disjoint polygons can also be found in linear time and O(1) cells of workspace. Combining these two results, one can decide within the same time and space bounds whether the convex hulls of the given polygons are disjoint, overlapping, or nested.

Open Problem 7.

Can we find the outer common tangents of two non-disjoint polygons in linear time using a constant number of words of workspace?

References

  • [1] M. Abrahamsen. An optimal algorithm computing edge-to-edge visibility in a simple polygon. In Proc. 25th Canad. Conf. Comput. Geom. (CCCG), 2013.
  • [2] M. Abrahamsen. An optimal algorithm for the separating common tangents of two polygons. In Proc. 31st Int. Sympos. Comput. Geom. (SoCG), pages 198–208, 2015.
  • [3] M. Abrahamsen and B. Walczak. Outer common tangents and nesting of convex hulls in linear time and constant workspace. In Proc. 24th Annu. European Sympos. Algorithms (ESA), pages 4:1–4:15, 2016.
  • [4] P. K. Agarwal, S. Har-Peled, and K. R. Varadarajan. Approximating extent measures of points. J. ACM, 51(4):606–635, 2004.
  • [5] H.-K. Ahn, N. Baraldo, E. Oh, and F. Silvestri. A time-space trade-off for triangulations of points in the plane. In Proc. 23rd Internat. Comput. and Combinat. Conf. (COCOON), pages 3–12, 2017.
  • [6] B. Aronov, M. Korman, S. Pratt, A. van Renssen, and M. Roeloffzen. Time-space trade-offs for triangulating a simple polygon. In Proc. 15th Scand. Symp. Work. Alg. Theo. (SWAT), pages 30:1–30:12, 2016.
  • [7] S. Arora and B. Barak. Computational Complexity: A Modern Approach. Cambridge University Press, Cambridge, UK, 2009.
  • [8] T. Asano. Constant-working-space algorithms: How fast can we solve problems without using any extra array? In Proc. 19th Annu. Internat. Sympos. Algorithms Comput. (ISAAC), page 1, 2008.
  • [9] T. Asano, K. Buchin, M. Buchin, M. Korman, W. Mulzer, G. Rote, and A. Schulz. Memory-constrained algorithms for simple polygons. Comput. Geom., 46(8):959–969, 2013.
  • [10] T. Asano, A. Elmasry, and J. Katajainen. Priority queues and sorting for read-only data. In Proc. 10th Int. Conf. Theory and Applications of Models of Computation (TAMC), pages 32–41, 2013.
  • [11] T. Asano and D. G. Kirkpatrick. Time-space tradeoffs for all-nearest-larger-neighbors problems. In Proc. 13th Algorithms and Data Structures Symposium (WADS), pages 61–72, 2013.
  • [12] T. Asano, W. Mulzer, G. Rote, and Y. Wang. Constant-work-space algorithms for geometric problems. J. Comput. Geom., 2(1):46–68, 2011.
  • [13] T. Asano, W. Mulzer, and Y. Wang. Constant-work-space algorithms for shortest paths in trees and simple polygons. J. Graph Algorithms Appl., 15(5):569–586, 2011.
  • [14] F. Aurenhammer, R. Klein, and D.-T. Lee. Voronoi Diagrams and Delaunay Triangulations. World Scientific Publishing, 2013.
  • [15] J.-F. Baffier, Y. Diez, and M. Korman. Experimental study of compressed stack algorithms in limited memory environments. In Proc. 17th Int. Symp. Experimental Algorithms (SEA), page to appear, 2018.
  • [16] Y. Bahoo, B. Banyassady, P. Bose, S. Durocher, and W. Mulzer. Time-space trade-off for finding the k-visibility region of a point in a polygon. In Proc. 11th Workshop Alg. Comp. (WALCOM), pages 308–319, 2017.
  • [17] A. L. Bajuelos, S. Canales, G. Hernández-Peñalver, and A. M. Martins. A hybrid metaheuristic strategy for covering with wireless devices. J. UCS, 18(14):1906–1932, 2012.
  • [18] N. Banerjee, S. Chakraborty, V. Raman, S. Roy, and S. Saurabh. Time-space tradeoffs for dynamic programming algorithms in trees and bounded treewidth graphs. In Proc. 21st Internat. Comput. and Combinat. Conf. (COCOON), pages 349–360, 2015.
  • [19] B. Banyassady, L. Barba, and W. Mulzer. Time-space trade-offs for computing Euclidean minimum spanning trees. In Proc. 13th Latin American Symp. Theoret. Inform. (LATIN), pages 108–119, 2018.
  • [20] B. Banyassady, M. Korman, W. Mulzer, A. van Renssen, M. Roeloffzen, P. Seiferth, and Y. Stein. Improved time-space trade-offs for computing Voronoi diagrams. In Proc. 34th Sympos. Theoret. Aspects Comput. Sci. (STACS), pages 9:1–9:14, 2017.
  • [21] L. Barba, M. Korman, S. Langerman, K. Sadakane, and R. I. Silveira. Space-time trade-offs for stack-based algorithms. Algorithmica, 72(4):1097–1129, 2015.
  • [22] L. Barba, M. Korman, S. Langerman, and R. I. Silveira. Computing a visibility polygon using few variables. Comput. Geom., 47(9):918–926, 2014.
  • [23] P. Beame. A general sequential time-space tradeoff for finding unique elements. SIAM J. Comput., 20(2):270–277, 1991.
  • [24] M. de Berg, O. Cheong, M. van Kreveld, and M. Overmars. Computational Geometry: Algorithms and Applications. Springer-Verlag, Berlin, third edition, 2008.
  • [25] H. Brönnimann, T. M. Chan, and E. Y. Chen. Towards in-place geometric algorithms and data structures. In Proc. 20th Annu. Sympos. Comput. Geom. (SoCG), pages 239–246, 2004.
  • [26] H. Brönnimann, J. Iacono, J. Katajainen, P. Morin, J. Morrison, and G. T. Toussaint. In-place planar convex hull algorithms. In Proc. 5th Latin American Symp. Theoret. Inform. (LATIN), pages 494–507, 2002.
  • [27] T. M. Chan. Faster core-set constructions and data-stream algorithms in fixed dimensions. Comput. Geom., 35(1-2):20–35, 2006.
  • [28] T. M. Chan and E. Y. Chen. Multi-pass geometric algorithms. Discrete Comput. Geom., 37(1):79–102, 2007.
  • [29] T. M. Chan and E. Y. Chen. In-place 2-d nearest neighbor search. In Proc. 19th Annu. ACM-SIAM Sympos. Discrete Algorithms (SODA), pages 904–911, 2008.
  • [30] T. M. Chan and E. Y. Chen. Optimal in-place and cache-oblivious algorithms for 3-d convex hulls and 2-d segment intersection. Comput. Geom., 43(8):636–646, 2010.
  • [31] T. M. Chan, J. I. Munro, and V. Raman. Selection and sorting in the “restore” model. In Proc. 25th Annu. ACM-SIAM Sympos. Discrete Algorithms (SODA), pages 995–1004, 2014.
  • [32] B. Chazelle. Triangulating a simple polygon in linear time. Discrete Comput. Geom., 6(5):485–524, 1991.
  • [33] K. L. Clarkson and P. W. Shor. Applications of random sampling in computational geometry. II. Discrete Comput. Geom., 4(5):387–421, 1989.
  • [34] J. Cleve and W. Mulzer. An experimental study of algorithms for geodesic shortest paths in the constant workspace model. In Proc. 33rd European Workshop Comput. Geom. (EWCG), pages 165–168, 2017.
  • [35] O. Darwish and A. Elmasry. Optimal time-space tradeoff for the 2d convex-hull problem. In Proc. 22nd Annu. European Sympos. Algorithms (ESA), pages 284–295, 2014.
  • [36] D. Eppstein. Spanning trees and spanners. In J.-R. Sack and J. Urrutia, editors, Handbook of Computational Geometry, chapter 9, pages 425–461. Elsevier, 2000.
  • [37] M. R. Garey, D. S. Johnson, F. P. Preparata, and R. E. Tarjan. Triangulating a simple polygon. Inform. Process. Lett., 7(4):175–179, 1978.
  • [38] B. Gärtner, J. Matoušek, L. Rüst, and P. Skovron. Violator spaces: Structure and algorithms. Discrete Applied Mathematics, 156(11):2124–2141, 2008.
  • [39] S. K. Ghosh. Visibility Algorithms in the Plane. Cambridge University Press, New York, NY, USA, 2007.
  • [40] O. Goldreich. Computational Complexity. A conceptual perspective. Cambridge University Press, Cambridge, UK, 2008.
  • [41] S. Har-Peled. Quasi-polynomial time approximation scheme for sparse subsets of polygons. In Proc. 30th Annu. Sympos. Comput. Geom. (SoCG), pages 120–129, 2014.
  • [42] S. Har-Peled. Shortest path in a polygon using sublinear space. J. Comput. Geom., 7(2):19–45, 2016.
  • [43] M. He. Succinct and implicit data structures for computational geometry. In Space-Efficient Data Structures, Streams, and Algorithms–Papers in Honor of J. Ian Munro on the Occasion of His 66th Birthday, pages 216–235, 2013.
  • [44] N. Immerman. Nondeterministic space is closed under complementation. SIAM J. Comput., 17(5):935–938, 1988.
  • [45] P. Indyk. Streaming algorithms for geometric problems. In Proc. 24th Annu. Conf. Found. Software Technology and Theoret. Comput. Sci. (FSTTCS), pages 32–34, 2004.
  • [46] R. A. Jarvis. On the identification of the convex hull of a finite set of points in the plane. Inform. Process. Lett., 2(1):18–21, 1973.
  • [47] B. Joe and R. B. Simpson. Corrections to Lee’s visibility polygon algorithm. BIT, 27(4):458–473, 1987.
  • [48] D. Knuth. The Art of Computer Programming: Fundamental Algorithms, volume 1. Addison-Wesley, Redwood City, CA, USA, 3rd edition, 1997.
  • [49] M. Korman, W. Mulzer, A. van Renssen, M. Roeloffzen, P. Seiferth, and Y. Stein. Time-space trade-offs for triangulations and Voronoi diagrams. Comput. Geom., page available online, 2017. doi:10.1016/j.comgeo.2017.01.001.
  • [50] D.-T. Lee. On k-nearest neighbor Voronoi diagrams in the plane. IEEE Trans. Computers, 31(6):478–487, 1982.
  • [51] D.-T. Lee. On finding the convex hull of a simple polygon. International Journal of Parallel Programming, 12(2):87–98, 1983.
  • [52] J. S. B. Mitchell. Shortest paths and networks. In J. E. Goodman, J. O’Rourke, and C. D. Tóth, editors, Handbook of Discrete and Computational Geometry, chapter 31, pages 811–848. CRC Press, Inc., 3rd edition, 2017.
  • [53] S. Muthukrishnan. Data streams: Algorithms and applications. Foundations and Trends in Theoretical Computer Science, 1(2):117–236, 2005.
  • [54] G. Navarro. Compact Data Structures - A Practical Approach. Cambridge University Press, 2016.
  • [55] E. Oh and H.-K. Ahn. A New Balanced Subdivision of a Simple Polygon for Time-Space Trade-off Algorithms. In Proc. 28th Annu. Internat. Sympos. Algorithms Comput. (ISAAC), pages 61:1–61:12, 2017.
  • [56] O. Reingold. Undirected connectivity in log-space. J. ACM, 55(4):Art. #17, 24 pp., 2008.
  • [57] W. J. Savitch. Relationships between nondeterministic and deterministic tape complexities. J. Comput. System Sci., 4(2):177–192, 1970.
  • [58] R. Szelepcsényi. The method of forcing for nondeterministic automata. Bulletin of the EATCS, 33:96–99, 1987.