1 Introduction
Let 𝒟 be a family of disks in the plane. The disk graph D(𝒟) for 𝒟 is the undirected graph with vertex set 𝒟 and edge set
{D₁D₂ : D₁ ≠ D₂ and D₁ ∩ D₂ ≠ ∅}.
If the disks in 𝒟 are partitioned into two sets 𝒟₁ and 𝒟₂, one can also define a bipartite intersection graph by considering only the edges that come from an intersection between a disk in 𝒟₁ and a disk in 𝒟₂. If all disks in 𝒟 have the same radius, we call D(𝒟) a unit disk graph. A directed version of disk graphs can be defined as follows: for D ∈ 𝒟, let c_D denote the center of D. The directed transmission graph T(𝒟) is the directed graph with vertex set 𝒟 and edge set
{(D₁, D₂) : D₁ ≠ D₂ and c_{D₂} ∈ D₁}.
If we ignore the direction of the edges in T(𝒟), we obtain a subgraph of D(𝒟).
Unit disk graphs are often used to model ad hoc wireless communication networks and sensor networks [GG11, ZG04, HS95]. Disks of varying sizes become relevant when different sensors cover different areas. Moreover, general disk graphs may serve as a tool to approach other problems; for example, an application to the barrier resilience problem [KLA07] is discussed below. Directed transmission graphs model ad hoc networks where different entities have different power ranges [PR10].
Minimum s-t cut in disk graphs.
Consider a graph G with n vertices and m edges, and two nonadjacent vertices s, t ∈ V(G). A set S ⊆ V(G) ∖ {s, t} of vertices is called an s-t (vertex) cut if G − S contains no path from s to t. Two paths from s to t are (internally) vertex-disjoint if their only common vertices are s and t. By Menger's theorem (see, for example, [KV10, Section 8.2]), the minimum size of an s-t cut equals the maximum number of vertex-disjoint s-t paths, both in directed and in undirected graphs. Using blocking flows, Even and Tarjan, as well as Karzanov [ET75, Kar73] showed that a minimum s-t cut can be computed in time O(√n · m). In the worst case, if m = Θ(n²), this is O(n^{5/2}). This was an improvement over the previous algorithm by Dinitz [Din70]; see [Din06] for a great historical account of the algorithms. In particular, the use of DFS did not appear in his original description [Din70], but it was developed by Shimon Even and Alon Itai and included in Even's textbook [Eve79]. The more recent Õ(m^{10/7})-time algorithm of Mądry [Mąd13] gives a better running time for sparse graphs, i.e., for m = o(n^{7/6}).
The size of a minimum s-t vertex cut in a network is a key estimator for its vulnerability. Since such networks often arise from geometric settings, it is natural to consider the case where the network is a disk graph. A particularly interesting scenario of this kind is the barrier resilience problem, an optimization problem introduced by Kumar, Lai, and Arora [KLA07]. We are given a vertical strip bounded by two vertical lines, ℓ₁ and ℓ₂, and a collection 𝒟 of disks. Each disk represents a region monitored by a sensor. Let s be a point in the strip above all disks of 𝒟, and let t be a point in the strip below all disks of 𝒟. The task is to find a curve from s to t that lies in the strip and that intersects as few disks of 𝒟 as possible. This models the resilience of monitoring a boundary region with respect to (total) failures of the sensors. Kumar, Lai, and Arora show that the problem reduces to a minimum-cut problem between ℓ₁ and ℓ₂ in the intersection graph of 𝒟 ∪ {ℓ₁, ℓ₂}. A variant of the problem, called minimum shrinkage, was recently introduced by Cabello et al. [CJLM20]. Here, the task is to shrink some of the disks, potentially by different amounts, such that there is an s-t curve that is disjoint from the interiors of all disks. The objective is to minimize the total amount of shrinkage. Cabello et al. provide an FPTAS by reducing the problem to a barrier resilience instance with disks of different radii.
Our Results.
We exploit the geometric structure to provide a new algorithm to find the minimum s-t cut in disk graphs and directed transmission graphs in O(n^{3/2} polylog n) expected time. For this, we adapt the approach of Even and Tarjan [ET75], extending it with suitable geometric data structures. Our method is similar in spirit to the algorithm by Efrat, Itai, and Katz [EIK01] for maximum bipartite matching in (unit) disk graphs. However, since our graph is not bipartite, the structure of the graph is more complex and additional care is needed.
2 Minimum s-t Cut in Disk Graphs
Let 𝒟 be a set of n disks in the plane, and let s, t ∈ 𝒟 be two nonintersecting disks. We show how to compute the maximum number of vertex-disjoint paths between s and t in D(𝒟) and in T(𝒟). This also provides a way to find a minimum s-t (vertex) cut. For this, we adapt the algorithm of Even and Tarjan [ET75] to our geometric setting. First, we suppose that certain geometric primitives are available as a black box, and we analyze the running time under this assumption. Then, we instantiate these primitives with appropriate data structures to obtain the desired result.
2.1 Generic algorithm
Let G = (V, E) be a graph with n vertices and m edges, and let s and t be two nonadjacent vertices of G. We want to find the maximum number of paths from s to t in G that are pairwise vertex-disjoint. The graph G is assumed to be directed.¹

¹Otherwise, we replace each undirected edge uv by the two directed edges (u, v) and (v, u). An optimal solution to the directed instance directly gives an optimal solution to the undirected case.
First, we transform the graph G into another graph H in which every vertex other than s and t has indegree 1 or outdegree 1. More precisely, for each vertex v ∈ V ∖ {s, t}, we perform the following operation: we replace v with two new vertices v_in and v_out, add the directed edge (v_in, v_out), replace every directed edge (u, v) with (u, v_in), and replace every directed edge (v, w) with (v_out, w); see Figure 1. The vertices s and t remain untouched. The transformed graph H has 2n − 2 vertices and m + n − 2 edges. It is bipartite, as can be seen by partitioning the vertices into the sets {v_in : v ∈ V ∖ {s, t}} ∪ {t} and {v_out : v ∈ V ∖ {s, t}} ∪ {s}. Vertex-disjoint s-t paths in G directly correspond to vertex-disjoint s-t paths in H. Furthermore, in H we have that edge-disjoint and vertex-disjoint s-t paths are equivalent, because every vertex (other than s and t) has indegree 1 or outdegree 1. Thus, it suffices to find the maximum number of edge-disjoint s-t paths in H.
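The vertex-splitting transformation can be sketched as follows. This is only a minimal illustration; the encoding of v_in and v_out as tagged pairs (with s and t kept as single nodes tagged '*') is our own choice, not notation from the text:

```python
def split_vertices(edges, s, t):
    """Vertex splitting as in Section 2.1: every vertex v other than
    s and t becomes the edge v_in -> v_out, and an edge (u, v) of G
    becomes an edge from u's out-copy to v's in-copy in H."""
    def head(v):  # node that an edge of G enters
        return (v, '*') if v in (s, t) else (v, 'in')
    def tail(v):  # node that an edge of G leaves
        return (v, '*') if v in (s, t) else (v, 'out')
    verts = {v for e in edges for v in e} | {s, t}
    # splitting edges v_in -> v_out for interior vertices
    h_edges = {(head(v), tail(v)) for v in verts if v not in (s, t)}
    # original edges, rerouted through the copies
    h_edges |= {(tail(u), head(v)) for (u, v) in edges}
    return h_edges
```

On a path 0 → 1 → 2 with s = 0 and t = 2, the result has m + n − 2 = 3 edges, as stated above.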
Assume we have a family 𝒫 of edge-disjoint s-t paths in H. Let E_𝒫 denote the set of all the directed edges on the paths of 𝒫. See Figure 2 for an illustration of the following concepts and discussion. The residual graph H_𝒫 is the directed graph with vertex set V(H) and edge set
(E(H) ∖ E_𝒫) ∪ {(v, u) : (u, v) ∈ E_𝒫}.
The residual graph H_𝒫 is bipartite with the same bipartition as H. As in H, every vertex in V(H) ∖ {s, t} has indegree or outdegree at most 1.
For a vertex v of H, the level ℓ_𝒫(v) (with respect to 𝒫) of v is the BFS-distance from s to v in H_𝒫, i.e., the minimum number of edges on a path from s to v in H_𝒫.² If v is not reachable from s in H_𝒫, we set ℓ_𝒫(v) = ∞. For every integer j ≥ 0, the layer L_j is the set of vertices at level j, i.e., L_j = {v ∈ V(H) : ℓ_𝒫(v) = j}. The layered residual graph L_𝒫 for H and 𝒫 is the subgraph of the residual graph H_𝒫 where only the directed edges from L_j to L_{j+1}, for 0 ≤ j ≤ λ − 2, and the directed edges from L_{λ−1} to t are kept. More precisely, this means that L_𝒫 has vertex set L_0 ∪ L_1 ∪ ⋯ ∪ L_{λ−1} ∪ {t} and directed edge set
{(u, v) ∈ E(H_𝒫) : u ∈ L_j and v ∈ L_{j+1} for some 0 ≤ j ≤ λ − 2} ∪ {(u, t) ∈ E(H_𝒫) : u ∈ L_{λ−1}},
where λ = ℓ_𝒫(t) denotes the level of t.

²Recall that the level ℓ_𝒫(v) depends on both the graph H and the family 𝒫.
Let 𝒬 be a family of edge-disjoint s-t paths in the layered residual graph L_𝒫, and let E_𝒬 be the set of directed edges on the paths of 𝒬. By construction, all paths of 𝒬 have exactly λ = ℓ_𝒫(t) edges. Using the paths of 𝒫 in H and the paths of 𝒬 in L_𝒫, we can obtain |𝒫| + |𝒬| edge-disjoint s-t paths in H. For this, consider the edges
E' = (E_𝒫 ∪ E_𝒬) ∖ {(u, v), (v, u) : (u, v) ∈ E_𝒫 and (v, u) ∈ E_𝒬}
that are obtained from E_𝒫 ∪ E_𝒬 by canceling out directed edges that appear in both directions. The following observation is simple:
Lemma 1.
The set E' consists of the edges of |𝒫| + |𝒬| edge-disjoint s-t paths in H. Given 𝒫 and 𝒬, we can construct E' and the corresponding edge-disjoint s-t paths in H in O(n) total time.
Proof.
The definition of E' ensures that the edges of E' all lie in E(H), since for an edge (u, v) ∈ E_𝒬 with (u, v) ∉ E(H), we must have (v, u) ∈ E_𝒫. Furthermore, every vertex v of V(H) ∖ {s, t} has indegree and outdegree both 0 or both 1 in E'. This is clear if v appears on at most one path in 𝒫 ∪ 𝒬. If v appears on both a path from 𝒫 and from 𝒬, then one incoming edge and one outgoing edge of v must cancel, since v has at most one incoming or outgoing edge in H and the corresponding reverse edge must have appeared on a path in 𝒬. The indegree of s is 0 and the outdegree of t is 0. Moreover, the outdegree of s is |𝒫| + |𝒬|, because the outgoing edges from s never cancel out. This means that E' defines |𝒫| + |𝒬| paths from s to t. These paths can be found in O(n) time by constructing the graph (V(H), E') explicitly. ∎
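The cancellation step of Lemma 1 amounts to a symmetric-difference-style operation on edge sets. A small sketch, with edge sets represented as Python sets of ordered pairs (the function name is ours):

```python
def augment(P_edges, Q_edges):
    """Form E' from E_P and E_Q as in Lemma 1: drop every pair of
    directed edges that appear in opposite directions, keep the rest."""
    cancelled = {(u, v) for (u, v) in Q_edges if (v, u) in P_edges}
    reversed_cancelled = {(v, u) for (u, v) in cancelled}
    return (P_edges - reversed_cancelled) | (Q_edges - cancelled)
```

For instance, if a residual path of 𝒬 uses the reverse (1, 0) of a path edge (0, 1) of 𝒫, both copies are cancelled and only the remaining edges survive.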
A family 𝒬 of s-t paths in the layered residual graph L_𝒫 is blocking if L_𝒫 − E_𝒬 contains no s-t path, i.e., every s-t path in L_𝒫 contains at least one edge from E_𝒬.
Lemma 2.
Let L_𝒫 be a layered residual graph. In O(n + m) time, we can find a blocking family 𝒬 of s-t paths in L_𝒫.
Proof.
This lemma is due to Even and Tarjan. We describe the algorithm because we will adapt it to our geometric setting below. We refer to the paper of Even and Tarjan [ET75] for the running time analysis and the proof of correctness.
We start with L¹ = L_𝒫, 𝒬 = ∅, and i = 1. The algorithm proceeds in rounds. In round i, we perform a DFS traversal from s in Lⁱ. When we reach t, the DFS stack contains a path pᵢ from s to t in Lⁱ. We add the path pᵢ to 𝒬, and we obtain L^{i+1} by removing from Lⁱ all the vertices (other than s and t) that have been explored during the partial DFS traversal of Lⁱ. We finish when the graph Lⁱ of the current round does not contain any s-t path. This is detected during the DFS traversal of Lⁱ. ∎
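The round structure of Lemma 2 can be sketched as follows, with explored vertices retired so that each edge is inspected a constant number of times. The adjacency-dict representation of the layered graph is an assumption for illustration:

```python
def blocking_paths(layered_adj, s, t):
    """Blocking family of s-t paths in a layered graph (Lemma 2 sketch).
    layered_adj maps each vertex to its out-neighbors in the next layer.
    Vertices are retired ('dead') once fully explored or used on a path."""
    adj = {u: list(vs) for u, vs in layered_adj.items()}
    paths, dead = [], set()
    stack = [s]
    while stack:
        u = stack[-1]
        if u == t:                      # the stack holds an s-t path
            paths.append(list(stack))
            dead.update(stack[1:-1])    # retire interior path vertices
            stack = [s]                 # next round: restart the DFS at s
            continue
        nbrs = adj.get(u, [])
        while nbrs and nbrs[-1] in dead:
            nbrs.pop()                  # skip retired neighbors
        if nbrs:
            stack.append(nbrs.pop())
        else:
            dead.add(u)                 # dead end: retire and backtrack
            stack.pop()
    return paths
```

After the call returns, every remaining s-t path in the layered graph necessarily reuses a retired vertex, so the family is blocking.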
The algorithm to find the maximum number of edge-disjoint s-t paths in H is the following: we start with 𝒫₀ = ∅. Then, for i = 0, 1, 2, …, we construct the residual graph H_{𝒫ᵢ}, the layered residual graph L_{𝒫ᵢ}, a blocking family 𝒬ᵢ of s-t paths in L_{𝒫ᵢ}, and we set 𝒫ᵢ₊₁ to the set of (edge-disjoint) s-t paths defined by 𝒫ᵢ and 𝒬ᵢ as in Lemma 1. We finish when H_{𝒫ᵢ} contains no s-t path. The work performed for a single value of i (constructing H_{𝒫ᵢ}, L_{𝒫ᵢ}, 𝒬ᵢ, and 𝒫ᵢ₊₁) is called a phase. Let ℓᵢ(t) denote the level of t in the residual graph H_{𝒫ᵢ}. Even and Tarjan [ET75] show that ℓᵢ(t) increases monotonically as a function of i. Thus, using that the paths of 𝒫ᵢ are vertex-disjoint and have length at least ℓᵢ(t) (whenever H_{𝒫ᵢ} contains some s-t path), one obtains the following.
Theorem 3 (Even and Tarjan [ET75]).
The algorithm performs at most O(√n) phases. When the algorithm finishes, 𝒫ᵢ contains the maximum possible number of edge-disjoint s-t paths in H, and thus yields the maximum number of vertex-disjoint s-t paths in G.
2.2 Adaptation for neighbor queries
We want to adapt the algorithm from Section 2.1 to our geometric setting. For this, we extend the approach by Efrat, Itai, and Katz [EIK01] for finding maximum matchings in bipartite geometric intersection graphs. The idea is to avoid the explicit construction of the layered residual graphs L_{𝒫ᵢ}, and to use instead an implicit representation that allows for an efficient DFS traversal of the current L_{𝒫ᵢ}. For this, we identify which vertices belong to each layer of the current L_{𝒫ᵢ}, and we use dynamic nearest-neighbor data structures to find the directed edges between the layers. In order to encapsulate the geometric primitives, we assume that we have a certain geometric data structure to access the directed edges of G. Note that the assumption is on the original graph G, not on the transformed graph H. Later, we will describe how such a data structure can be derived from known results about (semi)dynamic nearest-neighbor searching.
Graph Encoding A.
Let G = (V, E) be a directed graph with n vertices. We assume that we have a data structure D that semidynamically maintains a subset S ⊆ V with the following operations:

– constructing the data structure for the elements of S takes T(n) time, where T satisfies T(n) = Ω(n);

– a deletion of a vertex in S can be done in amortized time T(n)/n; and

– for any query vertex u ∈ V, we can, in amortized time T(n)/n, find an outgoing edge (u, v) ∈ E with v ∈ S, or correctly report that S contains no such vertex v.
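As a plain, non-geometric stand-in, Graph Encoding A can be mimicked by a structure that scans adjacency lists. The class below is only a sketch of the interface (names are ours); a geometric implementation would answer the query with a dynamic nearest-neighbor structure instead of a scan in order to meet the stated T(n)/n amortized bounds:

```python
class EncodingA:
    """Naive sketch of the Graph Encoding A interface: maintains a
    subset S of the vertices under deletions and reports an edge from
    a query vertex into S.  The scan-based query does NOT achieve the
    T(n)/n amortized bound; it only illustrates the contract."""
    def __init__(self, adj, S):
        self.adj = adj        # fixed out-adjacency of G (dict: u -> list)
        self.S = set(S)       # current subset, shrinks under deletions
    def delete(self, v):
        self.S.discard(v)
    def query(self, u):
        """Return some v with (u, v) in E and v in S, or None."""
        return next((v for v in self.adj.get(u, ()) if v in self.S), None)
```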
Henceforth, we assume that our n-vertex graph G can be accessed as in Graph Encoding A. As before, we denote the corresponding transformed graph by H. First, we show how to find the levels in the layered residual graph.
Lemma 4.
Let 𝒫 be a set of edge-disjoint paths in the transformed graph H. In time O(T(n)), we can find the level ℓ_𝒫(v) of each vertex v in the layered residual graph L_𝒫.
Proof.
Our goal is to perform a BFS in the residual graph H_𝒫 without explicitly constructing the edge set of H_𝒫. In a preprocessing phase, for every vertex v in G that appears in some path of 𝒫, we mark v and store the unique vertices pred_𝒫(v) and succ_𝒫(v) such that (pred_𝒫(v)_out, v_in) and (v_out, succ_𝒫(v)_in) are directed edges in E_𝒫. This takes time O(n).
Next, we set S = V ∖ {s} and construct the data structure D of Graph Encoding A for S. Thus, the current vertex set in D is initially V ∖ {s}. In our algorithm, we iteratively compute the layers L_j, for j = 1, 2, …. In the process, we maintain the invariant that, after computing L_j, the structure D contains t and the vertices v for which we do not yet know the level of v_in in H_𝒫.

To find L₁, we repeatedly query D with s and remove from D the reported item, until D contains no further out-neighbors of s. This gives the set
N(s) = {v ∈ V : (s, v) ∈ E}
of all out-neighbors of s in G. Let {v_in : v ∈ N(s)} be the set of corresponding out-neighbors of s in H. We filter this set and remove those vertices v_in such that v is in some path of 𝒫 and has (s, v_in) ∈ E_𝒫. This gives a set L₁ with L₁ = {v_in : (s, v_in) ∈ E(H) ∖ E_𝒫}. For each vertex v_in ∈ L₁, we set ℓ_𝒫(v_in) = 1. For each vertex v ∈ N(s) with (s, v_in) ∈ E_𝒫, the level of v_in in H_𝒫 is not yet known. If D supported insertions, we would insert the vertices v with (s, v_in) ∈ E_𝒫 back into D. Instead, we just construct the data structure D anew for S = (V ∖ ({s} ∪ N(s))) ∪ {v ∈ N(s) : (s, v_in) ∈ E_𝒫}.
Then, for j = 2, 3, …, while L_{j−1} is not empty and does not contain t, we compute L_j. If j is even, we iterate over the vertices v_in of L_{j−1}; see Figure 3. The vertex v_in has one outgoing edge in H_𝒫: if v does not lie on some path of 𝒫, then H_𝒫 contains only the outgoing edge (v_in, v_out); if v lies on some path of 𝒫, then H_𝒫 contains only the outgoing edge (v_in, u_out), where u = pred_𝒫(v). If v does not belong to any path of 𝒫, we set ℓ_𝒫(v_out) = j and add v_out to L_j. (In this case, the only incoming edge to v_out in the residual graph is from v_in, so we know that ℓ_𝒫(v_out) was not yet determined.) If v belongs to some path of 𝒫, we set u = pred_𝒫(v) and distinguish two cases. If u = s, we do not need to do anything because ℓ_𝒫(s) = 0 is already set. If u ≠ s, we set ℓ_𝒫(u_out) = j and add u_out to L_j. (In this case, (v_in, u_out) is the reverse of the edge (u_out, v_in) ∈ E_𝒫, and ℓ_𝒫(u_out) was not yet determined because (v_in, u_out) is the only incoming edge to u_out in the residual graph.)
If j is odd, we iterate over the vertices v_out of L_{j−1}; see Figure 4. If the vertex v does not lie on some path of 𝒫, the outgoing edges of v_out in H_𝒫 correspond to the outgoing edges of v in G; if v lies on some path of 𝒫, then the outgoing edge (v_out, succ_𝒫(v)_in) of E_𝒫 is replaced in H_𝒫 with the outgoing edge (v_out, v_in). We proceed as follows: we query D repeatedly with v and delete the reported items. This gives the set N(v) of vertices w that are stored in D and have (v, w) ∈ E. Due to the invariant, the set N(v) contains exactly those out-neighbors w_in of v_out in H_𝒫 such that ℓ_𝒫(w_in) was not known before processing v_out. If w = succ_𝒫(v) lies on some path of 𝒫 together with v, then we already know the level of w_in (it is j − 2) because (w_in, v_out) is the only incoming edge to v_out in the residual graph, and therefore w ∉ N(v). For each w ∈ N(v), we set ℓ_𝒫(w_in) = j and add w_in to L_j. If v belongs to some path of 𝒫, we check if v_in still has no level assigned, and if so, we set ℓ_𝒫(v_in) = j, add v_in to L_j, and delete v from D.
We finish when t belongs to the last computed layer L_j or when L_j is empty. In the latter case, t cannot be reached from s in H_𝒫, and therefore 𝒫 already contains a maximum number of edge-disjoint s-t paths. In the former case, we remove from L_j all the elements except for t.
To bound the running time, we note first that it takes O(T(n)) time to construct the data structure D, and this is done twice. Next, we observe that every node of G is deleted at most once from D. Additionally, each query with a vertex of G in D leads either to a deletion in D or does not yield an out-neighbor of the vertex, but the latter happens at most once per vertex of G. Thus, in total we are making O(n) queries and deletions in the data structure D, each taking amortized time T(n)/n. The time bound O(T(n)) follows. ∎
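The accounting idea of Lemma 4 — every successful neighbor query deletes a vertex, so the total number of operations is linear — can be isolated in a simplified deletion-only BFS without the residual-edge bookkeeping (names are ours):

```python
def bfs_levels(adj, s):
    """Deletion-only BFS, the accounting trick of Lemma 4: S holds the
    vertices with unknown level; every successful query removes one
    vertex from S, so there are at most n successful queries plus one
    failed query per processed vertex."""
    S = set(adj) | {v for vs in adj.values() for v in vs}
    S.discard(s)
    def query(u):  # stand-in for the Encoding-A neighbor query
        return next((v for v in adj.get(u, ()) if v in S), None)
    level, frontier = {s: 0}, [s]
    while frontier:
        nxt = []
        for u in frontier:
            v = query(u)
            while v is not None:        # each hit removes v from S
                S.discard(v)
                level[v] = level[u] + 1
                nxt.append(v)
                v = query(u)
        frontier = nxt
    return level
```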
The next lemma shows how to find an actual blocking family in L_𝒫.
Lemma 5.
Consider a set 𝒫 of edge-disjoint s-t paths in H. In O(T(n)) time, we can find a blocking family 𝒬 of s-t paths in the layered residual graph L_𝒫.
Proof.
Using Lemma 4, we compute the level ℓ_𝒫(v) of each vertex v of H. Recall the notation pred_𝒫(v) and succ_𝒫(v) from the proof of Lemma 4 to denote the predecessor and successor of a vertex v on a path of 𝒫. We adapt the algorithm in the proof of Lemma 2, which is based on a DFS traversal of L_𝒫.
For each odd j with 1 ≤ j < λ, we build a data structure D_j as in Graph Encoding A for the set S_j = {v ∈ V : v_in ∈ L_j}. This takes O(T(n)) time in total because the sets S_j are pairwise disjoint. During the algorithm, the data structure D_j will contain the vertices v ∈ S_j such that v_in has not yet been explored by the DFS traversal. Thus, in contrast to the approach in Lemma 2, we delete vertices as we explore them with the DFS traversal.
When we explore a vertex v_in (at odd level), there are two options; see Figure 3. If v lies on some path of 𝒫, we look at u_out for u = pred_𝒫(v). If u_out has been explored already, we return.³ Otherwise, we continue the DFS traversal at u_out. If v does not belong to any path of 𝒫, then v_out has not been explored yet, as (v_in, v_out) is the only incoming edge of v_out, so we continue the DFS at v_out. For each such vertex, we spend O(1) time plus the time for the recursive calls, if they occur.

³This happens only if u = s, as in any other case the edge (v_in, u_out) is the reverse of (u_out, v_in) ∈ E_𝒫 for some vertex u and is the only incoming edge to u_out in the residual graph and thus in the layered residual graph.
Consider now the case that we explore a vertex v_out at even level j; see Figure 4. If j = λ − 1, we check whether the edge (v_out, t) belongs to L_𝒫. If so, we have found an s-t path in L_𝒫. We add the path to 𝒬, and restart the DFS traversal from s. If not, we return from the recursive call.
Consider the remaining case: we explore a vertex v_out at even level j < λ − 1. If v belongs to some path of 𝒫, and v_in has not been explored yet (that is, v is still stored in D_{j+1}), we recursively explore v_in and remove v from D_{j+1}. If v does not belong to any path of 𝒫 or we have returned from the exploration of v_in, we explore the outgoing edges from v_out to L_{j+1} by repeating the following procedure. We query D_{j+1} with v to obtain an edge (v, w) of G with w ∈ S_{j+1}, we remove w from D_{j+1}, and we continue the DFS traversal from w_in. The recursive call is correctly made along an edge of the layered residual graph because it cannot happen that (v_out, w_in) is an edge of E_𝒫; indeed, if (v_out, w_in) were an edge in E_𝒫, then in the residual graph the edge (w_in, v_out) would be the only edge incoming into v_out, which would mean that in the DFS traversal we arrived to v_out from w_in, and w_in would belong to L_{j−1} instead of L_{j+1}. When the query to D_{j+1} with v returns an empty answer, we return from the recursive call at v_out.
Every vertex v of S_j, for odd j, is reported and removed from D_j at most once. Thus, each vertex of G is deleted at most once from exactly one data structure D_j. Furthermore, for every vertex v_out at an even level, we make at most one query to the corresponding data structure D_{j+1} that returns an empty answer. Thus, the running time is O(T(n)). ∎
The following lemma discusses how to find a minimum cut from a maximum family of vertex-disjoint s-t paths.
Lemma 6.
Let 𝒫 be a maximum family of vertex-disjoint s-t paths (in G or in H). Given 𝒫, we can obtain a minimum s-t cut in O(T(n)) time.
Proof.
Consider the residual graph H_𝒫. Let R be the set of vertices in V(H) that in the residual graph are reachable from s. A standard result from the theory of maximum flows tells us that the edges of H from R to V(H) ∖ R form a minimum edge s-t cut in H, and there are |𝒫| edges in such a cut.
Let C be the set of vertices v ∈ V ∖ {s, t} such that v_in ∈ R but v_out ∉ R, or such that v_in ∉ R but pred_𝒫(v)_out ∈ R. (Here, like in previous proofs, we use pred_𝒫(v) to denote the vertex u such that (u_out, v_in) belongs to E_𝒫.) Each edge of the cut contributes one vertex to C. Then C is a minimum s-t cut in G.
If t is not reachable from s in H_𝒫, then a vertex is reachable from s in the residual graph H_𝒫 if and only if it is reachable from s in the layered residual graph L_𝒫. Thus, to compute R, we apply Lemma 4 to find the level ℓ_𝒫(v) of every vertex v in H_𝒫. Then, the set R is
R = {v ∈ V(H) : ℓ_𝒫(v) < ∞},
as desired. ∎
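The reachability computation behind Lemma 6 can be sketched generically: orient the path edges backwards, compute the set R reachable from s, and read off the edges leaving R. This is a simplified illustration on an edge-disjoint (rather than vertex-split) instance; the function name is ours:

```python
from collections import defaultdict

def residual_cut(edges, path_edges, s):
    """Residual reachability as in Lemma 6, simplified to edge-disjoint
    paths: path edges are reversed, all other edges stay forward.
    Returns (R, cut), where R is the set reachable from s and cut is
    the set of original edges leaving R -- a minimum edge cut when the
    given paths form a maximum family."""
    radj = defaultdict(list)
    for (u, v) in edges:
        if (u, v) in path_edges:
            radj[v].append(u)       # reversed residual edge
        else:
            radj[u].append(v)       # unused edge stays forward
    R, stack = {s}, [s]
    while stack:                    # DFS over the residual graph
        u = stack.pop()
        for v in radj[u]:
            if v not in R:
                R.add(v)
                stack.append(v)
    cut = {(u, v) for (u, v) in edges if u in R and v not in R}
    return R, cut
```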
Now, we put everything together. By Theorem 3, we have O(√n) phases, and each phase can be implemented in O(T(n)) time because of Lemma 1 and Lemma 5.
Theorem 7.
Let G be a directed graph with n vertices, and assume that a representation of its edges as given in Graph Encoding A is possible. Then, we can find in O(√n · T(n)) time the maximum number of vertex-disjoint s-t paths for any given vertices s and t. Similarly, we can find a minimum s-t cut.
Proof.
We use the algorithm described in Section 2.1, before Theorem 3. Because of Theorem 3, we have O(√n) phases. At phase i, we have a set 𝒫ᵢ of edge-disjoint s-t paths in H, and we use Lemma 5 to find a blocking family 𝒬ᵢ of s-t paths in the layered residual graph L_{𝒫ᵢ}. This takes O(T(n)) time per phase. Because of Lemma 1, we can then obtain the new family 𝒫ᵢ₊₁ of s-t paths in O(n) time per phase. The result for the maximum number of vertex-disjoint s-t paths follows. For the minimum s-t cut, we use Lemma 6. ∎
3 Geometric Applications
Theorem 7 leads to several consequences for geometrically defined graphs, as we can use geometric data structures to realize Graph Encoding A efficiently. For unit disk graphs, there is the semidynamic data structure of Efrat, Itai, and Katz [EIK01]. The construction takes O(n log n) time, while each deletion and neighbor query takes O(log n) amortized time. For arbitrary disks, we can use the structure of Kaplan et al. [KMR17], whose operations take polylogarithmic expected amortized time.
Corollary 8.
Let 𝒟 be a set of n unit disks in the plane and let s and t be two of the disks. We can find in O(n^{3/2} log n) time the minimum s-t cut in the intersection graph D(𝒟). For arbitrary disks, the running time becomes O(n^{3/2} polylog n) in expectation.
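For intuition, a much simpler (and asymptotically weaker) substitute for the [EIK01] structure is a uniform grid over the unit-disk centers. The class below is a hedged sketch only, not the data structure behind Corollary 8:

```python
import math
from collections import defaultdict

class UnitDiskNeighbors:
    """Grid-based stand-in for the [EIK01] structure: maintains centers
    of unit disks (radius 1) under deletions and reports some center
    within distance 2 of a query point, i.e., an intersecting unit disk.
    A crowded cell may force a long scan, so this does not achieve the
    O(log n) amortized bounds of [EIK01]."""
    CELL = 2.0  # cell side = intersection distance threshold

    def __init__(self, centers):
        self.grid = defaultdict(set)
        for p in centers:
            self.grid[self._cell(p)].add(p)

    def _cell(self, p):
        return (math.floor(p[0] / self.CELL), math.floor(p[1] / self.CELL))

    def delete(self, p):
        self.grid[self._cell(p)].discard(p)

    def query(self, q):
        """Return some stored center within distance 2 of q, or None."""
        cx, cy = self._cell(q)
        for dx in (-1, 0, 1):            # 3x3 block of candidate cells
            for dy in (-1, 0, 1):
                for p in self.grid.get((cx + dx, cy + dy), ()):
                    if math.dist(p, q) <= 2.0:
                        return p
        return None
```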
We can easily adapt the algorithm to the case where s and t are arbitrary shapes (and the other vertices are still represented as disks) by precomputing the disks that are intersected by s and the disks that are intersected by t. We get the following consequence.
Corollary 9.
The barrier resilience problem with n unit disks can be solved in O(n^{3/2} log n) time. For arbitrary disks, the running time becomes O(n^{3/2} polylog n) in expectation.
For directed transmission graphs, we can use the data structure of Chan [Cha19] to report a disk center contained in a query disk. It takes polylogarithmic amortized time per update and query. (See [AM95, Cha10, CT16, KMR17, Liu20] for related bounds and for an alternative presentation of Chan's data structure.)
Corollary 10.
Let 𝒟 be a set of n disks of arbitrary radii in the plane and let s and t be two of the disks. We can find in O(n^{3/2} polylog n) time the minimum s-t cut in the directed transmission graph T(𝒟).
Similar results can be obtained for intersection graphs of axis-parallel squares and rectangles using data structures for orthogonal range searching.
References
 [AM95] Pankaj K. Agarwal and Jiří Matoušek. Dynamic halfspace range reporting and its applications. Algorithmica, 13(4):325–345, 1995.
 [Cha10] Timothy M. Chan. A dynamic data structure for 3d convex hulls and 2d nearest neighbor queries. J. ACM, 57(3):16:1–16:15, 2010.
 [Cha19] Timothy M. Chan. Dynamic geometric data structures via shallow cuttings. In 35th International Symposium on Computational Geometry, SoCG 2019, pages 24:1–24:13, 2019.
 [CJLM20] Sergio Cabello, Kshitij Jain, Anna Lubiw, and Debajyoti Mondal. Minimum sharedpower edge cut. Networks, 75(3):321–333, 2020.
 [CT16] Timothy M. Chan and Konstantinos Tsakalidis. Optimal deterministic algorithms for 2d and 3d shallow cuttings. Discrete & Computational Geometry, 56(4):866–881, 2016.
 [Din70] E. A. Dinic. Algorithm for solution of a problem of maximum flow in a network with power estimation. Soviet Mathematics Doklady, 11:1277–1280, 1970.
 [Din06] Yefim Dinitz. Dinitz’ algorithm: The original version and Even’s version. In Theoretical Computer Science, Essays in Memory of Shimon Even, volume 3895 of Lecture Notes in Computer Science, pages 218–240. Springer, 2006.
 [EIK01] Alon Efrat, Alon Itai, and Matthew J. Katz. Geometry helps in bottleneck matching and related problems. Algorithmica, 31(1):1–28, 2001.
 [ET75] Shimon Even and Robert Endre Tarjan. Network flow and testing graph connectivity. SIAM J. Comput., 4(4):507–518, 1975.
 [Eve79] Shimon Even. Graph Algorithms. W. H. Freeman & Co., New York, NY, USA, 1979.
 [GG11] J. Gao and L. Guibas. Geometric algorithms for sensor networks. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 370(1958):27–51, 2011.
 [HS95] M. L. Huson and A. Sen. Broadcast scheduling algorithms for radio networks. In IEEE MILCOM ’95, volume 2, pages 647–651 vol.2, 1995.
 [Kar73] Alexander V. Karzanov. O nakhozhdenii maksimal’nogo potoka v setyakh spetsial’nogo vida i nekotorykh prilozheniyakh. Matematicheskie Voprosy Upravleniya Proizvodstvom (Mathematical Issues of Production Control), pages 81–94, 1973. A translation by the author with the title “On finding a maximum flow in a network with special structure and some applications” is available at http://alexanderkarzanov.net/ScannedOld/73_specnetflow_transl.pdf.
 [KLA07] Santosh Kumar, Ten H. Lai, and Anish Arora. Barrier coverage with wireless sensors. Wireless Networks, 13(6):817–834, 2007.
 [KMR17] Haim Kaplan, Wolfgang Mulzer, Liam Roditty, Paul Seiferth, and Micha Sharir. Dynamic planar Voronoi diagrams for general distance functions and their algorithmic applications. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2017, pages 2495–2504. SIAM, 2017.
 [KV10] B. Korte and J. Vygen. Combinatorial Optimization: Theory and Algorithms, volume 21 of Algorithms and Combinatorics. Springer, 4th edition, 2010.
 [Liu20] ChihHung Liu. Nearly optimal planar nearest neighbors queries under general distance functions. In Proceedings of the 2020 ACMSIAM Symposium on Discrete Algorithms, SODA 2020, pages 2842–2859, 2020.
 [Mąd13] Aleksander Mądry. Navigating central path with electrical flows: From flows to matchings, and back. In Proceedings of the 54th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2013, pages 253–262, 2013.
 [PR10] David Peleg and Liam Roditty. Localized spanner construction for ad hoc networks with variable transmission range. ACM Trans. Sen. Netw., 7(3):25:1–25:14, 2010.
 [ZG04] F. Zhao and L. Guibas. Wireless Sensor Networks: An Information Processing Approach. Elsevier/MorganKaufmann, 2004.