Subgraph Isomorphism has applications in pattern discovery in biological networks (Artymiuk et al., 1992; Milo et al., 2002; Przulj et al., 2004), graph databases (Kuramochi and Karypis, 2001), and electronic circuit design (Ohlrich et al., 1993). It is also a powerful subroutine for computing the edge connectivity and vertex connectivity of planar graphs (Eppstein, 2000). The subgraph isomorphism problem is to look for occurrences of a pattern graph as a subgraph of a target graph. Subgraph isomorphism is a generalization of many NP-complete problems (such as finding a Maximum Clique, Longest Path, or Hamiltonian Cycle (Garey and Johnson, 1990)). The problem remains hard even in bounded-degree graphs (Garey et al., 1976) and planar graphs (Plesník, 1979).
Hence, it is natural to consider parameterized versions of the problem that are tractable when some parameter is small. We focus our attention on the case when the pattern graph is relatively small, and give algorithms whose work grows slowly (i.e., close to linearly) with the size of the target graph, but is allowed to grow quickly (i.e., exponentially) with the size of the pattern graph. This continues the development of fixed-parameter tractable (FPT) algorithms for NP-hard problems (Downey and Fellows, 1999).
We present a parallel fixed-parameter tractable algorithm with low depth for subgraph isomorphism in planar graphs. Planar graphs are an important class of graphs that arise naturally from problems in geometry (Loera et al., 2010), when laying out electronic circuits without crossings (Aggarwal et al., 1991), and in image segmentation (Schmidt et al., 2009).
Drawing on existing FPT techniques (Schmidt et al., 2009; Eppstein, 1995; Baker, 1994), our algorithm exploits the fact that local neighborhoods of a planar graph are well-behaved and can be efficiently decomposed. We overcome two fundamental challenges. The first challenge is the reliance on a breadth-first search (of unbounded depth) to construct the local neighborhoods. We avoid this issue by applying a randomized clustering (Miller et al., 2015) into low-diameter parts. This decomposition works because we can bound the probability that an occurrence of the pattern is not contained in a single cluster by a constant. The second challenge is the work-efficient solution of a high-depth dynamic program. We transform the problem into a directed acyclic graph and exploit the properties of the parameterized subgraph isomorphism problem to show that introducing shortcuts for only a small subset of nodes suffices to reduce the depth of the graph to poly-logarithmic in the target graph's size (and linear in the pattern graph's size).
Subgraph isomorphism concerns occurrences of a pattern graph (with k vertices and diameter d) as a subgraph of a target graph (with n vertices). Formally, a subgraph isomorphism is an injective map from the vertices of the pattern to the vertices of the target graph such that if two vertices are adjacent in the pattern, then their images are adjacent in the target graph. The simplest variant of the subgraph isomorphism problem is to decide if any occurrence of the pattern exists in the target graph, but we can also consider counting the occurrences or listing them.
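As a concrete reference point, the definition above can be checked directly. The following sketch (our own, not from the paper) verifies that a candidate map is a subgraph isomorphism, with graphs given as adjacency dictionaries.

```python
def is_subgraph_isomorphism(pattern, target, phi):
    """Return True if `phi` embeds `pattern` into `target`.

    `pattern` and `target` are dicts: vertex -> set of neighbors.
    """
    # phi must be injective
    if len(set(phi.values())) != len(phi):
        return False
    # every pattern vertex must be mapped
    if set(phi) != set(pattern):
        return False
    # adjacency must be preserved; only the forward direction is
    # required (the occurrence need not be an induced subgraph)
    for u in pattern:
        for v in pattern[u]:
            if phi[v] not in target[phi[u]]:
                return False
    return True
```

Note that only the forward direction of adjacency is required: non-edges of the pattern may map to edges of the target.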
For any graph G, we denote its vertex set by V(G), its edge set by E(G), and the subgraph of G induced by a subset S of its vertices by G[S]. A graph that is formed from the graph G by contracting edges, deleting vertices, and deleting edges is a minor of G. A family of graphs is minor-closed if every minor of every graph in the family is also in the family.
A graph with more than c vertices is c-vertex-connected if removing any c − 1 vertices does not disconnect the graph. The vertex connectivity of a graph is the largest c for which the graph is c-vertex-connected.
Tree Decomposition and Treewidth
A tree decomposition provides a recursive subdivision of a graph into overlapping subgraphs such that each subgraph is disconnected from the rest of the graph after removing few vertices. The decomposition tree records the recursive subdivision in a tree and labels the nodes of the tree with the vertices used to subdivide the graph (in a way that every edge occurs in at least one of the tree nodes). See Figure 1 for an example of how a decomposition tree represents a recursive subdivision of a graph.
The advantage of the tree decomposition is that it gives a way to describe a divide-and-conquer approach (along some graph decomposition) as a dynamic program on the decomposition tree instead. The dynamic program maintains partial results that correspond to the subgraphs of the current node in the decomposition tree and combines the partial results in a bottom-up fashion on this tree.
Formally, a tree decomposition (Bodlaender and Hagerup, 1995; Fomin et al., 2017; Bodlaender et al., 2013; Bodlaender and Kloks, 1996; Bodlaender, 1993; Lagergren, 1996) of a graph G consists of a nonempty decomposition tree where each node of the tree is a subset of the vertices of G, such that:
Every vertex of G is contained in the nodes of a contiguous nonempty subtree of the decomposition tree.
For every edge of the graph G, there is a node of the tree that contains both endpoints of the edge.
The maximum of |X| − 1 over all nodes X of the tree is the width of the tree decomposition. The smallest width of any tree decomposition of G is the treewidth of G.
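To make the two properties concrete, here is a small checker (a sketch with our own naming; the −1 in the width follows the standard convention) that verifies a tree decomposition and computes its width.

```python
def is_tree_decomposition(graph_edges, bags, tree_edges):
    """`bags`: decomposition node -> set of graph vertices.
    `tree_edges`: pairs of decomposition nodes forming a tree."""
    nodes = list(bags)
    adj = {t: set() for t in nodes}
    for a, b in tree_edges:
        adj[a].add(b)
        adj[b].add(a)
    vertices = set().union(*bags.values())
    # Property 1: the bags containing v form a nonempty connected subtree.
    for v in vertices:
        holding = [t for t in nodes if v in bags[t]]
        seen, stack = {holding[0]}, [holding[0]]
        while stack:
            t = stack.pop()
            for s in adj[t]:
                if v in bags[s] and s not in seen:
                    seen.add(s)
                    stack.append(s)
        if len(seen) != len(holding):
            return False
    # Property 2: every graph edge lies inside some bag.
    return all(any({u, v} <= bags[t] for t in nodes)
               for u, v in graph_edges)

def width(bags):
    # standard convention: largest bag size minus one
    return max(len(b) for b in bags.values()) - 1
```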
We can assume for simplicity that every interior node in the decomposition tree has exactly two children, as we can split high-degree nodes and add empty leaf nodes without changing the width of the decomposition. Moreover, a minimum-width tree decomposition of a graph with m edges has O(m) nodes.
Model of Computation
We consider a synchronous shared-memory parallel machine with concurrent reads and exclusive writes (CREW PRAM). We express our bounds in terms of the total number of operations performed by all processors in any execution of the algorithm (called work) and the length of the critical path in the computation (called depth) (Blelloch, 1996). By Brent's scheduling principle (Blelloch, 1996; Reif, 1993), an algorithm with work W and depth D can be executed with p processors in O(W/p + D) time on a CREW PRAM.
Numerous efficient parallel algorithms make use of some form of randomness (Geissmann and Gianinazzi, 2018). For some graph problems (such as minimum cuts (Geissmann and Gianinazzi, 2018) and minimum spanning trees (Cole et al., 1996)), a randomized algorithm has the lowest known bounds.
We assume each processor has access to an independent and uniformly distributed random word in each time step. If an event occurs with probability at least 1 − n^(−c) for all constants c, we say it occurs with high probability (w.h.p.). An algorithm that returns the correct result with high probability is called Monte Carlo.
1.2. Related Work
For the general case of subgraph isomorphism, no algorithm with less work than the naive one is known. Ullmann presents an algorithm that uses a backtracking search (Ullmann, 1976).
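To illustrate the baseline, the following sketch shows a backtracking search in the spirit of Ullmann's algorithm, without the candidate-refinement step of the original; function and variable names are ours.

```python
def find_embedding(pattern, target):
    """Return one subgraph isomorphism (as a dict) or None.
    Both graphs are dicts: vertex -> set of neighbors."""
    pat_vertices = list(pattern)

    def extend(phi):
        if len(phi) == len(pat_vertices):
            return dict(phi)
        u = pat_vertices[len(phi)]
        for x in target:
            if x in phi.values():
                continue  # keep the map injective
            # check edges from u to already-placed pattern vertices
            if all(phi[v] in target[x]
                   for v in pattern[u] if v in phi):
                phi[u] = x
                result = extend(phi)
                if result:
                    return result
                del phi[u]  # backtrack
        return None

    return extend({})
```

In the worst case this search enumerates essentially all injective maps, which is the naive work bound referred to above.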
Tree patterns of bounded size can be found efficiently in general graphs (Alon et al., 1995). Much attention has been put on subgraph isomorphism in special families of target graphs, which require some form of sparsity and additional structure (Eppstein et al., 2010; Eppstein, 1995; Alon et al., 1995; Chiba and Nishizeki, 1985).
The idea behind parameterized complexity (Downey and Fellows, 1999) is to identify (one or more) fundamental parameters k of an NP-hard problem that characterize the difficult part of the problem. Then, a fixed-parameter tractable algorithm's runtime is separable into f(k) · poly(n) (or f(k) + poly(n)), where f is allowed to be any function of the parameter k and the remaining part has to be polynomial in the input size n (Downey and Fellows, 1999).
Using a Monte Carlo technique called Color Coding, Alon et al. (Alon et al., 1995) obtain 2^O(k) n^(t+1) work on a pattern of treewidth t, which implies 2^O(k) n^O(√k) work for a planar pattern (as the treewidth of a planar graph with k vertices is O(√k) (Lipton and Tarjan, 1979; Robertson and Seymour, 1986)). The algorithm's depth is poly-logarithmic in n and polynomial in k. Their key idea is to color the vertices in the target graph with random colors, which allows a dynamic programming approach that needs to keep an exponentially smaller state. Note that this algorithm is not FPT for the size k of the pattern (nor the treewidth t), because its runtime grows with n^(t+1) (or n^O(√k)).
Locally Bounded Treewidth
Eppstein presents the first FPT subgraph isomorphism algorithm for planar graphs that has a linear dependency on the size of the target graph (Eppstein, 1995). It runs in time linear in n for patterns of size k (with the constant depending on k). The key insight is to exploit that local neighborhoods of planar graphs have bounded treewidth. The algorithm generalizes to other minor-closed families with a relationship between diameter and treewidth, such as bounded-genus graphs (Eppstein, 2000). It uses a breadth-first search (BFS) to decompose the graph into these local neighborhoods.
Fomin et al. (Fomin et al., 2016) present a randomized sampling approach that produces subgraphs of treewidth sublinear in k. Then, they apply an existing FPT dynamic program.
1.3. Our Contributions
Table 1. Comparison with related work on planar subgraph isomorphism: Alon et al. (Alon et al., 1995), Eppstein (Eppstein, 1995), Dorn (Dorn, 2010), and Fomin et al. (Fomin et al., 2016). (The bound columns of the table are not reproduced here.)
We present the first planar subgraph isomorphism algorithm with FPT work and depth poly-logarithmic in n and polynomial in k. Our Monte Carlo algorithm achieves these bounds in planar graphs and has FPT work in all minor-closed families of graphs of locally bounded treewidth (see Section 4.3).
Table 1 contains the exact bounds and a comparison to the related work for planar graphs. Note that if the pattern graph occurs in the target graph, the expected work decreases. Our algorithm can also list all occurrences of a pattern.
We use a low-diameter decomposition, which can ensure that the occurrences of the pattern graph are in the same low-diameter part of the graph with sufficient probability. Then, we show how to exploit the special structure of a tree decomposition based algorithm to compute its results work-efficiently in parallel. Finally, we provide a randomized extension to the algorithm that also handles disconnected pattern graphs.
More generally, we can find isomorphic subgraphs that separate a set of marked vertices (leaving them in different components after removal of the subgraph). Because there is a relation between finding certain separating cycles as subgraphs and planar vertex connectivity, our subgraph isomorphism algorithm yields better parallel bounds for deciding vertex connectivity in planar graphs.
2. From Planar To Low Treewidth
Planar graphs do not have bounded treewidth (it can be as large as Θ(√n)), which prevents a direct application of bounded-treewidth techniques (as we use in Section 3). Fortunately, a planar graph of diameter d has treewidth O(d) (Eppstein, 1995), and each occurrence of a pattern with diameter d is contained in a subgraph of diameter O(d) of the target graph.
Hence, a simple (but work-inefficient) approach to solve subgraph isomorphism in planar graphs would consist of building, for every vertex in the target graph, the subgraph induced by the vertices at distance at most d from it, and then invoking an algorithm for bounded-treewidth graphs on each of those subgraphs. This approach of covering the graph is inefficient because many vertices of the target graph could be in multiple (even all) of these subgraphs, leading to a total size of these subgraphs of up to Θ(n²).
Instead, Eppstein (Eppstein, 1995) proposed (based on an idea by Baker (Baker, 1994)) a covering approach based on a single BFS to cover all subgraphs of diameter at most d with graphs of total size only O(dn). It is easy to see that naive BFS takes linear work and depth proportional to the diameter, but we care exactly about the situation when the diameter is not bounded. Even on planar graphs, performing work-efficient and low-depth BFS is a challenging problem. An approach by Klein (Klein and Subramanian, 1993) achieves near-linear work and poly-logarithmic depth.
To avoid the issue of low-depth BFS, in our approach, we first decompose the graph into randomized clusters of small diameter (as illustrated in Figures 2 and 3). This allows us to then run a simple parallel BFS on those low-diameter graphs and construct a covering for each of those clusters. In summary, one run of our subgraph isomorphism algorithm works as follows:
Cover the target graph with subgraphs of bounded treewidth (they might overlap, as detailed in Section 2.1).
Solve subgraph isomorphism for each such bounded treewidth subgraph in parallel (as described in Section 3).
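Since a single cover-and-solve run only succeeds with constant probability, the run is repeated. A minimal sketch of such a Monte Carlo driver (our own; `solve_once` stands for one run of the two steps above and is assumed to detect an existing occurrence with probability at least p):

```python
import math

def decide_with_high_probability(solve_once, n, p=0.5, c=2):
    """Repeat `solve_once` until the failure probability of missing an
    existing occurrence drops below n**(-c); return True on any hit."""
    runs = max(1, math.ceil(c * math.log(n, 1 / (1 - p))))
    return any(solve_once() for _ in range(runs))
```

With p constant, O(log n) runs drive the failure probability below any fixed inverse polynomial in n.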
Since our covering algorithm is randomized, an occurrence of a pattern may not be contained in any single subgraph in the cover. However, in expectation a constant number of repetitions suffices to find an occurrence of the pattern if it exists. At most O(log n) runs suffice to certify with high probability that no occurrence of a pattern exists. Our main result for planar graphs is the following:
Theorem 2.1.
Deciding (with high probability) if a connected pattern graph occurs as a subgraph of a planar target graph takes work that is FPT in the pattern size k and near-linear in n, and depth poly-logarithmic in n and polynomial in k.
For a pattern of small diameter , we obtain better bounds:
Corollary 2.2.
Deciding (with high probability) if a connected pattern graph of diameter d occurs as a subgraph of a planar graph takes correspondingly smaller work and depth, with the exponential dependence controlled by d.
To simplify the exposition, we assume (for now) that the pattern graph is connected and focus on the decision version of the problem. We then discuss how to remove the assumption of connectedness in Section 4.1 and show how to modify the algorithm to list all occurrences of a pattern graph in Section 4.2. Moreover, we generalize the approach from planar graphs to a class of graphs that contains all bounded-genus graphs in Section 4.3.
2.1. Parallel Low-Treewidth Cover
We show how to construct (in parallel) a set of subgraphs of low treewidth such that each occurrence of a connected pattern is in at least one of the subgraphs with constant probability. The first step is to use a low-diameter decomposition. The goal of a low-diameter decomposition is to partition the vertices of the graph into (vertex-disjoint) clusters of low diameter such that few edges of the graph connect vertices in different clusters.
Exponential Start Time Clustering (Miller et al., 2015) is especially well-suited for our purposes because it bounds the probability that an edge connects two different clusters. This observation allows us to bound the probability that a connected subgraph is split into multiple clusters, and thus the clustering preserves the occurrences of a graph pattern with nontrivial probability, as needed for our purposes.
A clustering of a graph is a set of vertex-disjoint induced subgraphs, called clusters, that together contain all vertices. We say an edge crosses the clusters if its endpoints are in the vertex sets of two distinct clusters.
Lemma 2.3 (Exponential Start Time Clustering (Miller et al., 2015)).
With linear work and depth proportional to the cluster diameter (up to poly-logarithmic factors), Exponential Start Time β-Clustering produces, w.h.p., clusters of diameter O(log n / β) where each edge crosses the clusters with probability at most β.
Note that Exponential Start Time Clustering does not allow us to fix the number of clusters a priori. Instead, the number of clusters depends on the structure of the graph. For example, a clique will most likely end up as a single low-diameter cluster.
Because every edge crosses the clusters with small probability, the probability that a fixed occurrence of the pattern contains an edge that crosses the clusters is also relatively small (for an appropriate choice of the parameter β). See Figure 2 for an illustration.
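For intuition, here is a sequential sketch of the clustering (our own simplification of Miller et al.): each vertex draws an exponentially distributed shift, and every vertex joins the cluster of the center that minimizes the shifted distance. The parallel version instead grows all clusters simultaneously.

```python
import random
from collections import deque

def exponential_start_time_clustering(adj, beta, rng=random):
    """adj: vertex -> set of neighbors (unweighted, undirected).
    Returns a dict mapping every vertex to its cluster center."""
    shifts = {v: rng.expovariate(beta) for v in adj}

    def bfs_dist(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        return dist

    # assign u to the center v minimizing dist(u, v) - shift(v)
    cluster = {}
    for v, shift in shifts.items():
        for u, du in bfs_dist(v).items():
            key = du - shift
            if u not in cluster or key < cluster[u][0]:
                cluster[u] = (key, v)
    return {u: center for u, (_, center) in cluster.items()}
```

This direct computation takes quadratic work and serves only to illustrate the assignment rule; the cited algorithm achieves the bounds of Lemma 2.3.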
Observation 1.
The probability that no edge of a connected k-vertex subgraph of the graph crosses a cluster of an Exponential Start Time β-Clustering of the graph is at least 1 − (k − 1)β.
The idea is that some spanning tree of the occurrence remains intact (i.e., no edge in the tree crosses a cluster) with the given probability, which implies the result. Consider an arbitrary spanning tree of the subgraph. By Lemma 2.3, the probability that a particular edge of the spanning tree crosses the clusters is at most β. By the union bound, the probability that any of the k − 1 edges of the spanning tree crosses the clusters is at most (k − 1)β. Hence, the probability that no edge crosses the clusters is at least 1 − (k − 1)β. ∎
Parallel Treewidth Cover.
Run Exponential Start Time β-Clustering on the target graph.
For each cluster, choose an arbitrary root and run a naive parallel BFS within the cluster.
This yields a BFS tree for each cluster. For each level i of the tree, output the subgraph induced by the vertices at distance i through i + d from the root (as illustrated in Figure 3).
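The banding step for one cluster can be sketched as follows (our own code): a BFS assigns levels, and every window of d + 1 consecutive levels yields one vertex set of the cover, so each vertex lands in at most d + 1 of the output subgraphs.

```python
from collections import deque

def cover_bands(adj, root, d):
    """BFS from `root`; for each level i, return the vertices at
    levels i through i + d (each set induces one cover element)."""
    level = {root: 0}
    q = deque([root])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in level:
                level[w] = level[u] + 1
                q.append(w)
    depth = max(level.values())
    return [{v for v, l in level.items() if i <= l <= i + d}
            for i in range(depth + 1)]
```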
The algorithm guarantees that each of the subgraphs has low treewidth and that every occurrence of the pattern graph is in at least one of the subgraphs with constant probability:
Theorem 2.4.
For a planar target graph and a connected pattern graph with k vertices and diameter d, a Parallel Treewidth Cover produces a set of induced subgraphs of the target graph such that:
Every subgraph in the cover has treewidth O(d).
Every vertex of the target graph is contained in at most d + 1 of the subgraphs.
Every fixed occurrence of the pattern is contained in at least one of the subgraphs with constant probability.
The algorithm takes, w.h.p., near-linear work and low depth.
Each of the subgraphs in the cover is a subgraph of a planar graph with diameter O(d). Hence, it has treewidth O(d) (Eppstein, 1995). By Observation 1, an occurrence of the pattern is contained in a single cluster with constant probability. If this is the case, consider the first vertex of the pattern occurrence encountered during the BFS done for the cluster and let i be its distance from the root of the BFS tree. Then, the occurrence is an induced subgraph of the subgraph output for levels i through i + d.
The clusters have low diameter, so the BFSes have depth proportional to that diameter. Each vertex and edge is part of at most d + 1 subgraphs by construction, which bounds the total size of the cover and hence the work. ∎
It remains to find (in parallel) occurrences of the pattern in each of the low-treewidth subgraphs we constructed. The algorithm in Section 3 requires that a tree decomposition of the subgraph has already been computed. For a planar graph, constructing such a decomposition of width O(d) takes linear work and poly-logarithmic depth given a planar embedding of the graph (Eppstein, 1995; Baker, 1994). Computing a planar embedding also takes linear work and poly-logarithmic depth (Klein and Reif, 1986).
3. Algorithm for Bounded Treewidth
The main result of this section is a parallel algorithm to solve subgraph isomorphism on graphs of bounded treewidth. It is based on a simplified version of the algorithm by Eppstein (Eppstein, 1995). We transform the original problem into a graph search problem. Exploiting the particular structure of the resulting acyclic graph allows us to obtain a low-depth and work-efficient solution.
Lemma 3.1.
Deciding if a connected pattern graph is isomorphic to a subgraph of a target graph of treewidth w takes depth poly-logarithmic in n and polynomial in k, and work that is FPT in k and w and near-linear in n. The bounds hold w.h.p.
The overall idea of the sequential algorithm is to gradually compute the subgraph isomorphism while traversing the decomposition tree in a bottom-up fashion. We start by discussing the partial matches (partially completed subgraph isomorphisms) the algorithm employs, which are crucial for the parallel algorithm as well.
3.1. Partial Matches
Every node X in the decomposition tree corresponds to a subgraph induced by X in the target graph with only a small number of vertices (at most the width plus one). Moreover, the descendants of the node (together with X itself) induce a subgraph of the graph that is separated from the rest of the graph by the vertices in the tree decomposition node X. The idea of partial matches is to find occurrences of sub-patterns of the pattern within these subgraphs and to combine them in a bottom-up fashion in the tree decomposition.
Partial matches exist between subgraphs of the pattern graph and these induced subgraphs. Because vertices that are in the subgraph below a node but not in the separating set X are not directly connected to the rest of the graph, it is not necessary to explicitly store the mapping between pattern and target graph for these vertices in order to combine a partial match inside this subgraph with partial matches from the rest of the graph. Hence, when we build partial matches, only the different mappings for the vertices in the separating set are important. The remaining vertices that have already been matched in a child are recorded as such. See Figure 4 for an example.
Formally, a partial match of a node X is a triple consisting of the set of pattern vertices matched in a child, the set of pattern vertices marked as unmatched, and a subgraph isomorphism function from the subgraph of the pattern induced by the remaining pattern vertices to the subgraph induced by X. If a vertex of the pattern is matched in a child, the vertex is mapped to a vertex of the target graph that does not appear in X. If a vertex of the pattern is unmatched, then it is not matched to any vertex that appears in the subgraph below X.
3.2. Eppstein’s Sequential Algorithm
The idea is to extend the partial matches while traversing the decomposition tree bottom-up. The goal is to construct a partial match of the root node where no vertex is unmatched. We focus on how to construct such a partial match for the root, from which a specific subgraph isomorphism can be recovered efficiently (by collecting appropriate partial isomorphisms in a top-down traversal of the tree; see also Section 4.2.1).
A partial match of a child node can be extended when going to a parent node by matching some additional vertices that were unmatched in the child's match, marking the vertices that have been matched by the child but are not in the parent node as matched in a child, and leaving the rest of the partial isomorphism function the same (the vertices that were not newly matched remain unmatched).
A partial match that can be extended to a parent's partial match (possibly together with another child's partial match) is called consistent with that parent's partial match. The precise rules for being consistent follow. Consider a node of the decomposition tree, one of its children, a partial match of the node, and a partial match of the child. For all vertices of the pattern:
If a vertex is matched both by the parent's partial match and by the child's partial match, then both map it to the same target vertex. This prevents the two partial matches from mapping the same vertex in the pattern graph to different vertices in the target graph.
If the child's partial match matches a vertex to a target vertex that is not in the parent's node, or marks the vertex as matched in a child, then the parent's partial match marks the vertex as matched in a child.
Note that these rules imply that the child's partial match does not match any vertex that is unmatched by the parent.
The point of the following combination rule is to ensure (on top of consistency) that a vertex that is marked as matched in a child by the parent is matched in exactly one of the children. A partial match of a node is compatible with a partial match of the left child and a partial match of the right child if the following conditions hold:
The two child partial matches are both consistent with the parent's partial match.
If a vertex is marked as matched in a child by the parent's partial match, then it is marked as unmatched in exactly one of the two child matches.
A partial match is valid if it is compatible with two partial matches of its children, or if it does not mark any vertices as matched in a child. Note that the trivial partial match that marks everything as unmatched is always valid. A valid partial match of the root node that does not mark any vertex as unmatched certifies the existence of a subgraph isomorphism.
The sequential algorithm traverses the decomposition tree bottom-up and enumerates all possible partial matches for the current node, then checks which are valid (given the valid matches for the children). For a tree decomposition of width w and a pattern of size k, the number of possible partial matches per node is bounded by a function of k and w alone. There are at most cubically many combinations of partial matches of the parent and its two children, and validating a combination takes time polynomial in k. Hence, the overall runtime is FPT in k and w and linear in the number of nodes of the decomposition tree.
3.3. Parallel Algorithm
The issue is that even a low-diameter planar graph might have a decomposition tree of large height, up to Θ(n). Therefore, parallelizing the computation at each node of the decomposition tree is not enough. It is possible to transform any tree decomposition into a decomposition of height O(log n) with three times the treewidth (Bodlaender and Hagerup, 1995), but tripling the width increases the work by a factor that is exponential in the width.
To avoid this, we parallelize across the height of the decomposition tree. In order to obtain a simpler problem, we partition the tree into paths. Then, we solve the problem on each of the paths. A path can be solved once all paths that start at a child of a node in the path have been solved. We avoid the sequential bottleneck by transforming the problem of finding valid partial matches in these subpaths of the tree decomposition into a reachability question in an acyclic directed graph with special structure. The reachability question can be solved work-efficiently with a low depth on this acyclic graph by introducing shortcuts of exponentially increasing distance to a carefully selected subset of the vertices.
3.3.1. Decomposition into Paths
Let us start by discussing how to decompose the tree into suitable subpaths. Walk from every leaf towards the root until reaching a branching node (i.e., a node with at least two children). Remove the visited paths from the tree, and proceed recursively. This decomposition can be implemented efficiently using parallel expression tree evaluation (tree contraction) (Reif, 1993; Miller and Reif, 1985):
Lemma 3.2 (Appendix A).
A tree with n vertices can be decomposed into a set of paths where the paths are grouped into O(log n) layers with the property that vertices in the i-th layer have no children in a layer larger than i. This decomposition takes O(n) work and O(log n) depth.
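A sequential sketch of this decomposition (our own; the paper's version runs in parallel via tree contraction): repeatedly peel off the maximal paths hanging below branching nodes, grouping the paths peeled in round i into layer i.

```python
def decompose_into_paths(parent, root):
    """parent: node -> its parent (the root has no entry or maps to
    None). Returns a list of layers; each layer is a list of paths,
    each path ordered from its deepest node upward."""
    nodes = set(parent) | {root}
    layers = []
    alive = set(nodes)
    while alive:
        # children counts restricted to the surviving tree
        child_count = {u: 0 for u in alive}
        for u in alive:
            p = parent.get(u)
            if p in alive:
                child_count[p] += 1
        layer = []
        for leaf in [u for u in alive if child_count[u] == 0]:
            path, u = [], leaf
            # walk upward until just below a branching node
            while u in alive and child_count[u] <= 1:
                path.append(u)
                u = parent.get(u)
            layer.append(path)
            alive -= set(path)
        layers.append(layer)
    return layers
```

Children are always peeled no later than their ancestors, which gives the layer property of the lemma.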
3.3.2. The Graph of Partial Matches.
We can reason about how to construct the valid partial matches for a subpath of the tree decomposition, assuming we already solved all paths that start at a child of a node in the subpath. Specifically, we derive locally, at each node in the subpath, a set of partial matches that are valid if at least one of the partial matches of its on-path child is also valid. At the leaf of the path, we know which partial matches are valid (because both children have already been solved). This observation leads to the idea of constructing a directed acyclic graph of partial matches where reachability models the validity of the partial matches, as follows.
Let P be a subpath of the tree decomposition. Consider a node in the path and assume we already computed the valid partial matches of its child that is not on the path. Then, we can check which partial matches of the on-path child, together with a valid partial match of the off-path child, are compatible with a partial match of the node. This yields, for every partial match of the node, a set of partial matches of the on-path child that would validate it.
We construct a directed acyclic graph based on this idea. For the leaf node of P, there is a vertex in the graph for every valid partial match. For every other node in P, there is a vertex for every partial match of that node. Then, there is an edge from a partial match of a node's on-path child to a partial match of the node if there is a valid partial match of the other child such that the node's partial match is compatible with the two child matches.
Reachability in the graph can model which partial matches are valid: A partial match is tagged as valid if it does not mark any vertices as matched in a child. The valid partial matches of the leaf node of P are also tagged as valid. Then, the valid partial matches are exactly those that are reachable from a partial match tagged as valid in the directed acyclic graph.
3.3.3. Finding Valid Partial Matches Via Reachability
Next, we discuss how to compute all the valid partial matches using the directed acyclic graph of partial matches. Note that this graph still has a diameter proportional to the length of the path, so we cannot directly use BFS. Hence, we introduce shortcuts of exponentially increasing distance to reduce the diameter. After introducing the shortcuts, we use naive parallel BFS to determine all the reachable vertices. The details follow.
A simple (but work-inefficient) way to solve reachability is to introduce shortcuts for every vertex (similarly to some list ranking and connected components algorithms (Reif, 1993)):
Introduce shortcuts in O(log n) rounds.
Round i creates shortcuts of length 2^i. The edges of the graph are shortcuts of length 1.
In round i, every vertex looks at all its outgoing shortcuts of length 2^(i−1). For each such shortcut leading to a vertex v, it looks at all shortcuts of equal length leaving v and adds a shortcut of length 2^i to their endpoints.
This would result in O(log n) depth, but would also be work-inefficient by up to a logarithmic factor because every vertex in the graph does work in every round.
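The simple doubling scheme can be sketched as follows (our own code, on an explicit edge set): round i composes pairs of length-2^(i−1) shortcuts into length-2^i shortcuts.

```python
def add_shortcuts(edges, num_rounds):
    """edges: iterable of (u, v) pairs of a DAG. Returns the original
    edges plus shortcuts of lengths 2, 4, ..., 2**num_rounds."""
    lengths = {1: set(edges)}
    for i in range(1, num_rounds + 1):
        prev = lengths[2 ** (i - 1)]
        # index the previous-length shortcuts by their tail vertex
        succ = {}
        for u, v in prev:
            succ.setdefault(u, set()).add(v)
        # compose two shortcuts of length 2^(i-1) into one of length 2^i
        lengths[2 ** i] = {(u, w)
                           for u, v in prev
                           for w in succ.get(v, ())}
    return set().union(*lengths.values())
```

On a path, every vertex gains up to one outgoing shortcut per round, which is exactly the logarithmic work overhead the text refers to.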
The crucial observation to overcome this limitation is that any valid partial match is constructed by matching a "new" vertex at most k times. Thus, there are at most k edges that match new vertices along any path in the graph towards a valid partial match. The rest of the edges do not introduce any new matches, but instead translate the partial match of a child to an equivalent partial match of its parent. Since there is only one way not to introduce any new matches (see Figure 5), the subgraph of edges that do not introduce new matches is a directed forest (where edges are directed towards the roots). Hence, it suffices to introduce shortcuts in this forest.
Because this subgraph is a forest, shortcuts can be introduced work-efficiently in parallel: In each tree of the forest, decompose the tree into paths using Lemma 3.2. In each path, choose evenly spaced vertices (every Θ(log n)-th vertex) as vertices where shortcuts are introduced. Add a shortcut from every such vertex to the next, then add shortcuts of exponentially increasing distance between them (within the path). Moreover, add a shortcut from every vertex to the first vertex in a lower layer.
Lemma 3.3.
Computing the valid partial matches of the graph pattern in a subpath of a decomposition tree of width w takes work linear in the size of the graph of partial matches and depth O(k log n).
The work is linear in the number of vertices of the graph because we add the shortcut edges of exponentially increasing distance only at a small fraction of the vertices of the forest.
After introducing the shortcuts, the distance from a valid leaf vertex to any other valid vertex is O(k log n): Consider any path in the original graph. It contains at most k edges that are not in the forest. Therefore, it consists of at most k + 1 subpaths where each is a subgraph of the forest. Each such subpath is contained in a maximal tree of the forest. By Lemma 3.2, it intersects at most O(log n) of that tree's paths. It takes O(log n) hops to move from the first such path to the last (because of the shortcuts to a vertex in a lower layer). Then, it takes an additional O(log n) hops to traverse the first and last path using the shortcuts within each path. We conclude that the overall number of hops to traverse the path is O(k log n).
Together with the depth of constructing the shortcut graph, this yields the claimed depth of the algorithm. ∎
We generalize our algorithm to disconnected patterns, show how to list all occurrences of a graph pattern, and characterize the family of graphs for which the algorithm is still FPT.
4.1. Disconnected Patterns
We extend our algorithm so that it can handle arbitrary disconnected patterns. These patterns are challenging because (in particular) the treewidth cover algorithm cannot guarantee that every component of the pattern graph ends up in the same cluster.
Consider a pattern graph consisting of q connected components. Number the components arbitrarily from 1 to q. A naive approach is to try out all possible ways to assign the vertices of the target graph to the components. A randomized approach (inspired by color coding (Alon et al., 1995)) allows us to remove the exponential dependency on the number of vertices n. It works as follows:
Color each vertex in the target graph independently and uniformly at random with a number between 1 and q.
For each color i, consider the subgraph induced by the vertices that have color i.
Search for occurrences of the i-th component of the pattern in the subgraph induced by the color-i vertices.
Return true if and only if each search is successful.
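The four steps can be sketched as follows (our own code; `find_component` stands for any connected-pattern solver, such as the one from Section 2):

```python
import random

def find_disconnected(target_vertices, components, find_component,
                      rng=random):
    """One randomized trial: color the target vertices with one color
    per pattern component and search each component in its color class."""
    q = len(components)
    color = {v: rng.randrange(q) for v in target_vertices}
    classes = [{v for v in target_vertices if color[v] == i}
               for i in range(q)]
    return all(find_component(components[i], classes[i])
               for i in range(q))
```

A single trial finds a fixed occurrence only with probability q^(−k), so the trial is repeated as analyzed in Lemma 4.1.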
Lemma 4.1.
Finding (with high probability) an occurrence of a disconnected pattern with q components and k vertices takes at most a factor of O(q^k log n) more work than finding an occurrence of a connected pattern.
Consider a fixed occurrence of the pattern. The probability that all of its vertices are assigned the color of their respective component is q^(−k). Hence, O(q^k) repetitions suffice to find a particular occurrence with constant probability, and O(q^k log n) repetitions suffice to certify that no occurrence exists with high probability. ∎
Note that this technique of finding disconnected patterns by reduction to the connected case is completely general and can be used in conjunction with any subgraph isomorphism algorithm.
4.2. Listing all Occurrences
We describe the modifications necessary to make our algorithm list all occurrences of a pattern. The first step is to modify the algorithm such that it returns a particular occurrence of a pattern with probability at least . Then, we repeatedly generate new sets of occurrences and remove duplicates (by hashing) until we are confident enough that we have found all occurrences. The main difficulty is that the number of iterations necessary to find all the occurrences depends on the number of occurrences, which we do not know in advance.
However, since every particular occurrence is found with probability at least in each iteration, if there is an occurrence that has not yet been found, at least one new occurrence is found with probability at least . This argument shows that the process is related to getting many heads in a row when flipping coins: it is unlikely that many iterations in a row do not find a new occurrence.
Observation 2 ().
For all , the probability that in a sequence of independent coin flips heads occur in a row is at most .
The probability that heads occur in a row starting from the -th coin flip is at most . By a union bound over the possible start positions, the bound follows. ∎
This observation still holds even for biased coins, as long as the probability that heads comes up is at most .
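The bound in Observation 2 can be checked exactly for small parameters with a dynamic program over the current run length (a verification sketch, not part of the algorithm; it uses exact rationals to avoid floating-point error):

```python
from fractions import Fraction

def prob_run_of_heads(n, k, p_heads=Fraction(1, 2)):
    """Exact probability that n independent coin flips contain a run of
    at least k heads, via a DP whose state is the current head-run length."""
    state = [Fraction(0)] * k  # state[j]: current run is j, no k-run so far
    state[0] = Fraction(1)
    hit = Fraction(0)          # probability that a k-run has occurred
    for _ in range(n):
        new = [Fraction(0)] * k
        for j, pr in enumerate(state):
            if pr == 0:
                continue
            if j + 1 == k:
                hit += pr * p_heads          # the run reaches length k
            else:
                new[j + 1] += pr * p_heads   # extend the head run
            new[0] += pr * (1 - p_heads)     # tails resets the run
        state = new
    return hit

# The union bound from the observation: at most n / 2^k for fair coins.
assert prob_run_of_heads(20, 6) <= Fraction(20, 2**6)
```

The same DP with `p_heads` below one half confirms the remark that the bound survives for biased coins.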
Therefore, we iterate until we have seen no new occurrence for iterations in a row, which guarantees that we have found all occurrences with high probability in .
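The resulting stopping rule can be sketched as a simple loop. Here `find_occurrences_once` is a hypothetical subroutine standing in for one randomized listing iteration (returning a set of hashable occurrences, each particular occurrence found with constant probability), and `patience` is the required run of unproductive iterations:

```python
def list_all_occurrences(find_occurrences_once, patience):
    """Repeat a randomized listing step until `patience` consecutive
    iterations yield no new occurrence; duplicates are removed by hashing."""
    found = set()
    no_progress = 0
    while no_progress < patience:
        new = find_occurrences_once() - found
        if new:
            found |= new
            no_progress = 0  # progress: reset the run counter
        else:
            no_progress += 1
    return found
```

By the coin-flip argument, a long unproductive run while occurrences remain unfound is unlikely, so the loop terminates having found everything with high probability.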
Theorem 4.2 ().
Listing w.h.p. all occurrences of a connected pattern graph in a planar target graph takes depth and work.
Every iteration finds a specific occurrence with probability at least . Hence, after iterations, the probability that we have not found a specific occurrence is at most . By a union bound over the occurrences, the probability that we have not found all occurrences is at most . Hence, after iterations, the algorithm will, with high probability, not find any new occurrences (because there are none) and by construction terminate after an additional iterations. Overall, the algorithm takes at most iterations to terminate with high probability. Together with the bounds from Section 4.2.1 this implies the work and depth bounds.
We show that the probability that the algorithm terminates before all occurrences have been found is at most . Consider the longest prefix of iterations of the algorithm where it has not found all occurrences. Model these iterations as coin flips, where the coin of an iteration turns up heads if this iteration finds no new occurrence. Heads comes up with probability at most because each such iteration finds a new occurrence with probability at least . By Observation 2, the probability that (for any in this sequence) after coin flips heads comes up times in a row is at most . This situation is the only one in which the algorithm terminates before finding all occurrences.
Hence, if we can find every occurrence that does not cross a cluster, we can find all occurrences with high probability. It remains to describe how to find these occurrences.
4.2.1. Recovering All Occurrences for a Cluster
Every valid partial match at the root of the tree decomposition in which no vertex is mapped as unmatched can be attributed to one or more subgraph isomorphisms. We construct these subgraph isomorphisms top down while traversing the shortcut graph of valid partial matches in reverse order (only following edges that lead to a valid partial match). The algorithm keeps a set of current subgraph isomorphisms at every vertex in the graph and does a parallel BFS of limited depth. When visiting a new vertex of the shortcut graph (which contains a partial mapping ), every subgraph isomorphism in the list of the predecessor node is extended by the partial mapping and stored at the new vertex.
As for the decision problem, we observe that only edges introduce a new vertex to the mapping. The other edges are shortcut so that overall at most edges need to be traversed in between those edges. However, we now need to construct the possible subgraph isomorphisms through those shortcuts explicitly. Fortunately, as illustrated in Figure 5, there is a unique way to extend a partial match through these shortcut edges, namely, to leave the current mapping unchanged. Hence, the overall depth of the reconstruction is .
By considering only occurrences that contain at least one vertex closest to the root of the BFS tree of the cover, every traversed path leads to at least one subgraph isomorphism, and the work is bounded by the total size of the subgraph isomorphisms.
4.3. Bounded Genus & Apex-Minor-Free Graphs
Our results generalize to all (minor-closed) families of graphs where a bounded diameter graph has bounded treewidth. Observe that our treewidth -cover algorithm from Section 2.1 does not use anything specific to planar graphs. It outputs subgraphs of diameter that cover all occurrences of the pattern with constant probability. Moreover, our algorithm for bounded treewidth in Section 3 only requires a treewidth decomposition of low width. We start by giving the characterization of the graphs where our results hold and then discuss the few necessary changes.
4.3.1. Locally Bounded Treewidth
A family of graphs has locally bounded treewidth (Eppstein, 2000) if every graph of diameter d in the family has treewidth at most f(d), for some function f. Surprisingly, all minor-closed families of graphs that have locally bounded treewidth have locally linear treewidth (Demaine and Hajiaghayi, 2004), meaning that a graph of diameter d has treewidth O(d).
The graphs of locally bounded treewidth have been characterized with respect to certain excluded minors. A graph that has a vertex v connected to all other vertices and that becomes planar after removing v is an apex graph. Such graphs do not have locally bounded treewidth. For example, consider the k-by-k grid with an additional vertex connected to all other vertices. This graph has diameter 2, but because the k-by-k grid has treewidth k (Robertson and Seymour, 1986), this apex graph has treewidth at least k. Note that some apex graphs are planar (like the clique K4) while others are not (like the clique K5).
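The grid-plus-apex example can be checked mechanically for its diameter claim; a small sketch using adjacency sets (the treewidth claim is not checked, as computing treewidth is itself hard):

```python
from collections import deque
from itertools import product

def grid_plus_apex(k):
    """The k-by-k grid plus one apex vertex 'a' adjacent to every grid vertex."""
    adj = {(i, j): set() for i, j in product(range(k), repeat=2)}
    for (i, j) in list(adj):
        for (ni, nj) in ((i + 1, j), (i, j + 1)):  # grid edges
            if (ni, nj) in adj:
                adj[(i, j)].add((ni, nj))
                adj[(ni, nj)].add((i, j))
    grid_vertices = list(adj)
    adj['a'] = set(grid_vertices)   # the apex sees every grid vertex
    for v in grid_vertices:
        adj[v].add('a')
    return adj

def diameter(adj):
    """Exact diameter by a BFS from every vertex."""
    best = 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        best = max(best, max(dist.values()))
    return best
```

Every pair of grid vertices is connected through the apex in two hops, so the diameter is 2 for every grid size above one.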
Interestingly, a minor-closed family of graphs of locally bounded treewidth must have an apex graph as an excluded minor (Eppstein, 2000). For example, planar graphs exclude the apex graph K5 as a minor (by Kuratowski’s theorem (Wilson, 1996)). Examples of apex-minor-free graphs include bounded-genus graphs.
4.3.2. Parallel Tree Decomposition
The missing piece to our parallel subgraph isomorphism algorithm on apex-minor-free graphs is a parallel tree decomposition algorithm. The algorithm from Lagergren (Lagergren, 1996) achieves poly-logarithmic depth for constant treewidth, but the depth of the algorithm is not polynomial in . It becomes the bottleneck in our subgraph isomorphism algorithm.
Theorem 4.3 (Lagergren (Lagergren, 1996)).
For a graph with treewidth , computing a tree decomposition of width takes work and depth.
Theorem 4.4 ().
Deciding (with high probability) if a connected pattern graph occurs as a subgraph of an apex-minor-free graph takes work and depth.
5. Planar Vertex Connectivity
Vertex connectivity is a classic graph problem with applications in networking (Censor-Hillel et al., 2014) and operations research (Nagamochi et al., 2001). Sequentially, -vertex connectivity can be solved in linear time for planar graphs (Eppstein, 1995) and, more generally, in time deterministically (Henzinger et al., 1996) and time with high probability (Nanongkai et al., 2019a). Recently, a sub-quadratic time deterministic algorithm (Gao et al., 2019) and a near-linear time algorithm (Nanongkai et al., 2019b) have been announced.
Two-connectivity and three-connectivity have long been solved (optimally) for general graphs with linear work and logarithmic depth (Tarjan and Vishkin, 1985; Miller and Ramachandran, 1987). In contrast, no sub-quadratic work, poly-logarithmic depth algorithm for higher connectivity was available even for planar graphs prior to our work.
We show that vertex connectivity can be solved with work and depth in planar graphs. This result is possible because the vertex connectivity is closely related to certain separating cycles in a target graph that is constructed based on a planar embedding of the original graph (details below). Moreover, we use that the work of our subgraph isomorphism algorithm is for any constant size pattern. Eppstein (Eppstein, 1995) uses this idea (attributed to Nishizeki) for his sequential linear work vertex connectivity algorithm. We describe the approach and the necessary changes to our parallel algorithm.
5.1. From Connectivity to Separating Cycles
Embed the graph in the plane. Use the embedding to construct a bipartite target graph from as follows. One side of the bipartite graph consists of the vertices from . The vertices on this side are the original vertices. The other side has a vertex for each face in the original graph . The vertices on this side are the face vertices. A face vertex of and an original vertex of are connected if and only if the face contains the vertex in the graph . Observe that because the graph is bipartite, all its cycles have even length.
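Given a planar embedding, the bipartite target graph is straightforward to construct. The sketch below assumes the embedding is provided as a face list, each face being the set of original vertices on its boundary (a hypothetical representation; how the embedding is computed is outside this snippet):

```python
def vertex_face_graph(vertices, faces):
    """Build the bipartite graph from a planar embedding given as a face
    list. Face vertices are named ('f', i); returns adjacency sets.
    A face vertex and an original vertex are adjacent iff the face
    contains the vertex."""
    adj = {v: set() for v in vertices}
    for i, face in enumerate(faces):
        fv = ('f', i)
        adj[fv] = set()
        for v in face:
            adj[fv].add(v)  # edges only cross between the two sides,
            adj[v].add(fv)  # so every cycle has even length
    return adj
```

Since every edge joins an original vertex to a face vertex, the graph is bipartite and all its cycles have even length, as the text observes.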
A subgraph of a graph separates the vertex set if the graph we get from removing all vertices of from contains at least two vertices from in two different connected components.
Lemma 5.1 (Nishizeki / Eppstein (Eppstein, 1995)).
If is -connected and the shortest cycle in the bipartite graph that separates the set of original vertices has length , then has vertex connectivity .
This leads us to our algorithm to decide planar vertex connectivity in parallel. First, check if the graph is 2-connected and if it is 3-connected using existing algorithms (Tarjan and Vishkin, 1985; Miller and Ramachandran, 1987). If the graph is 3-connected, check if there is a cycle of length in that separates the original vertices of . If so, the graph has vertex connectivity . Otherwise, the graph has vertex connectivity .
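One plausible reading of this decision structure can be sketched as follows. All three subroutines are hypothetical black boxes: the first two stand in for the existing parallel 2- and 3-connectivity tests, and the third for the separating-cycle search in the bipartite graph (the exact cycle lengths tried follow from Lemma 5.1 and are an assumption here):

```python
def planar_vertex_connectivity(G, is_biconnected, is_triconnected,
                               has_separating_cycle):
    """Decision skeleton for planar vertex connectivity (a sketch).

    has_separating_cycle(G, c): hypothetical test for a separating cycle
    in the bipartite graph certifying connectivity exactly c."""
    if not is_biconnected(G):
        return 1   # assuming G is connected
    if not is_triconnected(G):
        return 2
    for c in (3, 4):            # planar connectivity is at most 5
        if has_separating_cycle(G, c):
            return c
    return 5
```

The cap at 5 is justified below by Euler's formula.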
Lemma 5.2 ().
Deciding Planar Vertex Connectivity (w.h.p) takes work and depth.
The algorithm is correct by Lemma 5.1 and the fact that the vertex connectivity of a planar graph is at most 5. This follows from Euler’s formula, which implies that every planar graph has a vertex of degree at most 5 (Wilson, 1996). Removing the neighbors of this vertex disconnects the graph; hence, the graph is not 6-connected.
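The degree bound can be made explicit with a one-line calculation from Euler's formula (for simple planar graphs on at least three vertices):

```latex
|E| \le 3|V| - 6
\;\Longrightarrow\;
\sum_{v \in V} \deg(v) = 2|E| \le 6|V| - 12 < 6|V|,
```

so the average degree is strictly below 6 and some vertex has degree at most 5.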
Hence, we need to augment our subgraph isomorphism algorithm so that it can find a subgraph that separates a set of vertices (the original vertices in the case of the graph ).
A simple approach to find all separating cycles of a given length would be to enumerate all cycles of that length using the algorithm from Section 4.2.1 and check which are separating. However, a planar graph can contain many cycles of a given length (Hakimi and Schmeichel, 1979), so this would be too much work.
5.2. Separating Subgraph Isomorphism
We generalize our parallel subgraph isomorphism algorithm so that it can find subgraphs that separate a given set of vertices. Two modifications are necessary. These are similar to what was necessary for the sequential algorithm (Eppstein, 1995) for cycles. The first modification is to the parallel treewidth cover algorithm from Section 2.1. This modification ensures that a subgraph that is separating in the original graph is also separating in each of the graphs in the cover. The second modification concerns the algorithm for bounded treewidth subgraph isomorphism from Section 3. It extends the state space of the recursion to keep track of which vertices are separated by the subgraph and which can be in the same component after removing the subgraph.
-Separating Subgraph Isomorphism asks if there exists an occurrence of the pattern graph in the target graph that separates the vertex set . If the pattern graph is a cycle, the problem is called -Separating Cycle.
5.2.1. How to Modify the Treewidth Cover
Start by clustering the graph as usual. Then, for each cluster, merge all neighboring clusters into a single vertex each (do not choose these as the source for the BFS). Then, in each cluster, instead of returning the graph (which is an induced subgraph of the cluster), merge all connected components of the cluster that result after removing into a single vertex each. This produces a set of minors of the graph (instead of a set of induced subgraphs), as shown in Figure 7.
When proceeding to find an -separating subgraph in these minors, consider each merged vertex that contains at least one vertex of the set to be in the set . Moreover, do not allow the occurrence of the pattern to contain any of the merged vertices (the other vertices are in a set of allowed vertices ).
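The contraction step of the minor construction (merge each connected component of the removed vertices into a single vertex) can be sketched directly on adjacency sets. This is a sequential illustration of the operation only; the merged-vertex names `('m', s)` are an arbitrary labeling introduced here:

```python
def contract_components(adj, keep):
    """Contract every connected component of the vertices NOT in `keep`
    into a single merged vertex; `adj` maps each vertex to its neighbor set."""
    removed = set(adj) - set(keep)
    comp = {}
    for s in removed:                 # label components of the removed part
        if s in comp:
            continue
        stack, label = [s], ('m', s)  # one merged vertex per component
        comp[s] = label
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w in removed and w not in comp:
                    comp[w] = label
                    stack.append(w)
    new_adj = {}
    for u in adj:                     # rebuild edges between merged classes
        nu = comp.get(u, u)
        new_adj.setdefault(nu, set())
        for w in adj[u]:
            nw = comp.get(w, w)
            if nw != nu:
                new_adj[nu].add(nw)
                new_adj.setdefault(nw, set()).add(nu)
    return new_adj
```

In the algorithm itself, this contraction is done with the parallel connected-components routine cited in Lemma 5.3 rather than a sequential DFS.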
5.2.2. How to Modify the Bounded Treewidth Algorithm
The generalized algorithm must separate and only contain vertices from the set of allowed vertices . To restrict the found occurrences to only contain vertices from the set of allowed vertices, it suffices to restrict the mapping at each tree decomposition node to .
The idea behind finding an occurrence that separates is to record which vertices are separated by the occurrence. Removing such an occurrence creates at least two connected components. We call one of these components the inside vertices and the rest of the vertices the outside vertices. Observe that after removing a separating occurrence from the graph, every resulting connected component must consist of only inside vertices or of only outside vertices.
We extend the construction of partial matches. A partial match for a node has an additional set of inside vertices and a set of outside vertices. Moreover, it has a boolean to keep track of whether any of the vertices of the set that occur in the subgraph induced by the current tree decomposition node are on the inside (and a boolean to store whether any of those vertices are on the outside). This bookkeeping ensures that at least one vertex is on each side – otherwise, the subgraph would not be separating.
We adapt the semantics of the combination rules accordingly to reflect the intuition that partial matches keep track of which vertices are on the inside or outside. Consider a node of the decomposition tree, one of its children , and the (extended) partial matches of and of . Then, for the partial matches to be valid, ensure the following:
Every connected component of the subgraph of induced by the vertices in that are not mapped onto by the function is either fully in or fully in . Similarly for .
The inside and outside of and have to be consistent: For any vertex , if then if and only if and if and only if .
The parent match has to ‘remember’ if any vertex is in and on the inside or outside. Specifically, for a vertex , implies and implies . Moreover, implies and implies .
Finally, a valid partial match at the root must separate (which means and are both true at the root).
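The consistency and bookkeeping rules between a parent and a child partial match can be sketched as a predicate. The representation is an assumption for illustration: each extended partial match is a dict with vertex sets `'inside'` and `'outside'` and the two booleans `'any_inside'` and `'any_outside'`:

```python
def consistent(parent, child, shared_vertices):
    """Check the inside/outside rules between a parent and a child
    extended partial match; `shared_vertices` are the vertices present
    in both decomposition nodes."""
    for v in shared_vertices:
        # A shared vertex must be on the same side in parent and child.
        if (v in parent['inside']) != (v in child['inside']):
            return False
        if (v in parent['outside']) != (v in child['outside']):
            return False
    # The parent 'remembers' whether the child saw vertices on each side.
    if child['any_inside'] and not parent['any_inside']:
        return False
    if child['any_outside'] and not parent['any_outside']:
        return False
    return True
```

At the root, a valid match additionally needs both booleans set, matching the final rule above.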
Lemma 5.3 ().
Deciding Planar -Separating Subgraph Isomorphism (w.h.p.) for a connected pattern graph with vertices takes depth and work.
Computing connected components and contracting the edges takes work and depth (Gazit, 1991). The number of states for the recursion increases by at most . Hence, the number of considered combinations with the children increases by at most at every node. ∎
When is a constant, the algorithm takes work and depth. In Section 5.1, the only missing piece to solve planar vertex connectivity in work and depth is to find -Separating -cycles, which we have just described how to solve in the stated bounds.
6. Conclusion and Future Work
We presented a randomized algorithm to decide planar subgraph isomorphism in work and depth for constant size patterns. We used this result for deciding planar vertex connectivity in the same parallel bounds.
There are many interesting avenues for future work. Although we could use our subgraph listing algorithm to count the number of occurrences, this is not work-efficient, as the runtime grows with the number of occurrences. The difficulty comes from the randomized way in which we cluster the graph to construct a cover. A deterministic parallel cover construction would solve this issue and yield a deterministic algorithm overall.
Reducing the work dependency on the size of the pattern could be an essential step in improving the practicality of the approach. There are indications that is a lower bound for the dependency on for any planar subgraph isomorphism algorithm with polynomial dependency in (Fomin et al., 2016), but there remains room for improvement regarding the exponential dependency on . Moreover, faster parallel algorithms for tree decomposition would directly improve our bounds for apex-minor-free graphs.
For planar vertex connectivity, we reduced the gap between the work of our algorithm and the best sequential algorithm to . It is natural to ask if it is possible to solve planar vertex connectivity in work and poly-logarithmic depth. More generally, in light of the recently announced sequential near-linear time vertex connectivity algorithm for sparse graphs (Nanongkai et al., 2019b), it might be interesting to see if we can solve vertex connectivity in sparse graphs in near-linear work and low depth.
Appendix A Decomposing a Tree into Paths
We prove Lemma 3.2 using expression tree evaluation techniques. This means that we transform the problem into a problem of evaluating an expression tree of suitable operations. To evaluate this expression tree efficiently, we need to decompose the operations into unary functions satisfying certain properties, as described below.
Recall that the Lemma requires the tree to be split into layers, each consisting of disjoint paths. The idea is to compute for each vertex in the tree the layer in which the vertex occurs. This assigns each node a layer number, where the layer number of the leaves is zero and layer numbers are non-decreasing toward the root (as detailed below).
Each layer (i.e. subgraph induced by vertices with the same layer number) consists of a forest where each connected component is a path. Hence, it is easy to find and order these paths (using list ranking) once we have the layer numbers.
Next, we describe the recursive function that computes the layer numbers. In a general rooted tree, the parent has the same layer number as the maximum layer number of any of its children if this maximum is unique (i.e., only one child has this layer number). Otherwise, the layer number of the parent is one larger than that maximum. In summary, the layer number of node with children with layer numbers is given recursively:
The layer number of a leaf is zero. This recursive description works because the case where the maximum is unique corresponds to the parent being part of the same path as the child that attains this maximum. If two children have the same maximal layer number, the parent must start its own path and a new layer.
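The recursion can be stated compactly in code. This is a sequential sketch of the rule only; the paper evaluates the same recursion in parallel via tree contraction, as described below:

```python
def layer_numbers(children, root):
    """Layer number of every node of a rooted tree: a leaf gets 0; an
    internal node gets the maximum layer number among its children if
    exactly one child attains that maximum, and one more otherwise.

    children: dict mapping each node to the list of its children."""
    layer = {}
    def visit(u):
        if not children.get(u):
            layer[u] = 0          # leaves start layer zero
            return 0
        ls = [visit(c) for c in children[u]]
        m = max(ls)
        # unique maximum: continue the child's path; tie: start a new layer
        layer[u] = m if ls.count(m) == 1 else m + 1
        return layer[u]
    visit(root)
    return layer
```

On a path, every node stays in layer zero; a tie between two subtrees (as in a complete binary tree) bumps the parent's layer, which is exactly the halving argument below.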
Moreover, it becomes clear why there are only logarithmically many layers: for a parent to have a larger layer number than its children, at least two children must share the maximal layer number. This means that the number of nodes that reach a given layer decreases by at least a factor of two from one layer to the next.
We proceed to describe the conditions for applying the efficient tree contraction based expression tree evaluation techniques, as summarized in Lemma A.1. A family of unary functions is closed under composition if the composition of any two functions in the family is also in the family. A family of unary functions over the domain is closed under projection with respect to a -ary function if for all tuples and all indexes (between and ) the function (a unary function of ) is in the family .
Lemma A.1 ().
If there is a family of -computable functions that is closed under composition and closed under projection with respect to all the operations in an expression tree of nodes, then evaluating the expression tree takes work and depth (Reif, 1993).
The intuition is that the expression tree evaluation repeatedly contracts the expression tree. For this procedure to be well-defined, the algorithm needs to express partially evaluated subtrees using these unary functions. Next, we exhibit such a suitable family of unary functions for the function that maps the layer number of the children to the layer number of the parent.
We define a set of unary functions over the domain of natural numbers, where for each natural number , there are two functions: a function and a function . Intuitively, the functions record a state where the maximum (so far) is unique and equal to . The functions record the state where the maximum is not unique and equal to . Formally, we set:
We check that the function class is closed under composition. For any natural numbers and , the following holds:
To check that the function class is closed under projection with respect to , consider a sequence of layer values . Let be the maximum of . For any valid index we have that:
- Aggarwal et al. (1991). Multilayer grid embeddings for VLSI. Algorithmica 6(1-6), 129–151.
- Alon, Yuster, and Zwick (1995). Color-coding. J. ACM 42(4), 844–856.
- Artymiuk et al. (1992). Similarity searching in databases of three-dimensional molecules and macromolecules. Journal of Chemical Information and Computer Sciences 32(6), 617–630.
- Baker (1994). Approximation algorithms for NP-complete problems on planar graphs. J. ACM 41(1), 153–180.
- Bannach, Stockhusen, and Tantau (2015). Fast parallel fixed-parameter algorithms via color coding. IPEC 2015, 224–235.
- Blelloch (1996). Programming parallel algorithms. Commun. ACM 39(3), 85–97.
- Bodlaender et al. (2013). An O(c^k n) 5-approximation algorithm for treewidth. FOCS 2013, 499–508.
- Bodlaender and Hagerup (1995). Parallel algorithms with optimal speedup for bounded treewidth. ICALP 1995, 268–279.
- Bodlaender and Kloks (1996). Efficient and constructive algorithms for the pathwidth and treewidth of graphs. J. Algorithms 21(2), 358–402.
- Bodlaender (1988). NC-algorithms for graphs with small treewidth. WG 1988, 1–10.
- Bodlaender (1993). A linear time algorithm for finding tree-decompositions of small treewidth. STOC 1993, 226–234.
- Censor-Hillel, Ghaffari, and Kuhn (2014). A new perspective on vertex connectivity. SODA 2014, 546–561.
- Chen (1995). NC algorithms for partitioning planar graphs into induced forests and approximating NP-hard problems. WG 1995, 275–289.
- Chiba and Nishizeki (1985). Arboricity and subgraph listing algorithms. SIAM J. Comput. 14(1), 210–223.
- Cole, Klein, and Tarjan (1996). Finding minimum spanning forests in logarithmic time and linear work using random sampling. SPAA 1996, 243–250.
- Demaine and Hajiaghayi (2004). Equivalence of local treewidth and linear local treewidth and its algorithmic applications. SODA 2004, 840–849.
- Dorn (2010). Planar subgraph isomorphism revisited. STACS 2010, 263–274.
- Downey and Fellows (1999). Parameterized Complexity. Springer, New York.
- Eppstein, Löffler, and Strash (2010). Listing all maximal cliques in sparse graphs in near-optimal time. ISAAC 2010, 403–414.
- Eppstein (1995). Subgraph isomorphism in planar graphs and related problems. SODA 1995, 632–640.
- Eppstein (2000). Diameter and treewidth in minor-closed graph families. Algorithmica 27(3), 275–291.
- Fomin et al. (2016). Subexponential parameterized algorithms for planar and apex-minor-free graphs via low treewidth pattern covering. FOCS 2016, 515–524.
- Fomin et al. (2017). Fully polynomial-time parameterized computations for graphs and matrices of low treewidth. SODA 2017, 1419–1432.
- Gao et al. (2019). Deterministic graph cuts in subquadratic time: sparse, balanced, and k-vertex. CoRR abs/1910.07950.
- Garey, Johnson, and Stockmeyer (1976). Some simplified NP-complete graph problems. Theor. Comput. Sci. 1(3), 237–267.
- Garey and Johnson (1990). Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman & Co., USA.
- Gazit (1991). Optimal EREW parallel algorithms for connectivity, ear decomposition and st-numbering of planar graphs. IPPS 1991, 84–91.
- Geissmann and Gianinazzi (2018). Parallel minimum cuts in near-linear work and low depth. SPAA 2018, 1–11.
- Hakimi and Schmeichel (1979). On the number of cycles of length k in a maximal planar graph. Journal of Graph Theory 3(1), 69–86.
- Henzinger, Rao, and Gabow (1996). Computing vertex connectivity: new bounds from old techniques. FOCS 1996, 462–471.
- Klein and Reif (1986). An efficient parallel algorithm for planarity. FOCS 1986, 465–477.
- Klein and Subramanian (1993). A linear-processor polylog-time algorithm for shortest paths in planar graphs. FOCS 1993, 259–270.
- Kuramochi and Karypis (2001). Frequent subgraph discovery. ICDM 2001, 313–320.
- Lagergren (1996). Efficient parallel algorithms for graphs of bounded tree-width. J. Algorithms 20(1), 20–44.
- Lipton and Tarjan (1979). A separator theorem for planar graphs. SIAM Journal on Applied Mathematics 36(2), 177–189.
- De Loera, Rambau, and Santos (2010). Triangulations. Springer, Berlin Heidelberg.
- Miller, Peng, Vladu, and Xu (2015). Improved parallel algorithms for spanners and hopsets. SPAA 2015, 192–201.
- Miller and Ramachandran (1987). A new graph triconnectivity algorithm and its parallelization. STOC 1987, 335–344.
- Miller and Reif (1985). Parallel tree contraction and its application. FOCS 1985, 478–489.
- Milo et al. (2002). Network motifs: simple building blocks of complex networks. Science 298(5594), 824–827.
- Nagamochi et al. (2001). Minimum cost source location problem with vertex-connectivity requirements in digraphs. Inf. Process. Lett. 80(6), 287–293.
- Nanongkai, Saranurak, and Yingchareonthawornchai (2019a). Breaking quadratic time for small vertex connectivity and an approximation scheme. STOC 2019, 241–252.
- Nanongkai et al. (2019b). Computing and testing small vertex connectivity in near-linear time and queries. CoRR abs/1905.05329.
- Ohlrich et al. (1993). SubGemini: identifying subcircuits using a fast subgraph isomorphism algorithm. DAC 1993, 31–37.
- Plesník (1979). The NP-completeness of the Hamiltonian cycle problem in planar digraphs with degree bound two. Inf. Process. Lett. 8(4), 199–201.
- Przulj et al. (2004). Modeling interactome: scale-free or geometric? Bioinformatics 20(18), 3508–3515.
- Reif (1993). Synthesis of Parallel Algorithms. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.
- Robertson and Seymour (1986). Graph minors. V. Excluding a planar graph. J. Comb. Theory, Ser. B 41(1), 92–114.
- Schmidt et al. (2009). Efficient planar graph cuts with applications in computer vision. CVPR 2009, 351–356.
- Tarjan and Vishkin (1985). An efficient parallel biconnectivity algorithm. SIAM J. Comput. 14(4), 862–874.
- Ullmann (1976). An algorithm for subgraph isomorphism. J. ACM 23(1), 31–42.
- Wilson (1996). Introduction to Graph Theory. Longman.