The classification of problems according to their complexity is one of the main goals in computer science. This goal was partly achieved by the theory of NP-completeness, which helps to identify the problems that are unlikely to have polynomial-time algorithms. However, there are still many problems in P for which it is not known whether the running time of the best current algorithms can be improved. Such problems arise in various domains such as computational geometry, string matching, or graphs. Here we focus on the existence and the design of linear-time algorithms for solving several graph problems when restricted to classes of bounded clique-width. The problems considered comprise the detection of short cycles (e.g., Girth and Triangle Counting), some distance problems (e.g., Diameter, Hyperbolicity, Betweenness Centrality) and the computation of maximum matchings in graphs. We refer to Sections 3.1, 4.1 and 5, respectively, for a review of their definitions.
Clique-width is an important graph parameter in structural graph theory that intuitively represents the closeness of a graph to a cograph, a.k.a., a $P_4$-free graph [24, 32]. Some classes of perfect graphs, including distance-hereditary graphs, and so, trees, have bounded clique-width . Furthermore, clique-width has many algorithmic applications. Many algorithmic schemes and metatheorems have been proposed for classes of bounded clique-width [31, 28, 40]. Perhaps the most famous one is Courcelle’s theorem, which states that every graph problem expressible in Monadic Second-Order logic can be solved in $f(k) \cdot n$-time when restricted to graphs with clique-width at most $k$, for some computable function $f$ that only depends on $k$. Some of the problems considered in this work can be expressed as such a formula. However, the dependency on the clique-width in Courcelle’s theorem is super-polynomial, which makes it less interesting for the study of graph problems in P. Our goal is to derive a finer-grained complexity of polynomial graph problems when restricted to classes of bounded clique-width, which requires different tools than Courcelle’s theorem.
Our starting point is the recent theory of “Hardness in P” that aims at better hierarchizing the complexity of polynomial-time solvable problems . This approach mimics the theory of NP-completeness. Precisely, since unconditional hardness results are difficult to obtain, it is natural to prove hardness results assuming some complexity-theoretic conjectures. In other words, there are key problems that are widely believed not to admit better algorithms, such as 3-SAT (k-SAT), 3SUM and All-Pairs Shortest Paths (APSP). Roughly, a problem in P is hard if the existence of a faster algorithm for this problem implies the existence of a faster algorithm for one of the fundamental problems mentioned above. In their seminal work, Williams and Williams  prove that many important problems in graph theory are all equivalent under subcubic reductions. That is, if one of these problems admits a truly subcubic algorithm, then all of them do. Their results have extended and formalized prior work from, e.g., [51, 67]. The list of such problems was further extended in [1, 17].
Besides purely negative results (i.e., conditional lower-bounds), the theory of “Hardness in P” also comes with renewed algorithmic tools in order to prove the existence, or the nonexistence, of improved algorithms for some graph classes. The tools used to improve the running time of the above mentioned problems are similar to the ones used to tackle NP-hard problems, namely approximation and FPT algorithms. Our work is an example of the latter, of which we first survey the most recent results.
Related work: Fully polynomial parameterized algorithms.
FPT algorithms for polynomial-time solvable problems were first considered by Giannopoulou et al. . Such a parameterized approach makes sense for any problem in P for which a conditional hardness result is proved, or for which simply no linear-time algorithm is known. Interestingly, the authors of  proved that a matching of cardinality at least $k$ in a graph can be computed in linear-time for any fixed $k$. We stress that Maximum Matching is a classical and intensively studied problem in computer science [37, 45, 46, 49, 66, 72, 71, 86]. The well-known $O(m\sqrt{n})$-time algorithm in  is essentially the best known so far for Maximum Matching. Approximate solutions were proposed by Duan and Pettie .
More related to our work is the seminal paper of Abboud, Williams and Wang . They obtained rather surprising results when using treewidth: another important graph parameter that intuitively measures the closeness of a graph to a tree . Treewidth has tremendous applications in pure graph theory  and parameterized complexity . Furthermore, improved algorithms have long been known for “hard” graph problems in P, such as Diameter and Maximum Matching, when restricted to trees . However, it has been shown in  that under the Strong Exponential Time Hypothesis, for any $\varepsilon > 0$ there can be no $2^{o(k)} \cdot n^{2-\varepsilon}$-time algorithm for computing the diameter of graphs with treewidth at most $k$. This hardness result even holds for pathwidth, which leaves little chance of finding an improved algorithm for any interesting subclass of bounded-treewidth graphs while avoiding an exponential blow-up in the parameter. We show that the situation is different for clique-width than for treewidth, in the sense that the hardness results for clique-width do not hold for important subclasses.
We want to stress that a familiar reader could ask why the hardness results above do not directly apply to clique-width, since clique-width is upper-bounded by a function of treewidth . However, clique-width cannot be polynomially upper-bounded in terms of treewidth . Thus, the hardness results from  do not preclude the existence of improved algorithms for computing the diameter of graphs with clique-width at most $k$.
On a more positive side, the authors in  show that Radius and Diameter can be solved in $2^{O(k)} \cdot n^{1+o(1)}$-time, where $k$ is the treewidth. Husfeldt  shows that the eccentricity of every vertex in an undirected graph on $n$ vertices can be computed in time $n \cdot \exp O(k \log d)$, where $k$ and $d$ are the treewidth and the diameter of the graph, respectively. More recently, a tour de force was achieved by Fomin et al. , who were the first to design parameterized algorithms with polynomial dependency on the treewidth for Maximum Matching and Maximum Flow. Furthermore, they proved that for graphs with treewidth at most $k$, a tree decomposition of width $O(k^2)$ can be computed in $\tilde{O}(k^7 \cdot n)$-time. We observe that their algorithm for Maximum Matching is randomized, whereas ours are deterministic.
We are not aware of any study of parameters other than treewidth for polynomial graph problems. However, some authors choose a different approach where they study the parameterization of a fixed graph problem for a broad range of graph invariants [11, 43, 71]. As an example, clique-width is part of the graph invariants used in the parameterized study of Triangle Listing . Nonetheless, clique-width is not the main focus in . Recently, Mertzios, Nichterlein and Niedermeier  proposed algorithms for Maximum Matching that run in linear-time for any fixed value of several parameters, such as feedback vertex set or feedback edge set. Moreover, the authors in  suggest that Maximum Matching may become the “drosophila” of the study of the FPT algorithms in P. We advance in this research direction.
1.1 Our results
In this paper we study the parameterized complexity of several classical graph problems under a wide range of parameters, such as clique-width and its upper-bounds modular-width , split-width , neighbourhood diversity  and $P_4$-sparseness . The results are summarized in Table 1.
Roughly, it turns out that some hardness assumptions for general graphs do not hold anymore for graph classes of bounded clique-width. This is the case in particular for Triangle Detection and other cycle problems that are subcubic equivalent to it, such as, e.g., Girth; they can all be solved in linear-time, with quadratic dependency on the clique-width, with the help of dynamic programming (Theorems 2 and 3). The latter complements the results obtained for Triangle Listing in . However, many hardness results for distance problems when using treewidth are proved to also hold when using clique-width (Theorems 5, 6 and 7). These negative results have motivated us to consider some upper-bounds for clique-width as parameters, for which better results can be obtained than for clique-width. Another motivation stems from the fact that the existence of a parameterized algorithm for computing the clique-width of a graph remains a challenging open problem . We consider some upper-bounds for clique-width that are defined via linear-time computable graph decompositions. Thus, if these parameters are small enough, we get truly subcubic or even truly subquadratic algorithms for a wide range of problems.
(Table 1, summarizing our results, lists each problem, e.g., Triangle Detection, Triangle Counting and Girth, together with its parameterized time complexity.)
Graph parameters and decompositions considered
Let us now describe the parameters considered in this work. What follows is only an informal high-level description (formal definitions are postponed to Section 2).
A join is a set of edges inducing a complete bipartite subgraph. Roughly, clique-width can be seen as a measure of how easy it is to reconstruct a graph by adding joins between some vertex-subsets. A split is a join that is also an edge-cut. By using pairwise non-crossing splits, termed “strong splits”, we can decompose any graph into degenerate and prime subgraphs, which can be organized in a treelike manner. The latter is termed split decomposition .
We take advantage of the treelike structure of split decomposition in order to design dynamic programming algorithms for distance problems such as Diameter, Gromov Hyperbolicity and Betweenness Centrality (Theorems 8, 9 and 11, respectively). Although clique-width is also related to some treelike representations of graphs , we cannot proceed in the same way for clique-width as for split decomposition, because the edges in the treelike representations for clique-width may not represent a join.
Then, we can improve the results obtained with split decomposition by further restricting the type of splits considered. As an example, let $(A,B)$ be a bipartition of the vertex-set that is obtained by removing a split. If every vertex of $A$ is incident to some edge of the split, then $A$ is called a module of $G$. That is, every vertex $v \notin A$ is either adjacent or nonadjacent to every vertex of $A$. The well-known modular decomposition of a graph is a hierarchical decomposition that partitions the vertices of the graph with respect to the modules . Split decomposition is often presented as a refinement of modular decomposition . We formalize the relationship between the two in Lemma 10, which allows us to also apply our methods for split decomposition to modular decomposition.
However, we can often do better with modular decomposition than with split decomposition. In particular, suppose we partition the vertex-set of a graph $G$ into modules, and then keep exactly one vertex per module. The resulting quotient graph $G'$ keeps most of the distance properties of $G$. Therefore, in order to solve a distance problem for $G$, it is often the case that we only need to solve it for $G'$. We thus believe that modular decomposition can be a powerful kernelization tool for solving graph problems in P. As an application, we improve the running time of some of our algorithms when we replace the split-width (maximum order of a prime subgraph in the split decomposition) with the modular-width (maximum order of a prime subgraph in the modular decomposition) as the parameter. See Theorem 13.
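To illustrate the kernelization idea on a toy example, here is a minimal Python sketch (the helper names `quotient` and `bfs_dist` are ours, not from the paper) checking that distances between vertices lying in distinct modules coincide with the distances between the corresponding vertices of the quotient graph:

```python
from itertools import combinations

def quotient(adj, modules):
    # One vertex per module; two modules are adjacent when some
    # (equivalently, every) pair of their vertices is adjacent.
    q = {i: set() for i in range(len(modules))}
    for i, j in combinations(range(len(modules)), 2):
        if any(v in adj[u] for u in modules[i] for v in modules[j]):
            q[i].add(j)
            q[j].add(i)
    return q

def bfs_dist(adj, s):
    # Unweighted single-source distances by breadth-first search.
    dist, frontier = {s: 0}, [s]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt
    return dist

# Modules {0, 1}, {2}, {3} in the graph with edges 02, 12, 23: the
# quotient graph is a path on three vertices, and distances between
# vertices of distinct modules agree with the quotient distances.
adj = {0: {2}, 1: {2}, 2: {0, 1, 3}, 3: {2}}
Q = quotient(adj, [{0, 1}, {2}, {3}])
assert bfs_dist(adj, 0)[3] == bfs_dist(Q, 0)[2] == 2
```

The brute-force quotient construction above is only for illustration; computing the modular decomposition itself in linear-time requires the dedicated algorithms cited in Section 2.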
Furthermore, for some more graph problems, it may also be useful to further restrict the internal structures of modules. We briefly explore this possibility through a case study for neighbourhood diversity. Roughly, in this latter case we only consider modules that are either independent sets (false twins) or cliques (true twins). New kernelization results are obtained for Hyperbolicity and Betweenness Centrality when parameterized by the neighbourhood diversity (Theorems 16 and 17, respectively). It is worth pointing out that so far, we have been unable to obtain kernelization results for Hyperbolicity and Betweenness Centrality when only parameterized by the modular-width. It would be very interesting to prove separability results between split-width, modular-width and neighbourhood diversity in the field of fully polynomial parameterized complexity.
Graphs with few $P_4$’s.
We finally use modular decomposition as our main tool for the design of new linear-time algorithms when restricted to graphs with few induced $P_4$’s. The $(q,t)$-graphs have been introduced by Babel and Olariu in . They are the graphs in which no set of at most $q$ vertices can induce more than $t$ paths of length four. Every graph is a $(q,t)$-graph for some large enough values of $q$ and $t$. Furthermore, when $q$ is a fixed constant, the class of $(q,q-4)$-graphs has bounded clique-width . We so define the $P_4$-sparseness of a given graph $G$, denoted by $q(G)$, as the minimum $q$ such that $G$ is a $(q,q-4)$-graph. The structure of the quotient graph of a $(q,q-4)$-graph, $q$ being a constant, has been extensively studied and characterized in the literature [5, 7, 8, 6, 64]. We take advantage of these existing characterizations in order to generalize our algorithms with modular decomposition to algorithms parameterized by the $P_4$-sparseness (Theorems 18 and 20).
Let us give some intuition on how the $P_4$-sparseness can help in the design of improved algorithms for hard graph problems in P. We consider the class of split graphs (i.e., graphs that can be bipartitioned into a clique and an independent set). Deciding whether a given split graph has diameter two or three is hard . However, suppose now that the split graph $G$ is a $(q,q-4)$-graph, for some fixed $q$. An induced $P_4$ in $G$ has its two ends in the independent set and its two middle vertices in the clique. Furthermore, when $G$ is a $(q,q-4)$-graph, it follows from the characterizations of [5, 7, 8, 6, 64] that either it has a quotient graph of bounded order, or it is part of a well-structured subclass where the neighbourhoods of the vertices in the independent set follow a rather nice pattern (namely, spiders and a subclass of $p$-trees, see Section 2). As a result, the diameter of $G$ can be computed in linear-time when $G$ is a split graph and $q$ is a fixed constant. We generalize this result to every $(q,q-4)$-graph by using modular decomposition.
All the parameters considered in this work have already received some attention in the literature, especially in the design of FPT algorithms for NP-hard problems [6, 53, 56, 50, 78]. However, we think we are the first to study clique-width and its upper-bounds for polynomial problems. There do exist linear-time algorithms for Diameter, Maximum Matching and some other problems we study when restricted to some graph classes where the split-width or the $P_4$-sparseness is bounded (e.g., cographs , distance-hereditary graphs [35, 36], $P_4$-tidy graphs , etc.). Nevertheless, we find that the techniques used for these specific subclasses hardly generalize to the case where the graph has split-width or $P_4$-sparseness at most $k$, $k$ being any fixed constant. For instance, the algorithm proposed in  for computing the diameter of a given distance-hereditary graph is based on some properties of LexBFS orderings. Distance-hereditary graphs are exactly the graphs with split-width at most two . However, it does not look that simple to extend the properties found for their LexBFS orderings to bounded split-width graphs in general. As a byproduct of our approach, we also obtain new linear-time algorithms when restricted to well-known graph families such as cographs and distance-hereditary graphs.
Highlight of our Maximum Matching algorithms
Finally, we emphasize our algorithms for Maximum Matching. Here we follow the suggestion of Mertzios, Nichterlein and Niedermeier  that Maximum Matching may become the “drosophila” of the study of the FPT algorithms in P. Precisely, we propose linear-time algorithms for Maximum Matching, for any fixed value of the parameter, when parameterized either by the modular-width or by the $P_4$-sparseness of the graph (Theorems 22 and 24). The latter subsumes many algorithms that have been obtained for specific subclasses [46, 86].
Let us sketch the main lines of our approach. Our algorithms for Maximum Matching are recursive. Given a partition of the vertex-set into modules, first we compute a maximum matching for the subgraph induced by every module separately. Taking the union of all the outputted matchings gives a matching for the whole graph, but this matching is not necessarily maximum. So, we aim at increasing its cardinality by using augmenting paths .
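The augmenting step itself is easy to state. Below is a minimal sketch (the function name `augment` is ours) of flipping matched and unmatched edges along an augmenting path, which increases the cardinality of the matching by exactly one:

```python
def augment(matching, path):
    # `path` is an augmenting path: both endpoints are unmatched and its
    # edges alternate unmatched/matched. Taking the symmetric difference
    # with the matching yields a matching with exactly one more edge.
    path_edges = {frozenset((path[i], path[i + 1])) for i in range(len(path) - 1)}
    return matching ^ path_edges

# Example: the path 0-1-2-3 is augmenting for the matching {12};
# flipping its edges yields the perfect matching {01, 23}.
larger = augment({frozenset((1, 2))}, [0, 1, 2, 3])
```

The difficulty, of course, lies in finding such paths quickly; this is where the structure of the modules comes into play.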
In an unpublished paper , Novick followed a similar approach and, based on an integer programming formulation, obtained an algorithm for Maximum Matching parameterized by the modular-width. Our approach is more combinatorial than his.
Our contribution in this part is twofold. First, we carefully study the possible ways an augmenting path can cross a module. Our analysis reveals that, in order to compute a maximum matching in a graph of bounded modular-width, we only need to consider augmenting paths of bounded length. Then, our second contribution is an efficient way to compute such paths. For that, we design a new type of characteristic graph, whose size only depends on the modular-width. Just as the classical quotient graph keeps most distance properties of the original graph, our new type of characteristic graph is tailored to enclose the main properties of the current matching in the graph. We believe that the design of new types of characteristic graphs can be a crucial tool in the design of improved algorithms for graph classes of bounded modular-width.
We have been able to extend our approach with modular decomposition to a linear-time algorithm for computing a maximum matching in a given $(q,q-4)$-graph, for any fixed $q$. However, a characterization of the quotient graph is not enough to do that. Indeed, we need to go deeper into the $p$-connectedness theory of  in order to better characterize the nontrivial modules in the graphs (Theorem 23). Furthermore, our algorithm for $(q,q-4)$-graphs does not only make use of the algorithm with modular decomposition. On our way to solving this case, we have generalized different methods and reduction rules from the literature [66, 86], which is of independent interest.
We suspect that our algorithm with modular decomposition can be used as a subroutine in order to solve Maximum Matching in linear-time for bounded split-width graphs. However, this is left for future work.
1.2 Organization of the paper
In Section 2 we introduce definitions and basic notations.
Then, in Section 3 we show FPT algorithms when parameterized by the clique-width. The problems considered are Triangle Counting and Girth. To the best of our knowledge, we present the first known polynomial parameterized algorithm for Girth (Theorem 3). Roughly, the main idea behind our algorithms is that, given a labeled graph obtained from a $k$-expression, we can compute a minimum-length cycle by keeping up to date the pairwise distances between every two label classes. Hence, if a $k$-expression of $G$ is given as part of the input, we obtain linear-time algorithms with quadratic dependency on $k$, in both time and space.
In Section 4 we consider distance related problems, namely: Diameter, Eccentricities, Hyperbolicity and Betweenness Centrality.
We start by proving, in Section 4.2, that none of the problems above can be solved in $2^{o(k)} \cdot n^{2-\varepsilon}$-time, for any $\varepsilon > 0$, when parameterized by the clique-width $k$ (Theorems 5–7). These are the first known hardness results for clique-width in the field of “Hardness in P”. Furthermore, as is often the case in this field, our results are conditioned on the Strong Exponential Time Hypothesis . In summary, we take advantage of recent hardness results obtained for bounded-degree graphs . Clique-width and treewidth can only differ by a constant-factor in the class of bounded-degree graphs [28, 59]. Therefore, by combining the hardness constructions for bounded-treewidth graphs and for bounded-degree graphs, we manage to derive hardness results for graph classes of bounded clique-width.
In Section 4.3 we describe fully polynomial FPT algorithms for Diameter, Eccentricity, Hyperbolicity and Betweenness centrality parameterized by the split-width. Our algorithms use split-decomposition as an efficient preprocessing method. Roughly, we define weighted versions for every problem considered (some of them admittedly technical). In every case, we prove that solving the original distance problem can be reduced in linear-time to the solving of its weighted version for every subgraph of the split decomposition separately.
Then, in Section 4.4 we apply the results from Section 4.3 to modular-width. First, since $sw(G) \leq mw(G)$ for any graph $G$ (cf. Lemma 10), all our algorithms parameterized by split-width are also algorithms parameterized by modular-width. Moreover, for Eccentricities, and for Hyperbolicity and Betweenness Centrality when parameterized by the neighbourhood diversity, we show that it is sufficient to only process the quotient graph of $G$. We thus obtain improved running times for all these problems.
In Section 4.5 we generalize our previous algorithms so that they apply to the $(q,q-4)$-graphs. We obtain our results by carefully analyzing the cases where the quotient graph has super-constant size. These cases are given by Lemma 4.
Section 5 is dedicated to our main result: linear-time algorithms for Maximum Matching. First, in Section 5.1, we propose a linear-time algorithm parameterized by the modular-width. In Section 5.2 we generalize this algorithm to the $(q,q-4)$-graphs.
Finally, in Section 6 we discuss applications to other graph classes.
We use standard graph terminology from [15, 34]. Graphs in this study are finite, simple (hence without loops or multiple edges) and unweighted – unless stated otherwise. Furthermore we make the standard assumption that graphs are encoded as adjacency lists.
We want to prove the existence, or the nonexistence, of graph algorithms with running time of the form $k^{O(1)} \cdot (n+m)$, $k$ being some fixed graph parameter. In what follows, we introduce the graph parameters considered in this work.
A labeled graph is given by a pair $\langle G, \ell \rangle$ where $G = (V,E)$ is a graph and $\ell : V \to \{1, \ldots, k\}$ is called a labeling function. A k-expression can be seen as a sequence of operations for constructing a labeled graph $\langle G, \ell \rangle$, where the four allowed operations are:
Addition of a new vertex $v$ with label $i$ (the labels are taken in $\{1, \ldots, k\}$), denoted $i(v)$;
Disjoint union of two labeled graphs $G_1$ and $G_2$, denoted $G_1 \oplus G_2$;
Addition of a join between the set of vertices labeled $i$ and the set of vertices labeled $j$, where $i \neq j$, denoted $\eta_{i,j}$;
Renaming label $i$ to label $j$, denoted $\rho_{i \to j}$.
See Fig. 1 for examples. The clique-width of $G$, denoted by $cw(G)$, is the minimum $k$ such that, for some labeling $\ell$, the labeled graph $\langle G, \ell \rangle$ admits a $k$-expression. We refer to  and the references cited therein for a survey of the many applications of clique-width in the field of parameterized complexity.
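As an illustration, the four operations can be evaluated mechanically. The following toy Python evaluator (class and method names are ours, not from the paper) builds the complete bipartite graph $K_{2,2}$ from a 2-expression, witnessing that its clique-width is at most two:

```python
class LabeledGraph:
    def __init__(self):
        self.label = {}     # vertex -> label
        self.edges = set()  # edges as frozenset pairs

    @staticmethod
    def vertex(v, i):
        # i(v): a new isolated vertex v with label i
        g = LabeledGraph()
        g.label[v] = i
        return g

    def union(self, other):
        # disjoint union of two labeled graphs
        g = LabeledGraph()
        g.label = {**self.label, **other.label}
        g.edges = self.edges | other.edges
        return g

    def join(self, i, j):
        # add every edge between a vertex labeled i and a vertex labeled j
        for u in self.label:
            for v in self.label:
                if self.label[u] == i and self.label[v] == j:
                    self.edges.add(frozenset((u, v)))
        return self

    def rename(self, i, j):
        # relabel every vertex labeled i with j
        self.label = {v: (j if l == i else l) for v, l in self.label.items()}
        return self

# A 2-expression for the complete bipartite graph K_{2,2}.
g = (LabeledGraph.vertex('a', 1).union(LabeledGraph.vertex('b', 1))
     .union(LabeledGraph.vertex('c', 2)).union(LabeledGraph.vertex('d', 2))
     .join(1, 2))
```

Note that the join only ever consults the labels, not the individual vertices; this is precisely why algorithms on bounded clique-width graphs can afford to aggregate information per label class.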
Computing the clique-width of a given graph is NP-hard . However, on a more positive side, the graphs with clique-width at most two are exactly the cographs, and they can be recognized in linear-time [24, 32]. Graphs with clique-width at most three can also be recognized in polynomial-time . The parameterized complexity of computing the clique-width is open. In what follows, we focus on upper-bounds on clique-width that are derived from some graph decompositions.
A module in a graph $G = (V,E)$ is any vertex-subset $M$ such that every vertex $v \notin M$ is either adjacent to every vertex of $M$ or nonadjacent to every vertex of $M$. Note that $\emptyset$, $V$ and the singletons $\{v\}$, $v \in V$, are the trivial modules of $G$. A graph is called prime for modular decomposition if it only has trivial modules.
A module $M$ is strong if it does not overlap any other module, i.e., for any module $M'$ of $G$, either one of $M$ and $M'$ is contained in the other, or $M$ and $M'$ do not intersect. Furthermore, let $\mathcal{M}(G)$ be the family of all inclusion-wise maximal strong modules of $G$ that are proper subsets of $V$. The quotient graph of $G$ is the graph with vertex-set $\mathcal{M}(G)$ and an edge between every two $M, M' \in \mathcal{M}(G)$ such that every vertex of $M$ is adjacent to every vertex of $M'$.
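The module condition is straightforward to check naively. Here is a small sketch (the function name `is_module` is ours) testing whether a vertex-subset is a module:

```python
def is_module(adj, M):
    # M is a module iff every vertex outside M is adjacent either to
    # every vertex of M or to no vertex of M.
    M = set(M)
    return all(len(adj[v] & M) in (0, len(M)) for v in adj if v not in M)

# Triangle 0, 1, 2 with a pendant vertex 3 attached to 2: {0, 1} is a
# module (vertex 2 sees both, vertex 3 sees neither), whereas {1, 2} is
# not (vertex 3 sees only one of its vertices).
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
```

This quadratic check is for illustration only; the linear-time modular decomposition algorithms cited below avoid testing subsets explicitly.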
Modular decomposition is based on the following structure theorem from Gallai.
Theorem 1 ( ).
For an arbitrary graph $G$, exactly one of the following conditions is satisfied:
either $G$ is disconnected; or its complement $\bar{G}$ is disconnected;
or its quotient graph is prime for modular decomposition.
Theorem 1 suggests the following recursive procedure in order to decompose a graph, which is sometimes called modular decomposition. If $G$ is complete, edgeless or prime for modular decomposition, then we output $G$. Otherwise, we output the quotient graph $G'$ of $G$ and, for every strong module $M$ of $G$, the modular decomposition of the induced subgraph $G[M]$. The modular decomposition of a given graph can be computed in linear-time . See Fig. 2 for an example.
Furthermore, by Theorem 1 the subgraphs from the modular decomposition are either edgeless, complete, or prime for modular decomposition. The modular-width of $G$, denoted by $mw(G)$, is the minimum $k$ such that any prime subgraph in the modular decomposition has order (number of vertices) at most $k$. (The term “modular-width” has another meaning in ; we rather follow the terminology from .) The relationship between clique-width and modular-width is as follows.
Lemma 1 ( ).
For every graph $G$, we have $cw(G) \leq mw(G)$, and a corresponding expression defining $G$ can be constructed in linear-time.
We refer to  for a survey on modular decomposition. In particular, graphs with modular-width two are exactly the cographs, which follows from the existence of a cotree . Cographs enjoy many algorithmic properties, including a linear-time algorithm for Maximum Matching . Furthermore, in  Gajarskỳ, Lampis and Ordyniak prove that for some problems that are W[1]-hard when parameterized by clique-width, there exist FPT algorithms when parameterized by modular-width.
A split in a connected graph $G = (V,E)$ is a partition $V = A \cup B$ such that: $\min(|A|, |B|) \geq 2$; and there is a complete join between the vertices of $A$ with a neighbour in $B$ and the vertices of $B$ with a neighbour in $A$. For every split $(A,B)$ of $G$, we can compute a “simple decomposition” of $G$ into the subgraphs $G_A$ and $G_B$, obtained from $G[A]$ and $G[B]$, respectively, by adding a new vertex, termed a split marker vertex, that is made adjacent to every vertex of its side incident to an edge of the split.
There are two cases of “indecomposable” graphs. Degenerate graphs are such that every bipartition of their vertex-set is a split. They are exactly the complete graphs and the stars . A graph is prime for split decomposition if it has no split.
A split decomposition of a connected graph $G$ is obtained by applying simple decompositions recursively, until all the subgraphs obtained are either degenerate or prime. A split decomposition of an arbitrary graph is the union of a split decomposition for each of its connected components. Every graph has a canonical split decomposition, with a minimum number of subgraphs, that can be computed in linear-time . The split-width of $G$, denoted by $sw(G)$, is the minimum $k$ such that any prime subgraph in the canonical split decomposition of $G$ has order at most $k$. See Fig. 3 for an illustration.
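The split condition can likewise be checked naively on a bipartition. A small sketch (the function name `is_split` is ours):

```python
def is_split(adj, A, B):
    # (A, B) is a split iff both sides have at least two vertices and the
    # crossing edges form a complete join between the two boundaries.
    A, B = set(A), set(B)
    if min(len(A), len(B)) < 2:
        return False
    boundary_A = {a for a in A if adj[a] & B}  # vertices of A seen from B
    boundary_B = {b for b in B if adj[b] & A}
    return all(b in adj[a] for a in boundary_A for b in boundary_B)

# On the path 0-1-2-3, ({0, 1}, {2, 3}) is a split (a single crossing
# edge is trivially a join); on the cycle 0-1-2-3-0 it is not, since
# the two crossing edges 1-2 and 0-3 do not form a complete join.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
```

Again, the canonical split decomposition cited above is computed by dedicated linear-time algorithms rather than by testing bipartitions.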
Lemma 2 ( ).
For every graph $G$, its clique-width is upper-bounded by a linear function of its split-width $sw(G)$, and a corresponding expression defining $G$ can be constructed in linear-time.
We refer to [53, 56, 78] for some algorithmic applications of split decomposition. In particular, graphs with split-width at most two are exactly the distance-hereditary graphs . Linear-time algorithms for solving Diameter and Maximum Matching for distance-hereditary graphs are presented in [36, 35].
We stress that split decomposition can be seen as a refinement of modular decomposition. Indeed, if $M$ is a module of $G$ with $\min(|M|, |V \setminus M|) \geq 2$, then $(M, V \setminus M)$ is a split. In what follows, we prove most of our results with the more general split decomposition.
Graphs with few $P_4$’s
A $(q,t)$-graph $G = (V,E)$ is such that for any $S \subseteq V$ with $|S| \leq q$, $S$ induces at most $t$ paths on four vertices . The $P_4$-sparseness of $G$, denoted by $q(G)$, is the minimum $q$ such that $G$ is a $(q,q-4)$-graph.
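For intuition, the defining condition can be verified by brute force on tiny graphs. In the sketch below (the function name `induced_p4_count` is ours), every 4 vertices of the cycle $C_5$ induce exactly one $P_4$:

```python
from itertools import combinations, permutations

def induced_p4_count(adj, S):
    # Brute-force count of the 4-vertex subsets of S inducing a path on
    # four vertices; only meant for tiny graphs.
    count = 0
    for quad in combinations(sorted(S), 4):
        for a, b, c, d in permutations(quad):
            # candidate path a-b-c-d: its three edges present, the rest absent
            if (b in adj[a] and c in adj[b] and d in adj[c]
                    and c not in adj[a] and d not in adj[a] and d not in adj[b]):
                count += 1
                break  # count each vertex subset at most once
    return count

# Every 4 vertices of C5 induce exactly one P4, so C5 is a (4, 1)-graph
# but not a (4, 0)-graph.
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
assert all(induced_p4_count(c5, q) == 1 for q in combinations(range(5), 4))
```

By contrast, the 4-cycle $C_4$ contains a $P_4$ as a subgraph but not as an induced subgraph, so the count for it is zero.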
Lemma 3 ( ).
For every fixed $q$, every $(q,q-4)$-graph has clique-width upper-bounded by a constant depending only on $q$, and a corresponding expression defining it can be computed in linear-time.
The algorithmic properties of several subclasses of the $(q,q-4)$-graphs have been considered in the literature. We refer to  for a survey. Furthermore, there exists a canonical decomposition of $(q,q-4)$-graphs, sometimes called the primeval decomposition, that can be computed in linear-time . Primeval decomposition can be seen as an intermediate between modular and split decomposition. We postpone the presentation of primeval decomposition until Section 5. Until then, we state the results in terms of modular decomposition.
More precisely, given a $(q,q-4)$-graph $G$, the prime subgraphs in its modular decomposition may be of super-constant size. However, if they are, then they belong to one of the well-structured graph classes that we detail next.
A disc is either a cycle $C_n$, or a co-cycle $\bar{C_n}$ (the complement of a cycle), for some $n \geq 5$.
A spider is a graph with vertex set $S \cup K \cup R$ and edge set such that:
$(S, K, R)$ is a partition of the vertex-set and $R$ may be empty;
the subgraph induced by $K$ and $R$ is the complete join $K \times R$, and $K$ separates $S$ and $R$, i.e., any path from a vertex in $S$ to a vertex in $R$ contains a vertex in $K$;
$S$ is a stable set, $K$ is a clique, $|S| = |K| \geq 2$, and there exists a bijection $f : S \to K$ such that, either $N(s) \cap K = \{f(s)\}$ for all vertices $s \in S$, or $N(s) \cap K = K \setminus \{f(s)\}$ for all vertices $s \in S$. Roughly speaking, the edges between $S$ and $K$ are either a matching or an anti-matching. In the former case, or if $|S| = |K| = 2$, the spider is called thin, otherwise it is thick. See Fig. 4.
If furthermore $|R| \leq 1$, then we call the spider a prime spider.
Let $P_k$ be a path of length at least five. A spiked $p$-chain $P_k$ is a supergraph of $P_k$, possibly with additional vertices attached to prescribed vertices of the path. See Fig. 5. Note that one or both of the additional vertices may be missing; in particular, $P_k$ itself is a spiked $p$-chain $P_k$. A spiked $p$-chain $\bar{P_k}$ is the complement of a spiked $p$-chain $P_k$.
Let $Q_k$ be the graph with vertex-set $\{v_1, v_2, \ldots, v_k\}$ whose adjacencies follow a fixed pattern. A spiked $p$-chain $Q_k$ is a supergraph of $Q_k$, possibly with additional vertices.
Any of the additional vertices can be missing, so, in particular, $Q_k$ itself is a spiked $p$-chain $Q_k$. See Fig. 6. A spiked $p$-chain $\bar{Q_k}$ is the complement of a spiked $p$-chain $Q_k$.
Finally, we say that a graph is a prime $p$-tree if it is either: a spiked $p$-chain $P_k$, a spiked $p$-chain $\bar{P_k}$, a spiked $p$-chain $Q_k$, a spiked $p$-chain $\bar{Q_k}$, or one of the seven graphs of bounded order that are listed in .
Lemma 4 ( ).
Let $G$ be a connected $(q,q-4)$-graph such that both $G$ and its complement $\bar{G}$ are connected. Then, one of the following must hold for its quotient graph $G'$:
either $G'$ is a prime spider;
or $G'$ is a disc;
or $G'$ is a prime $p$-tree;
or $G'$ has order at most $q$.
A simpler version of Lemma 4 holds for a more restricted subclass of the $(q,q-4)$-graphs:
Lemma 5 ( ).
Let $G$ be a connected graph of this subclass such that both $G$ and its complement $\bar{G}$ are connected. Then, one of the following must hold for its quotient graph $G'$:
$G'$ is a prime spider;
This subclass has received more attention in the literature than the general $(q,q-4)$-graphs. Our results hold for the more general case.
3 Cycle problems on bounded clique-width graphs
Clique-width is the smallest parameter considered in this work. We start by studying the possibility of $k^{O(1)} \cdot (n+m)$-time algorithms on graphs with clique-width at most $k$. Positive results are obtained for two variations of Triangle Detection, namely Triangle Counting and Girth. We define the problems studied in Section 3.1, then we describe the algorithms solving these problems in Section 3.2.
3.1 Problems considered
We start introducing our basic cycle problem.
Problem 1 (Triangle Detection). Input: A graph $G = (V,E)$. Question: Does there exist a triangle in $G$?
Note that for general graphs, Triangle Detection is conjectured not to be solvable in $O(n^{3-\varepsilon})$-time, for any $\varepsilon > 0$, with a combinatorial algorithm . It is also conjectured not to be solvable in $O(n^{\omega-\varepsilon})$-time for any $\varepsilon > 0$, with $\omega$ being the exponent of fast matrix multiplication . Our results in this section show that such assumptions do not hold when restricted to bounded clique-width graphs.
More precisely, we next describe fully polynomial parameterized algorithms for the two following generalizations of Triangle Detection.
Problem 2 (Triangle Counting). Input: A graph $G = (V,E)$. Output: The number of triangles in $G$.
Problem 3 (Girth). Input: A graph $G = (V,E)$. Output: The girth of $G$, that is, the minimum size of a cycle in $G$.
In , Triangle Detection, Triangle Counting and Girth are all proved to be subcubic equivalent when restricted to combinatorial algorithms.
Roughly, our algorithms in what follows are based on the following observation. Given a labeled graph (obtained from a $k$-expression), in order to detect a triangle, resp. a minimum-length cycle, we only need to store the adjacencies, resp. the distances, between every two label classes. Hence, if a $k$-expression of $G$ is given as part of the input, we obtain linear-time algorithms with quadratic dependency on $k$, in both time and space.
Our first result is for Triangle Counting (Theorem 2). It shares some similarities with a recent algorithm for listing all the triangles in a graph. However, unlike that algorithm, ours does not need the notion of modules. Furthermore, since we only ask to count the triangles, and not to list them, we obtain a better time complexity.
Theorem 2. For every $k$, Triangle Counting can be solved in time linear in $n+m$, with an overhead polynomial in $k$, if a $k$-expression of $G$ is given.
We need to assume the $k$-expression is irredundant, that is, whenever we add a complete join between the vertices labeled $i$ and the vertices labeled $j$, there was no edge before between these two subsets. Given a $k$-expression of $G$, an irredundant $k$-expression can be computed in linear time. Then, we proceed by dynamic programming on the irredundant $k$-expression.
More precisely, let $G$ be a labeled graph with label classes $V_1, V_2, \ldots, V_k$. We denote by $t(G)$ the number of triangles in $G$. In particular, $t(G) = 0$ if $G$ is empty. Furthermore, $t(G') = t(G)$ if $G'$ is obtained from $G$ by: the addition of a new vertex with any label, or the identification of two labels. If $G$ is the disjoint union of $G_1$ and $G_2$ then $t(G) = t(G_1) + t(G_2)$.
Finally, suppose that $G'$ is obtained from $G$ by adding a complete join between the set $V_i$ of vertices labeled $i$ and the set $V_j$ of vertices labeled $j$. For every $i', j'$, we denote by $m_{i',j'}$ the number of edges in $G$ with one end in $V_{i'}$ and the other end in $V_{j'}$. Let $p_{i,j}$ be the number of (not necessarily induced) $P_3$'s with one end in $V_i$ and the other end in $V_j$. Note that we are only interested in the number of induced $P_3$'s for our algorithm, but this looks more challenging to compute. Nevertheless, since the $k$-expression is irredundant, $p_{i,j}$ is exactly the number of induced $P_3$'s with one end in $V_i$ and the other in $V_j$. Furthermore, after the join is added we get: $|V_j|$ new triangles per edge in $G[V_i]$, $|V_i|$ new triangles per edge in $G[V_j]$, and one triangle for every $P_3$ with one end in $V_i$ and the other in $V_j$. Summarizing: $t(G') = t(G) + |V_j| \cdot m_{i,i} + |V_i| \cdot m_{j,j} + p_{i,j}$.
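The join step above amounts to a constant-time arithmetic update once the counts are available. A minimal Python sketch, where the table names size, m, p are hypothetical (ours, not from the text):

```python
def triangles_after_join(t, size, m, p, i, j):
    """Triangle count after a complete join between label classes i and j,
    assuming an irredundant expression (no edge between V_i and V_j before).
    t: triangle count before the join; size[x] = |V_x|;
    m[x][y]: number of edges with one end labeled x and the other labeled y;
    p[x][y]: number of (not necessarily induced) P_3's with one end labeled x
    and the other end labeled y."""
    return t + size[j] * m[i][i] + size[i] * m[j][j] + p[i][j]
```

For instance, with $V_i = \{a, b\}$, edge $ab$, and $V_j = \{c\}$, the join creates the single triangle $abc$.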
In order to derive the claimed time bound, we are now left to prove that, after any operation, we can update the values $m_{i',j'}$, $p_{i',j'}$ in time polynomial in $k$. Clearly, these values cannot change when we add a new (isolated) vertex, with any label, and they can be updated by simple summation when we take the disjoint union of two labeled graphs. We now need to distinguish between the two remaining cases. In what follows, let $m_{i',j'}$ and $p_{i',j'}$ represent the former values, and let $m'_{i',j'}$ and $p'_{i',j'}$ represent the updated ones.
Suppose that label $i$ is identified with label $j$ (every vertex labeled $i$ receives label $j$). Then, for every $\ell \notin \{i, j\}$: $m'_{j,\ell} = m_{j,\ell} + m_{i,\ell}$ and $p'_{j,\ell} = p_{j,\ell} + p_{i,\ell}$; moreover, $m'_{j,j} = m_{j,j} + m_{i,j} + m_{i,i}$ and $p'_{j,j} = p_{j,j} + p_{i,j} + p_{i,i}$, while all the values indexed by $i$ become zero.
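The identification step merges the row and column of label $i$ into those of label $j$. A Python sketch under the same hypothetical tables size, m, p as before (symmetric storage assumed):

```python
def update_after_relabel(size, m, p, i, j, k):
    """Identify label i with label j: every vertex labeled i gets label j.
    m[x][y] counts edges, p[x][y] counts P_3's, both stored symmetrically."""
    for l in range(k):
        if l in (i, j):
            continue
        m[j][l] += m[i][l]; m[l][j] = m[j][l]
        p[j][l] += p[i][l]; p[l][j] = p[j][l]
    # Edges (and P_3's) inside or between the two merged classes now lie
    # inside the single class j.
    m[j][j] += m[i][i] + m[i][j]
    p[j][j] += p[i][i] + p[i][j]
    size[j] += size[i]
    size[i] = 0
    for l in range(k):
        m[i][l] = m[l][i] = 0
        p[i][l] = p[l][i] = 0
```

This touches $O(k)$ table entries, as required for the claimed update cost.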
Otherwise, suppose that we add a complete join between the set $V_i$ of vertices labeled $i$ and the set $V_j$ of vertices labeled $j$. Then, since the $k$-expression is irredundant: $m'_{i,j} = |V_i| \cdot |V_j|$, and all the other edge counts are unchanged.
For every $u \in V_i$ and $v, v' \in V_j$ we create a new $P_3$, namely $(v, u, v')$. Similarly, for every $u, u' \in V_i$ and $v \in V_j$ we create a new $P_3$, namely $(u, v, u')$. These are the only new $P_3$'s with two edges from the complete join. Furthermore, for every edge $uv$ in $G[V_i]$ and for every $w \in V_j$ we can create the two new $P_3$'s $(w, u, v)$ and $(u, v, w)$. Similarly, for every edge $uv$ in $G[V_j]$ and for every $w \in V_i$ we can create the two new $P_3$'s $(w, u, v)$ and $(u, v, w)$. Finally, for every edge $uv$ with $u \in V_i$ and $v \in V_\ell$, $\ell \notin \{i, j\}$, we create $|V_j|$ new $P_3$'s, and for every edge $uv$ with $u \in V_j$ and $v \in V_\ell$, we create $|V_i|$ new $P_3$'s. Altogether combined, we deduce the following update rules: $p'_{j,j} = p_{j,j} + |V_i| \cdot \binom{|V_j|}{2}$; $p'_{i,i} = p_{i,i} + |V_j| \cdot \binom{|V_i|}{2}$; $p'_{i,j} = p_{i,j} + 2|V_j| \cdot m_{i,i} + 2|V_i| \cdot m_{j,j}$; and, for every $\ell \notin \{i, j\}$, $p'_{j,\ell} = p_{j,\ell} + |V_j| \cdot m_{i,\ell}$ and $p'_{i,\ell} = p_{i,\ell} + |V_i| \cdot m_{j,\ell}$.
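The case analysis above for a complete join can be collected into one update routine. Again a sketch with our hypothetical table names size, m, p (symmetric storage assumed):

```python
from math import comb  # binomial coefficients

def update_after_join(size, m, p, i, j, k):
    """Update edge counts m and P_3 counts p after a complete join between
    label classes i and j, assuming an irredundant expression
    (m[i][j] == 0 beforehand)."""
    assert m[i][j] == 0, "expression must be irredundant"
    # P_3's whose two edges both come from the join:
    # (v, u, v') with u in V_i, v, v' in V_j, and symmetrically.
    p[j][j] += size[i] * comb(size[j], 2)
    p[i][i] += size[j] * comb(size[i], 2)
    # P_3's with one join edge and one edge inside V_i or inside V_j.
    p[i][j] += 2 * size[j] * m[i][i] + 2 * size[i] * m[j][j]
    p[j][i] = p[i][j]
    # P_3's with one join edge and one edge leaving V_i (resp. V_j)
    # towards a third class V_l.
    for l in range(k):
        if l in (i, j):
            continue
        p[j][l] += size[j] * m[i][l]; p[l][j] = p[j][l]
        p[i][l] += size[i] * m[j][l]; p[l][i] = p[i][l]
    # The join itself adds |V_i| * |V_j| edges between the two classes.
    m[i][j] = m[j][i] = size[i] * size[j]
```

For example, joining $V_i = \{a, b\}$ (with edge $ab$) to $V_j = \{c, d\}$ (no internal edge) creates the $P_3$'s $(c,a,d)$, $(c,b,d)$ with both ends in $V_j$, the $P_3$'s $(a,c,b)$, $(a,d,b)$ with both ends in $V_i$, and four $P_3$'s using the edge $ab$.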
Our next result is about computing the girth of a graph (the size of a smallest cycle). To the best of our knowledge, the following Theorem 3 gives the first known polynomial parameterized algorithm for Girth.
Theorem 3. For every $k$, Girth can be solved in time linear in $n+m$, with an overhead polynomial in $k$, if a $k$-expression of $G$ is given.
As for Theorem 2, we assume the $k$-expression to be irredundant; this can be enforced with a linear-time preprocessing. We proceed by dynamic programming on the $k$-expression. More precisely, let $G$ be a labeled graph with label classes $V_1, V_2, \ldots, V_k$. We denote by $g(G)$ the girth of $G$. By convention, $g(G) = +\infty$ if $G$ is empty, or more generally if $G$ is a forest. Furthermore, $g(G') = g(G)$ if $G'$ is obtained from $G$ by: the addition of a new vertex with any label, or the identification of two labels. If $G$ is the disjoint union of $G_1$ and $G_2$ then $g(G) = \min\{g(G_1), g(G_2)\}$.
Suppose that $G'$ is obtained from $G$ by adding a complete join between the set $V_i$ of vertices labeled $i$ and the set $V_j$ of vertices labeled $j$. For every $i', j'$, we are interested in the minimum length of a nonempty path with an end in $V_{i'}$ and an end in $V_{j'}$. However, to make our computation easier, we consider a slightly more complicated definition. If $i' \neq j'$ then we define $d_{i',j'}$ as the minimum length of a $V_{i'} V_{j'}$-path of $G$. Otherwise, for $i' = j'$, we define $d_{i',i'}$ as the minimum length taken over all the paths with two distinct ends in $V_{i'}$, and all the nontrivial closed walks that intersect $V_{i'}$ (i.e., there is at least one edge in the walk; we allow repeated vertices or edges, however a same edge does not appear twice consecutively). Intuitively, $d_{i',i'}$ may fail to represent the length of a path only in some cases where a cycle of length at most $d_{i',i'}$ is already ensured to exist in the graph (in which case we need not consider this value). Furthermore, note that such paths or closed walks as defined above may not exist, so we may have $d_{i',j'} = +\infty$. Then, let us consider a minimum-size cycle $C$ of $G'$. We distinguish between four cases.
If $C$ does not contain an edge of the join, then it is a cycle of $G$.
Else, suppose that $C$ contains exactly one edge of the join. Then removing this edge leaves a $V_i V_j$-path in $G$; this path has length at least $d_{i,j}$. Conversely, if $d_{i,j} < +\infty$ then there exists a cycle of length $d_{i,j} + 1$ in $G'$, and so, $g(G') \leq d_{i,j} + 1$.
Else, suppose that $C$ contains exactly two edges of the join. In particular, since $C$ is of minimum size, and so, an induced cycle, the two edges of the join in $C$ must have a common end in the cycle. It implies that removing the two edges from $C$ leaves a path of $G$ with either its two ends in $V_i$ or its two ends in $V_j$. Such paths have respective length at least $d_{i,i}$ and $d_{j,j}$. Conversely, there exist closed walks of respective length $d_{i,i} + 2$ and $d_{j,j} + 2$ in $G'$. Hence, $g(G') \leq \min\{d_{i,i}, d_{j,j}\} + 2$.
Otherwise, $C$ contains at least three edges of the join. Since $C$ is induced, it implies that $C$ is a cycle of length four with two vertices in $V_i$ and two vertices in $V_j$. Such a (not necessarily induced) cycle exists if and only if $|V_i| \geq 2$ and $|V_j| \geq 2$.
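The four cases above combine into a single minimum. A Python sketch, with our hypothetical names g, d, size (we use float('inf') for $+\infty$):

```python
INF = float('inf')  # stands for +infinity (no path / no cycle)

def girth_after_join(g, d, size, i, j):
    """Girth after a complete join between label classes i and j, following
    the four cases: no join edge (g stays), one join edge (d[i][j] + 1),
    two join edges (min(d[i][i], d[j][j]) + 2), and a C4 made of join edges
    (length 4, iff both classes have at least two vertices).
    g: girth before the join; d[x][y]: minimum length of a path (or closed
    walk, if x == y) between the classes, INF if none exists."""
    best = min(g, d[i][j] + 1, d[i][i] + 2, d[j][j] + 2)
    if size[i] >= 2 and size[j] >= 2:
        best = min(best, 4)
    return best
```

For instance, if $V_i = \{a\}$, $V_j = \{b\}$ and $G$ contains the path $a$-$x$-$b$ (so $d_{i,j} = 2$), the join closes it into a triangle.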
In order to derive the claimed time bound, we are now left to prove that, after any operation, we can update the values $d_{i',j'}$ in time polynomial in $k$. Clearly, these values cannot change when we add a new (isolated) vertex, with any label, and they can be updated by taking the minimum values when we take the disjoint union of two labeled graphs. We now need to distinguish between the two remaining cases. In what follows, let $d_{i',j'}$ represent the former values, and $d'_{i',j'}$ the updated ones.
Suppose that label $i$ is identified with label $j$. Then, for every $\ell \notin \{i, j\}$: $d'_{j,\ell} = \min\{d_{j,\ell}, d_{i,\ell}\}$; moreover, $d'_{j,j} = \min\{d_{j,j}, d_{i,i}, d_{i,j}\}$, while all the values indexed by $i$ become irrelevant.
Otherwise, suppose that we add a complete join between the set $V_i$ of vertices labeled $i$ and the set $V_j$ of vertices labeled $j$. The values $d'_{i',j'}$ can only be decreased by using the edges of the join. In particular, using the fact that the $k$-expression is irredundant, we obtain: