1 Introduction
The classification of problems according to their complexity is one of the main goals in computer science. This goal was partly achieved by the theory of NP-completeness, which helps to identify the problems that are unlikely to have polynomial-time algorithms. However, there are still many problems in P for which it is not known if the running time of the best current algorithms can be improved. Such problems arise in various domains such as computational geometry, string matching or graphs. Here we focus on the existence and the design of linear-time algorithms for solving several graph problems when restricted to classes of bounded clique-width. The problems considered comprise the detection of short cycles (e.g., Girth and Triangle Counting), some distance problems (e.g., Diameter, Hyperbolicity, Betweenness Centrality) and the computation of maximum matchings in graphs. We refer to Sections 3.1, 4.1 and 5, respectively, for a recall of their definitions.
Clique-width is an important graph parameter in structural graph theory that intuitively represents the closeness of a graph to a cograph, a.k.a. a P4-free graph [24, 32]. Some classes of perfect graphs, including distance-hereditary graphs, and so trees, have bounded clique-width [57]. Furthermore, clique-width has many algorithmic applications. Many algorithmic schemes and metatheorems have been proposed for classes of bounded clique-width [31, 28, 40]. Perhaps the most famous one is Courcelle's theorem, which states that every graph problem expressible in Monadic Second Order logic (MSO1) can be solved in O(f(k) * n) time when restricted to graphs with clique-width at most k, for some computable function f that only depends on the formula and on k [31]. Some of the problems considered in this work can be expressed as an MSO1 formula. However, the dependency on the clique-width in Courcelle's theorem is superpolynomial, which makes it less interesting for the study of graph problems in P. Our goal is to derive a finer-grained complexity of polynomial graph problems when restricted to classes of bounded clique-width, which requires different tools than Courcelle's theorem.
Our starting point is the recent theory of "Hardness in P" that aims at better hierarchizing the complexity of polynomial-time solvable problems [84]. This approach mimics the theory of NP-completeness. Precisely, since it is difficult to obtain unconditional hardness results, it is natural to obtain hardness results assuming some complexity-theoretic conjectures. In other words, there are key problems that are widely believed not to admit better algorithms, such as 3-SAT (more generally, k-SAT), 3SUM and All-Pairs Shortest Paths (APSP). Roughly, a problem in P is hard if the existence of a faster algorithm for this problem implies the existence of a faster algorithm for one of the fundamental problems mentioned above. In their seminal work, Williams and Williams [85] prove that many important problems in graph theory are all equivalent under subcubic reductions. That is, if one of these problems admits a truly subcubic algorithm, then all of them do. Their results have extended and formalized prior work from, e.g., [51, 67]. The list of such problems was further extended in [1, 17].
Besides purely negative results (i.e., conditional lower bounds), the theory of "Hardness in P" also comes with renewed algorithmic tools in order to leverage the existence, or the nonexistence, of improved algorithms for some graph classes. The tools used to improve the running time for the above-mentioned problems are similar to those used to tackle NP-hard problems, namely approximation and FPT algorithms. Our work is an example of the latter, of which we first survey the most recent results.
Related work: Fully polynomial parameterized algorithms.
FPT algorithms for polynomial-time solvable problems were first considered by Giannopoulou et al. [55]. Such a parameterized approach makes sense for any problem in P for which a conditional hardness result is proved, or simply for which no linear-time algorithm is known. Interestingly, the authors of [55] proved that a matching of cardinality at least k in a graph can be computed in fully polynomial parameterized time (i.e., with polynomial dependency on both k and the size of the graph). We stress that Maximum Matching is a classical and intensively studied problem in computer science [37, 45, 46, 49, 66, 72, 71, 86]. The well-known O(m * sqrt(n))-time algorithm in [72] is essentially the best one so far for Maximum Matching. Approximate solutions were proposed by Duan and Pettie [37].
More related to our work is the seminal paper of Abboud, Williams and Wang [3]. They obtained rather surprising results when using treewidth: another important graph parameter that intuitively measures the closeness of a graph to a tree [13]. Treewidth has tremendous applications in pure graph theory [79] and parameterized complexity [27]. Furthermore, improved algorithms have long been known for "hard" graph problems in P, such as Diameter and Maximum Matching, when restricted to trees [65]. However, it has been shown in [3] that under the Strong Exponential Time Hypothesis, for any eps > 0 there can be no 2^{o(k)} * n^{2-eps}-time algorithm for computing the diameter of graphs with treewidth at most k. This hardness result even holds for pathwidth, which leaves little chance to find an improved algorithm for any interesting subclass of bounded-treewidth graphs while avoiding an exponential blow-up in the parameter. We show that the situation is different for clique-width than for treewidth, in the sense that the hardness results for clique-width do not hold for important subclasses.
We want to stress that a familiar reader could ask why the hardness results above do not apply to clique-width directly, since clique-width is upper-bounded by a function of treewidth [25]. However, clique-width cannot be polynomially upper-bounded by treewidth [25]. Thus, the hardness results from [3] do not preclude the existence of, say, a 2^{o(k)} * n^{2-eps}-time algorithm for computing the diameter of graphs with clique-width at most k.
On a more positive side, the authors in [3] show that Radius and Diameter can be solved in 2^{O(k)} * n^{1+o(1)} time, where k is the treewidth. Husfeldt [62] shows that the eccentricity of every vertex in an undirected graph on n vertices can be computed in time n * exp(O(k * log d)), where k and d are the treewidth and the diameter of the graph, respectively. More recently, a tour de force was achieved by Fomin et al. [44], who were the first to design parameterized algorithms with polynomial dependency on the treewidth, for Maximum Matching and Maximum Flow. Furthermore, they proved that for graphs with treewidth at most k, a tree decomposition of width O(k^2) can be computed in fully polynomial parameterized time. We observe that their algorithm for Maximum Matching is randomized, whereas ours are deterministic.
We are not aware of the study of any parameter other than treewidth for polynomial graph problems. However, some authors choose a different approach, where they study the parameterization of a fixed graph problem for a broad range of graph invariants [11, 43, 71]. As an example, clique-width is part of the graph invariants used in the parameterized study of Triangle Listing [11]. Nonetheless, clique-width is not the main focus of [11]. Recently, Mertzios, Nichterlein and Niedermeier [71] proposed algorithms for Maximum Matching that run in time O(k * (n + m)), for several parameters k such as the feedback vertex set number or the feedback edge set number. Moreover, the authors of [71] suggest that Maximum Matching may become the "drosophila" of the study of FPT algorithms in P. We advance in this research direction.
1.1 Our results
In this paper we study the parameterized complexity of several classical graph problems under a wide range of parameters, namely clique-width and its upper bounds: modular-width [32], split-width [78], neighbourhood diversity [68] and P4-sparseness [8]. The results are summarized in Table 1.
Roughly, it turns out that some hardness assumptions for general graphs do not hold anymore for graph classes of bounded clique-width. This is the case in particular for Triangle Detection and other cycle problems that are subcubic equivalent to it such as, e.g., Girth, which all can be solved in linear time, with quadratic dependency on the clique-width, with the help of dynamic programming (Theorems 2 and 3). The latter complements the results obtained for Triangle Listing in [11]. However, many hardness results for distance problems when using treewidth are proved to also hold when using clique-width (Theorems 5, 6 and 7). These negative results have motivated us to consider some upper bounds for clique-width as parameters, for which better results can be obtained than for clique-width. Another motivation stems from the fact that the existence of a parameterized algorithm for computing the clique-width of a graph remains a challenging open problem [23]. We consider some upper bounds for clique-width that are defined via linear-time computable graph decompositions. Thus, if these parameters are small enough, say, in O(n^{1-eps}) for some eps > 0, we get truly subcubic or even truly subquadratic algorithms for a wide range of problems.
Problem — Parameterized time complexity
Diameter, Eccentricities
Betweenness Centrality
Hyperbolicity
Maximum Matching
Triangle Detection, Triangle Counting, Girth
Graph parameters and decompositions considered
Let us now describe the parameters considered in this work. We only give an informal high-level description here; formal definitions are postponed to Section 2.
Split Decomposition.
A join is a set of edges inducing a complete bipartite subgraph. Roughly, clique-width can be seen as a measure of how easy it is to reconstruct a graph by adding joins between some vertex subsets. A split is a join that is also an edge-cut. By using pairwise non-crossing splits, termed "strong splits", we can decompose any graph into degenerate and prime subgraphs, which can be organized in a tree-like manner. The latter is termed the split decomposition [56].
We take advantage of the tree-like structure of split decomposition in order to design dynamic programming algorithms for distance problems such as Diameter, Gromov Hyperbolicity and Betweenness Centrality (Theorems 8, 9 and 11, respectively). Although clique-width is also related to some tree-like representations of graphs [30], the same cannot be done for clique-width as for split decomposition, because the edges in the tree-like representations for clique-width may not represent a join.
Modular Decomposition.
We can improve the results obtained with split decomposition by further restricting the type of splits considered. As an example, let (A, B) be a bipartition of the vertex set that is obtained by removing a split. If every vertex of B is incident to some edges of the split then A is called a module of G. That is, every vertex v of B is either adjacent or non-adjacent to every vertex of A. The well-known modular decomposition of a graph is a hierarchical decomposition that partitions the vertices of the graph with respect to the modules [60]. Split decomposition is often presented as a refinement of modular decomposition [56]. We formalize the relationship between the two in Lemma 10, which allows us to also apply our methods for split decomposition to modular decomposition.
However, we can often do better with modular decomposition than with split decomposition. In particular, suppose we partition the vertex set of a graph G into modules, and then keep exactly one vertex per module. The resulting quotient graph G' keeps most of the distance properties of G. Therefore, in order to solve a distance problem for G, it is often the case that we only need to solve it for G'. We so believe that modular decomposition can be a powerful kernelization tool for solving graph problems in P. As an application, we improve the running time of some of our algorithms when the parameter is the modular-width (the maximum order of a prime subgraph in the modular decomposition) instead of the split-width (the maximum order of a prime subgraph in the split decomposition). See Theorem 13.
Furthermore, for some more graph problems, it may also be useful to further restrict the internal structure of the modules. We briefly explore this possibility through a case study for neighbourhood diversity. Roughly, in this latter case we only consider modules that are either independent sets (false twins) or cliques (true twins). New kernelization results are obtained for Hyperbolicity and Betweenness Centrality when parameterized by the neighbourhood diversity (Theorems 16 and 17, respectively). It is worth pointing out that so far, we have been unable to obtain kernelization results for Hyperbolicity and Betweenness Centrality when only parameterized by the modular-width. It would be very interesting to prove separation results between split-width, modular-width and neighbourhood diversity in the field of fully polynomial parameterized complexity.
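For intuition, the neighbourhood diversity of a graph is simply its number of twin classes, where two vertices u and v are twins if N(u) \ {v} = N(v) \ {u} (this single condition covers both false twins, with equal open neighbourhoods, and true twins, with equal closed ones). The following quadratic-time Python sketch is our own illustration, not taken from the paper; the adjacency-dict input format and all names are assumptions.

```python
# Sketch: count twin classes of a graph given as {vertex: set of neighbours}.
# Two vertices are merged whenever they are (true or false) twins; the twin
# relation is an equivalence, so pairwise merging with union-find suffices.

def neighbourhood_diversity(adj):
    verts = list(adj)
    parent = {v: v for v in verts}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for i, u in enumerate(verts):
        for v in verts[i + 1:]:
            if adj[u] - {v} == adj[v] - {u}:  # u and v are twins
                parent[find(u)] = find(v)
    return len({find(v) for v in verts})
```

For example, in a star the leaves form one false-twin class and the centre another, so the neighbourhood diversity is 2; in a complete graph all vertices are true twins and it is 1.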
Graphs with few P4's.
We finally use modular decomposition as our main tool for the design of new linear-time algorithms when restricted to graphs with few induced P4's. The (q, t)-graphs have been introduced by Babel and Olariu in [7]. They are the graphs in which no set of at most q vertices can induce more than t paths on four vertices. Every graph is a (q, q-4)-graph for some large enough value of q. Furthermore, when q is a fixed constant, the class of (q, q-4)-graphs has bounded clique-width [70]. We so define the P4-sparseness of a given graph G, denoted by q(G), as the minimum q such that G is a (q, q-4)-graph. The structure of the quotient graphs of (q, q-4)-graphs, q being a constant, has been extensively studied and characterized in the literature [5, 7, 8, 6, 64]. We take advantage of these existing characterizations in order to generalize our algorithms with modular decomposition to (q, q-4)-graphs (Theorems 18 and 20).
Let us give some intuition on how the P4-sparseness can help in the design of improved algorithms for hard graph problems in P. We consider the class of split graphs (i.e., graphs that can be bipartitioned into a clique and an independent set). Deciding whether a given split graph has diameter 2 or 3 is hard [17]. However, suppose now that the split graph G is a (q, q-4)-graph, for some fixed q. An induced P4 in G has its two ends in the independent set and its two middle vertices in the clique. Furthermore, when G is a (q, q-4)-graph, it follows from the characterizations of [5, 7, 8, 6, 64] that either it has a quotient graph of bounded order, or it is part of a well-structured subclass where the neighbourhoods of the vertices in the independent set follow a rather nice pattern (namely, spiders and a subclass of the prime trees, see Section 2). As a result, the diameter of G can be computed in linear time for any fixed q when G is a split graph. We generalize this result to every (q, q-4)-graph by using modular decomposition.
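To make the baseline concrete: in a connected split graph every pair of vertices is at distance at most 3, and the diameter exceeds 2 exactly when two independent-set vertices have no common neighbour. The quadratic check below is our own sketch of that naive baseline (the one that is hard to beat in general); the function name and the adjacency-dict input format are assumptions.

```python
# Sketch: decide whether a connected split graph has diameter at most 2.
# adj maps each vertex to its set of neighbours; I is the independent set.
# Clique-clique pairs are at distance 1, clique-independent pairs at
# distance <= 2, so only independent-independent pairs can reach distance 3.
from itertools import combinations

def split_graph_diameter_at_most_two(adj, I):
    # diameter <= 2 iff every pair of I-vertices has a common neighbour
    return all(adj[u] & adj[v] for u, v in combinations(I, 2))
```

On a star (clique {c}, independent set {a, b}) the answer is True (diameter 2), whereas on the path a - x - y - b with clique {x, y} it is False (a and b are at distance 3).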
All the parameters considered in this work have already received some attention in the literature, especially in the design of FPT algorithms for NP-hard problems [6, 53, 56, 50, 78]. However, we think we are the first to study clique-width and its upper bounds for polynomial problems. There do exist linear-time algorithms for Diameter, Maximum Matching and some other problems we study when restricted to some graph classes where the split-width or the P4-sparseness is bounded (e.g., cographs [86], distance-hereditary graphs [35, 36], P4-tidy graphs [46], etc.). Nevertheless, we find that the techniques used for these specific subclasses hardly generalize to the case where the graph has split-width or P4-sparseness at most k, k being any fixed constant. For instance, the algorithm proposed in [36] for computing the diameter of a given distance-hereditary graph is based on some properties of LexBFS orderings. Distance-hereditary graphs are exactly the graphs with split-width at most two [56]. However, it does not look that simple to extend the properties found for their LexBFS orderings to bounded split-width graphs in general. As a byproduct of our approach, we also obtain new linear-time algorithms when restricted to well-known graph families such as cographs and distance-hereditary graphs.
Highlight of our Maximum Matching algorithms
Finally, we emphasize our algorithms for Maximum Matching. Here we follow the suggestion of Mertzios, Nichterlein and Niedermeier [71] that Maximum Matching may become the "drosophila" of the study of FPT algorithms in P. Precisely, we propose O(k^4 * n + m)-time algorithms for Maximum Matching when parameterized either by the modular-width or by the P4-sparseness of the graph (Theorems 22 and 24). The latter subsumes many algorithms that have been obtained for specific subclasses [46, 86].
Let us sketch the main lines of our approach. Our algorithms for Maximum Matching are recursive. Given a partition of the vertex set into modules, we first compute a maximum matching for the subgraph induced by every module separately. Taking the union of all the outputted matchings gives a matching for the whole graph, but this matching is not necessarily maximum. So, we aim at increasing its cardinality by using augmenting paths [12].
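The augmenting step relies on the classical fact that the symmetric difference of a matching with an augmenting path (an alternating path whose two endpoints are unmatched) is a matching with one more edge. A minimal sketch of that flip, with our own representation of matchings as sets of frozenset edges:

```python
# Sketch: augment a matching along an augmenting path.
# path is a vertex sequence v0, v1, ..., v(2t+1) whose edges alternate
# unmatched / matched / ... / unmatched, with v0 and v(2t+1) unmatched.
def augment(matching, path):
    path_edges = {frozenset(e) for e in zip(path, path[1:])}
    # symmetric difference: drop the matched path edges, add the unmatched ones
    return matching ^ path_edges
```

For instance, on the path a - b - c - d with current matching {bc}, augmenting along (a, b, c, d) yields the larger matching {ab, cd}.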
In an unpublished paper [73], Novick followed a similar approach and, based on an integer programming formulation, obtained a fully polynomial parameterized algorithm for Maximum Matching when parameterized by the modular-width. Our approach is more combinatorial than his.
Our contribution in this part is twofold. First, we carefully study the possible ways an augmenting path can cross a module. Our analysis reveals that in order to compute a maximum matching in a graph of modular-width at most k, we only need to consider augmenting paths of length O(k). Then, our second contribution is an efficient way to compute such paths. For that, we design a new type of characteristic graph, of size polynomial in k. Just as the classical quotient graph keeps most distance properties of the original graph, our new type of characteristic graph is tailored to enclose the main properties of the current matching in the graph. We believe that the design of new types of characteristic graphs can be a crucial tool in the design of improved algorithms for graph classes of bounded modular-width.
We have been able to extend our approach with modular decomposition to an O(q^4 * n + m)-time algorithm for computing a maximum matching in a given (q, q-4)-graph. However, a characterization of the quotient graph is not enough to do that. Indeed, we need to go deeper into the p-connectedness theory of [8] in order to better characterize the nontrivial modules in (q, q-4)-graphs (Theorem 23). Furthermore, our algorithm for (q, q-4)-graphs does not only make use of the algorithm with modular decomposition. On our way to solve this case we have generalized different methods and reduction rules from the literature [66, 86], which is of independent interest.
We suspect that our algorithm with modular decomposition can be used as a subroutine in order to solve Maximum Matching in linear time for bounded split-width graphs. However, this is left for future work.
1.2 Organization of the paper
In Section 2 we introduce definitions and basic notations.
Then, in Section 3 we present FPT algorithms when parameterized by the clique-width. The problems considered are Triangle Counting and Girth. To the best of our knowledge, we present the first known polynomial parameterized algorithm for Girth (Theorem 3). Roughly, the main idea behind our algorithms is that, given a labeled graph obtained from a k-expression, we can compute a minimum-length cycle by keeping up to date the pairwise distances between every two label classes. Hence, if a k-expression of length L is given as part of the input, we obtain algorithms running in O(k^2 * L) time and O(k^2) space.
In Section 4 we consider distance-related problems, namely: Diameter, Eccentricities, Hyperbolicity and Betweenness Centrality.
We start by proving, in Section 4.2, that none of the above problems can be solved in 2^{o(k)} * n^{2-eps} time, for any eps > 0, when parameterized by the clique-width k (Theorems 5-7). These are the first known hardness results for clique-width in the field of "Hardness in P". Furthermore, as is often the case in this field, our results are conditioned on the Strong Exponential Time Hypothesis [63]. In summary, we take advantage of recent hardness results obtained for bounded-degree graphs [41]. Clique-width and treewidth can only differ by a constant factor in the class of bounded-degree graphs [28, 59]. Therefore, by combining the hardness constructions for bounded-treewidth graphs and for bounded-degree graphs, we manage to derive hardness results for graph classes of bounded clique-width.
In Section 4.3 we describe fully polynomial FPT algorithms for Diameter, Eccentricities, Hyperbolicity and Betweenness Centrality parameterized by the split-width. Our algorithms use split decomposition as an efficient preprocessing method. Roughly, we define weighted versions of every problem considered (some of them admittedly technical). In every case, we prove that solving the original distance problem can be reduced in linear time to solving its weighted version for every subgraph of the split decomposition separately.
Then, in Section 4.4 we apply the results from Section 4.3 to modular-width. First, since the split-width of any graph is upper-bounded in terms of its modular-width (Lemma 10), all our algorithms parameterized by split-width are also algorithms parameterized by modular-width. Moreover, for Eccentricities, and for Hyperbolicity and Betweenness Centrality when parameterized by the neighbourhood diversity, we show that it is sufficient to process only the quotient graph of G. We thus obtain algorithms whose running time is linear in the size of G, up to a factor depending only on the parameter, for all these problems.
In Section 4.5 we generalize our previous algorithms so that they apply to (q, q-4)-graphs. We obtain our results by carefully analyzing the cases where the quotient graph has superconstant order. These cases are given by Lemma 4.
Section 5 is dedicated to our main result: linear-time algorithms for Maximum Matching, for any fixed value of the parameter. First, in Section 5.1 we propose an algorithm parameterized by the modular-width. In Section 5.2 we generalize this algorithm to (q, q-4)-graphs.
Finally, in Section 6 we discuss applications to other graph classes.
2 Preliminaries
We use standard graph terminology from [15, 34]. Graphs in this study are finite, simple (hence without loops or multiple edges) and unweighted, unless stated otherwise. Furthermore, we make the standard assumption that graphs are encoded as adjacency lists.
We want to prove the existence, or the nonexistence, of graph algorithms with running time of the form O(f(k) * (n + m)), with f a polynomial function and k some fixed graph parameter. In what follows, we introduce the graph parameters considered in this work.
Clique-width
A labeled graph is given by a pair (G, l) where G = (V, E) is a graph and l : V -> {1, ..., k} is called a labeling function. A k-expression can be seen as a sequence of operations for constructing a labeled graph (G, l), where the four allowed operations are:

Addition of a new vertex v with label i (the labels are taken in {1, ..., k}), denoted i(v);

Disjoint union of two labeled graphs G_1 and G_2, denoted G_1 + G_2;

Addition of a join between the set of vertices labeled i and the set of vertices labeled j, where i != j, denoted eta(i, j);

Renaming label i to label j, denoted rho(i, j).
See Fig. 1 for examples. The clique-width of G, denoted by cw(G), is the minimum k such that, for some labeling l, the labeled graph (G, l) admits a k-expression [29]. We refer to [31] and the references cited therein for a survey of the many applications of clique-width in the field of parameterized complexity.
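To make the four operations concrete, here is a minimal Python sketch (our own illustration, not from the paper) that evaluates them eagerly on an explicit labeled graph; the class and method names stand for the operations denoted i(v), disjoint union, eta(i, j) and rho(i, j) above.

```python
# Illustrative evaluator for the four clique-width operations.
# All names (LabeledGraph, add_vertex, union, join, rename) are ours.

class LabeledGraph:
    def __init__(self):
        self.labels = {}    # vertex -> label in {1, ..., k}
        self.edges = set()  # undirected edges as frozensets {u, v}

    def add_vertex(self, v, i):
        # operation i(v): a new isolated vertex with label i
        self.labels[v] = i
        return self

    def union(self, other):
        # disjoint union; vertex names are assumed pairwise distinct
        self.labels.update(other.labels)
        self.edges |= other.edges
        return self

    def join(self, i, j):
        # operation eta(i, j): complete join between label classes i and j
        vi = [v for v, l in self.labels.items() if l == i]
        vj = [v for v, l in self.labels.items() if l == j]
        self.edges |= {frozenset((u, w)) for u in vi for w in vj}
        return self

    def rename(self, i, j):
        # operation rho(i, j): every vertex labeled i gets label j
        for v, l in self.labels.items():
            if l == i:
                self.labels[v] = j
        return self

# A 2-expression for the path x - c - y (paths on three vertices are
# cographs, hence have clique-width two):
g = (LabeledGraph().add_vertex("x", 1)
     .union(LabeledGraph().add_vertex("y", 1))
     .union(LabeledGraph().add_vertex("c", 2))
     .join(1, 2))
```

The final join makes c adjacent to both x and y, and to nothing else, as expected.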
Computing the clique-width of a given graph is NP-hard [42]. However, on a more positive side, the graphs with clique-width two are exactly the cographs, and they can be recognized in linear time [24, 32]. Graphs of clique-width three can also be recognized in polynomial time [23]. The parameterized complexity of computing the clique-width is open. In what follows, we focus on upper bounds on clique-width that are derived from some graph decompositions.
Modular-width
A module in a graph G = (V, E) is any subset M of V such that, for any v in V \ M, either M is contained in N(v) or M and N(v) are disjoint. Note that the empty set, V, and the singletons {v} for every v in V are trivial modules of G. A graph is called prime for modular decomposition if it only has trivial modules.
A module M is strong if it does not overlap any other module, i.e., for any module M' of G, either one of M or M' is contained in the other, or M and M' do not intersect. Furthermore, let M(G) be the family of all inclusion-wise maximal strong modules of G that are proper subsets of V. The quotient graph of G is the graph G' with vertex set M(G) and an edge between every two M, M' in M(G) such that every vertex of M is adjacent to every vertex of M'.
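These two definitions translate directly into code. The following Python sketch (our own; the adjacency-dict format and the names is_module and quotient are assumptions) tests whether a vertex set is a module and builds the quotient graph of a given partition into modules:

```python
# Sketch: modules and quotient graphs, for graphs given as
# {vertex: set of neighbours}.

def is_module(adj, M):
    M = set(M)
    for v in adj:
        if v in M:
            continue
        nbrs_in_M = adj[v] & M
        # every outside vertex must see all of M or none of it
        if nbrs_in_M and nbrs_in_M != M:
            return False
    return True

def quotient(adj, partition):
    # partition: disjoint modules covering the vertex set; because they are
    # modules, one representative per part determines all adjacencies
    reps = [next(iter(M)) for M in partition]
    qadj = {i: set() for i in range(len(partition))}
    for i, u in enumerate(reps):
        for j, w in enumerate(reps):
            if i != j and w in adj[u]:
                qadj[i].add(j)
    return qadj
```

For example, two true twins a, b form a module, and contracting each part of the partition {{a, b}, {c}, {d}} to a single vertex yields the quotient adjacencies.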
Modular decomposition is based on the following structure theorem from Gallai.
Theorem 1 ( [52]).
For an arbitrary graph G, exactly one of the following conditions is satisfied:

G is disconnected;

its complement is disconnected;

its quotient graph G' is prime for modular decomposition.
Theorem 1 suggests the following recursive procedure in order to decompose a graph, sometimes called modular decomposition. If G is complete, edgeless or prime for modular decomposition, then we output G. Otherwise, we output the quotient graph G' of G and, for every strong module M of G, the modular decomposition of the induced subgraph G[M]. The modular decomposition of a given graph can be computed in linear time [83]. See Fig. 2 for an example.
Furthermore, by Theorem 1 the subgraphs from the modular decomposition are either edgeless, complete, or prime for modular decomposition. The modular-width of G, denoted by mw(G), is the minimum k such that any prime subgraph in the modular decomposition has order (number of vertices) at most k. (The term has another meaning in [77]; we rather follow the terminology from [32].) The relationship between clique-width and modular-width is as follows.
Lemma 1 ( [31]).
For every graph G, we have cw(G) <= mw(G), and a mw(G)-expression defining G can be constructed in linear time.
We refer to [60] for a survey on modular decomposition. In particular, graphs with modular-width two are exactly the cographs, which follows from the existence of a cotree [82]. Cographs enjoy many algorithmic properties, including a linear-time algorithm for Maximum Matching [86]. Furthermore, in [50] Gajarský, Lampis and Ordyniak prove that for some hard problems when parameterized by clique-width, there exist FPT algorithms when parameterized by modular-width.
Split-width
A split in a connected graph G = (V, E) is a partition V = A + B such that: min(|A|, |B|) >= 2; and there is a complete join between the vertices of N(A) ∩ B and N(B) ∩ A. For every split (A, B) of G, let a in N(A) ∩ B and b in N(B) ∩ A be arbitrary. The vertices a, b are termed split marker vertices. We can compute a "simple decomposition" of G into the subgraphs G_A = G[A + {a}] and G_B = G[B + {b}].
There are two types of "indecomposable" graphs. Degenerate graphs are such that every bipartition of their vertex set is a split: they are exactly the complete graphs and the stars [33]. A graph is prime for split decomposition if it has no split.
A split decomposition of a connected graph G is obtained by applying simple decompositions recursively, until all the subgraphs obtained are either degenerate or prime. A split decomposition of an arbitrary graph is the union of a split decomposition of each of its connected components. Every graph has a canonical split decomposition, with a minimum number of subgraphs, that can be computed in linear time [20]. The split-width of G, denoted by sw(G), is the minimum k such that any prime subgraph in the canonical split decomposition of G has order at most k. See Fig. 3 for an illustration.
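Concretely, a bipartition (A, B) is a split when both sides have at least two vertices and every boundary vertex of A sees exactly the boundary of B. A small Python sketch of this check (our own illustration; adjacency dicts of sets, names are assumptions):

```python
# Sketch: test whether the bipartition (A, B) of a connected graph is a split,
# i.e. min(|A|, |B|) >= 2 and the edges across (A, B) form a complete
# bipartite graph between the boundary sets N(B) & A and N(A) & B.

def is_split(adj, A, B):
    A, B = set(A), set(B)
    if min(len(A), len(B)) < 2:
        return False
    boundary_A = {u for u in A if adj[u] & B}   # N(B) intersect A
    boundary_B = {v for v in B if adj[v] & A}   # N(A) intersect B
    # complete join: every boundary vertex of A sees all of boundary_B
    return all(adj[u] & B == boundary_B for u in boundary_A)
```

On the 4-cycle a - b - c - d - a, the bipartition ({a, c}, {b, d}) is a split (the cross edges form a complete bipartite graph), whereas ({a, b}, {c, d}) is not.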
Lemma 2 ( [78]).
For every graph G, cw(G) is upper-bounded by a linear function of sw(G), and an expression of the corresponding width defining G can be constructed in linear time.
We refer to [53, 56, 78] for some algorithmic applications of split decomposition. In particular, graphs with split-width at most two are exactly the distance-hereditary graphs [9]. Linear-time algorithms solving Diameter and Maximum Matching for distance-hereditary graphs are presented in [36, 35].
We stress that split decomposition can be seen as a refinement of modular decomposition. Indeed, if M is a module of G with 2 <= |M| <= |V| - 2, then (M, V \ M) is a split. In what follows, we prove most of our results with the more general split decomposition.
Graphs with few P4's
A graph G is a (q, t)-graph if, for any S contained in V with |S| <= q, S induces at most t paths on four vertices [7]. The P4-sparseness of G, denoted by q(G), is the minimum q such that G is a (q, q-4)-graph.
Lemma 3 ( [70]).
For every fixed q, every (q, q-4)-graph has clique-width at most a constant depending only on q, and an expression of that width defining it can be computed in linear time.
The algorithmic properties of several subclasses of (q, q-4)-graphs have been considered in the literature. We refer to [8] for a survey. Furthermore, there exists a canonical decomposition of (q, q-4)-graphs, sometimes called the primeval decomposition, that can be computed in linear time [10]. Primeval decomposition can be seen as an intermediate between modular and split decomposition. We postpone the presentation of primeval decomposition until Section 5. Until then, we state our results in terms of modular decomposition.
More precisely, given a (q, q-4)-graph G, the prime subgraphs in its modular decomposition may be of superconstant order. However, if they are, then they belong to one of the well-structured graph classes that we detail next.
A disc is either a cycle C_n, or a co-cycle (the complement of a cycle) for some n >= 5.
A spider G = (S + K + R, E) is a graph with vertex set S + K + R and edge set E such that:

(S, K, R) is a partition of the vertex set, and R may be empty;

the subgraph induced by K and R is the complete join of K and R, and K separates S and R, i.e., any path from a vertex in S to a vertex in R contains a vertex in K;

S is a stable set, K is a clique, |S| = |K| >= 2, and there exists a bijection f : S -> K such that, either N(s) ∩ K = {f(s)} for all vertices s in S, or N(s) ∩ K = K \ {f(s)} for all vertices s in S. Roughly speaking, the edges between S and K are either a matching or an anti-matching. In the former case, or if |S| = |K| <= 2, G is called thin, otherwise G is thick. See Fig. 4.
If furthermore R is empty, then we call G a prime spider.
Let P_k = (v_1, v_2, ..., v_k) be a path of length at least five. A spiked chain P_k is a supergraph of P_k, possibly with the additional vertices x, y such that N(x) = {v_2, v_3} and N(y) = {v_{k-2}, v_{k-1}}. See Fig. 5. Note that one or both of x and y may be missing. In particular, P_k is itself a spiked chain P_k. A spiked chain co-P_k is the complement of a spiked chain P_k.
Let Q_k be the graph with vertex set {v_1, v_2, ..., v_k} whose adjacencies follow the pattern illustrated in Fig. 6. A spiked chain Q_k is a supergraph of Q_k, possibly with additional vertices attached following the same pattern; any of these additional vertices can be missing, so, in particular, Q_k is itself a spiked chain Q_k. See Fig. 6. A spiked chain co-Q_k is the complement of a spiked chain Q_k.
Finally, we say that a graph is a prime tree if it is either: a spiked chain P_k, a spiked chain co-P_k, a spiked chain Q_k, a spiked chain co-Q_k, or one of the seven small graphs that are listed in [70].
Lemma 4 ( [6, 70]).
Let G be a connected (q, q-4)-graph such that G and its complement are connected. Then, one of the following must hold for its quotient graph G':

either G' is a prime spider;

or G' is a disc;

or G' is a prime tree;

or G' has order at most q.
A simpler version of Lemma 4 holds for the subclass of (q, q-3)-graphs:
Lemma 5 ( [6]).
Let G be a connected (q, q-3)-graph such that G and its complement are connected. Then, one of the following must hold for its quotient graph G':

G' is a prime spider;

or G' has order at most q.
The subclass of (q, q-3)-graphs has received more attention in the literature than (q, q-4)-graphs. Our results hold for the more general case of (q, q-4)-graphs.
3 Cycle problems on bounded clique-width graphs
Clique-width is the smallest parameter considered in this work. We start by studying the possibility of fully polynomial parameterized algorithms on graphs with clique-width at most k. Positive results are obtained for two variations of Triangle Detection, namely Triangle Counting and Girth. We define the problems studied in Section 3.1, then we describe the algorithms solving these problems in Section 3.2.
3.1 Problems considered
We start by introducing our basic cycle problem.
Problem 1 (Triangle Detection).
Input: A graph G = (V, E). Question: Does there exist a triangle in G? Note that for general graphs, Triangle Detection is conjectured not to be solvable in O(n^{3-eps}) time, for any eps > 0, with a combinatorial algorithm [85]. It is also conjectured not to be solvable in O(n^{omega-eps}) time for any eps > 0, omega being the exponent of fast matrix multiplication [2]. Our results in this section show that such assumptions do not hold when restricted to bounded clique-width graphs.
More precisely, we next describe fully polynomial parameterized algorithms for the two following generalizations of Triangle Detection.
Problem 2 (Triangle Counting).
Input: A graph G = (V, E). Output: The number of triangles in G.
Problem 3 (Girth).
Input: A graph G = (V, E). Output: The girth of G, that is, the minimum size of a cycle in G. In [85], all three of Triangle Detection, Triangle Counting and Girth are proved to be subcubic equivalent when restricted to combinatorial algorithms.
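For reference, the textbook baseline that the parameterized algorithms of this section are measured against runs a BFS from every vertex and records the shortest cycle closed by a non-tree edge, in O(n(n + m)) time. The sketch below is our own illustration (adjacency-dict input format and names are assumptions):

```python
# Sketch: girth of an undirected simple graph by n breadth-first searches.
# For a start vertex on a shortest cycle, some non-tree edge (u, v) with
# dist[u] + dist[v] + 1 equal to the girth is scanned, and every such
# candidate is at least the girth, so the minimum over all starts is exact.
from collections import deque

def girth(adj):
    best = float("inf")  # stays infinite on forests
    for s in adj:
        dist, parent = {s: 0}, {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v], parent[v] = dist[u] + 1, u  # tree edge
                    q.append(v)
                elif parent[u] != v:
                    # non-tree edge: closes a cycle through s of length
                    # at most dist[u] + dist[v] + 1
                    best = min(best, dist[u] + dist[v] + 1)
    return best
```

For instance, the 5-cycle has girth 5, and a triangle with a pendant vertex has girth 3.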
3.2 Algorithms
Roughly, our algorithms in what follows are based on the following observation. Given a labeled graph (obtained from a expression), in order to detect a triangle in , resp. a minimum-length cycle in , we only need to store the adjacencies, resp. the distances, between every two label classes. Hence, if a expression of length is given as part of the input, we obtain algorithms running in time and space .
Our first result is for Triangle Counting (Theorem 2). It shares some similarities with a recent algorithm for listing all triangles in a graph [11]. However, unlike the authors of [11], we do not need the notion of modules in our algorithm. Furthermore, since we only ask to count the triangles, and not to list them, we obtain a better time complexity than in [11].
Theorem 2.
For every , Triangle Counting can be solved in time if a expression of is given.
Proof.
We need to assume the expression is irredundant, that is, when we add a complete join between the vertices labeled and the vertices labeled , there was previously no edge between these two subsets. Given a expression of , an irredundant expression can be computed in linear time [32]. Then, we proceed by dynamic programming on the irredundant expression.
More precisely, let be a labeled graph, . We denote by the number of triangles in . In particular, if is empty. Furthermore, if is obtained from by the addition of a new vertex (with any label) or by the identification of two labels. If is the disjoint union of and , then .
Finally, suppose that is obtained from by adding a complete join between the set of vertices labeled and the set of vertices labeled . For every , we denote by the number of edges in with one end in and the other end in . Let be the number of (not necessarily induced) ’s with one end in and the other end in . Note that for our algorithm we are only interested in the number of induced ’s, but these look more challenging to compute. Nevertheless, since the expression is irredundant, is exactly the number of induced ’s with one end in and the other in . Furthermore, after the join is added, we get: new triangles per edge in , new triangles per edge in , and one triangle for every with one end in and the other in . Summarizing:
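In explicit, though here assumed, notation: writing $t(G)$ for the number of triangles of $G$, $m_{a,b}$ for the number of edges with one end labeled $a$ and the other labeled $b$, $p_{i,j}$ for the number of $P_3$'s with one end labeled $i$ and the other labeled $j$, and $V_a$ for the set of vertices labeled $a$, the case analysis above can be summarized as:

```latex
t\bigl(\eta_{i,j}(G)\bigr) \;=\; t(G) \;+\; |V_j|\cdot m_{i,i} \;+\; |V_i|\cdot m_{j,j} \;+\; p_{i,j}.
```

The symbols $t$, $m_{a,b}$, $p_{i,j}$, $\eta_{i,j}$ are our own shorthand for the quantities defined in the proof, not necessarily the paper's notation.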
In order to derive the claimed time bound, we are now left to prove that, after any operation, we can update the values , in time. Clearly, these values cannot change when we add a new (isolated) vertex, with any label, and they can be updated by simple summation when we take the disjoint union of two labeled graphs. We now need to distinguish between the two remaining cases. In what follows, let and represent the former values.

Suppose that label is identified with label . Then:

Otherwise, suppose that we add a complete join between the set of vertices labeled and the set of vertices labeled . Then, since the expression is irredundant:
For every and , we create a new . Similarly, for every and , we create a new . These are the only new ’s with two edges from the complete join. Furthermore, for every edge in and for every , we can create the two new ’s and . Similarly, for every edge in and for every , we can create the two new ’s and . Finally, for every edge with , we create new ’s, and for every edge with , we create new ’s. Altogether, we deduce the following update rules:
∎
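To make the dynamic programming concrete, the following self-contained Python sketch implements the update rules above, under our own (assumed) encoding of expressions as nested tuples: ('v', i) creates a vertex labeled i, ('u', e1, e2) is disjoint union, ('r', i, j, e) relabels i into j, and ('j', i, j, e) adds an irredundant complete join between labels i and j. The counters n, m, p mirror the quantities used in the proof; none of the identifiers below come from the paper.

```python
from collections import defaultdict

def key(a, b):
    """Unordered label pair, stored sorted."""
    return (a, b) if a <= b else (b, a)

def count_triangles(expr):
    """DP over a cliquewidth expression. Returns (t, n, m, p) where
    t      -- number of triangles created so far,
    n[a]   -- number of vertices currently labeled a,
    m[a,b] -- number of edges with one end labeled a, the other b,
    p[a,b] -- number of (not necessarily induced) P3's whose ends are
              labeled a and b.
    Joins are assumed irredundant: m[i,j] == 0 before a join on (i, j)."""
    op = expr[0]
    if op == 'v':                                  # new vertex labeled expr[1]
        n = defaultdict(int)
        n[expr[1]] = 1
        return 0, n, defaultdict(int), defaultdict(int)
    if op == 'u':                                  # disjoint union: sum counters
        t1, n1, m1, p1 = count_triangles(expr[1])
        t2, n2, m2, p2 = count_triangles(expr[2])
        for d1, d2 in ((n1, n2), (m1, m2), (p1, p2)):
            for k2, v in d2.items():
                d1[k2] += v
        return t1 + t2, n1, m1, p1
    if op == 'r':                                  # relabel i -> j
        _, i, j, sub = expr
        t, n, m, p = count_triangles(sub)
        n[j] += n.pop(i, 0)
        for d in (m, p):
            for (a, b), v in list(d.items()):
                if i in (a, b) and v:
                    del d[(a, b)]
                    d[key(j if a == i else a, j if b == i else b)] += v
        return t, n, m, p
    # op == 'j': irredundant complete join between labels i and j
    _, i, j, sub = expr
    t, n, m, p = count_triangles(sub)
    ni, nj = n[i], n[j]
    # New triangles: one per P3 with ends labeled i and j, plus one per
    # edge inside V_i (resp. V_j) and vertex of V_j (resp. V_i).
    t += p[key(i, j)] + m[(i, i)] * nj + m[(j, j)] * ni
    old = dict(m)                                  # edge counts before the join
    # New P3's made of two join edges (center on the other side):
    p[(i, i)] += nj * ni * (ni - 1) // 2
    p[(j, j)] += ni * nj * (nj - 1) // 2
    # New P3's made of one join edge and one edge inside V_i or V_j:
    p[key(i, j)] += 2 * old.get((i, i), 0) * nj + 2 * old.get((j, j), 0) * ni
    # New P3's made of one join edge and one edge leaving V_i or V_j:
    for (a, b), v in old.items():
        for end in (a, b):
            other = b if end == a else a
            if end == i and other not in (i, j):
                p[key(other, j)] += v * nj
            elif end == j and other not in (i, j):
                p[key(other, i)] += v * ni
    m[key(i, j)] += ni * nj
    return t, n, m, p
```

For instance, building a triangle as three differently labeled vertices followed by the three pairwise joins yields a count of one.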
Our next result is about computing the girth of a graph (the size of a smallest cycle). To the best of our knowledge, the following Theorem 3 gives the first known polynomial parameterized algorithm for Girth.
Theorem 3.
For every , Girth can be solved in time if a expression of is given.
Proof.
As for Theorem 2, we assume the expression to be irredundant; this can be enforced by a linear-time preprocessing [32]. We proceed by dynamic programming on the expression. More precisely, let be a labeled graph, . We denote by the girth of . By convention, if is empty, or more generally if is a forest. Furthermore, if is obtained from by the addition of a new vertex (with any label) or by the identification of two labels. If is the disjoint union of and , then .
Suppose that is obtained from by adding a complete join between the set of vertices labeled and the set of vertices labeled . For every , we are interested in the minimum length of a nonempty path with one end in and one end in . However, to ease the computation, we consider a slightly more complicated definition. If , then we define as the minimum length of a path of . Otherwise, , and we define as the minimum length taken over all the paths with two distinct ends in , and over all the nontrivial closed walks that intersect (i.e., the walk contains at least one edge; repeated vertices and edges are allowed, but the same edge does not appear twice consecutively). Intuitively, may fail to represent the length of a path only in some cases where a cycle of length at most is already ensured to exist in the graph (in which case we need not consider this value). Furthermore, note that such paths or closed walks may not exist, so we may have . Then, let us consider a minimum-size cycle of . We distinguish between four cases.

If does not contain an edge of the join, then it is a cycle of .

Else, suppose that contains exactly one edge of the join. Then removing this edge leaves a path in ; this path has length at least . Conversely, if then there exists a cycle of length in , and so, .

Else, suppose that contains exactly two edges of the join. In particular, since is of minimum size, and so an induced cycle, the two edges of the join in must have a common end in the cycle. It implies that removing these two edges from leaves a path of with either both ends in or both ends in . Such paths have respective lengths at least and . Conversely, there exist closed walks of respective lengths and in . Hence, .

Otherwise, contains at least three edges of the join. Since is induced, this implies that is a cycle of length four with two vertices in and two vertices in . Such a (not necessarily induced) cycle exists if and only if .
Summarizing:
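In explicit, though here assumed, notation: writing $g(G)$ for the girth and $d_{a,b}$ for the minimum lengths defined above, the four cases combine into

```latex
g\bigl(\eta_{i,j}(G)\bigr) \;=\; \min\Bigl(\, g(G),\;\; d_{i,j}+1,\;\; d_{i,i}+2,\;\; d_{j,j}+2,\;\; 4 \ \text{if } |V_i|\ge 2 \text{ and } |V_j|\ge 2 \,\Bigr),
```

where the four terms after $g(G)$ correspond, respectively, to cycles using exactly one, exactly two (with common end in $V_i$), exactly two (with common end in $V_j$), and at least three edges of the join. The symbols $g$, $d_{a,b}$, $\eta_{i,j}$ are our own shorthand for the quantities defined in the proof, not necessarily the paper's notation.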
In order to derive the claimed time bound, we are now left to prove that, after any operation, we can update the values , in time. Clearly, these values cannot change when we add a new (isolated) vertex, with any label, and they can be updated by taking the minimum values when we take the disjoint union of two labeled graphs. We now need to distinguish between the two remaining cases. In what follows, let represent the former values.

Suppose that label is identified with label . Then:

Otherwise, suppose that we add a complete join between the set of vertices labeled and the set of vertices labeled . The values can only be decreased by using the edges of the join. In particular, using the fact that the expression is irredundant, we obtain: