# Typical Sequences Revisited - Algorithms for Linear Orderings of Series Parallel Digraphs

In this paper, we show that the Cutwidth, Modified Cutwidth, and Vertex Separation problems can be solved in O(n^2) time for series parallel digraphs on n vertices. To obtain the result, we give a lemma of independent interest on merges of typical sequences, a notion that was introduced in 1991 [Lagergren and Arnborg, Bodlaender and Kloks, both ICALP '91] to obtain constructive linear time parameterized algorithms for treewidth and pathwidth.

## 1 Introduction

In this paper, we show that the Cutwidth, Modified Cutwidth, and Vertex Separation problems can be solved in polynomial, or, more precisely, O(n^2) time for series parallel digraphs on n vertices. The result is obtained by revisiting an old key technique from what are currently the theoretically fastest parameterized algorithms for treewidth and pathwidth, namely the use of typical sequences, and by giving additional structural insights for this technique. In particular, we show a structural lemma, which we call the Merge Dominator Lemma. The technique of typical sequences brings with it a partial ordering on sequences of integers, and a notion of possible merges of two integer sequences; surprisingly, the Merge Dominator Lemma states that for any pair of integer sequences there exists a merge that dominates all merges of these integer sequences, and this dominating merge can be found in linear time. While this lemma (so far) does not lead to asymptotically faster parameterized algorithms for treewidth and pathwidth, it can be used to obtain a number of unexpected algorithmic results. Based upon the Merge Dominator Lemma, we are able to show that the directed vertex separation number, directed cutwidth, and directed modified cutwidth can be computed in O(n^2) time on series parallel digraphs.

The notion of typical sequences was introduced independently in 1991 by Lagergren and Arnborg [10] and Bodlaender and Kloks [5]. In both papers, the notion is a key element in an explicit dynamic programming algorithm that, given a tree decomposition of bounded width, decides if the pathwidth or treewidth of the input graph is at most a given constant. Lagergren and Arnborg build upon this result and show that the set of forbidden minors of graphs of treewidth (or pathwidth) at most a fixed constant is computable; Bodlaender and Kloks show that the algorithm can also construct a tree or path decomposition of the required width, if one exists, in the same asymptotic time bounds. The latter result is a main subroutine in Bodlaender’s linear time algorithm [1] for deciding whether the treewidth is at most a constant. If one analyses the running time of Bodlaender’s algorithm for treewidth or pathwidth, one observes that the bottleneck is the subroutine that calls the Bodlaender-Kloks dynamic programming algorithm, for both treewidth and pathwidth. See also the recent work by Fürer for pathwidth [8]. Now, over a quarter of a century after the discovery of these results, these bounds still are the best known as a function of the parameter, i.e., no asymptotically faster parameterized algorithm for treewidth or pathwidth is known. An interesting question, and a long-standing open problem in the field [2, Problem 2.7.1], is whether such algorithms can be obtained. Possible approaches to answer this question are to design (e.g. ETH or SETH based) lower bounds, to find an entirely new approach to compute treewidth or pathwidth in a parameterized setting, or to improve upon the dynamic programming algorithms of [10] and [5]. Our Merge Dominator Lemma gives a small improvement for the latter approach, as it reduces the size of tables after a join operation, but this is insufficient to affect the asymptotic running time.

The algorithms of Lagergren and Arnborg [10] and Bodlaender and Kloks [5] are based upon tabulating characteristics of tree or path decompositions of subgraphs of the input graph; a characteristic consists of an intersection model, that tells how the vertices in the current top bag interact, and for each part of the intersection model, a typical sequence of bag sizes. This approach was later used in several follow up results to obtain explicit constructive parameterized algorithms for other graph width measures, like cutwidth [13, 14], branchwidth [6], different types of search numbers like linear width [7], and directed vertex separation number [4]. For the latter, see the discussion below.

Bodlaender and Kloks [5] noted that the parameterized linear time algorithm for pathwidth can also be used to obtain a polynomial time algorithm for pathwidth on graphs of bounded treewidth, or, phrased differently, that pathwidth parameterized by treewidth is in XP. That result follows by noting that the pathwidth of a graph on n vertices is at most O(log n) times its treewidth, and that if the numbers in the typical sequences are logarithmically bounded, then the number of different characteristics in the dynamic programming algorithm is polynomial, which ensures a polynomial running time of the algorithm.

We use the Merge Dominator Lemma to obtain polynomial time algorithms for three linear ordering problems on series parallel digraphs. These are the directed variants of well known linear ordering problems on undirected graphs. In the directed setting, the input graph is a directed acyclic graph, and solutions are restricted to topological orderings, i.e., the tail of each arc must appear before its head in the ordering. The Vertex Separation Number problem on acyclic digraphs has an application in compiler optimization: it is equivalent to scheduling a sequence of expressions (a “basic block” or “straight-line code”) such that the number of used registers is minimized. The problem was shown to be NP-hard by Sethi [11], while Kessler [9] gave a faster exact algorithm. Sethi and Ullman [12] showed in 1970 that the problem is linear time solvable if the acyclic digraph is a tree. The current paper, after almost 50 years, adds an O(n^2) time algorithm for series parallel digraphs.

Our algorithm for Cutwidth of series parallel digraphs has the same structure as the dynamic programming algorithm for undirected Cutwidth (see [3]), but, in addition to obeying the directions of arcs, we have a step that only keeps characteristics that are not dominated by another characteristic in a table of characteristics. Now, with the help of our Merge Dominator Lemma, we can show that in the case of series parallel digraphs, there is a unique dominating characteristic; the dynamic programming algorithm reverts to computing, for each intermediate graph, a single ‘optimal partial solution’. Note that the cutwidth of a directed acyclic graph is at least the maximum indegree or outdegree of a vertex; e.g., a series parallel digraph formed by the parallel composition of k paths with three vertices has k + 2 vertices and cutwidth k. Some additional technical ideas are needed to obtain the algorithms for Modified Cutwidth and Vertex Separation Number for series parallel digraphs.

This paper is organized as follows. In Section 2, we give a number of preliminary definitions, and review existing results, including several results on typical sequences from [5]. In Section 3, we state and prove the main technical result of this work, the Merge Dominator Lemma. Section 4 gives our algorithmic applications of this lemma, and shows that the directed cutwidth, directed modified cutwidth, and directed vertex separation number of a series parallel digraph can be computed in polynomial time. Some final remarks are made in the concluding Section 5.

## 2 Preliminaries

We use the following notation. For two integers a and b with a ≤ b, we let [a, b] = {a, a + 1, …, b}, and for a ≥ 1, we let [a] = [1, a]. If X is a set of size n, then a linear order of X is a bijection π: X → [n]. Given a subset X′ ⊆ X of size n′ ≤ n, we define the restriction of π to X′ as the bijection π|_{X′}: X′ → [n′] such that for all x, y ∈ X′, π|_{X′}(x) < π|_{X′}(y) if and only if π(x) < π(y).

#### Sequences and Matrices.

We denote the elements of a sequence s of length n by s = ⟨s(1), …, s(n)⟩, and we denote the length of s by l(s), i.e. l(s) = n. For two sequences a = ⟨a(1), …, a(m)⟩ and b = ⟨b(1), …, b(n)⟩, we denote their concatenation by a ∘ b = ⟨a(1), …, a(m), b(1), …, b(n)⟩. For two sets of sequences A and B, we let A ∘ B = {a ∘ b | a ∈ A, b ∈ B}. For a sequence s of length n and a set S ⊆ [n], we denote by s[S] the subsequence of s induced by S, i.e. if S = {i_1, …, i_k} with i_1 < … < i_k, then s[S] = ⟨s(i_1), …, s(i_k)⟩.

Let T be a set. A matrix M ∈ T^{m×n} is said to have m rows and n columns. For sets R ⊆ [m] and C ⊆ [n], we denote by M[R, C] the submatrix of M induced by R and C, which consists of all the entries from M whose indices are in R × C. For r ∈ [m] and C ⊆ [n], we use the shorthand ‘M[r, C]’ for ‘M[{r}, C]’, and symmetrically for single columns.

#### Integer Sequences.

Let s be an integer sequence of length n. We use the shorthand ‘max(s)’ for ‘max_{i∈[n]} s(i)’ and ‘min(s)’ for ‘min_{i∈[n]} s(i)’. For an integer k, we denote by max^{(k)}(s) the k-th largest element in s and by argmax^{(k)}(s) the index of the k-th largest element in s, breaking ties by taking the smallest such index. Formally, we define them inductively as follows. We let

1. max^{(1)}(s) = max(s) and argmax^{(1)}(s) = min{i ∈ [n] | s(i) = max(s)},

2. for k > 1, we let I_k = [n] ∖ {argmax^{(1)}(s), …, argmax^{(k−1)}(s)}, and

3. max^{(k)}(s) = max(s[I_k]) and argmax^{(k)}(s) = min{i ∈ I_k | s(i) = max^{(k)}(s)}.

We define min^{(k)}(s) and argmin^{(k)}(s), as well as argmin(s) and argmax(s), accordingly.

###### Definition 1

Let a and b be two integer sequences of the same length n.

1. If for all i ∈ [n], a(i) ≤ b(i), then we write ‘a ≤ b’.

2. We write a + b for the integer sequence ⟨a(1) + b(1), …, a(n) + b(n)⟩, i.e. (a + b)(i) = a(i) + b(i) for all i ∈ [n].

###### Definition 2

Let s = ⟨s(1), …, s(n)⟩ be a sequence of length n. We define the set of extensions of s, denoted by E(s), as the set of sequences that are obtained from s by repeating each of its elements an arbitrary number of times (at least once). Formally, we let E(s) = { s(1)^{t_1} ∘ s(2)^{t_2} ∘ ⋯ ∘ s(n)^{t_n} | t_1, …, t_n ≥ 1 }, where s(i)^{t_i} denotes the sequence consisting of t_i repetitions of s(i).

###### Definition 3

Let a and b be integer sequences. We say that a dominates b, in symbols ‘a ≺ b’, if there are extensions a′ ∈ E(a) and b′ ∈ E(b) of the same length such that a′ ≤ b′. If a ≺ b and b ≺ a, then we say that a and b are equivalent, and we write a ≡ b.

If t is an integer sequence and S is a set of integer sequences, then we say that t dominates S, in symbols ‘t ≺ S’, if for all s ∈ S, t ≺ s.
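Domination can be checked mechanically: a dominates b exactly if there is a monotone alignment of positions, each index advancing by at most one per step, with a(i) ≤ b(j) at every aligned pair. The following dynamic program is our own illustrative sketch (0-indexed lists; not from the paper):

```python
def dominates(a, b):
    """Check whether a dominates b (Definition 3): there exist
    equal-length extensions a' of a and b' of b with a' <= b'
    pointwise, i.e. a monotone staircase over index pairs (i, j)
    from (0, 0) to (m-1, n-1) with a[i] <= b[j] at every cell."""
    m, n = len(a), len(b)
    reach = [[False] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            if a[i] <= b[j]:
                reach[i][j] = (i == j == 0
                               or (i > 0 and reach[i - 1][j])
                               or (j > 0 and reach[i][j - 1])
                               or (i > 0 and j > 0 and reach[i - 1][j - 1]))
    return reach[m - 1][n - 1]
```

For instance, `dominates([1, 3, 1], [2, 4, 2])` holds, while `[1, 4, 1]` and `[2, 3, 2]` are incomparable in either direction.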

###### Remark 1 (Transitivity of ‘≺’)

In [5, Lemma 3.7], it is shown that the relation ‘≺’ is indeed transitive. As this is fairly intuitive, we use this fact without stating it explicitly throughout this text.

###### Definition 4

Let and be two integer sequences. We define the set of all merges of and , denoted by , as

### 2.1 Typical Sequences

We now define typical sequences and restate several lemmas due to Bodlaender and Kloks [5] that will be used throughout this text.

###### Definition 5

Let s be an integer sequence of length n. The typical sequence of s, denoted by τ(s), is obtained from s by an exhaustive application of the following two operations:

1. (Removal of equal consecutive elements.) If there is an index i ∈ [n − 1] such that s(i) = s(i + 1), then we change the sequence s from ⟨s(1), …, s(n)⟩ to ⟨s(1), …, s(i), s(i + 2), …, s(n)⟩.

2. (Typical operation.) If there exist indices i, j ∈ [n] with j ≥ i + 2 such that for all i ≤ k ≤ j, s(i) ≤ s(k) ≤ s(j), or for all i ≤ k ≤ j, s(i) ≥ s(k) ≥ s(j), then we change the sequence s from ⟨s(1), …, s(n)⟩ to ⟨s(1), …, s(i), s(j), …, s(n)⟩, i.e. we remove all elements strictly between s(i) and s(j).
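The two operations can be applied exhaustively in a straightforward, quadratic way; Lemma 2 below improves this to linear time. A sketch under our own 0-indexed conventions:

```python
def typical(s):
    """Compute the typical sequence of s by exhaustively applying
    the two operations of Definition 5 (naive quadratic sketch)."""
    s = list(s)
    changed = True
    while changed:
        changed = False
        # Operation 1: remove one of two equal consecutive elements.
        for i in range(len(s) - 1):
            if s[i] == s[i + 1]:
                del s[i + 1]
                changed = True
                break
        if changed:
            continue
        # Operation 2: if every element between positions i and j is
        # sandwiched between the values s[i] and s[j], remove the
        # elements strictly between them.
        for i in range(len(s)):
            for j in range(i + 2, len(s)):
                lo, hi = min(s[i], s[j]), max(s[i], s[j])
                if all(lo <= s[k] <= hi for k in range(i, j + 1)):
                    del s[i + 1:j]
                    changed = True
                    break
            if changed:
                break
    return s
```

For example, `typical([2, 5, 3, 4, 1])` yields `[2, 5, 1]`: only the alternating extremes survive.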

We summarize several lemmas from [5] regarding integer sequences and typical sequences that we will use in this work.

###### Lemma 1 (Bodlaender and Kloks [5])

Let a and b be two integer sequences.

1. (Cor. 3.11 in [5]). We have that a ≺ b if and only if τ(a) ≺ τ(b).

2. (Lem. 3.13 in [5]). Suppose a and b are of the same length and let q = a + b. Then there is a sequence q′ ∈ τ(a) ⊕ τ(b) such that q′ ≺ q.

3. (Lem. 3.14 in [5]). Let q ∈ a ⊕ b. Then, there is a sequence q′ ∈ τ(a) ⊕ τ(b) such that q′ ≺ q.

4. (Lem. 3.15 in [5]). Let q ∈ τ(a) ⊕ τ(b). Then, there is an integer sequence q′ ∈ a ⊕ b with q′ ≺ q and l(q′) ≤ l(a) + l(b) − 1.

5. (Lem. 3.19 in [5]). Let c and d be two more integer sequences. If a ≺ c and b ≺ d, then for every q ∈ c ⊕ d there is a q′ ∈ a ⊕ b with q′ ≺ q.

Next, we show that given an integer sequence, we can compute its typical sequence in linear time.

###### Lemma 2

Let s be an integer sequence of length n. Then one can compute τ(s), the typical sequence of s, in time O(n).

###### Proof

First, we check for each i ∈ [n − 1] whether s(i) = s(i + 1), and if we find such an index i, we remove s(i + 1). We can therefore assume that no two consecutive elements of the sequence are equal. Next, we find the minimum and maximum of the sequence, and suppose, w.l.o.g., that the minimum occurs before the maximum. We keep a set of marked indices, initially empty, and execute the main loop of the algorithm, which scans the sequence from its first element onwards and marks the indices that survive the typical operation.

After the execution of this loop, we add the indices of the minimum and the maximum to the set of marked indices. We then run the same algorithm starting from the last element and going backwards, to collect the marked indices of the remaining part. It is not difficult to verify that τ(s) precisely consists of the subsequence of s induced by the marked indices, and that the whole procedure takes time O(n).

### 2.2 Directed Acyclic Graphs

A directed graph (or digraph) G is a pair (V, A) of a set of vertices V and a set A of ordered pairs of vertices, called arcs. (If A is a multiset, we call G a multidigraph.) We say that an arc a = (u, v) is directed from u to v, and we call u the tail of a and v the head of a. We use the shorthand ‘uv’ for ‘(u, v)’. A sequence of vertices v_1, …, v_r is called a walk in G if for all i ∈ [r − 1], v_i v_{i+1} ∈ A. A cycle is a walk v_1, …, v_r with v_1 = v_r and all other vertices pairwise distinct. If G does not contain any cycles, then we call G acyclic, or a directed acyclic graph (DAG for short).

Let G be a DAG on n vertices. A topological order of G is a linear order π: V(G) → [n] such that for all arcs uv ∈ A(G), we have that π(u) < π(v). We denote the set of all topological orders of G by Π(G). We now define the width measures studied in this work. Note that we restrict the orderings of the vertices that we consider to topological orderings.

###### Definition 6

Let G be a directed acyclic graph on n vertices and let π be a topological order of G.

1. The cutwidth of π is $\mathsf{cutw}(\pi) \coloneqq \max_{i \in [n]} \lvert\{uv \in A(G) \mid \pi(u) \le i \wedge \pi(v) > i\}\rvert$.

2. The modified cutwidth of π is $\mathsf{mcutw}(\pi) \coloneqq \max_{i \in [n]} \lvert\{uv \in A(G) \mid \pi(u) < i \wedge \pi(v) > i\}\rvert$.

3. The vertex separation number of π is

$$\mathsf{vsn}(\pi) \coloneqq \max_{i \in [n]} \lvert\{u \in V(G) \mid \exists v \in V(G) \colon uv \in A(G) \wedge \pi(u) \le i \wedge \pi(v) > i\}\rvert.$$

We define the cutwidth, modified cutwidth, and vertex separation number of a directed acyclic graph G as the minimum of the respective measure over all topological orders of G.
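The three measures can be transcribed directly from Definition 6. In the sketch below (function and variable names are ours), `order` maps each vertex to its position in [1..n] under a topological order π, and `arcs` is a list of pairs (u, v):

```python
def cutwidth(n, arcs, order):
    """Max over positions i of the number of arcs uv crossing the
    cut, i.e. with pi(u) <= i < pi(v)."""
    return max((sum(1 for u, v in arcs if order[u] <= i < order[v])
                for i in range(1, n)), default=0)

def modified_cutwidth(n, arcs, order):
    """Max over positions i of the number of arcs uv crossing over
    the vertex at position i, i.e. with pi(u) < i < pi(v)."""
    return max((sum(1 for u, v in arcs if order[u] < i < order[v])
                for i in range(1, n + 1)), default=0)

def vertex_separation(n, arcs, order):
    """Max over positions i of the number of distinct vertices at
    position <= i having an out-neighbour at a position > i."""
    return max((len({u for u, v in arcs if order[u] <= i < order[v]})
                for i in range(1, n)), default=0)
```

For the parallel composition of two three-vertex paths (s → x1 → t and s → x2 → t) and any topological order, these give cutwidth 2, matching the indegree bound mentioned in the introduction.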

We now introduce series-parallel digraphs. Note that the following definition coincides with the notion of ‘edge series-parallel multidigraphs’ in [15].

###### Definition 7 (Series-Parallel Digraph (SPD))

A (multi-)digraph G with an ordered pair of terminals (s, t) ∈ V(G) × V(G) is called a series-parallel digraph (SPD), often denoted by (G, s, t), if one of the following holds.

1. G is a single arc directed from s to t, i.e. V(G) = {s, t} and A(G) = {(s, t)}.

2. (G, s, t) can be obtained from two series-parallel digraphs (G_1, s_1, t_1) and (G_2, s_2, t_2) by one of the following operations.

1. Series Composition. G is obtained by taking the disjoint union of G_1 and G_2, identifying t_1 and s_2, and letting s = s_1 and t = t_2. In this case we write (G, s, t) = (G_1, s_1, t_1) ∗ (G_2, s_2, t_2), or simply G = G_1 ∗ G_2.

2. Parallel Composition. G is obtained by taking the disjoint union of G_1 and G_2, identifying s_1 and s_2, and identifying t_1 and t_2, and letting s = s_1 = s_2 and t = t_1 = t_2. In this case we write (G, s, t) = (G_1, s_1, t_1) ∥ (G_2, s_2, t_2), or simply G = G_1 ∥ G_2.

It is not difficult to see that each series-parallel digraph is acyclic. One can naturally associate a notion of decomposition trees with series-parallel digraphs as follows. A decomposition tree is a rooted and ordered binary tree T whose leaves are each labeled with a single arc, and each internal node is either a series node or a parallel node. With each node t of T, having left child ℓ and right child r, we associate an SPD G_t: if t is a leaf, then G_t is the arc it is labeled with, and otherwise G_t = G_ℓ ∗ G_r if t is a series node and G_t = G_ℓ ∥ G_r if t is a parallel node. It is clear that for each SPD G, there is a decomposition tree T with root r such that G = G_r; in that case we say that T yields G. Valdes et al. [15] have shown that one can decide in linear time whether a directed graph G is an SPD and, if so, find a decomposition tree that yields G.
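A decomposition tree can be evaluated bottom-up by implementing the two compositions directly. In this sketch (our own representation, not the paper's), an SPD is a triple `(arcs, s, t)` over integer vertices, and identification of terminals is a simple renaming:

```python
import itertools

_fresh = itertools.count()  # supply of fresh vertex names

def arc():
    """Base case of Definition 7: a single arc on two new vertices."""
    s, t = next(_fresh), next(_fresh)
    return ([(s, t)], s, t)

def series(g1, g2):
    """Series composition: identify t1 with s2; terminals (s1, t2)."""
    a1, s1, t1 = g1
    a2, s2, t2 = g2
    ren = lambda v: t1 if v == s2 else v
    return (a1 + [(ren(u), ren(v)) for u, v in a2], s1, t2)

def parallel(g1, g2):
    """Parallel composition: identify s1 with s2 and t1 with t2."""
    a1, s1, t1 = g1
    a2, s2, t2 = g2
    ren = lambda v: s1 if v == s2 else (t1 if v == t2 else v)
    return (a1 + [(ren(u), ren(v)) for u, v in a2], s1, t1)
```

For example, `parallel(series(arc(), arc()), arc())` builds a two-arc path with a parallel shortcut arc; since arcs are kept in a list, parallel arcs may repeat, matching the multidigraph setting.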

###### Theorem 1 (Valdes et al. [15])

Let G be a directed graph on n vertices and m arcs. There is an algorithm that decides in time O(n + m) whether G is a series-parallel digraph and if so, it outputs a decomposition tree T that yields G.

## 3 The Merge Dominator Lemma

In this section we prove the main technical result of this work. It states that given two integer sequences, one can find in linear time a merge that dominates all merges of those two sequences.

###### Lemma 3 (Merge Dominator Lemma)

Let a and b be integer sequences of length m and n, respectively. There exists a dominating merge of a and b, i.e. an integer sequence t ∈ a ⊕ b such that t ≺ a ⊕ b, and this dominating merge can be computed in time O(m + n).

#### Outline of the proof of the Merge Dominator Lemma.

First, we show that we can restrict our search to finding a dominating path in the merge matrix M of a and b, which, roughly speaking, contains all merges of a and b of length at most m + n − 1. The goal of this step is mainly to increase the intuitive insight into the proofs in this section. Next, we prove the ‘Split Lemma’ (Lemma 6 in Section 3.2), which asserts that we can obtain a dominating path in M by splitting M along a minimum row and a minimum column into a submatrix M_1 that lies in the ‘bottom left’ of M and another submatrix M_2 in the ‘top right’ of M, and appending a dominating path in M_2 to a dominating path in M_1. In M_1, the last row and column are a minimum row and column, respectively, and in M_2, the first row and column are a minimum row and column, respectively. This additional structure is exploited in Section 3.3, where we prove the ‘Chop Lemmas’: they show that a dominating path in M_1 can be found by repeatedly ‘chopping away’ the last two rows or columns of M_1 (and, symmetrically, the first two rows or columns of M_2), remembering a short vertical or horizontal subpath in each step and case. The proofs of the Chop Lemmas only hold when a and b are typical sequences, and in Section 3.4 we present the ‘Split-and-Chop Algorithm’ that computes a dominating path in the merge matrix of two typical sequences. Finally, in Section 3.5, we generalize this result to arbitrary integer sequences, using the Split-and-Chop Algorithm and one additional construction.

We will in fact prove the Merge Dominator Lemma in terms of a stricter notion of domination which we call strong domination. This is not necessary to prove the lemma; however, we will need the result in this stronger form for one of the applications presented in Section 4.
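For small instances, the statement of the lemma can be verified by brute force: enumerate the repetition-free merges as monotone staircase walks, and test domination with a dynamic program following Definition 3. The sketch below is our own 0-indexed illustration, exponential in the input lengths, unlike the linear-time algorithm of the lemma:

```python
def dominates(a, b):
    """a dominates b: some equal-length extensions satisfy a' <= b'."""
    m, n = len(a), len(b)
    reach = [[False] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            if a[i] <= b[j]:
                reach[i][j] = (i == j == 0
                               or (i > 0 and reach[i - 1][j])
                               or (j > 0 and reach[i][j - 1])
                               or (i > 0 and j > 0 and reach[i - 1][j - 1]))
    return reach[m - 1][n - 1]

def path_merges(a, b):
    """Merges read off monotone grid walks (no element repetitions)."""
    m, n, out = len(a), len(b), []
    def walk(i, j, acc):
        acc = acc + [a[i] + b[j]]
        if (i, j) == (m - 1, n - 1):
            out.append(acc)
            return
        for di, dj in ((1, 0), (0, 1), (1, 1)):
            if i + di < m and j + dj < n:
                walk(i + di, j + dj, acc)
    walk(0, 0, [])
    return out

def dominating_merges(a, b):
    """All repetition-free merges dominating every other one."""
    ms = path_merges(a, b)
    return [q for q in ms if all(dominates(q, r) for r in ms)]
```

As the lemma predicts, the result is non-empty on every pair; e.g. `dominating_merges([0, 2], [2, 0])` yields exactly `[[2, 0, 2]]`.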

### 3.1 The Merge Matrix, Paths, and Strong Domination

Let us begin by defining the basic notions of a merge matrix and paths in matrices.

###### Definition 8 (Merge Matrix)

Let a and b be two integer sequences of length m and n, respectively. Then, the merge matrix of a and b is the m × n integer matrix M such that for (i, j) ∈ [m] × [n], M[i, j] = a(i) + b(j).

###### Definition 9 (Path in a Matrix)

Let M be an m × n matrix. A path in M is a sequence P = M[i_1, j_1], …, M[i_t, j_t] of entries from M such that

1. (i_1, j_1) = (1, 1) and (i_t, j_t) = (m, n), and

2. for k ∈ [t − 1], if (i_k, j_k) is the index of P(k) in M, then (i_{k+1}, j_{k+1}) ∈ {(i_k + 1, j_k), (i_k, j_k + 1), (i_k + 1, j_k + 1)}.

We denote by P(M) the set of all paths in M. A sequence that satisfies the second condition but not necessarily the first is called a partial path in M.

A (partial) path is called non-diagonal if the second condition is replaced by the following.

2′. For k ∈ [t − 1], if (i_k, j_k) is the index of P(k) in M, then (i_{k+1}, j_{k+1}) ∈ {(i_k + 1, j_k), (i_k, j_k + 1)}.

We introduce one more notion of domination that applies only to pairs of merges of integer sequences, rather than to pairs of integer sequences, and which is of importance to one of the algorithmic applications presented in Section 4.

###### Definition 10 (Strong Domination Property)

Let M be an integer matrix and let P and Q be two (partial) paths in M. Let P′ ∈ E(P) and Q′ ∈ E(Q) be such that P′ and Q′ have the same length t. We say that P′ has the strong domination property over Q′ if the following holds. For each k ∈ [t], let (i_k, j_k) be the index in M of the element of P of which P′(k) is a copy, and let (i′_k, j′_k) be the corresponding index for Q′(k). Then, i_k ≤ i′_k, or j_k ≤ j′_k, or both.

###### Definition 11 (Strong Domination)

Let M be an integer matrix and let P, Q ∈ P(M). We say that P strongly dominates Q if there are extensions P′ ∈ E(P) and Q′ ∈ E(Q) of the same length such that the following hold.

1. P′ ≤ Q′.

2. P′ has the strong domination property over Q′.

If P strongly dominates Q, then we write P ≺_s Q. If additionally Q also strongly dominates P, we write P ≡_s Q. If P strongly dominates all paths in M, we write P ≺_s P(M).

Intuitively speaking, a path P strongly dominates another path Q if there are extensions of P and of Q of the same length that witness that P dominates Q, and in those extensions, an element of P with index (i, j) in M is never used to ‘dominate’ an element of Q whose index in M is strictly smaller in both coordinates. Note that the relation ‘≺_s’ is transitive as well, and that the last item of Lemma 1 holds for strong domination, too.

A consequence of Lemma 1 (in particular, of its items relating merges and typical sequences) is that we can restrict ourselves to the paths in a merge matrix when trying to find a dominating merge of two integer sequences: it is clear from the definitions that the merge matrix M of integer sequences a and b contains, as its paths, all merges of a and b of length at most l(a) + l(b) − 1.

###### Corollary 1

Let a and b be integer sequences and let M be the merge matrix of a and b. There is a dominating merge of a and b, i.e. an integer sequence t ∈ a ⊕ b such that t ≺ a ⊕ b, if and only if there is a dominating path in M, i.e. a path P ∈ P(M) such that P ≺ P(M).

We now consider a type of merge that corresponds to non-diagonal paths in the merge matrix. These merges will be used in a construction presented in Section 3.5, and in the algorithmic applications of the Merge Dominator Lemma given in Section 4. For two integer sequences a and b, we denote by a ⊞ b the set of all non-diagonal merges of a and b, which are not allowed to make ‘diagonal’ steps: in the staircase of index pairs underlying a merge q ∈ a ⊞ b, the index in a and the index in b never increase at the same time. The next lemma allows us to conclude that all results that we prove in this section for (not necessarily non-diagonal) merges hold for non-diagonal merges as well.

###### Lemma 4

Let a and b be two integer sequences of length m and n, respectively. For any merge q ∈ a ⊕ b, there is a non-diagonal merge q′ ∈ a ⊞ b such that q′ ≡ q. Furthermore, given q, the merge q′ can be found in linear time.

###### Proof

This can be shown by the following local observation. Let k be such that (q(k), q(k + 1)) is a diagonal step, i.e. there are indices i_a and i_b such that q(k) = a(i_a) + b(i_b) and q(k + 1) = a(i_a + 1) + b(i_b + 1). Then, we insert the element min{a(i_a) + b(i_b + 1), a(i_a + 1) + b(i_b)} between q(k) and q(k + 1). Since

$$\min\{a(i_a) + b(i_b + 1),\, a(i_a + 1) + b(i_b)\} \le \max\{a(i_a) + b(i_b),\, a(i_a + 1) + b(i_b + 1)\},$$

the resulting sequence remains (strongly) equivalent to q. We let q′ be the sequence obtained from q by applying this operation to all diagonal steps. It is clear that this can be implemented to run in linear time.

Next, we define two special paths in a matrix that will reappear in several places throughout this section. These paths can be viewed as the ‘corner paths’: one follows the first row until it hits the last column and then follows the last column (rightup), and the other one follows the first column until it hits the last row and then follows the last row (upright). Formally, we define them as follows:

$$\mathrm{rightup}(M) \coloneqq M[1,1], \ldots, M[1,n], M[2,n], \ldots, M[m,n]$$
$$\mathrm{upright}(M) \coloneqq M[1,1], \ldots, M[m,1], M[m,2], \ldots, M[m,n]$$

We use the shorthands ‘rightup’ for ‘rightup(M)’ and ‘upright’ for ‘upright(M)’ whenever M is clear from the context.
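The two corner paths are easy to read off a merge matrix. A small sketch with our own 0-indexed names:

```python
def merge_matrix(a, b):
    """Merge matrix of Definition 8: M[i][j] = a(i) + b(j)."""
    return [[ai + bj for bj in b] for ai in a]

def rightup(M):
    """Corner path: follow the first row, then the last column."""
    m, n = len(M), len(M[0])
    return [M[0][j] for j in range(n)] + [M[i][n - 1] for i in range(1, m)]

def upright(M):
    """Corner path: follow the first column, then the last row."""
    m, n = len(M), len(M[0])
    return [M[i][0] for i in range(m)] + [M[m - 1][j] for j in range(1, n)]
```

Both paths have length m + n − 1, the maximum length of a path without stationary steps.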

For instance, these paths appear in the following special cases of the Merge Dominator Lemma, which will be useful for several proofs in this section.

###### Lemma 5

Let a and b be integer sequences of length m and n, respectively, and let M be the merge matrix of a and b. Let r = argmin(a) and c = argmin(b).

1. If r = 1 and c = n, then rightup strongly dominates all paths in M, i.e. rightup ≺_s P(M).

2. If r = m and c = 1, then upright strongly dominates all paths in M, i.e. upright ≺_s P(M).

###### Proof

(1) For an illustration of this proof, see Fig. 1. Let Q be any path in M. We divide rightup and Q into three consecutive parts each, to show that rightup dominates Q.

• We let and .

• We let and .

• We let and .

Since is the minimum row in , we have that for all , . This implies that there is an extension of of length such that . Furthermore, in this extension , we have that for all , and are from the same column in , hence has the strong domination property over . Similarly, there is an extension of of length such that and . Finally, let be an extension of that repeats its only element, , times. Since is the maximum element on the path and is the minimum row and the minimum column in , we have that . It is clear that .

We define an extension of rightup and an extension of Q as the concatenations of the respective three parts. By the above argument, the former is pointwise at most the latter and has the strong domination property over it, which finishes the proof. (2) follows from a symmetric argument.

### 3.2 The Split Lemma

In this section we prove the first main step towards the Merge Dominator Lemma. It is fairly intuitive that a dominating merge has to contain a minimum element of a merge matrix (otherwise, there is a path that cannot be dominated by that merge). The Split Lemma states that, in fact, we can split the matrix M into two smaller submatrices, one that has the minimum element in the top right corner and one that has the minimum element in the bottom left corner, compute a (strong) dominating path for each of them, and paste these paths together to obtain a (strong) dominating path for M.

###### Lemma 6 (Split Lemma)

Let a and b be integer sequences of length m and n, respectively, and let M be the merge matrix of a and b. Let r = argmin(a) and c = argmin(b). Let M_1 and M_2 be the two submatrices of M obtained by splitting M along row r and column c, with M_1 containing M[1, 1] and M_2 containing M[m, n], and for x ∈ {1, 2}, let P_x be a strong dominating path in M_x, i.e. P_x ≺_s P(M_x). Then, the concatenation P_1 ∘ P_2 is a strong dominating path in M, i.e. P_1 ∘ P_2 ≺_s P(M).

###### Proof

Let be any path in . If goes through , then has two consecutive parts, say and , such that and . Hence, and , and for , there are extensions of and of of the same length such that and . We can conclude that and which implies that .

Suppose does not go through . Then, either goes through some with , or through some , for some . We show how to construct extensions of and that witness that dominates in the first case, and remark that the second case can be shown symmetrically. Let and be as above, and suppose that goes through some , where . Then, . In this case, also goes through some where . Let be the index of in and denote the index of in . We derive the following sequences from .

• We let and .

• We let .

• We let and .

Since and , we have that , similarly that and considering , we have by lemma 5item 1 that strongly dominates . Accordingly, we consider the following extensions of these sequences.

1. We let and such that , and .

2. We let , and such that , and .

3. We let , and such that , and .

We construct extensions and as follows. First, let be the index of the last repetition in of the element , i.e. the element that appears just before in . We let and . By item 1, and .

For , we inductively construct and using and , for an illustration see fig. 3. We maintain as an invariant that and that and . Let denote the indices of the occurrences of in , and denote the indices of the occurrences of in . We let:

$$e'_x \coloneqq e'_{x-1} \circ e_1[a_1, \ldots, a_c] \quad\text{and}\quad f'_x \coloneqq f'_{x-1} \circ f_1[b_1, \ldots, b_d], \qquad \text{if } c = d,$$
$$e'_x \coloneqq e'_{x-1} \circ e_1[a_1, \ldots, a_c] \circ \underbrace{e_1[a_c], \ldots, e_1[a_c]}_{d - c \text{ times}} \quad\text{and}\quad f'_x \coloneqq f'_{x-1} \circ f_1[b_1, \ldots, b_d], \qquad \text{if } c < d.$$