1 Introduction
A $k$-page book embedding of a graph $G$ is a drawing that maps the vertices of $G$ to distinct points on a line, called the spine, and each edge to a simple curve drawn inside one of $k$ half-planes bounded by the spine, called pages, such that no two edges on the same page cross [20, 25]; see Fig. 1 for an illustration. This kind of layout can alternatively be defined in combinatorial terms as follows. A $k$-page book embedding of $G$ is a linear order $\prec$ of its vertices together with a coloring of its edges with $k$ colors which guarantees that no two edges $uv$ and $wx$ of the same color have their endpoints ordered as $u \prec w \prec v \prec x$. The minimum $k$ such that $G$ admits a $k$-page book embedding is the book thickness of $G$, denoted by $bt(G)$, also known as the stack number of $G$. Book embeddings have been extensively studied in the literature, among others due to their applications in bioinformatics, VLSI, and parallel computing (see, e.g., [7, 19], and refer to [11] for a survey). A famous result by Yannakakis [29] states that every planar graph has book thickness at most four. Several other bounds are known for special graph families: for instance, planar graphs with vertex degree at most four have book thickness two [3], while graphs of treewidth $w$ have book thickness at most $w+1$ [12, 17].
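The combinatorial condition above (two same-page edges conflict exactly when their endpoints interleave in the spine order) can be checked directly. The following minimal Python sketch, with hypothetical helper names of our own choosing, illustrates it:

```python
def crosses(e, f, pos):
    """Two edges cross on a common page iff their endpoints interleave
    in the spine order; pos maps each vertex to its spine position."""
    (a, b), (c, d) = sorted(map(pos.get, e)), sorted(map(pos.get, f))
    return a < c < b < d or c < a < d < b

def is_book_embedding(edges, page, pos, k):
    """Check that `page` assigns every edge to one of k pages such that
    no two edges on the same page cross."""
    for i, e in enumerate(edges):
        if not 0 <= page[e] < k:
            return False
        for f in edges[:i]:
            if page[e] == page[f] and crosses(e, f, pos):
                return False
    return True
```

For example, with spine order $1, 2, 3, 4$, the edges $13$ and $24$ interleave and therefore need two pages.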
Given a graph $G$ and a positive integer $k$, the problem of determining whether $bt(G) \le k$, called Book Thickness, is known to be NP-complete. Namely, Bernhart and Kainen [4] proved that $bt(G) \le 2$ if and only if $G$ is subhamiltonian, i.e., $G$ is a subgraph of a planar Hamiltonian graph. Since deciding whether a graph is subhamiltonian is an NP-complete problem, Book Thickness is NP-complete already for $k = 2$ [7]. Book Thickness has also been studied when the linear order of the vertices is fixed; indeed, this is one of the original formulations of the problem, which arises in the context of sorting with parallel stacks [7]. We call this problem Fixed-Order Book Thickness and denote by $fo\text{-}bt(G, \prec)$ the fixed-order book thickness of a graph $G$ with vertex order $\prec$. Obviously, we have $bt(G) \le fo\text{-}bt(G, \prec)$; see Fig. 1. Deciding whether $fo\text{-}bt(G, \prec) \le 2$ corresponds to testing the bipartiteness of a suitable conflict graph, and thus can be solved in linear time. On the other hand, deciding whether $fo\text{-}bt(G, \prec) \le 4$ is equivalent to finding a 4-coloring of a circle graph and hence is an NP-complete problem [28].
Our Results. In this paper we study the parameterized complexity of Book Thickness and Fixed-Order Book Thickness. For both problems, when the answer is positive, we naturally also expect to be able to compute a corresponding $k$-page book embedding as a witness. While both problems are NP-complete already for small fixed values of $k$ on general graphs, it is natural to ask which structural properties of the input (formalized in terms of structural parameters) allow us to solve these problems efficiently. Indeed, already Dujmović and Wood [13] asked whether Book Thickness can be solved in polynomial time when the input graph has bounded treewidth [27]—a question which has turned out to be surprisingly resilient to existing algorithmic techniques and remains open to this day. Bannister and Eppstein [2] made partial progress towards answering Dujmović and Wood's question by showing that Book Thickness is fixed-parameter tractable parameterized by the treewidth of $G$ when $k \le 2$.
We provide the first fixed-parameter algorithms for Fixed-Order Book Thickness, as well as the first such algorithm for Book Thickness that can be used when $k > 2$. In particular, we provide fixed-parameter algorithms for:

Fixed-Order Book Thickness parameterized by the vertex cover number of the graph;

Fixed-Order Book Thickness parameterized by the pathwidth of the graph under the given vertex order; and

Book Thickness parameterized by the vertex cover number of the graph.
Results 1 and 2 are obtained by combining dynamic programming techniques with insights about the structure of an optimal book embedding. Result 3 then applies a kernelization technique to obtain an equivalent instance of bounded size (which can then be solved, e.g., by brute force). All three of our algorithms can also output a corresponding $k$-page book embedding as a witness (if it exists).
The remainder of this paper is organized as follows. Section 2 contains preliminaries and basic definitions. Results 1 and 2 on Fixed-Order Book Thickness are presented in Section 3, while Result 3 on Book Thickness is described in Section 4. Conclusions and open problems are given in Section 5. Statements whose proof is deferred to the appendix are marked by an asterisk (*).
2 Preliminaries
We use standard terminology from graph theory [9]. For $n \in \mathbb{N}$, we write $[n]$ as shorthand for the set $\{1, \dots, n\}$. Parameterized complexity [8, 10] studies problem complexity not only with respect to the input size $n$ but also with respect to a parameter $k$. The most favorable complexity class in this setting is FPT (fixed-parameter tractable), which contains all problems that can be solved by an algorithm running in time $f(k) \cdot n^{O(1)}$, where $f$ is a computable function. Algorithms with this running time are called fixed-parameter algorithms.
A $k$-page book embedding of a graph $G$ will be denoted by a pair $\langle \prec, \sigma \rangle$, where $\prec$ is a linear order of $V(G)$ and $\sigma: E(G) \to [k]$ is a function that maps each edge of $G$ to one of $k$ pages. In a $k$-page book embedding $\langle \prec, \sigma \rangle$ it is required that no pair of edges $uv$, $wx$ with $\sigma(uv) = \sigma(wx)$ has its endpoints ordered as $u \prec w \prec v \prec x$, i.e., each page is crossing-free.
We consider two graph parameters for our algorithms. A vertex cover $C$ of a graph $G$ is a subset $C \subseteq V(G)$ such that each edge in $E(G)$ has at least one endvertex in $C$. The vertex cover number of $G$, denoted by $\tau(G)$, is the size of a minimum vertex cover of $G$. The second parameter is pathwidth, a classical graph parameter [26] which admits several equivalent definitions. The definition that will be most useful here is the one tied to linear orders [21]; see also [22, 23] for recent works using this formulation. Given an $n$-vertex graph $G$ with a linear order $\prec$ of $V(G)$ such that $v_1 \prec v_2 \prec \dots \prec v_n$, the pathwidth of $(G, \prec)$ is the minimum number $\kappa$ such that for each vertex $v_i$ ($i \in [n]$), there are at most $\kappa$ vertices to the left of $v_i$ that are adjacent to $v_i$ or to a vertex to the right of $v_i$. Formally, for each $v_i$ we call the set $Q_i = \{\, v_j \mid v_j \prec v_i \text{ and } \exists\, v_j v_\ell \in E(G) \text{ with } v_i \preceq v_\ell \,\}$ the guard set for $v_i$, and the pathwidth of $(G, \prec)$ is simply $\max_{i \in [n]} |Q_i|$. The elements of the guard sets are called the guards (for $v_i$). We remark that the pathwidth of $G$ is equal to the minimum pathwidth of $(G, \prec)$ over all linear orders $\prec$.
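The linear-order definition of pathwidth can be made concrete with a short sketch that computes all guard sets of a given order (an illustrative helper of ours, not part of the paper):

```python
def pathwidth_of_order(edges, order):
    """Pathwidth of (G, order): the maximum size of a guard set Q_i, where
    Q_i holds the vertices strictly left of v_i with an edge to v_i or to
    some vertex right of v_i.  Returns (pathwidth, list of guard sets)."""
    pos = {v: i for i, v in enumerate(order)}
    guards = [set() for _ in order]
    for u, w in edges:
        a, b = sorted((pos[u], pos[w]))
        # the left endpoint guards every vertex strictly right of it,
        # up to and including the right endpoint of the edge
        for i in range(a + 1, b + 1):
            guards[i].add(order[a])
    return max((len(q) for q in guards), default=0), guards
```

For a star, placing the center first yields pathwidth $1$, while placing it last yields pathwidth $n-1$, matching the discussion in Section 3.2.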
3 Algorithms for Fixed-Order Book Thickness
Recall that in Fixed-Order Book Thickness the input consists of a graph $G$, a linear order $\prec$ of $V(G)$, and a positive integer $k$. We assume that $V(G)$ is indexed such that $v_1 \prec v_2 \prec \dots \prec v_n$. The task is to decide whether there is a page assignment $\sigma: E(G) \to [k]$ such that $\langle \prec, \sigma \rangle$ is a $k$-page book embedding of $G$, i.e., whether $fo\text{-}bt(G, \prec) \le k$. If the answer is 'YES' we shall return a corresponding $k$-page book embedding as a witness. In fact, our algorithms will return a book embedding with the minimum number of pages.
3.1 Parameterization by the Vertex Cover Number
As our first result, we will show that Fixed-Order Book Thickness is fixed-parameter tractable when parameterized by the vertex cover number. We note that the vertex cover number is a graph parameter which, while restricting the structure of the graph in a fairly strong way, has been used to obtain fixed-parameter algorithms for numerous difficult problems [1, 14, 15].
Let $C = \{c_1, \dots, c_\tau\}$ be a minimum vertex cover of $G$ of size $\tau$; we remark that such a vertex cover can be computed in $O(1.2738^\tau + \tau n)$ time [6]. Our first observation shows that the problem becomes trivial if $k \ge \tau$.
Observation 1
Every $n$-vertex graph $G$ with a vertex cover $C$ of size $\tau$ admits a $\tau$-page book embedding $\langle \prec, \sigma \rangle$ with any vertex order $\prec$. Moreover, if $C$ and $\prec$ are given as input, such a book embedding can be computed in $O(n + m)$ time.
Proof
Let $C = \{c_1, \dots, c_\tau\}$ be a vertex cover of $G$ of size $\tau$ and let $\sigma$ be a page assignment on $\tau$ pages defined as follows. For each edge $e$, let $c_p$ be the cover vertex incident to $e$ with the smallest index $p$, and set $\sigma(e) = p$. Now, consider the edges assigned to any page $p$. By construction, they are all incident to vertex $c_p$, and thus no two of them cross each other. Therefore, the pair $\langle \prec, \sigma \rangle$ is a $\tau$-page book embedding of $G$ and can be computed in $O(n + m)$ time.∎
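As a small illustration of the assignment used in the proof, the following sketch (with names of our own choosing) assigns every edge to the page of its lowest-indexed cover endpoint:

```python
def cover_page_embedding(edges, cover):
    """Assign each edge to the page of its lowest-indexed cover endpoint;
    all edges on page p then share the endpoint c_p, so no two of them
    can interleave on that page, regardless of the vertex order."""
    page_of = {c: p for p, c in enumerate(sorted(cover))}
    sigma = {}
    for u, v in edges:
        ends = [x for x in (u, v) if x in cover]  # nonempty by the cover property
        sigma[(u, v)] = min(page_of[x] for x in ends)
    return sigma
```

Since two edges sharing an endpoint can never interleave, each page is crossing-free by construction.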
We note that the bound given in Observation 1 is tight, since it is known that complete bipartite graphs $K_{\tau, n'}$ with bipartition classes of size $\tau$ and sufficiently large $n'$ have book thickness $\tau$ [4] and vertex cover number $\tau$.
We now proceed to a description of our algorithm. For ease of presentation, we will add to $G$ an additional vertex $v_{n+1}$ of degree $0$, add it to $C$, and place it at the end of $\prec$ (observe that this does not change the solution to the instance).
If $k \ge \tau$ then we are done by Observation 1. Otherwise, let $U$ be the set of all possible non-crossing page assignments $u: E(G[C]) \to [k]$ of the edges whose both endpoints lie in $C$, i.e., assignments in which no pair of edges on the same page crosses. Note that $|U| \le k^{\binom{\tau}{2}} \le \tau^{\tau^2}$ and that $U$ can be constructed in time $\tau^{O(\tau^2)}$ (recall that $k < \tau$ by assumption). As its first step, the algorithm branches over each choice of $u \in U$.
For each such non-crossing assignment $u$, the algorithm performs a dynamic programming procedure that runs on the vertices of the input graph in sequential (left-to-right) order. We will define a record set $R(v_i)$ that the algorithm computes for each individual vertex $v_i$ in left-to-right order. Let $v_1 \prec v_2 \prec \dots \prec v_n$ be the ordering of the vertices of $G$, and let $c_1 \prec c_2 \prec \dots \prec c_\tau$ be the ordering of the vertices of $C$.
In order to formalize our records, we need the notion of visibility. Let $i \in [n]$ and let $E_i$ be the set of all edges with one endpoint outside of $C$ that lies to the left of $v_i$. We call $\sigma_i: E_i \to [k]$ a valid partial page assignment if $\sigma_i \cup u$ maps the edges in $E_i \cup E(G[C])$ to pages in a non-crossing fashion. Now, consider a valid partial page assignment $\sigma_i$. We say that a vertex $c \in C$ is visible to $v_i$ (for $\sigma_i$) on page $p$ if it is possible to draw an edge from $c$ to $v_i$ on page $p$ without crossing any other edge mapped to page $p$ by $\sigma_i \cup u$. Fig. 2 shows the visibilities of a vertex in two pages.
Based on this notion of visibility, for an index $i \in [n]$ we can define a visibility matrix $M_{\sigma_i}(v_i)$ of dimension $\tau \times k$, where the entry $(q, p)$ of $M_{\sigma_i}(v_i)$ is $1$ if $c_q$ is visible to $v_i$ on page $p$ and $0$ otherwise (see Fig. 2). Intuitively, this visibility matrix captures information about the reachability via crossing-free edges (i.e., visibility) from $v_i$ to the vertices in $C$ on individual pages, given a particular assignment $\sigma_i$ of the edges in $E_i$. Note that for a given tuple $(v_i, \sigma_i)$, it is straightforward to compute $M_{\sigma_i}(v_i)$ in polynomial time.
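Each entry of the visibility matrix can be computed naively by testing the candidate chord against every edge already placed on that page; the following sketch (hypothetical names, not from the paper) shows this polynomial-time computation:

```python
def visible(c, vi, p, sigma, pos):
    """c is visible to vi on page p iff a new chord (c, vi) on page p would
    cross none of the edges that sigma already assigns to page p."""
    a, b = sorted((pos[c], pos[vi]))
    for (u, w), q in sigma.items():
        if q != p:
            continue
        x, y = sorted((pos[u], pos[w]))
        if x < a < y < b or a < x < b < y:  # proper interleaving = crossing
            return False
    return True

def visibility_matrix(cover, vi, k, sigma, pos):
    """One row per cover vertex, one column per page: 1 if visible, else 0."""
    return {c: [1 if visible(c, vi, p, sigma, pos) else 0 for p in range(k)]
            for c in cover}
```

For instance, an edge on page $0$ spanning the interval between a cover vertex and $v_i$ blocks that page, while page $1$ stays open.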
Observe that while the number of possible valid partial page assignments $\sigma_i$ (for some $i$) is not bounded by any function of $\tau$, for each $v_i$ the number of possible visibility matrices is upper-bounded by $2^{\tau k}$. On a high level, the core idea of the algorithm is to dynamically process the vertices in a left-to-right fashion and compute, for each such vertex, a bounded-size "snapshot" of its visibility matrices—whereas for each such snapshot we will store only one (arbitrarily chosen) valid partial page assignment. We will later (in Lemma 1) show that all valid partial page assignments leading to the same visibility matrices are "interchangeable".
With this basic intuition, we can proceed to formally defining our records. Let $Z = \{\, i \mid v_i \in V(G) \setminus C \text{ and } v_{i-1} \in C \,\}$ be the set of indices of non-cover vertices which occur immediately after a cover vertex; we will denote the integers in $Z$ as $z_1 < z_2 < \dots < z_{|Z|}$ (in ascending order), and we note that $|Z| \le \tau$. For a vertex $v_i$, we define our record set as $R(v_i) = \{\, (M_{\sigma_i}(v_i), M_{\sigma_i}(v_{z_1}), \dots, M_{\sigma_i}(v_{z_{|Z|}})) \mid \sigma_i \text{ is a valid partial page assignment of } E_i \,\}$. Note that each entry in $R(v_i)$ captures one possible tuple (a "snapshot") of at most $\tau + 1$ visibility matrices: the visibility matrix for $v_i$ itself, and the visibility matrices for the non-cover vertices which follow immediately after the vertices in $C$. The intuition behind these latter visibility matrices is that they allow us to update our visibility matrix when the left-to-right dynamic programming algorithm reaches a vertex in $Z$ (in particular, as we will see later, for $i \in Z$ it is not possible to update the visibility matrix $M_{\sigma_i}(v_i)$ based only on $M_{\sigma_{i-1}}(v_{i-1})$). Along with $R(v_i)$, we also store a mapping $\Lambda_i$ from $R(v_i)$ to valid partial page assignments of $E_i$ which maps each tuple $\alpha \in R(v_i)$ to some $\sigma_i$ such that $\alpha = (M_{\sigma_i}(v_i), M_{\sigma_i}(v_{z_1}), \dots, M_{\sigma_i}(v_{z_{|Z|}}))$.
Let us make some observations about our records $R(v_i)$. First, $|R(v_i)| \le 2^{\tau k (\tau + 1)}$. Second, if $R(v_{n+1}) \neq \emptyset$ for some $u \in U$ then, since $v_{n+1}$ is a dummy vertex of degree $0$, there is a valid partial page assignment $\sigma_{n+1}$ such that $\sigma_{n+1} \cup u$ is a non-crossing page assignment of all edges in $E(G)$. Hence we can output a $k$-page book embedding by invoking $\Lambda_{n+1}$ on any entry in $R(v_{n+1})$. Third:
Observation 2 (*)
If for all $u \in U$ it holds that $R(v_{n+1}) = \emptyset$, then $(G, \prec, k)$ is a NO-instance of Fixed-Order Book Thickness.
The above implies that in order to solve our instance, it suffices to compute $R(v_{n+1})$ (together with $\Lambda_{n+1}$) for each $u \in U$. As mentioned earlier, we do this dynamically, with the first step consisting of the computation of $R(v_1)$. Since $E_1 = \emptyset$, the visibility matrices required to populate $R(v_1)$ depend only on $u$ and are easy to compute in polynomial time.
Finally, we proceed to the dynamic step. Assume we have computed $R(v_{i-1})$ and $\Lambda_{i-1}$. We branch over each possible page assignment $\delta$ of the (at most $\tau$) new edges in $E_i \setminus E_{i-1}$, and over each tuple $\alpha \in R(v_{i-1})$. For each such $\delta$ and $\alpha$, we check whether $\sigma_i = \Lambda_{i-1}(\alpha) \cup \delta$ is a valid partial page assignment (i.e., whether $\sigma_i \cup u$ is non-crossing); if this is not the case, we discard the pair $(\delta, \alpha)$. Otherwise we compute the visibility matrices $M_{\sigma_i}(v_i), M_{\sigma_i}(v_{z_1}), \dots, M_{\sigma_i}(v_{z_{|Z|}})$, add the corresponding tuple into $R(v_i)$, and set $\Lambda_i$ to map this tuple to $\sigma_i$. We remark that here the use of $Z$ allows us not to distinguish between the cases $v_{i-1} \in C$ and $v_{i-1} \notin C$—in both cases, the partial page assignment $\sigma_i$ will correctly capture the visibility matrix for $v_i$.
Lemma 1
The above procedure correctly computes $R(v_i)$ and $\Lambda_i$ from $R(v_{i-1})$ and $\Lambda_{i-1}$.
Proof
Consider an entry $\alpha$ added to $R(v_i)$ by the above procedure from some $\alpha' \in R(v_{i-1})$ and some assignment $\delta$. Since we explicitly checked that $\sigma_i = \Lambda_{i-1}(\alpha') \cup \delta$ is a valid partial page assignment, the tuple of visibility matrices arising from $\sigma_i$, which is precisely $\alpha$, indeed belongs to $R(v_i)$, as desired.
For the opposite direction, consider a tuple $\alpha \in R(v_i)$. By definition, there exists some valid partial page assignment $\sigma_i$ of $E_i$ such that $\alpha = (M_{\sigma_i}(v_i), M_{\sigma_i}(v_{z_1}), \dots, M_{\sigma_i}(v_{z_{|Z|}}))$. Now let $\delta$ be the restriction of $\sigma_i$ to the new edges in $E_i \setminus E_{i-1}$, and let $\sigma_{i-1}$ be the restriction of $\sigma_i$ to all other edges (i.e., all those not in $E_i \setminus E_{i-1}$). Since $\sigma_i$ is non-crossing, $\sigma_{i-1}$ is in particular a valid partial page assignment for $E_{i-1}$, and hence $R(v_{i-1})$ must contain the entry $\alpha' = (M_{\sigma_{i-1}}(v_{i-1}), M_{\sigma_{i-1}}(v_{z_1}), \dots, M_{\sigma_{i-1}}(v_{z_{|Z|}}))$—let $\beta = \Lambda_{i-1}(\alpha')$.
To conclude the proof, it suffices to show that (1) $\beta \cup \delta$ is a valid partial page assignment, and (2) the tuple arising from $\sigma_i$, which is the original tuple $\alpha$, is equal to the tuple arising from $\beta \cup \delta$, which is the entry our algorithm computes from $\alpha'$ and $\delta$. Point (1) follows from the fact that $M_{\beta}(v_{i-1}) = M_{\sigma_{i-1}}(v_{i-1})$ in conjunction with the fact that each edge in $E_i \setminus E_{i-1}$ has one endpoint in $C$. Point (2) then follows by the same argument, applied to each visibility matrix in the respective tuples: for each $z_j \in Z$ we have $M_{\beta}(v_{z_j}) = M_{\sigma_{i-1}}(v_{z_j})$—meaning that the visibilities were identical before considering the new edges—and so assigning these edges to pages as prescribed by $\delta$ leads to an identical outcome in terms of visibility. ∎
This proves the correctness of our algorithm. The runtime is upper-bounded by the product of $|U| \le \tau^{\tau^2}$ (the initial branching factor), $n$ (the number of times we compute a new record set), and $2^{O(\tau^3)}$ (to consider all combinations of $\alpha$ and $\delta$ needed to compute a new record set from the previous one). A minimum-page book embedding can be computed by trying all possible values of $k \le \tau$. We summarize Result 1 below.
Theorem 3.1
There is an algorithm which takes as input an $n$-vertex graph $G$ with a vertex order $\prec$, runs in time $2^{O(\tau^3)} \cdot n$ where $\tau$ is the vertex cover number of $G$, and computes a page assignment $\sigma$ such that $\langle \prec, \sigma \rangle$ is a book embedding of $G$ with the minimum number of pages.
3.2 Parameterization by the Pathwidth of the Vertex Ordering
As our second result, we show that Fixed-Order Book Thickness is fixed-parameter tractable parameterized by the pathwidth of $(G, \prec)$. We note that while the pathwidth of $G$ is always upper-bounded by the vertex cover number, this does not hold when we consider a fixed ordering $\prec$, and hence this result is incomparable to Theorem 3.1. For instance, if $G$ is a path, it has arbitrarily large vertex cover number while $(G, \prec)$ may have pathwidth $1$; on the other hand, if $G$ is a star, it has vertex cover number $1$ while $(G, \prec)$ may have arbitrarily large pathwidth. To begin, we show that the pathwidth of $(G, \prec)$ provides an upper bound on the number of pages required for an embedding.
Lemma 2 (*)
Every $n$-vertex graph $G$ with a linear order $\prec$ of $V(G)$ such that $(G, \prec)$ has pathwidth $\kappa$ admits a $\kappa$-page book embedding $\langle \prec, \sigma \rangle$, which can be computed in $O(\kappa n)$ time.
We note that the bound given in Lemma 2 is also tight, for the same reason as for Observation 1: complete bipartite graphs $K_{\kappa, n'}$ with sufficiently large $n'$ have book thickness $\kappa$ [4], but admit an ordering $\prec$ with pathwidth $\kappa$.
We now proceed to a description of our algorithm. Our input consists of the graph $G$, the vertex ordering $\prec$, and an integer $k$ that upper-bounds the desired number of pages in a book embedding. Let $\kappa$ be our parameter, i.e., the pathwidth of $(G, \prec)$; observe that due to Lemma 2, we may assume that $k < \kappa$. The algorithm performs a dynamic programming procedure on the vertices of the input graph in right-to-left order along $\prec$. For technical reasons, we initially add a vertex $v_0$ of degree $0$ to $G$ and place it to the left of $v_1$ in $\prec$; note that this does not increase the pathwidth of $(G, \prec)$.
We now adapt the concept of visibility introduced in Section 3.1 for use in this algorithm. First, let us expand our notion of guard set (see Section 2) as follows: for a vertex $v_i$, let $Q_i^\circ = (g_1, \dots, g_{\kappa'})$, where for each $j \in [\kappa']$, $g_j$ is the $j$-th guard of $v_i$ in reverse order of $\prec$ (i.e., $g_1$ is the guard that is nearest to $v_i$ in $\prec$), and $\kappa' = |Q_i| \le \kappa$. For a vertex $v_i$, let $E_i$ be the set of all edges with at least one endpoint to the right of $v_i$, and let $E_i^g$ be the restriction of $E_i$ to edges between a vertex to the right of $v_i$ and a guard in $Q_i$. An assignment $\sigma_i: E_i \to [k]$ is called a valid partial page assignment if $\sigma_i$ maps the edges in $E_i$ to pages in a non-crossing manner. Given a valid partial page assignment $\sigma_i$ and a vertex $v_j$ with $j \le i$, we say that $v_j$ is visible to $v_i$ on a page $p$ if it is possible to draw the edge $v_j v_i$ on page $p$ without crossing any other edge mapped to $p$ by $\sigma_i$.
Before we proceed to describing our algorithm, we will show that the visibilities of vertices w.r.t. valid partial page assignments exhibit a certain regularity property. Given $i \in [n]$, $p \in [k]$, and a valid partial page assignment $\sigma_i$, let the important edge of $(v_i, p)$ be the edge $v_a v_b$ (with $v_a \prec v_b$) with the following properties: (1) $\sigma_i(v_a v_b) = p$, (2) $v_a \preceq v_i \prec v_b$, and (3) the right endpoint $v_b$ is nearest to $v_i$ among all such edges. If multiple edges with these properties exist, we choose the one whose left endpoint is nearest to $v_i$. Intuitively, the important edge of $(v_i, p)$ is simply the shortest edge of $\sigma_i$ which encloses $v_i$ on page $p$; note that it may happen that $(v_i, p)$ has no important edge. Observe that, if the important edge $v_a v_b$ exists, its left endpoint $v_a$ is a guard of $v_i$, and we call $v_a$ the important guard of $(v_i, p)$. The next observation follows easily from the definition of important edge; see also Fig. 3.
Observation 3
If $(v_i, p)$ has no important edge, then every vertex $v_j$ with $j < i$ is visible to $v_i$ on page $p$. If the important guard of $(v_i, p)$ is $v_a$, then $v_j$ is visible to $v_i$ on page $p$ if and only if $v_a \preceq v_j$.
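The important guard can be found by scanning the edges on a page for the shortest edge enclosing $v_i$; the following sketch (names ours) assumes, as in the setting above, that the left endpoint of every assigned edge is a guard strictly to the left of $v_i$, and breaks ties towards the nearer left endpoint:

```python
def important_guard(vi, p, sigma, pos):
    """Left endpoint of the shortest edge on page p that encloses vi
    (left endpoint <= vi < right endpoint), or None if no edge encloses vi.
    Ties on the right endpoint go to the edge with the nearer left endpoint."""
    best = None  # (left, right) of the current important edge
    for (u, w), q in sigma.items():
        if q != p:
            continue
        l, r = sorted((u, w), key=pos.get)
        if pos[l] <= pos[vi] < pos[r]:
            if best is None or pos[r] < pos[best[1]] or \
               (pos[r] == pos[best[1]] and pos[l] > pos[best[0]]):
                best = (l, r)
    return best[0] if best else None
```

By Observation 3, a vertex $v_j$ is then visible to $v_i$ on page $p$ exactly when it does not lie strictly left of the returned guard.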
Observation 3 not only provides us with a way of handling vertex visibilities in the pathwidth setting, but also allows us to store all the information we require about vertex visibilities more concisely than via the matrices used in Section 3.1. For an index $i \in [n]$, a vertex $v_j$ with $j \le i$, and a valid partial page assignment $\sigma_i$, we define the visibility vector $s_{\sigma_i}(v_j) = (s_1, \dots, s_k)$ as follows: the $p$-th component $s_p$ is the important guard of $(v_j, p)$, and $s_p = \bot$ if $(v_j, p)$ has no important guard. Observe that since the number of pages is upper-bounded by $\kappa$ by assumption and the cardinality of $Q_i$ is at most $\kappa$, there are at most $(\kappa + 1)^{\kappa}$ possible distinct visibility vectors for any fixed $v_j$. Observe that thanks to Observation 3 the visibility vector provides us with complete information about the visibility of vertices $v_\ell$ ($\ell < j$) from $v_j$—notably, $v_\ell$ is not visible to $v_j$ on page $p$ if and only if $v_\ell$ lies to the left of the important guard $s_p$ (and, in particular, if $s_p = \bot$ then every such $v_\ell$ is visible to $v_j$ on page $p$). On a high level, the algorithm will traverse the vertices in right-to-left order along $\prec$ and store the set of all possible visibility vectors at each vertex. To this end, it will use the following observation to update its visibility vectors.
Observation 4
Let $\sigma$ be a valid partial page assignment of $E_{i+1}$ and let $p$ be a page. If no edge in $\sigma^{-1}(p)$ is incident to $v_i$, then a vertex $v_j$ with $j < i$ is visible to $v_i$ on page $p$ if and only if $v_j$ is visible to $v_{i+1}$ on page $p$.
Proof
By definition, $v_i$ and $v_{i+1}$ are consecutive in $\prec$. Let $v_j$ (for $j < i$) be a vertex that is visible to $v_{i+1}$ on page $p$ but not to $v_i$. Then some edge in $\sigma^{-1}(p)$ must separate $v_j$ and $v_i$ without separating $v_j$ and $v_{i+1}$, and hence must have an endpoint strictly between $v_j$ and $v_i$ but not strictly between $v_j$ and $v_{i+1}$, or vice versa. Since $v_i$ and $v_{i+1}$ are consecutive in $\prec$, the only such endpoint is $v_i$ itself, contradicting the assumption that no edge of $\sigma^{-1}(p)$ is incident to $v_i$. The other direction follows by the same argument. ∎
There is, however, a caveat: Observation 4 does not (and in fact cannot) allow us to compute the new visibility vector if some edge on the relevant page is incident to $v_i$. To circumvent this issue, our algorithm will store not only the visibility vector of $v_i$ but also the visibility vectors of each guard of $v_i$. We now prove that we can compute the visibility vector of any vertex from the visibility vectors of the guards—this is important when updating our records, since we will need to obtain the visibility vectors of new guards that are introduced at some step of the algorithm.
Lemma 3
Let $j \le i$, let $\sigma_i$ be a valid partial page assignment of $E_i$, let $p$ be a page, and assume $v_j \notin Q_i$. Let $g \in Q_i$ be such that $v_j \prec g$ and the distance between $v_j$ and $g$ in $\prec$ is minimized, i.e., $g$ is the first guard to the right of $v_j$. Then $s_{\sigma_i}(v_j) = s_{\sigma_i}(g)$.
Proof
Let $v_\ell$ (for $\ell < j$) be any vertex that is visible to $g$ on page $p$, and assume $v_\ell$ is not visible to $v_j$. Then there must be an edge $v_a v_b \in \sigma_i^{-1}(p)$ separating $v_\ell$ from $v_j$ on page $p$, i.e., with $v_\ell \prec v_a \prec v_j \prec v_b$. Since $v_j \prec g \preceq v_i \prec v_b$, this edge also separates $v_\ell$ from $g$, a contradiction. Conversely, let $v_\ell$ (for $\ell < j$) be a vertex that is not visible to $g$ on page $p$. Then there must be an edge $v_a v_b \in \sigma_i^{-1}(p)$ separating $v_\ell$ from $g$ on page $p$, i.e., with $v_\ell \prec v_a \prec g \prec v_b$. The left endpoint $v_a$ is a guard in $Q_i$, and since there is no guard strictly between $v_j$ and $g$, it must be that $v_a \prec v_j$; hence the edge $v_a v_b$ also separates $v_\ell$ from $v_j$, and $v_\ell$ is not visible to $v_j$. Therefore, the visibility vectors $s_{\sigma_i}(v_j)$ and $s_{\sigma_i}(g)$ corresponding to the vertices $v_j$ and $g$, respectively, are equal. ∎
We can now formally define our record set as $R(v_i) = \{\, (s_{\sigma_i}(v_i), s_{\sigma_i}(g_1), \dots, s_{\sigma_i}(g_{\kappa'})) \mid \sigma_i \text{ is a valid partial page assignment of } E_i \,\}$, where each individual element (record) in $R(v_i)$ can be seen as a queue starting with the visibility vector of $v_i$ and then storing the visibility vectors of the individual guards in the order of $Q_i^\circ$ (note that there is no reason to store an "empty" visibility vector for the dummy vertex $v_0$). To facilitate the construction of a solution, we will also store a function $\Lambda_i$ from $R(v_i)$ to valid partial page assignments of $E_i$ which maps each tuple $\alpha \in R(v_i)$ to some $\sigma_i$ such that $\alpha = (s_{\sigma_i}(v_i), s_{\sigma_i}(g_1), \dots, s_{\sigma_i}(g_{\kappa'}))$.
Let us make some observations about our records $R(v_i)$. First of all, since there are at most $(\kappa+1)^{\kappa}$ visibility vectors, $|R(v_i)| \le (\kappa+1)^{\kappa(\kappa+1)}$. Second, if $R(v_0) \neq \emptyset$ then, since $E_0 = E(G)$, the mapping $\Lambda_0$ will produce a valid page assignment of $E(G)$ for any $\alpha \in R(v_0)$. On the other hand, if $G$ admits a $k$-page book embedding with order $\prec$, then its page assignment witnesses the fact that $R(v_0)$ cannot be empty. Hence, the algorithm can return a solution once it correctly computes $R(v_0)$ and $\Lambda_0$.
The computation is carried out dynamically and starts by setting $R(v_n) = \{\alpha_0\}$, where $\alpha_0$ consists of empty visibility vectors $(\bot, \dots, \bot)$, and $\Lambda_n(\alpha_0) = \emptyset$. For the inductive step, assume that we have correctly computed $R(v_{i+1})$ and $\Lambda_{i+1}$, and the aim is to compute $R(v_i)$ and $\Lambda_i$. For each $\alpha \in R(v_{i+1})$, we compute an intermediate record which represents the visibility vector of $v_i$ w.r.t. $\sigma_{i+1} = \Lambda_{i+1}(\alpha)$ as follows:

if $v_i \in Q_{i+1}$, then the intermediate visibility vector of $v_i$ is the vector stored for $v_i$ in the record $\alpha$, and

if $v_i \notin Q_{i+1}$, then the intermediate visibility vector of $v_i$ is the visibility vector of $v_{i+1}$, i.e., the first vector in the record $\alpha$ (recall Observation 4).
We now need to update our intermediate record to take into account the new guards. In particular, we expand the record by adding, for each new guard $g \in Q_i \setminus Q_{i+1}$, an intermediate visibility vector at the appropriate position (i.e., mirroring the ordering of the guards in $Q_i^\circ$). Recalling Lemma 3, we compute this new intermediate visibility vector by copying the visibility vector that immediately succeeds it in the record.
Next, let $F = E_i \setminus E_{i+1}$ be the set of at most $\kappa$ new edges that we need to account for, and let us branch over all assignments $\delta: F \to [k]$. For each such $\delta$, we check whether $\sigma_i = \Lambda_{i+1}(\alpha) \cup \delta$ is a valid partial page assignment of $E_i$, i.e., whether the new edges in $F$ do not cross each other or the other edges in $E_{i+1}$ when following the chosen assignment $\delta$ and the assignment $\Lambda_{i+1}(\alpha)$. As expected, we discard any $\delta$ such that $\sigma_i$ is not valid.
Our final task is now to update the intermediate visibility vectors to take $\delta$ into account. This can be done in a straightforward way by, e.g., looping over each edge $e \in F$ with left endpoint $g$, obtaining the page $p$ that $e$ is mapped to by $\delta$, and, for each intermediate vector belonging to some vertex $w$ in the record, reading its $p$-th component and replacing that value with $g$ whenever $g$ lies strictly to the left of $w$ and strictly to the right of the currently stored guard (treating $\bot$ as lying to the left of every vertex). Finally, we enter the resulting record into $R(v_i)$ and set $\Lambda_i$ to map it to $\sigma_i$.
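This component-wise update can be sketched as follows (illustrative code with our own names; `record` maps each tracked vertex to its current visibility vector, with `None` playing the role of $\bot$):

```python
def update_vectors(record, new_edges, delta, pos):
    """record: {w: [important guard or None, one entry per page]}.
    All new edges share the same rightmost endpoint, so each new edge is
    shorter than every previously assigned edge; it therefore becomes the
    important edge of each vertex w it strictly encloses, provided its left
    endpoint g is nearer to w than the currently stored guard."""
    for g, right in new_edges:
        p = delta[(g, right)]
        for w, vec in record.items():
            cur = vec[p]
            if pos[g] < pos[w] and (cur is None or pos[cur] < pos[g]):
                vec[p] = g
    return record
```

The loop applies the tie-breaking rule for important edges automatically: among several new edges on one page, the one with the nearest left endpoint survives.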
Lemma 4
The above procedure correctly computes $R(v_i)$ and $\Lambda_i$ from $R(v_{i+1})$ and $\Lambda_{i+1}$.
Proof
Consider an entry $\alpha^*$ computed by the above procedure from some $\alpha \in R(v_{i+1})$ and some $\delta$. Since we explicitly checked that $\sigma_i = \Lambda_{i+1}(\alpha) \cup \delta$ is a valid partial page assignment of $E_i$, there must exist a record in $R(v_i)$ arising from $\sigma_i$, and by recalling Observation 3, Lemma 3 and Observation 4 it can be straightforwardly verified that this record is equal to $\alpha^*$.
For the opposite direction, consider a tuple $\alpha^* \in R(v_i)$ that arises from a valid partial page assignment $\sigma_i$ of $E_i$, and let $\delta$, $\sigma_{i+1}$ be the restrictions of $\sigma_i$ to $F = E_i \setminus E_{i+1}$ and $E_{i+1}$, respectively. Since $\sigma_{i+1}$ is a valid partial page assignment of $E_{i+1}$, there must exist a tuple $\alpha \in R(v_{i+1})$ that arises from $\sigma_{i+1}$. To conclude the proof, it suffices to note that during the branching stage the algorithm will compute a record from the combination of $\alpha$ (due to $\alpha$ being in $R(v_{i+1})$) and $\delta$, and the record computed in this way will be precisely $\alpha^*$. ∎
This proves the correctness of the algorithm. The runtime is upper-bounded by $n \cdot (\kappa+1)^{\kappa(\kappa+1)} \cdot \kappa^{\kappa}$ (the product of the number of times we compute a new record set, the number of records, and the branching factor for $\delta$). A minimum-page book embedding can be obtained by trying all possible values of $k \le \kappa$. We summarize Result 2 below.
Theorem 3.2
There is an algorithm which takes as input an $n$-vertex graph $G$ with a vertex ordering $\prec$ and computes a page assignment $\sigma$ such that $\langle \prec, \sigma \rangle$ is a book embedding of $G$ with the minimum number of pages. The algorithm runs in time $\kappa^{O(\kappa^2)} \cdot n$, where $\kappa$ is the pathwidth of $(G, \prec)$.
4 Algorithms for Book Thickness
We now turn our attention to the general definition of book thickness (without a fixed vertex order). We show that, given a graph $G$ and an integer $k$, in polynomial time we can construct an equivalent instance whose size is upper-bounded by a function of the vertex cover number $\tau$ of $G$. Such an algorithm is called a kernelization and directly implies the fixed-parameter tractability of the problem with this parameterization [8, 10].
Theorem 4.1
There is an algorithm which takes as input an $n$-vertex graph $G$ and a positive integer $k$, runs in time $f(\tau) + O(n + m)$ for a computable function $f$ of the vertex cover number $\tau$ of $G$, and decides whether $bt(G) \le k$. If the answer is positive, it can also output a $k$-page book embedding of $G$.
Proof
If $k \ge \tau$, by Observation 1 we can immediately conclude that $G$ admits a $k$-page book embedding. Hence we shall assume that $k < \tau$. We will also compute a vertex cover $C$ of size $\tau$ in $O(1.2738^\tau + \tau n)$ time using well-known results [6].
For any subset $S \subseteq C$ we say that a vertex $v \in V(G) \setminus C$ is of type $S$ if its set of neighbors is equal to $S$. This defines an equivalence relation on $V(G) \setminus C$ which partitions it into at most $2^\tau$ distinct types. In what follows, we denote by $V_S$ the set of vertices of type $S$. We claim the following.
Claim
Let $S \subseteq C$ be such that $|V_S| > 2k^{|S|} + 1$ and let $v \in V_S$. Then $G$ admits a $k$-page book embedding if and only if $G' = G - v$ does. Moreover, a $k$-page book embedding of $G'$ can be extended to such an embedding for $G$ in linear time.
Proof (of the Claim)
One direction is trivial, since removing a vertex from a book embedding preserves the property of being a book embedding of the resulting graph. So let $\langle \prec', \sigma' \rangle$ be a $k$-page book embedding of $G'$. We prove that a $k$-page book embedding of $G$ can be easily constructed by inserting $v$ right next to a suitable vertex of $V_S \setminus \{v\}$ in $\prec'$ and by assigning the edges of $v$ to the same pages as the corresponding edges of that vertex. We say that two vertices $w_1, w_2 \in V_S \setminus \{v\}$ are page equivalent if for each vertex $c \in S$, the edges $w_1 c$ and $w_2 c$ are both assigned to the same page according to $\sigma'$. Each vertex in $V_S \setminus \{v\}$ has degree exactly $|S|$, hence this relation partitions $V_S \setminus \{v\}$ into at most $k^{|S|}$ sets. Since $|V_S \setminus \{v\}| > 2k^{|S|}$, at least three vertices of this set, which we denote by $u_1$, $u_2$, and $u_3$, are page equivalent. Consider now the graph induced by the edges of these three vertices that are assigned to a particular page. By the above argument, such a graph is a $K_{3,\ell}$, for some $\ell \ge 0$. However, since $K_{2,3}$ already does not admit a $1$-page book embedding, we have $\ell \le 1$, that is, each $u_i$ has at most one edge on each page. Then we can extend $\langle \prec', \sigma' \rangle$ by inserting $v$ right next to $u_1$ and assigning each edge $vc$ to the same page as $u_1 c$. Since each such edge can run arbitrarily close to the corresponding crossing-free edge $u_1 c$, this results in a $k$-page book embedding of $G$ and concludes the proof of the claim. ∎
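The claim above licenses a simple truncation of the type classes, keeping at most $2k^{|S|}+1$ vertices per type; an illustrative sketch (all names are ours):

```python
from collections import defaultdict

def kernelize(vertices, edges, cover, k):
    """Classify each non-cover vertex by its neighborhood (its type) and
    keep at most 2*k**|S| + 1 vertices of each type S; by the claim above,
    the dropped vertices do not affect whether bt(G) <= k."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    by_type = defaultdict(list)
    for v in vertices:
        if v not in cover:
            by_type[frozenset(adj[v])].append(v)
    keep = set(cover)
    for S, vs in by_type.items():
        keep.update(vs[:2 * k ** len(S) + 1])
    kept_edges = [(u, v) for u, v in edges if u in keep and v in keep]
    return keep, kept_edges
```

For a star $K_{1,10}$ with $k = 1$, the single type class of the leaves is cut down to $2 \cdot 1 + 1 = 3$ representatives.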
We now construct a kernel $G'$ from $G$ of size bounded by a function of $\tau$ as follows. We first classify each vertex of $V(G) \setminus C$ based on its type. We then remove an arbitrary subset of vertices from each set $V_S$ with $|V_S| > 2k^{|S|} + 1$ until $|V_S| = 2k^{|S|} + 1$. Thus, constructing $G'$ can be done in $O(t \cdot n + m)$ time, where $t \le 2^\tau$ is the number of types and $m$ is the number of edges of $G$. Since $k < \tau$, the resulting graph $G'$ has at most $\tau + 2^\tau (2\tau^\tau + 1)$ vertices. From our claim above we can conclude that $G$ admits a $k$-page book embedding if and only if $G'$ does. Determining the book thickness of $G'$ can be done by brute force, guessing all possible linear orders and page assignments, in time depending only on $\tau$. A $k$-page book embedding of $G'$ (if any) can be extended to one of $G$ by iteratively applying the constructive procedure from the proof of the above claim, in $O(n + m)$ time. ∎

The next corollary easily follows from Theorem 4.1, by applying binary search on the number of pages and by observing that a vertex cover of minimum size can be computed in $O(1.2738^\tau + \tau n)$ time [6].
Corollary 1
Let $G$ be a graph with $n$ vertices and vertex cover number $\tau$. A book embedding of $G$ with the minimum number of pages can be computed in time $f(\tau) + O(n + m)$ for a computable function $f$.
5 Conclusions and Open Problems
We investigated the parameterized complexity of Book Thickness and Fixed-Order Book Thickness. We proved that both problems are fixed-parameter tractable parameterized by the vertex cover number of the graph, and that the latter problem is also fixed-parameter tractable parameterized by the pathwidth of the graph under the fixed linear order. The algorithm for Book Thickness is the first fixed-parameter algorithm that works for general values of $k$, while, to the best of our knowledge, no such algorithms were previously known for Fixed-Order Book Thickness.
We believe that our techniques can be extended to the setting in which edges on the same page may cross, with a given budget of at most $c$ crossings over all pages. This problem has been studied by Bannister and Eppstein [2] with the number of pages restricted to be either $1$ or $2$. It would also be interesting to investigate the setting where an upper bound on the maximum number of crossings per edge is given as part of the input, as studied in [5].
The main question that remains open is whether Book Thickness (and Fixed-Order Book Thickness) can be solved in polynomial time (or even fixed-parameter time) for graphs of bounded treewidth, as asked by Dujmović and Wood [13]. As an intermediate step towards solving this problem, we ask whether the two problems can be solved efficiently when parameterized by the treedepth [24] of the graph. Treedepth restricts the graph structure in a stronger way than treewidth, and has been used to obtain algorithms for several problems which have proven resistant to parameterization by treewidth [16, 18].
References
 [1] Bannister, M.J., Cabello, S., Eppstein, D.: Parameterized complexity of 1-planarity. J. Graph Algorithms Appl. 22(1), 23–49 (2018). https://doi.org/10.7155/jgaa.00457
 [2] Bannister, M.J., Eppstein, D.: Crossing minimization for 1-page and 2-page drawings of graphs with bounded treewidth. J. Graph Algorithms Appl. 22(4), 577–606 (2018). https://doi.org/10.7155/jgaa.00479
 [3] Bekos, M.A., Gronemann, M., Raftopoulou, C.N.: Two-page book embeddings of 4-planar graphs. Algorithmica 75(1), 158–185 (2016). https://doi.org/10.1007/s00453-015-0016-8
 [4] Bernhart, F., Kainen, P.C.: The book thickness of a graph. J. Comb. Theory, Ser. B 27(3), 320–331 (1979). https://doi.org/10.1016/00958956(79)900212
 [5] Binucci, C., Di Giacomo, E., Hossain, M.I., Liotta, G.: 1page and 2page drawings with bounded number of crossings per edge. Eur. J. Comb. 68, 24–37 (2018). https://doi.org/10.1016/j.ejc.2017.07.009
 [6] Chen, J., Kanj, I.A., Xia, G.: Improved upper bounds for vertex cover. Theor. Comput. Sci. 411(4042), 3736–3756 (2010). https://doi.org/10.1016/j.tcs.2010.06.026
 [7] Chung, F., Leighton, F., Rosenberg, A.: Embedding graphs in books: A layout problem with applications to VLSI design. SIAM J. Alg. Discr. Meth. 8(1), 33–58 (1987). https://doi.org/10.1137/0608002
 [8] Cygan, M., Fomin, F.V., Kowalik, L., Lokshtanov, D., Marx, D., Pilipczuk, M., Pilipczuk, M., Saurabh, S.: Parameterized Algorithms. Springer (2015). https://doi.org/10.1007/9783319212753
 [9] Diestel, R.: Graph Theory, 4th Edition, Graduate texts in mathematics, vol. 173. Springer (2012)
 [10] Downey, R.G., Fellows, M.R.: Fundamentals of Parameterized Complexity. Texts in Computer Science, Springer (2013). https://doi.org/10.1007/9781447155591
 [11] Dujmović, V., Wood, D.R.: On linear layouts of graphs. Discrete Math. Theor. Comput. Sci. 6(2), 339–358 (2004)
 [12] Dujmović, V., Wood, D.R.: Graph treewidth and geometric thickness parameters. Discrete Comput. Geom. 37(4), 641–670 (2007). https://doi.org/10.1007/s00454-007-1318-7
 [13] Dujmović, V., Wood, D.R.: On the book thickness of k-trees. Discrete Math. Theor. Comput. Sci. 13(3), 39–44 (2011)
 [14] Fellows, M.R., Lokshtanov, D., Misra, N., Rosamond, F.A., Saurabh, S.: Graph layout problems parameterized by vertex cover. In: Algorithms and Computation (ISAAC’08). pp. 294–305 (2008). https://doi.org/10.1007/9783540921820_28
 [15] Ganian, R.: Improving vertex cover as a graph parameter. Discrete Math. Theor. Comput. Sci. 17(2), 77–100 (2015)
 [16] Ganian, R., Ordyniak, S.: The complexity landscape of decompositional parameters for ILP. Artif. Intell. 257, 61–71 (2018). https://doi.org/10.1016/j.artint.2017.12.006
[17] Ganley, J.L., Heath, L.S.: The pagenumber of k-trees is O(k). Discrete Appl. Math. 109(3), 215–221 (2001). https://doi.org/10.1016/S0166-218X(00)00178-5
 [18] Gutin, G.Z., Jones, M., Wahlström, M.: The mixed Chinese postman problem parameterized by pathwidth and treedepth. SIAM J. Discrete Math. 30(4), 2177–2205 (2016). https://doi.org/10.1137/15M1034337
 [19] Haslinger, C., Stadler, P.F.: RNA structures with pseudoknots: Graphtheoretical, combinatorial, and statistical properties. Bull. Math. Biol. 61(3), 437–467 (1999). https://doi.org/10.1006/bulm.1998.0085
 [20] Kainen, P.C.: Some recent results in topological graph theory. In: Bari, R.A., Harary, F. (eds.) Graphs and Combinatorics. pp. 76–108. Springer (1974). https://doi.org/10.1007/BFb0066436
[21] Kinnersley, N.G.: The vertex separation number of a graph equals its path-width. Inf. Process. Lett. 42(6), 345–350 (1992). https://doi.org/10.1016/0020-0190(92)90234-M
[22] Lodha, N., Ordyniak, S., Szeider, S.: SAT-encodings for special treewidth and pathwidth. In: Gaspers, S., Walsh, T. (eds.) Theory and Applications of Satisfiability Testing (SAT’17). LNCS, vol. 10491, pp. 429–445. Springer (2017). https://doi.org/10.1007/978-3-319-66263-3_27
[23] Mallach, S.: Linear ordering based MIP formulations for the vertex separation or pathwidth problem. In: Brankovic, L., Ryan, J., Smyth, W. (eds.) Combinatorial Algorithms (IWOCA’17). LNCS, vol. 10765, pp. 327–340. Springer (2017). https://doi.org/10.1007/978-3-319-78825-8_27
[24] Nešetřil, J., de Mendez, P.O.: Sparsity – Graphs, Structures, and Algorithms, Algorithms and Combinatorics, vol. 28. Springer (2012). https://doi.org/10.1007/978-3-642-27875-4
 [25] Ollmann, L.T.: On the book thicknesses of various graphs. In: 4th Southeastern Conference on Combinatorics, Graph Theory and Computing. vol. 8, p. 459 (1973)
[26] Robertson, N., Seymour, P.D.: Graph minors. I. Excluding a forest. J. Comb. Theory, Ser. B 35(1), 39–61 (1983). https://doi.org/10.1016/0095-8956(83)90079-5
[27] Robertson, N., Seymour, P.D.: Graph minors. II. Algorithmic aspects of tree-width. J. Algorithms 7(3), 309–322 (1986). https://doi.org/10.1016/0196-6774(86)90023-4
[28] Unger, W.: The complexity of colouring circle graphs (extended abstract). In: Finkel, A., Jantzen, M. (eds.) Theoretical Aspects of Computer Science (STACS’92). LNCS, vol. 577, pp. 389–400. Springer (1992). https://doi.org/10.1007/3-540-55210-3_199
[29] Yannakakis, M.: Embedding planar graphs in four pages. J. Comput. Syst. Sci. 38(1), 36–67 (1989). https://doi.org/10.1016/0022-0000(89)90032-9
Appendix 0.A Missing Proofs of Section 3
Observation 2
If for some $i \in \{1, \dots, n\}$ it holds that $\mathrm{fo\text{-}bt}(G[V_i], \prec) > k$, where $V_i$ denotes the set of the first $i$ vertices in $\prec$, then $(G, \prec, k)$ is a NO-instance of Fixed-Order Book Thickness.
Proof
Assume for a contradiction that $(G, \prec)$ admits a $k$-page book embedding $\langle \prec, \sigma \rangle$. Let $\sigma_1$ be the restriction of that book embedding to the edges whose both endpoints lie in $V_i$, and let $\sigma_2$ be the restriction of that book embedding to all other edges. Then $\langle \prec, \sigma_1 \rangle$ is a valid page assignment of $G[V_i]$ using at most $k$ pages, and hence by definition $\mathrm{fo\text{-}bt}(G[V_i], \prec) \leq k$. In particular, this contradicts the choice of $i$. ∎
Lemma 2
Every $n$-vertex graph $G$ with a linear order $\prec$ of $V(G)$ such that $(G, \prec)$ has pathwidth $\kappa$ admits a $\kappa$-page book embedding $\langle \prec, \sigma \rangle$, which can be computed in $O(\kappa n)$ time.
Proof
Let $\sigma$ be the page assignment to $[\kappa]$ defined as follows, where $v_1 \prec v_2 \prec \dots \prec v_n$ are the vertices of $G$. We parse the vertices of $G$ following $\prec$ from right to left. Consider the rightmost vertex $v_n$ of $\prec$, and let $g_n$ be an arbitrary injective assignment from $\Gamma(v_n)$ (the guard set of $v_n$) to $[\kappa]$. For each edge $uv_n$ incident to $v_n$ there exists some guard $u \in \Gamma(v_n)$, and we assign $uv_n$ to page $g_n(u)$. Observe that this results in a non-crossing page assignment of the edges incident to $v_n$.
Next, we proceed by induction. Assume we have obtained a non-crossing page assignment for all edges incident to the last $i$ vertices, i.e., for all edges incident to $\{v_{n-i+1}, \dots, v_n\}$, and that furthermore we have a mapping $g_{n-i+1}$ which maps the guards for $v_{n-i+1}$ to $[\kappa]$ and a non-crossing partial page assignment $\sigma$ which maps all edges $uv_j$ where $j \geq n-i+1$ and $u \prec v_j$ to $[\kappa]$. In particular, all edges with an endpoint to the left of $v_{n-i+1}$ end in the guards for $v_{n-i+1}$ and are assigned to distinct pages if and only if they are incident to distinct guards.
We extend this page assignment to all edges incident to the last $i+1$ vertices as follows. First, we extend the restriction of $g_{n-i+1}$ to the guards for $v_{n-i}$ to an arbitrary injective mapping $g_{n-i}$ from the guards for $v_{n-i}$ to $[\kappa]$, which is always possible since the number of guards for $v_{n-i}$ is at most $\kappa$. Second, we assign each left edge $uv_{n-i}$ of $v_{n-i}$ to page $g_{n-i}(u)$.
To conclude the proof, observe that the page assignment obtained in this way is non-crossing. Indeed, the only edges added to the current page assignment are left edges of $v_{n-i}$, and each such edge $uv_{n-i}$ is assigned to the page $g_{n-i}(u)$; notably, the edges maintain the property of being assigned to distinct pages if and only if they are incident to distinct guards, so any two edges on the same page that cross the current gap share their left endpoint and hence cannot cross each other. Also, $\sigma$ can be computed in $O(\kappa n)$ time, since each of the $n$ iterations extends the guard mapping and assigns at most $\kappa$ left edges in $O(\kappa)$ time. ∎
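The right-to-left procedure from this proof can be sketched in code. The following is a minimal Python illustration, not part of the paper: it assumes the vertex names 0, …, n−1 already coincide with the spine order, represents the graph as an edge list, and detects guards via each vertex's rightmost neighbour; the function name and interface are hypothetical.

```python
def fixed_order_book_embedding(n, edges, kappa):
    # Greedy right-to-left page assignment (sketch of the proof of Lemma 2).
    # Assumes the spine order is 0..n-1 and that at every gap the number of
    # guards is at most kappa (i.e., the order has pathwidth <= kappa).
    nbrs = [set() for _ in range(n)]
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    # A vertex u < j is a guard for v_j iff its rightmost neighbour is >= j.
    rmost = [max(nbrs[u], default=-1) for u in range(n)]
    page = {}  # edge (u, v) with u < v  ->  page in 0..kappa-1
    g = {}     # current injective mapping from guards to pages
    for j in range(n - 1, 0, -1):
        guards = {u for u in range(j) if rmost[u] >= j}
        assert len(guards) <= kappa, "order has pathwidth > kappa"
        # Keep the pages of vertices that remain guards, free the rest,
        # then extend the mapping injectively to the new guards.
        g = {u: p for u, p in g.items() if u in guards}
        free = iter(set(range(kappa)) - set(g.values()))
        for u in sorted(guards - g.keys()):
            g[u] = next(free)
        # Each left edge of v_j inherits the page of its guard endpoint.
        for u in nbrs[j]:
            if u < j:
                page[(u, j)] = g[u]
    return page
```

For instance, on the 4-cycle with edges (0,1), (1,2), (2,3), (0,3) and kappa = 2, the returned assignment uses at most two pages and is crossing-free, matching the bound of the lemma; any injective choice of free pages works, which mirrors the "arbitrary injective mapping" in the proof.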