1 Introduction
In recent decades, there has been notable progress in solving the well-known Boolean satisfiability (Sat) problem [BiereHeuleMaarenWalsh09, KleineBuningLettman99], witnessed by powerful Sat solvers that are also strikingly fast. On the one hand, these solvers can decide the existence of a satisfying assignment for Boolean formulas with millions of variables; on the other hand, Sat is one of the most prominent NP-complete problems [Cook71]. In consequence, this means bad news for solving this problem efficiently, assuming P ≠ NP, which is the gold standard assumption in computational complexity. Over time, even stronger assumptions like the exponential time hypothesis (ETH) [ImpagliazzoPaturiZane01] emerged, which implies exponential solving time in the number of variables in the worst case. Nowadays, the ETH is widely believed among researchers and therefore oftentimes assumed for establishing theoretical results. Still, from a scientific point of view, it is not completely clear why in practice Sat
solvers deal so well with a large number of instances; there are probably many interleaving reasons for this observation. One of these reasons concerns
structural properties of instances that are indirectly utilized by the solvers' internals, which has been demonstrated at least theoretically [AtseriasFichteThurley11]. This thesis deals with such a structural property, which is referred to as treewidth [RobertsonSeymour86]. Treewidth is well-studied and measures the closeness of an instance to being a tree (tree-likeness), motivated by the fact that many hard problems become easy for the special case of trees or tree-like structures. This parameter, however, is quite generic and by far not limited to Boolean satisfiability. In fact, many further problems become solvable in polynomial time in the instance size when parameterized by treewidth. Interestingly, also plenty of problems relevant to knowledge representation and reasoning (KR) and artificial intelligence (AI), which are believed to be even harder than Sat, can be turned tractable when utilizing treewidth. One prominent example of such a problem is QSat, which asks for deciding the validity of a quantified Boolean formula (QBF) [BiereHeuleMaarenWalsh09, KleineBuningLettman99], an extension of a Boolean formula where certain variables are existentially or universally quantified. Complexity-wise, it is known that restrictions of this problem reach higher levels of the polynomial hierarchy; in general it is even PSPACE-complete. Notably, similar to complexity classes in classical complexity, the actual "hardness" of such problems when parameterized by treewidth is oftentimes quantified by studying the precise runtime dependence (levels of exponentiality) on the treewidth, see, e.g., [PanVardi06, AtseriasOliva14a, MarxMitsou16, LampisMitsou17].
Contributions.
In this work, we study advanced treewidth-based methods and tools for problems in KR and AI. Thereby, we provide means to establish precise runtime results (upper bounds) for prominent fragments of the answer set programming (Asp) formalism, which is a canonical paradigm for solving problems relevant to KR. Our results are obtained by relying on dynamic programming that is guided along a so-called tree decomposition in a divide-and-conquer fashion. Such a tree decomposition is a concrete structural decomposition of an instance that adheres to the treewidth.
Then, we present a new type of problem reduction, which we call decomposition-guided (DG) reduction, that allows us to precisely study and monitor the treewidth increase (or decrease) when reducing from one problem to another. This new reduction type is the basis for proving a long-open result concerning quantified Boolean formulas. Indeed, with this reduction we are able to provide precise conditional lower bounds (assuming the ETH) for the problem QSat when parameterized by treewidth. More precisely, by relying on DG reductions, we prove that QSat, when restricted to formulas of quantifier rank ℓ and treewidth k, cannot be decided in a runtime that is better than ℓ-fold exponential in the treewidth and polynomial in the instance size.² This non-incrementally lifts a known result for quantifier rank 2 to arbitrary quantifier ranks ℓ, and it implies further consequences.

²"ℓ-fold exponentiality" refers to a runtime dependence on the treewidth k that is a tower of 2's of height ℓ, with k on top. More precisely, this indicates runtimes of the form tower(ℓ, k)·poly(n), where n is the number of variables.
Even further, the lower bound result for QSat allows us to design a new methodology for establishing lower bounds for a plethora of problems in the area of KR and AI. In consequence, we prove that all upper bounds and DG reductions presented in this thesis are tight under the ETH. The lower bound result for QSat and the resulting methodology also unlock a hierarchy of dedicated runtime classes for problems parameterized by treewidth. These classes can be used to quantify the hardness of such problems for utilizing treewidth and to categorize them according to their runtime dependence on the treewidth.
Finally, despite the devastating news concerning lower bounds, we are able to provide an efficient implementation of algorithms based on dynamic programming guided along a tree decomposition. Our approach works by finding suitable abstractions of instances, which are subsequently refined in a nested (recursive) fashion. Given the tremendous power of Sat solvers, our implementation is hybrid in the sense that it heavily uses such standard solvers for solving certain subproblems that appear during dynamic programming. It turns out that the resulting solver is quite competitive for two canonical counting problems related to Sat. In fact, we are able to solve instances with treewidth upper bounds beyond 260, which underlines that treewidth might indeed be an important parameter that should be considered in modern solver designs.
Table 1: Overview of selected upper and lower bounds for problems parameterized by treewidth.

Problem                      Exponentiality        Runtime*          Upper bound               Lower bound
Sat                          single exponential    2^Θ(k)            [SamerSzeider10b]         [ImpagliazzoPaturiZane01]
Tight Asp                    single exponential    2^Θ(k)            Thm. 3.8                  Prop. 3.9
Normal Asp                   slightly super exp.   2^Θ(k·log(k))     Thm. 1                    Thm. 4
ι-Tight Asp                  slightly super exp.   2^Θ(k·log(ι))     Thm. 4.27                 Corr. 4.28
Asp                          double exponential    2^(2^Θ(k))        [JaklPichlerWoltran09]    Thm. 3
ℓ-QSat, quantifier rank ℓ    ℓ-fold exponential    tower(ℓ, Θ(k))    [Chen04a]                 Thm. 2

*Runtime dependence on the treewidth k, omitting polynomial factors in the instance size.
Overview and Structure.
Table 1 provides a short overview of selected key findings of this work as well as some related existing results. Thereby, we show selected lower and upper bounds that are proved in the thesis. The problems displayed in the table are variants of Sat and QSat, as well as important fragments of Asp, which is an essential formalism for modeling problems in knowledge representation and reasoning. In the course of the thesis, this table is significantly extended, where we also show consequences for many further problems relevant to knowledge representation and artificial intelligence. We refer to Chapter 6 of the thesis for this extension [Hecher21, Table 6.1].
Each section of this abstract briefly summarizes the corresponding chapter of the thesis. Note, however, that due to the space limit, only a small selection of findings and only key ideas can be sketched in this paper. In the next section, we recap some basics. Then, in Section 3 we establish new upper bound results via dynamic programming for important fragments of logic programs (Asp). In Section 4 we illustrate the concept of decomposition-guided (DG) reductions, which play a crucial role in the thesis. Section 5 presents new lower bounds for QSat and Asp, thereby heavily utilizing DG reductions. The lower bound result for QSat enables a novel methodology for proving runtime lower bounds that yields a hierarchy of runtime classes for classifying problems depending on their hardness for utilizing treewidth, which is briefly outlined in Section 6. Despite the established runtime bounds, we show in Section 7 how one can still design efficient and competitive solvers that heavily rely on utilizing treewidth.

2 Preliminaries
A (Boolean) formula in conjunctive normal form is a conjunction of clauses, where a clause is a disjunction of variables or negations thereof, cf. [BiereHeuleMaarenWalsh09]. The decision problem Sat asks whether, for a given formula, there exists a satisfying assignment. A quantified Boolean formula (QBF) is an extension of a Boolean formula that additionally quantifies variables either existentially or universally. The quantifier rank of a QBF is the number of alternating quantifiers [KleineBuningLettman99]. We only consider closed formulas, where the quantified variables coincide with the variables of the formula. The problem ℓ-QSat asks, for a given QBF of quantifier rank ℓ, whether it evaluates to true; ℓ-QSat is located on the ℓ-th level of the polynomial hierarchy.
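To make these semantics concrete, the following minimal Python sketch evaluates a closed prenex QBF by recursively expanding the quantifier prefix. This is an illustrative brute-force evaluator, not the treewidth-based approach of this work; the representation of prefix and matrix is our own choice.

```python
def eval_qbf(prefix, clauses, assignment=None):
    """Naively evaluate a closed prenex QBF.

    prefix  -- list of (quantifier, variable) pairs; 'E' existential, 'A' universal
    clauses -- CNF matrix: each clause is a set of (variable, polarity) literals,
               polarity True for a positive occurrence
    """
    if assignment is None:
        assignment = {}
    if not prefix:
        # All variables assigned: every clause needs one satisfied literal.
        return all(any(assignment[v] == pol for v, pol in clause)
                   for clause in clauses)
    (q, v), rest = prefix[0], prefix[1:]
    branches = (eval_qbf(rest, clauses, {**assignment, v: b})
                for b in (False, True))
    return any(branches) if q == 'E' else all(branches)
```

Such an evaluator runs in time exponential in the number of variables; the point of the treewidth-based algorithms discussed here is to trade this for a (tower-shaped) dependence on the treewidth instead.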
A (logic) program [GebserKaminskiKaufmannSchaub12, BrewkaEiterTruszczynski11] is a set of rules, where each rule is of the form H ← B⁺, ¬B⁻ over sets H, B⁺ of variables and a set B⁻ of (default-negated) variables. A program is normal if |H| ≤ 1 for every rule, and tight whenever there are no cyclic dependencies over all rules involving the variables in H and B⁺ of a rule. Intuitively, the semantics of this formalism, called answer set programming (Asp), require that whenever for a rule every variable in B⁺ can be derived, but no variable in B⁻ is derived, then at least one variable in H has to hold. In addition, the Asp formalism imposes a stability criterion on top, which is based on minimizing the set of derived variables. As a consequence, for a given program, already deciding the existence of such a set of variables, called an answer set, is believed to be beyond NP.
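The stability criterion can be illustrated with a small brute-force sketch (our own encoding of normal rules as triples; real Asp systems work very differently): a candidate set M is an answer set exactly if it equals the least model of its Gelfond-Lifschitz reduct.

```python
from itertools import combinations

def answer_sets(rules, atoms):
    """Brute-force answer sets of a normal program.

    rules -- list of (head, pos, neg): head atom plus sets of positive
             and default-negated body atoms
    """
    found = []
    for r in range(len(atoms) + 1):
        for M in map(set, combinations(sorted(atoms), r)):
            # Gelfond-Lifschitz reduct: drop rules blocked by M, strip negation.
            reduct = [(h, p) for h, p, n in rules if not (n & M)]
            # Least model of the negation-free reduct via fixpoint iteration.
            lm, changed = set(), True
            while changed:
                changed = False
                for h, p in reduct:
                    if p <= lm and h not in lm:
                        lm.add(h)
                        changed = True
            if lm == M:  # stability: M reproduces exactly itself
                found.append(M)
    return found
```

For instance, the program {a ← ¬b; b ← ¬a} has the two answer sets {a} and {b}, while the cyclic program {a ← b; b ← a} only admits the empty answer set, since minimality forbids cyclic self-support.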
Assume a given formula, QBF, or logic program I. Then, var(I) denotes the set of variables of I. Further, the primal graph G_I of I is an undirected graph whose vertices are the variables of I, with an edge between two variables whenever those variables appear together in a clause or rule of I.
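The primal graph can be computed directly from the clause (or rule) structure; a small sketch, assuming clauses are given simply as sets of variable names:

```python
def primal_graph(clauses):
    """Primal graph: vertices are the variables; two variables are adjacent
    whenever they occur together in some clause or rule."""
    vertices, edges = set(), set()
    for clause in clauses:
        scope = sorted(clause)  # variables occurring in this clause
        vertices.update(scope)
        edges.update((u, v) for i, u in enumerate(scope) for v in scope[i + 1:])
    return vertices, edges
```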
A (tree) decomposition of a graph G consists of a tree T and a function χ that assigns to every node t of T a set χ(t) of vertices of G, called a bag [RobertsonSeymour86]. Further, T and χ have to fulfill (i) vertices covered: for every vertex v of G, there is a node t of T such that v ∈ χ(t); (ii) edges covered: for every edge {u, v} of G, there is a node t of T with {u, v} ⊆ χ(t); and (iii) connectedness: for every three nodes t₁, t₂, t₃ of T, whenever t₂ lies on the unique path between t₁ and t₃, then χ(t₁) ∩ χ(t₃) ⊆ χ(t₂). The width of the decomposition is the largest bag size |χ(t)| minus one over all nodes t of T, and the treewidth of G is the smallest width among all tree decompositions of G.
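Conditions (i)-(iii) and the width are straightforward to verify programmatically; a sketch under our own data representation (bags map decomposition nodes to vertex sets):

```python
from collections import defaultdict

def is_tree_decomposition(bags, tree_edges, vertices, graph_edges):
    """Verify conditions (i)-(iii) of a tree decomposition.

    bags       -- dict: decomposition node -> set of graph vertices
    tree_edges -- edges of the decomposition tree T
    """
    # (i) vertices covered: every vertex of G occurs in some bag
    if not all(any(v in b for b in bags.values()) for v in vertices):
        return False
    # (ii) edges covered: every edge of G lies inside some bag
    if not all(any({u, v} <= b for b in bags.values()) for u, v in graph_edges):
        return False
    # (iii) connectedness: for each vertex, the bags containing it
    # must induce a connected subtree of T
    adj = defaultdict(set)
    for s, t in tree_edges:
        adj[s].add(t)
        adj[t].add(s)
    for v in vertices:
        nodes = {t for t, b in bags.items() if v in b}
        seen, stack = set(), [next(iter(nodes))]
        while stack:
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                stack.extend(adj[n] & nodes)
        if seen != nodes:
            return False
    return True

def width(bags):
    """Largest bag size minus one."""
    return max(len(b) for b in bags.values()) - 1
```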
3 Upper Bounds for Utilizing Treewidth by Dynamic Programming
For proving the existence of parameterized algorithms for a problem when considering treewidth, the famous meta-theorem by Courcelle [Courcelle90] has been established. While this theorem and several extensions thereof have oftentimes been invoked to prove theoretical results for problems parameterized by treewidth, such theorems do not necessarily provide precise runtime guarantees. Alternatively, there is a more direct way to exploit treewidth and to obtain concrete upper bounds. Indeed, one of the most prominent methods to directly utilize treewidth [RobertsonSeymour86] is by means of dynamic programming on tree decompositions [CyganEtAl15]. Thereby, the method of dynamic programming [Bellman54], which generally refers to breaking down problems in a divide-and-conquer fashion, is guided along a tree decomposition, which is traversed in post-order (bottom-up) such that during the traversal a table is computed for each node of the decomposition. While for a given graph the computation of a tree decomposition of minimal width (treewidth) is NP-hard, it is possible to efficiently approximate treewidth [Bodlaender96] and compute a corresponding tree decomposition, and there are also numerous efficient heuristics as well as exact solvers available [Dell17a].

The literature contains plenty of research on dynamic programming on tree decompositions for diverse problems and formalisms [CyganEtAl15]. This section concerns such algorithms, whereby we particularly focus on key fragments of answer set programming that have not been studied yet. We briefly sketch the ideas that result in an algorithm for deciding whether a normal program admits an answer set. It is known that one can encode a normal logic program into a Boolean formula with a sub-quadratic overhead in the number of variables by means of so-called level mappings [LinZhao03, Janhunen06]. The idea of these level mappings is to encode some derivation order for the variables that are supposed to hold, thereby avoiding cyclic derivations. However, we demonstrate in the thesis that in general the overhead caused by these reductions is unbounded in the treewidth of the primal graph G_Π of a given normal logic program Π. Consequently, it turns out that for designing a dynamic programming algorithm that is guided along a tree decomposition T of G_Π, one needs to relax the notion of level mappings. This idea leads to the concept of local level mappings, where we order variables only locally within a node of T, which relaxes the "global" order and therefore indicates only some relative order among the variables of a bag. The relaxation to local level mappings might result in several computed solutions per answer set of Π, thereby potentially losing the one-to-one (bijective) characterization. However, if the relative orderings corresponding to local level mappings are properly maintained and "synchronized" between neighboring nodes of T, we arrive at an algorithm that is still sufficient for deciding whether Π admits an answer set, which yields the following result.
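The idea of (global) level mappings can be made concrete: given a normal program and a candidate model M, every atom of M must be derivable via a rule whose positive body is established at strictly smaller levels. A hedged Python sketch (representing rules as (head, pos, neg) triples is our own choice; this is the global notion that is later relaxed to local, per-bag orderings):

```python
def level_mapping(rules, M):
    """Return a level mapping proving M acyclically derivable, or None.

    An atom gets level i if some rule with that head fires using only atoms
    of strictly smaller level, while no default-negated body atom is in M.
    """
    derived, level = set(), {}
    for lvl in range(len(M) + 1):
        new = {h for h, pos, neg in rules
               if h in M and h not in derived
               and pos <= derived and not (neg & M)}
        if not new:
            break
        for a in new:
            level[a] = lvl
        derived |= new
    return level if derived == M else None
```

A cyclic pair of rules a ← b and b ← a admits no such mapping for {a, b}, which is precisely how level mappings rule out cyclic derivations. Maintaining one global mapping is too expensive in the treewidth setting; the dynamic-programming algorithm sketched above instead keeps only the relative order among the few atoms inside each bag.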
Theorem 1 (Upper bound for normal programs).
Let Π be an arbitrary normal logic program, where the treewidth of G_Π is k. Then, deciding whether Π admits an answer set can be achieved in time 2^{O(k·log(k))}·poly(|var(Π)|).
4 DecompositionGuided Reductions for Treewidth
There are several problems in logic and artificial intelligence for which dedicated solutions and systems based on utilizing treewidth have been proposed. This can be witnessed by the existence of specialized implementations, e.g., [CharwatWoltran19, FichteHecherZisser19, KiljanPilipczuk18], but also more general frameworks have been introduced [BliemEtAl16, BannachBerndt19, LangerEtAl12]. Interestingly, also one of the winning solvers [KorhonenJaervisalo21] of the most recent model counting competition [FichteHecherHamiti21], which focuses on solving the canonical counting problem #Sat of Sat, explicitly exploits treewidth and demonstrates surprising performance gains.
Inspired and motivated by utilizing treewidth in order to solve problems efficiently, the question arises how instances of different formalisms can be converted (reduced) into one another in a way that preserves structural properties as far as possible. Specialized reductions with guarantees for treewidth are not only interesting in theory, but also have practical relevance, as they might enable efficient solving procedures for further formalisms of different areas. As an example, a reduction to #Sat that linearly preserves (or only slightly increases) treewidth would enable other problem formalisms to benefit from treewidth-based counting solvers. These considerations are addressed by means of specialized reductions, which are referred to as decomposition-guided (DG). The idea of these reductions is inspired by dynamic programming on tree decompositions, as briefly introduced in the previous section, where problems are solved in parts by traversing the tree from the leaves towards the root. Analogously, such a DG reduction reduces a given instance in parts, thereby being guided along a tree decomposition in order to establish guarantees for the treewidth of the resulting instance.
The simplified concept of DG reductions is highlighted in Figure 1, where a given instance I of a source problem and a tree decomposition T of I are assumed. DG reductions have the advantage that, in addition to the resulting instance I′, they automatically give rise to a tree decomposition T′ of I′, which immediately establishes the relation between the treewidth of I and that of I′. Further, if such a reduction works for any tree decomposition of the source instance, it immediately yields treewidth dependencies and guarantees in the process, thereby oftentimes allowing for simple transformations that preserve the treewidth.
5 Lower Bounds by DecompositionGuided Reductions
Recall that in Section 3 we briefly sketched how upper bounds for treewidth-based algorithms are obtained. Alternatively, one can often prove upper bounds for a problem of interest by utilizing a DG reduction to QSat that linearly preserves the treewidth, which yields the same upper bounds as those for evaluating QBFs, cf. Table 1. Thereby, the natural question arises of whether one can improve such runtime results. As a result, scientists have been establishing lower bounds under the widely believed exponential time hypothesis (ETH) for particular problems, whereby also problems on higher levels of the polynomial hierarchy have been pursued [MarxMitsou16, LampisMitsou17]. Despite those efforts, a general methodology that enables and simplifies the process of proving such conditional lower bounds has been missing. We address this shortcoming by providing such a methodology for proving precise conditional lower bounds, based on the following lower bound result for QSat, obtained with the help of DG reductions.
Theorem 2 (Lower bound for ℓ-QSat).
Let φ be an arbitrary QBF of quantifier rank ℓ ≥ 1 such that the treewidth of the primal graph of φ is k. Then, assuming that the ETH holds, one cannot decide whether φ evaluates to true (or is valid) in time tower(ℓ, o(k))·poly(|var(φ)|).
The proof idea of Theorem 2 and the whole approach are innovative, since the proof uses a DG self-reduction from ℓ-QSat to (ℓ+1)-QSat. In contrast to existing lower bound proof approaches, we exploit the additional quantifier to constructively decrease the treewidth exponentially, from k to O(log k). Techniques that have been used before oftentimes do not directly relate the parameters (treewidth, in our case) of the source and destination instances. The concept is sketched and visualized in Table 2.
Overall, our approach enables many further conditional lower bound results, culminating in a new methodology for proving them. Instead of directly applying the ETH by reducing from Sat, one can simply decide on the quantifier rank ℓ and reduce from ℓ-QSat to the problem of interest. If this reduction is carried out via a DG reduction that linearly preserves the treewidth, we immediately establish an ℓ-fold exponential lower bound. Especially for problems higher up in the polynomial hierarchy, this greatly simplifies the process by avoiding a barrier of several levels of exponentiality that one would otherwise have to bypass in a specialized, heavily problem-specific reduction. For deeper insights and formal details, we refer to the thesis [Hecher21, Chapter 5]. The thesis also contains a large table of formalisms and corresponding novel conditional lower bounds for treewidth that can be easily obtained by applying this new methodology [Hecher21, Table 6.1].
Table 2: Comparison of the existing lower-bound approach for 2-QSat [LampisMitsou17] with the novel approach for ℓ-QSat (illustration omitted).
Are Normal Programs “Harder” than Sat?
The results above have immediate consequences also for the Asp formalism. Indeed, by utilizing a DG reduction from 2-QSat and applying Theorem 2, one can show the following conditional lower bound for logic programs, which closes the gap to the existing upper bound [JaklPichlerWoltran09], cf. Table 1.
Theorem 3 (Lower bound for logic programs).
Let Π be an arbitrary logic program, where the treewidth of G_Π is k. Then, assuming the ETH, one cannot decide whether Π admits an answer set in time 2^{2^{o(k)}}·poly(|var(Π)|).
Interestingly, with the methodology of the previous section, there are no larger barriers in the course of proving Theorem 3. Also, the result is in line with the expectations one might have due to classical complexity theory, as the problem is located on the second level of the polynomial hierarchy [BrewkaEiterTruszczynski11]. This is different for the important fragment of normal logic programs: despite the problem of deciding whether a normal program admits an answer set being NP-complete, its "hardness" for treewidth has been open, cf. Table 1. It has been known that in general a normal program cannot be translated into a Boolean formula, such that the answer sets are bijectively captured by the satisfying assignments of the formula, without a sub-quadratic overhead in the number of (auxiliary) variables [LifschitzRazborov06, Janhunen06]. Nevertheless, the following question has been left open: is deciding whether a normal logic program admits an answer set actually "harder" than Sat when considering treewidth?
Indeed, this question can be answered affirmatively when assuming the widely believed ETH. However, the proof is not only more involved than that of Theorem 3; its consequences also reach further.
Theorem 4 (Lower bound for normal programs).
Let Π be an arbitrary normal logic program, where the treewidth of G_Π is k. Then, assuming the ETH, one cannot decide whether Π admits an answer set in time 2^{o(k·log(k))}·poly(|var(Π)|).
Notably, when considering treewidth, the overhead does not only show in the number of variables; the problem is indeed harder to solve under the ETH than Sat. An informal explanation lies in the observation that there exist programs whose primal graphs have low treewidth, yet which can express broader reachability problems and transitive closures such that the involved variables are necessarily spread widely over any tree decomposition of low width.
These findings and the algorithm of Section 3 lead to a new family of logic programs, referred to as ι-tight. There, ι represents for a logic program the "degree" between the class of tight (ι = 1) and normal (ι = k) programs, where k is the treewidth. The actual value of ι then directly corresponds to the solving effort for treewidth, cf. Table 1, which lies between the evaluation of tight programs (similar to Sat) and the evaluation of normal programs.
6 A Complexity Landscape for Treewidth
The findings of the previous section not only yield a new methodology for proving lower bounds; they also give rise to a hierarchy of runtime classes that are useful for categorizing problems according to their hardness when utilizing treewidth. The basic definition of this family of runtime classes is given below.
For every i ≥ 0, the class C_i is the set of all problems parameterized by treewidth such that every instance I of these problems can be solved in time tower(i, O(k))·poly(|I|), where k refers to the treewidth of I and |I| is the size of I.
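Here tower(i, k) denotes a tower of 2's of height i with k on top, matching the runtime shapes of Table 1; a minimal sketch:

```python
def tower(i, k):
    """Tower of 2's of height i with k on top:
    tower(0, k) = k and tower(i, k) = 2 ** tower(i - 1, k)."""
    return k if i == 0 else 2 ** tower(i - 1, k)
```

So tower(1, k) = 2^k (single exponential) and tower(2, k) = 2^(2^k) (double exponential), which is why these classes separate the levels of exponentiality.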
The definition of these classes is inspired by the work on general fixed-parameter runtime classes [DowneyEtAl07]; these classes are therefore contained in the broader class FPT. It is also immediate by the definition of these classes that for any i ≥ 0 we have C_i ⊆ C_{i+1}. Even further, under the ETH, the inclusions between those classes are strict. Consequently, we show next that the classes C_i for i ≥ 0 form a strict hierarchy assuming the exponential time hypothesis.
Proposition 1.
Let i ≥ 1. Then, under the ETH, we have that C_{i-1} ⊊ C_i.
Proof.
Assume towards a contradiction that the ETH holds and at the same time we have C_{i-1} = C_i. However, this contradicts Theorem 2 for the problem i-QSat, which is in C_i (cf. [Chen04a]), but under the ETH is not in C_{i-1}, since any function of the form tower(i−1, O(k))·poly(n) is also in the set of functions tower(i, o(k))·poly(n). ∎
This hierarchy of runtime classes then yields a new categorization of problems based on their complexity for treewidth. These problems range from typical graph problems, over extensions of Boolean satisfiability and logic programs, to questions in abstract argumentation and beyond. For details, we refer to the table of results [Hecher21, Table 6.1], which also contains results for a list of counting problems.
7 Efficiently Implementing TreewidthAware Algorithms
Despite the strong lower bound results of Section 5 and consequences of the previous section, we present an approach to design a solver that utilizes treewidth, but is still capable of dealing with instances of high treewidth. To this end, we mainly focus on problems based on Sat and canonical counting extensions like #Sat, which have been gaining increasing importance for quantitative reasoning and AI in general.
Note that due to space limits, we can only briefly provide crucial ideas instead of completely discussing full-fledged implementations. Our approach to efficiently dealing with high treewidth relies on the combination of three key concepts.

Abstractions: We compute certain abstractions of the primal graph representation, which are obtained via heuristics. The structured instance parts are then solved by means of dynamic programming that is guided along a tree decomposition of those abstractions. These abstractions cover only a part of the graph, similar to zooming out of a larger street map. The remaining graph parts form larger subinstances, where each of these subinstances is forced to be within a unique tree decomposition node.

Hybrid Solving: Subinstances that result from building such an abstraction and are too unstructured to be tackled by dynamic programming are solved by existing standard Sat-based solvers. For those instances, there is not much hope of efficiently utilizing structure-based measures like treewidth. However, oftentimes the size of those subinstances has already been significantly reduced in the process compared to the full instance.

Nested Refinement: Subinstances that still contain some sort of structure are again decomposed and, if needed, abstractions are built again. So, our approach approximates suitable abstractions of those parts of the primal graph that are highly structured (low treewidth). Then, during dynamic programming, subinstances are simplified and there is again the chance to find structure. In turn, nested refinement ensures that we can refine abstractions of simpler subinstances at a later time, namely after simplifications of subinstances during dynamic programming. Note that the level of nesting is limited, i.e., if the nesting is too deep, we fall back to hybrid solving.
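The interplay of abstraction, recursion, and a base solver can be caricatured in a few lines of Python for #Sat. This is a deliberately simplified sketch: the "abstraction" is just a most-frequent-variable branching heuristic, plain enumeration stands in for the Sat-based base solver, and the thresholds are arbitrary.

```python
from itertools import product

def simplify(clauses, var, val):
    """Condition the CNF on var := val; None signals a falsified clause."""
    out = []
    for c in clauses:
        if (var, val) in c:  # clause satisfied, drop it
            continue
        reduced = {l for l in c if l[0] != var}
        if not reduced:      # clause falsified under this assignment
            return None
        out.append(reduced)
    return out

def brute_count(clauses, variables):
    """'Base solver': enumerate all assignments (stand-in for a Sat solver)."""
    return sum(all(any(a[v] == pol for v, pol in c) for c in clauses)
               for a in ({v: b for v, b in zip(variables, bits)}
                         for bits in product([False, True], repeat=len(variables))))

def hybrid_count(clauses, variables, depth=0):
    """Nested counting: branch on a heuristically chosen variable and
    recurse; small residuals or deep nesting fall back to the base solver."""
    if not clauses:
        return 2 ** len(variables)  # all remaining variables are free
    if depth >= 2 or len(variables) <= 3:
        return brute_count(clauses, variables)
    # heuristic "abstraction": branch on the most frequent variable
    var = max(variables,
              key=lambda v: sum(any(l[0] == v for l in c) for c in clauses))
    rest = [v for v in variables if v != var]
    total = 0
    for val in (False, True):
        sub = simplify(clauses, var, val)
        if sub is not None:
            total += hybrid_count(sub, rest, depth + 1)
    return total
```

The real solver differs substantially (tables along a tree decomposition, genuine Sat calls, refined abstractions), but the shape is the same: structured parts drive the recursion, while unstructured residual subinstances are delegated.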
Our empirical results for this approach yield the following observations. Notably, our method allows us to solve instances with tree decompositions of widths beyond 260. From a conceptual point of view, the hybrid approach seems well-suited: overall, solving is guided along the basic structure of the instance, such that barely structured instances are passed to existing Sat-based solvers. Further, we indeed observe a difference between problems of single-exponential runtime and those of double-exponential runtime in the treewidth. For an extension of the problem #Sat that is double-exponential in the treewidth, our approach is capable of successfully utilizing tree decompositions of widths up to 99. While this is still remarkable, it practically shows the difference between the levels of exponentiality for treewidth. Our observations thereby confirm the classification of Section 6, where problems are categorized according to their runtime dependence on the treewidth.
8 Conclusion
This thesis raises a list of further research questions, which is also reflected by the increasing interest in treewidth.³ In the thesis we establish DG reductions, which serve as a key for proving both upper and lower bounds. However, strengths, weaknesses, as well as restrictions and potential extensions of these reductions are largely unexplored. Especially in the area of explainable AI, DG reductions could help in proving the correctness of solver runs for extensions of Sat when considering treewidth, e.g., counting problems. Theorem 2 provides a tool for proving precise conditional lower bounds for treewidth, which we are currently generalizing to stronger parameters as well. More recent works lift and extend this result to constraint programming, which enables our methodology to express more elaborate lower bounds [FichteHecherKieler20]. However, we are certain that these results can be further generalized and applied, e.g., in the context of database theory. There are also extensions of the ETH, where we expect our methodology to yield even more concrete lower bounds. In the light of theoretical studies between Sat solving and potential relations to treewidth, e.g., [AtseriasFichteThurley11], we expect further consequences of the lower bounds of Theorems 3 and 4 in the context of utilizing structural properties like treewidth in Asp solvers. Recent counting-based solvers also manage to combine treewidth and Sat-based techniques, where treewidth serves as a heuristic within the solver [KorhonenJaervisalo21]. We see potential synergies with our approach of Section 7.

³"Treewidth" yields more than 22,000 results on Google Scholar (queried on March 18th, 2022).