1 Introduction
Disjunctive Answer Set Programming (ASP) [10, 29, 44] is an active field of AI providing a declarative formalism for solving hard computational problems. Thanks to the high sophistication of modern solvers [28], ASP was successfully used in several applications, including product configuration [52], decision support for space shuttle flight controllers [2], team scheduling [49], and bioinformatics [33].
Since the main decision problems of propositional ASP are located at the second level of the polynomial hierarchy [24, 54], the quest for easier fragments is an important research direction that could lead to improvements in ASP systems. An interesting approach to dealing with intractable problems comes from parameterized complexity theory [22] and is based on the fact that many hard problems become polynomial-time tractable if some problem parameter is bounded by a fixed constant. If the order of the polynomial bound on the runtime is independent of the parameter, one speaks of fixed-parameter tractability (FPT). Results in this direction for the ASP domain include [43] (parameter: size of answer sets), [42] (number of cycles), [5] (length of longest cycles), [4] (number of non-Horn rules), and [26] (backdoors). Also related is the parameterized complexity analysis of reasoning under subset-minimal models, see, e.g., [41].
As many prominent representations of logic programs are given in terms of directed graphs (consider, e.g., the dependency graph), it is natural to investigate parameters for ASP that apply to directed graphs. Over the past two decades, various width measures for directed graphs have been introduced [37, 3, 6, 35, 50]. These are typically smaller than, e.g., the popular parameter of treewidth [7]. In particular, all these measures are zero on directed acyclic graphs (DAGs), but the treewidth of DAGs can be arbitrarily high. Moreover, since these measures are based on some notion of "closeness" to acyclicity and the complexity of ASP is closely related to the "cyclicity" of the rules in a program, such measures seem promising for obtaining efficient algorithms for ASP. Prominent applications of directed width measures include the Disjoint Path Problem [37], query evaluation in graph databases [1], and model checking [9].
Another graph parameter for capturing the structural complexity of a graph is clique-width [16, 17, 15]. It applies to directed and undirected graphs, and in its general form (known as signed clique-width) to edge-labeled graphs. It is defined via a graph construction process where only a limited number of vertex labels is available; vertices that share the same label at a certain point of the construction must be treated uniformly in subsequent steps. Constructions can be given by expressions in a graph grammar (so-called cwd-expressions), and the minimal number of labels required for constructing a graph $G$ is the clique-width of $G$. While clique-width is in a certain way orthogonal to other directed width measures, it is more general than treewidth; there are classes of graphs with constant clique-width but arbitrarily high treewidth (e.g., complete graphs). In contrast, graphs with bounded treewidth also have bounded clique-width [12, 15].
By means of a meta-theorem due to Courcelle, Makowsky, and Rotics [18], one can solve any graph problem that can be expressed in Monadic Second-Order Logic with quantification on vertex sets (MSO) in linear time for graphs of bounded clique-width. This result is similar to Courcelle's theorem [13, 14] for graphs of bounded treewidth, which has been used to obtain the FPT result for ASP w.r.t. treewidth [31]. There, the incidence graph of a program is used as the underlying graph structure (i.e., the graph containing a vertex for each atom and rule of the program, with an edge between an atom $a$ and a rule $r$ whenever $a$ appears in $r$). Since the formula given in [31] is in MSO, the FPT result for ASP applies also to signed clique-width.
Clique-width is NP-hard to compute [25], which might be considered an obstacle toward practical applications. However, one can check in polynomial time whether the width of a graph is bounded by a fixed $k$ [47, 40]. (These algorithms involve an additive approximation error that is bounded in terms of $k$.) Recently, SAT solvers have been used to obtain sequences of vertex partitions that correspond to cwd-expressions [34] for a given graph. For some applications, it might not even be necessary to compute the clique-width and the underlying cwd-expression: as mentioned in [27, Section 1.4], applications from the area of verification are supposed to already come with such an expression. Moreover, it might even be possible to partially obtain cwd-expressions during the grounding process of ASP.
This all calls for dedicated algorithms for solving ASP for programs of bounded clique-width. In contrast to treewidth, where the FPT result from [31] has been used for designing [36] and implementing [45] a dynamic programming algorithm, to the best of our knowledge there are no algorithms yet that explicitly exploit the fixed-parameter tractability of ASP on bounded clique-width. In fact, we are not aware of any FPT algorithm for bounded clique-width for a reasoning problem located on the second level of the polynomial hierarchy (except [23] from the area of abstract argumentation).
The main contributions of this paper are as follows. First, we show some negative results for several directed width measures, indicating that the structure of the dependency graph, as well as that of various natural directed versions of the signed incidence graph, does not adequately capture the complexity of evaluating the corresponding program.
Second, concerning signed clique-width, we give a novel dynamic programming algorithm that runs in polynomial time for programs where this parameter is bounded on their incidence graphs. We do so by suitably generalizing the seminal approach of [27] for the SAT problem. We also give a preliminary analysis of how many signs are required in order to obtain FPT.
2 Preliminaries
Graphs.
We use standard graph terminology; see, for instance, the handbook [21]. All our graphs are simple. An undirected graph $G$ is a tuple $(V, E)$, where $V$ (or $V(G)$) is the vertex set and $E$ (or $E(G)$) is the edge set. For a subset $V' \subseteq V$, we denote by $G[V']$ the subgraph of $G$ induced by the vertices in $V'$, i.e., $G[V']$ has vertices $V'$ and edges $\{\{u, v\} \in E : u, v \in V'\}$. We also denote by $G \setminus V'$ the graph $G[V \setminus V']$. Similarly to undirected graphs, a digraph $D$ is a tuple $(V, A)$, where $V$ (or $V(D)$) is the vertex set and $A$ (or $A(D)$) is the arc set. A strongly connected component of a digraph $D$ is a maximal subgraph $Z$ of $D$ that is strongly connected, i.e., $Z$ contains a directed path between each pair of vertices in $V(Z)$. We denote by $S(D)$ the symmetric closure of $D$, i.e., the graph with vertex set $V(D)$ and arc set $A(D) \cup \{(v, u) : (u, v) \in A(D)\}$. Finally, for a directed graph $D$, we denote by $U(D)$ the underlying undirected graph of $D$, i.e., the undirected graph with vertex set $V(D)$ and edge set $\{\{u, v\} : (u, v) \in A(D)\}$.
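These notions translate directly into code. As a quick illustration (the encoding and function names are ours, not from the paper), the following sketch computes the symmetric closure and the strongly connected components of a digraph given as a collection of arcs, using Kosaraju's two-pass algorithm:

```python
from collections import defaultdict

def symmetric_closure(arcs):
    """Add the reverse of every arc."""
    return set(arcs) | {(v, u) for (u, v) in arcs}

def strongly_connected_components(vertices, arcs):
    """Kosaraju's algorithm: DFS finish order on D, then DFS on the transpose."""
    succ, pred = defaultdict(list), defaultdict(list)
    for u, v in arcs:
        succ[u].append(v)
        pred[v].append(u)

    order, seen = [], set()
    def dfs(u):                       # iterative DFS recording finish order
        stack = [(u, iter(succ[u]))]
        seen.add(u)
        while stack:
            node, it = stack[-1]
            for w in it:
                if w not in seen:
                    seen.add(w)
                    stack.append((w, iter(succ[w])))
                    break
            else:
                order.append(node)
                stack.pop()
    for v in vertices:
        if v not in seen:
            dfs(v)

    sccs, assigned = [], set()
    for v in reversed(order):         # decreasing finish time
        if v in assigned:
            continue
        comp, stack = set(), [v]
        while stack:                  # explore the transposed graph
            u = stack.pop()
            if u in assigned:
                continue
            assigned.add(u)
            comp.add(u)
            stack.extend(w for w in pred[u] if w not in assigned)
        sccs.append(comp)
    return sccs
```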
Parameterized Complexity.
In parameterized algorithmics [22], the runtime of an algorithm is studied with respect to a parameter $k \in \mathbb{N}$ and input size $n$. The most favorable class is FPT (fixed-parameter tractable), which contains all problems that can be decided by an algorithm running in time $f(k) \cdot n^{O(1)}$, where $f$ is a computable function. We also call such an algorithm fixed-parameter tractable, or FPT for short. Formally, a parameterized problem is a subset of $\Sigma^* \times \mathbb{N}$, where $\Sigma$ is the input alphabet. Let $L_1$ and $L_2$ be two parameterized problems. A parameterized reduction (or FPT-reduction) from $L_1$ to $L_2$ is a mapping $P$ such that: (1) $(x, k) \in L_1$ iff $P(x, k) \in L_2$; (2) the mapping can be computed by an FPT-algorithm w.r.t. parameter $k$; and (3) there is a computable function $g$ such that $k' \leq g(k)$, where $(x', k') = P(x, k)$. The class W[1] captures parameterized intractability and contains all problems that are FPT-reducible to Partitioned Clique when parameterized by the size of the solution. Showing W[1]-hardness for a problem rules out the existence of an FPT-algorithm under the usual assumption FPT $\neq$ W[1].
Answer Set Programming.
A program $P$ consists of a set $A(P)$ of propositional atoms and a set $R(P)$ of rules of the form
$$a_1 \vee \cdots \vee a_l \leftarrow b_1, \ldots, b_m, \neg c_1, \ldots, \neg c_n,$$
where $a_i, b_j, c_{j'} \in A(P)$ for $1 \leq i \leq l$, $1 \leq j \leq m$, and $1 \leq j' \leq n$. Each rule $r$ consists of a head $H(r) = \{a_1, \ldots, a_l\}$ and a body given by $B^+(r) = \{b_1, \ldots, b_m\}$ and $B^-(r) = \{c_1, \ldots, c_n\}$. A set $M \subseteq A(P)$ is called a model of $r$ if $B^+(r) \subseteq M$ and $B^-(r) \cap M = \emptyset$ imply $H(r) \cap M \neq \emptyset$. We denote the set of models of $r$ by $Mods(r)$, and the models of $P$ are given by $Mods(P) = \bigcap_{r \in R(P)} Mods(r)$.
The reduct $P^M$ of a program $P$ with respect to a set $M$ of atoms is the program with $A(P^M) = A(P)$ and $R(P^M) = \{r^+ : r \in R(P),\ B^-(r) \cap M = \emptyset\}$, where $r^+$ denotes rule $r$ without its negative body, i.e., $H(r^+) = H(r)$, $B^+(r^+) = B^+(r)$, and $B^-(r^+) = \emptyset$. Following [29], $M$ is an answer set of $P$ if $M \in Mods(P)$ and for no $M' \subsetneq M$ we have $M' \in Mods(P^M)$. In what follows, we consider the problem of ASP consistency, i.e., the problem of deciding whether a given program has at least one answer set. As shown by Eiter and Gottlob, this problem is $\Sigma_2^P$-complete [24].
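For concreteness, here is a brute-force sketch of these definitions (the encoding is ours, not from the paper): a rule is a triple of frozensets (head, positive body, negative body), and $M$ is an answer set iff it is a model of $P$ and no proper subset of $M$ is a model of the reduct $P^M$. This only makes the semantics explicit; it enumerates all interpretations and is exponential in the number of atoms.

```python
from itertools import combinations

# A rule is a triple (head, pos, neg) of frozensets of atoms.
def is_model(M, rules):
    """M is a model if, for each rule, pos ⊆ M and neg ∩ M = ∅ imply head ∩ M ≠ ∅."""
    return all(not (pos <= M and not (neg & M)) or bool(head & M)
               for head, pos, neg in rules)

def reduct(rules, M):
    """Keep the rules whose negative body avoids M, with the negative body removed."""
    return [(head, pos, frozenset()) for head, pos, neg in rules if not (neg & M)]

def answer_sets(atoms, rules):
    """Enumerate all answer sets by brute force."""
    subsets = [frozenset(c) for i in range(len(atoms) + 1)
               for c in combinations(sorted(atoms), i)]
    result = []
    for M in subsets:
        if is_model(M, rules) and not any(
                is_model(M2, reduct(rules, M)) for M2 in subsets if M2 < M):
            result.append(M)
    return result
```

For example, the program consisting of the two rules $a \leftarrow \neg b$ and $b \leftarrow \neg a$ has exactly the answer sets $\{a\}$ and $\{b\}$.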
Graphical Representations of ASP.
Let $P$ be a program. The dependency graph of $P$, denoted by $DG(P)$, is the directed graph with vertex set $A(P)$ that contains an arc $(x, y)$ if there is a rule $r \in R(P)$ such that either $x \in H(r)$ and $y \in B^+(r) \cup B^-(r)$, or $x, y \in H(r)$ [26]. Note that there are other notions of dependency graphs used in the literature; most of them, however, are given as subgraphs of $DG(P)$. As we will see later, our definition of dependency graphs allows us to draw immediate conclusions for such other notions.
The incidence graph of $P$, denoted by $I(P)$, is the undirected graph with vertices $A(P) \cup R(P)$ that contains an edge between a rule vertex $r$ and an atom vertex $a$ whenever $a \in H(r) \cup B^+(r) \cup B^-(r)$. The signed incidence graph of $P$, denoted by $SI(P)$, is the graph $I(P)$ where additionally every edge between an atom $a$ and a rule $r$ is annotated with a label from $\{H, B^+, B^-\}$ depending on whether $a$ occurs in $H(r)$, $B^+(r)$, or $B^-(r)$.
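Both graphs are straightforward to build from a rule list. The sketch below reuses our (head, pos, neg) encoding of rules; the exact arc set of the dependency graph follows our reading of the definition above (head-to-body arcs plus arcs between distinct head atoms), so treat it as an assumption.

```python
def dependency_graph(rules):
    """Arcs (x, y) for x in a head and y in the same rule's body,
    plus arcs between distinct head atoms (our reading of the definition)."""
    arcs = set()
    for head, pos, neg in rules:
        for x in head:
            arcs |= {(x, y) for y in pos | neg}
            arcs |= {(x, y) for y in head if y != x}
    return arcs

def signed_incidence_graph(rules):
    """Edges (rule_index, atom, sign) with sign in {'H', 'B+', 'B-'}."""
    edges = set()
    for i, (head, pos, neg) in enumerate(rules):
        edges |= {(i, a, 'H') for a in head}
        edges |= {(i, a, 'B+') for a in pos}
        edges |= {(i, a, 'B-') for a in neg}
    return edges
```

For the single rule $a \vee b \leftarrow c, \neg d$, this yields six dependency arcs and four signed incidence edges.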
3 Directed Width Measures
Since many representations of ASP programs are in terms of directed graphs, it is natural to consider parameters for ASP that are tailor-made for directed graphs. Over the past two decades, various width measures for directed graphs have been introduced that are better suited to directed graphs than treewidth, on which they are based. The most prominent of these are directed treewidth [37], directed pathwidth [3], DAG-width [6], Kelly-width [35], and D-width [50] (see also [20]). Since these width measures are usually smaller on directed graphs than treewidth, it is worth considering them for problems that have already been shown to be fixed-parameter tractable parameterized by treewidth. In particular, all of these measures are zero on directed acyclic graphs (DAGs), but the treewidth of DAGs can be arbitrarily high. Moreover, since these measures are based on some notion of "closeness" to acyclicity and the complexity of ASP is closely related to the "cyclicity" of the logical rules, one would consider such measures promising for obtaining efficient algorithms for ASP.
In this section, we give results for directed width measures when applied to dependency graphs as defined in Section 2. To state our results in the most general manner, we will employ the parameter cycle rank [11]. Since the cycle rank is always greater than or equal to any of the above-mentioned directed width measures [32, 38], any (parameterized) hardness result obtained for cycle rank carries over to the aforementioned width measures for directed graphs.
Definition 1.
Let $D$ be a directed graph. The cycle rank of $D$, denoted by $cr(D)$, is inductively defined as follows: if $D$ is acyclic, then $cr(D) = 0$. Moreover, if $D$ is strongly connected, then $cr(D) = 1 + \min_{v \in V(D)} cr(D \setminus \{v\})$. Otherwise, the cycle rank of $D$ is the maximum cycle rank of any strongly connected component of $D$.
We will also consider a natural "undirected version" of the cycle rank for directed graphs, i.e., we define the undirected cycle rank of a directed graph $D$, denoted by $cr_u(D)$, to be the cycle rank of the symmetric closure $S(D)$. It is also well known (see, e.g., [30]) that the cycle rank of $S(D)$ is equal to the treedepth of $U(D)$, i.e., the underlying undirected graph of $D$, and that the treedepth is always an upper bound for the pathwidth and the treewidth of an undirected graph [8]. Putting these facts together implies that any hardness result obtained for the undirected cycle rank implies hardness for pathwidth, treewidth, and treedepth, as well as for the aforementioned directed width measures. See also Figure 1 for an illustration of how hardness results for the considered width measures propagate.
Finally, we would like to remark that both the cycle rank and the undirected cycle rank are easily seen to be closed under taking subgraphs, i.e., the (undirected) cycle rank of a graph is always greater than or equal to the (undirected) cycle rank of every subgraph of the graph.
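Definition 1 translates directly into a (naive, exponential-time) recursion. The sketch below is self-contained, using a simple quadratic reachability-based SCC computation; names are ours.

```python
def sccs(vertices, arcs):
    """Strongly connected components via mutual reachability (O(n^2) sketch)."""
    succ = {v: set() for v in vertices}
    for u, v in arcs:
        succ[u].add(v)
    def reach(s):
        seen, stack = {s}, [s]
        while stack:
            for w in succ[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen
    r = {v: reach(v) for v in vertices}
    comps, done = [], set()
    for v in vertices:
        if v not in done:
            comp = {u for u in r[v] if v in r[u]}
            done |= comp
            comps.append(comp)
    return comps

def cycle_rank(vertices, arcs):
    """cr(D) = 0 if D is acyclic; 1 + min over vertex deletions if D is
    strongly connected; otherwise the maximum over the SCCs of D."""
    vertices = set(vertices)
    arcs = {(u, v) for (u, v) in arcs if u in vertices and v in vertices}
    comps = sccs(vertices, arcs)
    def trivial(c):                   # single vertex without a self-loop
        v = next(iter(c))
        return len(c) == 1 and (v, v) not in arcs
    if all(trivial(c) for c in comps):
        return 0                      # acyclic
    if len(comps) == 1:               # strongly connected
        return 1 + min(cycle_rank(vertices - {v}, arcs) for v in vertices)
    return max(cycle_rank(c, arcs) for c in comps)
```

For instance, a directed triangle has cycle rank 1, while its symmetric closure has cycle rank 2, matching the undirected cycle rank of the triangle.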
Hardness Results
We show that ASP consistency remains as hard as in the general setting even for instances that have a dependency graph of constant width in terms of any of the directed width measures introduced.
For our hardness results, we employ the reduction given in [24] showing that ASP consistency is $\Sigma_2^P$-hard in general. The reduction is from the validity problem for quantified Boolean formulas (QBFs) of the form $\Phi = \exists X \forall Y\, D_1 \vee \cdots \vee D_n$, where each $D_i$ is a conjunction of at most three literals over the variables in $X$ and $Y$. In the following, we denote the set of all QBFs of the above form by $\mathrm{QBF}_{\exists\forall}$.
Given $\Phi \in \mathrm{QBF}_{\exists\forall}$, a program $P(\Phi)$ is constructed as follows. The atoms of $P(\Phi)$ are $X \cup X' \cup Y \cup Y' \cup \{w\}$, where $X' = \{x' : x \in X\}$ and $Y' = \{y' : y \in Y\}$, and $P(\Phi)$ contains the following rules:
- for every $x \in X$, the rule $x \vee x' \leftarrow$;
- for every $y \in Y$, the rules $y \vee y' \leftarrow$, $y \leftarrow w$, $y' \leftarrow w$, and $w \leftarrow y, y'$;
- for every $i$ with $1 \leq i \leq n$, the rule $w \leftarrow \sigma(l_1^i), \sigma(l_2^i), \sigma(l_3^i)$, where $l_j^i$ (for $1 \leq j \leq 3$) is the $j$-th literal that occurs in $D_i$ (if $D_i$ contains fewer than three literals, the respective parts are omitted) and the function $\sigma$ is defined by setting $\sigma(l)$ to $z$ if $l = z$ for a variable $z$, and to $z'$ if $l = \neg z$;
- the rule $\leftarrow \neg w$ (i.e., with an empty disjunction in the head).
It has been shown [24, Theorem 38] that a formula $\Phi \in \mathrm{QBF}_{\exists\forall}$ is valid iff $P(\Phi)$ has an answer set. As checking validity of such formulas is $\Sigma_2^P$-complete [53], this reduction shows that ASP consistency is $\Sigma_2^P$-hard.
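To make the construction concrete, here is a sketch of the saturation-style reduction from [24] as we read it; the exact rule set and the definition of $\sigma$ are our reconstruction, not a quotation. A term $D_i$ is given as a list of (variable, is_positive) pairs, and primed atoms $z'$ stand for negated variables.

```python
def program_from_qbf(X, Y, terms):
    """Build the rules of P(Phi) for Phi = ∃X ∀Y (D_1 ∨ ... ∨ D_n).
    A rule is a triple (head, pos, neg) of frozensets of atoms."""
    fs = frozenset
    def prime(z):
        return z + "'"
    def sigma(var, positive):          # map a literal to its atom
        return var if positive else prime(var)
    rules = []
    for x in X:                        # x ∨ x' ←
        rules.append((fs({x, prime(x)}), fs(), fs()))
    for y in Y:                        # y ∨ y' ← ; y ← w ; y' ← w ; w ← y, y'
        rules.append((fs({y, prime(y)}), fs(), fs()))
        rules.append((fs({y}), fs({'w'}), fs()))
        rules.append((fs({prime(y)}), fs({'w'}), fs()))
        rules.append((fs({'w'}), fs({y, prime(y)}), fs()))
    for D in terms:                    # w ← σ(l1), σ(l2), σ(l3)
        rules.append((fs({'w'}), fs(sigma(v, p) for v, p in D), fs()))
    rules.append((fs(), fs(), fs({'w'})))   # the constraint ← ¬w
    return rules
```

For $X = \{x\}$, $Y = \{y\}$, and a single term $x \wedge \neg y$, this produces seven rules, including the final constraint.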
Lemma 1.
Let $\Phi$ be a $\mathrm{QBF}_{\exists\forall}$ formula. Then the undirected cycle rank of the dependency graph of $P(\Phi)$ is at most two.
Proof.
Figure 2 illustrates the symmetric closure of the dependency graph of $P(\Phi)$ for a simple formula $\Phi$. As this example illustrates, the only arcs not incident to $w$ are the arcs between $x$ and $x'$ and the arcs between $y$ and $y'$, for $x \in X$ and $y \in Y$. Hence, after removing $w$ from the symmetric closure, every strongly connected component of the remaining graph contains at most two vertices and thus has cycle rank at most one. It follows that the cycle rank of the symmetric closure, and hence the undirected cycle rank of the dependency graph of $P(\Phi)$, is at most two. ∎
Together with our considerations from above, we obtain:
Theorem 1.
ASP consistency is $\Sigma_2^P$-complete even for instances whose dependency graph has width at most two for any of the following width measures: undirected cycle rank, pathwidth, treewidth, treedepth, cycle rank, directed treewidth, directed pathwidth, DAG-width, Kelly-width, and D-width.
Observe that, because the undirected cycle rank is closed under taking subgraphs and we chose the "richest" variant of the dependency graph, the above result carries over to the other notions of dependency graphs of ASP programs considered in the literature.
The above result draws a very negative picture of the complexity of ASP w.r.t. restrictions on the dependency graph. In particular, not even structural restrictions of the dependency graph in terms of the usually very successful parameter treewidth can be employed for ASP. This is in contrast to our second graphical representation of ASP, the incidence graph, for which it is known that ASP is fixed-parameter tractable parameterized by treewidth [36]. It is hence natural to ask whether the same still holds under restrictions provided by one of the directed width measures under consideration.
We first need to discuss how to obtain a directed version of the usually undirected incidence graph. For this, observe that the incidence graph, unlike the signed incidence graph, provides merely an incomplete model of the underlying ASP instance. Namely, it misses the information about how atoms occur in rules, i.e., whether they occur in the head, in the positive body, or in the negative body of a rule. A directed version of the incidence graph should therefore use the additional expressiveness provided by the direction of the arcs to incorporate the information given by the labels of the signed incidence graph. For instance, a natural directed version of the incidence graph could orient the edges depending on whether an atom occurs in the head or in the body of a rule. Clearly, there are many ways to orient the edges, and it is not a priori clear which of these orientations leads to a directed version of the incidence graph that is best suited for an application of the directed width measures. Every orientation should, however, be consistent with the labels of the signed incidence graph, i.e., whenever two atoms are connected to a rule via edges having the same label, their arcs should be oriented in the same way. We call such an orientation of the incidence graph a homogeneous orientation.
Lemma 2.
Let $\Phi$ be a $\mathrm{QBF}_{\exists\forall}$ formula. Then the cycle rank of any homogeneous orientation of the incidence graph of $P(\Phi)$ is at most one.
Proof.
Let $D$ be a homogeneous orientation of the incidence graph of $P(\Phi)$ and let $D' = D \setminus \{w\}$. First observe that in $D'$ every rule vertex is either only incident to edges with label $H$ or only to edges with labels $B^+$ or $B^-$. Hence, as $D$ is a homogeneous orientation, we obtain that every rule vertex of $D'$ is either a source vertex (i.e., having only outgoing arcs) or a sink vertex (i.e., having only incoming arcs). So $D'$ cannot contain a cycle through a rule vertex. Since, moreover, there are no arcs between atom vertices in $D'$, we obtain that $D'$ is acyclic, which shows that the cycle rank of $D$ is at most one. ∎
We can thus state the following result:
Theorem 2.
ASP consistency is $\Sigma_2^P$-complete even for instances whose directed incidence graph has width at most one for any of the following width measures: cycle rank, directed treewidth, directed pathwidth, DAG-width, Kelly-width, and D-width.
4 Clique-Width
The results in [31] imply that bounding the clique-width of the signed incidence graph of a program leads to tractability.
Proposition 1.
For a program $P$ such that the clique-width of its signed incidence graph is bounded by a constant, we can decide in linear time whether $P$ has an answer set.
This result has been established via a formulation of ASP consistency as an MSO formula. Formulating a problem in this logic automatically gives us an FPT algorithm. However, such algorithms are primarily of theoretical interest due to huge constant factors, and for actually solving problems, it is preferable to explicitly design dynamic programming algorithms [19].
Since our main tractability result concerns the clique-width of an edge-labeled graph, namely the signed incidence graph, we introduce clique-width for edge-labeled graphs. The definition also applies to graphs without edge labels by considering all edges to be labeled with the same label. A $k$-graph, for $k \geq 1$, is a graph whose vertices are labeled by integers from $\{1, \ldots, k\}$. Additionally, we also allow the edges of a $k$-graph to be labeled by some arbitrary but finite set of labels (in our case, the labels will correspond to the signs of the signed incidence graph). The labeling of the vertices of a graph $G = (V, E)$ is formally denoted by a function $\ell: V \to \{1, \ldots, k\}$. We consider an arbitrary graph as a $k$-graph with all vertices labeled by $1$. We call the $k$-graph consisting of exactly one vertex $v$ (say, labeled by $i$) an initial $k$-graph and denote it by $i(v)$.
$k$-graphs can be constructed from initial $k$-graphs by means of repeated application of the following three operations:
- Disjoint union (denoted by $\oplus$);
- Relabeling: changing all labels $i$ to $j$ (denoted by $\rho_{i \to j}$);
- Edge insertion: connecting all vertices labeled by $i$ with all vertices labeled by $j$ via an edge with label $s$ (denoted by $\eta^s_{i,j}$), where $i \neq j$; already existing edges are not doubled.
A construction of a $k$-graph $G$ using the above operations can be represented by an algebraic term composed of the operations $\oplus$, $\rho_{i \to j}$, and $\eta^s_{i,j}$, together with the initial $k$-graphs $i(v)$ ($i, j \in \{1, \ldots, k\}$, $i \neq j$, and $v$ a vertex). Such a term is then called a cwd-expression defining $G$. For a cwd-expression $\sigma$, we write $\ell_\sigma$ for the labeling of the graph defined by $\sigma$. A $k$-expression is a cwd-expression in which at most $k$ different labels occur.
As an example, consider the complete bipartite graph $K_{m,n}$ with bipartition $\{a_1, \ldots, a_m\}$ and $\{b_1, \ldots, b_n\}$, and assume that all edges of $K_{m,n}$ are labeled with the label $e$. A cwd-expression of $K_{m,n}$ using at most two labels is given by the following steps: (1) introduce all vertices $a_1, \ldots, a_m$ using label $1$, (2) introduce all vertices $b_1, \ldots, b_n$ using label $2$, (3) take the disjoint union of all these vertices, and (4) add all edges between vertices with label $1$ and vertices with label $2$; i.e., such a cwd-expression is given by $\eta^e_{1,2}(1(a_1) \oplus \cdots \oplus 1(a_m) \oplus 2(b_1) \oplus \cdots \oplus 2(b_n))$. As a second example, consider the complete graph $K_n$ on vertices $v_1, \ldots, v_n$, where all edges are labeled with label $e$. A cwd-expression for $K_n$ using at most two labels can be obtained by the following iterative process: given a cwd-expression $\sigma_{n-1}$ for $K_{n-1}$ in which every vertex is labeled with label $1$, one takes the disjoint union of $\sigma_{n-1}$ and $2(v_n)$ (where $v_n$ is the vertex contained in $K_n$ but not in $K_{n-1}$), adds all edges between vertices with label $1$ and vertices with label $2$, and then relabels label $2$ to label $1$. Formally, the cwd-expression $\sigma_n$ for $K_n$ is given by $\rho_{2 \to 1}(\eta^e_{1,2}(\sigma_{n-1} \oplus 2(v_n)))$.
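A cwd-expression is just a term, so it can be represented as a nested tuple and evaluated bottom-up. The following sketch (our own encoding, not from the paper) returns the vertex labeling and the labeled edge set of the defined graph:

```python
def evaluate(expr):
    """Evaluate a cwd-expression given as a nested tuple.
    Forms: ('v', i, name)       -- initial k-graph i(name)
           ('+', e1, e2)        -- disjoint union
           ('rho', i, j, e)     -- relabel i -> j
           ('eta', i, j, s, e)  -- s-labeled edges between labels i and j
    Returns (labels, edges) with labels: vertex -> label and
    edges: set of (frozenset({u, v}), s) pairs."""
    op = expr[0]
    if op == 'v':
        _, i, name = expr
        return {name: i}, set()
    if op == '+':
        l1, e1 = evaluate(expr[1])
        l2, e2 = evaluate(expr[2])
        assert not set(l1) & set(l2), "vertex names must be disjoint"
        return {**l1, **l2}, e1 | e2
    if op == 'rho':
        _, i, j, sub = expr
        labels, edges = evaluate(sub)
        return {v: (j if l == i else l) for v, l in labels.items()}, edges
    if op == 'eta':
        _, i, j, s, sub = expr
        labels, edges = evaluate(sub)
        new = {(frozenset({u, v}), s)
               for u, lu in labels.items() if lu == i
               for v, lv in labels.items() if lv == j}
        return labels, edges | new
    raise ValueError("unknown operation: %r" % (op,))
```

Evaluating the first example's expression for $K_{2,3}$ yields five labeled vertices and six $e$-labeled edges.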
Definition 2.
The clique-width of a graph $G$, denoted by $cwd(G)$, is the smallest integer $k$ such that $G$ can be defined by a $k$-expression.
Our discussion above thus witnesses that complete (bipartite) graphs have clique-width at most two. Furthermore, cographs also have clique-width two (cographs are exactly the $P_4$-free graphs, i.e., those where, whenever there is a path $(a, b, c, d)$ in the graph, at least one of $\{a, c\}$, $\{a, d\}$, or $\{b, d\}$ is also an edge of the graph), and trees have clique-width three.
We have already introduced the notions of incidence graphs and signed incidence graphs of a program in Section 2. We can thus use cwd-expressions to represent programs.
Example 1.
Let $P$ be a program with two atoms and two rules $r_1$ and $r_2$. Its signed incidence graph can be constructed by the cwd-expression depicted in Figure 3.
Since every $k$-expression of the signed incidence graph can be transformed into a $k$-expression of the unsigned incidence graph (by replacing all operations of the form $\eta^s_{i,j}$ with $\eta_{i,j}$), it holds that $cwd(I(P)) \leq cwd(SI(P))$.
Proposition 2.
Let $P$ be a program. It holds that $cwd(I(P)) \leq cwd(SI(P))$, and there is a class $\mathcal{C}$ of programs such that, for each $P \in \mathcal{C}$, $cwd(I(P))$ is bounded by a constant but $cwd(SI(P))$ is unbounded.
For showing the second statement of the above proposition, consider a program $P_n$ (for some $n$) such that every atom of $P_n$ occurs in every rule of $P_n$. Because the incidence graph of $P_n$ is a complete bipartite graph, it has clique-width two, and moreover it contains a grid as a subgraph. Assume that $P_n$ is defined in such a way that an atom $a$ occurring in a rule $r$ is in the head of $r$ if the edge between $a$ and $r$ occurs in the grid, and otherwise $a$ is in the (positive) body of $r$. Then the clique-width of $SI(P_n)$ is at least the clique-width of the grid, which grows with $n$ [39]. Hence, the class containing $P_n$ for every $n$ shows the second statement of the above proposition.
4.1 Algorithms
In this section, we provide our dynamic programming algorithms for deciding the existence of an answer set. We start with the classical semantics for programs, where it is sufficient to only slightly adapt (a simplified version of) the algorithm for SAT by [27]. For the answer-set semantics, we then extend this algorithm in order to deal with the intrinsically higher complexity of this semantics.
Both algorithms follow the same basic principles and make use of a $k$-expression defining a program $P$ via its signed incidence graph in the following way: we assign certain objects to each subexpression and manipulate these objects in a bottom-up traversal of the parse tree of the expression such that the objects in the root of the parse tree provide the necessary information to decide the problem under consideration. The size of these objects is bounded in terms of $k$ (and independent of the size of $P$), and the number of such objects required is linear in the size of the expression. Most importantly, we will show that these objects can also be efficiently computed for bounded $k$. Thus, we obtain the desired linear running time.
4.1.1 Classical Semantics
Definition 3.
A tuple $t = (T, F, U)$ with $T, F, U \subseteq \{1, \ldots, k\}$ is called a $k$-triple, and we refer to its parts using $T(t)$, $F(t)$, and $U(t)$. The set of all $k$-triples is denoted by $\mathcal{T}_k$.
The intuition of a $k$-triple $t$ is to characterize a set of interpretations $M$ in the following way:
- For each $i \in T(t)$, at least one atom with label $i$ is true in $M$;
- for each $i \in F(t)$, at least one atom with label $i$ is false in $M$;
- for each $i \in U(t)$, there is at least one rule with label $i$ that is "not satisfied yet".
Formally, the “semantics” of a triple with respect to a given program is given as follows.
Definition 4.
Let $t$ be a $k$-triple and $P$ be a program whose signed incidence graph is labeled by $\ell$. A $t$-interpretation of $P$ is a set $M \subseteq A(P)$ that satisfies (1) $T(t) = \{\ell(a) : a \in M\}$, (2) $F(t) = \{\ell(a) : a \in A(P) \setminus M\}$, and (3) $U(t) = \{\ell(r) : r \in R(P),\ M \notin Mods(r)\}$.
Example 2.
Consider again program $P$ from Example 1 and the expression from Figure 3. Let $t$ be the triple given above. Observe that $M$ is a $t$-interpretation of $P$: it sets one atom to true and the other to false, so the conditions on $T(t)$ and $F(t)$ hold as required; the rule $r_1$ is not satisfied by $M$, and indeed its label occurs in $U(t)$. We can easily verify that no other subset of $A(P)$ is a $t$-interpretation of $P$: each $t$-interpretation of $P$ must set the first atom to true and the second to false, as these are the only atoms carrying the respective labels.
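Under our reading of Definition 4 (the triple records exactly the labels of true atoms, false atoms, and unsatisfied rules), the triple induced by an interpretation can be computed directly. The sketch below reuses our (head, pos, neg) rule encoding; the equality-based conditions are our assumption.

```python
def triple_of(M, atoms, rules, atom_label, rule_label):
    """Compute the k-triple (T, F, U) that interpretation M induces,
    where atom_label / rule_label give the current labels of the
    signed incidence graph's vertices."""
    def satisfied(head, pos, neg):
        return not (pos <= M and not (neg & M)) or bool(head & M)
    T = {atom_label[a] for a in M}                 # labels of true atoms
    F = {atom_label[a] for a in set(atoms) - M}    # labels of false atoms
    U = {rule_label[i] for i, r in enumerate(rules) if not satisfied(*r)}
    return T, F, U
```

For two atoms labeled 1 and 2 and a single constraint-like rule labeled 3 that the chosen interpretation violates, the induced triple is $(\{1\}, \{2\}, \{3\})$.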
We use the following notation for triples , , and set .


where for ,

if ; otherwise.
Using these abbreviations, we define our dynamic programming algorithm: we assign to each subexpression of a given $k$-expression a set of $k$-triples by recursively defining a function which associates to each subexpression a set of $k$-triples as follows.
Definition 5.
The function is recursively defined along the structure of expressions as follows.
Example 3.
Consider again program from Example 1 and the expression depicted in Figure 3. To break down the structure of , let be subexpressions of such that , , , , , and . We get and . These sets are then combined to . The program defined by consists of atom and rule , but does not occur in yet. Accordingly, the triple models the situation where is set to true, which does not satisfy (since the head and body of are still empty), hence the label of is in the last component; the triple represents being set to false, which does not satisfy either. Next, causes all atoms with label (i.e., just ) to be inserted into the head of all rules with label (i.e., just ), and we get . We obtain the first element from by removing the label from because . The idea is that the heads of all rules labeled with now contain all atoms labeled with , so these rules become satisfied by every interpretation that sets some atom labeled with to true. Next, adds the rule with label and we get . The edge added by adds all atoms with label (i.e., just ) into the positive body of all rules with label (i.e., just ), which results in . Observe that the last component of the second element no longer contains , i.e., setting to false makes true. Now the label is renamed to , and we get . Note that now and are no longer distinguishable since they now share the same label. Hence all operations that add edges to will also add edges to and vice versa. In , atom is added with label and we get four triples in : From in we obtain and , and from in we get and . In , we add a negative edge from all atoms labeled with (i.e., just ) to all rules labeled with (both and ). From in we now get , from we get , and the triples and from occur unmodified in . As we will prove shortly, for each triple in , there is a interpretation of . So if there is a triple in such that , then has a classical model due to the definition of . For instance, has a interpretation , which is obviously a model of .
We now prove correctness of our algorithm:
Lemma 3.
Let $P$ be a program and $\sigma$ be a $k$-expression for $SI(P)$. For every set $M \subseteq A(P)$, there is a $k$-triple $t$ in the set assigned to $\sigma$ by Definition 5 such that $M$ is a $t$-interpretation of $P$, and for every $k$-triple $t$ in that set there is a set $M \subseteq A(P)$ such that $M$ is a $t$-interpretation of $P$.
Proof.
We prove the first statement by induction on the structure of a $k$-expression $\sigma$ defining $SI(P)$. Let $\sigma'$ be a subexpression of $\sigma$, let $P_{\sigma'}$ denote the program defined by $\sigma'$, and let $M' = M \cap A(P_{\sigma'})$.
If $\sigma' = i(r)$ for a rule $r$, then $A(P_{\sigma'}) = \emptyset$, so $M' = \emptyset$. Moreover, $P_{\sigma'}$ consists of a single unsatisfiable rule (its head and body are empty). Hence $M'$ is a $t$-interpretation of $P_{\sigma'}$ for the triple $t = (\emptyset, \emptyset, \{i\})$.
If $\sigma' = i(a)$ for an atom $a$, then $A(P_{\sigma'}) = \{a\}$ and $R(P_{\sigma'}) = \emptyset$. If $a \in M'$, then $M'$ is a $t$-interpretation of $P_{\sigma'}$ for the triple $t = (\{i\}, \emptyset, \emptyset)$. Otherwise $M' = \emptyset$, and $M'$ is a $t$-interpretation of $P_{\sigma'}$ for the triple $t = (\emptyset, \{i\}, \emptyset)$.
If , let , and . By definition of , it holds that , and . By induction hypothesis, is a interpretation of some triple in . By definition of , there is a triple in with , and . This allows us to easily verify that is a interpretation of by checking the conditions in Definition 4.
If , then and, by induction hypothesis, is a interpretation of some triple in . By definition of , the triple in is the result of replacing by in each of , and . Hence we can easily verify that satisfies all conditions for being a interpretation of .
If , for , then . Hence, by induction hypothesis, is a interpretation of some triple in . We use to denote the triple , which is in . Since , and , satisfies the first two conditions for being a interpretation of . It remains to check the third condition.
For every it holds that if and only if . By induction hypothesis, the latter is the case if and only if there is a rule such that and . This is equivalent to the existence of a rule such that , , , and , since only differs from by additional edges that are not incident to due to .
It remains to check that if and only if there is a rule such that and . First suppose toward a contradiction that while is a model of every rule such that . Since , also and by induction hypothesis there is a rule such that and is not a model of . There is a corresponding rule , for which , , and hold. Since is a model of but not of , contains some atom labeled with (by both and ) because all atoms in and are labeled with . By induction hypothesis, this implies , which leads to the contradiction by construction of .
Finally, suppose toward a contradiction that and there is a rule such that and . The rule corresponding to in with is not satisfied by either, since , and . By induction hypothesis, this entails . Due to , it holds that , so there is an with . Due to the new edge from to , either or . This yields the contradiction that is a model of .
The case is symmetric.
The proof of the second statement is similar. ∎
We can now state our FPT result for classical models:
Theorem 3.
Let $k$ be an integer and $P$ be a program. Given a $k$-expression for the signed incidence graph of $P$, we can decide in linear time whether $P$ has a model.
Proof.
Let $k$ be a constant, $P$ be a program, and $\sigma$ be a $k$-expression of $SI(P)$. We show that there is a model of $P$ if and only if the set assigned to $\sigma$ by Definition 5 contains a $k$-triple $t$ with $U(t) = \emptyset$: if $P$ has a model $M$, then $M$ is a $t$-interpretation of $P$ for some $k$-triple $t$ in this set, by Lemma 3, and $U(t) = \emptyset$ by Definition 4. Conversely, if this set contains a $k$-triple $t$ with $U(t) = \emptyset$, then there is a $t$-interpretation $M$ of $P$, by Lemma 3, and $U(t) = \emptyset$ implies that $M$ is a model of $P$ by Definition 4. Finally, it is easy to see that the sets of triples can be computed in linear time. ∎
4.1.2 Answer-Set Semantics
For full disjunctive ASP we need a more involved data structure.
Definition 6.
A pair $(t, C)$, where $t$ is a $k$-triple and $C$ is a set of $k$-triples, is called a $k$-pair. The set of all $k$-pairs is denoted by $\mathcal{P}_k$.
Given a $k$-pair $(t, C)$, the purpose of $t$ is, as for the classical semantics, to represent interpretations $M$ (that in the end correspond to models). Every triple in $C$ represents sets $M'$ of atoms such that $M' \subsetneq M$. If, in the end, there is such a set $M'$ that still satisfies every rule in the reduct w.r.t. $M$, then we conclude that $M$ is not an answer set.
Definition 7.
Let $(t, C)$ be a $k$-pair and let $P$ be a program whose signed incidence graph is labeled by $\ell$. A $(t, C)$-interpretation of $P$ is a set $M \subseteq A(P)$ such that