DLV - A System for Declarative Problem Solving

by Thomas Eiter et al.
TU Wien

DLV is an efficient logic programming and non-monotonic reasoning (LPNMR) system with advanced knowledge representation mechanisms and interfaces to classic relational database systems. Its core language is disjunctive datalog (function-free disjunctive logic programming) under the answer set semantics with integrity constraints, both default and strong (or explicit) negation, and queries. Integer arithmetic and various built-in predicates are also supported. In addition, DLV has several frontends, namely brave and cautious reasoning, abductive diagnosis, consistency-based diagnosis, a subset of SQL3, planning with action languages, and logic programming with inheritance.





1 General Information

Currently DLV is available in binary form for various platforms (sparc-sun-solaris2.6, alpha-dec-osf4.0, i386-linux-elf-gnulibc2, i386-pc-solaris2.7, and i386-unknown-freebsdelf3.3 as of this writing) and it is easy to build DLV on further platforms.

Including all frontends, DLV consists of around 25,000 lines of ISO C++ code plus several scanners and parsers written in lex/flex and yacc/bison, respectively. DLV is being developed using GNU tools (GCC, flex, and bison) and is therefore portable to most Unix-like platforms. Additionally, the system has been successfully built with proprietary compilers such as those of Compaq and SCO.

For up-to-date information on the system and a full manual please refer to the project homepage [Faber & Pfeifer since 1996], where you can also download DLV.

2 Description of the System

2.1 Kernel Language

The kernel language of DLV is disjunctive datalog extended with strong negation under the answer set semantics [Eiter, Gottlob, & Mannila1997, Gelfond & Lifschitz1991].


Strings starting with uppercase letters denote variables, while those starting with lowercase letters denote constants. A term is either a variable or a constant. An atom is an expression p(t_1, ..., t_n), where p is a predicate of arity n and t_1, ..., t_n are terms. A literal is either an atom a (in this case, it is positive) or a negated atom ¬a (in this case, it is negative).

Given a literal l, its complementary literal is defined as ¬a if l = a and as a if l = ¬a. A set L of literals is said to be consistent if, for every literal l in L, its complementary literal is not contained in L.

In addition to literals as defined above, DLV also supports built-ins, like #int, #succ, comparison predicates such as <, and the arithmetic operators + and *. For details, we refer to our full manual [Faber & Pfeifer since 1996].

A disjunctive rule r (rule, for short) is a formula

  a_1 v ... v a_n :- b_1, ..., b_k, not b_{k+1}, ..., not b_m.

where a_1, ..., a_n, b_1, ..., b_m are literals, n >= 0, m >= k >= 0, and "not" represents negation-as-failure (or default negation). The disjunction a_1 v ... v a_n is the head of r, while the conjunction b_1, ..., b_k, not b_{k+1}, ..., not b_m is the body of r. A rule without head literals (i.e. n = 0) is usually referred to as an integrity constraint. If the body is empty (i.e. k = m = 0), we usually omit the ":-" sign.

We denote by H(r) = {a_1, ..., a_n} the set of literals in the head, and by B(r) = B+(r) ∪ B-(r) the set of the body literals, where B+(r) = {b_1, ..., b_k} and B-(r) = {b_{k+1}, ..., b_m} are the sets of positive and negative body literals, respectively.

A disjunctive datalog program P is a finite set of rules.
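As a small illustrative example (ours, not taken from the benchmark suite), the following program combines a fact, a disjunctive rule, default negation, and an integrity constraint:

```
% A fact (empty body, so the ":-" sign is omitted).
node(a).

% A disjunctive rule: every node is selected or unselected.
selected(X) v unselected(X) :- node(X).

% Default negation: a node is fresh unless it is known to be visited.
fresh(X) :- node(X), not visited(X).

% An integrity constraint (empty head): selected nodes must be fresh.
:- selected(X), not fresh(X).
```

Here the predicate names are of course arbitrary; only the syntactic forms matter.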


DLV implements the consistent answer set semantics as originally defined in [Gelfond & Lifschitz1991]. (Note that we only consider consistent answer sets, while in [Lifschitz1996] also the inconsistent set of all possible literals can be a valid answer set.)

Before defining this semantics, we need a few prerequisites. As usual, given a program P, U_P (the Herbrand Universe) is the set of all constants appearing in P, and B_P (the Herbrand Base) is the set of all possible combinations of predicate symbols appearing in P with constants of U_P, possibly preceded by ¬; in other words, the set of ground literals constructible from the symbols in P.

Given a rule r, Ground(r) denotes the set of rules obtained by applying all possible substitutions of the variables in r by elements of U_P; Ground(r) is also called the Ground Instantiation of r. In a similar way, given a program P, Ground(P) denotes the union of the sets Ground(r) for all r in P. For programs P not containing variables, P = Ground(P) holds.

For every program P, we define its answer sets using its ground instantiation Ground(P) in two steps, following [Lifschitz1996]: first we define the answer sets of positive programs, then we give a reduction of general programs to positive ones and use this reduction to define answer sets of general programs.

An interpretation I is a set of literals. A consistent interpretation I ⊆ B_P is called closed under a positive (i.e. not-free) program P if, for every r ∈ Ground(P), H(r) ∩ I ≠ ∅ whenever B(r) ⊆ I. I is an answer set for a positive program P if it is minimal w.r.t. set inclusion and closed under P.

The reduct or Gelfond-Lifschitz transform of a general ground program P w.r.t. a set X ⊆ B_P is the positive ground program P^X obtained from P by deleting all rules r for which B-(r) ∩ X ≠ ∅ holds, and deleting the negative body from the remaining rules.

An answer set of a general program P is a set X ⊆ B_P such that X is an answer set of Ground(P)^X.
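To illustrate these definitions on a tiny example of our own, consider the ground program

```
a v b.
c :- a, not b.
```

For X = {a, c}, the reduct consists of "a v b." and "c :- a."; {a, c} is minimal and closed under this reduct, so {a, c} is an answer set. For X = {b}, the reduct is just "a v b." (the second rule is deleted because b ∈ X), and {b} is minimal and closed, so {b} is an answer set as well. For X = {a}, however, the reduct again contains "c :- a.", under which {a} is not closed, so {a} is not an answer set.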

2.2 Application Frontends

In addition to its kernel language, DLV provides a number of application frontends that show the suitability of our formalism for solving various problems from the areas of Artificial Intelligence, Knowledge Representation, and (Deductive) Databases.

  • The Brave and Cautious Frontends are simple extensions of the normal mode, where in addition to a disjunctive datalog program the user specifies a conjunction of literals (a query), and DLV checks whether this query holds in some (brave reasoning) or in all (cautious reasoning) answer sets of the program.

  • The Diagnoses Frontend implements both abductive diagnosis [Poole1989, Console, Theseider Dupré, & Torasso1991], adapted to the semantics of logic programming [Kakas, Kowalski, & Toni1993, Eiter, Gottlob, & Leone1997], and consistency-based diagnosis [Reiter1987, de Kleer, Mackworth, & Reiter1992] and supports general diagnosis as well as single-failure and subset-minimal diagnosis.

  • The SQL3 Frontend is a prototype implementation of the query interface of the SQL3 standard, which was approved by ISO last year.

  • The Inheritance Frontend extends the kernel language of DLV with objects, and inheritance [Buccafurri, Faber, & Leone1999]. This extension allows us to naturally represent inheritance and default reasoning with (multi-level) exceptions, providing a natural solution also to the frame problem.

  • Finally, the Planning Frontend implements a new logic-based planning language, called K, which is well suited for planning under incomplete knowledge.

2.3 Architecture

An outline of the general architecture of our system is depicted in Fig. 1.

The heart of the system is the DLV core. Wrapped around this basic block are frontend preprocessors and output filters (which also do some post-processing for frontends). The system takes input data from the user (mostly via the command line) and from the file system and/or database systems.

Upon startup, input is possibly translated by a frontend. Together with relational database tables, provided by an Oracle database, an Objectivity database, or ASCII text files, the Intelligent Grounding Module efficiently generates a subset of the grounded input program that has exactly the same answer sets as the full program, but is in general much smaller.

After that, the Model Generator is started. It generates one answer set candidate at a time and verifies it using the Model Checker. Upon success, filtered output is generated for the answer set. This process is iterated until either no more answer sets exist or an explicitly specified number of answer sets has been computed.

Not shown in Fig. 1 are various additional data structures, such as dependency graphs.

Figure 1: Overall architecture of DLV.

3 Applying the System

3.1 Methodology

The core language of DLV can be used to encode problems in a highly declarative fashion, following a “Guess&Check” paradigm. We will first describe this paradigm in an abstract way and then provide some concrete examples. We will see that several problems, including problems of high computational complexity, can be solved naturally in DLV using this declarative programming technique. The power of disjunctive rules allows one to express even problems that are harder than NP, uniformly over varying instances of the problem, using a fixed program.

Given a set F of facts that specify an instance I of some problem Π, a Guess&Check program P for Π consists of the following two parts:

Guessing Part

The guessing part G of P defines the search space, in such a way that the answer sets of G ∪ F represent “solution candidates” for I.

Checking Part

The checking part C of P tests whether a solution candidate is in fact a solution, such that the answer sets of G ∪ C ∪ F represent the solutions for the problem instance I.

In general, we may allow both the guessing and the checking part to be arbitrary collections of rules in the program, and it may depend on the complexity of the problem which kinds of rules are needed to realize these parts (in particular, the checking part); we defer this discussion to a later point in this section.

Without imposing restrictions on which rules the two parts may contain, in the extremal case we might declare the full program to be the guessing part and leave the checking part empty, i.e., all checking is moved to the guessing part such that solution candidates are always solutions. This is certainly not intended. However, in general the generation of the search space may be guarded by some rules, and such rules might be considered more appropriately placed in the guessing part than in the checking part. We do not pursue this issue any further here, and thus also refrain from giving a formal definition of how to separate a program into a guessing and a checking part.

For solving a number of problems, however, it is possible to design a natural Guess&Check program in which the two parts are clearly identifiable and have a simple structure:

  • The guessing part consists of a disjunctive rule which “guesses” a solution candidate.

  • The checking part consists of integrity constraints which check the admissibility of the candidate, possibly using auxiliary predicates which are defined by normal stratified rules.

In a sense, the disjunctive rule defines the search space in which rule applications are branching points, while the integrity constraints prune illegal branches.
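The 3-colorability problem (3COL, one of the benchmarks in Section 4) is a textbook instance of this simple structure. A minimal sketch, assuming the input graph is given by node(_) and edge(_,_) facts:

```
% Guess: assign each node one of three colors (disjunctive rule).
col(X,red) v col(X,green) v col(X,blue) :- node(X).

% Check: adjacent nodes must not share a color (integrity constraint).
:- edge(X,Y), col(X,C), col(Y,C).
```

Each answer set of this program, together with the graph facts, corresponds to one legal 3-coloring.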

As a first example, let us consider Hamiltonian Path, a classical NP-complete problem from graph theory.


Given a directed graph G = (V, E) and a vertex a of this graph, does there exist a path in G starting at a and passing through each vertex in V exactly once?

Suppose that the graph G is specified by means of predicates node (unary) and arc (binary), and the starting node a is specified by the predicate start (unary). Then, the following Guess&Check program solves the Hamiltonian Path problem.
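A sketch of the standard encoding, assuming the graph is given by node(_) and arc(_,_) facts and the starting node by a start(_) fact:

```
% Guess a subset of the given arcs.
inPath(X,Y) v outPath(X,Y) :- arc(X,Y).

% Check: no two selected arcs may start at the same node ...
:- inPath(X,Y), inPath(X,Y1), Y != Y1.
% ... and no two selected arcs may end in the same node.
:- inPath(X,Y), inPath(X1,Y), X != X1.

% Reachability from the starting node via the selected arcs.
reached(X) :- start(X).
reached(X) :- reached(Y), inPath(Y,X).

% Check: every node must be reached from the starting node.
:- node(X), not reached(X).
```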

The first rule guesses a subset of all given arcs, while the rest of the program checks whether it forms a Hamiltonian Path. Here, the checking part uses an auxiliary predicate reached, which is defined using positive recursion.

In particular, the first two constraints check whether the set of arcs selected by inPath meets the following requirements, which any Hamiltonian Path must satisfy: there must not be two arcs starting at the same node, and there must not be two arcs ending in the same node.

The two rules following the constraints define reachability from the starting node with respect to the selected arc set. This is used in the third constraint, which enforces that all nodes in the graph are reached from the starting node in the subgraph induced by the selected arcs. This constraint also ensures that this subgraph is connected.

It is easy to see that a selected arc set which satisfies all three constraints must contain the edges of a path in G that starts at node a and passes through distinct nodes until no further node is left, or until it arrives at the starting node again. In the latter case, the path is a Hamiltonian Cycle, and by dropping the last edge we obtain a Hamiltonian Path.

Thus, given a set of facts for node, arc, and start which specify the problem input, the program has an answer set if and only if the input graph has a Hamiltonian Path.

If we want to compute a Hamiltonian Path rather than only answer that such a path exists, we can strip off the last edge of a Hamiltonian Cycle by adding a further constraint :- start(Y), inPath(_,Y). to the program. Then, the set of selected edges in an answer set constitutes a Hamiltonian Path starting at a.

It is worth noting that DLV is able to solve problems located at the second level of the polynomial hierarchy; indeed, such problems can also be encoded by the Guess&Check technique, as in the following example, called Strategic Companies.


Given a collection c_1, ..., c_m of companies owned by a holding, and information about company control, compute the set of the strategic companies in the holding.

To briefly explain what “strategic” means in this context, imagine that each company produces some goods, and that several companies jointly may have control over another company. Now, some companies should be sold, under the constraints that all goods can still be produced, and that no company is sold which would still be controlled by the holding after the transaction. A company is strategic if it belongs to a strategic set, which is a minimal set of companies satisfying these constraints.

This problem is Σ^p_2-hard in general [Cadoli, Eiter, & Gottlob1997]; reformulated as a decision problem (“Given a further company c in the input, is c strategic?”), it is Σ^p_2-complete. To our knowledge, it is the only KR problem from the business domain of this complexity that has been considered so far.

In the following encoding, strategic(C) means that company C is strategic, company(C) that C is a company, produced_by(P, C1, C2) that product P is produced by companies C1 and C2, and controlled_by(C, C1, C2, C3) that C is jointly controlled by C1, C2, and C3. We have adopted the setting from [Cadoli, Eiter, & Gottlob1997], where each product is produced by at most two companies and each company is jointly controlled by at most three other companies.

Given the facts for company, produced_by, and controlled_by, the answer sets of the following program correspond one-to-one to the strategic sets of the holding. Thus, the set of all strategic companies is given by the set of all companies c for which the fact strategic(c) is true under brave reasoning.
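A sketch of this encoding (the classical two-rule program from the literature, using the predicates strategic, produced_by, and controlled_by):

```
% Guess: for each product, at least one of its (at most two)
% producers must remain strategic.
strategic(C1) v strategic(C2) :- produced_by(P, C1, C2).

% Check: a company jointly controlled by three strategic companies
% must itself be strategic.
strategic(C) :- controlled_by(C, C1, C2, C3),
                strategic(C1), strategic(C2), strategic(C3).
```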

Intuitively, the guessing part consists of the disjunctive rule, and the checking part consists of the normal rule. This program exploits the minimization inherent to the answer set semantics to check whether a candidate set of companies that produces all goods and obeys company control is also minimal with respect to this property.

The guessing rule intuitively selects one of the companies C1 and C2 that produce some item P, which is described by produced_by(P, C1, C2). If there were no company control information, minimality of answer sets would naturally ensure that the answer sets correspond to the strategic sets; no further checking would be needed. However, when such control information, given by facts controlled_by(C, C1, C2, C3), is available, the normal rule checks that no company is sold that would be controlled by other companies in the strategic set, by simply requiring that this company must be strategic as well. The minimality of the strategic sets is automatically ensured by the minimality of answer sets, and the answer sets of the program correspond one-to-one to the strategic sets of the given instance.

It is interesting to note that the checking rule interferes with the guessing rule: applying the former may spoil the minimal answer set generated by the latter. Such feedback from the checking part to the guessing part is in fact needed to solve Σ^p_2-hard problems.

In general, if a program encodes a problem that is Σ^p_2-complete, then the checking part must contain disjunctive rules, unless it has feedback to the guessing part.

Finally, note that STRATCOMP cannot be expressed by a fixed normal logic program uniformly over all collections of facts produced_by and controlled_by (unless the polynomial hierarchy collapses, an unlikely event).

3.2 Specifics

DLV is the result of putting theoretical results into practice. It is the first system supporting answer set semantics for full disjunctive logic programs with negation, integrity constraints, queries, and arithmetic built-ins.

The semantics of the language is both precise and intuitive, which provides a clean, declarative, and easy-to-use framework for knowledge representation and reasoning.

The availability of a system supporting such an expressive language in an efficient way is stimulating AI and database people to use logic-based systems for the development of their applications.

Furthermore, it is possible to formulate translations from many other formalisms to DLV’s core language, such that the answer sets of the translated programs correspond to the solutions in the other formalism. DLV incorporates some of these translations as frontends. Currently frontends for diagnostic reasoning, SQL3, planning with action languages, and logic programming with inheritance exist.

We believe that DLV can be used in this way – as a core engine – for many problem domains. The advantage of this approach is that people with different backgrounds do not have to be aware of DLV’s syntax and semantics.

3.3 Users and Usability

Prospective users of the DLV core system should have a basic knowledge of logics for knowledge representation. As explained in the previous section, if a frontend for a particular language exists, a user need not even know about logics, but of course knowledge about the frontend language is still required.

Currently, the DLV system is used for educational purposes in courses on Databases and on AI, both in European and American universities. It is also used by several researchers for knowledge representation, for verifying theoretical work, and for performance comparisons.

Furthermore, DLV is currently under evaluation at CERN, the European Laboratory for Particle Physics located near Geneva in Switzerland and France, for an advanced deductive database application that involves complex knowledge manipulations on large-sized databases.

4 Evaluating the System

4.1 Benchmarks

It is well known in the area of benchmarking that the only really useful benchmark is one where a (prospective) user of a system tests that system with exactly the kind of application he or she is going to use.

Nevertheless, artificial benchmarks do have some merits in developing and improving the performance of systems. Moreover, they are also very useful in evaluating the progress of various implementations, so there has been some work in that area, too, and it seems that DLV compares favorably to similar systems [Eiter et al.1998, Janhunen et al.2000].

DLV can also compete with database systems for the development of some deductive database applications. Indeed, DLV is being considered by CERN for such an application, which could not be handled by other systems.

4.2 Problem Size

As far as data structures are concerned, DLV does not have any real limit on the problem size it can handle. For example, we have verified current versions on programs with 1 million literals in 1 million rules.

Another crucial factor for hard inputs are suitable heuristics. Here we have already developed an interesting approach [Faber, Leone, & Pfeifer1999] and are actively working on various new approaches.

To give an idea of the sizes of the problems that DLV can currently handle, and of the problems solvable by DLV in the near future, we report below the execution times for a number of hard benchmark instances, showing also the improvements achieved over the last one and a half years.

Problem       Jul. ’98   Feb. ’99   Jun. ’99   Nov. ’99
3COL          1000s      26.4s      2.1s       0.5s
HPATH         1000s      1000s      10.8s      0.3s
PRIME         —          21.2s      10.2s      0.8s
STRATCOMP     54.6s      8.0s       6.9s       5.4s
BW P4         1000s      1000s      32.4s      6.3s
BW Split P4   1000s      1000s      10.5s      2.3s

3COL: find one coloring of a random graph with 150 nodes and 350 edges.
HPATH: find one Hamiltonian Path in a random graph with 25 nodes and 120 arcs.
PRIME: find all prime implicants of a random 3CNF with 546 clauses and 127 variables.
STRATCOMP: find all strategic sets a randomly chosen company occurs in (71 companies and 213 products).
BW P4: find one blocks-world plan of length 9 involving 11 blocks.
BW Split P4: the same planning problem, using a linear (“split”) encoding.


  • [Apt, Blair, & Walker1988] Apt, K. R.; Blair, H. A.; and Walker, A. 1988. Towards a theory of declarative knowledge. In Minker, J., ed., Foundations of Deductive Databases and Logic Programming. Los Altos, California: Morgan Kaufmann Publishers, Inc. 89–148.
  • [Buccafurri, Faber, & Leone1999] Buccafurri, F.; Faber, W.; and Leone, N. 1999. Disjunctive Logic Programs with Inheritance. In Proceedings of the 16th International Conference on Logic Programming (ICLP ’99).
  • [Cadoli, Eiter, & Gottlob1997] Cadoli, M.; Eiter, T.; and Gottlob, G. 1997. Default Logic as a Query Language. IEEE Transactions on Knowledge and Data Engineering 9(3):448–463.
  • [Console, Theseider Dupré, & Torasso1991] Console, L.; Theseider Dupré, D.; and Torasso, P. 1991. On the Relationship Between Abduction and Deduction. Journal of Logic and Computation 1(5):661–690.
  • [de Kleer, Mackworth, & Reiter1992] de Kleer, J.; Mackworth, A. K.; and Reiter, R. 1992. Characterizing diagnoses and systems. Artificial Intelligence 56(2–3):197–222.
  • [Eiter et al.1998] Eiter, T.; Leone, N.; Mateis, C.; Pfeifer, G.; and Scarcello, F. 1998. The KR System dlv: Progress Report, Comparisons and Benchmarks. In Cohn, A. G.; Schubert, L.; and Shapiro, S. C., eds., Proceedings Sixth International Conference on Principles of Knowledge Representation and Reasoning (KR’98), 406–417. Morgan Kaufmann Publishers.
  • [Eiter, Gottlob, & Leone1997] Eiter, T.; Gottlob, G.; and Leone, N. 1997. Abduction from Logic Programs: Semantics and Complexity. Theoretical Computer Science 189(1–2):129–177.
  • [Eiter, Gottlob, & Mannila1997] Eiter, T.; Gottlob, G.; and Mannila, H. 1997. Disjunctive Datalog. ACM Transactions on Database Systems 22(3):315–363.
  • [Faber & Pfeifer since 1996] Faber, W., and Pfeifer, G. since 1996. dlv homepage. <URL:http://www.dbai.tuwien.ac.at/proj/dlv/>.
  • [Faber, Leone, & Pfeifer1999] Faber, W.; Leone, N.; and Pfeifer, G. 1999. Pushing Goal Derivation in DLP Computations. In Proceedings of the 5th International Conference on Logic Programming and Nonmonotonic Reasoning (LPNMR’99), Lecture Notes in AI (LNAI), 177–191. El Paso, Texas, USA: Springer Verlag.
  • [Gelfond & Lifschitz1991] Gelfond, M., and Lifschitz, V. 1991. Classical Negation in Logic Programs and Disjunctive Databases. New Generation Computing 9:365–385.
  • [Janhunen et al.2000] Janhunen, T.; Niemelä, I.; Simons, P.; and You, J.-H. 2000. Partiality and disjunctions in stable model semantics. In Proceedings of the Seventh International Conference on Principles of Knowledge Representation and Reasoning (KR2000).
  • [Kakas, Kowalski, & Toni1993] Kakas, A.; Kowalski, R.; and Toni, F. 1993. Abductive Logic Programming. Journal of Logic and Computation.
  • [Lifschitz1996] Lifschitz, V. 1996. Foundations of logic programming. In Brewka, G., ed., Principles of Knowledge Representation. Stanford: CSLI Publications. 69–127.
  • [Poole1989] Poole, D. 1989. Explanation and Prediction: An Architecture for Default and Abductive Reasoning. Computational Intelligence 5(1):97–110.
  • [Reiter1987] Reiter, R. 1987. A Theory of Diagnosis From First Principles. Artificial Intelligence 32:57–95.