The objective of the Semantic Web is to make information available in a form that is understandable and automatically manageable by machines. To realize this vision, the W3C supported the development of a family of knowledge representation formalisms of increasing complexity for defining ontologies, called Web Ontology Languages (OWL), based on Description Logics (DLs). To fully support the development of the Semantic Web, efficient DL reasoners are essential. The most common approach adopted by reasoners is the tableau algorithm [Horrocks and Sattler (2007)], implemented in a procedural language. This algorithm applies a set of expansion rules to a tableau, a representation of the assertional part of the KB. However, some of these rules are non-deterministic, requiring the implementation of a search strategy in an or-branching search space. Pellet [Sirin et al. (2007)], for instance, is a reasoner written in Java.
Modeling real-world domains requires dealing with information that is incomplete or that comes from sources with different trust levels. This motivated the need for managing uncertainty in the Semantic Web, and led to many proposals for combining probability theory with OWL languages, or with the underlying DLs, such as P- [Lukasiewicz (2008)], [Ceylan and Peñaloza (2015)], Prob- [Lutz and Schröder (2010)], PR-OWL [Carvalho et al. (2010)], and those proposed by Jung and Lutz (2012), Heinsohn (1994), Jaeger (1994), Koller et al. (1997), and Ding and Peng (2004).
In [Bellodi et al. (2011), Riguzzi et al. (2015), Zese (2017)] we introduced DISPONTE, a probabilistic semantics for DLs. DISPONTE follows the distribution semantics [Sato (1995)] derived from Probabilistic Logic Programming (PLP), that has emerged as one of the most effective approaches for representing probabilistic information in Logic Programming languages. Many techniques have been proposed in PLP for combining Logic Programming with probability theory, for example [Lakshmanan and Sadri (2001)] and [Kifer and Subrahmanian (1992)] defined an extended immediate consequence operator that deals with probability intervals associated with atoms, effectively propagating the uncertainty among atoms using rules.
Despite the number of proposals for probabilistic semantics extending DLs, only a few of them have been equipped with a reasoner to compute the probability of queries. Examples of probabilistic DL reasoners are PRONTO [Klinov (2008)], BORN [Ceylan et al. (2015)] and BUNDLE [Riguzzi et al. (2015), Zese (2017)]. PRONTO, for instance, is a probabilistic reasoner that can be applied to P-. BORN answers probabilistic subsumption queries w.r.t. KBs by using ProbLog to manage the probabilistic part of the KB. Finally, BUNDLE performs probabilistic reasoning over DISPONTE KBs by exploiting Pellet to return explanations and Binary Decision Diagrams (BDDs) to compute the probability of queries.
Most DL reasoners adopt the tableau algorithm [Horrocks and Sattler (2007), Horrocks et al. (2006)], whose non-deterministic expansion rules require the implementation of a search strategy in an or-branching search space.
Reasoners written in Prolog can exploit Prolog’s backtracking facilities for performing the search, as has been observed in various works [Beckert and Posegga (1995), Hustadt et al. (2008), Lukácsy and Szeredi (2009), Ricca et al. (2009), Gavanelli et al. (2015)]. For this reason, in [Zese et al. (2018), Zese (2017)] we proposed the system TRILL, a tableau reasoner implemented in Prolog. Prolog’s search strategy is exploited for taking into account the non-determinism of the tableau rules. TRILL can check the consistency of a concept and the entailment of an axiom from an ontology, and can also return the probability of a query.
Both BUNDLE and TRILL use Binary Decision Diagrams (BDDs) for computing the probability of queries from the set of all explanations. They encode the results of the inference process in a BDD from which the probability can be computed in a time linear in the size of the diagram. We also developed TRILL [Zese et al. (2018), Zese (2017)], which builds a pinpointing formula able to compactly represent the set of explanations. This formula is used to build the corresponding BDD and compute the query’s probability. In [Riguzzi et al. (2015), Zese et al. (2018), Zese (2017)] we have extensively tested BUNDLE, TRILL and TRILL, showing that they can achieve significant results in terms of scalability and speed.
In this paper, we present TORNADO for “Trill powered by pinpOinting foRmulas and biNAry DecisiOn diagrams”, in which the BDD representing the pinpointing formula is directly built during tableau expansion, speeding up the overall inference process. TRILL, TRILL and TORNADO are all available in the TRILL on SWISH web application at http://trill.ml.unife.it/.
We also present an experimental evaluation of TORNADO, comparing it with several probabilistic and non-probabilistic reasoners. Results show that TORNADO is as fast as, or faster than, state-of-the-art reasoners even for non-probabilistic inference and can, in some cases, avoid an exponential blow-up.
The paper is organized as follows: Section 2 briefly introduces DLs and Section 3 presents DISPONTE. The tableau algorithm of TRILL and TORNADO is discussed in Section 4, followed by the description of the two systems in Section 5. Finally, Section 6 shows the experimental evaluation and Section 7 concludes the paper.
2 Description Logics
DLs are fragments of first-order logic (FOL) used for modeling knowledge bases (KBs) that exhibit nice computational properties such as decidability and/or low complexity [Baader et al. (2008)]. There are many DL languages that differ in the constructs allowed for defining concepts (sets of individuals of the domain) and roles (sets of pairs of individuals). Here we illustrate the DL which is the expressiveness level supported by TRILL and TORNADO.
Let us consider a set $\mathbf{A}$ of atomic concepts, a set $\mathbf{R}$ of atomic roles and a set $\mathbf{I}$ of individuals. A role is either an atomic role $R \in \mathbf{R}$ or the inverse $R^-$ of an atomic role $R \in \mathbf{R}$. We use $\mathbf{R}^-$ to denote the set of all inverses of roles in $\mathbf{R}$. Each $A \in \mathbf{A}$, $\bot$ and $\top$ are concepts. If $C$, $C_1$ and $C_2$ are concepts and $R \in \mathbf{R} \cup \mathbf{R}^-$, then $(C_1 \sqcap C_2)$, $(C_1 \sqcup C_2)$ and $\neg C$ are concepts, as well as $\exists R.C$ and $\forall R.C$.
A knowledge base (KB) $\mathcal{K} = (\mathcal{T}, \mathcal{R}, \mathcal{A})$ consists of a TBox $\mathcal{T}$, an RBox $\mathcal{R}$ and an ABox $\mathcal{A}$. An RBox $\mathcal{R}$ is a finite set of transitivity axioms $Trans(R)$ and role inclusion axioms $R \sqsubseteq S$, where $R, S \in \mathbf{R} \cup \mathbf{R}^-$. A TBox $\mathcal{T}$ is a finite set of concept inclusion axioms $C \sqsubseteq D$, where $C$ and $D$ are concepts. An ABox $\mathcal{A}$ is a finite set of concept membership axioms $a : C$ and role membership axioms $(a, b) : R$, where $C$ is a concept, $R \in \mathbf{R}$ and $a, b \in \mathbf{I}$.
A KB is usually assigned a semantics in terms of interpretations $\mathcal{I} = (\Delta^{\mathcal{I}}, \cdot^{\mathcal{I}})$, where $\Delta^{\mathcal{I}}$ is a non-empty domain and $\cdot^{\mathcal{I}}$ is the interpretation function, which assigns an element of $\Delta^{\mathcal{I}}$ to each individual, a subset of $\Delta^{\mathcal{I}}$ to each concept and a subset of $\Delta^{\mathcal{I}} \times \Delta^{\mathcal{I}}$ to each role.
A query $Q$ over a KB $\mathcal{K}$ is usually an axiom for which we want to test the entailment from the KB, written as $\mathcal{K} \models Q$.
The following KB is inspired by the ontology people+pets [Patel-Schneider et al. (2003)]:
It states that individuals owning an animal which is a pet are nature lovers, and that a certain individual owns two animals which are cats. Moreover, cats are pets. The KB thus entails the query that this individual is a nature lover.
3 Probabilistic Description Logics
DISPONTE [Bellodi et al. (2011), Riguzzi et al. (2015), Zese (2017)] applies the distribution semantics [Sato (1995)] to probabilistic ontologies. In DISPONTE a probabilistic knowledge base is a set of certain and probabilistic axioms. Certain axioms are regular DL axioms. Probabilistic axioms take the form $p :: E$, where $p$ is a real number in $[0, 1]$ and $E$ is a DL axiom. The probability $p$ can be interpreted as the degree of our belief in axiom $E$. For example, a probabilistic concept membership axiom $p :: a : C$ means that we have degree of belief $p$ in $a$ belonging to concept $C$. The statement that cats are pets with probability 0.6 can be expressed as $0.6 :: cat \sqsubseteq pet$.
The idea of DISPONTE is to associate independent Boolean random variables with the probabilistic axioms. By assigning values to every random variable we obtain a world, i.e. the set of probabilistic axioms whose random variable takes on value 1, together with the set of certain axioms. Therefore, given a KB with $n$ probabilistic axioms, there are $2^n$ different worlds, one for each possible subset of the probabilistic axioms. Each world contains all the non-probabilistic axioms of the KB. DISPONTE defines a probability distribution over worlds as in probabilistic logic programming.
The probability of a world is computed by multiplying $p_i$ for each probabilistic axiom $E_i$ included in the world and $1 - p_i$ for each probabilistic axiom $E_i$ not included in the world.
Formally, an atomic choice is a couple $(E_i, k)$ where $E_i$ is the $i$-th probabilistic axiom and $k \in \{0, 1\}$. $k$ indicates whether $E_i$ is chosen to be included in a world ($k = 1$) or not ($k = 0$). A composite choice $\kappa$ is a consistent set of atomic choices, i.e., $(E_i, k) \in \kappa$ and $(E_i, m) \in \kappa$ implies $k = m$ (only one decision is taken for each axiom). The probability of a composite choice $\kappa$ is $P(\kappa) = \prod_{(E_i, 1) \in \kappa} p_i \prod_{(E_i, 0) \in \kappa} (1 - p_i)$, where $p_i$ is the probability associated with axiom $E_i$. A selection $\sigma$ is a total composite choice, i.e., it contains an atomic choice for every probabilistic axiom of the theory. Thus a selection $\sigma$ identifies a world $w_\sigma$ in this way: $w_\sigma = \mathcal{C} \cup \{E_i \mid (E_i, 1) \in \sigma\}$, where $\mathcal{C}$ is the set of certain axioms. Let us indicate with $\mathcal{W}$ the set of all worlds. The probability of a world $w_\sigma$ is $P(w_\sigma) = P(\sigma)$. $P$ is a probability distribution over worlds, i.e., $\sum_{w \in \mathcal{W}} P(w) = 1$.
We can now assign probabilities to queries. Given a world $w$, the probability of a query $Q$ is defined as $P(Q \mid w) = 1$ if $w \models Q$ and 0 otherwise. The probability of a query can be obtained by marginalizing the joint probability of the query and the worlds:
$$P(Q) = \sum_{w} P(Q, w) = \sum_{w} P(Q \mid w) P(w) = \sum_{w \models Q} P(w)$$
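To make the semantics concrete, the following Python sketch enumerates all $2^n$ worlds by brute force and sums the probabilities of those entailing the query. This is only an illustration of the semantics, not the authors' Prolog implementation; the axiom names, the probabilities and the `entails` function (which stands in for a DL reasoner) are hypothetical.

```python
from itertools import product

def query_probability(prob_axioms, entails):
    """Enumerate all 2^n worlds of a DISPONTE KB and sum the
    probabilities of the worlds entailing the query.

    prob_axioms: list of (axiom, p) pairs.
    entails: function from a set of chosen probabilistic axioms
             to True/False (a stand-in for a DL reasoner; the
             certain axioms are implicitly in every world).
    """
    total = 0.0
    n = len(prob_axioms)
    for selection in product([0, 1], repeat=n):
        # probability of the world: product of p_i for included
        # axioms and (1 - p_i) for excluded ones
        p_world = 1.0
        world = set()
        for (axiom, p), k in zip(prob_axioms, selection):
            p_world *= p if k == 1 else 1 - p
            if k == 1:
                world.add(axiom)
        if entails(world):
            total += p_world
    return total

# toy KB with illustrative probabilities: the query holds iff the
# subsumption axiom and at least one membership axiom are chosen
axioms = [("a_is_cat", 0.4), ("b_is_cat", 0.3), ("cat_sub_pet", 0.6)]
entails = lambda w: "cat_sub_pet" in w and ("a_is_cat" in w or "b_is_cat" in w)
print(query_probability(axioms, entails))  # 0.6 * (1 - 0.6*0.7) = 0.348
```

Brute-force enumeration is exponential in the number of probabilistic axioms, which is why practical systems compile explanations into BDDs instead.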
Let us consider the knowledge base and the query of Example 1 where some of the axioms are made probabilistic:
The two individuals are cats, and cats are pets, with the specified probabilities. The KB has eight worlds and the query is true in three of them, i.e.,
These worlds correspond to the selections:
The probability is
TRILL [Zese et al. (2018), Zese (2017)] computes the probability of a query w.r.t. KBs that follow DISPONTE by first computing all the explanations for the query and then building a Binary Decision Diagram (BDD) that represents them. An explanation is a subset of the axioms of a KB that is sufficient to entail the query $Q$. Since explanations may also contain axioms that are irrelevant for proving the truth of $Q$, usually minimal explanations (also known as justifications) w.r.t. set inclusion are considered: a set of axioms is a minimal explanation if it entails $Q$ and none of its proper subsets does. In other words, if we remove an axiom from a minimal explanation, the resulting set is no longer an explanation, while if we add to it an axiom chosen among the remaining axioms of the KB, the resulting set is an explanation that is not minimal. From now on, for the sake of brevity, when we mention explanations we refer to minimal explanations. An explanation can be represented with a composite choice. Given the set of all explanations for a query $Q$, we can define a Disjunctive Normal Form (DNF) Boolean formula whose disjuncts are the conjunctions of the variables associated with the axioms of each explanation. The variables are independent Boolean random variables, and the probability that the formula takes value 1 gives the probability of $Q$. A BDD for a function of Boolean variables is a rooted graph that has one level for each Boolean variable. A node $n$ has two children: one corresponding to the 1 value of the variable associated with the level of $n$ and one corresponding to the 0 value of the variable. When drawing BDDs, the 0-branch is distinguished from the 1-branch by drawing it with a dashed line. The leaves store either 0 or 1. BDD software packages take as input a Boolean function and incrementally build the diagram so that isomorphic portions of it are merged, possibly changing the order of variables if useful.
This often allows the diagram to have a number of nodes much smaller than exponential in the number of variables that a naive representation of the function would require.
Given the BDD, we can use the function Prob shown in Algorithm 1 [Kimmig et al. (2011)]. This dynamic programming algorithm traverses the diagram from the leaves and computes the probability of a formula encoded as a BDD.
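The idea of the traversal can be sketched in Python, with an internal BDD node encoded as a tuple (probability of the level's variable, 1-child, 0-child) and leaves encoded as the integers 1 and 0. This is an illustrative re-implementation of the dynamic-programming scheme, not Algorithm 1 itself nor the CUDD-based code used by the systems.

```python
def prob(node, memo=None):
    """Bottom-up probability of the Boolean function encoded by a BDD:
    P(node) = p * P(1-child) + (1 - p) * P(0-child),
    where p is the probability of the variable at the node's level.
    Memoization ensures each shared node is evaluated only once,
    giving time linear in the size of the diagram."""
    if memo is None:
        memo = {}
    if node == 1:                 # 1-leaf
        return 1.0
    if node == 0:                 # 0-leaf
        return 0.0
    if node in memo:
        return memo[node]
    p, high, low = node           # (P(var)=1 prob, 1-child, 0-child)
    res = p * prob(high, memo) + (1 - p) * prob(low, memo)
    memo[node] = res
    return res

# BDD for X1 OR (NOT X1 AND X2), with P(X1)=0.4 and P(X2)=0.3:
bdd = (0.4, 1, (0.3, 1, 0))
print(prob(bdd))  # 0.4 + 0.6 * 0.3, i.e. approximately 0.58
```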
Example 3 (Example 2 cont.)
Let us consider the KB of Example 2. If we associate the random variables with axiom , with and with , the Boolean formula represents the set of explanations. The BDD for such a function is shown in Figure 1. By applying function Prob of Algorithm 1 to this BDD we get
and therefore , which corresponds to the probability given by the semantics.
4 The Pinpointing Formula
In [Baader and Peñaloza (2010a), Baader and Peñaloza (2010b)] the authors consider the problem of finding a pinpointing formula instead of a set of explanations. A pinpointing formula is a compact representation of the set of explanations. To build a pinpointing formula, we first associate a unique propositional variable with every axiom of the KB. The pinpointing formula is then a monotone Boolean formula built from some or all of these variables using only the conjunction and disjunction connectives. A valuation of a set of variables is the set of propositional variables that are assigned true; each valuation identifies the set of axioms whose variables it makes true.
Definition 1 (Pinpointing formula [Baader and Peñaloza (2010b)])
Given a query $Q$ and a KB $\mathcal{K}$, a monotone Boolean formula over the variables of the axioms is called a pinpointing formula for $Q$ if, for every valuation $\mathcal{V}$, the set of axioms identified by $\mathcal{V}$ entails $Q$ iff $\mathcal{V}$ satisfies the formula.
In [Baader and Peñaloza (2010b)] the authors also discuss the relation between the pinpointing formula and the explanations for a query $Q$: the set of explanations for $Q$ corresponds to the set of minimal valuations satisfying the pinpointing formula. This set can be obtained by converting the pinpointing formula into Disjunctive Normal Form (DNF) and removing disjuncts implying other disjuncts. However, the transformation to DNF may produce a formula whose size is exponential in the size of the original one. The correspondence also holds in the other direction: the DNF formula obtained by disjoining, for each explanation, the conjunction of the variables of its axioms is a pinpointing formula.
Example 4 (Example 3 cont.)
Let us consider the KB and the query of Example 2. The set corresponds to the pinpointing formula .
One interesting feature of the pinpointing formula is that an exponential number of explanations can be represented with a much smaller pinpointing formula.
Given an integer $n$, consider the KB containing the following axioms for each $i$ from 1 to $n$:
The query has $2^n$ explanations, even though the KB has a size that is linear in $n$. For $n = 2$, for example, we have 4 different explanations, namely
The corresponding pinpointing formula is . In general, given , the formula for this example is
whose size is linear in $n$.
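This gap can be checked by brute force. The concrete axioms of the example are not reproduced here, so the sketch below assumes a pinpointing formula of the linear-size shape $\bigwedge_{i}(a_i \wedge (b_i \vee c_i))$, which exhibits exactly the stated behavior: $3n$ literals but $2^n$ minimal satisfying valuations.

```python
from itertools import product

def formula(n, val):
    # the assumed linear-size pinpointing formula: AND_i (a_i AND (b_i OR c_i))
    return all(val[f"a{i}"] and (val[f"b{i}"] or val[f"c{i}"]) for i in range(n))

def minimal_explanations(n):
    """Brute-force the minimal valuations (i.e., the explanations)
    of the formula above over its 3n variables."""
    vars_ = [f"{x}{i}" for i in range(n) for x in "abc"]
    sats = []
    for bits in product([False, True], repeat=len(vars_)):
        val = dict(zip(vars_, bits))
        if formula(n, val):
            sats.append(frozenset(v for v in vars_ if val[v]))
    # keep only the valuations minimal w.r.t. set inclusion
    return [s for s in sats if not any(t < s for t in sats)]

for n in (1, 2, 3):
    # formula size grows linearly (3n literals), explanations as 2^n
    print(n, 3 * n, len(minimal_explanations(n)))
```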
4.1 The Tableau Algorithm for the Pinpointing Formula
One of the most common approaches for performing inference in DL is the tableau algorithm [Baader and Sattler (2001)]. A tableau is a graph where the nodes are individuals annotated with the concepts they belong to and the edges are annotated with the roles that relate the connected individuals. A tableau can also be seen as an ABox, i.e., a set of (class and role) assertions. This graph is expanded by applying a set of consistency preserving expansion rules until no more rules are applicable. However, some expansion rules are non-deterministic and their application results in a set of tableaux. Therefore, the tableau algorithm manages a forest of tableau graphs and terminates when all the graphs are fully expanded.
Extensions of the standard tableau algorithm allow the computation of explanations for a query by associating sets of axioms, representing the explanations, with each annotation of each node and edge; the annotations of nodes and edges are denoted analogously. A recent extension represents explanations by means of a monotone Boolean formula [Baader and Peñaloza (2010b)]: every node (edge) annotation, i.e., an assertion, is associated with a label that is a monotone Boolean formula over the variables of the axioms. In the initial tableau, every assertion derived from an axiom is labeled with the variable of that axiom, and the assertion introduced for the query is added with its initial label.
The tableau is then expanded by means of expansion rules. In [Baader and Peñaloza (2010b)] a rule is of the form
where each of the listed sets is a finite set of assertions possibly containing variables, together with a finite set of axioms. Assertions may contain variables for concepts, roles and individuals; when the assertions in the precondition can be unified with assertions in the tableau and the required axioms belong to the KB, the rule can be applied to the tableau. Before applying the rule, all variables in its assertions are instantiated.
In this example we show the tableau algorithm in action on an extract of the KB of Example 1, and the query .
The initial tableau, shown on the left hand side of Figure 2, contains the nodes for and . The node for is annotated with the concept due to axiom , while the node for is annotated with the concept , due to the query . Moreover, the edge between the two nodes is annotated with the role , due to axiom . The final tableau, obtained after the application of the expansion rules, is shown on the right hand side of Figure 2. In this tableau, the node for is also annotated with the concept , and the node for with the concepts and .
Rules can be divided into two sets: deterministic and non-deterministic. A deterministic rule adds all its ground assertions to the tableau to which it is applied, while a non-deterministic rule creates new tableaux, one for each alternative, and adds to the $i$-th tableau the corresponding set of ground assertions.
In order to explain the conditions that allow for the application of a rule, we first need some definitions.
Let $A$ be a set of labeled assertions and $\psi$ a monotone Boolean formula. An assertion $a$ is $\psi$-insertable into $A$ if either $a \notin A$, or $a \in A$ but $\psi$ does not imply the label of $a$ in $A$. Given a set $B$ of assertions and a set $A$ of labeled assertions, the set of $\psi$-insertable elements of $B$ into $A$ is the set of assertions of $B$ that are $\psi$-insertable into $A$.
The result of the operation of $\psi$-insertion of $B$ into $A$ is the set of labeled assertions obtained as follows: the labels of the assertions of $A$ that are not inserted remain unchanged, the assertions that are new to $A$ get the label $\psi$, and the assertions that were already in $A$ get as label the disjunction of their old label and $\psi$.
Consider the KB and the query of Example 2. After finding the first explanation for the query, the tableau contains a set of labeled assertions. Suppose we want to insert into it an assertion that is already present with label $\psi'$, using formula $\psi$. Since $\psi$ does not imply $\psi'$, the assertion is $\psi$-insertable, and its insertion changes the label to the disjunction of the two formulas, i.e., $\psi' \vee \psi$.
We also need the concept of substitution. A substitution $\theta$ is a mapping from a finite set of logical variables to a countably infinite set of constants that contains all the individuals of the KB and all the anonymous individuals created by the application of the rules. A substitution can also be seen as a set of ordered couples in the obvious way. Variables are seen as placeholders for individuals in the assertions. Given an assertion containing a variable, applying a substitution replaces the variable with its image under the substitution. A substitution $\theta'$ extends $\theta$ if $\theta \subseteq \theta'$. A rule can be applied to a tableau with a substitution $\theta$ on the variables occurring in its precondition if the instantiated precondition assertions occur in the tableau and the required axioms belong to the KB. An applicable rule is applied by generating a set of tableaux, the $i$-th obtained by the $\psi$-insertion of the corresponding instantiated assertions, using a substitution extending $\theta$. Variables not occurring in the precondition (fresh variables) are instantiated with new individuals which do not appear in the KB; these individuals are also called anonymous.
Consider, for example, the rule defined as
handling existential restrictions. Informally, “if , but there is no individual name such that and in , then where is an individual name not occurring in ”. If does not contain two assertions that match , a fresh variable is instantiated with a new fresh individual. Thus, if the rule can be applied to with substitution with a new anonymous individual. After the application of the rule .
However, the discussion above does not ensure that rules such as that of Example 8 are not applied again, creating new fresh individuals. In fact, just checking whether the new assertions are not already contained in the tableau does not prevent the rule in the example from being re-applied, generating further anonymous individuals. This motivates the following definition of rule applicability.
Definition 3 (Rule Applicability)
Given a tableau, a rule is applicable with a substitution $\theta$ on the variables occurring in its precondition if the instantiated precondition assertions occur in the set of assertions of the tableau, the required axioms belong to the KB, and, for every alternative of the rule and every substitution on its variables extending $\theta$, the instantiated assertions of that alternative are not all already contained in the tableau.
We can now define also rule application.
Definition 4 (Rule Application)
Given a forest of tableaux and a tableau representing the set of assertions to which a rule is applicable with substitution , the application of the rule leads to the new forest . Each contains the assertions in , where is a substitution on the variables occurring in that extends substitution and maps variables of to new distinct anonymous individuals, i.e. individuals not occurring in . The rule is applied for each possible given by .
After the full expansion of the forest of tableaux, i.e., when no more rules are applicable to any tableau of the forest, the pinpointing formula is built from all the clashes in the tableaux. A clash is represented by two complementary assertions $a : C$ and $a : \neg C$ present in the same tableau.
The pinpointing formula is built by first conjoining, for each clash, the labels of the two clashing assertions, then by disjoining the formulas for every clash in a tableau and finally by conjoining the formulas for each tableau.
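This construction can be sketched as follows, with labels and formulas represented as nested tuples; the encoding and the variable names are illustrative, not the systems' internal representation.

```python
def pinpointing_formula(forest):
    """Build the pinpointing formula from a fully expanded forest:
    for each clash, conjoin the labels of the two clashing assertions;
    disjoin the clash formulas within a tableau; conjoin across
    tableaux.

    forest: list of tableaux, each a list of (label1, label2) pairs,
            one pair per clash.
    """
    per_tableau = []
    for clashes in forest:
        clash_formulas = [("and", l1, l2) for l1, l2 in clashes]
        per_tableau.append(("or", *clash_formulas))
    return ("and", *per_tableau)

# two tableaux: the first with clashes labeled (x1, x2) and (x1, x3),
# the second with a single clash labeled (x2, x3)
f = pinpointing_formula([[("x1", "x2"), ("x1", "x3")], [("x2", "x3")]])
print(f)
```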
In order to ensure termination of the algorithm, blocking must be used.
Definition 5 (Blocking)
Given a node $v$ of a tableau, $v$ is blocked iff either $v$ is a new node generated by a rule and it has a predecessor that contains the same annotations as $v$ with equal labels, or its parent is blocked.
Let us consider the following KB.
The initial tableau, shown on the left hand side of Figure 3, contains a single node annotated according to the KB. After the application of the unfold rule and of the rule explained in Example 8, the resulting tableau is shown on the right hand side of Figure 3. The tableau has a new node corresponding to an anonymous individual, which has the same annotations as its predecessor. This node is blocked according to Definition 5, because further expansion would lead to the creation of an infinite chain of nodes associated with new anonymous individuals, all containing the same annotations.
Then, a new definition of applicability must be given.
Definition 6 (Rule Applicability with Blocking)
A rule is applicable if it is so in the sense of Definition 3. Moreover, if the rule adds a new node to the tableau, the node annotated with the assertion to which the rule is applied must not be blocked.
Theorem 1 (Correctness of Pinpointing Formula [Baader and Peñaloza (2010b)])
Given a KB and a query , for every chain of rule applications resulting in a fully expanded forest , the formula built as indicated above is a pinpointing formula for the query .
This approach is correct and terminating for the DL . Number restrictions and nominal concepts cannot be handled by this definition of the tableau algorithm because of the definitions of rule and rule application: tableau expansion rules for DLs with these constructs may merge nodes, an operation that is not allowed by the approach presented above. The authors of [Baader and Peñaloza (2010b)] conjecture that the approach can be extended to deal with such constructs but, to the best of our knowledge, this conjecture has not yet been proved.
Until now, we have considered neither transitivity axioms nor role inclusion axioms. To handle them, the definition of -successor must be given.
Given a role $R$, an individual $b$ is called an $R$-successor of an individual $a$ iff there is an assertion $(a, b) : S$ for some sub-role $S$ of $R$.
Note that each role is a sub-role of itself. Following Definition 7, every assertion indicates that is an -successor of .
Consider a KB containing, among others, the following axioms:
the assertion means that is an -successor of and, therefore, that there is also the assertion , and, since is an -successor of , recursively and as well.
Definition 7 is used to deal with role inclusion when considering quantified concepts ( and ) in order to correctly manage subsumption ( if ). The expansion rules for the tableau algorithm extended with the pinpointing formula and the management of -successors are shown in Figure 4. Here, and indicate that concept is an atomic concept and a complex concept respectively. Moreover, means that there is no individual such that the set of assertions containing does not contain the assertions defined in . Finally, adds a new anonymous individual to the set of assertions.
As reported in [Baader and Sattler (2001)], the unfold rule considers only subsumption axioms where the sub-class is an atomic concept. The CE rule is used when the sub-class is not atomic; in such a case the unfold rule might lead to an exponential blow-up. The CE rule applies every subsumption axiom with a complex sub-class to every individual of the KB.
While the and rules are easily understandable, the rule ensures that there exists at least one individual connected to by role belonging to class . The rule ensures that every individual connected to by role belongs to the concept specified by the assertion, while the ensures that the effects of universal restrictions are propagated as necessary in the presence of non-simple roles. It basically adds iff is an -successor of such that is included in the set of initial assertions and is a transitive sub-role of .
We refer to [Baader and Sattler (2001)] for a detailed discussion on the tableau algorithm for DLs and its rules.
Like TRILL and TRILL, TORNADO implements the tableau algorithm described in the previous section. In particular, TORNADO shares the same basis as TRILL, because they both build the pinpointing formula representing the answer to queries. Unlike TRILL, however, TORNADO labels the assertions with a BDD representing the pinpointing formula instead of the formula itself. In this case, -insertability can be checked without resorting to a SAT solver. In fact, suppose the tableau contains an assertion labeled with a BDD, and we want to add a new BDD to its label. If the assertion is insertable, the result is that it will be labeled with the BDD obtained by disjoining the old and the new BDDs; the assertion is insertable iff this disjunction differs from the old BDD. Since BDDs are a canonical representation of Boolean formulas, two BDDs are identical iff they represent equivalent formulas, so we can avoid the SAT test by computing the disjunction of the BDDs and checking whether the result is the same as the old label. If it is not, we can insert the assertion in the tableau with the disjunction, which is already computed, as its new label.
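The check can be illustrated with any canonical representation of Boolean functions, for which syntactic equality coincides with logical equivalence. In the sketch below, truth sets over a fixed variable set stand in for BDDs (the real system manipulates BDDs through CUDD, not shown here); the variables and labels are hypothetical.

```python
from itertools import product

VARS = ("v1", "v2")

def truth_set(fn):
    """Canonical representation of a Boolean function over VARS:
    the frozenset of its satisfying assignments (canonical like a
    BDD, so equality coincides with logical equivalence)."""
    return frozenset(bits for bits in product([0, 1], repeat=len(VARS))
                     if fn(dict(zip(VARS, bits))))

def insert_label(old, new):
    """TORNADO-style insertability check: 'new' is insertable into
    the label 'old' iff old OR new differs from old, i.e. iff 'new'
    does not imply 'old'. Returns (insertable, combined label)."""
    combined = old | new     # disjunction on canonical representations
    return combined != old, combined

old  = truth_set(lambda v: v["v1"])                # current label: v1
new1 = truth_set(lambda v: v["v1"] and v["v2"])    # v1 AND v2 implies v1
new2 = truth_set(lambda v: v["v2"])                # v2 does not imply v1

ins1, _    = insert_label(old, new1)   # False: nothing new to record
ins2, lbl2 = insert_label(old, new2)   # True: label becomes v1 OR v2
```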
Theorem 2 (TORNADO’s Correctness)
Given a KB and a query, the probability value returned by TORNADO when answering the query corresponds to the probability computed for the query according to the DISPONTE semantics.
The proof of this theorem follows from Theorem 1. Since the pinpointing formula of a query w.r.t. the KB corresponds to its set of explanations, their translations into BDDs are equivalent as well. TORNADO implements the tableau algorithm computing the pinpointing formula and represents this formula directly with BDDs built during inference; hence the probability computed from the resulting BDD is correct w.r.t. the semantics.
5.1 Implementation of TORNADO
First, we describe the parts common to TORNADO, TRILL and TRILL, and then we show the differences. The code of all three systems is available at https://github.com/rzese/trill and can be tested online with the TRILL on SWISH web application at http://trill.ml.unife.it/. Figure 5 shows the TRILL on SWISH interface.
All systems allow the use of two different syntaxes for axioms: OWL/RDF and Prolog.
The first can be used by exploiting the predicate owl_rdf/1, whose argument is a string containing the KB in OWL/RDF.
The Prolog syntax is borrowed from the Thea library (http://vangelisv.github.io/thea/), is similar to the Functional-Style Syntax of OWL [W3C (2012)], and represents axioms as Prolog atoms. For example, the axiom
stating that cat is subclass of pet can be expressed as
while the axiom
stating that pet is equivalent to the intersection of classes animal and not wild can be expressed as:
In order to represent the tableau, the systems use a pair whose first component is a list containing assertions labeled with the corresponding pinpointing formula, and whose second component is a triple made of: a directed graph that encodes the structure of the tableau; a red-black tree (a key-value dictionary) where a key is a pair of individuals and its value is the set of roles that connect the two individuals; and a red-black tree where a key is a role and its value is the set of pairs of individuals linked by the role. These structures are built and handled using two Prolog built-in libraries, one tailored for unweighted graphs, used for the structure of the tableau, and one for red-black trees, used for the two dictionaries. From these data structures we can quickly retrieve the information needed during the execution of the tableau algorithm and check blocking conditions through the predicates nominal/2 and blocked/2. These predicates take as input an individual and a tableau. For each individual in the ABox, a corresponding atom is added to the initial tableau in order to rapidly check whether a node is associated with an anonymous individual or not.
All non-deterministic rules are implemented by a predicate that takes as input the current tableau and returns the list of tableaux created by the application of the rule. Deterministic rules are implemented by a predicate that returns a single tableau after the application of the rule.
Since the order of rule application does not influence the final result, deterministic rules are applied first, and then the non-deterministic ones, in order to delay as much as possible the generation of new tableaux. Some deterministic rules are applied as last rules [Baader and Sattler (2001)]. After the application of a deterministic rule, a cut avoids backtracking to other possible choices for the deterministic rules. Then, non-deterministic rules are tried sequentially. After the application of a non-deterministic rule, a cut is performed to avoid backtracking to other rule choices, and a tableau from the resulting list is non-deterministically chosen with member/2. If no rule is applicable, rule application stops and the current tableau is returned; otherwise a new round of rule application is performed.
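The control strategy can be rendered schematically as follows. This is an illustrative Python sketch (the systems realize the strategy through Prolog cuts and member/2), and the toy rules in the example are hypothetical: a rule returns None when not applicable.

```python
def expand(tableau, det_rules, nondet_rules):
    """Apply deterministic rules first; only when none applies, try a
    non-deterministic rule and branch on the resulting tableaux,
    expanding each branch in turn (depth-first, as Prolog would via
    backtracking). Returns the list of fully expanded tableaux."""
    for rule in det_rules:
        res = rule(tableau)
        if res is not None:              # deterministic rule applied
            return expand(res, det_rules, nondet_rules)
    for rule in nondet_rules:
        branches = rule(tableau)
        if branches:                     # branch and expand each tableau
            return [t for b in branches
                      for t in expand(b, det_rules, nondet_rules)]
    return [tableau]                     # fully expanded

# toy tableaux as frozensets of atoms, with one rule of each kind:
det = [lambda t: t | {"b"} if "a" in t and "b" not in t else None]
nd  = [lambda t: [t | {"c"}, t | {"d"}]
       if "b" in t and not ({"c", "d"} & t) else None]
result = expand(frozenset({"a"}), det, nd)
print(result)  # two fully expanded tableaux, one per branch
```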
The labels of assertions are combined in TRILL using functors representing conjunction and disjunction, such as +/1, whose argument is the list of operands. For example, the formula of Example 4 can be represented as
-insertability is checked in TRILL by conjoining the formula we want to add with the negation of the formula labeling the assertion in the tableau. If the resulting formula is satisfiable, then the assertion is -insertable. Predicate test/2 checks -insertability: it takes as input the two formulas and calls a satisfiability library after having transformed the formulas into a suitable format.
The Boolean pinpointing formula returned by TRILL is then translated into a BDD from which the probability can be computed.
As already seen, TORNADO avoids the steps just described by directly building BDDs. -insertability is checked by disjoining the current label of assertion and the new BDD found and checking whether the resulting BDD is different from the original label of the assertion. Finally, when TORNADO ends the computation of the query, the corresponding BDD is already built and can be used to calculate the probability of the query.
BDDs are managed in Prolog by using a library developed for the system PITA [Riguzzi and Swift (2011)], which interfaces Prolog to the CUDD library333http://vlsi.colorado.edu/~fabio/CUDD/ for manipulating BDDs. The PITA library offers predicates for performing Boolean operations between BDDs. Note that BDDs are represented in Prolog by pointers to their root node, so checking equality between BDDs amounts to checking equality between two pointers, which takes constant time. Thus, the test/3 predicate has only to update the BDD and check whether the new BDD is different from the original one. This test is necessary to avoid entering an infinite loop in which the same assertion is inserted infinitely many times. The code of TORNADO's test/3 predicate is shown below.
```prolog
test(BDD1,BDD2,F) :-
    % BDD1 is the new BDD,
    % BDD2 is the BDD already in the tableau
    or_f(BDD1,BDD2,F),  % combines BDD1 and BDD2 to create BDD F
    BDD2 \== F.         % checks if F is different from BDD2
```
The time taken by Boolean operations between BDDs and the size of the results depend on the ordering of the variables. A good order can significantly reduce both time and size. However, the problem of finding the optimal order is coNP-complete [Bryant (1986)]. For this reason, heuristic methods are used to choose the ordering: CUDD, for example, offers heuristics based on symmetry detection and on genetic algorithms. Reordering can be executed on user request or automatically by the package when the number of nodes reaches a certain threshold, which is initialized and automatically tuned after each reordering. We refer to the documentation444http://www.cs.uleth.ca/~rice/cudd_docs/ of the library for detailed information about each implemented heuristic.
It is important to note that CUDD groups BDDs into environments called BDD managers; we use a single BDD manager for each query. When a reordering is performed, all the BDDs of a manager are reordered together, so the pointer comparison used in the difference test remains valid.
For TORNADO we chose the group sifting heuristic [Panda and Somenzi (1995)], natively available in the CUDD package, for order selection. TORNADO never forces a reordering and relies on CUDD's automatic dynamic reordering. As the experimental results presented in the next section show, TORNADO achieves good results with these default settings.
We performed two experiments, the first one regarding non-probabilistic inference, the second one regarding probabilistic inference.
In the first experiment we compared TRILL, TRILL^P, TORNADO, and BORN with the non-probabilistic reasoners Pellet [Sirin et al. (2007)], Konclude555http://derivo.de/produkte/konclude/ [Steigmiller et al. (2014)], HermiT [Shearer et al. (2008)], Fact++ [Tsarkov and Horrocks (2006)], and JFact666http://jfact.sourceforge.net/. Konclude can check the consistency of a KB and the satisfiability of concepts, compute the class hierarchy of the KB, and find all the classes to which a given individual belongs. However, it cannot directly answer general queries or return explanations. On the other hand, Pellet, HermiT, Fact++, and JFact answer general queries and can be used for returning all explanations. To find all explanations, once the first one is found, Pellet, HermiT, Fact++, and JFact use the Hitting Set Tree (HST) algorithm [Reiter (1987)] to compute the others by repeatedly removing axioms one at a time and invoking the reasoner. This algorithm is implemented in the OWL Explanation library777https://github.com/matthewhorridge/owlexplanation [Horridge et al. (2009)].
Basically, it takes as input one explanation, randomly chooses one axiom from the explanation and removes it from the KB. At this point the HST algorithm calls the reasoner to try to find a new explanation w.r.t. the reduced KB. If a new explanation is found, an axiom from this new explanation is selected and removed from the reduced KB, and the search continues. Otherwise, the removed axiom is added back to the reduced KB and a new axiom to remove is selected from the last explanation found. The HST algorithm stops when all the axioms from all the explanations have been tested. To find a new explanation at every iteration, OWL Explanation, for HermiT, Fact++ and JFact, uses a black-box, i.e. reasoner-independent, approach, whereas Pellet uses a built-in approach, which is, however, a slightly modified version of the HST algorithm implemented in OWL Explanation. Konclude, on the other hand, does not implement the OWL API interface that we used for the implementation of the black-box algorithm. Moreover, since it does not return explanations, the black-box approach described above cannot be directly applied to it. Using Konclude to find all possible explanations would therefore require significant development work and careful tuning of the implementation, which is outside the scope of this paper. Therefore, we decided to include Konclude only in tests where we are not interested in finding all the explanations.
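The enumeration scheme just described can be sketched as follows. This is a simplified illustration that omits the hitting-set bookkeeping used to prune repeated branches; one_expl/3 is a hypothetical primitive returning one explanation (a list of axioms) of query Q w.r.t. the given KB, and failing when Q no longer follows from it:

```prolog
% all_expl(+Q, +KB, -Expls): collect all explanations by repeatedly
% removing one axiom of a found explanation and searching again.
all_expl(Q, KB, Expls) :-
    findall(E, expl_branch(Q, KB, E), Es),
    sort(Es, Expls).                 % remove duplicate explanations

expl_branch(Q, KB, E) :-
    one_expl(Q, KB, E0),
    (   E = E0                       % report this explanation, or
    ;   member(Ax, E0),              % pick one of its axioms,
        select(Ax, KB, KB1),         % remove it from the KB,
        expl_branch(Q, KB1, E)       % and search for further ones
    ).
```

Each recursive call invokes the underlying reasoner on a reduced KB, which explains why this approach can be expensive when explanations are numerous.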
In the second experiment we compared TRILL, TRILL^P, TORNADO, BORN, BUNDLE and PRONTO. While TRILL, TRILL^P, BUNDLE and TORNADO all follow the DISPONTE semantics, PRONTO and BORN are based on different semantics. PRONTO uses P-SHIQ(D) [Lukasiewicz (2008)], a language based on Nilsson's probabilistic logic [Nilsson (1986)], which defines probabilistic interpretations instead of a single probability distribution over theories as DISPONTE does. BORN uses BEL, which extends the Description Logic EL with Bayesian networks and is strongly related to DISPONTE. In fact, DISPONTE is a special case of BEL where (1) every axiom corresponds to a single Boolean random variable, while BEL allows a set of Boolean random variables; and (2) the Bayesian network has no edges, i.e., all the variables are independent. This special case greatly simplifies reasoning while still achieving significant expressiveness. Note that if we need the added expressiveness of BEL, as shown in [Zese et al. (2018)], the Bayesian network can be translated into an equivalent one where all the random variables are mutually unconditionally independent, so that the KB can be represented with DISPONTE.
Because of the above differences, the comparison with PRONTO and BORN is only meant to provide an empirical comparison of the difficulty of reasoning under the various semantics.
TRILL is implemented in both YAP and SWI-Prolog, while TORNADO only in SWI-Prolog; thus all tests were run with the SWI-Prolog version of TRILL 888The SWI-Prolog version exploits the solver contained in the clpb (http://www.swi-prolog.org/pldoc/man?section=clpb) library.. Pellet, BUNDLE and BORN are implemented in Java. BORN needs ProbLog to perform inference; we used version 2.1. To obtain the fairest comparison, the measured running time does not include the start-up time of the Prolog interpreter or of the Java virtual machine, but only inference and KB loading.
All tests were performed on the HPC System Marconi999http://www.hpc.cineca.it/hardware/marconi equipped with Intel Xeon E5-2697 v4 (Broadwell) @ 2.30 GHz, using 8 cores for each test.
6.1 Non-Probabilistic Inference
We performed three different tests for the non-probabilistic case. One with KBs modeling real world domains and two with artificial KBs.
We used four real-world KBs as in [Zese et al. (2018)]:
BRCA101010http://www2.cs.man.ac.uk/~klinovp/pronto/brc/cancer_cc.owl, which models the risk factors of breast cancer depending on many factors such as age and drugs taken;
an extract of the DBPedia ontology111111http://dbpedia.org/, containing structured information from Wikipedia, usually that shown in the infobox on the right-hand side of a page;
BioPAX level 3121212http://www.biopax.org/, which models metabolic pathways;
Vicodi131313http://www.vicodi.org/, which contains information on European history and models historical events and important personalities.
We used a version of the DBPedia and BioPAX KBs without the ABox and a version of BRCA and Vicodi with an ABox containing 1 individual and 19 individuals respectively. We randomly created 50 subclass-of queries for DBPedia and BioPAX and 50 instance-of queries for the other two, ensuring each query had at least one explanation. We ran each query with two different settings.
In the first setting, we used the reasoners to answer Boolean queries. We compared Konclude, Pellet, HermiT, Fact++, JFact, BORN, TRILL, TRILL^P and TORNADO. In this setting, Konclude has an advantage because it is optimized to test concept satisfiability. TRILL provides a predicate for answering yes/no queries by checking for the existence of an explanation. On the other hand, TRILL^P and TORNADO are used by checking whether the output formula is satisfiable, and BORN by checking that the probability of the query is not 0.
For all the considered reasoners except Konclude, we used the queries generated as described above. For Konclude, in order to perform tests as close as possible to those of the other competitors, for each subclass-of query C ⊑ D we extended the KB with a test concept defined as C ⊓ ¬D, while for each instance-of query a:C, where a belongs to the concepts C1, ..., Cn, we extended the KB with a test concept defined as D ⊓ ¬C, where D is defined as the intersection of C1, ..., Cn.
Table 1 shows the average running time and its standard deviation in seconds to answer queries on each KB. On BRCA, TRILL^P performs worse than TRILL and TORNADO since the SAT solver is repeatedly called with complex formulas. Konclude is the best on all KBs except DBPedia, where TORNADO performs similarly. TORNADO is the second fastest on BioPAX and BRCA, while TRILL is the second fastest algorithm on Vicodi. TRILL, TRILL^P, TORNADO and Konclude outperform Pellet, HermiT, Fact++ and JFact.
| System   | BioPAX         | DBPedia        | Vicodi         | BRCA           |
|----------|----------------|----------------|----------------|----------------|
| Pellet   | 1.502 ± 0.082  | 0.965 ± 0.083  | 1.334 ± 0.072  | 2.148 ± 0.12   |
| Konclude | 0.025 ± 0.004  | 0.013 ± 0.002  | 0.012 ± 0.001  | 0.018 ± 0.001  |
| Fact++   | 1.405 ± 0.126  | 1.230 ± 0.085  | 1.276 ± 0.131  | 1.465 ± 0.094  |
| HermiT   | 6.572 ± 0.0367 | 3.917 ± 0.279  | 6.313 ± 0.557  | 8.622 ± 0.603  |
| JFact    | 1.895 ± 0.058  | 1.625 ± 0.9    | 1.772 ± 0.087  | 2.832 ± 0.1    |
| TRILL    | 0.108 ± 0.047  | 0.106 ± 0.012  | 0.044 ± 0.018  | 0.800 ± 0.021  |
| TRILL^P  | 0.109 ± 0.03   | 0.139 ± 0.007  | 0.038 ± 0.020  | 1.486 ± 0.039  |
| TORNADO  | 0.105 ± 0.055  | 0.012 ± 0.005  | 0.041 ± 0.021  | 0.082 ± 0.018  |
In the second setting, we collected all the explanations; this is the fairest comparison since TRILL^P and TORNADO also explore the whole search space during inference, and so does BORN. We ran Pellet, HermiT, Fact++, JFact, BORN, TRILL, TRILL^P and TORNADO; Konclude was not considered because here we are interested in finding all the explanations. Table 2 shows, for each ontology, the average number of explanations and the average time in seconds to answer the queries for all the considered reasoners, together with the standard deviation. The values for BORN are taken from Table 1, because the cost of the final check on the probability can be neglected.
BRCA and DBPedia get the highest average number of explanations as they contain mainly subclass axioms between complex concepts.
In general TRILL, TRILL^P and TORNADO perform similarly to the first setting, while Pellet, HermiT, Fact++ and JFact are slower than in the first setting. BORN could be applied only to DBPedia, given that it can only handle EL KBs. On BRCA, TRILL^P performs worse than TRILL and TORNADO since the SAT solver is repeatedly called with complex formulas. TORNADO is the best on all KBs except Vicodi, thanks to the compact encoding of explanations as BDDs and to avoiding the use of a SAT solver, while TRILL is the fastest algorithm on Vicodi and the second fastest algorithm on BioPAX. In all the other cases, TRILL achieves the second best results.
While TRILL, TRILL^P and TORNADO terminate within one second (except for TRILL^P on BRCA), the remaining reasoners are slower. This is probably due to the approach used to find explanations (the OWL Explanation library for HermiT, Fact++ and JFact, and a built-in variant of it for Pellet): the use of a satisfiability reasoner within the HST algorithm may be less efficient than a procedure specifically designed to return explanations.
| System        | BioPAX         | DBPedia         | Vicodi         | BRCA            |
|---------------|----------------|-----------------|----------------|-----------------|
| Avg. N. Expl. | 3.92           | 16.32           | 1.02           | 6.49            |
| Pellet        | 1.954 ± 0.363  | 1.624 ± 0.637   | 1.734 ± 0.831  | 7.038 ± 2.952   |
| Fact++        | 3.837 ± 1.97   | 5.000 ± 1.266   | 2.803 ± 1.13   | 8.218 ± 3.754   |
| HermiT        | 11.798 ± 4.069 | 18.879 ± 16.754 | 9.331 ± 9.509  | 25.034 ± 10.855 |
| JFact         | 5.395 ± 3.913  | 12.274 ± 4.99   | 4.771 ± 4.03   | 18.068 ± 27.280 |
| TRILL         | 0.137 ± 0.042  | 0.108 ± 0.01    | 0.049 ± 0.026  | 0.805 ± 0.024   |
| TRILL^P       | 0.110 ± 0.043  | 0.139 ± 0.006   | 0.039 ± 0.018  | 1.507 ± 0.045   |
| TORNADO       | 0.106 ± 0.039  | 0.012 ± 0.008   | 0.041 ± 0.021  | 0.083 ± 0.031   |
Here we followed the idea presented in Section 3.6 of [Zese et al. (2018)] for investigating the effect of non-determinism in the choice of rules. In particular, we artificially created a set of KBs of increasing size, parameterized by two values each varying from 1 to 7. A single assertion is then added to each KB and the queries are asked. For each KB, a number of explanations depending on the parameters can be found, and every explanation contains a set of subclass-of axioms and one assertion axiom. The idea is to create an increasing number of backtracking points in order to test how Prolog can improve the performance when collecting all explanations with respect to procedural languages. For this reason, Konclude was not considered in this test.
Table 3 reports the average running time over 100 executions of the query for each system and KB when computing all the explanations. Columns and rows of the table correspond to the two size parameters. As in [Zese et al. (2018)], we set a time limit of 10 minutes for query execution; runs exceeding the limit are marked with "–".
Results show that even small KBs may cause large running times for Pellet, HermiT, Fact++, and JFact, while BORN, TRILL, TRILL^P and TORNADO scale much better.
For small values of the parameters, TRILL, TRILL^P and TORNADO take about the same time; as the KBs grow, TRILL^P becomes slower due to the use of the SAT solver.
BORN takes about 3.5 seconds in all cases, which is probably due to ProbLog exploiting Prolog backtracking as well.