Axiom pinpointing refers to the task of finding the specific axioms in an ontology which are responsible for a consequence to follow. This task has been studied, under different names, in many research areas, leading to a reformulation and reinvention of techniques. In this work, we present a general overview of axiom pinpointing, providing the basic notions, different approaches for solving it, and some variations and applications which have been considered in the literature. It should serve as a starting point for researchers interested in related problems, with an ample bibliography for delving deeper into the details.
Intelligent applications need to represent and handle knowledge effectively. For that reason, many different knowledge representation languages have been developed, providing formal semantics and reasoning methods for deriving implicit consequences from explicitly represented elements (also called axioms). As these knowledge bases or ontologies grow, they become harder to maintain and verify, and when errors inevitably occur, they are harder to understand and correct. Indeed, it is nowadays common to encounter knowledge bases with tens of thousands of axioms, and detecting the handful responsible for a given consequence would be impossible without the help of an automated tool.
Axiom pinpointing refers to the task of identifying the axioms in a knowledge base that are responsible for a given consequence. Assuming that the representation language is monotonic (that is, adding new knowledge does not remove any previous consequences), a relevant set of axioms is nothing more than a subset-minimal subontology which still entails the consequence under consideration. Such a set is called a justification. It is not difficult to see (and will be exemplified in the following sections) that there may exist multiple justifications for a single consequence. It is thus important to try to compute them all, for a full understanding of the derivation.
Understanding the causes of a consequence is not just instrumental for knowledge engineers trying to understand the ontologies they work on. Axiom pinpointing is also a relevant step for repairing modelling errors, which may have been introduced through misunderstandings, automated knowledge extraction, or simply typing or methodological mistakes. Identifying the potentially faulty axioms is the first step towards correcting the error. A third use case for axiom pinpointing is explainability: if an intelligent system makes a decision based on a reasoning process, it is important to be able to explain the reasoning behind this decision to all the stakeholders involved.
We provide a general overview of axiom pinpointing over many different representation languages. Although we use terminology and results primarily developed in the context of description logics [2], we try to keep the presentation as general as possible, to include other well-known monotonic formalisms like databases, logic programming, and propositional logic. Our goal is to describe the main reasoning tasks associated with axiom pinpointing, provide the basic templates for solving them, and present a few variants and applications from the literature. The hope is that this general description serves as a first step towards a unified description of the tasks for different areas of knowledge representation, and aids in a common development of new methods and tools.
To make the presentation as general as possible, we consider an abstract notion of an ontology language, which has four components: a class Ax of well-formed axioms; a class Con of consequences; a class Ont of valid ontologies with Ont ⊆ Pfin(Ax), where Pfin(Ax) is the class of all finite subsets of Ax, such that if O ∈ Ont and O′ ⊆ O, then O′ ∈ Ont (that is, every subset of an ontology is also an ontology); and an entailment relation ⊨ ⊆ Ont × Con, expressed in infix notation, such that for every two ontologies O ⊆ O′ and consequence c, if O ⊨ c, then O′ ⊨ c; that is, the entailment relation must be monotonic w.r.t. the ontology.
We note that in some existing work on axiom pinpointing and related topics, an ontology is often defined to be just a finite set of axioms, which is a special case of our definition. We decided to use this more general notion to account for syntactic restrictions that are common in description logics. For example, it allows for acyclic TBoxes, but also for the syntactic restrictions imposed to guarantee decidability of reasoning in ontologies [24, 2]. It also allows for other languages not typically considered ontological, such as propositional formulas in conjunctive normal form (CNF) or constraint satisfaction problems, to name just two examples.
To aid the understanding of the notions presented here, we will use a very simple ontology language dealing with reachability in finite graphs. Specifically, given a countable set V of vertices, let both the axioms and the consequences be the pairs in V × V; that is, axioms and consequences are given by ordered pairs of vertices (called edges), and every finite set of edges is a valid ontology. Intuitively, an ontology O is a finite graph, and O ⊨ (u, v) iff v is reachable from u in the graph O. For example, the graph in Figure 1 (a) is one such ontology, which entails some pairs of vertices but not others. Importantly, this is only an example of a very simple ontology language, but many of the intuitions obtained from it apply also to more complex languages.

For any given ontology language, the main reasoning task is to decide entailments. Formally, given an ontology O and a consequence c, we are interested in deciding whether O ⊨ c holds. Depending on the specific language considered, different methods can be developed to solve this reasoning task, and its computational complexity may vary. In fact, already within the family of description logics, we can find examples where the complexity of entailment checking varies from polynomial time up to doubly-exponential time. Moreover, there exist ontology languages with an undecidable entailment problem, but for the scope of this work we focus only on cases where this problem is decidable, say in some complexity class C. Once adequate methods for deciding entailments have been developed, one is often interested in solving more complex reasoning tasks.
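To make the toy language concrete, here is a minimal sketch in Python (all names invented): an ontology is a set of directed edges, and entailment is decided by breadth-first search. We treat every vertex as trivially reachable from itself, an assumption the text leaves open.

```python
from collections import deque

# Toy graph ontology language: axioms and consequences are directed
# edges, an ontology is any finite set of edges, and O |= (u, v) iff
# v is reachable from u in the graph O.
def entails(ontology, consequence):
    u, v = consequence
    if u == v:
        return True  # assumption: trivial reachability
    seen, queue = {u}, deque([u])
    while queue:
        x = queue.popleft()
        for (a, b) in ontology:
            if a == x and b not in seen:
                if b == v:
                    return True
                seen.add(b)
                queue.append(b)
    return False

# A small example graph: a -> b -> c, plus an extra edge d -> a.
O = {("a", "b"), ("b", "c"), ("d", "a")}
print(entails(O, ("a", "c")))  # True: c is reachable from a
print(entails(O, ("c", "a")))  # False: edges are directed
```

The same oracle shape (a yes/no entailment test over a set of axioms) is all that the black-box methods discussed below require.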
Axiom pinpointing is a non-standard reasoning task, which focuses on identifying the axioms that are responsible for a consequence to follow from an ontology. Formally, axiom pinpointing is the task of identifying one or all the justifications for a given consequence, where a justification is a minimal sub-ontology that still entails the consequence.
Let O be an ontology and c a consequence such that O ⊨ c. A sub-ontology J ⊆ O is called a justification for c w.r.t. O iff (i) J ⊨ c and (ii) for every J′ ⊊ J, J′ ⊭ c.
Here we are using the standard name from description logics, but it is worth noting that justifications are known with different names by different communities. For example, they are also known as MUSes in SAT [33], MESCs in CSP [39], causes in DBs [37], and MIPS, MUPS, and MinAs in DLs [47, 41].
Consider for example the graph from Figure 1 (a), which entails the consequence (u, v); that is, v is reachable from u. This consequence has three justifications, which are the three subgraphs (b)–(d) of the same figure. Note that there exist other sub-ontologies that still entail (u, v), but these contain at least one of the justifications depicted, and hence are not subset-minimal. In general, when an ontology corresponds to a graph, a justification for the consequence (u, v) is a simple path from u to v.
Technically, it is not difficult to come up with algorithms which compute one or all justifications. The simplest approach is to exploit existing reasoners and find justifications through repeated entailment checks. This approach is known as black-box because it uses the reasoner without any modification. A black-box method for computing one justification is described in Algorithm 1.
It tries to remove each axiom from the ontology, as long as the remaining set still entails the consequence. An invariant of the for loop is that O ⊨ c; hence the resulting set satisfies the first condition from Definition 1. Moreover, by monotonicity of ⊨, the resulting set cannot have any superfluous axioms, meaning that it is a justification. The justification obtained through this algorithm depends on the order in which the axioms are selected for removal but, independently of the order, each axiom needs to be tested exactly once.
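The loop just described is easy to sketch on top of any entailment oracle. In the following Python sketch (invented names; a toy reachability check stands in for the unmodified reasoner), each axiom is dropped exactly when the remaining set still entails the consequence:

```python
def entails(O, c):
    # Toy reachability oracle standing in for an unmodified reasoner.
    u, v = c
    seen, stack = set(), [u]
    while stack:
        x = stack.pop()
        if x == v:
            return True
        if x not in seen:
            seen.add(x)
            stack += [b for (a, b) in O if a == x]
    return False

def one_justification(O, c):
    """Try to drop each axiom in turn; keep the drop whenever the rest
    still entails c. By monotonicity the result is subset-minimal."""
    assert entails(O, c)
    J = set(O)
    for ax in sorted(J):  # any fixed order works; it picks the justification
        if entails(J - {ax}, c):
            J.remove(ax)
    return J

O = {("a", "b"), ("b", "c"), ("a", "c")}
J = one_justification(O, ("a", "c"))
print(J)  # {('a', 'c')} under this removal order
```

With a different removal order the other justification, {('a','b'), ('b','c')}, would be returned instead; either way the output is minimal.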
To find all justifications, one can simply enumerate all possible sub-ontologies S ⊆ O and verify for each of them that it entails c and that no strict subset of it entails c as well. Any such S is a justification. This method requires checking exponentially many times (i.e., once for each subset of O) whether an ontology is a justification. Obviously, the number of checks can be greatly reduced simply by taking into account the monotonicity of the entailment relation: if a set S is such that S ⊨ c, then there is no need to consider any strict superset of S. Indeed, any such superset still entails c, but cannot be minimal. Conversely, if S ⊭ c, then no subset of S can be a justification, because it cannot entail c. Other optimisations can be considered. For example, [26] presents an enumeration method based on Reiter's Hitting Set Tree method [45], which guides the search for new justifications. However, as we will see later, one cannot avoid testing exponentially many sets. In the following section we analyse this, and other complexity issues, in more detail.
In this section, we consider computational complexity issues related to axiom pinpointing. We have already seen two basic algorithms for computing one or all justifications, but it remains unclear whether more advanced techniques can provide improvements in terms of worst-case complexity. Although our goal is to understand the computational problem of finding the justifications, to analyse the complexity we consider their decision variants.
is-just is the problem of deciding, given an ontology O and a consequence c, whether O is a justification of c. all-just is the problem of deciding, given an ontology O, a consequence c, and a set S1, …, Sn of sub-ontologies of O, whether S1, …, Sn are all the justifications of c w.r.t. O.
Recall that the first condition for a justification is that it still entails the consequence c. Hence, entailment is a sub-problem of is-just. In particular, this means that is-just is necessarily at least as hard as entailment. More formally, if entailment is hard for the complexity class C, then is-just must be C-hard as well. Note also that Algorithm 1 can be easily modified to decide is-just: if the test in line 1 succeeds at any point, then the input ontology is not a justification and the method can exit with failure; if the whole loop runs without exiting, then the input was a justification. In the worst case, this algorithm has to make one call to the entailment test for each axiom in O. This means that is-just can be decided through polynomially many entailment tests; that is, is-just is in P^C if entailment is in C. Note that if C is at least PSpace, then the polynomial enumeration can be absorbed into the oracle.
If entailment is C-complete for a complexity class C that is at least PSpace, then is-just is C-complete as well.
The consequence of this proposition is that there is no need to analyse the complexity of is-just for expressive ontology languages with complex entailment relations. However, the black-box method for deciding this problem still leaves a gap when the complexity of entailment is below PSpace; in those cases, C is usually smaller than P^C, and hence C-hardness and membership in P^C are not tight bounds. The only exception is C = P, where we again have that P^C = P.
To understand the complexity of finding justifications, several ontology languages with lower-complexity entailments (mainly polynomial) have been studied. Unfortunately, the picture that arises from these studies is more complex than what is observed in Proposition 1. For example, there are ontology languages, such as the language of propositional formulas in CNF, whose entailment problem is NP-complete but whose is-just problem is DP-complete [43] (DP is the class of problems which can be solved by one NP test and one coNP test; it is believed to be strictly contained in P^NP).
Before considering the computation of all justifications, note that there are many important variants of is-just which may be considered. As we will see later, it is sometimes relevant to order the justifications according to some preference (e.g., size) and is-just becomes the problem of deciding whether a sub-ontology is the most preferred justification. This, of course, requires additional tests, and the complexity may change accordingly. For a study on how the complexity is affected by the preference relation and the ontology language see [43].
If we want to solve all-just, we can once again consider the black-box algorithm described in the previous section. For each of the input sets S1, …, Sn, we verify in P^C that they are justifications, and afterwards we need to verify that no other set is a justification. The latter task can be performed through a PSpace enumeration of all possible sub-ontologies, checking for each of them that it is not a new justification. Hence, overall, we can solve all-just with a PSpace^C algorithm. As in Proposition 1, the PSpace base method can be absorbed into the oracle if the latter is at least PSpace itself.
If entailment is C-complete for a complexity class C that is at least PSpace, then all-just is C-complete as well.
While this proposition is very similar to the case for one justification, the PSpace base of the algorithm makes the gap between the lower and upper bounds larger for ontology languages with simpler entailment tests. Indeed, while for deciding whether a set is a justification we incur a jump of at most one level in the polynomial hierarchy, for all-just the increment goes all the way to the limit of this hierarchy at once. The gap can be reduced by considering the following idea: if an input is not an instance of all-just, we can verify it by guessing (in NP) a new set S which does not contain any of the Si and verifying (in C) that S ⊨ c. Thus, all-just is in coNP^C.
The landscape of complexities also gets more intricate in this case. There exist ontology languages with polynomial entailment problems for which all-just is polynomial, coNP-complete, or hard for an intermediate class. There is also a language with an NP-complete entailment problem for which the exact complexity of all-just is unknown.
So far, the discussion has focused on deciding all-just, but in fact we are more interested in being able to enumerate all the justifications (rather than deciding whether a set of sub-ontologies is indeed the class of all of them). The first thing to notice when dealing with the enumeration of justifications is that it is impossible to spend less than exponential time on this task in the worst case. Indeed, even for the very simple ontology language that we are using as an example, it is easy to build a consequence that has exponentially many justifications w.r.t. an ontology [10]. For example, there are exponentially many justifications for the target consequence w.r.t. the graph from Figure 2.
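A sketch of such a blow-up, with invented vertex names: chaining n "diamonds", each offering two routes between consecutive vertices, yields 2^n simple paths, each of which is a distinct justification.

```python
def diamond_chain(n):
    """A graph with 2**n simple paths from v0 to vn: step i goes from
    v{i} to v{i+1} either directly or via the detour vertex u{i}."""
    edges = set()
    for i in range(n):
        edges.add((f"v{i}", f"v{i+1}"))  # direct route
        edges.add((f"v{i}", f"u{i}"))    # detour, first hop
        edges.add((f"u{i}", f"v{i+1}"))  # detour, second hop
    return edges

def count_paths(O, u, v):
    # The chain is acyclic, so every path is simple and is a justification.
    if u == v:
        return 1
    return sum(count_paths(O, b, v) for (a, b) in O if a == u)

print(count_paths(diamond_chain(10), "v0", "v10"))  # 1024 justifications
print(len(diamond_chain(10)))                        # from only 30 axioms
```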
As before, this means that a full enumeration requires at least exponential time (but only polynomial space, if the found justifications are not preserved in memory). Thus, for expressive ontology languages, the black-box algorithm for computing all justifications is optimal in terms of complexity.
When dealing with enumeration problems, one can consider alternative complexity classes which also take into account the total number of answers, and the time needed to obtain new answers [25]. Alternatively, one may try to count the number of justifications [55]. Also in this case, the enumeration complexity varies with the specific ontology language, and with whether a specific ordering is requested. For ontology languages related to directed graphs and hypergraphs, the justifications can be enumerated with polynomial delay if no order is required; that is, with polynomial time (in the size of the ontology) between two successive answers. If an order is required, the enumeration may take exponential total time, but remains polynomial in the number of justifications. For the light-weight description logic EL, on the other hand, there is no algorithm that can enumerate all justifications in total polynomial time. As a consequence, there must exist consequences with polynomially many justifications for which the enumeration problem requires super-polynomial time. For counting, in all the ontology languages with polynomial-time entailments studied so far, the complexity is #P-complete, while for propositional logic, where entailment is NP-complete, counting the number of justifications becomes #NP-complete.
We have seen that black-box algorithms are complexity-optimal for expressive ontology languages, including the most common description logics—specifically, for any DL with value restrictions—assuming that an optimal implementation of the standard reasoning method exists. In addition, these algorithms are easy to implement and, as they only require repeated calls to an unmodified reasoner, can keep up to date with the newest optimisations and improvements. For that reason, they have become the approach of choice in those settings; for example, the explanation service in Protégé [29] is implemented as a black-box. Nonetheless, it is important to study more targeted approaches, which have the potential of behaving better in practice. This is particularly true for ontology languages whose complexity of reasoning is strictly below PSpace.
The so-called glass-box approaches to axiom pinpointing are based on a modification of the original reasoning algorithms to be able to identify the justifications. In a nutshell, any reasoning algorithm must at some point consider the axioms in the ontology to decide whether the consequence follows. The idea of a glass-box algorithm is to trace which axioms were used during this process, yielding candidates for justifications, or the whole class of justifications, depending on how the tracing mechanism is implemented.
In DLs, the first proposals for a glass-box pinpointing algorithm were based on a modification of the tableaux-based method for reasoning in ALC, which was originally designed for defeasible reasoning [3]. The idea was slowly improved and generalised to allow for more expressive languages [47, 40, 31], until a general approach for transforming tableaux-based reasoning methods into pinpointing algorithms was developed in [9]. The main drawback observed in these glass-box proposals is that, to guarantee that all the relevant axioms have been traced, important optimisations have to be disabled. For example, for standard reasoning it suffices to halt the execution of the tableaux when the desired consequence is obtained, but for pinpointing one must continue until all possible derivations have been explored. As a consequence, these pinpointing extensions do not behave well in practice. An additional disadvantage highlighted in [9] is that the pinpointing extension of a terminating tableau algorithm is not guaranteed to terminate in general. On the other hand, some basic properties satisfied by all DL tableau algorithms suffice to guarantee termination.
Following a different approach, [8] introduced a pinpointing approach based on weighted automata. Briefly, the approach takes an existing automata-based reasoning method, and transforms it into a pinpointing approach by adding weights, belonging to the free distributive lattice, to all the transitions of this automaton. The behaviour of the weighted automaton obtained this way is a compact representation of all the justifications of the consequence. The main advantage of this approach is that it is optimal in terms of worst-case complexity. However, as in standard automata-based reasoning, the best-case complexity also matches the worst case, making it impractical for real applications.
To date, the most successful approach which could be called glass-box is based on a translation of the execution of a consequence-based algorithm into a propositional formula. In essence, one tries to reduce an axiom pinpointing problem to pinpointing in a well-known ontology language for which efficient implementations exist. The original idea was introduced in [48, 49] for the light-weight DL EL, but can be easily generalised to other consequence-based algorithms. Very briefly, consequence-based algorithms work by applying rules over the explicitly represented knowledge (originally, the ontology) to make some of its implicit consequences explicit. The method from [48] introduces a new propositional variable for each consequence, and a Horn clause simulating each possible rule application. In addition, it adds the variable representing each axiom of the original ontology.
As a very simple example, consider our graph ontology language, where the entailment relation is reachability. In this case, a consequence-based algorithm would only have the rule expressing that if we have the (explicit) knowledge that the ontology entails (u, v) and (v, w), then we can derive that it also entails (u, w). From this rule and the graph in Figure 1, we obtain a set of Horn clauses which contains, among others, clauses of the form x(u,v) ∧ x(v,w) → x(u,w). The ontology itself is then represented through the variables x(u,v) for each edge (u, v) it contains. The conjunction of all these elements yields a Horn formula that entails the variable representing each relevant consequence.
In order to find the justifications for a consequence c w.r.t. the ontology O, one then only needs to enumerate the justifications for the consequence variable w.r.t. this Horn formula, with the difference that the Horn clauses simulating the execution of the rules are always present; that is, a justification may only remove from the formula the variables representing axioms of the original ontology.
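For the graph language, the reduction can be sketched as follows (Python, invented names): one variable per vertex pair, one Horn clause per potential application of the transitivity rule, and minimisation only over the axiom variables, since the rule clauses always remain in the formula. For simplicity, the sketch generates a clause for every triple of distinct vertices, a crude over-approximation of the rule applications a real consequence-based algorithm would perform.

```python
from itertools import permutations

def saturate(axiom_vars, clauses):
    """Horn unit propagation: close the axiom variables under the rules."""
    derived = set(axiom_vars)
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

O = {("a", "b"), ("b", "c"), ("a", "c")}
vertices = {x for edge in O for x in edge}
# One clause per possible application of the rule (u,v),(v,w) -> (u,w).
clauses = [([(u, v), (v, w)], (u, w)) for u, v, w in permutations(vertices, 3)]

goal = ("a", "c")
# Black-box minimisation over the axiom variables only; the rule
# clauses stay fixed, as described in the text.
J = set(O)
for ax in sorted(J):
    if goal in saturate(J - {ax}, clauses):
        J.remove(ax)
print(J)  # {('a', 'c')} under this removal order
```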
In essence, what this translation does is to reduce axiom pinpointing in an arbitrary ontology language which admits consequence-based reasoning to axiom pinpointing in propositional logic. The advantage is that we can then focus on the development of highly efficient pinpointing tools for this very specific ontology language, which can be further optimised by taking the shape of the obtained problem into account. Indeed, this idea has given rise to many axiom pinpointing tools for EL and its extensions [1, 35, 49], of which the most efficient, and the only system capable of computing all the justifications for all 5,415,670 atomic entailments of the very large ontology Snomed, is PULi [27].
Before looking into different applications and variations of axiom pinpointing, we would be remiss to ignore a popular approach for finding one justification, which combines the glass-box tracing approach with the black-box method from Algorithm 1; hence, it is usually called grey-box. The main idea, as with the glass-box approach, is to trace all the axioms used to prove the entailment of a consequence through the application of an algorithm; for example, a tableau-based or consequence-based algorithm. However, in contrast to the case for finding all justifications, we stop once the consequence is derived. This yields a set of axioms that is not guaranteed to be a justification, but from which the consequence still follows. That is, it might still require some pruning of superfluous axioms to become a justification. On the other hand, this limited tracing imposes only a minimal overhead on the original decision algorithm, and does not affect the existing optimisations. Moreover, the resulting set tends to be much smaller than the original ontology, and very close to a justification. Once this smaller ontology is found, it can be minimised by a call to the black-box method, yielding a justification. Note that this second step potentially needs to check far fewer axioms than a direct use of Algorithm 1 on the original ontology.
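A grey-box sketch for the toy graph language (Python, invented names): the tracing step records the axioms used on the first derivation that reaches the goal and stops immediately; the traced set is then pruned by the black-box loop. In this toy language the traced set happens to be a simple path already, but in richer languages the pruning step is generally necessary.

```python
def entails(O, c):
    # toy reachability oracle standing in for an unmodified reasoner
    u, v = c
    seen, stack = set(), [u]
    while stack:
        x = stack.pop()
        if x == v:
            return True
        if x not in seen:
            seen.add(x)
            stack += [b for (a, b) in O if a == x]
    return False

def traced_axioms(O, c):
    """Run the decision procedure, recording the axioms used to reach
    each vertex, and stop as soon as the goal is derived."""
    u, v = c
    used = {u: set()}  # vertex -> axioms used to first reach it
    stack = [u]
    while stack:
        x = stack.pop()
        if x == v:
            return used[x]
        for (a, b) in O:
            if a == x and b not in used:
                used[b] = used[x] | {(a, b)}
                stack.append(b)
    return None  # c does not follow from O

def grey_box_justification(O, c):
    J = set(traced_axioms(O, c))  # small entailing set, maybe not minimal
    for ax in sorted(J):          # prune it with the black-box loop
        if entails(J - {ax}, c):
            J.remove(ax)
    return J

O = {("a", "b"), ("b", "c"), ("c", "d"), ("a", "c")}
J = grey_box_justification(O, ("a", "d"))
```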
This grey-box approach was used originally by the CEL system [7] to find justifications efficiently for EL+, where the goal was not to find them all, but only a relevant subset. A similar idea is followed in the database community to trace the facts which provide an answer to a query. Although, to the best of our knowledge, there is no implementation of the grey-box approach for more complex ontology languages, or based on other kinds of algorithms, it should be relatively straightforward to modify a tableaux-based tool for this purpose.
An important drawback of the grey-box algorithm as described above is that the sub-ontology obtained by the tracing step is only guaranteed to contain one justification. This means that if one is interested in potentially finding more justifications, then the whole process needs to be started anew for each successive solution. To alleviate this problem, some work has focused on computing justification-preserving modules; that is, sub-ontologies which are still guaranteed to contain all the justifications for a given consequence. Ideally, these modules should be fast to compute, and still small enough to guarantee that all justifications can be extracted from them efficiently, following, e.g., the black-box approach. It has been observed that different modularization methods yield adequate solutions in this direction. Usually, these modules are based on syntactic or semantic relationships between the axioms of the ontology [52, 17, 53]. More recently, a modularization method based on a modification of the reasoning algorithm was proposed [42, 30, 38, 36]. These approaches modify the tracing technique of the glass-box method. As usual, they keep track of the axioms that are being used during the execution of the algorithm for deriving new knowledge but, instead of distinguishing between different derivation paths, store them all in one single set. This simple modification avoids the termination problems of the original glass-box approach, yet provides more information than following just one derivation, as in the description of the grey-box approach above. In empirical analyses, this approach has been shown to be very helpful for solving axiom-pinpointing-related tasks.
The task of axiom pinpointing as defined in Section 2 has been extended to cover a large class of variations, and can be used to solve different reasoning tasks.
The simplest generalisation which we can consider is to allow some parts of the ontology to be fixed; that is, to consider a justification to be a minimal sub-ontology that, when added to the fixed part, entails the consequence. This setting makes sense in the context of debugging, when we trust some axioms and do not want to look into them when trying to understand or correct an erroneous consequence. We have also observed its use in the reduction from consequence-based algorithms to axiom pinpointing in SAT in Section 4. Further generalising this idea, one can consider the atomic elements to be not single axioms, but rather sets of axioms. In the literature, these have been called group-MUSes [33] or contexts [5]. Note that these contexts may appear implicitly in several applications. For example, in consequence-based algorithms [51, 50], a common pre-processing step is to transform the ontology into a given normal form. In those cases, several separate axioms may have been produced from a single original statement, and thus should all be considered together. Another setting where considering sets of axioms makes sense is finding the classes of modules of different kinds where a consequence can be derived. In this case, we can group axioms according to the atomic decomposition of the ontology [56].
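The fixed-part variant amounts to a small change in the black-box loop (Python sketch, invented names): only the refutable axioms are candidates for removal, while the trusted ones are added back before every entailment test.

```python
def entails(O, c):
    # toy reachability oracle for the graph language
    u, v = c
    seen, stack = set(), [u]
    while stack:
        x = stack.pop()
        if x == v:
            return True
        if x not in seen:
            seen.add(x)
            stack += [b for (a, b) in O if a == x]
    return False

def justification_modulo_fixed(fixed, refutable, c):
    """Minimal J, a subset of `refutable`, such that fixed ∪ J entails c.
    The trusted axioms in `fixed` are never candidates for removal."""
    J = set(refutable)
    for ax in sorted(J):
        if entails(fixed | (J - {ax}), c):
            J.remove(ax)
    return J

fixed = {("a", "b")}                   # trusted, never reported
refutable = {("b", "c"), ("a", "c")}
J = justification_modulo_fixed(fixed, refutable, ("a", "c"))
print(J)  # {('b', 'c')}: the direct edge was pruned first under this order
```

Note that the reported set only makes sense together with the fixed part: here ('b','c') alone does not entail ('a','c'), but fixed ∪ J does.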
Returning justifications at the granularity of (sets of) axioms from the original ontology is not always satisfactory. These may hide some relevant information or, conversely, be too complex to be understood or managed adequately by domain experts. Hence, some research (especially in DLs) has considered building different representations. One direction produces so-called laconic and precise justifications [22] which, from a very abstract point of view, provide only the specific pieces of the axioms which are really responsible for the consequence, removing any superfluous information. The other direction combines several axioms into lemmas, which remove excessive detail and become easier to read [23].
In a similar direction, more recent work has focused on the goal of minimally modifying ontologies through weakening [54, 6]. Very briefly, after a justification has been found, its axioms are replaced by weaker ones in order to get rid of the consequence. Hence, this process goes one step beyond pinpointing by further identifying the strongest possible weakenings which yield the desired result. Note that axiom pinpointing is a special case of this idea, where the only possible weakening of an axiom is to replace it by a tautology.
Regarding the computation of all justifications, it is sometimes convenient to try to enumerate them in a specific order. For example, one may want to observe the smallest (w.r.t. cardinality) justifications first, or alternatively have a pre-specified order for accessing them. Obviously, once we can compute all the justifications, it is also possible to order them before showing them to the user, but in general this brings a large overhead. To understand the issue better, the complexity of enumerating justifications w.r.t. some natural orderings has also been studied, with the unsurprising result that it depends not only on the ontology language, but also on the chosen order [43].
Axiom pinpointing is not only useful for understanding and potentially correcting consequences from an ontology; it has also found applications aiding different kinds of supplemental and non-standard reasoning tasks. Perhaps the most studied and applied to date is related to the representation and handling of uncertain knowledge. When dealing with probabilities, several semantics and applications use axiom pinpointing, albeit often implicitly. In probabilistic logic programming [44], the probability of an inference is given by the probabilities of the combinations of facts that entail it (together with a fixed program), which correspond to the justifications in our terminology. This idea was generalised to description logics under the so-called Disponte semantics [46]. In these approaches, it is assumed that all the uncertain elements are probabilistically independent. To lift this assumption, newer approaches include a Bayesian network which expresses the joint probability distribution of the axioms (or, more generally, the contexts) of the ontology [16, 14, 18]. For all these approaches, axiom pinpointing provides a helpful intermediate step which can also be exploited for approximating solutions. For example, finding the justification with the highest probability can often yield a good approximation of the full probability. In terms of possibility theory, when using the standard min-max semantics, it is well known that it suffices to find one justification with the highest possibility degree [11].

Another application is in the context of access control, where one wants to provide different users with only partial views of an ontology [4]. Here, axiom pinpointing is not only useful to find out the access levels (that is, the contexts) at which a consequence is derivable, but also to suggest changes in the access level of axioms in order to hide implicit consequences from some users, or open them to others [28].
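Under the independence assumption mentioned above, the probability of a consequence is the probability that the randomly drawn sub-ontology contains at least one justification. A brute-force sketch (Python, invented names; it enumerates all worlds, so it is only meant for small examples):

```python
from itertools import combinations

def probability_of_consequence(prob, justs):
    """prob: axiom -> independent probability of being present.
    justs: the justifications of the consequence.
    Returns P(the random sub-ontology contains some justification)."""
    axioms = list(prob)
    total = 0.0
    for r in range(len(axioms) + 1):
        for world in combinations(axioms, r):
            w = set(world)
            if any(J <= w for J in justs):  # world entails the consequence
                p = 1.0
                for ax in axioms:
                    p *= prob[ax] if ax in w else 1.0 - prob[ax]
                total += p
    return total

# Three edges, each present with probability 0.5; the consequence
# ('a','c') has justifications {('a','c')} and {('a','b'), ('b','c')}.
prob = {("a", "b"): 0.5, ("b", "c"): 0.5, ("a", "c"): 0.5}
justs = [{("a", "c")}, {("a", "b"), ("b", "c")}]
print(probability_of_consequence(prob, justs))
# 0.625 = P(direct edge) + P(no direct edge) * P(both detour edges)
```

The highest-probability justification here has probability 0.5, illustrating how a single justification can already give a usable lower bound on the full probability.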
There is currently much interest in being able to reason in the presence of inconsistencies or errors in an ontology [12, 32, 13, 34]. In this case, we are interested in finding so-called repairs, which correspond to the dual notion of a justification; that is, maximal sub-ontologies which do not entail the consequence under consideration. Axiom pinpointing comes into play here in two different ways. On the one hand, repairs can be computed from justifications through a hitting set computation. On the other hand, most of the techniques developed for axiom pinpointing (and in particular the black-box methods) can be easily adapted to compute repairs directly, usually leading to the same complexity bounds [41]. Still, keeping this dual view is often helpful for finding new research problems.
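The duality with repairs can be sketched directly (Python, invented names): each repair is the complement of a minimal hitting set of the family of justifications.

```python
from itertools import combinations

def minimal_hitting_sets(families):
    """All subset-minimal sets intersecting every set in `families`
    (brute force over subsets, in order of increasing size)."""
    universe = sorted(set().union(*families))
    hits = []
    for r in range(len(universe) + 1):
        for H in combinations(universe, r):
            H = set(H)
            if any(G <= H for G in hits):
                continue  # strict superset of a smaller hitting set
            if all(H & F for F in families):
                hits.append(H)
    return hits

# Justifications of ('a','c') in the three-edge graph; each repair is
# the complement of a minimal hitting set of these justifications.
O = {("a", "b"), ("b", "c"), ("a", "c")}
justs = [{("a", "c")}, {("a", "b"), ("b", "c")}]
repairs = [O - H for H in minimal_hitting_sets(justs)]
print(repairs)  # the two maximal subsets that do not entail ('a','c')
```

Removing one axiom from every justification is exactly what breaks all derivations of the consequence, which is why the hitting-set view works.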
To finish this brief overview of applications, we mention provenance [21, 20, 19, 15], which has a motivation similar to that of axiom pinpointing, but a different development. Originally developed for databases, and later extended to DLs, the goal of provenance is to track the origins of an answer to a query, in terms of the facts used to derive it. The main difference with axiom pinpointing is that with provenance one is also interested in finding how often an axiom (or group of axioms) is used in the derivation, and that minimality is replaced by a weaker notion, in which every axiom must be relevant to some derivation, even though the overall set need not be minimal.
We have presented a general overview on axiom pinpointing: what it is about, how it is usually tackled, and its main applications and variations. We insist here that, although most of the terminology used was originally introduced in the context of description logics, we have tried to keep the presentation as general as possible to include the notions developed in many other areas of knowledge representation and reasoning. Our hope is that this general presentation helps as a first step towards a unified view of the problem of finding the axiomatic causes for consequences in different languages, and ultimately leads towards an exchange of techniques, problems, and tools between areas.
Clearly, the work on axiom pinpointing does not end with this overview. There exist many applications and variations of the problem which have not been mentioned, or were commented on only briefly, and which would require much more space to explore in detail. Even for those discussed more thoroughly, there are usually many open problems which still require additional work. Moreover, even in its basic form, axiom pinpointing is not fully resolved. For example, in terms of its practical application, the development of efficient tools for more expressive languages, or, even better, for general use, is still missing.
There are many specialised techniques developed in areas other than description logics (e.g., databases, CSP, and SAT) which may be applicable in other areas with minor modifications. Alternatively, it may be possible to reduce problems between areas, as done in the reduction to propositional logic described here. This makes it possible to focus on one specific problem, and to use advanced engineering techniques for its effective solution.
Baader, F., Hollunder, B.: Embedding defaults into terminological knowledge representation formalisms. J. of Automated Reasoning 14(1), 149–180 (1995). https://doi.org/10.1007/BF00883932
Botha, L., Meyer, T., Peñaloza, R.: The Bayesian description logic BALC. In: Proceedings of the 16th European Conference on Logics in Artificial Intelligence (JELIA'19) (2019), to appear