Repairing Description Logic Ontologies by Weakening Axioms

08/01/2018 ∙ by Franz Baader, et al.

The classical approach for repairing a Description Logic ontology O in the sense of removing an unwanted consequence α is to delete a minimal number of axioms from O such that the resulting ontology O' does not have the consequence α. However, the complete deletion of axioms may be too rough, in the sense that it may also remove consequences that are actually wanted. To alleviate this problem, we propose a more gentle way of repair in which axioms are not necessarily deleted, but only weakened. On the one hand, we investigate general properties of this gentle repair method. On the other hand, we propose and analyze concrete approaches for weakening axioms expressed in the Description Logic EL.




1 Introduction

Description logics (DLs) [2, 5] are a family of logic-based knowledge representation formalisms, which are employed in various application domains, such as natural language processing, configuration, databases, and bio-medical ontologies, but their most notable success so far is the adoption of the DL-based language OWL (see OWL 2 for its most recent edition) as the standard ontology language for the Semantic Web. As the size of DL-based ontologies grows, tools that support improving the quality of such ontologies become more important. DL reasoners can be used to detect inconsistencies and to infer other implicit consequences, such as subsumption and instance relationships. However, for the developer of a DL-based ontology, it is often quite hard to understand why a consequence computed by the reasoner actually follows from the knowledge base, and how to repair the ontology in case this consequence is not intended.

Axiom pinpointing [22] was introduced to help developers or users of DL-based ontologies understand the reasons why a certain consequence holds by computing so-called justifications, i.e., minimal subsets of the ontology that have the consequence in question. Black-box approaches for computing justifications such as [23, 14, 8] use repeated calls of existing highly-optimized DL reasoners for this purpose, but it may be necessary to call the reasoner an exponential number of times. In contrast, glass-box approaches such as [3, 22, 20, 18] compute all justifications by a single run of a modified, but usually less efficient reasoner.

Given all justifications of an unwanted consequence, one can then repair the ontology by removing one axiom from each justification. However, removing complete axioms may also eliminate consequences that are actually wanted. For example, assume that our ontology contains the following terminological axioms:

Professor ⊑ ∃employedBy.University ⊓ ∃enrolledIn.Course
∃enrolledIn.Course ⊑ Student

These two axioms are a justification for the incorrect consequence Professor ⊑ Student, i.e., that professors are students. While the first axiom is the culprit, removing it completely would also remove the correct consequence that professors are employed by a university. Thus, it would be more appropriate to replace the first axiom by the weaker axiom Professor ⊑ ∃employedBy.University. This is the basic idea underlying our gentle repair approach. In general, in this approach we weaken one axiom from each justification such that the modified justifications no longer have the consequence.

Approaches for repairing ontologies while keeping more consequences than the classical approach based on completely removing axioms have already been considered in the literature. On the one hand, there are approaches that first modify the given ontology, and then repair this modified ontology using the classical approach. In [13], a specific syntactic structural transformation is applied to the axioms in an ontology, which replaces them by sets of logically weaker axioms. More recently, the authors of [11] have generalized this idea by allowing for different specifications of the structural transformation of axioms. They also introduce a specific structural transformation that is based on specializing left-hand sides and generalizing right-hand sides of axioms in a way that ensures finiteness of the obtained set of axioms. Closer to our gentle repair approach is the one in [16], which adapts the tracing technique from [4] to identify not only the axioms that cause a consequence, but also the parts of these axioms that are actively involved in deriving the consequence. This provides them with information for how to weaken these axioms. In [24], repairs are computed by weakening axioms with the help of refinement operators that were originally introduced for the purpose of concept learning [17].

In this paper, we will introduce a general framework for repairing ontologies based on axiom weakening. This framework is independent of the concrete method employed for weakening axioms and of the concrete ontology language used to write axioms. It only assumes that ontologies are finite sets of axioms, that there is a monotonic consequence operator defining which axiom follows from which ontology, and that weaker axioms have fewer consequences. However, all our examples will consider ontologies expressed in the light-weight DL EL. Our first important result is that, in general, the gentle repair approach needs to be iterated, i.e., applying it once does not necessarily remove the consequence. This problem has actually been overlooked in [16], which means that their approach does not always yield a repair. Our second result is that at most exponentially many iterations are always sufficient to reach a repair. The authors of [24] had already realized that iteration is needed, but they did not give an example explicitly demonstrating this, and they had no termination proof. Instead of allowing for arbitrary ways of weakening axioms, we then introduce the notion of a weakening relation, which restricts the way in which axioms can be weakened. Subsequently, we define conditions on such weakening relations that equip the gentle repair approach with better algorithmic properties if they are satisfied. Finally, we address the task of defining specific weakening relations for the DL EL. After showing that two quite large such relations do not behave well, we introduce two restricted relations, which are based on generalizing the right-hand sides of axioms semantically or syntactically. Both of them satisfy most of our conditions, but from a complexity point of view the syntactic variant behaves considerably better.

2 Basic definitions

In the first part of this section, we introduce basic notions from DLs to provide us with concrete examples of what ontologies and their axioms may look like. In the second part, we provide basic definitions regarding the repair of ontologies, which are independent of the ontology language these ontologies are written in. However, the concrete examples given there are drawn from DL-based ontologies.

2.1 Description Logics

A wide range of DLs of different expressive power have been investigated in the literature. Here, we only introduce the DL EL, for which reasoning is tractable [9].

Let N_C and N_R be mutually disjoint sets of concept and role names, respectively. Then EL concepts over these names are constructed through the grammar rule

C ::= ⊤ | A | C ⊓ C | ∃r.C

where A ∈ N_C and r ∈ N_R, i.e., the DL EL has the concept constructors ⊤ (top concept), ⊓ (conjunction), and ∃r.C (existential restriction). The size of an EL concept C is the number of occurrences of ⊤ as well as concept and role names in C, and its role depth is the maximal nesting of existential restrictions. If S = {C_1, …, C_n} is a finite set of EL concepts, then we denote the conjunction of these concepts as ⊓S := C_1 ⊓ ⋯ ⊓ C_n.
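To make the size and role-depth definitions concrete, here is a small sketch over a hypothetical tuple encoding of EL concepts (TOP, concept names as plain strings, ('and', C, D) for conjunction, ('exists', r, C) for existential restriction); the encoding is ours, not from the paper:

```python
TOP = 'TOP'   # the top concept; any other string is a concept name

def size(c):
    """Number of occurrences of TOP plus concept and role names in c."""
    if isinstance(c, str):                 # TOP or a concept name
        return 1
    if c[0] == 'and':
        return size(c[1]) + size(c[2])
    return 1 + size(c[2])                  # ('exists', r, C): role name plus C

def role_depth(c):
    """Maximal nesting of existential restrictions in c."""
    if isinstance(c, str):
        return 0
    if c[0] == 'and':
        return max(role_depth(c[1]), role_depth(c[2]))
    return 1 + role_depth(c[2])            # ('exists', r, C)
```

For instance, under this encoding the concept A ⊓ ∃r.∃r.⊤ has size 4 (one concept name, two role names, one ⊤) and role depth 2.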

Knowledge is represented using appropriate axioms formulated using EL concepts, role names and an additional set N_I of individual names. An EL axiom is either a GCI of the form C ⊑ D with C, D EL concepts, or an assertion, which is of the form C(a) (concept assertion) or r(a, b) (role assertion), with a, b ∈ N_I, r ∈ N_R, and C an EL concept. A finite set of GCIs is called a TBox; a finite set of assertions is an ABox. An EL ontology is a finite set of EL axioms.

The semantics of EL is defined using interpretations I = (Δ^I, ·^I), where Δ^I is a non-empty set, called the domain, and ·^I is the interpretation function, which maps every a ∈ N_I to an element a^I ∈ Δ^I, every A ∈ N_C to a set A^I ⊆ Δ^I, and every r ∈ N_R to a binary relation r^I ⊆ Δ^I × Δ^I. The interpretation function is extended to arbitrary EL concepts by setting ⊤^I := Δ^I, (C ⊓ D)^I := C^I ∩ D^I, and (∃r.C)^I := {d ∈ Δ^I | there is an e ∈ C^I with (d, e) ∈ r^I}.

The interpretation I satisfies the GCI C ⊑ D if C^I ⊆ D^I; it satisfies the assertions C(a) and r(a, b), if a^I ∈ C^I and (a^I, b^I) ∈ r^I, respectively. It is a model of the TBox T, the ABox A, and the ontology O, if it satisfies all the axioms in T, A, and O, respectively. Given an ontology O and an axiom α, we say that α is a consequence of O (or that O entails α) if every model of O satisfies α. In this case, we write O ⊨ α. The set of all consequences of O is denoted by Con(O). As shown in [9], consequences in EL can be decided in polynomial time. We say that the two axioms α, β are equivalent if Con({α}) = Con({β}).

A tautology is an axiom α such that ∅ ⊨ α, where ∅ is the ontology that contains no axioms. For example, GCIs of the form C ⊑ ⊤ and C ⊓ D ⊑ D, and assertions of the form ⊤(a) are tautologies. We write C ⊑ D to indicate that the GCI C ⊑ D is a tautology. In this case we say that C is subsumed by D. We say that the concepts C, D are equivalent (written C ≡ D) if C ⊑ D and D ⊑ C; and that C is strictly subsumed by D (written C ⊏ D) if C ⊑ D and C ≢ D.

The following recursive characterization of the subsumption relation has been proved in [6].

Lemma 1.

Let C, D be two EL concepts such that

C = A_1 ⊓ ⋯ ⊓ A_k ⊓ ∃r_1.C_1 ⊓ ⋯ ⊓ ∃r_m.C_m and D = B_1 ⊓ ⋯ ⊓ B_l ⊓ ∃s_1.D_1 ⊓ ⋯ ⊓ ∃s_n.D_n,

where A_1, …, A_k, B_1, …, B_l are concept names. Then C ⊑ D iff {B_1, …, B_l} ⊆ {A_1, …, A_k} and for every j ∈ {1, …, n}, there exists an i ∈ {1, …, m}, such that r_i = s_j and C_i ⊑ D_j.
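As an illustration of Lemma 1, the characterization can be turned into a short recursive check over a hypothetical flattened encoding of concepts (a set of top-level concept names plus a list of (role, concept) pairs); this sketch is ours, not code from the paper:

```python
def subsumed(c, d):
    """Test C ⊑ D via the recursive characterization of Lemma 1.

    A concept A1 ⊓ … ⊓ Ak ⊓ ∃r1.C1 ⊓ … ⊓ ∃rm.Cm is encoded as a pair
    (names, exists): a set of concept names and a list of (role, concept)
    pairs. This flattened encoding is an assumption for illustration.
    """
    c_names, c_exists = c
    d_names, d_exists = d
    # every concept name of D must occur at the top level of C ...
    if not set(d_names) <= set(c_names):
        return False
    # ... and every ∃s.Dj of D must be matched by some ∃r.Ci of C
    return all(any(r == s and subsumed(ci, dj) for (r, ci) in c_exists)
               for (s, dj) in d_exists)
```

For example, A ⊓ B ⊓ ∃r.A is subsumed by A ⊓ ∃r.⊤ under this encoding, but not vice versa.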

2.2 Repairing Ontologies

For the purpose of this subsection and also large parts of the rest of this paper, we leave it open what sort of axioms and ontologies are allowed in general, but we draw our examples from EL ontologies. We only assume that there is a monotonic consequence relation ⊨ between ontologies (i.e., finite sets of axioms) and axioms, and that Con(O) consists of all consequences of O.

Assume in the following that the ontology O is the disjoint union of a static ontology O_s and a refutable ontology O_r. When repairing the ontology, only the refutable part may be changed. For example, the static part of the ontology could be a carefully hand-crafted TBox whereas the refutable part is an ABox that is automatically generated from (possibly erroneous) data. It may also make sense to classify parts of a TBox as refutable, for example if the TBox is obtained as a combination of ontologies from different sources, some of which may be less trustworthy than others. In a privacy application [10, 1], it may be the case that parts of the ontology are publicly known whereas other parts are hidden. In this setting, in order to hide critical information, it only makes sense to change the hidden part of the ontology.

Definition 2.

Let O = O_s ⊎ O_r be an ontology consisting of a static and a refutable part, and α an axiom such that O ⊨ α and O_s ⊭ α. The ontology O'_r is a repair of O w.r.t. α if

Con(O_s ∪ O'_r) ⊆ Con(O_s ∪ O_r) \ {α}.

The repair O'_r is an optimal repair of O w.r.t. α if there is no repair O''_r of O w.r.t. α with Con(O_s ∪ O'_r) ⊊ Con(O_s ∪ O''_r). The repair O'_r is a classical repair of O w.r.t. α if O'_r ⊆ O_r, and it is an optimal classical repair of O w.r.t. α if there is no classical repair O''_r of O w.r.t. α such that O'_r ⊊ O''_r.

The condition O_s ⊭ α ensures that O does have a repair w.r.t. α since obviously the empty ontology is such a repair. In general, optimal repairs need not exist.

Proposition 3.

There is an EL ontology O and an axiom α such that O does not have an optimal repair w.r.t. α.


Proof. We set O_s := {∃r.A ⊑ A}, α := A(a), and O_r := {A(a), A ⊑ ∃r.A}.

To show that there is no optimal repair of O w.r.t. α, we consider an arbitrary repair and show that it cannot be optimal. Thus, let O'_r be such that

Con(O_s ∪ O'_r) ⊆ Con(O_s ∪ O_r) \ {A(a)}.

Without loss of generality we assume that O'_r contains assertions only. In fact, if O'_r contains a GCI that does not follow from the TBox O_s ∪ {A ⊑ ∃r.A}, then Con(O_s ∪ O'_r) ⊈ Con(O_s ∪ O_r). This is an easy consequence of the fact that, in EL, a GCI follows from a TBox together with an ABox iff it follows from the TBox alone. It is also easy to see that O'_r cannot contain role assertions since no such assertions are entailed by O_s ∪ O_r. In addition, concept assertions following from O_s ∪ O'_r must have a specific form.
Claim: If the assertion C(a) is in Con(O_s ∪ O'_r), then C does not contain A.
Proof of claim. By induction on the role depth at which A occurs in C.
Base case: If A occurs at role depth 0 in C and C(a) is contained in Con(O_s ∪ O'_r), then A is a conjunct of C and thus C(a) implies A(a), which is a contradiction.
Step case: If A occurs at role depth n+1 in C, then C(a) ∈ Con(O_s ∪ O'_r) implies that there are roles r_1, …, r_{n+1} such that ∃r_1.⋯∃r_{n+1}.A(a) ∈ Con(O_s ∪ O'_r). Since Con(O_s ∪ O'_r) ⊆ Con(O_s ∪ O_r), this can only be the case if r_1 = ⋯ = r_{n+1} = r since clearly O_s ∪ O_r has models in which all roles different from r are empty. Since O_s contains the GCI ∃r.A ⊑ A and ∃r^{n+1}.A = ∃r^n.∃r.A, ∃r^{n+1}.A(a) ∈ Con(O_s ∪ O'_r) implies ∃r^n.A(a) ∈ Con(O_s ∪ O'_r). Induction now yields that this is not possible, which completes the proof of the claim.

Furthermore, as argued in the proof of the claim, any assertion belonging to Con(O_s ∪ O'_r) cannot contain roles other than r. The same is true for concept names different from A, since no assertions involving them are entailed by O_s ∪ O_r. Consequently, all assertions in Con(O_s ∪ O'_r) are of the form C(a) where C is built using r and ⊤ only. Any such concept is equivalent to a concept of the form ∃r^n.⊤ := ∃r.⋯∃r.⊤ (n-fold nesting of ∃r).

Since O'_r is finite, there is a maximal n such that O_s ∪ O'_r ⊨ ∃r^n.⊤(a), but O_s ∪ O'_r ⊭ ∃r^m.⊤(a) for all m > n. Since ∃r^m.⊤ ⊑ ∃r^n.⊤ if m ≥ n, we can assume without loss of generality that O'_r = {∃r^n.⊤(a)}. We claim that ∃r^m.⊤(a) ∉ Con(O_s ∪ O'_r) if m > n. To this purpose, we construct a model I of O_s ∪ O'_r such that I ⊭ ∃r^m.⊤(a). This model is defined as follows:

Δ^I := {0, 1, …, n}, a^I := 0, A^I := ∅, and r^I := {(i, i+1) | 0 ≤ i < n}.

Clearly, I is a model of O'_r, and it does not satisfy ∃r^m.⊤(a) if m > n. In addition, it is a model of O_s since A^I = ∅.

Consequently, if we choose m such that m > n and define O''_r := {∃r^m.⊤(a)}, then Con(O_s ∪ O'_r) ⊊ Con(O_s ∪ O''_r). In addition, A(a) ∉ Con(O_s ∪ O''_r), i.e., O''_r is a repair. This shows that O'_r is not optimal. Since we have chosen O'_r to be an arbitrary repair, this shows that there cannot be an optimal repair. ∎

In contrast, optimal classical repairs always exist. One approach for computing such a repair uses justifications and hitting sets [21].

Definition 4.

Let O = O_s ⊎ O_r be an ontology and α an axiom such that O ⊨ α and O_s ⊭ α. A justification for α in O is a minimal subset J of O_r such that O_s ∪ J ⊨ α. Given justifications J_1, …, J_k for α in O, a hitting set of these justifications is a set H of axioms such that H ∩ J_i ≠ ∅ for i = 1, …, k. This hitting set is minimal if there is no other hitting set strictly contained in it.

Note that the condition O_s ⊭ α implies that justifications are non-empty. Consequently, hitting sets and thus minimal hitting sets always exist.
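In the black-box spirit of the approaches cited in the introduction, a single justification can be obtained by shrinking the refutable part, repeatedly dropping axioms that are not needed for the consequence. The following is a sketch under our own assumptions: `entails(axioms, alpha)` is a hypothetical stand-in for a DL reasoner call, and axioms can be any hashable objects:

```python
def one_justification(refutable, static, alpha, entails):
    """Shrink `refutable` to one minimal subset J with static ∪ J ⊨ alpha.

    `entails(axioms, alpha)` is an assumed black-box reasoner oracle.
    Precondition: entails(static | refutable, alpha) holds and
    entails(static, alpha) does not, so the result is non-empty.
    """
    just = set(refutable)
    for ax in list(just):
        if entails(static | (just - {ax}), alpha):
            just = just - {ax}        # ax is redundant for the consequence
    return just
```

Each axiom triggers one reasoner call, so one justification needs only linearly many entailment tests, matching the later remark that a single EL justification is computable in polynomial time.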

The algorithm for computing an optimal classical repair of O w.r.t. α proceeds in two steps: (i) compute all justifications J_1, …, J_k for α in O; and then (ii) compute a minimal hitting set H of J_1, …, J_k and remove the elements of H from O_r, i.e., output O'_r := O_r \ H.

It is not hard to see that, independently of the choice of the hitting set, this algorithm produces an optimal classical repair. Conversely, all optimal classical repairs can be generated this way by going through all hitting sets.
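The two-step procedure can be sketched as follows; the justifications are assumed to be precomputed, and the brute-force search for a smallest hitting set (which is in particular a minimal one) reflects the NP-completeness of the hitting set problem mentioned later, so this is only an illustration, not an optimized implementation:

```python
from itertools import combinations

def optimal_classical_repair(refutable, justifications):
    """Steps (i)-(ii): remove a minimal hitting set of all justifications.

    `justifications` is the assumed precomputed list of all justifications
    (each a non-empty subset of `refutable`). A smallest hitting set is
    found by brute force, which is exponential in general.
    """
    axioms = list(refutable)
    for k in range(len(axioms) + 1):
        for cand in combinations(axioms, k):
            hit = set(cand)
            if all(hit & set(j) for j in justifications):
                return set(refutable) - hit   # smallest hitting set found
    return set(refutable)
```

Different minimal hitting sets yield the different optimal classical repairs mentioned in the text.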

3 Gentle Repairs

Instead of removing axioms completely, as in the case of a classical repair, a gentle repair replaces them by weaker axioms.

Definition 5.

Let α, β be two axioms. We say that β is weaker than α if Con({β}) ⊊ Con({α}).

Alternatively, we could have introduced weaker w.r.t. the static part O_s of the ontology, by requiring Con(O_s ∪ {β}) ⊊ Con(O_s ∪ {α}). (Defining weaker w.r.t. the whole ontology does not make sense since this ontology is possibly erroneous.) In this paper, we will not consider this alternative definition, although most of the results in this section would also hold w.r.t. it (e.g., Theorem 7). The difference between the two definitions is, however, relevant in the next section, where we consider concrete approaches for how to weaken axioms. In the case where the whole ontology is refutable, there is of course no difference between the two definitions.

Obviously, the weaker-than relation from Definition 5 is transitive, i.e., if γ is weaker than β and β is weaker than α, then γ is also weaker than α. In addition, a tautology is always weaker than a non-tautology. Replacing an axiom by a tautology is obviously the same as removing this axiom. We assume in the following that there exist tautological axioms, which is obviously true for description logics such as EL.

Gentle repair algorithm:

we still compute all justifications J_1, …, J_k for α in O and a minimal hitting set H of J_1, …, J_k. But instead of removing the elements of H from O_r, we replace them by weaker axioms. To be more precise, if β ∈ H and J_{i_1}, …, J_{i_l} are all the justifications containing β, then we replace β by a weaker axiom β' such that

O_s ∪ (J_{i_j} \ {β}) ∪ {β'} ⊭ α for all j = 1, …, l.   (1)

Note that such a weaker axiom β' always exists. In fact, we can choose a tautology as the axiom β'. If β' is a tautology, then replacing β by β' is the same as removing β. Thus, we have (1) due to the minimality of the justifications J_{i_j}. In addition, minimality of J_{i_j} also implies that β is not a tautology since otherwise J_{i_j} \ {β} would also have the consequence α. In general, different choices of β' yield different runs of the algorithm.

In principle, the algorithm could always use a tautology β', but then this run would produce a classical repair. To obtain more gentle repairs, the algorithm needs to use a strategy that chooses stronger axioms (i.e., axioms that are less weak than tautologies) if possible. In contrast to what is claimed in the literature (e.g., [16]), this approach does not necessarily yield a repair.

Lemma 6.

Let O'_r be the ontology obtained from O_r by replacing all the elements of the hitting set by weaker axioms such that condition (1) is satisfied. Then Con(O_s ∪ O'_r) ⊆ Con(O_s ∪ O_r), but in general we may still have O_s ∪ O'_r ⊨ α.


Proof. The definition of "weaker than" (see Definition 5) obviously implies that Con(O_s ∪ O'_r) ⊆ Con(O_s ∪ O_r).

We now give an example where this approach nevertheless does not produce a repair. Let O = O_s ∪ O_r where O_s := {B ⊑ A} and O_r := {β, C ⊑ B} with β := (A ⊓ C)(a) and β' := C(a), and let α be the consequence A(a). Then α has the single justification J = {β}, and thus H = {β} is the only hitting set. The assertion β' is weaker than β and it satisfies O_s ∪ {β'} ⊭ α. However, if we define O'_r := {β', C ⊑ B}, then O_s ∪ O'_r ⊨ α still holds. ∎

A similar example that uses only GCIs is the following, where now we consider the fully refutable ontology O_r := {A ⊑ B ⊓ C, C ⊑ B} (with O_s := ∅) and we assume that α is the consequence A ⊑ B. Then α has the single justification J = {A ⊑ B ⊓ C}, and thus H = {A ⊑ B ⊓ C} is the only hitting set. The GCI A ⊑ C is weaker than A ⊑ B ⊓ C and it satisfies {A ⊑ C} ⊭ α. However, if we define O'_r := {A ⊑ C, C ⊑ B}, then O'_r ⊨ α.

These examples show that applying the gentle repair approach only once may not lead to a repair. For this reason, we need to iterate this approach, i.e., if the resulting ontology still has α as a consequence, we again compute all justifications and a hitting set for them, and then replace the elements of the hitting set with weaker axioms as described above. This is iterated until a repair is reached. We can show that this iteration indeed always terminates after finitely many steps with a repair.

Theorem 7.

Let O = O_s ⊎ O_r be a finite ontology and α an axiom such that O ⊨ α and O_s ⊭ α. Applied to O and α, the iterative algorithm described above stops after a finite number of iterations that is at most exponential in the cardinality of O_r, and yields as output an ontology that is a repair of O w.r.t. the consequence α.


Proof. Assume that O_r contains n axioms, and that there is an infinite run of the algorithm on input O and α, producing the sequence of ontologies O_r = O_r^(0), O_r^(1), O_r^(2), …. Take a bijection between O_r and {1, …, n} that assigns unique labels to axioms. Whenever we weaken an axiom during a step of the run, the new weaker axiom inherits the label of the original axiom. Thus, we have bijections between the ontologies O_r^(i) considered during the run of the algorithm and {1, …, n}. For i ≥ 0 we define

S_i := {I ⊆ {1, …, n} | O_s together with the axioms of O_r^(i) whose labels belong to I has the consequence α},

i.e., S_i contains all sets of indices such that the corresponding subset of O_r^(i) together with O_s has the consequence α.

We claim that S_{i+1} ⊊ S_i. Note that S_{i+1} ⊆ S_i is an immediate consequence of the fact that β ∈ O_r^(i+1) implies that β ∈ O_r^(i) or β is weaker than the axiom of O_r^(i) carrying the same label. Thus, it remains to show that the inclusion is strict. This follows from the following observations. Since the algorithm does not terminate with the ontology O_r^(i), we still have O_s ∪ O_r^(i) ⊨ α, and thus there is at least one justification J. Consequently, the hitting set used in this step of the algorithm contains an element β of J. When going from O_r^(i) to O_r^(i+1), β is replaced by a weaker axiom β' such that O_s ∪ (J \ {β}) ∪ {β'} ⊭ α. But then the set of labels of J belongs to S_i, but not to S_{i+1}.

Since the power set of {1, …, n} contains only exponentially many sets, the strict inclusion S_{i+1} ⊊ S_i can happen only exponentially often, which contradicts our assumption that there is an infinite run of the algorithm on input O and α. This shows termination after exponentially many steps. However, if the algorithm terminates with output O_r^(k), then O_s ∪ O_r^(k) ⊭ α. In fact, otherwise, there would be a possibility to weaken O_r^(k) into an ontology O_r^(k+1) since it would always be possible to replace the elements of a hitting set by tautologies, i.e., perform a classical repair. ∎

When computing a classical repair, considering all justifications and then removing a minimal hitting set of these justifications guarantees that one immediately obtains a repair. We have seen in the proof of Lemma 6 that with our gentle repair approach this need not be the case. Nevertheless, we were able to show that, after a finite number of iterations of the approach, we obtain a repair. The proof of termination actually shows that for this it is sufficient to weaken only one axiom of one justification such that the resulting set is no longer a justification. This motivates the following modification of our approach:

Modified gentle repair algorithm:

compute one justification J for α in O and choose an axiom β ∈ J. Replace β by a weaker axiom β' such that

O_s ∪ (J \ {β}) ∪ {β'} ⊭ α.   (2)


Clearly, one needs to iterate this approach, but it is easy to see that the termination argument used in the proof of Theorem 7 also applies here.
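The modified iteration can be sketched as follows, with the reasoner, the justification computation, and the weakening step all left as assumed black boxes supplied by the caller; this is a sketch of the control flow only, not the paper's implementation:

```python
def modified_gentle_repair(static, refutable, alpha, entails, justification, weaken):
    """One justification per round, one axiom weakened (condition (2)).

    Assumed black-box helpers:
      entails(axioms, alpha)               -- decides the consequence relation,
      justification(static, onto, alpha)   -- returns one justification J,
      weaken(ax, rest)                     -- returns a weaker axiom satisfying
                                              (2) for rest = J \\ {ax}
                                              (a tautology always qualifies).
    Corollary 8 bounds the number of rounds exponentially.
    """
    onto = set(refutable)
    while entails(static | onto, alpha):
        just = justification(static, onto, alpha)
        ax = next(iter(just))                    # choose any axiom of J
        onto = (onto - {ax}) | {weaken(ax, just - {ax})}
    return onto
```

Choosing a tautology in `weaken` degenerates the run into a classical repair, as observed above for the unmodified algorithm.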

Corollary 8.

Let O = O_s ⊎ O_r be a finite ontology and α an axiom such that O ⊨ α and O_s ⊭ α. Applied to O and α, the modified iterative algorithm stops after a finite number of iterations that is at most exponential in the cardinality of O_r, and yields as output an ontology that is a repair of O w.r.t. α.

An important advantage of this modified approach is that the complexity of a single iteration step may decrease considerably. For example, for the DL EL, a single justification can be computed in polynomial time, while computing all justifications may take exponential time [7]. In addition, to compute a minimal hitting set one needs to solve an NP-complete problem [12] whereas choosing one axiom from a single justification is easy. However, as usual, there is no free lunch: we can show that the modified gentle repair algorithm may indeed need exponentially many iteration steps. (It is not clear yet whether this is also the case for the unmodified gentle repair algorithm.)

Proposition 9.

There is a sequence of ontologies O_n (n ≥ 1) and an axiom α such that the modified gentle repair algorithm applied to O_n and α has a run with exponentially many iterations in the size of O_n.


Proof. For n ≥ 1, consider the set of concept names {A, P_1, Q_1, …, P_n, Q_n}, and define O_n := O_s ∪ O_r over these names.

It is easy to see that the size of O_n is polynomial in n and that O_n ⊨ α. Suppose that we want to get rid of this consequence using the modified gentle repair approach. First, we can find a justification containing the first axiom of O_r, and repair it by weakening that axiom. At this point, we can find a further justification that uses the weakened axiom, and we weaken again. Repeating this approach, after n weakenings we have only changed the first axiom, weakening it to an axiom (3) whose right-hand side is a conjunction with 2^n conjuncts, each of them representing a possible choice of P_i or Q_i at every location i ∈ {1, …, n}.

So far, we have just considered axioms from O_r. Taking also axioms from O_s into account, we obtain for every conjunct in axiom (3) a justification for α that consists of (3) and further axioms of O_s.

This justification can be removed by weakening (3) further by deleting one concept name appearing in the conjunct. The justifications for other conjuncts are not influenced by this modification. Thus, we can repeat this for each of the 2^n conjuncts, which shows that overall we have exponentially many iterations of the modified gentle repair algorithm in this run. ∎

3.1 Weakening Relations

In order to obtain better bounds on the number of iterations of our algorithms, we restrict the way in which axioms can be weakened. Before introducing concrete approaches for how to do this for EL axioms in the next section, we investigate such restricted weakening relations in a more abstract setting.

Definition 10.

Given a strict order ≻ (i.e., an irreflexive and transitive binary relation) on axioms, we say that it

  • is a weakening relation if α ≻ β implies that β is weaker than α;

  • is bounded (linear, polynomial) if, for every axiom α, there is a bound (a bound linear or polynomial in the size of α) on the length of all ≻-chains issuing from α;

  • is complete if, for any axiom α that is not a tautology, there is a tautology β such that α ≻ β.

If we use a linear (polynomial) and complete weakening relation, then termination with a repair is guaranteed after a linear (polynomial) number of iterations.

Proposition 11.

Let ≻ be a linear (polynomial) and complete weakening relation. If in the above (modified) gentle repair algorithm we have β ≻ β' whenever β is replaced by β', then the algorithm stops after a linear (polynomial) number of iterations and yields as output an ontology that is a repair of O w.r.t. the consequence α.


Proof. For every axiom in O_r we consider the length of the longest ≻-chain issuing from it, and then sum up these numbers over all axioms in O_r. The resulting number is linearly (polynomially) bounded by the size of the ontology (assuming that this size is given as the sum of the sizes of all its axioms). Let us call this number the chain-size of the ontology. Obviously, if β is replaced by β' with β ≻ β', then the length of the longest ≻-chain issuing from β' is smaller than the length of the longest ≻-chain issuing from β. Consequently, if O_r^(i+1) is obtained from O_r^(i) in the i-th iteration of the algorithm, then the chain-size of O_r^(i) is strictly larger than the chain-size of O_r^(i+1). This implies that there can be only linearly (polynomially) many iterations.

Consider a terminating run of the algorithm that has produced the sequence of ontologies O_r = O_r^(0), O_r^(1), …, O_r^(k). Then we have

Con(O_s ∪ O_r^(k)) ⊆ ⋯ ⊆ Con(O_s ∪ O_r^(1)) ⊆ Con(O_s ∪ O_r^(0))

since ≻ is a weakening relation. If the algorithm has terminated due to the fact that O_s ∪ O_r^(k) ⊭ α, then O_r^(k) is a repair of O w.r.t. α. Otherwise, the only reason for termination could be that, although O_s ∪ O_r^(k) ⊨ α, the algorithm cannot generate a new ontology O_r^(k+1). In the unmodified gentle repair approach this means that there is an axiom β in the hitting set such that there is no axiom β' with β ≻ β' such that (1) is satisfied. However, using a tautology as the axiom β' actually allows us to satisfy condition (1). Thus, completeness of ≻ implies that this reason for termination without success cannot occur. An analogous argument can be used for the modified gentle repair approach. ∎

When describing our (modified) gentle repair algorithm, we have said that the chosen axiom β needs to be replaced by a weaker axiom β' such that (1) or (2) holds. But we have not said how such an axiom can be found. This of course depends on which ontology language and which weakening relation is used. In the abstract setting of this section, we assume that an "oracle" provides us with a weaker axiom.

Definition 12.

Let ≻ be a weakening relation. An oracle for ≻ is a computable function γ that, given an axiom α that is not ≻-minimal, provides us with an axiom γ(α) such that α ≻ γ(α). For ≻-minimal axioms α we assume that γ(α) = α.

If the weakening relation ≻ is complete and well-founded (i.e., there are no infinite descending ≻-chains α_0 ≻ α_1 ≻ α_2 ≻ ⋯), we can effectively find an axiom β' such that (1) or (2) holds. We show this formally only for (2), but condition (1) can be treated similarly.

Lemma 13.

Assume that J is a justification for the consequence α, and β ∈ J. If ≻ is a well-founded and complete weakening relation and γ is an oracle for ≻, then there is an i ≥ 0 such that (2) holds for β' := γ^i(β). If ≻ is additionally linear (polynomial), then i is linear (polynomial) in the size of β.


Proof. Well-foundedness implies that the ≻-chain β ≻ γ(β) ≻ γ²(β) ≻ ⋯ is finite, and thus there is an i such that γ^{i+1}(β) = γ^i(β), i.e., γ^i(β) is ≻-minimal. Since ≻ is complete, this implies that γ^i(β) is a tautology. Minimality of the justification J then yields that (2) holds for β' := γ^i(β). Linearity (polynomiality) of ≻ ensures that the length of this ≻-chain is linearly (polynomially) bounded by the size of β. ∎

Thus, to find an axiom β' satisfying (1) or (2), we iteratively apply γ to β until an axiom satisfying the required property is found. The proof of Lemma 13 shows that at the latest this is the case when a tautology is reached, but of course the property may already be satisfied before that by a non-tautological axiom γ^i(β).
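This oracle iteration can be sketched as follows, again with an assumed entailment oracle; `context` plays the role of O_s ∪ (J \ {β}) in condition (2):

```python
def weaken_with_oracle(ax, context, alpha, entails, oracle):
    """Iterate an oracle on ax until condition (2) holds (cf. Lemma 13).

    Assumed black boxes: `entails(axioms, alpha)` decides the consequence,
    `oracle` returns a strictly weaker axiom, reaching a tautology after
    finitely many steps (completeness plus well-foundedness), so the loop
    terminates at the latest when a tautology is reached.
    """
    weak = ax
    while entails(context | {weak}, alpha):
        weak = oracle(weak)
    return weak
```

If the weakening relation is linear or polynomial, the number of loop iterations is bounded accordingly in the size of the axiom.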

In order to weaken axioms as gently as possible, γ should realize small weakening steps. The smallest such step is one where there is no step in between.

Definition 14.

Let ≻ be a strict order. The one-step relation ≻¹ induced by ≻ (sometimes also called the transitive reduction of ≻) is defined as

α ≻¹ β iff α ≻ β and there is no δ with α ≻ δ ≻ β.

We say that ≻¹ covers ≻ if its transitive closure is again ≻, i.e., (≻¹)⁺ = ≻. In this case we also say that ≻ is one-step generated.

If ≻ is one-step generated, then every weaker element can be reached by a finite sequence of one-step weakenings, i.e., if α ≻ β, then there are finitely many elements δ_1, …, δ_k (k ≥ 0) such that α ≻¹ δ_1 ≻¹ ⋯ ≻¹ δ_k ≻¹ β. This leads us to the following characterization of strict orders that are not one-step generated.

Lemma 15.

The strict order ≻ is not one-step generated iff there exist two comparable elements α ≻ β such that every finite chain α = δ_0 ≻ δ_1 ≻ ⋯ ≻ δ_k ≻ δ_{k+1} = β can be refined in the sense that there is an i ∈ {0, …, k}, and an element δ such that δ_i ≻ δ ≻ δ_{i+1}.

If α ≻ β are such that any finite chain between them can be refined, then obviously there cannot be an upper bound on the length of the ≻-chains issuing from α. Thus, Lemma 15 implies the following result.

Proposition 16.

If ≻ is bounded, then it is one-step generated.

The following example shows that well-founded strict orders need not be one-step generated.

Example 17.

Consider the strict order ≻ on the set

{a} ∪ {b_n | n ≥ 0},

where a ≻ b_n for all n ≥ 0, and b_n ≻ b_m iff n > m. It is easy to see that ≻ is well-founded and that ≻¹ = {(b_{n+1}, b_n) | n ≥ 0}. Consequently, (≻¹)⁺ contains none of the tuples (a, b_n) for n ≥ 0, which shows that ≻¹ does not cover ≻. In particular, any finite chain between a and b_n can be refined.

Interestingly, if we add elements c_n (n ≥ 0) with a ≻ c_n and c_n ≻ b_m for all m ≤ n to this order, then it becomes one-step generated.

One-step generated weakening relations allow us to find maximally strong weakenings satisfying (1) or (2). Again, we consider only condition (2), but all definitions and results can be adapted to deal with (1) as well.

Definition 18.

Let J be a justification for the consequence α, and β ∈ J. We say that β' is a maximally strong weakening of β in J if β ≻ β' and (2) holds for β', but (2) does not hold for any β'' with β ≻ β'' ≻ β'.

In general, maximally strong weakenings need not exist. As an example, assume that the strict order introduced in Example 17 (without the added elements c_n) is a weakening relation on a set of axioms {a} ∪ {b_n | n ≥ 0}, and assume that {a} is a justification for the consequence and that none of the axioms b_n have the consequence. Obviously, in this situation there is no maximally strong weakening of a in {a}.

Next, we introduce conditions under which maximally strong weakenings always exist, and can also be computed. We say that the one-step generated weakening relation ≻ is effectively finitely branching if, for every axiom α, the set {β | α ≻¹ β} is finite and can effectively be computed.

Proposition 19.

Let ≻ be a well-founded, one-step generated, and effectively finitely branching weakening relation, and assume that the consequence relation ⊨ is decidable. Then all maximally strong weakenings of an axiom β in a justification J can effectively be computed.


Proof. Let J be a justification for the consequence α, and β ∈ J. Since ≻ is well-founded, one-step generated, and finitely branching, König's Lemma implies that there are only finitely many β' such that β ≻ β', and all of them can be reached from β by following ≻¹. Thus, by a breadth-first search, we can compute the set of all β' such that there is a ≻¹-path from β to β' whose last element β' satisfies (2), but none of whose intermediate elements satisfies (2). If this set still contains elements that are comparable w.r.t. ≻ (i.e., there is a ≻¹-path between them), then we remove the weaker elements. It is easy to see that the remaining set consists of all maximally strong weakenings of β in J. ∎

Note that the additional removal of weaker elements in the above proof is really necessary. In fact, assume that β ≻¹ β_1 and β ≻¹ δ ≻¹ β_2, and that β_1 and β_2 satisfy (2), δ does not satisfy (2), but β_1 ≻ β_2. Then both β_1 and β_2 belong to the set computed in the breadth-first search, but only β_1 is a maximally strong weakening (see Example 29, where it is shown that this situation can really occur when repairing ontologies).

In particular, this also means that iterated application of a one-step oracle, i.e., an oracle γ satisfying α ≻¹ γ(α) for all non-≻-minimal axioms α, does not necessarily yield a maximally strong weakening.
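The breadth-first computation from the proof of Proposition 19 can be sketched as follows, with `one_step` an assumed enumeration of the finitely many one-step weakenings of an axiom, and `entails` again a hypothetical entailment oracle; `rest` stands for O_s ∪ (J \ {β}) in condition (2):

```python
from collections import deque

def maximally_strong_weakenings(ax, rest, alpha, entails, one_step):
    """Collect the first weakenings along each ≻¹-path that satisfy (2),
    then discard those strictly weaker than another collected candidate
    (the comparability check at the end of the proof of Proposition 19).
    """
    found, seen, queue = set(), {ax}, deque([ax])
    while queue:
        cur = queue.popleft()
        for nxt in one_step(cur):
            if nxt in seen:
                continue
            seen.add(nxt)
            if entails(rest | {nxt}, alpha):
                queue.append(nxt)      # (2) violated: explore further
            else:
                found.add(nxt)         # candidate maximally strong weakening

    def reachable(x, y):               # is y strictly weaker than x via ≻¹?
        stack, visited = [x], {x}
        while stack:
            c = stack.pop()
            for n in one_step(c):
                if n == y:
                    return True
                if n not in visited:
                    visited.add(n)
                    stack.append(n)
        return False

    return {b for b in found
            if not any(b != c and reachable(c, b) for c in found)}
```

Well-foundedness and finite branching of the weakening relation guarantee that both searches terminate.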

4 Weakening Relations for EL Axioms

In this section, we restrict the attention to ontologies written in EL, but some of our approaches and results could also be transferred to other DLs. We start with observing that weakening relations for EL axioms need not be one-step generated.

Proposition 20.

If we define α ≻ β if Con({β}) ⊊ Con({α}), then ≻ is a weakening relation on EL axioms that is not one-step generated.


Proof. It is obvious that ≻ is a weakening relation (in fact, it is the greatest one w.r.t. set inclusion). To see that it is not one-step generated, consider a GCI α that is not a tautology and an arbitrary tautology β. Then we have α ≻ β. Let α ≻ δ_1 ≻ ⋯ ≻ δ_k ≻ β be a finite chain leading from α to β. Then δ_k must be a GCI that is not a tautology. Assume that δ_k = E ⊑ F. Then δ := E ⊓ X ⊑ F, where X is a concept name not occurring in δ_k, satisfies δ_k ≻ δ ≻ β. By Lemma 15, this shows that ≻ is not one-step generated. ∎

Our main idea for obtaining more well-behaved weakening relations is to weaken a GCI C ⊑ D by generalizing the right-hand side D and/or by specializing the left-hand side C. Similarly, a concept assertion C(a) can be weakened by generalizing C. For role assertions we can use as weakening an arbitrary tautological axiom, but we will no longer consider them explicitly in the following.

Proposition 21.

If we define the relation ≻ by setting

C ⊑ D ≻ C' ⊑ D' if C' ⊑ C, D ⊑ D', and Con({C' ⊑ D'}) ⊊ Con({C ⊑ D}), and
C(a) ≻ C'(a) if C ⊏ C',

then ≻ is a complete weakening relation.


Proof. To prove that ≻ is a weakening relation we must show that α ≻ β implies Con({β}) ⊊ Con({α}). If C' ⊑ C and D ⊑ D' hold, then it follows that Con({C' ⊑ D'}) ⊆ Con({C ⊑ D}) and, if C ⊑ C', that Con({C'(a)}) ⊆ Con({C(a)}). The second inclusion is strict iff C ⊏ C'. For the first inclusion to be strict, C' ⊏ C or D ⊏ D' is a necessary condition, but it is not sufficient. This is why we explicitly require Con({C' ⊑ D'}) ⊊ Con({C ⊑ D}), which yields strictness of the inclusion. Completeness is trivial due to the availability of all tautologies of the form C ⊑ ⊤ and ⊤(a). ∎

To see why, e.g., C' ⊏ C does not imply Con({C' ⊑ D}) ⊊ Con({C ⊑ D}), notice that A ⊓ B ⊏ A, but the GCIs A ⊓ B ⊑ ⊤ and A ⊑ ⊤ are both tautologies and thus equivalent.

Unfortunately, the weakening relation introduced in Proposition 21 is not well-founded since left-hand sides can be specialized indefinitely. For example, we have B ⊑ A ≻ B ⊓ ∃r.⊤ ⊑ A ≻ B ⊓ ∃r.∃r.⊤ ⊑ A ≻ ⋯. To avoid this problem, we now restrict the attention to sub-relations of ≻ that only generalize the right-hand sides of GCIs. We will not consider concept assertions, but they can be treated similarly.

4.1 Generalizing the Right-Hand Sides of GCIs

We define

C ⊑ D ≻_gen C ⊑ D' if D ⊏ D' and Con({C ⊑ D'}) ⊊ Con({C ⊑ D}).

Theorem 22.

The relation ≻_gen on EL axioms is a well-founded, complete, and one-step generated weakening relation, but it is not polynomial.


Proof. Proposition 21 implies that ≻_gen is a weakening relation, and completeness follows from the fact that C ⊑ D ≻_gen C ⊑ ⊤ whenever C ⊑ D is not a tautology. In EL, the inverse strict subsumption relation ⊐ is well-founded, i.e., there cannot be an infinite sequence D_0 ⊏ D_1 ⊏ D_2 ⊏ ⋯ of EL concepts. Looking at the proof of this result given in [6], one sees that it actually shows that ⊐ is bounded. Obviously, this implies that ≻_gen is bounded as well, and thus one-step generated by Proposition 16.

It remains to show that ≻_gen is not polynomial. Let n ≥ 1 and {P_1, Q_1, …, P_n, Q_n} be a set of 2n distinct concept names. Then we have

Note that the size of