Introduction
Querying inconsistent ontologies is an intriguing new problem that has given rise to a flourishing research activity in the description logic (DL) and existential rules communities. Consistent query answering, first developed for relational databases [Arenas, Bertossi, and Chomicki1999, Chomicki2007] and then generalized as the AR and IAR semantics for several DLs [Lembo et al.2010], is the most widely recognized semantics for inconsistency-tolerant query answering. These two traditional semantics are based upon the notion of a repair, defined as an inclusion-maximal subset of the ABox that is consistent with the TBox. Du, Qi, and Shen (2013) studied query answering under weight-based AR semantics for DLs. Bienvenu, Bourgaux, and Goasdoué (2014) studied variants of the AR and IAR semantics for DL-Lite obtained by replacing classical repairs with various preferred repairs. Existential rules (also known as Datalog±) are set to play a central role in the context of query answering and information extraction for the Semantic Web. Lukasiewicz et al. (2012; 2013; 2015) studied the data complexity and combined complexity of AR semantics under the main decidable classes of existential rules enriched with negative constraints.
However, some rules might be unreliable if they are extracted by ontology learning or written by an unskilled knowledge engineer [Lehmann et al.2011]. Meyer et al. (2006) proposed a tableau-like algorithm that yields an ExpTime upper bound for finding maximally concept-satisfiable terminologies represented in ALC. Kalyanpur et al. (2006) provided solutions for repairing unsatisfiable concepts in a consistent OWL ontology. Furthermore, there usually exist preferences between rules, and rules with negation are often considered less preferred than rules without negation. Scharrenbach et al. (2010) proposed that the original axioms be preserved in the knowledge base under certain conditions, which requires changing the underlying logic for repair. Wang et al. (2014) observed that when new facts contradicting the ontology are added, it is often desirable to revise the ontology according to the added data. This motivates us to consider another kind of repair that selects maximal components of the existential rules. We illustrate the motivation via the following example.
Example 1.
Let a database and a rule set be given, where the rules express that each bat can fly and has at least one cave to live in; that any creature living in a cave is a trogloxene; that a mammal not known to fly cannot fly; that any creature that can fly is a bird; and that, additionally, a bird cannot be a trogloxene, and likewise a bird cannot be a mammal.
Clearly, the resulting knowledge base is inconsistent under the stable model semantics.
We assume that one of the rules is more reliable (or preferred) than another. Then we can restore consistency by deleting some of the rules, obtaining inclusion-maximal preferred consistent rule sets w.r.t. this preference.
We will focus on the case where the database is reliable but the rules are not. Our main goal is to present a framework for handling inconsistent existential rules under the stable model semantics. We define a notion called rule repairs to select maximal components of the rules; the philosophy behind it is to trust as many of the rules as possible. Our second goal is to perform an in-depth analysis of the data and combined complexity of inconsistency-tolerant query answering under rule repair semantics. Let us recall some previous work on existential rules under the stable model semantics. Magka, Krötzsch, and Horrocks (2013) presented R-acyclic and R-stratified normal rule sets, which always admit at most one finite stable model. Zhang, Zhang, and You (2015) implicitly showed that R-acyclicity suffices to capture all negation-free rule sets with finite stable models. Gottlob et al. (2014) proved the decidability of query answering under the stable model semantics for guarded existential rules. Alviano and Pieris (2015) extended the stickiness notion to normal rule sets and showed that it ensures decidability for the well-founded semantics rather than the stable model semantics. We will focus on R-acyclic rule sets with R-stratified or full negations and on guarded existential rules with stratified or full negations.
Our main contributions are briefly summarized as follows. We define rule repair semantics to handle inconsistent existential rules under the stable model semantics. We consider rule repairs w.r.t. inclusion-maximal subsets or cardinality, possibly with preferences. We obtain a (nearly) complete picture of the data and combined complexity of inconsistency-tolerant query answering under rule repair semantics (Table 1). Surprisingly, for R-acyclic existential rules with R-stratified negations and for guarded existential rules with stratified negations, both the data complexity and the combined complexity of query answering under the rule repair semantics remain the same as under the conventional query answering semantics. Interestingly, the data complexity for weakly-acyclic or guarded existential rules with stratified negation is PTime-complete. This leads us to propose several approaches that handle the rule repair semantics by calling answer set programming (ASP) solvers. An experimental evaluation shows that these approaches scale well for query answering under rule repairs in realistic cases.
Preliminaries
We consider a standard first-order language, and use the usual notation for the variables appearing in an expression.
Databases.
We assume an infinite set of (data) constants, an infinite set of (labeled) nulls (used as fresh Skolem terms), and an infinite set of variables. A term is a constant, a null, or a variable. An atom is built from an n-ary relation symbol applied to n terms. A conjunction of atoms is often identified with the set of all its atoms. We assume a relational schema, which is a finite set of relation symbols. An instance is a (possibly infinite) set of facts, i.e., atoms without variables, whose arguments are tuples of constants and nulls. A database over a relational schema is a finite instance with relation symbols from the schema and with arguments that are constants only (i.e., without nulls).
Normal Logic Programs and Stable Models.
Each normal (logic) program is a finite set of NLP rules of the form

  a <- b_1, ..., b_m, not c_1, ..., not c_n,   (1)

where a, b_1, ..., b_m, c_1, ..., c_n are atoms and m, n >= 0. Given a rule r of the above form, let head(r) = a, let body+(r) = {b_1, ..., b_m}, and let body-(r) = {c_1, ..., c_n}.
Let P be a normal program. The Herbrand universe and Herbrand base of P are defined as usual and denoted HU(P) and HB(P), respectively. A variable-free rule is called an instance of a rule r in P if it can be obtained from r by a substitution. The grounding of P, denoted ground(P), is the set of all instances of all rules of P.
The Gelfond-Lifschitz reduct of a normal program P w.r.t. a set S of ground atoms, denoted P^S, is the (possibly infinite) ground positive program obtained from ground(P) by (i) deleting every rule whose negative body shares an atom with S, and (ii) deleting all negative literals from each remaining rule. A subset S of HB(P) is called a stable model of P if it is the least model of P^S. For more about the stable model semantics, refer to [Gelfond and Lifschitz1988, Ferraris, Lee, and Lifschitz2011].
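As a concrete illustration of the reduct and the stable model test, the following Python sketch brute-forces the stable models of a small ground program. The triple-based rule representation is our own, and the exponential enumeration is purely didactic, not the paper's algorithm.

```python
from itertools import combinations

# Brute-force sketch for a *ground* normal program given as
# (head, positive_body, negative_body) triples.
def reduct(rules, candidate):
    """Gelfond-Lifschitz reduct w.r.t. `candidate`: drop every rule whose
    negative body meets the candidate set, then drop negative literals."""
    return [(h, pos) for (h, pos, neg) in rules if not (neg & candidate)]

def least_model(positive_rules):
    """Least model of a ground positive program, by fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for h, pos in positive_rules:
            if pos <= model and h not in model:
                model.add(h)
                changed = True
    return model

def stable_models(rules, atoms):
    """A candidate S is stable iff S is the least model of the reduct w.r.t. S."""
    found = []
    for n in range(len(atoms) + 1):
        for subset in combinations(sorted(atoms), n):
            s = set(subset)
            if least_model(reduct(rules, s)) == s:
                found.append(s)
    return found

# p <- not q.  q <- not p.   (the classic program with two stable models)
prog = [("p", frozenset(), frozenset({"q"})),
        ("q", frozenset(), frozenset({"p"}))]
print(stable_models(prog, {"p", "q"}))  # → [{'p'}, {'q'}]
```

Note that neither the empty set nor {p, q} passes the test: the former is not a model of its reduct, and the latter's reduct deletes both rules.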
Normal Existential Rules.
Every normal (existential) rule is a first-order sentence of the form ∀X∀Y (φ(X, Y) → ∃Z ψ(X, Z)), where φ is a conjunction of literals, i.e., atoms or negated atoms (of the form not α with α atomic), ψ is a conjunction of atoms, and each universally quantified variable appears in at least one positive conjunct of φ. In the above normal rule, φ is called its body, and ψ its head. A normal rule is called a constraint if its head is the "false" ⊥. For simplicity, when writing a rule, we often omit the universal quantifiers; by a normal rule set, we always mean a finite set of normal existential rules.
Let r be a normal rule of the above form. For each existential variable z in Z, we introduce a fresh function symbol f_z of arity |X|. The skolemization of r, denoted sk(r), is the rule obtained from r by substituting f_z(X) for z, for each z in Z. Let Σ be a normal rule set. We define sk(Σ) to be the set of rules sk(r) for all r in Σ. Clearly, sk(Σ) can be regarded as a normal program in an obvious way. Given any database D, an instance is called a stable model of D and Σ if it is a stable model of D ∪ sk(Σ).
A normal rule is called guarded if there is a positive conjunct in its body that contains all the universally quantified variables of the rule, and a normal rule set is called guarded if every rule in it is guarded.
A normal rule set is stratified if there is a function μ that maps relation symbols to integers such that, for every rule: (i) μ(q) ≤ μ(p) for all relation symbols p occurring in the head and q positively occurring in the body, and (ii) μ(q) < μ(p) for all relation symbols p occurring in the head and q negatively occurring in the body.
Sometimes, the negations that occur in a stratified normal rule set are called stratified negations, and those in a non-stratified normal rule set are called full negations.
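For intuition, stratifiability can be checked at the predicate level by a simple fixpoint that tries to build the stratum function described above. The following sketch uses our own rule representation and helper name; it is an illustration, not the paper's algorithm.

```python
# Rules are (head_pred, positive_body_preds, negative_body_preds) triples.
def stratify(rules):
    """Return a stratum map mu (pred -> int) witnessing stratification,
    or None if no such map exists (a negation inside a cycle)."""
    preds = {p for h, pos, neg in rules for p in [h, *pos, *neg]}
    mu = {p: 0 for p in preds}
    # Strata of a stratifiable set are bounded by len(preds), so at most
    # len(preds)**2 raising steps can occur before a stable pass.
    for _ in range(len(preds) ** 2 + 1):
        changed = False
        for h, pos, neg in rules:
            for q in pos:                 # require mu[q] <= mu[h]
                if mu[h] < mu[q]:
                    mu[h] = mu[q]; changed = True
            for q in neg:                 # require mu[q] < mu[h]
                if mu[h] <= mu[q]:
                    mu[h] = mu[q] + 1; changed = True
        if not changed:
            return mu                     # every constraint is satisfied
    return None                           # strata keep growing: not stratified

# nofly <- mammal, not fly  is stratified: fly sits strictly below nofly.
print(sorted(stratify([("bird", ["fly"], []),
                       ("nofly", ["mammal"], ["fly"])]).items()))
# → [('bird', 0), ('fly', 0), ('mammal', 0), ('nofly', 1)]
```

The mutually negative pair p <- not q, q <- not p makes the strata grow without bound, so `stratify` correctly reports it as non-stratified.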
Let r and r′ be two normal rules; w.l.o.g., assume that no variable occurs in both. Rule r positively relies on r′, written r ⇒+ r′, if there exist a database and a substitution witnessing that an application of r′ can enable an application of r; rule r negatively relies on r′, written r ⇒− r′, if there exist a database and a substitution witnessing that an application of r′ can block an application of r (see [Magka, Krötzsch, and Horrocks2013] for the precise conditions). A normal rule set is called R-acyclic if there is no cycle of positive reliances that involves a rule with an existential quantifier, and is called R-stratified if it admits a partition that respects the positive and negative reliances in the sense of [Magka, Krötzsch, and Horrocks2013].
Classical Boolean Query Answering.
A normal Boolean conjunctive query (NBCQ) q is an existentially closed conjunction of atoms and negated atoms involving no nulls. Let q+ (respectively, q−) be the set of atoms positively (respectively, negatively) occurring in q. An NBCQ q is called safe if every variable in an atom from q− has at least one occurrence in q+; it is covered if, for every atom in q−, there is an atom in q+ that contains all of its arguments.
Given a database D and an NBCQ q, we write D ⊨ q if there exists an assignment (that is, a function that maps each variable to a variable-free term) under which every atom of q+ belongs to D and no atom of q− does. Furthermore, given a database D, a normal rule set Σ, and an NBCQ q, we say that q is entailed classically if M ⊨ q for each stable model M of D and Σ.
Complexity Classes.
We assume that the reader is familiar with complexity theory. Given a unary function f on the natural numbers, by DTime(f) (NTime(f), respectively) we mean the class of languages decidable in time O(f(n)) by a deterministic (nondeterministic, respectively) Turing machine. Besides well-known complexity classes such as PTime, NPTime, and 2ExpTime, we will also use several less common classes, as follows. We will use the class of all languages decidable in exponential time by a deterministic Turing machine with an oracle for some N2ExpTime-complete problem. The Boolean hierarchy (BH) is defined as follows: BH_1 is NPTime; for k >= 1, BH_{2k} (BH_{2k+1}, respectively) is the class of languages each of which is the intersection (union, respectively) of a language in BH_{2k-1} (BH_{2k}, respectively) and a language in coNPTime (NPTime, respectively); BH is then the union of BH_k for all k >= 1. Note that DP, the class of difference polynomial time, is exactly the class BH_2; the even levels of BH consist of finite unions of languages in DP; and BH is closed under complement. It was shown by [Chang and Kadin1996] that a collapse of the Boolean hierarchy implies a collapse of the polynomial hierarchy; thus it seems impossible to find a BH-complete problem.

Existential Rule Repair Semantics
In this section, we propose several semantics to handle inconsistency in ontological knowledge bases. Unlike many existing works, we focus on the case where the database is reliable but the rules are not. Similar to the data repair semantics of [Lembo et al.2010], our inconsistency-tolerant semantics relies on a notion called rule repairs.
To define rule repairs, we equip every rule set with a preference. Such rule sets are called preference-based ontologies.
Definition 1.
Each preference-based ontology is an ordered pair (Σ, ⪰), where Σ is a normal rule set and ⪰ is a preorder (i.e., a reflexive and transitive binary relation) on 2^Σ (i.e., the power set of Σ). We call ⪰ a preference.

Now we are in a position to define rule repairs.
Definition 2.
Let a preference-based ontology with rule set Σ and preference ⪰ be given, and let D be a database. A subset Σ′ of Σ is called a (preferred rule) repair of Σ w.r.t. D and ⪰ (or simply a repair w.r.t. ⪰ if Σ and D are clear from the context) if Σ′ has at least one stable model with D, and for all subsets Σ″ of Σ with Σ″ ≻ Σ′ (i.e., Σ″ ⪰ Σ′ but not Σ′ ⪰ Σ″), Σ″ has no stable model with D.
Intuitively, a preferred rule repair is a maximal component of the rule set that is consistent with the current database. The philosophy behind it is to trust as many of the rules as possible. Note that there is normally more than one repair. To avoid an arbitrary choice among them, we follow the spirit of "certain" query answering. The semantics is then as follows.
Definition 3.
Let a preference-based ontology with rule set Σ and preference ⪰ be given, and let D be a database and q an NBCQ. We say that q is entailed under the rule repair semantics if, for all preferred rule repairs Σ′ of Σ w.r.t. D and ⪰, q is entailed by D and Σ′.
The following proposition shows that our semantics for inconsistency-tolerant query answering coincides with the classical semantics for query answering whenever the ontological knowledge base is consistent, which is clearly important.
Proposition 1.
Let a preference-based ontology and a database be given. If the full rule set has a stable model with the database, then for any NBCQ, entailment under the rule repair semantics coincides with classical entailment.
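Definitions 2 and 3 can be prototyped directly, with an abstract consistency oracle standing in for the stable-model check (in practice an ASP-solver call). The rule names and the toy oracle below are hypothetical, and the subset enumeration is exponential; this is a sketch of the definitions, not an efficient procedure.

```python
from itertools import combinations

# `has_stable_model(subset)` abstracts consistency checking;
# `prefers(a, b)` encodes the preorder a >= b.
def repairs(rules, has_stable_model, prefers):
    """All consistent subsets not strictly dominated by another
    consistent subset under `prefers` (Definition 2)."""
    subsets = [frozenset(c) for n in range(len(rules) + 1)
               for c in combinations(sorted(rules), n)]
    consistent = [s for s in subsets if has_stable_model(s)]
    better = lambda a, b: prefers(a, b) and not prefers(b, a)  # strict part
    return [s for s in consistent
            if not any(better(t, s) for t in consistent)]

def certain(rules, has_stable_model, prefers, entails, query):
    """Certain answering: the query must hold under *every* repair
    (Definition 3)."""
    return all(entails(r, query)
               for r in repairs(rules, has_stable_model, prefers))

# Two mutually inconsistent rules; each alone is consistent.
ok = lambda s: s != frozenset({"r1", "r2"})
by_inclusion = lambda a, b: a >= b            # preference = set inclusion
print(repairs({"r1", "r2"}, ok, by_inclusion))
# → [frozenset({'r1'}), frozenset({'r2'})]
```

With both singletons as repairs, a query entailed by only one of them is not a certain answer, matching the "certain" reading of Definition 3.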
With the above definitions, we have a framework for defining semantics for rule-based inconsistency-tolerant query answering. To define concrete semantics, we need preferences that are useful in real-world applications. Besides the preference based on set inclusion, similarly to [Bienvenu, Bourgaux, and Goasdoué2014], we will consider four other kinds of preferences over subsets, which were first proposed by [Eiter and Gottlob1995] to study logic-based abduction.
Cardinality (⪰_c).
Given any subsets A and B of the rule set, we write A ⪰_c B if |A| ≥ |B|. The intuition behind this preference is that we prefer rule sets with the maximum number of rules, since most rules are likely to be correct.
Priority Levels (⪰_⊆P, ⪰_cP).
Every prioritization of the rule set is a tuple P = (P_1, ..., P_n) such that {P_1, ..., P_n} is a partition of the rule set. Given a prioritization P, the preferences ⪰_⊆P and ⪰_cP can be defined as follows:

Prioritized set inclusion (⪰_⊆P): Given subsets A and B, we write A ⪰_⊆P B if A ∩ P_i = B ∩ P_i for every i, or there is some i such that A ∩ P_i ⊋ B ∩ P_i and A ∩ P_j = B ∩ P_j for all j < i.

Prioritized cardinality (⪰_cP): Given subsets A and B, we write A ⪰_cP B if |A ∩ P_i| = |B ∩ P_i| for every i, or there is some i such that |A ∩ P_i| > |B ∩ P_i| and |A ∩ P_j| = |B ∩ P_j| for all j < i.
Weights (⪰_w).
A weight assignment is a function w that maps each rule to a positive integer. Given two subsets A and B and a weight assignment w, we write A ⪰_w B if the total weight of the rules in A is at least that of the rules in B.
In the rest of this paper, we fix a prioritization and a weight assignment unless otherwise noted.
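As a sketch, the four preferences of this section can be implemented over sets of rule names as follows; the priority levels and weights in the example are toy assumptions.

```python
def pref_cardinality(a, b):
    """A >= B iff |A| >= |B|."""
    return len(a) >= len(b)

def pref_prio_inclusion(levels):
    """Lexicographic set inclusion over priority levels P_1, ..., P_n."""
    def prefers(a, b):
        for p in levels:
            ap, bp = a & p, b & p
            if ap == bp:
                continue
            return ap > bp      # strict superset at the first differing level
        return True             # equal on every level
    return prefers

def pref_prio_cardinality(levels):
    """Lexicographic cardinality over priority levels."""
    def prefers(a, b):
        for p in levels:
            if len(a & p) != len(b & p):
                return len(a & p) > len(b & p)
        return True
    return prefers

def pref_weight(w):
    """Total-weight comparison for a weight assignment w."""
    return lambda a, b: sum(w[r] for r in a) >= sum(w[r] for r in b)

levels = [frozenset({"r1"}), frozenset({"r2", "r3"})]  # P_1 is more reliable
a, b = frozenset({"r1"}), frozenset({"r2", "r3"})
print(pref_prio_inclusion(levels)(a, b))  # → True: r1 wins at level P_1
```

Note that under plain cardinality b beats a (two rules versus one), while both prioritized preferences and the weight preference with w(r1) large enough side with a; this is exactly why the choice of preference matters.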
Example 2 (Example 1 continued).
Let the database and rule set be the same as in Example 1. Then the repairs w.r.t. set inclusion are the inclusion-maximal consistent rule sets shown in Example 1, while the repairs w.r.t. cardinality are those of maximum size among them. For the prioritization given in Example 1, the prioritized repairs are obtained by first maximizing the more reliable rules; and for the weight assignment that maps each rule to its index, there is a single repair of maximum total weight. Consider the queries "Mammal(a)" and "Bird(a)": their entailment differs across these repair semantics.
We find that the repairs under cardinality, prioritized set inclusion, prioritized cardinality, and weights are always among the inclusion-maximal repairs.
Theorem 1.
The repairs under the cardinality, prioritized set inclusion, prioritized cardinality, and weight preferences form a subset of the repairs under set inclusion.
Proof.
Let R_c be the set of repairs under cardinality and R_⊆ the set of repairs under set inclusion; we prove that R_c ⊆ R_⊆. Suppose for contradiction that this is not the case; then there exists a repair A ∈ R_c with A ∉ R_⊆. Because the repairs in R_⊆ are exactly the inclusion-maximal consistent subsets, we have A ⊊ B for some consistent subset B. It is clear that |B| > |A|, so A is not cardinality-maximal among the consistent subsets, which contradicts our assumption that A ∈ R_c.
The remaining preferences can be handled similarly. ∎
Complexity Results
In this section, we study the data and combined complexity of query entailment under our rule repair semantics. In particular, we focus on the following decision problems:

Data complexity: fixing a preference-based ontology and an NBCQ, and given a database as input, decide whether the query is entailed under the rule repair semantics.

Combined complexity: given a preference-based ontology, an NBCQ, and a database as input, decide whether the query is entailed under the rule repair semantics.
To measure the size of the input, we fix a natural way to represent a database, a normal rule set, an NBCQ, a prioritization, and a weight assignment, and we measure their sizes w.r.t. this fixed representation. The size of a preference-based ontology is then defined from the sizes of its components; with a proper representation, it is polynomial in the sizes of the rule set, the prioritization, and the weight assignment.
The following result is obvious.
Proposition 2.
Let a preference-based ontology be given whose preference is one of those defined above. Then, given any two subsets of the rule set, deciding whether one is preferred to the other can be done in polynomial time.
Now, let us consider the complexity of query answering for R-acyclic and R-stratified rule sets under our semantics.
Theorem 2.
Let a preference-based ontology be given whose rule set is R-acyclic and R-stratified and whose preference is one of those defined above. Given a database and a safe NBCQ, deciding whether the query is entailed under the rule repair semantics is PTime-complete for data complexity and 2ExpTime-complete for combined complexity.
Proof.
Let a database and a safe NBCQ be given. By the definition of the semantics, it is easy to verify that the entailment problem can be solved by Alg. 1.
First, we consider the data complexity. In Alg. 1, let us fix a preference-based ontology as in the theorem and a safe NBCQ, and let the database be the only input. As the rule set is R-acyclic and R-stratified, it is clear by Theorem 5 in [Magka, Krötzsch, and Horrocks2013] that the body of the second (inner) loop in Alg. 1 is computable in PTime w.r.t. the size of the database. (Note that the existence of stable models can be reduced to the query answering problem in a routine way.) Since the inner loop is repeated a constant number of times, and since by Proposition 2 the loop condition can be checked in constant time (the rule set now being fixed), the inner loop can be computed in PTime w.r.t. the size of the database. By a similar argument, Alg. 1 as a whole can be implemented in PTime. This completes the proof of membership. The hardness follows from the PTime-hardness of Datalog for data complexity; see, e.g., [Dantsin et al.2001].
Next, we prove the combined complexity, addressing membership first. Let n be the number of rules. Clearly, the body of the inner loop is repeated at most 2^n times. By Theorem 9 in [Magka, Krötzsch, and Horrocks2013], the loop body is computable in 2ExpTime, and by Proposition 2, the loop condition can also be checked within this bound. So the inner loop is computable in 2ExpTime, since the number of iterations is only single exponential. By a similar evaluation, the whole algorithm is implementable in 2ExpTime; thus the combined complexity is in 2ExpTime. The hardness follows from the 2ExpTime-hardness of query answering for the R-acyclic language [Magka, Krötzsch, and Horrocks2013], together with a reduction from classical query answering obtained by extending the ontology with a fresh 0-ary relation symbol. ∎
Theorem 3.
Let a preference-based ontology be given whose rule set is R-acyclic with full negations and whose preference is one of those defined above. Then, given a database and a safe NBCQ, deciding whether the query is entailed under the rule repair semantics is in BH for data complexity and, for combined complexity, in the class of languages decidable in exponential time with an oracle for an N2ExpTime-complete problem.
Proof.
We first prove the data complexity. Let Σ denote the (fixed) rule set and q the fixed query. Given any subset Σ′ of Σ, let D(Σ′) be the set of all databases D such that (1) Σ′ has at least one stable model with D, (2) q is not entailed by D and Σ′, and (3) for all Σ″ ⊆ Σ with Σ″ strictly preferred to Σ′, Σ″ has no stable model with D. Let D* denote the union of D(Σ′) for all subsets Σ′ of Σ. By the definition of the rule repair semantics, q is entailed under the rule repair semantics iff there is no repair under which q fails, iff the input database does not belong to D*. Thus, if the following claim is true, then by the definition of BH we have the desired result. (Notice that the complexity class BH is closed under complement.)

Claim. Given any subset Σ′ of Σ, it is in DP (w.r.t. the size of the input database) to determine whether a database belongs to D(Σ′).

It remains to show the claim. Fix a subset Σ′. Let L1 denote the set of all databases satisfying conditions 1 and 2, and let L2 denote the set of all databases satisfying condition 3. According to Theorem 2 in [Magka, Krötzsch, and Horrocks2013], L1 is in NPTime and L2 is in coNPTime. (Note that, as Σ and the preference are fixed, the number of subsets Σ″ is independent of the size of the input database; thus L2 is indeed in coNPTime.) By definition, D(Σ′) = L1 ∩ L2 is then in DP. This proves the data complexity.
Next, we show the combined complexity. It is clear that q is entailed under the rule repair semantics iff there does not exist a subset Σ′ of the rule set Σ such that (1) Σ′ has at least one stable model with the database, (2) q is not entailed by the database and Σ′, and (3) for all Σ″ ⊆ Σ with Σ″ strictly preferred to Σ′, Σ″ has no stable model with the database. By Theorem 2 in [Magka, Krötzsch, and Horrocks2013] and an analysis similar to that of Theorem 2 of this paper (for combined complexity), it is not difficult to see that, for a fixed Σ′, conditions 1 and 2 can be checked with an N2ExpTime oracle, and condition 3 with a number of N2ExpTime oracle calls. For "there does not exist Σ′", we can simply enumerate all subsets Σ′, which takes exponentially many iterations. Therefore, query answering under the mentioned semantics is, for combined complexity, in exponential time with an oracle for an N2ExpTime-complete problem, as desired. ∎
Now let us focus on guarded rules. The proof of the following result is similar to that of Theorem 2, but employs the complexity results in [Calì, Gottlob, and Lukasiewicz2012]. The only thing we should be careful about is the constraints.
Theorem 4.
Let a preference-based ontology be given whose rule set is guarded and stratified and whose preference is one of those defined above. Given a database and a covered NBCQ, deciding whether the query is entailed under the rule repair semantics is PTime-complete for data complexity and 2ExpTime-complete for combined complexity.
For guarded rules with full negations, we have the following result, where the proof for data complexity is similar to that of Theorem 3, and the proof for combined complexity is similar to that of Theorem 2. Both parts rely on the corresponding complexity results in [Gottlob et al.2014].
Theorem 5.
Let a preference-based ontology be given whose rule set is guarded and whose preference is one of those defined above. Then, given a database and a covered NBCQ, deciding whether the query is entailed under the rule repair semantics is in BH for data complexity and 2ExpTime-complete for combined complexity.
Finally, we summarize the results of this section (Table 1):

            Data complexity    Combined complexity
RA + RS     PTime-complete     2ExpTime-complete
RA + Full   in BH              in ExpTime with an N2ExpTime oracle
G + Stra    PTime-complete     2ExpTime-complete
G + Full    in BH              2ExpTime-complete
Experimental Evaluation
To demonstrate the effectiveness of our approach, we have implemented a prototype system for query answering over R-acyclic rule languages under the rule repair semantics w.r.t. set inclusion, cardinality, prioritization, and weights, by calling a state-of-the-art ASP solver.
From Query Answering to ASP
To improve efficiency, we adopt a dedicated algorithm for each rule repair semantics; all of them are based on breadth-first search. Finding rule repairs w.r.t. set inclusion uses the basic process illustrated in Alg. 1, and exponentially many consistency checks may be conducted during the process. Rule repairs w.r.t. cardinality work better in practice, since there is no need to search the remaining levels once consistent sets have been found. For rule repairs w.r.t. a prioritization, we design an algorithm that iterates over the rules from low to high priority; once consistent results are found within the lower-priority rules, the search stops. As for weights, we search by deleting rules from the lowest weight to the greatest.
On the whole, the algorithms for prioritizations or weights are much more efficient if the rule set satisfies the following two conditions: (1) the number of rules with lower priority (smaller weight) is very small, even though the whole rule set is large; and (2) the rule set can be made consistent by deleting only rules with lower priority (smaller weight). These conditions are easily met in real applications, because incorrectness is mostly caused by newly added rules, and the number of such rules is normally small.
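Under these two conditions, the level-wise search can be sketched as follows: delete ever-larger subsets of the low-priority rules and stop at the first level that restores consistency. This is our own illustration (with a toy consistency oracle and hypothetical rule names), not QAIER's actual C++ code.

```python
from itertools import combinations

# `has_stable_model` abstracts the ASP-solver consistency check.
def prioritized_repairs(high, low, has_stable_model):
    """Repairs that keep all high-priority rules and delete as few
    low-priority rules as possible (assumes deleting low-priority
    rules suffices, i.e., condition 2 above)."""
    for k in range(len(low) + 1):            # delete k low-priority rules
        found = [frozenset(high | (low - set(dropped)))
                 for dropped in combinations(sorted(low), k)
                 if has_stable_model(high | (low - set(dropped)))]
        if found:
            return found                     # first consistent level wins
    return []                                # even deleting all of `low` fails

high = {"r1", "r2"}                          # trusted rules
low = {"r3", "r4"}                           # newly learned, unreliable rules
bad = {"r2", "r3"}                           # r2 and r3 conflict together
ok = lambda s: not (bad <= s)
print(prioritized_repairs(high, low, ok))    # keeps r1, r2, r4; drops r3
```

When condition 1 holds, the number of candidate deletions is exponential only in the small set of low-priority rules, which is why this search scales far better than the unrestricted one.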
Experiments
We developed a prototype system QAIER (Query Answering with Inconsistent Existential Rules; http://ss.sysu.edu.cn/%7ewh/qaier.html) in C++. QAIER can answer queries with inconsistent R-acyclic rule sets. When it needs to check the existence of stable models, QAIER invokes the ASP solver clingo 4.4.0 (http://sourceforge.net/projects/potassco/files/clingo/).
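For illustration, a minimal Python sketch of such a delegation might serialize a candidate rule set as an ASP program and inspect clingo's verdict on standard output. This is an assumption-laden mock-up, not QAIER's actual implementation (which is in C++).

```python
import subprocess

def to_asp(facts, rules):
    """Render facts and ground normal rules (head, pos, neg) as ASP text.
    A rule with an empty head becomes a constraint; facts carry no body."""
    lines = [f"{f}." for f in facts]
    for head, pos, neg in rules:
        body = list(pos) + [f"not {c}" for c in neg]
        lines.append(f"{head or ''} :- {', '.join(body)}.".lstrip())
    return "\n".join(lines)

def is_consistent(program, clingo="clingo"):
    """True iff clingo reports at least one stable model. Requires a
    clingo binary on PATH; note that checking for 'UNSATISFIABLE' is
    deliberate, since it contains 'SATISFIABLE' as a substring."""
    out = subprocess.run([clingo, "--quiet=2", "-"], input=program,
                         capture_output=True, text=True).stdout
    return "UNSATISFIABLE" not in out

print(to_asp(["bat(a)"], [("fly(X)", ["bat(X)"], []),
                          ("", ["fly(X)", "trogloxene(X)"], [])]))
# bat(a).
# fly(X) :- bat(X).
# :- fly(X), trogloxene(X).
```

A repair-search loop as in the previous sketch would then pass `is_consistent` as its consistency oracle, serializing each candidate rule subset before the call.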
[Table 2: run times (in seconds) of query answering on the modified LUBM instances, listed by instance id, #facts, and #negs.]
[Table 3: run times (in seconds) of query answering on the modified ChEBI instances, listed by instance id, #rules, and #negs.]
Benchmarks
To estimate the performance of QAIER from the viewpoint of data complexity, we use a modified LUBM (http://swat.cse.lehigh.edu/projects/lubm/) as a benchmark. Because LUBM is not R-acyclic, we modified it by changing atoms and deleting rules so that the modified LUBM is R-acyclic. We use HermiT (http://www.hermit-reasoner.com/) to transform the modified LUBM ontology into DL-clauses and replace at-least number restrictions in head atoms with existential quantification, obtaining 127 rules. Next, we add default negations or constraints, and introduce the prioritization and weights for the rule repair semantics. Considering that the number of default negations or constraints would not be very large, we introduce 9-11 of them for each instance. The introduced prioritization or weights depend on the reliability of the rules. We use the EUDG generator (http://www.informatik.uni-bremen.de/clu/combined/) to generate databases. An instance id in Table 2 indicates how many thousands of facts and how many unreliable rules the instance involves. For the performance from the viewpoint of combined complexity, we use the modified ChEBI [Magka, Krötzsch, and Horrocks2013] as a benchmark; an instance id in Table 3 indicates the numbers of molecules, chemical classes, and unreliable rules involved.

Experimental Results
Table 2 (Table 3, respectively) shows the data (combined, respectively) complexity performance of the rule repair semantics as the numbers of facts and negations (rules and negations, respectively) grow. All experiments were run under Linux Ubuntu 14.04.1 LTS on an HP Compaq 8200 Elite with a 3.4 GHz Intel Core i7 processor and 4 GB of 1333 MHz memory. The real numbers in the tables give the run time (in seconds) of query answering; if the time exceeds 1800 seconds, we write "–". #facts, #negs, and #rules denote the numbers of facts in the database, of default negations and constraints, and of rules, respectively; the remaining columns record the query answering times under the respective repair semantics. Each instance was computed three times and the average taken. Because QAIER computes all the stable models, the sizes or types of the queries are not important issues. Clearly, rule repairs w.r.t. cardinality, the prioritized preferences, and weights perform better than those w.r.t. set inclusion, owing to the small number of unreliable rules. This condition is easily met in realistic cases, because most rules are reliable while the newly learned rules considered unreliable are few.
Related Work and Conclusions
In terms of changing the rule set/TBox for repair, Meyer et al. (2006) proposed an algorithm running in ExpTime that finds maximally concept-satisfiable terminologies in ALC. Scharrenbach et al. (2010) showed that probabilistic description logics can be used to resolve conflicts and obtain a consistent knowledge base from which inferences can again be drawn. Qi and Du (2009) proposed model-based revision operators for terminologies in DLs, and Wang et al. (2014) introduced a model-theoretic approach to ontology revision. To address uncertainty arising from inconsistency, Gottlob et al. (2013) extended the Datalog+/- language with probabilistic uncertainty based on Markov logic networks. More generally, several works have focused on reasoning with inconsistent ontologies; see [Huang, van Harmelen, and ten Teije2005, Haase et al.2005] and the references therein. Surprisingly, this paper shows that for R-acyclic existential rules with R-stratified negations and for guarded existential rules with stratified negations, neither the data complexity nor the combined complexity of query answering increases under the rule repair semantics.
We have developed a general framework for handling inconsistent existential rules with default negations. Within this framework, we analyzed the data and combined complexity of inconsistency-tolerant query answering under rule repair semantics. We proposed approaches that simulate query answering under rule repairs by calling ASP solvers, and developed a prototype system called QAIER. Our experiments show that QAIER scales to large databases under rule repairs in practice. Future work will focus on identifying first-order rewritable classes under rule repair semantics.
Acknowledgments
We thank the reviewers for their comments and suggestions for improving the paper. The authors would like to thank Yongmei Liu and her research group for their helpful and informative discussions. Hai Wan’s research was in part supported by the National Natural Science Foundation of China under grant 61573386, Natural Science Foundation of Guangdong Province of China under grant S2012010009836, and Guangzhou Science and Technology Project (No. 2013J4100058).
References
 [Alviano and Pieris2015] Alviano, M., and Pieris, A. 2015. Default negation for non-guarded existential rules. In Proceedings of the 34th ACM Symposium on Principles of Database Systems, PODS 2015, Melbourne, Australia, May 31 - June 4, 2015, 79–90.
 [Arenas, Bertossi, and Chomicki1999] Arenas, M.; Bertossi, L. E.; and Chomicki, J. 1999. Consistent query answers in inconsistent databases. In Proceedings of the Eighteenth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, May 31 - June 2, 1999, Philadelphia, Pennsylvania, USA, 68–79.

 [Bienvenu, Bourgaux, and Goasdoué2014] Bienvenu, M.; Bourgaux, C.; and Goasdoué, F. 2014. Querying inconsistent description logic knowledge bases under preferred repair semantics. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, July 27-31, 2014, Québec City, Québec, Canada, 996–1002.
 [Calì, Gottlob, and Lukasiewicz2012] Calì, A.; Gottlob, G.; and Lukasiewicz, T. 2012. A general Datalog-based framework for tractable query answering over ontologies. Journal of Web Semantics 14:57–83.
 [Chang and Kadin1996] Chang, R., and Kadin, J. 1996. The Boolean hierarchy and the polynomial hierarchy: A closer connection. SIAM Journal on Computing 25(2):340–354.
 [Chomicki2007] Chomicki, J. 2007. Consistent query answering: Five easy pieces. In Proceedings of the 11th International Conference on Database Theory, ICDT 2007, Barcelona, Spain, January 10-12, 2007, 1–17.
 [Dantsin et al.2001] Dantsin, E.; Eiter, T.; Gottlob, G.; and Voronkov, A. 2001. Complexity and expressive power of logic programming. ACM Computing Surveys 33(3):374–425.
 [Du, Qi, and Shen2013] Du, J.; Qi, G.; and Shen, Y. 2013. Weight-based consistent query answering over inconsistent knowledge bases. Knowledge and Information Systems 34(2):335–371.
 [Eiter and Gottlob1995] Eiter, T., and Gottlob, G. 1995. The complexity of logic-based abduction. Journal of the ACM 42(1):3–42.
 [Ferraris, Lee, and Lifschitz2011] Ferraris, P.; Lee, J.; and Lifschitz, V. 2011. Stable models and circumscription. Artificial Intelligence 175(1):236–263.
 [Gelfond and Lifschitz1988] Gelfond, M., and Lifschitz, V. 1988. The stable model semantics for logic programming. In Proceedings of the Fifth International Conference and Symposium on Logic Programming, Seattle, Washington, August 15-19, 1988 (2 Volumes), 1070–1080.
 [Gottlob et al.2013] Gottlob, G.; Lukasiewicz, T.; Martinez, M. V.; and Simari, G. I. 2013. Query answering under probabilistic uncertainty in Datalog+/- ontologies. Annals of Mathematics and Artificial Intelligence 69(1):37–72.
 [Gottlob et al.2014] Gottlob, G.; Hernich, A.; Kupke, C.; and Lukasiewicz, T. 2014. Stable model semantics for guarded existential rules and description logics. In Proceedings of the Fourteenth International Conference on Principles of Knowledge Representation and Reasoning, KR 2014, Vienna, Austria, July 20-24, 2014, 258–267.
 [Haase et al.2005] Haase, P.; van Harmelen, F.; Huang, Z.; Stuckenschmidt, H.; and Sure, Y. 2005. A framework for handling inconsistency in changing ontologies. In Proceedings of The Semantic Web - ISWC 2005, 4th International Semantic Web Conference, Ireland, November 6-10, 2005, 353–367.
 [Huang, van Harmelen, and ten Teije2005] Huang, Z.; van Harmelen, F.; and ten Teije, A. 2005. Reasoning with inconsistent ontologies. In Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence, IJCAI 2005, Edinburgh, Scotland, UK, July 30 - August 5, 2005, 454–459.
 [Kalyanpur et al.2006] Kalyanpur, A.; Parsia, B.; Sirin, E.; and Grau, B. C. 2006. Repairing unsatisfiable concepts in OWL ontologies. In Proceedings of The Semantic Web: Research and Applications, 3rd European Semantic Web Conference, ESWC 2006, Budva, Montenegro, June 11-14, 2006, 170–184.
 [Lehmann et al.2011] Lehmann, J.; Auer, S.; Bühmann, L.; and Tramp, S. 2011. Class expression learning for ontology engineering. Journal of Web Semantics 9(1):71–81.
 [Lembo et al.2010] Lembo, D.; Lenzerini, M.; Rosati, R.; Ruzzi, M.; and Savo, D. F. 2010. Inconsistency-tolerant semantics for description logics. In Proceedings of Web Reasoning and Rule Systems - Fourth International Conference, RR 2010, Bressanone/Brixen, Italy, September 22-24, 2010, 103–117.
 [Lukasiewicz et al.2015] Lukasiewicz, T.; Martinez, M. V.; Pieris, A.; and Simari, G. I. 2015. From classical to consistent query answering under existential rules. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, January 25-30, 2015, Austin, Texas, USA.
 [Lukasiewicz, Martinez, and Simari2012] Lukasiewicz, T.; Martinez, M. V.; and Simari, G. I. 2012. Inconsistency handling in Datalog+/- ontologies. In Proceedings of the 20th European Conference on Artificial Intelligence, ECAI 2012, Montpellier, France, August 27-31, 2012, 558–563.
 [Lukasiewicz, Martinez, and Simari2013] Lukasiewicz, T.; Martinez, M. V.; and Simari, G. I. 2013. Complexity of inconsistency-tolerant query answering in Datalog+/-. In Informal Proceedings of the 26th International Workshop on Description Logics, Ulm, Germany, July 23-26, 2013, 488–500.
 [Magka, Krötzsch, and Horrocks2013] Magka, D.; Krötzsch, M.; and Horrocks, I. 2013. Computing stable models for nonmonotonic existential rules. In Proceedings of the 23rd International Joint Conference on Artificial Intelligence, IJCAI 2013, Beijing, China, August 3-9, 2013, 1031–1038.
 [Meyer et al.2006] Meyer, T. A.; Lee, K.; Booth, R.; and Pan, J. Z. 2006. Finding maximally satisfiable terminologies for the description logic ALC. In Proceedings of the Twenty-First National Conference on Artificial Intelligence and the Eighteenth Innovative Applications of Artificial Intelligence Conference, July 16-20, 2006, Boston, Massachusetts, USA, 269–274.
 [Qi and Du2009] Qi, G., and Du, J. 2009. Model-based revision operators for terminologies in description logics. In Proceedings of the 21st International Joint Conference on Artificial Intelligence, IJCAI 2009, Pasadena, California, USA, July 11-17, 2009, 891–897.
 [Scharrenbach et al.2010] Scharrenbach, T.; Grütter, R.; Waldvogel, B.; and Bernstein, A. 2010. Structure preserving TBox repair using defaults. In Proceedings of the 23rd International Workshop on Description Logics (DL 2010), Waterloo, Ontario, Canada, May 4-7, 2010, 384–395.
 [Wang et al.2014] Wang, Z.; Wang, K.; Qi, G.; Zhuang, Z.; and Li, Y. 2014. Instance-driven TBox revision in DL-Lite. In Informal Proceedings of the 27th International Workshop on Description Logics, Vienna, Austria, July 17-20, 2014, 734–745.
 [Zhang, Zhang, and You2015] Zhang, H.; Zhang, Y.; and You, J.-H. 2015. Existential rule languages with finite chase: Complexity and expressiveness. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, January 25-30, 2015, Austin, Texas, USA.