1 Introduction
The problem of logical representation for word puzzles has recently received considerable attention [Ponnuru et al. (2004), Shapiro (2011), Baral and Dzifcak (2012), Schwitter (2013)]. In all of these studies, however, the input information is assumed to be consistent, and the proposed logical representations break down on inconsistent input. The present paper proposes an approach that works in the presence of inconsistency, and not just for word puzzles.
At first sight, one might think that the mere use of a paraconsistent logic, such as Belnap's four-valued logic [Belnap Jr (1977)] or Annotated Logic Programming [Blair and Subrahmanian (1989), Kifer and Subrahmanian (1992)], is all that is needed to address the problem, but this is not so. We do start with a well-known paraconsistent logic, called Annotated Predicate Calculus (APC) [Kifer and Lozinskii (1992)], which is related to the aforementioned Annotated Logic Programs, but this is not enough: a number of issues arise in the presence of paraconsistency, and different translations might seem equivalent but behave differently when inconsistent information is taken into account. As it turns out, several factors can affect the choice of the "right" logical representation for many NL sentences, especially for implications. We formalize several principles to guide the translation of NL sentences into APC, principles that can be incorporated into existing controlled language translators, such as Attempto Controlled English (ACE) [Fuchs et al. (2008)] and PENG Light [White and Schwitter (2009)]. We illustrate these issues with the classical Jobs Puzzle [Wos et al. (1984)] and show how inconsistent information affects the conclusions.
To address the above problems formally, we introduce a new kind of nonmonotonic semantics for APC, which is based on consistency-preferred stable models and is inspired by the concept of the most epistemically consistent models of [Kifer and Lozinskii (1992)]. We argue that this new semantics makes APC a good platform for dealing with inconsistency in word puzzles and, more generally, for translating natural language sentences into logic.
Finally, we show that the consistency-preferred stable models of APC can be computed using answer-set programming (ASP) systems that support preferences over stable models, such as Clingo [Gebser et al. (2011)] with the Asprin add-on [Brewka et al. (2015)].
This paper is organized as follows. Section 2 provides background material on APC. In Section 3, we consider the logic programming subset of APC and define preferential stable models for it. In Section 4, we show that this subset (under the consistency-preferred stable model semantics) can be encoded in ASP in a semantics-preserving way. In Section 5, we discuss variations of Jobs Puzzle [Wos et al. (1984)] in which various kinds of inconsistency are injected into the formulation of the puzzle. Section 6 explains that logical encoding of common knowledge in the presence of inconsistency needs to take into account a number of considerations that do not arise when inconsistency is not an issue; we organize those considerations into several different principles and illustrate their impact. Section 7 compares our approach with related work, and Section 8 concludes the paper. Finally, Appendix A contains the full encoding of Jobs Puzzle in APC under the consistency-preferred semantics. This appendix also includes variations that inject various kinds of inconsistency into the puzzle and discusses the derived conclusions. Appendices B and C contain similar analyses of other well-known puzzles: the Zebra Puzzle (https://en.wikipedia.org/wiki/Zebra_Puzzle) and the Marathon Puzzle [C. Guéret and Sevaux (2000)]. Ready-to-run encodings of these programs in Clingo/Asprin can be found at https://bitbucket.org/tiantiangao/apc_lp.
2 Annotated Predicate Calculus: Background and Extensions
To make this paper self-contained, this section provides the necessary background on APC. At the end of the section, we define new semantic concepts for APC, which will be employed in later sections.
The alphabet of APC consists of countably infinite sets of: variables, function symbols (each symbol having an arity; constants are viewed as 0-ary function symbols), predicate symbols, truth annotations, quantifiers, and logical connectives. In [Kifer and Lozinskii (1992)], truth annotations could come from an arbitrary upper semilattice (called "belief semilattice" there), but here we will use only ⊥ (unknown), f (false), t (true) and ⊤ (contradiction or inconsistency), which are partially ordered as follows: ⊥ < f < ⊤ and ⊥ < t < ⊤. Terms in APC are constructed exactly as in predicate calculus: from constants, variables and function symbols. A ground term is one that has no variables.
Definition 1 (Atomic formulas [Kifer and Lozinskii (1992)]).
A predicate term has the form p(t1, …, tn), where p is an n-ary predicate symbol and t1, …, tn are terms. An APC atomic formula (or an APC predicate) has the form p(t1, …, tn):s, where p(t1, …, tn) is a predicate term and s is an annotation indicating the degree of belief (or truth) in the predicate term. A ground atomic formula is an atomic formula that has no variables. ∎
We call an atomic formula of the form p:s a t-predicate (resp., an f-, ⊥-, or ⊤-predicate) if s is t (resp., f, ⊥, or ⊤).
APC includes the usual universal and existential quantifiers, the connectives ∧ and ∨, and two negation and two implication connectives: the ontological negation and ontological implication, plus the epistemic negation and epistemic implication. As will be seen later, the distinction between the ontological and the epistemic connectives is useful because they behave differently in the presence of inconsistency.
Definition 2 (APC well-formed formulas [Kifer and Lozinskii (1992)]).
An APC well-formed formula is defined inductively as follows:
- an atomic formula p:s is a well-formed formula;
- if F and G are well-formed formulas, then so are the formulas built from them with the connectives listed above; and
- if F is a formula and X is a variable, then (∀X F) and (∃X F) are formulas. ∎
An APC literal is either an APC predicate or an ontologically negated APC predicate. An epistemic literal is either an APC predicate or an epistemically negated APC predicate.
In [Kifer and Lozinskii (1992)], the semantics was defined with respect to general models, but here we will be dealing with logic programs, and the Herbrand semantics will be handier.
Definition 3 (APC Herbrand universe, base, and interpretations).
The Herbrand universe for APC is the set of all ground terms. The Herbrand base for APC is the set of all ground APC atomic formulas. A Herbrand interpretation I for APC is a nonempty subset of the Herbrand base that is closed with respect to the following operations:
- if p:s ∈ I, then also p:s′ ∈ I for every annotation s′ ≤ s; and
- if p:s1 ∈ I and p:s2 ∈ I, then p:lub(s1,s2) ∈ I.
The annotations used in APC form a lattice (in our case a 4-element lattice) with the order ≤ given above and with lub as the least upper bound operator of that lattice.
We will also use I⊤ to denote the subset of all ⊤-predicates in I. ∎
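The four-valued lattice and the closure conditions of Definition 3 can be sketched in Python. This is a toy model of our own, not part of the paper's formalism; the names `VALUES`, `leq`, `lub`, and `close` are ours:

```python
# A toy model of the 4-valued APC annotation lattice:
# "bot" (unknown) lies below "f" and "t", which both lie below "top" (inconsistent).
VALUES = ("bot", "f", "t", "top")
ORDER = {("bot", "bot"), ("bot", "f"), ("bot", "t"), ("bot", "top"),
         ("f", "f"), ("f", "top"), ("t", "t"), ("t", "top"), ("top", "top")}

def leq(s1, s2):
    """s1 <= s2 in the knowledge order."""
    return (s1, s2) in ORDER

def lub(s1, s2):
    """Least upper bound of two annotations in the 4-element lattice."""
    ubs = [s for s in VALUES if leq(s1, s) and leq(s2, s)]
    # Pick the smallest upper bound (fewest elements below it).
    return min(ubs, key=lambda s: sum(leq(x, s) for x in VALUES))

def close(interp):
    """Close a set of annotated atoms {(pred_term, ann)} under Definition 3:
    downward closure w.r.t. <= and closure under lub."""
    closed = set(interp)
    changed = True
    while changed:
        changed = False
        for (p, s) in list(closed):
            for s2 in VALUES:
                if leq(s2, s) and (p, s2) not in closed:
                    closed.add((p, s2))
                    changed = True
        for (p, s1) in list(closed):
            for (q, s2) in list(closed):
                if p == q and (p, lub(s1, s2)) not in closed:
                    closed.add((p, lub(s1, s2)))
                    changed = True
    return closed
```

For example, closing {male(robin):t, male(robin):f} also yields the atoms annotated ⊤ and ⊥, mirroring how joint belief in t and f entails inconsistency.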
As usual, a variable assignment ν is a mapping that takes a variable and returns a ground term. This mapping is extended to terms as follows: ν(f(t1, …, tn)) = f(ν(t1), …, ν(tn)). We will disregard variable assignments for formulas with no free variables (called sentences), since assignments do not affect ground formulas.
Definition 4 (APC Herbrand Models).
Let I be an APC Herbrand interpretation and ν be a variable assignment. For an atomic formula p:s, we write I ⊨ν p:s if and only if ν(p):s ∈ I. For well-formed formulas F and G, we write:
- I ⊨ν F ∧ G if and only if I ⊨ν F and I ⊨ν G;
- I ⊨ν F ∨ G if and only if I ⊨ν F or I ⊨ν G;
- I ⊨ν ¬F if and only if it is not the case that I ⊨ν F;
- I ⊨ν (∀X F) if and only if I ⊨μ F for every assignment μ that differs from ν only in its X value;
- I ⊨ν (∃X F) if and only if I ⊨μ F for some μ that differs from ν only in its X value;
- I ⊨ν F ← G if and only if I ⊨ν G implies I ⊨ν F;
- I ⊨ν ∼(p:s) if and only if I ⊨ν p:∼s, where ∼t = f, ∼f = t, ∼⊤ = ⊤, and ∼⊥ = ⊥.
We also define the remaining connectives as abbreviations; in particular, the epistemic implication F ⇐ G stands for F ∨ ∼G.
A formula F is satisfied by I if and only if I ⊨ν F for every valuation ν; in this case we write simply I ⊨ F. I is a model of a set S of formulas if and only if every formula of S is satisfied in I. A set of formulas S logically entails a formula F, denoted S ⊨ F, if and only if every model of S is also a model of F. ∎
APC has two types of logical entailment: ontological and epistemic. Ontological entailment is the entailment ⊨, which we have just defined. Before defining the epistemic entailment, we motivate it with a number of examples. To avoid clutter, in all examples we will only show the highest annotation for each APC predicate. For instance, if a model contains p:⊤, then we will not show p:t, p:f, or p:⊥.
Example 1.
Consider the following set of APC formulas . It has four models: , , and . Thus, holds (since occurs in every model of ). ∎
Example 2.
The APC set of formulas has two models: and . Therefore, holds. ∎
Example 3.
This set of formulas is similar to that in Example 1 except that it uses epistemic implication instead of the ontological one. One of the models of that set is and therefore . ∎
Examples 1 and 2 show that ontological implication has the modus ponens property, but it may be too strong, as it allows one to draw conclusions from inconsistent information. The epistemic implication of Example 3, on the other hand, is too cautious and does not have the modus ponens property under ⊨. However, epistemic implication does have the modus ponens property, and it blocks drawing conclusions from inconsistency, under the epistemic entailment, defined next.
Definition 5 (Most e-consistent models [Kifer and Lozinskii (1992)]).
A Herbrand interpretation I is more (or equally) e-consistent than another interpretation J if and only if, for every ground predicate term p, p:⊤ ∈ I implies p:⊤ ∈ J.
A model M of a set of formulas S is a most e-consistent model if there is no other model of S that is strictly more e-consistent than M.
A program S epistemically entails a formula F if and only if every most e-consistent model of S is also a model of F. ∎
Going back to Example 3, it has only one most e-consistent model, so the expected conclusion holds under the epistemic entailment. The next example shows that epistemic entailment does not propagate inconsistency to conclusions.
Example 4.
Let . Observe that has a most econsistent model , in which does not hold. Therefore, holds.∎
Next we observe that not all inconsistent information is created equal, as people have different degrees of confidence in different pieces of information. For instance, one normally would have higher confidence in the fact that someone named Robin is a person than in the fact that Robin is a male. Therefore, given a choice, we would hold it less likely that the person-fact is inconsistent than that the male-fact is. Likewise, in the following example, given a choice, we are more likely to hold to the belief that Pete is a person than to the belief that he is rich.
Example 5.
Consider the following formulas





There are three most econsistent models:




Based on the aforesaid confidence considerations, we are more likely to believe that Pete is a person than that he is a businessman or rich. Therefore, we are likely to think that the models and are better descriptions of the real world than .∎
In this paper, we capture the above intuition by extending the notion of most econsistent models with additional preferences over models.
Definition 6 (Consistencypreference relation and consistencypreferred models).
A consistency preference over interpretations with respect to a set π of ground predicate terms is defined as follows:
- An interpretation I is consistency-preferred over J with respect to π, denoted I ≺π J, if and only if the set of predicates of π that are inconsistent (i.e., annotated ⊤) in I is a proper subset of the set of predicates of π that are inconsistent in J.
- Interpretations I and J are consistency-equal with respect to π, denoted I ≈π J, if and only if they make exactly the same predicates of π inconsistent.
A consistency-preference relation ≺Π, where Π = ⟨π1, …, πn⟩ is a sequence of sets of ground predicates, is defined as the lexicographic order composed out of the sequence of consistency preferences ≺π1, …, ≺πn. Namely, I ≺Π J if and only if there is an i such that I ≺πi J and I ≈πj J for all j < i.
A model M of a set of formulas S is called (most) consistency-preferred with respect to Π if S has no other model M′ such that M′ ≺Π M.
We will always assume that πn is the set of all ground predicates and, therefore, any most consistency-preferred model is also a most e-consistent one.
A program S epistemically entails a formula F with respect to a consistency-preference relation Π if and only if every most consistency-preferred model of S is also a model of F.
∎
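The lexicographic comparison of Definition 6 can be sketched as follows. This is a toy Python model of our own: we represent an interpretation by its set of (predicate, annotation) pairs and a priority sequence Π by a list of predicate sets; the function names are ours:

```python
def top_set(interp, pi):
    """Predicate terms of pi that interp annotates with 'top' (inconsistent)."""
    return {p for (p, s) in interp if s == "top" and p in pi}

def preferred(m1, m2, priorities):
    """True if m1 is strictly consistency-preferred over m2 w.r.t. the
    sequence of predicate sets `priorities` (lexicographic order)."""
    for pi in priorities:
        t1, t2 = top_set(m1, pi), top_set(m2, pi)
        if t1 < t2:      # strictly fewer inconsistencies at this level: preferred
            return True
        if t1 != t2:     # incomparable or worse at this level: not preferred
            return False
    return False         # consistency-equal at every level

# Hypothetical models in the spirit of Example 5: one makes the person-fact
# inconsistent, the other only the rich-fact.
m_bad = {("person(pete)", "top")}
m_good = {("rich(pete)", "top")}
prio = [{"person(pete)"}, {"person(pete)", "rich(pete)"}]
```

Here `preferred(m_good, m_bad, prio)` holds and the converse does not, matching the intuition that inconsistency in person-facts should be avoided first.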
3 Logic Programming Subset of APC and Its Stable Models Semantics
In this section, we define the logic programming subset of APC and give it a new kind of semantics based on consistency-preferred stable models.
Definition 7.
A program in this subset consists of rules in which each literal is an epistemic literal. Variables are assumed to be implicitly universally quantified. A formula in this subset is either a single epistemic literal, a conjunction of epistemic literals, or a disjunction of them. ∎
The formula to the left of the rule's implication is called the head of the rule, and the formula to the right is the body of that rule.
Recall from Section 2 that epistemic negation can be pushed inside and eliminated via the law ∼(p:s) = p:∼s, where ∼t = f, ∼f = t, ∼⊤ = ⊤, and ∼⊥ = ⊥. So, for brevity, we assume that all programs are transformed in this way and the epistemic negation is eliminated.
When the rule body is empty, the ontological implication symbol is usually omitted and the rule becomes a disjunction. Such a disjunction can also be represented as an epistemic implication, and sometimes this representation is closer to a normal English sentence. For instance, the sentence "If a person is a businessman then that person is rich" can be represented as an epistemic implication, rich(X):t ⇐ businessman(X):t, which is easier to read than the equivalent disjunction rich(X):t ∨ businessman(X):f.
The notion of stable models for this subset carries over from standard answer set programming (ASP) with very few changes.
Definition 8 (The GelfondLifschitz reduct for ).
Let P be a program in the above subset and I be a Herbrand interpretation. The reduct of P with respect to I, denoted P^I, is a program free from ontological negation, obtained by
- removing every rule that contains an ontologically negated literal ¬L in the body such that L ∈ I; and
- removing the remaining ontologically negated literals from the bodies of all remaining rules. ∎
Definition 9 (Stable models).
A Herbrand interpretation I is a stable model of a program P if I is a minimal model of the reduct P^I. Here, minimality is with respect to set inclusion. ∎
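The reduct and the stability check can be sketched in Python for the special case of non-disjunctive rules. This is our own simplification: real interpretations of this subset must additionally be lattice-closed, which we omit here, and the function names are ours:

```python
def reduct(rules, interp):
    """Gelfond-Lifschitz reduct. A rule is (head, pos, neg), where atoms are
    (pred_term, annotation) pairs and `neg` lists the ontologically negated
    body atoms."""
    out = []
    for head, pos, neg in rules:
        if not any(a in interp for a in neg):  # drop rules refuted by interp
            out.append((head, pos))            # and drop the negated literals
    return out

def least_model(pos_rules):
    """Least model of a negation-free program with single-atom heads."""
    m = set()
    changed = True
    while changed:
        changed = False
        for head, pos in pos_rules:
            if all(a in m for a in pos) and head not in m:
                m.add(head)
                changed = True
    return m

def is_stable(rules, interp):
    """interp is stable iff it is the least model of its own reduct."""
    return least_model(reduct(rules, interp)) == interp
```

For instance, for the single rule "p:t if not q:t", the interpretation {p:t} is stable while {q:t} is not.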
Definition 10 (Consistency-preferred stable models).
Let ≺Π be a consistency-preference relation of Definition 6, where Π is a sequence of sets of ground predicates. An interpretation I is a (most) consistency-preferred stable model of a program P if and only if:
- I is a stable model of P, and
- I is a most consistency-preferred model of P with respect to ≺Π. ∎
4 Embedding into ASP
We now show that the logic programming subset of APC can be isomorphically embedded in ASP extended with a model-preference framework, such as the Clingo system [Gebser et al. (2011)] with its Asprin extension [Brewka et al. (2015)]. We then prove the correctness of this embedding, i.e., that it is one-to-one and preserves the semantics. Next, we define the subset of ASP onto which our subset maps.
Definition 11.
The target language is a subset of ASP programs in which the only predicate is truth/2, used to reify the APC predicate terms and associate them with truth values. That is, atoms have the form truth(p, v), where the first argument is the reification of an APC predicate term and the second argument is one of these truth annotations: t, f, top, or bottom.
A program in this subset consists of a set of rules over truth/2-predicates.
A formula in this subset is either a single truth/2-predicate, a conjunction of such predicates, or a disjunction of them. ∎
Definition 12.
The embedding of a source program, denoted ξ, is defined recursively as follows (where τ is the truth value mapping):
- τ(t) = t
- τ(f) = f
- τ(⊤) = top
- τ(⊥) = bottom
- ξ(p:s) = truth(p, τ(s)), where p:s is an APC predicate
- ξ(¬A) = not ξ(A), where A is an APC predicate
- ξ(A ∨ D) = ξ(A) ∨ ξ(D), where A is an APC predicate and D is a disjunction of APC predicates
- ξ(A ∧ C) = ξ(A), ξ(C), where A is an APC literal and C is a conjunction of APC literals
- ξ(H ← B) = ξ(H) :- ξ(B), where H (resp., B) denotes the head (resp., the body) of a rule.
The embedding also applies to APC Herbrand interpretations: each APC Herbrand interpretation (which is a set of APC atoms of the form p:s) is mapped to a set of atoms of the form truth(p, τ(s)). ∎
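The atom-level part of this embedding can be sketched in Python (our own illustration; the reified predicate term is simply kept as a string, and the function names are ours):

```python
def embed_atom(pred_term, ann):
    """Map an APC atom p:s to the reified ASP atom truth(p, tau(s))."""
    tau = {"t": "t", "f": "f", "top": "top", "bot": "bottom"}
    return "truth({},{})".format(pred_term, tau[ann])

def embed_interpretation(interp):
    """Apply the atom-level embedding to a whole Herbrand interpretation,
    given as a set of (pred_term, annotation) pairs."""
    return {embed_atom(p, s) for (p, s) in interp}
```

For example, the APC atom male(robin):⊤ is reified as the ASP atom truth(male(robin),top).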
We require that each program in this subset includes the following background axioms to match the semantics of APC:

truth(X,top) :- truth(X,t), truth(X,f).
truth(X,t) :- truth(X,top).
truth(X,f) :- truth(X,top).
truth(X,bottom).
Lemma 1.
The embedding is a onetoone correspondence. ∎
Proof.
As mentioned, we can limit our attention to programs free of epistemic negation. First, it is obvious that the embedding is injective on APC literals. Injectivity on APC conjunctions and disjunctions can be shown by a straightforward induction on the number of conjuncts and disjuncts. Surjectivity follows similarly, because it is straightforward to define the inverse of the embedding by reversing the equations of Definition 12. ∎
Next, we show that the above APC-to-ASP embedding preserves models, the Gelfond-Lifschitz reduct, stable models, and consistency-preference relations.
Lemma 2.
The models of any program in the image of the embedding are closed with respect to the least upper bound operator and downward-closed with respect to the ≤ ordering. Also, an interpretation is a model of a source program if and only if its embedding is a model of the embedded program.
Proof.
Recall that every embedded program is required to include the four rules listed right after Definition 12. These rules obviously enforce the requisite closures. The second part of the lemma follows directly from the definitions. ∎
Lemma 3.
The embedding preserves the Gelfond-Lifschitz reduct: the embedding of the reduct of a program equals the reduct of the embedded program. ∎
Proof.
For every predicate , we have if and only if , by Lemma 2. By the same lemma, if then where if and only if , where . As a result, rule gets eliminated by GelfondLifschitz reduction if and only if is eliminated and a negative literal in the body of gets dropped if and only if its image in gets dropped. ∎
Lemma 4.
Let M be an APC Herbrand interpretation. M is an APC Herbrand model of a program P if and only if the embedding of M is a model of the embedding of P. ∎
Proof.
If is a rule then if and only if and if and only if . Thus, if and only if . ∎
Lemma 5.
Let M1 and M2 be APC Herbrand interpretations. M1 ⊆ M2 if and only if the embedding of M1 is a subset of the embedding of M2. ∎
Proof.
Follows directly from the definition of and its inverse. ∎
Theorem 6.
An interpretation is a stable model of a source program if and only if its embedding is a stable model of the embedded program. ∎
Proof.
By Lemma 4, an interpretation is a model of the reduct of a program if and only if its embedding is a model of the reduct of the embedded program. Thus, the set of models of the former is in a one-to-one correspondence with the set of models of the latter. By Lemma 5, this correspondence preserves set inclusion, so the set of minimal models of the reduct stands in one-to-one correspondence with the set of minimal models of the embedded reduct. ∎
A consistency-preference relation over Π = ⟨π1, …, πn⟩ is translated into the following Asprin [Brewka et al. (2015)] preference relation, along with several subset preference relations, each corresponding to one of the πi that are part of Π (see Definition 6).

#preference(prf, lexico){ 1::name(prf1); 2::name(prf2); ...; n::name(prfn) }.
#preference(prf1, subset){ the list of elements in π1 }.
...
#preference(prfn, subset){ the list of elements in πn }.
Lemma 7.
Let M1 and M2 be APC Herbrand interpretations, let ≺Π be a consistency preference relation, and let its corresponding Asprin preference relation be as above. M1 ≺Π M2 if and only if the embedding of M1 is preferred over the embedding of M2 with respect to the Asprin preference relation. ∎
Proof.
The definitions in the Asprin manual of the lexico and subset preference relations, as applied to our preference statements given just prior to Lemma 7, are just a paraphrase of the lexicographic consistency-preference relation in Definition 6. The lemma now follows from the obvious fact that the embedding maps the ⊤-literals of an APC interpretation onto the ASP literals of the form truth(p,top). ∎
Theorem 8.
An interpretation is a consistency-preferred stable model of a source program with respect to a consistency preference relation Π if and only if its embedding is a preferred model of the embedded program with respect to the corresponding Asprin preference relation. ∎
5 Jobs Puzzle and Inconsistency
Jobs Puzzle [Wos et al. (1984)] is a classical logical puzzle that became a benchmark of sorts for many automatic theorem provers [Shapiro (2011), Schwitter (2013)]; it is also included in TPTP (Thousands of Problems for Theorem Provers, http://www.cs.miami.edu/~tptp/). The usual description of Jobs Puzzle does not include implicit knowledge, such as the facts that a person is either a male or a female (but not both), that the husband of a person must be unique, etc., so we add this knowledge explicitly, as in [Schwitter (2013)]. We also changed the name Steve to Robin in order to better illustrate one form of inconsistency.

- There are four people: Roberta, Thelma, Robin and Pete.
- Among them, they hold eight different jobs.
- Each holds exactly two jobs.
- The jobs are: chef, guard, nurse, telephone operator, police officer (gender not implied), teacher, actor, and boxer.
- The job of nurse is held by a male.
- The husband of the chef is the telephone operator.
- Roberta is not a boxer.
- Pete has no education past the ninth grade.
- Roberta, the chef, and the police officer went golfing together.
In sum, there are four people and eight jobs, and to solve the puzzle one must figure out who holds which jobs. The solution is that Thelma is a chef and a boxer (and is married to Pete). Pete is a telephone operator and an actor. Roberta is a teacher and a guard. Finally, Robin is a police officer and a nurse.
However, if we inject inconsistency into the puzzle, current logical approaches fail because they are based on logics that do not tolerate inconsistency. Consider the following examples.
Example 6.
Let us add to the puzzle that "Thelma is an actor." Given that the original puzzle implies that Thelma is not an actor (she is a chef and a boxer), this addition causes inconsistency. A first-order encoding of the puzzle (as, say, in TPTP) or an ASP-based one as in [Schwitter (2013)] will not find any models. In contrast, an encoding in our logic programming subset of APC can isolate the inconsistent information. There are two possibilities: one where the fact that Thelma is an actor is inconsistent and the other where the fact that Thelma is a female is. If we add the background knowledge that Thelma is a female's name, it becomes less likely that Thelma's gender is inconsistent, so the only consistency-preferred model will have a single inconsistent conclusion, that Thelma is an actor, while all other true facts will remain consistent.
Example 7.
Consider adding the sentences "Robin is a male name" and "Robin is a female name," which together imply that Robin is both a male and a female. The first-order and ASP-based encodings will, again, find no models, while an APC-based encoding will localize the inconsistency to just the male and female facts about Robin.
Example 8.
Consider adding the sentence "Robin is Thelma's husband." Since the original Jobs Puzzle implies that Pete is Thelma's husband, this will cause inconsistency. If we add the background knowledge that the husband must be unique, then, again, the encoding of this modified puzzle will localize the inconsistency to just the aforesaid husband-facts.
6 Knowledge Representation Principles for Inconsistency
Merely encoding Jobs Puzzle in a paraconsistent logic is not enough, because such an encoding is not unique: when inconsistency is taken into account, more information needs to be provided to obtain the encodings that match user intent. The main problem is that, if inconsistency is allowed, the number of possible worlds can grow to many hundreds even in relatively simple scenarios like Jobs Puzzle, and this practically annuls the benefits of the switch to a paraconsistent logic. We have already seen small examples of such scenarios at the end of Section 2, which motivated our notion of consistency preference, but there are more. We organize these scenarios around six main principles.
Principle 1: Contrapositive inference
Like in classical logic, contrapositive inference may be useful for knowledge representation. Consider the following sentences:

- If someone is a nurse, then that someone is educated.
- Pete is not educated.
We could encode the first sentence using the epistemic implication or the ontological one. Classically, the above sentences imply that Pete is not a nurse, but an encoding of the first sentence using the ontological implication would not allow for that conclusion. If contrapositive inference is required, the epistemic implication should be used.
Example 9.
Consider educated educated It has only one most consistency preferred model with respect to (with ), namely educated. Therefore, holds.
The above example uses contrapositive inference, but this is not always desirable. For instance, suppose we instead use the ontological implication to block contrapositive inference. Observe that the resulting set of formulas has a most consistency-preferred model in which the contrapositive conclusion does not hold, and this is exactly what we want, even if Robin happens not to be a male (in the USA, as opposed to the U.K.). ∎
Principle 2: Propagation of inconsistency
As discussed in Example 2, APC gives us a choice of whether or not to draw conclusions from inconsistent information, and this is a useful choice. One way to block such inferences, illustrated in that example, is to use the epistemic implication. Another way is to use the ontological implication with a guard in the rule body that explicitly requires the premise not to be inconsistent (i.e., not annotated ⊤).
Both techniques block inferences from inconsistent information, but the second also blocks inference by contraposition, as discussed under Principle 1. The following examples illustrate the use of both of these methods.
Example 10.
Let educated Observe that there is one most consistency preferred model with respect to (as before, ) educated. Therefore, educated. ∎
Example 11.
Let educated. As in the previous example, has a most consistency preferred model educated and so educated. ∎
In both of these examples, inconsistency is not propagated through the rules, but Example 10 allows for contrapositive inference, while Example 11 does not. Indeed, suppose that instead of we had . Then, in the first case, would be derived, while in the second it would not.
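The contrast between an unguarded rule, which fires even on inconsistent premises, and a guarded rule, which requires its premise to be consistently true, can be sketched in miniature in Python. This is our own illustration, not the paper's encoding; the predicate strings and function names are ours:

```python
def fire_unguarded(interp, premise, conclusion):
    """Fires whenever premise:t is in interp, even if premise:top is, too."""
    return conclusion if (premise, "t") in interp else None

def fire_guarded(interp, premise, conclusion):
    """Fires only if the premise is true AND not inconsistent (no 'top')."""
    if (premise, "t") in interp and (premise, "top") not in interp:
        return conclusion
    return None

# Pete's person-fact is inconsistent: both t and f hold, hence also top.
I = {("person(pete)", "t"), ("person(pete)", "f"), ("person(pete)", "top")}
```

With this interpretation, the unguarded rule still derives rich(pete):t from the inconsistent premise, while the guarded rule derives nothing.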
Blocking contrapositive inference and non-propagation of inconsistency can be applied selectively to some literals but not others.
Example 12.
Consider the following sentence, “if a person holds a job of nurse then that person is educated”. It can be encoded as
The rule allows propagation of inconsistency through the predicate but blocks such propagation for the predicate. It also inhibits contrapositive inference of if the head of the rule is falsified by the additional facts and educated. However, due to the head of the rule, contrapositive inference would be allowed for if educated was given.
Principle 3: Polarity
This principle addresses situations such as the sentence "A person must be either a male or a female, but not both." When inconsistency is possible, we want to say three things: that any person must be either a male or a female, that these facts cannot be unknown, and that if one of these facts is inconsistent then the other is, too.
Example 13.
Let be:


female

female

female

female

Two most consistency preferred models exist, which minimize the inconsistency of :
, and
.
If we add (or female) to , then only one most consistency preferred model remains: female. ∎
Conditional (or ) is generally represented as follows
where condition is a conjunction of atomic formulas and , are polar facts with respect to that condition.
Principle 4: Consistency preference relations
Recall from Example 5 that not all inconsistent information is created equal, as people have different degrees of confidence in different pieces of information. For example, we have more confidence in the information that someone whom we barely know is a person than in the information about this person's marital situation (e.g., whether a husband exists). Therefore, person-facts are more likely to be consistent than marriage-facts, and so we need consistency preference relations to specify the degrees of confidence. Consistency preference relations were introduced in Definition 6, and we have already seen numerous examples of their use. In the Jobs Puzzle encoding in Appendix A, we use one fairly elaborate consistency preference relation. It first sets person and job information to be of the highest degree of confidence. Then, it prefers consistency of gender information for everybody but Robin. Third, it prefers consistency of the job assignment information. And finally, it minimizes inconsistency in general, for all facts.
Principle 5: Complete knowledge
This principle stipulates that certain information is defined completely and cannot be unknown (⊥), although it can be inconsistent. Moreover, similarly to the closed world assumption, negative information is preferred. For instance, if we do not know that someone is someone's husband, we may assume that that person is not. Such conclusions can be specified via a rule like this:
Note that, unlike in, say, ASP, jumping to negative conclusions is not ensured by the stable model semantics of APC and must be specified explicitly. But the advantage is that it can be done selectively. More generally, this type of reasoning can be specified by such a rule whenever the predicate in question is known to be defined completely under the closed world assumption.
Principle 6: Exactly-N
This principle captures the encoding of cardinality constraints in the presence of inconsistency. For instance, in Jobs Puzzle, the sentences "Every person holds exactly two jobs" and "Every job is held by exactly one person" are encoded as cardinality constraints. These constraints count both true and inconsistent facts, but can be easily modified to count only consistent true facts.
Note the role of the last rule in this encoding, which closes off the information being counted by the constraint. This is necessary because if, say, Pete is concluded to hold exactly two jobs (those of an actor and a phone operator), then there should be nothing unknown about him holding any other job: instead, his holding of any other job should be explicitly false.
The general form of the exactly-N constraint follows the same pattern. As in ASP, such statements can be represented as a number of ground disjunctive rules. The "exactly N" constraints can be generalized to "at least N and at most M" constraints, if we extend the semantics in the direction of [Soininen et al. (2001)].
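The difference between counting all true facts and counting only consistent true facts can be sketched in Python. This is our own toy illustration; the predicate name `hold` and the function name are our assumptions, not the paper's encoding:

```python
def jobs_held(interp, person, consistent_only=False):
    """Count job facts for `person` annotated t; optionally skip those that
    are also annotated top (i.e., inconsistent)."""
    count = 0
    for (atom, ann) in interp:
        if atom.startswith("hold({},".format(person)) and ann == "t":
            if consistent_only and (atom, "top") in interp:
                continue  # an inconsistent fact: do not count it
            count += 1
    return count

# Pete's operator-fact is inconsistent: it is both t and top.
I = {("hold(pete,actor)", "t"),
     ("hold(pete,operator)", "t"),
     ("hold(pete,operator)", "top")}
```

With this interpretation, the "exactly two jobs" constraint is satisfied if both true and inconsistent facts are counted, but not if only consistent true facts are.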
7 Comparison with Other Work
Although a great deal of work is dedicated to paraconsistent logics and to logical formalizations of word puzzles separately, we are unaware of any work that applies paraconsistent logics to solving word puzzles that might contain inconsistencies. As we demonstrated, merely encoding such puzzles in a paraconsistent logic leads to an explosion of possible worlds, which is not helpful (see also Appendix A and the ready-to-run examples at https://bitbucket.org/tiantiangao/apc_lp). Most paraconsistent logics [Priest et al. (2015), J. Y. Beziau (2007), Belnap Jr (1977), da Costa (1974)] deal with inconsistency from the philosophical or mathematical point of view and do not discuss knowledge representation. Other paraconsistent logics [Blair and Subrahmanian (1989), Kifer and Subrahmanian (1992)] were developed for definite logic programs and cannot be easily applied to the more complex knowledge representation problems that arise in word puzzles. An interesting question is whether our use of APC is essential, i.e., whether the notion of consistency-preferred models can be adapted to other paraconsistent logics and the relationship with ASP can be established. First, it is clear that such an adaptation is unlikely for proof-theoretic approaches to inconsistency, such as [da Costa (1974)]. We do not know if such an adaptation is possible for model-theoretic approaches, such as [Belnap Jr (1977)].
On the word puzzles front, [Wos et al. (1984)] used the first-order logic theorem prover OTTER to solve Jobs Puzzle (http://www.mcs.anl.gov/~wos/mathproblems/jobs.txt), and [Shapiro (2011)] represented Jobs Puzzle in multiple logical languages: TPTP (http://www.cs.miami.edu/~tptp/cgibin/SeeTPTP?Category=Problems&Domain=PUZ&File=PUZ0191.p), Constraint Lingo [Finkel et al. (2004)] layered on top of the ASP system Smodels [Syrjänen and Niemelä (2001)] as the backend, and the SNePS commonsense reasoning system [Shapiro (2000)]. More recently, [Baral and Dzifcak (2012), Schwitter (2013)] represented word puzzles using NL/CNL sentences and then automatically translated them into ASP. None of these underlying formalisms (FOL, ASP, and SNePS) is equipped to reason in the presence of inconsistency. In contrast, our approach, combined with the knowledge representation principles developed in Section 6, localizes inconsistency and computes useful possible worlds. In addition, it has mechanisms to control how inconsistency propagates through inference, it allows one to prioritize inconsistent information, and it provides several other ways to express the user's intent (through contraposition, completion of knowledge, etc.).
8 Conclusion
In this paper we discussed the problem of knowledge representation in the presence of inconsistent information, with particular focus on representing English sentences in logic, as in word puzzles [Wos et al. (1984), Shapiro (2011), Ponnuru et al. (2004), Schwitter (2013), Baral and Dzifcak (2012)]. We have shown that a number of considerations play a role in deciding on a particular encoding, including whether inconsistency should propagate through implications, the relative degrees of confidence in different pieces of information, and more. We used the well-known Jobs, Zebra, and Marathon puzzles (see the appendices in the supplemental material) to illustrate many of these issues and to show how the conclusions change with the introduction of different kinds of inconsistency into the puzzles.
As a technical tool, we started with the paraconsistent logic Annotated Predicate Calculus [Kifer and Lozinskii (1992)] and then gave it a special kind of nonmonotonic semantics based on consistency-preferred stable models. We also showed that these models can be computed with ASP systems that support preference relations over stable models, such as Clingo [Gebser et al. (2011)] with the asprin extension [Brewka et al. (2015)].
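To make the computation concrete, here is a minimal sketch of how such a preference could be declared in asprin on top of Clingo. The predicates holds/2, inconsistent/1, and atom/1 are our illustrative assumptions, not the paper's actual encoding: an APC atom P annotated with truth value V is represented as holds(P, V), and models with fewer inconsistent atoms are preferred.

```prolog
% Hypothetical reduction: an APC atom P with annotation V becomes holds(P, V).
% An atom is inconsistent when both its t- and f-annotations are derivable.
inconsistent(P) :- holds(P, t), holds(P, f).

% asprin declaration: prefer stable models with a cardinality-minimal
% set of inconsistent atoms; atom/1 is assumed to enumerate the Herbrand base.
#preference(min_inc, less(cardinality)){ inconsistent(P) : atom(P) }.
#optimize(min_inc).
```

Running Clingo through the asprin frontend on a program extended this way would then enumerate only the preferred (consistency-minimal) stable models.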
For future work, we will consider additional puzzles, which may suggest new knowledge representation principles. In addition, we will investigate ways to incorporate inconsistency into CNL systems. This will require introducing background knowledge into these systems and adding linguistic cues to their grammars.
References
 Baral and Dzifcak (2012) Baral, C. and Dzifcak, J. 2012. Solving puzzles described in English by automated translation to answer set programming and learning how to do that translation. In Principles of Knowledge Representation and Reasoning: Proceedings of the Thirteenth International Conference, KR 2012, Rome, Italy, June 10–14, 2012. AAAI Press, Rome, Italy.
 Belnap Jr (1977) Belnap Jr, N. D. 1977. A useful four-valued logic. In Modern Uses of Multiple-Valued Logic. Springer, Volume 2, 5–37.
 Blair and Subrahmanian (1989) Blair, H. and Subrahmanian, V. 1989. Paraconsistent logic programming. Theoretical Computer Science 68, 135–154.

 Brewka et al. (2015) Brewka, G., Delgrande, J. P., Romero, J., and Schaub, T. 2015. asprin: Customizing answer set preferences without a headache. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, January 25–30, 2015, Austin, Texas, USA. AAAI Press, Austin, Texas, 1467–1474.
 C. Guéret and Sevaux (2000) C. Guéret, C. P. and Sevaux, M. 2000. Programmation linéaire: 65 problèmes d'optimisation modélisés et résolus avec Visual Xpress. Eyrolles, France. ISBN: 2212092024.
 da Costa (1974) da Costa, N. 1974. On the theory of inconsistent formal systems. Notre Dame J. of Formal Logic 15, 4 (October), 497–510.
 Finkel et al. (2004) Finkel, R. A., Marek, V. W., and Truszczynski, M. 2004. Constraint Lingo: towards high-level constraint programming. Softw. Pract. Exper. 34, 15, 1481–1504.
 Fuchs et al. (2008) Fuchs, N. E., Kaljurand, K., and Kuhn, T. 2008. Attempto Controlled English for knowledge representation. In Reasoning Web, 4th International Summer School 2008, Venice, Italy, September 7–11, 2008, Tutorial Lectures. Lecture Notes in Computer Science, vol. 5224. Springer, Venice, Italy, 104–124.
 Gebser et al. (2011) Gebser, M., Kaminski, R., Kaufmann, B., Ostrowski, M., Schaub, T., and Schneider, M. 2011. Potassco: The Potsdam answer set solving collection. AI Communications 24, 2, 107–124.
 J. Y. Beziau (2007) Beziau, J. Y., Carnielli, W., and Gabbay, D. M., Eds. 2007. Handbook of Paraconsistency (Studies in Logic). College Publications, United States.
 Kifer and Lozinskii (1992) Kifer, M. and Lozinskii, E. L. 1992. A logic for reasoning with inconsistency. J. Autom. Reasoning 9, 2, 179–215.
 Kifer and Subrahmanian (1992) Kifer, M. and Subrahmanian, V. S. 1992. Theory of generalized annotated logic programming and its applications. J. Log. Program. 12, 3&4, 335–367.
 Ponnuru et al. (2004) Ponnuru, H., Finkel, R. A., Marek, V. W., and Truszczynski, M. 2004. Automatic generation of English-language steps in puzzle solving. In Proceedings of the International Conference on Artificial Intelligence, ICAI '04, June 21–24, 2004, Las Vegas, Nevada, USA, Volume 1. CSREA Press, Las Vegas, Nevada, USA, 437–442.
 Priest et al. (2015) Priest, G., Tanaka, K., and Weber, Z. 2015. Paraconsistent logic. In The Stanford Encyclopedia of Philosophy, Spring 2015 ed., E. N. Zalta, Ed. Stanford, USA.

 Schwitter (2012) Schwitter, R. 2012. Answer set programming via controlled natural language processing. In Controlled Natural Language, Third International Workshop, CNL 2012, August 29–31, 2012, Proceedings, T. Kuhn and N. E. Fuchs, Eds. Lecture Notes in Computer Science, vol. 7427. Springer, Zurich, Switzerland, 26–43.
 Schwitter (2013) Schwitter, R. 2013. The Jobs Puzzle: Taking on the challenge via controlled natural language processing. Theory and Practice of Logic Programming 13, 4-5, 487–501.
 Shapiro (2000) Shapiro, S. C. 2000. An introduction to SNePS 3. In Conceptual Structures: Logical, Linguistic, and Computational Issues, 8th International Conference on Conceptual Structures, ICCS 2000, Darmstadt, Germany, August 14–18, 2000, Proceedings. Springer, Darmstadt, Germany, 510–524.
 Shapiro (2011) Shapiro, S. C. 2011. The Jobs Puzzle: A challenge for logical expressibility and automated reasoning. In Logical Formalizations of Commonsense Reasoning, Papers from the 2011 AAAI Spring Symposium, California, USA, March 21–23, 2011. AAAI, Stanford, California, USA.
 Soininen et al. (2001) Soininen, T., Niemelä, I., Tiihonen, J., and Sulonen, R. 2001. Representing configuration knowledge with weight constraint rules. In Answer Set Programming, Towards Efficient and Scalable Knowledge Representation and Reasoning, Proceedings of the 1st Intl. ASP'01 Workshop, Stanford, March 26–28, 2001. Springer, Stanford, California, USA.
 Syrjänen and Niemelä (2001) Syrjänen, T. and Niemelä, I. 2001. The Smodels system. In Logic Programming and Nonmonotonic Reasoning, 6th International Conference, LPNMR 2001, Vienna, Austria, September 17–19, 2001, Proceedings. Springer, Vienna, Austria, 434–438.
 White and Schwitter (2009) White, C. and Schwitter, R. 2009. An update on PENG Light. In Proceedings of ALTA. Vol. 7. Springer, Sydney, Australia, 80–88.
 Wos et al. (1984) Wos, L., Overbeek, R., Lusk, E., and Boyle, J. 1984. Automated Reasoning: Introduction and Applications. Prentice Hall Inc., Old Tappan, NJ, United States.
Appendix A Jobs Puzzle in APC with Inconsistency Injections
We now present a complete encoding of the Jobs Puzzle and highlight the principles, introduced in Section 6, that are used in the encoding. We also show several cases of inconsistency injection and discuss their consequences. The English sentences are based on the CNL representation of the Jobs Puzzle from Section 3 of [Schwitter (2013)], where "Steve" is changed to "Robin" for the sake of the example (because Robin can be both a male and a female name).
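As a reminder of the notation, in APC [Kifer and Lozinskii (1992)] every atom carries an annotation from the belief lattice, e.g., t (true), f (false), and values denoting inconsistent or unknown belief. The first sentence below would then become a set of annotated facts along the following lines; this is an illustrative sketch of the convention, not necessarily the exact concrete syntax of the implementation.

```prolog
% Annotated facts for "Roberta is a person. Thelma is a person.
% Robin is a person. Pete is a person." (illustrative syntax).
person(roberta) : t.
person(thelma)  : t.
person(robin)   : t.
person(pete)    : t.
```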

Roberta is a person. Thelma is a person. Robin is a person. Pete is a person.


Roberta is a female. Thelma is a female.


Robin is male. Pete is male.

Sentence 4 is encoded based on Principle 6, which treats male and female as polar facts.

Exclude that a person is male and that the person is female.
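A sketch of how this exclusion could be rendered in the same illustrative annotated syntax: under the polar-fact treatment of Principle 6, asserting one sex derives the falsity of the other. This is our illustration of the idea, not the paper's verbatim encoding.

```prolog
% Polar treatment of male/female (illustrative syntax):
% the t-annotation of one sex derives the f-annotation of the other.
female(P) : f  :-  male(P) : t, person(P) : t.
male(P)   : f  :-  female(P) : t, person(P) : t.
```

With this encoding, a person asserted to be both male and female becomes inconsistent in both atoms, rather than eliminating all models outright.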


