Controlled Natural Languages and Default Reasoning

by Tiantian Gao, et al.

Controlled natural languages (CNLs) are effective languages for knowledge representation and reasoning. They are designed on the basis of certain natural languages, with restricted lexicons and grammars. CNLs are unambiguous and simple as opposed to their base languages, yet they preserve the expressiveness and coherence of natural languages. In this report, we focus on a class of CNLs, called machine-oriented CNLs, which have well-defined semantics that can be deterministically translated into formal languages, such as Prolog, for logical reasoning. Over the past 20 years, a number of machine-oriented CNLs have emerged and have been used in many application domains for problem solving and question answering. However, few of them support non-monotonic inference. In our work, we propose non-monotonic extensions of CNL to support defeasible reasoning. In the first part of this report, we survey CNLs and compare three influential systems: Attempto Controlled English (ACE), Processable English (PENG), and Computer-processable English (CPL). We compare their language design, semantic interpretations, and reasoning services. In the second part of this report, we first identify typical non-monotonicity in natural languages, such as defaults, exceptions, and conversational implicatures. Then, we propose their representation in CNL and the corresponding formalizations in a form of defeasible reasoning known as Logic Programming with Defaults and Argumentation Theory (LPDA).








1 Introduction

Controlled natural languages (CNLs) are effective languages for knowledge representation and reasoning. According to Kuhn, "A controlled natural language is a constructed language that is based on a certain natural language, being more restrictive concerning lexicon, syntax, and/or semantics while preserving most of its natural properties" [21]. Unlike languages that develop naturally, constructed languages are languages whose lexicon and syntax are designed with intent. A CNL is constructed on the basis of an existing natural language, such as English, French, or German. Words in the lexicon of a CNL mainly come from its base language. Some CNLs may include special symbols in their lexicon. For example, in Common Logic Controlled English (CLCE) parentheses are used for deeply nested sentences and for lists of more than two elements [48]. Words may or may not be used in the same manner as in the base language. Some words are used with fewer senses or reserved as keywords for specific purposes. For instance, CLCE treats the word "every" as a universal quantifier and does not allow universally quantified noun phrases to be used as the object of a preposition. CNLs have a well-defined syntax to form phrases, sentences, and texts. The syntax of a CNL is generally simpler than that of its base language. Sentences are interpreted in a deterministic way. CNLs are more accurate than natural languages because they are more restrictive, but not all CNLs have formal semantics. Those that do can be processed by computers for knowledge representation, machine translation, and logical reasoning. Although a CNL may deviate from its base language in lexicon, syntax, and/or semantics, it still preserves most of the natural properties of the base language, so a reader can correctly comprehend the CNL with little effort.

CNLs generally fall into two categories: human-oriented CNLs and machine-oriented CNLs [42]. Human-oriented CNLs are designed to make texts easier for readers to understand. They are applied in technical writing and inter-person communication. Basic English, the first English-based controlled natural language, was created by Charles Ogden in 1930 [34]. It has had a tremendous impact on the development of controlled natural languages. Basic English has 850 core root words, composed of 600 nouns, 150 adjectives, and 100 functional words that put the nouns and adjectives into operation. These 850 words can do all the work that 20,000 English words do in daily life. Root words can be extended to form plurals, negative adjectives, etc. For example, plurals are formed by appending an "S" to the end of a root word. An adjective is given a negative meaning with the prefix "UN-". Basic English substitutes verbs in English with operators. An operator is one of the following 18 verbs: put, take, give, get, come, go, make, keep, let, do, be, seem, have, may, will, say, see, and send. A verb can be described by an operator combined with a preposition or a noun. For instance, Basic English uses "put in" to represent the English verb "inject". There are 10 rules of grammar that define the order of words in a sentence, how root words can be extended to form plurals, adjectives, adverbs, negative adjectives, and compound words, the composition of questions, and so on. In addition to the 850 core root words, Basic English also defines a list of international words. The international words are intended for scientific and technical writing. Basic English is used in business, science, economics, and politics. The benefits of Basic English are two-fold. First, it can serve as an international auxiliary language and work as an aid for teaching English as a second language. Second, it provides a simple introduction to English for foreign language speakers.
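The word-formation rules mentioned above are simple string operations. As a toy illustration (in Python, with lowercase spellings; the function names are invented, and real Basic English has further derivation rules, e.g. for "-ER" and "-ING" forms):

```python
# A minimal sketch of two Basic English word-formation rules described in the
# text: plurals append "s", and adjectives are negated with the prefix "un-".

def pluralize(root_word: str) -> str:
    """Form a plural by appending "s" to a root word."""
    return root_word + "s"

def negative_adjective(adjective: str) -> str:
    """Give an adjective a negative meaning with the "un-" prefix."""
    return "un" + adjective

print(pluralize("operator"))        # operators
print(negative_adjective("happy"))  # unhappy
```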

Other influential human-oriented CNLs include Special English and Simplified Technical English (STE). Special English was developed by Voice of America in 1959 [33]. The vocabulary is limited to 1580 words. Special English has been used in a multitude of daily news programs in the United States. It also serves as a resource for English learners and has been adopted by other countries for news broadcasting. Simplified Technical English (STE) was developed for aerospace industry maintenance manuals [17]. The dictionary consists of approved and unapproved words. Unapproved words are not allowed to be used. Instead, the dictionary provides alternative words that refer to the same meaning. STE also allows users to add approved words to the dictionary. STE, as a subset of English, reduces ambiguities and improves clarity and comprehensibility through restrictions on grammar and style. It also helps in computer-assisted and machine translation.

Machine-oriented CNLs, as opposed to human-oriented ones, have formal semantics that can be understood and processed by computers for the purpose of knowledge representation and logical reasoning. Attempto Controlled English (ACE) was the first CNL that could be translated to first-order logic [10]. ACE is a subset of English defined by a restricted grammar along with interpretation rules that control the semantic analysis of grammatically correct ACE sentences. ACE uses the discourse representation structure (DRS) as the logical structure to represent the semantics of a set of ACE sentences [20]. ACE is supported by a language processor, the Attempto Parsing Engine (APE), and a reasoner, RACE. APE is an online language processor that accepts ACE sentences as input and generates their semantics as DRSs and first-order logic clauses as output. RACE is a CNL reasoner that supports theorem proving, consistency checking, and question answering. ACE has been applied to various areas such as the semantic web and bioinformatics.

Other examples include PENG, CELT, and CPL [40, 36, 5]. Both PENG and CELT are inspired by ACE. They are subsets of English with restricted grammars and use DRSs as semantic representations. Unlike ACE, PENG does not require users to learn the grammar of the language. Instead, it provides a predictive editor that presents look-ahead information guiding users on how to proceed based on the structure of the current sentence. CELT translates the controlled language to formal logical statements whose terms come from an existing large ontology, the Suggested Upper Merged Ontology (SUMO) [31]. This gives each term multiple definitions. CPL is a CNL developed by Boeing [5]. It uses Knowledge Machine (KM), an advanced frame-based language, to represent the semantics of the language [6]. The translation proceeds in three main steps: parsing, generation of an intermediate "logical form" (LF), and conversion of the LF to statements in the KM knowledge representation language. The KM statements can be used for reasoning and question answering.

The above four machine-oriented CNLs are designed for general purposes. They can be applied in various application domains. Some machine-oriented CNLs are designed for specific application domains. For example, SBVR Structured English is intended for describing business vocabulary and for representing business rules [3]. Rabbit is a CNL that can be translated into OWL and achieves both comprehension by domain experts and computational experts [19]. LegalRuleML is used for representing legislative documents in XML-based rules in order to conduct legal reasoning [35].

Although a number of machine-oriented CNLs exist, few of them support nonmonotonic reasoning. In our work, we propose nonmonotonic extensions of CNL to support defeasible reasoning. A logic is nonmonotonic if some conclusions can be defeated by the addition of new knowledge. This contrasts with monotonic formalisms, where the addition of new knowledge will not invalidate any previously derived conclusion. Nonmonotonic reasoning represents the natural way of human reasoning in real life. People draw conclusions based on a number of unstated default assumptions which are supposed to be true. Some conclusions may be retracted when the addition of new knowledge violates one of the default assumptions. For instance, given the following knowledge base,

  1. Typically, birds fly.

  2. Penguins are birds.

  3. Penguins do not fly.

  4. Tweety is a bird.

the first sentence is in fact interpreted as "in the absence of any information to the contrary, we assume that birds fly." Hence, it is reasonable to conclude that "Tweety flies." However, if later we add the fact that "Tweety is a penguin," we will withdraw the previous conclusion and conclude that "Tweety does not fly." Classical approaches that deal with defaults include default logic [38], circumscription [25], and autoepistemic logic [24]. Modern approaches include answer set programming (ASP) [24] and logic programming with defaults and argumentation theories (LPDA) [51]. In this report, we focus on three logical frameworks: circumscription, ASP, and LPDA, where circumscription is used in Wainer's approach to model conversational implicatures, ASP is used in PENG to handle defaults and exceptions, and LPDA is used in our approach to model nonmonotonicity in CNL. They are discussed in Sections 5.2, 3.4, and 5.1, respectively. Technical details of these three frameworks can be found in the appendix.
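The Tweety example can be rendered as a toy negation-as-failure check in plain Python: a bird is assumed to fly unless it is known to be a penguin. The fact encoding below is hypothetical, and real systems delegate such defaults to ASP or LPDA solvers.

```python
# Default: in the absence of information to the contrary, birds fly.
# Penguins are the exceptional subclass that blocks the default.

def flies(individual: str, facts: set) -> bool:
    is_bird = ("bird", individual) in facts
    is_penguin = ("penguin", individual) in facts  # exception to the default
    return is_bird and not is_penguin

facts = {("bird", "tweety")}
print(flies("tweety", facts))      # True: the default applies

facts.add(("penguin", "tweety"))   # new knowledge arrives
print(flies("tweety", facts))      # False: the conclusion is retracted
```

This nonmonotonic behavior, where an earlier conclusion is withdrawn when knowledge is added, is exactly what classical first-order logic cannot express.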

The rest of this report is organized as follows: in Sections 2-4, we study three influential CNL systems, Attempto Controlled English (ACE), Processable English (PENG), and Computer-processable English (CPL), and compare their language design, semantic interpretations, and reasoning services. In Section 5, we first identify typical nonmonotonicity in natural languages, such as defaults, exceptions, and conversational implicatures. Then, we propose their representation in CNL and the corresponding formalizations in a form of defeasible reasoning known as Logic Programming with Defaults and Argumentation Theory (LPDA).

2 Attempto Controlled English

Attempto Controlled English (ACE) is discussed in this section. Subsections 1-4 introduce the language properties of ACE, including its vocabulary, construction rules, and interpretation rules. Subsection 5 describes the Discourse Representation Structure (DRS), the semantic interpretation of ACE sentences. Subsection 6 presents the Attempto Parsing Engine (APE). Subsection 7 presents the Attempto reasoning engine, the RACE reasoner.

2.1 Vocabulary

The vocabulary consists of function words, such as determiners, articles, pronouns, and quantifiers, some fixed phrases (e.g. "there is a" and "it is not the case that"), and 100,000 content words, including adverbs, adjectives, nouns, verbs, and prepositions. ACE also allows users to add new content words to the lexicon. Content words are written as Prolog atoms. The predicates describe the part of speech (POS) of the words. Each predicate has at least two arguments. The first argument is a word in the ACE lexicon. The second argument specifies the logical symbol corresponding to the first argument, used in representing the semantics of ACE sentences. A predicate may represent some additional information by adding a few more arguments. Below is an example that describes the Prolog representations of the words "fast," "faster," and "fastest" respectively,

  1. .

  2. .

  3. .

where predicate says that “fast” is an adverb, predicate says that “faster” is a comparative adverb, predicate says that “fastest” is a superlative adverb. All three words use the same logical symbol, “fast,” in logical representations.

2.2 Construction Rules

The construction rules define the format of words, phrases, sentences, and ACE texts.


Words

Function words and some fixed phrases are not allowed to be modified by users. Users can create compound words by concatenating two or more content words with hyphens.


Phrases

Phrases include noun phrases, modifying nouns, modifying noun phrases, verb phrases, and modifying verb phrases. Noun phrases in ACE are a subset of noun phrases in English plus arithmetic expressions, sets, and lists. Modifying nouns and noun phrases are those that are preceded or followed by adjectives, relative clauses, or possessives. Verb phrases in ACE form a subset of verb phrases in English with specific definitions of negations and modalities. Modifying verb phrases are those that are accompanied by adverbs or prepositional phrases.


Sentences

ACE sentences include declarative, interrogative, and imperative sentences. Declarative sentences include simple sentences, there is/are-sentences, boolean formulae, and composite sentences. Interrogative sentences include yes/no queries, wh-queries, and how much/many-queries, which end with a question mark. Imperative sentences are commands and end with an exclamation mark.

ACE Texts

ACE texts are sequences of declarative, interrogative, and imperative sentences.

2.3 Interpretation Rules

ACE restricts sentences to be interpreted in only one deterministic way. The interpretation rules specify how a grammatically correct ACE sentence is translated. Below are three examples of the interpretation rules:

  1. Prepositional phrases modify the verb not the noun

    • A customer {enters a card with a code} .

  2. Relative clauses modify the immediately preceding noun

    • A customer enters {a card that has a code} .

  3. Anaphora is resolved using the nearest antecedent noun that agrees in gender and number

    • Brad was born in Seattle. It’s a beautiful place.

The first sentence can be paraphrased in two ways in English, "A customer uses a code to enter a card" and "A customer enters a card. The card has a code." In ACE, it is only translated in the first way. In the second sentence, given a relative clause, an ACE sentence only refers to the closest noun that precedes it. Hence, the relative clause, "that has a code," modifies the noun, "card." In the third sentence, ACE resolves the pronoun, "it," in the second sentence by looking for the nearest antecedent noun that agrees in gender and number, so ACE associates "it" with the word "Seattle."
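The third interpretation rule can be sketched as a backward scan over the antecedent candidates seen so far. The gender/number tags and the two-noun discourse below are invented for illustration:

```python
# A sketch of ACE's anaphora-resolution rule: a pronoun refers to the
# nearest preceding noun that agrees in gender and number.

NOUNS = []  # antecedent candidates in order of appearance: (word, gender, number)

def introduce(word: str, gender: str, number: str) -> None:
    NOUNS.append((word, gender, number))

def resolve(pronoun_gender: str, pronoun_number: str):
    """Scan backwards so that the nearest agreeing antecedent wins."""
    for word, gender, number in reversed(NOUNS):
        if gender == pronoun_gender and number == pronoun_number:
            return word
    return None

# "Brad was born in Seattle. It's a beautiful place."
introduce("Brad", "masc", "sg")
introduce("Seattle", "neut", "sg")
print(resolve("neut", "sg"))  # Seattle: "it" resolves to the nearest match
```

The same mechanism produces the stilted resolution criticized in the next subsection: a masculine pronoun picks the nearest masculine noun, whatever it is.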

2.4 Evaluation of ACE

There are advantages and disadvantages in the design of ACE. The language has a large vocabulary and a multitude of construction rules. The number of words in the ACE lexicon is more than half of the words (171,476) that are used in current English according to the second edition of the 20-volume Oxford English Dictionary [47]. Its construction rules cover various sentence structures plus mathematical expressions and boolean formulae. On one hand, these allow users to describe more things and thereby make the language very expressive. On the other hand, they add complexity to the language. It is difficult for casual users to learn the language, and it takes them many trials to arrive at a grammatically correct sentence. Interpretation rules ensure a deterministic way to interpret an ACE sentence and thus avoid many ambiguities that exist in English; however, this does not always lead to natural interpretations and often results in stilted English. For example, "Brad is an actor. He is handsome." The pronoun, "He," in the second sentence refers to the proper name, "Brad," in the first sentence. However, ACE resolves the anaphora by referring "He" back to the noun, "actor." Users can only avoid this problem by rephrasing the sentences in a way that follows the ACE interpretation rules.

2.5 Semantic Interpretations

The semantics of ACE are represented by Discourse Representation Structures (DRSs) [20]. A DRS is a diagram structure consisting of two parts:

  • a set of discourse referents (discourse variables), called the “universe” of the DRS, which will always be displayed at the top of the diagram [20].

  • a set of DRS-conditions, typically displayed below the universe [20].

A referent can stand for an object introduced by a noun, a predicate introduced by a verb, etc. A condition can be either simple or complex. A simple condition is a logical atom followed by an index. A complex condition is constructed from other DRSs connected by operators such as negation, disjunction, and implication. For each simple condition, the logical atom is restricted to one of the following 8 predicates: object, property, relation, predicate, modifier_adv, modifier_pp, has_part, and query. Their definitions are given in [11]. The index shows the location in the input from which the condition is introduced.

The two ACE sentences, "A bird eats a little worm. An eagle kills a large snake," are represented in one logical unit, as shown in Figure 1. In the first condition, the object-predicate represents the bird-object. It has 6 arguments: A, bird, countable, na, eq, and 1. The first argument, "A," is the label for the bird-object. It can be referenced by other conditions if those conditions are within the scope of the DRS to which A belongs. The second argument indicates that the word, "bird," introduces the object-predicate. The third argument identifies the bird-object as a countable object. The fourth argument shows that the bird-object does not come together with any measurement unit (e.g. kg, cm) in the input. The fifth and the sixth arguments specify the quantity of the bird-object to be one. The index, "1/2," signifies that the bird-object is introduced by the second word of the first ACE sentence in the input. The same applies to the worm-, eagle-, and snake-objects. In the third condition, the property-predicate describes the little-property. It has 3 arguments: B, little, and pos. The first argument, "B," refers to the worm-object. The second argument shows that the word "little" introduces the little-property. The third argument denotes the degree of the little-property as positive. The fourth condition represents the eat-predicate. It has 4 arguments: C, eat, A, and B. The first argument, "C," is the label for the eat-predicate. Similar to the bird-object, C can be used by other conditions as well. The second argument indicates that the word "eat" introduces this predicate. The third and fourth arguments refer to the bird- and worm-objects.

Figure 1: DRS for “A bird eats a little worm. An eagle kills a large snake.”

DRSs can be nested. DRS-conditions are allowed to use the referents from their ancestor but not descendant DRSs. A multi-sentential paragraph is always represented by one DRS instead of a conjunction of individual DRSs. This is because sentences may be cross-referenced among each other. Representing a paragraph by a conjunction of individual DRSs cannot indicate their connections. A DRS is constructed incrementally by processing each sentence in order. Once a sentence gets processed, new referents and conditions are incorporated into the current DRS, which is constructed from all preceding sentences. Anaphora is resolved by referring to a variable from the current DRS.
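The structure described above can be sketched as a small data type: a universe of discourse referents plus a list of conditions, built incrementally. The condition tuples below only loosely mirror ACE's object/property/predicate conditions, with the extra arguments and indices omitted for brevity:

```python
# A minimal sketch of a DRS for "A bird eats a little worm.": a universe of
# referents and a list of conditions that reference them.

class DRS:
    def __init__(self):
        self.universe = []    # discourse referents, e.g. ["A", "B", "C"]
        self.conditions = []  # simplified conditions over the referents

    def new_referent(self) -> str:
        """Introduce a fresh referent named A, B, C, ..."""
        ref = chr(ord("A") + len(self.universe))
        self.universe.append(ref)
        return ref

drs = DRS()
bird = drs.new_referent()   # "A"
worm = drs.new_referent()   # "B"
drs.conditions += [("object", bird, "bird"),
                   ("object", worm, "worm"),
                   ("property", worm, "little")]
eat = drs.new_referent()    # "C"
drs.conditions.append(("predicate", eat, "eat", bird, worm))

print(drs.universe)  # ['A', 'B', 'C']
```

Processing the next sentence would extend this same DRS rather than create a new one, which is what allows anaphora to be resolved against referents introduced earlier.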

2.6 Attempto Parsing Engine

The Attempto Parsing Engine (APE) is a language processor that accepts ACE texts as input and generates paraphrases, DRSs, and first-order logic clauses. Paraphrases indicate to users how APE comprehends the ACE sentences. Users can rephrase their input to get another paraphrase if they do not accept APE's interpretation. APE generates first-order logic clauses from DRSs using the methods introduced in [20]. The core part of APE is a top-down parser that implements the construction and interpretation rules. The parser uses a Definite Clause Grammar (DCG) enhanced by feature structures [13]. It is written in GULP (Graph Unification Logic Programming), which extends Prolog by adding the operators ":" and ".." and a number of built-in predicates [8]. The operator ":" binds a feature name to its value. The operator ".." joins one feature-value pair to the next. In XSB Prolog, lists can serve the same purpose as ".." [39].

There are several problems with APE. First, logical clauses are not represented in the usual sense of first-order logic. An example is shown in Figure 2. In Attempto, a noun is represented by a pre-defined predicate, object, whose fifth and sixth arguments denote the quantity information. Here, the object predicate indicates that there is one bird. If the sentence is changed to "Two birds eat a little worm," then the fifth and sixth arguments of the object predicate for bird change to denote the quantity two. Attempto thus simply treats the quantity information as an adjective-like modifier of the noun. If there is a query that asks for the quantity of birds, it will check the fifth and the sixth arguments of the object predicate. This is not represented correctly in first-order logic. In first-order logic, the quantity information should be represented by existential quantifiers. For instance, the sentence, "There are two birds," is translated into first-order logic as shown below:

∃X ∃Y (bird(X) ∧ bird(Y) ∧ X ≠ Y)

where X and Y denote two unique bird entities.

(a) DRS
(b) First-order Logic
Figure 2: The ACE sentence, “A bird eats a little worm.”

Second, APE does not recognize certain sentence structures, such as "he seeks…," which cannot be represented in first-order logic. For example, Figure 3 shows the DRS and first-order logic clause for the sentence, "John seeks a unicorn." In fact, "John seeks a unicorn" indicates that there may or may not exist a unicorn, so the unicorn cannot simply be existentially quantified as in first-order logic. Sentences of this type are represented in intensional logic, which is an extension of first-order logic [9].

(a) DRS
(b) First-order Logic
Figure 3: The ACE sentence, “John seeks a unicorn.”

2.7 RACE Reasoner

RACE is an ACE reasoner that accepts ACE texts as input and supports consistency checking, theorem proving, and question answering [12]. RACE is implemented in Prolog. It is an extension of Satchmo, which is a theorem prover based on the model generation paradigm [23]. Satchmo works with first-order logic clauses of the form A → C, where A is "true" or a conjunction of logical atoms, and C is "fail" or a disjunction of logical atoms. Negation is expressed as an implication to "fail". Satchmo executes the clauses by forward reasoning and generates a minimal finite model of the clauses. RACE extends Satchmo by giving a justification for every proof, finding all minimal unsatisfiable subsets of clauses if the axioms are not consistent, etc.
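The forward-reasoning step can be illustrated with a toy fixpoint computation. The sketch below is deliberately simplified to definite clauses (a conjunctive body implying a single atom, no disjunctive heads), whereas real Satchmo handles disjunction by case splitting; atoms are plain strings and the rules are invented:

```python
# Toy forward chaining: fire every rule whose body holds in the current
# model until nothing new can be derived, yielding a minimal model.

def forward_chain(facts, rules):
    model = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= model and head not in model:
                model.add(head)
                changed = True
    return model

rules = [
    ({"man(john)"}, "human(john)"),
    ({"human(john)"}, "mortal(john)"),
]
print(sorted(forward_chain({"man(john)"}, rules)))
# ['human(john)', 'man(john)', 'mortal(john)']
```

A clause with "fail" as its head would make the model inconsistent the moment its body becomes derivable, which is how negation is checked.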

RACE has certain limitations. First, RACE does not work properly in some cases. An example is shown in Figure 4. There are five consistent axioms. RACE can prove that there is a human based on either the first and fourth sentences or the second and third sentences. However, RACE cannot prove that there are two humans in total. The fifth axiom says that “John is not Mary,” but RACE cannot answer the query, “Is John Mary?” from the axioms.

Figure 4: An example of RACE reasoning

Second, RACE does not support defeasible reasoning [32]. For example, given the sentences, "Birds fly. Penguins are birds. Penguins don't fly," RACE shows that these axioms are inconsistent. This is because the second and the third axioms, which state that penguins are exceptions among birds that fly, contradict the first axiom. Such exceptions are very common in natural languages, especially in law, legislation, and policies, but they cause contradictions in RACE.

3 Processable English

In this section, we introduce Processable English (PENG), a system developed by Rolf Schwitter at Macquarie University who was also one of the Attempto group members [49]. The first subsection gives an overview of PENG’s language properties and compares it with Attempto. The second subsection presents the basics of chart parsing and its application to PENG’s language processor. The third subsection discusses the reasoning service in PENG. The fourth subsection describes the way PENG handles defaults and exceptions in CNL and its support for non-monotonic reasoning based on the answer set programming (ASP) paradigm.

3.1 Basics of PENG

The dictionary of PENG consists of predefined function words (including determiners, cardinals, connectives, prepositions), 3,000 content words (e.g. nouns, proper nouns, verbs, adjectives, and adverbs), and illegal words. Illegal words are those that are banned by PENG, such as wish, can, could, should, might, and all personal pronouns. Users can expand the lexicon as in ACE.

A simple PENG sentence is constructed by the following grammar, where curly brackets indicate that the enclosed elements are optional.

Sentence → Subject + Predicate
Subject → Determiner {+ Pre-nominal Modifier}
      + Nominal Head {+ Post-nominal Modifier} | Nominal Head
Predicate → {Negation} + Verbal Head + Complement {+ Adjunct}

Complex sentences are built from simple sentences using coordinators (and, or), subordinators (if, before, after, while), and constructors (for every, there). PENG resolves anaphora by referring back to the most recent noun phrase that agrees in gender and number. In this respect, PENG functions similarly to ACE. In addition, PENG identifies synonyms within sentences. For example, given the sentence, "The fog hangs over Dreadsbury Mansion. The mist is creepy," PENG recognizes that "mist" is a synonym for "fog," and the noun phrase "The mist" is an anaphor for "The fog" [49].

Like ACE, PENG interprets sentences in a deterministic way. Once sentences are evaluated, the PENG language processor generates paraphrases for users. If PENG misinterprets a user’s intentions, it allows that user to rephrase the submission. As discussed in section 2.5, ACE uses DRSs to represent the semantics of the language; PENG also uses DRSs as its semantic representations and translates DRSs to first-order logic clauses.

PENG has several key advantages over ACE. For one, PENG has a much smaller lexicon and simpler grammar. This makes the language very accessible and easy to learn. Most importantly, users do not need to consult the grammar manual when composing sentences. Instead, PENG can inform users of all possible words which can follow their current inputs [46]. Details of this feature are discussed next.

3.2 Chart Parsing in PENG

The PENG language processor is based on a top-down incremental chart parser that operates on DCG grammar enhanced with feature structures. A chart parser is a parser that uses a chart to save the information of the substrings that have already been successfully analyzed. It reuses this information later if needed [49]. This eliminates backtracking and avoids re-discovering the same substring over and over again.

For example, given a context-free grammar that has two alternative rules for the verb phrase (one without and one with an adverb) and the sentence, "Tom walks slowly," a backtracking DCG parser will start from the first rule and evaluate its non-terminal symbols. After it finds that the first rule fails, it will explore the second rule and re-evaluate the same non-terminal symbols from scratch. A chart parser, on the other hand, avoids this duplication of work by storing the computation results of the first rule in a chart and reusing them when exploring the second rule.

The chart for the above example is shown in Figure 5. It is represented by a directed graph consisting of 4 vertices (one before each word and one after the last word) and a number of edges labelled by dotted rules. A dotted rule is the same as a production rule of the context-free grammar except that it has a dot inserted somewhere in its right-hand side. The dot indicates the extent to which the hypothesis that the rule is applicable has been verified. Edges can be active or inactive. Active edges refer to partially verified hypotheses; inactive edges signify fully confirmed hypotheses. In Figure 5, an edge whose dot is at the beginning of its right-hand side is an active edge denoting the as-yet-unverified hypothesis that its left-hand side can be transformed to a substring of the input. An edge whose dot sits after "Tom walks" is also an active edge representing a similar hypothesis, but one that has been partially verified: the parser has confirmed that the rule can derive the substring "Tom walks," but has not analyzed the adverb yet. An edge spanning only "Tom" with the dot at the end of its right-hand side is an inactive edge, which indicates the fully confirmed hypothesis that its left-hand side can generate "Tom."

Chart parsing can be conducted either top-down or bottom-up. PENG implements the top-down approach. The parsing process involves combining active edges with inactive edges based on the fundamental rule described in [49]. It uses a data structure, called an agenda, to maintain a list of active edges and organize the order in which these edges are executed. In PENG's approach, the agenda behaves like a stack that manages the edges in a last-in-first-out manner. In this example, the agenda is initialized with the active edges predicted from the start symbol, and the chart is set up with the inactive edges for the three input words. Each time, the parser pops the first edge from the agenda and adds it to the chart. This active edge is then combined with applicable inactive edges in the chart to form new edges, which may be active or inactive. The parser keeps the inactive edges in the chart and pushes all active edges to the agenda. This process repeats until the parser finds an inactive edge that spans the whole sentence.
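The loop just described can be sketched compactly. The grammar below is invented for the running example (S → NP VP, VP → V | V Adv); edges are tuples (start, end, lhs, found, remaining), an edge is inactive when `remaining` is empty, and the agenda is a Python list used as a stack:

```python
# A compact top-down chart parser sketch for "Tom walks slowly".

GRAMMAR = {
    "S":   [["NP", "VP"]],
    "VP":  [["V"], ["V", "Adv"]],
    "NP":  [["Tom"]],
    "V":   [["walks"]],
    "Adv": [["slowly"]],
}
WORDS = ["Tom", "walks", "slowly"]

def parse() -> bool:
    # inactive edges for the input words themselves
    chart = [(i, i + 1, w, [w], []) for i, w in enumerate(WORDS)]
    agenda = [(0, 0, "S", [], rhs[:]) for rhs in GRAMMAR["S"]]
    while agenda:
        edge = agenda.pop()              # last-in-first-out
        if edge in chart:
            continue
        chart.append(edge)
        start, end, lhs, found, remaining = edge
        if remaining:
            needed = remaining[0]
            # top-down prediction: expand the needed category at `end`
            for rhs in GRAMMAR.get(needed, []):
                agenda.append((end, end, needed, [], rhs[:]))
            # fundamental rule: advance over matching inactive edges
            for s, e, cat, f, rem in chart:
                if s == end and not rem and cat == needed:
                    agenda.append((start, e, lhs, found + [cat], remaining[1:]))
        else:
            # new inactive edge: let waiting active edges advance over it
            for s, e, cat, f, rem in chart:
                if rem and e == start and rem[0] == lhs:
                    agenda.append((s, end, cat, f + [lhs], rem[1:]))
    return any(s == 0 and e == len(WORDS) and cat == "S" and not rem
               for s, e, cat, f, rem in chart)

print(parse())  # True
```

Because every confirmed constituent stays in the chart, the "V → walks" edge built while trying "VP → V Adv" is reused when "VP → V" is explored, which is exactly the duplication the chart avoids.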

Incremental chart parsing is an extension of the chart parsing framework. It can handle modifications to previously parsed strings without re-processing from scratch. Basically, the parser uses edge dependency information to determine which edges to recompute. An edge is dependent on another if it is triggered by the latter. Once an edge is changed, only the edges that are dependent on it are updated. For the previous example, which is shown in Figure 5, if the user deletes the last word, "slowly," from the sentence, the parser will first remove the edges introduced by that word. Next, the parser will eliminate every edge that is dependent on one of the removed edges. All other edges are unaffected. The incremental chart parser supports three edit operations: insertion, deletion, and replacement. Users can insert, delete, or replace a word in the sentence. Details of the corresponding algorithms are given in [41].

Figure 5: The chart for the sentence, “Tom walks slowly.”

3.3 Reasoning in PENG

The PENG reasoner accepts grammatically correct sentences as input and generates its output in the controlled language as well. It supports consistency checking, informativity checking, and question answering. Consistency checking ensures that a set of PENG sentences is semantically consistent. Compared to RACE, the Attempto reasoner discussed in section 2.7, PENG can also handle synonyms in sentences. For example, given the input, "Mary lives in Seattle. Mary does not reside in Seattle," RACE will not find the inconsistency because it treats "live" and "reside" as words having different meanings. PENG, on the other hand, can identify the synonym information and therefore evaluates the input as inconsistent. Informativity checking guarantees that sentences are concise with no redundant information. For instance, given the sentences, "Mary lives in Seattle. Mary resides in Seattle," the second sentence violates the informativity constraint because it expresses the same meaning as the first one. For question answering, the PENG reasoner operates in the same way as RACE.

The implementation of the reasoner is based on the theorem prover Otter and the model builder MACE [27, 26]. Otter (Organized Techniques for Theorem-proving and Effective Research) is a resolution-based theorem prover that works on first-order logic with equality. It proves a first-order formula valid by negating it and deriving an empty clause (a contradiction) using its inference rules (e.g., binary resolution, hyper-resolution, UR-resolution, binary paramodulation). The PENG reasoner does not require users to specify which inference rules to use in the process of theorem proving. Instead, it relies on Otter's autonomous mode, which decides on the inference strategies.

MACE (Models and Computer Examples) is a model builder that searches for finite models of first-order statements. It is based on a SAT solver, which implements the Davis, Putnam, Logemann and Loveland (DPLL) algorithm. The DPLL algorithm decides the satisfiability of a propositional formula in conjunctive normal form (CNF) and builds a model for it if there exists a satisfying assignment [30]. Basic operations of DPLL include unit propagation and pure literal elimination. A unit clause is a clause that contains one single unassigned literal, say l. Once a unit clause is found, unit propagation assigns l to true, replaces all occurrences of l with true and all occurrences of its complement with false, and then simplifies the resulting formula. A literal is pure if its complement does not occur anywhere in the formula. Pure literal elimination assigns a pure literal to true and removes the clauses that contain it. Initially, the value of each literal in the given input is unassigned. DPLL first runs unit propagation and pure literal elimination while updating the current truth assignment. After simplifying the formula that results from the first step, it chooses an unassigned literal and assigns a truth value to it. Next, DPLL calls itself recursively with the current propositional formula and truth assignment. The procedure succeeds once all clauses are true under the current truth assignment, which is then the satisfying assignment. Otherwise, it backtracks to a previous branching point and maps the branching literal to the opposite truth value. A propositional formula is unsatisfiable if the DPLL procedure terminates without finding a satisfying assignment.
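The DPLL procedure described above can be sketched compactly in Python. This is an illustrative reconstruction, not MACE's implementation; the clause representation (lists of nonzero integers, where -v is the negation of variable v) is an assumption:

```python
# A minimal DPLL sketch with unit propagation and pure literal elimination.
def simplify(clauses, assignment):
    """Drop satisfied clauses and falsified literals; None = empty clause."""
    out = []
    for clause in clauses:
        kept, satisfied = [], False
        for lit in clause:
            value = assignment.get(abs(lit))
            if value is None:
                kept.append(lit)
            elif (lit > 0) == value:
                satisfied = True
        if satisfied:
            continue
        if not kept:
            return None          # clause falsified: conflict
        out.append(kept)
    return out

def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})
    while True:
        clauses = simplify(clauses, assignment)
        if clauses is None:
            return None          # conflict under the current assignment
        if not clauses:
            return assignment    # all clauses satisfied: a model
        units = [c[0] for c in clauses if len(c) == 1]
        lits = {lit for c in clauses for lit in c}
        pure = [lit for lit in lits if -lit not in lits]
        if not units and not pure:
            break
        for lit in units + pure: # unit propagation / pure literal elimination
            assignment[abs(lit)] = lit > 0
    lit = next(iter(lits))       # branch on some unassigned literal
    for value in (lit > 0, lit < 0):
        model = dpll(clauses, {**assignment, abs(lit): value})
        if model is not None:
            return model
    return None                  # both branches failed: unsatisfiable

print(dpll([[1], [-1]]))                 # None (unsatisfiable)
print(dpll([[1, 2], [-1, 2], [-2, 3]]))  # a satisfying assignment
```

Backtracking happens implicitly through the recursion: a failed branch returns None, and the caller then tries the opposite truth value.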

MACE conducts model building in a five-step pipeline. In the first step, MACE translates the given input, a first-order formula, to a set of first-order clauses using a subroutine of Otter. In the second step, MACE transforms the first-order clauses to a set of flattened, relational clauses. A flattened relational clause is similar to a first-order clause except that it does not contain any constant or function symbols. Essentially, each n-ary function symbol in the first-order clauses is replaced by an (n+1)-ary predicate symbol. For example, the function literal f(x, y) = z is substituted with a three-place predicate F(x, y, z), where x, y, and z are variables. A constant symbol, say a, is rewritten to a unary predicate A(x), where x is a variable. In another example, a positive equality s = t, where s and t are nonvariable terms, is replaced by two clauses in which s and t are each equated with a fresh variable x. In the third step, MACE generates a set of propositional clauses based on the flattened relational clauses and a given domain size n. Basically, MACE constructs all ground instances of the flattened relational clauses over the given domain, and then encodes each atom into a unique integer that becomes a propositional variable. In addition, MACE adds two constraints for each predicate introduced in the previous step. For instance, given the function literal f(x, y) = z and its corresponding flattened relational clause F(x, y, z), the first constraint is that the clause ¬F(x, y, z1) ∨ ¬F(x, y, z2) holds for any z1 and z2 where z1 ≠ z2. This guarantees that the last argument is a function of the first and second arguments. The second constraint is that the clause F(x, y, 1) ∨ … ∨ F(x, y, n) holds. This ensures that the image of each function lies within the given domain. In the fourth step, MACE calls the DPLL procedure to search for a model of the propositional clauses. Once a propositional model is found, the fifth step translates it back to the corresponding first-order model based on the encodings in the second and the third steps.
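The two constraints added for a flattened function symbol can be illustrated with a short Python sketch that grounds them over a finite domain. The predicate name and the tuple-based atom encoding are illustrative; MACE encodes atoms as integers:

```python
# Grounding the totality and functionality constraints for one flattened
# binary function symbol over the domain {1..n}. A clause is a list of
# (polarity, predicate, arguments) literals.
from itertools import combinations, product

def function_constraints(name, n):
    dom = range(1, n + 1)
    clauses = []
    for x, y in product(dom, dom):
        # Totality: f(x, y) takes some value within the domain.
        clauses.append([(True, name, (x, y, z)) for z in dom])
        # Functionality: f(x, y) takes at most one value.
        for z1, z2 in combinations(dom, 2):
            clauses.append([(False, name, (x, y, z1)),
                            (False, name, (x, y, z2))])
    return clauses

# For n = 2: 4 totality clauses plus 4 functionality clauses.
print(len(function_constraints("F", 2)))  # 8
```

For domain size n, this produces n² totality clauses and n² · C(n, 2) functionality clauses, which is why MACE's search space grows quickly with the domain size.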

The PENG reasoner runs Otter and MACE in parallel in the process of consistency and informativity checking. Given a theory T, the PENG reasoner uses Otter to detect its inconsistency while concurrently running MACE to detect its consistency. If Otter succeeds in deriving a contradiction from T, then T is inconsistent. If MACE succeeds in constructing a model for T, then T is consistent. Similarly, given a sentence S and its previous context T, if Otter succeeds in proving S from T, then S is not informative. If MACE succeeds in constructing a model of T extended with the negation of S, then S is informative.

3.4 Nonmonotonic Extensions of PENG

The latest version of PENG incorporates typical nonmonotonicity in natural language and uses the answer set programming paradigm [14] to conduct nonmonotonic reasoning. The syntax and semantics of answer set programming are described in detail in Appendix A. In this subsection, we will show some distinguishing features of ASP that can model the nonmonotonicity that occurs in natural language.

Negation. An ASP program can have two types of negation: negation as failure and strong negation. Negation as failure, written not, represents statements of the form "there is no evidence that …," while strong negation, written -, represents an explicit negation in natural language. For example, the sentence, "If Mary has an assignment to do and there is no evidence that the library is not open, then Mary will study in the library," can be translated into an ASP rule along the following lines (predicate names are illustrative):

study(mary, library) :- has_assignment(mary), not -open(library).

Constraints. A constraint is defined as an ASP rule with an empty head. It is used to represent sentences that denote restrictions in natural language. For instance, the sentence, "a person cannot be both a male and a female," can be translated into a constraint along the following lines:

:- male(X), female(X).

The constraint eliminates every answer set that contains both male(p) and female(p) for the same person p.

Choice Rules. A choice rule specifies how many of its head literals may be included in an answer set when its body is true. For example, given the following sentences,

  1. There are three vertices: v1, v2, and v3.

  2. There are three colors: red, green, and yellow.

  3. Each vertex can be assigned exactly one color.

they can be translated into ASP rules along the following lines,

  1. vertex(v1). vertex(v2). vertex(v3).

  2. color(red). color(green). color(yellow).

  3. 1 { assign(V, C) : color(C) } 1 :- vertex(V).

where the third rule denotes a choice rule. It says that if vertex(V) is in the answer set, then among all C's such that color(C) is true, the answer set will include exactly one head literal assign(V, C).

Defaults and Exceptions. The general form of a default is represented as

p(X) :- c(X), not ab(d(p(X))), not -p(X).

where d stands for default, ab represents abnormality, not refers to negation as failure, and - denotes strong negation. The formula says that X has the property p if 1) X belongs to the class c, 2) X is not abnormal with respect to the default, and 3) it cannot be shown that X does not have the property p. Given the default, if e is a strong exception, then it is represented as

-p(X) :- e(X).

If e is a weak exception, it is represented as

ab(d(p(X))) :- not -e(X).

The last formula says that X is abnormal with respect to the default if there is no evidence that X is not an exception.

Next, we will give three examples from [45, 43, 44] that show how ASP is used to represent knowledge with defaults, exceptions, choice rules, and constraints.

3.4.1 Defaults and Exceptions Use Case

The PENG extension in [45] defines a default as a statement that contains words such as generally, normally, or typically. As compared to a strict rule, such as "All birds fly," a default expresses a fact that is true in most cases but not always. For example, the statement, "Generally, birds fly," indicates that most birds fly, with a few exceptions. PENG allows two types of exceptions: strong exceptions and weak exceptions. A strong exception refutes the default conclusion and derives the opposite one, e.g., "Penguins are birds. They do not fly." A weak exception makes the default inapplicable without defeating the default conclusion. For instance, given the weak exception, "An eagle is a bird. It is injured," it becomes unknown whether the eagle flies or not. To represent a default in DRS, PENG introduces a new default operator. An example is given in Figure 6, which shows the DRS for the sentence "Generally, birds fly."

Figure 6: The DRS for the statement, “Generally, birds fly.”

Next, we will give an example from [45] that shows how ASP is used to represent the semantics of PENG with defaults and exceptions. The knowledge base contains the following CNL sentences:

  1. Sam is a child.

  2. John is the father of Sam and Alice is the mother of Sam.

  3. Every father of a child is a parent of the child.

  4. Every mother of a child is a parent of the child.

  5. Parents of a child normally care about the child.

  6. John does not care about Sam.

  7. Alice is absent.

  8. If there is no evidence that a parent of a child is not absent then the parent abnormally cares about the child.

Sentences (1)-(4) denote strict rules, which can be represented in ASP as follows:

  1. child(sam).

  2. father(john, sam). mother(alice, sam).

  3. parent(X, Y) :- father(X, Y), child(Y).

  4. parent(X, Y) :- mother(X, Y), child(Y).

Sentence (5) indicates a default. It is translated into the following rule:

care(X, Y) :- parent(X, Y), child(Y), not ab(d(care(X, Y))), not -care(X, Y).

The formula says that X cares about Y if i) X is a parent of Y, ii) Y is a child, iii) there is no evidence that the pair (X, Y) is abnormal with respect to the default, and iv) there is no evidence that X does not care about Y. Sentence (6) is a strong exception to the default. It is represented as a negative atom, -care(john, sam). Sentence (8) denotes that sentence (7) is a weak exception to the default. They are represented as the following rules:

  1. absent(alice).

  2. ab(d(care(X, Y))) :- parent(X, Y), child(Y), not -absent(X).

The second rule says that the pair (X, Y) is abnormal with respect to the default if i) X is a parent of Y, ii) Y is a child, and iii) there is no evidence that X is not absent.

The answer set for the above rules is

{ child(sam), father(john, sam), mother(alice, sam), parent(john, sam), parent(alice, sam), absent(alice), -care(john, sam), ab(d(care(john, sam))), ab(d(care(alice, sam))) }

Given that -care(john, sam) is in the answer set, we can conclude that John does not care about Sam. Since neither care(alice, sam) nor -care(alice, sam) is in the answer set, it is not known whether Alice cares about Sam or not.

3.4.2 The Marathon Puzzle Use Case

In [43], Schwitter rewrites the marathon puzzle in PENG sentences, which can be automatically and unambiguously translated into an ASP program that solves the problem. The marathon puzzle is described in natural language as follows:

  1. Dominique, Ignace, Naren, Olivier, Philippe, and Pascal have arrived as the first six at the Paris marathon.

  2. Reconstruct their arrival order from the following information:

    1. Olivier has not arrived last.

    2. Dominique, Pascal and Ignace have arrived before Naren and Olivier.

    3. Dominique who was third last year has improved this year.

    4. Philippe is among the first four.

    5. Ignace has arrived neither in second nor third position.

    6. Pascal has beaten Naren by three positions.

    7. Neither Ignace nor Dominique are on the fourth position.

The puzzle is represented in PENG as follows:

  1. Dominique, Ignace, Naren, Olivier, Philippe, and Pascal are runners.

  2. There exist exactly six positions.

  3. Every runner is allocated to exactly one position.

  4. Reject that a runner R1 is allocated to a position and that another runner R2 is allocated to the same position and that R1 is not equal to R2.

  5. Reject that Olivier is allocated to the sixth position.

  6. If a runner R1 is allocated to a position P1 and another runner R2 is allocated to a position P2 and P1 is smaller than P2 then R1 is before R2.

  7. Reject that Naren is before Dominique, Pascal, and Ignace.

  8. Reject that Olivier is before Dominique, Pascal, and Ignace.

  9. Reject that Dominique is allocated to a position that is greater than or equal to 3.

  10. Reject that Philippe is allocated to a position that is greater than 4.

  11. Reject that Ignace is allocated to the second position.

  12. Reject that Ignace is allocated to the third position.

  13. Reject that Pascal is allocated to a position P1 and that Naren is allocated to a position P2 and that P1 is not equal to P2 minus 3.

  14. Reject that Ignace is allocated to the fourth position.

  15. Reject that Dominique is allocated to the fourth position.

Sentence (1) is translated into six ASP facts: runner(dominique), runner(ignace), runner(naren), runner(olivier), runner(philippe), runner(pascal). Sentence (2) is represented as an ASP fact with the range notation: position(1..6). Sentence (3) denotes a choice rule with a cardinality constraint:

1 { allocated(R, P) : position(P) } 1 :- runner(R).

The rule says that if runner(R) is in the answer set, then among all P's such that position(P) holds, the answer set will include exactly one head literal allocated(R, P). Sentence (5) denotes a constraint. It is represented as

:- allocated(olivier, 6).

The constraint forces the atom allocated(olivier, 6) to be excluded from the answer set. Other constraints include sentences (7)-(15). They are represented in a similar way to sentence (5). Sentence (6) is a conditional sentence. It is represented as the following ASP rule:

before(R1, R2) :- allocated(R1, P1), allocated(R2, P2), P1 < P2.

The complete ASP program is shown in Appendix B. There is one answer set for the above ASP program, in which the position information is allocated(ignace, 1), allocated(dominique, 2), allocated(pascal, 3), allocated(philippe, 4), allocated(olivier, 5), allocated(naren, 6).
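As a sanity check that the puzzle has a unique solution, the constraints can also be brute-forced over all arrival orders. The Python encoding below mirrors the CNL sentences above but is otherwise illustrative:

```python
# Brute-force check of the marathon puzzle over all permutations of 1..6.
from itertools import permutations

RUNNERS = ["dominique", "ignace", "naren", "olivier", "philippe", "pascal"]

def solutions():
    for perm in permutations(range(1, 7)):
        pos = dict(zip(RUNNERS, perm))
        if pos["olivier"] == 6:                      # sentence (5)
            continue
        if not all(pos[a] < pos[b]                   # sentences (7)-(8)
                   for a in ("dominique", "pascal", "ignace")
                   for b in ("naren", "olivier")):
            continue
        if pos["dominique"] >= 3:                    # sentence (9)
            continue
        if pos["philippe"] > 4:                      # sentence (10)
            continue
        if pos["ignace"] in (2, 3, 4):               # sentences (11), (12), (14)
            continue
        if pos["pascal"] != pos["naren"] - 3:        # sentence (13)
            continue
        if pos["dominique"] == 4:                    # sentence (15)
            continue
        yield pos

sols = list(solutions())
print(len(sols))  # 1 -- the arrival order is unique
```

The single surviving assignment places Ignace first and Naren last, matching the answer set of the generated ASP program.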

3.4.3 The Jobs Puzzle Use Case

As with the marathon puzzle, Schwitter also solves the jobs puzzle using ASP [44]. The jobs puzzle is described as

  1. There are four people: Roberta, Thelma, Steve, and Pete.

  2. Among them, they hold eight different jobs.

  3. Each holds exactly two jobs.

  4. The jobs are: chef, guard, nurse, telephone operator, police officer (gender not implied), teacher, actor, and boxer.

  5. The job of nurse is held by a male.

  6. The husband of the chef is the telephone operator.

  7. Roberta is not a boxer.

  8. Pete has no education past the ninth grade.

  9. Roberta, the chef, and the police officer went golfing together.

The corresponding CNL sentences of the jobs puzzle are shown below:

  1. Roberta is a person. Thelma is a person. Steve is a person. Pete is a person.

  2. Roberta is female. Thelma is female.

  3. Steve is male. Pete is male.

  4. Exclude that a person is male and that the person is female.

  5. If there is a job then exactly one person holds the job.

  6. If there is a person then the person holds exactly two jobs.

  7. Chef is a job. Guard is a job. Nurse is a job. Telephone operator is a job. Police officer is a job. Teacher is a job. Actor is a job. Boxer is a job.

  8. If a person holds a job as nurse then the person is male.

  9. If a person holds a job as actor then the person is male.

  10. If a first person holds a job as chef and a second person holds a job as telephone operator then the second person is a husband of the first person.

  11. If a first person is a husband of a second person then the first person is male.

  12. If a first person is a husband of a second person then the second person is female.

  13. Exclude that Roberta holds a job as boxer.

  14. Exclude that Pete is educated.

  15. If a person holds a job as nurse then the person is educated.

  16. If a person holds a job as police officer then the person is educated.

  17. If a person holds a job as teacher then the person is educated.

  18. Exclude that Roberta holds a job as chef.

  19. Exclude that Roberta holds a job as police officer.

  20. Exclude that a person holds a job as chef and that the person holds a job as police officer.

For the above CNL sentences, sentences (1)-(3) and (7) denote ASP facts. Sentences (5)-(6) indicate choice rules. Sentences (4), (13)-(14), and (18)-(20) signify constraints. Sentences (8)-(12) and (15)-(17) represent conditional sentences. The complete ASP program is shown in Appendix B. There is one answer set for the above problem. The job information in the answer set is hold(roberta, guard), hold(roberta, teacher), hold(thelma, chef), hold(thelma, boxer), hold(steve, nurse), hold(steve, police_officer), hold(pete, telephone_operator), hold(pete, actor), where the predicate hold(P, J) indicates that person P holds the job J.
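The jobs puzzle, too, can be sanity-checked by brute force. The Python encoding below (gender sets, job names, and the way the husband constraint is reduced to gender conditions) is an illustrative reading of the CNL sentences, not the generated ASP program:

```python
# Brute-force check of the jobs puzzle: assign each of the 8 jobs to one of
# the 4 people, with each person holding exactly two jobs.
from itertools import product

PEOPLE = ["roberta", "thelma", "steve", "pete"]
MALE = {"steve", "pete"}
JOBS = ["chef", "guard", "nurse", "telephone_operator",
        "police_officer", "teacher", "actor", "boxer"]

def solutions():
    for holders in product(PEOPLE, repeat=len(JOBS)):
        holds = dict(zip(JOBS, holders))
        if any(holders.count(p) != 2 for p in PEOPLE):
            continue                                 # exactly two jobs each
        if holds["nurse"] not in MALE or holds["actor"] not in MALE:
            continue                                 # sentences (8)-(9)
        # The husband of the chef is the telephone operator, so the chef is
        # female and the operator is male (sentences (10)-(12)).
        if holds["chef"] in MALE or holds["telephone_operator"] not in MALE:
            continue
        if holds["boxer"] == "roberta":
            continue                                 # sentence (13)
        # Pete is not educated: not nurse, police officer, or teacher
        # (sentences (14)-(17)).
        if "pete" in (holds["nurse"], holds["police_officer"], holds["teacher"]):
            continue
        if holds["chef"] == "roberta" or holds["police_officer"] == "roberta":
            continue                                 # sentences (18)-(19)
        if holds["chef"] == holds["police_officer"]:
            continue                                 # sentence (20)
        yield holds

sols = list(solutions())
print(len(sols))  # 1 -- the job assignment is unique
```

Only one assignment survives all the constraints, matching the answer set reported above.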

4 Computer-Processable Language

In this section, we discuss Computer-Processable Language (CPL), which was developed by Peter Clark at the University of Texas [5]. The first subsection introduces CPL's language properties, the second subsection discusses its semantic interpretations, and the third gives an overview of CPL's inference system.

4.1 Language Properties

The vocabulary of CPL is based on a pre-defined Component Library (CLib) ontology, which was developed at UT Austin [2]. Unlike ACE or PENG, CPL does not allow users to extend the vocabulary. Instead, it uses WordNet to map words that are outside of the target ontology to the closest concepts within the CLib ontology. In particular, modal words, such as probably and mostly, are excluded from CPL because they cannot be represented in first-order logic.

CPL accepts three types of sentences: ground facts, rules, and questions. Ground facts are basic CPL sentences that have the following forms:

There is|are NP
NP verb [NP] [PP]*
NP is|are passive-verb [by NP] [PP]*

where NP denotes a noun phrase, PP refers to a prepositional phrase, [NP] signifies that the noun phrase is optional, and [PP]* indicates that there can be zero or more prepositional phrases. Rules are of the form:

IF Sentence [AND Sentence]* THEN Sentence [AND Sentence]*

where Sentence denotes a basic CPL sentence, and [AND Sentence]* denotes zero or more additional basic CPL sentences. CPL accepts five forms of questions that begin with "What is," "What are," "How many," "How much," or "Is it true."

For ground facts, objects are considered existentially quantified. To express universally quantified objects, users must use rules. Words that indicate universal quantifiers (e.g. all, every, most) are banned in CPL. Compared to ACE and PENG, which use words such as "all" and "every" to describe universally quantified objects, this restriction makes CPL more redundant and stilted. For example, the ACE/PENG sentence, "every student that gets an A is smart," is written in CPL as "IF a student gets an A THEN the student is smart."

Unlike ACE or PENG, CPL does not allow pronouns. Users must use either a definite reference or an ordinal reference to indicate previously mentioned objects. A definite reference is resolved by searching for the most recent, previously mentioned noun. For example, given the sentences, "There is a dog in the park. The dog is cute," "the dog" in the second sentence refers to the dog mentioned in the first sentence. An ordinal reference is resolved in a similar way, except that it counts the occurrences of the mentioned noun from the beginning of the paragraph and chooses the occurrence indicated by the ordinal. For example, given the sentences, "Tom has a dog. Mary has a dog. The first dog is cute. The second dog is smart," to resolve "the first dog" in the third sentence, the CPL interpreter starts from the first sentence and finds the first occurrence of dog, Tom's dog. Similarly, CPL resolves "the second dog" in the fourth sentence by searching for the second occurrence of dog from the beginning of the paragraph.
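The two resolution strategies can be sketched in Python. The mention list (nouns already extracted in order of appearance) and the referent names are illustrative assumptions:

```python
# CPL-style definite and ordinal reference resolution over a list of
# (noun, referent) mentions in order of appearance.
def resolve_definite(noun, mentions):
    """Return the most recent previously mentioned occurrence of `noun`."""
    for mentioned, referent in reversed(mentions):
        if mentioned == noun:
            return referent
    return None

def resolve_ordinal(noun, k, mentions):
    """Return the k-th occurrence of `noun` from the paragraph start."""
    count = 0
    for mentioned, referent in mentions:
        if mentioned == noun:
            count += 1
            if count == k:
                return referent
    return None

# "Tom has a dog. Mary has a dog. The first dog is cute. ..."
mentions = [("dog", "toms_dog"), ("dog", "marys_dog")]
print(resolve_ordinal("dog", 1, mentions))  # toms_dog
print(resolve_ordinal("dog", 2, mentions))  # marys_dog
print(resolve_definite("dog", mentions))    # marys_dog (most recent)
```

The contrast between the two functions shows why CPL needs ordinals: a definite reference alone can only reach the most recent antecedent.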

4.2 Semantic Interpretations

The semantics of CPL are represented by KM (Knowledge Machine) sentences. KM is a powerful frame-based knowledge representation language [7]. It represents first-order logic clauses in LISP-like syntax.

The CPL interpreter translates a CPL sentence into KM sentences in three steps. First, the interpreter uses a bottom-up, broad coverage chart parser, called SAPIR, to parse a CPL sentence and then generates a logical form (LF) [18]. An LF is a simplified and normalized tree structure with logic-type elements [4]. It uses variables, which are prefixed by underscores, to represent noun phrases. An LF captures the syntactic properties and relations of the sentence, including tense, aspect, polarity, and a tag set consisting of S (sentence), PP (prepositional phrase), NN (noun compound), PN (proper name), PLUR (plural), and PLUR-N (numbered plural). For example, the sentence “the cat sat on the reed mat” is shown below in the LF form:

((VAR _X1 “the” “cat”)
(VAR _X2 “the” “mat” (NN “reed” “mat”))
(S (PAST) _X1 “sit” (PP “on” _X2)))

The first line says _X1 is a variable that represents a noun phrase, “the cat.” The second line says i) _X2 is a variable that denotes the noun phrase, “the reed mat,” ii) “reed mat” is a noun compound where “reed” is the noun modifier of “mat.” The third line says i) the tense of the sentence is simple past, ii) _X1, which denotes “the cat,” is the subject of the sitting event, iii) the prepositional phrase, “on the reed mat,” is the prepositional modifier of “sit.”

Second, an initial logic generator is used to transform the LF into ground logical assertions (KM sentences) by applying a set of simple, syntactic rewrite rules. Logical assertions are binary predicates that represent the syntactic relations and the prepositions of the LF generated in Step 1, including subject (syntactic subject), sobject (syntactic object), mod (modifier), all prepositions, and so on. Other information, such as part of speech, tense, polarity, and aspect, is omitted. For the above example, "the cat sat on the reed mat," its logical assertions are shown below:

subject(_Sit1, _Cat1)
“on”(_Sit1, _Mat1)
mod(_Mat1, _Reed1)

Here, _Sit1, _Cat1, _Mat1, _Reed1 refer to the words “sit,” “cat,” “mat,” and “reed” respectively in the LF. They are Skolem constants that denote the instances of some concepts (classes) in the target ontology. These concepts are determined in the third step. Predicates subject and mod denote the syntactic relations of the sentence. Predicate “on” signifies the preposition “on” in the LF.

Third, subsequent post-processing is performed based on the logical assertions generated in Step 2, including word sense disambiguation, semantic role labelling, and structural re-organization. For word sense disambiguation, each Skolem instance is assigned with a concept in the target ontology, based on the word each Skolem instance corresponds to and its part of speech information. Essentially, the interpreter selects the most common synset (a group of synonymous words) for a word in WordNet and then maps the WordNet synset to the corresponding concept in CLib ontology. For the above example, four additional sentences are added to the knowledge base:

isa(_Sit1, Sit_n1)
isa(_Cat1, Cat_n1)
isa(_Mat1, Mat_n1)
isa(_Reed1, Reed_n1)

where Sit_n1, Cat_n1, Mat_n1, Reed_n1 denote the CLib ontology concepts. The predicate isa is a binary relation. It indicates that an entity is an instance of a class.

By using word sense disambiguation, CPL is capable of identifying synonyms. Different words that denote the same concept will be mapped to the same concept in the CLib ontology, and thus will be co-referenced. This gives CPL an advantage over ACE and PENG. ACE cannot recognize synonyms at all: distinct words that represent the same concept will be regarded as having different meanings. PENG has a list of pre-defined synonyms in its dictionary, but its synonym information is far less extensive than that of WordNet, which contains 155,287 words organized in 117,659 synsets for a total of 206,941 word-sense pairs [29].

Next, semantic role labelling (SRL) is performed to replace some syntactic relations with semantically meaningful relations. For the above example, the binary predicates generated in Step 2 are replaced by the following:

agent(_Sit1, _Cat1)
location(_Sit1, _Mat1)
material(_Mat1, _Reed1)

where agent indicates that an entity initiates, performs or causes an event, location signifies that an event ends at a place, and material shows that an entity is made of another entity.

Finally, structural re-organization is deployed. For example, given two binary predicates subject(_Be1, _Rose2) and object(_Be1, _Red3), structural re-organization will merge them into a single predicate, “be”(_Rose2, _Red3). Another example is that any equal predicate, say equal(_Color1, _Red2), will be removed from the logical assertions and the occurrences of _Color1 will be replaced by _Red2 as well.
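The structural re-organization step can be sketched in Python. The predicate and constant names mirror the examples above; the triple encoding and the merging policy are illustrative assumptions:

```python
# Merge subject/object pairs of a copula into one "be" predicate and
# eliminate equal(x, y) assertions by substituting y for x everywhere.
def reorganize(assertions):
    # assertions: list of (predicate, arg1, arg2) triples
    subj = {a[1]: a[2] for a in assertions if a[0] == "subject"}
    obj = {a[1]: a[2] for a in assertions if a[0] == "object"}
    mergeable = set(subj) & set(obj)
    merged = [("be", subj[e], obj[e]) for e in subj if e in obj]
    kept = [a for a in assertions
            if a[0] not in ("subject", "object") or a[1] not in mergeable]
    out = merged + kept
    # Drop equal(x, y) and rename x to y in the remaining assertions.
    renaming = {a[1]: a[2] for a in assertions if a[0] == "equal"}
    return [(p, renaming.get(x, x), renaming.get(y, y))
            for (p, x, y) in out if p != "equal"]

triples = [("subject", "_Be1", "_Rose2"), ("object", "_Be1", "_Red3")]
print(reorganize(triples))  # [('be', '_Rose2', '_Red3')]
```

Both transformations from the text appear here: the subject/object merge produces the single "be" predicate, and the renaming dictionary performs the equal-elimination substitution.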

Similar to ACE and PENG, CPL will display the final interpretations and the paraphrases of the input to users for judgement. If the users do not accept the interpretation, they can rephrase the original sentence and/or modify the word senses and semantic relations.

4.3 Reasoning Services

For CPL's reasoning service, users can compose questions in CPL using one of the five forms described in section 4.1. Unlike ACE and PENG, CPL supports neither consistency checking nor informativity checking, nor does it perform theorem proving for a given set of sentences.

CPL deploys KM as the underlying inference system. As mentioned in section 4.2, KM is a frame-based knowledge representation language implemented in Common Lisp. The basic representation unit is a frame, which consists of a set of slots and values. Slots denote binary relations between instances. They can represent the properties of a class or an instance. KM defines three types of properties for a class: its own properties, assertional properties, and definitional properties. A class's own properties describe the meta-data of the class itself. They do not apply to any member of the class. Assertional properties denote the properties that are implied by the membership of the class. They are formulated as uni-directional implications:

X ∈ C ⇒ s(X, v)

where C denotes a class, s represents a slot indicating one of the class's assertional properties, and v is the value of slot s, so that s is a binary relation that holds between instances of C and the value v. Definitional properties are the properties that are both implied by the membership of the class and sufficient to conclude the class membership of an instance. They are formulated as bi-directional implications:

X ∈ C ⇔ s(X, v)

where s represents a slot indicating one of the class's definitional properties. An instance can be classified into a class C if it satisfies all of C's definitional properties. Both assertional and definitional properties apply to every member of the class. Classes can inherit properties from their superclasses. Once an instance is created, it inherits the properties from all of its superclasses.

KM has a mechanism, called automatic classification, which will automatically attempt to re-classify an instance into the most specific class once it is created or modified. For example, given the classes Square and Rectangle where Square is defined to be the subclass of Rectangle with the definitional property of equal length and width, if the user creates an instance of Rectangle with its length equal to its width, KM will search the inheritance hierarchy and re-classify the instance into the Square class. Later, if the user modifies the instance, say changing its length to be smaller than its width, KM will again search the inheritance hierarchy and re-classify the instance into the Rectangle class.
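The Square/Rectangle scenario can be sketched in Python. The two-level hierarchy and the encoding of definitional properties as Python predicates are illustrative assumptions, not KM's actual mechanism:

```python
# A two-level sketch of KM-style automatic classification: re-classify an
# instance into the most specific class whose definitional properties hold.
CLASSES = {
    # name: (superclass, definitional test over the instance's slots)
    "Rectangle": (None, lambda s: "length" in s and "width" in s),
    "Square": ("Rectangle", lambda s: s.get("length") == s.get("width")),
}

def classify(slots):
    """Return the most specific class whose definitional properties hold."""
    best = None
    for name, (superclass, test) in CLASSES.items():
        if not test(slots):
            continue
        if superclass is not None and not CLASSES[superclass][1](slots):
            continue  # inherited definitional properties must hold too
        if best is None or superclass == best:
            best = name  # prefer the subclass over its superclass
    return best

inst = {"length": 2, "width": 2}
print(classify(inst))   # Square
inst["length"] = 1      # the user modifies the instance
print(classify(inst))   # Rectangle
```

Re-running `classify` after each modification mirrors KM's behavior of re-classifying an instance whenever it is created or changed.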

KM performs inference when a query is issued to the knowledge base. A query is of the form:

(the s1 of (the s2 of … (the sn of Expr)))

where each si represents a slot and Expr denotes a KM expression that evaluates to one or more instances. The semantics of a query are formulated as an access path, a join of binary predicates of the form:

sn(c, Xn) ∧ sn-1(Xn, Xn-1) ∧ … ∧ s1(X2, X1)

where c is the constant denoted by Expr and the Xi are free variables. The access path denotes the set S of values for X1 such that the conjunction holds. KM evaluates a query from right to left in iterations. In iteration 1, KM processes (the sn of Expr) by computing the value of Expr and then finding the value of slot sn for each instance returned by Expr. The automatic classification mechanism is implicitly applied to each instance generated in the whole process. The results are fed as input to iteration 2. In iteration 2, KM finds the value of slot sn-1 for each instance in the input. The results are again passed to iteration 3. This process repeats until the outermost slot s1 has been processed.
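The right-to-left evaluation can be sketched in Python. The knowledge base, instance names, and slot names below are illustrative assumptions; KM's frames are reduced here to a nested dictionary:

```python
# Right-to-left evaluation of a chained slot query over a toy knowledge base
# mapping instance -> slot -> set of values.
KB = {
    "tom": {"owns": {"car1"}},
    "car1": {"color": {"red1", "red2"}},
}

def query(slots, start):
    """Evaluate (the s1 of (the s2 of ... (the sn of start)))."""
    current = {start}
    for slot in reversed(slots):       # innermost slot first
        nxt = set()
        for inst in current:
            nxt |= KB.get(inst, {}).get(slot, set())
        current = nxt
    return current

# "the color of the owns of tom" -> the colors of everything Tom owns.
print(query(["color", "owns"], "tom"))
```

Each loop iteration corresponds to one of KM's iterations: the result set of one slot lookup becomes the input of the next, outer slot.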

5 Nonmonotonicity in Controlled Natural Languages

According to [1], nonmonotonic logics are a family of logics designed to represent the kind of defeasible inference found in everyday life, where reasoners draw a set of conclusions that are justified by the given knowledge base but may be invalidated by the addition of more knowledge. Examples of nonmonotonic logic frameworks include circumscription [25], default logic [38], and autoepistemic logic [28]. In this section, we first study two types of nonmonotonic phenomena that occur in natural languages: defaults and exceptions, and conversational implicatures. Then, we propose their adapted representations in CNLs and the corresponding formalizations in nonmonotonic reasoning frameworks.

5.1 Defaults and Exceptions

A default is a statement that is true by default about a collection of instances but may be defeated by information about some specific instances. The latter are called exceptions. This contrasts with a definite statement, for which the occurrence of such instances would falsify the statement itself. In natural languages, a default can be empirically categorized into two types: one that is directly declared and one that is indirectly specified. A default is directly declared if it is a statement that generalizes what an object does and contains particular words (e.g. generally, typically, normally) [45]. For instance, "Normally, birds fly," or "If she has an essay to write, typically she will study in the library." A default is stated indirectly if it does not contain any such keywords but can be identified from the context or based on some commonsense knowledge, e.g., "There is a lot of rain in Seattle." A default hides a number of unstated assumptions; in the previous case, e.g., "There is no drought in Seattle," or "The climate in Seattle does not change." The default is defeated if one of the assumptions does not hold. Similarly, exceptions can be noted directly with a keyword, such as except, or expressed indirectly. Examples are shown below,

  1. If she has an essay to write, she will study late in the library except on weekends.

  2. a. Typically, birds fly.

     b. Penguins do not fly.

  3. a. If Mary has a writing assignment, typically she will study in the library.

     b. If Mary has a coding assignment, normally she will study in the CS lab.

Sentence (1) contains a default statement, and its exception is directly specified by the keyword "except." The sentence says that normally she will study in the library, but if it is a weekend, she will not. Sentence (2.a) denotes a default, which generalizes what birds do. Although sentence (2.b) does not contain any keyword indicating that it is an exception to (2.a), we can still identify the exception based on the context and on commonsense knowledge. Sentences (3.a) and (3.b) are ambiguous. The default conclusions drawn from (3.a) and (3.b) are incompatible with each other because Mary cannot study in two different locations at the same time. However, there is no information about which statement is an exception to which. Hence, it is impossible to decide where Mary will study when she has both a writing and a coding assignment to do.

We adapt defaults and exceptions that occur in natural languages to the design of CNLs. Unlike natural languages, CNLs have to be precise and unambiguous without losing the naturalness of the language. To the best of our knowledge, PENG is the only CNL that incorporates defaults and supports nonmonotonic reasoning. As described in Section 3.4, PENG defines a default as a general statement that contains words such as normally, generally, and typically. There are two types of exceptions in PENG: strong exceptions and weak exceptions. A strong exception contradicts the conclusion generated by the default. A weak exception makes the default inapplicable. Defaults and exceptions are formalized as ASP programs. One problem with PENG is the stilted way it indicates that a statement is a weak exception to a default. An example is shown below:

  4. (a) Parents of a child normally care about the child.

     (b) Tom is a parent of a child.

     (c) Tom does not care about his child.

     (d) Alice is a parent of a child.

     (e) Alice is absent.

Sentence 4.a is a default, which is formalized as

care(X,Y) :- parent(X,Y), child(Y), not ab(d(care(X,Y))), not -care(X,Y).

where d stands for default, ab represents abnormal, not signifies negation as failure, and - refers to strong negation. Sentence (4.c) denotes a strong exception to (4.a). In order to ensure that sentence (4.e) is a weak exception to (4.a), users have to write the cancellation rule in CNL as “If there is no evidence that a parent of a child is not absent then the parent abnormally cares about the child.” This makes the language processor translate it into the following ASP rule:

ab(d(care(X,Y))) :- parent(X,Y), child(Y), not -absent(X).

The way the cancellation rule is expressed in PENG is just a direct translation of the intended ASP rule, which is rather unintuitive.

We propose a different approach to the representation and formalization of defaults and exceptions in CNLs. Each CNL sentence is associated with a unique identifier, which denotes the location of the sentence in the text. The identifier can be an English phrase, e.g., the second sentence of the third paragraph, or a user-defined label, such as para3sent2, where para stands for paragraph and sent for sentence. We assume that CNL sentences are defeasible by default and can therefore be defeated. To denote a definite statement, users must add the keyword strict at the beginning of the sentence, e.g.,

  5. (strict) Obama won the presidential election in 2012.

Exception information has to be stated explicitly. A default can have three types of exceptions: refutations, rebuttals, and cancellations. A refutation of a default offers a conclusion that is incompatible with the conclusion drawn from the default and overrides the default. Here, we assume that the “refutation” relation is transitive: if statement A refutes B and B refutes C, then statement A refutes C as well. A refutation is a CNL statement preceded by its reference information, which specifies the sentences to which the current statement is an exception. The reference information is represented by either template 6.a or 6.b,

  6. (a) except

     (b) exception to id_1, …, id_n

where the id_i (1 ≤ i ≤ n) denote sentence identifiers. Template 6.a indicates that the current sentence is a refutation of the preceding one. Template 6.b says that the sentence is a refutation of the sentences with identifiers id_1, …, id_n. A rebuttal of a default is also a conclusion that is incompatible with the default; however, there is no information on which conclusion has more weight. Hence, no conclusion can be drawn from either the default or its rebuttal. To specify that two conclusions are incompatible with each other, we write a CNL statement preceded by the keyword conflict constraint, e.g.,

  7. (a) Tom votes for Obama.

     (b) Tom votes for Romney.

     (c) Obama is a candidate.

     (d) Romney is a candidate.

     (e) (conflict constraint) A person can vote for at most one candidate.

According to 7.e, sentences 7.a and 7.b are incompatible and therefore rebut each other. Thus, neither 7.a nor 7.b can be concluded. The cancellation of a default simply renders the default inapplicable and thereby withdraws the default conclusion. A cancellation is represented as a conditional of the form:

If P, then cancel id.

where P denotes the premise of the conditional and id identifies the default being cancelled. Note that all exceptions are themselves defeasible and can be defeated by other statements. An exception can therefore defeat other statements only when the exception itself is not defeated, e.g.,

  8. (a) If John gets an offer from BOA, John will join BOA.

     (b) (except) If John gets an offer from Amazon, John will join Amazon.

     (c) John gets an offer from BOA.

     (d) John gets an offer from Amazon.

     (e) Amazon goes bankrupt.

     (f) If Amazon goes bankrupt, then cancel 8.b.

Sentence 8.b causes an exception to 8.a. However, based on 8.e and 8.f, sentence 8.b is defeated. In this case, sentence 8.b cannot be used to refute 8.a. Hence, we can conclude that “John will join BOA.”
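The chained-defeat behavior just described can be sketched procedurally. The following Python fragment is our own illustration, not part of any CNL system; all names are hypothetical. It checks that a refutation defeats its target only when the refutation itself is not cancelled:

```python
# Illustrative sketch of chained defeat for the John/BOA example.
# A refutation only defeats its target if it is not itself cancelled.

facts = {"offer_boa", "offer_amazon", "amazon_bankrupt"}

# rule id -> (premise, conclusion)
rules = {
    "8a": ("offer_boa", "join_boa"),
    "8b": ("offer_amazon", "join_amazon"),   # (except) refutes 8a
}
refutes = {"8b": "8a"}                       # 8b overrides 8a
# cancellation (8f): if Amazon is bankrupt, rule 8b is cancelled
cancelled = {"8b"} if "amazon_bankrupt" in facts else set()

def applicable(rid):
    premise, _ = rules[rid]
    return premise in facts and rid not in cancelled

def conclusions():
    out = set()
    for rid in rules:
        if not applicable(rid):
            continue
        # defeated only if some *applicable* rule refutes it
        if any(applicable(r) for r, target in refutes.items() if target == rid):
            continue
        out.add(rules[rid][1])
    return out

print(conclusions())   # {'join_boa'}: 8b is cancelled, so 8a survives
```

Removing "amazon_bankrupt" from the facts un-cancels 8b, which then defeats 8a and yields "join_amazon" instead.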

We formalize CNL sentences using Logic Programming with Defaults and Argumentation Theories (LPDA) [51], a powerful framework for defeasible reasoning based on the well-founded semantics [37]. LPDA has well-defined semantics for defaults and exceptions. There are two types of rules: strict rules and defeasible rules. Each LPDA program is accompanied by an argumentation theory, which decides when a defeasible rule is defeated. There are three special predicates: opposes, overrides, and cancel, where opposes indicates literals that are incompatible with each other, overrides denotes a binary priority relation between defeasible rules, and cancel cancels a defeasible rule. We use the predicates overrides and cancel to formalize the “refutation” and “cancellation” relations between a default and its exceptions, respectively, and the predicate opposes to formalize the incompatibility between a default and its rebuttals.

Next, we give a larger example that includes all three types of exceptions and show how they are formalized in LPDA to perform defeasible reasoning.

  9. (a) John is a store member.

     (b) John is an SBU employee.

     (c) John buys a can of coke.

     (d) John buys a lobster.

     (e) Mary is a store member.

     (f) Mary is an SBU employee.

     (g) Mary buys salmon.

     (h) Mary is in the blacklist.

     (i) A Coke is a beverage.

     (j) Lobster is seafood.

     (k) Salmon is seafood.

     (l) If a customer buys a beverage, the customer gets a discount of $1.50.

     (m) (except) If a customer is a store member and buys a beverage, the customer gets a discount of $2.50.

     (n) If a customer is a store member and buys seafood, the customer gets a discount of $7.50.

     (o) If a customer is an SBU employee and buys seafood, the customer gets a discount of $5.00.

     (p) If a store member is in the blacklist, then cancel 9.m, 9.n.

     (q) (conflict constraint) A customer gets at most one discount for any product.

The CNL sentences are formalized in LPDA as follows:

member(John).
sbuemployee(John).
buy(John,coke).
buy(John,lobster).
member(Mary).
sbuemployee(Mary).
buy(Mary,salmon).
blacklist(Mary).
beverage(coke).
seafood(lobster).
seafood(salmon).

@{r1(?Customer)} discount(?Customer,?Product,?Amount) :-
    buy(?Customer,?Product), beverage(?Product), ?Amount is 1.50.

@{r2(?Customer)} discount(?Customer,?Product,?Amount) :-
    buy(?Customer,?Product), beverage(?Product),
    member(?Customer), ?Amount is 2.50.

@{r3(?Customer)} discount(?Customer,?Product,?Amount) :-
    buy(?Customer,?Product), seafood(?Product),
    member(?Customer), ?Amount is 7.50.

@{r4(?Customer)} discount(?Customer,?Product,?Amount) :-
    buy(?Customer,?Product), seafood(?Product),
    sbuemployee(?Customer), ?Amount is 5.00.

cancel(r2(?Customer)) :- member(?Customer), blacklist(?Customer).
cancel(r3(?Customer)) :- member(?Customer), blacklist(?Customer).

opposes(discount(?Customer,?Product,?Amount1),
        discount(?Customer,?Product,?Amount2)) :-
    buy(?Customer,?Product), ?Amount1 != ?Amount2.

overrides(r2(?Customer),r1(?Customer)).

For the above example, if the user asks “How much discount does John get for buying a coke?”, the system answers $2.50. Sentences 9.l and 9.m give rise to the conclusions that John gets a discount of $1.50 and of $2.50, respectively, for buying a coke. According to 9.q, these conclusions are incompatible because John cannot get two discounts for buying one product. However, since sentence 9.m refutes 9.l, the conclusion drawn from 9.m overrides the one drawn from 9.l. Thus, we conclude that John gets a discount of $2.50 for buying a coke. Next, if the user asks “How much discount does John get for buying a lobster?”, the system returns no answer. As in the previous case, sentences 9.n and 9.o generate incompatible conclusions: John gets a discount of $7.50 and of $5.00 for buying a lobster. Without any refutation information between 9.n and 9.o, neither conclusion can be justified. Finally, if the user asks “How much discount does Mary get for buying salmon?”, the system answers $5.00. Even though 9.n and 9.o generate incompatible conclusions for Mary’s purchase, since Mary is in the blacklist, sentence 9.n is defeated by 9.p. In this case, sentence 9.n cannot be used to rebut 9.o. Thus, we conclude that Mary gets a discount of $5.00 for buying salmon.
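The three defeat mechanisms at work in this example can be simulated in a few lines. The sketch below is our own Python illustration, not LPDA syntax; the predicate and rule names are hypothetical. A rule's conclusion survives only if the rule fires, is not cancelled, and is neither refuted by a higher-priority rule nor rebutted by an unordered competitor:

```python
# Our own simulation of the defeat logic in example 9 (not LPDA syntax).

facts = {
    "member": {"John", "Mary"},
    "sbu": {"John", "Mary"},
    "blacklist": {"Mary"},
    "buys": {("John", "coke"), ("John", "lobster"), ("Mary", "salmon")},
    "beverage": {"coke"},
    "seafood": {"lobster", "salmon"},
}

def fires(rule, c, p):
    """Return the discount amount if the rule's body holds, else None."""
    if (c, p) not in facts["buys"]:
        return None
    if rule == "r1" and p in facts["beverage"]:
        return 1.50
    if rule == "r2" and p in facts["beverage"] and c in facts["member"]:
        return 2.50
    if rule == "r3" and p in facts["seafood"] and c in facts["member"]:
        return 7.50
    if rule == "r4" and p in facts["seafood"] and c in facts["sbu"]:
        return 5.00
    return None

overrides = {("r2", "r1")}              # 9.m refutes 9.l

def cancelled(rule, c):                 # 9.p cancels 9.m and 9.n
    return rule in {"r2", "r3"} and c in facts["member"] and c in facts["blacklist"]

def defeated(r, live):
    for s in live:
        if s == r:
            continue
        if (s, r) in overrides:
            return True                 # refuted by a stronger rule
        if (r, s) not in overrides:
            return True                 # rebutted: no priority either way
    return False

def discount(c, p):
    live = {r for r in ("r1", "r2", "r3", "r4")
            if fires(r, c, p) is not None and not cancelled(r, c)}
    winners = [fires(r, c, p) for r in live if not defeated(r, live)]
    return winners[0] if len(winners) == 1 else None

print(discount("John", "coke"))     # 2.5  (9.m overrides 9.l)
print(discount("John", "lobster"))  # None (9.n and 9.o rebut each other)
print(discount("Mary", "salmon"))   # 5.0  (9.n cancelled by 9.p)
```

The three queries reproduce the three outcomes discussed above: refutation picks a winner, rebuttal leaves no answer, and cancellation removes a rebutting rival.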

5.2 Conversational Implicatures

The concept of conversational implicatures was proposed and systematically studied by H. P. Grice [16]. Grice stated that conversational implicatures are a component of the meaning of utterances that is derived not only from the literal meaning of the linguistic expressions but also from some general principles of conversational rationality. These principles are also known as Grice’s maxims:

  • Quantity

    1. Make your contribution as informative as is required.

    2. Do not make your contribution more informative than is required.

  • Quality

    1. Do not say what you believe to be false.

    2. Do not say that for which you lack adequate evidence.

  • Relation

    1. Be relevant.

  • Manner

    1. Avoid obscurity of expression.

    2. Avoid ambiguity.

    3. Be brief.

    4. Be orderly.

Speakers are presumed to observe the above maxims in their communication. Based on this assumption, when a speaker appears to break one of the maxims, the hearer is led to make inferences about what the speaker really means. For instance,

  10. The final exam will take place in the CS building or the main library.

  11. Speaker A: Students said the exam covered all chapters in the book and was very difficult.
      Speaker B: It covered all chapters in the book.

Sentence 10 says that the speaker thinks both places are possible. Hence, by the maxim of quantity, we can conclude that the speaker does not know exactly where the exam will take place. In sentence 11, speaker A says that the exam not only covered all chapters but was also very difficult. Speaker B, however, only mentions that the exam covered all chapters. Again, by the maxim of quantity, we can conclude that the exam covered all chapters but was not necessarily difficult. Conversational implicatures are defeasible in nature, in contrast with the semantics of entailment in classical logic. For instance, if speaker B adds that “in fact, the exam was also very difficult,” this cancels the implicature that “the exam was not difficult.”

Grice divides conversational implicatures into two classes: generalized implicatures and particularized implicatures. Generalized implicatures do not strongly depend on the context; they arise whenever particular lexical items are used in a sentence. Particularized implicatures, on the other hand, depend on a specific context and arise only when that context occurs. In the previous example, sentence 10 carries a generalized implicature, triggered by the connective or. Sentence 11 carries a particularized implicature: the implicature generated from B’s utterance arises from the context A provides. Generalized implicatures can be further divided into sub-classes, e.g., scalar implicatures and clausal implicatures. For the rest of this subsection, we focus on modeling scalar implicatures using nonmonotonic logics.

Scalar implicatures are a class of generalized implicatures derived from the maxim of quantity. They are based on implicature scales, each denoting a set of lexical items ordered by informativeness, e.g., ⟨a few, some, many, all⟩. Given a sentence that uses one of the lexical items in a scale, its weaker (and stronger) counterparts are the sentences in which that lexical item is replaced by a weaker (or stronger) item from the scale. The theory of scalar implicatures says that i) all weaker sentences are entailed by the given sentence, and ii) all stronger sentences are false by default. For instance, based on the scale ⟨a few, some, many, all⟩, the sentence “Many students passed the exam” entails that i) “Some students passed the exam” and ii) “A few students passed the exam.” In addition, it implies that “Not all students passed the exam.”
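The scalar reading can be computed mechanically from the scale. The Python helper below is our own hypothetical illustration of this reading, not an implementation from the literature: weaker scale-mates are entailed, stronger ones are denied by default.

```python
# Hypothetical helper computing the scalar reading of an utterance.

def scalar_reading(scale, used, template):
    """scale is ordered weakest to strongest; used is the item uttered."""
    i = scale.index(used)
    entailed = [template.format(q) for q in scale[:i]]          # weaker: entailed
    denied = ["not: " + template.format(q) for q in scale[i + 1:]]  # stronger: denied
    return entailed, denied

entailed, denied = scalar_reading(
    ["a few", "some", "many", "all"], "many", "{} students passed the exam")
print(entailed)  # ['a few students passed the exam', 'some students passed the exam']
print(denied)    # ['not: all students passed the exam']
```

Because the denied statements are only defaults, a later assertion of a stronger scale-mate would simply retract them, mirroring the defeasibility of the implicature.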

One of the previous works that use nonmonotonic logics to model scalar implicatures is Wainer’s compositional approach, which is based on circumscription [50]. Wainer defines the extended meaning of a sentence as a semantic content of the sentence itself combined with (one of) its generalized implicatures. The general form of the extended meaning is represented as

S ∧ (ab(e) ∨ I)

Here, S and I denote the semantic content of the sentence and (one of) its generalized implicatures, respectively. The predicate ab is the abnormality predicate, an ad hoc symbol that does not appear in S or I. Similarly, the constant e is an ad hoc symbol that is only used in this one formula. The extended meaning of the sentence is obtained by circumscribing the predicate ab, that is, by minimizing the extension of ab. Details of circumscription can be found in Appendix A.

As before, we model scalar implicatures using LPDA. For implicature scales involving quantifiers, e.g., ⟨a few, some, many, all⟩, our current approach only supports one instance, ⟨some, all⟩, where some and all correspond to the existential and universal quantifiers of first-order logic, respectively. For example, given the sentence “Some students pass exam1,” its extended meaning is “Some students pass exam1. There exists at least one student who does not pass exam1.” The extended meaning is formalized in LPDA as shown below:

student(#1).
pass(#1,e1).
student(#2).
@r1 neg pass(#2,e1).

where the label @r1 indicates a defeasible rule and #1 and #2 are Skolem constants. The first two facts represent the semantic content of the original sentence. The third and fourth rules represent the semantic content of the scalar implicature. If the speaker later adds that “in fact, all students pass the exam,” the following rule is added,

pass(?x,e1) :- student(?x).

This will refute the default conclusion that “there is a student who did not pass exam1.”

We can also model implicature scales over predicates, e.g., ⟨cute, beautiful, stupendous⟩. For example, given the sentence “Mary is beautiful,” its extended meaning is “Mary is beautiful. Mary is not stupendous.” The semantics of the implicature scale, together with the extended meaning of the sentence, is formalized as

cute(?x) :- beautiful(?x).
beautiful(?x) :- stupendous(?x).
beautiful(Mary).
@r2 neg stupendous(Mary).

The above rules entail that Mary is cute. If the speaker later says that “in fact, Mary is also stupendous,” the strict fact stupendous(Mary) is added to the knowledge base and refutes rule @r2.

In addition to implicature scales over quantifiers and predicates, Wainer also considers the scale ⟨or, and⟩. For example, the sentence “Mary or Tom passed the exam” indicates that at least one of Mary and Tom passed the exam, though the speaker does not know exactly who. The sentence also implies that “Mary and Tom passed the exam” is false by default. Our current approach does not support this kind of implicature scale. Instead, we propose to simulate the logical operations inclusive or and exclusive or in CNL. We use the connective or to denote the inclusive or. For example, the sentence “John votes for Obama or Romney” indicates that John votes for at least one of them. The sentence is formalized in LPDA as shown below:

vote(John,Romney) :- neg vote(John,Obama).
vote(John,Obama) :- neg vote(John,Romney).

The above rules guarantee that if there is a fact that John does not vote for Obama (or Romney), then the conclusion that John votes for Romney (or Obama) can be derived. Moreover, the rules remain consistent when there are facts stating that John votes for both Obama and Romney. However, if John votes for neither Obama nor Romney, an inconsistency arises because this violates the constraint that John votes for at least one of the candidates. We use the connective either … or to denote the exclusive or. For example, the sentence “John votes either for Obama or for Romney” is formalized as

neg vote(John,Romney) :- vote(John,Obama).
neg vote(John,Obama) :- vote(John,Romney).

If there is a fact that John votes for Obama (or Romney), the above rules ensure that the conclusion that John does not vote for Romney (or Obama) can be derived. If John votes for both Obama and Romney, an inconsistency arises because this violates the constraint that John can vote for only one of the candidates.
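As a sanity check of the two encodings, the following Python sketch (our own; a naive one-step application of the rules over explicit positive and negative vote facts, not a logic-programming engine) reproduces the behavior described above, with "O" and "R" standing for vote(John,Obama) and vote(John,Romney):

```python
# One-step sanity check of the inclusive-or and exclusive-or encodings.

def inclusive_or(pos, neg):
    # vote(John,Romney) :- neg vote(John,Obama), and vice versa
    derived = set(pos)
    if "O" in neg:
        derived.add("R")
    if "R" in neg:
        derived.add("O")
    return derived, not (derived & neg)          # (conclusions, consistent?)

def exclusive_or(pos, neg):
    # neg vote(John,Romney) :- vote(John,Obama), and vice versa
    derived_neg = set(neg)
    if "O" in pos:
        derived_neg.add("R")
    if "R" in pos:
        derived_neg.add("O")
    return derived_neg, not (set(pos) & derived_neg)

derived, ok = inclusive_or(set(), {"O"})         # John does not vote for Obama
print(sorted(derived), ok)                       # ['R'] True
print(inclusive_or(set(), {"O", "R"})[1])        # False: votes for neither
print(exclusive_or({"O", "R"}, set())[1])        # False: votes for both
```

The inclusive encoding tolerates voting for both but rejects voting for neither; the exclusive encoding does the opposite, as the text explains.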

6 Conclusion

In this report, we study controlled natural languages and propose extensions to support nonmonotonic reasoning. First, we give an overview of CNLs in terms of language design, classification, and development. Then, we introduce three CNL systems: Attempto Controlled English (ACE), Processable English (PENG), and Computer-processable English (CPL), and identify their shared traits and distinguishing characteristics in language design, semantic interpretation, and reasoning services. Finally, we propose our extensions of CNL to support defeasible reasoning. We present the representation of defaults, exceptions, and conversational implicatures in CNL and their formalizations in Logic Programming with Defaults and Argumentation Theories.


  • [1] G. Aldo Antonelli. Non-monotonic logic. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Winter 2012 edition, 2012.
  • [2] Ken Barker, Bruce Porter, and Peter Clark. A library of generic concepts for composing knowledge bases. In Proceedings of the 1st international conference on Knowledge capture, pages 14–21. ACM, 2001.
  • [3] Donald Chapin, DE Baisley, and H Hall. Semantics of business vocabulary & business rules (SBVR). In Rule Languages for Interoperability, 2005.
  • [4] Peter Clark and Phil Harrison. Boeing’s NLP system and the challenges of semantic representation. In Proceedings of the 2008 Conference on Semantics in Text Processing, pages 263–276. Association for Computational Linguistics, 2008.
  • [5] Peter Clark, Philip Harrison, Thomas Jenkins, John A Thompson, Richard H Wojcik, et al. Acquiring and using world knowledge using a restricted subset of English. In FLAIRS Conference, pages 506–511, 2005.
  • [6] Peter Clark and Bruce Porter. KM - the knowledge machine 1.4: Users manual (revision 1), 1999.
  • [7] Peter Clark, Bruce Porter, and Boeing Phantom Works. KM - the knowledge machine 2.0: Users manual. Department of Computer Science, University of Texas at Austin, 2004.
  • [8] Michael A Covington. GULP 3.1: An extension of Prolog for unification-based grammar. Citeseer, 1994.
  • [9] Flip G Droste and John E Joseph. Linguistic Theory and Grammatical Description: Nine Current Approaches, volume 75. John Benjamins Publishing, 1991.
  • [10] Norbert E Fuchs, Kaarel Kaljurand, and Tobias Kuhn. Attempto Controlled English for knowledge representation. In Reasoning Web, pages 104–124. Springer, 2008.
  • [11] Norbert E. Fuchs, Kaarel Kaljurand, and Tobias Kuhn. Discourse Representation Structures for ACE 6.5. Technical Report ifi-2009.04, Department of Informatics, University of Zurich, Zurich, Switzerland, 2009.
  • [12] Norbert E Fuchs and Uta Schwertel. Reasoning in Attempto Controlled English. Springer, 2003.
  • [13] Norbert E Fuchs and Rolf Schwitter. Specifying logic programs in controlled natural language. arXiv preprint cmp-lg/9507009, 1995.
  • [14] Michael Gelfond and Yulia Kahl. Knowledge representation, reasoning, and the design of intelligent agents. Michael Gelfond, 2012.
  • [15] Michael Gelfond and Vladimir Lifschitz. The stable model semantics for logic programming. In ICLP/SLP, volume 88, pages 1070–1080, 1988.
  • [16] H Paul Grice. 4. logic and conversation. The Semantics-Pragmatics Boundary in Philosophy, page 47, 2013.
  • [17] ASD Simplified Technical English Maintenance Group. ASD-STE 100: Simplified Technical English : International Specification for the Preparation of Maintenance Documentation in a Controlled Language. Aerospace and Defence Industries Association of Europe, 2007.
  • [18] Philip Harrison and Michael Maxwell. A new implementation of GPSG. In Proc. 6th Canadian Conf on AI, pages 78–83, 1986.
  • [19] Glen Hart, Martina Johnson, and Catherine Dolbear. Rabbit: Developing a controlled natural language for authoring ontologies. In The Semantic Web: research and applications, pages 348–360. Springer, 2008.
  • [20] Hans Kamp and Uwe Reyle. From discourse to logic: Introduction to modeltheoretic semantics of natural language, formal logic and discourse representation theory. Number 42. Springer, 1993.
  • [21] Tobias Kuhn. A survey and classification of controlled natural languages. 2013.
  • [22] Vladimir Lifschitz. Circumscription. 1996.
  • [23] Rainer Manthey and François Bry. Satchmo: a theorem prover implemented in Prolog. In 9th International Conference on Automated Deduction, pages 415–434. Springer, 1988.
  • [24] Wiktor Marek and Mirosław Truszczyński. Autoepistemic logic. Journal of the ACM (JACM), 38(3):587–618, 1991.
  • [25] John McCarthy. Circumscription—a form of non-monotonic reasoning. Artificial intelligence, 13(1):27–39, 1980.
  • [26] William McCune. Mace 2.0 reference manual and guide. arXiv preprint cs/0106042, 2001.
  • [27] William W McCune. Otter 3.0 reference manual and guide, volume 9700. Argonne National Laboratory Argonne, IL, 1994.
  • [28] Drew McDermott. Nonmonotonic logic ii: Nonmonotonic modal theories. Journal of the ACM (JACM), 29(1):33–57, 1982.
  • [29] George A Miller. Wordnet: a lexical database for English. Communications of the ACM, 38(11):39–41, 1995.
  • [30] Robert Nieuwenhuis, Albert Oliveras, and Cesare Tinelli. Abstract DPLL and abstract DPLL modulo theories. In Logic for Programming, Artificial Intelligence, and Reasoning, pages 36–50. Springer, 2005.
  • [31] Ian Niles and Adam Pease. Origins of the IEEE standard upper ontology. In Working Notes of the IJCAI-2001 Workshop on the IEEE Standard Upper Ontology. Citeseer, 2001.
  • [32] Donald Nute. Defeasible logic, handbook of logic in artificial intelligence and logic programming (vol. 3): nonmonotonic reasoning and uncertain reasoning, 1994.
  • [33] Voice of America (Organization). VOA Special English Word Book: A List of Words Used in Special English Programs on Radio, Television, and the Internet. Voice of America, 2007.
  • [34] Charles Kay Ogden. Basic English: A general introduction with rules and grammar, volume 29. Kegan Paul, Trench, Trubner, 1944.
  • [35] Monica Palmirani, Guido Governatori, Antonino Rotolo, Said Tabet, Harold Boley, and Adrian Paschke. Legalruleml: Xml-based rules and norms. In Rule-Based Modeling and Computing on the Semantic Web, pages 298–312. Springer, 2011.
  • [36] Adam Pease and John Li. Controlled English to logic translation. In Theory and Applications of Ontology: Computer Applications, pages 245–258. Springer, 2010.
  • [37] Teodor C Przymusinski. Well-founded and stationary models of logic programs. Annals of Mathematics and Artificial Intelligence, 12(3-4):141–187, 1994.
  • [38] Raymond Reiter. A logic for default reasoning. Artificial intelligence, 13(1):81–132, 1980.
  • [39] Konstantinos Sagonas, Terrance Swift, and David S. Warren. XSB as an efficient deductive database engine. In In Proceedings of the ACM SIGMOD International Conference on the Management of Data, pages 442–453. ACM Press, 1994.
  • [40] Rolf Schwitter. English as a formal specification language. In Database and Expert Systems Applications, 2002. Proceedings. 13th International Workshop on Natural Language and Information Systems, pages 228–232. IEEE, 2002.
  • [41] Rolf Schwitter. Incremental chart parsing with predictive hints. In Proceedings of the Australasian Language Technology Workshop, pages 1–8, 2003.
  • [42] Rolf Schwitter. Controlled natural languages for knowledge representation. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 1113–1121. Association for Computational Linguistics, 2010.
  • [43] Rolf Schwitter. Answer set programming via controlled natural language processing. In Controlled Natural Language, pages 26–43. Springer, 2012.
  • [44] Rolf Schwitter. The jobs puzzle: Taking on the challenge via controlled natural language processing. Theory and Practice of Logic Programming, 13(4-5):487–501, 2013.
  • [45] Rolf Schwitter. Working with defaults in a controlled natural language. In Australasian Language Technology Association Workshop 2013, page 106, 2013.
  • [46] Rolf Schwitter, Anna Ljungberg, and David Hood. Ecole–a look-ahead editor for a controlled language. EAMT-CLAW03, pages 141–150, 2003.
  • [47] John Andrew Simpson, Edmund SC Weiner, et al. The Oxford English dictionary, volume 2. Clarendon Press Oxford, 1989.
  • [48] John F Sowa. Common logic controlled English (draft), 2004.
  • [49] Kerry Trentelman. Processable English: The theory behind the PENG system. 2009.
  • [50] Jacques Wainer. Modeling generalized implicatures using non-monotonic logics. Journal of logic, language and information, 16(2):195–216, 2007.
  • [51] Hui Wan, Benjamin Grosof, Michael Kifer, Paul Fodor, and Senlin Liang. Logic programming with defaults and argumentation theories. In Logic Programming, pages 432–448. Springer, 2009.

Appendix A Appendix

In this section, we introduce three nonmonotonic reasoning frameworks: circumscription, answer set programming and logic programming with defaults and argumentation theories.

a.1 Circumscription

Circumscription was first proposed by McCarthy [25] and later enhanced by Lifschitz [22] with some extensions to the original approach. Circumscription has the same syntax as classical logic but with different semantics. Unlike classical logic, circumscription only considers the preferred models that have the minimal extensions for certain predicates. For instance, the sentence, “Eagles are birds. Normally, birds fly,” is represented as

∀x (eagle(x) → bird(x)) ∧ ∀x (bird(x) ∧ ¬ab(x) → fly(x))

where ab denotes abnormality. Intuitively, circumscription attempts to minimize the set of objects that are considered abnormal and therefore deems an object abnormal only when necessary. For the above formula, since there is no information on the abnormality of eagles, we can assume that eagles are not abnormal and therefore conclude that eagles fly. Formally, let A(P) be a first-order sentence containing the predicate P. The circumscription of P in A, denoted CIRC[A; P], is the second-order formula:

A(P) ∧ ¬∃p (A(p) ∧ p < P)

where p is a predicate variable that has the same arity as P and p < P indicates that the extension of p is a strict subset of the extension of P. The formula says that there does not exist any predicate p such that i) p satisfies all of the conditions imposed on P in A and ii) p has a smaller extension than P. For the semantics of circumscription, a model M is a submodel of N with respect to P, denoted M ≤_P N, if i) M and N have the same domain, ii) the extension of P in M is a subset of its extension in N, and iii) all other predicate and function symbols that occur in A have the same extensions in M and N. A model M of A is minimal with respect to P if and only if for any model N of A such that N ≤_P M, it holds that M ≤_P N. Circumscription leads to a nonmonotonic inference relation: A ⊨_P B iff CIRC[A; P] ⊨ B. That is, formula B can be inferred from A if and only if B is entailed by CIRC[A; P]. Extensions of circumscription include parallel circumscription, prioritized circumscription, etc. Details can be found in [22].
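The eagle example can be checked by brute force: enumerate the propositional models of the theory and keep those with a minimal extension of ab. The sketch below is our own illustration over a single object e, with fly allowed to vary while ab is minimized:

```python
# Propositional circumscription sketch for the eagle example:
#   bird(e) ∧ (bird(e) ∧ ¬ab(e) → fly(e)),  minimizing ab.
from itertools import product

atoms = ["bird", "ab", "fly"]

def satisfies(m):
    # bird(e) ∧ (¬bird(e) ∨ ab(e) ∨ fly(e))
    return m["bird"] and (not m["bird"] or m["ab"] or m["fly"])

models = [dict(zip(atoms, vals))
          for vals in product([False, True], repeat=len(atoms))
          if satisfies(dict(zip(atoms, vals)))]

# circumscription keeps only the models with a minimal ab-extension
min_ab = min(m["ab"] for m in models)
min_models = [m for m in models if m["ab"] == min_ab]

print(min_models)                          # [{'bird': True, 'ab': False, 'fly': True}]
print(all(m["fly"] for m in min_models))   # True: eagles fly
```

Since fly(e) holds in every minimal model, the circumscribed theory nonmonotonically entails that eagles fly, as the text argues.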

a.2 Answer Set Programming

Answer set programming is based on stable model semantics [15]. In normal programs, a rule is of the form

a :- b_1, …, b_m, not c_1, …, not c_n.

where a, the b_i (1 ≤ i ≤ m), and the c_j (1 ≤ j ≤ n) are atoms. Given an interpretation I of a normal program Π, the reduct of the program with respect to I, denoted Π^I, is obtained by i) removing every rule that contains a negative literal not c with c ∈ I, and ii) removing the remaining negative literals from the remaining rules. The interpretation I is a stable model of Π if I = lm(Π^I), where lm(Π^I) stands for the least model of Π^I. Stable models are not necessarily unique: a normal logic program can have zero, one, or multiple stable models. For example, the program {p :- not q. q :- not p.} has two stable models, {p} and {q}, whereas the program {p :- not p.} has no stable model.
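The Gelfond–Lifschitz construction can be implemented directly for small ground programs. The following Python sketch is our own illustration, with rules represented as (head, positive body, negative body) triples; it enumerates candidate interpretations and keeps those that equal the least model of their reduct:

```python
# Brute-force stable model computation via the Gelfond–Lifschitz reduct.
from itertools import chain, combinations

def reduct(program, I):
    out = []
    for head, pos, neg in program:
        if not (set(neg) & I):            # drop rules with "not c" where c ∈ I
            out.append((head, pos))       # drop remaining negative literals
    return out

def least_model(definite):
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def stable_models(program):
    atoms = {h for h, _, _ in program}
    candidates = chain.from_iterable(
        combinations(sorted(atoms), r) for r in range(len(atoms) + 1))
    return [set(c) for c in candidates
            if least_model(reduct(program, set(c))) == set(c)]

# p :- not q.   q :- not p.   -> two stable models
print(stable_models([("p", [], ["q"]), ("q", [], ["p"])]))   # [{'p'}, {'q'}]
# p :- not p.                 -> no stable model
print(stable_models([("p", [], ["p"])]))                     # []
```

The two test programs reproduce the zero- and two-model cases mentioned above.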

The formal language used in the answer set programming paradigm is called AnsProlog, which extends normal programs with i) constraints, ii) strong negation, and iii) disjunction. An ASP rule is formally represented as

l_1 ∨ … ∨ l_k :- l_{k+1}, …, l_m, not l_{m+1}, …, not l_n.

where the l_i (1 ≤ i ≤ n) are literals, not denotes negation as failure, and the rule head is a disjunction of literals. An ASP rule is called a constraint if the rule head is empty. An example is shown below:

:- l_1, …, l_m, not l_{m+1}, …, not l_n.

The constraint forces its body not to be satisfied in any model (answer set) of the program.

For the semantics of ASP, first let Π be an ASP program whose rule bodies contain no negative literals and let S be a set of ground literals. S is an answer set for Π if i) S satisfies all the rules in Π and ii) S is minimal. Next, let Π be an ASP program whose rule bodies may contain negative literals. S is an answer set for Π if S is an answer set of Π^S, the reduct of Π with respect to S. Given the definition of answer sets, an ASP program Π entails a ground literal l if l is in every answer set of Π. For any query of the form l_1 ∧ … ∧ l_n, where each l_i (1 ≤ i ≤ n) is a literal, the program gives one of the following answers:

  • yes, if Π entails every l_i

  • no, if there exists some l_i such that Π entails the complement of l_i

  • unknown, otherwise

Answer set programming can represent defaults and exceptions. The general form of a default is

p(X) :- c(X), not ab(d(X)), not -p(X).

where d stands for default, ab represents abnormality, not refers to negation as failure, and - denotes strong negation. The formula says that “Normally, elements of class c have property p,” or: X has property p if 1) X is an element of class c, 2) X is not abnormal with respect to the default d, and 3) it cannot be shown that X does not have property p. A default can have two types of exceptions: strong exceptions and weak exceptions. A strong exception refutes the default conclusion and derives the opposite one. A weak exception stops the application of the default without defeating the default conclusion. Let e(X) denote that X is a strong exception to the default. It is represented as

-p(X) :- e(X).

where -p(X) denotes the opposite of the default conclusion. A weak exception is denoted by a cancellation axiom. Let w(X) denote that X is a weak exception; it is represented as

ab(d(X)) :- not -w(X).

The formula says that X is abnormal with respect to the default d if there is no evidence that X is not a weak exception.

a.3 Logic Programming with Defaults and Argumentation Theories

Logic programming with defaults and argumentation theories (LPDA) is a unifying defeasible reasoning framework that handles defaults and exceptions with prioritized rules and argumentation theories. It is based on the well-founded semantics [37], a three-valued semantics with truth values true, false, and undefined. A literal has one of the following forms:

  • An atomic formula.

  • neg A, where A is an atomic formula.

  • not A, where A is an atom.

  • not neg A, where A is an atom.

  • not not L and neg neg L, where L is a literal.

Let A be an atom. A not-free literal is a literal that can be reduced to A or neg A. A not-literal is a literal that can be reduced to not A or not neg A. LPDA has two types of rules: definite rules and defeasible rules. A definite rule is of the form:

L :- Body

where L is a not-free literal and Body is a conjunction of literals. A defeasible rule is of the form:

@r L :- Body

where r is a term that denotes the label of the rule.

Each LPDA program is accompanied by an argumentation theory that decides when a defeasible rule is defeated. An argumentation theory is a set of definite rules with four special predicates: defeated, opposes, overrides, and cancel, where defeated denotes the defeatedness of a defeasible rule, opposes indicates literals that are incompatible with each other, overrides denotes a binary priority relation between defeasible rules, and cancel cancels a defeasible rule. There are multiple predefined argumentation theories. Users can select one of them or modify an existing one. A rule is defeated if it is refuted, rebutted, or disqualified. The precise meaning of refuted, rebutted, and disqualified depends on the argumentation theory. Generally, a rule is refuted if another rule with higher priority draws an incompatible conclusion. A rule is rebutted if another rule draws an incompatible conclusion and no priority information is available. A rule is disqualified if it is cancelled, self-defeated, etc.
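The refuted/rebutted distinction can be sketched in a few lines of Python. This is an illustrative simplification, not an LPDA argumentation theory: it assumes every rule fires, represents a rule as a `(label, conclusion)` pair, and encodes `opposes` and `overrides` as plain relations over strings.

```python
# Hypothetical two-rule conflict: r1 concludes p, r2 concludes neg_p,
# and r2 is declared to override r1.
opposes = {("p", "neg_p")}      # pairs of incompatible conclusions
overrides = {("r2", "r1")}      # (higher, lower) priority pairs

rules = [("r1", "p"), ("r2", "neg_p")]

def incompatible(a, b):
    return (a, b) in opposes or (b, a) in opposes

def status(rule):
    label, concl = rule
    for other_label, other_concl in rules:
        if other_label != label and incompatible(concl, other_concl):
            if (other_label, label) in overrides:
                return "refuted"   # opposed by a higher-priority rule
            if (label, other_label) not in overrides:
                return "rebutted"  # opposed, no priority either way
    return "not defeated"

print({label: status((label, concl)) for label, concl in rules})
# → {'r1': 'refuted', 'r2': 'not defeated'}
```

With the `overrides` fact removed, both rules would rebut each other and neither conclusion would be derived, which mirrors how an LPDA argumentation theory leaves conflicting, unprioritized conclusions undefined.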

Given an LPDA program P and the argumentation theory AT, the well-founded model of P, denoted WFM(P), is defined as the limit of the following transfinite induction, where I₀ = ∅ and C is the consequence operator of P ∪ AT:

  • Iₐ = C(Iₐ₋₁), where a is a non-limit ordinal

  • Iₐ = ⋃_{b<a} I_b, where a is a limit ordinal

P can be reduced to a normal logic program P′ such that P′ has the same well-founded model as P. The reduction is done by changing every defeasible rule @r L :- Body to a definite rule of the form:

  L :- Body, not defeated(r, L).

where the term (r, L), consisting of the rule label and its conclusion, is called the handle of the rule.
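This reduction is a purely syntactic rewriting, which the following Python sketch makes concrete. The dictionary representation of rules is our own assumption for illustration; the body literals are kept as plain strings.

```python
# Reduction sketch: a defeasible rule  @r L :- Body  becomes the
# definite rule  L :- Body, not defeated(r, L).
def reduce_rule(rule):
    if rule.get("label") is None:      # definite rules pass through
        return rule
    label, head = rule["label"], rule["head"]
    return {"label": None,             # result is a definite rule
            "head": head,
            "body": rule["body"] + [f"not defeated({label}, {head})"]}

# Hypothetical defeasible rule: @r1 flies(X) :- bird(X), not ab(r1, X).
defeasible = {"label": "r1", "head": "flies(X)",
              "body": ["bird(X)", "not ab(r1, X)"]}
print(reduce_rule(defeasible))
```

The added literal `not defeated(r1, flies(X))` is what ties the rewritten rule to the argumentation theory: the rule's conclusion is derivable only while the theory does not derive that the handle is defeated.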

Appendix B Appendix

In this section, we show the ASP programs for the marathon puzzle and the jobs puzzle.

B.1 The Marathon Puzzle

  runner(dominique).
  runner(ignace).
  runner(naren).
  runner(olivier).
  runner(pascal).
  runner(philippe).
  position(1..6).
  1 { allocated_to(A,B) : position(B) } 1 :- runner(A).
  :- runner(C), position(D), allocated_to(C,D), runner(E), allocated_to(E,D), C != E.
  :- allocated_to(olivier,6).
  before(F,G) :- runner(F), position(H), allocated_to(F,H), runner(G), position(I), allocated_to(G,I), H < I.
  :- before(naren,dominique).
  :- before(naren,pascal).
  :- before(naren,ignace).
  :- before(olivier,dominique).
  :- before(olivier,pascal).
  :- before(olivier,ignace).
  :- position(J), J >= 3, allocated_to(dominique,J).
  :- position(K), K > 4, allocated_to(philippe,K).
  :- allocated_to(ignace,2).
  :- allocated_to(ignace,3).
  :- position(L), allocated_to(pascal,L), position(M), allocated_to(naren,M), L != M - 3.
  :- allocated_to(ignace,4).
  :- allocated_to(dominique,4).
  answer(N,O) :- runner(N), position(O), allocated_to(N,O).
  #hide. #show answer/2.

B.2 The Jobs Puzzle

  person(roberta). person(thelma). person(steve). person(pete).
  female(roberta). female(thelma).
  male(steve). male(pete).
  :- person(X), male(X), female(X).
  1 { hold(X,Y) : person(X) } 1 :- job(Y).
  2 { hold(X,Y) : job(Y) } 2 :- person(X).
  job(chef). job(guard). job(nurse). job(operator). job(police). job(teacher). job(actor). job(boxer).
  male(X) :- person(X), job(nurse), hold(X,nurse).
  male(X) :- person(X), job(actor), hold(X,actor).
  husband(Y,X) :- person(X), job(chef), hold(X,chef), person(Y), job(operator), hold(Y,operator).
  male(X) :- person(X), person(Y), husband(X,Y).
  female(Y) :- person(X), person(Y), husband(X,Y).
  :- job(boxer), hold(roberta,boxer).
  :- educated(pete).
  educated(X) :- person(X), job(nurse), hold(X,nurse).
  educated(X) :- person(X), job(police), hold(X,police).
  educated(X) :- person(X), job(teacher), hold(X,teacher).
  :- job(chef), hold(roberta,chef).
  :- job(police), hold(roberta,police).
  :- person(X), job(chef), hold(X,chef), job(police), hold(X,police).
  answer(hold(X,Y)) :- job(Y), hold(X,Y).