Characterizing and Extending Answer Set Semantics using Possibility Theory

Answer Set Programming (ASP) is a popular framework for modeling combinatorial problems. However, ASP cannot easily be used for reasoning about uncertain information. Possibilistic ASP (PASP) is an extension of ASP that combines possibilistic logic and ASP. In PASP a weight is associated with each rule, where this weight is interpreted as the certainty with which the conclusion can be established when the body is known to hold. As such, it allows us to model and reason about uncertain information in an intuitive way. In this paper we present new semantics for PASP, in which rules are interpreted as constraints on possibility distributions. Special models of these constraints are then identified as possibilistic answer sets. In addition, since ASP is a special case of PASP in which all the rules are entirely certain, we obtain a new characterization of ASP in terms of constraints on possibility distributions. This allows us to uncover a new form of disjunction, called weak disjunction, that has not been previously considered in the literature. In addition to introducing and motivating the semantics of weak disjunction, we also pinpoint its computational complexity. In particular, while the complexity of most reasoning tasks coincides with standard disjunctive ASP, we find that brave reasoning for programs with weak disjunctions is easier.

1 Introduction

Answer set programming (ASP) is a form of logic programming with a fully declarative semantics, centered around the notion of a stable model. Syntactically, an ASP program is a set of rules of the form

$$l_0 \leftarrow l_1, \ldots, l_m, \textit{not}\; l_{m+1}, \ldots, \textit{not}\; l_n$$

where the head $l_0$ is true whenever the body $l_1, \ldots, l_m, \textit{not}\; l_{m+1}, \ldots, \textit{not}\; l_n$ is true. Possibilistic ASP (PASP) extends ASP by associating a weight with every rule, which is interpreted as the necessity with which we can derive the head of the rule when the body is known to hold. Semantics for PASP have been introduced in [Nicolas et al. (2006)] for possibilistic normal programs and later extended to possibilistic disjunctive programs in [Nieves et al. (2013)]. Under these semantics, a possibilistic rule with certainty $c$ allows us to derive its head with certainty $\min(c, \lambda)$, where $\lambda$ denotes the necessity of the body, i.e. the certainty of the conclusion is restricted by the least certain piece of information in the derivation chain. Specifically, to deal with PASP rules without negation-as-failure, the semantics from [Nicolas et al. (2006)] treat such rules as implications in possibilistic logic [Dubois et al. (1994)]. When faced with negation-as-failure, the semantics from [Nicolas et al. (2006)] rely on the reduct operation from classical ASP. Essentially, this means that the weights associated with the rules are initially ignored, the classical reduct is determined and the weights are then reassociated with the corresponding rules in the reduct. Given this particular treatment of negation-as-failure, the underlying intuition of '$\textit{not}\; l$' is "'$l$' cannot be derived with a strictly positive certainty". Indeed, as soon as '$l$' can be derived with some certainty $\lambda > 0$, '$l$' is treated as true when determining the reduct. However, this particular understanding of negation-as-failure is not always the most intuitive one.

Consider the following example. You want to go to the airport, but you notice that your passport will expire in less than three months. Some countries require that the passport is still valid for an additional three months on the date of entry. As such, you have some certainty that your passport might be invalid. When you are not entirely certain that your passport is invalid, you should still go to the airport and check in nonetheless. Indeed, since you are not absolutely certain that you will not be allowed to board, you might still get lucky. We have the possibilistic program:

0.1:
1:

where 0.1 and 1 are the weights associated with the first and the second rule, respectively. Clearly, what we would like to be able to conclude with a high certainty is that you need to go to the airport to check in. However, as the semantics from [Nicolas et al. (2006)] adhere to a different intuition of negation-as-failure, the conclusion is that you need to go to the airport with a necessity of 0. Or, in other words, you should not go to the airport at all.

As a first contribution in this paper, we present new semantics for PASP by interpreting possibilistic rules as constraints on possibility distributions. These semantics do not correspond with the semantics from [Nicolas et al. (2006)] when considering programs with negation-as-failure. Specifically, the semantics presented in this paper can be used in settings in which the possibilistic answer sets according to [Nicolas et al. (2006)] do not correspond with the intuitively acceptable results. For the example mentioned above, the conclusion under the new semantics is that you need to go to the airport with a high necessity.

In addition, the new semantics allow us to uncover a new characterization of ASP in terms of possibility theory. Over the years, many equivalent approaches have been proposed to define the notion of an answer set. One of the most popular characterizations is in terms of the Gelfond-Lifschitz reduct [Gelfond and Lifschitz (1988)], in which an answer set is guessed and verified to be stable. This characterization is used in the semantics for PASP as presented in [Nicolas et al. (2006)]. Alternatively, the answer set semantics of normal programs can be defined in terms of autoepistemic logic [Marek and Truszczyński (1991)], a well-known non-monotonic modal logic. An important advantage of the latter approach is that autoepistemic logic enjoys more syntactic freedom, which opens the door to more expressive forms of logic programming. However, as has been shown early on in [Lifschitz and Schwarz (1993)], the characterization in terms of autoepistemic logic does not allow us to treat classical negation or disjunctive rules in a natural way, which weakens its position as a candidate for generalizing ASP from normal programs to e.g. disjunctive programs. Equilibrium logic [Pearce (1997)] offers yet another way of characterizing and extending ASP, but it does not feature modalities, which limits its potential for epistemic reasoning, as it does not allow us to reason over the established knowledge of an agent. The new characterization of ASP, as presented in this paper, is a characterization in terms of necessary and contingent truths, where possibility theory is used to express our certainty in logical propositions. Such a characterization is unearthed by looking at ASP as a special case of PASP in which the rules are certain and no uncertainty is allowed in the answer sets. It highlights the intuition of ASP that the head of a rule is certain when the information encoded in its body is certain. Furthermore, this characterization stays close to the intuition of the Gelfond-Lifschitz reduct, while sharing the explicit reference to modalities with autoepistemic logic.

As a second contribution, we show in this paper how this new characterization of ASP in terms of possibility theory can be used to uncover a new form of disjunction in both ASP and PASP. As indicated, the new semantics offer us an explicit reference to modalities, i.e. operators with which we can qualify a statement. Epistemic logic is an example of a modal logic in which we use the modal operator $\mathbf{K}$ to reason about knowledge, where $\mathbf{K}p$ is intuitively understood as "we know that $p$". A statement such as $p \lor q$ can then be treated in two distinct ways. On the one hand, we can interpret this statement as $\mathbf{K}p \lor \mathbf{K}q$, which makes it explicit that we know that one of the disjuncts is true. This treatment corresponds with the understanding of disjunction in disjunctive ASP and will be referred to as strong disjunction. Alternatively, we can interpret $p \lor q$ as $\mathbf{K}(p \lor q)$, which only states that we know that the disjunction is true, i.e. we do not know which of the disjuncts is true. We will refer to this form of disjunction as weak disjunction. This is the new form of disjunction that we will discuss in this paper, as it allows us to reason in settings where a choice cannot or should not be made. Still, such a framework allows for non-trivial forms of reasoning.
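To make the contrast concrete, the following sketch juxtaposes the two readings; the symbols $\mathbf{K}$, $p$ and $q$ and the possibilistic rendering in terms of the necessity measure $N$ (introduced in Section 2.2) are ours and purely illustrative:

\[
  \underbrace{\mathbf{K}p \lor \mathbf{K}q}_{\text{strong disjunction}}
  \qquad\text{vs.}\qquad
  \underbrace{\mathbf{K}(p \lor q)}_{\text{weak disjunction}}
\]

In possibilistic terms, the strong reading roughly requires that some disjunct is itself necessary, e.g. $\max(N(p), N(q)) = 1$, whereas the weak reading only requires $N(p \lor q) = 1$; since necessity measures are not max-decomposable w.r.t. disjunction, the latter does not force a choice between $p$ and $q$.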

Consider the following example. A SCADA (supervisory control and data acquisition) system is used to monitor the brewing of beer in an industrialised setting. To control the fermentation, the system regularly verifies an air-lock for the presence of bubbles. An absence of bubbles may be due to a number of possible causes. On the one hand there may be a production problem, such as a low yeast count or a low temperature. Adding yeast when the temperature is low results in a beer with a strong yeast flavour, which should be avoided. Raising the temperature when there is too little yeast present will kill off the remaining yeast and will ruin the entire batch. On the other hand, there may be technical problems. There may be a malfunction in the SCADA system, which can be verified by running a diagnostic. The operator runs a diagnostic, which reports back that there is no malfunction. Or, alternatively, the air-lock may not be sealed correctly. The operator furthermore checks the temperature because he suspects that the temperature is the problem, but the defective temperature sensor returns no temperature when checked. These three technical problems require physical maintenance and the operator should send someone out to fix them. Technical problems do not affect the brewing. As such, the brewing process should not be interrupted for such problems, as this will ruin the current batch. If there is a production problem, however, the brewing process needs to be interrupted as soon as possible (in addition, evidently, to interrupting the brewing process when the brewing is done). This prevents the current batch from being ruined due to over-brewing, but also allows interaction with the contents of the kettle. In particular, when the problem is diagnosed to be low yeast, the solution is to add a new batch of yeast and restart the process. Similarly, low temperature can be solved by raising the kettle temperature and restarting the fermentation process. Obviously, the goal is to avoid ruining the current batch. An employee radios in that the seal is okay. We have the following program:

The program above does not use the standard ASP syntax, since we allow for disjunction in the body. Furthermore, the disjunction used in the head and the body is weak disjunction. The only information that we can therefore deduce from e.g. the first rule is that its head disjunction holds. At first, this new form of disjunction may indeed appear weaker than strong disjunction since it does not induce a choice. Still, even without inducing a choice, conclusions obtained from other rules may allow us to refine our knowledge. In particular, note that from together with and we can entail . Similarly, conclusions can also have prerequisites that are disjunctions. For example, we can no longer deduce since entails . From these conclusions we can deduce that we should call maintenance. However, we do not yet have enough information to diagnose whether yeast should be added or whether the temperature should be raised. The unique answer set of this program, according to the semantics of weak disjunction which we present in Section 4, is given by

The expressiveness of weak disjunction becomes clear when we study its complexity. In particular, we show that while most complexity results coincide with the strong disjunctive semantics, the complexity of brave reasoning (deciding whether a literal '$l$' is entailed by a consistent answer set of a program $P$) in the absence of negation-as-failure is lower for weak disjunction. Still, the expressiveness is higher than for normal programs. The complexity results are summarized in Table 1 in Section 5.

The remainder of this paper is organized as follows. In Section 2 we provide the reader with some important notions from answer set programming and possibilistic logic. In Section 3 we introduce new semantics for PASP which can furthermore be used to characterize normal ASP programs using possibility theory. In Section 4 we characterize disjunctive ASP in terms of constraints on possibility distributions and we discuss the complexity results of the new semantics for PASP in detail in Section 5. Related work is discussed in Section 6 and we formulate our conclusions in Section 7.

This paper aggregates and extends parts of our work from [Bauters et al. (2011)] and substantially extends a previous conference paper [Bauters et al. (2010)], which considered neither classical negation nor computational complexity. In addition, rather than limiting ourselves to atoms, we extend our work to cover the case of literals, which offers interesting and unexpected results in the face of weak disjunction. Complexity results are added for all reasoning tasks and full proofs are provided in the appendix.

2 Background

We start by reviewing the definitions from both answer set programming and possibilistic logic that will be used in the remainder of the paper. We then review the semantics of PASP from [Nicolas et al. (2006)], a framework that combines possibilistic logic and ASP. Finally, we recall some notions from complexity theory.

2.1 Answer Set Programming

To define ASP programs, we start from a finite set of atoms. A literal is defined as an atom $a$ or its classical negation $\neg a$. For a set of literals $L$, we use $\neg L$ to denote the set $\{\neg l \mid l \in L\}$ where, by definition, $\neg\neg a = a$. A set of literals $L$ is consistent if $L \cap \neg L = \emptyset$. We write the set of all literals as $\mathcal{L}$. A naf-literal is either a literal '$l$' or a literal '$l$' preceded by $\textit{not}$, which we call the negation-as-failure operator. Intuitively, '$\textit{not}\; l$' is true when we cannot prove '$l$'. An expression of the form

$$l_0 \lor \ldots \lor l_k \leftarrow l_{k+1}, \ldots, l_m, \textit{not}\; l_{m+1}, \ldots, \textit{not}\; l_n$$

with $l_i$ a literal for every $0 \leq i \leq n$, is called a disjunctive rule. We call $l_0 \lor \ldots \lor l_k$ the head of the rule (interpreted as a disjunction) and $l_{k+1}, \ldots, l_m, \textit{not}\; l_{m+1}, \ldots, \textit{not}\; l_n$ the body of the rule (interpreted as a conjunction). For a rule $r$ we use $\mathit{head}(r)$ and $\mathit{body}(r)$ to denote the set of literals in the head, resp. the body. Specifically, we use $\mathit{body}^{+}(r)$ to denote the set of literals in the body that are not preceded by the negation-as-failure operator '$\textit{not}$' and $\mathit{body}^{-}(r)$ for those literals that are preceded by '$\textit{not}$'. Whenever a disjunctive rule does not contain negation-as-failure, i.e. when $\mathit{body}^{-}(r) = \emptyset$, we say that it is a positive disjunctive rule. A rule with an empty body, i.e. a rule of the form $(l_0 \lor \ldots \lor l_k \leftarrow)$, is called a fact and is used as a shorthand for $(l_0 \lor \ldots \lor l_k \leftarrow \top)$ with $\top$ a special language construct that denotes tautology. A rule with an empty head, i.e. a rule of the form $(\leftarrow l_1, \ldots, l_m, \textit{not}\; l_{m+1}, \ldots, \textit{not}\; l_n)$, is called a constraint rule and is used as a shorthand for the rule of the form $(\bot \leftarrow l_1, \ldots, l_m, \textit{not}\; l_{m+1}, \ldots, \textit{not}\; l_n)$ with $\bot$ a special language construct that denotes contradiction.

A (positive) disjunctive program is a set of (positive) disjunctive rules. A normal rule is a disjunctive rule with at most one literal in the head. A simple rule is a normal rule with no negation-as-failure. A definite rule is a simple rule with no classical negation, i.e. in which all literals are atoms. A normal (resp. simple, definite) program is a set of normal (resp. simple, definite) rules.

The Herbrand base of a disjunctive program is the set of atoms appearing in . We define the set of literals that are relevant for a disjunctive program as . An interpretation of a disjunctive program is any set of literals . A consistent interpretation is an interpretation that does not contain both and for some .

A consistent interpretation is said to be a model of a positive disjunctive rule if or , i.e. the body is false or the head is true. In particular, a consistent interpretation is a model of a constraint rule if . If for an interpretation and a constraint rule we have that , then we say that the interpretation violates the constraint rule . Notice that for a fact rule we require that , i.e. at least one of the literals in the head must be true. Indeed, otherwise would not be a model of . An interpretation of a positive disjunctive program is a model of either if is consistent and for every rule we have that is a model of , or if . It follows from this definition that is always a model of , and that all other models of (if any) are consistent interpretations, which we will further on also refer to as consistent models. We say that is an answer set of the positive disjunctive program when  is a minimal model of w.r.t. set inclusion.

The semantics of an ASP program with negation-as-failure is based on the idea of a stable model [Gelfond and Lifschitz (1988)]. The reduct $P^I$ of a disjunctive program $P$ w.r.t. the interpretation $I$ is defined as:

$$P^I = \{\, \mathit{head}(r) \leftarrow \mathit{body}^{+}(r) \mid r \in P,\; \mathit{body}^{-}(r) \cap I = \emptyset \,\}$$

An interpretation is said to be an answer set of the disjunctive program when is an answer set of the positive disjunctive program (hence the notion of stable model). Note that we can also write the disjunctive program as where is the set of constraint rules in . An interpretation then is an answer set of the disjunctive program when is an answer set of and is a model of , i.e.  does not violate any constraints in . Whenever has consistent answer sets, i.e. answer sets that are consistent interpretations, we say that is a consistent program. When has the answer set , then this is the unique [Baral (2003)] inconsistent answer set and we say that is an inconsistent program.
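As a concrete illustration of the reduct just described, the following is a minimal Python sketch; the rule representation (triples of frozensets of literal strings) is an assumption made purely for illustration and is not the paper's notation.

```python
# Minimal sketch of the Gelfond-Lifschitz reduct described above.
# A rule is a tuple (head, pos_body, naf_body), each a frozenset of literal
# strings such as "a" or "-a"; this encoding is an illustrative assumption.

def reduct(program, interpretation):
    """Return the positive program P^I: drop rules with a naf-literal in I,
    and strip the naf part from the remaining rules."""
    reduced = []
    for head, pos_body, naf_body in program:
        if naf_body & interpretation:
            continue                      # rule is blocked by the guess I
        reduced.append((head, pos_body))  # keep the rule without its naf part
    return reduced

# Example: the program {a <- not b} w.r.t. the guess {"a"} keeps the rule "a <-".
print(reduct([(frozenset({"a"}), frozenset(), frozenset({"b"}))], {"a"}))
```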

Answer sets of simple programs can also be defined in a more procedural way, by using the immediate consequence operator $T_P$, which is defined for a simple program $P$ without constraint rules and w.r.t. an interpretation $I$ as:

$$T_P(I) = \{\, l_0 \mid (l_0 \leftarrow l_1, \ldots, l_m) \in P,\; \{l_1, \ldots, l_m\} \subseteq I \,\}$$

We use to denote the fixpoint which is obtained by repeatedly applying starting from the empty interpretation , i.e. it is the least fixpoint of w.r.t. set inclusion. When the interpretation is consistent, is the (unique and consistent) answer set of the simple program P without constraint rules. When we allow constraint rules, an interpretation is a (consistent) answer set of iff is a (consistent) answer set of and is a model of . For both simple and normal programs, with or without constraint rules, we have that is the (unique and inconsistent) answer set of if has no consistent answer set(s).
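The fixpoint construction can be sketched as follows in Python; once more, the rule encoding is an illustrative assumption rather than the paper's notation.

```python
# Sketch of the immediate consequence operator T_P for simple programs
# (rules without naf and without constraint rules), and its least fixpoint.
# Rules are (head_literal, body_literals) pairs; names are illustrative.

def t_p(program, interpretation):
    """One application of T_P: heads of rules whose body is satisfied by I."""
    return {head for head, body in program if body <= interpretation}

def least_fixpoint(program):
    """Iterate T_P from the empty interpretation until nothing new is derived."""
    current = set()
    while True:
        nxt = current | t_p(program, current)
        if nxt == current:
            return current
        current = nxt

# Example: {a <-.  b <- a.} yields {"a", "b"}.
print(least_fixpoint([("a", frozenset()), ("b", frozenset({"a"}))]))
```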

2.2 Possibilistic Logic

An interpretation in possibilistic logic corresponds with the notion of an interpretation in propositional logic. We represent such an interpretation as a set of atoms $\omega$, where $\omega \models a$ if $a \in \omega$ and $\omega \models \neg a$ otherwise, with $\models$ the satisfaction relation from classical logic. The set of all interpretations is defined as $\Omega = 2^{\mathit{At}}$, with $\mathit{At}$ a finite set of atoms. At the semantic level, possibilistic logic [Dubois et al. (1994)] is defined in terms of a possibility distribution on the universe of interpretations. A possibility distribution, which is a mapping $\pi : \Omega \to [0, 1]$, encodes for each interpretation (or world) $\omega$ to what extent it is plausible that $\omega$ is the actual world. By convention, $\pi(\omega) = 0$ means that $\omega$ is impossible and $\pi(\omega) = 1$ means that no available information prevents $\omega$ from being the actual world. A possibility distribution is said to be normalized if $\max_{\omega \in \Omega} \pi(\omega) = 1$, i.e. at least one interpretation is entirely plausible. We say that a possibility distribution is vacuous when it maps every interpretation to 0. Note that possibility degrees are mainly interpreted qualitatively: when $\pi(\omega_1) > \pi(\omega_2)$, $\omega_1$ is considered more plausible than $\omega_2$. For two possibility distributions $\pi_1$ and $\pi_2$ with the same domain we write $\pi_1 \leq \pi_2$ when $\pi_1(\omega) \leq \pi_2(\omega)$ for every $\omega$, and we write $\pi_1 < \pi_2$ when $\pi_1 \leq \pi_2$ and $\pi_1 \neq \pi_2$.

A possibility distribution induces two uncertainty measures that allow us to rank propositions. The possibility measure $\Pi$ is defined by [Dubois et al. (1994)]:

$$\Pi(p) = \max\{\, \pi(\omega) \mid \omega \in \Omega,\; \omega \models p \,\}$$

and evaluates the extent to which a proposition $p$ is consistent with the beliefs expressed by $\pi$. The dual necessity measure $N$ is defined by:

$$N(p) = 1 - \Pi(\neg p)$$

and evaluates the extent to which a proposition is entailed by the available beliefs [Dubois et al. (1994)]. Note that the duality $N(p) = 1 - \Pi(\neg p)$ holds for any possibility distribution, while $N(p) \leq \Pi(p)$ (and, related, $N(p) > 0$ implies $\Pi(p) = 1$) only holds when the possibility distribution is normalized (i.e. only normalized possibility distributions can express consistent beliefs) [Dubois et al. (1994)]. To identify the possibility/necessity measure associated with a specific possibility distribution $\pi$, we will use a subscript notation, i.e. $\Pi_\pi$ and $N_\pi$ are the corresponding possibility and necessity measure, respectively. We omit the subscript when the possibility distribution is clear from the context.

An important property of necessity measures is the min-decomposability property w.r.t. conjunction: $N(p \wedge q) = \min(N(p), N(q))$ for all propositions $p$ and $q$. However, for disjunction only the inequality $N(p \vee q) \geq \max(N(p), N(q))$ holds. As possibility measures are the dual measures of necessity measures, they have the property of max-decomposability w.r.t. disjunction, i.e. $\Pi(p \vee q) = \max(\Pi(p), \Pi(q))$, whereas for conjunction only the inequality $\Pi(p \wedge q) \leq \min(\Pi(p), \Pi(q))$ holds.
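The following Python sketch illustrates the two measures and min-decomposability on a toy distribution; the encoding of interpretations as frozensets of atoms and of propositions as sets of models is an assumption made for illustration.

```python
# Sketch of a possibility distribution over interpretations (sets of atoms),
# with the induced possibility and necessity measures.

def poss(pi, models):
    """Possibility of a proposition, given the set of interpretations where it holds."""
    return max((v for w, v in pi.items() if w in models), default=0.0)

def nec(pi, models):
    """Necessity = 1 - possibility of the complement."""
    complement = set(pi) - models
    return 1.0 - poss(pi, complement)

# A toy (normalized) distribution over the four interpretations built from a, b.
pi = {frozenset(): 0.2, frozenset({"a"}): 1.0,
      frozenset({"b"}): 0.4, frozenset({"a", "b"}): 0.7}

models_a = {w for w in pi if "a" in w}
models_b = {w for w in pi if "b" in w}
models_a_and_b = models_a & models_b

# Min-decomposability of N w.r.t. conjunction: N(a and b) = min(N(a), N(b)).
print(nec(pi, models_a_and_b), min(nec(pi, models_a), nec(pi, models_b)))
```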

At the syntactic level, a possibilistic knowledge base consists of pairs $(p, c)$ where $p$ is a propositional formula and $c \in (0, 1]$ expresses the certainty that $p$ is the case. Formulas of the form $(p, 0)$ are not explicitly represented in the knowledge base since they encode trivial information. A formula $(p, c)$ is interpreted as the constraint $N(p) \geq c$, i.e. a possibilistic knowledge base corresponds to a set of constraints on possibility distributions. Typically, there can be many possibility distributions that satisfy these constraints. In practice, we are usually only interested in the least specific possibility distribution, which is the possibility distribution that makes minimal commitments, i.e. the greatest possibility distribution w.r.t. the ordering $\leq$ defined above. Such a least specific possibility distribution always exists and is unique [Dubois et al. (1994)].

In Section 4 we will also consider constraints that deviate from the form of constraints we just discussed. As a result, there can be multiple minimally specific possibility distributions rather than a unique least specific possibility distribution. To increase the uniformity throughout the paper we immediately start using the concept of a minimally specific possibility distribution, which is a maximal possibility distribution w.r.t. the ordering , even though the distinction between the least specific possibility distribution and minimally specific possibility distributions only becomes relevant once we discuss the characterization of disjunctive programs.

2.3 Possibilistic Answer Set Programming

Possibilistic ASP (PASP) [Nicolas et al. (2006)] combines ASP and possibility theory by associating a weight with each rule, where the weight denotes the necessity with which the head of the rule can be concluded given that the body is known to hold. If it is uncertain whether the body holds, the necessity with which the head can be derived is the minimum of the weight associated with the rule and the degree to which the body is necessarily true.

Syntactically, a possibilistic disjunctive (resp. normal, simple, definite) program is a set of pairs $(r, c)$ with $r$ a disjunctive (resp. normal, simple, definite) rule and $c$ a certainty associated with $r$. Possibilistic rules with certainty 0 are generally omitted, as only trivial information can be derived from them. We will also write a possibilistic rule $(r, c)$, with $r$ a disjunctive rule, as:

For a possibilistic rule we use to denote , i.e. the classical rule obtained by ignoring the certainty. Similarly, for a possibilistic program we use to denote the set of rules . The set of all weights found in a possibilistic program is denoted by . We will also use the extended set of weights , defined as .

Semantically, PASP is based on a generalization of the concept of an interpretation. In classical ASP, an interpretation can be seen as a mapping from the set of literals to $\{0, 1\}$, i.e. a literal is either true or false. This notion is generalized in PASP to a valuation, which is a function $V$ from the set of literals to $[0, 1]$. The underlying intuition of $V(l) = c$ is that the literal '$l$' is true with certainty '$c$', which we will also write in set notation as $(l, c) \in V$. As such, a valuation corresponds with the set of constraints $\{N(l) \geq V(l) \mid l \in \mathcal{L}\}$. Note that, like interpretations in ASP, these valuations are of an epistemic nature, i.e. they reflect what we know about the truth of atoms. For notational convenience, we often also use the set notation in which only the literals with a strictly positive certainty are listed. In accordance with this set notation, the empty set denotes the valuation in which each literal is mapped to 0. For a certainty $c$ and a valuation $V$, we consider the classical projection $\{l \mid V(l) \geq c\}$, as well as the set $\{l \mid V(l) > c\}$ of those literals that can be derived to be true with certainty strictly greater than '$c$'. A valuation is said to be consistent when the set of literals with a strictly positive certainty is consistent. In such a case, there always exists a normalized possibility distribution $\pi$ such that $N_\pi(l) = V(l)$ for every literal $l$.

We now present a straightforward extension of the semantics for PASP introduced in [Nicolas et al. (2006)]. Let the -cut of a possibilistic program , with , be defined as:

i.e. the rules in with an associated certainty higher than or equal to ‘’.

Definition 1

Let be a possibilistic simple program and a valuation. The immediate consequence operator is defined as:

The intuition of Definition 1 is that we can derive the head only with the certainty of the weakest piece of information, i.e. the necessity of the conclusion is restricted either by the certainty of the rule itself or by the lowest certainty of the literals used in the body of the rule. Note that the immediate consequence operator defined in Definition 1 is equivalent to the one proposed in [Nicolas et al. (2006)], although we formulate it somewhat differently. Also, the work from [Nicolas et al. (2006)] only considered definite programs, even though adding classical negation does not pose any problems.
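A minimal Python sketch of this weakest-link intuition is given below; the rule encoding (weight, head, body) and the iteration scheme are illustrative assumptions, not the paper's definition verbatim.

```python
# Sketch of the possibilistic immediate consequence operator of Definition 1,
# following the stated intuition: a head is derived with the certainty of the
# weakest link, i.e. min(rule weight, certainties of the body literals).

def poss_t_p(program, valuation):
    """One step: for every rule, derive its head with min(weight, body certainties)."""
    new_val = dict(valuation)
    for weight, head, body in program:
        derived = min([weight] + [valuation.get(l, 0.0) for l in body])
        new_val[head] = max(new_val.get(head, 0.0), derived)
    return new_val

def poss_fixpoint(program):
    """Iterate from the valuation that maps every literal to 0."""
    current = {}
    while True:
        nxt = poss_t_p(program, current)
        if nxt == current:
            return current
        current = nxt

# Example: {0.8: a <-.  0.6: b <- a.} gives certainty 0.8 for a and 0.6 for b.
print(poss_fixpoint([(0.8, "a", ()), (0.6, "b", ("a",))]))
```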

As before, we use to denote the fixpoint obtained by repeatedly applying starting from the minimal valuation , i.e. the least fixpoint of w.r.t. set inclusion. A valuation is said to be the answer set of a possibilistic simple program if and is consistent. Answer sets of possibilistic normal programs are defined using a reduct. Let be a set of literals. The reduct of a possibilistic normal program is defined as [Nicolas et al. (2006)]:

A consistent valuation is said to be a possibilistic answer set of the possibilistic normal program iff , i.e. if is the answer set of the reduct .
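The following Python sketch illustrates this weight-preserving reduct; the rule encoding and the atom names invalid and airport (borrowed from the introduction's airport scenario) are assumptions made for illustration only.

```python
# Sketch of the weight-preserving reduct used by the semantics of
# [Nicolas et al. (2006)]: compute the classical reduct while keeping each
# rule's weight.  Rules are (weight, head, pos_body, naf_body) tuples.

def possibilistic_reduct(program, literals):
    """Drop rules with a naf-literal in `literals`; strip naf parts, keep weights."""
    return [(w, head, pos_body)
            for w, head, pos_body, naf_body in program
            if not (set(naf_body) & literals)]

# Example: {0.1: invalid <-.  1: airport <- not invalid.} w.r.t. the support
# {invalid} of the guessed valuation only keeps the first rule, so the guess
# {(invalid, 0.1)} is stable, mirroring Example 1 below.
program = [(0.1, "invalid", (), ()), (1.0, "airport", (), ("invalid",))]
print(possibilistic_reduct(program, {"invalid"}))
```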

Example 1

Consider the possibilistic normal program from the introduction:

0.1:
1:

It is easy to verify that is a possibilistic answer set of . Indeed, is the set of rules:

0.1:

from which it trivially follows that . The conclusion is thus that we do not need to go to the airport, which differs from our intuition of the problem. We will revisit this example in Example 4 in Section 3.2.

The semantics we presented allow for classical negation, even though this was not considered in [Nicolas et al. (2006)]. However, adding classical negation does not pose any problems and could, as an alternative, easily be simulated in ASP [Baral (2003)].

2.4 Complexity Theory

Finally, we recall some notions from complexity theory. The complexity classes $\Sigma_2^P$ and $\Pi_2^P$ are defined as follows [Papadimitriou (1994)]:

$$\Sigma_2^P = \mathrm{NP}^{\mathrm{NP}} \qquad \Pi_2^P = \mathrm{co}\Sigma_2^P$$

where $\mathrm{NP}^{\mathcal{C}}$ is the class of problems that can be solved in polynomial time on a non-deterministic machine with a $\mathcal{C}$ oracle, i.e. assuming a procedure that can solve problems in $\mathcal{C}$ in constant time. We also consider the complexity class  [Cai et al. (1988)], which is the class of all languages such that , where is in  and is in . For a general complexity class $\mathcal{C}$, a problem is $\mathcal{C}$-hard if any problem in $\mathcal{C}$ can be polynomially reduced to this problem. A problem is said to be $\mathcal{C}$-complete if the problem is in $\mathcal{C}$ and the problem is $\mathcal{C}$-hard. Deciding the validity of a Quantified Boolean Formula (QBF) of the form $\exists X \forall Y \cdot \phi$ with $\phi$ in disjunctive normal form (DNF) is the canonical $\Sigma_2^P$-complete problem. The decision problems we consider in this paper are brave reasoning (deciding whether a literal '$l$' (clause '$c$') is entailed by a consistent answer set of a program $P$), cautious reasoning (deciding whether a literal '$l$' (clause '$c$') is entailed by every consistent answer set of a program $P$) and answer set existence (deciding whether a program has a consistent answer set). Brave reasoning as well as answer set existence for simple, normal and disjunctive programs is P-complete, NP-complete and $\Sigma_2^P$-complete, respectively [Baral (2003)]. Cautious reasoning for simple, normal and disjunctive programs is P-complete, coNP-complete and $\Pi_2^P$-complete [Baral (2003)].

3 Characterizing (P)ASP

ASP lends itself well to being characterized in terms of modalities. For instance, ASP can be characterized in autoepistemic logic by interpreting '$\textit{not}\; l$' as an epistemic formula expressing that '$l$' is not believed [Gelfond (1987)]. In this paper, as an alternative, we show how ASP can be characterized within possibility theory. To arrive at this characterization, we first note that ASP is essentially a special case of PASP in which every rule is certain. As such, we will show how PASP can be characterized within possibility theory. This characterization does not coincide with the semantics proposed in [Nicolas et al. (2006)] for PASP, as the semantics from [Nicolas et al. (2006)] rely on the classical Gelfond-Lifschitz reduct. Rather, the semantics that we propose for PASP adhere to a different intuition of negation-as-failure. A characterization of ASP is then obtained from these new semantics by considering the special case in which all rules are entirely certain.

This characterization of ASP, while still in terms of modalities, stays close in spirit to the Gelfond-Lifschitz reduct. In contrast to the characterization in terms of autoepistemic logic it does not require a special translation of literals to deal with classical negation and disjunction. The core idea of our characterization is to encode the meaning of each rule as a constraint on possibility distributions. Particular minimally specific possibility distributions that satisfy all the constraints imposed by the rules of a program will then correspond to the answer sets of that program.

In this section, we first limit our scope to possibilistic simple programs (Section 3.1). Afterwards we will broaden the scope and also consider possibilistic normal programs (Section 3.2). The most general case, in which we also consider possibilistic disjunctive programs, will be discussed in Section 4.

3.1 Characterizing Possibilistic Simple Programs

When considering a fact, i.e. a rule of the form $(l \leftarrow)$, we know by definition that this rule encodes that the literal in the head is necessarily true, i.e. $N(l) = 1$. If we attach a weight to a fact, then this expresses the knowledge that we are not entirely certain of the conclusion in the head, i.e. for a possibilistic rule $(c: l \leftarrow)$ we have that $N(l) \geq c$. Note that the constraint uses an inequality rather than an equality, as there may be other rules in the program that allow us to deduce $l$ with a greater certainty.

In a similar fashion we can characterize a rule of the form $(l_0 \leftarrow l_1, \ldots, l_m)$ as the constraint $N(l_0) \geq N(l_1 \wedge \ldots \wedge l_m)$, which is equivalent to the constraint $N(l_0) \geq \min(N(l_1), \ldots, N(l_m))$ due to the min-decomposability property of the necessity measure. Indeed, the intuition of such a rule is that the head is only necessarily true when every part of the body is true. When associating a weight with a rule, we obtain the constraint $N(l_0) \geq \min(c, N(l_1), \ldots, N(l_m))$ for a possibilistic rule $(c: l_0 \leftarrow l_1, \ldots, l_m)$ with $c \in (0, 1]$. Similarly, to characterize a constraint rule, i.e. a rule of the form $(\leftarrow l_1, \ldots, l_m)$, we use the constraint obtained by reading the empty head as the contradiction $\bot$, or, in the possibilistic case with weight $c$, the corresponding weighted constraint.
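A small Python sketch of this constraint reading, checking a candidate possibility distribution against simple possibilistic rules, is given below; the encodings (and the restriction to atomic literals) are illustrative assumptions.

```python
# Sketch of the constraint reading of a simple possibilistic rule
# "c: l0 <- l1, ..., lm" as N(l0) >= min(c, N(l1), ..., N(lm)), checked
# against a candidate possibility distribution over sets of atoms.

def necessity(pi, literal):
    """N(l) = 1 - max{pi(w) : w does not satisfy l}, for an atomic literal."""
    return 1.0 - max((v for w, v in pi.items() if literal not in w), default=0.0)

def satisfies_rule(pi, weight, head, body):
    bound = min([weight] + [necessity(pi, l) for l in body])
    return necessity(pi, head) >= bound

# Candidate distribution over interpretations built from atoms a and b.
pi = {frozenset(): 0.2, frozenset({"a"}): 0.2,
      frozenset({"b"}): 0.2, frozenset({"a", "b"}): 1.0}

# Rules 0.8: a <-.   and   0.6: b <- a.
print(satisfies_rule(pi, 0.8, "a", ()), satisfies_rule(pi, 0.6, "b", ("a",)))
```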

Definition 2

Let be a possibilistic simple program and a possibility distribution. For every , the constraint imposed by with , and is given by

(1)

is the set of constraints imposed by program . If satisfies the constraints in , is said to be a possibilistic model of , written . A possibilistic model of will also be called a possibilistic model of . We write for the set of all minimally specific possibilistic models of .

Definition 3

Let be a possibilistic simple program. Let be a minimally specific model of , i.e. . Then is called a possibilistic answer set of .

Example 2

Consider the possibilistic simple program with the rules:

The set consists of the constraints:

It is easy to see that the last constraint is trivial and can be omitted and that the other constraints can be simplified to , and . The least specific possibility distribution that satisfies these constraints is given by:

By definition, since the possibility distribution satisfies the given constraints, is a possibilistic model. Furthermore, it is easy to see that is the unique minimally specific possibilistic model (due to least specificity). We can verify that since we have that and that since . Furthermore it is easy to verify that , and . Hence we find that is a possibilistic answer set of .

In particular, when we consider all the rules to be entirely certain, i.e. , the results are compatible with the semantics of classical ASP.

Example 3

Consider the program . The set of constraints is given by and . The first constraint can be rewritten as , i.e. as . The last constraint can be rewritten as , i.e. as . Given these two constraints, we find that contains exactly one element, which is defined by

Notice how the first constraint turned out to be of no relevance for this particular example. Indeed, due to the principle of minimal specificity and since there is nothing that prevents , we find that . Therefore the first constraint simplifies to . Once more, due to the principle of minimal specificity we thus find that as there is no information that prevents . To find out whether , and are necessarily true w.r.t. the least specific possibility distribution arising from the program, we verify whether , , and , respectively, with the necessity measure induced by the unique least specific possibility distribution . As desired, we find that whereas . The unique possibilistic answer set is therefore . As we will see, it then follows from Proposition 1 that the unique classical answer set of is .

In Propositions 1 and 2, below, we prove that this is indeed a correct characterization of simple programs. First, we present a technical lemma.

Lemma 1

Let be a set of literals, a consistent set of literals and let the possibility distribution be defined as if and otherwise. Then .

The proof is given in the online appendix of the paper, pp. 1–2.

Proposition 1

Let be a simple program. If then either the unique consistent answer set of is given by or is the vacuous distribution, in which case does not have any consistent answer sets.

The proof is given in the online appendix of the paper, pp. 2–4.

Proposition 2

Let be a simple program. If is an answer set of then the possibility distribution defined by iff and otherwise belongs to .

The proof is given in the online appendix of the paper, p. 4.

3.2 Characterizing Possibilistic Normal Programs

To deal with negation-as-failure, we rely on a reduct-style approach in which a valuation is guessed and it is verified whether this guess is indeed stable. The approach taken in [Gelfond and Lifschitz (1988)] to deal with negation-as-failure is to guess an interpretation and verify whether this guess is stable. We propose to treat a rule of the form $(l_0 \leftarrow l_1, \ldots, l_m, \textit{not}\; l_{m+1}, \ldots, \textit{not}\; l_n)$ as the constraint

where is the guess for the valuation and where we assume . Or, when we consider a possibilistic rule , we treat it as the constraint

We would like to make it clear to the reader that the characterization of normal programs in terms of constraints on possibility distributions is, in its basic form, little more than a reformulation of the Gelfond-Lifschitz approach. The key difference is that this characterization can be used to guess the certainty with which we can derive particular literals from the available rules, rather than guessing what may or may not be derived from them. Nevertheless, this difference plays a crucial role when dealing with uncertain rules. In particular, this characterization of PASP does not coincide with the semantics of [Nicolas et al. (2006)] and adheres to a different intuition for negation-as-failure.
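One plausible explicit form of the constraints just described, consistent with the weakest-link intuition and with the negator discussion later in this section (the paper's exact formulation may differ), is the following: for a possibilistic rule $c: l_0 \leftarrow l_1, \ldots, l_m, \textit{not}\; l_{m+1}, \ldots, \textit{not}\; l_n$ and a guessed valuation $V$,

\[
  N(l_0) \;\geq\; \min\bigl(c,\; N(l_1), \ldots, N(l_m),\; 1 - V(l_{m+1}), \ldots, 1 - V(l_n)\bigr),
\]

with the classical, fully certain reading obtained by taking $c = 1$.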

Definition 4

Let be a possibilistic normal program and let be a valuation. For every , the constraint induced by with , and is given by

(2)

is the set of constraints imposed by program and valuation , and is the set of all minimally specific possibilistic models of .

Definition 5

Let be a possibilistic normal program and let be a valuation. Let be such that

then is called a possibilistic answer set of .

Example 4

Consider the possibilistic normal program from Example 1. The constraints induced by are:

From the first constraint it readily follows that we need to choose to comply with the principle of minimal specificity. The other constraint can then readily be simplified to:

Hence it follows that is the unique possibilistic answer set of .

It is easy to see that the proposed semantics remain closer to the intuition of the possibilistic normal program discussed in the introduction. Indeed, we conclude with a high certainty that we need to go to the airport.
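To make the computation in Example 4 explicit, the following is a hedged reconstruction using the assumed atom names $\mathit{inv}$ (the passport is invalid) and $\mathit{air}$ (go to the airport) and the weights 0.1 and 1 from the introduction; the paper's own literals and notation may differ. The two rules impose

\[
  N(\mathit{inv}) \geq 0.1, \qquad N(\mathit{air}) \geq \min\bigl(1,\; 1 - V(\mathit{inv})\bigr),
\]

so the only stable guess is the valuation $V = \{(\mathit{inv}, 0.1), (\mathit{air}, 0.9)\}$: by minimal specificity $N(\mathit{inv}) = 0.1$, hence $N(\mathit{air}) = 1 - 0.1 = 0.9$, telling us to go to the airport with necessity 0.9 rather than 0.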

Still, it is interesting to further investigate the particular relationship between the semantics for PASP as proposed in [Nicolas et al. (2006)] and the semantics presented in this section. Let the possibilistic rule be of the form:

When we determine the reduct w.r.t. a valuation  of the possibilistic program containing , then the certainty of the rule in the reduct that corresponds with can be verified to be:

with $\sim$ a fuzzy negator, i.e. a decreasing function $[0,1] \to [0,1]$ with $\sim\! 0 = 1$ and $\sim\! 1 = 0$. In particular, for the semantics of [Nicolas et al. (2006)] we have that $\sim$ is the Gödel negator, defined as $\sim\! 0 = 1$ and $\sim\! x = 0$ for $x > 0$. In the semantics for PASP presented in this section, $\sim$ is the Łukasiewicz negator with $\sim\! x = 1 - x$. Thus, for a rule such as:

and a valuation we obtain under the approach from [Nicolas et al. (2006)] the corresponding reduct rule, whereas under our approach we obtain a constraint which can again be encoded by a rule. Essentially, the difference between both semantics can thus be reduced to a difference in the choice of negator. However, even though the semantics share similarities, there is a notable difference in the underlying intuition of both approaches. Specifically, in the semantics presented in this paper, we have that '$\textit{not}\; l$' is understood as "the degree to which '$\neg l$' is possible", or, equivalently, "the degree to which it is not the case that we can derive '$l$' with certainty". This contrasts with the intuition of '$\textit{not}\; l$' in [Nicolas et al. (2006)] as a Boolean condition, understood as "we cannot derive '$l$' with a strictly positive certainty".
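The difference between the two negators can be illustrated with a few lines of Python; combining the negated value with the rule weight via min follows the weakest-link intuition and is an illustrative assumption.

```python
# Sketch of the two fuzzy negators discussed above, and of the reduct weight
# min(c, ~V(l)) they induce for a rule "c: head <- not l" given a guess V(l).

def goedel_negator(x):
    """Boolean reading: 'not l' holds only if l has certainty 0."""
    return 1.0 if x == 0 else 0.0

def lukasiewicz_negator(x):
    """Graded reading: 'not l' holds to the degree 1 - V(l)."""
    return 1.0 - x

c, v_l = 1.0, 0.1   # rule weight and guessed certainty of l, as in the airport example
print(min(c, goedel_negator(v_l)))        # 0.0  -> head not derivable at all
print(min(c, lukasiewicz_negator(v_l)))   # 0.9  -> head derivable with certainty 0.9
```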

Interestingly, we find that the complexity of the main reasoning tasks for possibilistic normal programs remains at the same level of the polynomial hierarchy as the corresponding normal ASP programs.

While we will see in Section 5 that the complexity of possibilistic normal programs remains unchanged compared to classical normal programs, it is important to note that under the semantics proposed in this section there is no longer a one-to-one mapping between the classical answer sets of a normal program and the possibilistic answer sets. Indeed, if we consider a possibilistic normal program constructed from a classical normal program by attaching certainty 1 to each rule, then we can sometimes obtain additional intermediary answer sets. Consider the following example:

Example 5

Consider the normal program with the single rule . This program has no classical answer sets. Now consider the possibilistic normal program with the rule

The set of constraints is given by

This constraint can be rewritten as

We thus find that the set is a singleton with defined by and . We can now establish for which choices of it holds that :

and thus, since , we have . The unique possibilistic answer set of is therefore . In the same way, one may verify that the program

1: 1:

has an infinite number of possibilistic answer sets, i.e. one for each choice of an intermediate certainty value. For practical purposes, however, this behavior has a limited impact, as we only need to consider a finite number of certainty levels to perform brave/cautious reasoning. Indeed, we only need to consider the certainties used in the program, their complements (to account for negation-as-failure) and the intermediary value 0.5 (to account for answer sets as in Example 5). Thus, for the main reasoning tasks it suffices to limit our attention to the certainties from this finite set.
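As a hedged recap of the derivation in Example 5, assuming (as the constraint rewriting above suggests) that its single rule is of the form $1: a \leftarrow \textit{not}\; a$, the induced constraint on the guess $V$ is

\[
  N(a) \;\geq\; \min\bigl(1,\; 1 - V(a)\bigr) \;=\; 1 - V(a),
\]

so the minimally specific model assigns $N(a) = 1 - V(a)$, and stability requires $V(a) = 1 - V(a)$, i.e. $V(a) = 0.5$; this is the intermediary certainty level referred to above.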

We now show that when we consider rules with an absolute certainty, i.e. classical normal programs, we obtain a correct characterization of classical ASP, provided that we restrict ourselves to absolutely certain conclusions, i.e. valuations that map every literal to either 0 or 1.

Example 6

Consider the program with the rules

The set of constraints is then given by
We can rewrite the first constraint as and thus . The second constraint is trivially satisfied and, since it does not entail any new information, can be dropped. The last constraint can be rewritten as , which imposes an upper bound on the value that can assume. Since we already know that we can further simplify this inequality to . In conclusion, the program imposes the constraints

The set then contains exactly one element, which is defined by