Formal models of belief revision differ in what they consider as representations of the epistemic states of an agent. In the AGM approach [Gärdenfors1988] epistemic states are identified with logical theories, that is, sets of formulas closed under classical inference. Other approaches like those discussed in [Nebel1992] consider finite sets of formulas, sometimes called belief bases, as epistemic states. They investigate how to revise such belief bases. Only a rather small fraction of work in belief revision has studied an obvious alternative: the revision of epistemic states expressed as nonmonotonic theories [Brewka1991, Williams & Antoniou1998, Antoniou et al.1999, Chopra & Parikh1999].
This is somewhat surprising since close relationships between properties of nonmonotonic inference relations and postulates for belief revision have been established [Makinson & Gärdenfors1991, Gärdenfors & Makinson1994]. Indeed, one of the reasons why nonmonotonic logics were invented is their ability to handle conflicts and inconsistencies, one of the major issues in belief revision. If this is the case, shouldn’t it be possible to use the power of nonmonotonic inference to simplify revision? In fact, what we have in mind is a complete trivialization of the revision problem. We want to be able to revise nonmonotonic theories simply by adding new information, and we want to leave everything else to the nonmonotonic inference relation.
An early approach in this spirit was the author's paper [Brewka1991], which used an extension of Poole systems [Poole1988], the so-called preferred subtheory approach. New information, possibly annotated with its reliability level, was simply added to the available information. The nonmonotonic inference relation determined the acceptable beliefs.
We are no longer satisfied with this approach, for several reasons. Existing theories of belief revision, including the one presented in the earlier paper, have difficulty modeling the way real agents revise their beliefs. One of the reasons for this is that they do not represent information which is commonly used by agents for this purpose. For instance, new information always comes together with certain meta-information (formulas don't fly into the agent's mind): Where does the information come from? Was it an observation? Did you read it in the newspaper? Did someone tell you, and if so, who? Did the person who gave you the information have a motive to lie? And so on. In most cases we reason with and about this meta-information when revising our beliefs. We strongly believe that realistic models of revision should provide the necessary means to represent this kind of information.
The meta-information is used to determine the entrenchment of pieces of information. The less entrenched the information is, the more willing we are to give it up. Again, entrenchment relations are not just there; they result from reasoning processes. To model this kind of reasoning, entrenchment should be expressible in the logical language. Once we have the possibility to express entrenchment (or plausibility, or preference) in the language, it also becomes possible to represent revision strategies declaratively. This in turn makes it possible to revise the revision strategies themselves.
Here is a real life example that can be used to illustrate what we have in mind. Assume Peter tells you that your girl-friend Anne went out for dinner with another man yesterday. Peter even knows his name: it was John, a highly attractive person known for having numerous affairs. You are concerned and talk to Anne about this. She tells you she was at home yesterday evening waiting for you to call. Peter insists that he saw Anne with that man. You are not sure what to believe. Luckily, you find out that Anne has a twin sister Mary. Mary indeed went out with her new boy-friend John. This explains why Peter got mixed up. You now believe Anne and happily continue your relationship.
What this example nicely illustrates is the way we reason about the reliability of information. There is no given fixed entrenchment ordering to start with. In the example there is also, at least in the beginning, no reason to trust Peter more than the girl-friend, or vice versa. And obviously, it is not the new information that is accepted in each situation. It is the additional context information which is relevant here: it gives us an explanation for Peter’s mistake and decreases the reliability of Peter’s observation enough to break the tie.
To be able to formalize examples of this kind we propose in this paper an approach to belief revision where
nonmonotonic belief bases represent epistemic states and nonmonotonic inference is used to completely trivialize revision,
it is possible to express and reason about meta-information, including the reliability of formulas,
revision strategies can be represented declaratively, that is, logical formulas express how conflicts among different pieces of information are resolved.
The outline of the paper is as follows. In the next section we introduce the nonmonotonic formalism we use here to represent epistemic states. In the following section we show how to use this formalism for representing revision strategies. We then discuss the AGM postulates for revision and show that almost all of them are not valid in our approach (which does not bother us). In the following section we briefly deal with contraction. We then discuss forgetting in the context of our approach. Finally, we discuss related work and conclude.
2 Representing reliability relations
In this section we introduce the formalism used in this paper. As mentioned in the introduction, one of the distinguishing features of our approach is that we want to be able to reason about the reliability of the available information in the logical language. In the AGM approach [Gärdenfors1988, Gärdenfors & Makinson1988] entrenchment relations are used to represent how strongly an agent sticks to his beliefs: the more entrenched a formula, the less willing the agent is to give it up. Entrenchment relations have several properties which are based on the logical strength of the formulas. For instance, logically weaker formulas are not less entrenched than logically stronger ones. The intuition is that if a weaker formula has to be given up, the stronger formula has to be given up anyway.
In our approach we do not require such properties. We may even have equivalent formulas p and q with different reliability. This may happen when, for instance, p and q come from sources of different reliability. Note that although the less reliable information does not add to the accepted beliefs as long as the more reliable equivalent information is in force, the situation may change when new information about the reliability of p is obtained. Should p turn out to be highly unreliable later (of course, beliefs about the reliability of sources may be revised like any other beliefs), then it becomes important to have q with, say, somewhat lower reliability available.
All we require, therefore, is the existence of a strict partial order on formulas which tells us how to resolve potential conflicts. To avoid misunderstandings we will not call this order an entrenchment relation. Instead, we speak of reliability, or simply priority, among formulas.
Since we want to represent this order in the logical language we need to be able to refer to formulas. Instead of using a quoting mechanism for this purpose, we will use named formulas, that is, pairs consisting of a formula and a name for the formula. Technically, names are just ground terms that can be used everywhere in the language.
We will present our formalism in two steps: we first introduce an extension of Poole systems which allows us to express preference information in the language, together with an appropriate definition of extensions. It turns out that due to the potential self-referentiality of preference information not all theories expressed in this formalism possess extensions, that is, acceptable sets of beliefs. In a second step, we therefore introduce a new notion of prioritized inference defined as the least fixed point of a monotone operator. Epistemic states, then, are identified with preferential default theories under this least fixed point semantics.
Our basic formalism extends the well-known Poole systems [Poole1988]. Recall that a Poole system consists of a consistent set F of (first order) formulas, the facts, and a possibly inconsistent set D of formulas, the defaults. A set of formulas E is an extension of a Poole system iff E = Th(F ∪ D') where D' is a maximal F-consistent subset of D, that is, a maximal subset of D whose union with F is consistent.
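Poole's construction can be sketched in a few lines. The following is a toy illustration, not code from the paper: propositional formulas are encoded as Python predicates over truth assignments, consistency is brute-forced over a small fixed vocabulary, and all names (`consistent`, `poole_extension_bases`, the bird/penguin vocabulary) are our own.

```python
from itertools import combinations, product

# Hypothetical toy encoding: a formula is a predicate over a truth
# assignment (a dict mapping variable -> bool).
VARS = ["bird", "penguin", "flies"]

def consistent(formulas):
    """True iff some assignment over VARS satisfies all formulas."""
    return any(all(f(a) for f in formulas)
               for a in (dict(zip(VARS, bits))
                         for bits in product([True, False], repeat=len(VARS))))

def poole_extension_bases(facts, defaults):
    """Maximal subsets D' of `defaults` such that facts + D' is consistent.
    Each Th(facts + D') is then an extension of the Poole system."""
    bases = []
    for k in range(len(defaults), -1, -1):          # try large subsets first
        for combo in combinations(defaults, k):
            if consistent(facts + list(combo)) and \
               not any(set(combo) < b for b in bases):   # keep only maximal ones
                bases.append(set(combo))
    return bases
```

For the classic bird/penguin conflict this yields three maximal consistent subsets of the defaults, one per resolvable conflict.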
Our formalism differs from this approach in the following respects:
In the context of belief revision it seems inappropriate to consider some information as absolutely certain and unrevisable. We therefore do not use a separate set F of facts. Instead, we have a single set containing all the information. (Footnote: One of the reviewers of this paper points out that using F may have representational advantages since it eliminates the need to use preferences to indicate the most reliable information; see our examples in the rest of the paper. We might therefore reintroduce F in future versions of this paper for purely practical reasons. This does not seem to pose any technical problems.)
We represent preference and other meta-information in the language. We therefore introduce names for formulas and a special symbol ≺. The formula n1 ≺ n2 intuitively says that in case of a conflict the formula named n1 should be given up rather than the one named n2, since the latter is more reliable. We require that ≺ represents a strict partial order. (Footnote: We assume that the properties of ≺, like those of equality, are part of the underlying logic and need not be represented through explicit axioms in our default theories.)
We introduce a new notion of extension which takes the preference information into account adequately.
To avoid confusion we want to emphasize that ≺ belongs to the logical language, whereas the ordering symbols used in the definitions below, like <, are meta level symbols. For the following definitions it is essential to clearly separate between these levels.
For simplicity, we only consider finite default theories in this paper. A generalization to the infinite case would have to reduce partial orderings to well-orderings rather than total orders.
A named formula is a structure of the form (p, n), where p is a first order formula and n a ground term representing the name of the formula.
We use the functions name and form to extract the name respectively the formula of a named formula, that is, name((p, n)) = n and form((p, n)) = p. We will also apply both functions to sets of named formulas with the obvious meaning.
A preference default theory T is a finite set of named formulas such that
form(T) is a set of first order formulas whose logical language contains a reserved symbol ≺ representing a strict partial order, and
(p, n) ∈ T and (q, n) ∈ T implies p = q.
The last item in the definition guarantees that different formulas have different names.
Let T be a preference default theory with k elements and let < be a total order on T. The extension of T generated by <, denoted E_<(T), is the set Th(S_k) where
S_0 = ∅, and for 0 < i ≤ k
S_i = S_{i-1} ∪ {form(d_i)} if this set is consistent, S_i = S_{i-1} otherwise.
Here d_i is the i-th element of T according to the total order <, starting from the greatest (most reliable) element.
The set S_k is called the extension base of E_<(T).
We say E is an extension of T if E is generated by some total order on T. Obviously, all maximal consistent subsets of form(T) are extension bases. We now consider the general case of partial orders.
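The greedy construction of an extension base from a total order can be sketched as follows. Again this is a toy propositional encoding with invented names (`consistent`, `extension_base`); the paper itself works with full first-order formulas and deductive closure.

```python
from itertools import product

# Toy setting: formulas are predicates over truth assignments.
VARS = ["p", "q"]

def consistent(formulas):
    return any(all(f(a) for f in formulas)
               for a in (dict(zip(VARS, bits))
                         for bits in product([True, False], repeat=len(VARS))))

def extension_base(theory, order):
    """theory: dict name -> formula; order: list of names, most reliable
    first. One pass adds each formula whose addition keeps the base
    consistent; the extension is then the deductive closure of the result."""
    base = []
    for name in order:
        if consistent(base + [theory[name]]):
            base.append(theory[name])
    return base
```

Different total orders resolve the same conflict differently, which is exactly why a single theory can have several extensions.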
Let T be a preference default theory and R a strict partial order on T. The set of extensions of T generated by R consists of all extensions generated by total orders on T that extend R.
We next define two notions of compatibility:
Let T be a preference default theory, R a strict partial ordering of T, and S a set of formulas. We say R is compatible with S iff for all named formulas (p, n1) and (q, n2) in T: whenever S entails n1 ≺ n2, we have p R q.
An extension E of T is compatible with S iff there is a strict partial ordering R of T compatible with S such that E is among the extensions of T generated by R.
The set of extensions of T compatible with S is denoted Ext(T, S).
Let T be a preference default theory. A set of formulas E is called a preferred extension of T iff E is an extension of T compatible with E itself.
Intuitively, E is a preferred extension if it is the deductive closure of a maximal consistent subset of form(T) which can be generated through a total preference ordering compatible with the preference information in E itself. The preference information in E certainly does not have to be total.
Here is a simple example illustrating preference default theories:
As is common in Poole systems, rules with exceptions, that is, formulas whose instances can be defeated without defeating the formula as a whole, are represented as schemata used as abbreviations for all of their ground instances. As above we will make the intended instances explicit in all examples. To make sure that the different ground instances can be distinguished by name we have to parameterize the names as well. We assume that terms used as names can be distinguished from other terms, which we call object terms. (Footnote: A more elaborate formalization would be based on sorted logic with sorts for names and other types of objects from the beginning. We do not pursue this here since we want to keep things as simple as possible.) In our case, is a proper rule name, is not. Since we only consider finite theories we must also assume that the set of object terms is finite.
In our example we obtain 3 extensions , and . In the instance of with is rejected, in is rejected, and rejects . All extensions contain and . It is not difficult to see that only can be constructed using a total ordering of which is compatible with this information. is thus the single preferred extension of this preference default theory.
Preference default theories under extension semantics are very flexible and expressive. The reason we are not yet fully satisfied with them is that they can express unsatisfiable preference information: there are theories which do not possess any preferred extensions. The simplest example is as follows:
Accepting the first of the two contradictory formulas requires giving preference to the second, and vice versa. No preferred extension exists for this theory.
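The self-referential failure can be mimicked abstractly. In this sketch (all names and the string encoding are ours), extensions are plain sets of conclusions, the ordering-compatibility test is supplied as a black-box predicate, and a preferred extension is one compatible with itself:

```python
def preferred_extensions(extensions, compatible):
    # E is a preferred extension iff E can be generated by an ordering
    # compatible with the preference information contained in E itself.
    return [e for e in extensions if compatible(e, e)]

# Two candidate extensions: accepting the first formula yields preference
# info favoring the second, and vice versa (a stand-in for the theory above).
e1 = frozenset({"p", "n1 < n2"})       # built by trying n1 first
e2 = frozenset({"not p", "n2 < n1"})   # built by trying n2 first

def compatible(ext, accepted):
    # An extension built n1-first is ruled out once the accepted information
    # says n1 is less reliable than n2, and symmetrically.
    if "n1 < n2" in accepted and "p" in ext:
        return False
    if "n2 < n1" in accepted and "not p" in ext:
        return False
    return True
```

Here `preferred_extensions([e1, e2], compatible)` is empty: each extension undermines the ordering that produced it.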
This means that preference default theories together with the standard notion of nonmonotonic inference where a formula is considered derivable whenever it is contained in all (preferred) extensions do not seem fully adequate for representing epistemic states of rational agents.
We will therefore introduce another, somewhat less standard notion of nonmonotonic consequence. (Footnote: An alternative way of handling this problem would be to introduce some kind of "stratification" into our theories. Stratification, a term taken from the area of logic programming, would prohibit formulas from speaking, directly or indirectly, about their own priority. Unfortunately, it turns out that only highly restrictive forms of stratification guarantee existence of extensions. For this reason we do not pursue this approach here.)
This approach shares some intuition with the fixed point formulation of well-founded semantics for logic programs with negation due to Baral and Subrahmanian [Baral & Subrahmanian1991]. In particular, it is based on the least fixed point of a monotone operator.
Let us first explain the underlying idea. Starting with the empty set, we iteratively compute the intersection of those extensions which are compatible with the information obtained so far. Since the set of formulas computed in each step may contain new preference information the number of extensions may be reduced, and their intersection thus may grow. We continue like this until no further change happens, that is, until a fixed point is reached.
Let T be a preference default theory and S a set of formulas. We define an operator C_T as follows: C_T(S) is the intersection of all extensions of T compatible with S.
The operator C_T is monotone.
Proof: S ⊆ S' implies that an ordering is compatible with S whenever it is compatible with S'. The extensions compatible with S' thus form a subset of those compatible with S, and therefore the intersection of the former contains the intersection of the latter, that is, C_T(S) ⊆ C_T(S').
Monotone operators, according to the well-known Knaster-Tarski theorem [Tarski1955], possess a least fixed point. For the finite theories considered here, this fixed point can be computed by iterating the operator on the empty set. We can therefore define the accepted conclusions of a preference default theory as follows:
Let T be a preference default theory. A formula p is an accepted conclusion of T iff p ∈ S*, where S* is the least fixed point of the operator C_T.
We call extensions of T which are compatible with the set of accepted conclusions accepted extensions.
Several illustrative examples will be given in the next section. Here we just show how the theory without preferred extensions introduced above is handled in this approach. We first compute C_T(∅). Since no preference information is available in the empty set, all extensions are compatible, and we obtain their intersection. This set is already the least fixed point.
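The iteration just described can be sketched abstractly, with extensions given as sets of conclusions and the ordering-compatibility test passed in as a predicate (the names and encoding are ours, not the paper's):

```python
def accepted_conclusions(extensions, compatible):
    """Iterate C_T from the empty set: intersect all extensions compatible
    with what has been accepted so far, until a fixed point is reached.
    `compatible(ext, accepted)` abstracts the compatibility test."""
    accepted = frozenset()
    while True:
        surviving = [e for e in extensions if compatible(e, accepted)]
        if not surviving:   # cannot happen for consistent theories (see proof)
            return accepted
        new = frozenset.intersection(*surviving)
        if new == accepted:
            return accepted
        accepted = new
```

In the test below, accepting a piece of preference information in the first round rules out one extension, so the intersection grows in the second round before the fixed point is reached.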
Let T be a preference default theory and p an accepted conclusion of T. Then p is contained in all preferred extensions of T.
Proof: If T has no preferred extension the proposition is trivially true. So assume T possesses preferred extensions. A simple induction shows that each preferred extension is among the extensions compatible with the set of formulas computed in each step of the iteration of C_T. Therefore each preferred extension is also an accepted extension.
Let T be a preference default theory. The set of accepted conclusions of T is consistent.
Proof: We show by induction that, for arbitrary i, the set of formulas obtained after i applications of C_T is consistent. For i = 0 this is trivial. Assume the set S of formulas obtained after i iterations is consistent. Since S is consistent and ≺ formalizes a strict partial ordering, there must be at least one strict partial ordering compatible with S, so the set of all extensions compatible with S is nonempty. Since each extension is by definition consistent, the intersection of an arbitrary nonempty set of extensions must also be consistent.
Since preference default theories under accepted conclusion semantics always lead to consistent beliefs, we will in the next section identify epistemic states with preference default theories and belief sets with their accepted conclusions.
3 Revising epistemic states
3.1 The revision operator
Given that an agent's epistemic state is identified with a preference default theory as introduced in the last section, it is natural to identify the set of beliefs accepted by the agent with the accepted conclusions of this theory. We therefore define belief sets as follows:
Let T be an epistemic state. Bel(T), the belief set induced by T, is the set of accepted conclusions of T.
It is a basic assumption of our approach that belief sets cannot be revised directly. Revision of belief sets is always indirect, through the revision of the epistemic state inducing the belief set. Note that since two different epistemic states may induce the same belief set, the revision function which takes an epistemic state and a formula and produces a new epistemic state does not induce a corresponding function on belief sets.
Given an epistemic state T, revising it with new information p simply means generating a new name for p and adding the corresponding named formula.
Let T be an epistemic state and p a formula. The revision of T with p, denoted T * p, is the epistemic state T ∪ {(p, n)} where n is a new name not appearing in T.
Notation: in the rest of the paper we assume that names are of the form f_i where i is a numbering of the formulas. If T has k elements and a new formula is added, then its new name is f_{k+1}.
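Under this scheme the revision operator itself is a one-liner. The sketch below (our own encoding of epistemic states as lists of name/formula pairs) shows that revision is pure addition: all conflict resolution is deferred to the nonmonotonic semantics when conclusions are computed.

```python
def revise(state, formula):
    """Revision of an epistemic state: add the new information under a
    fresh name f_{k+1}, following the naming convention above."""
    return state + [("f%d" % (len(state) + 1), formula)]
```

Note that preference information is revised exactly like any other belief, e.g. `revise(state, "f1 < f2")`, and that old information is never destroyed.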
3.2 Representing revision strategies
In this subsection we show how revision strategies used by an agent can be represented in our approach. We first discuss an example where the strategy is based on the type of the available information. We distinguish between strict rules, observations and defaults. Strict rules have highest priority because they represent well-established or terminological information. Observations can be wrong, but they are considered more reliable than default information. Consider the following epistemic state:
has 4 extensions. The corresponding extension bases are obtained from by leaving out , , , or , respectively. All extensions, and thus , contain information stating that has lower preference than the other three formulas. Therefore, the only extension compatible with is the one generated by leaving out . This set is also the least fixpoint of . thus does not contain .
The next example formalizes the revision strategy of an agent who prefers newer information over older information and information from a more reliable source over information from a less reliable source. In case of a conflict between the two criteria the latter one wins.
Assume the following specific scenario: at time 10 Peter informs you that p holds. At time 11 John tells you this is not true. Although you normally prefer later information, you also have reason to prefer what Peter told you since you believe Peter is more reliable than John. Since you consider the reliability of your sources even more important than the temporal order, you believe p.
Here is the formal representation of this scenario. We use where is a finite set of names as an abbreviation for . Note that we have to make sure, by adding adequate preferences, that the rules representing our revision strategy cannot be used, via contraposition, to defeat our meta-knowledge about sources and times:
This preference default theory has 14 extensions which are obtained by leaving out one of and one of . All extensions contain . This means that after the next iteration of the operator we are left with 2 extensions which are obtained by leaving out one of and . Both extensions contain the formula . The next and final iteration of the operator thus eliminates the extension containing . We are left with a single extension and is among the accepted conclusions.
We next present the example from the introduction. This time we use reliability categories (Footnote: We assume uniqueness of names for the categories. Otherwise the set would be consistent and could be used to defeat which, obviously, is unintended.) to express reliability: the reliability of a formula with name is . We have the following information:
Although the agent initially considers and as equally reliable, the information that Anne has a twin sister Mary who is dating John decreases the reliability of to . and say how the reliability categories are to be translated to preferences. and make sure that meta-information is preferred, and that can defeat . Taking all reliability information into account the agent accepts .
Note that we do not discuss here how agents form meta-beliefs of the kind required to represent the example (induction, folk psychology?). We simply assume that this information is available to the agent.
4 The AGM postulates
We now discuss the postulates for revision which are at the heart of the AGM approach [Gärdenfors1988]. Since our approach uses epistemic states rather than deductively closed sets of formulas (belief sets) as the substrate of revision, some of the postulates need reformulation. In particular, AGM use the expansion operator in some postulates. Expansion of a belief set K with a formula p means adding p to the belief set and closing under deduction, that is, K + p = Th(K ∪ {p}). Since epistemic states always induce consistent belief sets, the distinction between revising and expanding an epistemic state does not seem to make much sense in our context. We therefore translate expansion in the following postulates to expansion of the induced belief set.
In the following we present the AGM postulates (K*i) in their original form together with our corresponding reformulations (T*i). In each case K is a belief set in the sense of AGM, T an epistemic state as defined in this paper, p and q are formulas, and Bel(T) denotes the belief set induced by T:
(K*1) K * p is a belief set.
(T*1) Bel(T * p) is a belief set.
Satisfied: accepted conclusions are intersections of deductively closed sets and thus deductively closed.
(K*2) p ∈ K * p
(T*2) p ∈ Bel(T * p)
Not satisfied. New information is not necessarily accepted in our approach. We see this as an advantage since otherwise belief sets would always depend on the order in which information was obtained.
(K*3) K * p ⊆ K + p
(T*3) Bel(T * p) ⊆ Bel(T) + p
Not satisfied. Assume Bel(T) is the set of tautologies. For a suitably chosen p, Bel(T * p) contains a formula which is not contained in Bel(T) + p.
(K*4) If ¬p ∉ K then K + p ⊆ K * p
(T*4) If ¬p ∉ Bel(T) then Bel(T) + p ⊆ Bel(T * p)
Not satisfied. It may be the case that ¬p, although not in the belief set, is contained in one of the accepted extensions. Adding p to the epistemic state does not necessarily lead to a situation where this extension disappears.
(K*5) K * p is inconsistent iff p is logically inconsistent.
(T*5) Bel(T * p) is inconsistent iff p is logically inconsistent.
Not satisfied. Revising an epistemic state with logically inconsistent information has no effect whatsoever. The information is simply disregarded. Inconsistent belief sets are impossible in our approach, so the right to left implication does not hold.
(K*6) If p and q are logically equivalent then K * p = K * q.
(T*6) If p and q are logically equivalent then Bel(T * p) = Bel(T * q).
Satisfied under the condition that p and q are given the same name, or the names of p and q do not yet appear in T. But note that logically equivalent information may have a different impact on the belief sets when different meta-information is available. For instance, p and q may have different effects if different meta-information about the sources of p and q, respectively, is available.
(K*7) K * (p ∧ q) ⊆ (K * p) + q
(T*7) Bel(T * (p ∧ q)) ⊆ Bel(T * p) + q
Not satisfied. Here is a counterexample. Assume the epistemic state contains two conflicting formulas, both less preferred than new information. Revising the epistemic state with p ∧ q then leads to a single accepted extension containing q, so q is in the belief set induced by the revised state. On the other hand, revising the epistemic state with p alone leads to two extensions, one containing q, the other ¬q. q is thus not in the belief set induced by the new state. This does not change when we expand the belief set with q.
(K*8) If ¬q ∉ K * p then (K * p) + q ⊆ K * (p ∧ q)
(T*8) If ¬q ∉ Bel(T * p) then Bel(T * p) + q ⊆ Bel(T * (p ∧ q))
Not satisfied. This is immediate from the fact that Bel(T * (p ∧ q)) does not necessarily contain p ∧ q, that is, from the failure of (T*2).
This analysis shows that the intuitions captured by the AGM postulates are indeed very different from those underlying our approach.
5 Contraction
Contraction means making a formula underivable without assuming its negation. There may be different reasons for this, not all of them requiring extensions of our framework. For instance, the reliability of the source of a certain piece of information may be put in doubt by extra information. In that case it may happen that a belief is no longer in the belief set after a revision with appropriate meta-information. Such effects are handled implicitly in our approach.
If, however, the agent obtains information of the kind "do not believe p" rather than "believe ¬p", then extra mechanisms seem necessary. In the context of AGM-style approaches the contraction operator can be defined through revision on the basis of the so-called Harper identity: K − p = (K * ¬p) ∩ K. The intuition here is that revision with ¬p removes the formulas used to derive p, and the intersection with K guarantees that no new information is derived from ¬p.
This intuition can, to a certain extent, be captured using Poole’s constraints [Poole1988]. Constraints, basically, are formulas used in the construction of maximal consistent subsets of the premises, but not used for derivations.
To model contraction of epistemic states we must distinguish between these two types of formulas, premises and constraints. Extension bases consist of both types, and the compatibility of preference orderings is also checked against premises and constraints. Extensions, however, are generated only from the premises. Constraints, like regular formulas, have names and may come with meta-information, e.g., information about their reliability.
We do not want to go into further technical detail here. Instead, we illustrate contraction using an example. We indicate constraints by choosing names of a special form for them. Assume the epistemic state is as follows:
The agent receives the information “do not believe ”. The following constraint is added:
Note that the constraint is not necessarily preferred to the premises. Let denote the set of all ground instances of . We obtain three extension bases
Although two of the extension bases contain this formula, it is not in the extensions generated from them, and for this reason not in the belief set, since it is a constraint.
Note that constraints do not necessarily prohibit formulas from being in the belief set since they may have low reliability. For example, if we revise the epistemic state obtained above with and then the belief set contains .
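The premise/constraint split can be illustrated in a few lines (our own encoding; marking constraint names with a leading "c" is a hypothetical convention, not the paper's): constraints take part in building an extension base, but only premises contribute conclusions.

```python
def extension_from_base(base):
    """base: list of (name, formula) pairs making up an extension base.
    Constraints (names starting with 'c') participate in consistency
    checks when the base is built, but only premises generate conclusions;
    the extension would be the deductive closure of the returned formulas."""
    return [formula for (name, formula) in base if not name.startswith("c")]
```

So a base containing the constraint "¬p" blocks p from being derived without ¬p itself ever entering the belief set.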
Although we used the Harper identity above to motivate the use of constraints for contraction, its natural reformulation Bel(T − p) = Bel(T * ¬p) ∩ Bel(T) is not valid in our approach. Assume Bel(T) is the set of tautologies. We contract a non-tautological formula p by adding the corresponding constraint. The single accepted extension of the new epistemic state, and thus its belief set, can then be a strict superset of Bel(T * ¬p) ∩ Bel(T).
6 Forgetting
In our approach revising a knowledge base means adding a formula to the epistemic state. Even in the case of contraction the epistemic state grows. For ideal agents this may be adequate since every piece of information, whether it contributes to the current belief set or not, may turn out to be relevant later. However, for agents with limited resources the expansion of the epistemic state cannot go on forever.
This raises the question of how and when pieces of information should be forgotten. What we need is some kind of mental garbage collection strategy. In LISP systems garbage collection is the process of identifying inaccessible memory space which is then made available again. In our context there is no clear distinction between garbage and non-garbage. As mentioned before, any piece of information may become relevant through additional information, so it would not be reasonable to throw away information just because it is, say, not contained in any extension base. On the other hand, even formulas contributing to the current belief set may be considered garbage if the corresponding part of the belief set is not relevant to the agent. It appears that a satisfactory treatment of forgetting would have to take the utility of information for the agent into account. This is beyond the scope of this paper and a topic of further research.
7 Related work and discussion
In this paper we proposed a framework for belief revision where preference default theories together with a corresponding nonmonotonic inference relation are used to represent epistemic states and belief sets, respectively. Our underlying formalism draws upon ideas developed in [Brewka1989] and [Brewka & Eiter2000]; the notion of accepted conclusions, introduced to guarantee consistency of belief sets, and its application to belief revision are new. The framework is expressive enough to represent and reason about reliability and other properties of information. It thus can be used to represent revision strategies of agents declaratively. Another advantage of the framework is that it lends itself to iteration in an obvious and natural way.
In an earlier paper [Brewka1991] the author used nonmonotonic belief bases in the preferred subtheories framework to model revision. This approach, however, did not represent reliability information explicitly. Williams and Antoniou [Williams & Antoniou1998] investigated revision of Reiter default theories. In a similar spirit, Antoniou et al. [Antoniou et al.1999] discuss revision of theories expressed in Nute's defeasible logic. These approaches, too, do not reason about the reliability of information. The same holds for existing work on revising logic programs; see [Alferes & Pereira1996] for an example.
Forms of revision where new information is not necessarily accepted were investigated by Hansson [Hansson1997]. This form of revision is sometimes referred to as non-prioritized belief revision. Hansson called his version "semi-revision". Explicit reasoning about the available information is not modelled in Hansson's approach.
Structured belief bases were investigated by Wassermann [Wassermann1998]. Rather than using the structure to model meta-level and preference information, Wassermann uses structure to determine relevant parts of the belief base. The focus is thus on local revision operations and related complexity issues. Chopra and Parikh [Chopra & Parikh1999] propose a model where belief bases are partitioned into subbases according to syntactic criteria. Belnap’s four-valued logic is used for query answering. Again the focus is on keeping the effects of revision as local as possible. It is assumed that the local revision operators used satisfy the AGM postulates. The approach is thus very different from ours.
The work presented in this paper was funded by DFG (Deutsche Forschungsgemeinschaft), Forschergruppe Kommunikatives Verstehen. I thank R. Booth, S. Lange, H. Sturm and F. Wolter for helpful comments. Thanks also to the anonymous reviewers of the paper.
- [Alferes & Pereira1996] Alferes, J., and Pereira, L. 1996. Reasoning with Logic Programming. Springer LNAI 1111.
- [Antoniou et al.1999] Antoniou, G.; Billington, D.; Governatori, G.; and Maher, M. 1999. Revising nonmonotonic belief sets: The case of defeasible logic. In Proc. 23rd German Conference on Artificial Intelligence, LNAI 1701, 101–112. Springer.
- [Baral & Subrahmanian1991] Baral, C., and Subrahmanian, V. 1991. Duality between alternative semantics of logic programs and nonmonotonic formalisms. In Intl. Workshop on Logic Programming and Nonmonotonic Reasoning.
- [Brewka & Eiter2000] Brewka, G., and Eiter, T. 2000. Prioritizing default logic. In Festschrift 60th Anniversary of W. Bibel. Kluwer.
- [Brewka1989] Brewka, G. 1989. Preferred subtheories - an extended logical framework for default reasoning. In Proc. IJCAI-89.
- [Brewka1991] Brewka, G. 1991. Belief revision in a framework for default reasoning. In Fuhrmann, A., and Morreau, M., eds., The Logic of Theory Change. Springer LNAI 465, Berlin.
- [Chopra & Parikh1999] Chopra, S., and Parikh, R. 1999. An inconsistency tolerant model for belief representation and belief revision. In Proc. IJCAI-99, Morgan Kaufmann.
- [Gärdenfors & Makinson1988] Gärdenfors, P., and Makinson, D. 1988. Revisions of knowledge systems using epistemic entrenchment. In Vardi, M., ed., Proceedings of the Second Conference on Theoretical Aspects of Reasoning about Knowledge, Morgan Kaufmann, Los Altos.
- [Gärdenfors & Makinson1994] Gärdenfors, P., and Makinson, D. 1994. Nonmonotonic inference based on expectations. Artificial Intelligence 65:197–245.
- [Gärdenfors1988] Gärdenfors, P. 1988. Knowledge in Flux. MIT Press, Cambridge, MA.
- [Hansson1997] Hansson, S. 1997. Semi-revision. Journal of Applied Non-Classical Logic 7(1–2):151–175.
- [Makinson & Gärdenfors1991] Makinson, D., and Gärdenfors, P. 1991. Relations between the logic of theory change and nonmonotonic logic. In Fuhrmann, A., and Morreau, M., eds., The Logic of Theory Change, Springer LNAI 465, Berlin.
- [Nebel1992] Nebel, B. 1992. Syntax based approaches to belief revision. In Gärdenfors, P., ed., Belief Revision. Cambridge Univ. Press.
- [Poole1988] Poole, D. 1988. A logical framework for default reasoning. Artificial Intelligence 36.
- [Tarski1955] Tarski, A. 1955. A lattice-theoretical fixpoint theorem and its applications. Pacific Journal of Mathematics 5:285–309.
- [Wassermann1998] Wassermann, R. 1998. On structured belief bases - a preliminary report. In Proc. International Workshop on Nonmonotonic Reasoning (NM’98).
- [Williams & Antoniou1998] Williams, M.-A., and Antoniou, G. 1998. A strategy for revising default theory extensions. In Proc. KR-98.