
# Relevance Sensitive Non-Monotonic Inference on Belief Sequences

We present a method for relevance sensitive non-monotonic inference from belief sequences which incorporates insights pertaining to prioritized inference and relevance sensitive, inconsistency tolerant belief revision. Our model uses a finite, logically open sequence of propositional formulas as a representation for beliefs and defines a notion of inference from maxiconsistent subsets of formulas guided by two orderings: a temporal sequencing and an ordering based on relevance relations between the conclusion and formulas in the sequence. The relevance relations are ternary (using context as a parameter) as opposed to standard binary axiomatizations. The inference operation thus defined easily handles iterated revision by maintaining a revision history, blocks the derivation of inconsistent answers from a possibly inconsistent sequence and maintains the distinction between explicit and implicit beliefs. In doing so, it provides a finitely presented formalism and a plausible model of reasoning for automated agents.


## 1 Introduction

Belief revision is the process of transforming a belief state upon receipt of new information. There are two fundamental approaches to this problem. In the logic-constrained or horizontal approach [Gärdenfors & Rott (1995)], the belief representation is a theory K and, given a new proposition α, [Alchourron, Gärdernfors, & Makinson (1985)] propose postulates for K ∗ α, the theory K revised with α. In this approach the belief state is itself sophisticated, and constructing the updated belief state requires work. The usual constructions for AGM revisions using selection functions and epistemic entrenchments often fail to provide an adequate account of iterated revision. AGM-like postulates do not specify how we came to believe K, and after revision it is assumed that K ∗ α is a generic theory. But these postulates ignore the fact that α was our last piece of information.

The vertical approach, in contrast, uses trivial (and repeatable) operations of revision and expansion on finite, logically open, belief representations, but utilizes a sophisticated notion of non-monotonic inference, see, e.g. [Doyle (1979)], [Brewka (1991)].

We suggest, in conformance with the vertical approach, that the belief state be taken to be a belief sequence σ = ⟨α₁, …, αₙ⟩, i.e., a finite, logically open sequence of propositions with αₙ being the most recent. The suggestion that the most perspicuous way to represent our beliefs is a finite, logically open set of sentences, i.e., a belief base, has been made (amongst others) by [Hansson (1992)] and [Nebel (1992)]; the notion that a sequence of formulas captures the importance of temporal ordering and of maintaining a revision history is noted by [Ryan (1991)] and [Lehmann (1995)].

Since updating becomes simple under this approach, the notion of inference must be correspondingly more sophisticated. We describe a method for non-monotonic inference from belief sequences which does this, but departs significantly from previous approaches in one respect. It makes heavy use of relevance relations amongst formulas in a belief base.

This method incorporates the insights in earlier proposals made by [Georgatos (1996)], [Parikh (1999)] and [Chopra & Parikh (1999)]. [Georgatos (1996)] uses the linear order of a belief sequence as a prioritization to generate a variety of inference relations, shows that these schemes are non-monotonic and, therefore, induce a method for belief revision. [Parikh (1999)] shows that if we have a theory referring to two or more disjoint subjects, then our language can be partitioned into corresponding sub-languages, and it is suggested that new information about one of them should not affect any other. This ensures a relevance or context sensitive, localized notion of belief revision and serves as one way of capturing a more general notion of relatedness amongst propositions in a belief base (as studied by [Wasserman (1999)]). [Chopra & Parikh (1999)] consider sets of theories called B-structures, which are individually consistent but can be jointly inconsistent, to capture the intuition that real agents often reason with an inconsistent, yet usable, set of beliefs which is divided into individually consistent compartments.

Our (current) method of inference blocks the derivation of explicitly inconsistent beliefs from a possibly inconsistent belief sequence by using a notion of inference from maxiconsistent subsets of relevant formulas. Choosing maxiconsistent subsequences in order to avoid inconsistency was used in [Georgatos (1996)], while relevance is determined, as in [Parikh (1999)] and [Chopra & Parikh (1999)], by a specialized notion of language overlap or by other context-determined features. The formula whose inference from the sequence is to be determined imposes a prioritization on the formulas present in the sequence by virtue of its relevance relations with them (thus reorganizing the temporal ordering present in the sequence).

Thus we do not treat a belief sequence as a set but rather as a linear order, much like an entrenchment ([Gärdenfors & Makinson (1988)], [Georgatos (1997)]), except that two orderings, level of relevance and temporal order, govern the sequence. The resulting procedure for inference serves as a generalization of the methods presented in [Georgatos (1996)] and [Chopra & Parikh (1999)]. In this way, we hope to present a model for belief revision that is a plausible representation of real agents' reasoning.

In Section 2 we present preliminary definitions and establish the notion of relatedness that we will work with. In Section 3 we define our notion of inference and examine its properties.

Notation: In the following, L is a finite propositional language with the usual logical connectives (¬, ∧, ∨, →). The constants true, false are in L. Greek letters denote arbitrary formulas while Roman lower case letters denote propositional atoms. ⊨ α means that α is a tautology. ⊢ will denote the usual classical consequence relation. We reserve the letters σ, τ for belief sequences.

## 2 Belief Sequences

We begin with a definition of a belief sequence:

###### Definition 1

A belief sequence σ is a sequence of formulas under a temporal ordering, i.e., a sequence ⟨α₁, α₂, …, αₙ⟩ of formulas where, for any pair of beliefs αᵢ, αⱼ, if i < j then αⱼ is more recent than αᵢ. Given two sequences, we say that σ ⊑ τ if τ is obtained from σ by the concatenation of zero or more formulas; σ will be referred to as an initial segment of τ.

Under a temporal ordering the most recent formulas occur at the tail of the sequence. We assume that each formula in the sequence is expressed in its smallest language as defined below. Note that the linear temporal order can be replaced with some other linear order expressing prioritization. For example, one could order the propositions on the basis of the trustworthiness of their source.

We now present a relation of relatedness amongst formulas in a sequence (originally proposed and used in [Parikh (1999)], [Chopra & Parikh (1999)] for localized belief revision) as a preliminary to its modification for use in this study. First, a distinction between different languages that a formula can be expressed in:

###### Definition 2

The language Lₛ(α) is the set of propositional variables which actually occur in a formula α; the language L(α) of α is the smallest set of propositional variables which can be used to express α′, a formula logically equivalent to α.

So, if α is p ∧ (q ∨ ¬q) then Lₛ(α) is {p, q} while L(α) = {p}. (L(α) is unique, cf. Lemma LS1 in [Parikh (1999)].) L(α) has logically attractive properties, e.g. if α ≡ β then L(α) = L(β). Hence we shall work exclusively with this notion.
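Since the language is finite, L(α) can be computed directly: a variable belongs to the smallest language exactly when the formula's truth value depends on it. A minimal Python sketch (representing formulas as truth functions is our own illustrative choice, not the paper's formalism):

```python
from itertools import product

def smallest_language(atoms, f):
    """Return L(f): the variables the truth value of f actually depends on.
    f is a callable taking a dict assignment of the listed atoms."""
    essential = set()
    for v in atoms:
        others = [w for w in atoms if w != v]
        for bits in product([False, True], repeat=len(others)):
            env = dict(zip(others, bits))
            if f({**env, v: True}) != f({**env, v: False}):
                essential.add(v)   # flipping v changes the truth value
                break
    return essential

# α = p ∧ (q ∨ ¬q): syntactic language {p, q}, smallest language {p}
alpha = lambda e: e["p"] and (e["q"] or not e["q"])
print(smallest_language(["p", "q"], alpha))  # {'p'}
```

A tautology comes out with L = ∅, matching the fact that true needs no variables to express.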

###### Definition 3

α, β are related by syntactic language overlap (α ∼ₛ β) if Lₛ(α) ∩ Lₛ(β) ≠ ∅. α, β are related by logical language overlap (α ∼ β) if L(α) ∩ L(β) ≠ ∅.

It is easily seen that α ∼ β implies α ∼ₛ β.

In earlier work on relatedness, Epstein [Epstein (1995)] imposes the following 'plausibility' requirements on any relatedness relation R (R(α, β): α is related to β):
R1 R(α, β) iff R(β, α).
R2 R(α, β) iff R(α, ¬β).
R3 R(α, β ∧ γ) iff R(α, β ∨ γ).
R4 R(α, α).
R5 R(α, β ∧ γ) iff R(α, β) or R(α, γ).

Rodrigues [Rodrigues (1997)] has shown that the relation ∼ₛ, i.e., syntactic language overlap, is the smallest relation satisfying Epstein's conditions.

However, we might also consider a condition not considered by Epstein:
R6 if α ≡ β and R(α, γ), then R(β, γ).

###### Observation 1

∼ₛ does not satisfy R6, whereas the relatedness relation ∼ does satisfy R6 as well as conditions R1, R3–4 (but not conditions R2, R5).

Since condition R6 is very natural, we wonder if conditions R2, R5 were adopted out of a feeling that they are compatible with R6. Indeed, in general, though not always, we do have L(α ∧ β) = L(α) ∪ L(β). In such a case it will be the case that γ is relevant to the composite formula α ∧ β iff it is relevant to at least one part. Note that our ∼ does satisfy half of R5:
R5a: If α ∼ β ∧ γ, then α ∼ β or α ∼ γ.

For an actual example, notice that if we let β = α → α, then β is equivalent to true and surely relevant to α. However, ¬β is α ∧ ¬α, a downright contradiction and not relevant (in our opinion) to anything. This fact casts some doubt on the intuition behind R2. Indeed it turns out that R2, R5 are incompatible with the natural requirement R6.

With the discussion above as a guide, we now develop a context-sensitive measure for relevance amongst formulas in a belief sequence.

###### Definition 4

Two formulas α, β are (logically) disjoint iff L(α) ∩ L(β) = ∅.

###### Definition 5
1. A pair of formulas α, β are directly relevant if they are not logically disjoint, i.e., if L(α) ∩ L(β) ≠ ∅.

2. Given a belief sequence σ, a pair of formulas α, β are k-relevant wrt σ if there are formulas γ₁, …, γₖ in σ such that:
i) α, γ₁ are directly relevant
ii) γᵢ, γᵢ₊₁ are directly relevant for i = 1, …, k − 1
iii) γₖ, β are directly relevant.
We write R(α, β, σ, k) to indicate that α, β are k-relevant w.r.t. σ. If k = 0 above, the formulas are directly relevant.

3. A pair of formulas α, β are irrelevant if they are not k-relevant for any k.

4. λ(α, β, σ) is the lowest k such that α, β are k-relevant wrt σ (we let it be ∞ if α, β are irrelevant).

In the following observations, we omit the sequence σ when it is clear from the context.

###### Observation 2
1. If a pair of formulas are k-relevant, then, for every j ≥ k, they are j-relevant as well.

2. Let L(σ) = ⋃{L(γ) : γ occurs in σ}. We say that α, β are σ-relevant iff they are k-relevant wrt σ for some finite k. If α, β are not directly relevant and L(α) ∩ L(σ) = ∅, then α, β are irrelevant.

3. The relation of k-relevance is both symmetric and reflexive in the first two arguments but obviously not transitive.

4. If k = 0, then the sequence σ is irrelevant to the question of k-relevance of the formulas α, β; if k ≥ 1, then σ is a parameter in determining relevance between α and β.

5. λ(α, β, σ) depends only on L(α) and L(β) and not on α and β themselves.

So if two formulas are relevant to each other at one level, then they are relevant at all weaker (higher) levels. The definition of relevance above explicitly brings in (as a third parameter) the sequence σ, which can form a bridge between formulas which do not have a direct overlap. Normally, relevance has been thought of as a binary relation; our definition renders it a ternary one. Thus the sequence can play the role of connecting up formulas which do not have any overlap in language but are connected through other beliefs. A fact about the Taj Mahal and one about India will be connected because of our (true) belief that the Taj Mahal is in India. We can also have more distant, and less convincing, indirect connections. E.g. we can think of beliefs linking the subject matter European History to European Music, then to Music in general, and then to Indian Music. But it is unlikely that we have beliefs directly linking European History to Indian Music, and this level (k = 2) may be too weak (too high) to be useful in most considerations.
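The level λ(α, β, σ) can be computed as a shortest-path search over the direct-relevance graph of the sequence. A sketch, assuming each formula is given by its smallest language as a set of atoms (the function name `lam` is ours):

```python
from collections import deque

def lam(L_alpha, L_beta, sigma_langs):
    """λ(α, β, σ): fewest formulas of σ needed to chain α to β by
    direct relevance (shared variables); ∞ if the two are irrelevant."""
    if L_alpha & L_beta:
        return 0                           # directly relevant
    seen = {i for i, L in enumerate(sigma_langs) if L & L_alpha}
    frontier = deque((i, 1) for i in sorted(seen))
    while frontier:                        # breadth-first, so k is minimal
        i, k = frontier.popleft()
        if sigma_langs[i] & L_beta:
            return k
        for j, L in enumerate(sigma_langs):
            if j not in seen and L & sigma_langs[i]:
                seen.add(j)
                frontier.append((j, k + 1))
    return float("inf")

# Taj Mahal/India style bridging: p and q are linked only through beliefs
# that mention a shared subject r
print(lam({"p"}, {"q"}, []))                        # inf: no bridge at all
print(lam({"p"}, {"q"}, [{"p", "r"}, {"r", "q"}]))  # 2: a two-step chain
```

The same pair of formulas gets different levels for different sequences, which is exactly the ternary, context-dependent character of the definition.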

The above definition extends the definition of relevance proposed in [Parikh (1999)] and makes explicit the contextual nature of the relevance definition: two formulas may have different degrees of relevance to each other in virtue of different belief sequences. A belief sequence defines a particular context for subject matters; pairs of formulas acquire different relationships to one another given differing contexts. The more basic beliefs (i.e., elements of σ) that a person has, the more likely (s)he is to connect two apparently unconnected subjects.

## 3 Revision and Inference on Belief Sequences

Revision on belief sequences is easily achieved: we simply concatenate the new formula to the sequence. The sequence σ becomes σ ∗ α, where:

###### Definition 6

σ ∗ α = σ ∘ α, where ∘ represents the concatenation of α to the belief sequence σ.

Note that σ ⊑ σ ∗ α.

Example: Consider the sequence σ = ⟨p, p → q⟩. If we receive the information ¬q, it is simply appended to the sequence to give us σ ∗ ¬q = ⟨p, p → q, ¬q⟩. The sequence is now inconsistent, and we need a notion of inference which renders the agent's beliefs coherent.
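Since revision is just concatenation, the update step is trivial and the full history survives; a sketch (the helper name `revise` is ours):

```python
def revise(sigma, alpha):
    """σ ∗ α: append the new formula; nothing is ever discarded."""
    return sigma + [alpha]

sigma = ["p", "p -> q"]
sigma = revise(sigma, "~q")   # iterated revision is just repeated appending
print(sigma)  # ['p', 'p -> q', '~q']
```

All the sophistication is deferred to the inference operation defined next.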

### 3.1 Prioritized Inference

Suppose that we have a sequence σ of formulas which is our current belief base and we are asked about some formula β, whether we believe it or not. As we saw just above, the set of formulas in σ may well be inconsistent, and hence to decide about the status of β we will need to pick some consistent subset of σ which we bring into play. The choice of formulas to be in such a subset will be governed by two considerations. One is temporality (which we provisionally adopt) under which more recently received formulas have priority over older formulas. The other is relevance, according to which more relevant formulas have priority. Clearly we need to decide which order counts more. We have made the decision in this paper that relevance is more important than temporality but that between two formulas of equal relevance, the more recent formula has priority. We concede, however, that the other procedure, to count temporality more, also has something to say in its favor. Ordinary human reasoning, in our opinion, is a pragmatic blend of the two techniques.

We use the maxiconsistent approach to define prioritized inference on a belief sequence σ. This method employs a consistent subset of σ obtained as follows. Consider a formula β in its smallest language L(β). We construct a maxiconsistent subset M(σ, β, k) (of formulas k-relevant to β) of σ. The construction of this set is regulated by the ordering that β creates on σ, which arranges σ = ⟨α₁, …, αₙ⟩ into ⟨β₁, …, βₙ⟩ as follows:

###### Definition 7

Given a formula β and a sequence σ = ⟨α₁, …, αₙ⟩, αᵢ precedes αⱼ if either
a) λ(αᵢ, β, σ) < λ(αⱼ, β, σ) (i.e., αᵢ is more relevant to β than αⱼ)
or b) αᵢ, αⱼ are equally relevant (i.e., λ(αᵢ, β, σ) = λ(αⱼ, β, σ)) but αᵢ is more recently received than αⱼ.

The βᵢ are the αᵢ under this order. In the definition below, Mᵢ is (short for) the i-th stage of the set M(σ, β, k) referred to above; k is some preselected level of relevance.

###### Definition 8

M₀ = ∅,

Mᵢ₊₁ = Mᵢ ∪ {βᵢ₊₁} if λ(βᵢ₊₁, β, σ) ≤ k and Mᵢ ∪ {βᵢ₊₁} is consistent; otherwise Mᵢ₊₁ = Mᵢ,

M(σ, β, k) = Mₙ.

We check formulas for addition to M(σ, β, k) in order of their decreasing relevance to β. The lower the level of relevance allowed (i.e., the higher the value of k), the larger the part of σ considered. We now define the inference operation ⊢k.

###### Definition 9

σ ⊢k β iff M(σ, β, k) ⊢ β.

Once M(σ, β, k) has been constructed, the inference operation defined above enables a query answering scheme for the agent with definite responses: answer 'yes' if M(σ, β, k) ⊢ β, 'no' if M(σ, β, k) ⊢ ¬β, and 'unknown' otherwise.

Even if σ is inconsistent, the agent is able to give consistent answers to every individual query.
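Definitions 7–9 can be prototyped directly. The sketch below (our own names `lam`, `holds`, `infers`; formulas supplied as pairs of a smallest language and a truth function, oldest first) orders formulas by relevance level with ties broken in favor of recency, builds the maxiconsistent set greedily, and tests classical entailment by truth tables:

```python
from collections import deque
from itertools import product

def lam(L1, L2, langs):
    """λ: least number of sequence formulas bridging languages L1 and L2."""
    if L1 & L2:
        return 0
    seen = {i for i, L in enumerate(langs) if L & L1}
    frontier = deque((i, 1) for i in sorted(seen))
    while frontier:
        i, k = frontier.popleft()
        if langs[i] & L2:
            return k
        for j, L in enumerate(langs):
            if j not in seen and L & langs[i]:
                seen.add(j)
                frontier.append((j, k + 1))
    return float("inf")

def holds(fmls, goal, atoms):
    """Classical entailment checked by truth-table enumeration."""
    for bits in product([False, True], repeat=len(atoms)):
        env = dict(zip(atoms, bits))
        if all(f(env) for f in fmls) and not goal(env):
            return False
    return True

def infers(sigma, beta_lang, beta, k):
    """σ ⊢k β: greedily build M(σ, β, k) and test M ⊢ β.
    sigma: list of (smallest language, truth function), oldest first."""
    langs = [L for L, _ in sigma]
    atoms = sorted(set(beta_lang).union(*langs))
    # most relevant first; within a level, most recent (largest index) first
    order = sorted(range(len(sigma)),
                   key=lambda i: (lam(langs[i], beta_lang, langs), -i))
    M = []
    for i in order:
        f = sigma[i][1]
        if lam(langs[i], beta_lang, langs) <= k and \
                not holds(M + [f], lambda e: False, atoms):  # still satisfiable
            M.append(f)
    return holds(M, beta, atoms)

# σ = ⟨p, p → q, ¬q⟩ is jointly inconsistent, yet queries get definite answers
sigma = [({"p"}, lambda e: e["p"]),
         ({"p", "q"}, lambda e: not e["p"] or e["q"]),
         ({"q"}, lambda e: not e["q"])]
print(infers(sigma, {"q"}, lambda e: not e["q"], 1))  # True: ¬q is most recent
print(infers(sigma, {"p"}, lambda e: e["p"], 1))      # True: p wins its own query
```

Note that p and ¬q are each answered 'yes' even though the sequence as a whole is inconsistent: each query consults only its own maxiconsistent set.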

#### 3.1.1 Discussion

The notion of inference thus defined has some desirable features. As an example, suppose our belief sequence is initiated by first being told p and then ¬p. ¬p overrides p and we will no longer answer 'yes' to p. However, if we are now told p, this new information overrides ¬p. Thus, our maxiconsistent set is {p} and the query p will now be answered in the affirmative. This is plausible, since the latest information decreases the reliability of ¬p and the original information regains its original standing. Such accommodations are not easily made within traditional, AGM-based frameworks for belief revision.

The maxiconsistent set obtained depends on whether some new information came "in several pieces" or as a single formula. Receiving two pieces of information individually and together often has very different effects on an agent's epistemic state. If we receive α and β separately, then a later piece of information which undermines α need not undermine β as well. But if we received the conjunction α ∧ β, then undermining one will undermine both. As an example, consider the arguments against the AGM Axioms 7, 8 in [Parikh (1999)], where it is argued that revising by a conjunction is not the same as revising by the conjuncts individually.

Example: Suppose our belief sequence is empty and we receive the information α ∨ β. If α is much more believable than β, we might now decide that α holds. Suppose we next hear that ¬α ∨ β. Since this is consistent with our current state, we accept it, ending with a state generated by {α ∨ β, ¬α ∨ β} in which, believing α, we conclude β as well. However, if we had received the conjunction (α ∨ β) ∧ (¬α ∨ β), this would be equivalent to receiving just β and we would never have believed α at all. Note that this is also a situation where we found Epstein's postulate R2 to be implausible, for the language L((α ∨ β) ∧ (¬α ∨ β)) = L(β) does not equal L(α ∨ β) ∪ L(¬α ∨ β) in this case.

For a concrete example, let s be "The stove is smoking" and f be "The house is on fire". If I am told that either the stove is smoking or the house is on fire (s ∨ f), I will choose to believe that the stove is smoking. If I later find that either the stove is not smoking or the house is on fire (¬s ∨ f), since this belief is consistent with my state, I must now add it, and conclude that the stove is smoking and the house is on fire! But if I had been told the conjunction, which is equivalent to "The house is on fire", I would never have thought about the stove at all.
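The equivalence that drives this example is easy to machine-check with a truth table:

```python
from itertools import product

# (s ∨ f) ∧ (¬s ∨ f) says no more and no less than f itself
for s, f in product([False, True], repeat=2):
    assert ((s or f) and (not s or f)) == f
print("the conjunction is equivalent to f alone")
```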

As a final point, consider the sequence ⟨p ∧ r, q ∧ ¬r⟩. Both p and q are derivable, although their respective derivations are based on incompatible information. This is due to the fact that the formulas which are preferred in each case depend on the query. This is plausible, however; agents often think about unrelated pieces of information in isolation from each other. Here p and q are not directly related and therefore may be thought about by the agent separately.

#### 3.1.2 Prioritization of Directly relevant formulas

Further depth could be introduced into the method provided above by introducing a prioritization amongst formulas based on the amount by which formulas of σ overlap with the query formula β. Thus such a prioritization might be defined as follows: let αᵢ precede αⱼ if |L(αᵢ) ∩ L(β)| > |L(αⱼ) ∩ L(β)|. Under this prioritization scheme, we would use the size of the overlap of languages as a measure.

Such a measure, however, is open to the objection that propositions which share fewer symbols might actually be more relevant than those that share more, depending upon the symbols shared. As an example, consider our intuition that if two formulas both mention aardvarks, they are (most likely) more relevant to each other than if they both mentioned cats. Or, the fact that the authors of this paper are logicians is a closer relatedness relation than the fact that they are all human beings. The ordering can be further refined, then, by the following heuristic: if the symbols shared by two formulas occur frequently in the sequence σ, then there is a smaller likelihood that the formulas are relevant to each other, whereas if they share symbols that occur with less frequency in σ, then the relevance is greater. This provides for a notion of degree of relevance amongst directly (and only directly) relevant formulas. However, a fuller discussion would take us too far afield and we postpone such refinements to a more extended treatment.

#### 3.1.3 Answer sets and Consequence relations

We now define an attendant notion of a consequence relation at a given level of relevance, k:

###### Definition 11

Cₖ(σ) = {β : σ ⊢k β}.

In view of the following propositions, if we were not worried about computational costs then ⊢∞ would be the only notion which would interest us, as Cₖ(σ) ⊆ Cₖ₊₁(σ) for every k, and so we would only stop at a smaller k to conserve resources.

###### Proposition 1

The inference procedure defined above is monotonic in k, the degree of relevance, i.e., if j ≤ k then σ ⊢j β implies σ ⊢k β.

The above follows immediately from the fact that if j ≤ k then M(σ, β, j) ⊆ M(σ, β, k). It will also be useful to remember that if two formulas β, γ have the same language, L(β) = L(γ), then M(σ, β, k) = M(σ, γ, k).

Remark: The inference procedure is of course non-monotonic in expansions of a belief sequence, i.e., if σ ⊑ τ and σ ⊢k β, it is not necessarily the case that τ ⊢k β. For example, p is derivable from the sequence ⟨p⟩, but not from, say, ⟨p, ¬p⟩. But if we revise by formulas that are 'irrelevant' to the query β in question, then they will have no effect on the derivation of β, and β will still be derived from the longer sequence.

Even though our technique might give disparate answers to formulas β and γ which have different languages, the answer set for any fixed set of subject matters will have quite nice properties.

###### Observation 3

If A is a set of propositional atoms, then {β : σ ⊢k β and L(β) = A} is consistent.

The agent’s responses to a particular subject matter are guaranteed to be consistent by the query answering scheme.

Remark: Our inference method corresponds to the liberal inference defined on a linearly prioritized sequence of formulas in [Georgatos (1996)]. [Georgatos (1996)] defines a strict notion of inference as well, which in our case would correspond to stopping the construction of M(σ, β, k) upon encountering the first formula that would make the set inconsistent.

### 3.2 Properties of ⊢k

###### Proposition 2

The following properties hold for the process of revision defined on sequences.

• Weak Inclusion: If α is consistent, then σ ∗ α ⊢k α.

The following additional properties hold under the condition that L(α) = L(β):

• Weak (or Cautious) Monotonicity:

 if σ ⊢k α and σ ⊢k β, then σ ∗ α ⊢k β
• Rational Monotonicity:

 if σ ⊬k ¬α and σ ⊢k β, then σ ∗ α ⊢k β
• Weak Cut:

 if σ ∗ α ⊢k β and σ ⊢k α, then σ ⊢k β
• And:

 if σ ⊢k α and σ ⊢k β, then σ ⊢k α ∧ β
• Right Weakening:

 if σ ⊢k α and α ⊢ β, then σ ⊢k β

The condition L(α) = L(β) is important for the following reason. As we have remarked, σ may be, and usually is, inconsistent. To get consistent answers to a query β we restrict ourselves to a certain portion of σ, and this portion will depend on the language L(β). It is natural then that different queries with different languages will evoke different subsets of σ. We cannot then require that these subsets will always produce coherent answers which obey our rules. However, in practice there will be some coherence.

For just one example that Weak Monotonicity can fail in general unless L(α) = L(β): let σ = ⟨p ∧ s, ¬s⟩, α = ¬p ∨ ¬s, β = p and k = 0. Now, σ ⊢k p (the only formula directly relevant to p is p ∧ s), but also σ ⊢k ¬p ∨ ¬s (¬s is the more recent of the two directly relevant formulas, and p ∧ s, being inconsistent with it, is excluded). But clearly σ ∗ (¬p ∨ ¬s) ⊬k p: for the query p the newly added formula is both directly relevant and most recent, so it enters M first and blocks p ∧ s. Similar examples can be constructed for the other rules mentioned above. From a practical point of view, if two queries are asked on the same occasion, a smarter query answering procedure should use L(α) ∪ L(β) as the language for determining direct relevance. The answers thus generated will be compatible with each other.
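One such failure can be checked mechanically. The instance below (an illustrative choice of ours: σ = ⟨p ∧ s, ¬s⟩, α = ¬p ∨ ¬s, β = p, k = 0) verifies that each premise of Weak Monotonicity holds while its conclusion fails:

```python
from itertools import product

p_and_s = lambda e: e["p"] and e["s"]
not_s   = lambda e: not e["s"]
alpha   = lambda e: not e["p"] or not e["s"]     # ¬p ∨ ¬s

def consistent(fmls):
    return any(all(f({"p": p, "s": s}) for f in fmls)
               for p, s in product([False, True], repeat=2))

def entails(fmls, goal):
    return all(goal({"p": p, "s": s})
               for p, s in product([False, True], repeat=2)
               if all(f({"p": p, "s": s}) for f in fmls))

# query p on σ at k = 0: only p∧s is directly relevant, so M = {p∧s} ⊢ p
assert entails([p_and_s], lambda e: e["p"])
# query ¬p∨¬s on σ: ¬s is more recent, p∧s conflicts with it and is skipped
assert not consistent([not_s, p_and_s]) and entails([not_s], alpha)
# query p on σ ∗ α: α is most recent and directly relevant, so it enters M
# first, blocks p∧s, and p is no longer derivable
assert not consistent([alpha, p_and_s]) and not entails([alpha], lambda e: e["p"])
print("Weak Monotonicity fails here, and L(α) = {p, s} ≠ {p} = L(β)")
```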

New formulas can block the derivation of formulas which were derivable before, and so they provide a simple modeling for loss of belief in a proposition. After all, agents do not lose beliefs without a reason: to drop the belief that α is to revise by some information that changes our reasoning. Still, it is possible in our model to lose α without acquiring ¬α. For example, consider the sequences ⟨p ∧ s, ¬s⟩ and ⟨p ∧ s, ¬s, ¬p ∨ ¬s⟩. The revised sequence no longer answers 'yes' to p, but neither does it answer 'yes' to ¬p. The formula ¬p ∨ ¬s has undermined p without actually making ¬p derivable.

#### 3.2.1 Equivalence

Given the notion of inference defined above, we say that belief sequences which yield the same answers to all queries are equivalent. But equivalence is not always preserved under extensions. Consider the following example: ⟨p ∧ q⟩ and ⟨p, q⟩ are equivalent, but neither yields p ∧ ¬q as a conclusion. However, revising by the formula ¬q yields p ∧ ¬q as a conclusion for ⟨p, q⟩ though not for ⟨p ∧ q⟩. Given this observation, we would like to define a strong notion of equivalence which is unaffected by revision.

###### Definition 12
• Sequences σ, τ are equivalent (σ ≈ τ) if, for every formula β and every level k, σ ⊢k β iff τ ⊢k β.

• Sequences σ, τ are strongly equivalent if, for all sequences of revisions γ₁, …, γₘ, σ ∗ γ₁ ∗ ⋯ ∗ γₘ ≈ τ ∗ γ₁ ∗ ⋯ ∗ γₘ.

• σ subsumes τ if, for every β and k, τ ⊢k β implies σ ⊢k β.

We suspect that the notion of strong equivalence is testable via a single revision, i.e., if σ and τ are not strongly equivalent, then there is a γ such that σ ∗ γ and τ ∗ γ are not equivalent.

An obvious, related question is: how can sequences be trimmed or reduced to their shortest equivalent form? The task of reducing sequences to their simplest equivalent form is, most likely, computationally non-trivial.

#### 3.2.2 Complexity of the Inference Procedure

In the methods defined above, there are two sources of complexity. For any query β the first one involves the calculation of the smallest language L(β), which is a co-NP complete problem [Herzig & Rifi (1999)], but only in the (relatively short) length of the individual formula β and not in the size of the entire belief base σ. The second one involves checking the consistency of the set Mᵢ at each step of the construction of the maxiconsistent set M(σ, β, k) related to the query formula β. However, for normal bases it is likely to be the case that the symbols which occur in formulas k-relevant to β are few in number. In other words, there is a small language L′ with L(β) ⊆ L′ such that for any formula γ in σ which is k-relevant to β, L(γ) ⊆ L′. The consistency check then will be quite feasible.

Our answering method first calculates the relevance relation exhaustively for the entire sequence. Then the set M(σ, β, k) is constructed formula by formula. Since each stage of the construction involves a consistency check, the complexity of the procedure is polynomial with an NP oracle, but only in the size of the k-relevant part of σ, which will be small if we keep k small. Moreover, in checking for k-derivability, the costs would be reduced sharply, as most formulas in the sequence would not be k-relevant and the size of M(σ, β, k) could be quite small. This is indeed how we reason in practice. While the logical problem of deriving consequences from our (possibly inconsistent) beliefs is daunting, the notion of relevance cuts down sharply on the effort involved.

#### 3.2.3 Comparison with B-structures

In [Chopra & Parikh (1999)] a method for answering queries from B-structures is presented that allowed the answer ⊤, i.e., over-defined or inconsistent. In our method of inference, we ensure that M(σ, β, k) is always consistent, thus preventing an inconsistent answer. The procedure of [Chopra & Parikh (1999)] will (after suitable interpretation) agree with our method when the former gave 'yes' or 'no' answers. The B-structure query answering method corresponds to k = 0, i.e., to M(σ, β, k) constructed only with directly relevant formulas.

#### 3.2.4 Conformance with AGM axioms

It is natural to ask how our inference conforms to AGM-like postulates. We cannot apply the results in [Georgatos (1996)] that show that maxiconsistent inference on a belief sequence is rational; our inference procedure takes into account relevance relations that cause a reordering of our sequence, depending on the formula whose inference we are testing. However, some AGM-like properties do hold.

(∗1) σ ∗ α is a belief sequence.

(∗2) σ ∗ α ⊢k α, provided that α is consistent.

(∗3) If α ≡ β, then σ ∗ α ⊢k γ iff σ ∗ β ⊢k γ.

Note too that there is no counterpart to the AGM expansion operation in our model; revision just is concatenation, and the content of the new epistemic state upon revision is given by the inference operation defined above.

#### 3.2.5 Conformance with Lehmann Axioms

A comparison with the Lehmann axioms shows that our method of inference defined above does not conform to them. The only one that the method conforms to is I2, which says that α is believed after revision by α. The reason is that in addition to the temporal ordering of the belief set, we have also imposed a relevance ordering. The combination of these two orderings ensures that formulas whose inference was possible at one stage of the inference procedure can be blocked at later stages in the history of the agent, not just by newer information but also by the ancillary phenomenon of more relevant beliefs moving to the front of the reorganized sequence. Thus if we acquire a new belief which links two previously unconnected subjects, then a query about one will regard formulas involving the other as 1-relevant. For example, after China invades Tibet (and we find out about it), beliefs about China may move to the front of the relevance ordering, even in a query which is originally only about Tibet. We believe ours is the first method of updating and revision which takes such phenomena into account. Granted, some rather elegant properties of other formalisms are lost in our method or, rather, hold only under conditions like L(α) = L(β). However, this is a necessary consequence of the fact that we are handling the sophisticated and delicate mechanisms used by actual agents to balance the need for coherence with the limits on resources.

## 4 Conclusion

The method of inference (and belief revision) described is easily implemented via a finite representation of the sequence. A full revision history is maintained and iterated revision is routine. Explicit attention is paid to temporal ordering and relevance relations. Our method extends intuitions found in the work on belief bases, sequences, and inference relations based on prioritizations of beliefs, and goes beyond them by making clear the importance of a plausible notion of relevance to guide inference. The method presented is (for small k) resource conscious; in future work, we plan to implement this method and further investigate its properties.

## References

• Alchourron, Gärdernfors, & Makinson (1985) Alchourron, C.; Gärdernfors, P.; and Makinson, D. 1985. On the logic of theory change: Partial meet functions for contraction and revision. Journal of Symbolic Logic 50:510–530.
• Brewka (1991) Brewka, G. 1991. Belief revision in a framework for default reasoning. In Fuhrmann, A., and Morreau, M., eds., The Logic of Theory Change, Lecture Notes in Artificial Intelligence, Number 465, 206–222. Berlin: Springer.
• Chopra & Parikh (1999) Chopra, S., and Parikh, R. 1999. An inconsistency tolerant model for belief representation and belief revision. In Proceedings of the 16th International Joint Conference on Artificial Intelligence, 192–197. Morgan-Kaufmann.
• Doyle (1979) Doyle, J. 1979. A truth maintenance system. Artificial Intelligence 12:231–272.
• Epstein (1995) Epstein, R. 1995. The Semantical Foundations of Logic: Propositional Logics. Oxford University Press.
• Gärdenfors & Makinson (1988) Gärdenfors, P., and Makinson, D. 1988. Revisions of knowledge systems using epistemic entrenchment. In Vardi, M., ed., Proceedings of Theoretical Aspects of Reasoning about Knowledge, 83–96. Morgan Kaufmann.
• Gärdenfors & Rott (1995) Gärdenfors, P., and Rott, H. 1995. Belief revision. In Gabbay, D. M.; Hogger, C. J.; and Robinson, J. A., eds., Handbook of Logic in Artificial Intelligence and Logic Programming, Volume IV: Epistemic and Temporal Reasoning, 35–132. Oxford: Oxford University Press.
• Georgatos (1996) Georgatos, K. 1996. Ordering-based representations of rational inference. In Alferes, J.; Pereira, L.; and Orlowska, E., eds., Logics in Artificial Intelligence (JELIA ’96), Lecture Notes in Artificial Intelligence, 176–191. Berlin: Springer-Verlag.
• Georgatos (1997) Georgatos, K. 1997. Entrenchment relations: A uniform approach to nonmonotonicity. In Proceedings of the International Joint Conference on Qualitative and Quantitative Practical Reasoning (ESCQARU/FAPR 97), Lecture Notes in Computer Science, Number 1244, 282–297. Berlin: Springer-Verlag.
• Hansson (1992) Hansson, S. O. 1992. In defense of base contraction. Synthese 91:239–245.
• Herzig & Rifi (1999) Herzig, A., and Rifi, O. 1999. Propositional belief base update and minimal change. Artificial Intelligence 115(1):107–138.
• Lehmann (1995) Lehmann, D. 1995. Belief revision, revised. In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, 1534–1540.
• Nebel (1992) Nebel, B. 1992. Syntax based approaches to belief revision. In Gärdenfors, P., ed., Belief Revision, Cambridge Tracts in Theoretical Computer Science. Cambridge: Cambridge University Press.
• Parikh (1999) Parikh, R. 1999. Beliefs, belief revision, and splitting languages. In Moss, L.; Ginzburg, J.; and de Rijke, M., eds., Logic, Language, and Computation, Volume 2, CSLI Lecture Notes No. 96. CSLI Publications. 266–268. Initially presented in Preliminary Proceedings of Information Theoretic Approaches to Logic, Language, and Computation 1996.
• Rodrigues (1997) Rodrigues, O. T. 1997. A methodology for iterated information change. Ph.D. Dissertation, Imperial College, University of London.
• Ryan (1991) Ryan, M. D. 1991. Ordered theory presentations. In Proceedings of the 8th Amsterdam Colloquium.
• Wasserman (1999) Wasserman, R. 1999. Resource bounded belief revision. Erkenntnis. To appear.