Applying Maxi-adjustment to Adaptive Information Filtering Agents

03/07/2000
by Raymond Lau, et al.

Learning and adaptation are fundamental properties of intelligent agents. In the context of adaptive information filtering, a filtering agent's beliefs about a user's information needs have to be revised regularly with reference to the user's most current information preferences. This learning and adaptation process is essential for maintaining the agent's filtering performance. The AGM belief revision paradigm provides a rigorous foundation for modelling rational and minimal changes to an agent's beliefs. In particular, the maxi-adjustment method, which follows the AGM rationale of belief change, offers a sound and robust computational mechanism for developing adaptive agents so that their learning autonomy can be enhanced. This paper describes how the maxi-adjustment method is applied to develop the learning components of adaptive information filtering agents, and discusses possible difficulties in applying such a framework to these agents.

1 Introduction

With the explosive growth of the Internet and the World Wide Web (Web), it is becoming increasingly difficult for users to retrieve relevant information. This is the so-called problem of information overload on the Internet. Augmenting existing Internet search tools with personalised information filtering agents is one possible way to alleviate this problem. Adaptive information filtering agents are computer systems situated on the Web that autonomously filter the incoming stream of information on behalf of their users. Since users' information needs change over time, information filtering agents must be able to revise their beliefs about those needs so that the accuracy of the filtering process can be maintained.

The AGM belief revision paradigm [Alchourrón, Gärdenfors, & Makinson1985] provides a rich and rigorous foundation for modelling such revision processes: it enables an agent to modify its beliefs in a rational and minimal way. Maxi-adjustment [Williams1996, Williams1997a] is a specific change strategy that follows the AGM rationale of belief revision. In particular, it transmutes the underlying entrenchment ranking of beliefs in an absolutely minimal way under maximal information inertia. In information retrieval models [Salton & McGill1983, Salton1989], information objects are often assumed to be independent unless semantic relationships among them can be derived. This intuition coincides with the underlying assumption of the maxi-adjustment strategy. The advantage of employing the maxi-adjustment strategy as the agents' learning mechanism is that semantic relationships among information items can be taken into account during the agents' learning and adaptation processes. Less relevance feedback [Salton & Buckley1990] is then required from users to train the filtering agents, and hence a higher level of learning autonomy can be achieved than with the learning approaches employed in other adaptive information agents [Billsus & Pazzani1999, Moukas & Maes1998, Balabanovic1997, Pazzani, Muramatsu, & Billsus1996, Armstrong et al.1995].

This paper focuses on the application of the maxi-adjustment method to the development of learning mechanisms in adaptive information filtering agents, and discusses the difficulties of applying such a framework to these agents.

2 The Adaptive Filtering Agent

Figure 1 gives an overview of the major functional components of an adaptive information filtering agent; the focus of this paper is on its learning component. The filtering agent's memory holds the current representation of a user's information needs. In particular, the notion of a belief state [Gärdenfors1988] is used to represent these information needs. The learning component accepts a user's relevance feedback about filtered Web documents. Based on the proposed induction method, this feedback is converted into beliefs about the user's information needs. The maxi-adjustment strategy is then applied to revise the beliefs stored in the agent's memory with respect to these newly induced beliefs. As the user's information needs change over time, such a belief revision process needs to be conducted repeatedly. Technically, the learning behaviour demonstrated by the agent is a kind of reinforcement learning [Langley1996]. The matching component filters relevant information out of a stream of incoming Web documents, and is underpinned by logical deduction: if the representation of a Web document is logically entailed by the set of beliefs stored in the agent's memory, the Web document is considered relevant and presented to the user. According to the user's relevance feedback, new beliefs may then be added to, or existing beliefs contracted from, the agent's memory. The adaptive information filtering agent is one of the main elements of the agent-based information filtering system (AIFS) [Lau, Hofstede, & Bruza1999].

Figure 1: An Overview of the Adaptive Information Filtering Agent
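To make this division of labour concrete, the following is a minimal structural sketch in Python of the two components described above. It is our illustration, not the authors' implementation: the class and method names are ours, and the entailment, induction, and revision procedures are injected as callables (versions of these are sketched in later sections).

```python
# A minimal structural sketch (our naming) of the filtering agent: the
# matcher treats relevance as logical entailment from the agent's memory,
# and the learner turns relevance feedback into ranked beliefs and revises.
from dataclasses import dataclass, field

@dataclass
class FilteringAgent:
    beliefs: dict = field(default_factory=dict)   # formula -> entrenchment rank

    def matches(self, doc_formulas, entails):
        """Present a document iff its representation follows from memory."""
        return all(entails(list(self.beliefs), f) for f in doc_formulas)

    def learn(self, feedback, induce, revise):
        """Convert relevance feedback into ranked beliefs, then revise."""
        for formula, rank in induce(feedback):
            revise(self.beliefs, formula, rank)   # e.g. via maxi-adjustment
```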

3 The AGM Belief Revision Paradigm

The AGM paradigm [Alchourrón, Gärdenfors, & Makinson1985] provides a rigorous foundation for modelling consistent and minimal changes to an agent's beliefs. In particular, belief revision is taken as a transition among belief states [Gärdenfors1988]. A belief state can be represented by a belief set $K$, which is a theory of a propositional language $\mathcal{L}$ [Gärdenfors & Makinson1988]. For the discussion in this paper, it is assumed that $\mathcal{L}$ is the classical propositional language. In the AGM framework [Alchourrón, Gärdenfors, & Makinson1985, Gärdenfors1988, Gärdenfors1992, Gärdenfors & Makinson1988], three principal types of belief state transitions are identified and modelled by corresponding belief functions: expansion ($K^+_\alpha$), contraction ($K^-_\alpha$), and revision ($K^*_\alpha$). The process of belief revision can be derived from the process of belief contraction, and vice versa, through the so-called Levi Identity, i.e. $K^*_\alpha = (K^-_{\neg\alpha})^+_\alpha$, and the Harper Identity, i.e. $K^-_\alpha = K \cap K^*_{\neg\alpha}$. Essentially, the AGM framework proposes sets of postulates to characterise these functions such that they adhere to the rationales of consistent and minimal change. In addition, it also describes constructions of these functions based on various mechanisms, one of which is epistemic entrenchment ($\leq$) [Gärdenfors & Makinson1988]. For instance, if $\alpha$ and $\beta$ are beliefs in a belief set $K$, $\alpha \leq \beta$ means that $\beta$ is at least as entrenched as $\alpha$. If inconsistency arises after applying changes to a belief set, beliefs with the lowest degree of epistemic entrenchment are given up. Technically, epistemic entrenchment [Gärdenfors & Makinson1988, Gärdenfors1992] is a total preorder of the sentences of $\mathcal{L}$, characterised by the following postulates:

(EE1) If $\alpha \leq \beta$ and $\beta \leq \gamma$, then $\alpha \leq \gamma$;
(EE2) If $\alpha \vdash \beta$, then $\alpha \leq \beta$;
(EE3) For any $\alpha$ and $\beta$, $\alpha \leq \alpha \wedge \beta$ or $\beta \leq \alpha \wedge \beta$;
(EE4) When $K \neq K_\perp$, $\alpha \notin K$ if and only if $\alpha \leq \beta$ for all $\beta$;
(EE5) If $\beta \leq \alpha$ for all $\beta$, then $\vdash \alpha$.

It has been proved that a unique contraction function can be determined from an underlying epistemic entrenchment ordering through the (C$-$) condition [Gärdenfors & Makinson1988]:

$\beta \in K^-_\alpha$ if and only if $\beta \in K$ and either $\alpha < \alpha \vee \beta$ or $\vdash \alpha$

where $<$ is the strict part of the epistemic entrenchment ordering defined above. Moreover, the (C-R) condition [Gärdenfors1992] ensures that if an ordering of beliefs satisfies (EE1)-(EE5), the contraction function uniquely determined by (C-R) satisfies all but the recovery postulate for contraction.
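For a finite collection of explicit beliefs, the (C$-$) condition can be read off almost directly. The following Python fragment is our sketch, not part of the original paper, and assumes a caller-supplied entrenchment degree function over formulas encoded as predicates on truth assignments.

```python
# A sketch of the (C-) condition over a finite list of explicit beliefs:
# beta survives the contraction of alpha iff alpha is strictly less
# entrenched than the disjunction (alpha or beta), or alpha is a tautology.
def c_minus(beliefs, deg, alpha, is_tautology):
    """beliefs: formulas (predicates over assignments); deg: formula -> degree."""
    disj = lambda a, b: (lambda v: a(v) or b(v))
    if is_tautology(alpha):
        return list(beliefs)
    return [b for b in beliefs if deg(disj(alpha, b)) > deg(alpha)]
```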

Nevertheless, for a computer-based implementation, a finite representation of the epistemic entrenchment ordering and a policy for iterated belief change are required. Williams [Williams1995, Williams1997b] proposed the finite partial entrenchment ranking, which ranks the sentences of a theory base in $\mathcal{L}$ with their minimum possible degrees of entrenchment. Moreover, maxi-adjustment [Williams1996, Williams1997a] was proposed to transmute a finite partial entrenchment ranking using an absolute measure of minimal change under maximal information inertia. Belief revision is then not just taken as adding or deleting a sentence from a theory, but as the transmutation of the underlying entrenchment ranking. Williams [Williams1995, Williams1996, Williams1997a] formally introduced the following definitions for a computational model of belief revision.

Definition 1

A finite partial entrenchment ranking is a function $\mathbb{B}$ that maps a finite subset of sentences of $\mathcal{L}$ into the interval $[0, 1]$ such that the following conditions are satisfied for all $\alpha \in \mathrm{dom}(\mathbb{B})$:

(PER1) $\{\beta \in \mathrm{dom}(\mathbb{B}) : \mathbb{B}(\beta) > \mathbb{B}(\alpha)\} \nvdash \alpha$.

(PER2) If $\vdash \neg\alpha$ then $\mathbb{B}(\alpha) = 0$.

(PER3) $\mathbb{B}(\alpha) = 1$ if and only if $\vdash \alpha$.

The set of all partial entrenchment rankings is denoted $\mathcal{B}$. $\mathbb{B}(\alpha)$ is referred to as the degree of acceptance of $\alpha$. The explicit information content of $\mathbb{B}$ is $\{\alpha \in \mathrm{dom}(\mathbb{B}) : \mathbb{B}(\alpha) > 0\}$, denoted $\exp(\mathbb{B})$. Similarly, the implicit information content represented by $\mathbb{B}$ is $Cn(\exp(\mathbb{B}))$, denoted $\mathrm{content}(\mathbb{B})$, where $Cn$ is the classical consequence operator. In order to describe the epistemic entrenchment ordering generated from a finite partial entrenchment ranking, it is necessary to rank the implicit sentences as well.

Definition 2

Let $\alpha$ be a non-tautological sentence and let $\mathbb{B}$ be a finite partial entrenchment ranking. The degree of acceptance of $\alpha$ is defined as:

$\mathrm{degree}(\mathbb{B}, \alpha) = \begin{cases} \text{the largest } j \text{ such that } \{\beta \in \exp(\mathbb{B}) : \mathbb{B}(\beta) \geq j\} \vdash \alpha & \text{if } \alpha \in \mathrm{content}(\mathbb{B}) \\ 0 & \text{otherwise} \end{cases}$
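As an illustration of Definition 2, the following Python sketch (our code, not the authors') computes the degree of acceptance by searching for the largest cut of $\exp(\mathbb{B})$ that classically entails the query. Formulas are encoded as predicates over truth assignments, and entailment is checked by brute-force enumeration, which suffices for the small examples in this paper; the keyword atoms anticipate the encoding introduced in the next section.

```python
# A minimal sketch of degree(B, alpha): the largest rank j such that the
# beliefs ranked at or above j classically entail alpha (Definition 2).
from itertools import product

ATOMS = ["business", "commerce", "art", "sculpture"]

def entails(premises, conclusion):
    """premises |- conclusion, checked over all truth assignments."""
    for values in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

def degree(ranking, alpha):
    """ranking: list of (formula, rank) pairs with rank > 0, i.e. exp(B)."""
    ranks = sorted({r for _, r in ranking}, reverse=True)
    for j in ranks:                       # try the highest cut first
        cut = [f for f, r in ranking if r >= j]
        if entails(cut, alpha):
            return j                      # largest j whose cut entails alpha
    return 0.0                            # alpha is not in content(B)

# Example with beliefs of the kind used later in the paper:
B = [
    (lambda v: v["business"] == v["commerce"], 1.0),    # pkw(business) <-> pkw(commerce)
    (lambda v: (not v["sculpture"]) or v["art"], 1.0),  # pkw(sculpture) -> pkw(art)
    (lambda v: v["business"], 0.856),                   # pkw(business)
    (lambda v: not v["sculpture"], 0.785),              # ~pkw(sculpture)
]
print(degree(B, lambda v: v["commerce"]))               # 0.856, via the synonym rule
```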

The maxi-adjustment strategy transmutes a partial entrenchment ranking based on the rationale of absolute minimal change under maximal information inertia. It is assumed that the sentences in $\exp(\mathbb{B})$ are independent of one another unless logical dependence exists between them. In particular, $\beta$ is taken to be a reason for $\alpha$ if and only if $\beta$ is a member of some minimal subset of $\exp(\mathbb{B})$ that entails $\alpha$.

Definition 3

Let $\mathbb{B}$ be finite. Enumerate the range of $\mathbb{B}$ in ascending order as $j_0, j_1, \ldots, j_{\max}$. Let $\alpha$ be a contingent sentence and let $0 \leq i \leq j_{\max}$. Then the maxi-adjustment of $\mathbb{B}$ with respect to $\alpha$ and $i$ is $\mathbb{B}^\star(\alpha, i)$, defined by:

$\mathbb{B}^\star(\alpha, i) = \begin{cases} \mathbb{B}^-(\alpha, i) & \text{if } i \leq \mathrm{degree}(\mathbb{B}, \alpha) \\ (\mathbb{B}^-(\neg\alpha, 0))^+(\alpha, i) & \text{otherwise} \end{cases}$

where, for all $\beta \in \mathrm{dom}(\mathbb{B})$, the lowering operation $\mathbb{B}^-(\alpha, i)$ is defined as follows:

1. For $\beta$ with $\mathbb{B}(\beta) > \mathrm{degree}(\mathbb{B}, \alpha)$: $\mathbb{B}^-(\alpha, i)(\beta) = \mathbb{B}(\beta)$.

2. For $\beta$ with $i < \mathbb{B}(\beta) \leq \mathrm{degree}(\mathbb{B}, \alpha)$, assuming that $\mathbb{B}^-(\alpha, i)(\gamma)$ is defined for all $\gamma$ with $\mathbb{B}(\gamma) = j_m$ for $m > k$, then for $\beta$ with $\mathbb{B}(\beta) = j_k$:

$\mathbb{B}^-(\alpha, i)(\beta) = \begin{cases} i & \text{if } \beta \text{ is a reason for } \alpha \text{ among the sentences retained so far} \\ \mathbb{B}(\beta) & \text{otherwise} \end{cases}$

3. For $\beta$ with $\mathbb{B}(\beta) \leq i$: $\mathbb{B}^-(\alpha, i)(\beta) = \mathbb{B}(\beta)$.

For all $\beta \in \mathrm{dom}(\mathbb{B}) \cup \{\alpha\}$, the raising operation $\mathbb{B}^+(\alpha, i)$ is defined as follows:

$\mathbb{B}^+(\alpha, i)(\beta) = \begin{cases} i & \text{if } \mathbb{B}(\beta) < i \text{ and } \beta \in Cn(\{\gamma : \mathbb{B}(\gamma) \geq i\} \cup \{\alpha\}) \\ \mathbb{B}(\beta) & \text{otherwise} \end{cases}$

It has been shown that if $i > 0$ then $\mathrm{content}(\mathbb{B}^\star(\alpha, i))$ is an AGM revision of $\mathrm{content}(\mathbb{B})$ by $\alpha$, and that $\mathrm{content}(\mathbb{B}^\star(\alpha, 0))$ satisfies all but the recovery postulate for AGM contraction [Williams1996].
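To convey the operational flavour, the following Python sketch implements a simplified variant of these operations; it is our code, not Williams' full algorithm. Contraction repeatedly removes the least entrenched members of minimal entailing subsets, and revision follows the Levi Identity; the rank-raising propagation and the anytime refinements of the full method are omitted, and the entailment helper is repeated for self-containment.

```python
# A simplified sketch of maxi-adjustment (not Williams' full algorithm):
# contraction removes, from each minimal subset of the base entailing the
# retracted sentence, its least entrenched member(s); revision contracts
# the negation and then accepts the input sentence at rank i.
from itertools import product, combinations

ATOMS = ["business", "commerce", "art", "sculpture"]

def entails(premises, conclusion):
    """Classical entailment over the fixed atom set, by truth tables."""
    for values in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

def contract(base, alpha):
    """B-(alpha, 0): block every derivation of alpha from the base."""
    out = list(base)
    while entails([f for f, _ in out], alpha):
        subset = next(s for k in range(1, len(out) + 1)
                      for s in combinations(out, k)
                      if entails([f for f, _ in s], alpha))
        weakest = min(r for _, r in subset)       # least entrenched reason(s)
        out = [b for b in out if not (b in subset and b[1] == weakest)]
    return out

def revise(base, alpha, i):
    """B*(alpha, i) via the Levi Identity."""
    return contract(base, lambda v: not alpha(v)) + [(alpha, i)]

# Example 2 of Section 5 in miniature: accepting pkw(sculpture) removes
# ~pkw(sculpture) and, via the rule pkw(sculpture) -> pkw(art), ~pkw(art).
base = [
    (lambda v: (not v["sculpture"]) or v["art"], 1.0),  # pkw(sculpture) -> pkw(art)
    (lambda v: not v["sculpture"], 0.856),              # ~pkw(sculpture)
    (lambda v: not v["art"], 0.856),                    # ~pkw(art)
]
print(len(revise(base, lambda v: v["sculpture"], 0.785)))  # 2 beliefs remain
```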

4 Knowledge Representation

A Web page is characterised by a set of weighted keywords based on traditional information retrieval (IR) techniques [Salton & McGill1983, Salton1989]. At the symbolic level, each keyword $\kappa$ is mapped to a ground instance $pkw(\kappa)$ of the positive keyword predicate $pkw$. Basically, $pkw(\kappa)$ is a proposition, since its interpretation is either true or false. The intended interpretation of these sentences is that they are satisfied in a document $d$, i.e. $d \models pkw(\kappa)$, if $d$ is taken as a model [Chiaramella & Chevallet1992, Lalmas1998]. For example, if $d = \{\text{business}, \text{commerce}, \text{trade}, \ldots\}$ is the document representation at the keyword level, the corresponding representation at the symbolic level will be $d = \{pkw(business), pkw(commerce), pkw(trade), \ldots\}$. Similarly, a user's information needs are also represented as a set of weighted keywords at the keyword level. However, this set of weighted keywords is derived from a set $D_r$ of relevant documents and a set $D_n$ of non-relevant documents with respect to the user's information needs [Allan1996, Buckley, Salton, & Allan1994]. Based on the frequencies with which these keywords appear in $D_r$ and $D_n$, it is possible to induce a preference ordering among the keywords with respect to the user's information needs. The basic idea is that a keyword appearing more frequently in $D_r$ is preferred to another keyword that appears less frequently in $D_r$. Once this preference ordering is induced, it is taken as the epistemic entrenchment ordering of the corresponding beliefs. It has been observed that the postulates of epistemic entrenchment are valid in the context of information retrieval in general [Lau, Hofstede, & Bruza1999]. For example, if $\{i_1, i_2, i_3\}$ is a set of information carriers [Bruza & Huibers1994], then $i_1 \preceq i_2$ and $i_2 \preceq i_3$ imply $i_1 \preceq i_3$. In other words, if an information searcher prefers retrieving information carrier $i_2$ rather than information carrier $i_1$, and $i_3$ rather than $i_2$, they prefer retrieving $i_3$ rather than $i_1$. This characteristic of information carriers matches the corresponding epistemic entrenchment postulate, namely transitivity (EE1).

Moreover, it is necessary to classify a keyword as positive, neutral, or negative [Kindo et al.1997]. Intuitively, positive keywords represent the information items in which the user is interested, and negative keywords represent the information items that the user does not want to retrieve. Neutral keywords are not useful for determining the user's interests. Eq.(1), developed based on the keyword classifier of [Kindo et al.1997], can be used to induce the preference value $\sigma(\kappa)$ of a keyword $\kappa$ and to classify it as positive, negative, or neutral.

$\sigma(\kappa) = \varsigma \cdot \tanh\!\left(\frac{n_\kappa}{\Re}\right) \cdot (P_\kappa - P_r) \qquad (1)$

where $\varsigma$ is a scaling constant used to restrict the range of $\sigma$ such that $-1 \leq \sigma(\kappa) \leq 1$; a fixed $\varsigma$ is assumed for the examples illustrated in this paper. $n_\kappa$ is the sum of the number of relevant documents and the number of non-relevant documents that contain the keyword $\kappa$, and $\tanh$ is the hyperbolic tangent. The rarity parameter $\Re$ is used to control rare or new keywords and is expressed as $\Re = \mathrm{int}(\ln N)$, where $N$ is the total number of Web documents judged by a user, and $\mathrm{int}$ is an integer function that truncates the decimal values.

$P_\kappa$ is the estimated probability that a document containing keyword $\kappa$ is relevant, and is expressed as the fraction of the judged documents containing $\kappa$ that are relevant. $P_r$ is the estimated probability that a document is relevant. In our system, it is assumed that the probability that a Web document presented by the filtering agent is judged as relevant by a user is $P_r = 0.5$. A positive value of $\sigma(\kappa)$ implies that the associated keyword is positive, whereas a negative value of $\sigma(\kappa)$ indicates a negative keyword. If $|\sigma(\kappa)|$ is below a threshold value $\epsilon$, the associated keyword is considered neutral; a fixed $\epsilon$ is assumed for the examples demonstrated in this paper. Basically, a positive keyword $\kappa$ is mapped to $pkw(\kappa)$, and a negative keyword is mapped to $\neg pkw(\kappa)$. There is no need to create symbolic representations for neutral keywords. For $\alpha = pkw(\kappa)$ or $\alpha = \neg pkw(\kappa)$, the entrenchment rank of the corresponding formula is defined as:

$\mathbb{B}(\alpha) = |\sigma(\kappa)| \qquad (2)$
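The induction step can be sketched in a few lines of Python. Since the exact scaling constant $\varsigma$, the precise form of $\Re$, and the threshold $\epsilon$ are assumptions here rather than values recoverable from the text, the constants below are purely illustrative.

```python
# A sketch of the keyword classifier of Eq.(1) and the rank mapping of
# Eq.(2); varsigma, the rarity function, and epsilon are assumed values.
import math

def preference(n_rel, n_nonrel, n_judged, varsigma=2.0, p_r=0.5):
    """Eq.(1): preference value of a keyword from feedback frequencies."""
    n_k = n_rel + n_nonrel                    # judged documents containing κ
    if n_k == 0:
        return 0.0
    rarity = max(1, int(math.log(n_judged)))  # assumed form of the rarity ℜ
    p_k = n_rel / n_k                         # estimated P(relevant | κ)
    return varsigma * math.tanh(n_k / rarity) * (p_k - p_r)

def induced_belief(keyword, sigma, epsilon=0.5):
    """Eq.(2): map a preference value to a formula and entrenchment rank."""
    if abs(sigma) < epsilon:
        return None                           # neutral keyword: no formula
    formula = f"pkw({keyword})" if sigma > 0 else f"~pkw({keyword})"
    return formula, abs(sigma)                # the rank is the magnitude of σ

print(induced_belief("art", preference(0, 5, 10)))   # a negative keyword
```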

The following is an example of computing entrenchment ranks from a set of judged Web documents. It is assumed that a set $D_r$ of five documents has been judged as relevant and another set $D_n$ of five documents has been judged as non-relevant by a user. Each document is characterised by a set of keywords. Table 1 summarises the frequencies with which these keywords appear in $D_r$ and $D_n$, their preference values, and the entrenchment ranks of the corresponding formulae.

Keyword    | Freq. in $D_r$ | Freq. in $D_n$ | $\sigma(\kappa)$ | Formula               | Rank
business   | 5              | 0              | 0.856            | $pkw(business)$       | 0.856
commerce   | 4              | 0              | 0.836            | $pkw(commerce)$       | 0.836
system     | 2              | 2              | 0                | -                     | -
art        | 0              | 5              | -0.856           | $\neg pkw(art)$       | 0.856
sculpture  | 0              | 3              | -0.785           | $\neg pkw(sculpture)$ | 0.785
insurance  | 1              | 0              | 0.401            | -                     | -

Table 1: Representation of a user's information preferences

5 Learning and Adaptation

Whenever a user provides relevance feedback for a presented Web document, the belief revision process can be invoked to learn the user's current information preferences. Conceptually, the filtering agent's learning and adaptation mechanism is characterised by the belief revision and contraction processes. For example, if $\{\alpha_1, \ldots, \alpha_n\}$ is the set of formulae representing a Web document $d$, the belief revision process $K^*_{\alpha_k}$ is invoked for each induced belief $\alpha_k$ that is accepted with a non-zero rank, where $K$ is the belief set stored in the filtering agent's memory. On the other hand, the belief contraction process $K^-_{\alpha_k}$ is applied for each $\alpha_k$ whose induced rank drops to zero. The sequence in which the set of beliefs is revised or contracted is determined by their entrenchment ranks and by whether a revision or a contraction operation is involved. At the computational level, belief revision is actually taken as the adjustment of the entrenchment ranking of the theory base $\exp(\mathbb{B})$. In particular, maxi-adjustment [Williams1996, Williams1998] is employed by the learning component of the filtering agent to modify the ranking of its beliefs in an absolutely minimal way under maximal information inertia. As the input to the maxi-adjustment algorithm consists of a sentence $\alpha$ and its entrenchment rank $i$, the procedure described in the Knowledge Representation section is used to induce the new rank $i$ for each $\alpha$. Moreover, for our implementation of the maxi-adjustment algorithm, ranks are taken from the interval $[0, 1]$ and the maximal rank is chosen as $1$.
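The ordering policy just described can be sketched as follows (our naming; the revise and contract procedures are supplied by the caller, e.g. the maxi-adjustment sketch of Section 3):

```python
# A sketch of one learning cycle: induced (formula, rank) pairs from user
# feedback are applied as revisions (most entrenched first) and as
# contractions (least entrenched existing belief first).
def learn(base, induced, revise, contract):
    """base: dict formula -> current rank; induced: dict formula -> new rank."""
    revisions = sorted((f for f, r in induced.items() if r > 0),
                       key=lambda f: -induced[f])
    contractions = sorted((f for f, r in induced.items() if r == 0),
                          key=lambda f: base.get(f, 0.0))
    for f in revisions:
        revise(base, f, induced[f])       # maxi-adjustment B*(f, rank)
    for f in contractions:
        contract(base, f)                 # maxi-adjustment B*(f, 0)
    return base
```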

One advantage of a symbolic representation of the filtering agent's domain knowledge is that semantic relationships among keywords can be captured. For example, if the keywords business and commerce are taken as synonymous, this semantic relationship can be modelled by the formula $pkw(business) \leftrightarrow pkw(commerce)$ in the filtering agent's memory. Moreover, classification knowledge, such as the fact that sculpture is a kind of art, can also be used by specifying a rule such as $pkw(sculpture) \rightarrow pkw(art)$. It is believed that capturing the semantic relationships among keywords can improve the effectiveness of the matching process [Hunter1995, Nie, Brisebois, & Lepage1995]. In fact, by employing maxi-adjustment as the filtering agent's learning mechanism, these semantic relationships can be reasoned about during the reinforcement learning process. This can lead to a higher level of learning autonomy, since changes to related keywords can be inferred automatically by the filtering agent, and as a result less relevance feedback may be required from users. The following examples assume that the formulae $pkw(business) \leftrightarrow pkw(commerce)$ and $pkw(sculpture) \rightarrow pkw(art)$ have been manually added to the agent's memory through a knowledge engineering process.
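In the predicate encoding used by our earlier sketches, these two engineered formulae, with the rank 1.000 they carry in Tables 2-4, would read:

```python
# The two engineered rules of the running examples (our encoding):
synonym_rule  = (lambda v: v["business"] == v["commerce"], 1.000)   # pkw(business) <-> pkw(commerce)
taxonomy_rule = (lambda v: (not v["sculpture"]) or v["art"], 1.000) # pkw(sculpture) -> pkw(art)
```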


Example 1:

The first example shows how adding one belief to the agent's memory automatically raises the entrenchment rank of another, related belief. It is assumed that the beliefs $pkw(business)$ (with rank 0.856) and $\neg pkw(sculpture)$ (with rank 0.785) have already been learnt by the filtering agent. If several Web documents characterised by the keyword art are later judged as non-relevant by the user, the preference value of the keyword art can be induced according to Eq.(1). Assuming that $\sigma(art) = -0.856$, the corresponding entrenchment rank is computed as 0.856 according to Eq.(2). By applying $\mathbb{B}^\star(\neg pkw(art), 0.856)$ to the theory base $\exp(\mathbb{B})$, the before and after images of the agent's explicit beliefs can be tabulated as in Table 2. Based on the maxi-adjustment algorithm, the explicit rank of $\neg pkw(sculpture)$ is raised to 0.856 because it is entailed by the newly accepted belief together with the more entrenched rule $pkw(sculpture) \rightarrow pkw(art)$.

The implicit belief $\neg pkw(sculpture)$ is derived from the explicit beliefs $\neg pkw(art)$ and $pkw(sculpture) \rightarrow pkw(art)$ in the theory base $\exp(\mathbb{B})$, and its degree of acceptance is 0.856 according to Definition 2. As the belief $\neg pkw(art)$ implies the belief $\neg pkw(sculpture)$ given this rule, and the agent accepts $\neg pkw(art)$ to degree 0.856, the belief $\neg pkw(sculpture)$ should be at least as entrenched as the belief $\neg pkw(art)$, according to (PER1) of Definition 1, or equivalently (EE2). In other words, whenever the agent believes that the user is not interested in art (i.e. $\neg pkw(art)$), it must be prepared to accept that the user is also not interested in sculpture, at least to the degree of the former belief. The proposed learning and adaptation framework is thus more effective than learning approaches that cannot take the semantic relationships among information items into account: it automatically revises the agent's beliefs about related keywords, given relevance feedback about a particular keyword. Therefore, less relevance feedback may be required from users during reinforcement learning, and the learning autonomy of the filtering agent is enhanced.

Formula                                       | Before | After
$pkw(business) \leftrightarrow pkw(commerce)$ | 1.000  | 1.000
$pkw(sculpture) \rightarrow pkw(art)$         | 1.000  | 1.000
$pkw(business)$                               | 0.856  | 0.856
$\neg pkw(sculpture)$                         | 0.785  | 0.856
$\neg pkw(art)$                               | 0      | 0.856

Table 2: Raising related beliefs

Example 2:

The second example illustrates the belief contraction process: in particular, how the contraction of one belief automatically removes another related belief from the agent's memory if there is a semantic relationship between the underlying keywords. Assuming that more Web documents characterised by the keyword sculpture are judged as relevant by the user at a later stage, the belief $pkw(sculpture)$ with entrenchment rank 0.785 could be induced. Applying $\mathbb{B}^\star(\alpha, i)$, where $\alpha = pkw(sculpture)$ and $i = 0.785$ in this example, leads to the contraction of the contrary belief $\neg pkw(sculpture)$ from the theory base $\exp(\mathbb{B})$. Moreover, $\mathbb{B}(\neg pkw(art)) = 0$ is computed, because $\neg pkw(art)$ together with the rule $pkw(sculpture) \rightarrow pkw(art)$ entails $\neg pkw(sculpture)$, and so the set $\exp(\mathbb{B}) = \{pkw(business) \leftrightarrow pkw(commerce),\ pkw(sculpture) \rightarrow pkw(art),\ pkw(business),\ pkw(sculpture)\}$ is obtained. The before and after images of the filtering agent's explicit beliefs are tabulated in Table 3.

Formula                                       | Before | After
$pkw(business) \leftrightarrow pkw(commerce)$ | 1.000  | 1.000
$pkw(sculpture) \rightarrow pkw(art)$         | 1.000  | 1.000
$pkw(business)$                               | 0.856  | 0.856
$pkw(sculpture)$                              | 0      | 0.785
$\neg pkw(sculpture)$                         | 0.856  | 0
$\neg pkw(art)$                               | 0.856  | 0

Table 3: Contracting related beliefs

Example 3:

This example demonstrates how multiple sentences derived from the same judged Web document can be contracted from the agent's memory in one cycle. If some Web documents characterised by the keywords sculpture and business have recently been judged as non-relevant by the user, both the belief $pkw(sculpture)$ and the belief $pkw(business)$ could be assigned the minimal rank 0. Since beliefs with the minimal entrenchment rank (i.e. 0) are not supposed to be stored in the agent's memory, maxi-adjustment for these two beliefs is still required so that they can be removed from the agent's memory. Basically, the contraction process is applied to both sentences. As opposed to the belief expansion process, where the most entrenched sentence is applied to the agent's memory first, for belief contraction the least entrenched belief is contracted from the agent's memory first. Consequently, $\mathbb{B}^\star(pkw(sculpture), 0)$ is invoked first. As there are no other logically related sentences in the theory base $\exp(\mathbb{B})$, the belief $pkw(sculpture)$ is simply removed from the agent's memory. Similarly, the sentence $pkw(business)$ is removed from $\exp(\mathbb{B})$ by applying $\mathbb{B}^\star(pkw(business), 0)$. The before and after images of the filtering agent's explicit beliefs are tabulated in Table 4.

Formula                                       | Before | After
$pkw(business) \leftrightarrow pkw(commerce)$ | 1.000  | 1.000
$pkw(sculpture) \rightarrow pkw(art)$         | 1.000  | 1.000
$pkw(business)$                               | 0.856  | 0
$pkw(sculpture)$                              | 0.785  | 0

Table 4: Contracting multiple beliefs in one cycle

6 Filtering Web Documents

In our current framework, the matching function of the filtering agent is modelled as logical deduction. Moreover, a Web document is taken as the conjunction of a set of formulae [Chiaramella & Chevallet1992, Hunter1995]. The following example illustrates the agent's deduction process with reference to the previous examples: the belief sets resulting from Examples 1, 2, and 3, held at times $t_1$, $t_2$, and $t_3$ respectively, are used to determine the relevance of three incoming Web documents.

The filtering agent's conclusions about the relevance of the Web documents differ at each of the times $t_1$, $t_2$, and $t_3$, because each revision step changes the belief set from which relevance is deduced.

As can be seen, a tentative conclusion drawn at time $t_1$ may not hold when new information, e.g. the feedback processed in Example 2, is processed by the agent at time $t_2$. Strictly speaking, the deduction process of the agent should therefore be described by $K \mathrel{|\!\sim} \delta$, where $|\!\sim$ is a nonmonotonic inference relation, because the set of inferred beliefs (i.e. conclusions) does not grow monotonically. It is not difficult to see that this relation belongs to the class of nonmonotonic inference called expectation inference [Gärdenfors & Makinson1994]. The basic idea of expectation inference is that, given a sentence $\alpha$ of a propositional language $\mathcal{L}$, if $\alpha$ together with the subset of sentences of a belief set $K$ that is consistent with $\alpha$ (i.e. $\{\gamma \in K : \neg\alpha < \gamma\}$) classically entails another sentence $\beta$, then $\beta$ can be deduced:

$\alpha \mathrel{|\!\sim} \beta$ if and only if $\beta \in Cn(\{\alpha\} \cup \{\gamma \in K : \neg\alpha < \gamma\})$

With reference to our examples, this definition can be applied directly to characterise the inference mechanism of the filtering agent. Therefore $K \mathrel{|\!\sim} \delta$, where $K$ is the filtering agent's belief set and $\delta$ is the logical representation of a Web document, represents an inference conducted by the adaptive filtering agent.
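This inference policy can be sketched as follows (our code; as an approximation, entrenchment over the full belief set is replaced by degrees computed over the explicit base):

```python
# A sketch of expectation inference for filtering: to reason with a document
# representation delta that may contradict the agent's beliefs, keep only
# the beliefs strictly more entrenched than ~delta, then deduce classically.
from itertools import product

ATOMS = ["business", "art"]

def entails(premises, conclusion):
    for values in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

def degree(B, alpha):
    ranks = sorted({r for _, r in B}, reverse=True)
    for j in ranks:
        if entails([f for f, r in B if r >= j], alpha):
            return j
    return 0.0

def expectation_infers(B, delta, beta):
    """delta |~ beta with the agent's beliefs as expectations."""
    threshold = degree(B, lambda v: not delta(v))   # degree of ~delta
    cut = [f for f, r in B if r > threshold]        # beliefs above the conflict
    return entails(cut + [delta], beta)

# pkw(art) is more entrenched than pkw(business), so it survives a document
# that contradicts pkw(business):
B = [(lambda v: v["business"], 0.4), (lambda v: v["art"], 0.856)]
print(expectation_infers(B, lambda v: not v["business"],
                         lambda v: v["art"]))       # True
```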

7 Discussion

An alternative approach to developing the filtering agent's learning mechanism is to apply the (C$-$) or the (C-R) condition to revise the agent's beliefs. It has been stated that the (C$-$) and (C-R) conditions can be used to construct the same class of belief revision functions satisfying the AGM postulates for belief revision [Gärdenfors1992]. The following example illustrates how belief revision may be conducted based on the (C-R) condition. Assume that the filtering agent's initial belief set is $K = Cn(\{pkw(business), pkw(art), pkw(sculpture)\})$, with the three explicit beliefs equally entrenched.

If a user perceives some Web documents characterised by the keyword business as non-relevant, the revision process $K^*_{\neg pkw(business)}$ will be invoked. Based on the Levi Identity, the sentence $pkw(business)$ should first be contracted from the belief set $K$. Consequently, all the beliefs in $K$ will be contracted according to the (C-R) condition, and the resulting belief set becomes $Cn(\{\neg pkw(business)\})$ after expansion. Nevertheless, a user who does not require information objects about business may still be interested in information objects about art and sculpture. Therefore, applying (C-R) or (C$-$) to construct belief contraction or revision functions seems to produce drastic changes in the context of information retrieval and filtering. In fact, term independence is often assumed in information retrieval. This intuition is reflected in the vector space model of information retrieval [Salton & McGill1983], where changing the weight of a particular keyword need not affect the other weights in the vector. Therefore, the maxi-adjustment strategy produces a better approximation for revising or contracting beliefs about information objects with respect to a user's information needs.

Under the current framework, domain knowledge such as semantic relationships among information objects is transferred to the filtering agent's memory through a knowledge engineering process. Since not all semantic relationships are highly certain (i.e. assigned the maximal entrenchment rank), the corresponding beliefs may be contracted from the memory over time as belief revision is applied to the agent's memory. Therefore, domain knowledge may need to be transferred to the agent's memory periodically, in accordance with the postulates of epistemic entrenchment. This can be taken as an off-line process so as to minimise its impact on the availability of the filtering agents. However, further investigation is required before such a background learning process can be applied to the filtering agents.

The belief set $K = \mathrm{content}(\mathbb{B})$ is what the agent actually uses to infer the relevance of Web documents, whereas maxi-adjustment is employed to revise the theory base $\exp(\mathbb{B})$ and to maintain its consistency after applying changes. Though maxi-adjustment ensures that the revised theory base is consistent, it is possible for the belief set to become inconsistent (i.e. $K \vdash \perp$) after applying changes such as $\mathbb{B}^\star(\alpha, i)$. The following classical example explains the problem. Assume that the set of explicit beliefs, together with its entrenchment ranking, is as follows, with $i_1 > i_2 > 0$:

$\mathbb{B}(pkw(penguin) \rightarrow pkw(bird)) = i_1$
$\mathbb{B}(pkw(penguin) \rightarrow \neg pkw(fly)) = i_1$
$\mathbb{B}(pkw(bird) \rightarrow pkw(fly)) = i_2$

In the context of information retrieval and filtering, $pkw(penguin) \rightarrow pkw(bird)$ can be interpreted as: if a user is interested in information objects about penguin, it is likely that the user is also interested in information objects about bird. The other formulae in $\exp(\mathbb{B})$ can be interpreted in a similar way. Given the fact that the user is interested in tweety, which is a penguin, i.e. $pkw(tweety)$ and $pkw(tweety) \rightarrow pkw(penguin)$, applying the corresponding maxi-adjustments at degree $i_1$ yields the revised entrenchment ranking:

$\mathbb{B}(pkw(penguin) \rightarrow pkw(bird)) = i_1$
$\mathbb{B}(pkw(penguin) \rightarrow \neg pkw(fly)) = i_1$
$\mathbb{B}(pkw(tweety) \rightarrow pkw(penguin)) = i_1$
$\mathbb{B}(pkw(tweety)) = i_1$
$\mathbb{B}(pkw(bird) \rightarrow pkw(fly)) = i_2$

The degrees of acceptance of the relevant implicit beliefs, computed based on Definition 2, are:

$\mathrm{degree}(\mathbb{B}, \neg pkw(fly)) = i_1$
$\mathrm{degree}(\mathbb{B}, pkw(fly)) = i_2$

As can be seen, even though each change was applied through maxi-adjustment, it is clear that $\mathrm{content}(\mathbb{B}) \vdash \perp$, where $\vdash$ is the classical derivability relation. If the filtering agent employs the belief set $K$ to deduce the relevance of Web documents, any document will be considered relevant. This problem must be addressed before the filtering agents can be put to practical use. One possible solution is to make use of the degrees of acceptance of beliefs to produce the largest cut of $\exp(\mathbb{B})$ that does not entail $\perp$. In other words, only the set of consistent beliefs $\{\beta \in \exp(\mathbb{B}) : \mathrm{degree}(\mathbb{B}, \perp) < \mathbb{B}(\beta)\}$, where $<$ is the strict part of the epistemic entrenchment, will be used by the filtering agent to infer the relevance of Web documents. A similar idea has been explored in developing the expectation inference relation [Gärdenfors & Makinson1994], which likewise reasons only with the sentences more entrenched than the prevailing conflict. So, with reference to the above example, after applying the changes the agent should only make use of the following set of beliefs for reasoning:

$\{pkw(tweety),\ pkw(tweety) \rightarrow pkw(penguin),\ pkw(penguin) \rightarrow pkw(bird),\ pkw(penguin) \rightarrow \neg pkw(fly)\}$

Therefore, the agent can conclude that the user is interested in information objects about the non-flying tweety. The above reasoning process can also be explained through nontrivial possibilistic deduction [Dubois, Lang, & Prade1993, Dubois, Lang, & Prade1994]. In possibilistic logic, the inconsistency degree $Inc(\mathcal{F})$ of a possibilistic knowledge base $\mathcal{F}$ is determined by the least certain formula involved in the strongest contradiction of $\mathcal{F}$. Moreover, nontrivial possibilistic deduction is defined as: $\mathcal{F} \models_\pi (\alpha, a)$ iff $\mathcal{F} \models (\alpha, a)$ and $a > Inc(\mathcal{F})$. If the entrenchment rank of a formula is taken as the certainty degree of a possibilistic formula, $Inc(\mathcal{F}) = i_2$ with reference to the above example. By employing possibilistic resolution, $(\neg pkw(fly), i_1)$ and $(pkw(fly), i_2)$ can be obtained, where $\models$ is possibilistic entailment. Since the certainty degree of $(\neg pkw(fly), i_1)$ is greater than $Inc(\mathcal{F})$, $\neg pkw(fly)$ follows nontrivially. Nevertheless, $pkw(fly)$ can not be deduced from $\mathcal{F}$ based on $\models_\pi$ because its certainty degree equals $Inc(\mathcal{F})$. However, further investigation is required to apply possibilistic inference to the matching components of adaptive information filtering agents.

8 Conclusions

The AGM belief revision paradigm offers a powerful and rigorous foundation to model the changes of an agent’s beliefs. The maxi-adjustment strategy, which follows the AGM rationale of consistent and minimal belief changes, provides a robust and effective computational mechanism for the development of the filtering agents’ learning components. As semantic relationships among information items can be reasoned about via the maxi-adjustment method, less human intervention may be required during the agents’ reinforcement learning processes. This opens the door to better learning autonomy in adaptive information filtering agents. The technical feasibility of applying the maxi-adjustment method to adaptive information filtering agents has been examined. However, quantitative evaluation of the effectiveness of these agents needs to be conducted to verify the advantages of applying such a framework to construct the learning mechanisms of these agents.

Acknowledgments

The work reported in this paper has been funded in part by the Cooperative Research Centres Program through the Department of the Prime Minister and Cabinet of Australia.

References

  • [Alchourrón, Gärdenfors, & Makinson1985] Alchourrón, C.; Gärdenfors, P.; and Makinson, D. 1985. On the logic of theory change: partial meet contraction and revision functions. Journal of Symbolic Logic 50:510–530.
  • [Allan1996] Allan, J. 1996. Incremental relevance feedback for information filtering. In Proceedings of the 19th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Filtering, 270–278.
  • [Armstrong et al.1995] Armstrong, R.; Freitag, D.; Joachims, T.; and Mitchell, T. 1995. Webwatcher: A learning apprentice for the world wide web. In AAAI Spring Symposium on Information Gathering, 6–12.
  • [Balabanovic1997] Balabanovic, M. 1997. An adaptive web page recommendation service. In Johnson, W. L., and Hayes-Roth, B., eds., Proceedings of the First International Conference on Autonomous Agents (Agents’97), 378–385. New York: ACM Press.
  • [Billsus & Pazzani1999] Billsus, D., and Pazzani, M. 1999. A personal news agent that talks, learns and explains. In Proceedings of the Third International Conference on Autonomous Agents (Agents’99), 268–275. Seattle, WA: ACM Press.
  • [Bruza & Huibers1994] Bruza, P., and Huibers, T. 1994. Investigating Aboutness Axioms Using Information Fields. In Croft, W., and Rijsbergen, C. v., eds., Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 112–121. Dublin, Ireland: Springer-Verlag.
  • [Buckley, Salton, & Allan1994] Buckley, C.; Salton, G.; and Allan, J. 1994. The effect of adding relevance information in a relevance feedback environment. In Proceedings of the Seventeenth Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Routing, 292–300.
  • [Chiaramella & Chevallet1992] Chiaramella, Y., and Chevallet, J. P. 1992. About retrieval models and logic. The Computer Journal 35(3):233–242.
  • [Dubois, Lang, & Prade1993] Dubois, D.; Lang, J.; and Prade, H. 1993. Possibilistic Logic. In Gabbay, D. M.; Hogger, C. J.; Robinson, J. A.; and Nute, D., eds., Handbook of Logic in Artificial Intelligence and Logic Programming, volume 3. Oxford: Oxford University Press. 439–513.
  • [Dubois, Lang, & Prade1994] Dubois, D.; Lang, J.; and Prade, H. 1994. Automated Reasoning using Possibilistic Logic: Semantics, Belief Revision, and Variable Certainty Weights. IEEE Transactions on Knowledge and Data Engineering 6(1):64–71.
  • [Gärdenfors & Makinson1988] Gärdenfors, P., and Makinson, D. 1988. Revisions of knowledge systems using epistemic entrenchment. In Vardi, M. Y., ed., Proceedings of the Second Conference on Theoretical Aspects of Reasoning About Knowledge, 83–95. San Francisco, CA: Morgan Kaufmann Inc.
  • [Gärdenfors & Makinson1994] Gärdenfors, P., and Makinson, D. 1994. Nonmonotonic inference based on expectations. Artificial Intelligence 65(2):197–245.
  • [Gärdenfors1988] Gärdenfors, P. 1988. Knowledge in flux: modeling the dynamics of epistemic states. Cambridge, Massachusetts: The MIT Press.
  • [Gärdenfors1992] Gärdenfors, P. 1992. Belief revision: An introduction. In Gärdenfors, P., ed., Belief Revision. Cambridge, UK: Cambridge University Press. 1–28.
  • [Hunter1995] Hunter, A. 1995. Using default logic in information retrieval. In Froidevaux, C., and Kohlas, J., eds., Symbolic and Quantitative Approaches to Uncertainty, volume 946 of Lecture Notes in Computer Science, 235–242.
  • [Kindo et al.1997] Kindo, T.; Yoshida, H.; Morimoto, T.; and Watanabe, T. 1997. Adaptive personal information filtering system that organizes personal profiles automatically. In Pollack, M. E., ed., Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence, 716–721. San Francisco, CA: Morgan Kaufmann publishers Inc.
  • [Lalmas1998] Lalmas, M. 1998. Logical models in information retrieval: Introduction and overview. Information Processing & Management 34(1):19–33.
  • [Langley1996] Langley, P. 1996. Elements of Machine Learning. San Francisco, CA: Morgan Kaufmann Publishers.
  • [Lau, Hofstede, & Bruza1999] Lau, R.; Hofstede, A. H. M.; and Bruza, P. D. 1999. A Study of Belief Revision in the Context of Adaptive Information Filtering. In Proceedings of the Fifth International Computer Science Conference (ICSC’99), volume 1749 of Lecture Notes in Computer Science, 1–10. Berlin: Springer.
  • [Moukas & Maes1998] Moukas, A., and Maes, P. 1998. Amalthaea: An evolving information filtering and discovery system for the WWW. Journal of Autonomous Agents and Multi-Agent Systems 1(1):59–88.
  • [Nie, Brisebois, & Lepage1995] Nie, J.; Brisebois, M.; and Lepage, F. 1995. Information retrieval as counterfactual. The Computer Journal 38(8):643–657.
  • [Pazzani, Muramatsu, & Billsus1996] Pazzani, M.; Muramatsu, J.; and Billsus, D. 1996. Syskill and Webert: Identifying interesting web sites. In Proceedings of the Thirteenth National Conference on Artificial Intelligence and the Eighth Innovative Applications of Artificial Intelligence Conference, 54–61. Menlo Park: AAAI Press / MIT Press.
  • [Salton & Buckley1990] Salton, G., and Buckley, C. 1990. Improving retrieval performance by relevance feedback. Journal of American Society for Information Science 41(4):288–297.
  • [Salton & McGill1983] Salton, G., and McGill, M. 1983. Introduction to Modern Information Retrieval. New York: McGraw-Hill.
  • [Salton1989] Salton, G. 1989. Automatic Text Processing–The Transformation, Analysis, and Retrieval of Information by Computer. Reading, Massachusetts: Addison-Wesley.
  • [Williams1995] Williams, M.-A. 1995. Iterated theory base change: A computational model. In Mellish, C. S., ed., Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, 1541–1547. San Francisco, CA: Morgan Kaufmann Publishers Inc.
  • [Williams1996] Williams, M.-A. 1996. Towards a practical approach to belief revision: Reason-based change. In Aiello, L. C.; Doyle, J.; and Shapiro, S., eds., KR’96: Principles of Knowledge Representation and Reasoning, 412–420. San Francisco, CA: Morgan Kaufmann Publishers Inc.
  • [Williams1997a] Williams, M.-A. 1997a. Anytime belief revision. In Pollack, M. E., ed., Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence, 74–79. San Francisco, CA: Morgan Kaufmann Publishers Inc.
  • [Williams1997b] Williams, M.-A. 1997b. Implementing Belief Revision. In Antoniou, G., ed., Nonmonotonic Reasoning. Cambridge, Massachusetts: The MIT Press. 197–211.
  • [Williams1998] Williams, M.-A. 1998. Applications of belief revision. Lecture Notes in Computer Science 1472:287–316.