Introduction
Dependence (in this paper, we do not differentiate between "dependence" and "relevance", and use the two terms interchangeably) is an important concept in artificial intelligence (AI). Before performing an intelligent task (e.g., reasoning or planning), it is natural to first determine which pieces of information are irrelevant and to discard them for better efficiency. For example, when a student takes an examination on literature, she does not need to keep her knowledge of mathematics in mind. This process of discarding irrelevant information involves two issues:

What is the irrelevant information in the background knowledge base (KB) for the task?

How to discard it in the KB?
In the context of propositional logic, various authors [Boutilier1994, Lakemeyer1997] studied the first issue by systematically analyzing a dependence relation between a formula and a variable, namely formula-variable dependence (FV-dependence). Loosely speaking, a formula φ depends on a variable x if x occurs in every formula equivalent to φ. Further, Lang, Liberatore, and Marquis (2003) pointed out that, in some applications, we are concerned not only with which variables a formula depends on but also with the polarity of those variables. To distinguish the case where a formula conveys some information about a literal but no information about its complement, they proposed a more fine-grained notion, namely formula-literal dependence (FL-dependence). For the second issue, variable forgetting is highly related to FV-dependence: it yields the strongest consequence of a formula that is independent of a given variable. The intuitive meaning of literal forgetting is similar.
As mentioned in [Darwiche1997], dependence is not only a philosophical notion but also a pragmatic one. Over the past years, this notion has been widely used in many fields of AI, including automated reasoning [Levy, Fikes, and Sagiv1997, Kautz, McAllester, and Selman1997], knowledge compilation [Bryant1992, Minato1993, Darwiche1997], reasoning about actions [Lin and Reiter1997], and especially belief change [Zhang and Zhou2009, Oveisi et al.2017]. Belief update, a type of belief change, studies how an agent modifies her belief base in the presence of new information in a dynamically changing environment. Herzig, Lang, and Marquis (2013) suggested that belief update should be based on the notion of dependence, and proposed the dependence-based update scheme, which consists of first removing any belief on which the negation of the new information depends, and then adding the new information.

After investigating FL-dependence and literal forgetting, a natural next step is to study two more general notions: formula-formula dependence (FF-dependence) and formula forgetting. This paper is intended to fill this gap. The main contributions are as follows. First of all, we give a formal definition of FF-dependence, a dependence relation between two formulas. We also provide a model-based characterization result and analyze some properties of FF-dependence. In addition, based on FF-dependence, we introduce formula forgetting as a generalization of literal forgetting. We give various equivalent formulations of this notion, including an axiomatic definition by four postulates, a syntactic definition via conditioning, and a model-theoretic definition. Finally, we apply these two notions to two well-known issues of AI: belief update and conservative extension. Following the dependence-based update scheme, we define an update operator ⋄fml by first forgetting the negation of the new information in the initial belief base, and then conjoining the resulting belief base with the new information. We characterize ⋄fml by a set of postulates and assess it against the well-known KM postulates. We compare it with other operators from three perspectives: information preservation, computational complexity, and empirical results. The comparison shows that ⋄fml is a suitable alternative for belief update. We finally give the correspondence between conservative extension and formula forgetting: it turns out that the former can be reduced to the latter.
Preliminaries
In this section, we first recall some basic concepts of propositional logic, and then present the notions of FL-dependence and literal forgetting. Most content of the first two subsections originates from [Lang, Liberatore, and Marquis2003]. Finally, we review some background work on belief update.
Propositional logic
We assume the propositional language L is built from a finite set V of variables, the connectives ¬, ∧, ∨, and two logical constants ⊤ (true) and ⊥ (false). We use φ, ψ, and μ to range over formulas. The notation Var(φ) denotes the set of variables appearing in φ. A formula is trivial if it is equivalent to ⊤ or ⊥. A literal l is a variable x (positive literal) or a negated one ¬x (negative literal). For a literal l, ~l denotes the complementary literal of l. A term is a conjunction of literals. A disjunctive normal form (DNF) formula is a disjunction of terms. We say that a term t is in a DNF formula φ, written t ∈ φ, if t is a disjunct of φ. For a subset X of V, a minterm over X is a term that uses only variables of X and in which each variable of X appears exactly once. For simplicity, we omit X if X = V. We use Ω_X to denote the set of minterms over X.
A formula is in full DNF if it is a disjunction of minterms. A formula is in negation normal form (NNF) if ¬ is only applied to variables. It is well-known that every formula can be equivalently transformed into full DNF and into NNF. The full DNF formula of φ is the disjunction of all minterms entailing φ. Throughout this paper, we take all variables of V into consideration, and assume that there is a unique full DNF formula equivalent to φ; we call it the full DNF formula of φ. An NNF formula of φ can be acquired by pushing negation inwards via De Morgan's laws and eliminating double negations. For convenience, we call the formula generated by this process the NNF formula of φ.
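To make these notions concrete, here is a small semantic sketch (not from the paper; all names are illustrative). A formula is represented by its set of models, and its full DNF is read off with one minterm per model:

```python
from itertools import combinations

V = ("x", "y")

def all_interpretations(variables):
    # every subset of `variables`, as frozensets
    return [frozenset(c) for r in range(len(variables) + 1)
            for c in combinations(variables, r)]

def full_dnf(models, variables):
    # one minterm per model; a minterm maps every variable to a polarity
    return {frozenset((v, v in m) for v in variables) for m in models}

# φ = x ∨ y has three models over {x, y}, hence three minterms
phi = {frozenset({"x"}), frozenset({"y"}), frozenset({"x", "y"})}
print(len(full_dnf(phi, V)))  # → 3
```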
Let φ be an NNF formula. A variable x in φ that is not in the scope of ¬ is called an occurrence of the positive literal x in φ. The symbols ¬x in φ are called an occurrence of the negative literal ¬x in φ. Note that an occurrence of the negative literal ¬x cannot be considered an occurrence of its complement x. A simple example is that the formula ¬x contains no occurrence of x.
An interpretation ω is a subset of V. A model of φ is an interpretation satisfying φ, and Mod(φ) denotes the set of models of φ. A formula φ is satisfiable if there is a model of φ. A satisfiable term is a term satisfied by at least one interpretation; in other words, it does not contain a positive and a negative literal of the same variable simultaneously. The following is an operation on an interpretation w.r.t. a satisfiable term.
Definition 1.
Let ω be an interpretation and t a satisfiable term. Forcing t on ω, written ω ← t, is defined as ω ← t = (ω \ Var(t)) ∪ {x ∈ Var(t) | x is a positive literal of t}.
Both ω and ω ← t agree on the valuations of all variables except those of t; moreover, ω ← t is the model of t that is the closest to ω. For instance, let ω = {x, y} and t = ¬x ∧ z; then ω ← t = {y, z}.
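The forcing operation can be sketched directly (an illustrative prototype, not the paper's code; a term is modeled as a dict from variables to polarities):

```python
def force(omega, term):
    """Force a satisfiable term on an interpretation. `term` maps each of
    its variables to a polarity (True for a positive literal); all other
    variables keep their value in `omega`."""
    kept = {v for v in omega if v not in term}
    return frozenset(kept | {v for v, positive in term.items() if positive})

# Forcing t = ¬x ∧ z on ω = {x, y} yields {y, z}
print(sorted(force(frozenset({"x", "y"}), {"x": False, "z": True})))  # → ['y', 'z']
```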
Formula-literal dependence and literal forgetting
The main intuition FL-dependence aims to capture is that a literal l is an indispensable part of a formula φ: roughly speaking, every NNF formula equivalent to φ contains an occurrence of l.
Definition 2.
Let φ be a formula and l a literal. We say φ is Lit-dependent on l, written l ∈ DepLit(φ), if every NNF formula equivalent to φ contains an occurrence of l. Otherwise, φ is Lit-independent from l, written l ∉ DepLit(φ).
Example 1.
Let φ = x ∨ (¬x ∧ y) and ψ = x ∨ y. It is obvious that ψ is an NNF formula equivalent to φ, and ψ does not contain ¬x. Hence neither φ nor ψ depends on ¬x.
Although Definition 2 is a syntactic formulation of FL-dependence, it is syntax-independent.
Proposition 1.
Let φ and ψ be formulas where φ ≡ ψ, and let l be a literal. Then, l ∈ DepLit(φ) iff l ∈ DepLit(ψ).
The notion of FV-dependence can be easily defined from that of FL-dependence.
Definition 3.
Let φ be a formula and x a variable. We say φ is Var-dependent on x, written x ∈ DepVar(φ), if x ∈ DepLit(φ) or ¬x ∈ DepLit(φ).
We also say x is a dependent variable of φ if x ∈ DepVar(φ). We use DepLit(φ) (resp. DepVar(φ)) to denote the set of dependent literals (resp. variables) of φ.
We hereafter present the notion of literal forgetting. We first introduce term conditioning [Darwiche1998], an important syntactic operation for literal forgetting.
Definition 4.
Let φ be a formula and t a satisfiable term. The conditioning of φ on t, written φ|t, is obtained by substituting each variable x of t in φ with ⊤ (resp. ⊥) if x (resp. ¬x) is a literal of t.
For instance, conditioning (x ∨ y) ∧ (¬x ∨ z) on x ∧ ¬y gives the formula (⊤ ∨ ⊥) ∧ (¬⊤ ∨ z), which is equivalent to z.
According to the Shannon expansion [Shannon1938], any formula φ can be decomposed into (l ∧ φ|l) ∨ (~l ∧ φ|~l). From the syntactic point of view, forgetting l just removes the occurrence of l from the expansion. The notion of literal forgetting can thus be defined in a syntactic way via conditioning.
Definition 5.
Let φ be a formula and l a literal. The result of forgetting l in φ, written ForgetLit(φ, l), is defined as φ|l ∨ (~l ∧ φ|~l).
For example, ForgetLit(x ∧ y, x) ≡ y, while ForgetLit(x ∧ y, ¬x) ≡ x ∧ y, since x ∧ y is Lit-independent from ¬x.
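The definition above can be prototyped semantically (an illustrative sketch, not the paper's implementation): conditioning φ on a term t keeps exactly the interpretations whose forcing by t is a model of φ, and literal forgetting combines two conditionings.

```python
def force(omega, term):
    kept = {v for v in omega if v not in term}
    return frozenset(kept | {v for v, p in term.items() if p})

def condition(models, term, universe):
    # Mod(φ|t): interpretations whose forcing by t lands in Mod(φ)
    return {w for w in universe if force(w, term) in models}

def forget_literal(models, var, positive, universe):
    # ForgetLit(φ, l) ≡ φ|l ∨ (~l ∧ φ|~l), computed model-wise
    pos_part = condition(models, {var: positive}, universe)
    neg_part = {w for w in condition(models, {var: not positive}, universe)
                if (var in w) != positive}   # models of ~l ∧ φ|~l
    return pos_part | neg_part

universe = [frozenset(s) for s in ([], ["x"], ["y"], ["x", "y"])]
phi = {frozenset({"x", "y"})}                        # φ = x ∧ y
result = forget_literal(phi, "x", True, universe)    # ForgetLit(x ∧ y, x) ≡ y
print(sorted(sorted(w) for w in result))             # → [['x', 'y'], ['y']]
```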
One of the key properties of literal forgetting is as follows: forgetting a literal l generates the strongest consequence that does not depend on l.
Proposition 2.
Let φ be a formula and l a literal. ForgetLit(φ, l) is the strongest consequence of φ that is Lit-independent from l.
Forgetting a single literal can be extended to forgetting a variable, a set of literals, or a set of variables. Forgetting a variable is defined by forgetting the positive literal and the negative one sequentially. As the order in which the literals of a set L are handled does not matter, forgetting a set of literals can be computed by forgetting the literals of L one by one. The operation of forgetting a set of variables is similar.
Definition 6.
Let L be a set of literals, X a set of variables, l ∈ L, and x ∈ X.

ForgetVar(φ, x) = ForgetLit(ForgetLit(φ, x), ¬x);

ForgetLit(φ, L) = ForgetLit(ForgetLit(φ, l), L \ {l});

ForgetVar(φ, X) = ForgetVar(ForgetVar(φ, x), X \ {x}).
Note that if L = ∅ (resp. X = ∅), we let ForgetLit(φ, ∅) = φ (resp. ForgetVar(φ, ∅) = φ).
Finally, we establish the link between term conditioning and variable forgetting: the conditioning of φ on a satisfiable term t can be computed via forgetting all dependent variables of t in φ ∧ t.
Proposition 3.
Let φ be a formula and t a satisfiable term. Then, φ|t ≡ ForgetVar(φ ∧ t, DepVar(t)).
Belief update
Belief update focuses on the evolution of a belief base in line with new information reflecting a modification of the world. Based on the principle of minimal change, Katsuno and Mendelzon (1991) proposed the KM postulates to capture rational belief update operators ⋄, which map the initial belief base ψ and the new information μ to a new belief base ψ ⋄ μ. The postulates are as follows:
(U1) ψ ⋄ μ ⊨ μ;
(U2) If ψ ⊨ μ, then ψ ⋄ μ ≡ ψ;
(U3) If ψ and μ are satisfiable, then ψ ⋄ μ is also satisfiable;
(U4) If ψ1 ≡ ψ2 and μ1 ≡ μ2, then ψ1 ⋄ μ1 ≡ ψ2 ⋄ μ2;
(U5) (ψ ⋄ μ) ∧ φ ⊨ ψ ⋄ (μ ∧ φ);
(U6) If ψ ⋄ μ1 ⊨ μ2 and ψ ⋄ μ2 ⊨ μ1, then ψ ⋄ μ1 ≡ ψ ⋄ μ2;
(U7) If ψ is a minterm, then (ψ ⋄ μ1) ∧ (ψ ⋄ μ2) ⊨ ψ ⋄ (μ1 ∨ μ2);
(U8) (ψ1 ∨ ψ2) ⋄ μ ≡ (ψ1 ⋄ μ) ∨ (ψ2 ⋄ μ).
According to postulate (U8), the new belief base collects the updates of each model of ψ by μ. More formally,
Mod(ψ ⋄ μ) = ⋃_{ω ∈ Mod(ψ)} Mod(form(ω) ⋄ μ), where form(ω) denotes the minterm whose unique model is ω.
The Possible Models Approach (PMA) operator ⋄pma and the Forbus operator ⋄for, introduced by Winslett (1988) and Forbus (1989) respectively, are two famous update operators. Both are based on the principle of minimal change, thereby satisfying all of the KM postulates. The PMA operator minimizes the difference between interpretations w.r.t. set inclusion, while the Forbus operator is defined in terms of the cardinality of the difference. The updates of an interpretation ω by μ via ⋄pma and ⋄for are defined as follows:
Mod(form(ω) ⋄pma μ) = {ω′ ∈ Mod(μ) | there is no ω″ ∈ Mod(μ) s.t. ω″ Δ ω ⊂ ω′ Δ ω};
Mod(form(ω) ⋄for μ) = {ω′ ∈ Mod(μ) | there is no ω″ ∈ Mod(μ) s.t. |ω″ Δ ω| < |ω′ Δ ω|};
where ω′ Δ ω is the symmetric difference between ω′ and ω, and |ω′ Δ ω| is its cardinality.
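The two minimizations above can be sketched on interpretations represented as frozensets (an illustrative prototype; `^` is Python's symmetric difference and `<` is proper subset):

```python
def pma_update(omega, mu_models):
    """Winslett's PMA: keep the models of μ whose symmetric difference
    with ω is minimal w.r.t. set inclusion."""
    return {w for w in mu_models
            if not any((u ^ omega) < (w ^ omega) for u in mu_models)}

def forbus_update(omega, mu_models):
    """Forbus: keep the models of μ whose difference with ω has minimal
    cardinality."""
    best = min(len(w ^ omega) for w in mu_models)
    return {w for w in mu_models if len(w ^ omega) == best}

omega = frozenset()                                               # model of ¬x ∧ ¬y
mu = {frozenset({"x"}), frozenset({"y"}), frozenset({"x", "y"})}  # μ = x ∨ y
print(sorted(sorted(w) for w in pma_update(omega, mu)))     # → [['x'], ['y']]
print(sorted(sorted(w) for w in forbus_update(omega, mu)))  # → [['x'], ['y']]
```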
On the other hand, Herzig, Lang, and Marquis (2013) suggested that belief update should be based on the notion of dependence, and proposed the dependence-based update scheme, consisting of two steps: (1) forget every piece of belief on which ¬μ depends; (2) expand the resulting belief base with μ. This scheme has some resemblance to the so-called Levi identity [Levi1977], which defines an update operator from a given erasure operator. Based on the notion of FV-dependence, Herzig and Rifi (1998) proposed an operator ⋄var which first forgets in ψ all variables on which μ depends (two equivalent update operators were proposed in [Hegner1987] and [Doherty, Łukaszewicz, and Madalinska-Bugaj1998], respectively). More precisely,
ψ ⋄var μ = ForgetVar(ψ, DepVar(μ)) ∧ μ.
Herzig, Lang, and Marquis (2013) criticized the update operator ⋄var for forgetting too much in the initial belief base, and proposed an update operator ⋄lit that forgets all dependent literals of ¬μ rather than dependent variables. The definition of ⋄lit is as follows:
ψ ⋄lit μ = ForgetLit(ψ, DepLit(¬μ)) ∧ μ.
Besides the above work on dependence-based belief update, Parikh (1999) proposed a postulate (P) of relevance for belief change. The following is a stronger version introduced by Peppas et al. (2015):
(SP) If Var(ψ1) ∩ Var(ψ2) = ∅ and Var(μ) ⊆ Var(ψ1), then (ψ1 ∧ ψ2) ⋄ μ ≡ (ψ1 ⋄ μ) ∧ ψ2.
Postulate (SP) says that if a belief base can be split into two disjoint compartments, then only the compartment affected by the new information will be modified.
Formula-formula dependence
In this section, we introduce the notion of FF-dependence. We first extend the notion of occurrence from literals to arbitrary formulas. We then define the notion of FF-dependence in terms of occurrence. We further give a model-theoretic characterization of FF-dependence. Finally, some properties of FF-dependence are given.
Recall that the definition of FL-dependence (cf. Definition 2) says that a formula φ is Lit-dependent on a literal l if every NNF formula equivalent to φ contains an occurrence of l. It is hard to extend this definition to FF-dependence. We therefore resort to another equivalent definition: φ depends on l iff substituting every occurrence of l in the NNF formula of φ with ⊥ leads to a logically different formula.
This definition cannot be directly transferred to the notion of FF-dependence. It is possible that φ depends on ψ but ψ does not explicitly occur in φ, even if both φ and ψ are in NNF. For example, consider the two equivalent formulas φ1 = x ∨ y and φ2 = x ∨ (¬x ∧ y). Because φ1 appears in itself, replacing φ1 in itself by ⊥ results in ⊥, which is not equivalent to φ1; hence φ1 depends on itself. Based on the principle of syntax-independence, φ2 should also depend on φ1. However, replacing all occurrences of φ1 in φ2 does not modify φ2, since φ1 does not explicitly occur in φ2. We would wrongly obtain that φ2 does not depend on φ1.
To define the notion of FF-dependence, we need to solve two problems:

Which normal form is suitable for the notion of FF-dependence?

How to define the occurrence of a formula in the normal form of another formula?
For the first problem, we choose full DNF instead of NNF. In the following, we introduce the notion of occurrence of a formula in the full DNF of another formula. This notion should obey two principles: syntax-independence and self-elimination. Suppose that two formulas φ1 and φ2 are equivalent. The former requires that, given a full DNF formula ψ, every occurrence of φ1 in ψ is also an occurrence of φ2 in ψ. The latter means that, if φ is in full DNF, then replacing the occurrences of φ in φ itself with ⊥ results in a formula equivalent to ⊥. To give the definition of occurrence, it is necessary to define the notion of dependent minterms.
Definition 7.
We say a term t is a dependent minterm of φ, if t is a minterm over DepVar(φ) and t ⊨ φ.
We use DepMin(φ) to denote the set of dependent minterms of φ. We also let DepMin(φ) be empty if φ ≡ ⊥, and be {⊤} if φ ≡ ⊤. Clearly, the disjunction of DepMin(φ) is equivalent to φ.
Example 2.
Let φ = (x ∧ y) ∨ (x ∧ ¬y). The sets of dependent variables and dependent minterms of φ are DepVar(φ) = {x} and DepMin(φ) = {x}. Neither x ∧ y nor ¬x is a dependent minterm of φ, since the former contains the variable y, which is not in DepVar(φ), and the latter does not entail φ.
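Dependent variables and dependent minterms can be computed by brute force over models (an illustrative sketch, not the paper's algorithm; a variable is dependent exactly when flipping it can change the formula's truth value):

```python
from itertools import combinations

def interps(vs):
    return [frozenset(c) for r in range(len(vs) + 1)
            for c in combinations(vs, r)]

def dep_vars(models, vs):
    # x is dependent iff flipping x changes the truth value somewhere
    return {x for x in vs
            if any((w in models) != ((w ^ {x}) in models) for w in interps(vs))}

def dep_minterms(models, vs):
    # minterms over DepVar that entail the formula, as dicts var -> polarity
    dv = sorted(dep_vars(models, vs))
    rest = [v for v in vs if v not in dv]
    return [{v: (v in core) for v in dv}
            for core in interps(dv)
            if all((core | set(e)) in models for e in interps(rest))]

V = ("x", "y", "z")
# φ = (x ∧ y) ∨ (x ∧ ¬y) ≡ x: only x is dependent
phi = {w for w in interps(V) if "x" in w}
print(dep_vars(phi, V))           # → {'x'}
print(len(dep_minterms(phi, V)))  # → 1 (the single dependent minterm x)
```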
We hereafter give a definition of occurrence. We first consider a simple case: the occurrence of a term in a minterm.
Definition 8.
Let t′ be a term and t a minterm. We say a literal l of t is an occurrence of t′ in t, if l occurs in t′ and t ⊨ t′.
Example 3.
Consider the three terms t′ = x ∧ y, t1 = x ∧ y ∧ z, and t2 = x ∧ ¬y ∧ z. The literals x and y are occurrences of t′ in t1 since t1 ⊨ t′. But the literal x of t2 is not an occurrence of t′ in t2 since t2 ⊭ t′. Moreover, t2 contains no occurrence of t′.
The notion of occurrence of a formula ψ in a full DNF formula φ can be defined via the occurrences of the dependent minterms of ψ in the disjuncts of φ.
Definition 9.
Let ψ and φ be two formulas where φ is in full DNF. Let t be a minterm of φ, and l a literal of t. We say the occurrence of l in t is an occurrence of ψ in φ, if there is a dependent minterm t′ of ψ s.t. l is an occurrence of t′ in t.
Since two equivalent formulas have the same set of dependent minterms, Definition 9 satisfies syntax-independence. It is easily verified that it also satisfies self-elimination.
Example 4.
Consider ψ = x with DepMin(ψ) = {x}, and the full DNF formula φ = (x ∧ y) ∨ (x ∧ ¬y) ∨ (¬x ∧ y). The occurrences of x in the first and second disjuncts of φ are occurrences of ψ in φ, since both disjuncts entail x. However, the literals of the third disjunct are not occurrences of ψ, since no dependent minterm of ψ satisfies the condition of Definition 9 for them.
We are now ready to define the notion of FF-dependence.
Definition 10.
We say φ is Fml-dependent on ψ, if substituting every occurrence of ψ in the full DNF formula of φ with ⊥ leads to a formula that is not equivalent to φ. Otherwise, φ is Fml-independent from ψ.
We illustrate Definition 10 with the following example.
Example 5.
Let φ = x ∧ y and ψ = x; the full DNF formula of φ over {x, y} is x ∧ y, and DepMin(ψ) = {x}. By Definition 9, the literal x of the minterm x ∧ y is an occurrence of ψ in φ. Replacing it with ⊥ results in ⊥ ∧ y ≡ ⊥, which is clearly not equivalent to φ. Hence, φ is Fml-dependent on ψ.
Definition 10 is a syntactic approach to defining the notion of FF-dependence. To capture this notion more comprehensively, we present a model-theoretic characterization.
Proposition 4.
Let φ and ψ be formulas. Then, φ is Fml-dependent on ψ iff there exist an interpretation ω and two terms t ∈ DepMin(ψ) and t′ ∈ Ω_{DepVar(ψ)} s.t. ω ⊨ φ ∧ t and ω ← t′ ⊭ φ.
This proposition means that φ depends on ψ if there exist an interpretation ω and a dependent minterm t of ψ such that (1) ω ⊨ φ ∧ t; and (2) t is essential for ω to satisfy φ, i.e., there is a minterm t′ over DepVar(ψ) such that forcing t′ on ω no longer satisfies φ.
Example 6.
Let φ = x ∧ y and ψ = x. We have DepMin(ψ) = {x} and Ω_{DepVar(ψ)} = {x, ¬x}. Then, we let ω = {x, y}, t = x, and t′ = ¬x. Clearly, ω ⊨ φ ∧ t, and ω ← t′ = {y}, which does not satisfy φ. So φ depends on ψ.
Finally, we analyze some properties of FF-dependence. It is obvious that a formula is Fml-dependent on a literal if and only if it is Lit-dependent on that literal.
Proposition 5.
Let φ be a formula and l a literal. Then φ is Fml-dependent on l iff φ is Lit-dependent on l.
FF-dependence satisfies syntax-independence, symmetry, and almost reflexivity (i.e., every non-trivial formula depends on itself). Trivial formulas do not depend on any formula.
Proposition 6.

Syntax-independence: φ1 is Fml-dependent on ψ1 iff φ2 is Fml-dependent on ψ2, when φ1 ≡ φ2 and ψ1 ≡ ψ2;

Symmetry: φ is Fml-dependent on ψ iff ψ is Fml-dependent on φ;

Almost reflexivity: φ is Fml-dependent on φ when φ is not trivial;

For any formula φ, ⊤ and ⊥ are Fml-independent from φ.
We analyze the computational complexity of FFdependence as follows:
Proposition 7.
FF-dependence is in Σ₂ᵖ and NP-hard.
Proof.
Upper bound: First, we compute DepVar(ψ) by calling an NP oracle that decides, for each variable x of ψ, whether ψ depends on x. Further, we guess an interpretation ω and two terms t and t′, and then check whether they satisfy the condition of Proposition 4. The whole procedure runs in nondeterministic polynomial time and calls the oracle O(n) times, where n is the number of variables. So FF-dependence is in Σ₂ᵖ.
Lower bound: By Proposition 5, FL-dependence is a restriction of FF-dependence. FL-dependence is NP-complete [Lang, Liberatore, and Marquis2003]. Hence, FF-dependence is NP-hard. ∎
Formula forgetting
In the previous section, we investigated the notion of FF-dependence. Following it, we study the notion of formula forgetting in this section. We first propose a set of postulates that precisely characterizes formula forgetting. We then discuss some of its properties. Finally, we study its computation and model-theoretic characterization.
Zhang and Zhou (2009) proposed four postulates (W), (IR), (PP), and (NP) for variable forgetting in the modal logic S5. We extend these postulates to formula forgetting in propositional logic.
Definition 11.
We say ψ′ is a result of forgetting ψ in φ, if it satisfies the following postulates:
(W) Weakening: φ ⊨ ψ′;
(IR) Independence: ψ′ is Fml-independent from ψ;
(PP) Positive Persistence: for any formula η, if η is Fml-independent from ψ and φ ⊨ η, then ψ′ ⊨ η;
(NP) Negative Persistence: for any formula η, if η is Fml-independent from ψ and φ ⊭ η, then ψ′ ⊭ η.
Postulate (W) says that forgetting weakens the original formula. Postulate (IR) requires that, after forgetting, the resulting formula be irrelevant to the forgotten formula ψ. Finally, postulates (PP) and (NP), viewed together, state that forgetting does not affect the entailment of queries that are Fml-independent from ψ.
Similarly to literal forgetting, the result of formula forgetting is unique up to logical equivalence. We use Forget(φ, ψ) to denote the result of forgetting ψ in φ.
The following proposition reflects a strong relationship between formula forgetting and FF-dependence: forgetting ψ in φ does not change φ if and only if φ does not depend on ψ.
Proposition 8.
Let φ and ψ be two formulas. Then, Forget(φ, ψ) ≡ φ iff φ is Fml-independent from ψ.
The following proposition further illustrates some essential properties of formula forgetting.
Proposition 9.

Forget(φ, l) ≡ ForgetLit(φ, l) for any literal l;

Forget(φ, ψ) is satisfiable iff φ is satisfiable;

If φ1 ≡ φ2 and ψ1 ≡ ψ2, then Forget(φ1, ψ1) ≡ Forget(φ2, ψ2);

Forget(φ1 ∨ φ2, ψ) ≡ Forget(φ1, ψ) ∨ Forget(φ2, ψ);

If DepVar(φ2) ∩ DepVar(ψ) = ∅, then Forget(φ1 ∧ φ2, ψ) ≡ Forget(φ1, ψ) ∧ φ2.
Firstly, formula forgetting is a generalization of literal forgetting. Secondly, forgetting any formula preserves the satisfiability of the original formula. Thirdly, formula forgetting is syntax-irrelevant and distributes over disjunction. Finally, forgetting ψ in a conjunction of two formulas does not affect the conjunct that shares no dependent variable with ψ.
We next investigate the computation of formula forgetting. Recall the definition of literal forgetting (cf. Definition 5): forgetting a literal l in φ consists of two steps:

Transform φ into (l ∧ φ|l) ∨ (~l ∧ φ|~l) via the Shannon expansion;

Eliminate the occurrence of l, i.e., ForgetLit(φ, l) = φ|l ∨ (~l ∧ φ|~l).
The Shannon expansion can be generalized to a multi-variable expansion w.r.t. a set X of variables, i.e., φ ≡ ⋁_{t ∈ Ω_X} (t ∧ φ|t). It is natural to imagine that forgetting a formula ψ in φ should first decompose φ w.r.t. the set DepVar(ψ) of dependent variables of ψ, and then remove every dependent minterm of ψ. We therefore obtain a brute-force computation of formula forgetting as follows.
Proposition 10.
Let X = DepVar(ψ). Then, Forget(φ, ψ) ≡ ⋁_{t ∈ Ω_X \ DepMin(ψ)} (t ∧ φ|t) ∨ ⋁_{t ∈ DepMin(ψ)} φ|t.
The computation of formula forgetting via Proposition 10 is expensive when DepVar(ψ) is large. To simplify it, we give another approach via conditioning. It is hard to extend the definition of term conditioning (cf. Definition 4) to formula conditioning, since a formula may contain a literal and its negation simultaneously, and may even be unsatisfiable. We hereafter resort to Proposition 3 and formalize the notion of formula conditioning via variable forgetting.
Definition 12.
The conditioning of φ on ψ, written φ|ψ, is defined as ForgetVar(φ ∧ ψ, DepVar(ψ)).
Note that if ψ ≡ ⊤ (resp. ψ ≡ ⊥), then φ|ψ ≡ φ (resp. φ|ψ ≡ ⊥).
From now on, we adopt Definition 12 as the definition of the notation φ|t when t is a satisfiable term; by Proposition 3, the two definitions agree.
A semantic characterization of formula conditioning is as follows: the models of the conditioning of φ on ψ are the interpretations ω such that forcing some dependent minterm of ψ on ω leads to a model of φ.
Proposition 11.
Mod(φ|ψ) = {ω | ω ← t ∈ Mod(φ) for some t ∈ DepMin(ψ)}.
The following proposition states that the result of forgetting ψ in φ is equivalent to the disjunction of φ and the conditioning of φ on ψ.
Proposition 12.
Forget(φ, ψ) ≡ φ ∨ φ|ψ.
Now, we use an example to illustrate the computation of formula forgetting.
Example 7.
Let φ = (x ↔ y) ∧ z and ψ = x ∧ y; then DepVar(ψ) = {x, y}.
Firstly, the conjunction of φ and ψ is
φ ∧ ψ ≡ x ∧ y ∧ z.
Then, conditioning φ on ψ leads to
φ|ψ = ForgetVar(φ ∧ ψ, DepVar(ψ)) ≡ z.
Hence, the result of forgetting ψ in φ is
Forget(φ, ψ) ≡ φ ∨ φ|ψ ≡ z.
Theoretically, the computation of formula forgetting via Proposition 12 may cause a single-exponential blowup in the size of the original formula. But it is a practical solution when combined with existing techniques of knowledge compilation, e.g., binary decision diagrams (BDDs) [Bryant1992]. A BDD is a compact form of propositional formula that supports efficient Boolean operations. This will be shown in our experimental evaluation.
By Propositions 11 and 12, we get a semantic characterization of formula forgetting: forgetting ψ in φ amounts to adding the models of the conditioning of φ on ψ.
Corollary 1.
Mod(Forget(φ, ψ)) = Mod(φ) ∪ Mod(φ|ψ).
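Corollary 1 suggests a direct model-level prototype of formula forgetting (an illustrative sketch under the semantic representation used throughout; helpers are defined inline so the snippet is self-contained):

```python
from itertools import combinations

def interps(vs):
    return [frozenset(c) for r in range(len(vs) + 1)
            for c in combinations(vs, r)]

def dep_vars(models, vs):
    # x is dependent iff flipping x changes the truth value somewhere
    return {x for x in vs
            if any((w in models) != ((w ^ {x}) in models) for w in interps(vs))}

def dep_minterms(models, vs):
    # minterms over DepVar that entail the formula, as dicts var -> polarity
    dv = sorted(dep_vars(models, vs))
    rest = [v for v in vs if v not in dv]
    return [{v: (v in core) for v in dv}
            for core in interps(dv)
            if all((core | set(e)) in models for e in interps(rest))]

def force(w, t):
    # force term t (dict var -> polarity) on interpretation w
    return frozenset(v for v in w if v not in t) | {v for v, p in t.items() if p}

def forget(phi, psi, vs):
    # Mod(Forget(φ, ψ)) = Mod(φ) ∪ {ω | ω ← t ∈ Mod(φ) for some t ∈ DepMin(ψ)}
    extra = {w for w in interps(vs)
             if any(force(w, t) in phi for t in dep_minterms(psi, vs))}
    return set(phi) | extra

V = ("x", "y")
phi = {frozenset({"x", "y"})}               # φ = x ∧ y
psi = {w for w in interps(V) if "x" in w}   # ψ = x
result = forget(phi, psi, V)                # Forget(x ∧ y, x) ≡ y
print(sorted(sorted(w) for w in result))    # → [['x', 'y'], ['y']]
```

For a literal, this agrees with literal forgetting, as required by Proposition 9.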
Belief update based on FFdependence
In this section, following the dependence-based update scheme, we define a new update operator ⋄fml based on FF-dependence. Then, we completely characterize our update operator by identifying two extra postulates, show that it satisfies postulates (U1)-(U4) and (U8), and identify a special case in which all of the KM postulates hold. Finally, we compare ⋄fml with the operators ⋄pma, ⋄for, ⋄var, and ⋄lit from various perspectives, including information preservation, computational complexity, and experimental results.
Belief update via formula forgetting
The update operator based on FFdependence is defined in terms of formula forgetting.
Definition 13.
Let ψ and μ be formulas. The update operator ⋄fml is defined as ψ ⋄fml μ = Forget(ψ, ¬μ) ∧ μ.
The following example illustrates the mechanism of the update operator ⋄fml.
Example 8.
Let ψ = ¬x ∧ ¬y and μ = x ∨ y, so that ¬μ ≡ ¬x ∧ ¬y. The procedure of updating ψ by μ via ⋄fml consists of two steps:

Forget ¬μ in ψ: since DepMin(¬μ) = {¬x ∧ ¬y} and ψ|¬μ = ForgetVar(ψ ∧ ¬μ, {x, y}) ≡ ⊤, we get Forget(ψ, ¬μ) ≡ ψ ∨ ⊤ ≡ ⊤.

Conjoin the result of formula forgetting with μ:
ψ ⋄fml μ ≡ ⊤ ∧ μ ≡ x ∨ y.
In the following, we give the model-theoretic characterization of ⋄fml. We first provide the definition of the update of an interpretation by the new information.
Definition 14.
Let ω be an interpretation, and let form(ω) denote the minterm whose unique model is ω. The update of ω by μ based on FF-dependence is defined as
Mod(form(ω) ⋄fml μ) = {ω} if ω ⊨ μ, and {ω ← t | t ∈ DepMin(μ)} otherwise.
If an interpretation satisfies the new information μ, then we do not modify it. Otherwise, we force each dependent minterm of μ on it, which yields new interpretations satisfying μ.
The FF-dependence based update of a formula ψ by μ collects the updates of each model of ψ by μ:
Proposition 13.
Mod(ψ ⋄fml μ) = ⋃_{ω ∈ Mod(ψ)} Mod(form(ω) ⋄fml μ).
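Definition 14 and Proposition 13 together can be sketched as a model-level prototype of the update operator (illustrative code under the semantic representation used earlier; helpers are repeated inline for self-containment):

```python
from itertools import combinations

def interps(vs):
    return [frozenset(c) for r in range(len(vs) + 1)
            for c in combinations(vs, r)]

def dep_vars(models, vs):
    return {x for x in vs
            if any((w in models) != ((w ^ {x}) in models) for w in interps(vs))}

def dep_minterms(models, vs):
    dv = sorted(dep_vars(models, vs))
    rest = [v for v in vs if v not in dv]
    return [{v: (v in core) for v in dv}
            for core in interps(dv)
            if all((core | set(e)) in models for e in interps(rest))]

def force(w, t):
    return frozenset(v for v in w if v not in t) | {v for v, p in t.items() if p}

def update(psi, mu, vs):
    """Mod(ψ ⋄ μ): keep models of ψ satisfying μ; update the rest by
    forcing every dependent minterm of μ on them."""
    mins = dep_minterms(mu, vs)
    out = set()
    for w in psi:
        if w in mu:
            out.add(w)
        else:
            out.update(force(w, t) for t in mins)
    return out

V = ("x", "y")
psi = {frozenset()}                   # ψ = ¬x ∧ ¬y
mu = {w for w in interps(V) if w}     # μ = x ∨ y
print(sorted(sorted(w) for w in update(psi, mu, V)))
# → [['x'], ['x', 'y'], ['y']]  (no minimal-change filtering)
```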
Now, we give a representation result for the operator .
Theorem 1.
An operator ⋄ : L × L → L is equal to ⋄fml iff it satisfies (U2), (U8), and
(UP) If DepVar(ψ2) ∩ DepVar(μ) = ∅, then (ψ1 ∧ ψ2) ⋄ μ ≡ (ψ1 ⋄ μ) ∧ ψ2;
(UF) If DepVar(ψ) ⊆ DepVar(μ), ψ is satisfiable, and ψ ∧ μ is unsatisfiable, then ψ ⋄ μ ≡ μ.
Proof.
(⇒): By Definition 14 and Proposition 13, ⋄fml satisfies (U2) and (U8). Suppose that DepVar(ψ2) ∩ DepVar(μ) = ∅. By Proposition 9, we have Forget(ψ1 ∧ ψ2, ¬μ) ≡ Forget(ψ1, ¬μ) ∧ ψ2. This implies that ⋄fml satisfies (UP). Suppose that DepVar(ψ) ⊆ DepVar(μ), ψ is satisfiable, and ψ ∧ μ is unsatisfiable. It is easily verified that Forget(ψ, ¬μ) ≡ ⊤. Thus, ψ ⋄fml μ ≡ μ. Hence, ⋄fml satisfies (UF).
(⇐): Suppose that an operator ⋄ satisfies (U2), (U8), (UP), and (UF). By postulate (U8), ψ ⋄ μ ≡ ⋁_{ω ∈ Mod(ψ)} (form(ω) ⋄ μ). In the following, we only prove that, for any interpretation ω, form(ω) ⋄ μ ≡ form(ω) ⋄fml μ. It follows from (U2) that if ω ⊨ μ, then form(ω) ⋄ μ ≡ form(ω).
Suppose that ω ⊭ μ. Let X = DepVar(μ). Based on ω and X, we construct two terms t1 and t2 as follows:

t1 = ⋀{x | x ∈ ω ∩ X} ∧ ⋀{¬x | x ∈ X \ ω};

t2 = ⋀{x | x ∈ ω \ X} ∧ ⋀{¬x | x ∈ V \ (ω ∪ X)}.
Obviously, t1 ∧ t2 is the minterm corresponding to ω; moreover, t1 ⊨ ¬μ and DepVar(t2) ∩ DepVar(μ) = ∅. By postulates (UP) and (UF), we get that form(ω) ⋄ μ ≡ (t1 ⋄ μ) ∧ t2 ≡ μ ∧ t2. So Mod(form(ω) ⋄ μ) = {ω ← t | t ∈ DepMin(μ)} = Mod(form(ω) ⋄fml μ). ∎
Postulate (UP) is analogous to postulate (SP), except that it uses dependent variables instead of variables. It says that if the belief base can be divided into two compartments over disjoint sets of dependent variables, then the compartment not related to the new information remains unchanged. Postulate (UF) means that if every dependent variable of ψ is also a dependent variable of μ, and ψ is satisfiable but conflicts with μ, then the belief base is simply replaced by the new information after updating.
Theorem 2.
The operator ⋄fml satisfies (U1)-(U4) and (U8).
It is easy to construct counterexamples showing that none of (U5)-(U7) is satisfied; due to space limitations, we provide only one. The reason why ⋄fml fails to satisfy them is that it does not follow the principle of minimal change, to which these three postulates correspond. From the semantic perspective, this principle requires that the update of ψ by μ be a set of models of μ that are the closest to the models of ψ. However, ψ ⋄fml μ generally involves some models violating this condition. For example, consider ψ = ¬x ∧ ¬y and μ = x ∨ y. Then Mod(ψ ⋄fml μ) = {{x}, {y}, {x, y}}. Clearly, both {x} and {y} are closer to the model ∅ of ψ than {x, y}, and hence {x, y} is not a closest model.
Interestingly, when we restrict μ to be a satisfiable term, Mod(form(ω) ⋄fml μ) contains only one model, namely ω ← μ, which is exactly the model of μ that is the closest to ω. Under this restriction, ⋄fml obeys the principle of minimal change, and hence satisfies (U5)-(U7).
Theorem 3.
If every formula serving as new information in the KM postulates is restricted to be a satisfiable term, then the operator ⋄fml satisfies (U1)-(U8).
To prove the theorem, we first give the following lemma, which means that if the update of an interpretation ω by a term μ1 satisfies another term μ2, then it coincides with the update of ω by the conjunction of μ1 and μ2.
Lemma 1.
Let ω be an interpretation and μ1 and μ2 two terms. If form(ω) ⋄fml μ1 ⊨ μ2, then form(ω) ⋄fml μ1 ≡ form(ω) ⋄fml (μ1 ∧ μ2).
We now prove Theorem 3.
Proof.
(U5): Let ω′ be a model of (ψ ⋄fml μ) ∧ φ. There is a model ω of ψ s.t. ω′ = ω ← μ. Thus, form(ω) ⋄fml μ ⊨ φ. This, together with Lemma 1, implies that form(ω) ⋄fml μ ≡ form(ω) ⋄fml (μ ∧ φ). Hence, ω′ ⊨ ψ ⋄fml (μ ∧ φ).

(U6): Let ω be a model of ψ. By the assumption, we have form(ω) ⋄fml μ1 ⊨ μ2 and form(ω) ⋄fml μ2 ⊨ μ1. By Lemma 1, we get that form(ω) ⋄fml μ1 ≡ form(ω) ⋄fml (μ1 ∧ μ2) ≡ form(ω) ⋄fml μ2. Hence, ψ ⋄fml μ1 ≡ ψ ⋄fml μ2.

(U7): Let ω′ be a model of (ψ ⋄fml μ1) ∧ (ψ ⋄fml μ2). Since ψ is a minterm, there is a unique model ω of ψ. Because both μ1 and μ2 are terms, ω′ = ω ← μ1 and ω′ = ω ← μ2. By postulate (U1), we get that ω′ ⊨ μ1 ∨ μ2. We construct a term t as follows:
t = the conjunction of the literals over DepVar(μ1 ∨ μ2) that are satisfied by ω′.
It is easily verified that ω′ = ω ← t and t is a dependent minterm of μ1 ∨ μ2. Hence, ω′ ∈ Mod(form(ω) ⋄fml (μ1 ∨ μ2)), and (ψ ⋄fml μ1) ∧ (ψ ⋄fml μ2) ⊨ ψ ⋄fml (μ1 ∨ μ2).
∎
From the postulational point of view, the essential difference between revision and update is as follows: revision satisfies the conjunction property (R2) proposed in [Katsuno and Mendelzon1991]: if the new information μ does not contradict the initial base ψ, then the revised belief base should be equivalent to ψ ∧ μ. On the contrary, update satisfies the distribution property (U8): update distributes over the models of the initial base. The operator ⋄fml satisfies (U8) but not (R2). Hence, we consider ⋄fml an update operator, not a revision operator.
We next give the upper and lower bounds of the inference problem of the update operator .
Proposition 14.
Deciding whether ψ ⋄fml μ ⊨ φ is in Π₂ᵖ and coNP-hard.
Comparison with other update operators
In this subsection, we compare our belief update operator with other operators from different perspectives.
Information preservation
The first perspective we focus on is how much information of the initial base is preserved after updating. From the semantic perspective, the smaller the difference between the models of the initial base and those of the updated one, the more information the update procedure preserves.
Definition 15.
Let ⋄1 and ⋄2 be two update operators. We say ⋄1 preserves at least as much information as ⋄2, written ⋄1 ⊒ ⋄2, if ψ ⋄1 μ ⊨ ψ ⋄2 μ for any formulas ψ and μ. The notation ⊐ is obtained as usual by taking the asymmetric part of ⊒.
The following proposition says that update operators based on the principle of minimal change preserve more information than those based on dependence. Among the three dependence-based update operators, ⋄fml preserves the most information.
Proposition 15.
⋄for ⊐ ⋄pma ⊐ ⋄fml ⊐ ⋄lit ⊐ ⋄var.
Computational complexity
We next make a comparison from the perspective of computational complexity. The complexity results of the inference problems of ⋄pma, ⋄for, ⋄var, and ⋄lit are as follows.
Proposition 16.
[Eiter and Gottlob1992] Deciding whether ψ ⋄pma μ ⊨ φ (resp. ψ ⋄for μ ⊨ φ) is Π₂ᵖ-complete.
Proposition 17.
[Herzig and Rifi1999, Herzig, Lang, and Marquis2013] Deciding whether ψ ⋄var μ ⊨ φ (resp. ψ ⋄lit μ ⊨ φ) is coNP-complete.
Definition 16.
Let ⋄1 and ⋄2 be two update operators. We say ⋄2 is at least as computationally complex as ⋄1, written ⋄1 ⪯ ⋄2, iff the upper bound of the complexity of ⋄1 is contained in the lower bound of the complexity of ⋄2. The notations ≺ and ≈ are obtained by taking the asymmetric and symmetric parts of ⪯, respectively.
By Propositions 14, 16, and 17, we get the following corollary. Because the computational complexities of ⋄var and ⋄lit are the same, ⋄var ≈ ⋄lit holds; similarly, ⋄pma ≈ ⋄for holds. The lower bound of ⋄fml, which is coNP, contains the upper bound of ⋄lit. Thus, ⋄lit ⪯ ⋄fml holds. However, we cannot get ⋄pma ⪯ ⋄fml, since the lower bound of ⋄fml does not contain the upper bound of ⋄pma, which is Π₂ᵖ. Hence, we only obtain ⋄fml ⪯ ⋄pma. In addition, ⋄lit ⪯ ⋄fml holds while ⋄fml ⪯ ⋄lit does not: the upper bound of ⋄fml, which is Π₂ᵖ, is not a subset of the lower bound of ⋄lit, even though the latter is coNP-complete. So ⋄lit ≺ ⋄fml holds. The computational complexity of ⋄fml is thus between those of ⋄var/⋄lit and ⋄pma/⋄for.
Corollary 2.
⋄var ≈ ⋄lit ≺ ⋄fml ⪯ ⋄pma ≈ ⋄for.
Empirical results
Table 1: Minimum, maximum, and average updating times (in ms).

Update operator | V/C    | Min    | Max    | Avg
—               | 20/91  | 0.024  | 0.146  | 0.051
—               | 50/218 | 0.072  | 92.681 | 1.201
—               | 20/91  | 0.030  | 0.199  | 0.068
—               | 50/218 | 0.070  | 91.647 | 1.206
—               | 20/91  | 0.015  | 0.076  | 0.029
—               | 50/218 | 0.027  | 64.218 | 0.164
—               | 20/91  | 9.205  | 1,227  | 632.867
—               | 50/218 | 9.555  | 1,350  | 857.459
—               | 20/91  | 11.279 | 1,700  | 876.926
—               | 50/218 | 11.461 | 1,895  | 1,225
We have shown that the computational complexity of ⋄fml is higher in theory than that of the other two dependence-based update operators, but practice might be another matter. To assess the latter, we conduct an experiment on computing the new belief base. The benchmarks, used in [Marchi, Bittencourt, and Perrussel2010], are from SATLIB, available at http://www.cs.ubc.ca/hoos/SATLIB/benchm.html. We test two scales of test sets: 91 clauses with 20 variables, and 218 clauses with 50 variables. Each test set has 1000 instances. For each instance, we use its corresponding theory as the initial belief base, and the negation of the first 4 clauses as the new contradictory information. In this experiment, we use BDDs to represent the initial KB and the new information, and compute the updated KB via BDD operations.
We illustrate the computation of each update operator in the following. Based on Definitions 12 and 13 and Proposition 12, computing ψ ⋄fml μ consists of (1) generating the set DepVar(μ); (2) forgetting all variables of DepVar(μ) in ψ ∧ ¬μ, which yields the conditioning ψ|¬μ; (3) conjoining ψ ∨ ψ|¬μ with μ. The computation of ⋄var (resp. ⋄lit) is similar, except that it conjoins ForgetVar(ψ, DepVar(μ)) (resp. ForgetLit(ψ, DepLit(¬μ))) with μ. We implement ⋄pma and ⋄for according to the approach proposed by Gorogiannis and Ryan (2002).
In Table 1, the update operator and the numbers of variables and clauses are reported in columns 1 and 2, respectively. Columns 3-5 give the minimum, maximum and average time (in ms) of updating the KB. We can make two observations from Table 1. First, all dependence-based approaches are much more efficient than any approach based on the principle of minimal change. Second, for the three dependence-based update operators, the maximum updating times are less than 100 ms and the average ones less than 1.5 ms on the benchmarks with 218 clauses and 50 variables. The differences among them are negligible; hence their updating times are almost the same.
We close this section by noting that is a suitable alternative update operator. It is the dependence-based update operator preserving the most information. In practice, its computational cost is almost the same as those of and , even though its computational complexity is theoretically higher. Compared with the two operators based on the principle of minimal change, it preserves less information but is much more efficient.
Conservative extension via formula forgetting
Conservative extension plays a prominent role in AI and logic [Ghilardi, Lutz, and Wolter2006, Lutz, Walther, and Wolter2007, Jung et al.2017]. Generally speaking, a conservative extension of a theory is a supertheory that proves no new theorems in the language of the original theory. We next give a syntax-independent definition of conservative extension in terms of dependent variables.
Definition 17.
We say that is a syntax-independent conservative extension of if, for every formula with , implies .
The above definition is slightly different from the original version, which is based on the variables occurring in the formulas. Every conservative extension in the original sense is also a syntax-independent one, but the converse does not hold. For example, let and . The formula is not a conservative extension of since but . However, it is a syntax-independent conservative extension of . This difference does not impede the widespread use of the syntax-independent version. In many practical applications, the background KB and the query are first simplified, i.e., they contain only their dependent variables [Levy, Fikes, and Sagiv1997, Lang, Liberatore, and Marquis2003]. Under this assumption, the two definitions of conservative extension coincide.
Finally, we show that deciding whether is a syntax-independent conservative extension of can be reduced to checking whether forgetting in each dependent minterm of yields a tautology.
Theorem 4.
is a syntaxindependent conservative extension of iff for every .
Proof.
(): Suppose that there is s.t. . Let be the set . It is easy to verify that for . Hence, . Let . Obviously, and . This contradicts the assumption.
(): Suppose that there is s.t. , and . Hence, there is s.t. . Since , we get that . Because , it follows that . Thus, . We get that . This contradicts the assumption. ∎
Example 9.
Let and . Then, and . Thus, and . So is a syntaxindependent conservative extension of .
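The definition can also be checked model-theoretically by brute force: a supertheory is a syntax-independent conservative extension exactly when the original theory entails the strongest consequence of the supertheory over the original variables, i.e., the original theory entails the result of forgetting the extra variables. The sketch below illustrates this test; all variable and formula names are hypothetical, and "variables" here stands in for the paper's dependent variables (the two coincide once formulas are simplified, as noted above).

```python
from itertools import product

def models(f, vs):
    """All assignments over the tuple vs satisfying Boolean function f."""
    return {
        frozenset(v for v, b in zip(vs, bits) if b)
        for bits in product((False, True), repeat=len(vs))
        if f({v: b for v, b in zip(vs, bits)})
    }

def project(mods, keep):
    """Forgetting, viewed on models: restrict each model to `keep`."""
    keep = frozenset(keep)
    return {m & keep for m in mods}

def is_conservative_extension(phi_mods, psi_mods, phi_vars):
    """psi is a conservative extension of phi iff every model of phi,
    restricted to phi's variables, is the restriction of some model of
    psi -- i.e. phi entails Forget(psi, the extra variables)."""
    return project(phi_mods, phi_vars) <= project(psi_mods, phi_vars)

PV = ("p", "q")            # variables of the original theory
EV = ("p", "q", "r")       # variables of the extension
phi = models(lambda a: a["p"] or a["q"], PV)                       # p | q
psi = models(lambda a: (a["p"] or a["q"]) and a["r"] == a["p"], EV)  # (p | q) & (r <-> p)
print(is_conservative_extension(phi, psi, PV))   # True
```

Here psi adds a definition of the fresh variable r without constraining p and q any further, so forgetting r in psi gives back exactly phi.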
Related Work
Dependence is well known as a fundamental concept in many fields of artificial intelligence, particularly belief change. Several authors have axiomatized the notion of dependence by postulates and connected it to belief contraction, the type of belief change that removes existing beliefs. Del Cerro and Herzig CerH1996 gave postulates for a dependence relation between formulas, and established a correspondence between this dependence relation and belief contraction. In [del Cerro and Herzig1996], a belief state is represented as a belief set, i.e., an infinite set of formulas closed under implication. OveDPP2017 pointed out that a belief base, which need not be deductively closed and is often finite, is a practical alternative for representing belief states. They also identified a similar connection between dependence and base contraction.
The main difference between our work and the above approaches to dependence relations between formulas is that our dependence relation corresponds to formula forgetting, while their relations correspond to belief contraction. Belief contraction removes as little as possible from the initial belief state so that the new belief state does not entail . By contrast, formula forgetting eliminates all parts of relevant to , even if does not entail .
The above works focus on axiomatizations of dependence. In addition, several works define the notion of dependence via the ideas of language splitting and variable sharing. Par1999 proved the finest splitting theorem, which says that any finite set of formulas has a unique finest splitting (i.e., a partition of that refines every other splitting of ). can be decomposed into a set