Introduction
Belief change and belief merging have been topics of interest in Artificial Intelligence for three decades
[Alchourrón, Gärdenfors, and Makinson 1985, Katsuno and Mendelzon 1991, Konieczny and Pino Pérez 2002]. However, the restriction of such operators to specific fragments of propositional logic has received increasing attention only in recent years [Delgrande et al. 2013, Creignou et al. 2014a, Creignou et al. 2014b, Zhuang and Pagnucco 2012, Zhuang, Pagnucco, and Zhang 2013, Zhuang and Pagnucco 2014, Delgrande and Peppas 2015, Haret, Rümmele, and Woltran 2015]. Mostly, the question tackled in these works is: “How should rationality postulates and change operators be adapted to ensure that the result of belief change belongs to a given fragment?” Surprisingly, the question of the extent to which the result of a belief change operation can deviate from the fragment under consideration has been neglected so far. In order to tackle this question, we focus here on a certain form of reverse merging. The question is: given an arbitrary knowledge base $K$ and some IC-merging operator $\Delta$ (i.e., merging with integrity constraints, see [Konieczny and Pino Pérez 2002]), can we find a profile $E$, i.e. a tuple of knowledge bases, and a constraint $\mu$, both from a given fragment of classical logic, such that $\Delta_\mu(E)$ yields a result equivalent to $K$? In other words, we are interested in seeing if $K$ can be distributed into knowledge bases of simpler structure, such that the task of merging allows for a reconstruction of the original knowledge. We call this operation knowledge distribution.

Studying the concept of knowledge distribution can be motivated from different points of view. First, consider a scenario where the storage devices have limited expressibility, for instance, databases or logic programs. Our analysis will show which merging operators are required to reconstruct arbitrary knowledge stored in such a set of limited devices.
Second, distribution can also be understood as a tool to hide information; only users who know the merging operator that was used (which thus acts as an encryption key) are able to faithfully retrieve the distributed knowledge. Given the high complexity of belief change (even for revision in “simple” fragments like Horn and Krom [Eiter and Gottlob 1992, Liberatore and Schaerf 2001, Creignou, Pichler, and Woltran 2013]), a brute-force attack that guesses the merging operator is unthinkable. Finally, from a theoretical perspective, our results shed light on the power of different merging operators when applied to profiles from certain fragments. In particular, our results show that merging conjunctions of literals via the Hamming-distance based operator $\Delta^{d_H,\Sigma}$ does not need additional care, since the result is guaranteed to stay in the fragment.

Related Work.
Previous work on merging in fragments of propositional logic proposed an adaptation of existing belief merging operators to ensure that the result of merging belongs to a given fragment [Creignou et al. 2014b], or modified the rationality postulates in order to function in the fragment [Haret, Rümmele, and Woltran 2015]. Our approach is different, since we do not require that the result of merging stays in a given fragment. On the contrary, we want to decompose arbitrary bases into a fragment profile. Recent work by Liberatore has also addressed a form of meta-reasoning over belief change operators. In [Liberatore 2015a], the input is a profile of knowledge bases together with the expected result of merging, and the aim is to determine the reliability of the bases (for instance, represented by weights) which allows this result to be obtained. In another paper [Liberatore 2015b], given a sequence of belief revisions and their results, the initial preorder which characterizes the revision operator is identified. Finally, even if our approach may seem related to Knowledge Compilation (KC) [Darwiche and Marquis 2002, Fargier and Marquis 2014, Marquis 2015], both methods are in fact conceptually different. KC aims at compiling a knowledge base $K$ into a knowledge base $K'$ such that the most important queries for a given application (consistency checking, clausal entailment, model counting, etc.) are simpler to solve with $K'$. Here, we are interested in the extent to which it is possible to equivalently represent an arbitrary knowledge base by simpler fragments when using merging as a recovery operation.
Main Contributions.
We formally introduce the concept of knowledge distributability, as well as a restricted version of it where the profile is limited to a single knowledge base (simplifiability). We show that for the drastic distance, arbitrary knowledge can be distributed into bases restricted to almost any kind of fragment, while simplifiability is limited to trivial cases. On the other hand, for Hamming-distance based merging the picture is more opaque. We show that for conjunctions of literals, distributability w.r.t. $\Delta^{d_H,\Sigma}$ is limited to trivial cases, while slightly more can be done with $\Delta^{d_H,G\mathit{Max}}$ and $\Delta^{d_H,G\mathit{Min}}$. For the Horn fragment we show that arbitrary knowledge can be distributed and even be simplified. Finally, we discuss the Krom fragment, for which the results for $\Sigma$, $G\mathit{Max}$ and $G\mathit{Min}$ are situated in between the two former fragments.
Background
Fragments of Propositional Logic.
We consider $\mathcal{L}$ as the language of propositional logic over some fixed alphabet $\mathcal{U}$ of propositional atoms. We use the standard connectives $\wedge$, $\vee$, $\neg$, $\rightarrow$, and constants $\top$, $\bot$. A clause is a disjunction of literals. A clause is called Horn if at most one of its literals is positive. An interpretation is a set of atoms (those set to true). The set of all interpretations is $2^{\mathcal{U}}$. Models of a formula $\phi$ are denoted by $\mathrm{Mod}(\phi)$. A knowledge base (KB) $K$ is a finite set of formulas, and we identify the models of a KB via $\mathrm{Mod}(K) = \bigcap_{\phi \in K} \mathrm{Mod}(\phi)$. A profile $E = \langle K_1, \dots, K_n \rangle$ is a finite nonempty tuple of KBs. Two formulae $\phi, \psi$ (resp. KBs $K, K'$) are equivalent, denoted $\phi \equiv \psi$ (resp. $K \equiv K'$), when they have the same set of models.
We use a rather general and abstract notion of fragments.
Definition 1.
A mapping $Cl : 2^{2^{\mathcal{U}}} \rightarrow 2^{2^{\mathcal{U}}}$ is called a closure operator if it satisfies the following for any $\mathcal{M}, \mathcal{M}_1, \mathcal{M}_2 \subseteq 2^{\mathcal{U}}$:

1. If $\mathcal{M}_1 \subseteq \mathcal{M}_2$, then $Cl(\mathcal{M}_1) \subseteq Cl(\mathcal{M}_2)$;

2. If $|\mathcal{M}| \leq 1$, then $Cl(\mathcal{M}) = \mathcal{M}$;

3. $\mathcal{M} \subseteq Cl(\mathcal{M}) = Cl(Cl(\mathcal{M}))$.
Definition 2.
$\mathcal{B} \subseteq \mathcal{L}$ is called a fragment if it is closed under conjunction (i.e., $\phi \wedge \psi \in \mathcal{B}$ for any $\phi, \psi \in \mathcal{B}$), and there exists an associated closure operator $Cl$ such that (1) $\mathrm{Mod}(\phi) = Cl(\mathrm{Mod}(\phi))$ for all $\phi \in \mathcal{B}$, and (2) for all $\mathcal{M} \subseteq 2^{\mathcal{U}}$ there is a $\phi \in \mathcal{B}$ with $\mathrm{Mod}(\phi) = Cl(\mathcal{M})$. We often denote the closure operator associated to a fragment $\mathcal{B}$ as $Cl_{\mathcal{B}}$.
Definition 3.
For a fragment $\mathcal{B}$, we call a finite set $K \subseteq \mathcal{B}$ a $\mathcal{B}$-knowledge base ($\mathcal{B}$-KB). A $\mathcal{B}$-profile is a profile over $\mathcal{B}$-KBs. A KB $K$ is called $\mathcal{B}$-expressible if there exists a $\mathcal{B}$-KB $K'$, such that $K \equiv K'$.
Many well-known fragments of propositional logic are indeed captured by our notion. For the Horn fragment $\mathcal{B}_{\mathit{Horn}}$, i.e. the set of all conjunctions of Horn clauses over $\mathcal{U}$, take the operator $Cl_{\mathit{Horn}}$ defined as the fixed point of the function $f_{\mathit{Horn}}(\mathcal{M}) = \mathcal{M} \cup \{\omega_1 \cap \omega_2 \mid \omega_1, \omega_2 \in \mathcal{M}\}$. The Krom fragment $\mathcal{B}_{\mathit{Krom}}$, which is restricted to formulas over clauses of length at most 2, is linked to the operator $Cl_{\mathit{Krom}}$ defined as the fixed point of the function given by $f_{\mathit{Krom}}(\mathcal{M}) = \mathcal{M} \cup \{\mathit{maj}(\omega_1, \omega_2, \omega_3) \mid \omega_1, \omega_2, \omega_3 \in \mathcal{M}\}$. Here, we use the ternary majority function $\mathit{maj}(\omega_1, \omega_2, \omega_3)$, which yields the interpretation containing those atoms which are true in at least two out of $\omega_1, \omega_2, \omega_3$. Finally, we are also interested in the fragment $\mathcal{B}_\wedge$, which is just composed of conjunctions of literals; its associated operator $Cl_\wedge$ is defined as the fixed point of the function $f_\wedge(\mathcal{M}) = \mathcal{M} \cup \{\omega \mid \omega_1 \cap \omega_2 \subseteq \omega \subseteq \omega_1 \cup \omega_2 \text{ for some } \omega_1, \omega_2 \in \mathcal{M}\}$. Note that full classical logic is given via the identity closure operator $Cl_{\mathit{id}}(\mathcal{M}) = \mathcal{M}$.
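To make these closure operators concrete, the following sketch (ours, with interpretations represented as frozensets of true atoms) iterates the respective functions to a fixed point; the interval-style closure for conjunctions of literals reflects our reading of $Cl_\wedge$ and should be taken as an assumption rather than the paper's exact definition.

```python
from itertools import combinations

def horn_closure(models):
    """Fixed point of M -> M u {w1 & w2 | w1, w2 in M}: Horn-expressible
    model sets are exactly the intersection-closed ones."""
    closed = set(models)
    while True:
        new = {a & b for a, b in combinations(closed, 2)} - closed
        if not new:
            return closed
        closed |= new

def krom_closure(models):
    """Fixed point of M -> M u {maj(w1, w2, w3)}: closure under ternary
    majority (atoms true in at least two of the three interpretations)."""
    closed = set(models)
    while True:
        new = {(a & b) | (a & c) | (b & c)
               for a in closed for b in closed for c in closed} - closed
        if not new:
            return closed
        closed |= new

def conj_closure(models):
    """Closure for conjunctions of literals (our assumption): the smallest
    'interval' of interpretations between the intersection and the union
    of the given models."""
    lo = frozenset.intersection(*models)
    hi = frozenset.union(*models)
    free = sorted(hi - lo)
    return {lo | frozenset(extra)
            for r in range(len(free) + 1)
            for extra in combinations(free, r)}
```

For instance, the Horn closure of $\{\{a,b\}, \{b,c\}\}$ adds the intersection $\{b\}$, while the closure for conjunctions of literals of $\{\emptyset, \{a,b\}\}$ is the whole interval of the four interpretations over $\{a,b\}$.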
Merging Operators.
We focus on IC-merging, where a profile is mapped into a KB such that the result satisfies some integrity constraint. Postulates for IC-merging have been stated in [Konieczny and Pino Pérez 2002]. We recall a specific family of IC-merging operators, based on distances between interpretations; see also [Konieczny, Lang, and Marquis 2004].
Definition 4.
A distance between interpretations is a mapping $d$ from two interpretations to a nonnegative real number, such that for all $\omega_1, \omega_2, \omega_3$: (1) $d(\omega_1, \omega_2) = 0$ iff $\omega_1 = \omega_2$; (2) $d(\omega_1, \omega_2) = d(\omega_2, \omega_1)$; and (3) $d(\omega_1, \omega_2) \leq d(\omega_1, \omega_3) + d(\omega_3, \omega_2)$. We will use two specific distances:

- the drastic distance: $d_D(\omega_1, \omega_2) = 0$ if $\omega_1 = \omega_2$, and $1$ otherwise;

- the Hamming distance: $d_H(\omega_1, \omega_2) = |(\omega_1 \setminus \omega_2) \cup (\omega_2 \setminus \omega_1)|$.
We overload the previous notations to define the distance between an interpretation $\omega$ and a KB $K$: if $d$ is a distance between interpretations, then $d(\omega, K) = \min_{\omega' \in \mathrm{Mod}(K)} d(\omega, \omega')$.
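With interpretations as sets of true atoms, the two distances and their lifting to KBs can be sketched as follows (a minimal illustration; the function names are ours):

```python
def d_drastic(w1, w2):
    """Drastic distance: 0 iff the interpretations are equal, 1 otherwise."""
    return 0 if w1 == w2 else 1

def d_hamming(w1, w2):
    """Hamming distance: number of atoms on which w1 and w2 differ
    (size of the symmetric difference)."""
    return len(w1 ^ w2)

def dist_to_kb(d, w, kb_models):
    """Distance from an interpretation to a (consistent) KB:
    the minimum distance to any model of the KB."""
    return min(d(w, m) for m in kb_models)
```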
Next, an aggregation function must be used to evaluate the distance between an interpretation and a profile.
Definition 5.
An aggregation function $f$ associates a nonnegative number to every finite tuple of nonnegative numbers, such that:

1. If $x \leq y$, then $f(x_1, \dots, x, \dots, x_n) \leq f(x_1, \dots, y, \dots, x_n)$;

2. $f(x_1, \dots, x_n) = 0$ iff $x_1 = \dots = x_n = 0$;

3. For every nonnegative number $x$, $f(x) = x$.
As aggregation functions, we will consider the sum $\Sigma$, and $G\mathit{Max}$ and $G\mathit{Min}$ (also known as leximax and leximin, respectively; stricto sensu, these functions return a vector of numbers, and not a single number, but $G\mathit{Max}$ (resp. $G\mathit{Min}$) can be associated with an aggregation function as defined in Definition 5 which yields the same vector ordering as $G\mathit{Max}$ (resp. $G\mathit{Min}$), so we slightly abuse notation and use $G\mathit{Max}$ and $G\mathit{Min}$ directly as the names of aggregation functions; see [Konieczny, Lang, and Marquis 2002]). They are defined as follows. Given a profile $E = \langle K_1, \dots, K_n \rangle$ and an interpretation $\omega$, let $(d(\omega, K_1), \dots, d(\omega, K_n))$ be the vector of distances from $\omega$ to the KBs of $E$. $G\mathit{Max}$ (resp. $G\mathit{Min}$) reorders this vector in decreasing (resp. increasing) order; given two interpretations $\omega, \omega'$, the reordered vectors are then compared w.r.t. the lexicographic ordering.

Finally, let $d$ be a distance, $\omega$ an interpretation and $E = \langle K_1, \dots, K_n \rangle$ a profile. Then $d^f(\omega, E) = f(d(\omega, K_1), \dots, d(\omega, K_n))$. If there is no ambiguity about the aggregation function $f$, we write $d(\omega, E)$ instead of $d^f(\omega, E)$.
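As a minimal sketch (ours, not the paper's notation), $\Sigma$, $G\mathit{Max}$ and $G\mathit{Min}$ can be realized as sort keys over distance vectors, with Python's built-in list comparison supplying the lexicographic order:

```python
def agg_sum(vec):
    """Sigma: sum of the distances to the individual KBs."""
    return sum(vec)

def gmax_key(vec):
    """GMax (leximax): sort the distance vector in decreasing order;
    keys are then compared lexicographically, smaller being better."""
    return sorted(vec, reverse=True)

def gmin_key(vec):
    """GMin (leximin): sort the distance vector in increasing order."""
    return sorted(vec)
```

For example, under $G\mathit{Max}$ the vector $(1,1,2)$ is preferred to $(0,0,3)$, since $[2,1,1]$ precedes $[3,0,0]$ lexicographically, whereas $\Sigma$ would rank them the other way around.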
Definition 6.
For any distance $d$ between interpretations and any aggregation function $f$, the merging operator $\Delta^{d,f}$ is a mapping from a profile $E$ and a formula $\mu$ to a KB $\Delta^{d,f}_\mu(E)$, such that $\omega \in \mathrm{Mod}(\Delta^{d,f}_\mu(E))$ iff $\omega \in \mathrm{Mod}(\mu)$ and $d^f(\omega, E)$ is minimal among the models of $\mu$.
When we consider a profile containing a single knowledge base $K$, all aggregation functions are equivalent; we write $\Delta^{d}_\mu(K)$ instead of $\Delta^{d,f}_\mu(\langle K \rangle)$ for readability. For the drastic distance, $\Delta^{d_D,\Sigma}$, $\Delta^{d_D,G\mathit{Max}}$, and $\Delta^{d_D,G\mathit{Min}}$ are equivalent for arbitrary profiles. Thus, whenever we show results for $\Delta^{d_D,\Sigma}$, these carry over to $\Delta^{d_D,G\mathit{Max}}$ and $\Delta^{d_D,G\mathit{Min}}$.
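Definition 6 can be prototyped directly: enumerate the models of the constraint and keep those whose aggregated distance vector to the profile is minimal. The helper names below (`merge`, `hamming`, `gmax`) are ours.

```python
from itertools import combinations

def merge(d, agg_key, profile, mu_models):
    """Models of the merged base: the models of the constraint mu that
    minimize the aggregated distance to the profile (cf. Definition 6).
    `profile` is a tuple of KBs, each given as a set of models."""
    def score(w):
        return agg_key([min(d(w, m) for m in kb) for kb in profile])
    best = min(score(w) for w in mu_models)
    return {w for w in mu_models if score(w) == best}

# Toy check with Hamming distance and GMax over the alphabet {a, b}:
hamming = lambda w1, w2: len(w1 ^ w2)
gmax = lambda vec: sorted(vec, reverse=True)
universe = {frozenset(s) for r in range(3) for s in combinations('ab', r)}
profile = ({frozenset()}, {frozenset('ab')})   # two single-model KBs
result = merge(hamming, gmax, profile, universe)
# GMax prefers the 'compromise' interpretations {a} and {b} (vector [1,1])
# over the extremes {} and {a,b} (vector [2,0]).
```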
Main Concepts and General Results
We now give the central definition for a knowledge base being distributable into a profile from a certain fragment with respect to a given merging operator.
Definition 7.
Let $\Delta$ be a merging operator, $K$ be an arbitrary KB, and $\mathcal{B}$ be a fragment. $K$ is called distributable w.r.t. $\langle \mathcal{B}, \Delta \rangle$ if there exists a $\mathcal{B}$-profile $E$ and a formula $\mu \in \mathcal{B}$, such that $\Delta_\mu(E) \equiv K$.
Example 1.
Let and consider which we want to check for distributability w.r.t. operator . We have , thus is not expressible (note that ), otherwise would be distributable in a simple way (see Proposition 1 below).
Take the profile with , , together with the empty constraint . We have , . In the following matrix, each line corresponds to the distance between a model of and a KB from the profile (columns and ), or between a model of and the profile using the sum aggregation over the distances to the single KBs (column ).
We observe that , thus as desired. It is easily checked that other aggregations work as well: .
Next, we recall that IC-merging of a single KB yields revision. Thus, the concept we introduce next is also of interest, as it represents a certain form of reverse revision.
Definition 8.
Let $\Delta$ be a merging operator, $K$ an arbitrary KB, and $\mathcal{B}$ a fragment. $K$ is called simplifiable w.r.t. $\langle \mathcal{B}, \Delta \rangle$ if there exists a $\mathcal{B}$-KB $K'$ and $\mu \in \mathcal{B}$, such that $\Delta_\mu(\langle K' \rangle) \equiv K$.
As we will see later, the KB from Example 1 cannot be simplified w.r.t. ; in other words, at least two KBs are needed here to “express” it. However, it is rather straightforward that any expressible KB can be simplified.
Proposition 1.
For every fragment $\mathcal{B}$ and every KB $K$, it holds that $K$ is simplifiable (and thus also distributable) w.r.t. $\langle \mathcal{B}, \Delta^{d,f} \rangle$ for any distance $d$ and aggregation function $f$, whenever $K$ is $\mathcal{B}$-expressible.
Proof.
Let $K'$ be a $\mathcal{B}$-KB equivalent to $K$, and let $\mu = \bigwedge_{\phi \in K'} \phi$. Thus, $\mu \in \mathcal{B}$ by the definition of fragments, and it is easily verified that $\Delta_\mu(\langle K' \rangle) \equiv K$. ∎
Next, we show that in order to determine whether a KB $K$ is distributable, it is sufficient to consider constraints $\mu$ such that $\mathrm{Mod}(\mu) = Cl_{\mathcal{B}}(\mathrm{Mod}(K))$.
Proposition 2.
Let $K$ be a KB, $\mathcal{B}$ be a fragment, $E$ a $\mathcal{B}$-profile and $\mu \in \mathcal{B}$. Then $\Delta_\mu(E) \equiv K$ implies $\Delta_{\mu'}(E) \equiv K$ for any $\mu' \in \mathcal{B}$ such that $\mathrm{Mod}(\mu') = Cl_{\mathcal{B}}(\mathrm{Mod}(K))$.
Proof.
Let . By Definition 6, , hence . Moreover, is closed, so . We get . Thus, , i.e. . ∎
Next, we give two positive results for distributing knowledge in any fragment. The key idea is to use KBs in the profile which have exactly one model (our notion of fragment guarantees the existence of such KBs). The first result is independent of the distance notion but requires $G\mathit{Min}$ as the aggregation function. The second result is for the drastic distance and thus works for any of the aggregation functions we consider.
Theorem 3.
Let $d$ be a distance and $\mathcal{B}$ be a fragment. Then for every KB $K$, such that $d(\omega_1, \omega_2) = c$ for all distinct $\omega_1, \omega_2 \in \mathrm{Mod}(K)$, for some constant $c$, it holds that $K$ is distributable w.r.t. $\langle \mathcal{B}, \Delta^{d,G\mathit{Min}} \rangle$.
Proof.
Build the profile $E = \langle K_1, \dots, K_n \rangle$ such that for each $\omega_i \in \mathrm{Mod}(K)$, there is a KB $K_i$ with $\omega_i$ as its only model. Thus all models of $K$ get a distance vector containing $0$ once and $c$ in every other position. All interpretations from $Cl_{\mathcal{B}}(\mathrm{Mod}(K)) \setminus \mathrm{Mod}(K)$ get a vector with all entries $> 0$. Hence, we have $\Delta^{d,G\mathit{Min}}_\mu(E) \equiv K$ using $\mu \in \mathcal{B}$ with $\mathrm{Mod}(\mu) = Cl_{\mathcal{B}}(\mathrm{Mod}(K))$. ∎
Theorem 4.
For every fragment $\mathcal{B}$ and every knowledge base $K$, it holds that $K$ is distributable w.r.t. $\langle \mathcal{B}, \Delta^{d_D,f} \rangle$, for $f \in \{\Sigma, G\mathit{Max}, G\mathit{Min}\}$.
Proof.
Given a fragment $\mathcal{B}$, we take $E = \langle K_1, \dots, K_n \rangle$ where each $K_i$ is a knowledge base with single model $\omega_i$, for $\mathrm{Mod}(K) = \{\omega_1, \dots, \omega_n\}$ (such KBs exist due to our definition of fragments), and let $\mu \in \mathcal{B}$ be such that $\mathrm{Mod}(\mu) = Cl_{\mathcal{B}}(\mathrm{Mod}(K))$; hence also $\mathrm{Mod}(K) \subseteq \mathrm{Mod}(\mu)$. Let $\omega \in \mathrm{Mod}(\mu)$; we observe that $d_D^\Sigma(\omega, E) = n - 1$ when $\omega \in \mathrm{Mod}(K)$, and $n$ otherwise. Thus, $\Delta^{d_D,\Sigma}_\mu(E) \equiv K$. The same result holds for $G\mathit{Max}$ and $G\mathit{Min}$. ∎
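The construction behind this proof can be replayed on a small instance. The sketch below uses one singleton-model KB per model of $K$ and assumes the interval-style closure for conjunctions of literals; the name `merge_sum_drastic` is ours.

```python
def merge_sum_drastic(profile, mu_models):
    """Drastic-distance merging with Sigma: models of the constraint
    minimizing the summed drastic distance to the profile."""
    score = lambda w: sum(0 if w in kb else 1 for kb in profile)
    best = min(score(w) for w in mu_models)
    return {w for w in mu_models if score(w) == best}

# K with models {{}, {a,b}} is not expressible as a conjunction of
# literals: its closure is the whole interval between {} and {a,b}.
K = {frozenset(), frozenset({'a', 'b'})}
closure = {frozenset(), frozenset({'a'}), frozenset({'b'}),
           frozenset({'a', 'b'})}
profile = [{w} for w in K]   # one singleton-model KB per model of K
# Each model of K is at summed distance |Mod(K)| - 1 = 1; every other
# model of the constraint is at distance |Mod(K)| = 2.
assert merge_sum_drastic(profile, closure) == K
```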
Concerning simplifiability w.r.t. drastic-distance based operators, Proposition 1 cannot be improved.
Theorem 5.
For every fragment $\mathcal{B}$ and every KB $K$, $K$ is simplifiable w.r.t. $\langle \mathcal{B}, \Delta^{d_D} \rangle$ iff $K$ is $\mathcal{B}$-expressible.
Proof.
The if-direction is by Proposition 1. For the other direction, suppose $K$ is not $\mathcal{B}$-expressible. We show that $\Delta^{d_D}_\mu(\langle K' \rangle) \not\equiv K$ for any $\mathcal{B}$-KB $K'$, with $\mathrm{Mod}(\mu) = Cl_{\mathcal{B}}(\mathrm{Mod}(K))$. By Proposition 2 the result then follows. Now suppose there exists a $\mathcal{B}$-KB $K'$ such that $\Delta^{d_D}_\mu(\langle K' \rangle) \equiv K$. First observe that since $K$ is not expressible, $\mathrm{Mod}(\mu) \setminus \mathrm{Mod}(K) \neq \emptyset$. Since we are working with the drastic distance, in order to promote the models of $K$, we also need them in $\mathrm{Mod}(K')$, hence $\mathrm{Mod}(K) \subseteq \mathrm{Mod}(K')$, and since $K'$ is from $\mathcal{B}$ we have $\mathrm{Mod}(\mu) = Cl_{\mathcal{B}}(\mathrm{Mod}(K)) \subseteq \mathrm{Mod}(K')$. Thus there exists $\omega \in \mathrm{Mod}(\mu) \setminus \mathrm{Mod}(K)$ having distance $0$ to $K'$, and thus $\omega \in \mathrm{Mod}(\Delta^{d_D}_\mu(\langle K' \rangle))$. Since $\omega \notin \mathrm{Mod}(K)$, this yields a contradiction to $\Delta^{d_D}_\mu(\langle K' \rangle) \equiv K$. ∎
Hamming Distance and Specific Fragments
We first consider the simplest fragment under consideration, namely conjunctions of literals. As it turns out, (nontrivial) distributability for this fragment w.r.t. $\Delta^{d_H,\Sigma}$ is not achievable. We then see that more general fragments allow for nontrivial distributions. In particular, we show that every KB is distributable (and even simplifiable) in the Horn case, and we finally give a few observations for the Krom fragment.
The 1CNF Fragment
The following technical result is important to prove the main result in this section.
Lemma 6.
For any $\mathcal{B}_\wedge$-profile $E$ and interpretations $\omega_1, \omega_2$, it holds that:
$d_H(\omega_1, E) + d_H(\omega_2, E) = d_H(\omega_1 \cap \omega_2, E) + d_H(\omega_1 \cup \omega_2, E)$.
Proof.
It suffices to show that for each KB $K_i$ in the profile $E$, $d_H(\omega_1, K_i) + d_H(\omega_2, K_i) = d_H(\omega_1 \cap \omega_2, K_i) + d_H(\omega_1 \cup \omega_2, K_i)$. Indeed, summing up these equalities over all $K_i$, we get the desired equality for the whole profile.
Since $d_H(\omega, E) = \sum_{i} d_H(\omega, K_i)$ for any interpretation $\omega$, our conclusion then follows immediately.
Thus, take to be two interpretations that are closest to and , respectively, among the models of . In other words, and . By induction on the number of propositional atoms in , we can show that and are closest in to and , respectively. Thus, we have that , , , , and our problem reduces to showing that . By using induction on the number of propositional atoms in again, we can show that this equality holds. The argument runs as follows: in the base case, when the alphabet consists of just one propositional atom, the equality is shown to be true by checking all the cases. For the inductive step we assume the claim holds for an alphabet of size and show that it also holds for an alphabet of size . More concretely, we analyze the way in which the Hamming distances between interpretations change when we add a propositional atom to the alphabet. An analysis of all the possible cases shows that the equality holds. ∎
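Our reading of Lemma 6 is the modularity identity $d_H(\omega_1, E) + d_H(\omega_2, E) = d_H(\omega_1 \cap \omega_2, E) + d_H(\omega_1 \cup \omega_2, E)$. The exhaustive check below verifies the per-KB version of this identity for every interval-shaped model set over three atoms; the interval representation of 1CNF model sets is our assumption.

```python
from itertools import combinations

ATOMS = ('a', 'b', 'c')

def powerset(s):
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(sorted(s), r)]

INTERPS = powerset(ATOMS)

def interval(lo, hi):
    """Models of a conjunction of literals (our assumption): all
    interpretations between lo and hi."""
    return [lo | extra for extra in powerset(hi - lo)]

def d_H(w, models):
    """Hamming distance from an interpretation to a model set."""
    return min(len(w ^ m) for m in models)

# Exhaustive check of the modularity identity behind Lemma 6, for every
# interval-shaped KB and every pair of interpretations over three atoms.
for lo in INTERPS:
    for hi in INTERPS:
        if not lo <= hi:
            continue
        kb = interval(lo, hi)
        for w1 in INTERPS:
            for w2 in INTERPS:
                assert d_H(w1, kb) + d_H(w2, kb) == \
                       d_H(w1 & w2, kb) + d_H(w1 | w2, kb)
```

Intuitively, the identity holds because the distance to an interval decomposes atom by atom, and for each atom the multiset of truth values of $\omega_1, \omega_2$ equals that of $\omega_1 \cap \omega_2, \omega_1 \cup \omega_2$.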
Next we observe certain patterns of interpretations that indicate whether a KB is expressible or not.
Definition 9.
If is a knowledge base, then a pair of interpretations and are called critical with respect to if and , and one of the following cases holds:

and ,

and ,

and ,

and , or

and .
Lemma 7.
If a KB $K$ is not $\mathcal{B}_\wedge$-expressible, then there exist $\omega_1, \omega_2$ being critical with respect to $K$.
Proof.
The fact that $K$ is not expressible implies that either: (i) $\mathrm{Mod}(K)$ is not closed under intersection or union, or (ii) there are such that , and , . Case (i) implies that there exist such that one of Cases 1–3 from Definition 9 holds. If we are in Case (ii), then consider the interpretation . Clearly, , hence . Also, and . There are two subcases to consider here. If , then we are in Case 4 of Definition 9. If , then we are in Case 5 of Definition 9. ∎
Example 2.
Let us consider the KB such that . is not expressible; indeed, .
Here, we identify several sets of critical interpretations w.r.t. . First, corresponds to the situation described in Case 5 of Definition 9, with and .
The set also corresponds to Case 5, with and .
We can also consider the set of interpretations , which corresponds to Case 2 of Definition 9, with and . The models of and the sets of critical interpretations are represented in Figure 1.
We can now state the central result of this section.
Theorem 8.
A KB $K$ is distributable with respect to $\langle \mathcal{B}_\wedge, \Delta^{d_H,\Sigma} \rangle$ if and only if $K$ is $\mathcal{B}_\wedge$-expressible.
Proof.
If part. By Proposition 1.
Only if part. Let $K$ be a KB that is not $\mathcal{B}_\wedge$-expressible. We will show that it is not distributable w.r.t. $\langle \mathcal{B}_\wedge, \Delta^{d_H,\Sigma} \rangle$. Suppose, on the contrary, that $K$ is distributable. Then there exists a $\mathcal{B}_\wedge$-profile $E$ such that $\Delta^{d_H,\Sigma}_\mu(E) \equiv K$, where $\mathrm{Mod}(\mu) = Cl_\wedge(\mathrm{Mod}(K))$ (cf. Proposition 2).
By Lemma 7, there exist interpretations $\omega_1, \omega_2$ that are critical with respect to $K$. By Lemma 6, we have
$d_H(\omega_1, E) + d_H(\omega_2, E) = d_H(\omega_1 \cap \omega_2, E) + d_H(\omega_1 \cup \omega_2, E)$.  (1)
Let us now do a case analysis depending on the type of critical pair we are dealing with. If we are in Case 1 of Definition 9, then it needs to be the case that , and , for some integers and . Plugging these numbers into Equality (1), we get that and . Since , we have arrived at a contradiction. If we are in Case 2, then it needs to be the case that , and , for some integers and . Plugging these numbers into Equality (1) again, we get a contradiction along the same lines as in Case 1. If we are in Case 3, then it needs to hold that , , for some integers and . Plugging these numbers into Equality (1) gives us and hence . Since , we have arrived at a contradiction. Cases 4 and 5 are entirely similar. ∎
In other words, for any $\mathcal{B}_\wedge$-profile $E$ and constraint $\mu \in \mathcal{B}_\wedge$, $\Delta^{d_H,\Sigma}_\mu(E)$ is guaranteed to be $\mathcal{B}_\wedge$-expressible as well. As we have already shown in Theorem 3, this is not necessarily the case if we replace $\Sigma$ by $G\mathit{Min}$. The following example shows how to obtain a similar behavior for $G\mathit{Max}$; we then generalize this idea below.
Example 3.
Let and . We have . is not expressible, since . Let be the KB with a single model for any and let us have a look at the following distance matrix for with , , and .
Recall that the lexicographic order of the involved vectors is . We thus get that (see also Theorem 3), and on the other hand, .
Theorem 9.
Any KB such that is distributable with respect to .
Proof.
If is expressible, then the conclusion follows from Proposition 1. If is not expressible, then consider the set . We define the profile , where , for . We show that , where .
First, we have that , which implies that , for any . Furthermore, since and , for any , it follows that and . Next, we show that .
Consider the vectors and . Our claim is that . To see why, notice that the elements in form a complete subset lattice with and as the top and bottom elements, respectively. Let us write . This lattice has elements, and the maximum distance of two elements in it is . Thus, the vector is the vector of distances between and every other element in this lattice, except itself and . A similar consideration holds for . Hence and are vectors of length whose elements are