On the Strong Equivalences of LPMLN Programs

09/18/2019 · by Bin Wang, et al.

By incorporating the methods of Answer Set Programming (ASP) and Markov Logic Networks (MLN), LPMLN becomes a powerful tool for representing and reasoning with non-monotonic, inconsistent, and uncertain knowledge. To facilitate the applications and extend the understanding of LPMLN, we investigate the strong equivalences between LPMLN programs in this paper, which is regarded as an important property in the field of logic programming. In ASP, two programs P and Q are strongly equivalent iff, for any ASP program R, the programs P and Q extended by R have the same stable models. In other words, an ASP program can be replaced by one of its strongly equivalent programs without considering its context, which helps us to simplify logic programs, enhance inference engines, construct human-friendly knowledge bases, etc. Since LPMLN is a combination of ASP and MLN, the notions of strong equivalence in LPMLN are quite different from those in ASP. Firstly, we present the notions of p-strong and w-strong equivalence between LPMLN programs. Secondly, we present a characterization of these notions by generalizing the SE-model approach of ASP. Finally, we show the use of strong equivalences in simplifying LPMLN programs, and present a sufficient and necessary syntactic condition that guarantees the strong equivalence between a single LPMLN rule and the empty program.


1 Introduction

LPMLN [10], a newly developed knowledge representation and reasoning language, is designed to handle non-monotonic and uncertain knowledge by combining the methods of Answer Set Programming (ASP) [3, 7] and Markov Logic Networks (MLN) [16]. Specifically, an LPMLN program can be viewed as a weighted ASP program, where each ASP rule is assigned a weight denoting its certainty degree, and each weighted rule is allowed to be violated by a set of beliefs associated with the program. For example, the LPMLN rule "" is a weighted constraint denoting that the facts and are contrary, where is the weight of the constraint. From the viewpoint of ASP, the set cannot be a belief set of any ASP program containing the constraint, while in the context of LPMLN, is a valid belief set. Since violates the constraint, the weight is regarded as the certainty degree of . It is easy to observe that the example can also be encoded by weak constraints in ASP. From this perspective, LPMLN can be viewed as an extension of ASP with weak constraints, that is, ASP with weak rules. In addition, several inference tasks are introduced for LPMLN, such as computing the marginal probability distribution of beliefs and computing the most probable belief sets, which makes LPMLN suitable for knowledge reasoning in contexts that contain uncertain and inconsistent data. For example, Eiter and Kaminski [6] used LPMLN in the task of classifying visual objects, and some unpublished work has tried to use LPMLN as a bridge between text and logical knowledge bases.

Recent results on LPMLN aim to establish the relationships between LPMLN and other logic formalisms [2, 12], develop LPMLN solvers [9, 18, 20], acquire the weights of rules automatically [11], and explore the properties of LPMLN [19]. All these results lay the foundation for problem solving via LPMLN; however, many theoretical problems of LPMLN remain unsolved, which prevents wider application of LPMLN. In this paper, we investigate the strong equivalences between LPMLN programs, which is regarded as an important property in the field of logic programming. Two ASP programs P and Q are strongly equivalent iff, for any ASP program R, the programs P and Q extended by R have the same stable models [13]. In other words, an ASP program can be replaced by one of its strongly equivalent programs without considering its context, which helps us to simplify logic programs, enhance inference engines, construct human-friendly knowledge bases, etc. For example, an ASP rule whose positive and negative bodies share an atom is strongly equivalent to the empty program [8, 14, 15]; therefore, such rules can be eliminated in any context, which leads to a more concise knowledge base and makes reasoning easier. By investigating the strong equivalences in LPMLN, we expect to improve knowledge base construction and knowledge reasoning in LPMLN and, furthermore, to facilitate the applications and extend the understanding of LPMLN.

Our contributions are as follows. Firstly, we define the notions of strong equivalence in LPMLN, namely the p-strong and w-strong equivalences. As shown in the above example, a stable model in LPMLN is associated with a certainty degree; therefore, the notions of strong equivalence in LPMLN are also relevant to certainty degrees. Secondly, we present a model-theoretic approach to characterizing the defined notions, which can be viewed as a generalization of the strong-equivalence model (SE-model) approach in ASP [17]. Finally, we show the use of the strong equivalences in simplifying LPMLN programs, and present a sufficient and necessary syntactic condition that guarantees the strong equivalence between a single LPMLN rule and the empty program.

2 Preliminaries

In this section, we review the knowledge representation and reasoning language LPMLN presented in [10]. An LPMLN program P is a finite set of weighted rules w : r, where w is the weight of the rule r, and r is an ASP rule of the form

l_1 ∨ … ∨ l_k ← l_{k+1}, …, l_m, not l_{m+1}, …, not l_n        (1)

where the l_i are literals, ∨ is epistemic disjunction, and not is default negation. The weight of an LPMLN rule is either a real number or the symbol "α" denoting "infinite weight"; if the weight is a real number, the rule is called soft, otherwise it is called hard. For convenience, we introduce some notation. By \overline{P} we denote the unweighted ASP counterpart of an LPMLN program P, i.e. \overline{P} = { r | w : r ∈ P }. For an ASP rule r of the form (1), the literals occurring in the head, positive body, and negative body of r are denoted by head(r), body⁺(r), and body⁻(r) respectively. Therefore, an ASP rule of the form (1) can also be abbreviated as "head(r) ← body⁺(r), not body⁻(r)". By lit(r) we denote the set of literals occurring in a rule r, and by lit(P) we denote the set of literals occurring in an ASP program P.
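To make the rule form and the satisfaction relation concrete, the following Python sketch encodes a weighted rule; the class and field names are our own convention, not notation from the paper.

```python
from dataclasses import dataclass

INF = float("inf")  # stands in for the "infinite" weight of hard rules

@dataclass(frozen=True)
class WeightedRule:
    head: frozenset       # head literals, read disjunctively
    pos: frozenset        # positive body literals
    neg: frozenset        # default-negated body literals
    weight: float = INF   # a real number (soft rule) or INF (hard rule)

    def satisfied_by(self, X: frozenset) -> bool:
        # Standard ASP satisfaction: if the body holds in X (positive body
        # included, negated body disjoint), some head literal must be in X.
        body_holds = self.pos <= X and not (self.neg & X)
        return (not body_holds) or bool(self.head & X)

# "a ; b :- c, not d." with weight 1.5 (a soft rule)
r = WeightedRule(frozenset({"a", "b"}), frozenset({"c"}), frozenset({"d"}), 1.5)
```

For instance, `r.satisfied_by(frozenset({"c", "d"}))` holds because the negated body blocks the rule, while `frozenset({"c"})` falsifies it.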

An LPMLN program is called ground if its rules contain no variables. Usually, a non-ground LPMLN program is considered a shorthand for the corresponding ground program; therefore, we limit our attention to the strong equivalences between ground LPMLN programs in this paper. For a ground LPMLN program P, we use W(P) to denote the weight degree of P, i.e. W(P) = exp( Σ_{w:r ∈ P} w ). A ground LPMLN rule w : r is satisfied by a consistent set X of ground literals, denoted by X ⊨ w : r, if X ⊨ r under the notion of satisfiability in ASP. An LPMLN program P is satisfied by X, denoted by X ⊨ P, if X satisfies all rules in P. By P_X we denote the LPMLN reduct of an LPMLN program P w.r.t. X, i.e. P_X = { w : r ∈ P | X ⊨ r }. A consistent set X of literals is a stable model of an ASP program Π if X satisfies all rules in Π^X and X is minimal in the sense of set inclusion, where Π^X is the Gelfond–Lifschitz reduct (GL-reduct) of Π w.r.t. X, i.e. Π^X = { head(r) ← body⁺(r) | r ∈ Π, body⁻(r) ∩ X = ∅ }. The set X is a stable model of an LPMLN program P if X is a stable model of the ASP program \overline{P_X}. By SM(P) we denote the set of all stable models of an LPMLN program P. For a stable model X of an LPMLN program P, the weight degree of X w.r.t. P is defined as W_P(X) = W(P_X), and the probability degree of X w.r.t. P is defined as

P_P(X) = W_P(X) / Σ_{Y ∈ SM(P)} W_P(Y)        (2)

For a literal l, the probability degree of l w.r.t. an LPMLN program P is defined as

P_P(l) = Σ_{X ∈ SM(P), l ∈ X} P_P(X)        (3)

A stable model X of an LPMLN program P is called a probabilistic stable model of P if P_P(X) ≠ 0. By PSM(P) we denote the set of all probabilistic stable models of P. It is easy to check that X is a probabilistic stable model of P iff X is a stable model of P that satisfies the greatest number of hard rules. Based on the above definitions, there are two main kinds of inference tasks for an LPMLN program [9]:

  • Maximum A Posteriori (MAP) inference: compute the stable models with the highest weight or probability degree of the program , i.e. the most probable stable model;

  • Marginal Probability Distribution (MPD) inference: compute the probability degrees of a set of literals w.r.t. the program .
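Assuming the stable models and the satisfaction relation have already been computed by a solver, the two degrees and the two inference tasks can be sketched as follows; the rule weights and model names are made up for illustration.

```python
import math

# Toy data: for each soft rule, its weight and the stable models
# (named "X1", "X2") that satisfy it.
rules = [(1.0, {"X1", "X2"}),
         (2.0, {"X2"})]
stable_models = ["X1", "X2"]

def weight_degree(X):
    # W(X): exponentiated sum of the weights of the rules satisfied by X
    return math.exp(sum(w for w, sat in rules if X in sat))

def probability_degree(X):
    # Equation (2): W(X) normalized over all stable models
    z = sum(weight_degree(Y) for Y in stable_models)
    return weight_degree(X) / z

# MAP inference: the most probable stable model
map_model = max(stable_models, key=weight_degree)

# MPD inference for a literal: Equation (3), summing the probabilities
# of the stable models that contain the literal.
def marginal(models_containing_literal):
    return sum(probability_degree(X) for X in models_containing_literal)
```

Here "X2" satisfies both rules, so it is the MAP model with probability e³ / (e¹ + e³).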

3 Strong Equivalences for LPMLN

In this section, we investigate the strong equivalences in LPMLN. Firstly, we define the notions of strong equivalences based on two different certainty degrees in LPMLN. Secondly, we present a model-theoretical approach to characterizing the notions. Finally, we present the relationships among these notions.

3.1 Notions of Strong Equivalences

The notion of strong equivalence is built on the notion of ordinary equivalence. In this section, we define two notions of ordinary equivalence between LPMLN programs, based on the weight and probability degrees defined for stable models in LPMLN.

Definition 1 (w-ordinary equivalence).

Two LPMLN programs P and Q are w-ordinarily equivalent if their stable models coincide and, for each stable model X of the programs, W_P(X) = W_Q(X).

Definition 2 (p-ordinary equivalence).

Two LPMLN programs P and Q are p-ordinarily equivalent if their stable models coincide and, for each stable model X of the programs, P_P(X) = P_Q(X).

From Definition 1 and Definition 2, it can be observed that both w-ordinary and p-ordinary equivalence guarantee that two LPMLN programs have the same MAP and MPD inference results, and that p-ordinary equivalence is slightly weaker, i.e. if two LPMLN programs are w-ordinarily equivalent, then they are p-ordinarily equivalent, but the converse does not hold in general. Based on the definitions of ordinary equivalence, we can define two kinds of strong equivalence between LPMLN programs.
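The gap between the two ordinary equivalences can be seen with made-up numbers: if every weight degree of Q is a constant multiple of the corresponding weight degree of P (e.g. because Q contains one extra soft rule satisfied by every stable model), the probability degrees coincide while the weight degrees differ.

```python
import math

# Illustrative weight degrees of the two shared stable models under P and Q.
W_P = {"X1": 2.0, "X2": 6.0}
W_Q = {X: math.e * w for X, w in W_P.items()}  # Q's degrees scaled by e

def probabilities(W):
    z = sum(W.values())
    return {X: w / z for X, w in W.items()}

P_P, P_Q = probabilities(W_P), probabilities(W_Q)
# Same probability degrees => p-ordinarily equivalent ...
same_probs = all(abs(P_P[X] - P_Q[X]) < 1e-12 for X in W_P)
# ... but different weight degrees => not w-ordinarily equivalent.
different_weights = W_P != W_Q
```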

Definition 3 (strong equivalences for LPMLN).

For two LPMLN programs P and Q,

  • they are w-strongly equivalent if, for any LPMLN program R, the programs P ∪ R and Q ∪ R are w-ordinarily equivalent;

  • they are p-strongly equivalent if, for any LPMLN program R, the programs P ∪ R and Q ∪ R are p-ordinarily equivalent.

The notions of w-strong and p-strong equivalences can guarantee the faithful replacement of an LPMLN program in any context. Here, we introduce a new notion of strong equivalence, semi-strong equivalence, that does not guarantee the faithful replacement, but helps us to simplify the characterizations of other strong equivalences.

Definition 4 (semi-strong equivalence).

Two LPMLN programs P and Q are semi-strongly equivalent if, for any LPMLN program R, the programs P ∪ R and Q ∪ R have the same stable models.

3.2 Characterizations of Strong Equivalences

In this section, we present the characterizations of w-strong and p-strong equivalence. From Definition 3 and Definition 4, the notions of w-strong and p-strong equivalence can be viewed as semi-strong equivalence strengthened by certainty evaluations. Therefore, we first present the characterization of semi-strong equivalence, which serves as the basis for characterizing w-strong and p-strong equivalence.

3.2.1 Characterizing Semi-Strong Equivalence

Here, we characterize the semi-strong equivalence between LPMLN programs by generalizing the strong-equivalence model (SE-model) approach presented in [17]. For convenience, we introduce the following notions.

Definition 5 (SE-interpretation).

A strong-equivalence interpretation (SE-interpretation) is a pair (X, Y) of consistent sets of literals such that X ⊆ Y. An SE-interpretation (X, Y) is called total if X = Y, and non-total if X ⊂ Y.

Definition 6 (SE-models for LPMLN).

For an LPMLN program P, an SE-interpretation (X, Y) is an SE-model of P if Y ⊨ \overline{P_Y}^Y and X ⊨ \overline{P_Y}^Y, where P_Y is the LPMLN reduct of P w.r.t. Y.

In Definition 6, \overline{P_Y}^Y is an ASP program obtained from P by a three-step transformation. In the first step, P_Y is obtained from P by removing all rules that cannot be satisfied by Y, which is the LPMLN reduct of P w.r.t. Y. In the second step, \overline{P_Y} is obtained by dropping the weight of each rule in P_Y. In the third step, \overline{P_Y}^Y is obtained by the GL-reduct. Clearly, an SE-model of the LPMLN program P is an SE-model of a consistent unweighted subset of P obtained by the LPMLN reduct, which means the definition of SE-models for LPMLN programs is built on the definition of SE-models for ASP programs. In what follows, we use SE(P) to denote the set of all SE-models of an LPMLN program P.
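Under our reading of Definition 6 (and with a homemade rule encoding: a rule is a triple of literal sets, paired with a weight), the three-step transformation and the SE-model test can be sketched as:

```python
# A weighted rule is (weight, (head, pos, neg)), each component a frozenset.
def satisfies(X, rule):
    head, pos, neg = rule
    body_holds = pos <= X and not (neg & X)
    return (not body_holds) or bool(head & X)

def lpmln_reduct(P, Y):
    # Step 1: keep only the rules satisfied by Y; step 2: drop the weights.
    return [r for w, r in P if satisfies(Y, r)]

def gl_reduct(rules, Y):
    # Step 3: GL-reduct w.r.t. Y -- delete rules whose negated body
    # intersects Y, then erase the remaining negated bodies.
    return [(h, p, frozenset()) for (h, p, n) in rules if not (n & Y)]

def is_se_model(P, X, Y):
    # (X, Y) with X <= Y is an SE-model of P if both Y and X satisfy
    # the GL-reduct of the unweighted LPMLN reduct of P w.r.t. Y.
    reduced = gl_reduct(lpmln_reduct(P, Y), Y)
    return X <= Y and all(satisfies(Y, r) for r in reduced) \
                  and all(satisfies(X, r) for r in reduced)
```

For the one-rule program "1.0 : a :- not b.", the pair ({a}, {a}) is an SE-model while (∅, {a}) is not, because the reduct contains the fact a.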

Definition 7.

For an LPMLN program P and an SE-model (X, Y) of P, the weight degree of (X, Y) w.r.t. the program P is defined as

W_P((X, Y)) = exp( Σ_{w:r ∈ P_Y} w )        (4)
Example 1.

Consider an LPMLN program . For the set , it is easy to check that ; therefore, the LPMLN reduct is itself. By the definition of the GL-reduct, ; therefore, both and are SE-models of , and .

Now, we show some useful properties of SE-models for LPMLN programs. Proposition 1 follows immediately from the definition of SE-models.

Proposition 1.

Let be an LPMLN program and an SE-interpretation,

  • if , then is an SE-model of ;

  • is not an SE-model of , iff .

Proposition 2 shows the relationships between the SE-models and the stable models of an LPMLN program.

Proposition 2.

For an LPMLN program and a total SE-model of ,

  • there must be an LPMLN program such that is a stable model of , for example, ;

  • is a stable model of , iff for any proper subset of .
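Proposition 2's second item yields a brute-force stable-model test for small programs: a total SE-model (Y, Y) corresponds to a stable model exactly when no proper subset of Y also satisfies the reduct. A self-contained sketch for unweighted ground rules (triples of head, positive body, negative body; the encoding is our own):

```python
from itertools import combinations

def satisfies(X, rule):
    head, pos, neg = rule
    return not (pos <= X and not (neg & X)) or bool(head & X)

def gl_reduct(rules, Y):
    # GL-reduct: drop rules blocked by Y, erase the remaining negated bodies.
    return [(h, p, frozenset()) for (h, p, n) in rules if not (n & Y)]

def is_stable(rules, Y):
    red = gl_reduct(rules, Y)
    if not all(satisfies(Y, r) for r in red):
        return False
    # Minimality: no proper subset of Y may satisfy the reduct.
    proper = (frozenset(c) for k in range(len(Y))
              for c in combinations(sorted(Y), k))
    return not any(all(satisfies(X, r) for r in red) for X in proper)

# {a :- not b.} has the single stable model {a}.
prog = [(frozenset({"a"}), frozenset(), frozenset({"b"}))]
```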

Based on the above results, a characterization of the semi-strong equivalence between LPMLN programs is presented in Lemma 1.

Lemma 1.

Let P and Q be two LPMLN programs; they are semi-strongly equivalent iff they have the same SE-models, i.e. SE(P) = SE(Q).

Proof.

The proof proceeds basically along the lines of the corresponding proof by Turner [17].

For the if direction, suppose , we need to prove that for any LPMLN program , the programs and have the same stable models. We use proof by contradiction. Assume is a set of literals such that . By the definition, we have . By Proposition 1, we have is an SE-model of . Hence, is also an SE-model of . Then, we have and . By the assumption , there exists a consistent set of literals such that , then we have and , hence, is an SE-model of , which means is also an SE-model of . By the definition of stable model, cannot be a stable model of , which contradicts the assumption . Therefore, the programs and have the same stable models, and the if direction of Lemma 1 is proven.

For the only-if direction, suppose , we need to prove that . We use proof by contradiction. Assume is an SE-interpretation such that . By Proposition 1, we have . Let . We have . Let be a set of literals such that and . By the construction of , we have . Since , we have . Hence, there must exist a literal such that . By the construction of , we have , which means . By the definition of stable models, is a stable model of , which means should also be a stable model of . By the definition of stable model, cannot be an SE-model of , which contradicts the assumption . Therefore, and have the same SE-models, and the only-if direction of Lemma 1 is proven. ∎

3.2.2 Characterizing W-Strong and P-Strong Equivalences

Now we present the main results of the paper, that is, the characterizations of w-strong and p-strong equivalence. Based on Lemma 1, Lemma 2 provides a sufficient condition for the p-strong equivalence of LPMLN programs.

Lemma 2.

Two LPMLN programs and are p-strongly equivalent, if , and there exist two constants and such that for each SE-model , .

Proof.

For two LPMLN programs and , by Lemma 1, if , then and are semi-strongly equivalent, i.e. for any LPMLN program , . Suppose there exist two constants and such that for each SE-model , ; we need to show that and are p-strongly equivalent. Let be an LPMLN program; it is easy to check that is a probabilistic stable model of iff is a probabilistic stable model of , i.e. . For a stable model , the probability degree of can be reformulated as

(5)

By the definition of p-strong equivalence, we have . ∎

The condition in Lemma 2, called the PSE-condition, is sufficient for the p-strong equivalence. One may ask whether the PSE-condition is also necessary. To answer this question, we need to consider the hard rules of LPMLN in particular. For LPMLN programs containing no hard rules, it is easy to check that the PSE-condition is necessary. For arbitrary LPMLN programs, however, this is not an immediate result, as shown below. Firstly, we introduce some notation. For a set of literals, we use to denote the power set of , and use to denote the maximal consistent part of the power set of , i.e. .
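For finite sets of literals, both notations can be computed directly. A small sketch, representing the classical negation of a literal by a "-" prefix (our own convention):

```python
from itertools import chain, combinations

def power_set(L):
    # All subsets of L, as frozensets.
    s = sorted(L)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, k) for k in range(len(s) + 1))]

def negation(l):
    # Literals are strings; "-a" is the classical negation of "a".
    return l[1:] if l.startswith("-") else "-" + l

def consistent_power_set(L):
    # The maximal consistent part: drop every subset that contains
    # a complementary pair of literals.
    return [S for S in power_set(L) if not any(negation(l) in S for l in S)]
```

For L = {a, -a}, the power set has four elements, of which three are consistent (only {a, -a} is dropped).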

Lemma 3.

For two p-strongly equivalent LPMLN programs and , let and be arbitrary LPMLN programs such that . There exist two constants and such that for any SE-models of , if , then .

By Lemma 3, for two p-strongly equivalent LPMLN programs and , to prove the necessity of the PSE-condition, we need to find a set of LPMLN programs satisfying

  • , ; and

  • , where is the set of literals occurred in and , i.e. .

The above set is called a set of necessary extensions w.r.t. LPMLN programs and . As shown in Proposition 1, an arbitrary total SE-interpretation is an SE-model of some LPMLN program; therefore, if there exists a set of necessary extensions for two p-strongly equivalent programs and , then the necessity of the PSE-condition can be proven. In what follows, we present a method for constructing a set of necessary extensions.

Definition 8.

For two consistent sets and of literals, and an atom such that , by we denote an LPMLN program as follows

(6)
(7)
Definition 9 (flattening extension).

For an LPMLN program and a set of literals such that , a flattening extension of w.r.t. is defined as

  • ;

  • ,

where is a set of weighted facts constructed from , i.e. , is a probabilistic stable model of , i.e. , and .

According to the splitting set theorem for LPMLN [19], the flattening extension has the following properties.

Proposition 3.

For an LPMLN program and a set of literals, if is constructed from by adding , then we have

  • ;

  • ; and

  • the weight degrees of stable models have following relationships

    (8)

    and for two stable models and of , if , then .

Example 2.

Let be the LPMLN program in Example 1, and a set of literals . By Definition 9, ; it is easy to check that all subsets of are stable models of , and that is the unique probabilistic stable model. By Definition 8, is as follows

(9)
(10)

and we have . The stable models of , , and and their weight degrees are shown in Table 1. From the table, we can observe that the flattening extension can be used to adjust which sets of literals satisfy the greatest number of hard rules.

Table 1: Computing Results in Example 2
Lemma 4.

Let and be two p-strongly equivalent LPMLN programs, and . For two consistent subsets and of , there exists a flattening extension such that and are probabilistic stable models of .

Lemma 4 provides a method for constructing a set of necessary extensions of two p-strongly equivalent LPMLN programs by constructing a set of flattening extensions, which means the PSE-condition is also necessary for the p-strong equivalence of LPMLN programs.

Theorem 1.

Let and be two LPMLN programs,

  • and are p-strongly equivalent iff , and there exist two constants and such that for each SE-model , ;

  • and are w-strongly equivalent iff they are p-strongly equivalent and the constants .

Example 3.

Consider LPMLN programs and , where is a variable denoting the weight of the corresponding rule. It is easy to check that is the unique non-total SE-model of and ; therefore, and are semi-strongly equivalent. If the programs are also p-strongly equivalent, we have the following system of linear equations, where and .

(11)

Solving the system of equations, we find that and are p-strongly equivalent iff and , and that they are w-strongly equivalent iff and .

4 Simplifying LPMLN Programs

The notions of strong equivalence can be used to study the simplification of logic programs. Specifically, if LPMLN programs and are strongly equivalent, and the program is easier to solve or more human-friendly, then can be replaced by . In this section, we investigate the simplification of LPMLN programs using the notions of strong equivalence. In particular, we first present an algorithm to simplify and solve LPMLN programs based on strong equivalences. Then, we present syntactic conditions that guarantee the strong equivalence between a single LPMLN rule and the empty set, which can be used to check the strong equivalences efficiently.

Definition 10.

An LPMLN rule is called semi-valid if it is semi-strongly equivalent to the empty set; the rule is called valid if it is p-strongly equivalent to the empty set.

In Definition 10, we specify two kinds of LPMLN rules w.r.t. semi-strong and p-strong equivalence. Obviously, a valid LPMLN rule can be eliminated from any LPMLN program, while a semi-valid LPMLN rule cannot. By the definition, eliminating a semi-valid LPMLN rule does not change the stable models of the original program, but it does change the probability distribution over the stable models, which means it may change the probabilistic stable models of the original program.

Example 4.

Consider three LPMLN programs , , and . It is easy to check that the rules in and are valid and semi-valid, respectively. Table 2 shows the stable models of , , and and their probability degrees. It can be observed that eliminating the rule of from makes all stable models of probabilistic, which means semi-valid rules cannot simply be eliminated.

Table 2: Computing Results in Example 4

Algorithm 1 provides a framework for simplifying and solving LPMLN programs based on the notions of semi-valid and valid LPMLN rules. Firstly, an LPMLN program is simplified by removing all semi-valid and valid rules (lines 2 - 8). Then, the stable models of the simplified LPMLN program are computed using an existing LPMLN solver, such as LPMLN2ASP, LPMLN2MLN [9], or LPMLN-Models [20]. Finally, the probability degrees of the stable models are computed w.r.t. the simplified program and all semi-valid rules (lines 9 - 12). The correctness of the algorithm can be proven from the corresponding definitions.

Input: an LPMLN program
Output: stable models of and their probability degrees
1 , ;
2 foreach  do
3       if  is valid then
4             ;
5            
6      else
7             if  is semi-valid then
8                   ;
9                   ;
10                  
11            
12      
13;
14 foreach  do
15       ;
16      
17Compute probability degrees for each stable model by Equation (2) and ;
return and corresponding probability degrees
Algorithm 1 Simplify and Solve LPMLN Programs
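The control flow of Algorithm 1 can be restated in Python; here `is_valid`, `is_semi_valid`, `solve`, and `satisfies` are hypothetical stand-ins for the validity checks of this section, an external LPMLN solver, and the ASP satisfaction test.

```python
import math

def simplify_and_solve(program, is_valid, is_semi_valid, solve, satisfies):
    """program: list of (weight, rule) pairs.
    Returns a dict mapping each stable model to its probability degree."""
    kept, semi = [], []
    for w, r in program:
        if is_valid(r):
            continue                 # valid rules are safe to drop (lines 3-4)
        elif is_semi_valid(r):
            semi.append((w, r))      # set aside: they only shift probabilities
        else:
            kept.append((w, r))
    models = solve(kept)             # stable models of the simplified program
    def weight(X):                   # weight degree over kept + semi-valid rules
        return math.exp(sum(w for w, r in kept + semi if satisfies(X, r)))
    z = sum(weight(X) for X in models)
    return {X: weight(X) / z for X in models}
```

The key point mirrors the algorithm: semi-valid rules are excluded from solving but re-enter when the probability degrees are computed.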
Name Definition Strong Equivalence
TAUT p, semi
CONTRA p, semi
CONSTR1 semi
CONSTR2 semi
CONSTR3 , , and p, semi
Table 3: Syntactic Conditions

In Algorithm 1, a crucial problem is to decide whether an LPMLN rule is valid or semi-valid. Theoretically, this can be done by checking the SE-models of the rule; however, that approach is computationally expensive. Therefore, we investigate syntactic conditions for the problem. Table 3 shows five syntactic conditions for a rule , where TAUT and CONTRA have been introduced in the study of program simplification for ASP [15, 5], CONSTR1 means the rule is a constraint, and CONSTR3 is a special case of CONSTR1. Rules satisfying CONSTR2 are usually used to eliminate constraints in ASP; for example, rule "" is equivalent to rule "" if the atom does not occur in other rules. Based on these conditions, we present the characterization of semi-valid and valid LPMLN rules.
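The purely local conditions can be checked in time linear in the rule size. A sketch of the three conditions that depend only on the rule itself, using the standard ASP readings of TAUT and CONTRA (the paper's exact definitions are in Table 3); CONSTR2 is omitted because it also inspects the rest of the program.

```python
def is_taut(head, pos, neg):
    # TAUT: a head literal already occurs in the positive body,
    # so the rule derives nothing new.
    return bool(head & pos)

def is_contra(head, pos, neg):
    # CONTRA: the positive and negative bodies share a literal,
    # so the body can never hold.
    return bool(pos & neg)

def is_constr1(head, pos, neg):
    # CONSTR1: the rule is a constraint, i.e. its head is empty.
    return not head
```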

Theorem 2.

An LPMLN rule is semi-valid, iff the rule satisfies one of TAUT, CONTRA, CONSTR1 and CONSTR2.

Theorem 3.

An LPMLN rule is valid, iff one of the following conditions is satisfied:

  • rule satisfies one of TAUT, CONTRA, and CONSTR3; or

  • rule satisfies CONSTR1 or CONSTR2, and .

Theorem 2 and Theorem 3 can be proven using Lemma 1 and Theorem 1. It is worth noting that conditions CONSTR1 and CONSTR2 mean that the only effect of constraints in LPMLN is to change the probability distribution of inference results, which can also be observed in Example 2. In this sense, the constraints in LPMLN can be regarded as the weak constraints of ASP, and Algorithm 1 is similar to the algorithm for solving ASP programs with weak constraints. In both algorithms, stable models are computed after removing (weak) constraints, and the certainty evaluations of the stable models are computed by taking these constraints back into account.

Combining Theorem 2 and Theorem 3, Algorithm 1 offers an alternative approach to enhancing LPMLN solvers. In addition, Theorem 2 and Theorem 3 also contribute to the field of knowledge acquisition. On the one hand, although it is unlikely that rules of the form TAUT, CONTRA, and CONSTR3 would be constructed by a skillful knowledge engineer, such rules may be obtained from data via rule learning. Therefore, we can use TAUT, CONTRA, and CONSTR3 as heuristic information to improve the results of rule learning. On the other hand, CONSTR1 and CONSTR2 imply a methodology of problem modeling in LPMLN: we can encode objects and relations by LPMLN rules and facts, and adjust the certainty degrees of inference results by LPMLN constraints. In fact, this is the core idea of ASP with weak constraints; LPMLN is more flexible by contrast, since it provides weak facts and rules besides weak constraints.

5 Conclusion and Future Work

In this paper, we present three notions of strong equivalence between LPMLN programs, obtained by comparing the certainty degrees of stable models in different ways, i.e. semi-strong, w-strong, and p-strong equivalence, where w-strong equivalence is the strongest notion and semi-strong equivalence is the weakest. For each notion, we present a sufficient and necessary condition that characterizes it, which can be viewed as a generalization of the SE-model approach of ASP. After that, we present a sufficient and necessary condition that guarantees the strong equivalence between a single LPMLN rule and the empty set, and we present an algorithm to simplify and solve LPMLN programs using this condition. The condition can also be used to improve knowledge acquisition and to deepen the understanding of the methodology of problem modeling in LPMLN.

As shown in this paper, there is a close relationship between LPMLN and ASP; in particular, the constraints in LPMLN can be regarded as the weak constraints of ASP. Concerning related work, the strong equivalence of ASP programs with weak constraints (abbreviated ASPwc) has been investigated [4]. It is easy to observe that the strong equivalence of ASPwc and its characterizations can be viewed as a special case of the p-strong equivalence in LPMLN.

In future work, we plan to improve the equivalence checking presented in this paper and to use these techniques to enhance LPMLN solvers. We also plan to extend the strong equivalence discovering method introduced in [14] to LPMLN, which would help us decide strong equivalence via syntactic conditions.

6 Acknowledgments

We are grateful to the anonymous referees for their useful comments. This work was supported by the National Key Research and Development Plan of China (Grant No. 2017YFB1002801).

References

  • [2] Evgenii Balai & Michael Gelfond (2016): On the Relationship between P-log and LPMLN. In Subbarao Kambhampati, editor:

    Proceedings of the 25th International Joint Conference on Artificial Intelligence

    , pp. 915–921.
  • [3] Gerhard Brewka, Thomas Eiter & Mirosław Truszczyński (2011): Answer Set Programming at a Glance. Communications of the ACM 54(12), pp. 92–103, doi:http://dx.doi.org/10.1145/2043174.2043195.
  • [4] Thomas Eiter, Wolfgang Faber, Michael Fink & Stefan Woltran (2007): Complexity results for answer set programming with bounded predicate arities and implications. Annals of Mathematics and Artificial Intelligence 51(2-4), pp. 123–165, doi:http://dx.doi.org/10.1007/s10472-008-9086-5.
  • [5] Thomas Eiter, Michael Fink, Hans Tompits & Stefan Woltran (2004): Simplifying Logic Programs Under Uniform and Strong Equivalence. In: Proceedings of the 7th International Conference on Logic Programming and Nonmonotonic Reasoning, pp. 87–99, doi:http://dx.doi.org/10.1007/978-3-540-24609-1˙10.
  • [6] Thomas Eiter & Tobias Kaminski (2016): Exploiting Contextual Knowledge for Hybrid Classification of Visual Objects. In Jürgen Dix, Luís Fariñas del Cerro & Ulrich Furbach, editors: Proceedings of the 15th European Conference on Logics in Artificial Intelligence, Lecture Notes in Computer Science 10021, Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 223–239, doi:http://dx.doi.org/10.1007/978-3-319-48758-8˙15.
  • [7] Michael Gelfond & Vladimir Lifschitz (1988): The Stable Model Semantics for Logic Programming. In Robert A. Kowalski & Kenneth A. Bowen, editors: Proceedings of the Fifth International Conference and Symposium on Logic Programming, MIT Press, pp. 1070–1080.
  • [8] Katsumi Inoue & Chiaki Sakama (2004): Equivalence of Logic Programs Under Updates. In: Proceedings of the 9th European Workshop on Logics in Artificial Intelligence, 3229, pp. 174–186, doi:http://dx.doi.org/10.1007/978-3-540-30227-8˙17.
  • [9] Joohyung Lee, Samidh Talsania & Yi Wang (2017): Computing LP MLN using ASP and MLN solvers. Theory and Practice of Logic Programming 17(5-6), pp. 942–960, doi:http://dx.doi.org/10.1017/S1471068417000400.
  • [10] Joohyung Lee & Yi Wang (2016): Weighted Rules under the Stable Model Semantics. In Chitta Baral, James P. Delgrande & Frank Wolter, editors: Proceedings of the Fifteenth International Conference on Principles of Knowledge Representation and Reasoning, AAAI Press, pp. 145–154.
  • [11] Joohyung Lee & Yi Wang (2018): Weight Learning in a Probabilistic Extension of Answer Set Programs. In: Proceedings of the 16th International Conference on the Principles of Knowledge Representation and Reasoning, pp. 22–31.
  • [12] Joohyung Lee & Zhun Yang (2017): LPMLN, Weak Constraints, and P-log. In Satinder P. Singh & Shaul Markovitch, editors: Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI Press, pp. 1170–1177.
  • [13] Vladimir Lifschitz, David Pearce & Agustín Valverde (2001): Strongly equivalent logic programs. ACM Transactions on Computational Logic 2(4), pp. 526–541, doi:http://dx.doi.org/10.1145/383779.383783.
  • [14] Fangzhen Lin & Yin Chen (2007): Discovering Classes of Strongly Equivalent Logic Programs. Journal of Artificial Intelligence Research 28, pp. 431–451, doi:http://dx.doi.org/10.1613/jair.2131.
  • [15] Mauricio Osorio, Juan Antonio Navarro & José Arrazola (2001): Equivalence in Answer Set Programming. In: Proceedings of the 11th International Workshop on Logic Based Program Synthesis and Transformation, pp. 57–75, doi:http://dx.doi.org/10.1007/3-540-45607-4˙4.
  • [16] Matthew Richardson & Pedro Domingos (2006): Markov logic networks. Machine Learning 62(1-2), pp. 107–136, doi:http://dx.doi.org/10.1007/s10994-006-5833-1.
  • [17] Hudson Turner (2001): Strong Equivalence for Logic Programs and Default Theories (Made Easy). In: Proceedings of the 6th International Conference on Logic Programming and Nonmonotonic Reasoning, pp. 81–92, doi:http://dx.doi.org/10.1007/3-540-45402-0˙6.
  • [18] Bin Wang & Zhizheng Zhang (2017): A Parallel LPMLN Solver: Primary Report. In Bart Bogaerts & Amelia Harrison, editors: Proceedings of the 10th Workshop on Answer Set Programming and Other Computing Paradigms, CEUR-WS, Espoo, Finland, pp. 1–14.
  • [19] Bin Wang, Zhizheng Zhang, Hongxiang Xu & Jun Shen (2018): Splitting an LPMLN Program. In: Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, pp. 1997–2004.
  • [20] Wei Wu, Hongxiang Xu, Shutao Zhang, Jiaqi Duan, Bin Wang, Zhizheng Zhang, Chenglong He & Shiqiang Zong (2018): LPMLNModels: A Parallel Solver for LPMLN. In: 2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI), IEEE, pp. 794–799, doi:http://dx.doi.org/10.1109/ICTAI.2018.00124.