1 Introduction
Prime compilation is a fundamental problem for AI. Given a non-clausal Boolean formula, prime compilation aims to generate all the primes of the formula. A prime contains no redundant literals, so it represents refined information. Because of that, this problem has wide applications, including logic minimization [Ignatiev et al.2015], multi-agent systems [Slavkovik and Agotnes2014], fault tree analysis [Luo and Wei2017], model checking [Bradley and Manna2007], bioinformatics [Acuna et al.2012], etc.
This problem is computationally hard. For a non-clausal Boolean formula, the number of primes may be exponential in the size of the formula, while finding even one prime is hard for the second level of the polynomial hierarchy. In practice, most problems can hardly be expressed directly in clausal formulae [Stuckey2013]. Hence, non-clausal formulae are often transformed into CNF by encoding methods such as Tseitin encoding [Tseitin1968], which reduce the complexity by adding auxiliary variables. Most earlier works only generate all primes of a CNF formula; they cannot directly compute all primes of a non-clausal formula. Therefore, this issue for non-clausal formulae has received great attention [Previti et al.2015].
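As an illustration of such clausification, the following sketch (ours, not taken from the paper) applies Tseitin encoding to a small formula; the nested-tuple formula format and the variable numbering are assumptions made for this example.

```python
# A minimal Tseitin-encoding sketch (illustrative, not the paper's code).
# Formulas are nested tuples like ("or", ("and", 1, 2), 3); leaves are
# integer literals, and clauses are lists of integer literals.
def tseitin(formula, next_var, clauses):
    """Encode `formula` into `clauses`; return the literal representing it."""
    if isinstance(formula, int):              # a leaf is already a literal
        return formula
    op, lhs, rhs = formula
    a = tseitin(lhs, next_var, clauses)
    b = tseitin(rhs, next_var, clauses)
    v = next_var[0]                           # fresh auxiliary variable
    next_var[0] += 1
    if op == "and":                           # v <-> (a & b)
        clauses += [[-v, a], [-v, b], [v, -a, -b]]
    else:                                     # "or": v <-> (a | b)
        clauses += [[-v, a, b], [v, -a], [v, -b]]
    return v

clauses = []
root = tseitin(("or", ("and", 1, 2), 3), [4], clauses)
clauses.append([root])                        # assert the formula itself
print(len(clauses))                           # 7 clauses, 2 auxiliary vars
```

The encoding is linear in the formula size, at the price of the two auxiliary variables (4 and 5 here) that a prime-compilation procedure must later see through.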
The state-of-the-art approaches [Previti et al.2015] are capable of generating all primes of a non-clausal formula through several iterations. They use dual-rail encoding [Bryant et al.1987, Roorda and Claessen2005] to encode the search space throughout the whole procedure. Either a prime implicate or a prime implicant is computed at each iteration until a prime cover and all primes are obtained. The cover is logically equivalent to the non-clausal formula, which guarantees that all the primes can be obtained. In particular, they extract a prime from an assignment based on the asymptotically optimal QuickXplain algorithm [Junker2004, Bradley and Manna2007].
There are three issues in their methods. (i) The dual-rail encoding, with twice as many variables as the original encoding, results in a larger search space. (ii) Constructing a prime cover is time-consuming because it must completely remove all redundant literals. (iii) It requires a minimal or maximal assignment to ensure correctness, which often has a negative influence on SAT solving. These issues may explain why their performance on computationally hard cases is still not satisfactory. Notably, it is questionable whether finding a prime cover has practical value: although the prime cover can be smaller, the influence of the size of the cover on the other parts of the algorithm is only vaguely known.
We propose a novel two-phase method, CoAPI, that focuses on an over-approximate cover with a good trade-off between the small size of the cover and efficiency. We stay within the idea of the work [Previti et al.2015] that generates all primes based on a cover. However, we use two separate phases to avoid using dual-rail encoding in all phases. We construct a cover without dual-rail encoding in the first phase. In the second phase, we generate all primes with it. Furthermore, we introduce the notion of the over-approximate implicate (AIP for short), which is an implicate containing as few literals as possible. We construct a cover with a set of AIPs, called an over-approximate cover (AC for short), rather than with a prime cover. Note that an AC is also logically equivalent to the non-clausal formula.
There are two challenges in our work. The first is the efficient computation of AIPs. Motivated by the applications of the unsatisfiable core in optimizing large-scale search problems [Narodytska and Bacchus2014, Yamada et al.2016], we propose a core-guided method to produce AIPs. A smaller unsatisfiable core containing fewer literals should be obtained efficiently, which helps to reduce the number of AIPs in the cover. This is the second challenge: producing smaller unsatisfiable cores. We notice that SAT solvers based on the two-literal watching scheme [Moskewicz et al.2001] cannot produce the smallest unsatisfiable core because of the limitation of partial watching. To address this, we provide a multi-order based shrinking method, in which we define different decision orders to guide the shrinking of unsatisfiable cores in an iterative framework.
We evaluate CoAPI on four benchmarks introduced by [Previti et al.2015]. The experimental results show that CoAPI exhibits better performance than the state-of-the-art methods. In particular, for generating all prime implicates, CoAPI is about one order of magnitude faster.
The paper is organized as follows. Section 2 first introduces the basic concepts. Then, Section 3 and Section 4 present the main features of CoAPI in detail. After that, Section 5 reports the experiments. Finally, Section 6 discusses related works and Section 7 concludes this paper.
Due to space limits, omitted proofs and supporting materials are provided in the additional file and the online appendix (http://tinyurl.com/IJCAI19233).
2 Preliminaries
This section introduces the notations and backgrounds.
A term t is a conjunction of literals, represented as a set of literals. |t| is the size of t, i.e., the number of literals in t. Given a Boolean formula F, a model is an assignment satisfying F. In particular, a model is said to be minimal (resp. maximal) when it contains the minimal (resp. maximal) number of variables assigned 1. A clause c is a disjunction of literals, which is also represented by the set of its literals. The size |c| of a clause is the number of literals in it. A Boolean formula is in conjunctive normal form (CNF) if it is formed as a conjunction of clauses; it is then denoted as a set of clauses. For a Boolean formula F in CNF, |F| means the sum of the sizes of the clauses in F. Two Boolean formulae are logically equivalent iff they are satisfied by the same models.
Definition 1.
A clause c is called an implicate of F if F ⊨ c. In particular, c is called prime if any clause c′ ⊊ c is not an implicate of F.
Definition 2.
A term t is called an implicant of F if t ⊨ F. In particular, t is called prime if any term t′ ⊊ t is not an implicant of F.
Prime compilation aims to compute all the prime implicates, or all the prime implicants, of F.
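To make Definitions 1-2 concrete, here is a brute-force check of whether a term is a (prime) implicant. This is our own illustration, not the paper's method; real tools replace the enumeration by SAT queries.

```python
from itertools import product

# Brute-force illustration of Definitions 1-2 (ours, not the paper's code).
# A CNF formula is a list of clauses; clauses and terms are sets of
# integer literals; an assignment is the set of literals it makes true.
def satisfies(cnf, assignment):
    return all(any(l in assignment for l in cl) for cl in cnf)

def assignments(variables):
    for bits in product([False, True], repeat=len(variables)):
        yield {v if b else -v for v, b in zip(variables, bits)}

def is_implicant(term, cnf, variables):        # term |= F ?
    return all(satisfies(cnf, a)
               for a in assignments(variables) if term <= a)

def is_prime_implicant(term, cnf, variables):  # no literal can be dropped
    return is_implicant(term, cnf, variables) and not any(
        is_implicant(term - {l}, cnf, variables) for l in term)

cnf = [{1, 2}, {1, 3}]                         # (x1 | x2) & (x1 | x3)
print(is_prime_implicant({1}, cnf, [1, 2, 3]))     # True
print(is_prime_implicant({1, 2}, cnf, [1, 2, 3]))  # False: x2 is redundant
```

The second call fails exactly because a proper subterm, {x1}, is already an implicant, matching the primality condition of Definition 2.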
Given a Boolean formula F, if F is unsatisfiable, a SAT solver based on CDCL, such as MiniSAT [Eén and Sörensson2003], can produce a proof of unsatisfiability [McMillan and Amla2003, Zhang and Malik2003b] using the resolution rule.
Definition 3.
A proof of unsatisfiability for a set of clauses F is a directed acyclic graph G = (V, E), where V is a set of clauses. For every vertex c ∈ V, if c ∈ F, then c is a root; otherwise c has exactly two predecessors, c1 and c2, such that c is the resolvent of c1 and c2. The empty clause, denoted by ⊥, is the unique leaf.
Definition 4.
Given a proof of unsatisfiability G, for every clause c ∈ V, the fan-in cone of c includes all the vertices from which there is at least one path to c.
A proof of unsatisfiability can answer which clauses are in the transitive fan-in cone of the empty clause. Therefore, an unsatisfiable core can be generated by traversing backward from ⊥.
Definition 5.
Given a Boolean formula F in CNF, an unsatisfiable core is a subset of F that is itself inconsistent.
SAT solvers such as MiniSAT are capable of handling assumptions. When the solver derives unsatisfiability of a Boolean formula under assumptions, it can return the failed assumptions, i.e., a subset of the assumptions that is inconsistent with the formula. From here on, we use the terms failed assumptions and unsatisfiable core interchangeably, since every unsatisfiable core corresponds to a definite set of failed assumptions.
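Definitions 3-5 suggest a simple extraction procedure: walk the proof DAG backward from the empty clause and collect the root clauses reached. A toy sketch of that traversal (ours; the clause identifiers and the `parents` map are assumed inputs, not a real proof-logging format):

```python
# Toy extraction of an unsatisfiable core from a resolution proof (ours).
# `parents` maps each derived clause id to its two antecedent ids;
# `roots` is the set of original clause ids (Definition 3).
def core_from_proof(empty_id, parents, roots):
    core, stack, seen = set(), [empty_id], set()
    while stack:
        c = stack.pop()
        if c in seen:
            continue
        seen.add(c)
        if c in roots:                 # original clause: part of the core
            core.add(c)
        else:                          # derived clause: visit antecedents
            stack.extend(parents[c])
    return core

# r5 = resolve(r1, r2); the empty clause = resolve(r5, r3); r4 is unused.
parents = {"r5": ("r1", "r2"), "empty": ("r5", "r3")}
print(sorted(core_from_proof("empty", parents, {"r1", "r2", "r3", "r4"})))
# ['r1', 'r2', 'r3']
```

Note that r4 is excluded: only roots in the fan-in cone of ⊥ enter the core, which is why the shape of the proof determines the core's size.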
3 New Approach – CoAPI
In this section, we first give an overview of CoAPI and then show the details.
3.1 Overview
Given a Boolean formula F, in the first phase we construct, based on the original encoding, a cover in CNF to rewrite F, i.e., the cover is logically equivalent to F; in the second phase, we then generate all primes based on dual-rail encoding. Note that the two-phase procedure not only avoids using dual-rail encoding in all phases but also exploits the powerful heuristic branching methods of SAT solving. For simplicity, this paper only introduces the generation of all prime implicants of F (similarly for prime implicates because of the duality). We extend the concepts of the prime implicate and the prime cover into the AIP and the AC, respectively, which are essential concepts in our algorithm and are defined as follows.
Definition 6.
An over-approximate implicate of F is a clause c s.t. F ⊨ c. Given two over-approximate implicates c1 and c2 of F, if |c1| < |c2|, then we call c1 smaller than c2.
The concept of the AIP differs from that of the implicate in that the former is required to be as small as possible. Notably, a prime implicate is a minimal AIP.
Definition 7.
An over-approximate cover C of F is a conjunction of over-approximate implicates of F that is logically equivalent to F. The cost of an over-approximate cover C is denoted by cost(C) and is defined relative to prime(F), where prime(F) returns a prime cover of F. Intuitively, cost(C) measures the degree of approximation of C to prime(F).
The framework of CoAPI includes two phases, namely CompileCover and CompileAll, as shown in Figure 1. It takes a non-clausal Boolean formula F and its negation ¬F as inputs. The inputs are encoded as sets of clauses by Tseitin encoding or other methods. Its output is all primes of F. CompileCover first produces an AC of F, and then CompileAll computes all primes. We introduce the two phases in detail as follows.
3.2 Core-Guided Over-Approximate Cover
In order to construct a cover, the work [Previti et al.2015] produces several prime implicates based on the QuickXplain algorithm. A naive approach to extract a prime from an implicate, namely the linear approach, is to linearly query whether it is still an implicate after flipping each literal of the implicate. The QuickXplain algorithm, based on recursively splitting the implicate, requires exponentially fewer queries in the optimal case than the linear approach. However, it is still time-consuming to produce a prime implicate because considerable SAT queries are needed to guarantee primality.
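The linear approach mentioned above can be sketched as follows (ours, purely illustrative; a brute-force entailment check stands in for the SAT oracle, and the example formula is made up):

```python
from itertools import product

# Sketch of the naive linear approach (ours): drop each literal of an
# implicate in turn and keep the drop if the clause is still entailed.
# A brute-force entailment check stands in for the SAT oracle.
def entails(cnf, clause, variables):          # F |= clause ?
    for bits in product([False, True], repeat=len(variables)):
        a = {v if b else -v for v, b in zip(variables, bits)}
        if all(any(l in a for l in c) for c in cnf) and \
           not any(l in a for l in clause):
            return False                      # a model of F falsifies clause
    return True

def linear_prime_implicate(cnf, clause, variables):
    clause = set(clause)
    for lit in list(clause):                  # one oracle query per literal
        if entails(cnf, clause - {lit}, variables):
            clause.discard(lit)
    return clause

cnf = [{1, 2}, {-2, 3}]                       # (x1 | x2) & (!x2 | x3)
print(sorted(linear_prime_implicate(cnf, {1, 2, 3}, [1, 2, 3])))  # [1, 3]
```

Each literal costs one entailment query, so an implicate of size n needs n queries; QuickXplain improves on this by recursive splitting when most literals are droppable.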
In addition, the influence of the size of the cover on the other phases is only vaguely known. Hence, although more computation time can lead to a smaller cover, it is not clear whether this is cost-effective for the overall algorithm. The results of Experiment 5.1 support this view. Based on the above considerations, we propose a core-guided method that produces AIPs to rewrite F. It makes it possible to trade off the quality of the cover against the run time for extraction.
We construct a cover to rewrite F by iteratively computing AIPs in CompileCover, shown in Algorithm 1. To this end, CompileCover maintains a set of clauses that combines a CNF encoding of the input with the AIPs computed so far, which block already computed models.
We illustrate each iteration as follows. CompileCover first searches for a model that is not yet blocked (Line 3). Then, OverApproximate is invoked to shrink the unsatisfiable core induced by this model (Line 7); the details are given in Section 4. After shrinking, CompileCover updates the clause set by adding the resulting clause (Line 8). Clearly, this clause is a smaller AIP of F than the clause blocking the full model, since it contains a subset of its literals. Finally, the updated clause set prunes the search space for the next iteration (Line 9).
During the iterations, on the one hand, CompileCover applies an incremental SAT solver to continually shrink the search space via conflict clauses. On the other hand, it also uses the computed AIPs to block the space that has already been covered. Eventually, the AIPs prune the entire search space, i.e., an AC of F has been constructed. At this point, the clause set is unsatisfiable and the algorithm terminates. We give an example for Algorithm 1 as follows.
Example 1.
Given a formula F, in the first iteration, a model is found; then, by consecutive SAT queries, we get a core; finally, an AIP is produced from the core. In the same way, we can obtain further AIPs. Eventually, the clause set becomes unsatisfiable, at which point CompileCover has produced an AC of F.
For this example, the method of [Previti et al.2015] constructs the same result as ours. However, CoAPI needs fewer SAT queries, while their methods need a number of queries depending on the size of the implicate. In general, CoAPI reduces the number of SAT queries to speed up each iteration, although it may take more iterations.
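The CompileCover loop of Algorithm 1 can be imitated in miniature (ours; brute-force search replaces the incremental SAT solver, and plain deletion-based shrinking replaces the multi-order method of Section 4):

```python
from itertools import product

def solve(cnf, variables):                    # brute-force SAT stand-in
    for bits in product([False, True], repeat=len(variables)):
        m = {v if b else -v for v, b in zip(variables, bits)}
        if all(any(l in m for l in c) for c in cnf):
            return m
    return None

# Toy CompileCover loop (ours): find an unblocked counter-model of F,
# shrink it to a core still inconsistent with F, and add the negated
# core (an AIP of F) to the cover; stop when no counter-model remains.
def compile_cover(cnf_F, cnf_notF, variables):
    cover = []
    while True:
        m = solve(cnf_notF + [list(c) for c in cover], variables)
        if m is None:
            return cover                      # cover is equivalent to F
        core = set(m)
        for lit in list(m):                   # deletion-based shrinking
            trial = core - {lit}
            if solve(cnf_F + [[l] for l in trial], variables) is None:
                core = trial                  # F & trial still unsat
        cover.append({-l for l in core})      # negated core is an AIP

cnf_F, cnf_notF = [[1], [2]], [[-1, -2]]      # F = x1 & x2
print(compile_cover(cnf_F, cnf_notF, [1, 2]))
```

For F = x1 ∧ x2 the loop ends with the cover {x1}, {x2}, which is logically equivalent to F, matching the termination condition described above.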
3.3 Generation of All Primes
In CompileAll, we encode the cover by dual-rail encoding to initialize the solver. Then, based on SAT solving, we iteratively compute all the minimal models, i.e., all the prime implicants of F. This process is similar to [Jabbour et al.2014]. More details about CompileAll are given in the additional file.
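A brute-force rendering of this second phase (ours; the real CompileAll iterates minimal models of a dual-rail-encoded cover with a SAT solver, while this sketch simply enumerates candidate terms and keeps the subset-minimal implicants):

```python
from itertools import product, combinations

# Brute-force enumeration of all prime implicants (ours, illustrative).
# In dual-rail encoding each variable x becomes a pair (px, nx) recording
# whether x occurs positively or negatively in a term; minimal models of
# the encoded cover then correspond to prime implicants.
def is_implicant(term, cnf, variables):
    for bits in product([False, True], repeat=len(variables)):
        a = {v if b else -v for v, b in zip(variables, bits)}
        if term <= a and not all(any(l in a for l in c) for c in cnf):
            return False
    return True

def all_prime_implicants(cnf, variables):
    lits = list(variables) + [-v for v in variables]
    implicants = [set(t) for k in range(1, len(variables) + 1)
                  for t in combinations(lits, k)
                  if len({abs(l) for l in t}) == k       # consistent term
                  and is_implicant(set(t), cnf, variables)]
    return [t for t in implicants                        # keep minimal ones
            if not any(s < t for s in implicants)]

print(all_prime_implicants([{1, 2}], [1, 2]))            # [{1}, {2}]
```

For F = x1 ∨ x2, the terms {x1} and {x2} survive the minimality filter; larger implicants such as {x1, x2} are discarded, just as non-minimal models are excluded in the SAT-based procedure.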
4 Multi-Order Based Shrinking
Constructing an AC of F is carried out by iteratively producing unsatisfiable cores. Unfortunately, a SAT solver based on a deterministic branching strategy often produces similar unsatisfiable cores for similar assumptions. In the worst case, the unsatisfiable core is the same size as the assumptions. Therefore, it is worthwhile to find smaller unsatisfiable cores to compress the size of the AC.
Given a proof of unsatisfiability G, an unsatisfiable core can be produced by traversing G backward. Therefore, the way G is generated determines the size of the unsatisfiable core. We notice that SAT solvers based on the two-literal watching scheme, which is powerful for SAT solving, selectively ignore some information while generating G. We call this blocker ignoring, defined as follows.
Definition 8.
Given a Boolean formula F in CNF and a proof of unsatisfiability G, a unit clause (l) is a blocker if (l) ∈ V and (l) is not a root. A SAT solver ignores the satisfiability of the clauses containing l.
Theorem 1.
If a unit clause (l) is a blocker of G, then there does not exist a clause c ∈ V with l ∈ c, unless c is in the fan-in cone of (l).
Intuitively, if a blocker is generated, i.e., the literal l is satisfied, then all clauses containing l are naturally satisfied and can be ignored until backtracking, which frees the solver from tracking them. Therefore, these clauses do not appear in the proof except in the fan-in cone of (l). This is powerful when searching for a model, because only the satisfiability of the necessary clauses needs to be considered, which is the core of the two-literal watching scheme. However, blocker ignoring can discard information that is important for producing a small unsatisfiable core. We use an example to explain this point.
Example 2.
Given a formula, the proof of unsatisfiability shown in Figure 2(a) is produced under the given assumptions. We can notice that a resolution step is missing due to a blocker: one vertex is not the resolvent of the expected antecedents. As a result, this proof produces a larger unsatisfiable core than necessary.
Iteratively perturbing the decision order during SAT solving is a useful and straightforward way to guide the solver toward smaller unsatisfiable cores. A similar approach was proposed by [Zhang and Malik2003a]: they iteratively invoke a SAT solver with a random decision strategy to shrink an unsatisfiable core. However, this lacks power for prime compilation, as shown by the results of Experiment 5.2. We propose a multi-order decision strategy, instead of a random one, to iteratively shrink an unsatisfiable core. The multi-order decision strategy includes three kinds of decision orders, defined as follows.
Definition 9.
A decision order is a list of variables (v1, ..., vn) in which, for i < j, vi is picked earlier than vj by a SAT solver.
Definition 10.
Given an original decision order O, the forward decision order Of is the same as O. The interval decision order Oi has two parts, Oi1 and Oi2, with the following properties: (i) every variable of O is in either Oi1 or Oi2; (ii) within Oi1 (resp. Oi2), if a variable is picked earlier than another in O, then it is also picked earlier in Oi1 (resp. Oi2). The backward decision order Ob is the reverse of O.
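For illustration, the three orders might be built as follows (ours; since the exact split used by the interval order is not fully specified above, splitting the original order into two halves and starting from the second half is an assumption of this sketch):

```python
# Building the three decision orders of Definition 10 (ours, illustrative).
# The half-and-half split used for the interval order is an assumption.
def forward(order):
    return list(order)                         # identical to the original

def backward(order):
    return list(reversed(order))               # reverse of the original

def interval(order):
    mid = len(order) // 2                      # two parts; relative order
    return list(order[mid:]) + list(order[:mid])  # preserved inside each

o = ["x1", "x2", "x3", "x4"]
print(forward(o))    # ['x1', 'x2', 'x3', 'x4']
print(backward(o))   # ['x4', 'x3', 'x2', 'x1']
print(interval(o))   # ['x3', 'x4', 'x1', 'x2']
```

Note that each part of the interval order keeps the relative order of O, satisfying property (ii) of Definition 10.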
Our method gives a SAT solver the opportunity to produce a smaller unsatisfiable core under different definite orders. Intuitively, depending on where in the original order the variables causing blockers lie, the forward, backward, or interval decision order can reduce the impact of those blockers.
We provide a multi-order based shrinking method, shown in Algorithm 2, in which OrderSAT invokes a SAT solver with a given decision order. The whole algorithm consists of two phases: basic and iterative. In the basic phase, we first apply the forward order (Line 2), and then the backward and interval orders (Line 4). In the iterative phase, we use the forward (Line 7) and backward (Line 9) orders alternately until the iteration bound or a fixpoint is reached, where the fixpoint means that the size of the core no longer changes.
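The two-phase loop just described can be sketched as follows (ours; `order_sat` is a stand-in for OrderSAT and is assumed to return an unsatisfiable core, a subset of its assumptions, under the given decision order):

```python
# Sketch of multi-order shrinking (ours): run several decision orders,
# keep the smallest core found, and iterate until a bound or a fixpoint
# (the core size no longer changes) is reached.
def multi_order_shrink(order_sat, basic_orders, iter_orders,
                       assumptions, bound=11):
    core = set(assumptions)
    for order in basic_orders:                 # basic phase
        core = min(core, order_sat(order, core), key=len)
    for i in range(bound):                     # iterative phase
        new = order_sat(iter_orders[i % len(iter_orders)], core)
        if len(new) == len(core):              # fixpoint: stop iterating
            return core
        core = min(core, new, key=len)
    return core

# Fake solver for demonstration: different orders drop different literals.
def fake_order_sat(order, assumptions):
    core = set(assumptions)
    if order == "forward" and len(core) > 3:
        core.discard(max(core))
    if order == "backward" and len(core) > 2:
        core.discard(min(core))
    return core

print(sorted(multi_order_shrink(fake_order_sat,
                                ["forward", "backward", "interval"],
                                ["forward", "backward"], {1, 2, 3, 4, 5})))
# [2, 3, 4]
```

The fake solver shows the intent: each order removes literals the others cannot, and the loop stops as soon as a whole round makes no progress.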
Based on the interval decision order, we partition the unsatisfiable core to explore better results. Algorithm 3 summarizes Interval, which is similar to the QuickXplain algorithm; its Partition subroutine partitions an unsatisfiable core into two parts. Compared with the QuickXplain algorithm, Interval avoids one of its recursive cases, which cuts down the time consumption (Line 16).
Note that OrderSAT with the interval order is potentially harder than with the forward or backward order, for the following reasons. First, OrderSAT with the assumptions used in Interval, whose status is unknown, is NP-complete. Second, with the forward or backward order, OrderSAT runs under assumptions already known to be inconsistent, which takes polynomial time. Hence, we only use the interval decision order in the basic phase, while the forward and backward orders are applied in both phases.
5 Experimental Results
To evaluate our method, we compared CoAPI and its variants with the state-of-the-art methods, primera and primerb [Previti et al.2015] (https://reason.di.fc.ul.pt/wiki/doku.php?id=primer), over four benchmarks, and discussed the effects of different shrinking strategies. In each experiment, we considered two tasks: (i) generating all prime implicates; (ii) generating all prime implicants. We implemented CoAPI using MiniSAT (https://github.com/niklasso/minisat), which was also used to implement primera and primerb. The benchmarks, introduced by [Previti et al.2015], are denoted by QG6, Geffe gen., F+PHP, and F+GT, respectively. The experiments were performed on an Intel Core i5-7400 3 GHz with 8 GByte of memory, running Ubuntu. For each case, the time limit was set to 3600 seconds and the memory limit to 7 GByte.
5.1 Comparisons between CoAPI and primer
We assess the performance of CoAPI in this section. In this experiment, we implemented a variant of CoAPI, denoted CoAPIqx, which uses the QuickXplain algorithm to construct a prime cover in the first phase. We also implemented CoAPI with only one iteration, denoted CoAPI1it. We evaluate the performance of CoAPIqx, CoAPI1it, primera, and primerb on the 743 cases.
QG6  Geffe gen.  F+PHP  F+GT  Total  
(83)  (600)  (30)  (30)  (743)  
primera  30 / 66  576 / 596  30 / 30  28 / 30  664 / 722 
primerb  30 / 65  577 / 596  30 / 30  28 / 30  665 / 721 
CoAPIqx  30 / 70  589 / 592  30 / 30  26 / 30  675 / 722 
CoAPI1it  30 / 81  589 / 591  30 / 30  30 / 30  679 / 732 
Table 1 shows the number of cases solved. The results are separated by the symbol '/': the number on the left is for task (i) and the number on the right is for task (ii). The same convention is used in all tables. Overall, CoAPIqx and CoAPI1it successfully solve more cases than primera and primerb. Note that the 679 cases solved by CoAPI1it in task (i) include all the 664 (resp. 665) solved by primera (resp. primerb). For QG6 in particular, CoAPIqx and CoAPI1it dramatically increase the number of cases solved in task (ii).
More detailed comparisons of these methods for task (i) are shown in Figure 3(a). The X-axis indicates the time in seconds taken by CoAPIqx or CoAPI1it, and the Y-axis the time taken by primera or primerb; points above the diagonal indicate an advantage for CoAPIqx or CoAPI1it. CoAPI1it computes much faster than primera (resp. primerb) in 92% (resp. 96%) of cases, and consumes at least one order of magnitude less time in 26% (resp. 27%) of cases. For CoAPIqx, the advantage is still obvious: CoAPIqx beats primera (resp. primerb) in 73% (resp. 80%) of cases, and is about one order of magnitude faster in 18% (resp. 19%) of cases. In this task, most of the literals in an implicate are necessary, so the QuickXplain algorithm may require significantly more SAT queries than our method.
Figure 3(b) shows the performance of these methods for task (ii) in detail. The cases unfavorable for CoAPI1it are concentrated in F+PHP and F+GT, because the prime covers of these formulae have a form that is extremely beneficial for generating all primes. We focus on the challenging cases, those taking over 1000s for primera or primerb, shown in the green area in Figure 3(b). Most of the points lie above the diagonal (at least 62% of cases for CoAPIqx and 84% for CoAPI1it), indicating the advantage of our methods. In particular, CoAPI1it dominates primera and primerb on QG6, reducing the time used by at least 40.26%.
                      Win        Win x10+
CoAPIqx vs. primera   70% / 64%  47% / 0%
CoAPIqx vs. primerb   70% / 62%  47% / 0%
CoAPI1it vs. primera  93% / 86%  50% / 0%
CoAPI1it vs. primerb  93% / 84%  50% / 0%
For the challenging cases, the improvements of our methods are shown in Table 2, whose columns give the percentage of faster cases (Win) and the percentage of cases at least one order of magnitude faster (Win x10+). Note that, for CoAPI1it, the fraction of faster cases in task (ii) increases to 86% (resp. 84%) against primera (resp. primerb), and the fraction of cases at least one order of magnitude faster reaches 50% in task (i).
In general, our methods outperform the state-of-the-art methods, particularly in task (i). The strong performance of CoAPIqx shows that the two-phase framework is efficient, because it avoids using dual-rail encoding and the minimal or maximal assignment strategy throughout the whole algorithm. Moreover, CoAPI1it is better than CoAPIqx because of the AC, as described in the next section.
5.2 Evaluations of Over-Approximation
To evaluate the different shrinking strategies, we implemented CoAPI0it without iterations and CoAPI2it with two iterations. Moreover, CoAPIzm uses the strategy proposed by [Zhang and Malik2003a]. In our experience, CoAPIzm with 11 iterations gives the best performance for the two tasks in practice.
          Cost            Fixpoint    First Shrink  Other Shrink
CoAPI0it  1.00 / 2.59     - / -       7% / 92%      7% / 93%
CoAPI1it  1.00 / 1.75     0% / 0%     7% / 92%      7% / 94%
CoAPI2it  1.00 / 1.72     99% / 74%   7% / 92%      7% / 94%
CoAPIzm   1.00 / 6854.80  100% / 99%  7% / 65%      7% / 67%
We compare CoAPI and CoAPIzm with the different shrinking strategies on the same benchmarks as above. The results are shown in Figure 3(c). Most points are above the diagonal line, indicating less time used by CoAPI1it in most cases. CoAPI2it and CoAPIzm are comparable in task (i). However, in task (ii), CoAPIzm only solves 302 of 743 cases, all of which are simple for CoAPI1it.
Table 3 shows average statistics for shrinking unsatisfiable cores. Since CoAPIqx produces a prime cover, we compute the cost of ACs relative to CoAPIqx. The columns give the cost (Cost), the ratio of runs reaching the fixpoint (Fixpoint), the ratio of size reduction in the first shrinking (First Shrink), and the ratio of size reduction in subsequent shrinkings (Other Shrink).
The generally low costs of CoAPI0it, CoAPI1it, and CoAPI2it show that the shrunk unsatisfiable core can be much smaller in all cases. From the statistics, the cost is often reduced by running the shrinking procedure iteratively, but the gains of later iterations are usually not as substantial as the first shrinking. This is also reflected in the ratio of size reduction: the size of the AIPs drops dramatically the first time, but not by much during the following shrinkings. We also note that CoAPI2it reaches a fixpoint in most cases. These statistics indicate that a single iteration is the best trade-off between the quality of the unsatisfiable core and the run time on these benchmarks. Comparing CoAPI2it with CoAPIzm, the cost and the ratio of size reduction in task (ii) illustrate that CoAPIzm cannot effectively control the shrinking, while CoAPI2it does it well.
6 Related Works and Discussions
Many techniques for prime compilation are based on branch-and-bound or backtrack search procedures [Castell1996, Ravi and Somenzi2004, Déharbe et al.2013, Jabbour et al.2014]. They take full advantage of powerful SAT solvers, but these methods cannot generate the primes of non-clausal formulae. In addition, a number of approaches based on binary decision diagrams (BDD) [Coudert and Madre1992] or zero-suppressed BDDs (ZBDD) [Simon2001] have been proposed. These methods can encode primes in a compact space thanks to BDDs. Given the complexity of the problem, however, they may still suffer from time or memory limitations in practice. Almost simultaneously, 0-1 integer linear programming (ILP) formulations [Pizzuti1996, Manquinho et al.1997, MarquesSilva1997, Palopoli et al.1999] were proposed to compute primes of CNF formulae. Although these approaches can naturally encode the minimality constraints using ILP, their efficiency is questionable.
Most existing works [Castell1996, Pizzuti1996, Manquinho et al.1997, MarquesSilva1997, Palopoli et al.1999, Ravi and Somenzi2004, Jabbour et al.2014] only focus on computing primes of CNF or DNF, while there are also some approaches that work on non-clausal formulae. For instance, [Ngair1993] studied a more general algorithm for prime implicate generation, which allows any conjunction of DNF formulae. The approaches based on BDD/ZBDD can compute prime implicants of non-clausal formulae. Additionally, [Ramesh et al.1997] computed prime implicants and prime implicates of NNF formulae. Recently, [Previti et al.2015] described the most efficient approach to date.
In order to produce a small AIP, we need to generate small unsatisfiable cores. [Zhang and Malik2003a] produced small unsatisfiable cores using random orders and multiple iterations. This approach is similar to our idea, but they did not identify the relationship between the decision order and the size of unsatisfiable cores, so their approach lacks the ability to further shrink unsatisfiable cores. [Gershman et al.2006] suggested a more effective shrinking procedure based on dominators derived from the proof of unsatisfiability. Naturally, analysis based on the proof of unsatisfiability increases the cost of a single iteration. Hence, considering the large-scale iterations needed to shrink different unsatisfiable cores in our work, their method does not fit well here.
7 Conclusions and Future Works
We have proposed a novel approach, CoAPI, for prime compilation based on unsatisfiable cores. Compared with the work [Previti et al.2015], CoAPI separates the generating process into two phases, which permits us to construct a cover without using dual-rail encoding, thereby shrinking the search space. Moreover, we have proposed a core-guided approach to construct an AC to rewrite the formula. It should be emphasized that the AC can be computed efficiently. Besides, we have provided a multi-order based method to shrink unsatisfiable cores. The experimental results have shown that CoAPI has a significant advantage for the generation of prime implicates and better performance for prime implicants than state-of-the-art methods.
For future work, we expect that our method can be applied to the task of producing a small proof of unsatisfiability.
References
 [Acuna et al.2012] Vicente Acuna, Paulo Vieira Milreu, Ludovic Cottret, Alberto Marchetti-Spaccamela, Leen Stougie, and Marie-France Sagot. Algorithms and complexity of enumerating minimal precursor sets in genome-wide metabolic networks. Bioinformatics, 28(19):2474–2483, 2012.
 [Bradley and Manna2007] A. R. Bradley and Z. Manna. Checking safety by inductive generalization of counterexamples to induction. In FMAC, pages 173–180, 2007.
 [Bryant et al.1987] Randal E. Bryant, Derek Beatty, Karl Brace, Kyeongsoon Cho, and Thomas Sheffler. Cosmos: A compiled simulator for mos circuits. In DAC, pages 9–16, 1987.
 [Castell1996] Thierry Castell. Computation of prime implicates and prime implicants by a variant of the davis and putnam procedure. In ICTAI, pages 428–429, 1996.
 [Coudert and Madre1992] Olivier Coudert and Jean Christophe Madre. Implicit and incremental computation of primes and essential primes of boolean functions. In DAC, pages 36–39, 1992.
 [Déharbe et al.2013] David Déharbe, Pascal Fontaine, Daniel Le Berre, and Bertrand Mazure. Computing prime implicants. In FMCAD, pages 46–52, 2013.
 [Eén and Sörensson2003] Niklas Eén and Niklas Sörensson. An extensible SAT-solver. In SAT, pages 502–518, 2003.
 [Gershman et al.2006] Roman Gershman, Maya Koifman, and Ofer Strichman. Deriving small unsatisfiable cores with dominators. In CAV, pages 109–122, 2006.
 [Ignatiev et al.2015] A. Ignatiev, A. Previti, and J. MarquesSilva. Satbased formula simplification. In SAT, pages 287–298, 2015.
 [Jabbour et al.2014] Said Jabbour, Joao MarquesSilva, Lakhdar Sais, and Yakoub Salhi. Enumerating prime implicants of propositional formulae in conjunctive normal form. In JELIA, pages 152–165, 2014.
 [Junker2004] Ulrich Junker. Quickxplain: Preferred explanations and relaxations for overconstrained problems. In AAAI, pages 167–175, 2004.
 [Luo and Wei2017] W. Luo and O. Wei. Wap: Satbased computation of minimal cut sets. In ISSRE, pages 146–151, 2017.
 [Manquinho et al.1997] Vasco M. Manquinho, Paulo F. Flores, Joao MarquesSilva, and Arlindo L. Oliveira. Prime implicant computation using satisfiability algorithms. In ICTAI, pages 232–239, 1997.
 [MarquesSilva1997] Joao MarquesSilva. On computing minimum size prime implicants. In IWLS, 1997.
 [McMillan and Amla2003] Kenneth L. McMillan and Nina Amla. Automatic abstraction without counterexamples. In TACAS, pages 2–17, 2003.
 [Moskewicz et al.2001] Matthew W Moskewicz, Conor F Madigan, Ying Zhao, Lintao Zhang, and Sharad Malik. Chaff: Engineering an efficient sat solver. In DAC, pages 530–535, 2001.
 [Narodytska and Bacchus2014] Nina Narodytska and Fahiem Bacchus. Maximum satisfiability using coreguided maxsat resolution. In AAAI, pages 2717–2723, 2014.
 [Ngair1993] Teow-Hin Ngair. A new algorithm for incremental prime implicate generation. In IJCAI, pages 46–51, 1993.

 [Palopoli et al.1999] Luigi Palopoli, Fiora Pirri, and Clara Pizzuti. Algorithms for selective enumeration of prime implicants. Artificial Intelligence, 111(1-2):41–72, 1999.
 [Pizzuti1996] Clara Pizzuti. Computing prime implicants by integer programming. In ICTAI, pages 332–336, 1996.
 [Previti et al.2015] Alessandro Previti, Alexey Ignatiev, Antonio Morgado, and Joao MarquesSilva. Prime compilation of nonclausal formulae. In IJCAI, volume 15, pages 1980–1987, 2015.

 [Ramesh et al.1997] Anavai Ramesh, George Becker, and Neil V. Murray. CNF and DNF considered harmful for computing prime implicants/implicates. Journal of Automated Reasoning, 18(3):337–356, 1997.
 [Ravi and Somenzi2004] Kavita Ravi and Fabio Somenzi. Minimal assignments for bounded model checking. In TACAS, pages 31–45, 2004.
 [Roorda and Claessen2005] Jan-Willem Roorda and Koen Claessen. A new SAT-based algorithm for symbolic trajectory evaluation. In Advanced Research Working Conference on Correct Hardware Design and Verification Methods, pages 238–253, 2005.
 [Simon2001] L. Simon. Efficient consequence finding. In IJCAI, pages 359–365, 2001.
 [Slavkovik and Agotnes2014] M. Slavkovik and T. Agotnes. A judgment set similarity measure based on prime implicants. Adaptive Agents and Multi Agents Systems, pages 1573–1574, 2014.
 [Stuckey2013] P. J. Stuckey. There are no cnf problems. In SAT, pages 19–21, 2013.
 [Tseitin1968] G. Tseitin. On the complexity of derivations in the propositional calculus. Studies in Constrained Mathematics and Mathematical Logic, pages 234–259, 1968.
 [Yamada et al.2016] Akihisa Yamada, Armin Biere, Cyrille Artho, Takashi Kitamura, and Eun-Hye Choi. Greedy combinatorial test case generation using unsatisfiable cores. In ASE, pages 614–624, 2016.
 [Zhang and Malik2003a] Lintao Zhang and Sharad Malik. Extracting small unsatisfiable cores from unsatisfiable boolean formula. In SAT, 2003.
 [Zhang and Malik2003b] Lintao Zhang and Sharad Malik. Validating sat solvers using an independent resolutionbased checker: Practical implementations and other applications. In DATE, pages 880–885, 2003.