1 Introduction
In this report, we propose a quick survey of the currently known techniques for encoding a Boolean cardinality constraint into a cnf formula, and we discuss the relevance of these encodings.
A Boolean cardinality constraint can be denoted $\leq_k(x_1,\ldots,x_n)$, meaning $x_1 + \cdots + x_n \leq k$. A cnf formula is a conjunction of clauses, where each clause is a disjunction of literals, and each literal is either a propositional variable or a negated propositional variable. For convenience, such a formula can be represented as a set of clauses, where each clause is a set of literals.
Given a set $V$ of propositional variables, a partial truth assignment on $V$ is a set $A$ of literals such that for any $v \in V$, at most one of $v$ and $\neg v$ is in $A$. A complete truth assignment on $V$ is a partial assignment on $V$ such that for any $v \in V$, either $v$ or $\neg v$ is in $A$. Given a truth assignment $A$ and a formula $\phi$, $\phi \wedge A$ denotes the formula obtained by adding to $\phi$ a unit clause for each literal of $A$.
A formula $\phi$ is said to encode a given constraint $C$ if and only if, for any complete truth assignment $A$ on the variables of $C$, $\phi \wedge A$ is satisfiable if and only if $A$ satisfies $C$. It is said to be a pac (like propagating arc consistency) encoding if and only if, given any partial truth assignment $A$, applying unit propagation on $\phi \wedge A$ fixes the same variables of $C$ as restoring arc consistency on $C$. It is said to be a pic (like propagating inconsistency) encoding if and only if, given any partial truth assignment $A$, applying unit propagation on $\phi \wedge A$ produces the empty clause if and only if $A$ falsifies $C$.
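As a running illustration of these definitions, here is a minimal sketch of unit propagation in Python, assuming the usual DIMACS-style representation of literals as non-zero integers (the function name and representation are our choices):

```python
def unit_propagate(clauses, assignment):
    """Close `assignment` under unit propagation.

    Literals are non-zero integers (DIMACS style): v or -v.
    Returns the extended set of literals, or None if the empty
    clause is produced, i.e., an inconsistency is detected.
    """
    assignment = set(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue                      # clause already satisfied
            pending = [l for l in clause if -l not in assignment]
            if not pending:
                return None                   # empty clause: conflict
            if len(pending) == 1:             # unit clause: propagate
                assignment.add(pending[0])
                changed = True
    return assignment

# <=2(x1, x2, x3) is encoded by the single clause (-x1 or -x2 or -x3):
clauses = [[-1, -2, -3]]
print(unit_propagate(clauses, {1, 2}))     # x3 is fixed to 0
print(unit_propagate(clauses, {1, 2, 3}))  # the constraint is falsified
```

On this tiny example, unit propagation behaves both as a pac and as a pic mechanism: with $x_1 = x_2 = 1$ it fixes $x_3$ to 0, and with all three variables fixed to 1 it produces the empty clause.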
2 Existing encodings
Existing encodings can be roughly classified into two (overlapping) categories: the ones which are based on a bit counter coupled with a comparator, and the encodings dedicated to pseudo-Boolean constraints, i.e., constraints of the form
$a_1 x_1 + \cdots + a_n x_n \leq b$, where $a_1,\ldots,a_n,b$ are positive integers and $x_1,\ldots,x_n$ are propositional literals.

2.1 Encodings based on bit counters
These encodings are based on a Tseitin transformation [6] of a circuit composed of a bit counter cascaded with a comparator. Two approaches have been proposed, based respectively on a unary and a binary representation of the counter output.
2.1.1 Binary representation
These encodings use binary adders and comparators. Warners introduced such an approach in [7] for translating a pseudo-Boolean constraint into a cnf formula. The proposed solution can be simplified in the particular case of cardinality constraints. The size of the obtained formula is linearly related to the number of variables in the input constraint, as is the number of auxiliary variables. Another architecture is proposed in [5], where the bit counter is organized as a tree of binary adders. The size of the resulting formula and the number of required auxiliary variables are $\Theta(n)$ (when applicable, we prefer the $\Theta$ notation to the $\mathcal{O}$ notation, because the latter is only an upper bound: for example, $n = \mathcal{O}(n^2)$ but $n \neq \Theta(n^2)$). These encodings are known to be neither pac nor pic.
2.1.2 Unary representation
By adopting a unary representation of the output of the bit counter, which incidentally makes the comparison stage trivial, we obtain encoding techniques which produce larger formulae, but allow unit propagation to perform more deductions.
This was shown for the first time in [1], where a pac encoding requiring $\mathcal{O}(n^2)$ clauses and $\mathcal{O}(n \log n)$ auxiliary variables is presented. The unary bit counter of $n$ inputs is designed as an association of two bit counters of $n/2$ inputs coupled with a unary adder.
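This tree-shaped construction can be sketched as follows; the helper names (`CNF`, `totalizer`, `at_most_k`) and the exact clause set are our own illustration in the spirit of [1], not necessarily its precise encoding:

```python
class CNF:
    """Clause store; literals are non-zero integers, variables 1..n are inputs."""
    def __init__(self, n):
        self.next_var = n + 1
        self.clauses = []
    def new_var(self):
        v = self.next_var
        self.next_var += 1
        return v

def totalizer(cnf, inputs):
    """Unary counter: returns output vars o1..om such that unit propagation
    forces o_j to 1 as soon as at least j of the inputs are fixed to 1."""
    if len(inputs) == 1:
        return inputs
    half = len(inputs) // 2
    a = totalizer(cnf, inputs[:half])      # unary count of the left half
    b = totalizer(cnf, inputs[half:])      # unary count of the right half
    out = [cnf.new_var() for _ in inputs]  # unary adder merging a and b
    for alpha in range(len(a) + 1):
        for beta in range(len(b) + 1):
            sigma = alpha + beta
            if sigma == 0:
                continue
            # (a >= alpha) and (b >= beta) imply (out >= alpha + beta)
            clause = [out[sigma - 1]]
            if alpha > 0:
                clause.append(-a[alpha - 1])
            if beta > 0:
                clause.append(-b[beta - 1])
            cnf.clauses.append(clause)
    return out

def at_most_k(cnf, inputs, k):
    out = totalizer(cnf, inputs)
    for j in range(k, len(out)):           # forbid a count of k+1 or more
        cnf.clauses.append([-out[j]])
```

With $n = 4$ and $k = 2$, fixing $x_1 = x_2 = 1$ and running unit propagation on `cnf.clauses` fixes $x_3$ and $x_4$ to 0, as expected from a pac encoding.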
Another architecture is proposed in [5], where the bit counter is shaped as a sequential association of unary adders. The resulting pac encoding requires $\Theta(nk)$ clauses and $\Theta(nk)$ auxiliary variables.
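A sketch of such a sequential unary counter, assuming $n \geq 2$ and $1 \leq k < n$; the auxiliary-variable layout is ours and may differ in details from [5]:

```python
def sequential_at_most_k(n, k):
    """Clauses for <=k(x1..xn), n >= 2, 1 <= k < n, via a chain of unary
    adders: the auxiliary variable s(i,j) is forced to 1 by unit
    propagation as soon as at least j of x1..xi are fixed to 1."""
    clauses = []
    next_var = n + 1
    s = {}                                   # s[(i, j)] for 1 <= i < n, 1 <= j <= k
    for i in range(1, n):
        for j in range(1, k + 1):
            s[(i, j)] = next_var
            next_var += 1
    clauses.append([-1, s[(1, 1)]])          # x1 -> s(1,1)
    for j in range(2, k + 1):
        clauses.append([-s[(1, j)]])         # x1 alone cannot reach a count of 2
    for i in range(2, n):
        clauses.append([-i, s[(i, 1)]])              # xi -> s(i,1)
        clauses.append([-s[(i - 1, 1)], s[(i, 1)]])  # counts never decrease
        for j in range(2, k + 1):
            clauses.append([-i, -s[(i - 1, j - 1)], s[(i, j)]])
            clauses.append([-s[(i - 1, j)], s[(i, j)]])
        clauses.append([-i, -s[(i - 1, k)]])         # overflow forbidden
    clauses.append([-n, -s[(n - 1, k)]])
    return clauses
```

Again with $n = 4$ and $k = 2$, unit propagation under $x_1 = x_2 = 1$ fixes $x_3$ and $x_4$ to 0, and produces the empty clause once a third variable is fixed to 1.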
As shown in [4], the bit counting can also be done thanks to a sorting network. This approach produces a cnf formula with $\Theta(n \log^2 n)$ clauses and auxiliary variables.
All these encodings are pac, hence pic. As far as we know, no criteria have been proposed for choosing between them.
2.2 Pseudo-Boolean encodings
Because a Boolean cardinality constraint is a special case of pseudo-Boolean constraint, any encoding dedicated to pseudo-Boolean constraints can also be used with cardinality constraints. Excluding the approaches producing a formula of exponential size and the ones that have been mentioned above, this covers three techniques: the bdd encoding presented in [2], and the two encodings presented in [3], namely lpw and gpw.
The bdd based encoding presented in [2] is a pac cnf encoding of pseudo-Boolean constraints that can produce a formula of exponential size in the worst case. But with a cardinality constraint as input, the size of the resulting formula is $\mathcal{O}(nk)$, which is competitive with the other encodings presented above.
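For intuition, a bdd-style encoding of $\leq_k(x_1,\ldots,x_n)$ can be obtained by unrolling the decision diagram top-down; the sketch below is our own simplification in the spirit of [2], not its exact clause set:

```python
def bdd_at_most_k(n, k):
    """Clauses for <=k(x1..xn) by unrolling a decision diagram.

    Node (i, c) reads 'c ones have been seen among x1..x_{i-1}'; its
    variable, when true, constrains the remaining inputs x_i..x_n."""
    clauses, node = [], {}
    next_var = n + 1
    def build(i, c):
        nonlocal next_var
        if (i, c) in node:
            return node[(i, c)]            # nodes are shared (memoized)
        v = next_var
        next_var += 1
        node[(i, c)] = v
        if c == k:                         # budget exhausted: force xi = 0
            clauses.append([-v, -i])
            if i < n:
                clauses.append([-v, build(i + 1, c)])
        elif i < n:
            clauses.append([-v, -i, build(i + 1, c + 1)])  # v and xi -> hi child
            clauses.append([-v, i, build(i + 1, c)])       # v and not xi -> lo child
        return v
    clauses.append([build(1, 0)])          # the root node must hold
    return clauses
```

Unit propagation on these clauses also fixes $x_3$ and $x_4$ to 0 once $x_1 = x_2 = 1$ in $\leq_2(x_1,\ldots,x_4)$, matching the pac behaviour discussed above.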
In contrast, the gpw encoding introduced in [3] presents no interest here because, with a cardinality constraint as input, it reduces to the encoding presented in [1]. Finally, the lpw encoding requires a unary bit counter for each variable of the input constraint. It thus produces a formula roughly $n$ times the size of a single counter, which is somewhat prohibitive.
3 Discussion
3.1 Binary versus unary representation
In all the encodings that we have reviewed, there is an implicit or explicit calculation of the number of variables that are fixed to 1 among the input variables. With the encodings based on binary arithmetic, this calculation requires all the input variables to be assigned, because the Boolean functions related to each bit of the binary representation are not monotonic with respect to the number of input variables fixed to 1. For example, the lowest bit of this representation depends only on the parity of the input cardinality, and thus alternates each time this cardinality increases. This structurally prevents some propagations when some input variables are not assigned, even if enough input variables are fixed to 1 to falsify the constraint. As a consequence, some inconsistencies cannot be detected by unit propagation alone.
On the other hand, the encodings based on unary arithmetic allow unit propagation to calculate the input cardinality even when some input variables are not assigned. This is made possible by the monotonicity of the functions related to each bit of the unary representation of the input cardinality. This deserves an explanation, which is given in Sections 3.2 and 4.
3.2 Filtering functions
Restoring arc consistency of a Boolean cardinality constraint – but also of other Boolean constraints, like the pseudo-Boolean ones – reduces to computing functions which map $\{0,1,*\}^m$ to $\{0,1,*\}$, where the symbol $*$ means that a variable is not assigned.
Regarding the constraint $\leq_k(x_1,\ldots,x_n)$, these filtering functions are of the form $f_i : \{0,1,*\}^{n-1} \to \{0,*\}$ such that if the number of input values set to 1 among $x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n$ is at least $k$ then $f_i = 0$, else $f_i = *$. Indeed, according to the principle of arc consistency, $x_i$ must be fixed to 0 as soon as $k$ other input variables are assigned to 1. If there are more than $k$ input variables assigned to 1, and if the value of each $x_i$ is determined by the related filtering function, then a contradiction occurs, i.e., at least one of the input variables is fixed both to 1 (by hypothesis) and to 0 (by the related filtering function). Therefore, any pac encoding of a Boolean cardinality constraint explicitly or implicitly allows unit propagation to compute the filtering function related to each input variable $x_i$, and any encoding allowing unit propagation to compute these functions is pac.
As an example, Figure 1 shows the implicit filtering functions for the constraint $\leq_2(x_1,x_2,x_3)$. Each of these functions maps $\{0,1,*\}^2$ to $\{0,*\}$, with output value 0 if and only if its two inputs are set to 1. Each output represents the value of the corresponding variable after the propagation is done. The clause $(\bar{x}_1 \vee \bar{x}_2 \vee \bar{x}_3)$ allows unit propagation to achieve all these computations.
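This behaviour can be checked exhaustively; the following sketch (our own illustration) compares unit propagation on this single clause with the specification of the filtering functions, over all partial assignments of the two other variables:

```python
import itertools

def unit_propagate(clauses, assignment):
    """Minimal unit propagation; literals are non-zero integers."""
    assignment = set(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in assignment for l in clause):
                continue
            pending = [l for l in clause if -l not in assignment]
            if not pending:
                return None
            if len(pending) == 1:
                assignment.add(pending[0])
                changed = True
    return assignment

CLAUSES = [[-1, -2, -3]]       # encodes <=2(x1, x2, x3)

def check_filtering(var, others):
    """Check that unit propagation on the clause computes the filtering
    function of `var`: output 0 exactly when both other inputs are 1."""
    for values in itertools.product(['0', '1', '*'], repeat=2):
        assignment = set()
        for x, v in zip(others, values):
            if v == '1':
                assignment.add(x)
            elif v == '0':
                assignment.add(-x)
        result = unit_propagate(CLAUSES, assignment)
        expected = '0' if all(v == '1' for v in values) else '*'
        got = '0' if -var in result else '*'
        if expected != got:
            return False
    return True

print(all(check_filtering(v, o)
          for v, o in [(1, (2, 3)), (2, (1, 3)), (3, (1, 2))]))  # prints: True
```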
Without loss of generality, we can consider that the filtering functions have codomain $\{0,*\}$, because

any filtering function with codomain $\{0,1,*\}$ can be decomposed into two simplified filtering functions with codomains $\{0,*\}$ and $\{1,*\}$, respectively;

to any filtering function $f$ with codomain $\{1,*\}$ can be associated a filtering function $f'$ with codomain $\{0,*\}$ such that for any suitable input assignment $t$, if $f(t) = 1$ then $f'(t) = 0$, else $f'(t) = *$;

any formula computing such a function $f'$ with output variable $y'$ can compute $f$ with output variable $y$ by adding the clause $(y' \vee y)$;

for any filtering function $f$ with codomain $\{0,*\}$ which outputs 0 if and only if at least one of the filtering functions $f_1,\ldots,f_m$ outputs 0, if the formulae $\phi_1,\ldots,\phi_m$ compute $f_1,\ldots,f_m$ with output variables $y_1,\ldots,y_m$ (assuming without loss of generality that $\phi_1,\ldots,\phi_m$ share no variables except the input ones), then the formula $\phi_1 \cup \cdots \cup \phi_m \cup \{(y_1 \vee \bar{y}),\ldots,(y_m \vee \bar{y})\}$ computes $f$ with output variable $y$.
Why can these filtering functions not be propagated through a binary representation? As an example, let us consider the constraint $\leq_2(x_1,x_2,x_3,x_4)$. In any encoding based on a binary representation, by definition, there are three auxiliary variables $b_2 b_1 b_0$ representing the binary value of the number of input variables fixed to 1. These variables link the output of the bit counter (whatever its architecture) with the comparator. Now, suppose the two input variables $x_1, x_2$ are fixed to 1, and the two other ones, i.e., $x_3, x_4$, are not fixed. This means that the input cardinality could be 2, 3, or 4, hence, in binary, 010, 011, or 100. Each of the variables $b_2, b_1, b_0$ could take either the value 0 or 1, depending on the future values of $x_3, x_4$. While these variables are not fixed, nothing can be inferred regarding the values of $x_3, x_4$. Then the comparator cannot "know" that $x_3$ and $x_4$ must be set to 0. Worse, if three input variables are fixed to 1 and the last one is not fixed, each of the variables $b_2, b_1, b_0$ can still potentially be fixed to 0 or 1, so the inconsistency cannot be detected.
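This blocking behaviour can be observed concretely. The sketch below builds a naive ripple-style binary counter for $\leq_2(x_1,x_2,x_3,x_4)$ out of Tseitin-encoded xor/and/majority gates (the gate decomposition and variable numbering are ours), then applies unit propagation with $x_1 = x_2 = 1$:

```python
def unit_propagate(clauses, assignment):
    """Minimal unit propagation; literals are non-zero integers."""
    assignment = set(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in assignment for l in clause):
                continue
            pending = [l for l in clause if -l not in assignment]
            if not pending:
                return None                    # conflict
            if len(pending) == 1:
                assignment.add(pending[0])
                changed = True
    return assignment

def xor_def(a, b, o):      # Tseitin clauses for o <-> a xor b (not monotone)
    return [[-a, -b, -o], [a, b, -o], [a, -b, o], [-a, b, o]]

def and_def(a, b, o):      # o <-> a and b
    return [[-a, -b, o], [a, -o], [b, -o]]

def maj_def(a, b, c, o):   # o <-> at least two of a, b, c (full-adder carry)
    return [[-a, -b, o], [-a, -c, o], [-b, -c, o],
            [a, b, -o], [a, c, -o], [b, c, -o]]

# Binary counter for <=2(x1..x4): inputs are variables 1..4.
# Full adder on x1, x2, x3 (t = x1 xor x2, s1 = sum, c1 = carry), then x4 is
# added; lsb/mid/top are the three bits b0, b1, b2 of the count.
t, s1, c1, carry2, top, lsb, mid = 9, 5, 6, 7, 8, 10, 11
clauses = (xor_def(1, 2, t) + xor_def(t, 3, s1) + maj_def(1, 2, 3, c1)
           + xor_def(s1, 4, lsb) + and_def(s1, 4, carry2)
           + xor_def(c1, carry2, mid) + and_def(c1, carry2, top)
           # comparator: count <= 2 means top = 0 and not (mid and lsb)
           + [[-top], [-mid, -lsb]])

result = unit_propagate(clauses, {1, 2})
# Semantically x3 and x4 must now be 0, but unit propagation is stuck:
print(-3 in result, -4 in result)   # prints: False False
```

With a unary counter (Section 2.1.2), the same partial assignment makes unit propagation fix both $x_3$ and $x_4$ to 0; here the non-monotone xor definitions block the deduction, although the formula itself is a correct encoding once all inputs are assigned.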
Finally, remark that the bdd encoding can be considered as based on a unary representation, because each node of the underlying decision diagram is related to a filtering function in the sense described above.
4 Complexity issues
In this section, we raise several questions about the complexity of pac and pic cnf encodings for Boolean cardinality constraints, as well as for other kinds of constraints. First, let us recall that any Boolean function can be computed using unit propagation under the assumption that the input value is represented as a complete truth assignment of the input variables. This is due to the fact that any Boolean function can be computed by a Boolean circuit, and that the behavior of any Boolean circuit with $m$ nodes can be simulated by applying unit propagation on a formula of $\mathcal{O}(m)$ clauses. But not all functions mapping $\{0,1,*\}^m$ to $\{0,1,*\}$ – which we propose to call matching functions – can be computed in this way. For example, the function $f$ mapping $\{0,1,*\}$ to $\{0,1,*\}$ such that $f(*) = 1$ and $f(0) = f(1) = 0$ cannot. Informally speaking, unit propagation cannot test whether a variable is assigned or not. We propose to call propagatable functions the matching functions that can be computed thanks to unit propagation (see Figure 2).
Clearly, the propagatable functions are the matching functions that are monotonic with respect to the following order: $* \preceq 0$, $* \preceq 1$, $0 \preceq 0$, $1 \preceq 1$, and $(v_1,\ldots,v_m) \preceq (v'_1,\ldots,v'_m)$ if and only if $v_i \preceq v'_i$ for each $i$. It follows that for any Boolean constraint and any input variable $x$, the filtering function related to $x$ is a propagatable function, because if the value of $x$ can be inferred while some other input variables are not fixed, then this value of $x$ holds whatever the values of these input variables. Thus, the complexity of computing propagatable functions with unit propagation is a key concept in studying the cnf encoding of Boolean constraints.
Now, let us present the critical issues regarding the search for efficient encodings. The following questions are relevant for Boolean cardinality constraints, but can be generalized to other constraints on Boolean variables, such as pseudo-Boolean constraints.

The smallest known pac encoding for Boolean cardinality constraints is presented in [4]. It is actually a pseudo-Boolean to cnf encoding which is not pac for every pseudo-Boolean input constraint, but which is pac in the particular case of cardinality constraints. The size of the output formula is $\Theta(n \log^2 n)$, which is better than $\Theta(nk)$ when $k > \log^2 n$.
Is there a smaller pac encoding? Is there a pac encoding which produces a formula of size $\mathcal{O}(n)$? Is there a gap between the smallest pac encoding and the smallest encoding with binary representation?

The preceding questions are about encodings of a whole cardinality constraint, which implicitly include the filtering functions related to each input variable. But what about the size complexity of computing (thanks to unit propagation) each filtering function individually? Clearly, if each filtering function requires $\Omega(g(n))$ clauses, then the size of the smallest pac encoding is $\Omega(g(n))$, but not necessarily $\Omega(n\,g(n))$, because some parts of the output formula could be shared to compute several filtering functions.
The smallest known encodings for Boolean cardinality constraints allow the underlying filtering functions to be computed with a formula of size $\Theta(n \log^2 n)$, i.e., the same size as for restoring arc consistency on the whole constraint. Is it possible to do better?
5 Concluding remarks and perspectives
There are at least three research directions in the field of cnf encoding of Boolean (including cardinality) constraints. The first is the search for theoretical models that would facilitate the design and the analysis of encodings. The second concerns the search for inference rules and filtering techniques that would allow sat solvers to achieve the same deductions with binary encodings as current solvers do with unary encodings. And the last research area is a fine-grained study of the respective inference powers and efficiencies of sat and pseudo-Boolean solvers on the problems which can be represented in both formalisms.
5.1 Designing and analysing cnf encodings
A way to design propositional encodings is to start from a Boolean circuit, then use a Tseitin transformation to produce the corresponding formula. Indeed, all the encodings presented in Section 2 can be represented as Boolean circuits. This representation is suitable for designing encodings and for proving their correctness. But it does not model the behavior of unit propagation alone, especially when some variables are not assigned.
The way unit propagation computes a filtering function can be simulated with a monotone Boolean circuit by representing each of the three possible values of any variable $x$ by two binary values $(x^0, x^1)$, such that 0 is represented by $(1,0)$, 1 is represented by $(0,1)$, and $*$ is represented by $(0,0)$. It is easy to see that any filtering function which can be computed with a monotone circuit can also be computed by unit propagation with a satisfiable cnf formula of the same size. Conversely, under the assumption that the size of the clauses is bounded, any filtering function which can be computed with unit propagation on a satisfiable cnf formula $\phi$ can also be computed with a monotone circuit of size linearly related to the size of $\phi$. (In fact, it suffices that unit propagation cannot produce a contradiction, so that the filtering function is fully defined on its domain. The filtering process will detect a local inconsistency when the result of the computation of the filtering function is in conflict with the initial value of some input variables.) A sketch of the proof is given in Appendix A.
There is thus a tight relation between the size of the smallest cnf formula computing a filtering function and monotone circuit complexity: up to a linear factor, this formula is as small as the smallest monotone circuit computing the related Boolean function.
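As a tiny illustration of the dual-rail simulation, the filtering function of $x_3$ in $\leq_2(x_1,x_2,x_3)$ reduces to a single and gate on the "1-rails" of $x_1$ and $x_2$ (the names below are ours):

```python
def dual_rail(v):
    """Two-bit representation: 0 -> (1,0), 1 -> (0,1), * -> (0,0)."""
    return {'0': (1, 0), '1': (0, 1), '*': (0, 0)}[v]

def filter_x3(x1, x2):
    """Monotone circuit for the filtering function of x3 in <=2(x1,x2,x3):
    x3 is forced to 0 exactly when both x1 and x2 are fixed to 1."""
    _, x1_is_one = dual_rail(x1)
    _, x2_is_one = dual_rail(x2)
    out_is_zero = x1_is_one & x2_is_one     # a single and gate, no negation
    return '0' if out_is_zero else '*'

print(filter_x3('1', '1'))  # prints: 0
print(filter_x3('1', '*'))  # prints: *
```

Because the representation of $*$ is below those of 0 and 1 bitwise, any circuit built only from and/or gates on the rails is automatically monotonic in the order of Section 4.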
5.2 Improving filtering in sat solvers
As mentioned before, unassigned variables impact the deductive power of unit propagation. A possible way to overcome this problem is to "inform" unit propagation by means of preassigned variables. For example, let us consider the constraint $\leq_k(x_1,\ldots,x_n)$, where some variables are fixed to 1, some are fixed to 0, and the others are not assigned. We propose to apply unit propagation under the assumption that the unassigned variables are fixed to 0. Namely, these variables are considered as unassigned except for unit propagation. If unit propagation fixes such a variable to 1, this is not considered as a conflict, and the new value replaces the initial default one.
With this simple modification, which requires informing the sat solver of the default values of the involved variables, the binary based encodings for cardinality constraints become pic, increasing the amount of deductions performed by the solver. Thanks to such an informed unit propagation rule, we can expect more compact pic and pac encodings.
5.3 sat versus pseudoBoolean solvers
Given what we said in this report, translating a Boolean cardinality constraint – and more generally a pseudo-Boolean constraint – into a propositional formula is not straightforward. There are many ways to proceed, each with its advantages and disadvantages. On the other hand, translating a clause into a pseudo-Boolean or cardinality constraint is immediate.
Therefore, it is questionable whether it is appropriate to use a sat solver to deal with problems specified using both clauses and pseudo-Boolean constraints. Is it not possible to build, if it does not already exist, a pseudo-Boolean solver that would be as efficient as a sat solver when dealing with clauses, and at least as efficient as a sat solver associated with a cnf encoding when dealing with other pseudo-Boolean constraints?
The question deserves a comparative study of existing sat and pseudo-Boolean solvers, on the same problem instances and with all the known encodings of pseudo-Boolean and cardinality constraints. If it turns out that in some cases sat solvers are better, it will be relevant to investigate the reasons for such a difference: learning strategy, branching heuristic, filtering efficiency… in order to be able to design a pseudo-Boolean solver which efficiently covers the scope of sat solvers.

Appendix A Reducing a cnf formula to a monotone circuit
To each filtering function $f$ with domain $\{0,1,*\}^n$ can be associated a Boolean function $\hat{f}$ with the convention introduced in Section 5.1. Our aim is to prove that if $f$ can be computed using unit propagation on a satisfiable formula $\phi$ of size $s$, then $\hat{f}$ can be computed by a monotone Boolean circuit of size $\mathcal{O}(s)$.
For any variable $x$, let us define the two Boolean variables $x^1$ and $x^0$, which stand for "$x$ is fixed to 1" and "$x$ is fixed to 0", respectively.
Without loss of generality, we want to prove that for any function $f$ mapping $\{0,1,*\}^n$ to $\{0,*\}$, if there is a cnf formula $\phi$ allowing unit propagation to compute $f$, then there is a monotone circuit which computes $\hat{f}$.
Such a circuit can be built based on the following principle: all the deductions unit propagation can do regarding a given literal $l$ – and hence the corresponding Boolean variable $x^1$ or $x^0$ – involve only the clauses containing $l$. Let $\phi_l$ be the set of these clauses.
Clearly, the following part of the circuit computes the value of $l$ with respect to the variables on which it depends: an or gate over the clauses of $\phi_l$, each clause contributing an and gate whose inputs are the negations of its literals other than $l$. The negation of a literal $m$ is translated to $x^0$ if $m = x$ and to $x^1$ if $m = \bar{x}$ when $x$ is an input variable, and is taken from the corresponding part of the circuit when $m$ is not related to an input variable (see Figure 3 for an example).
Bringing together the circuit parts obtained in this way, for every literal $l$ occurring in $\phi$, produces a monotone circuit with loops. These loops (if any) are not involved during the unit propagation process, because the only deductions they allow can only fix a literal to a value it already has. They can therefore be suppressed by removing the links between the output of any or gate and the inputs of the and gates involved in the computation of that same output.
The resulting circuit, which can include unnecessary parts, can simulate any deduction performed by unit propagation on $\phi$ with respect to the values of the input variables $x_1,\ldots,x_n$.
References
 [1] Olivier Bailleux and Yacine Boufkhad. Efficient CNF encoding of Boolean cardinality constraints. In CP, pages 108–122, 2003.
 [2] Olivier Bailleux, Yacine Boufkhad, and Olivier Roussel. A translation of pseudo-Boolean constraints to SAT. JSAT, 2(1-4):191–200, 2006.
 [3] Olivier Bailleux, Yacine Boufkhad, and Olivier Roussel. New encodings of pseudo-Boolean constraints into CNF. In SAT, pages 181–194, 2009.
 [4] Niklas Eén and Niklas Sörensson. Translating pseudo-Boolean constraints into SAT. JSAT, 2(1-4):1–26, 2006.
 [5] Carsten Sinz. Towards an optimal CNF encoding of Boolean cardinality constraints. In Proc. of the 11th Intl. Conf. on Principles and Practice of Constraint Programming (CP 2005), pages 827–831, 2005.
 [6] G. S. Tseitin. On the complexity of derivation in propositional calculus. Automation of Reasoning, 2:466–483, 1968.
 [7] J. P. Warners. A linear-time transformation of linear inequalities into conjunctive normal form. Information Processing Letters, 68(2):63–69, 1998.