1 Introduction
Let us begin with an introductory example, which consists in translating a pseudoBoolean constraint over three variables into a cnf formula. First, an input model must be created as follows:
Then, the three variables can be created as an array of instances of the class Variable.
The input constraint is composed of literals that can be produced from the variables. Each literal is an instance of the class Literal.
Before creating the input constraint, we have to define the coefficients associated with each literal and the tag that will be assigned to this constraint. Note that it is allowed to assign several tags to the same constraint. For example, setTags(1,3) assigns the tags 1 and 3 to the upcoming constraints that will be created until the next call of setTags.
To complete the building of the input model, it only remains to create the input constraint, which is made easy by the static factory method makeLeq, and to add it to the model.
At this point, the input model is complete. It can be printed to the screen for verification purposes.
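Gathered in one place, the model-building steps above can be sketched as follows. Since the concrete constraint of the example is not given here, the coefficients, the bound, and the variable names are illustrative assumptions:

```java
// Create the input model (container for the input constraints).
InputModel model = new InputModel();

// Create the three variables as an array of instances of Variable.
Variable[] x = new Variable[3];
for (int i = 0; i < 3; i++)
    x[i] = new Variable();

// Produce one literal from each variable.
Literal[] lits = new Literal[3];
for (int i = 0; i < 3; i++)
    lits[i] = x[i].getPosLit();

// Tag the upcoming constraint with 1, then create and add
// the constraint 2*x0 + 3*x1 + 4*x2 <= 5 (assumed values).
setTags(1);
int[] coeffs = {2, 3, 4};
model.addConstraint(makeLeq(coeffs, lits, 5));
```

As in the commented example of section 4, setTags and makeLeq are the static methods of the class BoolVar.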
Now, the input model must be translated into an output problem, namely a cnf formula. This requires creating an instance of the class CNFproblem in the following way:
Now, suppose that we want to encode the input constraint (which is marked with the tag 1) redundantly with two encoders. We have to create an instance of each of these encoders, and to assign these encoders to the tag 1. As a consequence, each constraint of the input model tagged with 1 will be translated using these two encoders.
The translation process will be achieved by reading the input model thanks to the method read of the output problem.
The resulting cnf formula is given in dimacs format by the method getOutput.
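The whole translation step can be sketched as follows; the two redundant encoders are assumed to be CNFdirectEncoder and CNFbddEncoder (the text does not name them), and model denotes the input model built above:

```java
// Create the output problem as a cnf formula.
OutputProblem out = new CNFproblem();

// Redundantly encode every constraint tagged with 1.
out.assignEncoder(1, new CNFdirectEncoder());
out.assignEncoder(1, new CNFbddEncoder());

// Run the translation, then print the result in dimacs format.
out.read(model);
System.out.println(out.getOutput());
```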
For example, if only the encoder CNFdirectEncoder is used, the result is the following:
This encoder produces an exponential number of clauses in the general case, but can produce quite compact outputs from small input constraints. The currently available encoders will be described in section 3.
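The exponential behaviour can be illustrated on the cardinality case x1 + … + xn ≤ k, where the idea of the direct encoding is simply to forbid every subset of k+1 variables from being simultaneously true, producing C(n, k+1) clauses. The following self-contained sketch (not the actual CNFdirectEncoder, which also handles arbitrary coefficients) builds these clauses in dimacs-style integer notation:

```java
import java.util.*;

public class DirectCardinality {
    // Direct encoding of x1 + ... + xn <= k (cardinality case):
    // for every subset of k+1 variables, emit one clause that
    // forbids all of them being true simultaneously.
    static List<int[]> encode(int n, int k) {
        List<int[]> clauses = new ArrayList<>();
        subsets(1, n, k + 1, new ArrayDeque<>(), clauses);
        return clauses;
    }

    private static void subsets(int from, int n, int size,
                                Deque<Integer> chosen, List<int[]> out) {
        if (size == 0) {
            int[] clause = new int[chosen.size()];
            int i = 0;
            for (int v : chosen)
                clause[i++] = -v;   // negated literals, dimacs style
            out.add(clause);
            return;
        }
        for (int v = from; v <= n - size + 1; v++) {
            chosen.addLast(v);
            subsets(v + 1, n, size - 1, chosen, out);
            chosen.removeLast();
        }
    }

    public static void main(String[] args) {
        System.out.println(encode(4, 1).size()); // C(4,2) = 6 clauses
        System.out.println(encode(4, 2).size()); // C(4,3) = 4 clauses
    }
}
```

For n = 4 and k = 1 this yields C(4,2) = 6 binary clauses; for larger constraints the number of forbidden subsets, and hence of clauses, grows exponentially with n.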
To conclude this brief presentation, let us mention that BoolVar also provides a class PBproblem and an encoder PBbasicEncoder, which allow the output problem to be produced as an instance of pseudoBoolean satisfiability in the opb format. This allows the same input problem to be solved either with a sat solver or with a pseudoBoolean satisfiability solver, in order to compare the performance and the relevance of the two approaches.
2 Description
This section presents the main aspects and resources of BoolVar/pb from the user side, as well as a short description of its internal structure.
2.1 Architecture
Figure 1 presents the architecture of the BoolVar/pb library, which can be decomposed into three parts: the input block, the output block, and the internal block.
2.1.1 The input block
This part includes the classes allowing the user to create and specify an input problem.
 InputModel

The container for the input constraints. Its constructor must be used to create a new input problem, to which the input constraints will be added thanks to the method addConstraint.
 Constraint

An interface requiring that any input constraint implements a method addTag, which allows tags to be assigned to constraints.
 GenericConstraint

An abstract class implementing the tag management system, which is the same for any input constraint.
 PseudoBooleanLeqConstraint

A pseudoBoolean inequality constraint of the form a1·l1 + … + an·ln ≤ b, where l1, …, ln are propositional literals (i.e., Boolean variables or negated Boolean variables), and a1, …, an, b are positive integers. The instances of this class can be created in two ways: (1) using the provided constructors and building methods, or (2) using the static factory method BoolVar.makeLeq.
 Literal

The building block for input constraints. A literal can be produced either by using the constructor of this class, or by using the methods getNegLit and getPosLit of the class Variable.
 Variable

A representation for the propositional variables that are used both in the input and in the output constraints.
2.1.2 The output block
This part includes the classes allowing the user to specify the output problem, as well as the way this problem is produced from the input constraints.
 OutputProblem

This interface specifies the methods that must be implemented by any kind of output problem: assignEncoder, read, and getOutput.
 CNFproblem

A representation for cnf output problems. Basically, any instance of this class contains a set of encoders and a string that will receive the result of the translation. The internal methods carry out the translation process.
 PBproblem

A representation of a pseudoBoolean output problem, in the same way as the class CNFproblem. This class allows the input problem to be produced in a form that can be solved using a pseudoBoolean satisfiability solver.
 Encoder

This interface represents an encoder, i.e., a class that contains the resources for translating constraints.
 Encoder2cnf

This interface represents an encoder which produces a cnf formula as output. The available implementations will be presented in section 3.
 Encoder2pb

This interface represents an encoder which produces a list of pseudoBoolean constraints as output.
 PBbasicEncoder

This encoder puts the input problem in a format that allows it to be solved using a pseudoBoolean satisfiability solver.
2.1.3 The internal block
This part includes the internal resources allowing the communication between the input and the output block, as well as the implementation of the translation process.
 RawConstraint

The internal representation for input constraints. Any class which implements the interface Constraint must provide a method producing an array of such internal constraints. Currently, any instance of the class PseudoBooleanLeqConstraint produces only one instance of RawConstraint, but a class PseudoBooleanEqConstraint could be implemented so as to manage input constraints of the form a1·l1 + … + an·ln = b. Such a constraint could be represented either as two internal inequality constraints or as one internal equality constraint.
 OutputConstraint

A generic output constraint.
 Clause

Implementation of OutputConstraint as a propositional clause. Contains resources that can be used by the cnf encoders to produce output clauses, which will be converted into strings by the class CNFproblem.
 PBconst

Implementation of OutputConstraint as a pseudoBoolean constraint. Contains resources that can be used by the pb encoder to produce output pseudoBoolean constraints, which will be converted into strings by the class PBproblem.
 OutputConstraints

A set of output constraints resulting from the translation process, which are intended to be converted into strings to produce the output problem.
 CNFformula

A set of clauses represented as instances of the class Clause.
 PBformula

A set of pseudoBoolean constraints represented as instances of the class PBconst.
2.2 User resources
This section presents the main classes and methods allowing one to use the resources provided by BoolVar/pb.
2.2.1 Input block user resources
 InputModel.InputModel()

Creates a new input model, which can be seen as a container for the input constraints.
 Variable.Variable()

Creates a new propositional variable, intended to be used in input constraints.
 Variable.getPosLit()

Returns the positive literal of the current variable. If the literal does not exist, it is created thanks to the constructor of the class Literal; otherwise the reference of the existing literal is returned. The literal is intended to be used in input constraints.
 Variable.getNegLit()

Returns the negative literal of the current variable. If the literal does not exist, it is created thanks to the constructor of the class Literal; otherwise the reference of the existing literal is returned. The literal is intended to be used in input constraints.
 Variable.getLit(boolean sign)

Returns the positive or the negative literal of the current variable, according to the value of sign. If the required literal does not exist, it is created thanks to the constructor of the class Literal; otherwise the reference of the existing literal is returned.
 Literal.Literal(Variable v, Boolean s)

Creates a new literal, intended to be used in input constraints. In order to avoid redundancies, literals should preferably be created with the methods getNegLit and getPosLit provided by the class Variable.
 BoolVar.setTag(int tag1)
 BoolVar.setTag(int tag1,int tag2)
 BoolVar.setTag(int tag1,int tag2,...,tag4)

Sets the tags that will be assigned to the input constraints created before the next call of setTag. The tags are arbitrary integers that are later assigned to encoders, in order to specify which encoder(s) must be used for translating each input constraint.
 BoolVar.makeLeq(int[] c, Literal[] l, int b)
 BoolVar.makeLeq(BigInteger[] c, Literal[] l, BigInteger b)

Creates a new pseudoBoolean inequality constraint c[0]·l[0] + … + c[n-1]·l[n-1] ≤ b (where n is the size of the arrays c and l) from an array c of coefficients, an array l of literals, and a bound b. The resulting constraint is intended to be added to the input model thanks to the method InputModel.addConstraint.
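As a sketch, a constraint such as 1·x1 + 2·x2 + 3·¬x3 ≤ 3 (illustrative values) would be created and added as follows:

```java
InputModel model = new InputModel();
Variable x1 = new Variable(), x2 = new Variable(), x3 = new Variable();
Literal[] lits = { x1.getPosLit(), x2.getPosLit(), x3.getNegLit() };
int[] coeffs = { 1, 2, 3 };
model.addConstraint(BoolVar.makeLeq(coeffs, lits, 3));
```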
 InputModel.addConstraint(Constraint q)

Adds a new constraint to the current input model.
2.2.2 Output block user resources
 PBproblem.PBproblem()

Creates a new output problem as a pseudoBoolean satisfiability problem.
 CNFproblem.CNFproblem()

Creates a new output problem as a propositional satisfiability problem.
 OutputProblem.assignEncoder(int tag, Encoder x)

Assigns the encoder x to the given tag. Several encoders can be assigned to the same tag by multiple calls to this method.
 OutputProblem.read(InputModel m)

Reads the input model m, translating each input constraint of m with the related encoders. This method must be called only once, after all the input constraints have been added to the input model.
 OutputProblem.getOutput()

Returns the output problem as a string. This method must be called after the method read.
3 Available encodings
The current version of BoolVar/pb includes 5 cnf encoders for pseudoBoolean inequality constraints.
The underlying encoding methods can be classified in different categories with respect to the size of the output formula and the inference power of unit propagation (which is the basic filtering technique used in sat solvers) on this formula. A cnf encoding is said to be polynomial (respectively exponential) if it produces a cnf formula whose size is polynomial (respectively exponential) with respect to the number of variables in the input constraint.
A cnf encoding is said to be a pac (propagating arc consistency) encoding if and only if applying unit propagation on the resulting formula fixes the same variables as restoring arc consistency on the corresponding input constraint. It is said to be a pic (propagating inconsistency) encoding if and only if applying unit propagation on the resulting formula produces the empty clause whenever restoring arc consistency on the corresponding input constraint detects an inconsistency. For instance, the direct encoding of x1 + x2 ≤ 1 is the single clause ¬x1 ∨ ¬x2; assigning x1 to true lets unit propagation derive ¬x2, exactly as arc consistency would. Any pac encoding is necessarily a pic one.
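The pac behaviour can be checked concretely with a toy unit propagator. The sketch below (plain Java, independent of BoolVar) propagates on the direct encoding of x1 + x2 + x3 ≤ 1; fixing x1 to true makes unit propagation fix x2 and x3 to false, as arc consistency would on the input constraint:

```java
import java.util.*;

public class UnitPropagation {
    // Clauses in dimacs style: a positive int is a variable, a negative
    // int its negation. Returns the set of literals fixed by unit
    // propagation from the initial assignment, or null on a conflict.
    static Set<Integer> propagate(List<int[]> clauses, Set<Integer> fixed) {
        Set<Integer> assigned = new HashSet<>(fixed);
        boolean changed = true;
        while (changed) {
            changed = false;
            for (int[] clause : clauses) {
                int unassigned = 0, last = 0;
                boolean satisfied = false;
                for (int lit : clause) {
                    if (assigned.contains(lit)) { satisfied = true; break; }
                    if (!assigned.contains(-lit)) { unassigned++; last = lit; }
                }
                if (satisfied) continue;
                if (unassigned == 0) return null;           // empty clause derived
                if (unassigned == 1 && assigned.add(last))  // unit clause: fix literal
                    changed = true;
            }
        }
        return assigned;
    }

    public static void main(String[] args) {
        // Direct encoding of x1 + x2 + x3 <= 1: forbid each pair.
        List<int[]> cnf = Arrays.asList(
            new int[]{-1, -2}, new int[]{-1, -3}, new int[]{-2, -3});
        // Setting x1 = true fixes x2 and x3 to false by unit propagation.
        Set<Integer> result = propagate(cnf, new HashSet<>(Arrays.asList(1)));
        System.out.println(result.contains(-2) && result.contains(-3)); // true
    }
}
```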
3.1 CNFdirectEncoder
3.2 CNFbddEncoder
This exponential pac bdd-based encoding was introduced in [1]. It generally produces a smaller formula than the direct encoding, because it uses additional variables corresponding to the nodes of the bdd, which allows identical subgraphs to be factorized. Unlike the direct one, this encoding is polynomial for cardinality constraints, i.e., when all the coefficients are 1.
3.3 CNFlinearEncoder
3.4 CNFwatchdogEncoder
This encoding, introduced in [2], is both polynomial and pac (hence pic), but can sometimes produce output formulae of prohibitive size.
3.5 CNFbargraphEncoder
This is a variant of the watchdog encoding, also presented in [2], which is pic but not pac, and produces smaller formulae.
4 A commented example
As an example, we will encode the bin-packing problem, where n objects with weights w_1, …, w_n must be put into m boxes with capacities c_1, …, c_m, in such a way that each object occurs in exactly one box and the sum of the weights of all the objects belonging to any box does not exceed the capacity of this box.
Each instance of this problem will be represented thanks to an m × n matrix v of Boolean variables, where v[i][j] = true means that object j is in box i.
There are two kinds of constraints, namely:

the unicity constraints, which ensure that each object j belongs to exactly one box: v[1][j] + v[2][j] + … + v[m][j] = 1;

the capacity constraints, ensuring that the sum of the weights of the objects in any box i does not exceed the capacity of this box: w_1·v[i][1] + w_2·v[i][2] + … + w_n·v[i][n] ≤ c_i.
The bin-packing problem can be encoded with BoolVar in the following way. The integer arrays weights and capacities are supposed to contain the weights of the n objects and the capacities of the m boxes, respectively.

Create the input model.
InputModel p = new InputModel(); 
Create and initialize the matrix of domain variables.
Variable[][] v = new Variable[m][n];
for(int i=0; i<m; i++)
    for(int j=0; j<n; j++)
        v[i][j] = new Variable();
Create the unicity constraints and assign them the tag 1 with the method setTags. Each equality constraint v[1][j] + … + v[m][j] = 1 is encoded as two inequality constraints: v[1][j] + … + v[m][j] ≤ 1 and ¬v[1][j] + … + ¬v[m][j] ≤ m-1.
setTags(1);
int[] coeffs = new int[m];
for(int i=0; i<m; i++)
    coeffs[i] = 1;
for(int j=0; j<n; j++){
    Literal[] poslits = new Literal[m];
    Literal[] neglits = new Literal[m];
    for(int i=0; i<m; i++){
        poslits[i] = v[i][j].getPosLit();
        neglits[i] = v[i][j].getNegLit();
    }
    p.addConstraint(makeLeq(coeffs, poslits, 1));
    p.addConstraint(makeLeq(coeffs, neglits, m-1));
}
Create the capacity constraints and assign them the tag 2.
setTags(2);
for(int i=0; i<m; i++){
    Literal[] lits = new Literal[n];
    for(int j=0; j<n; j++)
        lits[j] = v[i][j].getPosLit();
    p.addConstraint(makeLeq(weights, lits, capacities[i]));
}
Create the output problem.
OutputProblem out = new CNFproblem();
Create the two encoders that will be used to translate the input constraints: the bdd encoder will be used to translate the unicity constraints (tagged with 1), and the bargraph encoder will be used to translate the capacity constraints (tagged with 2).
Encoder2cnf bdd = new CNFbddEncoder();
Encoder2cnf bg = new CNFbargraphEncoder();
out.assignEncoder(1, bdd);
out.assignEncoder(2, bg);
Read the input model, which runs the translation process, and print the result as a string.
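Using the methods read and getOutput described in section 2.2, this final step can be written as:

```java
out.read(p);
System.out.println(out.getOutput());
```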
5 Perspectives
The following evolutions are planned.
Predicting the size of the output formulae
The size of the formula resulting from the translation of each input constraint is a critical parameter for the choice of the encodings. The interface Encoder will include a method providing this information.
Automatic choice of the encoders
Assigning a dedicated tag to a constraint will allow the encoder used for translating this constraint to be selected automatically, according to both the size of the output formula and the inference power of unit propagation on this formula.
Clauses as input constraints
It will be possible to add clauses as input constraints. These clauses will be directly added to the output problem, so as to deal with input problems that are specified with both clauses and pseudoBoolean constraints.
Supporting new encodings
Some encodings dedicated to cardinality constraints will be added. Because cardinality constraints are a special case of pseudoBoolean ones, the pseudoBoolean encoders can of course deal with cardinality constraints, but there exist specific encodings which could be more efficient and/or more compact for them.
In addition, some of the already implemented encodings could be improved and/or hybridized so as to reduce the size/efficiency ratio of the resulting output formulae.
Coupling with a solver
The goal is to provide the resources for solving the output problem with the solver Sat4j [3], making it possible to build stand-alone applications.
References
 [1] Olivier Bailleux, Yacine Boufkhad, and Olivier Roussel. A translation of pseudo-Boolean constraints to SAT. JSAT, 2(1-4):191–200, 2006.
 [2] Olivier Bailleux, Yacine Boufkhad, and Olivier Roussel. New encodings of pseudo-Boolean constraints into CNF. In SAT, pages 181–194, 2009.
 [3] Daniel Le Berre and Anne Parrain. The Sat4j library, release 2.2. JSAT, 7:59–64, 2010.
 [4] Niklas Eén and Niklas Sörensson. Translating pseudo-Boolean constraints into SAT. JSAT, 2(1-4):1–26, 2006.
 [5] J. P. Warners. A linear-time transformation of linear inequalities into conjunctive normal form. Information Processing Letters, 1998.