1 Introduction
A (pure) integer programming (IP) problem consists of the following: (1) a set of problem variables with upper and lower bounds, some of which (perhaps all) are constrained to take integer values, (2) an objective function which is a linear function of the variables and (3) linear constraints on the variables. The goal of an IP solver is to find a solution that satisfies all constraints and maximises (or minimises) the objective function. Decades of work on the theory, practice and implementation of IP solvers means that they are often the method of choice for solving hard (NP-hard) constrained optimisation problems.
IP problems are presented to IP solvers either via an API (the Gurobi solver [4] has APIs for 6 programming languages) or using some standard format or modelling language such as ZIMPL [5] or AMPL [1]. In the constraint programming community MiniZinc [7] is used. Modelling languages provide a ‘template’ approach where the user can declare many variables and constraints in a compact way. For example, in the ZIMPL language
set I := { 1 .. 100 };
var x[I] integer >= 2 <= 18;
var y[I] real;
declares 100 integer variables and 100 real-valued variables, and
subto fo: forall<i> in I: x[i] <= 2*y[i];
declares 100 linear constraints. This compact representation is then ‘unrolled’ into an explicit list of variables and constraints, so that the problem presented to the solver is exactly as if the user had laboriously listed all variables and constraints explicitly in the first place. This unrolling is done by either creating a large input file (in say, ‘lp’ format) or is done internally by the solver. For example, the SCIP solver [2] directly accepts problems defined in the ZIMPL language.
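A minimal Python sketch of what this unrolling produces for the ZIMPL template above. The Var and constraint representations are invented for illustration; they are not any solver's API:

```python
# Sketch: 'unrolling' the ZIMPL template above into explicit variables
# and constraints.  Names (Var, constraints) are illustrative only.
from dataclasses import dataclass

@dataclass
class Var:
    name: str
    integer: bool
    lb: float
    ub: float

I = range(1, 101)
# var x[I] integer >= 2 <= 18;  and  var y[I] real;
variables = [Var(f"x#{i}", True, 2.0, 18.0) for i in I] + \
            [Var(f"y#{i}", False, float("-inf"), float("inf")) for i in I]

# subto fo: forall<i> in I: x[i] <= 2*y[i];
# unrolled into one explicit row  x[i] - 2*y[i] <= 0  per i
constraints = [{f"x#{i}": 1.0, f"y#{i}": -2.0} for i in I]

print(len(variables), len(constraints))
```

The solver then sees exactly the same 200 variables and 100 rows it would have seen had they been listed by hand.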
In this paper we use first-order logic as the modelling language and use it to describe problems which cannot, in general, be represented by unrolling since this would result in infinitely many IP variables and constraints. By way of introduction we first consider the special case (first-order languages without function symbols) where a first-order representation can be unrolled.
2 First-order logic as a template language
It is easy to use first-order logic to define a standard IP problem with finitely many variables and constraints. We can write a logic program (in e.g. Prolog) to define a predicate linear/4 using Prolog clauses like this:

linear(LHS,Coeffs,Var,RHS) :- foo(LHS,Coeffs,Var,RHS).
linear(LHS,Coeffs,Var,RHS) :- bar(LHS,Coeffs,Var,RHS).

and then use Prolog's findall/3 metapredicate to find all ground instances of linear(LHS,Coeffs,Var,RHS) which are logically entailed by the logic program. (Throughout this paper logical variables will have an initial upper-case letter and constants, function and predicate symbols will have an initial lower-case letter.) Each such ground instance will represent a linear constraint in the IP. Other predicates can be written to similarly generate IP variables together with their bounds and objective coefficients. Many IP solvers go beyond pure IP and allow nonlinear (e.g. quadratic) constraints. Such constraints can be defined analogously to the linear ones. Note that the variables and constraints of the IP are represented as ground terms in the problem-defining logic program.
2.1 Encoding MLN MAP problems
Using a first-order language as a template language is particularly convenient when using IP to solve MAP problems for Markov logic networks (MLNs). An MLN is "a finite set of pairs (F_i, w_i), 1 ≤ i ≤ n, where each F_i is a clause in function-free first-order logic and w_i ∈ ℝ" [9]. Since the clauses are function-free the number of ground atoms in the first-order language implicitly defined by the MLN is finite. An MLN defines a probability for each Herbrand interpretation x as follows:

P(x) = (1/Z) exp( Σ_i w_i n_i(x) )

where n_i(x) is the number of groundings of formula F_i which are true in x and Z is a normalising constant. The MAP problem for MLNs is to find argmax_x P(x|e) where e is evidence, a (possibly empty) set of ground atoms with fixed truth values (true or false). In other words the goal is to find the most probable interpretation (Herbrand model) which is consistent with the evidence.
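The distribution and MAP problem just defined can be made concrete with a tiny, fully ground example. The atoms and weights below are invented for illustration; P(x) is computed by brute force over all interpretations:

```python
import itertools, math

# Sketch: P(x) = exp(sum_i w_i * n_i(x)) / Z for a tiny hypothetical
# ground MLN over two atoms; here each n_i(x) is 0 or 1.
atoms = ["p", "q"]
clauses = [
    (1.5, lambda x: x["p"] or not x["q"]),   # weight 1.5:  p v ~q
    (0.5, lambda x: x["q"]),                 # weight 0.5:  q
]

def weight(x):
    """Unnormalised weight exp(sum of weights of satisfied clauses)."""
    return math.exp(sum(w for w, c in clauses if c(x)))

interps = [dict(zip(atoms, bits))
           for bits in itertools.product([False, True], repeat=len(atoms))]
Z = sum(weight(x) for x in interps)          # normalising constant
map_x = max(interps, key=weight)             # MAP: most probable interpretation
print(map_x, round(weight(map_x) / Z, 3))
```

With no evidence the MAP interpretation here sets both atoms true, since that satisfies both weighted clauses.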
Similarly to the RockIt system [9] we will construct an IP where there is a one-one correspondence between non-evidence ground atoms in the Herbrand base and binary variables in the IP. We will present our encoding by example using the LP ('link prediction') MLN which can be downloaded from the Tuffy [8] website. The following weighted clause (clause 19) from the LP MLN

0.749123  publication(A3,A1) ∧ publication(A3,A2) ∧ ¬samePerson(A1,A2) → ¬advisedBy(A1,A2) ∨ ¬advisedBy(A2,A1)

is encoded as follows:

cons(lit(p,cb(19,A1,A2)),and,
     [lit(n,advisedBy(A1,A2)),
      lit(n,advisedBy(A2,A1))]) :-
    guard(19,A1,A2,[_A3]).
guard(19,A1,A2,[A3]) :-
    publication(A3,A1),
    publication(A3,A2),
    not samePerson(A1,A2).
These two Prolog clauses say that for any grounding (of A1, A2 and A3) which satisfies the guard/4 predicate this constraint:

cb(19,a1,a2) <-> ~advisedBy(a1,a2) & ~advisedBy(a2,a1)

should be added to the IP. This constraint states that the binary variable cb(19,a1,a2) is true (takes value 1) iff both advisedBy(a1,a2) and advisedBy(a2,a1) are false. Both
the SCIP and Gurobi solvers accept AND constraints like this. (SCIP internally creates linear constraints from AND constraints; presumably Gurobi does also.) In the LP MLN publication/2 and samePerson/2 are evidence predicates: the truth value of every one of their ground instances is fixed. This is represented in our IP-defining Prolog program by simply adding those publication/2 facts which are true. In the case of samePerson/2 we add the single clause samePerson(X,X) rather than adding the 68 facts like samePerson("Person319", "Person319") which are present in the original encoding of the problem.
Our encoding reflects the fact that we are only interested in groundings of the MLN clause which are not satisfied due to evidence ground atoms having a clause-satisfying truth value. The encoding states that for each such grounding where advisedBy(a1,a2) and advisedBy(a2,a1) have the 'wrong' truth values (are false) then cb(19,a1,a2) has to be true. cb(19,a1,a2) is a penalty atom (cb stands for 'clause broken') so must have a positive cost. This means that in any optimal solution it will have value 1 only if forced to do so by the values of advisedBy(a1,a2) and advisedBy(a2,a1). This is why we can have an AND ('iff') constraint rather than a weaker 'if' constraint.
To get the correct cost it is necessary to count how many groundings correspond to cb(19,A1,A2). This can be achieved as follows:

cost(cb(19,A1,A2),Cost) :-
    setof(X,guard(19,A1,A2,X),Sols),
    length(Sols,Count),
    Cost is Count * 0.749123.
The main features of our chosen encoding have now been illustrated; most other MLN clauses are encoded just like the example just given. Note that when an MLN clause has only one non-evidence literal in it then the resulting AND constraints are of the form cb <-> lit, i.e. equations. This allows preprocessing to remove the penalty atom (and the AND constraint) from the problem.
The following MLN clause (clause 10) from the LP example:

0.384788  ¬advisedBy(A1,A2) ∨ ¬advisedBy(A1,A3) ∨ samePerson(A2,A3)

cannot be so preprocessed away and will lead to many AND constraints in the IP. However, since the predicate samePerson/2 is just equality in disguise it has a special structure which can be exploited. This clause states that for any a1 there should be a penalty of 0.384788 for each ordered pair (a2,a3) of distinct individuals where both advisedBy(a1,a3) and advisedBy(a1,a2) are true. If n_{a1} is the number of facts (in a candidate model) which unify with advisedBy(a1,_) then the number of such pairs is simply n_{a1}(n_{a1}−1). So we encode clause 10 by one linear constraint (to compute n_{a1}) and one quadratic constraint (to compute n_{a1}(n_{a1}−1)).
The resulting IP contains 30,243 variables and 44,609 constraints. However, after preprocessing we end up with only 7,108 variables and 2,484 constraints. Using the SCIP solver the IP is solved to optimality in 26.3 seconds (including 4.4 seconds for preprocessing) using a single core of a 1.7GHz laptop. The optimal model found has 60 ground atoms set to true. In contrast, as reported by Noessner et al [9] none of the 4 MLN systems RockIt [9], TheBeast [10], Tuffy [8] or Alchemy [6] were able to solve this problem to optimality (or even to a 0.1% gap) within 1 hour.
The key to solving this particular problem is dealing with clause 10 properly. If clause 10 is omitted both RockIt and our approach solve the problem very quickly, both returning the same optimal solution (with 273 ground atoms set to true). Using the version of RockIt available via the RockIt web interface, with clause 10 included RockIt does find an optimal solution (with 60 true ground atoms) after 424 seconds but is unable to establish that it is optimal.
The version of the LP MLN considered so far contains 24 MLN clauses [9, Table 4] and excludes 2 MLN formulae which are weighted (i.e. not hard) formulae containing existentially quantified variables. This is because RockIt cannot handle such formulae. However, we can, encoding for example this formula (which states that every professor who is not a visiting faculty member advises someone):

professor(X) ∧ ¬hasPosition(X,"Faculty_visiting") → ∃Y advisedBy(Y,X)

like this:

cons(lit(n,cb(26,X)),and,Lits) :-
    professor(X),
    \+ hasPosition(X,"Faculty_visiting"),
    findall(lit(n,advisedBy(Y,X)),person(Y),Lits).
The IP resulting from adding in the two missing existentially quantified formulae is solved by SCIP in 29 seconds.
3 The minimal cost Herbrand model problem
The basic idea of the current paper is that the template approach described in Section 2 can be extended to the case where the logic program defines an IP with infinitely many variables and constraints. Evidently, in this case, the simple ‘grounding out’ approach of Section 2 can no longer be used. Instead variables and constraints of the IP will be added to the IP during the course of solving by methods known as ‘pricing’ and ‘cutting’ respectively. We show that in some cases only a finite subset of the full set of variables and constraints need be added to find a provably optimal solution.
We will initially restrict attention to problems where each constraint in the underlying IP is a linear constraint corresponding to a ground clause in some first-order language and these (perhaps infinitely many) ground clauses are defined as the set of all ground instances of some finite collection of first-order clauses. Formally, we consider the problem of finding a minimal cost Herbrand model: given (1) a set Σ of clauses in some first-order language L and (2) a cost function c, mapping each ground atom in the Herbrand base B(L) to a non-negative real, then our goal is either to find a Herbrand model of Σ which is guaranteed to minimise the sum of the costs of true ground atoms, or to establish that there is no Herbrand model for Σ.
The IP encoding of the problem is straightforward. For each ground atom a there is a binary IP variable x_a with objective coefficient c(a). Only a finite subset of these IP variables ever get explicitly represented in the IP. There is also a linear inequality for each grounding of each of the clauses in Σ. For a ground clause

a_1 ∨ ... ∨ a_j ∨ ¬b_1 ∨ ... ∨ ¬b_k

the corresponding linear inequality is:

x_{a_1} + ... + x_{a_j} + (1 − x_{b_1}) + ... + (1 − x_{b_k}) ≥ 1    (1)

Call such inequalities clausal inequalities [3]. Of course, only a finite subset of the clausal inequalities are ever present in the IP.
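The mapping from ground clauses to clausal inequalities can be sketched in a few lines. Atom names here are illustrative; the point is the left-hand-side 'activity' used throughout the rest of the paper:

```python
# Sketch: a ground clause as (pos, neg) atom lists and its clausal
# inequality  sum_{a in pos} x_a + sum_{b in neg} (1 - x_b) >= 1.
def activity(pos, neg, x):
    """Left-hand side of the clausal inequality at point x.  Atoms
    absent from x (omitted IP variables) are treated as 0."""
    return (sum(x.get(a, 0.0) for a in pos) +
            sum(1.0 - x.get(b, 0.0) for b in neg))

def satisfied(pos, neg, x):
    return activity(pos, neg, x) >= 1.0

# ground clause  r(a) v ~p(a) v ~q(a)
pos, neg = ["r(a)"], ["p(a)", "q(a)"]
print(satisfied(pos, neg, {"p(a)": 1.0, "q(a)": 1.0, "r(a)": 0.0}))  # False
print(satisfied(pos, neg, {"p(a)": 1.0, "q(a)": 1.0, "r(a)": 1.0}))  # True
```

A 0/1 point satisfies the inequality exactly when it satisfies the clause, which is what makes (1) a correct (if weak) encoding.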
4 Cut-and-price
Consider first the special case where the first-order language contains no function symbols apart from constants, so the Herbrand base is finite and thus the number of IP variables and constraints are both finite. Let n denote the number of variables, and let x denote a generic candidate solution to the IP. Typically, the strategy for solving such an IP would involve solving the linear relaxation of the IP, which provides a useful global lower bound on any optimal solution. The linear relaxation is the linear program (LP) which results from relaxing the integrality constraint x ∈ {0,1}^n to 0 ≤ x ≤ 1.

However, if there were very many ground clauses (= very many clausal inequalities) then solving this LP (which we call the full LP) could be slow. A cutting plane approach addresses this problem: a small number (perhaps zero) of linear inequalities from the full LP are included in an initial LP. This initial LP is solved, producing a solution x*. There is then a search for one or more linear inequalities from the full LP which x* does not satisfy; such inequalities are known as cutting planes or cuts. If any cuts are found they are added to the initial LP which is then re-solved, generating a new solution x*. This process continues until no cuts can be found, at which point we have an optimal solution to the full LP even though (typically) we have not added all its linear inequalities. This approach can be strengthened by using not only inequalities from the full LP but additional inequalities which follow from assuming that x is integer-valued. Adding such additional inequalities results in an LP whose solution provides a better lower bound on the IP solution.

It may be that the number of variables in the full LP (i.e. the size of the Herbrand base) is also so big as to cause problems. To get round this problem one can adopt a pricing strategy, where in an initial LP only a small number (perhaps zero) of variables are used. All omitted variables are implicitly fixed to have value zero, so if an omitted variable appears in a clausal inequality it is replaced with a zero. Once this initial LP is solved there is then a search for currently omitted variables which, if allowed to take a value other than zero, would allow a better solution to the current LP (better either in terms of feasibility or objective value). If the current LP has a solution then any variable with negative reduced cost allows an improvement. Reduced costs are now explained.
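The cutting-plane loop described above can be sketched as follows. The inequality pool and objective are invented for illustration, and a brute-force grid search stands in for a real LP solver:

```python
import itertools

# Sketch of the cutting-plane loop: solve an LP restricted to the cuts
# found so far, ask a separation oracle for a violated inequality from
# the pool, add it, and re-solve.
# Pool of inequalities a1*x + a2*y >= b, written as (a1, a2, b).
pool = [(1, 0, 0.5), (0, 1, 0.5), (1, 1, 1.5)]

def solve(cuts):
    """Minimise x + y over [0,1]^2 subject to cuts (grid stand-in)."""
    grid = [i / 100 for i in range(101)]
    feasible = ((x, y) for x, y in itertools.product(grid, grid)
                if all(a1 * x + a2 * y >= b - 1e-9 for a1, a2, b in cuts))
    return min(feasible, key=lambda p: p[0] + p[1])

def separate(point, cuts):
    """Return an inequality from the pool that the point violates, if any."""
    x, y = point
    for ineq in pool:
        a1, a2, b = ineq
        if ineq not in cuts and a1 * x + a2 * y < b - 1e-9:
            return ineq
    return None

cuts = []
point = solve(cuts)
while (cut := separate(point, cuts)) is not None:
    cuts.append(cut)
    point = solve(cuts)
print(point, len(cuts))
```

The loop stops once no pool inequality is violated; the final point is then optimal for the full LP even though the pool was never written out in advance.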
If the current LP has a solution then there will also be a solution, call it y, to its dual, which assigns a non-negative real value y_i to each of the inequalities i in the LP. Let a_ij be the coefficient of variable j in inequality i; then the reduced cost for variable j associated with solution y is c_j − Σ_i y_i a_ij. (If the current LP is infeasible then there will be a vector of dual Farkas multipliers which allows 'improving' variables to be identified in a similar way, a process called Farkas pricing.) If the current LP has a solution and no omitted variable with negative reduced cost can be found then it follows that the current set of variables is enough to get an optimal solution to the full LP.

The cut-and-price approach used in this paper rests on the simple observation that both cutting and pricing can still be used when the 'pool' of available inequalities and variables is allowed to be infinite, rather than just very large but finite. This means that the above-described price-and-cut approach can be applied when there is no restriction on the first-order language. So from now on we will remove the restriction to finite Herbrand bases and consider general first-order languages.
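The reduced-cost computation just described is a one-liner once the dual values and columns are in hand. The numbers below are invented for illustration:

```python
# Sketch: reduced costs.  For  min c.x  s.t. A x >= b, x >= 0  with dual
# values y_i >= 0 on the rows, the reduced cost of variable j is
# c_j - sum_i y_i * a_ij; an omitted variable (implicitly fixed at 0)
# is worth adding only if this quantity is negative.
costs = {"u": 1.0, "v": 0.2}     # objective coefficients c_j
dual = [0.6, 0.5]                # dual solution y on two rows
coeff = {                        # column a_.j of each variable
    "u": [1.0, 0.0],
    "v": [1.0, 1.0],
}

def reduced_cost(j):
    return costs[j] - sum(y * a for y, a in zip(dual, coeff[j]))

for j in costs:
    print(j, round(reduced_cost(j), 3))
# v has reduced cost 0.2 - (0.6 + 0.5) = -0.9 < 0, so pricing would add it
```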
5 Generating cuts from first-order clauses
We now consider how to find cutting planes, a problem known as the separation problem. Let Σ be a set of first-order clauses (a CNF formula). For the time being we will make the simplifying assumption that any substitution that grounds all the negative literals in a first-order clause in Σ determines a grounding for all positive literals in the clause. We will later consider how this restriction can be relaxed.
Given a solution x* to an LP whose inequalities are a finite set of ground instances of these clauses, the problem is to find a new ground instance of some first-order clause in Σ which x* does not satisfy. This is done by considering each first-order clause in turn and for each doing a simple depth-first search for a suitable ground instance. Each state of this search is a 4-tuple (θ, N, P, A) where θ is a substitution, N is a set of ground atoms representing negative literals, P a set of ground positive literals and A is an activity value equal to Σ_{b∈N} (1 − x*_b) + Σ_{a∈P} x*_a. A state is a goal state if (i) θ is a complete grounding of the first-order clause and (ii) A < 1. The initial state is (∅, ∅, ∅, 0). We first illustrate the search process by example, and then describe it formally. Consider the following clause

¬p(X) ∨ ¬q(X,Y) ∨ r(X,Y)

and LP solution x* where x*_{p(a)} = 0.4, x*_{q(a,b)} = 0.2 and x*_{q(a,c)} = 1.
The search grounds a given negative literal by scanning the LP solution for atoms b such that x*_b > 0 and which unify with the negative literal. In this example, we have x*_{p(a)} = 0.4, so we can ground the first literal using the substitution {X/a} and increase the activity value to 1 − 0.4 = 0.6, allowing the search to move to state ({X/a}, {p(a)}, ∅, 0.6).
Suppose next that the search uses the fact that x*_{q(a,b)} = 0.2 to unify q(a,b) with q(a,Y). This leads to state ({X/a, Y/b}, {p(a), q(a,b)}, ∅, 1.4). This is a fail-state since 1.4 ≥ 1 and so the search would backtrack and use x*_{q(a,c)} = 1 to unify q(a,c) with q(a,Y), leading to state ({X/a, Y/c}, {p(a), q(a,c)}, ∅, 0.6).
Both negative literals are now ground and since the clause satisfies the restriction mentioned above, the positive literal is also ground and is the atom r(a,c). Suppose now that x_{r(a,c)} is an omitted variable so its value is zero in x*. In this case we have reached the goal state ({X/a, Y/c}, {p(a), q(a,c)}, {r(a,c)}, 0.6), which corresponds to the cut (1 − x_{p(a)}) + (1 − x_{q(a,c)}) ≥ 1. Note that the inequality (1 − x_{p(a)}) + (1 − x_{q(a,c)}) + x_{r(a,c)} ≥ 1 could only be generated if the currently-omitted variable x_{r(a,c)} were created, an issue discussed in the next section.
The search for cuts from first-order clauses is now formally described. We assume that all negative literals precede positive literals in each first-order clause and that any grounding of all negative literals determines a unique grounding for all positive literals. If the current state is (θ, N, P, A) and the next literal in the first-order clause is a negative literal ¬b then successor states are of the form (θθ', N ∪ {bθθ'}, P, A + (1 − x*_{bθθ'})) where θ' is a grounding of bθ such that A + (1 − x*_{bθθ'}) < 1. If the next literal is a positive literal a then aθ will be ground, and the unique successor state is (θ, N, P ∪ {aθ}, A + x*_{aθ}), where x*_{aθ} is defined to be zero if the LP variable x_{aθ} does not currently exist.
Note that this search will always terminate since the following are all finite: (1) the number of atoms b such that x*_b > 0, (2) the number of first-order clauses and (3) the length of each first-order clause. Moreover, for similar reasons the number of cuts which can be generated by this search is also finite.
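The depth-first search just described can be sketched directly. The clause, atom names and x* values below are illustrative (args are variable names to be bound by unification):

```python
# Sketch of the depth-first search for cuts.  A clause is a list of
# (sign, pred, args) literals, negatives first.  Negative literals are
# grounded by scanning atoms with x*_b > 0; a grounding yields a cut
# when its activity stays below 1.
clause = [("neg", "p", ["X"]), ("neg", "q", ["X", "Y"]), ("pos", "r", ["X", "Y"])]
xstar = {("p", ("a",)): 0.4, ("q", ("a", "b")): 0.2, ("q", ("a", "c")): 1.0}

def search(i, theta, neg, pos, act, cuts):
    if i == len(clause):                       # complete grounding with act < 1
        cuts.append((tuple(pos), tuple(neg), act))
        return
    sign, pred, args = clause[i]
    if sign == "neg":
        for (p, const), val in xstar.items():  # scan atoms with x*_b > 0
            if p != pred or val <= 0:
                continue
            theta2 = dict(theta)               # try to unify args with const
            if all(theta2.setdefault(a, c) == c for a, c in zip(args, const)):
                if act + (1 - val) < 1:        # prune: activity must stay < 1
                    search(i + 1, theta2, neg + [(pred, const)], pos,
                           act + (1 - val), cuts)
    else:                                      # positive literal, now ground
        atom = (pred, tuple(theta[a] for a in args))
        val = xstar.get(atom, 0.0)             # omitted variable counts as 0
        if act + val < 1:
            search(i + 1, theta, neg, pos + [atom], act + val, cuts)

cuts = []
search(0, {}, [], [], 0.0, cuts)
print(cuts)
```

On this data the branch through q(a,b) is pruned (activity 1.4) and exactly one cut is found, with activity 0.6, matching the worked example above.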
6 Generating ground atoms
In a typical cut-and-price approach, pricing (the search for improving variables) is done separately from cutting, and indeed, earlier (unpublished) work of ours took just this approach. However, it is possible to efficiently generate improving variables as part of cut generation, avoiding the need for a separate search. This is the approach taken here.
Recall that the reduced cost of a variable j is c_j − Σ_i y_i a_ij, from which it immediately follows that a variable can only have negative reduced cost if its associated ground atom appears as a positive literal in at least one of the ground clauses. (The only 'reason' for setting a ground atom to true is if doing so 'helps' satisfy a (ground instance of) a clause.) This leads to the following very simple pricing strategy: for each clausal inequality in the current LP ensure that all variables corresponding to positive literals exist in the LP (creating them if necessary). A drawback of this approach is that it may lead to the creation of more LP variables than necessary, since we create all LP variables which might conceivably have negative reduced cost rather than search for those that definitely do. The simplicity and efficiency of creating new inequalities and new variables simultaneously is, however, sufficient compensation.
So, in the example cut given in Section 5 we would create the missing variable x_{r(a,c)} and add it to the generated cut giving (1 − x_{p(a)}) + (1 − x_{q(a,c)}) + x_{r(a,c)} ≥ 1. This is still a cut for the current LP solution since x*_{r(a,c)} = 0. However, now that x_{r(a,c)} exists in the LP a better solution where x_{r(a,c)} has a positive value may be possible.
7 Branch-price-and-cut
Our approach is to create an initial IP with no variables and no inequalities and to search for cuts (for the IP’s linear relaxation) using the method given in Section 5, adding variables at the same time, as described in Section 6. Assuming at least one cut is found this produces a new linear relaxation for which cuts are sought (and new variables generated) in the same way. This process continues until no further cuts can be found. However, we have no guarantee that this will terminate since the problem of determining whether a set of firstorder clauses even has a model is undecidable. In practice, we impose a time limit and admit defeat if it is reached before the problem is solved.
If the cut-generating process terminates, the objective value of the solution x* to this final LP provides a global lower bound on solutions to the IP. If x* happens to be an integer solution then the IP is solved. However, typically this is not the case and there will be fractional values x*_a where 0 < x*_a < 1. If this is the case we branch on some fractional variable x_a, creating two subproblems, one where x_a = 0 (a is false) and one where x_a = 1 (a is true). The solving process then continues recursively: each subproblem is attacked in the same way as the original global problem. This method of solving IPs is known as branch-price-and-cut.
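The branching step itself is simple to sketch. The LP-relaxation values below are invented for illustration, and real solvers use much more careful variable-selection rules:

```python
# Sketch: branching on a fractional variable.  Given an LP-relaxation
# solution, pick some atom a with 0 < x*_a < 1 and create two
# subproblems, one fixing x_a = 0 and one fixing x_a = 1.
xstar = {"advisedBy(a1,a2)": 1.0, "cb(19,a1,a2)": 0.3, "advisedBy(a2,a1)": 0.0}

frac = [a for a, v in xstar.items() if 0.0 < v < 1.0]
assert frac                 # otherwise x* is integral and the IP is solved
atom = frac[0]              # naive selection: first fractional variable
subproblems = [{atom: 0.0}, {atom: 1.0}]
print(atom, subproblems)
```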
8 Defining a problem instance
A problem instance is a triple (L, Σ, c): a first-order language, a set of clauses and a cost function. It is defined by writing a logic program in Mercury. Mercury [11] is a strongly-typed, purely declarative logic programming language where the user is obliged to declare types, modes and determinisms for each predicate definition in a logic program. This allows Mercury programs to be compiled to C and thence to native code. The result is much faster execution than Prolog.
To define a problem instance the user must first define the first-order language for the instance. This is done via a Mercury type declaration specifying the Herbrand base B(L). For example, this type declaration:

:- type atom ---> f(int,list(int)) ; cb(int,list(int)).

declares that B(L) includes f(3,[2,4]), f(9,[2,4,4]), cb(2,[4]), cb(200,[5,6]), and all other (infinitely many) similarly typed ground atoms. (Note that one can view integers as abbreviations for ground terms in some suitable first-order language, where, for example, "2" abbreviates "s(s(0))".)
Secondly, the user is required to declare the cost function by defining a semideterministic predicate mapping atoms to floats. Continuing our example, we might have:

:- pred cost(atom::in, float::out) is semidet.
cost(cb(X,L),1.0/float(X)).
cost(f(X,[H|T]),0.01).

The "semidet" (i.e. semideterministic) declaration states that cost/2 either maps an input ground atom to a unique float or fails. Note that ground atoms in B(L) are ground terms in the Mercury program, so that the problem-defining Mercury program is, in effect, a meta-program. Allowing failure in cost/2 is just for convenience since this relieves the user from having to explicitly define zero costs: a ground atom for which cost/2 fails implicitly has zero cost.
Thirdly, the user must represent the first-order clauses. Continuing our example, suppose this clause were in Σ:

∀N,L: f(N,L) → f(N+1,[N+1|L]) ∨ cb(N,L)

then it would be represented in the problem-defining Mercury program as follows:

clause("2") -->
    neglit_out(f(N,L)),
    neglit(f(N,L)),
    poslit(f(N+1,[N+1|L])),
    poslit(cb(N,L)).
Here DCG notation is being used, so each of the 5 literals in the Mercury clause has 2 extra variables which are not explicitly represented. These extra variables represent states of the search for cuts which was described in Section 5. "2" is just an arbitrary identifier for the clause. Note that there are 2 Mercury predicates for the negative literal ¬f(N,L). The first, neglit_out/3, generates a grounding from the current LP solution and the second, neglit/3, does everything else that is necessary. This division of labour is for reasons of efficiency and could perhaps be hidden from the user by some syntactic sugar.
Given an LP solution x*, the Mercury goal clause("2",S0,S4) is called where S0 will be unified with a ground term representing the initial state of the search. If this goal succeeds then S4 will represent a goal state of the search from which a cut can be extracted and added to the LP. Using Mercury's built-in solutions/2 predicate (Mercury's version of Prolog's findall/3) we can find all valid instantiations of S4 and thus all groundings of the clause which are cuts for x*.
8.1 Using context predicates
This Mercury clause

clause("walls") -->
    neglit_out(position(I,X1,Y1)),
    neglit_out(position(I+1,X2,Y2)),
    {wall_between(I,X1,Y1,X2,Y2)},
    neglit(position(I,X1,Y1)),
    neglit(position(I+1,X2,Y2)).

generates ground instances of the clause

¬position(I,X1,Y1) ∨ ¬position(I+1,X2,Y2)

but only those where wall_between(I,X1,Y1,X2,Y2) is true. (The curly brackets indicate that no extra 'state' arguments are added.) wall_between/5 is a context predicate whose definition is given by a normal Mercury clause, for example:

wall_between(I,X,Y,X+1,Y) :- I mod 3 = 0.
The set of true ground atoms for a context predicate is fixed (by its definition in the problem-defining Mercury program) before solving even begins. Such atoms are implicitly in B(L) but since their truth values are given it would be inefficient to represent them by IP variables. Context predicates play a similar role to evidence predicates in MLNs.
In the first paragraph of Section 5 we promised to remove the restriction that the negative literals in a clause must determine a grounding for the positive literals. The use of context predicates allows the removal of this restriction since they can be used instead to generate the required grounding.
9 Implementation
Our branch-price-and-cut algorithm for finding minimal cost Herbrand models is called mfoilp (https://bitbucket.org/jamescussens/mfoilp/) and is implemented in C and Mercury using the SCIP Optimization Suite [2]. Fig 1 shows how mfoilp is organised.
mfoilp is essentially the SCIP solver equipped with an extra constraint handler called folinear which handles constraints which are first-order clauses. Just like the 30 constraint handlers already included in the current version of SCIP, folinear provides callbacks for checking whether candidate solutions meet constraints, generating cuts, etc.
The problem instance is defined by Mercury predicate definitions in the Mercury program prob.m. prob.m must be compiled before solving begins. Once object code for prob.m has been generated it is then linked with (already-generated) object code for the rest of mfoilp, thus generating a problem instance-specific executable which is then executed to solve the problem. A Makefile is used to keep track of what, if anything, needs recompiling before solving begins.
10 Using mfoilp
Whether a given problem with first-order clausal constraints is solvable, and if so how quickly, is largely determined by the problem at hand.
We have tested mfoilp on a number of problems where B(L) is infinite. We have checked that when each clause in Σ has a negative literal then mfoilp immediately deduces that setting all atoms in B(L) to false is an optimal solution. If each clause in Σ is definite (has exactly one positive literal) then mfoilp generates the minimal model for Σ, familiar from logic programming theory, irrespective of the cost function. (Of course, this generation does not terminate if the minimal model is infinite!)
We have created an infinite maze problem where (i) an agent has to keep moving (to an adjacent location) until it reaches a goal location, (ii) where walls appear and disappear dynamically (see the clause in Section 8.1), and (iii) where each move has unit cost. We did not include a clause stating that the agent must stop once it reaches a goal state, leaving mfoilp to deduce that to keep moving would be suboptimal.
Defining a goal location thus:

goal(X,Y) :- X > 1, Y > 4.

and stating that the agent must be at square (0,0) at time point 0, mfoilp finds a minimal cost route of 7 steps to the goal location (2,5) in 5.09 seconds using a single core of a 1.7GHz laptop. mfoilp generates over 14,000 ground clauses but only 305 ground atoms. The branching in our branch-price-and-cut algorithm created 4889 nodes in the search tree. The 14,000 ground clauses are not distinct since, at present, we allow SCIP to remove 'old' cuts which are not tight for the current linear relaxation solution. This keeps the size of each LP small (the largest one had only 140 constraints) but means that discarded cuts might need to be re-found later on.
We have also created variants of this problem where the problem was not solved within a 30 minute cutoff. Generally, in our ‘maze’ experiments we have observed that if mfoilp can find an optimal solution it can quickly prove that it is optimal, but in other cases no feasible solution can be found (not even suboptimal ones). At present mfoilp relies on SCIP’s default primal heuristics to generate candidate solutions. In our maze experiments SCIP’s ‘simplerounding’ algorithm, which generates integer solutions from LP relaxation solutions, was what produced candidate solutions when mfoilp succeeded. We expect that it would be beneficial to add to mfoilp
a specialised primal heuristic which generates candidate Herbrand models. Some variant of the standard method for generating the minimal model of a definite program would be worth exploring.
11 Conclusions and future work
In this paper we have presented methods for integrating first-order logical inference into integer programming, focusing on the problem of finding a minimal cost Herbrand model. Note that the MAP problem for MLNs is a special case of this problem. We hope that this paper will stimulate further work in this direction, since much remains to be done.
Most importantly, automatic reformulation of IP problems posed in terms of (first-order) clauses is needed. Representing each (ground) clausal constraint by its corresponding clausal inequality (1), as mfoilp does, is known to be a poor IP formulation since it leads to a weak LP relaxation. This issue has been analysed in some depth (for propositional logic) by Hooker [3], who provides the following example. Given a CNF with these 4 clauses

x1 ∨ x2 ∨ x3,  x1 ∨ x2 ∨ x4,  x1 ∨ x3 ∨ x4,  x2 ∨ x3 ∨ x4

the best formulation (the 'convex hull' formulation) is not the corresponding 4 clausal inequalities but this single inequality: x1 + x2 + x3 + x4 ≥ 2. Hooker also shows how adding clausal inequalities which are produced by resolution on initially-given (propositional) clauses can tighten the linear relaxation. Applying first-order resolution on first-order clauses is thus particularly attractive since it amounts to doing very many propositional resolutions in one step.
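For a clause set of this symmetric, pairwise-covering kind, the claim that a single cardinality inequality captures the same 0/1 points as the individual clausal inequalities can be verified by brute force:

```python
import itertools

# Brute-force check: over 0/1 points, the four clausal inequalities for
# x1 v x2 v x3, x1 v x2 v x4, x1 v x3 v x4, x2 v x3 v x4 accept exactly
# the same points as the single inequality x1 + x2 + x3 + x4 >= 2.
clauses = [(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)]  # positive-literal indices

for x in itertools.product([0, 1], repeat=4):
    sat_clauses = all(sum(x[i - 1] for i in c) >= 1 for c in clauses)
    sat_single = sum(x) >= 2
    assert sat_clauses == sat_single
print("0/1 solution sets coincide")
```

Over the relaxed (fractional) box the single inequality is strictly tighter: for instance x = (1/3, 1/3, 1/3, 1/3) satisfies every clausal inequality but violates x1 + x2 + x3 + x4 ≥ 2.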
The big win achieved by reformulating the link prediction (LP) MLN MAP problem (see Section 2.1) is thus just one example of a general phenomenon. Our expectation is that having an initial representation in first-order logic will make it easier for a problem to be automatically transformed into a better formulation.
References
 [1] Robert Fourer, David M. Gay, and Brian W. Kernighan. A modeling language for mathematical programming. Management Science, 36:519–554, 1990.
 [2] Ambros Gleixner, Leon Eifler, Tristan Gally, Gerald Gamrath, Patrick Gemander, Robert Lion Gottwald, Gregor Hendel, Christopher Hojny, Thorsten Koch, Matthias Miltenberger, Benjamin Müller, Marc E. Pfetsch, Christian Puchert, Daniel Rehfeldt, Franziska Schlösser, Felipe Serrano, Yuji Shinano, Jan Merlin Viernickel, Stefan Vigerske, Dieter Weninger, Jonas T. Witt, and Jakob Witzig. The SCIP Optimization Suite 5.0. Technical Report 1761, ZIB, Takustr.7, 14195 Berlin, 2017.
 [3] John H. Hooker. Integrated Methods for Optimization. Springer, 2007.
 [4] Gurobi Optimization Inc. Gurobi optimizer reference manual, 2016.
 [5] Thorsten Koch. Rapid Mathematical Programming. PhD thesis, Technische Universität Berlin, 2004. ZIB-Report 04-58.
 [6] Stanley Kok, Parag Singla, Matthew Richardson, Pedro Domingos, Marc Sumner, and Hoifung Poon. The Alchemy System for Statistical Relational AI: User Manual. University of Washington, 2007.
 [7] N. Nethercote, P. J. Stuckey, R. Becket, S. Brand, G. J. Duck, and G. Tack. MiniZinc: Towards a standard CP modelling language. In C. Bessiere, editor, Proceedings of the 13th International Conference on Principles and Practice of Constraint Programming, volume 4741 of LNCS, pages 529–543. Springer, 2007.
 [8] F. Niu, C. Ré, A. Doan, and J. Shavlik. Tuffy: Scaling up statistical inference in Markov logic networks using an RDBMS. In Proceedings of the VLDB Endowment, volume 4, pages 373–384, 2011.

 [9] Jan Noessner, Mathias Niepert, and Heiner Stuckenschmidt. RockIt: Exploiting parallelism and symmetry for MAP inference in statistical relational models. In Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence, July 14–18, 2013, Bellevue, Washington, USA, 2013.
 [10] Sebastian Riedel. Improving the accuracy and efficiency of MAP inference for Markov logic. In Proceedings of the Twenty-Fourth Annual Conference on Uncertainty in Artificial Intelligence (UAI-08), pages 468–475, Corvallis, Oregon, 2008. AUAI Press.
 [11] Zoltan Somogyi, Fergus Henderson, and Thomas Conway. The execution algorithm of Mercury: an efficient purely declarative logic programming language. Journal of Logic Programming, 29(1–3):17–64, October–December 1996.