Many hard-to-solve practical non-linear integer programming problems have a specially structured linear constraint matrix. We study an important subset of these classes, including Cardinality Boolean Quadratic Problems (CBQP), Quadratic Semi-Assignment Problems (QSAP), and Quadratic Assignment Problems (QAP), which share two features: (1) their Graver bases can be calculated systematically, and (2) multiple feasible solutions, uniformly spread out in the space of solutions, can likewise be systematically constructed. The hardness of such problems, then, stems from the non-convexity of their nonlinear cost function, not from obtaining the Graver basis or finding feasible solutions. In this paper, we target such problems and solve them using a novel approach, GAMA: Graver Augmented Multi-seeded Algorithm.
Our approach here is inspired by an earlier work ([ADT19]) in which we introduced a decomposition for a broader class (there was no restriction on the structure of the constraint matrix, but we required a convex objective function), with three components:
1. computing a partial or complete Graver basis of the constraint matrix (which utilized outputs from a D-Wave quantum computer, a stand-in for any Ising solver);
2. computing a set of feasible solutions (which also required a call to D-Wave); and
3. a parallel augmentation procedure, starting at each of the feasible solutions from (2), using the (partial or complete) Graver basis obtained in (1).
An essential concept that was exploited there, and that we continue to use here, is the separation of the objective function from the constraints in the decomposition. It is known that, given the Graver basis of a problem's integer matrix, any convex non-linear objective can be globally optimized with a polynomial number of moves (augmentations) from any arbitrary initial feasible solution [Onn10]. What is novel in the current paper is the recognition that for a wide class of hard problems, the constraint matrix has a special structure that allows us to obtain (a) Graver basis elements and (b) many feasible solutions that are spread out, by classical methods simple enough to be systematically algorithmized. Therefore, steps (1) and (2), which previously needed calls to D-Wave, no longer do.
Furthermore, suppose the non-convex objective function can be viewed as many convex functions stitched together (like a quilt); we use an expanded notion of convex here (see Section 2.2). The entire feasible solution space can then be seen as a collection of parallel subspaces, each with a convex objective function. If we have the Graver basis for the constraint matrix and a feasible solution in every one of these sub-regions, an algorithm that finds the optimal solution is as follows:
Find the Graver basis.
Find a number of feasible solutions, spread out so that there is at least one feasible solution in each of the sub-regions that has a convex objective function.
Augment along the Graver basis from each of the feasible solutions ("seeds") until you end up with a set of locally optimal solutions (one per seed).
Choose the best from among these locally optimal solutions.
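The steps above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the Graver basis, the cost function, and a feasibility test are assumed to be supplied by the caller.

```python
import numpy as np

def augment(x, cost, graver_basis, feasible):
    """Repeatedly apply improving Graver moves until none remain
    (terminating at a local optimum of the current convex sub-region)."""
    improved = True
    while improved:
        improved = False
        for g in graver_basis:
            y = x + g
            if feasible(y) and cost(y) < cost(x):
                x, improved = y, True
    return x

def gama(cost, graver_basis, seeds, feasible):
    """Augment every seed along the Graver basis; return the best terminal point."""
    terminals = [augment(np.array(s, dtype=int), cost, graver_basis, feasible)
                 for s in seeds]
    return min(terminals, key=cost)
```

Because each seed's augmentation is independent of the others, the loop over seeds parallelizes trivially.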
In this paper, we compare the speed of reaching optimal solutions against a best-in-class integer optimization solver, Gurobi® [GO19]. We do not discuss the worst-case complexity of GAMA. Our goal is to solve instances from industry that are not solvable by best-in-class commercial solvers and to understand (numerically) why GAMA works so well when it does.
2 Recap: Graver Bases and their Use in Non-linear Integer Optimization
This section is repeated from [ADT19] for ease of access. Let $f$ be a real-valued function on $\mathbb{Z}^n$. We want to solve the general non-linear integer optimization problem:

$$\min \{ f(x) : Ax = b,\ l \le x \le u,\ x \in \mathbb{Z}^n \}.$$
One approach to solving such a problem is an augmentation procedure: start from an initial feasible solution (which itself can be hard to find) and take improvement steps (augmentations) until one reaches the optimal solution. Augmentation procedures like these need test sets or optimality certificates: a certificate either declares the optimality of the current feasible solution or provides direction(s) toward better solution(s). Note that it does not matter from which feasible solution one begins, nor which sequence of improving steps is taken: the final stop is an optimal solution.
A set $\mathcal{T} \subset \mathbb{Z}^n$ is called a test set or optimality certificate if for every non-optimal but feasible solution $x_0$ there exist $t \in \mathcal{T}$ and $\lambda \in \mathbb{Z}_{+}$ such that $x_0 + \lambda t$ is feasible and $f(x_0 + \lambda t) < f(x_0)$. The vector $t$ (or $\lambda t$) is called the improving or augmenting direction.
If an optimality certificate is given, any initial feasible solution can be augmented to the optimal solution. If the certificate $\mathcal{T}$ is finite, one can enumerate over all $t \in \mathcal{T}$ and check whether it is augmenting (improving). If $\mathcal{T}$ is not practically finite, or if not all of its elements are available in advance, it is still practically enough to find a subset of directions that are feasible and augmenting.
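For a finite test set, the certificate check described above is a direct enumeration. A minimal sketch (illustrative names, not from the paper), assuming the feasible region is a box intersected with a lattice:

```python
import numpy as np

def improving_step(x, cost, test_set, feasible, max_step=10):
    """Either return an improving pair (t, lam) with x + lam*t feasible and
    strictly better than x, or return None; when test_set is a true
    optimality certificate, None certifies that x is optimal."""
    for t in test_set:
        for lam in range(1, max_step + 1):
            y = x + lam * t
            if not feasible(y):
                break  # with box bounds, a larger lam only moves further out
            if cost(y) < cost(x):
                return t, lam
    return None
```

The augmentation procedure is then a loop: apply `improving_step` until it returns `None`.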
In the next section, we discuss the Graver basis of an integer matrix, which is known to be an optimality certificate.
2.1 Mathematics of Graver Bases
First, on the set $\mathbb{Z}^n$, we define the following partial order:

Let $x, y \in \mathbb{Z}^n$. We say $x$ is conformal to $y$, written $x \sqsubseteq y$, when $x_i y_i \ge 0$ ($x$ and $y$ lie in the same orthant) and $|x_i| \le |y_i|$ for $i = 1, \ldots, n$. Additionally, a sum $y = \sum_i x_i$ is called conformal if $x_i \sqsubseteq y$ for all $i$ ($y$ majorizes all $x_i$).
The Graver basis of an integer matrix $A$ is defined to be the finite set of $\sqsubseteq$-minimal (indecomposable) elements in the lattice $\mathcal{L}(A) = \{ x \in \mathbb{Z}^n : Ax = 0,\ x \ne 0 \}$. We denote by $\mathcal{G}(A)$ the Graver basis of $A$.
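The partial order above is easy to check componentwise; a small helper in our notation:

```python
import numpy as np

def is_conformal(x, y):
    """True when x is conformal to y: x and y agree in sign componentwise
    (same orthant) and |x_i| <= |y_i| for every coordinate i."""
    x, y = np.asarray(x), np.asarray(y)
    return bool(np.all(x * y >= 0) and np.all(np.abs(x) <= np.abs(y)))
```

For example, $(1, 0, -1) \sqsubseteq (2, 0, -3)$, but $(1, -1)$ is not conformal to $(1, 1)$ because the second coordinates have opposite signs.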
The following proposition summarizes the properties of Graver bases that are relevant to our setting.
The following statements are true:
Every vector in the lattice $\mathcal{L}(A)$ is a conformal sum of Graver basis elements.

Every vector in the lattice $\mathcal{L}(A)$ can be written as $\sum_{i=1}^{t} \lambda_i g_i$ for some $\lambda_i \in \mathbb{Z}_{+}$ and $g_i \in \mathcal{G}(A)$. The upper bound on the number $t$ of Graver basis elements required (called the integer Carathéodory number) is $t \le 2n - 2$.
A Graver basis is a test set (optimality certificate) for ILP and several nonlinear convex forms (see Section 2.2). That is, a point $x$ is optimal for the optimization problem if and only if there are no $g \in \mathcal{G}(A)$ and $\lambda \in \mathbb{Z}_{+}$ such that $x + \lambda g$ is feasible and better than $x$.
For any $g \in \mathcal{G}(A)$, an upper bound on the norm of the Graver basis elements is given by

$$\|g\|_\infty \le (n - r)\,\Delta(A),$$

where $r$ is the rank of $A$ and $\Delta(A)$ is the maximum absolute value of the determinant of a square submatrix of $A$.
2.2 Applicability of Graver Bases as Optimality Certificates
Beyond integer linear programs (ILP) with a fixed integer matrix, Graver bases as optimality certificates have been generalized to several nonlinear objective functions:
Separable convex minimization [MSW04]: $\min \sum_i f_i(x_i)$ with each $f_i$ convex.

Convex integer maximization (weighted) [DLHO09]: $\max f(Wx)$ with $f$ convex and $W$ an integer weight matrix.

Norm minimization (nearest to $\hat{x}$) [HOW11]: $\min \|x - \hat{x}\|_p$.

Polynomial minimization [LORW12]: $\min P(x)$, where $P$ is a polynomial of degree $d$ that lies on the cone of polynomials dual to the degree-$d$ Graver cone of $A$.
In the rest of this paper, we loosely refer to all of the above cost categories as convex. Graver did not provide an algorithm for computing Graver bases; Pottier [Pot96] and Sturmfels and Thomas [ST97] provided such algorithms. The use of test sets to study integer programs is a vibrant area of research. Another collection of test sets is based on Groebner bases; see [CT91, TTN95, BLSR99, HS95] and [BPT00] for several different examples of their use for integer programs.
3 Graver Bases for Some Structured Matrices
Here we construct the Graver bases of four categories of structured matrices that occur frequently as subsets of constraints in nonlinear integer programming problems.
3.1 Graver basis of $\mathbf{1}_n^T$
The matrix $\mathbf{1}_n^T$ (a row vector where all elements are 1) is unimodular. (The determinant of every square submatrix of a unimodular matrix is $0$, $+1$, or $-1$.) Therefore, the Graver basis elements contain only values in $\{-1, 0, +1\}$.
For any $g \in \mathcal{G}(\mathbf{1}_n^T)$, the number of nonzero elements of $g$ must be even, with equal numbers of $+1$s and $-1$s. Starting from vectors with two nonzero elements, we have vectors of the form $e_i - e_j$, for all $i \ne j$. We cannot have Graver elements with four or more nonzero elements, since each of them is a conformal sum of the two-nonzero-element vectors (also, each element with two nonzeros, lying on a lower-dimensional hyperplane, is $\sqsubseteq$-minimal to any vector lying on a higher-dimensional hyperplane). Therefore, the $n(n-1)$ vectors $e_i - e_j$ constitute the Graver basis of $\mathbf{1}_n^T$.
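This basis is simple enough to generate directly; a sketch in our notation, returning all $n(n-1)$ vectors $e_i - e_j$:

```python
import itertools
import numpy as np

def graver_ones_row(n):
    """Graver basis of the 1 x n all-ones row vector: every e_i - e_j with
    i != j. Each element sums to zero, so it lies in the kernel of the
    constraint, and it has exactly one +1 and one -1."""
    basis = []
    for i, j in itertools.permutations(range(n), 2):
        g = np.zeros(n, dtype=int)
        g[i], g[j] = 1, -1
        basis.append(g)
    return basis
```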
3.2 Graver basis of $I_N \otimes \mathbf{1}_n^T$
For any block-diagonal matrix of the form $I_N \otimes \mathbf{1}_n^T$ (that is, $N$ diagonal blocks, each the row vector $\mathbf{1}_n^T$), one can easily observe that its Graver basis consists of the elements of $\mathcal{G}(\mathbf{1}_n^T)$ zero-padded into each of the $N$ block positions, and $\mathcal{G}(\mathbf{1}_n^T)$ can be acquired from equation (4).
3.3 Graver basis of $\mathbf{1}_N^T \otimes I_n$
In matrices of the form $\mathbf{1}_N^T \otimes I_n$ there are $N$ $1$s in each row; each consecutive pair of $1$s is spaced by $n-1$ $0$s, and there is only one $1$ in each column. The Graver basis of each row, then, is the Graver basis of $\mathbf{1}_N^T$, but each adjacent pair of nonzero elements in it should be spaced $n$ elements apart. This has to repeat for all $n$ rows. Therefore, each Graver basis element is a Graver element of $\mathbf{1}_N^T$ with its two nonzeros placed at a common offset $n$ positions apart, and $\mathcal{G}(\mathbf{1}_N^T)$ is acquired from equation (4).
3.4 Graver basis of the combination
This structured combination is called the generalized Lawrence configuration.
3.4.1 Generalized Lawrence configuration and -fold product matrices
The Lawrence lifting of an $m \times n$ matrix $A$ is the enlarged matrix $\Lambda(A) = \begin{pmatrix} A & 0 \\ I_n & I_n \end{pmatrix}$, where $0$ is the matrix of all zeros and $I_n$ is the $n \times n$ identity matrix. The generalization of the Lawrence lifting is of the form

$$\begin{pmatrix} A & 0 & \cdots & 0 \\ 0 & A & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & A \\ I_n & I_n & \cdots & I_n \end{pmatrix}.$$
Graver bases of generalized Lawrence configurations are finite ([SS03], [HS07]). A slightly modified version of this, called $N$-fold matrices [DLHOW08], appears frequently in integer programming, with many applications including high-dimensional transportation problems and packing problems. Our $N$-fold matrix is constructed from an ordered pair of matrices. Given the Graver basis of the fixed-size pair, there is a polynomial-time algorithm that computes the Graver basis of the $N$-fold product matrix [Onn10].
3.4.2 Systematic generation of Graver basis
If we had only the second, block-diagonal term, then based on equation (5) the overall Graver basis would be the placement of each of the size-$n$ vectors of $\mathcal{G}(\mathbf{1}_n^T)$ into the $N$ bricks. The addition of the coupling block constrains the brick placement of those vectors, such that the sum of the placements must become zero. Keeping the vector-placement aspect aside, we need to know which minimal and positive combinations of elements of $\mathcal{G}(\mathbf{1}_n^T)$ sum to zero. This is equivalent to the Graver basis of $\mathcal{G}(\mathbf{1}_n^T)$ (viewed as a matrix) restricted to the positive orthant, which is its Hilbert basis.

Therefore, we need to find the Hilbert basis of the Graver basis of $\mathbf{1}_n^T$ and do the liftings (brick placements) based on it.
3.4.3 Hilbert basis of $\mathcal{G}(\mathbf{1}_n^T)$
One way to generate this Hilbert basis is a completion procedure such as the Pottier algorithm [Pot96], restricted to the positive orthant. However, this method does not exploit the structure of the elements.
As evident in equation (4), the elements of $\mathcal{G}(\mathbf{1}_n^T)$ consist of vectors of size $n$, each containing one $+1$, one $-1$, and $n-2$ zeros. We can model this set as the incidence matrix of a bidirected complete graph having $n$ nodes and $n(n-1)$ directed edges (two back-and-forth directed edges between each node pair). It can be shown that the set of all basic (non-overlapping) directed cycles in this complete graph represents the Hilbert basis.
Therefore, we have two-cycles, three-cycles (directed triangles), four-cycles, …, and finally $n$-cycles. The sum of these basic directed cycles, which is the cardinality of the Hilbert basis, results in:

$$\sum_{k=2}^{n} \binom{n}{k}\,(k-1)!$$

One can thus construct each of the $k$-cycle index sets of a complete graph and add them up for $k = 2, \ldots, n$.
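The count above follows from choosing the $k$ participating nodes and then one of the $(k-1)!$ directed cyclic orders on them. A small sketch of this counting, with a brute-force cross-check over node sequences (all names are illustrative):

```python
from itertools import permutations
from math import comb, factorial

def hilbert_basis_size(n):
    """Number of basic directed cycles (2-cycles up to n-cycles) in the
    complete bidirected graph on n nodes: choose the k nodes of the cycle,
    then one of (k-1)! directed cyclic orders."""
    return sum(comb(n, k) * factorial(k - 1) for k in range(2, n + 1))

def count_cycles_brute_force(n):
    """Cross-check: enumerate node sequences and merge rotations of the
    same directed cycle into one canonical representative."""
    seen = set()
    for k in range(2, n + 1):
        for seq in permutations(range(n), k):
            i = seq.index(min(seq))      # start the cycle at its smallest node
            seen.add(seq[i:] + seq[:i])
    return len(seen)
```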
3.4.4 Liftings and final Graver basis enumeration
Each Hilbert basis element with $k$ nonzero terms (corresponding to a directed $k$-cycle) is a labeled construct, which creates different possibilities. For each such possibility, we have different lifting choices into the $N$ bricks, which creates Graver elements and their negatives (by symmetry). For each of the size-2 Hilbert basis elements there are $P(N,2)$ different 2-brick liftings, for each of the size-3 elements $P(N,3)$ different 3-brick liftings, …, and finally for each of the size-$n$ elements $P(N,n)$ different $n$-brick liftings. Combining the number of Hilbert basis elements of each size with all possible liftings, the total cardinality of the lifted Graver basis, excluding symmetries, becomes:

$$\sum_{k=2}^{n} \binom{n}{k}\,(k-1)!\; P(N,k),$$

where $P(N,k) = N!/(N-k)!$ is the permutation operator.
4 Computational Results
4.2 Cardinality Boolean Quadratic Programs (CBQP)
Cardinality Boolean Quadratic Programming (CBQP) [LG17] is of the form:

$$\min \{ x^T Q x : \mathbf{1}^T x = k,\ x \in \{0,1\}^n \}.$$
4.2.1 Feasible solutions
The Graver basis of $\mathbf{1}_n^T$ can be generated using equation (4). To solve the problem using our multi-seeded approach, we need to generate many (say $N_s$) uniformly distributed feasible solutions of the linear constraint $\mathbf{1}^T x = k$. The total number of solutions is $\binom{n}{k}$, which can be large depending on the value of $k$; we generate only $N_s$ uniformly distributed solutions as initial feasible points. Each is a size-$n$ vector with $1$s placed in $k$ locations whose indices are sampled uniformly at random from $1$ to $n$ (without replacement), and $0$s placed in the other locations.
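The sampling step above is a one-liner per seed; a sketch (illustrative function name):

```python
import numpy as np

def cbqp_seeds(n, k, num_seeds, rng=None):
    """num_seeds uniformly random feasible points of 1^T x = k, x in {0,1}^n:
    each row places k ones at indices drawn without replacement."""
    if rng is None:
        rng = np.random.default_rng()
    seeds = np.zeros((num_seeds, n), dtype=int)
    for row in seeds:
        row[rng.choice(n, size=k, replace=False)] = 1
    return seeds
```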
4.2.2 CBQP results
All problems are also solved with the MIP solver of the Gurobi® Optimizer (latest version, 8.1) for comparison purposes. (Installed on MATLAB R2014b using a MacBookPro15,1: 6-core 2.6 GHz Intel Core i7 processor, 32 GB 2400 MHz DDR4 RAM.)
We have used the CBQP instances (https://sites.google.com/site/cbqppaper/) generated by [LG17]. There are 30 CBQP instances. Setting the number of initial feasible points to $N_s$, our algorithm calculated the optimal solution for all 30 problems, and in almost all cases took about the same time. This is attributable to the fact that each augmentation path in all cases has a similar number of augmentation steps, so each path takes about the same time to terminate.
Using the Gurobi solver, several of the problems were solved very quickly (in less than one second). At the other extreme, several were very hard and took far longer. The remaining 13 cases fell in between. The results are shown in Figure 2.
To summarize: our multi-seeded Graver-based approach (GAMA) solved all 30 instances (containing convex, easy nonconvex, and hard nonconvex instances) in a small total time, a large average speedup over Gurobi across all samples. If we compare the performance only on the hard non-convex problems, the speedup is larger still.
We want to understand when (and why) our approach does so much better when it does. We observed that the problem instances can be roughly categorized into three groups, based on where all the augmentations ended up, beginning from the different starting feasible solutions.

All augmentation paths terminate at the same (global) solution, regardless of the starting seed, as in the convex case.

The number of different terminal values is low, and the global solution is the destination for a high percentage of the initial feasible points, as shown in Figure 4. We consider this the easier non-convex case.

There are many different terminal values, and the global solution is obtained from a lower percentage of initial points, as shown in Figure 5. This is the harder non-convex case.
Interestingly, and perhaps not too surprisingly, the cases in the first category are among those that Gurobi could also solve very quickly. Instances in the second group were solved by Gurobi in reasonable time (1-100 seconds), and instances in the third group took much longer.
We also conducted some sensitivity analysis. Changing the value of the cardinality $k$ reversed the relative hardness of the second and third categories for Gurobi, but had little effect on the first category.

For GAMA, the solving times for all categories are almost equal, independent of the hardness of the instance or the value of $k$, and depend mainly on the number of initial feasible solutions (seeds) $N_s$.
We further probed how the value of $k$ affects Gurobi's performance. The problem samples mentioned earlier all had cardinality values equally deviated from the center ($n/2$), either to the right or to the left. From the Gurobi solver's point of view, in almost all cases, switching the value to the opposite side of the center makes easier problems harder and turns harder problems into easier ones (whereas in our solver all of the problems are solved in the same amount of time). Changing the value to the middle value ($n/2$) transforms the nonconvex problems, and even some of the convex problems, into extremely difficult ones; even after several hours, Gurobi was not able to solve any of them optimally (whereas our algorithm still solves them in roughly the usual time). We speculate that the reason for this degradation in Gurobi's performance is that as $k$ moves closer to the middle point, the total number of possible nodes in the constraint polytope grows, becoming maximal at $k = n/2$; consequently, the number of degenerate solutions also increases. This is a hurdle for Gurobi and other such solvers that return only one optimal solution, whereas our method returns all the degenerate solutions reached after the augmentation terminations.
4.3 Quadratic Semi-Assignment Problems 1 (QSAP1)
Here we consider nonconvex nonlinear combinatorial problems with a quadratic objective function and separate horizontal cardinality constraints. Quadratic Semi-Assignment Problems (QSAP) have many applications [Pit09]. The problem is of the form:

$$\min \{ x^T Q x : (I_N \otimes \mathbf{1}_n^T)\, x = b,\ x \in \{0,1\}^{nN} \},$$

where $x_i \in \{0,1\}^n$ is the column vector depicting assignment $i$ (aka brick $i$) and $x = (x_1; x_2; \ldots; x_N)$ is the concatenation of all assignment vectors (bricks).
4.3.1 Feasible solutions
The Graver basis of $I_N \otimes \mathbf{1}_n^T$ can be generated using equation (5). To solve the problem using our multi-seeded approach, we need to generate many (say $N_s$) uniformly distributed feasible solutions of the linear constraint. The total number of solutions can be very large. As before, we want to generate only $N_s$ uniformly distributed solutions as initial feasible points. This is done in two stages:

For each of the $N_s$ solutions, which has $N$ sections (bricks), we randomly choose a section, then generate a size-$n$ vector with $1$s placed in locations whose indices are sampled uniformly at random (without replacement) and $0$s elsewhere.

We uplift this vector into the chosen section of a full-size (initially set to zero) vector.
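One way to realize the two-stage construction is to fill each brick independently. The sketch below assumes every brick carries the same cardinality $k$, an illustrative assumption not stated for the instances above:

```python
import numpy as np

def qsap_seed(N, n, k, rng=None):
    """One feasible point for brick-wise cardinality constraints: the full
    vector is N concatenated bricks of length n, and each brick receives k
    ones at positions drawn uniformly without replacement (the 'uplift')."""
    if rng is None:
        rng = np.random.default_rng()
    x = np.zeros(N * n, dtype=int)
    for brick in range(N):
        idx = rng.choice(n, size=k, replace=False)
        x[brick * n + idx] = 1
    return x
```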
4.3.2 QSAP1 results
We have three batches of tests.
We generated QSAP problem instances. (We used the QSAP problem instance generator provided by D.E. Bernal.) With the number of initial feasible points set to $N_s$, our algorithm solved all of these problems.
We then increased the size of the QSAP problems, and our algorithm again solved them all. The same two sets of problems were passed to the Gurobi MIP solver, and none were solved even after several hours.
We used a set of 17 problem instances with various sizes, from small to large, from the CMU QSAP problem instance generator. Here we chose the number of starting feasible points equal to the size of the problem. The results are shown in Figure 6.
GAMA optimally solved all 17 samples. The Gurobi solver, on the other hand, could only finish seven of them in the allotted time. The other 10 instances could not be solved, and the Gurobi solver was terminated after its per-instance time limit. On the largest instance that Gurobi could solve, GAMA had a substantial speedup.
One observation from Figure 6 is that, for problems of a similar complexity level (roughly like our QSAP1 instances), a linear increase in problem size results in a linear increase in the logarithm of solution time, which means an exponential increase in time for both GAMA and Gurobi. As can be seen quite clearly, however, the slope of Gurobi's line is much higher than that of GAMA.
It is well known that the total Gurobi time, like that of many best-in-class exact MIP solvers, consists of separate phases: after finding a feasible solution, they enter a phase of branch-and-bound search and improving the incumbent solution, and then switch into a phase of proving optimality [BHK18]. (Recall that in GAMA, assuming we have at least one feasible solution in the deepest convex region, the termination of a Graver augmentation path is itself a proof of solution optimality.) The solver may switch back and forth between improving and proving phases dynamically to minimize the overall solving time. To make our comparisons on more equal footing, therefore, we also evaluated the quality of solutions acquired by Gurobi before it spends time on proving optimality. To do this, we set Gurobi's timelimit parameter to three times the average GAMA solving time. We find that for small QSAP1 problems the results match the optimal, but for larger sizes the solution obtained within this time limit degrades rapidly (Figure 7). Consequently, we believe that GAMA finds better solutions faster as the problem size increases.
4.4 Quadratic Semi-Assignment Problems 2 (QSAP2)
Here we solve the nonconvex nonlinear combinatorial problem with a quadratic objective function and separate vertical cardinality constraints. Quadratic Semi-Assignment Problems 2 (QSAP2) [Pit09] are of the form:

$$\min \{ x^T Q x : (\mathbf{1}_N^T \otimes I_n)\, x = b,\ x \in \{0,1\}^{nN} \},$$

where, similarly, $x_i$ is the column vector depicting assignment $i$ (aka brick $i$) and $x$ is the concatenation of all assignment vectors (bricks).
4.4.1 Feasible solutions
The Graver basis of $\mathbf{1}_N^T \otimes I_n$ can be generated using equation (6). To solve the problem using our multi-seeded approach, we generate many (say $N_s$) uniformly distributed feasible solutions of the linear constraint. The total number of solutions can be very large; we generate a much smaller number, $N_s$, of uniformly distributed initial feasible solutions. As before, this is done in two stages. For each solution, we randomly choose a subsection, then generate a vector with $1$s placed in locations whose indices are sampled uniformly at random (without replacement) and $0$s elsewhere. Next, we spread the solution's elements $n$ apart for the chosen subsection, and place them in a full-size (initially set to zero) vector.
4.4.2 QSAP2 results
We retained the $Q$ from the QSAP1 instances, and using the same sizes, we generated the constraints based on the QSAP2 formulation, equation (13).
The same set of problem instances shown in Section 4.3.2 is used in our testing. GAMA solved all of them. Gurobi solved some of the instances; the others could not be solved within the time limit. The results are shown in Figure 8. The last four instances, not completed by Gurobi after 4 hours each, are shown at the 4-hour border time. On the largest instance that Gurobi did solve, our method is many times faster.
As before, the linear dependency between the logarithm of time and problem size exists here as well, with a similar difference in slopes between GAMA and Gurobi. One difference is that, here in QSAP2, the crossing point occurs at larger problem sizes than in QSAP1, indicating that for a larger range of small problems Gurobi is faster, but as the size increases, GAMA outperforms it significantly.
4.5 Quadratic Assignment Problem (QAP)
Here we solve the nonconvex nonlinear combinatorial problem with a quadratic objective function and separate horizontal cardinality constraints in addition to separate vertical cardinality constraints. The Quadratic Assignment Problem (QAP) is of the form:

$$\min \big\{ x^T Q x : (I_N \otimes \mathbf{1}_n^T)\, x = b_r,\ (\mathbf{1}_N^T \otimes I_n)\, x = b_c,\ x \in \{0,1\}^{nN} \big\},$$

where, similarly, $x_i$ is the column vector depicting assignment $i$ (aka brick $i$) and $x$ is the concatenation of all assignment vectors (bricks).
4.5.1 Connection to Random Binary Matrices
The Graver basis of the combined constraint matrix can be generated by the procedure described in subsection 3.4.2.
Calculating randomly generated feasible solutions of the constraints for the QAP problem category is not as straightforward as in the previous cases. We connect this problem to random binary matrix theory and observe that some known algorithms in that area can be adapted. Additionally, we propose a novel approach, also based on the Graver basis (a null basis can also be used), that not only helps us find random feasible solutions for QAP, but can also be applied to study random binary matrices.
We rearrange the main vector $x$ defined in equation (12), such that the size-$n$ sub-vectors (bricks) become columns of a matrix $X$:

$$X = [x_1, x_2, \ldots, x_N], \qquad x = \operatorname{vec}(X),$$

where $\operatorname{vec}$ is the vectorization operator. The QAP constraint then states that the row sums and column sums of the matrix $X$ are fixed. This then helps create feasible solutions.
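The rearrangement can be stated in a few lines; here X is a small illustrative 0-1 matrix whose columns play the role of the bricks:

```python
import numpy as np

# Illustrative 0-1 matrix whose columns are the bricks x_1, ..., x_N.
X = np.array([[1, 0, 1],
              [0, 1, 0],
              [1, 1, 0]])

x = X.flatten(order="F")   # vec(X): stack columns into one long vector
row_sums = X.sum(axis=1)   # fixed by one family of cardinality constraints
col_sums = X.sum(axis=0)   # fixed by the other family
assert row_sums.sum() == col_sums.sum()  # both count the total number of ones
```

The final assertion is the consistency condition used below to generate feasible right-hand sides.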
The problem of finding random feasible solutions of QAP problems thus becomes the problem of generating random binary matrices with fixed row and column sums. This problem is addressed in the literature, and several known algorithms tackle it. (Sampling of zero-one matrices has many applications in statistical analysis across fields of study, including co-occurrence matrices in evolutionary studies and ecology, multivariate binary time series, affiliation matrices in sociology, and item response analysis in psychometrics.) Ryser [Ryc60] was the first to study such matrices. Some combinatorial properties of these matrices were studied in [Bru80] by connecting them to bipartite graphs and hypergraphs. Enumeration of binary matrices with known row and column sums has also been studied [WZ98]. The Curveball algorithm (or its reincarnation [SNB14]) is known to be a fast method that creates uniform random samples [Car15] of binary matrices with fixed row and column sums. Other approaches for exact counting and sampling of binary matrices with specified sums, based on dynamic programming, have also been devised [MH13].
4.5.2 Feasible solutions: A novel Graver basis approach
Here we describe a novel approach that we have used to find a random distribution of binary matrices, coincidentally based on the Graver basis. Similar to the swap in the Curveball algorithm (and some other algorithms), we start with one single solution and add a randomly chosen Graver element to it (while checking the lower and upper bounds on all terms), and repeat this a randomly chosen number of times to reach the next feasible solution. Repeating this procedure again and again generates more feasible solutions. That is, let $x_0$ be the initial feasible solution. We select a random number $r$, then choose $r$ random elements of the Graver basis, add them one at a time to $x_0$ (rejecting any step that violates the bounds) to reach $x_1$, and repeat.
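The walk can be sketched as follows (illustrative parameter names; the bound check keeps every visited point feasible, since each Graver element preserves the equality constraints):

```python
import numpy as np

def graver_walk(x0, basis, feasible, num_samples, max_hops=20, seed=0):
    """Sample feasible points by random Graver hops: from the current point,
    add randomly chosen basis elements a random number of times, rejecting
    hops that leave the feasible set, and record each point reached."""
    rng = np.random.default_rng(seed)
    x, samples = np.array(x0, dtype=int), []
    for _ in range(num_samples):
        for _ in range(rng.integers(1, max_hops + 1)):
            g = basis[rng.integers(len(basis))]
            if feasible(x + g):
                x = x + g
        samples.append(x.copy())
    return samples
```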
4.5.3 QAP results
The constraint matrix A is generated based on the QAP. The right-hand-side values are generated by first creating a random binary matrix and using its row sums and column sums, which guarantees feasibility since the sum of the column sums equals the sum of the row sums. The same set of problem instances described in Section 4.3.2 is used.
GAMA solved all 17 instances. Gurobi solved seven of them, and could not solve the other 10 even after several hours each. Results are shown in Figure 9. The instances not completed by Gurobi are shown at the time-limit border. On the largest instance that Gurobi did solve, GAMA is many times faster.
As can be seen in Figure 9, the same linear dependency between the logarithm of time and problem size exists here. Gurobi could solve small QAP problem instances faster than GAMA, but its slope is much higher than before, due to the tighter double (row and column) constraints.
Further speedup of GAMA is possible. A study of the Graver basis elements used in the augmentation paths indicates that the majority of cost-reducing moves come from basis elements that result from liftings with lower brick numbers. In other words, the higher $k$-cycle equivalents of the Hilbert basis are not used frequently. In cases where $n$ and $N$ are both large, and thus the number of Graver basis elements is large, instead of generating all the liftings of the Graver set, we generate only the liftings with a lower number of bricks to create a fixed Graver basis repository, randomly generate elements from the higher-size liftings during augmentation, and use them on the fly. This procedure, and similar modifications of it, limits the memory requirements substantially. It is important to note that this selective random generation is possible only when we create the Graver basis systematically, as we do, and not when using a completion procedure such as the Pottier algorithm [Pot96].
In this paper, we proposed and tested a novel method, the Graver Augmented Multi-seeded Algorithm (GAMA), for non-convex, non-linear integer programs. For problem classes that include Cardinality Boolean Quadratic Problems (CBQP), Quadratic Semi-Assignment Problems (QSAP), and Quadratic Assignment Problems (QAP), we developed procedures for (1) systematically constructing Graver basis elements and (2) finding many feasible solutions that are uniformly spread out. We performed extensive numerical testing on instances that arise in industry, and conducted a sensitivity analysis to understand why GAMA vastly outperforms existing best-in-class commercial solvers.
The authors thank D. E. Bernal and Professor I. Grossmann from CMU for providing us with instances of CBQP and QSAP, and for providing helpful feedback on an earlier draft.
- [ADT19] Hedayat Alghassi, Raouf Dridi, and Sridhar Tayur, Graver Bases via Quantum Annealing with Application to Non-Linear Integer Programs, arXiv:1902.04215 [quant-ph] (2019), arXiv: 1902.04215.
- Maurizio Bruglieri, Matthias Ehrgott, Horst W. Hamacher, and Francesco Maffioli, An annotated bibliography of combinatorial optimization problems with fixed cardinality constraints, Discrete Applied Mathematics 154 (2006), no. 9, 1344–1357.
- [BHK18] Timo Berthold, Gregor Hendel, and Thorsten Koch, From feasibility to improvement to proof: three phases of solving mixed-integer programs, Optimization Methods and Software 33 (2018), no. 3, 499–517.
- [Bil05] Alain Billionnet, Different Formulations for Solving the Heaviest K-Subgraph Problem, INFOR: Information Systems and Operational Research 43 (2005), no. 3, 171–186.
- [BLSR99] A. M. Bigatti, R. La Scala, and L. Robbiano, Computing Toric Ideals, Journal of Symbolic Computation 27 (1999), no. 4, 351–365.
- [BPT00] Dimitris Bertsimas, Georgia Perakis, and Sridhar Tayur, A New Algebraic Geometry Algorithm for Integer Programming, Management Science 46 (2000), no. 7, 999–1008.
- [Bru80] Richard A. Brualdi, Matrices of zeros and ones with fixed row and column sum vectors, Linear Algebra and its Applications 33 (1980), 159–231.
- [Car15] C. J. Carstens, Proof of uniform sampling of binary matrices with fixed row sums and column sums for the fast Curveball algorithm, Physical Review E 91 (2015), no. 4, 042812.
- [CT91] Pasqualina Conti and Carlo Traverso, Buchberger Algorithm and Integer Programming, Proceedings of the 9th International Symposium, on Applied Algebra, Algebraic Algorithms and Error-Correcting Codes (London, UK, UK), AAECC-9, Springer-Verlag, 1991, pp. 130–139.
- [DLHO09] J. A. De Loera, R. Hemmecke, S. Onn, U. G. Rothblum, and R. Weismantel, Convex integer maximization via Graver bases, Journal of Pure and Applied Algebra 213 (2009), no. 8, 1569–1577.
- [DLHOW08] Jesús A. De Loera, Raymond Hemmecke, Shmuel Onn, and Robert Weismantel, N-fold integer programming, Discrete Optimization 5 (2008), no. 2, 231–241.
- [GO19] Gurobi Optimization, LLC, Gurobi optimizer reference manual, 2019.
- [Gra75] Jack E. Graver, On the foundations of linear and integer linear programming I, Mathematical Programming 9 (1975), no. 1, 207–226 (en).
- [HOW11] Raymond Hemmecke, Shmuel Onn, and Robert Weismantel, A polynomial oracle-time algorithm for convex integer minimization, Mathematical Programming 126 (2011), no. 1, 97–117 (en).
- [HS95] Serkan Hoşten and Bernd Sturmfels, GRIN: An implementation of Gröbner bases for integer programming, Integer Programming and Combinatorial Optimization, Lecture Notes in Computer Science, Springer, Berlin, Heidelberg, May 1995, pp. 267–276 (en).
- [HS07] Serkan Hoşten and Seth Sullivant, A finiteness theorem for Markov bases of hierarchical models, Journal of Combinatorial Theory, Series A 114 (2007), no. 2, 311–321.
- [LG17] Ricardo M. Lima and Ignacio E. Grossmann, On the solution of nonconvex cardinality Boolean quadratic programming problems: a computational study, Computational Optimization and Applications 66 (2017), no. 1, 1–37 (en).
- [LORW12] Jon Lee, Shmuel Onn, Lyubov Romanchuk, and Robert Weismantel, The quadratic Graver cone, quadratic integer minimization, and extensions, Mathematical Programming 136 (2012), no. 2, 301–323 (en).
- [MH13] Jeffrey W. Miller and Matthew T. Harrison, Exact sampling and counting for fixed-margin matrices, The Annals of Statistics 41 (2013), no. 3, 1569–1592.
- [MSW04] Kazuo Murota, Hiroo Saito, and Robert Weismantel, Optimality criterion for a class of nonlinear integer programs, Operations Research Letters 32 (2004), no. 5, 468–472.
- [Onn10] Shmuel Onn, Nonlinear Discrete Optimization: An Algorithmic Theory, European Mathematical Society, 2010.
- [Pit09] Leonidas Pitsoulis, Quadratic Semi-assignment Problem, Encyclopedia of Optimization (Christodoulos A. Floudas and Panos M. Pardalos, eds.), Springer US, Boston, MA, 2009, pp. 3170–3171 (en).
- [Pot96] Loïc Pottier, The Euclidean Algorithm in Dimension N, Proceedings of the 1996 International Symposium on Symbolic and Algebraic Computation (New York, NY, USA), ISSAC ’96, ACM, 1996, pp. 40–42.
- [Ryc60] Herbert John Ryser, Matrices of zeros and ones, Bulletin of the American Mathematical Society 66 (1960), no. 6, 442–464.
- [SNB14] Giovanni Strona, Domenico Nappo, Francesco Boccacci, Simone Fattorini, and Jesus San-Miguel-Ayanz, A fast and unbiased procedure to randomize ecological binary matrices with fixed row and column totals, Nature Communications 5 (2014), 4114 (en).
- [SS03] Francisco Santos and Bernd Sturmfels, Higher Lawrence configurations, Journal of Combinatorial Theory, Series A 103 (2003), no. 1, 151–164.
- [ST97] Bernd Sturmfels and Rekha R. Thomas, Variation of cost functions in integer programming, Mathematical Programming 77 (1997), no. 2, 357–387 (en).
- [TTN95] Sridhar R. Tayur, Rekha R. Thomas, and N. R. Natraj, An algebraic geometry algorithm for scheduling in presence of setups and correlated demands, Mathematical Programming 69 (1995), no. 1-3, 369–401 (en).
- [WZ98] Bo-Ying Wang and Fuzhen Zhang, On the precise number of (0,1)-matrices in A(R,S), Discrete Mathematics 187 (1998), no. 1, 211–220.