
ReDUCE: Reformulation of Mixed Integer Programs using Data from Unsupervised Clusters for Learning Efficient Strategies

10/01/2021
by   Xuan Lin, et al.

Mixed-integer convex programs (MICPs) and mixed-integer nonlinear programs (MINLPs) are expressive but require long solving times. Recent work combining learning methods with solver heuristics has shown potential to overcome this issue, allowing for applications to larger-scale practical problems. Gathering sufficient training data to employ these methods still presents a challenge, since getting data from traditional solvers is slow and newer learning approaches still require large amounts of data. In order to scale up and make these hybrid learning approaches more manageable, we propose ReDUCE, a method that exploits structure within small to medium-sized datasets. We also introduce the bookshelf organization problem as an MINLP to measure the performance of solvers with ReDUCE. Results show that existing algorithms with ReDUCE can solve this problem within a few seconds, a significant improvement over the original formulation. ReDUCE is demonstrated as a high-level planner for a robotic arm on the bookshelf problem.


I Introduction

Optimization-based methods are useful tools for solving robotic motion planning problems. Typical approaches such as mixed-integer convex programs (MICPs) [13, 16], nonlinear or nonconvex programs (NLPs) [12, 27, 20], and mixed-integer NLPs (MINLPs) [21] offer powerful tools to formulate these problems. However, each has its own drawbacks. NLPs tend to suffer from locally optimal solutions, which in practice can have undesirable properties such as inconsistent behavior, since they depend on the initial guess. Mixed-integer programs (MIPs) are NP-hard and are usually solved with branch-and-bound [8]. MIP solvers seek globally optimal solutions and therefore behave more consistently than NLP solvers. For small-scale problems, these algorithms usually find optimal solutions within a reasonable time [16, 23]. However, MIPs can require impractically long solving times for problems with a large number of integer variables [15]. MINLPs incorporate both integer variables and nonlinear constraints and are hence very expressive, but we lack efficient algorithms to tackle them. Many practical problems require a solving speed of at most a few seconds. As a result, it is difficult to implement most of these optimization schemes online for larger-scale problems.

Fig. 1: Angle and x position of solutions for the bookshelf problem's leftmost book, depicted by colored numbers indicating cluster membership. Each box on the grid corresponds to a different relaxation scheme using integer variables: (a) full relaxation of the whole space, (b) relaxation over the space covered by data, (c) relaxation for specific modes from clusters. For visualization purposes, only the mode 4 region is explicitly displayed. Notice how different modes can reduce the number of integer variables needed.

Recently, researchers have started to investigate machine learning methods to gather problem-specific heuristics and speed up the MIP solving process. Standard algorithms for solving MIPs, such as branch-and-bound and cutting-plane methods, rely on heuristics to quickly remove infeasible regions, and learning methods can be used to acquire better heuristics. For example, [17] used graph neural networks to learn heuristics, and [22] used reinforcement learning to discover efficient cutting planes. On the other hand, data can be collected to learn to solve specific problems [28, 10]. In [10], the authors proposed CoCo, which collects problem features and solved integer-variable strategies offline and trains a neural network. Effectively, this becomes a classification problem, where each strategy has a unique label. For online solving, the neural network proposes candidate solutions, reducing the problem to a convex program. The results show that CoCo can solve MIPs with around 50 integer variables within a second. However, with more integer variables, CoCo suffers in two aspects. First, the number of possible strategies grows as $2^{n_z}$, where $n_z$ is the number of integer variables. As a result, the number of unique strategies tends to be close to the total amount of data, which induces overfitting. Second, as the number of integer variables increases, the solving speed dramatically slows. As a consequence, it is difficult to collect enough reliable training data.

In this paper, we propose ReDUCE, an algorithm that combines previous unsupervised learning work [15] with supervised learning, e.g., CoCo [10], to solve larger-scale MICPs and MINLPs online. Unsupervised learning is employed on a small amount of initial data to retrieve sub-regions (clusters) of the solution space. Integer variables are then assigned to each cluster. This allows us to retrieve important regions of the solution manifold and reduce the number of integer variables needed, which in turn enables fast generation of much larger datasets for training supervised learners. All generated datasets can then be used to train a final learning model that interpolates between clusters. We also introduce the bookshelf organization problem to demonstrate ReDUCE's capabilities: given a bookshelf with several books on it, an additional book needs to be placed on the shelf with minimal disturbance to the existing books. The bookshelf problem works well as a benchmark because it: 1) is an MINLP that can be converted into an MICP with hundreds of integer variables, 2) can easily be scaled to push algorithms to their limit, and 3) has practical significance in settings where data can reasonably be gathered, such as the logistics industry.

To summarize, our contributions are as follows:

  1. Extend supervised learning schemes, e.g., CoCo, to solve MINLPs, where nonlinear or non-convex constraints are converted into MICP constraints using convex envelopes,

  2. Use unsupervised learning to formulate the mixed-integer envelope constraints, significantly speeding up the data collection process for problems whose direct MICP reformulations solve too slowly, and

  3. Formulate the bookshelf organization problem as an MINLP and solve it within seconds with ReDUCE.

II Bookshelf Organization Problem Setup

Fig. 2: Complete formulation of the bookshelf organization problem.

Assume a 2D bookshelf with limited width and height contains $N$ rectangular books, where book $i$ has width $w_i$ and height $h_i$ for $i = 1, \dots, N$. A new book is to be inserted into the shelf. The bookshelf contains enough books in various orientations that, in order to insert the new book, the other books may need to be moved; we therefore optimize for minimal movement of the stored books. This problem is useful in the logistics industry, e.g., robots filling shelves.

Fig. 2 shows the constraints, variables and objective function for the bookshelf problem. The variables that characterize book $i$ are its centroid position $p_i = (x_i, y_i)$ and its angle $\theta_i$ about the centroid, with $\theta_i = 0$ when a book stands upright. The rotation matrix is $R(\theta_i) = \begin{bmatrix} \cos\theta_i & -\sin\theta_i \\ \sin\theta_i & \cos\theta_i \end{bmatrix}$. Let the 4 vertices of book $i$ be $v_{i,k}$, $k = 1, \dots, 4$. Constraint A in Fig. 2 gives the linear relationship between $v_{i,k}$ and $p_i$, in which the constant offset vector from the centroid to each vertex is rotated by $R(\theta_i)$. Constraint B enforces that all vertices of all books stay within the bookshelf, a linear constraint. Constraint C enforces the orthogonality of the rotation matrix, a bilinear (non-convex) constraint. Constraint D enforces that each angle stays within $[-\pi/2, \pi/2]$, storing books right side up.

To ensure that the final book positions and orientations do not overlap, separating-plane constraints are enforced. Two convex shapes do not overlap if and only if there exists a separating hyperplane $\{p : a^T p = b\}$ between them [7]: for any point $p_1$ inside shape 1, $a^T p_1 \le b$, and for any point $p_2$ inside shape 2, $a^T p_2 \ge b$. This is represented by constraint E. Constraint F enforces $a$ to be a unit normal vector, $\|a\|_2 = 1$. Both E and F are bilinear constraints.

Finally, we need to assign a state to each book. Each book can be standing straight up, lying flat on its left or right side, or leaning left or right against another book, as shown in the far-right column of Fig. 2. For each book $i$, we assign a set of binary variables $z_i$, one per state. If book $i$ stands upright, lies flat on its left, or lies flat on its right, constraints I1, J1, or H1 are enforced, respectively. If book $i$ leans against another book on the left or right, the constraints in K or L are enforced, respectively. In the leaning cases, checks are needed to establish the contact between books. Looking at the right column of Fig. 2, we can reasonably assume that the separating plane always crosses vertex 1 of the book on the left and vertex 4 of the book on the right; this is represented by the bilinear constraints K1 and L1. In addition, the books need to remain stable under gravity. Constraints K2 and L2 enforce that a book is stable if its centroid position stays between its own supporting point (vertex 2 if leaning rightward, vertex 3 if leaning leftward) and the position of the book it leans on. Lastly, constraints K3 and L3 enforce that the books have contact with the ground. For practical reasons, we assume that books cannot stack on top of each other, i.e., each book has to touch the floor of the bookshelf at at least one point. We note that the constraints in H, I, J, K, and L can easily be formulated as MICP constraints using the big-M formulation [26], such that each group is enforced only if its associated binary variable equals 1. The formulation can also easily be extended to allow stacking, and any contact condition between pairs of books may be added as long as it can be formulated as a mixed-integer convex constraint. Overall, this is a problem with integer variables $z$ and non-convex constraints C, E, F, K1, and L1, hence an MINLP.
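As an illustration of the big-M formulation mentioned above (a standard construction [26]; the scalar constraint $g$ and bound $M$ below are generic placeholders, not the paper's exact expressions): a constraint $g(x) \le 0$ that should hold only when a binary variable $z \in \{0,1\}$ equals 1 can be written as

$$ g(x) \le M\,(1 - z), $$

where $M$ is a valid upper bound on $g(x)$ over the feasible region. When $z = 1$ the constraint is enforced; when $z = 0$ it is vacuous.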

Practically, this problem presents challenges for retrieving high-quality solutions. If robots are used to store books, the permissible solving time is several seconds, and less optimal solutions mean longer execution times. For example, in Fig. 4 a non-optimal insertion induces multiple additional robot motions that dramatically increase the chance of failure. There are several potential approaches to this issue: fix one set of nonlinear variables and solve an MICP [19], convert the nonlinear constraints into piecewise linear constraints formulated as an MICP [13], or directly apply MINLP solvers such as BONMIN [6]. As expected, these approaches struggle to meet the requirements. In this paper, we implement ReDUCE to satisfy them.

III Learning Algorithm

Previous work demonstrated the potential of learning mixed-integer strategies offline and then using the learned model to sample candidate solutions and solve convex programs online [28, 10, 5]. In [15], the authors proposed an unsupervised learning method to identify important regions in the solution space, effectively reformulating the whole MIP into multiple problems with a reduced number of integer variables. ReDUCE builds upon this notion and combines the two approaches.

Assume that we are given a set of problems parametrized by $\theta$ drawn from a distribution $\mathcal{D}$. For each $\theta$, we seek a solution to the optimization problem:

$$
\begin{aligned}
\min_{x,\, z} \quad & f(x, z; \theta) \\
\text{s.t.} \quad & g_c(x, z; \theta) \le 0, \\
& g_b(x, z; \theta) \le 0, \quad z \in \{0, 1\}^{n_z}
\end{aligned}
\tag{1}
$$

where $x$ denotes continuous variables and $z$ binary variables with $z_i \in \{0, 1\}$ for $i = 1, \dots, n_z$. The constraints $g_c$ are mixed-integer convex, meaning that if the binary variables $z$ are relaxed into continuous variables $0 \le z \le 1$, $g_c$ becomes convex. The constraints $g_b$ are mixed-integer bilinear, meaning that relaxing the binary variables gives bilinear constraints. Without loss of generality, $x$ and $z$ are assumed to be involved in each constraint. We omit equality constraints in (1), as each can be written as two opposing inequality constraints. Similar to [15], the solution set with respect to a fixed $z$, denoted $\mathcal{X}(z)$, is defined as the manifold containing all feasible $x$ given that $z$; if $z$ is infeasible, $\mathcal{X}(z) = \emptyset$. The full solution set for the problem, embedded in the solution space, is $\mathcal{S} = \bigcup_z \mathcal{X}(z)$.

In this paper, we convert the bilinear constraints into mixed-integer linear constraints by gridding the solution space and approximating the constraints locally inside each grid cell with McCormick envelopes, similar to [11]. The McCormick envelope relaxation of a bilinear constraint [9] is the tightest linear relaxation defined over a pair of lower and upper bounds on the participating variables. We therefore assign a grid to the variables appearing in bilinear terms, each cell having its own lower and upper bounds, and introduce additional binary variables $n$ that select a cell: each unique value of $n$ corresponds to one grid cell, within which the McCormick envelope relaxation is applied. The bilinear constraints $g_b$ are thus converted into mixed-integer linear constraints $\tilde{g}_b(x, z, n; \theta) \le 0$, turning (1) into an MICP:

$$
\begin{aligned}
\min_{x,\, z,\, n} \quad & f(x, z; \theta) \\
\text{s.t.} \quad & g_c(x, z; \theta) \le 0, \\
& \tilde{g}_b(x, z, n; \theta) \le 0, \quad z \in \{0, 1\}^{n_z}, \; n \in \{0, 1\}^{n_n}
\end{aligned}
\tag{2}
$$
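For reference, the standard McCormick envelope of a single bilinear term $w = uv$ over a cell $u \in [u^L, u^U]$, $v \in [v^L, v^U]$ consists of four linear inequalities (a textbook construction [9]; the cell bounds here are generic placeholders, not the paper's specific grid):

$$
\begin{aligned}
w &\ge u^L v + u\, v^L - u^L v^L, \qquad & w &\ge u^U v + u\, v^U - u^U v^U, \\
w &\le u^U v + u\, v^L - u^U v^L, \qquad & w &\le u^L v + u\, v^U - u^L v^U.
\end{aligned}
$$

Smaller cells yield tighter envelopes, which is why the grid resolution controls the approximation accuracy of the bilinear constraints.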

To speed up MICP solving, [5, 10, 4] define integer strategies as tuples $((z^\star, n^\star), \mathcal{T}(\theta))$, where $(z^\star, n^\star)$ is an optimal integer assignment for problem (2) and $\mathcal{T}(\theta)$ is the set of active inequality constraints at the optimum. Given an optimal integer strategy, the solution to (2) can be retrieved by solving a single convex optimization problem. If this approach requires only a small number of integer variables, it may perform well. However, approximating nonlinear constraints with mixed-integer convex constraints usually requires a large number of integer variables when high approximation accuracy is desired [11]. As a result, the solving time grows quickly, which makes it difficult to collect data to train supervised learners.
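In other words, once a strategy $((z^\star, n^\star), \mathcal{T}(\theta))$ is fixed, what remains is the convex program (a schematic restatement of the mechanism in [5, 10, 4], using the notation introduced above):

$$ \min_{x} \; f(x, z^\star; \theta) \quad \text{s.t.} \quad g_c(x, z^\star; \theta) \le 0, \;\; \tilde{g}_b(x, z^\star, n^\star; \theta) \le 0, $$

optionally restricted to the active constraints $\mathcal{T}(\theta)$, so the online step costs a single convex solve per candidate strategy.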

This paper uses a clustering method on a relatively small amount of pre-solved data to identify important regions of the solution set $\mathcal{S}$, and applies McCormick envelope relaxations only around those regions, which significantly improves the MIP solving speed. The nominal dimension of the solution space is the number of continuous variables; however, due to the constraints, the actual solution set is of much lower dimension. Relaxing the complete space of $x$ is unnecessary, as many grid cells are infeasible or non-optimal and the MIP solver should not need to spend time exploring those regions. Furthermore, we almost never need the complete manifold in practice: many robotic systems operate under distinct modes corresponding to regions of $\mathcal{S}$ where the optimal solutions concentrate. With data, unsupervised learning can identify those regions. Within each region, we only need a smaller number of integer variables. Consequently, the solving speed can be significantly improved.

The detailed steps of ReDUCE are shown in Algorithm 1. To begin, ReDUCE requires a relatively small amount of pre-solved data, termed kick-off data throughout. This data may be collected by solving the non-reduced formulation (2); if (2) represents a practical problem, the data may also come from simulation or human demonstration on real hardware. We also need to pre-assign a grid to the solution space, whose cell size depends on the approximation accuracy required for the bilinear constraints. ReDUCE begins by performing unsupervised learning on the kick-off data to retrieve clusters that indicate regions of the solution set. We use Density-Based Spatial Clustering of Applications with Noise (DBSCAN) [14] to cluster on $x$, which gives clusters $C_1, \dots, C_K$, where $K$ is the number of clusters. We then trace the map from features to solutions backwards to create the corresponding clusters in feature space. Next, a supervised classifier is trained to map features to cluster labels. We use a random forest, which requires relatively less training data than deep learning methods; thus, the amount of kick-off data can be relatively small.

The main difficulty in generating training data for (2) is its slow solving speed due to the large number of integer variables. If the solution set is segmented into smaller regions, e.g., clusters, the number of integer variables required for each cluster can be reduced. This can be seen in Fig. 1, which shows a 2-dimensional (dim) projection from the bookshelf experiment described in Sec. IV-A. In this paper, we use a logarithmic formulation (cf. [25]), which means $N$ grid cells can be represented by $\lceil \log_2 N \rceil$ integer variables. In Fig. 1, the complete space has 27 grid cells, which requires at least 5 integer variables, whereas mode 4 occupies only 6 grid cells, which can be represented with 3 integer variables. One advantage of DBSCAN is that it uses a density threshold to decide the boundaries of each cluster and to identify outliers. Outliers can be removed from the training data, and the trained classifier can still make predictions for them; based on such predictions, some outliers may have membership in multiple clusters. Outliers may also indicate insufficient sample size near a particular region, which can guide practitioners on where to collect more data.
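To make the counting concrete, here is a minimal sketch of the logarithmic cell encoding (the helper names are hypothetical, not from the ReDUCE codebase):

```python
import math

def num_binary_vars(num_cells: int) -> int:
    """Number of binary variables needed to index num_cells grid cells."""
    return max(1, math.ceil(math.log2(num_cells)))

def cell_to_bits(cell_index: int, num_cells: int) -> list:
    """Encode a grid-cell index as a list of 0/1 values (little-endian)."""
    k = num_binary_vars(num_cells)
    return [(cell_index >> b) & 1 for b in range(k)]

# The full 27-cell grid from Fig. 1 needs ceil(log2(27)) = 5 binaries,
# while the 6-cell region of mode 4 needs only ceil(log2(6)) = 3.
assert num_binary_vars(27) == 5
assert num_binary_vars(6) == 3
print(cell_to_bits(4, 6))  # -> [0, 0, 1]
```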

With the trained classifier, we can classify a much larger set of problems and solve them quickly within each clustered region to collect more training data, as indicated in lines 4-14 of Algorithm 1. For all solutions in a cluster, we find the grid cells that they occupy and re-assign integer variables to each cluster (lines 8-10). Eventually all data are put together for training, so the definition of the integer variables needs to be consistent across clusters; thus, we recover the original integer variables $z$ and $n$ (line 13). With the reduced integer variables for each cluster, problem (2) can be reformulated and solved. The feature, solution, and recovered integer variables are added to the dataset. Finally, the dataset is used to train a strategy learner, e.g., CoCo.

The idea of solving smaller sub-MIPs bears similarities to algorithms such as RENS [3] and Neural Diving [17], where a subset of integer variables is fixed from linear-program solutions or a learned model while the rest are solved. Our method, however, segments the problem through an unsupervised clustering approach, with the goal of generating enough training data for supervised learning.

Input: kick-off data $D_0$, grid $G$, DBSCAN clustering threshold $\epsilon$, desired number of samples per label $S$
1:  Clusters $C_1, \dots, C_K \leftarrow$ DBSCAN$(D_0, \epsilon)$
2:  Map each solution cluster back to the corresponding feature-space cluster
3:  Train random forest classifier RF on (feature, cluster label) pairs
4:  while samples per label $< S$ do
5:     Sample $\theta$, classify with RF, and store
6:  for each cluster $C_k$ do
7:     Initialize grid-cell list $G_k$ and dataset $D_k$
8:     for each point in $C_k$ do
9:         Find the corresponding grid cell and add it to $G_k$
10:    Assign integer variables $\tilde{z}_k, \tilde{n}_k$ to $G_k$
11:    Formulate the reduced problem from (2) with $G_k$, replacing $z, n$ by $\tilde{z}_k, \tilde{n}_k$
12:    if the reduced problem has a solution then
13:        Recover the original $z, n$ from $\tilde{z}_k, \tilde{n}_k$
14:        Add $\theta$, $x$, $z$, $n$ to $D_k$
15: Initialize neural network or desired model $M$
16: Use $\bigcup_k D_k$ to train $M$
17: return $M$
Algorithm 1: ReDUCE
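For concreteness, the clustering and classification steps (lines 1-5) might look as follows with scikit-learn; the function names, hyperparameters, and data shapes are illustrative assumptions rather than the released ReDUCE code:

```python
# Sketch of Algorithm 1, lines 1-5: cluster kick-off solutions with DBSCAN,
# train a random-forest classifier from features to cluster labels, then
# label newly sampled problem instances.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.ensemble import RandomForestClassifier

def cluster_and_train(features, solutions, eps=0.5, min_samples=5):
    """Cluster solved instances in solution space, then learn theta -> cluster.

    features:  (m, n_theta) array of problem parameters theta
    solutions: (m, n_x) array of solved continuous variables x
    """
    features, solutions = np.asarray(features), np.asarray(solutions)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(solutions)
    keep = labels != -1                       # DBSCAN marks outliers as -1
    clf = RandomForestClassifier(n_estimators=150)
    clf.fit(features[keep], labels[keep])     # classifier: theta -> cluster id
    return clf, labels

def label_new_instances(clf, new_features):
    """Line 5 of Algorithm 1: assign freshly sampled problems to clusters."""
    return clf.predict(new_features)
```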

IV Experiment

IV-A Experiment Setup

We use ReDUCE to solve the bookshelf organization problem. We place 3 books inside the shelf, and 1 additional book is to be inserted. Grids are assigned to the variables involved in the non-convex constraints C, E, F, K1, and L1: the rotation-matrix entries, the separating-plane parameters, and the vertex positions. These variables span a 48-dimensional space. The rotation angles are gridded at a fixed angular interval, elements of the rotation matrices at 0.25 intervals, and positions at intervals proportional to the shelf width and height. Since books are not allowed to stack on each other, there is a left-to-right order of books, and we order the feature and solution vectors accordingly. We use an MICP formulation that has $\lceil \log_2 N_g \rceil$ integer variables (explained in detail in the appendix), where $N_g$ is the number of grid cells; this results in 130 integer variables in total. The feature vector includes the centroid positions, angles, heights and widths of the stored books (3 books x 5 quantities) plus the height and width of the book to be inserted, giving a feature dimension of 17.

The kick-off data was collected using a 2-dim simulated environment of books on a shelf. Initially, 4 randomly sized books are arbitrarily placed on the shelf; then 1 is randomly removed and regarded as the book to be inserted. Played in reverse, the initial state with 4 books represents one feasible (though not necessarily optimal) solution to the problem of placing a book on a shelf with 3 existing books. Since this problem can be viewed as high-level planning for robotic systems, the simulated data is sufficient; for applications outside the scope of this paper, real-world data may be preferable in this pipeline.

IV-B Unsupervised Learning

                                    CoCo+Re                  RF+Re                    Baseline+Re  MIP+Re
Cluster  N/total    %      Int      S%     Det  Avg   Max    S%     Det  Avg   Max    S%     Det   Avg   Max
0        997/1000   99.7%  77       93.4%  210  1.39  7.39   93.2%  348  1.42  7.58   65.4%  856   4.52  37.86
0        2984/2999  99.4%  77       97.0%  144  1.25  7.35   97.0%  253  1.31  7.46   63.7%  862   4.52  37.86
0        4947/5000  98.9%  77       98.0%  119  1.05  7.23   96.0%  194  1.17  6.75   65.5%  933   4.52  37.86
1        698/699    99.9%  77       91.2%  148  1.42  7.37   90.0%  267  1.35  7.10   57.4%  608   2.27  16.61
1        1996/1999  99.8%  77       96.2%  93   1.30  7.58   92.6%  204  1.27  7.21   60.0%  554   2.27  16.61
1        5976/5999  99.6%  77       99.0%  73   1.24  7.12   96.8%  169  1.12  7.57   60.5%  543   2.27  16.61
2        599/600    99.8%  66       91.0%  95   1.36  6.70   90.6%  122  1.33  7.51   59.5%  281   0.92  5.46
2        2970/2999  99.0%  66       96.8%  60   1.13  7.37   93.8%  88   1.00  7.42   57.0%  293   0.92  5.46
2        5476/5599  97.8%  66       98.4%  47   1.07  7.47   95.2%  84   1.04  6.97   57.7%  251   0.92  5.46
3        339/399    85.0%  46       94.0%  28   1.18  7.03   92.6%  37   0.94  7.04   63.2%  162   0.21  0.78
3        681/900    75.6%  46       97.2%  20   0.95  6.92   97.8%  29   0.73  6.94   57.5%  132   0.21  0.78

TABLE I: Comparison of algorithms with ReDUCE (+Re): CoCo (CoCo+Re), random forest (RF+Re), random strategy sampling (Baseline+Re), and MIP on the reduced problem (MIP+Re). N/total is the number of unique integer strategies over the amount of training data, and % is their quotient; Int is the number of integer variables within a cluster. Success rate and deterioration of the objective are denoted S% and Det, respectively. Average and maximum solving times (Avg and Max) are in seconds; MIP+Re times do not depend on the amount of training data. With more data, the learning-based algorithms generally improve.

We randomly sample 4,000 bookshelves and implement DBSCAN, obtaining 100 clusters. Fig. 1 shows the first 2 dimensions of the projected solution set and 6 modes with distinct labels and colors. Fig. 3 shows that the solution set packs into tighter groups than the same sample of features, using t-distributed Stochastic Neighbor Embedding (t-SNE) [24] to project the high-dimensional spaces to 2-dim. The upper plot in Fig. 3 shows the clusters in the solution space, while the lower one depicts the corresponding clusters in the feature space, traced back from the kick-off data; the colors denote the different clusters. Clear separations are visible in the solution space. Although the clusters are more intertwined in the feature space because of the complexity of the feature-to-solution mapping, there are still distinct regions where certain colors dominate. We trained a random forest (RF) classifier on the features; the classification accuracy reaches 97%, indicating that the RF achieves a reasonable mapping.

Fig. 3: Top: projection of the 48-dim solution space onto a 2-dim manifold using t-SNE, depicting the clustering. Each color and label specify a solution cluster; under this projection, the solution set appears highly structured. Bottom: t-SNE projection of the corresponding 17-dim feature space. The clusters have some structure over the feature space, with certain labels found only in certain regions. For visualization purposes, only the top 8 clusters out of 100 are displayed.
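A plot like Fig. 3 could be produced along these lines with scikit-learn and matplotlib (an illustrative sketch; the array names and plotting choices are assumptions, not the paper's plotting code):

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(points, labels, title):
    """Project high-dimensional points to 2-D with t-SNE, colored by cluster."""
    embedded = TSNE(n_components=2).fit_transform(points)
    plt.scatter(embedded[:, 0], embedded[:, 1], c=labels, cmap="tab10", s=5)
    plt.title(title)
    plt.show()

# Top plot of Fig. 3: 48-dim solution vectors; bottom plot: 17-dim features.
# plot_tsne(solutions, labels, "solution space")
# plot_tsne(features, labels, "feature space")
```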

IV-C Supervised Learning

          Success (%)  Det  Avg Time  Max Time
CoCo+Re   99.2%        140  1.21 sec  13.47 sec
MIP+Re    100%         0    2.36 sec  48 sec
MIP       n/a          n/a  851 sec   n/a
MINLP     n/a          n/a  10 min    n/a

TABLE II: CoCo+Re (+Re denotes using ReDUCE) sampled the top 50 candidate strategies over 100 clusters. MIP+Re used those same clusters and was solved with Gurobi; MIP also used Gurobi; MINLP used BONMIN as the solver. MIP and MINLP solving times for the full set of samples exceeded reasonable limits for practical purposes.
Fig. 4: Solved bookshelf scenes. Top row: instances solved by CoCo with ReDUCE. Bottom row: instances solved by RF with ReDUCE. Columns (a), (b), (c) correspond to clusters 0, 1, 2, respectively. Each cell contains a before and after scene: the upper diagram shows the original bookshelf with orange rectangles representing stored books, and the lower diagram shows the solved bookshelf with the blue rectangle representing the inserted book. Column (a) demonstrates a scene where RF gives a much worse result than CoCo, such that multiple books are moved; however, CoCo and RF usually give similar results.

With the RF classifier trained on 100 clusters, we can quickly sample different bookshelf problems and solve the reduced MIP problems within seconds to collect more data for supervised learning. To verify the benefits of ReDUCE, we first run an experiment observing how performance increases with the amount of data over several clusters, each containing a different number of integer variables (denoted Int in Table I). We pick clusters containing 77, 66, and 46 integer variables, respectively. Four different methods for solving the reduced problem within a cluster are tested on a fixed testing set of 500 data points; the results are shown in Table I. The columns under N/total show the number of unique integer strategies (N) over the amount of training data (total), with the uniqueness percentage (%) as their quotient. For the column titled CoCo+Re, we train a neural network with one hidden layer of size 10,000; the input size equals the feature dimension (17) and the output layer size equals the number of unique strategies in the training data (N). Online, the network proposes 30 candidate strategies ranked by the softmax scores of the network and solves the associated convex optimization problems. If a strategy gives a feasible solution, we terminate the process and record the solving time and optimal cost; otherwise, infeasibility is recorded. Since up to 30 convex problems may be solved online, the problem setup time is non-negligible; to avoid additional overhead, we set up the problem once and keep modifying the integer constraints for subsequent instances. Thus, all solving times include the problem setup time. Solving is done on a Core i7-7800X 3.5GHz x12 machine with Gurobi. For the column RF+Re, we instead train a random forest with 150 decision trees and take the 30 most-voted strategies as candidate solutions. For the column Baseline+Re, we simply sample 30 unique strategies at random from the training strategies. For the column MIP+Re, we solve the MIP with the reduced number of integer variables directly, without supervised learning or sampling. For all sampling methods, we record the feasibility rate (S%) and the deterioration (Det) of the optimal cost relative to the MIP+Re column; by definition, MIP+Re has 100% feasibility and 0 deterioration. Solving times for the baseline are omitted, as they are significantly longer than for the learning-based methods. All times under the Avg (average solving time) and Max (maximum solving time) columns are given in seconds.
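A minimal sketch of the strategy classifier described above (one hidden layer of width 10,000, input 17, output N unique strategies) and the top-k candidate selection, written in PyTorch; the class name, default strategy count, and training details are illustrative assumptions:

```python
import torch
import torch.nn as nn

class StrategyClassifier(nn.Module):
    """Feature (17-dim) -> scores over N unique integer strategies."""
    def __init__(self, n_features=17, n_strategies=1000, hidden=10_000):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_strategies),
        )

    def forward(self, theta):
        return self.net(theta)

def top_k_strategies(model, theta, k=30):
    """Rank strategies by softmax score and return the k best candidates;
    each candidate fixes the integers, leaving a convex program to solve."""
    with torch.no_grad():
        scores = torch.softmax(model(theta), dim=-1)
    return torch.topk(scores, k).indices
```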

As the results show, the performance of the learning-based methods generally improves with more data; because ReDUCE speeds up data collection, performance can be further increased by gathering more data. The random sampling baseline (Baseline+Re) performs significantly worse than the learning methods, confirming that the clusters are not so small as to make the reduced problem trivial. For larger clusters with more integer variables (e.g., cluster 0), the learning methods demonstrate an increase in solving speed over MIP; for smaller clusters (e.g., cluster 2), MIP is faster. There is thus a tradeoff between problem scale and solving method depending on cluster size.

For a second set of experiments, we collect data for all 100 clusters, 17,000 samples in total, and train a neural network of the same size as above. The network is then tested on a set of 1,000 problems with all 100 clusters blended together, sampling 50 strategies per problem. The feasibility percentage (Success (%)), the optimal-cost deterioration (Det), and the average (Avg Time) and maximum (Max Time) solving times are shown in Table II. For comparison, we record the average and maximum solving times for MIP with ReDUCE (all clusters blended together), the solving time for MIP without ReDUCE (the original formulation), and the solving time of formulation (1) under BONMIN, an MINLP solver. CoCo solves faster than MIP for larger clusters, which are associated with more kick-off data; when sampling problems, we obtain more samples from larger clusters, so the average solving time improves. For the non-reduced MIP, Table II shows the average solving time over 10 samples, where the solving process is interrupted if it exceeds 1,000 sec; the average solving time is therefore at least 851 sec. It is clear that without ReDUCE it is intractable to gather the amount of data required to train a learner with decent performance. Similar solving times are seen for the MINLP solver. All the code is available at: https://github.com/RoMeLaUCLA/ReDUCE.git.

IV-D Hardware Experiment

We implement our optimization results on hardware with a 6-degree-of-freedom manipulator [18] to insert a book onto a shelf. To automate the system, a trajectory planner is required to generate the insertion motion, as is common in practice; since the focus of this paper is on high-level planning, we simplify this part by manually selecting waypoints. The hardware implementation is shown in the attached video.

V Conclusion, Discussion and Future Work

This paper proposes ReDUCE, an algorithm that relies on unsupervised learning on a relatively small dataset to reduce the number of integer variables and generate large amounts of strategy data for supervised learners. The results demonstrate improvements in solving speed over the original MIP formulations, allowing for practical implementations.

As this algorithm requires kick-off data, data collection is an important consideration. Beyond collecting data from simulations or hardware demonstrations, we consider automatically generating formulations for known problems and transferring knowledge to a target domain an interesting next step. In real life, we often encounter problems with similar solutions in different contexts; developing tools that automatically generate MIP formulations from other domains may help generate kick-off data.

We notice from the test results that higher feasibility rates sometimes lead to lower optimality. This may be because the optimal strategy for a testing problem is not included in the training data. One direction for future work is using generative learning approaches to combine features of different training instances and generate new strategies. Other future directions include increasing the amount of data to further boost performance, increasing the number of books to test the limits of ReDUCE, implementing parallel computing to increase online solving speed, and comparing the performance of ReDUCE with reinforcement learning algorithms.

Acknowledgements

The authors would like to thank Abhishek Cauligi and Professor Bartolomeo Stellato for helpful discussions, and Yusuke Tanaka and Hyunwoo Nam for assisting with the hardware implementation.

References

  • [1] D. Avis and K. Fukuda (1992) A pivoting algorithm for convex hulls and vertex enumeration of arrangements and polyhedra. Discrete & Computational Geometry 8(3), pp. 295–313.
  • [2] P. Belotti, L. Liberti, A. Lodi, G. Nannicini, A. Tramontani, et al. (2011) Disjunctive inequalities: applications and extensions. Wiley Encyclopedia of Operations Research and Management Science 2, pp. 1441–1450.
  • [3] T. Berthold (2014) RENS: the optimal rounding. Mathematical Programming Computation 6(1), pp. 33–54.
  • [4] D. Bertsimas and B. Stellato (2019) Online mixed-integer optimization in milliseconds. arXiv preprint arXiv:1907.02206.
  • [5] D. Bertsimas and B. Stellato (2021) The voice of optimization. Machine Learning 110(2), pp. 249–277.
  • [6] P. Bonami and J. Lee (2007) BONMIN user's manual. Numer Math 4, pp. 1–32.
  • [7] S. Boyd and L. Vandenberghe (2004) Convex optimization. Cambridge University Press.
  • [8] S. Boyd and J. Mattingley (2007) Branch and bound methods. Notes for EE364b, Stanford University.
  • [9] P. M. Castro (2015) Tightening piecewise McCormick relaxations for bilinear problems. Computers & Chemical Engineering 72, pp. 300–311.
  • [10] A. Cauligi, P. Culbertson, E. Schmerling, M. Schwager, B. Stellato, and M. Pavone (2021) CoCo: online mixed-integer control via supervised learning. arXiv preprint arXiv:2107.08143.
  • [11] H. Dai, G. Izatt, and R. Tedrake (2019) Global inverse kinematics via mixed-integer convex optimization. The International Journal of Robotics Research 38(12-13), pp. 1420–1441.
  • [12] H. Dai, A. Valenzuela, and R. Tedrake (2014) Whole-body motion planning with centroidal dynamics and full kinematics. In 2014 IEEE-RAS International Conference on Humanoid Robots, pp. 295–302.
  • [13] R. Deits and R. Tedrake (2014) Footstep planning on uneven terrain with mixed-integer convex optimization. In 2014 IEEE-RAS International Conference on Humanoid Robots, pp. 279–286.
  • [14] M. Ester, H. Kriegel, J. Sander, and X. Xu (1996) Density-based spatial clustering of applications with noise. In Int. Conf. on Knowledge Discovery and Data Mining, Vol. 240, pp. 6.
  • [15] X. Lin, M. S. Ahn, and D. Hong (2021) Designing multi-stage coupled convex programming with data-driven McCormick envelope relaxations for motion planning. In 2021 IEEE International Conference on Robotics and Automation (ICRA).
  • [16] X. Lin, J. Zhang, J. Shen, G. Fernandez, and D. W. Hong (2019) Optimization based motion planning for multi-limbed vertical climbing robots. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1918–1925.
  • [17] V. Nair, S. Bartunov, F. Gimeno, I. von Glehn, P. Lichocki, I. Lobov, B. O'Donoghue, N. Sonnerat, C. Tjandraatmadja, P. Wang, et al. (2020) Solving mixed integer programs using neural networks. arXiv preprint arXiv:2012.13349.
  • [18] D. Noh, Y. Liu, F. Rafeedi, H. Nam, K. Gillespie, J. Yi, T. Zhu, Q. Xu, and D. Hong (2020) Minimal degree of freedom dual-arm manipulation platform with coupling body joint for diverse cooking tasks. In 2020 17th International Conference on Ubiquitous Robots (UR), pp. 225–232.
  • [19] M. Posa, M. Tobenkin, and R. Tedrake (2015) Stability analysis and control of rigid-body systems with impacts and friction. IEEE Transactions on Automatic Control 61(6), pp. 1423–1437.
  • [20] Y. Shirai, X. Lin, Y. Tanaka, A. Mehta, and D. Hong (2020) Risk-aware motion planning for a limbed robot with stochastic gripping forces using nonlinear programming. IEEE Robotics and Automation Letters 5(4), pp. 4994–5001.
  • [21] M. Soler, A. Olivares, P. Bonami, and E. Staffetti (2011) En-route optimal flight planning constrained to pass through waypoints using MINLP.
  • [22] Y. Tang, S. Agrawal, and Y. Faenza (2020) Reinforcement learning for integer programming: learning to cut. In International Conference on Machine Learning, pp. 9367–9376.
  • [23] J. Tordesillas, B. T. Lopez, and J. P. How (2019) FASTER: fast and safe trajectory planner for flights in unknown environments. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1934–1940.
  • [24] L. Van der Maaten and G. Hinton (2008) Visualizing data using t-SNE. Journal of Machine Learning Research 9(11).
  • [25] J. P. Vielma and G. L. Nemhauser (2011) Modeling disjunctive constraints with a logarithmic number of binary variables and constraints. Mathematical Programming 128(1), pp. 49–72.
  • [26] J. P. Vielma (2015) Mixed integer linear programming formulation techniques. SIAM Review 57(1), pp. 3–57.
  • [27] A. W. Winkler, C. D. Bellicoso, M. Hutter, and J. Buchli (2018) Gait and trajectory optimization for legged systems through phase-based end-effector parameterization. IEEE Robotics and Automation Letters 3(3), pp. 1560–1567.
  • [28] J. Zhu and G. Martius (2019) Fast non-parametric learning to accelerate mixed-integer programming for online hybrid model predictive control. arXiv preprint arXiv:1911.09214.