Learning Objective Boundaries for Constraint Optimization Problems

06/20/2020 ∙ by Helge Spieker, et al. ∙ Simula Research Lab

Constraint Optimization Problems (COP) are often considered without sufficient knowledge of the boundaries of the objective variable to optimize. When available, tight boundaries are helpful to prune the search space or estimate problem characteristics. Finding close boundaries that correctly under- and overestimate the optimum is almost impossible without actually solving the COP. This paper introduces Bion, a novel approach for boundary estimation by learning from previously solved instances of the COP. Based on supervised machine learning, Bion is problem-specific and solver-independent and can be applied to any COP which is repeatedly solved with different data inputs. An experimental evaluation over seven realistic COPs shows that an estimation model can be trained to prune the objective variables' domains by over 80 %. We further find that Bion improves the solving process for some problems, although the effect of closer bounds is generally problem-dependent.


1 Introduction

Many scheduling or planning problems involve the exact optimization of some variable (e.g., the makespan) that depends on decision variables, constrained by a set of combinatorial relations. These problems, called Constraint Optimization Problems (COP), are notoriously difficult to solve [Hooker2012]. They are often addressed with systematic tree search, such as branch-and-bound, where parts of the search space with worse cost than the current best solution are pruned. In Constraint Programming, these systematic techniques work without prior knowledge and are steered by the constraint model. Unfortunately, the computational cost of fully exploring the search space is exponential in the worst case, and the performance of the solver depends on efficient domain pruning [Rossi2006].

COP solving with branch-and-bound is often considered without sufficient knowledge of the boundaries of the objective variable [Milano2006]. When available, these boundaries can support the solver in discovering near-optimal solutions early during search and thus reduce the computational effort [Beck2011, Hooker2012, Gualandi2012]. In addition, providing tight estimates of the objective variable is useful user feedback for estimating interesting features of a COP [DeRaedt2011]. Unfortunately, finding close under- and overestimations of the objective variable is still an open problem [Ha2015], and almost impossible without actually running the solver with a good heuristic. Domain boundaries are therefore usually obtained through problem-specific heuristics [Beck2011, Gualandi2012, Liu2018], which requires a deep understanding of the COP. A generic method for closer domain estimation would allow many COP instances to be solved more efficiently.

This paper introduces Bion, a new method combining logic-driven constraint optimization and supervised machine learning (ML) for solving COPs. Using the known results of already-solved instances of a COP, a trained data-driven ML estimation model predicts boundaries for the objective variable of a new instance. These boundaries are then exploited by a COP solver to prune the search space. In other words, for a given COP, an estimation model is trained once with already-solved instances of the same problem. For a new instance, the trained model estimates close boundaries of the objective variable. Using the estimated boundaries, additional domain constraints are added to the COP model and used by the solver to prune the search space at low cost. Note, however, that ML methods can only approximate the optimum and are therefore not a full alternative to exact solving. To eliminate the inherent risk of misestimations, which can cause unsatisfiability, the ML model is trained with an asymmetric loss function, adjusted training labels, and other counter-measures. As a result, Bion remains an exact method for solving COPs and can advantageously complement any existing COP solver. Interestingly, the main computational cost of Bion lies in the training phase, whereas the estimation cost for a new input is low.

Besides the general ability to estimate close objective boundaries, we explore with Bion how useful these boundaries are to prune the search space and to improve the COP solving process. Our results show that boundary estimation can generally improve solver performance, even though there are dependencies on the right combination of solver and COP model for best use of the reduced domains. The main contributions of this paper are threefold:

  1. We introduce Bion, a new exact method combining ML and traditional COP solving to estimate close boundaries of the objective variable and to exploit these boundaries for boosting the solving process. Bion can be advantageously applied to any COP which is repeatedly solved with different data inputs. To the best of our knowledge, this is the first time a problem- and solver-independent ML method using historical data is proposed.

  2. We discuss training techniques to avoid misestimations and compare various ML models, such as gradient tree boosting, support vector machines, and neural networks, with both symmetric and asymmetric loss functions. A key contribution lies in the dedicated feature selection and the user-parameterized label shift, a new method to train these models on COP characteristics.

  3. We evaluate Bion’s ability to prune objective domains, as well as the impact of estimated and manually determined boundaries on solver performance with seven COPs.

2 Related Work

Many exact solvers include heuristics to find initial feasible solutions that can be used for bounding the search, using, for example, linear relaxations [Hooker2012]. Others rely on branching heuristics and constraint propagation to find close bounds early [Rossi2006]. Recent works have also considered including the objective variable as part of the search and branching heuristic [Fages2017, Palmieri2018]. By exploiting a trained ML model, our approach Bion adds an additional bounding step before the solver execution, but it does not replace the solvers' bounding mechanisms. Hence, it complements these approaches by starting with a smaller search space.

The combination of ML and exact solvers has been previously explored from different angles [DeRaedt2011, Lombardi2018]. One angle is the usage of ML for solver configuration and algorithm selection, for example by selecting and configuring a search strategy [Arbelaez2010, Loth2013, Chu2015], deciding when to run heuristics [Khalil2017] or lazy learning [Gent2010], or by efficiently orchestrating a solver portfolio [Seipp2015, Amadini2015]. In [Cappart2019], Cappart et al. propose the integration of reinforcement learning into the construction of decision diagrams to derive bounds for solutions of optimization problems. A framework for the mutual integration of CP and ML, the Inductive Constraint Programming (ICP) loop, was presented in [Bessiere2017] and is also applicable to Bion. Lombardi et al. present a general framework for embedding ML-trained models in optimization techniques, called empirical model learning [Lombardi2017]. This approach deploys trained ML models directly in the COP as additional global constraints, which is promising, especially when the learned model works directly on the model input. In our case, the feature set uses external instance descriptions and handles different input sizes, which is a different type of integration. The coupling of data-driven and exact methods differs from the work on learning combinatorial optimization purely from training [Bello2017, Dai2017, Deudon2018], without using any constraint solver. As the complexity of learning a full solution is much higher than estimating the objective value, these methods require considerably more training data and computational resources, and are not yet competitive in practice.

Previous work also exists on predicting instance characteristics of COPs, including runtime prediction, e.g., for the traveling salesperson problem [Mersmann2013, Hoos2014], or combining multiple heuristics into a single admissible A* heuristic [Samadi2008]. However, these methods are tailored to a single problem and rely on problem-specific features for the statistical model, which makes them effective for their specific use case but requires substantial analytic effort per problem. In contrast, our approach is both solver- and problem-independent.

3 Background

This section introduces constraint optimization in the context of Constraint Programming over Finite Domains [Rossi2006] and the necessary notions of supervised ML.

We define a Constraint Optimization Problem (COP) as a triple $\langle X, C, f \rangle$, where $X$ is a set of variables, $C$ is a set of constraints, and $f$ is an objective function whose value $z$ is to be optimized. Each variable $x \in X$, also called decision variable, is associated with a finite domain $D(x)$, representing all possible values. A constraint $c \in C$ is defined over a subset of variables $X_c \subseteq X$, represented by all allowed $|X_c|$-tuples of the corresponding relation.

A (feasible) solution of the COP is an assignment of each variable to a single value from its domain, such that all constraints are satisfied. Each solution corresponds to an objective value $z = f(\cdot)$. The goal of solving the COP is to find at least one solution, called an optimal solution, whose objective value $z^*$ is optimal. Depending on the problem formulation, $z$ has either to be minimized or maximized by finding solutions with a smaller, respectively larger, objective value. We use $\underline{v}$ and $\overline{v}$ to refer to the lower and upper domain boundaries of a variable $v$, and the notation $[\underline{v}, \overline{v}]$ to denote the domain, i.e., the integer set $\{\underline{v}, \underline{v}+1, \ldots, \overline{v}\}$.

A COP is satisfiable iff it has at least one feasible solution. The instance is unsatisfiable iff it has no feasible solution. A COP is said to be solved iff at least one of its feasible solutions is proved optimal.

A ML model can be trained for the regression task of approximating the objective function of a COP. In a typical regression task, a continuous value $y$ is predicted for a given input vector $\mathbf{x}$: $y = h(\mathbf{x})$, e.g., for predicting closer domain boundaries. The model is trained through supervised learning, that is, from examples of input vectors $\mathbf{x}_i$ and their true outputs $y_i$: $\{(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_n, y_n)\}$. During successive iterations, the model parameters are adjusted with the help of a loss function that evaluates the approximation quality on the training examples [Hastie2009].

We now introduce the concept of estimated boundaries, which refers to providing close lower and upper bounds for the optimal value $z^*$. An estimation is a domain $[\hat{\underline{z}}, \hat{\overline{z}}]$ which defines boundaries for the domain of $z$. The domain boundaries are predicted by supervised ML, that is, $\hat{\underline{z}} = \underline{h}(\mathbf{x})$ and $\hat{\overline{z}} = \overline{h}(\mathbf{x})$. An estimation is admissible iff $z^* \in [\hat{\underline{z}}, \hat{\overline{z}}]$. Otherwise, the estimation is inadmissible.

We further classify the two domain boundaries as cutting and limiting boundaries in relation to their effect on the solver's search process. Depending on whether the COP is a minimization or maximization problem, these terms refer to different domain boundaries. The cutting boundary is the domain boundary that reduces the number of reachable solutions: for minimization, this is the upper domain boundary $\overline{z}$; for maximization, the lower domain boundary $\underline{z}$. Similarly, the limiting boundary is the domain boundary that does not reduce the number of reachable solutions, but only reduces the search space to be explored: for minimization, this is the lower domain boundary $\underline{z}$; for maximization, the upper domain boundary $\overline{z}$.
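To make these definitions concrete, the following minimal Python sketch (ours, not from the paper; all names are illustrative) checks admissibility and selects the cutting and limiting boundaries:

```python
def is_admissible(z_opt: int, lb_est: int, ub_est: int) -> bool:
    """An estimation [lb_est, ub_est] is admissible iff it contains z*."""
    return lb_est <= z_opt <= ub_est


def cutting_and_limiting(lb_est: int, ub_est: int, minimize: bool):
    """Return (cutting, limiting) boundary: for minimization the upper bound
    cuts reachable solutions and the lower bound only limits the explored
    search space; for maximization the roles are swapped."""
    return (ub_est, lb_est) if minimize else (lb_est, ub_est)
```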

4 Learning to Estimate Boundaries

In this section, we explain the initial part of our method Bion, which is to train a ML model to estimate close objective domain boundaries. This includes a discussion on the features to describe COPs and their instances, and how to train the model such that the estimated boundaries do not render the problem unsatisfiable or exclude the optimal objective value.

Training an estimator model has to be performed only once per COP. From each instance of an existing dataset, a feature vector is extracted in order to train the estimator model. This model predicts both lower and upper boundaries for the objective variable of each instance. When the estimated boundaries are used to support COP solving, they are embedded into the COP, either through additional hard constraints or through an augmented search strategy, as discussed in Section 4.3.

The training set is constructed such that the label of each COP instance, i.e., the true optimal objective value $z^*$, is scaled by the original objective domain boundaries into $y = (z^* - \underline{z}) / (\overline{z} - \underline{z})$, with $y \in [0, 1]$, $y = 0$ iff $z^* = \underline{z}$, and $y = 1$ iff $z^* = \overline{z}$. After estimating the boundaries, the model output is scaled back from $[0, 1]$ to the original domain. This scaling allows the model to relate the approximated objective function to the initial domain. It is useful both when the given boundaries systematically under- or overestimate the optimal objective, and because it steers the estimations to be within the original boundaries, thereby improving admissibility. Furthermore, some ML models, such as those based on neural networks, benefit from having an output within a specified range.
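A minimal sketch of this scaling, assuming the normalization as reconstructed above (function names are ours):

```python
def scale_label(z_opt: int, z_lb: int, z_ub: int) -> float:
    """Scale the true optimum into [0, 1] relative to the original domain."""
    return (z_opt - z_lb) / (z_ub - z_lb)


def unscale_output(y: float, z_lb: int, z_ub: int) -> int:
    """Map a model output in [0, 1] back onto the original integer domain."""
    return round(z_lb + y * (z_ub - z_lb))
```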

4.1 Instance Representation

The estimator ML model expects a fixed-size numeric feature vector as its input. This feature vector is calculated from the COP model and the instance by analyzing its values and the model structure. As the presented method is problem-independent, a generic set of features is chosen, and problem-specific information is gathered by exploiting the variables and structure of the COP instance. This means the set of features describing a COP instance can be calculated for any instance of any COP model, which makes the method easily transferable without having to identify domain-dependent, specific features. Still, it is possible to extend the feature set with problem- or model-specific features to further improve the model performance.

At the same time, by requirement of the used ML methods, the size of the feature vector is fixed for all inputs of an estimator, i.e. for one COP model and all its possible instances. However, in practice, the size of each COP instance varies and the instance cannot directly function as a feature vector. For example, the number of locations varies in routing problems, or the number of tasks varies in scheduling problems, with each having a couple of additional attributes per location or task.

We construct the generic feature vector from two main sets of features. The first set focuses on the instance parameters and their values, i.e., these features directly encode the input to the COP model. The second set stems from the combination of COP model and instance and describes the resulting constraint system.

From the description of each decision variable of the COP, a first set of features is constructed. Thereby, each COP uses a problem-specific feature vector, depending on the number and the type of decision variables, constructed from problem-independent characteristics. Each decision variable of the COP instance is processed and described individually. Finite-domain variables and constants are used directly without further processing. Data structures for collections of values, such as lists or sets, are described by measures from descriptive statistics. Multidimensional and nested data structures are aggregated to a single, fixed-size dimension by summing the features of each nested dimension. The statistical measures are: 1) the number of values in the collection and their 2) minimum and 3) maximum; 4) standard deviation and 5) interquartile range for dispersion; 6) mean and 7) median for central tendency; 8) skew and 9) kurtosis to describe the distribution's shape.
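A sketch of these nine measures for a single collection, using numpy and scipy (our illustration; the paper does not prescribe an implementation):

```python
import numpy as np
from scipy import stats

def collection_features(values) -> list:
    """The nine descriptive statistics used to summarize a value collection."""
    v = np.asarray(values, dtype=float)
    return [
        v.size,             # 1) number of values
        v.min(),            # 2) minimum
        v.max(),            # 3) maximum
        v.std(),            # 4) standard deviation (dispersion)
        stats.iqr(v),       # 5) interquartile range (dispersion)
        v.mean(),           # 6) mean (central tendency)
        np.median(v),       # 7) median (central tendency)
        stats.skew(v),      # 8) skew (shape)
        stats.kurtosis(v),  # 9) kurtosis (shape)
    ]
```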

Additionally, the second set of features characterizes the complete constraint system of COP model and instance. These features count the number of variables, constraints, etc. of the COP [Amadini2014] and were originally developed to efficiently schedule a portfolio of solvers [Amadini2015]. Some of these features carry no relevant information, because they can be constant for all instances of a COP, e.g., the number of global constraints in the model. Although these features are less descriptive than the first set, we observed a small accuracy improvement from including them.

Finally, once the feature vectors for all instances in the training set have been constructed, a final step removes features with low variance, which add little or no information, to reduce the number of inputs and the model complexity.
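With scikit-learn, which Bion already uses, such a filter could look as follows (a sketch; the threshold value and toy data are assumptions):

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold

X_train = np.array([[1.0, 0.0, 3.2],   # toy feature matrix: the middle
                    [2.0, 0.0, 1.1],   # column is constant across all
                    [3.0, 0.0, 4.7]])  # instances and carries no signal
selector = VarianceThreshold(threshold=1e-4)  # threshold is an assumption
X_reduced = selector.fit_transform(X_train)   # drops the constant column
```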

4.2 Eliminating Inadmissible Estimations

The boundaries of a domain determine which values a variable can take; for the objective variable, this means which objective values are reachable. By further tightening the domain through an external estimation, the desired effect is to prune unnecessary values, such as low-quality objective values. This pruning focuses the search on the high-quality region of the objective space.

The trained estimator only approximates an instance's objective value, but does not calculate it precisely, i.e., estimation errors occur even after training. These errors are usually roughly evenly distributed between underestimations and overestimations, because most applications do not distinguish between the two error types. For boundary estimation, however, the type of error is crucial. If the estimator underestimates, resp. overestimates, the objective value of a minimization, resp. maximization, COP, all solutions, including the optima, are excluded from the resulting objective domain. On the other hand, errors shifting the boundary away from the optimum only have the penalty of a larger resulting domain; the search can still find high-quality solutions, including the optima.

We consider three techniques to avoid inadmissible estimations. Two of these techniques, label shift and asymmetric losses, are applied during the estimator’s training phase. The third affects the integration of estimated boundaries during constraint solving and will be discussed in Section 4.3.

4.2.1 Adjusting Training Labels

As inadmissible estimations are costly, but some estimation error is acceptable, the first technique, label shift, changes the training label, i.e., the objective value of the sample instances, by a small margin, such that a small estimation error still leads to an admissible estimation. The estimator is thereby trained to always underestimate or overestimate the objective value. Label shift is similar to the prediction shift concept of [Tolstikov2017], but is specifically designed for boundary estimation and its application to COP models.

Formally, label shift adjusts a scaled training label $y$ to

$y' = (1 - \alpha) \cdot y$ (shift towards the lower domain boundary), or
$y' = y + \alpha \cdot (1 - y)$ (shift towards the upper domain boundary),

with adjustment factor $\alpha \in [0, 1]$. The configuration parameter $\alpha$ steers the label shift margin, with a smaller $\alpha$ being closer to the true label $y$. Setting $\alpha$ is therefore a trade-off between close estimations and their feasibility.
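As a sketch, assuming labels already scaled to [0, 1] as in Section 4 and the shift formula as reconstructed above:

```python
def shift_label(y: float, alpha: float, towards_lower: bool) -> float:
    """Shift a scaled label towards the lower (underestimation) or upper
    (overestimation) domain boundary: alpha = 0 keeps the true label,
    alpha = 1 moves it onto the boundary, alpha = 0.5 places it halfway."""
    return (1 - alpha) * y if towards_lower else y + alpha * (1 - y)
```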

4.2.2 Training with Asymmetric Loss Functions

ML models are usually trained with symmetric loss functions that do not differentiate between positive and negative errors. An asymmetric loss function, on the other hand, assigns higher loss values to either under- or overestimations, penalizing one error type more strongly than the other. Figure 1 shows an example of symmetric and asymmetric quadratic loss functions and the difference in penalization.

Shifted Squared Error Loss is an asymmetric variant of the squared error loss. Formally, it is defined as

$L(\hat{y}, y) = r^2 \cdot (\mathrm{sign}(r) + a)^2$, with residual $r = \hat{y} - y$,

where $\hat{y}$ is the estimated value and $y$ is the true target value. The parameter $a$ (with $|a| < 1$) shifts the penalization towards under- or overestimation and influences the magnitude of the penalty.
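For gradient tree boosting, such a loss can be plugged into XGBoost as a custom objective via its gradient and Hessian. The following sketch implements the shifted squared error as defined above; the derivative computation is ours:

```python
import numpy as np

def shifted_squared_error(a: float):
    """Custom XGBoost objective for L(r) = r^2 * (sign(r) + a)^2,
    with residual r = prediction - label and |a| < 1."""
    def objective(preds, dtrain):
        r = preds - dtrain.get_label()
        w = (np.sign(r) + a) ** 2  # asymmetric weight per residual
        grad = 2.0 * r * w         # first derivative w.r.t. the prediction
        hess = 2.0 * w             # second derivative (sign(r) is piecewise constant)
        return grad, hess
    return objective

# Usage sketch: xgboost.train(params, dtrain, obj=shifted_squared_error(-0.8))
```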

Figure 1: Symmetric and asymmetric loss functions over the residual (symmetric: squared error; asymmetric: $a = -0.8$). The asymmetric loss assigns a higher loss to negative residuals, but a lower loss to overestimations.

4.3 Estimated Boundaries during Search

One potential application of estimated objective boundaries is their usage to improve the COP solving process. Using Bion to solve a COP consists of the following steps: 1) (Initially) Train an estimator model for the COP; 2) Extract a feature vector from each COP instance; 3) Estimate both a lower and an upper objective boundary; 4) Update the COP with the estimated boundaries; and 5) Solve the updated COP with the solver.

The boundaries provided by the estimator can be embedded as hard constraints on the objective variable, i.e., by adding $\hat{\underline{z}} \le z \le \hat{\overline{z}}$ to the model. The induced overhead is negligible, but dealing with misestimations requires additional control. If all feasible solutions are excluded because the cutting bound is wrongly estimated, the instance is rendered unsatisfiable. This issue is handled by reverting to the original domain. If only optimal solutions are excluded because the limiting bound is wrongly estimated, then only non-optimal solutions can be returned, and this cannot be detected in a single optimization run of the solver. However, in practical cases where the goal is to find good-enough solutions early rather than truly-proven optima, it can be an acceptable risk to come up with a good approximation of the optimal solutions only. In conclusion, hard boundary constraints are especially suited for cases where high confidence in the quality of the estimator has been gained, and the occurrence of inadmissible estimations is unlikely.
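The control flow around a misestimated cutting bound can be sketched as follows; the solver interface (solve, with_constraint, status) is hypothetical, not an actual solver API:

```python
def solve_with_estimated_bounds(cop, instance, estimator, solve):
    """Steps 3-5 of Bion: estimate bounds, post them as hard constraints,
    and fall back to the original domain if they prune all solutions."""
    lb, ub = estimator.predict_bounds(instance)  # step 3: estimate
    bounded = cop.with_constraint(lb, ub)        # step 4: post lb <= z <= ub
    result = solve(bounded, instance)            # step 5: solve
    if result.status == "UNSATISFIABLE":
        # The cutting bound excluded every feasible solution:
        # revert to the unmodified COP.
        result = solve(cop, instance)
    return result
```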

5 Experimental Evaluation

We evaluate our method in three experiments, which focus 1) on the impact of label shift and asymmetric loss functions for training the estimator, 2) on the estimators’ performance to bound the objective domain, and 3) on the impact of estimated boundaries on solver performance.

We selected the seven COPs with the most instances from the MiniZinc benchmark repository (github.com/MiniZinc/minizinc-benchmarks), some of which were used in MiniZinc challenges [Stuckey2014]. These seven COPs are MRCPSP (11182 instances), RCPSP (2904 inst.), Bin Packing (500 inst.), Cutting Stock (121 inst.), Jobshop (74 inst.), VRP (74 inst.), and Open Stacks (50 inst.). Considering training sets of different sizes, from 50 to over 11,000 instances, is relevant to understand which scenarios can benefit from boundary estimation.

We consider four ML models to estimate boundaries: neural networks (NN), gradient tree boosting (GTB), support vector machines (SVM), and linear regression (LR). NN and GTB come in two variants, trained with either a symmetric ($NN^s$, $GTB^s$) or an asymmetric ($NN^a$, $GTB^a$) loss function. All models are used with their default hyperparameters as defined by the libraries. The NN is a feed-forward neural network with a few layers of hidden nodes; a larger NN did not improve the results. Bion is implemented in Python, using scikit-learn [Pedregosa2011] for SVM and LR. To support the implementation of asymmetric loss functions, the NNs are based on Keras [Chollet2015], and GTB on XGBoost [Chen2016].

Our experiments focus on the general effectiveness of Bion over a range of problems. Therefore, we used the default parameters and did not perform parameter tuning, although it could further improve the performance. As loss factors for the asymmetric loss functions, we set fixed values of $a$ for $GTB^a$ and $NN^a$; a smaller $|a|$ caused problems during training.

We trained the ML models on commodity hardware without GPU acceleration; training took on the order of seconds per model, except for MRCPSP, where the NN model took up to several minutes. The estimation performance of the ML models is measured via repeated 10-fold validation: the training set is split randomly into 10 folds, of which 9 folds are used to train the model and 1 fold for evaluation, and this step is repeated so that each fold is used for evaluation once. We report the median results over 10 runs.
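A sketch of this evaluation protocol with scikit-learn (our illustration; model factory and metric are placeholders):

```python
import numpy as np
from sklearn.model_selection import KFold

def repeated_10fold_scores(make_model, X, y, metric, repeats=10):
    """Collect scores over `repeats` randomized 10-fold splits; the paper
    reports the median over such runs."""
    scores = []
    for seed in range(repeats):
        for tr, te in KFold(n_splits=10, shuffle=True,
                            random_state=seed).split(X):
            model = make_model()
            model.fit(X[tr], y[tr])
            scores.append(metric(y[te], model.predict(X[te])))
    return np.median(scores)
```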

5.1 Avoiding Inadmissible Estimations

The first experiment focuses on label shift, a technique to avoid inadmissible estimations. Label shift changes the training label of the ML model towards under- or overestimation based on an adjustment factor $\alpha$. Setting $\alpha$ is a trade-off between close boundaries and inadmissible estimations. For the experiment, we considered a range of values of $\alpha$ for every estimator variant.

All models benefit from training with label shift, but the adjustment factor for best performance varies. The asymmetric models $GTB^a$ and $NN^a$ only require a small $\alpha$ (at most 0.3) to maximize the number of admissible estimations. For LR, SVM, $GTB^s$, and $NN^s$, label shift is necessary to reach a large number of admissible estimations. Here, the optimal $\alpha$ is approximately 0.5, which shifts the label to the middle between optimum and domain boundary. As symmetric models do not distinguish between underestimation and overestimation errors, this confirms the trade-off characteristic of $\alpha$. These results underline the benefit of training with both label shift and asymmetric loss functions for avoiding inadmissible estimations.

5.2 Estimating Tighter Domain Boundaries

               GTB^a       GTB^s       LR          NN^a        NN^s        SVM
               Gap  Size   Gap  Size   Gap  Size   Gap   Size  Gap  Size   Gap  Size
Bin Packing    68   65     60   58     48   48     78*   68*   50   48     15   18
Cutting Stock  64*  66     58*  59     48*  49     41*** 71+   48*  49     29*  17
Jobshop        69   69     60   60     50   50     87    81    50   48     19   20
MRCPSP         64   61     60   59     49   49     80    76    49   49     13   19
Open Stacks    64*  60+    59*  53     43** 43     56**  33*   47*  42+    15+  15
RCPSP          65   64     60   60     50   50     80+   76+   50   50     13   20
VRP            70   70     60   60     50   50     89    88    50   50     0    0
Table 1: Reduction in objective domain through estimated boundaries (in %). Gap: reduction of the distance between cutting boundary and optimum. Size: reduction of the domain size (one minus the ratio between new and initial domain size). Cells show the median; superscripts indicate the median absolute deviation (MAD), from no superscript (smallest MAD) over +, *, ** to *** (largest MAD).

We analyze here the capability of each model to estimate tight domain boundaries, as compared to the original domains of the COP. As evaluation metrics, the size of the estimated domain is compared to the original domain size. Furthermore, the distance between cutting boundary and optimal objective value is compared between the estimated and original domain. A smaller gap between cutting bound and optimum leads to a relatively better first solution when using the estimations and is therefore of practical interest. Table 1 shows the estimation performance per problem and estimator. The results show that asymmetric models estimate closer boundaries than symmetric models. For each model, the estimation performance is consistent over all problems.

First, we look at the share of admissible estimations. Most models achieve close to 100 % admissible estimations on all problems; exceptions exist for Cutting Stock (LR: 91 %, SVM: 50 %) and RCPSP (SVM: 83 %). In general, the asymmetric models provide the highest number of admissible estimations. The largest reduction is achieved by $NN^a$, making it the overall best performing model. $GTB^a$ is also capable of consistently reducing the domain size by over 60 %, but not as much as $NN^a$. Cutting Stock and Open Stacks are difficult problems for most models, as indicated by the deviations in the results. LR and the symmetric models reduce the domain size by approximately 50 %, which corresponds to the label shift adjustment factor $\alpha = 0.5$ selected in the previous experiment.

Conclusively, these results show that Bion has an excellent ability to derive general estimation models from the extracted instance features. The estimators substantially reduce the domains and provide tight boundaries.

5.3 Effects on Solver Performance

Our third experiment investigates the effect of objective boundaries on solver performance. The setup is as follows. For each COP, 30 instances were randomly selected. Each instance was run in four configurations, using: 1) the original COP model without any modification; 2) the COP model with added upper and lower boundary constraints, estimated by Bion; 3) the COP model with only an upper boundary constraint, estimated by Bion; 4) the COP model with a user-fixed upper boundary, set to the middle between the true optimum $z^*$ and the objective value $z_1$ of the first solution found when using no boundary constraints: $(z^* + z_1)/2$. This last configuration serves as a baseline to evaluate whether there is any benefit in solving the COP with additional boundary constraints.

                Equiv. Solution Time    Quality of First        Time to Completion
                Fixed   Upper   Both    Fixed   Upper   Both    Fixed   Upper   Both
(a) Chuffed
Bin Packing     -9.4    36.1    13.0    -37.9   -57.7   -57.7   36.1    2140.7  2364.5
Jobshop         -96.5   -96.4   -96.6   -38.1   -60.0   -60.0   -27.6   -53.6   -42.5
MRCPSP           0.0     0.0     0.0    -10.8   -0.4     0.0     1.2     0.3    -3.4
Open Stacks     -1.3    -1.3    -0.9    -24.0   -13.2   -13.2    2.0    -0.4     2.9
RCPSP           -3.2    197.4   25.3    -3.3     0.0     0.0    -4.2     0.0    -4.2
VRP              0.4     0.0     0.0    -23.5    0.0     0.0     2.0     7.0     7.0
(b) Gecode
Bin Packing     53.5     0.3    -0.6    -4.7     0.0     0.0    -10.3   -4.0    -13.0
Cutting Stock   5627.0   7.3    -29.5   -8.5    -5.5    -2.6    --      --      --
Jobshop         189.3   -6.4    37.4    -10.9    6.1     6.1     0.0     0.0     0.0
MRCPSP           0.0     0.0    23.6    -10.8   -0.4    -0.2     1.3     0.0     4.0
Open Stacks      0.0    -1.5     0.0    -24.0   -12.8   -12.8    8.9     6.4     6.8
RCPSP           -17.2   56.8    -14.4   -2.8     0.0     0.0    -11.8   12.0    -9.4
VRP              0.0     0.0     0.0    -21.0    0.0     0.0    -19.0   -18.0   -8.0
(c) OR-Tools
Bin Packing     -22.7   35.0    39.2    -37.4   -57.0   -57.2   104.4   170.0   172.4
Jobshop          1.1     0.0     0.0    -16.5   -0.8    -0.8     0.0     0.0     0.0
MRCPSP          -3.2    -3.0    45.3    -10.8   -0.4     0.0    -2.4    -2.1     1.2
Open Stacks     -5.0    -2.6    -3.1    -24.0   -13.2   -13.2    6.3    -1.2     2.3
RCPSP            0.0    147.2   30.4    -3.3     0.0     0.0    -6.6    27.0     7.8
VRP             -95.3    0.0     0.0    -38.2    0.0     0.0    32.0    -3.0    -5.0
Table 2: Effect of boundaries on solver performance (in %). Fixed: upper boundary set to the middle between optimum and first found solution of an unbounded run. Upper: upper boundary set to the estimated boundary. Both: upper and lower boundaries set to the estimated boundaries. "--": value not available. Results are averaged over 30 instances; lower values are better.

We selected three distinct state-of-the-art COP solvers, among those ranking highest in MiniZinc challenges: Chuffed (as distributed with MiniZinc 2.1.7) [Chu2016], Gecode 6.0.1 [Schulte2018], and Google OR-Tools 6.8. All runs were performed with a fixed timeout on a single core of an Intel E5-2670 at 2.6 GHz.

Three metrics were used for evaluation (all in %), each comparing a run with added boundary constraints to the original COP model. The Equivalent Solution Time measures how long the original model needs to reach a solution of similar or better quality than the first solution found when using Bion, reported as the relative time difference between the two runs. The Quality of First compares the objective value of the first solution found with and without boundary constraints, as the relative difference between the two values. The Time to Completion relates the times until the search is completed and the optimal solution is found and proved; it is calculated in the same way as the Equivalent Solution Time.
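Under our reading of these metrics as relative differences (the exact formulas are not preserved in this copy), a per-instance computation would look like:

```python
def relative_difference(bounded: float, original: float) -> float:
    """Relative difference in % between a bounded and an unbounded run;
    negative values favour the bounded configuration (lower is better)."""
    return 100.0 * (bounded - original) / original

# E.g., a first solution found after 12.5 s with bounds vs. 20.0 s without
# yields relative_difference(12.5, 20.0) == -37.5 (i.e., 37.5 % faster).
```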

The results are shown in Table 2, listed per solver and problem. The results for the Cutting Stock problem with Chuffed and OR-Tools are not given, because none of the configurations, including the original COP, found a solution for more than one instance; Gecode found at least one solution for a subset of the instances. We obtain mixed results for the different solvers and problems, which indicates that benefiting from objective boundaries is both problem- and solver-specific. This holds true both for the boundaries determined by boundary estimation (columns Upper and Both) and for the user-fixed boundary (column Fixed).

The general intuition, also confirmed by the literature, is that in many cases a reduced solution space allows a more efficient search, and for several COPs this is confirmed. An interpretation for why the boundary constraints in some cases hinder effective search, compared to the original COP, is that the solvers can apply different improvement techniques for domain pruning or search once an initial solution is found. The best results are obtained for solving Jobshop with Chuffed, where the constraints improve both the time to find a good initial solution and the time until the search is completed. That both an upper and a lower boundary constraint can be useful is visible for the combination of Gecode and RCPSP: posting only the upper boundary constraint is not beneficial for the Equivalent Solution Time, but with both upper and lower boundaries Gecode is 14 % faster than without any boundaries. A similar behaviour shows for Chuffed and RCPSP regarding Time to Completion, where the upper boundary alone has no effect, but posting both bounds reduces the total solving time by 4 %. At the same time, we observe that posting both upper and lower boundaries, even though they reduce the original domain, does not always help the solver, for example in the combination of Chuffed and Jobshop. This can come from the efficiency of search heuristics, which can sometimes reach better initial solutions than those obtained with Bion.

In conclusion, our method Bion can generally produce objective variable boundaries that help to improve solver performance. Still, it remains open which combination of solver, search heuristic, and COP model benefits most from strongly reduced domains; to the best of our knowledge, no clear answer is yet available in the literature. From the comparison with user-fixed boundaries that are known to reduce the solution space, we observe that the boundaries estimated by Bion are competitive and induce similar behaviour. This makes Bion a promising approach in the many contexts where COPs are solved without any prior knowledge of initial boundaries.

6 Conclusion

This paper presents Bion, a boundary estimation method for Constraint Optimization Problems (COP) based on machine learning (ML). A supervised ML model is trained with solved instances of a given COP in order to estimate close boundaries of the objective variable. Two dedicated training techniques, asymmetric loss functions and label shift, adjust the training labels to avoid inadmissible estimations, complemented by a counter-measure at solving time that discards wrong estimations. Bion is lightweight and both solver- and problem-independent. Our experimental results, obtained on realistic COPs, show that already a small set of instances is sufficient to train an estimator model that reduces the domain size by over 80 %.

Solving practical assignment, planning, or scheduling problems often requires repeatedly solving the same COP with different inputs [Ernst2004, Szeredi2016, Mossige2017, Spieker2019]. Our approach is especially well-suited for such scenarios, where training data can be collected from previous iterations and historical data.

In future work, we plan to explore in depth the actual effects of objective boundaries on constraint solver performance, with the goal of better predicting in which scenarios Bion is most beneficial. Another research lead we intend to follow is to integrate Bion with search heuristics that explicitly focus on evaluating the objective variable [Fages2017, Palmieri2018].

References