1 Introduction
We consider (nominal) combinatorial optimization problems of the form
$$\min_{x \in \mathcal{X}} c^\top x,$$
where $\mathcal{X} \subseteq \{0,1\}^n$ denotes the set of feasible solutions, and $c \in \mathbb{R}^n_+$ is a cost vector. For the case that the cost coefficients are not known exactly, robust optimization approaches have been developed. In the most basic form, we assume a discrete set $\mathcal{U} = \{c^1, \dots, c^K\}$ of possible costs to be given, the so-called uncertainty set. Depending on the problem application, $\mathcal{U}$ may be found by sampling from a distribution, or by using past observations of data. The robust (min-max) problem is then to solve
$$\min_{x \in \mathcal{X}} \max_{k = 1, \dots, K} (c^k)^\top x.$$
This type of problem was first introduced in [KY97], and several surveys are now available, see [ABV09, GS16, KZ16]. The robust problem turns out to be NP-hard for all relevant problems that have been considered so far, even for $K = 2$. This is also the case if the nominal problem is solvable in polynomial time, for example the Shortest Path or the Assignment problem.
However, practical experience tells us that an NP-hard problem can sometimes still be solved sufficiently fast for relevant problem sizes. In fact, where NP-hardness proofs typically rely on constructing problem instances with specific properties, nothing is known about the hardness of randomly generated instances, or about smoothed analysis, in robust optimization. Whereas the related min-max regret problem has sparked research into specialized solution algorithms (see, e.g., [CLSN11, PA11, KMZ12]), little such research exists for the min-max problem, as simply using an off-the-shelf mixed-integer programming solver such as CPLEX can already lead to satisfactory results.
Faced with a similar situation for nominal knapsack problems, [Pis05] asked: “Where are the hard knapsack problems?” The related aim of this paper is to construct computationally challenging robust optimization problems. To this end, we use the Selection problem as a benchmark, where the set of feasible solutions is given by $\mathcal{X} = \{ x \in \{0,1\}^n : \sum_{i=1}^n x_i = p \}$, i.e., exactly $p$ out of $n$ items need to be chosen.
While Selection is the focus of this paper, the proposed methods are general and can be applied to any robust combinatorial problem.
Looking into other fields of optimization problems, instance libraries have been a main driver of algorithm development [MHS10]. Examples include MIPLIB [KAA11] for mixed-integer programs, road networks from the DIMACS challenge for Shortest Path problems [DGJ09], or the Solomon instances for the vehicle routing problem with time windows [Sol87]. There is a clear gap in robust optimization, where instance generators often need to be reimplemented to reproduce previous results. Our research is intended as a first step towards a library of hard instances to guide future research.
As there is no free lunch in optimization, we cannot hope to construct instances that are hard for all possible optimization algorithms. We therefore avoid constructing instances that are hard for a particular solution method (e.g., using CPLEX), but rather aim at maximizing hypothetical measures of hardness. Whether or not they actually correspond to harder instances for the solver is then a matter of computational experiments.
We follow two approaches: Our main focus is to find an uncertainty set such that the optimal objective value of the resulting robust problem is as large as possible (Section 2). As an alternative, we also consider an approach that finds an uncertainty set such that the objective value of the average-case (midpoint) solution is as large as possible (Section 3). These approaches are then compared in Section 4, where we find that it is possible to construct instances that are considerably harder to solve than i.i.d. uniformly sampled problems—the current standard approach.
2 Maximizing the Robust Objective Value
This paper proposes the use of an optimization problem to construct hard problem instances. Throughout this section, the proposed model is presented along with a number of different solution techniques. In the presentation of the model and related discussions, vectors and matrices are written in bold font, and a standard shorthand notation is used for index sets.
2.1 Model
Let some problem instance with $K$ scenarios be given, represented through the scenario cost coefficient vectors $\hat{c}^1, \dots, \hat{c}^K$. From this initial instance, the goal is to modify the inputs in such a way that the resulting robust problem is harder to solve. The approach used in this paper is to modify the values of the cost vectors in each of the scenarios. However, the base problem is to be modified, not completely changed, so a limit on the magnitude of the change for each cost value is imposed.
Consider a scenario $k$, with a vector of cost coefficients denoted by $\hat{c}^k$. The modification of the problem involves the selection of cost coefficients from the set of all possible candidate values, which is denoted by $\mathcal{U}_k$. In the approach proposed in this paper, the set $\mathcal{U}_k$ is defined as
$$\mathcal{U}_k = \Bigl\{ c \in \mathbb{R}^n : \underline{c}^k_i \le c_i \le \bar{c}^k_i \ (i = 1, \dots, n),\ \sum_{i=1}^n c_i = \sum_{i=1}^n \hat{c}^k_i \Bigr\},$$
where $\underline{c}^k_i$ and $\bar{c}^k_i$ denote the lower and upper bounds, respectively, on the cost coefficient $c_i$. Additionally, $\mathcal{U}_k$ imposes the constraint that the sum of coefficients for this scenario remains the same, but any distribution of costs that respects the upper and lower bounds is permitted as a scenario vector. We will use $\underline{c}^k_i = \max(0, \hat{c}^k_i - b)$ and $\bar{c}^k_i = \min(\hat{c}^k_i + b, C)$ with a budget parameter $b$ and a global maximum cost coefficient $C$.
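As a concrete illustration, membership in such a candidate set can be checked in a few lines. This is a sketch under the assumption that the bounds take the form $\max(0, \hat{c}_i - b)$ and $\min(\hat{c}_i + b, C)$; the function name is ours.

```python
def in_candidate_set(c, c_hat, budget, c_max):
    """Check whether cost vector c is a feasible modification of c_hat:
    every coefficient moves at most `budget` from its original value,
    stays within [0, c_max], and the coefficient sum is preserved."""
    if round(sum(c), 9) != round(sum(c_hat), 9):
        return False
    return all(max(0, ch - budget) <= ci <= min(c_max, ch + budget)
               for ci, ch in zip(c, c_hat))
```

For instance, a vector that shifts each coefficient by at most one unit while keeping the total fixed passes this check with a budget of 1, whereas any vector that changes the coefficient sum does not.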
Our approach aims at finding scenarios $c^k \in \mathcal{U}_k$ for all $k = 1, \dots, K$, so that the objective value of the optimal solution to the corresponding robust optimization problem is increased. This approach can be formulated as the following optimization problem:
where MRO stands for “maximize robust objective”. The intuition behind the proposed optimization problem for generating difficult robust problem instances is the following: For each $x \in \mathcal{X}$, the objective is a piecewise linear, convex function in the scenario cost vectors. By maximizing the smallest value of the objective over all $x \in \mathcal{X}$, we spread out the solution costs, balancing the objective values of the best solutions in $\mathcal{X}$. This way, finding and proving optimality of the best $x$ becomes a more difficult task for an optimization algorithm. Naturally, whether the instances produced using the proposed method are actually more difficult to solve than the original problem can only be tested computationally.
2.2 Illustrative Example
Consider a robust variant of the Selection problem where the task is to choose two out of four items such that the maximum costs over two scenarios are as small as possible. The cost vectors for these two scenarios are
item         1  2  3  4
$\hat{c}^1$  4  1  9  2
$\hat{c}^2$  4  7  4  4
In this small example there are $\binom{4}{2} = 6$ possible solutions. For this instance of the robust selection problem there is only one optimal solution, which is to choose items 1 and 4, with a robust objective value of 8. The sorted vector of the corresponding six robust objective values is $(8, 11, 11, 11, 11, 13)$.
Now let us assume that the budget is given by $b = 1$. Then two alternative cost vectors $c^1 \in \mathcal{U}_1$ and $c^2 \in \mathcal{U}_2$ are
item   1  2   3  4
$c^1$  3  2  10  1
$c^2$  5  6   3  5
Given these cost vectors, the objective value of the optimal robust solution increases to 10. The optimal solution remains the selection of items 1 and 4, but the sorted vector of robust objective values has become $(10, 11, 11, 11, 12, 13)$.
An important observation is that the difference between the best and second-best solutions has been reduced. This can have the effect of increasing the difficulty of proving optimality. As mentioned previously, the difficulty of the instance can only be evaluated computationally. Using CPLEX to solve the min-max robust selection problem given by this small example, the first instance takes 0.013 ticks of the deterministic clock, whereas the second instance is solved in 0.209 deterministic ticks—more than 16 times as long.
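The numbers above are easy to verify by brute force. The following sketch (ours, not part of the paper's tooling) enumerates all two-item selections and returns the sorted robust objective values together with the optimal selection (0-indexed):

```python
from itertools import combinations

def robust_values(scenarios, p):
    """Worst-case objective of every p-item selection, sorted ascending,
    plus the selection attaining the minimum (the robust optimum)."""
    n = len(scenarios[0])
    vals = {s: max(sum(c[i] for i in s) for c in scenarios)
            for s in combinations(range(n), p)}
    best = min(vals, key=vals.get)
    return sorted(vals.values()), best

# Original instance: robust optimum 8, items 1 and 4 (indices 0 and 3).
print(robust_values([[4, 1, 9, 2], [4, 7, 4, 4]], 2))
# Modified instance: robust optimum rises to 10, same optimal selection,
# and the gap to the second-best solution shrinks from 3 to 1.
print(robust_values([[3, 2, 10, 1], [5, 6, 3, 5]], 2))
```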
2.3 Solution Approaches
A clear drawback of MRO is that the inner problem is the robust optimization problem that we are attempting to make hard. Therefore, constructing a hard problem is at least as hard as actually solving it. For this reason, we will focus on producing hard, but relatively small instances. This is an alternative to the trivial approach to producing hard instances, which is simply to produce larger ones. Note that even evaluating the objective value of some fixed scenario variables is NP-hard for all commonly considered combinatorial problems (see [KZ16]), as this is equivalent to solving a robust counterpart.
In the outer maximization problem, we determine $K$ cost vectors, and choose one of these vectors in the inner maximization problem. Formally, this is similar to the $K$-adaptability approach in robust optimization (see [BK17]), which uses a min-max-min structure. Whereas there the combinatorial part is in the outer minimization, in our problem the combinatorial part is in the inner minimization.
To address the difficulty of MRO, different solution approaches are developed, each of which aims to reduce the difficulty of solving MRO through an alternative technique. These approaches are:

Iterative method (Section 2.3.1): an exact approach that exploits the multilevel structure of MRO.

Column generation method (Section 2.3.2): an exact approach that applies decomposition to a relaxation of MRO.

Alternating heuristic (Section 2.3.3): a heuristic method that alternates between fixing the scenario assignment and fixing the scenario cost coefficients.

Linear decision rules (Section 2.3.4): a heuristic method to find a compact formulation of MRO.
A description of each of the solution approaches will be presented in the following sections. The results in Section 4 will demonstrate the value of each approach.
2.3.1 Iterative Solution
Given the multilevel structure, it is difficult to solve MRO directly using general-purpose solvers. However, decomposition techniques can be used to exploit this structure and to develop an effective solution approach.
Note that we can write the inner maximization problem for a given candidate solution by introducing a variable vector $z$ representing the choice of scenario:
Let us now assume that some set $\mathcal{X}' \subseteq \mathcal{X}$ of candidate solutions is already known. Then, the MRO problem restricted to this set can be written as
(1)  
s.t.  
where the variables $z^x$ for each $x \in \mathcal{X}'$ are used to determine the scenario that is assigned to each candidate solution $x$. We also refer to this problem as the master problem.
Note that problem (1) is nonlinear due to the product of the scenario-choice and cost variables, which can be linearized by introducing auxiliary variables. The resulting model is then given as
(2)  
s.t.  
Once the master problem is solved for a fixed set of candidate solutions, we have determined an upper bound on the MRO problem. By solving the resulting robust optimization problem for the scenarios found by the master problem, we also construct a lower bound. If the two bounds are not equal, we add the current robust solution to the set of candidate solutions and repeat the process by solving the master problem. This iterative approach converges after a finite number of steps, as $\mathcal{X}$ contains a finite number of solutions. It is therefore an exact solution approach to MRO.
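The loop can be sketched as follows. To keep the sketch self-contained we brute-force both levels and approximate each uncertainty set by a small finite grid of candidate cost vectors; the function names and the grid discretization are our assumptions, not part of the exact method, which solves the master problem as a mixed-integer program.

```python
from itertools import combinations, product

def solve_robust(scenarios, p):
    """Brute-force min-max robust Selection: returns (value, solution)."""
    n = len(scenarios[0])
    return min(((max(sum(c[i] for i in s) for c in scenarios), s)
                for s in combinations(range(n), p)), key=lambda t: t[0])

def iterative_mro(initial_scenarios, grids, p, max_iter=50):
    """Iterative scheme: the master step picks one cost vector per scenario
    (here from a finite grid approximating U_k) to maximize the smallest
    robust value over the known candidate solutions (upper bound); the
    subproblem solves the robust problem for the chosen scenarios (lower
    bound) and supplies a new candidate until the bounds coincide."""
    lb, x = solve_robust(initial_scenarios, p)
    candidates = {x}
    scenarios = tuple(tuple(c) for c in initial_scenarios)
    for _ in range(max_iter):
        ub, scenarios = max(
            ((min(max(sum(c[i] for i in s) for c in combo)
                  for s in candidates), combo)
             for combo in product(*grids)), key=lambda t: t[0])
        lb, x = solve_robust(list(scenarios), p)
        if lb >= ub:  # bounds met: scenarios are optimal over the grid
            break
        candidates.add(x)
    return list(scenarios), lb
```

On the toy instance from Section 2.2, with grids containing only the original and modified cost vectors, the loop converges to the hardened instance with robust objective value 10.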
An interesting question is whether the master problem is solvable in polynomial time. Note that for $K$ scenarios and $N$ candidate solutions, there are $K^N$ possibilities to assign scenarios to solutions. For each assignment, constructing optimal scenarios can be done in polynomial time by solving a linear program. This means that if $N$ is constant, the master problem can be solved in polynomial time as well. If $N$ is unbounded, however, the problem becomes hard, as the following theorem shows.
Theorem 1.
The master problem is NP-hard if the number of candidate solutions $N$ is part of the input.
Proof.
We use a reduction from Hitting Set, see [GJ79]: Given a ground set $E$, a collection of sets $S_1, \dots, S_m \subseteq E$, and some integer $t$, is there a subset $H \subseteq E$ with $|H| \le t$ such that $H \cap S_j \neq \emptyset$ for all $j = 1, \dots, m$?
Let an instance of Hitting Set be given. We set , and . We further set , and for each (i.e., we get and for all ). Finally, we set if and otherwise.
We now claim that Hitting Set is a yesinstance if and only if there is a solution to MRO with objective value at least 1.
To prove this claim, let us first assume Hitting Set is a yesinstance. Let be a corresponding subset of (w.l.o.g. we assume that ). Then we build a solution to MRO in the following way. For each , set and for all . For each , choose one and set and all other . Thus we obtain a feasible solution to MRO with objective value at least 1.
We illustrate this process with a small example. Let , , , , and . Our MRO instance has the following values of and and :
1  1  1  0  0  0  0  
0  0  1  1  1  0  0  
0  0  0  0  0  1  1  
0  0  1  0  0  0  0  
0  0  0  0  0  1  0 
In the same table, we also show an optimal solution for and . The variables are chosen such that and are assigned to , and is assigned to .
Now let us assume that for some Hitting Set instance, we construct our MRO problem as detailed above and find an objective value of at least 1. We show that Hitting Set is a yesinstance. To this end, we first show that there exists an optimal solution to MRO where all variables are either 0 or 1. Consider any , and let be all solutions assigned to scenario . We distinguish two cases:

There exists some such that for all . In this case, we can set .

There is no such , i.e., there are and with that choose disjoint sets of items. As , at least one of them must have an objective value strictly less than 1, which contradicts our assumptions.
We can thus set by including all elements for which there is with . By construction, is a hitting set with cardinality at most .
∎
While the iterative algorithm is an exact solution approach, there are limitations to its use. Specifically, solving the master problem can become a bottleneck to the solution process as the number of candidate solutions increases. In each iteration of the algorithm, the addition of a new candidate results in an additional set of constraints. Computationally, these additional constraints have a significant negative impact on the run time between consecutive iterations. Two different solution methods will be presented to address this issue. A Dantzig-Wolfe decomposition approach will be presented in Section 2.3.2, and an alternating heuristic will be described in Section 2.3.3.
2.3.2 Column Generation
Dantzig-Wolfe reformulation is applied to (2) to decompose the problem into disjoint subsystems—one for each candidate solution. A column corresponds to a feasible assignment of a cost vector to a solution vector. For a given column, the associated objective parameter is the contribution of the assigned cost vector to the objective of the inner minimization problem. A column variable equals 1 if the cost vector assignment given by that column is selected and 0 otherwise. Finally, additional variables are introduced to map the solution of the outer maximization problem to the set of cost vectors for the inner minimization problem.
The formulation of the column generation master problem is given by
(3)  
s.t.  
Initially, the master problem is formulated with only a subset of columns. The corresponding problem is described as the restricted master problem (RMP). For each candidate solution, a single initial column is included, which is formed by assigning the original cost vectors to it. The dual variables correspond to the constraints in (3).
A complicating aspect of the RMP is the set of linking constraints given by the uncertainty sets $\mathcal{U}_k$. This complication arises from the fact that these constraints do not explicitly link the column variables; an implicit linking is given through the third set of constraints in (3). While the uncertainty set linking constraints ensure that exactly one cost vector is selected from each scenario, this requirement could be overly restrictive in our context. As such, a relaxation of (2) is formed by allowing a different cost vector from scenario $k$ to be selected for each solution $x$. Performing this relaxation eliminates the linking constraints from the uncertainty sets and transfers the relaxed constraints to the column generation subproblems.
A column generation subproblem is formed for each solution $x$. Given the optimal dual solution to the RMP, each column generation subproblem is solved to find a feasible cost vector assignment that has a positive reduced cost. Using an optimal dual solution to the constraints of the RMP, the reduced cost of a column for solution $x$ is given by
(4) 
A feasible assignment of a cost vector to solution $x$ forms an improving column for the RMP if (4) is positive. The feasible cost vector assignment that forms the column with the most positive reduced cost is found by solving the subproblem given by
(5)  
s.t.  
The optimal solution to (3) provides a scenario set that is expected to form a hard robust optimization problem. Since only a relaxation of (2) is solved by this approach, the objective function value will be greater than or equal to that found by the iterative approach (Section 2.3.1). However, in the proposed approach for generating hard instances, maximizing the minimum robust objective value is used only as a proxy for hardness. As such, it is expected that even solving the relaxation of (2) will provide instances of comparable hardness to those from the exact approach in Section 2.3.1.
2.3.3 Alternating Heuristic
As an alternative to the relaxation and decomposition approach presented in Section 2.3.2, an alternating heuristic has been developed to solve the master problem (2) of the iterative approach. The alternating heuristic is motivated by the observation that for a given assignment of scenarios to solutions, selecting the cost coefficients to maximize the minimum objective becomes simple. Similarly, for a fixed set of cost coefficients for each scenario, the difficulty of assigning scenarios to solutions is greatly reduced. As such, the alternating heuristic alternates between fixing the scenario assignment and fixing the scenario cost coefficients.
To formally present the alternating heuristic, first reconsider the master problem
s.t.  
for a subset of solutions. Let us assume the assignment variables $z$ are all fixed. In that case, an optimal solution for the remaining variables can be found through the following procedure: For each candidate solution, choose one scenario whose objective value is not smaller than that of any other scenario. Then, set the corresponding assignment variable to one and all others to zero. To determine which scenario maximizes the objective value for a candidate, we can simply calculate all possible objective values, so finding optimal assignment values is possible in polynomial time. Now let us assume that all assignment variables are fixed. In this case, the remaining variables are continuous. Under the assumption that the uncertainty sets are polyhedra, the resulting problem can then be solved in polynomial time as well. This leads to the alternating heuristic described in Algorithm 1.
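The two steps can be sketched as follows. This is a simplified illustration with our own function names: the z-step is implemented exactly as described, while the c-step is shown only for the easy case of one solution assigned per scenario, replacing the LP by a greedy mass transfer under our assumed bounds $[\max(0, \hat{c}_i - b), \min(\hat{c}_i + b, C)]$.

```python
def assign_scenarios(cost_vectors, candidates):
    """z-step: with the cost coefficients fixed, assign to each candidate
    solution the scenario that maximizes its objective value."""
    return {x: max(range(len(cost_vectors)),
                   key=lambda k: sum(cost_vectors[k][i] for i in x))
            for x in candidates}

def push_costs(c_hat, items, budget, c_max):
    """c-step for a single solution assigned to a scenario (a
    simplification of the LP step in the text): greedily shift cost mass
    onto the chosen items, keeping the coefficient sum fixed and every
    coefficient within the assumed bounds."""
    c = list(c_hat)
    lose = [j for j in range(len(c)) if j not in items]
    for i in items:
        room_up = min(c_max, c_hat[i] + budget) - c[i]
        for j in lose:
            room_down = c[j] - max(0, c_hat[j] - budget)
            delta = min(room_up, room_down)
            c[i] += delta
            c[j] -= delta
            room_up -= delta
            if room_up == 0:
                break
    return c
```

Starting from the first cost vector of the Section 2.2 example, the c-step raises the objective of the assigned solution from 6 to 8 while preserving the coefficient sum.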
2.3.4 Linear Decision Rules
A common reformulation of robust optimization problems involves the application of decision rules [BTGGN04]. This approach introduces adjustable variables which map solutions to the worst-case scenario. In the context of MRO, such a mapping would result in setting the assignment variable to 1 if scenario $k$ is a worst-case scenario for solution $x$, and to 0 otherwise.
Considering the MRO, the use of a decision rule results in an equivalent formulation given by
(6)  
s.t.  
The optimal decision rule can only be found through the solution of the original robust optimization problem. As such, it is common to apply approximations of the decision rules to find a closed form of the reformulated problem. First-order or linear decision rules define the vector mapping as an affine linear function, such as
This introduces the new variables , for all . An approximation of MRO is given by substituting the linear function mapping in (6), resulting in the reformulation given by
(7)  
s.t.  (8)  
(9)  
(10)  
(11)  
(12) 
Note that it is possible to remove constraints (11) since they are implied by constraints (9) and (10).
It can be observed that the reformulated problem has an exponential number of constraints, resulting from a set of constraints for each solution contained in $\mathcal{X}$. As such, problem (7)–(12) is intractable in its current form. Using the following linear relaxation assumption, a further reformulation can be performed to address the intractability of problem (7)–(12).
Assumption 2.
There exists a suitable polyhedron
with , , such that for any cost vector , we have
To apply Assumption 2, each set of constraints in (8)–(10) is examined in turn to construct a polyhedral description with linear constraints. For each set of constraints, the bounding limit is found by minimizing (maximizing) the activity for greater-than (less-than) constraints. Assumption 2 holds for a wide range of commonly considered robust combinatorial optimization problems, such as Selection, Spanning Tree, and Assignment (see also [KZ16]).
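For Selection, a polyhedron satisfying Assumption 2 can be sketched directly (assuming the standard cardinality formulation): the constraint matrix of the LP relaxation is totally unimodular, so the relaxation has an integral optimal vertex for every cost vector.

```latex
P = \Bigl\{ x \in \mathbb{R}^n : \sum_{i=1}^{n} x_i = p,\; 0 \le x_i \le 1 \ (i = 1, \dots, n) \Bigr\},
\qquad
\min_{x \in P} c^\top x = \min_{x \in \mathcal{X}} c^\top x .
```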
Consider any constraint of the form
(13) 
for some vector and righthand side . This is equivalent to
(14) 
Using strong duality, this means that
(15) 
where the maximization is over the set of dual feasible solutions, and the bound is the right-hand (left-hand) side of the less-than (greater-than) constraints in (14). Due to weak duality, it is further sufficient to find some dual feasible solution attaining the required bound, which implies that Inequality (13) is fulfilled. Analogously, for constraints of the form
the original mathematical program that is dualized in the reformulation is
This reformulation approach is applied to Constraints (8)–(10) to form a tractable problem.
For ease of presentation, we describe the reformulation using Selection as an example. Consider Constraint (8), which is equivalent to
First the product is linearized by introducing a new variable . Then the resulting problem can be relaxed to form
Note that this will give a conservative approximation to Constraint (8), as the minimum on the right-hand side is underestimated. Also, the right-hand side of (8) is ignored when applying Assumption 2, since it will be enforced in the reformulation of MRO. By dualizing the problem, we find
s.t.  
By strong duality, this model can be substituted for Constraint (8).
Consider Constraint (9):
The linear programming reformulation of this constraint is given by
s.t.  
As for Constraint (8), the right-hand side is ignored when applying Assumption 2. The dual of this problem is given by
s.t.  
Finally, we use the same approach for Constraint (10). For each , we have
for which the dual is
s.t.  
Putting the above discussion together, the linear decision rule approach to MRO is given through the following optimization problem:
(16)  
s.t.  
(17)  
(18)  
(19)  
(20)  
(21)  
(22)  
(23)  
(24)  
(25)  
(26)  
(27)  
(28)  
(29)  
(30) 
The reformulation of constraint (8) is given by the objective (16) and constraints (17)–(18). For constraint (9), the reformulation is given by (19)–(20). Note that the right-hand side of (9) is the right-hand side of (19). Finally, the reformulation of (10) is given by (21)–(22). Similarly, the left-hand side of (10) is the left-hand side of (21).
There is still a nonlinearity between the decision rule variables and the dual variables, with the latter being unbounded. We solve the optimization problem heuristically, using an alternating approach similar to Section 2.3.3: by alternately fixing one group of variables and optimizing the other, we increase the current objective value in each iteration, until a local optimum has been reached.
Note that while we described the reformulation for the special case of Selection, the same method can be used for any problem with Assumption 2.
3 Maximizing the Midpoint Objective Value
We now explore a different view on problem hardness. Instead of maximizing the objective value of the resulting optimal solution, which, as the discussion in Section 2 has shown, is a complex optimization problem, we use the objective value of the midpoint solution as a proxy. The midpoint method is one of the most popular heuristics for min-max robust combinatorial optimization. It aggregates all scenarios into one average scenario and solves the resulting single-scenario problem, which is possible in polynomial time for some combinatorial problems (see Assumption 2). It is known to give a $K$-approximation to the robust problem [ABV09], and was the best known general method until recently [CG18]. Due to its simplicity, it is also a popular submethod for exact branch-and-bound approaches [CG15].
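For Selection, the midpoint heuristic reduces to averaging and sorting; the following sketch (our own naming, with sorting in place of the single-scenario LP) also reports the worst-case value of the midpoint solution:

```python
def midpoint_heuristic(scenarios, p):
    """Midpoint heuristic for min-max robust Selection: aggregate all
    scenarios into their average and select the p cheapest items of the
    averaged vector, then evaluate the worst case over all scenarios."""
    n = len(scenarios[0])
    mid = [sum(c[i] for c in scenarios) / len(scenarios) for i in range(n)]
    items = tuple(sorted(sorted(range(n), key=mid.__getitem__)[:p]))
    worst_case = max(sum(c[i] for i in items) for c in scenarios)
    return items, worst_case
```

On the example of Section 2.2, the heuristic happens to return the robust optimum (items 1 and 4, worst-case value 8).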
The optimization problem to generate hard instances we consider here is therefore given as
where $\hat{x}$ denotes an optimal solution to the midpoint scenario.
This problem can be linearized if the nominal problem can be written as a linear program under Assumption 2. Using the Selection problem as an illustration, we enforce that $\hat{x}$ is an optimal solution to the midpoint scenario by requiring the corresponding primal and dual objective values to be equal. The resulting optimization problem is then
(31)  
s.t.  (32)  
(33)  
(34)  
(35)  
(36)  
(37)  
(38)  
(39)  
(40)  
(41)  
(42) 
Here, each auxiliary variable denotes the objective value of the midpoint solution in the corresponding scenario (see Constraint (32)). The optimization problem maximizes the largest of these values through the choice variables (see Objective (31) and Constraint (33)). Constraints (34)–(36) ensure that $\hat{x}$ is indeed the midpoint solution by enforcing primal and dual feasibility, and equality of the primal and dual objective values.
There are still nonlinearities between the solution and scenario variables. We linearize the first product by introducing auxiliary variables with suitable big-M bounds, and the second product analogously. The resulting linearized problem is then
s.t.  
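The standard big-$M$ linearization of such a product of a binary variable $z$ and a bounded continuous variable $c \in [0, M]$ (a generic sketch; the variable names here are ours) introduces $w = z \cdot c$ via:

```latex
w \le M z, \qquad w \le c, \qquad w \ge c - M(1 - z), \qquad w \ge 0 .
```

For $z = 1$ these constraints force $w = c$, and for $z = 0$ they force $w = 0$.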
4 Experiments
The computational experiments have two different purposes: demonstrating the potential of the proposed approach for generating hard instances, and highlighting the key features of each of the presented algorithms. First, the instances generated by each of the presented algorithms will be compared against the difficulty of solving random instances. Since the difficulty of instances can only be evaluated computationally, the generated min-max robust instances will be solved using CPLEX and the run times will be compared. Second, each of the proposed algorithms exploits different features of MRO to develop computationally efficient methods—such as relaxation and decomposition in Section 2.3.2 and heuristic methods in Section 2.3.3. The effect that the various approaches have on the generation of hard robust instances and the resulting hardness will be assessed in the computational experiments.
4.1 Setup
The approaches presented in Sections 2.3 and 3 are general methods that can be applied to the generation of hard instances for any min-max robust optimization problem. The alternative methods that have been developed focus specifically on the computation of cost coefficients for each scenario, which is the master problem in the proposed iterative methods. As such, the inner problem—the subproblem—can be set to any min-max robust optimization problem. To demonstrate the potential of the methods previously presented, the inner problem is the robust Selection problem. This problem is used for its simplicity, meaning that the impact of the instance generation is more easily observed.
The current state of the art for robust optimization instance generation is to randomly sample scenarios. To this end, the baseline for comparison is a set of instances where the scenario coefficients are sampled i.i.d. uniformly at random. This method of instance generation is labeled RU. In the following results, the proposed methods are labeled as follows:

MROEx: The exact method from Section 2.3.1.

MROCG: The column generation method that is applied to the relaxation of MRO as described in Section 2.3.2.

MROHeu: The alternating heuristic from Section 2.3.3.

MROLDR: The approach from Section 2.3.4 where linear functions are used to approximate the assignment of solutions to scenarios.

Mid: The method that generates hard instances by maximizing the midpoint solution as presented in Section 3.
Three problem sizes are considered in the computational experiments: the number of scenario coefficients is set to 20, 30, and 40. For the Selection problem, the number of scenario coefficients is equal to the size of the set from which items are selected. In each case, we fix the number of items to be selected and the total number of scenarios. For each problem size, we generate 100 instances using RU, and the resulting scenarios are then used as the initial scenarios for the iterative methods described above. To evaluate the effect of the uncertainty set budget on the run times of the iterative methods and the hardness of the generated instances, budgets of 1, 2, and 5 are used. The total number of randomly generated instances is thus 300. Since these instances are used as an input to each of the iterative methods, a corresponding number of further hard instances is generated.
A maximum run time of 3600 seconds is given to each of the iterative methods. This run time limit is only enforced between the iterations of the algorithm; as such, it is possible for the time limit to be exceeded. The instance generated by an iterative algorithm is taken from the last iteration that was started before the time limit.
The hardness of the instances is evaluated by using CPLEX to solve the resulting minmax robust optimization problem. The measured CPU time, the deterministic solution time (a measure provided by CPLEX, given in ticks), the number of branchandbound nodes processed during the solution process, and the number of LP iterations are used to evaluate the instance difficulty.
All experiments were conducted using one core of a computer with an Intel Xeon E52670 processor, running at 2.60 GHz with 20MB cache, with Ubuntu 12.04 and CPLEX v12.6.
4.2 Exact Methods for Instance Generation
The initial assessment of the instance generation approach compares the randomly generated instances against those generated using the exact methods for instance generation: Specifically, the methods presented in Sections 2.3.1 and 3. If the assumption regarding the hardness of instances discussed in Section 2 is valid, then it should be possible to observe an increase in instance hardness by employing the exact approaches. Since maximizing the minimum solution objective is a proxy for hardness that is used in the methods presented in Section 2.3, it is necessary to compare against alternative measures of hardness, such as maximizing the midpoint objective proposed in Section 3.
The average run times of the instances generated by MROEx and Mid are presented in Table 1. For comparison, the average run times to solve the randomly generated instances are also presented.
Table 1: Average run times to solve the generated instances.

Budget  Method  n = 20  n = 30  n = 40
1       MROEx   0.04    0.61    7.83
        Mid     0.03    0.15    1.62
2       MROEx   0.06    0.92    7.49
        Mid     0.03    0.16    1.74
5       MROEx   0.08    0.77    10.59
        Mid     0.03    0.23    2.90
        RU      0.03    0.13    1.43
It can be observed in Table 1 that MROEx generates instances that are significantly harder than random instances. Focusing first on the results with a budget of 1, a small increase in the hardness of the instances is achieved for the smallest problem size. As the difficulty of the random instances increases, which occurs as the problem size increases, a greater absolute increase in the run time of the generated instances can be seen. This suggests that the assumption underlying the hard instance generation method is valid.
Table 2: Average run times (in seconds) of the instance generation methods.

Budget  Method  n = 20  n = 30  n = 40
1       MROEx   1.7     207.0   3005.8
        Mid     1.1     15.2    49.2
2       MROEx   49.3    3238.1  3796.0
        Mid     2.2     28.3    148.5
5       MROEx   3677.9  4305.1  4081.9
        Mid     6.0     1047.4  1807.2
The generation of hard instances using MROEx is computationally difficult, especially compared to the generation of random instances. Table 2 presents the average run time of MROEx, where the maximum run time is set to 3600 seconds. The run time for the generation of random instances is not included in Table 2, since it is very close to 0 seconds. The results highlight that the difficulty of the instance generation problem increases as the budget and/or problem size increases. In fact, the time limit is exceeded for all instances where the budget is 5, and for the larger problem sizes many of the experiments with a budget of 2 also exceed the time limit. Interestingly, while MROEx regularly exceeds the time limit, the difficulty of the produced problem instances still increases. This suggests that the use of the alternative solution approaches—developed to improve the computational performance of the iterative method—could aid in producing even harder problem instances.
The alternative exact approach for instance generation is algorithm Mid. The results presented in Table 1 show that while Mid produces instances that are more difficult than random instances, the increase in difficulty is much less than that achieved by MROEx. Given that Mid is a more complex algorithm for generating problem instances than a random generator, the results presented in Table 1 suggest there is little value in this approach. This is further highlighted by the average computation times of Mid presented in Table 2. These results demonstrate that maximizing the minimum solution objective is a better proxy for instance hardness than maximizing the midpoint objective.
4.3 Alternative Instance Generation Methods
The potential of the MRO to generate hard minmax robust optimization instances is demonstrated in Section 4.2. While there is a clear increase in the difficulty of the resulting robust optimization problems compared to RU, the exact approach MROEx fails to solve MRO for many of the larger instances. In fact, none of the instances with a budget of 5 could be solved by MROEx, as shown in Table 2. This inability to solve the MRO to optimality is the main motivation for the development of the alternative algorithms presented in Sections 2.3.2–2.3.4.
Table 3: Average run times (in seconds) of the methods for solving MRO.

Budget  Method  n = 20  n = 30  n = 40
1       MROEx   1.7     207.0   3005.8
        MROCG   1.0     16.1    253.4
        MROHeu  0.2     4.6     151.9
        MROLDR  2.1     31.3    232.1
2       MROEx   49.3    3238.1  3796.0
        MROCG   2.9     56.3    707.5
        MROHeu  0.6     24.2    947.9
        MROLDR  2.2     33.8    297.3
5       MROEx   3677.9  4305.1  4081.9
        MROCG   11.5    200.1   2035.6
        MROHeu  3.1     998.7   3614.6
        MROLDR  2.7     40.3    439.7
The performance of the alternative methods for solving the MRO, with respect to the run time, is presented in Table 3. The results for MROEx are included for comparison with the other proposed solution methods. It can be seen that there is a significant reduction in the run time for all methods compared to MROEx. The best-performing approach when is MROHeu, with an average run time that is 5% of that for MROEx when