The robust bilevel continuous knapsack problem

We consider a bilevel continuous knapsack problem where the leader controls the capacity of the knapsack and the follower's profits are uncertain. Adopting the robust optimization approach and assuming that the follower's profits belong to a given uncertainty set, our aim is to compute a worst case optimal solution for the leader. We show that this problem can be solved in polynomial time for both discrete and interval uncertainty. In the latter case, we make use of an algorithm by Woeginger for a class of precedence constraint knapsack problems.


1 Introduction

Bilevel optimization has received increasing attention in the last decades. The aim is to model situations where certain decisions are taken by a so-called leader, but then one or more followers optimize their own objective functions subject to the choices of the leader. The follower's decisions in turn influence the leader's objective, or even the feasibility of her decisions. The objective is to determine an optimal decision from the leader's perspective. In general, bilevel optimization problems are very hard to solve. Even in the case that both the leader and the follower solve linear programs, the bilevel problem turns out to be strongly NP-hard in general [7]. Several surveys and books on bilevel optimization have been published recently, e.g., [3, 5, 6].

Due to the hardness of deterministic bilevel optimization, it is not surprising that relatively few articles dealing with uncertain bilevel optimization problems have been published so far. Most of them adopt the stochastic optimization approach, where (some of) the problem's parameters are assumed to be random variables, and the aim is to determine a solution optimizing the expected objective value; see, e.g., [8] and the references therein.

Our research is motivated by the question of how much harder bilevel optimization becomes when adopting the robust optimization approach to address the uncertainties. In this approach, the uncertain parameters are specified by so-called uncertainty sets which contain all possible (or likely) scenarios; the aim is to find a solution that is feasible in each of these scenarios and that optimizes the worst case. The only article we are aware of that addresses robustness in bilevel optimization is [2]. There, the authors consider bilevel problems with linear constraints and a linear follower's objective, while the leader's objective is a polynomial. The robust counterpart of the problem, with interval uncertainty in the leader's and the follower's constraints, is solved via a sequence of semidefinite programming relaxations.

In the following, we only consider uncertainty in the objective function. Even in this case, in classical one-level robust optimization, some classes of uncertainty sets may lead to substantially harder problems, e.g., finite uncertainty sets in the context of combinatorial optimization [10]. In other cases, the problems can be solved by an efficient reduction to the underlying certain problem. This is true in particular for the case of interval uncertainty, where each coefficient may vary independently within some interval. Indeed, it is not hard to see that, in the one-level case, each interval may be replaced by one of its endpoints, depending on the direction of optimization, so that the robust counterpart in this case is not harder than the certain variant of the problem. For an overview of complexity results in robust combinatorial optimization under objective uncertainty, we refer the reader to the recent survey [1] and the references therein.

However, we show in the following that the situation in case of interval uncertainty is more complicated in bilevel optimization. We concentrate on a bilevel continuous knapsack problem where the leader only controls the capacity. Without uncertainty, this problem is easy to solve; see Section 2. However, if the follower's objective is uncertain, the problem becomes much more involved. It turns out that this approach requires dealing with partial orders, more precisely, with the interval orders induced by the relations between the follower's profit ranges. Adapting an algorithm by Woeginger [11] for a class of precedence constraint knapsack problems, we show that the problem can still be solved in polynomial time; see Section 4. Before that, we also discuss why the case of finite uncertainty sets is tractable as well; see Section 3.

Both results are problem-specific and thus do not answer the question whether an efficient oracle-based algorithm exists, using an oracle for the certain case. However, we believe that the additional difficulty of the problem in the interval case makes the existence of such an algorithm unlikely.

2 Underlying certain problem

We first discuss the deterministic variant of the bilevel optimization problem under consideration, in which the follower solves a continuous knapsack problem, while the leader determines the knapsack's capacity and optimizes a linear objective function different from the follower's. This problem is also discussed in [6], but we replicate the formulation and the algorithm here for the sake of completeness.

First recall that an important issue in bilevel optimization is that the follower’s optimum solution is not necessarily unique, but the choice among the optimum solutions might have an impact on the leader. The two main approaches here are the optimistic and the pessimistic one. In the former case, the follower is assumed to decide in favor of the leader, while in the latter case, he chooses the optimum solution that is worst for the leader. For more details, see e.g., [6].

In the optimistic variant, the overall (certain) problem we consider can be formulated as follows:

  min  d⊤x + δb                                (P)
  s.t. b^- ≤ b ≤ b^+
       x ∈ argmax { c⊤x : a⊤x ≤ b, 0 ≤ x ≤ 1 }

The leader’s only variable is and the follower’s variables are

. The vectors

and and the bounds as well as the weight  are given. We may assume and .

Moreover, we may assume that : items with  and  (or , respectively) will never be selected by the follower in the optimistic (or pessimistic) case, so they can be removed from the instance. For all other items with , it does not change anything to increase their cost by some  that is smaller than all other values .

The follower solves a continuous knapsack problem, which can be done, for example, using Dantzig's algorithm [4]: by first sorting the items, we may assume c_1/a_1 ≥ c_2/a_2 ≥ ⋯ ≥ c_n/a_n. The idea is then to pack the items into the knapsack in this order until it is full. More formally, if ∑_{i=1}^n a_i ≤ b, everything can be taken, so the optimum solution is x_i = 1 for all i. Otherwise, we consider the critical item

  k := min { i ∈ {1,…,n} : ∑_{j=1}^{i} a_j > b }

and an optimum solution is given by

  x_i := 1                                for i ∈ {1,…,k−1}
         (1/a_k)(b − ∑_{j=1}^{k−1} a_j)   for i = k
         0                                for i ∈ {k+1,…,n}.

Note that the order of items and hence the follower's optimum solution is not unique if the ratios c_i/a_i are not all different. An optimistic follower would sort the elements with the same ratio c_i/a_i in ascending order of the values d_i/a_i, a pessimistic one in descending order. If this is still not unique, there is no difference for the leader either.

Turning to the leader’s objective, first note that, due to the assumptions  and , every optimum solution of the follower’s problem satisfies . We may thus assume  in Problem (P), since we have  then.

Now, as only the critical item k, but not the sorting, depends on b, the leader can just compute the described order of items, and her problem can be reformulated as minimizing the function

  f(b) := 0                                                      for b = 0
          ∑_{i=1}^{j−1} d_i + (d_j/a_j)(b − ∑_{i=1}^{j−1} a_i)   for b ∈ (∑_{i=1}^{j−1} a_i, ∑_{i=1}^{j} a_i], j ∈ {1,…,n}

over [b^-, b^+]. As f is piecewise linear, it suffices to evaluate f at the boundary points b^- and b^+ and at all feasible vertices, i.e., at b = ∑_{i=1}^{j} a_i for all j with b^- ≤ ∑_{i=1}^{j} a_i ≤ b^+. Hence, Problem (P) can be solved in O(n log n) time, which is the time needed for sorting.
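To make this concrete, the follower's greedy solution and the leader's enumeration of the breakpoints of f can be sketched in Python as follows. This is a minimal sketch with our own function names and data representation, assuming the reduction to δ = 0 and the assumptions a > 0, c > 0 from above:

```python
def dantzig_continuous(a, c, b):
    """Follower's greedy optimum of max c^T x s.t. a^T x <= b, 0 <= x <= 1.
    Assumes positive weights a and profits c."""
    n = len(a)
    # sort items by profit/weight ratio c_i / a_i, descending
    order = sorted(range(n), key=lambda i: c[i] / a[i], reverse=True)
    x = [0.0] * n
    remaining = b
    for i in order:
        if a[i] <= remaining:
            x[i] = 1.0               # item fits completely
            remaining -= a[i]
        else:
            x[i] = remaining / a[i]  # critical item, taken fractionally
            break
    return x

def leader_value(a, c, d, b):
    """f(b) = d^T x for the follower's optimum at capacity b."""
    x = dantzig_continuous(a, c, b)
    return sum(di * xi for di, xi in zip(d, x))

def solve_leader(a, c, d, b_lo, b_hi):
    """Minimize the piecewise linear f over [b_lo, b_hi] by evaluating it
    at the interval endpoints and at all feasible prefix sums of a."""
    n = len(a)
    order = sorted(range(n), key=lambda i: c[i] / a[i], reverse=True)
    candidates = [b_lo, b_hi]
    s = 0.0
    for i in order:
        s += a[i]
        if b_lo <= s <= b_hi:
            candidates.append(s)
    return min(candidates, key=lambda b: leader_value(a, c, d, b))
```

For instance, with weights a = (2, 3, 5), profits c = (6, 6, 5), and capacity 4, the follower packs the first item completely and two thirds of the second.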

3 Finite uncertainty

Now we look at the robust version of the problem where the follower's objective function is uncertain for the leader, and this uncertainty is given by a finite uncertainty set U ⊆ ℝ^n containing the possible objective vectors c:

  min  max_{c∈U} d⊤x
  s.t. b^- ≤ b ≤ b^+
       x ∈ argmax { c⊤x : a⊤x ≤ b, 0 ≤ x ≤ 1 }

The inner maximization problem can be interpreted as being controlled by an adversary, thus leading to an optimization problem involving three actors: first, the leader takes her decision b, then the adversary chooses a follower's objective c ∈ U that is worst possible for the leader, and finally the follower optimizes this objective by choosing x.

Again, we aim at solving this problem from the leader's perspective, which can be done as follows: for every c ∈ U, consider the piecewise linear function f_c as described in Section 2. The vertices of each f_c can be computed in O(n log n) time. The task is then to minimize the pointwise maximum f := max_{c∈U} f_c over [b^-, b^+].

The pointwise maximum of two piecewise linear functions with p and q vertices, respectively, has O(p + q) vertices, since between two vertices arising from intersections of the function graphs, at least one of the functions must have another vertex. It can be computed in O(p + q) time by processing the vertices of the two functions from left to right and checking for intersections of the linear segments that have some common range.
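This merge step can be sketched as follows. The vertex-list representation (a function as a sorted list of (b, value) pairs over a common domain) and the function names are our own choices; for simplicity, the sketch evaluates by linear scan rather than in amortized constant time per vertex:

```python
def _eval(f, x):
    """Evaluate a piecewise linear function, given as a sorted list of
    vertices [(x0, y0), (x1, y1), ...], at a point x of its domain."""
    for (x0, y0), (x1, y1) in zip(f, f[1:]):
        if x0 <= x <= x1:
            t = 0.0 if x1 == x0 else (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    raise ValueError("x outside domain")

def pointwise_max(f, g):
    """Vertices of max(f, g): all breakpoints of f and g, plus at most one
    crossing point between two consecutive breakpoints."""
    xs = sorted(set(x for x, _ in f) | set(x for x, _ in g))
    out = [(xs[0], max(_eval(f, xs[0]), _eval(g, xs[0])))]
    for x0, x1 in zip(xs, xs[1:]):
        h0 = _eval(f, x0) - _eval(g, x0)
        h1 = _eval(f, x1) - _eval(g, x1)
        if h0 * h1 < 0:  # the graphs cross strictly inside the segment
            xc = x0 + h0 / (h0 - h1) * (x1 - x0)
            out.append((xc, _eval(f, xc)))
        out.append((x1, max(_eval(f, x1), _eval(g, x1))))
    return out
```

For example, merging the functions b ↦ b and b ↦ 2 − b on [0, 2] yields the three vertices (0, 2), (1, 1), (2, 2).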

To compute the pointwise maximum of m piecewise linear functions, one can build a binary tree with a leaf for each of the functions and recursively compute the pointwise maximum of the two children at every inner vertex. This results in a piecewise linear function with O(σ^{log_2 m} · mn) vertices, where σ is the constant from the estimation in the case with two functions, in O(σ^{log_2 m} · mn) time.

Using p, q ∈ O(n) for all c ∈ U and plugging in m := |U|, we obtain the following result.

Theorem 1.

The robust bilevel continuous knapsack problem with finite uncertainty set U can be solved in time polynomial in n and |U|.

4 Interval uncertainty

In this section, we look at a robust version of the problem having the same structure as in Section 3, but now the uncertainty is given by an interval for each component of c. We thus consider U := [c_1^-, c_1^+] × ⋯ × [c_n^-, c_n^+] and assume c^- > 0. In classical robust optimization, exploiting that each coefficient varies independently within its interval, one could just replace the uncertain vector c by the componentwise worst case and obtain a certain problem again. However, such a replacement is not a valid reformulation in the bilevel context. We will show that, in fact, the situation in the bilevel case is more complicated, even though we can still devise an efficient algorithm. To simplify the notation, we define

  p_i^- := c_i^-/a_i,   p_i^+ := c_i^+/a_i

for the remainder of this section. It turns out that interval orders defined by the intervals [p_i^-, p_i^+] will play a crucial role.

4.1 Interval orders and precedence constraint knapsack problems

For the leader, the exact entries of c in their intervals [c_i^-, c_i^+] do not matter, but only the induced sorting that the follower will use. Given a and U, the possible sortings are exactly the linear extensions of the partial order ≺ that is induced by the intervals [p_i^-, p_i^+] in the sense that we set

  i ≺ j  :⟺  p_i^- > p_j^+.

Such a partial order is called an interval order. In other words, if the intervals corresponding to two elements are disjoint, then their order in the follower's sorting is fixed, otherwise, by appropriate choices of c, both orders of the two elements in the sorting are possible.
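Concretely, the relation ≺ can be read off directly from the endpoints p_i^- and p_i^+; a small sketch with 0-based indices (the function name is ours):

```python
def interval_order(p_lo, p_hi):
    """All precedence pairs (i, j) meaning that i comes before j in every
    follower sorting: the ratio interval of i lies strictly to the right
    of that of j."""
    n = len(p_lo)
    return {(i, j) for i in range(n) for j in range(n)
            if i != j and p_lo[i] > p_hi[j]}
```

Pairs of items appearing in neither direction are incomparable: their relative order in the follower's sorting can be chosen by the adversary.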

For simplicity, we assume that all values p_i^- and p_i^+ are pairwise different. With this, we do not have to distinguish between the optimistic and the pessimistic variant, since for every linear extension of the interval order, c ∈ U can be chosen such that the follower's optimum solution corresponds to this linear extension and is unique. If some endpoints of different intervals coincide, an optimistic or pessimistic follower can be modelled by shifting these endpoints slightly (in one or the other direction) such that the assumption holds.

One could compute all linear extensions of ≺ and the pointwise maximum over all corresponding piecewise linear functions as in Section 3, but these could be exponentially many. However, it turns out that it is not necessary to consider all linear extensions explicitly and that the problem can still be solved in polynomial time. We will see that the adversary's problem

  max  d⊤x
  s.t. c ∈ U
       x ∈ argmax { c⊤x : a⊤x ≤ b, 0 ≤ x ≤ 1 }

for fixed b is closely related to the precedence constraint knapsack problem or partially ordered knapsack problem; see, e.g., Section 13.2 in [9]. This is a 0–1 knapsack problem where, additionally, a partial order on the items is given and it is only allowed to pack an item into the knapsack if all its predecessors are also selected.

For the special case of this problem where the partial order is an interval order, Woeginger described a pseudopolynomial algorithm; see Lemma 11 in [11]. There, the problem is formulated in a scheduling context, and the set searched for is called a good initial set. The algorithm uses the idea that every initial set (i.e., prefix of a linear extension of the interval order) consists of

• a head, which is the element whose interval has the rightmost left endpoint among the set,

• all predecessors of the head in the interval order, and

• some subset of the elements whose intervals contain the left endpoint of the head in their interior.

The algorithm iterates over all elements as possible heads and looks for the optimum subset of the elements whose intervals contain the left endpoint of the head in their interior that results in an initial set satisfying the capacity constraint. Since these elements are incomparable to each other in the interval order, each subproblem is equivalent to an ordinary 0–1 knapsack problem and can be solved in pseudopolynomial time using dynamic programming; see, e.g., [9]. Our algorithm for the adversary's problem is a variant of this algorithm for the continuous knapsack and uses Dantzig's algorithm as a subroutine; therefore, we will obtain a polynomial runtime.

For this, we need the notion of a fractional prefix of a partial order ≺, which is a triple F = (J, j, λ) such that J ⊆ {1,…,n}, j ∈ J, λ ∈ [0,1], and there is an order of the elements in J, ending with j, that is a prefix of a linear extension of ≺. Every optimum solution of the follower, given some c ∈ U and b, corresponds to a fractional prefix. The follower's solution x^F corresponding to a fractional prefix F = (J, j, λ) is defined by

  x_i^F := 1   for i ∈ J ∖ {j}
           0   for i ∈ {1,…,n} ∖ J
           λ   for i = j.

Additionally, there is the empty fractional prefix F = ∅ with x^F := 0.
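In code, the solution x^F corresponding to a fractional prefix F = (J, j, λ) is straightforward to build (0-based indices; the function name is ours):

```python
def prefix_solution(n, J, j, lam):
    """Follower solution x^F for a fractional prefix F = (J, j, lam):
    items of J \ {j} are fully packed, item j is packed to fraction lam."""
    x = [0.0] * n
    for i in J:
        x[i] = 1.0
    x[j] = lam  # the dedicated last element of the prefix
    return x
```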

Let P be the set of all fractional prefixes of the interval order ≺ given by a and U. Then the leader's problem can be reformulated as follows:

  min_{b ∈ [b^-, b^+]}  max { d⊤x^F : F ∈ P, a⊤x^F = b }

In the next subsections, we first describe an algorithm to solve the inner maximization problem for fixed b, which will then be generalized to the minimization problem over b ∈ [b^-, b^+].

First, consider the special case where the interval order has no relations. This means that all intervals intersect and hence all permutations are valid linear extensions. Note that a pairwise intersection of the intervals implies that all intervals have a common intersection, since for every two intervals [p_i^-, p_i^+] and [p_j^-, p_j^+], p_i^- ≤ p_j^+ holds, so the smallest right endpoint is right of or equal to the largest left endpoint. Then the problem is very similar to the bilevel continuous knapsack problem without uncertainty that was described in Section 2, where the adversary here becomes the follower there. The only differences are that the objective vector is now d, which is not necessarily positive, and that the constraint a⊤x ≤ b is replaced by a⊤x = b. But with this changed, the follower's continuous knapsack problem can still be solved as described in Section 2 – note that the algorithm fills the knapsack completely anyway if c > 0.

Denote the algorithm for this special case, returning the corresponding fractional prefix, by Dantzig. We will also need this algorithm as a subroutine on a subset of the elements (like the pseudopolynomial knapsack algorithm in Woeginger's algorithm). Therefore, we consider the algorithm as having input the item set I ⊆ {1,…,n}, the weights a, the objective d, and the capacity b.

The adversary’s problem can now be solved by Algorithm 1.

In the notation of Woeginger’s algorithm, the -th element is the head in iteration , is the set of its predecessors, and corresponds to the intervals containing the left endpoint of the head – not necessarily in their interior, so that, in particular, also .

The basic difference to Woeginger’s algorithm is that due to the fractionality, it is important to have a dedicated last element of the prefix. Apart from that, the order of the elements in the prefix is not relevant. In our construction, any element of could be this last element, in particular it could be , but it does not have to. In Algorithm 1, the prefix constructed in iteration  does not necessarily contain the -th element, but still, all prefixes that do contain it as their head are covered by this iteration.

Lemma 2.

Algorithm 1 is correct.

Proof.

For b = 0, the only feasible and therefore optimum solution is x = 0, so that the result is correct if the algorithm terminates in line 1.

So assume b > 0 now. The first part of the proof shows that the algorithm returns a feasible solution: in each iteration k, I_k^- is the set of predecessors of k in the interval order ≺. The set I_k^0 consists of elements that are incomparable to k and to each other in ≺, since the corresponding intervals all contain the point p_k^- by definition. Hence it is valid (with respect to ≺) to call Dantzig's algorithm in line 1 on I_k^0. The condition in line 1 makes sure that we only call the subroutine if the available capacity is in the correct range, i.e., if it is possible to fill the knapsack with the elements in I_k^- and a subset of the elements in I_k^0.

Then (J_k, j_k, λ_k) is a fractional prefix, as all predecessors of k, and therefore also all predecessors of all elements of J_k, belong to J_k. The element j_k is a valid last element of a prefix consisting of the elements in J_k because j_k ∈ I_k^0 by construction and therefore there are no successors of j_k in J_k. Moreover, a⊤x^{(J_k, j_k, λ_k)} = b by construction and λ_k ∈ [0,1] by the correctness of Dantzig's algorithm.

Now we prove the optimality of the returned solution. Let (J, j, λ) be an optimum solution (if the empty fractional prefix is optimum, then b must be 0, and this case is trivial). Choose k ∈ J with maximal p_k^-. Then I_k^- ⊆ J since J is a prefix and k ∈ J, so all predecessors of k must be in J as well. Moreover, j ∉ I_k^-, as all elements in I_k^- have at least one successor (namely k) in J. By the choice of k, we have J ∖ I_k^- ⊆ I_k^0. Hence, (J ∖ I_k^-, j, λ) is a feasible solution of the subproblem solved by the call of Dantzig's algorithm in line 1, since a⊤x^{(J∖I_k^-, j, λ)} = b − ∑_{i∈I_k^-} a_i. Thus

  d⊤x^{(J,j,λ)} = ∑_{i∈I_k^-} d_i + d⊤x^{(J∖I_k^-, j, λ)} ≤ ∑_{i∈I_k^-} d_i + d⊤x^{(J_k′, j_k, λ_k)} = d⊤x^{(J_k, j_k, λ_k)},

which is at most the cost of any returned solution. The second part of the proof also shows that (J_k, j_k, λ_k) is defined in this iteration. Thus, the algorithm always returns an optimum solution. ∎

An optimum solution of the adversary’s problem in the original formulation, i.e., a vector , can be derived from the fractional prefix returned by the algorithm in the following way:

  c_i := c_i^-            for i ∈ J_k ∖ {j_k}
         c_i^+            for i ∈ {1,…,n} ∖ J_k
         c_{j_k}^- + ε    for i = j_k,

where ε > 0 is chosen small enough such that (c_{j_k}^- + ε)/a_{j_k} < p_i^- for all i ∈ J_k ∖ {j_k}.

Note that this solution sets each variable except for c_{j_k} to an endpoint of its corresponding interval. In general, there is no optimum solution with all variables set to an interval endpoint. This can be seen in the following example: Set , , and . The optimum solution returned by the algorithm is with value . For the follower to select the first element and half of the third element, i.e., for to hold, the adversary must choose , so it cannot be at one of the endpoints of .

Next, we describe an algorithm to solve the robust bilevel optimization problem, which performs the minimization over the capacity b. For this, we will use the variant of Dantzig's algorithm which returns a piecewise linear function, as described in Section 2. We call this routine Dantzig′ and assume its input to be the item set I ⊆ {1,…,n}, the weights a, the objective d, and the relevant capacity range. The output is a piecewise linear function f, which can be represented by a list of all its vertices, given as points (b, f(b)) of the graph of f.

The leader’s problem can now be solved by Algorithm 2.

Lemma 3.

Algorithm 2 is correct.

Proof.

First note that Algorithm 1 can be considered as the special case of Algorithm 2 where b^- = b^+. For the correctness of Algorithm 2, it is enough to show that the function f computed there describes the value of the output of Algorithm 1 depending on b. For b = 0, which is only possible if b^- = 0, this is clearly the case.

The condition in line 2 ensures that the available capacity is in the correct range, so the call of Dantzig′ in line 2 is valid. Let (J_k, j_k, λ_k) be the fractional prefix computed in iteration k of Algorithm 1 when called for capacity b. We claim that f_k is defined at b if and only if (J_k, j_k, λ_k) is defined, and that then f_k(b) = d⊤x^{(J_k, j_k, λ_k)} holds, for all k ∈ {1,…,n} and all b ∈ [b^-, b^+] with b > 0.

Let k ∈ {1,…,n} and b ∈ [b^-, b^+] with b > 0. Then (J_k, j_k, λ_k) is defined if and only if the capacity check in Algorithm 1 succeeds, i.e., if and only if

  ∑_{i∈I_k^-} a_i ≤ b ≤ ∑_{i∈I_k^0} a_i + ∑_{i∈I_k^-} a_i,

which is (almost) the same condition as the one for defining f_k. Actually, (J_k, j_k, λ_k) is not defined if b = ∑_{i∈I_k^-} a_i, but this is only for convenience in Algorithm 1. We could define it there as (I_k^-, j, 1), where j is a maximal element in I_k^-. But this is not relevant, since this fractional prefix is also considered in iteration j.

In case f_k(b) is defined, the corresponding values f_k(b) and d⊤x^{(J_k, j_k, λ_k)} agree, because the piecewise linear function returned by Dantzig′ consists of the values of the solutions returned by Dantzig for the given values of b. This proves the correctness of Algorithm 2. ∎

Theorem 4.

The robust bilevel continuous knapsack problem with interval uncertainty can be solved in polynomial time.

Proof.

In each of the n iterations, the algorithm needs O(n) time to compute the sets I_k^- and I_k^0 and, since |I_k^0| ≤ n, O(n log n) time for Dantzig's algorithm. As explained in Section 3, the pointwise maximum of the at most n piecewise linear functions and the minimum of the resulting function can be computed in polynomial time. ∎

References

• [1] Christoph Buchheim and Jannis Kurtz. Robust combinatorial optimization under convex and discrete cost uncertainty. EURO Journal on Computational Optimization, 6(3):211–238, 2018.
• [2] Thai Doan Chuong and Vaithilingam Jeyakumar. Finding robust global optimal values of bilevel polynomial programs with uncertain linear constraints. Journal of Optimization Theory and Applications, 173(2):683–703, 2017.
• [3] Benoît Colson, Patrice Marcotte, and Gilles Savard. An overview of bilevel optimization. Annals of Operations Research, 153(1):235–256, 2007.
• [4] George B. Dantzig. Discrete-variable extremum problems. Operations Research, 5(2):266–277, 1957.
• [5] Stephan Dempe. Annotated bibliography on bilevel programming and mathematical programs with equilibrium constraints. Optimization, 52(3):333–359, 2003.
• [6] Stephan Dempe, Vyacheslav Kalashnikov, Gerardo A. Pérez-Valdés, and Nataliya Kalashnykova. Bilevel Programming Problems. Springer, 2015.
• [7] Pierre Hansen, Brigitte Jaumard, and Gilles Savard. New branch-and-bound rules for linear bilevel programming. SIAM Journal on Scientific and Statistical Computing, 13(5):1194–1217, 1992.
• [8] Charlotte Henkel. An algorithm for the global resolution of linear stochastic bilevel programs. PhD thesis, University of Duisburg-Essen, 2014.
• [9] Hans Kellerer, Ulrich Pferschy, and David Pisinger. Knapsack Problems. Springer, 2004.
• [10] Panos Kouvelis and Gang Yu. Robust Discrete Optimization and Its Applications. Springer, 1996.
• [11] Gerhard J. Woeginger. On the approximability of average completion time scheduling under precedence constraints. Discrete Applied Mathematics, 131(1):237–252, 2003.