Multistage Knapsack

01/31/2019 ∙ by Evripidis Bampis, et al. ∙ Laboratoire d'Informatique de Paris 6

Many systems have to be maintained while the underlying constraints, costs and/or profits change over time. Although the state of a system may evolve over time, a non-negligible transition cost is incurred when transitioning from one state to another. In order to model such situations, Gupta et al. (ICALP 2014) and Eisenstat et al. (ICALP 2014) introduced a multistage model where the input is a sequence of instances (one for each time step), and the goal is to find a sequence of solutions (one for each time step) that are both (i) near optimal for each time step and (ii) as stable as possible. We focus on the multistage version of the Knapsack problem where we are given a time horizon t=1,2,...,T, and a sequence of knapsack instances I_1,I_2,...,I_T, one for each time step, defined on a set of n objects. In every time step t we have to choose a feasible knapsack S_t of I_t, which gives a knapsack profit. To measure the stability/similarity of two consecutive solutions S_t and S_{t+1}, we identify the objects for which the decision, to be picked or not, remains the same in S_t and S_{t+1}, giving a transition profit. We are asked to produce a sequence of solutions S_1,S_2,...,S_T so that the total knapsack profit plus the overall transition profit is maximized. We propose a PTAS for the Multistage Knapsack problem. Then, we prove that there is no FPTAS for the problem even in the case where T=2, unless P=NP. Furthermore, we give a pseudopolynomial time algorithm for the case where the number of steps is bounded by a fixed constant and we show that otherwise the problem remains NP-hard even in the case where all the weights, profits and capacities are 0 or 1.

1 Introduction

In a classical combinatorial optimization problem, given an instance of the problem we seek a feasible solution optimizing the objective function. However, in many systems the input may change over time and the solution has to be adapted to the input changes. It is then necessary to determine a tradeoff between the optimality of the solutions in each time step and the stability/similarity of consecutive solutions. This is important since in many applications there is a significant transition cost for changing (parts of) a solution. Recently, Gupta et al. [Gupta] and Eisenstat et al. [Eisenstat] introduced a multistage model in order to deal with such situations. They consider that the input is a sequence of instances (one for each time step), and the goal is to find a sequence of solutions (one for each time step) reaching such a tradeoff.

Our work follows the direction proposed by Gupta et al. [Gupta], who suggested the study of more combinatorial optimization problems in their multistage framework. In this paper, we focus on the multistage version of the Knapsack problem. Consider a company owning a set of production units. Each unit can be used or not; if it is used, it consumes a certain amount of a given resource (energy, raw material, …) and generates a profit. Given a bound on the global amount of available resource, the static Knapsack problem aims at determining a feasible solution that specifies the chosen units in order to maximize the total profit, under the constraint that the total amount of the resource used does not exceed the given bound. In a multistage setting, considering a time horizon of, let us say, T days, the company needs to decide a production plan for each day of the time horizon, given that the data (such as prices, level of resources, …) usually change over time. This is a typical situation, for instance, in energy production planning (like electricity production, where units can be nuclear reactors, wind or water turbines, …), or in data centers (where units are machines and the resource corresponds to the available energy). Moreover, in these examples, there is an extra cost to turn a unit ON or OFF, as in the case of turning ON/OFF a reactor in electricity production [thesececile], or a machine in a data center [Albers17]. Obviously, whenever a reactor is in the ON or OFF state, it is beneficial to maintain it in the same state for several consecutive time steps, in order to avoid the overhead costs of state changes. Therefore, the design of a production plan over a given time horizon has to take into account both the profits generated each day from the operation of the chosen units, as well as the potential transition profits from maintaining a unit in the same state on consecutive days. We refer the reader interested in planning problems in electricity production to [thesececile].

We formalize the problem as follows. We are given a time horizon t = 1, 2, …, T, and a sequence of knapsack instances I_1, I_2, …, I_T, one for each time step, defined on a set of n objects. In every time step t we have to choose a feasible knapsack S_t of I_t, which gives a knapsack profit. Taking into account transition costs, we measure the stability/similarity of two consecutive solutions S_t and S_{t+1} by identifying the objects for which the decision, to be picked or not, remains the same in S_t and S_{t+1}, giving a transition profit. We are asked to produce a sequence of solutions S_1, S_2, …, S_T so that the total knapsack profit plus the overall transition profit is maximized.

Our main contribution is a polynomial time approximation scheme (PTAS) for the multistage version of the Knapsack problem. To the best of our knowledge, this is the first approximation scheme for a multistage combinatorial optimization problem, and its existence contrasts with the inapproximability results known for other multistage combinatorial optimization problems whose static versions are polynomial-time solvable (e.g., the multistage Spanning Tree problem [Gupta], or the multistage Bipartite Perfect Matching problem [Bampis]).

1.1 Problem definition

Formally, the Multistage Knapsack problem can be defined as follows.

Definition 1.

In the Multistage Knapsack problem (MK) we are given:

  • a time horizon T ≥ 1, and a set N = {1, …, n} of n objects;

  • For any t ∈ {1, …, T}, any i ∈ N:

    • the profit p_{t,i} ≥ 0 of taking object i at time t;

    • the weight w_{t,i} ≥ 0 of object i at time t.

  • For any t ∈ {1, …, T−1}, any i ∈ N:

    • the bonus B_{t,i} ≥ 0 of object i if we keep the same decision for i at times t and t+1.

  • For any t ∈ {1, …, T}: the capacity C_t of the knapsack at time t.

We are asked to select a subset S_t ⊆ N of objects at each time step t so as to respect the capacity constraint: Σ_{i ∈ S_t} w_{t,i} ≤ C_t. To a solution S = (S_1, …, S_T) are associated:

  • A knapsack profit Σ_{t=1}^{T} Σ_{i ∈ S_t} p_{t,i} corresponding to the sum of the profits of the T knapsacks;

  • A transition profit Σ_{t=1}^{T−1} Σ_{i ∈ D_t} B_{t,i}, where D_t is the set of objects either taken or not taken at both time steps t and t+1 in S (formally, D_t = (S_t ∩ S_{t+1}) ∪ ((N ∖ S_t) ∩ (N ∖ S_{t+1}))).

The value of the solution is the sum of the knapsack profit and the transition profit, to be maximized.
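To make the definition concrete, here is a minimal Python sketch of the value computation. The data layout (lists indexed as p[t][i], w[t][i], B[t][i], C[t]) is our own choice for illustration, not notation from the paper.

```python
def solution_value(S, p, w, B, C, n):
    """Value of a multistage knapsack solution S = [S_1, ..., S_T].

    p[t][i], w[t][i]: profit/weight of object i at time t;
    B[t][i]: bonus for keeping the same decision for i at times t and t+1;
    C[t]: capacity at time t. Returns None if some knapsack is infeasible.
    """
    T = len(S)
    value = 0
    for t in range(T):
        if sum(w[t][i] for i in S[t]) > C[t]:
            return None  # capacity constraint violated at time t
        value += sum(p[t][i] for i in S[t])  # knapsack profit at time t
    for t in range(T - 1):
        # D_t: objects whose decision is unchanged between t and t+1
        unchanged = [i for i in range(n) if (i in S[t]) == (i in S[t + 1])]
        value += sum(B[t][i] for i in unchanged)  # transition profit
    return value
```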

1.2 Related works

Multistage combinatorial optimization. A lot of optimization problems have been considered in online or semi-online settings, where the input changes over time and the algorithm has to modify the solution (re-optimize) by making as few changes as possible. We refer the reader to [Anthony, Blanchard, Cohen, Gu, Megow, Nagarajan] and the references therein.

Multistage optimization has been studied for fractional problems by Buchbinder et al. [Buchbinder] and Buchbinder, Chen and Naor [Buchbinder+]. The multistage model considered in this article is the one studied by Eisenstat et al. [Eisenstat] and Gupta et al. [Gupta]. Eisenstat et al. [Eisenstat] studied the multistage version of facility location problems and proposed a logarithmic approximation algorithm. An et al. [An] obtained constant factor approximations for some related problems. Gupta et al. [Gupta] studied the Multistage Maintenance Matroid problem for both the offline and the online settings. They presented a logarithmic approximation algorithm for this problem, which includes as a special case a natural multistage version of Spanning Tree. The same paper also introduced the study of the Multistage Minimum Perfect Matching problem, which they showed becomes hard to approximate even for a constant number of stages. Later, Bampis et al. [Bampis] showed that the problem is hard to approximate even for bipartite graphs and for the case of two time steps. In the case where the edge costs are metric within every time step, they proved that the problem remains APX-hard even for two time steps, and that the maximization version of the problem admits a constant factor approximation algorithm but is also APX-hard. In another work [Bampis+], the Multistage Max-Min Fair Allocation problem, a multistage variant of the Santa Claus problem, has been studied in the offline and the online settings. For the offline setting, the authors showed that the multistage version of the problem is much harder than the static one, and provided constant factor approximation algorithms.

Knapsack variants.

Our work builds upon the Knapsack literature [Kellerer]. It is well known that there is a simple 2-approximation algorithm, as well as a fully polynomial time approximation scheme (FPTAS), for the static case [Ibarra, Lawler, Magazine, Kellerer].
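For background, here is a minimal sketch of the classical greedy 2-approximation for the static case (assuming strictly positive weights; the function and variable names are ours):

```python
def knapsack_2_approx(p, w, C):
    """Classical 2-approximation for static Knapsack (positive weights).

    Greedily fills by profit/weight ratio, then returns the better of
    the greedy solution and the single most profitable fitting item.
    """
    items = [i for i in range(len(p)) if w[i] <= C]
    items.sort(key=lambda i: p[i] / w[i], reverse=True)
    greedy, load = [], 0
    for i in items:
        if load + w[i] <= C:
            greedy.append(i)
            load += w[i]
    best_single = max(items, key=lambda i: p[i], default=None)
    if best_single is not None and p[best_single] > sum(p[i] for i in greedy):
        return [best_single]
    return greedy
```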

There are two variants that are of special interest for our work. (i) The first variant is a generalization of the Knapsack problem known as the d-Dimensional Knapsack (d-KP) problem:

Definition 2.

In the d-dimensional Knapsack problem (d-KP), we have a set of n objects. Each object i has a profit p_i and weights w_{i,j}, for j = 1, …, d. We are also given capacities c_1, …, c_d. The goal is to select a subset S of objects such that:

  • The capacity constraints are respected: for any j ∈ {1, …, d}, Σ_{i ∈ S} w_{i,j} ≤ c_j;

  • The profit Σ_{i ∈ S} p_i is maximized.
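To make the definition concrete, here is a brute-force sketch (exponential time, for intuition only; the data layout is ours):

```python
from itertools import combinations

def dkp_brute_force(p, w, c):
    """Exact d-KP by exhaustive search (illustration only; exponential).

    p[i]: profit of object i; w[i][j]: weight of i in dimension j;
    c[j]: capacity in dimension j.
    """
    n, d = len(p), len(c)
    best_set, best_profit = (), 0
    for k in range(n + 1):
        for S in combinations(range(n), k):
            if all(sum(w[i][j] for i in S) <= c[j] for j in range(d)):
                profit = sum(p[i] for i in S)
                if profit > best_profit:
                    best_set, best_profit = S, profit
    return best_set, best_profit
```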

It is well known that for the usual Knapsack problem, in a basic solution of the continuous relaxation (variables x_i in [0,1]), at most one variable is fractional. Caprara et al. [Caprara] showed that this can be generalized to d-KP.

Let us consider the following ILP formulation of the problem:

  max Σ_{i=1}^{n} p_i x_i
  s.t. Σ_{i=1}^{n} w_{i,j} x_i ≤ c_j  for j = 1, …, d
       x_i ∈ {0, 1}                  for i = 1, …, n

Theorem 1.2 ([Caprara]). In the continuous relaxation of d-KP, where variables are in [0,1], in any basic solution at most d variables are fractional.

Note that with an easy affine transformation on variables, the same result holds when variable x_i is subject to l_i ≤ x_i ≤ u_i instead of 0 ≤ x_i ≤ 1: in any basic solution at most d variables are such that l_i < x_i < u_i.

Caprara et al. [Caprara] use the result of Theorem 1.2 to show that for any fixed constant d, d-KP admits a polynomial time approximation scheme (PTAS). Other PTASes have been presented in [Oguz, Frieze]. Korte and Schrader [Korte] showed that there is no FPTAS for d-KP with d ≥ 2 unless P = NP.
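As a quick empirical illustration of Theorem 1.2, the following sketch solves the relaxation of a random 2-KP instance; it relies on the assumption that scipy's dual-simplex method returns a vertex (hence basic) optimal solution:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, d = 30, 2                           # 30 objects, 2 knapsack constraints
p = rng.integers(1, 100, size=n)       # profits
W = rng.integers(1, 100, size=(d, n))  # weights, one row per dimension
c = W.sum(axis=1) / 3                  # capacities

# Continuous relaxation of d-KP: maximize p.x  s.t.  Wx <= c, 0 <= x <= 1.
# The dual-simplex method returns a vertex (basic) optimal solution.
res = linprog(-p, A_ub=W, b_ub=c, bounds=[(0, 1)] * n, method="highs-ds")
frac = [i for i, xi in enumerate(res.x) if 1e-9 < xi < 1 - 1e-9]
print(len(frac), "fractional variables; Theorem 1.2 guarantees at most", d)
```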

(ii) The second related variant is a simplified version of 2-KP where all the profits are 1 and, given an integer k, we are asked whether there is a solution of value at least k (a decision problem). In other words, given two knapsack constraints, can we take k objects satisfying both constraints? The following result is shown in [Kellerer].

([Kellerer]) This decision problem is NP-complete.

1.3 Our contribution

As stated before, our main contribution is to propose a PTAS for the multistage Knapsack problem. Furthermore, we prove that there is no FPTAS for the problem even in the case where T = 2, unless P = NP. We also give a pseudopolynomial time algorithm for the case where the number of steps is bounded by a fixed constant, and we show that otherwise the problem remains NP-hard even in the case where all the weights, profits and capacities are 0 or 1. The following table summarizes our main results, pointing out the impact of the number of time steps on the difficulty of the problem ("no FPTAS" means "no FPTAS unless P = NP").

                  | T = 1            | T fixed          | T arbitrary
exact resolution  | pseudopolynomial | pseudopolynomial | strongly NP-hard
approximation     | FPTAS            | PTAS             | PTAS
inapproximability | –                | no FPTAS         | no FPTAS

We point out that the negative results (strong NP-hardness and no FPTAS) hold even in the case of a uniform bonus, where B_{t,i} = B for all t and all i.

2 ILP formulation

The Multistage Knapsack problem can be written as an ILP as follows. We define binary variables x_{t,i} equal to 1 if object i is taken at time t (i.e., i ∈ S_t) and 0 otherwise. We also define binary variables y_{t,i} corresponding to the transition profit of object i between times t and t+1. The variable y_{t,i} is 1 if i is taken at both time steps, or at none of them, and 0 otherwise. Hence, y_{t,i} = 1 − |x_{t+1,i} − x_{t,i}|. Considering that we solve a maximization problem, this can be linearized by the two inequalities y_{t,i} ≤ 1 + x_{t+1,i} − x_{t,i} and y_{t,i} ≤ 1 − x_{t+1,i} + x_{t,i}. We end up with the following ILP (called (LP)):

  max Σ_{t=1}^{T} Σ_{i=1}^{n} p_{t,i} x_{t,i} + Σ_{t=1}^{T−1} Σ_{i=1}^{n} B_{t,i} y_{t,i}
  s.t. Σ_{i=1}^{n} w_{t,i} x_{t,i} ≤ C_t      for 1 ≤ t ≤ T
       y_{t,i} ≤ 1 + x_{t+1,i} − x_{t,i}      for 1 ≤ t ≤ T−1, 1 ≤ i ≤ n
       y_{t,i} ≤ 1 − x_{t+1,i} + x_{t,i}      for 1 ≤ t ≤ T−1, 1 ≤ i ≤ n
       x_{t,i}, y_{t,i} ∈ {0, 1}

In devising the PTAS we will extensively use the linear relaxation of (LP), in which the variables x_{t,i} and y_{t,i} are in [0, 1].
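A minimal sketch of (LP) using the PuLP modeling library; the dict-based data layout and the function name are ours, and relax=True yields the linear relaxation used below:

```python
import pulp

def build_mk_ilp(p, w, B, C, n, T, relax=False):
    """Build the ILP (LP) for Multistage Knapsack with PuLP.

    p[t][i], w[t][i], B[t][i], C[t] as in Definition 1 (0-indexed).
    With relax=True the variables are continuous in [0, 1].
    """
    cat = "Continuous" if relax else "Binary"
    prob = pulp.LpProblem("MultistageKnapsack", pulp.LpMaximize)
    x = {(t, i): pulp.LpVariable(f"x_{t}_{i}", 0, 1, cat)
         for t in range(T) for i in range(n)}
    y = {(t, i): pulp.LpVariable(f"y_{t}_{i}", 0, 1, cat)
         for t in range(T - 1) for i in range(n)}
    # Objective: knapsack profit plus transition profit.
    prob += (pulp.lpSum(p[t][i] * x[t, i] for t in range(T) for i in range(n))
             + pulp.lpSum(B[t][i] * y[t, i]
                          for t in range(T - 1) for i in range(n)))
    for t in range(T):  # capacity constraint at each time step
        prob += pulp.lpSum(w[t][i] * x[t, i] for i in range(n)) <= C[t]
    for t in range(T - 1):  # linearization of y = 1 - |x_{t+1} - x_t|
        for i in range(n):
            prob += y[t, i] <= 1 + x[t + 1, i] - x[t, i]
            prob += y[t, i] <= 1 - x[t + 1, i] + x[t, i]
    return prob, x, y
```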

3 A polynomial time approximation scheme

In this section we show that Multistage Knapsack admits a PTAS. The central part of the proof is to derive a PTAS when the number of steps is a fixed constant (Sections 3.1 and 3.2). The generalization to an arbitrary number of steps is done in Section 3.3.

To get a PTAS for a constant number of steps, the proof follows the two main ideas leading to the PTAS for d-KP in [Caprara]. Namely, for d-KP:

  • The number of fractional variables in a basic solution of the continuous relaxation of d-KP is at most d (Theorem 1.2);

  • A combination of brute-force search (to find the most profitable objects) and of an LP-based solution allows one to compute a solution close to optimal.

The main difficulty is to obtain a similar result for the number of fractional variables in the (relaxed) Multistage Knapsack problem: we end up with a result stating that there are at most T(T+1)(T+2)/6 fractional objects in an optimal (basic) solution. The brute-force part is similar in essence, though some additional difficulties are overcome by an additional preprocessing step.

We show how to bound the number of fractional variables in Section 3.1. We first illustrate the reasoning on the case of two time-steps, and then present the general result. In Section 3.2 we present the PTAS for a constant number of steps.

For ease of notation, we will sometimes write a feasible solution as (S_1, …, S_T) (subsets of objects taken at each time step), or as (x, y) (values of the variables in (LP) or in its relaxation).

3.1 Bounding the number of fractional objects in the relaxation of (LP)

3.1.1 Warm-up: the case of two time-steps

We consider in this section the case of two time steps (T = 2), and focus on the linear relaxation of (LP) with the variables x_{t,i} and y_i in [0, 1] (we write y_i instead of y_{1,i} for readability). We say that an object i is fractional in a solution if x_{1,i}, x_{2,i} or y_i is fractional.

Let us consider a (feasible) solution (x, y) of the relaxation, where y_i = 1 − |x_{2,i} − x_{1,i}| (variables y are set to their optimal values w.r.t. x).

We show the following.

Proposition 1.

If (x, y) is a basic solution of the relaxation, at most 4 objects are fractional.

Proof.

First note that since we assume y_i = 1 − |x_{2,i} − x_{1,i}|, if x_{1,i} and x_{2,i} are both integers then y_i is an integer. So if an object i is fractional, either x_{1,i} or x_{2,i} is fractional.

Let us denote:

  • I_≠ the set of objects i such that x_{1,i} ≠ x_{2,i};

  • I_= the set of objects i such that x_{1,i} = x_{2,i}.

We first show Fact 1.

Fact 1. In I_≠ there is at most one object i with x_{1,i} fractional.

Suppose that there are two such objects i and j. Note that since x_{1,i} ≠ x_{2,i} and x_{1,i} is fractional, y_i is fractional, and so is y_j. Then, for a sufficiently small δ > 0, consider the solution (x′, y′) obtained from (x, y) by transferring at time 1 an amount δ of weight from j to i (and adjusting y_i and y_j consequently). Namely, in (x′, y′):

  • x′_{1,i} = x_{1,i} + δ/w_{1,i} and y′_i = y_i − ε_i δ/w_{1,i}, where ε_i = 1 if x_{1,i} > x_{2,i} and ε_i = −1 if x_{1,i} < x_{2,i} (since i is in I_≠).

  • x′_{1,j} = x_{1,j} − δ/w_{1,j} and y′_j = y_j + ε_j δ/w_{1,j}, where ε_j = 1 if x_{1,j} > x_{2,j} and ε_j = −1 otherwise.

Note that (for δ sufficiently small) (x′, y′) is feasible. Indeed (1) x_{1,i} and x_{1,j} are fractional, so they can be slightly increased or decreased; (2) the weight of the knapsack at time 1 is the same in (x, y) and in (x′, y′); (3) if x_{1,i} increases by a small δ/w_{1,i}, then if x_{1,i} > x_{2,i} the gap |x_{1,i} − x_{2,i}| increases by δ/w_{1,i}, so y_i has to decrease by δ/w_{1,i} (and y′_i ≥ 0 since y_i is fractional), while if x_{1,i} < x_{2,i} then y_i can increase by δ/w_{1,i} (and y′_i ≤ 1); and similarly for j.

Similarly, let us define (x″, y″) obtained from (x, y) with the reverse transfer (from i to j). In (x″, y″):

  • x″_{1,i} = x_{1,i} − δ/w_{1,i} and y″_i = y_i + ε_i δ/w_{1,i};

  • x″_{1,j} = x_{1,j} + δ/w_{1,j} and y″_j = y_j − ε_j δ/w_{1,j}.

As previously, (x″, y″) is feasible. Then (x, y) is clearly a convex combination of (x′, y′) and (x″, y″) (with coefficient 1/2), so it is not a basic solution, and Fact 1 is proven.

In other words (and this interpretation will be important in the general case), for this case we can focus on the variables x_{1,i} at time one, and interpret the problem locally as a (classical, unidimensional) fractional knapsack problem. By locally, we mean that if x_{1,i} < x_{2,i} then x_{1,i} must stay in [0, x_{2,i}] (in (x′, y′), x′_{1,i} cannot be larger than x_{2,i}, otherwise the previous expression of y′_i would be erroneous); similarly, if x_{1,i} > x_{2,i} then x_{1,i} must stay in [x_{2,i}, 1]. The profit associated with object i is p_{1,i} − ε_i B_{1,i}: if x_{1,i} increases (resp. decreases) by δ/w_{1,i}, then the knapsack profit increases (resp. decreases) by p_{1,i} δ/w_{1,i}, and the transition profit changes by −ε_i B_{1,i} δ/w_{1,i} (resp. +ε_i B_{1,i} δ/w_{1,i}), as explained above. Then we have at most one fractional variable, as in any basic solution of a fractional knapsack problem.

So in I_≠ there is at most one object with x_{1,i} fractional, and similarly there is at most one object with x_{2,i} fractional. In I_≠, for all but at most two objects, both x_{1,i} and x_{2,i}, and thus y_i, are integers.

Note that this argument would not hold for the variables in I_=. Indeed, if x_{1,i} = x_{2,i} is fractional, then y_i = 1, and the transition profit decreases in both cases: when x_{1,i} increases by δ/w_{1,i} and when it decreases by δ/w_{1,i}. So we cannot express (x, y) as a convex combination of (x′, y′) and (x″, y″) as previously.

However, let us consider the following linear program (LP_=), obtained by fixing the variables of the objects in I_≠ to their values in (x, y), computing the remaining capacities c′_1 and c′_2, and "imposing" x_{1,i} = x_{2,i} (written as a single variable z_i):

  max Σ_{i ∈ I_=} (p_{1,i} + p_{2,i}) z_i
  s.t. Σ_{i ∈ I_=} w_{1,i} z_i ≤ c′_1
       Σ_{i ∈ I_=} w_{2,i} z_i ≤ c′_2
       0 ≤ z_i ≤ 1 for i ∈ I_=

(for i ∈ I_=, y_i = 1 whatever the common value z_i is, so the transition profit of these objects is constant and can be omitted from the objective). Clearly, the restriction of (x, y) to the variables of I_= is a solution of (LP_=). Formally, let z be defined by z_i = x_{1,i} = x_{2,i} for i ∈ I_=; z is feasible for (LP_=). Let us show that it is basic: suppose a contrario that z = (z′ + z″)/2, with z′, z″ two feasible solutions of (LP_=). Then consider the solution (x′, y′) of the relaxation of (LP) defined as:

  • If i ∈ I_=, then x′_{1,i} = x′_{2,i} = z′_i, and y′_i = 1.

  • Otherwise (for i in I_≠), (x′, y′) is the same as (x, y).

(x′, y′) is clearly a feasible solution of Multistage Knapsack. If we do the same for z″, we get a (different) feasible solution (x″, y″), and (x, y) = ((x′, y′) + (x″, y″))/2, so (x, y) is not basic, a contradiction.

By the result of [Caprara], z has at most 2 fractional variables. Then, in I_=, for all but at most 2 objects, x_{1,i}, x_{2,i} and y_i are all integers. ∎

3.1.2 General case

The case of 2 time steps suggests bounding the number of fractional objects by considering 3 cases:

  • Objects i with x_{1,i} fractional and x_{1,i} ≠ x_{2,i}. As explained in the proof of Proposition 1, this can be seen locally (as long as x_{1,i} does not reach x_{2,i}) as a knapsack problem, from which we can conclude that there is at most 1 such fractional object.

  • Similarly, objects i with x_{2,i} fractional and x_{2,i} ≠ x_{1,i}.

  • Objects i with x_{1,i} = x_{2,i} fractional. As explained in the proof of Proposition 1, this can be seen as a 2-KP, from which we can conclude that there are at most 2 such fractional objects.

For larger T, we may have different situations. Suppose for instance that we have 5 time steps, and a solution with an object i such that x_{1,i} ≠ x_{2,i} = x_{3,i} = x_{4,i} ≠ x_{5,i}, with x_{2,i} fractional. So we have x_{t,i} fractional and constant for t ∈ {2, 3, 4}, and different from x_{1,i} and x_{5,i}. The idea is to say that we cannot have many objects like this (in a basic solution), by interpreting these objects on time steps 2, 3, 4 as a basic optimal solution of a 3-KP (locally, i.e., with a variable z_i such that l_i ≤ z_i ≤ u_i).

Then, roughly speaking, the idea is to show that for any pair of time steps (t_1, t_2) with t_1 ≤ t_2, we can bound the number of objects which are fractional and constant on this time interval (but not at times t_1 − 1 and t_2 + 1). Then a sum over all the possible choices of (t_1, t_2) gives the global upper bound.

Let us state this rough idea formally. In all this section, we consider a (feasible) solution (x, y) of the relaxation of (LP), where y_{t,i} = 1 − |x_{t+1,i} − x_{t,i}| (variables y are set to their optimal values w.r.t. x).

In such a solution (x, y), let us define as previously an object i as fractional if at least one variable x_{t,i} or y_{t,i} is fractional. Our goal is to show the following result.

Theorem 3.1.2. If (x, y) is a basic solution of the relaxation of (LP), it has at most T(T+1)(T+2)/6 fractional objects.

Before proving the theorem, let us introduce some definitions and show some lemmas. Let t_1, t_2 be two time steps with t_1 ≤ t_2.

Definition 3.

The set F_{t_1,t_2} associated to (x, y) is the set of objects i (called fractional w.r.t. (t_1, t_2)) such that

  • x_{t_1,i} = x_{t_1+1,i} = … = x_{t_2,i}, and this common value is fractional;

  • Either t_1 = 1 or x_{t_1−1,i} ≠ x_{t_1,i};

  • Either t_2 = T or x_{t_2+1,i} ≠ x_{t_2,i}.

In other words, we have x_{t,i} fractional and constant on [t_1, t_2], and [t_1, t_2] is maximal w.r.t. this property.

For t_1 ≤ t ≤ t_2, we note c′_t the remaining capacity of the knapsack at time t, considering that the variables of the objects outside F_{t_1,t_2} are fixed (to their value in (x, y)):

  c′_t = C_t − Σ_{i ∉ F_{t_1,t_2}} w_{t,i} x_{t,i}

As previously, we will see x_{t_1,i} = … = x_{t_2,i} as a single variable z_i. We have to express the fact that this variable cannot "cross" the values x_{t_1−1,i} (if t_1 > 1) and x_{t_2+1,i} (if t_2 < T), so that everything remains locally (in this range) linear. So we define the lower and upper bounds l_i and u_i induced by Definition 3 as:

  • Initialize l_i = 0. If t_1 > 1 and x_{t_1−1,i} < x_{t_1,i}, then set l_i = x_{t_1−1,i}. If t_2 < T and x_{t_2+1,i} < x_{t_2,i}, then set l_i = max(l_i, x_{t_2+1,i}).

  • Similarly, initialize u_i = 1. If t_1 > 1 and x_{t_1−1,i} > x_{t_1,i}, then set u_i = x_{t_1−1,i}. If t_2 < T and x_{t_2+1,i} > x_{t_2,i}, then set u_i = min(u_i, x_{t_2+1,i}).

Note that with this definition l_i < x_{t_1,i} < u_i. This allows us to define the polyhedron P_{t_1,t_2} as the set of vectors z = (z_i)_{i ∈ F_{t_1,t_2}} such that

  Σ_{i ∈ F_{t_1,t_2}} w_{t,i} z_i ≤ c′_t  for t_1 ≤ t ≤ t_2
  l_i ≤ z_i ≤ u_i                         for i ∈ F_{t_1,t_2}

Definition 4.

The solution z associated to (x, y) is defined by z_i = x_{t_1,i} for i ∈ F_{t_1,t_2}.

If (x, y) is a basic solution, then the solution z associated to it is a feasible and basic solution of P_{t_1,t_2}.

Proof.

Since (x, y) is feasible, z respects the capacity constraints (with the remaining capacities), and so z is feasible.

Suppose now that z = (z′ + z″)/2 for two feasible solutions z′, z″ of P_{t_1,t_2}. We associate to z′ a feasible solution (x′, y′) as follows.

We fix x′_{t,i} = z′_i for i ∈ F_{t_1,t_2} and t_1 ≤ t ≤ t_2, and x′_{t,i} = x_{t,i} otherwise. We fix the variables y′ to their maximal values, i.e., y′_{t,i} = 1 − |x′_{t+1,i} − x′_{t,i}|. This way, we get a feasible solution (x′, y′). Note that:

  • y′_{t,i} = y_{t,i} for i ∉ F_{t_1,t_2}, since the corresponding variables x are the same in (x, y) and (x′, y′);

  • y′_{t,i} = y_{t,i} = 1 for i ∈ F_{t_1,t_2} and t_1 ≤ t < t_2, since the variables x are constant on the interval [t_1, t_2].

Then, for the variables y′, the only modifications between y and y′ concern the "boundary" variables y′_{t_1−1,i} (for t_1 > 1) and y′_{t_2,i} (for t_2 < T).

We build this way two solutions (x′, y′) and (x″, y″) corresponding to z′ and z″. By construction, (x′, y′) and (x″, y″) are feasible. They are also different provided that z′ and z″ are different. It remains to prove that (x, y) is the half sum of (x′, y′) and (x″, y″).

Let us first consider the variables x:

  • if i ∉ F_{t_1,t_2} or t ∉ [t_1, t_2], then x′_{t,i} = x″_{t,i} = x_{t,i}, so x_{t,i} = (x′_{t,i} + x″_{t,i})/2.

  • if i ∈ F_{t_1,t_2} and t ∈ [t_1, t_2], then x′_{t,i} = z′_i and x″_{t,i} = z″_i, so (x′_{t,i} + x″_{t,i})/2 = (z′_i + z″_i)/2 = z_i = x_{t,i}.

Now let us look at the variables y: first, y′_{t,i} = y″_{t,i} = y_{t,i} for i ∉ F_{t_1,t_2}, and also for i ∈ F_{t_1,t_2} with t_1 ≤ t < t_2, so y_{t,i} = (y′_{t,i} + y″_{t,i})/2. The last and main part concerns the two boundary variables y_{t_1−1,i} (if t_1 > 1) and y_{t_2,i} (if t_2 < T), for i ∈ F_{t_1,t_2}.

We have y_{t_1−1,i} = 1 − |x_{t_1,i} − x_{t_1−1,i}| and y′_{t_1−1,i} = 1 − |z′_i − x_{t_1−1,i}|. The crucial point is to observe that, thanks to the constraint l_i ≤ z_i ≤ u_i and by the definition of l_i and u_i, the values z_i, z′_i and z″_i are either all greater than (or equal to) x_{t_1−1,i}, or all lower than (or equal to) x_{t_1−1,i}.

Suppose first that they are all greater than (or equal to) x_{t_1−1,i}. Then:

  y′_{t_1−1,i} + y″_{t_1−1,i} = (1 − z′_i + x_{t_1−1,i}) + (1 − z″_i + x_{t_1−1,i}) = 2 − 2 z_i + 2 x_{t_1−1,i} = 2 y_{t_1−1,i}

Now suppose that they are all lower than (or equal to) x_{t_1−1,i}. Then:

  y′_{t_1−1,i} + y″_{t_1−1,i} = (1 + z′_i − x_{t_1−1,i}) + (1 + z″_i − x_{t_1−1,i}) = 2 + 2 z_i − 2 x_{t_1−1,i} = 2 y_{t_1−1,i}

Then, in both cases, y_{t_1−1,i} = (y′_{t_1−1,i} + y″_{t_1−1,i})/2.

With the very same arguments we can show that y_{t_2,i} = (y′_{t_2,i} + y″_{t_2,i})/2.

Then, (x, y) is the half sum of (x′, y′) and (x″, y″), a contradiction with the fact that (x, y) is basic. ∎

Now we can bound the number of fractional objects w.r.t. (t_1, t_2).

Lemma 3.1.2. |F_{t_1,t_2}| ≤ t_2 − t_1 + 1.

Proof.

P_{t_1,t_2} is a polyhedron corresponding to the linear relaxation of a d-KP with d = t_2 − t_1 + 1. Since z is basic, using Theorem 1.2 (and the note after it), there are at most t_2 − t_1 + 1 variables such that l_i < z_i < u_i. But by the definition of F_{t_1,t_2}, l_i < z_i < u_i holds for all i ∈ F_{t_1,t_2}. Then |F_{t_1,t_2}| ≤ t_2 − t_1 + 1. ∎

Now we can easily prove Theorem 3.1.2.

Proof.

First note that if x_{t,i} and x_{t+1,i} are integral, then so is y_{t,i}. Then, if an object i is fractional, at least one variable x_{t,i} is fractional, and so i appears in (at least) one set F_{t_1,t_2}.

We consider all pairs (t_1, t_2) with t_1 ≤ t_2. Thanks to Lemma 3.1.2, |F_{t_1,t_2}| ≤ t_2 − t_1 + 1. So, the total number of fractional objects is at most:

  Σ_{1 ≤ t_1 ≤ t_2 ≤ T} (t_2 − t_1 + 1)

Indeed, there are at most T(T+1)/2 choices for (t_1, t_2), and at most t_2 − t_1 + 1 fractional objects for each choice. ∎

Note that with a standard calculation we get Σ_{1 ≤ t_1 ≤ t_2 ≤ T} (t_2 − t_1 + 1) = T(T+1)(T+2)/6, so for T = 2 time steps we get at most 4 fractional objects, the same bound as in Proposition 1.
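A quick sanity check of this closed form:

```python
def fractional_bound(T):
    """Sum of (t2 - t1 + 1) over all pairs 1 <= t1 <= t2 <= T."""
    return sum(t2 - t1 + 1
               for t1 in range(1, T + 1) for t2 in range(t1, T + 1))

for T in range(1, 8):
    closed_form = T * (T + 1) * (T + 2) // 6
    assert fractional_bound(T) == closed_form
    print(T, closed_form)  # T = 2 gives 4, matching Proposition 1
```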

3.2 A PTAS for a constant number of time steps

Now we can describe the PTAS for a constant number of time steps. Informally, the algorithm first guesses the k objects with the maximum reward in an optimal solution (where k is defined as a function of T and ε), and then finds a solution on the remaining instance using the relaxation of the LP. The fact that the number of fractional objects is small allows us to bound the error made by the algorithm.

For a solution S (either fractional or integral) we define r_i(S) as the reward of object i in solution S: r_i(S) = Σ_{t=1}^{T} p_{t,i} x_{t,i} + Σ_{t=1}^{T−1} B_{t,i} y_{t,i}. The value of a solution is then v(S) = Σ_{i=1}^{n} r_i(S).

Consider the algorithm A_f which, on an instance of Multistage Knapsack:

  • Finds an optimal (basic) solution (x, y) of the relaxation of (LP);

  • Takes at step t an object i if and only if x_{t,i} = 1.

Clearly, A_f outputs a feasible solution, the value v(A_f) of which verifies:

  v(A_f) ≥ v(x, y) − Σ_{i ∈ F} r_i(x, y)   (1)

where F is the set of fractional objects in (x, y). Indeed, for each integral (i.e., not fractional) object the reward is the same in both solutions.
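A sketch of A_f's rounding step, reusing the hypothetical build_mk_ilp helper above; we assume the LP relaxation is solved by a simplex-type solver, so that the returned solution is basic:

```python
def round_down(x, n, T, eps=1e-9):
    """A_f's rounding: keep object i at step t iff x_{t,i} = 1."""
    return [{i for i in range(n) if pulp.value(x[t, i]) >= 1 - eps}
            for t in range(T)]

# Usage sketch:
# prob, x, y = build_mk_ilp(p, w, B, C, n, T, relax=True)
# prob.solve()          # LP solve; assumed to return a vertex solution
# S = round_down(x, n, T)
```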

Now we can describe the algorithm A_ε, which takes as input an instance of Multistage Knapsack and an ε > 0.

Algorithm A_ε

  1. Let k be an integer depending only on T and ε (fixed in the analysis).

  2. For every set K of at most k objects and every assignment of the objects of K over the T time steps:

    If the capacity constraints are respected at every time step, then:

    • Compute the rewards of the objects of K in this partial solution, and find the smallest one, say i_0, with reward r_0.

    • On the subinstance induced by the remaining objects:

      • For all , for all