In a classical combinatorial optimization setting, given an instance of a problem, one needs to find a good feasible solution. However, in many situations the data may evolve over time, and one has to solve a sequence of instances. The natural approach of solving every instance independently may induce a significant transition cost, for instance for moving a system from one state to another. This cost may represent, e.g., the cost of turning on/off the servers in a data center [LinWAT13, BansalGKPSS15, AntoniadisS17, AlbersQ18], the cost of changing the quality level in video streaming [Joseph], or the cost of turning on/off nuclear plants [thesececile]. Gupta et al. [Gupta] and Eisenstat et al. [Eisenstat] proposed a multistage model where, given a time horizon T, the input is a sequence of instances I_1, ..., I_T (one for each time step), and the goal is to find a sequence of solutions S_1, ..., S_T (one for each time step) reaching a tradeoff between the quality of the solutions in each time step and the stability/similarity of the solutions in consecutive time steps. The addition of the transition cost makes some classic combinatorial optimization problems much harder. This is the case, for instance, for the minimum weighted perfect matching problem in the off-line case, where the whole sequence of instances is known in advance. While the one-step problem is polynomial-time solvable, the multistage problem becomes hard to approximate even for bipartite graphs and for only two time steps [Bampis, Gupta].
In this work, we focus on the on-line case, where at time t no knowledge is available for the instances at times t+1, ..., T. When it is not possible to handle the on-line case, we turn our attention to the k-lookahead case, where at time t the instances at times t+1, ..., t+k are also known. This case is of interest since, in some applications like dynamic capacity planning in data centers, forecasts of future demands may be very helpful [Lin, Liu]. Our goal is to measure the impact of the lack of knowledge of the future on the quality and the stability of the returned solutions; indeed, our algorithms are limited in their knowledge of the sequence of instances. The number T of time steps is known in advance, and we compute the competitive ratio of the algorithm after time step T. As we focus on maximization problems, we say that an algorithm is (strictly) c-competitive (with competitive ratio c) if its value is at least 1/c times the optimal value on all instances.
As is usual in the online setting, we put no limitations on the available computational resources. This means that at every time step t, when instance I_t is known, we assume the existence of an oracle able to compute an optimal solution for that time step. Notice also that our lower bounds do not rely on any complexity assumption. Some recent results are already known for the on-line multistage model [Bampis+, Gupta]; however, all these results are obtained for specific problems. In this work, we study multistage variants of a broad family of maximization problems. The family of optimization problems that we consider is the following.
(Subset Maximization Problems.) A Subset Maximization problem is a combinatorial optimization problem whose instances consist of:
A ground set X;
A set F of feasible solutions, where F ⊆ 2^X;
A positive weight w(S) for every S ∈ F.
The goal is to find S ∈ F such that w(S) is maximum. We will consider that the empty set is always feasible, ensuring that the set of feasible solutions is non-empty. This is a very general class of problems, including the maximization Subset Selection problems studied by Pruhs and Woeginger in [Pruhs] (they only considered linear objective functions). It contains for instance graph problems where X is the set of vertices (as in any problem of finding a maximum induced subgraph verifying some property) or the set of edges (as in matching problems). It also contains classical set problems (knapsack, maximum 3-dimensional matching, ...) and, more generally, 0-1 linear programs (with non-negative profits in the objective function).
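For concreteness, here is a toy Subset Maximization instance (our own example, not one from the text: maximum-weight matching on a triangle) together with a brute-force solver playing the role of the assumed optimization oracle:

```python
from itertools import combinations

# Toy instance: ground set X = edges of a triangle; a solution is
# feasible iff no two chosen edges share a vertex (a matching);
# the objective is linear. All names here are illustrative.
X = [("a", "b"), ("b", "c"), ("a", "c")]
weight = {("a", "b"): 3, ("b", "c"): 2, ("a", "c"): 4}

def feasible(S):
    vertices = [v for e in S for v in e]
    return len(vertices) == len(set(vertices))  # no shared endpoint

def solve(objects, w):
    """Brute-force oracle: best feasible subset (empty set is feasible)."""
    best, best_val = frozenset(), 0
    for k in range(1, len(objects) + 1):
        for S in combinations(objects, k):
            if feasible(S):
                val = sum(w[e] for e in S)
                if val > best_val:
                    best, best_val = frozenset(S), val
    return best, best_val

S, v = solve(X, weight)
print(S, v)  # in a triangle any two edges intersect, so a single edge wins
```

Here the optimum is the single edge ("a", "c") of weight 4, since any two triangle edges share a vertex.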
Given a problem in the previous class, we are interested in its multistage version [Gupta, Eisenstat]. The stability over time of a solution sequence is classically captured by considering a transition cost when a modification is made in the solution. Here, dealing with maximization problems, we instead consider a transition bonus that accounts for the similarity of two consecutive solutions. In what follows, we will use the term object to denote an element of X (so an object can be a vertex of a graph, or an edge, ..., depending on the underlying problem).
(Multistage Subset Maximization Problems.) In a Multistage Subset Maximization problem, we are given:
a number of steps T and a set X of objects;
for any t ∈ {1, ..., T}, an instance I_t of the optimization problem. We will denote:
w_t the objective (profit) function at time t;
F_t the set of feasible solutions at time t;
B a given transition profit;
the value of a solution sequence (S_1, ..., S_T) is
w_1(S_1) + Σ_{t=2..T} ( w_t(S_t) + b(S_{t-1}, S_t) ),
where b(S_{t-1}, S_t) is the transition bonus for the solution between time steps t-1 and t. We will use the term profit for w_t(S_t), bonus for the transition bonus b, and value of a solution for the total;
the goal is to determine a solution sequence of maximum value.
There are two natural ways to define the transition bonus. We will see that these two ways of measuring the stability induce some differences in the competitive ratios one can get.
(Types of transition bonus.) If S_t and S_{t+1} denote, respectively, the solutions for time steps t and t+1, then we can define the transition bonus b(S_t, S_{t+1}) as:
Intersection Bonus: B times |S_t ∩ S_{t+1}|: in this case the bonus is proportional to the number of objects in the solution at time t that remain in it at time t+1.
Hamming Bonus: B times (n − |S_t Δ S_{t+1}|), where n = |X| and Δ denotes symmetric difference. Here we get the bonus for each object for which the decision (to be in the solution or not) is the same at time steps t and t+1. In other words, the bonus is proportional to n minus the number of modifications (Hamming distance) between the solutions.
Note that by scaling profits (dividing them by B), we can arbitrarily fix B = 1. So from now on, we assume B = 1.
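With B = 1, the two bonuses can be written as follows (a small illustration of ours, with our own variable names):

```python
# Transition bonuses between consecutive solutions S_prev and S_cur,
# over ground set X, with the scaling B = 1 from the text.
def intersection_bonus(S_prev, S_cur):
    # one unit per object kept from one step to the next
    return len(S_prev & S_cur)

def hamming_bonus(S_prev, S_cur, X):
    # one unit per object whose in/out decision is unchanged,
    # i.e. |X| minus the Hamming distance (symmetric difference)
    return len(X) - len(S_prev ^ S_cur)

X = frozenset(range(5))
A, C = frozenset({0, 1, 2}), frozenset({1, 2, 3})
print(intersection_bonus(A, C))  # 2: objects 1 and 2 are kept
print(hamming_bonus(A, C, X))    # 3: only decisions on 0 and 3 change
```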
In this article, we will consider two possible ways for the data to evolve.
(Types of data evolution.)
Static Set of Feasible Solutions (SSFS): only profits may change over time, so the structure of feasible solutions remains the same: F_t = F for all t.
General Evolution (GE): any modification in the input sequence is possible; both the profits and the set of feasible solutions may change over time. In this latter model, for knapsack, the profits and weights of objects (and the capacity of the bag) may change over time; for maximum independent set, the edges of the graph may change; and so on.
1.1 Related Work
A series of papers considers online or semi-online settings where the input changes over time and the algorithm has to modify (re-optimize) the solution by making as few changes as possible (see [Anthony, Blanchard, Cohen, Gu, Megow, Nagarajan] and the references therein). The multistage model considered in this paper was introduced by Eisenstat et al. [Eisenstat] and Gupta et al. [Gupta]. Eisenstat et al. [Eisenstat] studied the multistage version of facility location problems and proposed a logarithmic approximation algorithm. An et al. [An] obtained constant-factor approximation algorithms for some related problems. Gupta et al. [Gupta] studied the Multistage Maintenance Matroid problem for both the offline and the online settings. They presented a logarithmic approximation algorithm for this problem, which includes as a special case a natural multistage version of Spanning Tree. They also considered the online version of the problem and provided an efficient randomized competitive algorithm against any oblivious adversary. The same paper also introduced the study of the Multistage Minimum Perfect Matching problem, for which they proved that it is hard to approximate even for a constant number of stages. Bampis et al. [Bampis] strengthened this negative result by showing that the problem is hard to approximate even for bipartite graphs and for the case of two time steps. When the edge costs are metric within every time step, they proved that the problem remains APX-hard even for two time steps. They also showed that the maximization version of the problem admits a constant-factor approximation algorithm but is APX-hard. Olver et al. [Olver] studied a multistage version of the Minimum Linear Arrangement problem, which is related to a variant of the List Update problem [Sleator], and provided a logarithmic lower bound for the online version and a polylogarithmic upper bound for the offline version.
The Multistage Max-Min Fair Allocation problem has been studied in the offline and online settings in [Bampis+]. This problem corresponds to a multistage variant of the Santa Claus problem. For the off-line setting, the authors showed that the multistage version of the problem is much harder than the static one, and they provided constant-factor approximation algorithms. For the online setting, they proposed a constant-competitive algorithm for SSFS-type evolving instances, and they proved that it is not possible to find an online algorithm with bounded competitive ratio for GE-type evolving instances. Finally, they showed that in the 1-lookahead case, where at time step t the instance of time step t+1 is also known, it is possible to get a constant approximation ratio.
Buchbinder et al. [Buchbinder] and Buchbinder, Chen and Naor [Buchbinder+] considered a multistage model and they studied the relation between the online learning and competitive analysis frameworks, mostly for fractional optimization problems.
1.2 Summary of Results and Overview
The contribution of our paper is a framework for online multistage maximization problems (comprising different models), a characterization of those models in which a constant competitive ratio is achievable, and almost tight upper and lower bounds on the best-possible competitive ratio for these models.
We increase the complexity of the considered models over the course of the paper. We start with the arguably simplest model: Considering a static set of feasible solutions clearly restricts the general model of evolution; while such a straightforward comparison between the Hamming and intersection bonus is not possible, the Hamming bonus seems simpler in that, compared to the intersection model, there are (somewhat comparable) extra terms added to the profit of both the algorithm and the optimum. As we show in Subsection 2.1, there is indeed a simple 2-competitive algorithm: At each time t, it greedily chooses the set that either maximizes the transition bonus w.r.t. S_{t-1} (that is, choosing S_t = S_{t-1}, which is possible in this model) or maximizes the profit w_t(S_t). We complement this observation with a matching lower bound only involving two time steps.
We then toggle the transition-bonus model and the data-evolution model separately and show that constant competitive ratios can still be achieved. First, in Subsection 2.2, we consider the intersection bonus. We show that, after modifying the profits to make larger solutions more profitable, a 2-competitive algorithm can again be obtained by a greedy approach. We also give an (almost matching) lower bound of 2 again. Next, we toggle the evolution model. In Subsection 3.1, we adapt the greedy algorithm from Subsection 2.1 by reweighting to obtain a constant-competitive algorithm using a more complicated analysis. We complement this result with a lower bound of 2.
In Subsection 3.2, we finally consider the general-evolution model with intersection bonus, where we give a simple lower bound showing that a constant competitive ratio is not achievable. This lower bound relies on forbidding the algorithm to choose, in the second step, any item that it chose in the first step. We circumvent such issues by allowing the algorithm a lookahead of one step and present a constant-competitive algorithm for that setting. A similar phase transition has been observed for a related problem [Bampis+], but our algorithm, based on a doubling approach, is different. We also give a matching lower bound on the competitive ratio of any algorithm in the same setting. We summarize all results described thus far in Table 1.
Table 1: Summary of results.
|                    | static set of feasible solutions | general evolution          |
| Hamming bonus      | Theorems 2.1 and 2.1             | Theorems 3.1.3 and 3.1.1   |
| intersection bonus | Theorems 2.2 and 2.2             | Theorems 3.2, 3.2, and 3.2 |
We note that the lower bounds mentioned for the Hamming model are only shown for a specific fixed number of time steps, and that in general there is no trivial way of extending these bounds to a larger number of time steps. One may however argue that the large-T regime is in fact the interesting one both for practical applications and in theory, the latter because the effect of having a first time step without bonus vanishes. At the end of the respective sections, we therefore give asymptotic lower bounds for the cases of a static set of feasible solutions and general evolution, respectively. These bounds are non-trivial, but we do not know whether they are tight.
It is plausible that the aforementioned upper bounds can be improved if extra assumptions are made on characteristics of the objective function and the sets of feasible solutions. In Subsubsection 3.1.2, we show that very natural assumptions already suffice: assuming that at each time step the feasible solutions are closed under taking subsets and the objective function is submodular, we give a 21/8-competitive algorithm for the model with general evolution and Hamming bonus, improving the previous competitive ratio. Our lower bounds for general evolution and Hamming bonus in fact fulfill these extra assumptions.
In Section 4, we summarize our results and mention directions for future research that we consider interesting.
2 Model of a Static Set of Feasible Solutions
We consider here the model of evolution where only profits change over time: F_t = F for any t. We first consider the Hamming bonus model and show a simple 2-competitive algorithm. We then show that an (asymptotic) competitive ratio of 2 can also be achieved in the intersection bonus model, using a more involved algorithm. In both cases, this ratio of 2 is shown to be (asymptotically) optimal.
2.1 Hamming-Bonus Model
In the SSFS model with Hamming bonus, there is a 2-competitive algorithm.
We consider the following very simple algorithm. At each time step t, the algorithm computes an optimal solution S_t^* with associated profit p_t^*. At t = 1 we fix S_1 = S_1^*. For t ≥ 2, if p_t^* ≥ n then we fix S_t = S_t^*, otherwise we fix S_t = S_{t-1} (which is possible thanks to the fact that the set of feasible solutions does not change).
Let OPT be the optimal value. Since any solution sequence gets profit at most p_t^* at time t, and bonus at most n between two consecutive time steps, we get OPT ≤ Σ_{t=1..T} p_t^* + (T − 1)n.
By construction, at each time t ≥ 2, the algorithm gets either profit p_t^* ≥ n (when S_t = S_t^*) or bonus n (from S_t = S_{t-1}, when p_t^* < n). So in any case it gets profit plus bonus at least max(p_t^*, n) ≥ (p_t^* + n)/2. At time 1 it gets profit p_1^*. So
ALG ≥ p_1^* + Σ_{t=2..T} (p_t^* + n)/2 ≥ OPT/2,
which completes the proof. ∎
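The greedy rule just analyzed (take the step optimum if its profit beats the maximum possible bonus, otherwise keep the previous solution) can be sketched as follows; the oracle interface and the threshold n reflect our reading of the elided formulas:

```python
def ssfs_hamming_greedy(n, T, solve_opt):
    """Sketch of the 2-competitive algorithm for SSFS with Hamming bonus.
    solve_opt(t) -> (S_t_star, p_t_star): an optimal one-step solution
    and its profit, as returned by the assumed oracle. With B = 1, the
    maximum per-step Hamming bonus is n, which serves as the threshold."""
    sol = []
    for t in range(1, T + 1):
        S_star, p_star = solve_opt(t)
        if t == 1 or p_star >= n:
            sol.append(S_star)   # step-optimal profit beats the bonus
        else:
            sol.append(sol[-1])  # keep the previous solution: bonus n
    return sol
```

For instance, with n = 3 and per-step optima of profit 5, 1, 10, the algorithm keeps its first solution in the middle step (profit 1 < 3) and switches in the last one.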
Consider the SSFS model with Hamming bonus. For any ε > 0, there is no (2 − ε)-competitive algorithm, even if there are only two time steps.
We consider a set of n objects {1, ..., n}, and two time steps. There are three feasible solutions: ∅, O_1 = {1}, and O = {2, ..., n}. At t = 1, all the profits are 0. Let us consider an on-line algorithm A. We consider the three possibilities for the algorithm at time 1:
At time 1, A chooses ∅: at time 2 we give profit 1 to all objects. If A takes no object at time 2, it gets profit 0 and bonus n. If it takes O_1, it gets profit 1 and bonus n − 1. If it takes O, it gets profit n − 1 and bonus 1. So in any case the computed solution has value n. The solution consisting of taking O at both time steps has profit n − 1 and bonus n, so value 2n − 1.
At time 1, A chooses O_1: at time 2 we give profit 0 to object 1, and profit 1 to all other objects. Then, if the algorithm takes ∅ (resp. O_1, O) at time 2, it gets value n − 1 (resp. n, n − 1), while the solution consisting of taking O at both time steps has value 2n − 1.
At time 1, A chooses O: at time 2 we give profit n to object 1, and 0 to all other objects. Then, if the algorithm takes ∅ (resp. O_1, O) at time 2, it gets value 1 (resp. n, n), while the solution consisting of taking O_1 at both time steps has value 2n.
In any case, the ratio is at least (2n − 1)/n, which tends to 2 as n grows. ∎
We complement this lower bound with an asymptotic result for large T.
Consider the SSFS model with Hamming bonus. For every , there is a such that, for each number of time steps , there is no -competitive algorithm.
Let . The static set of feasible solutions is . Initially, and . As long as the algorithm has not picked item until some time , we set and again. Note that, in order to be -competitive, the algorithm however has to pick item eventually. Further, the ratio between the profit of the optimum and the algorithm during this part is as the length of this part approaches .
The remaining time horizon is partitioned into contiguous phases. Consider a phase that starts at time . The invariant at the beginning of the phase is that both the algorithm and the optimum have picked the same item in the previous time step . Let this item be w.l.o.g. item ; the other case is symmetric. Then and . By the same reasoning as above, we can assume the algorithm chooses an item at . Let be that item. Then and . As long as the algorithm is still not picking item during the time interval , and . Once the algorithm picks item at some time, the phase ends regularly; otherwise it ends by default.
Now consider a phase of length that ends regularly (note ). We claim that the values of the algorithm and the optimum have a ratio of at least
. This is because of the following estimates on the algorithm’s and optimum’s value:
In either case for , the algorithm obtains a value of in time step . Furthermore, the total bonus in all subsequent time steps is , because the algorithm has to switch from item to item . There is an additional profit of at time . Therefore, the total value is
The value of the optimum is at least : It chooses item already at time and keeps it until time , obtaining a value of in that time step and another in each subsequent time step.
This proves the claim and thereby the theorem as a phase that ends by default can be extended to one that ends regularly by modifying the optimum’s and algorithm’s values by constants. ∎
2.2 Intersection-Bonus Model
In the intersection-bonus model, things get harder, since an optimal solution may be of small size and thus give a very small (potential) bonus for the next step. As a matter of fact, the algorithm of the previous section has an unbounded competitive ratio in this case: take a large number n of objects, and at time 1 let all objects have profit 0 except one, which has a small positive profit. The algorithm will take only this object (instead of also taking objects of profit 0) and then potentially get a bonus of at most 1 instead of n.
Thus, we shall give the algorithm an incentive to take solutions of large size, in order to have a chance to get a large bonus. We define the following algorithm, called MP-Algo (for Modified Profit algorithm). Informally, at each time step t, the algorithm computes an optimal solution with respect to a modified objective function. The modifications take into account (1) the objects taken at time t − 1 and (2) an incentive to take many objects. Formally, MP-Algo works as follows:
At : let . Choose as an optimal solution for the problem with modified profits .
For from 2 to : let . Choose as an optimal solution for the problem with modified profit function .
At : let . Choose as an optimal solution with modified profit function .
The cases t = 1 and t = T are specific, since there is no previous solution for t = 1 and no future solution for t = T.
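Since the exact modified profit functions are elided above, the following sketch is only our reading of MP-Algo, consistent with the informal description: an intersection term with the previous solution plus a cardinality incentive, dropped at t = 1 and t = T respectively. The brute-force search stands in for the assumed optimization oracle.

```python
from itertools import combinations

def mp_algo(objects, T, feasible, profit):
    """Hedged sketch of MP-Algo (our reading of the modified profits):
    at 1 < t < T, maximize  w_t(S) + |S ∩ S_{t-1}| + |S|;
    at t = 1 drop the intersection term; at t = T drop the |S| term.
    feasible(t, S) and profit(t, S) describe the instance at time t."""
    sol = []
    for t in range(1, T + 1):
        prev = sol[-1] if sol else frozenset()
        best, best_val = frozenset(), float("-inf")
        for k in range(len(objects) + 1):
            for tup in combinations(objects, k):
                S = frozenset(tup)
                if not feasible(t, S):
                    continue
                val = profit(t, S)
                if t > 1:
                    val += len(S & prev)  # bonus term w.r.t. previous step
                if t < T:
                    val += len(S)         # incentive toward large solutions
                if val > best_val:
                    best, best_val = S, val
        sol.append(best)
    return sol
```

With all profits 0 and every set feasible, the cardinality incentive makes the algorithm take the full set at every step, securing the maximum bonus.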
In the SSFS model with intersection bonus, MP-Algo is -competitive.
Let be an optimal sequence. Since is optimal with respect to , for we have:
Since is also a feasible solution at time , we have:
Similarly, at so
At , so
Since in the optimal sequence the transition bonus between time and is at most , we get:
From this we easily derive:
We note that a competitive ratio of 2 can be derived with a similar analysis when the number of time steps is 2 or 3. We now show a matching lower bound (which is also valid in the asymptotic setting).
Consider the SSFS model with intersection bonus. For any ε > 0, there is a T_0 such that for any number of time steps T ≥ T_0, there is no (2 − ε)-competitive algorithm.
We consider T time steps and a set of n > T objects. The objective function is linear, and feasible solutions are sets of at most one object. At t = 1, the profit of each object is 1. Then, at each time step, if the algorithm takes an object, this object will have profit 0 until the end. While an object is not taken by the algorithm, its profit remains 1.
Since the algorithm takes at most one object at each time step, there is an object which is never taken until the last step. The solution taking this object during the whole process has value T + (T − 1) = 2T − 1. But at each time step the algorithm either takes a new object (getting profit 1 but no bonus) or keeps the previously taken object (getting bonus 1 but no profit). So the value of the computed solution is at most T. The ratio is thus (2T − 1)/T, which tends to 2. ∎
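The accounting in this proof can be illustrated by a small simulation (our own, for one concrete algorithm that takes an object and keeps it forever; the bound itself of course holds against every algorithm):

```python
def adversary_ratio(T):
    """Illustrative simulation of the intersection-bonus lower bound,
    against one concrete algorithm (take object 0 and keep it forever).
    Feasible solutions contain at most one object; B = 1."""
    alg_value = 0
    for t in range(1, T + 1):
        if t == 1:
            alg_value += 1  # takes object 0: profit 1, no bonus yet
        else:
            alg_value += 1  # keeps object 0: bonus 1, but profit is now 0
    # With more than T objects, some object is never taken and keeps
    # profit 1 throughout: T profits plus T - 1 bonuses.
    opt_value = T + (T - 1)
    return opt_value / alg_value

print(adversary_ratio(100))  # 1.99: the ratio tends to 2
```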
3 Model of General Evolution
We consider in this section that the set of feasible solutions may evolve over time. We will show that in the Hamming bonus model we can still get constant competitive ratios, though slightly worse than in the case where only profits change over time. Then we will tackle the intersection bonus model, showing that no constant competitive ratio can be achieved there. However, with 1-lookahead, we can get a constant competitive ratio.
3.1 Hamming-Bonus Model
In this section we consider the Hamming bonus model. We first show in Section 3.1.1 that there exists a 3-competitive algorithm. Interestingly, we then show in Section 3.1.2 that a slight assumption on the problem structure allows us to improve the competitive ratio: more precisely, we achieve a 21/8 (asymptotic) competitive ratio if we assume that the objective function is submodular (including the additive case) and that any subset of a feasible solution is feasible. These assumptions are satisfied by all the problems mentioned in the introduction. We finally consider lower bounds in Section 3.1.3.
3.1.1 General Case
We adapt the idea of the 2-competitive algorithm working for the Hamming bonus model with a static set of feasible solutions (Section 2.1) to the current setting, where the set of feasible solutions may change. Let us consider the following algorithm BestOrNothing: at each time step t, BestOrNothing computes an optimal solution with associated profit and compares this profit to a fixed fraction of the maximum potential bonus n. It chooses the optimal solution if its profit is large enough, and the empty set otherwise. A slight modification is applied for the last step T.
Formally, BestOrNothing works as follows:
For from 1 to :
Compute an optimal solution at time with associated profit
If set , otherwise set .
At time :
if , then .
Otherwise: if set , otherwise set .
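A sketch of BestOrNothing follows, with the elided threshold constant exposed as a parameter c, and with our reading of the last-step rule marked in the comments:

```python
def best_or_nothing(n, T, solve_opt, c=1.0):
    """Sketch of BestOrNothing for the GE/Hamming model.
    solve_opt(t) -> (S_t_star, p_t_star): an optimal one-step solution
    and its profit, as computed by the assumed oracle. The factor c
    multiplying the maximum bonus n is left as a parameter because the
    exact constant is elided in the text above."""
    sol = []
    for t in range(1, T + 1):
        S_star, p_star = solve_opt(t)
        if t < T:
            # take the step optimum only if its profit is large enough,
            # otherwise take the empty set (always feasible)
            sol.append(S_star if p_star >= c * n else frozenset())
        elif sol and sol[-1]:
            sol.append(S_star)  # last step, previous solution nonempty
        else:
            # previous solution empty: staying empty guarantees bonus n,
            # so take the step optimum only if it beats that (our reading)
            sol.append(S_star if p_star >= n else frozenset())
    return sol
```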
We show an upper bound on the competitive ratio achieved by this algorithm.
In the GE model with Hamming bonus, BestOrNothing is 3-competitive.
Proof of Theorem 3.1.1.
Let us define as the set of time steps where .
If , let be the largest element in . We first upper bound the loss of the algorithm up to time . We will then deal with the time period from up to .
The global profit of an optimal solution up to time is at most . Its bonus (including the one from time to ) is at most . So its global value is at most .
The solution computed by BestOrNothing gets profit at least . Note that it chooses the empty set at every time step except times, so it gets transition bonus at least times (each step in may prevent getting the bonus only between and , and between and ). So the global value of the computed solution up to time is at least .
Up to time , the ratio between the optimal value and the value of the solution computed by BestOrNothing satisfies
where we used the fact that .
Since the ratio is at most 3 up to time .
Now, let us consider the end of the process, from time (or 1 if is empty) up to time . If then we take the best solution at time and get no extra loss, so the algorithm is 3-competitive in this case.
Now assume . We know that BestOrNothing chooses the empty set up to . Let us first assume that . Then on the subperiod from to BestOrNothing gets value (bonuses), while the optimum gets bonus at most and profit at most . The optimal value is then at most .
Now suppose that . On the subperiod from to BestOrNothing gets value , while the optimum gets bonus at most and profit at most . The worst case ratio occurs when . In this case, as before, the value of the computed solution is , while the optimal value is at most .
Then, in all cases we have that the optimal value is at most . But (the computed solution has value at least up to , and then at least ), and the claimed ratio follows. ∎
3.1.2 Improvement for Submodularity and Subset Feasibility
In this section we assume that the problem has the following two properties:
subset feasibility: at any time step, every subset of a feasible solution is feasible.
submodularity: for any time step t, any sets S ⊆ S' ⊆ X, and any x ∉ S', we have w_t(S ∪ {x}) − w_t(S) ≥ w_t(S' ∪ {x}) − w_t(S').
Note that this implies that, if a feasible set S is partitioned into (disjoint) subsets S^1, ..., S^k, then each S^i is feasible and w(S) ≤ w(S^1) + ... + w(S^k).
We exploit this property to devise algorithms where we partition the set of objects and solve the problem on subinstances. As a first idea, let us partition the set of objects into three sets X1, X2, X3 of size (roughly) n/3, and consider the algorithm which at every time step computes the best solution on each of the subinstances induced by X1, X2 and X3, and chooses the one of maximum profit among these 3 solutions. By submodularity and subset feasibility, the algorithm gets profit at least 1/3 of the optimal profit at each time step. Dealing with bonuses, at each time step the algorithm chooses a solution included either in X1, or in X2, or in X3; so, for any t, at least one set among X1, X2, X3 contains no chosen object at time t or at time t+1, and the algorithm gets transition bonus at least n/3. Hence, the algorithm is 3-competitive.
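One step of this first, simple 3-competitive idea (pick the most profitable feasible solution contained in one of the three parts) can be sketched as follows, with a brute-force search standing in for the assumed optimization oracle:

```python
from itertools import combinations

def three_part_step(parts, feasible, profit):
    """One step of the simple 3-competitive idea: given a partition of
    the ground set into parts X1, X2, X3, return the most profitable
    feasible solution contained in a single part (brute force)."""
    best, best_val = frozenset(), 0  # the empty set is always feasible
    for part in parts:
        for k in range(1, len(part) + 1):
            for tup in combinations(sorted(part), k):
                S = frozenset(tup)
                if feasible(S):
                    v = profit(S)
                    if v > best_val:
                        best, best_val = S, v
    return best, best_val
```

For a modular profit equal to the sum of object labels over parts {0,1}, {2,3}, {4,5} (with every subset feasible), the step returns {4, 5} with profit 9.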
We now improve the previous algorithm. The basic idea is to remark that if for two consecutive time steps the solutions S_t and S_{t+1} are taken in the same subset, say X1, then the bonus is (at least) 2n/3 instead of n/3. Roughly speaking, we can hope for a ratio better than 1/3 on the bonus. The algorithm then makes a tradeoff at every time step: if the profit is very high, it takes a solution maximizing the profit; otherwise it does (nearly) the same as previously. More formally, let us consider the algorithm 3-Part. We first assume that n is a multiple of 3. A threshold parameter used below will be defined later.
Partition X into three subsets X1, X2, X3 of size n/3.
For : compute a solution maximizing
Case (1): If : define
Otherwise (): compute solutions with optimal profit , , included in , and . Let , and the respective profits.
Case (2): if and Case (1) did not occur at , do:
If (resp. , ), compute (resp. , ) and define as , or accordingly.
Case (3) ( or Case (1) occurred at ) do:
Define as the solution with maximum profit among , , .
If n is not a multiple of 3, we add one or two dummy objects that are in no feasible solution (at any step). We prove an upper bound on the competitive ratio of this algorithm.
Consider the GE model with Hamming bonus. Under the assumptions of subset feasibility and submodularity, 3-Part is (asymptotically) 21/8-competitive.
We mainly show that in each case (1), (2) or (3) the computed solution achieves the claimed ratio.
Let us first consider a time step where Case (2) occurs. It means that Case (2) or (3) occurred at the previous step, so is included in , or . Suppose w.l.o.g that algorithm took . Then gives a bonus at least (between and ), and and gives a bonus at least . By computing , we derive:
where is a solution maximizing the profit at time , using the fact that by subset feasibility and submodularity. Since in Case (2) , we derive:
Now, consider a time step where Case (3) occurs. Then necessarily Case (1) occurs at step . So . Also, has profit at least . So
Since , we get:
Since , provided that we choose such that , we get:
Finally, suppose that Case (1) occurs at some step . Then and , so
By setting , we get .
It remains to look at step 1. If (Case (1)), then , so there is no profit loss. Otherwise, , Case (3) occurs, and the loss is at most . Since the optimal value is at least , the loss is a fraction at most of the optimal value.
If n is not a multiple of 3, adding one or two dummy objects adds or to the solution values, inducing a loss which is a fraction at most of the optimal value. ∎
3.1.3 Lower Bounds
We complement the algorithmic results with a lower bound for two time steps and an asymptotic one. Interestingly, these bounds are also valid for the restricted setting above with subset feasibility and submodularity.
Consider the GE model with Hamming bonus. For any ε > 0, there is no (2 − ε)-competitive algorithm.
We consider a knapsack problem with n objects (the proof is valid for any number of objects) and two time steps. At time 1, all objects have weight 1 and a small profit δ > 0; the capacity of the bag is n.
Let S be the set of objects chosen at step 1 by the algorithm (possibly S = ∅). At t = 2 the algorithm receives the instance where:
the capacity is unchanged;
each object not in S receives weight 1 and profit 1;
each object in S receives a weight greater than the capacity (so it can no longer be taken).
Then at step 2 the algorithm receives value 1 for each object not in S (either by transition bonus from step 1, or by taking it at step 2), so the value of its solution is at most δ|S| + (n − |S|). Now, the solution consisting of taking the complement of S at both time steps gets bonus n and profit at least n − |S|, so it has value at least 2n − |S|. For δ small enough, the ratio is thus at least 2 − ε. ∎
Consider the GE model with Hamming bonus. For every , there is a such that, for each number of time steps , there is no -competitive algorithm where .
Consider some and some online algorithm A. The ground set only consists of the single item , that is, .
At time , it is not feasible to pick the item, that is, . We partition the remaining time horizon (with yet to be specified) into phases. Hence, the first phase starts in time step . In any phase, as long as A has not included item in its solution until time , both including and not including it is feasible at , that is, . Once A includes the item in its solution at time (meaning ), including it becomes unfeasible at the next time, that is, . The current phase also ends at this time. In this case, we say that the phase ends regularly. At , the current phase ends by default in any case. If a phase however ends regularly at time , a new phase starts at time .
There is no profit associated with the empty set, that is, ; the profit is whenever is the first time step of a phase, and it is in all other cases (note that, however, it may be unfeasible to include item in the solution). The remaining part of the proof is concerned with finding so as to maximize the competitive ratio.
For the analysis, denote by the optimal solution, and denote by the solution that A finds. We consider phases separately. First consider a phase of length starting at time ending regularly (at time ). Note that and that the initial situation is independent of and in that . For each time that is part of the phase, we count and towards the values of the optimum and algorithm, respectively. If , the resulting values of the algorithm and optimum are and , respectively. If , the values are and , respectively. Hence, in phases of length at most , the optimum does not pick the item; in longer phases, it picks the item at all times when it can.
To express the lower bound that we can show, first note that assuming that each phase ends regularly is only with an additive constant loss in both the algorithm’s and the optimum’s value, so we may make this assumption for the asymptotical competitive ratio considered here. Since the algorithm chooses the phase lengths, the lower bound that we can show here is equal to the largest lower bound on the ratio between the optimum’s and the algorithm’s value within any phase, which is lower bounded by
according to the above considerations.
3.2 Intersection-Bonus Model
We now look at the general-evolution model with intersection bonus. This model is different from the ones considered before: we first give a simple lower bound showing that there is no constant-competitive algorithm.
In the GE model with intersection bonus, there is no c-competitive algorithm for any constant c.
We consider an instance with no profits. Let X = {1, 2} and F_1 = {∅, {1}, {2}}; that is, there are two items, and at time 1 it is only forbidden to take both of them. Assume w.l.o.g. that the algorithm does not pick item 2 at time 1. Then picking item 1 becomes infeasible at time 2 while picking item 2 remains feasible. The algorithm thus achieves profit and bonus 0, while the optimum can achieve a bonus of 1 (by picking item 2 at both time steps). ∎
Note that in this model, by adding dummy time steps giving no bonus and no profit, the previous lower bound extends to any number of time steps. This lower bound motivates considering the 1-lookahead model: at time t, besides the instance of time t, the algorithm also knows the instance of time t+1; it must then decide the feasible solution chosen at time t. We consider an algorithm based on the following idea: at some time step t, the algorithm computes an optimal sequence of 2 solutions of value v for the subproblem defined on time steps t and t+1. Suppose it fixes the first of these solutions as its choice for time t. Then, at time t+1, it computes an optimal 2-step sequence of value v' for time steps t+1 and t+2. Depending on the values v and v', it will either confirm its choice made at time t (getting in this case the value v for sure between times t and t+1), or change its mind (with possibly no value obtained yet, but a value v' if it confirms this new choice at time t+2). When a choice is confirmed ( and