## 1 Introduction

In the (static) set cover problem, an algorithm is given a collection $\mathcal{F}$ of sets over a universe $\mathcal{U}$ of elements, such that $S \subseteq \mathcal{U}$ for every $S \in \mathcal{F}$. Each set $S \in \mathcal{F}$ has a positive cost $c_S > 0$. After scaling these costs by an appropriate factor, we can always get a parameter $C \geq 1$ such that:

(1.1) $\quad 1/C \;\leq\; c_S \;\leq\; 1 \quad \text{for every set } S \in \mathcal{F}.$

For any $\mathcal{F}' \subseteq \mathcal{F}$, let $c(\mathcal{F}') = \sum_{S \in \mathcal{F}'} c_S$ denote the total cost of all the sets in $\mathcal{F}'$. We say that a set $S$ covers an element $e$ iff $e \in S$. Our goal is to pick a collection $\mathcal{F}' \subseteq \mathcal{F}$ of sets with minimum total cost so as to cover all the elements in the universe $\mathcal{U}$.

Set cover is a fundamental optimization problem that has been extensively studied in the contexts of polynomial-time approximation algorithms and online algorithms. In recent years, it has also received significant attention in the dynamic algorithms community, where the goal is to efficiently maintain a set cover $\mathcal{F}'$ of small cost under a sequence of element insertions/deletions in $\mathcal{U}$. In particular, a dynamic algorithm for set cover must support the following update operations.

Preprocess($m$): Create $m$ empty sets in $\mathcal{F}$. Return identifiers (e.g., integers) for the sets in $\mathcal{F}$.

Insert($S_1, \ldots, S_k$): Insert into $\mathcal{U}$ a new element $e$ which belongs to the sets $S_1, \ldots, S_k$ (their identifiers are given as parameters). Return an identifier for the new element $e$, and the identifiers of the sets that get added to and removed from $\mathcal{F}'$.

Delete($e$): Delete the element $e$ from $\mathcal{U}$. Return the identifiers of the sets that get added to and removed from $\mathcal{F}'$.

After each update, the algorithm must guarantee that $\mathcal{F}'$ is a set cover; i.e. every element $e \in \mathcal{U}$ is in some set in $\mathcal{F}'$. Let $n$ be the maximum size of $\mathcal{U}$, and let $f$ be the maximum number of sets containing any single element, over all updates. The parameter $f$ is known as the maximum frequency.
It is usually assumed that $n$ and $f$ are known and fixed in the beginning, but note that our algorithm does not really need this assumption.^{1}^{1}1We mention that our algorithm does not really need the preprocessing step. When Insert() is called with a new set $S$, it can simply create $S$ on the fly.
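To make the update interface above concrete, here is a minimal Python sketch (the class and method names are ours, not from the paper). It maintains only the trivial cover consisting of every set that contains a live element, with no approximation guarantee; its sole purpose is to illustrate the Preprocess/Insert/Delete contract and the returned identifiers:

```python
class NaiveDynamicSetCover:
    """Illustrates the update interface only; the maintained cover is the
    trivial one (every set containing a live element), not an approximation."""

    def __init__(self, m):
        # Preprocess(m): create m empty sets, identified by 0..m-1.
        self.sets = {i: set() for i in range(m)}
        self.element_sets = {}   # element id -> ids of the sets containing it
        self.cover = set()       # ids of sets currently in the cover
        self.next_eid = 0

    def insert(self, set_ids):
        # Insert a new element belonging to the sets with the given ids.
        e = self.next_eid
        self.next_eid += 1
        self.element_sets[e] = list(set_ids)
        added = []
        for i in set_ids:
            self.sets[i].add(e)
            if i not in self.cover:
                self.cover.add(i)
                added.append(i)
        return e, added, []      # element id, sets added, sets removed

    def delete(self, e):
        # Delete the element e; drop sets that become empty from the cover.
        removed = []
        for i in self.element_sets.pop(e):
            self.sets[i].discard(e)
            if not self.sets[i] and i in self.cover:
                self.cover.remove(i)
                removed.append(i)
        return [], removed       # sets added, sets removed
```

A real algorithm differs only in *which* sets it reports as added/removed after each update; the interface stays the same.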
Note that dynamic set cover as defined above is a generalization of the dynamic vertex cover problem (where $f = 2$), which, together with the dynamic maximum matching problem, has been studied extensively in recent years (e.g. [31, 3, 20, 13, 9, 10, 12, 35, 30, 32, 18, 6, 7, 8, 33, 36, 2, 25]).

The performance of dynamic algorithms is mainly measured by the update time, the time to handle each Insert and Delete operation. Previous works on set cover focus on the amortized update time, where an algorithm is said to have an amortized update time of $T$ if, for any $t$, the total time it spends to process the first $t$ updates is at most $T \cdot t$.
We also consider only the amortized update time in this paper, and simply use “update time” to refer to “amortized update time”.
The time for the Preprocess operation is called the preprocessing time. It is typically not a big concern as long as it is polynomial.^{2}^{2}2Our algorithm requires only linear preprocessing time.

Perspective:
Since the static set cover problem is NP-hard, it is natural to consider approximation algorithms. An algorithm has an approximation ratio of $\lambda$ if it outputs a set cover $\mathcal{F}'$ with $c(\mathcal{F}') \leq \lambda \cdot \mathrm{OPT}$, where $\mathrm{OPT}$ is the cost of the optimal set cover.
Since the tight approximation factors for polynomial-time static set cover algorithms are $\Theta(\log n)$ and $f$ (e.g. [17, 16, 15, 34, 26]), it is natural to ask whether one can also obtain these same guarantees in the dynamic setting with small update time.
The $O(\log n)$ approximation ratio was already achieved in 2017 via greedy-like techniques by Gupta et al. [19]. Their algorithm is deterministic and has $O(f \log n)$ update time. This update time is only an $O(\log n)$ factor away from the trivial $\Omega(f)$ lower bound – the time needed for specifying the (up to $f$) sets that contain a given element (which is currently being inserted).
A similar lower bound holds even in some settings where updates can be specified with $o(f)$ bits [1], e.g., when elements and sets are fixed in advance, and the updates are activations and deactivations of elements.^{3}^{3}3Abboud et al. [1] showed that, under SETH, there is no algorithm with polynomial preprocessing time and $O(f^{1-\delta})$ update time for any constant $\delta > 0$ when elements and sets are fixed in advance, and the updates are activations and deactivations of elements.
The approximation ratio $(1+\epsilon)f$ was recently achieved by Abboud et al. [1] (improving upon the approximation factors of $O(f^2)$ and higher by [11, 19, 9]). Abboud et al. show how to maintain a $(1+\epsilon)f$-approximation in $O(f^2/\epsilon)$ amortized update time. Their algorithm, however, is randomized and does not work for the weighted case (when different sets have different costs).^{4}^{4}4A fundamental difficulty in extending Abboud et al.'s algorithm to the weighted case is the static algorithm it is based on. This static algorithm repeatedly picks an element $e$ that is not yet covered, and adds all sets containing $e$ to the set cover solution. It is easy to prove that this algorithm returns an $f$-approximation in the unweighted case. It is also easy to construct an example showing that this algorithm cannot guarantee any reasonable approximation ratio in the weighted case.
Like most randomized dynamic algorithms currently existing in the literature, it works only when the future updates do not depend on the algorithm's past output – this is known as the oblivious adversary assumption. Removing this assumption is a central question in this area, since doing so may in general make many dynamic algorithms useful as subroutines inside fast static algorithms [14, 29, 5, 4, 27, 28, 12, 23]. Accordingly, prior to our work, it was natural to ask if there is an efficient $(1+\epsilon)f$-approximation algorithm for dynamic set cover that is deterministic and/or can handle the weighted case. In this paper we answer this question positively.

Reference | Approximation Ratio | Update Time | Deterministic? | Weighted?
---|---|---|---|---
[19] | $O(\log n)$ | $O(f \log n)$ | yes | yes
[19, 9] | $O(f^3)$ | $O(f^2)$ | yes | yes
[11] | $O(f^2)$ | $O(f \log (m+n))$ | yes | yes
[1] | $(1+\epsilon)f$ | $O(f^2/\epsilon)$ | no | no
Our result | $(1+\epsilon)f$ | $O(f \log (Cn)/\epsilon^2)$ | yes | yes

###### Theorem 1.1.

We can maintain a $(1+\epsilon)f$-approximate minimum-cost set cover in the dynamic setting, deterministically, with $O(f \log(Cn)/\epsilon^2)$ amortized update time, where $C$ is the ratio between the maximum and minimum set costs.

Thus, we simultaneously (a) improve upon the update time of Abboud et al. [1], (b) derandomize their result, and (c) extend their result to the weighted case. Our algorithm, together with the one in Gupta et al. [19], settles an important open question in a line of work on dynamic set cover [9, 11, 19, 1]. We can now get an $O(\log n)$-approximation and a $(1+\epsilon)f$-approximation using deterministic algorithms with $O(f \log n)$ and $O(f \log(Cn)/\epsilon^2)$ update times, respectively. These approximation ratios match the ones achievable by the best possible polynomial-time static algorithms, whereas the update times are only polylogarithmic factors away from the trivial lower bound of $\Omega(f)$.

### 1.1 Technical Overview

Previous Approaches: The primal-dual schema is a powerful tool for designing static approximation algorithms. In recent years, it has also been the main driving force behind deterministic dynamic algorithms for set cover and maximum matching (e.g. [10, 12, 9, 11, 19]), including all $O(f)$-approximations for dynamic set cover except the one by Abboud et al. [1].

The dual of minimum set cover happens to be a fractional packing problem, which is defined as follows. Given a set system $(\mathcal{U}, \mathcal{F})$ as input, we have to assign a fractional weight $w_e \geq 0$ to every element $e \in \mathcal{U}$. We want to maximize $\sum_{e \in \mathcal{U}} w_e$, subject to the following constraint:

(1.2) $\quad \sum_{e \in S} w_e \;\leq\; c_S \quad \text{for every set } S \in \mathcal{F}.$

For the rest of this paper, we let $W_S = \sum_{e \in S} w_e$ denote the total weight received by a set $S$ from all its elements. Furthermore, define $w(E') = \sum_{e \in E'} w_e$ for every subset of elements $E' \subseteq \mathcal{U}$. Thus, the goal is to maximize $w(\mathcal{U})$ subject to the constraint that $W_S \leq c_S$ for all sets $S \in \mathcal{F}$.

Let $\mathrm{OPT}(\mathcal{U}, \mathcal{F})$ denote the total cost of the minimum set cover in $(\mathcal{U}, \mathcal{F})$. We simply write $\mathrm{OPT}$ when $(\mathcal{U}, \mathcal{F})$ is clear from the context. All the previous primal-dual algorithms for dynamic set cover try to maintain some invariants about individual sets at all times. In particular, they are based on the following lemma.

###### Lemma 1.2.

Consider any fractional packing $w$ in $(\mathcal{U}, \mathcal{F})$ that satisfies (1.2). Then we have $w(\mathcal{U}) \leq \mathrm{OPT}$. In addition, if there exist a set cover $\mathcal{F}' \subseteq \mathcal{F}$ of $(\mathcal{U}, \mathcal{F})$ and an $\alpha \geq 1$ such that

(1.3) $\quad c_S \;\leq\; \alpha \cdot W_S \quad \text{for every set } S \in \mathcal{F}',$

then we have $c(\mathcal{F}') \leq \alpha f \cdot w(\mathcal{U}) \leq \alpha f \cdot \mathrm{OPT}$.

All the previous dynamic primal-dual algorithms maintain a set cover $\mathcal{F}'$ and a fractional packing $w$ that satisfy (1.3). Needless to say, the algorithms have to change $\mathcal{F}'$ and the weights of some elements in a carefully chosen manner after each update. For example, the algorithm of Bhattacharya et al. [11] satisfies (1.3) with $\alpha = O(f)$, implying an $O(f^2)$-approximation factor. To obtain a $(1+\epsilon)f$ approximation factor we need to satisfy (1.3) with $\alpha = 1+\epsilon$. However, as pointed out by Abboud et al. [1], it is not clear how to maintain such a strict constraint efficiently for $\alpha = 1+\epsilon$. The trouble is that one update may violate (1.3) for some set, and fixing it may cause a cascade of weight changes for many elements. This creates difficulties in bounding the update time (which typically requires intricate arguments via clever potential functions).

Because of this difficulty, Abboud et al. opted for a different approach that is based on the following static algorithm. (i) Pick any uncovered element uniformly at random, called a pivot. (ii) Include all sets containing the pivot in the set cover solution. (iii) Repeat this process until all elements are covered. It is easy to see that this algorithm returns an $f$-approximation for the unweighted case (when every set has the same cost). Since this approximation ratio does not hold for the weighted case even in the static setting, it seems difficult to extend the approach of Abboud et al. to the weighted case. More importantly, in the analysis of their algorithm in the dynamic setting, Abboud et al. crucially rely on randomness and the oblivious adversary assumption. This allows them to argue that before a pivot element gets deleted, many other non-pivot elements must also get deleted in expectation. Thus they can charge the time their algorithm spends handling the deletion of a pivot to the (large number of) non-pivot elements that were deleted in the past. This type of argument was also used for maintaining a maximal matching [3, 35]. To the best of our knowledge, there was no technique to derandomize this type of argument. In fact, all the known deterministic dynamic algorithms for set cover and matching use the primal-dual schema. This leads to a basic question: Is the primal-dual approach powerful enough to give a $(1+\epsilon)f$-approximation algorithm for dynamic set cover?

We answer this question in the affirmative. Unlike previous dynamic primal-dual algorithms, which try to satisfy conditions like (1.3) that are local to individual sets, our algorithm basically waits until the dual solution changes significantly globally, and then it fixes the solution only where the fix is needed.

Our Approach and the Showcase (Batch Deletion): To appreciate our main idea, consider the batch deletion setting, where we have to preprocess a set system $(\mathcal{U}, \mathcal{F})$ and then there is a single update that changes the set system to $(\mathcal{U} \setminus D, \mathcal{F})$, where $D \subseteq \mathcal{U}$. Our goal is to recompute an approximately minimum set cover of the new input in time proportional to the size of the update (i.e., roughly $|D|$, up to lower-order factors).

Suppose that originally we have a pair $(\mathcal{F}', w)$ that satisfies (1.3) with $\alpha = 1+\epsilon$ for $(\mathcal{U}, \mathcal{F})$; thus, $c(\mathcal{F}') \leq (1+\epsilon) f \cdot \mathrm{OPT}(\mathcal{U}, \mathcal{F})$. Clearly, $\mathcal{F}'$ remains a set cover of the new set system $(\mathcal{U} \setminus D, \mathcal{F})$. To simplify things even further, suppose that the element-weights are uniform,^{5}^{5}5This means that $w_e = \mu$ for every $e \in \mathcal{U}$ (for some $\mu > 0$). and $|D| \leq \epsilon n$ for some $\epsilon \in (0,1)$. This implies that $w(D) \leq \epsilon \cdot w(\mathcal{U})$. So Lemma 1.2 gives us:

###### Observation 1.3.

If $|D| \leq \epsilon n$ and the element-weights are uniform, then $c(\mathcal{F}') \leq \frac{1+\epsilon}{1-\epsilon} \cdot f \cdot \mathrm{OPT}(\mathcal{U} \setminus D, \mathcal{F})$.

In words, $\mathcal{F}'$ remains a good approximation to $\mathrm{OPT}(\mathcal{U} \setminus D, \mathcal{F})$ if $|D|$ is small. Thus, intuitively we do not need to do anything when $|D| \leq \epsilon n$. This is already different from previous primal-dual algorithms, which might have to do a lot of work, since (1.3) might be violated. In contrast, when $|D|$ becomes larger than $\epsilon n$, in $O(fn)$ time we can just compute a new pair $(\mathcal{F}', w)$ satisfying (1.3) with $\alpha = 1+\epsilon$ for the set system $(\mathcal{U} \setminus D, \mathcal{F})$. Since we do this only after $\epsilon n$ deletions, we get an amortized update time of $O(f/\epsilon)$.

One lesson from the above argument is this: instead of trying to satisfy (1.3) at all times, we might benefit from dealing with $D$ only when its weight is large enough, for this might help us ensure that the amortized update time remains small. Of course, this is easy to argue under the uniform weight assumption, which makes the situation too simple. The following lemma is the key towards doing something similar in the general setting.

###### Lemma 1.4.

Recall that $w(E') = \sum_{e \in E'} w_e$ for every set of elements $E' \subseteq \mathcal{U}$. Suppose that:

(1.4) $\quad w(S \cap D) \;\leq\; \epsilon \cdot W_S \quad \text{for every set } S \in \mathcal{F}'.$

Then we have $c(\mathcal{F}') \leq \frac{1+\epsilon}{1-\epsilon} \cdot f \cdot w(\mathcal{U} \setminus D)$, and thus $c(\mathcal{F}') \leq \frac{1+\epsilon}{1-\epsilon} \cdot f \cdot \mathrm{OPT}(\mathcal{U} \setminus D, \mathcal{F})$.

###### Proof sketch.

Note that $w(S \cap D) = \sum_{e \in D} w_e \cdot \mathbb{1}[e \in S]$, where $\mathbb{1}[e \in S]$ is one if $e \in S$, and zero otherwise. By (1.4), $w(S \setminus D) = W_S - w(S \cap D) \geq (1-\epsilon) W_S$ for every $S \in \mathcal{F}'$. Thus, we have:
$$c(\mathcal{F}') \;\leq\; (1+\epsilon) \sum_{S \in \mathcal{F}'} W_S \;\leq\; \frac{1+\epsilon}{1-\epsilon} \sum_{S \in \mathcal{F}'} w(S \setminus D) \;\leq\; \frac{1+\epsilon}{1-\epsilon} \cdot f \cdot w(\mathcal{U} \setminus D),$$
where the first inequality uses (1.3) with $\alpha = 1+\epsilon$, and the last one holds because every element appears in at most $f$ sets.

∎

Lemma 1.4 tells us that if $w(S \cap D) \leq \epsilon \cdot W_S$ for all $S \in \mathcal{F}'$, then we do not have to do anything. When $w(S \cap D) > \epsilon \cdot W_S$ for some $S \in \mathcal{F}'$, we need to “fix” such a set $S$. We do so by running a static algorithm on some sets and elements. The static algorithm is described in Algorithm 1.5. We describe how to use it in Algorithm 1.6.

###### Algorithm 1.5 (Static uniform-increment).

Given an input $(\mathcal{U}, \mathcal{F})$, start with $w_e = 0$ for all $e \in \mathcal{U}$ and $\mathcal{F}' = \emptyset$. Repeat the following until $\mathcal{F}'$ covers all elements: (i) Raise the weights $w_e$ at the same rate for every element $e$ not covered by $\mathcal{F}'$, until some sets in $\mathcal{F} \setminus \mathcal{F}'$ become tight (i.e. $W_S = c_S$). (ii) Add such sets to $\mathcal{F}'$.
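A direct (continuous-rate) implementation of Algorithm 1.5 can be sketched as follows. In each round we compute, for every non-tight set with uncovered elements, the uniform increment that would make it tight, apply the smallest such increment to all uncovered elements, and collect the sets that become tight. The instance format and function names are ours:

```python
def uniform_increment_cover(universe, sets, cost):
    """Static primal-dual set cover (a sketch of Algorithm 1.5): raise the
    weights of all uncovered elements at a common rate, and whenever a set
    becomes tight (W_S = c_S), freeze its elements and add it to the cover.
    Assumes every element belongs to at least one set."""
    w = {e: 0.0 for e in universe}
    cover, covered = [], set()
    while covered != set(universe):
        # Smallest uniform increment that makes some non-tight set tight.
        best = None
        for s, elems in sets.items():
            if s in cover:
                continue
            uncov = [e for e in elems if e not in covered]
            if not uncov:
                continue
            delta = (cost[s] - sum(w[e] for e in elems)) / len(uncov)
            if best is None or delta < best:
                best = delta
        for e in set(universe) - covered:
            w[e] += best
        # Collect every set that is now tight (up to floating-point slack).
        for s, elems in sets.items():
            if s not in cover and sum(w[e] for e in elems) >= cost[s] - 1e-9:
                cover.append(s)
                covered |= set(elems)
    return cover, w
```

Because we always apply the globally smallest increment, no set's weight ever exceeds its cost, so the returned weights form a valid fractional packing and every returned set is tight.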

###### Algorithm 1.6 (Batch deletion algorithm).

In short, Algorithm 1.6 fixes the pair $(\mathcal{F}', w)$ by selecting an appropriate subset $E'$ of elements (containing the surviving elements of the violating sets), resetting their weights, and running the static Algorithm 1.5 on the subinstance induced by $E'$.
We can implement Algorithm 1.5 on such a subinstance in time roughly proportional to $f \cdot |E'|$.^{6}^{6}6We can implement an approximate version of Algorithm 1.5, where element weights are of the form $(1+\epsilon)^{-k}$ for integers $k \geq 0$, and we say that a set is tight if $W_S \geq c_S/(1+\epsilon)$, increasing the approximation ratio by another multiplicative $(1+\epsilon)$ factor. It is not hard to see that this can be done in $O(f|E'|\log(Cn)/\epsilon)$ time. We further show that the $\log(Cn)$ term can be eliminated.
This gives an amortized update time of roughly $O(f/\epsilon)$, since we can charge the time spent in “fixing” to the elements in $D$ that got deleted.
The lemma below implies the correctness of this strategy.

###### Lemma 1.7.

Suppose that Algorithm 1.6 is invoked as above, with the initial weights computed by Algorithm 1.5. Then (i) every element of $\mathcal{U} \setminus D$ is contained in some tight set when Algorithm 1.6 finishes, and (ii) the pair $(\mathcal{F}', w)$ produced by Algorithm 1.6 satisfies (1.4).

###### Proof Idea.

For (i), Algorithm 1.5 stops raising the weight of an element $e$ only when some set $S$ containing $e$ is tight. At this point we also stop raising the weight of every other uncovered element in $S$, so $S$ stays tight. For (ii), an intuition is that Algorithm 1.6 has removed the contribution $w(S \cap D)$ from every $W_S$ that violates (1.4). This subtraction does not increase $w(S' \cap D)$ for any set $S'$, and, thus, does not create any new violation of (1.4). ∎

Lemma 1.7(i) implies that we can add/remove sets to/from $\mathcal{F}'$ while fixing the violating sets, without worrying about the coverage of the elements that are not re-processed (they remain covered even when all the sets touched by the fix are removed from $\mathcal{F}'$). Consequently, we can guarantee that after processing the update, Algorithm 1.6 produces a collection $\mathcal{F}'$ that covers all the elements of $\mathcal{U} \setminus D$. Lemma 1.7(ii) immediately implies the claimed approximation guarantee. Note that it is crucial to apply our new Lemma 1.4 with the appropriate $\mathcal{F}'$, $w$, and $D$, which change after Algorithm 1.6 processes the update.

Note that running Algorithm 1.5 at preprocessing is crucial for the correctness of Algorithm 1.6, as otherwise $\mathcal{F}'$ might not cover all elements after Algorithm 1.6 finishes. In other words, Lemma 1.7(i) might not be true if we replace Algorithm 1.5 by some other static algorithm.^{7}^{7}7Consider, e.g., an instance in which all element-weights are zero except that of a single element. Deleting that element can make the new set system violate (1.4) at some set, and it is not enough to change only the weights of the surviving elements of that set. Lemma 1.7(i) guarantees that this will not happen if $w$ is computed in a certain way, as in Algorithm 1.5.
We believe that this is the key to the running-time improvement over Abboud et al.'s algorithm, since otherwise we might have to spend more time checking whether other elements remain covered. (This is essentially what happens in Abboud et al.'s algorithm.)
Note that Algorithm 1.5 was also used in previous dynamic algorithms [9, 11, 19], but to our knowledge it does not play a role in the correctness of those algorithms the way it does in ours.

One detail to mention is how Algorithm 1.6 finds the subset $E'$ of elements to re-process. One simple way is to round element weights to the form $(1+\epsilon)^{-k}$ for integers $k \geq 0$. (We refer to such $k$ as the level of an element in the rest of this paper.) With this rounding, we simply have to search over $L = O(\log_{1+\epsilon}(Cn)) = O(\log(Cn)/\epsilon)$ different choices of the level $k$, where $C$ is defined in Theorem 1.1. The total update time amortized over the deletions in $D$ becomes $O(fL/\epsilon) = O(f\log(Cn)/\epsilon^2)$.
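The rounding just described can be sketched as follows (parameter and function names are ours). Rounding a weight down to the nearest power of $(1+\epsilon)^{-1}$ determines the element's level and loses at most a multiplicative $(1+\epsilon)$ factor:

```python
import math

def level_of(w, beta):
    """Level of a weight w in (0, 1]: the smallest integer k >= 0 with
    beta**(-k) <= w, so rounding w down to beta**(-k) loses at most a
    factor beta (up to floating-point precision)."""
    return max(0, math.ceil(-math.log(w, beta)))

def round_weight(w, beta):
    # Round w down to a power of 1/beta.
    return beta ** (-level_of(w, beta))
```

With weights of this form, all elements of a given (rounded) weight can be grouped into one of $L$ levels, which is what makes the "search over $L$ choices of $k$" possible.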

The Fully-Dynamic Algorithm (Sketched). Extending the above algorithm to handle more deletions is rather straightforward: we include each newly deleted element in $D$, check whether some set violates (1.4), and then fix the solution as in Algorithm 1.6 if such a violation exists. This gives a decremental (i.e. deletion-only) algorithm with $O(f\log(Cn)/\epsilon^2)$ update time. This bound is smaller than the $O(f^2/\epsilon)$ bound from [1] when $f = \Omega(\log(Cn)/\epsilon)$, and it might be of independent interest given that decremental algorithms have been heavily studied and have led to some applications (e.g. [14, 4, 5, 24, 21, 22]).

Handling insertions, on the other hand, is more intricate. When an element $e$ is inserted, we set its weight to the maximum possible value that keeps every set containing $e$ within its capacity, i.e. the value that makes some set containing $e$ tight (note that elements in $D$ also contribute to the weights of sets). This means that if $e$ is already in a tight set, then $w_e = 0$. Otherwise, $w_e$ is increased until a new tight set is created, which is then added to $\mathcal{F}'$. We keep the newly inserted elements in a separate set (call it $P$ for now) because they do not receive their weights in the uniform way (as when we run Algorithm 1.5).^{8}^{8}8In particular, we can show that the weights of all elements outside $P$ are as if we ran the static algorithm (Algorithm 1.5) on this set system. We cannot say the same for $P$.
When Algorithm 1.6 calls Algorithm 1.5 on some subinstance, it tries to include, in a greedy manner, elements
from $P$ in the uniform weight-increment process, and it moves them out of $P$. See Section 3 and Appendix 5 for the details.
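The weight assigned to a newly inserted element can be sketched as follows (names ours; `W` maps each set to its current total weight, including the contribution of dead elements):

```python
def insertion_weight(sets_of_e, W, cost):
    """Largest weight the new element can take without pushing any set
    containing it above its capacity: the slack of its most loaded set.
    Returns 0.0 when the element already lies in a tight set."""
    return min(cost[s] - W[s] for s in sets_of_e)
```

For example, if the new element lies in two sets with costs 1.0 and current weights 0.7 and 0.2, the assigned weight is 0.3, which makes the first set tight.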

## 2 Minimum Set Cover in the Static Setting

In this section, we describe some basic concepts about the set cover problem in the static setting. We use the notation introduced in Section 1. We start with a simple lemma that follows from LP-duality.

###### Lemma 2.1.

Consider a valid set cover $\mathcal{F}' \subseteq \mathcal{F}$ and an assignment of nonnegative weights $w_e$ to the elements that satisfies (1.2), i.e. forms a valid fractional packing. If $c(\mathcal{F}') \leq \lambda \cdot \sum_{e \in \mathcal{U}} w_e$, then $\mathcal{F}'$ is a $\lambda$-approximate minimum set cover.

In Section 1, we described a simple static primal-dual algorithm that returns an $f$-approximate minimum set cover (see Algorithm 1.5). We now consider a discretized variant of this algorithm, which increases the weights of elements in powers of $1+\epsilon$, instead of increasing these weights in a continuous manner. This results in a hierarchical partition of the set-system $(\mathcal{U}, \mathcal{F})$, which assigns the sets and elements to different levels. In the Appendix, we explain how the algorithm generates this hierarchical partition. Here, we only state some important properties of the partition and show how these properties imply a $(1+\epsilon)f$-approximation for the minimum set cover problem. For the rest of the paper, we fix two parameters:

(2.1) $\quad \beta := 1+\epsilon \quad \text{and} \quad L := \lceil \log_{\beta}(Cn) \rceil.$

The algorithm outputs a hierarchical partition of the set-system $(\mathcal{U}, \mathcal{F})$, where each set $S \in \mathcal{F}$ is assigned to some level $\ell(S) \in \{0, 1, \ldots, L\}$. The level of an element $e$ is defined as the maximum level among all the sets it belongs to, i.e., $\ell(e) = \max_{S \in \mathcal{F} : e \in S} \ell(S)$. Note that if $e \in S$, then $\ell(e) \geq \ell(S)$.

Tight and slack sets: Recall that $W_S = \sum_{e \in S} w_e$ denotes the total weight received by a set $S$ from all its elements. We say that a set $S$ is tight if $c_S/\beta \leq W_S \leq c_S$ and slack if $W_S < c_S/\beta$. The hierarchical partition returned by the algorithm satisfies the following properties.
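As a sanity check, the three properties stated below can be verified mechanically on a small hand-built partition. The instance, the value of $\beta$, and all names in this sketch are ours:

```python
def check_partition(elems, sets, cost, level_of_set, w, beta):
    """Verify the hierarchical-partition properties:
    (2.2) w_e = beta**(-level(e)), level(e) = max level of a set containing e;
    (2.3) W_S <= c_S, and every slack set (W_S < c_S/beta) is at level 0;
    (2.4) every element lies in at least one tight set."""
    eps = 1e-9
    weight = {s: sum(w[e] for e in sets[s]) for s in sets}
    tight = {s for s in sets if weight[s] >= cost[s] / beta - eps}
    for e in elems:
        lvl = max(level_of_set[s] for s in sets if e in sets[s])
        assert abs(w[e] - beta ** (-lvl)) < eps        # Property 2.2
        assert any(e in sets[s] for s in tight)        # Property 2.4
    for s in sets:
        assert weight[s] <= cost[s] + eps              # Property 2.3
        if weight[s] < cost[s] / beta - eps:
            assert level_of_set[s] == 0
    return True
```

In the test instance below, the set at level 2 is tight and covers both elements, while the second set is slack and therefore must sit at level 0.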

###### Property 2.2.

For every element $e \in \mathcal{U}$, we have $w_e = \beta^{-\ell(e)}$.

###### Property 2.3.

Every set $S \in \mathcal{F}$ has $W_S \leq c_S$. Furthermore, every set $S$ that is slack has $\ell(S) = 0$.

###### Property 2.4.

Every element is contained in at least one tight set.

###### Lemma 2.5.

Let $\mathcal{T} \subseteq \mathcal{F}$ denote the collection of tight sets. They form a $(1+\epsilon)f$-approximate minimum set cover of the input $(\mathcal{U}, \mathcal{F})$.

###### Proof.

Since each element belongs to at most $f$ sets, and $c_S \leq \beta \cdot W_S$ for every tight set $S$, a simple counting argument gives us:

(2.2) $\quad c(\mathcal{T}) \;=\; \sum_{S \in \mathcal{T}} c_S \;\leq\; \beta \sum_{S \in \mathcal{T}} W_S \;=\; \beta \sum_{S \in \mathcal{T}} \sum_{e \in S} w_e \;\leq\; \beta f \sum_{e \in \mathcal{U}} w_e.$

By Property 2.4, every element is covered by some set in $\mathcal{T}$. In other words, the sets in $\mathcal{T}$ form a valid set cover. Furthermore, by Property 2.3, we have $W_S \leq c_S$ for all sets $S \in \mathcal{F}$. Accordingly, the weights assigned to the elements form a valid fractional packing. From (2.2) and Lemma 2.1, we now infer that the sets in $\mathcal{T}$ form a $\beta f = (1+\epsilon)f$-approximate minimum set cover of the input $(\mathcal{U}, \mathcal{F})$. ∎

## 3 Our Dynamic Algorithm

Consider the minimum set cover problem in a dynamic setting, where the input keeps changing via a sequence of element insertions and deletions. Specifically, during each update, an element is either inserted into or deleted from the set system $(\mathcal{U}, \mathcal{F})$. When an element $e$ is inserted, we get to know the sets in $\mathcal{F}$ that contain $e$. We assume that $f$ remains an upper bound on the maximum frequency of an element throughout this sequence of updates (although our dynamic algorithm does not need to know the value of $f$ in advance). We will present a deterministic dynamic algorithm for maintaining a $(1+O(\epsilon))f$-approximate minimum set cover in this setting with $O(f\log(Cn)/\epsilon^2)$ amortized update time.

### 3.1 Classification of elements

The main idea behind our dynamic algorithm is simple. We maintain a relaxed version of the hierarchical partition from Section 2 in a lazy manner. To be more specific, in the preprocessing phase we start with a set-system $(\mathcal{U}, \mathcal{F})$ where $\mathcal{U} = \emptyset$. At this point, every set $S \in \mathcal{F}$ is at level $0$ and has weight $W_S = 0$, and Properties 2.2, 2.3, 2.4 are vacuously true. Subsequently, while handling the sequence of updates, whenever we observe that a significant fraction of the elements has been deleted from the levels $\{0, \ldots, k\}$ for some $k$, we rebuild all these levels in a certain natural manner. We refer to the subroutine which performs this rebuilding as Rebuild($k$).

We will classify the elements into three distinct types – active, passive and dead. Let $A$, $P$ and $D$ respectively denote the sets of active, passive and dead elements. Informally, every element is active in the hierarchical partition described in Section 2, where we considered the static setting. To get the main intuition in the dynamic setting, consider an update at some time-step $t$, and suppose that this update does not lead to a call to the subroutine Rebuild($k$) for any $k$. Recall that a set $S$ is called tight when its weight lies in the range $[c_S/\beta, c_S]$ and slack when its weight lies in the range $[0, c_S/\beta)$. As in Section 2, suppose that the tight sets in the hierarchical partition form a valid set cover just before the update at time-step $t$. Now, consider three possible cases.

Case (a): The update at time-step $t$ deletes an element $e$. In this case, we classify the element $e$ as dead. We continue to pretend, however, that the element $e$ still exists, and we do not change its weight $w_e$. Thus, we take the value of $w_e$ into account while calculating the weight of any set $S \ni e$ in the fractional packing solution. This ensures that the collection of tight sets remains a valid set cover for the current input $(\mathcal{U}, \mathcal{F})$.

Case (b): The update at time-step $t$ inserts an element $e$ that belongs to at least one tight set. In this case, we assign the element $e$ to the level $\ell(e) = \max_{S \ni e} \ell(S)$, classify it as passive, and assign it a weight $w_e = 0$. This ensures that the tight sets continue to remain a set cover in $(\mathcal{U}, \mathcal{F})$.

Case (c): The update at time-step $t$ inserts an element $e$ such that all the sets containing $e$ are slack. In this case, Property 2.3 implies that every set containing the element $e$ lies at level $0$. Hence, we assign the element $e$ also to level $0$. Unlike in Case (b), however, here we can no longer leave the hierarchical partition unchanged, since in that event the collection of tight sets would no longer form a valid set cover. We address this issue in the following manner. Let $\mathcal{F}_e$ denote the collection of sets containing $e$. Note that $|\mathcal{F}_e| \leq f$. Let $q$ be the minimum value such that, if we increase the weight of each set in $\mathcal{F}_e$ by an additive $q$, then the weight of some set $S \in \mathcal{F}_e$ becomes equal to $c_S$. We classify the element $e$ as passive, and assign it the weight $w_e = q$. This ensures that the collection of tight sets again forms a valid set cover. This also leads to a very important consequence, which is stated below.

###### Claim 3.1.

A passive element $e$ receives a weight of $w_e \leq 1$ just after getting inserted.

###### Proof.

In Case (b) above, a passive element receives zero weight and the claim trivially holds. For the rest of the proof, consider the scenario described in Case (c) above. Recall that as per (1.1) we have $c_S \leq 1$ for every set $S$. Let $S$ be a set containing $e$ whose weight becomes equal to $c_S$ when we assign the weight $w_e = q$ to the element $e$ (see the description of Case (c) above). Thus, we must have $w_e \leq W_S = c_S \leq 1$. ∎

### 3.2 Levels and Weights of elements

Throughout the duration of our algorithm, the level of an element $e$ (regardless of whether it is active, passive or dead) will be defined to be $\ell(e) = \max_{S \in \mathcal{F} : e \in S} \ell(S)$. From the preceding discussion, we also conclude that the weights assigned to the elements satisfy the following conditions.

If an element $e$ is active, then $w_e = \beta^{-\ell(e)}$. In contrast, if an element $e$ is passive, then $0 \leq w_e \leq \beta^{-\ell(e)}$. Finally, if an element is dead, then its weight depends on its state at the time of its deletion. Specifically, if it was active at the time of its deletion, then $w_e = \beta^{-\ell(e)}$; if it was passive at that time, then $0 \leq w_e \leq \beta^{-\ell(e)}$. To summarize, a dead element always has $w_e \leq \beta^{-\ell(e)}$.

### 3.3 The shadow input and the invariants

Recall that the set $\mathcal{U}$ of current elements is partitioned into two subsets, namely $A$ and $P$. From the way we assign the weights to elements, it follows that our algorithm works by pretending as if the dead elements were still present in the input. Accordingly, we consider an input $(\mathcal{U}^*, \mathcal{F})$, where $\mathcal{U}^* = \mathcal{U} \cup D = A \cup P \cup D$. We refer to $(\mathcal{U}^*, \mathcal{F})$ as the shadow input (as opposed to the actual input $(\mathcal{U}, \mathcal{F})$). Indeed, the hierarchical partition maintained by our dynamic algorithm will be similar to the one from Section 2 on the shadow input $(\mathcal{U}^*, \mathcal{F})$, barring the fact that the passive/dead elements may have weights $w_e < \beta^{-\ell(e)}$. To explain this more formally, we use the following notations. For every set $S \in \mathcal{F}$, we let $W_S$ and $W^*_S$ respectively denote the total weight of all the elements that belong to $S$ in $(\mathcal{U}, \mathcal{F})$ and in $(\mathcal{U}^*, \mathcal{F})$. Our dynamic algorithm will satisfy the three invariants stated below. Invariant 3.1 follows from the discussion in Section 3.2. Invariant 3.2 is analogous to Property 2.3, whereas Invariant 3.3 is analogous to Property 2.4.

###### Invariant 3.1.

Consider any element $e \in A \cup P$. The level of $e$ is defined as $\ell(e) = \max_{S \in \mathcal{F} : e \in S} \ell(S)$. If $e \in A$, then we have $w_e = \beta^{-\ell(e)}$. Otherwise, if $e \in P$, then we have $0 \leq w_e \leq \beta^{-\ell(e)}$.

###### Invariant 3.2.

Every set $S \in \mathcal{F}$ satisfies $W^*_S \leq c_S$. Furthermore, every set $S$ with weight $W^*_S < c_S/\beta$ is at level $\ell(S) = 0$.

###### Invariant 3.3.

Each element $e \in A \cup P$ is contained in at least one set $S$ with $W^*_S \geq c_S/\beta$.

Let $\mathcal{T} = \{S \in \mathcal{F} : W^*_S \geq c_S/\beta\}$ be the collection of sets with large weights in the hierarchical partition maintained by our algorithm. Replacing Properties 2.3, 2.4 by Invariants 3.2, 3.3 in the proof of Lemma 2.5, we conclude that $\mathcal{T}$ gives a $(1+\epsilon)f$-approximate minimum set cover in the shadow input $(\mathcal{U}^*, \mathcal{F})$. Invariant 3.3 further implies that $\mathcal{T}$ is a valid set cover in the actual input $(\mathcal{U}, \mathcal{F})$. We will show later that $\mathcal{T}$ is in fact a $(1+O(\epsilon))f$-approximate minimum set cover in the actual input as well. This happens because, intuitively, our dynamic algorithm ensures that the actual input always remains close to the shadow input $(\mathcal{U}^*, \mathcal{F})$.

### 3.4 The dynamic algorithm

Recall that $A$, $P$ and $D$ respectively denote the sets of active, passive and dead elements. We let $A_k$, $P_k$ and $D_k$ respectively denote the sets of active, passive and dead elements at level $k$. Let $E_k$ denote the set of all elements in the current input $\mathcal{U}$ that are at level $k$. Thus, for each $k$, the set $E_k$ is partitioned into two subsets: $A_k$ and $P_k$. For each level $k$, we also define:

(3.1) $\quad E_{\leq k} := \bigcup_{j=0}^{k} E_j \quad \text{and} \quad D_{\leq k} := \bigcup_{j=0}^{k} D_j.$

For every level $k \in \{0, \ldots, L\}$, we maintain a counter $\mathrm{ctr}(k)$. Each call to Rebuild($j$) sets $D_{\leq j} = \emptyset$ and $\mathrm{ctr}(k) = \lceil \epsilon \cdot |E_{\leq k}| \rceil$ for all $k \leq j$. In contrast, every time an element gets deleted from some level $j$, we decrease the counter $\mathrm{ctr}(k)$ by one for all $k \geq j$. Finally, to ensure that the shadow input $(\mathcal{U}^*, \mathcal{F})$ remains close to the actual input $(\mathcal{U}, \mathcal{F})$, we call Rebuild($k$) whenever $\mathrm{ctr}(k)$ becomes equal to $0$. If the counters of multiple levels become $0$ during the same update, we call Rebuild($k$) for the largest such level $k$.
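The counter bookkeeping described above can be sketched as follows (the class and method names are ours, and the exact initialization of the counters is our reconstruction of the scheme, not the paper's verbatim rule):

```python
import math

class RebuildCounters:
    """Counters ctr(0..L): reset(j) re-initializes ctr(k) for all k <= j from
    the current level sizes; on_delete(j) decrements ctr(k) for all k >= j and
    reports the largest level whose counter reached zero (to be rebuilt)."""

    def __init__(self, L, eps, size_up_to):
        # size_up_to[k] = |E_{<=k}|, the number of current elements at levels <= k.
        self.L, self.eps = L, eps
        self.size_up_to = size_up_to
        self.ctr = [0] * (L + 1)
        self.reset(L)

    def reset(self, j):
        for k in range(j + 1):
            self.ctr[k] = math.ceil(self.eps * self.size_up_to[k])

    def on_delete(self, j):
        rebuild = None
        for k in range(j, self.L + 1):
            self.ctr[k] -= 1
            if self.ctr[k] <= 0:
                rebuild = k          # keep the largest such level
        return rebuild
```

A caller would invoke `on_delete` on each deletion and, whenever it returns a level `k`, run Rebuild($k$) followed by `reset(k)` with the fresh level sizes.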

Tight sets: As in Section 3.3, we will let $\mathcal{T} = \{S \in \mathcal{F} : W^*_S \geq c_S/\beta\}$ denote the collection of tight sets with respect to the shadow input $(\mathcal{U}^*, \mathcal{F})$.

Preprocessing phase: Initially, we have an input $(\mathcal{U}, \mathcal{F})$ where $\mathcal{U} = \emptyset$. At this point, $A = P = D = \emptyset$, every set $S \in \mathcal{F}$ is at level $0$ with weight $W^*_S = 0$, and hence Invariants 3.1, 3.2, 3.3 are vacuously true.

Handling the deletion of an element: When an element gets deleted, we call the subroutine described in Figure 1. Steps 01–02 in Figure 1 were explained under Case (a) in Section 3.1, whereas steps 03–07 in Figure 1 were explained while defining the counters $\mathrm{ctr}(k)$.

Handling the insertion of an element: When an element gets inserted, we call the subroutine in Figure 2. Steps 02 – 04 and 05 – 09 in Figure 2 were respectively discussed under Case (b) and Case (c) in Section 3.1.

Output of our algorithm: We maintain the collection of tight sets $\mathcal{T}$. We show in Section 4 that $\mathcal{T}$ is a $(1+O(\epsilon))f$-approximate minimum set cover in $(\mathcal{U}, \mathcal{F})$.

Correctness of the invariants: Suppose that Invariants 3.1, 3.2, 3.3 hold just before the deletion of an element $e$. This deletion is handled by the subroutine in Figure 1. It is easy to check that steps 01–02 in Figure 1 do not lead to a violation of any invariant. This is because the element $e$ gets moved from $\mathcal{U}$ to $D$, but its weight remains the same, and it still contributes to the weights $W^*_S$ of all the sets $S$ containing $e$.

Similarly, suppose that Invariants 3.1, 3.2 and 3.3 hold just before the insertion of an element $e$. We handle this insertion by calling the subroutine in Figure 2. Consider two possible cases.

Case (1): Steps 02–04 get executed in Figure 2. In this case, the element $e$ becomes passive with weight $w_e = 0$, and it belongs to at least one tight set. Thus, the weight of every set remains unchanged, and the three invariants continue to remain satisfied.

Case (2): Steps 05–09 get executed in Figure 2. In this case, all the sets containing $e$ have weights $W^*_S < c_S/\beta$ and are at level $0$ (see Invariant 3.2) at the time $e$ gets inserted. Let $\mathcal{F}_e$ denote the collection of sets containing $e$. After we assign the weight $w_e = q$ to the element $e$, some set $S \in \mathcal{F}_e$ gets weight $W^*_S = c_S$, and every other set $S' \in \mathcal{F}_e$ continues to have weight $W^*_{S'} \leq c_{S'}$ (even though its weight increased). The weights of the sets outside $\mathcal{F}_e$ do not change. This ensures that Invariants 3.2 and 3.3 continue to hold. Finally, revisiting the proof of Claim 3.1, we infer that Invariant 3.1 also continues to hold, since $e$ becomes passive with weight $w_e \leq 1 = \beta^{-\ell(e)}$.

To summarize, we conclude that if a call to the subroutine Rebuild($k$) never leads to a violation of the invariants, then the invariants continue to hold at all times.

Data structures: We use the following data structures. For each level $k$, we maintain the sets $A_k$, $P_k$ and $D_k$ as doubly linked lists. Each entry in each of these lists also maintains a bidirectional pointer to the corresponding element. Using these pointers, we can determine the state of a given element (e.g., whether it is active, passive or dead) and insert/delete it in a given list in $O(1)$ time.

For every element $e$, we maintain its level $\ell(e)$ and weight $w_e$. For every set $S \in \mathcal{F}$, we also maintain its level $\ell(S)$ and weight $W^*_S$ with respect to the shadow input. Finally, for every level $k$, we maintain the counter $\mathrm{ctr}(k)$.

### 3.5 The Rebuild() subroutine

A detailed description of the subroutine appears in Section 5. Here, we summarize a few key properties of this subroutine that will be heavily used in the analysis of our algorithm. Property 3.4 ensures that Invariants 3.1, 3.2, 3.3 do not get violated. Property 3.5 specifies the time taken to implement a call to the subroutine, and how the counters get updated as a result of this call. Property 3.6, on the other hand, explains how the subroutine changes the states and levels of different elements in the hierarchical partition.

###### Property 3.4.

A call to the subroutine Rebuild($k$), for any level $k$, does not lead to a violation of Invariants 3.1, 3.2 and 3.3.

###### Property 3.5.

Consider any level $k$. The time taken to implement a call to Rebuild($k$) is proportional to $f$ times the number of elements in $E_{\leq k} \cup D_{\leq k}$ at the beginning of the call, plus $O(L)$. Furthermore, at the end of this call, we have $\mathrm{ctr}(j) = \lceil \epsilon \cdot |E_{\leq j}| \rceil$ for all levels $j \leq k$.

###### Property 3.6.

Consider any level $k$ and any call to the subroutine Rebuild($k$).

(1) The call to Rebuild($k$) cleans up all the dead elements at levels $\leq k$. Specifically, this means the following. Consider any element that belongs to $D_{\leq k}$ just before the call to Rebuild($k$). Then that element does not appear in any of the sets $A_j$, $P_j$ or $D_j$ at the end of the call.

(2) The call to Rebuild($k$) converts some of the passive elements at levels $\leq k$ into active elements, while the remaining passive elements at levels $\leq k$ stay passive (possibly at new levels $\leq k$). Specifically, let $X$ denote the set of elements in $\bigcup_{j \leq k} P_j$ just before the call to Rebuild($k$). Then during the call to Rebuild($k$), a subset of these elements gets added to $\bigcup_{j \leq k} A_j$, and the remaining elements get added to $\bigcup_{j \leq k} P_j$.

(3) The call to Rebuild($k$) moves some of the active elements at levels $\leq k$ to new levels in $\{0, \ldots, k\}$, and the remaining active elements at levels $\leq k$ continue to be active at their current levels. In other words, the elements in $\bigcup_{j \leq k} A_j$ never go out of the set $A$ during the call to Rebuild($k$).

(4) The call to Rebuild($k$) does not touch the elements at levels above $k$. In other words, for any $j > k$, if an element belonged to $A_j$, $P_j$ or $D_j$ just before the call to Rebuild($k$), then it continues to belong to the same set $A_j$, $P_j$ or $D_j$ at the end of the call to Rebuild($k$).

###### Corollary 3.7.

At the end of any call to Rebuild($k$), we have $D_j = \emptyset$ for all $j \leq k$.

###### Proof.

Follows from parts (1), (2) of Property 3.6. ∎

## 4 Analysis of our dynamic algorithm

We start by proving some simple properties of our algorithm that will be useful in the subsequent analysis. These properties formalize the intuition that the fractional packing solution maintained by the algorithm does not change significantly in between two successive calls to Rebuild($j$) with $j \geq k$, for any level $k$. This happens because of three main reasons (see Figure 1). First, we set $\mathrm{ctr}(j) = \lceil \epsilon \cdot |E_{\leq j}| \rceil$ for all $j \leq k$ at the end of each call to Rebuild($k$). Second, we decrement the counter $\mathrm{ctr}(k)$ for all $k \geq j$ each time some element gets deleted from level $j$. Third, we call Rebuild($k$) whenever $\mathrm{ctr}(k)$ becomes equal to $0$.

Notation: Throughout the rest of this section, we use the superscript $(t)$ to denote the status of some set/counter at time-step $t$. For instance, the symbol $D_k^{(t)}$ will denote the set of dead elements at level $k$ at time-step $t$, and the symbol $\mathrm{ctr}^{(t)}(k)$ will denote the value of the counter $\mathrm{ctr}(k)$ at time-step $t$.

###### Lemma 4.1.

Fix any level $k$ and consider any two time-steps $t_1 < t_2$ that satisfy the following properties: (1) A call was made to the subroutine Rebuild($j$) for some $j \geq k$ just before time-step $t_1$. (2) No call was made to Rebuild($j$) for any $j \geq k$ during the time-interval $[t_1, t_2]$. Let $D'$ denote the set of elements that got deleted from the levels $\{0, \ldots, k\}$ during the time-interval $[t_1, t_2]$. Then we have: $|D'| \leq \lceil \epsilon \cdot |E_{\leq k}^{(t_1)}| \rceil$.

###### Proof.

As the subroutine Rebuild($j$) was called for some $j \geq k$ just before time-step $t_1$, Property 3.5 implies that $\mathrm{ctr}^{(t_1)}(k) = \lceil \epsilon \cdot |E_{\leq k}^{(t_1)}| \rceil$. Next, note that during the time-interval $[t_1, t_2]$, no call is made to the subroutine Rebuild($j$) for any $j \geq k$. Hence, during this time-interval, the counter $\mathrm{ctr}(k)$ gets decremented by one iff an element gets deleted from a level in $\{0, \ldots, k\}$ (see Figure 1), and the set $D'$ consists precisely of these elements. Thus, we infer that: $|D'| = \mathrm{ctr}^{(t_1)}(k) - \mathrm{ctr}^{(t_2)}(k) \leq \lceil \epsilon \cdot |E_{\leq k}^{(t_1)}| \rceil$. ∎

###### Corollary 4.2.

Consider any level $k$ and time-steps $t_1 < t_2$ as defined in Lemma 4.1. Then we have: