Deterministically Maintaining a (2+ε)-Approximate Minimum Vertex Cover in O(1/ε^2) Amortized Update Time

05/09/2018 ∙ by Sayan Bhattacharya, et al.

We consider the problem of maintaining an (approximately) minimum vertex cover in an n-node graph G = (V, E) that is getting updated dynamically via a sequence of edge insertions/deletions. We show how to maintain a (2+ϵ)-approximate minimum vertex cover, deterministically, in this setting in O(1/ϵ^2) amortized update time. Prior to our work, the best known deterministic algorithm for maintaining a (2+ϵ)-approximate minimum vertex cover was due to Bhattacharya, Henzinger and Italiano [SODA 2015]. Their algorithm has an update time of O(log n/ϵ^2). Our result gives an exponential improvement over the update time of Bhattacharya et al., and nearly matches the performance of the randomized algorithm of Solomon [FOCS 2016], who gets an approximation ratio of 2 and an expected amortized update time of O(1). We derive our result by analyzing, via a novel technique, a variant of the algorithm by Bhattacharya et al. We consider an idealized setting where the update time of an algorithm can take any arbitrary fractional value, and use insights from this setting to come up with an appropriate potential function. Conceptually, this framework mimics the idea of an LP-relaxation for an optimization problem. The difference is that instead of relaxing an integral objective function, we relax the update time of an algorithm itself. We believe that this technique will find further applications in the analysis of dynamic algorithms.


1 Introduction

Consider an undirected graph $G = (V, E)$ with $n$ nodes, and suppose that we have to compute an (approximately) minimum vertex cover in $G$. (A vertex cover in $G$ is a subset of nodes $V' \subseteq V$ such that every edge in $E$ has at least one endpoint in $V'$.) This problem is well-understood in the static setting. There is a simple linear time greedy algorithm that returns a maximal matching $M$ in $G$. (A matching in $G$ is a subset of edges $M \subseteq E$ such that no two edges in $M$ share a common endpoint. A matching $M$ is maximal if for every edge $(u, v) \in E$, either $u$ or $v$ is matched in $M$.) Let $V(M)$ denote the set of nodes that are matched in $M$. Using the duality between maximum matching and minimum vertex cover, it is easy to show that the set $V(M)$ forms a $2$-approximate minimum vertex cover in $G$. Accordingly, we can compute a $2$-approximate minimum vertex cover in linear time. In contrast, under the Unique Games Conjecture [14], there is no polynomial time $(2 - \delta)$-approximation algorithm for minimum vertex cover for any constant $\delta > 0$. In this paper, we consider the problem of maintaining an (approximately) minimum vertex cover in a dynamic graph, which gets updated via a sequence of edge insertions/deletions. The time taken to handle the insertion or deletion of an edge is called the update time of the algorithm. The goal is to design a dynamic algorithm with a small approximation ratio whose update time is significantly faster than the trivial approach of recomputing the solution from scratch after every update.

A naive approach to this problem is to maintain a maximal matching $M$ and the set of matched nodes as follows. When an edge $(u, v)$ gets inserted into $G$, add the edge to the matching iff both of its endpoints are currently unmatched. In contrast, when an edge $(u, v)$ gets deleted from $G$, first check if the edge was matched in $M$ just before this deletion. If yes, then remove the edge from $M$ and try to rematch its endpoints $u$ and $v$. Specifically, for every endpoint $x \in \{u, v\}$, scan through all the edges incident on $x$ till an edge is found whose other endpoint is currently unmatched, and at that point add that edge to the matching and stop the scan. Since a node can have $\Theta(n)$ neighbors, this approach leads to an update time of $O(n)$. A minimal sketch of this naive scheme appears below.
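The following Python sketch is an illustrative rendering of this naive approach (the class and method names are ours, not from the paper). The matched nodes form a 2-approximate minimum vertex cover at all times, and the rescanning step after a deletion is what costs $O(n)$ time.

    # A minimal sketch of the naive dynamic maximal matching (illustrative only).
    class NaiveDynamicVertexCover:
        def __init__(self):
            self.adj = {}    # node -> set of neighbors
            self.mate = {}   # node -> matched partner (absent if unmatched)

        def _try_match(self, x):
            # Scan the neighbors of x for an unmatched node; O(n) per scan.
            if x in self.mate:
                return
            for y in self.adj.get(x, ()):
                if y not in self.mate:
                    self.mate[x] = y
                    self.mate[y] = x
                    return

        def insert(self, u, v):
            self.adj.setdefault(u, set()).add(v)
            self.adj.setdefault(v, set()).add(u)
            # Add (u, v) to the matching iff both endpoints are unmatched.
            if u not in self.mate and v not in self.mate:
                self.mate[u] = v
                self.mate[v] = u

        def delete(self, u, v):
            self.adj[u].discard(v)
            self.adj[v].discard(u)
            # If (u, v) was matched, remove it and try to rematch both endpoints.
            if self.mate.get(u) == v:
                del self.mate[u]
                del self.mate[v]
                self._try_match(u)
                self._try_match(v)

        def vertex_cover(self):
            return set(self.mate.keys())

For instance, after insert(1, 2), insert(2, 3) and delete(1, 2), the scan in _try_match rematches node 2 with node 3, and vertex_cover() returns {2, 3}.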

Our main result is stated in Theorem 1.1. Note that the amortized update time of our algorithm is independent of $n$. (Following the standard convention in the dynamic algorithms literature, an algorithm has an amortized update time of $O(t)$ if, starting from a graph $G = (V, E)$ with $E = \emptyset$, it takes $O(t \cdot k)$ time overall to handle any sequence of $k$ edge insertions/deletions in $G$.) As an aside, our algorithm also maintains a $(2+\epsilon)$-approximate maximum fractional matching as a dual certificate, deterministically, in $O(1/\epsilon^2)$ amortized update time.

Theorem 1.1.

For any $\epsilon > 0$, we can maintain a $(2+\epsilon)$-approximate minimum vertex cover in a dynamic graph, deterministically, in $O(1/\epsilon^2)$ amortized update time.

1.1 Perspective

The first major result on maintaining a small vertex cover and a large matching in a dynamic graph appeared in STOC 2010 [17]. By now, there is a large body of work devoted to this topic: both on general graphs [1, 2, 3, 4, 5, 7, 9, 10, 11, 12, 13, 15, 16, 17, 18] and on graphs with bounded arboricity [4, 5, 15, 18]. These results give interesting tradeoffs between various parameters such as (a) approximation ratio, (b) whether the algorithm is deterministic or randomized, and (c) whether the update time is amortized or worst case. In this paper, our focus is on aspects (a) and (b). We want to design a dynamic algorithm for minimum vertex cover that is deterministic and has (near) optimal approximation ratio and amortized update time. In particular, we are not concerned with the worst case update time of our algorithm. From this specific point of view, the literature on dynamic vertex cover can be summarized as follows.

Randomized algorithms. Onak and Rubinfeld [17] presented a randomized algorithm for maintaining an $O(1)$-approximate minimum vertex cover in polylogarithmic expected amortized update time. This bound was improved upon by Baswana, Gupta and Sen [3], who maintain a maximal matching (and hence a $2$-approximate minimum vertex cover) in $O(\log n)$ expected amortized update time, and subsequently by Solomon [19], who obtained a $2$-approximation in $O(1)$ expected amortized update time.

Deterministic algorithms. Bhattacharya, Henzinger and Italiano [8] showed how to deterministically maintain a $(2+\epsilon)$-approximate minimum vertex cover in $O(\log n/\epsilon^2)$ amortized update time. Subsequently, Bhattacharya, Chakrabarty and Henzinger [6] and Gupta, Krishnaswamy, Kumar and Panigrahy [12] gave deterministic dynamic algorithms for this problem with an $O(1)$ approximation ratio and an $O(1)$ amortized update time. The algorithms designed in these two papers [6, 12] extend to the more general problem of dynamic set cover.


Approximation Ratio | Amortized Update Time  | Algorithm     | Reference
$2+\epsilon$        | $O(\log n/\epsilon^2)$ | deterministic | Bhattacharya et al. [8]
$O(1)$              | $O(1)$                 | deterministic | Gupta et al. [12] and Bhattacharya et al. [6]
$2$                 | $O(1)$                 | randomized    | Solomon [19]

Table 1: State of the art on dynamic algorithms with fast amortized update times for minimum vertex cover.

Thus, from our perspective the state of the art results on dynamic vertex cover prior to our paper are summarized in Table 1. Note that the results stated in Table 1 are mutually incomparable. Specifically, the algorithms in [8, 12, 6] are all deterministic, but the paper [8] gives a near-optimal (under the Unique Games Conjecture) approximation ratio whereas the papers [12, 6] give optimal update time. In contrast, the paper [19] gives optimal approximation ratio and update time, but the algorithm there is randomized. Our main result, as stated in Theorem 1.1, combines the best of both worlds, by showing that there is a dynamic algorithm for minimum vertex cover that is simultaneously (a) deterministic, (b) has near-optimal approximation ratio, and (c) has optimal update time for constant $\epsilon$. In other words, we get an exponential improvement over the update time bound in [8], without increasing the approximation ratio or using randomization.

Most of the randomized dynamic algorithms in the literature, including the ones [1, 3, 17, 19] that are relevant to this paper, assume that the adversary is oblivious. Specifically, this means that the future edge insertions/deletions in the input graph do not depend on the current solution being maintained by the algorithm. A deterministic dynamic algorithm does not require this assumption, and hence designing deterministic dynamic algorithms for fundamental optimization problems such as minimum vertex cover is an important research agenda in itself. Our result should be seen as being part of this research agenda.

Our technique. A novel and interesting aspect of our techniques is that we relax the notion of update time of an algorithm. We consider an idealized, continuous world where the update time of an algorithm can take any fractional, arbitrarily small value. We first study the behavior of a natural dynamic algorithm for minimum vertex cover in this idealized world. Using insights from this study, we design an appropriate potential function for analyzing the update time of a minor variant of the algorithm from [8] in the real-world. Conceptually, this framework mimics the idea of an LP-relaxation for an optimization problem. The difference is that instead of relaxing an integral objective function, we relax the update time of an algorithm itself. We believe that this technique will find further applications in the analysis of dynamic algorithms.

Organization of the rest of the paper. In Section 2, we present a summary of the algorithm from [8]. A reader already familiar with the algorithm will be able to quickly skim through this section. In Section 3, we analyze the update time of the algorithm in an idealized, continuous setting. This sets up the stage for the analysis of our actual algorithm in the real world. Note that the first ten pages consist of Section 1 – Section 3. For the reader motivated enough to further explore the paper, we present an overview of our algorithm and analysis in the “real-world” in Section 4. The full version of our algorithm, along with a complete analysis of its update time, appears in Part II.

2 The framework of Bhattacharya, Henzinger and Italiano [8]

We henceforth refer to the dynamic algorithm developed in [8] as the BHI15 algorithm. In this section, we give a brief overview of the BHI15 algorithm, which is based on a primal-dual approach that simultaneously maintains a fractional matching and a vertex cover whose sizes are within a factor of $(2+\epsilon)$ of each other. (A fractional matching in $G$ assigns a weight $w(e) \ge 0$ to each edge $e \in E$, subject to the constraint that the total weight received by any node from its incident edges is at most one. The size of the fractional matching is given by $\sum_{e \in E} w(e)$. It is known that the maximum matching problem is the dual of the minimum vertex cover problem. Specifically, it is known that the size of the maximum fractional matching is at most the size of the minimum vertex cover.)

Notations. Let $w(u, v)$ denote the weight assigned to an edge $(u, v) \in E$. Let $W_v = \sum_{u \in N_v} w(u, v)$ be the total weight received by a node $v$ from its incident edges, where $N_v$ denotes the set of neighbors of $v$. The BHI15 algorithm maintains a partition of the node-set $V$ into $L+1$ levels $\{0, 1, \ldots, L\}$, where $L = \lceil \log_{(1+\epsilon)} n \rceil$. Let $\ell(v)$ denote the level of a node $v$. For any two integers $0 \le i \le j \le L$ and any node $v$, let $N_v(i, j)$ denote the set of neighbors of $v$ that lie between level $i$ and level $j$. The level of an edge $(u, v)$ is denoted by $\ell(u, v)$, and it is defined to be equal to the maximum level among its two endpoints, that is, we have $\ell(u, v) = \max(\ell(u), \ell(v))$. In the BHI15 framework, the weight of an edge is completely determined by its level. In particular, we have $w(u, v) = (1+\epsilon)^{-\ell(u, v)}$, that is, the weight decreases exponentially with the level $\ell(u, v)$.

A static primal-dual algorithm. To get some intuition behind the BHI15 framework, consider the following static primal-dual algorithm. The algorithm proceeds in rounds. Initially, before the first round, every node is at level $L$ and every edge has weight $(1+\epsilon)^{-L} \le 1/n$. Since each node has at most $n-1$ neighbors in an $n$-node graph, it follows that $W_v < 1$ for all nodes $v$ at this point in time. Thus, we have $W_v \le 1$ for all nodes $v$, so that the edge-weights form a valid fractional matching in $G$ at this stage. We say that a node $v$ is tight if $W_v \ge 1/(1+\epsilon)$ and slack otherwise. In each subsequent round, we identify the set $T$ of tight nodes, permanently fix the level $\ell(v)$ of every node $v \in T$ at its current value, and then raise the weights of the edges in the subgraph induced by the slack nodes $V \setminus T$ by a factor of $(1+\epsilon)$ (equivalently, we decrease the level of every slack node by one). As we only raise the weights of the edges whose both endpoints are slack, the edge-weights continue to be a valid fractional matching in $G$. The algorithm stops when every edge has at least one tight endpoint, so that we are no longer left with any more edges whose weights can be raised.
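The following Python sketch mirrors this static procedure (a minimal sketch, assuming the weight scheme $w(e) = (1+\epsilon)^{-\ell(e)}$ and the tightness threshold $1/(1+\epsilon)$ used in the reconstruction above; the function and variable names are ours, not from [8]):

    import math

    def static_level_assignment(nodes, edges, eps):
        # Static primal-dual rounds: all nodes start at the top level, slack
        # nodes drift down one level per round, tight nodes get frozen.
        n = max(len(nodes), 2)
        L = math.ceil(math.log(n, 1 + eps))      # number of levels
        level = {v: L for v in nodes}            # every node starts at level L
        frozen = set()                           # tight nodes with fixed levels

        def weight(u, v):
            return (1 + eps) ** (-max(level[u], level[v]))

        def W(v):
            return sum(weight(a, b) for (a, b) in edges if v in (a, b))

        for _ in range(L):
            # Freeze the nodes that have become tight by this round.
            frozen |= {v for v in nodes if W(v) >= 1.0 / (1 + eps)}
            # Moving every slack node down one level raises the weight of every
            # edge with two slack endpoints by a factor of (1 + eps).
            for v in nodes:
                if v not in frozen and level[v] > 0:
                    level[v] -= 1

        tight = {v for v in nodes if W(v) >= 1.0 / (1 + eps)}
        return level, tight                      # tight nodes form the vertex cover

The sketch recomputes weights from scratch for clarity; the dynamic algorithm described next avoids exactly this recomputation.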

Clearly, the above algorithm guarantees that the weight of an edge $(u, v)$ is given by $w(u, v) = (1+\epsilon)^{-\max(\ell(u), \ell(v))}$. It is also easy to check that the algorithm does not run for more than $L = \lceil \log_{(1+\epsilon)} n \rceil$ rounds, for the following reason. If after starting from $(1+\epsilon)^{-L} \le 1/n$ we increase the weight of an edge more than $L$ times by a factor of $(1+\epsilon)$, then we would end up having an edge of weight larger than one, and this would mean that the edge-weights no longer form a valid fractional matching. Thus, we conclude that $W_v \le 1$ for all nodes $v$ at the end of this algorithm. Furthermore, at that time every node at a nonzero level is tight. The following invariant, therefore, is satisfied.

Invariant 2.1.

For every node $v \in V$, we have $W_v \le 1$ if $\ell(v) = 0$, and $1/(1+\epsilon) \le W_v \le 1$ if $\ell(v) \ge 1$.

Every edge $(u, v)$ has at least one tight endpoint under Invariant 2.1. To see why this is true, note that if the edge has some endpoint at level $\ell \ge 1$, then Invariant 2.1 implies that this endpoint is tight. On the other hand, if $\ell(u) = \ell(v) = 0$, then the edge has weight $w(u, v) = 1$ and both its endpoints are tight, for we have $W_u, W_v \ge w(u, v) = 1$. In other words, the set of tight nodes constitutes a valid vertex cover of the graph $G$. Since every tight node has weight at least $1/(1+\epsilon)$, and since every edge $(u, v)$ contributes its own weight towards both $W_u$ and $W_v$, a simple counting argument implies that the number of tight nodes is within a factor of $2(1+\epsilon)$ of the sum of the weights of the edges in $G$. Hence, we have a valid vertex cover and a valid fractional matching whose sizes are within a factor of $2(1+\epsilon)$ of each other. It follows that the set of tight nodes forms a $2(1+\epsilon)$-approximate minimum vertex cover and that the edge-weights form a $2(1+\epsilon)$-approximate maximum fractional matching.
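For concreteness, here is the counting argument written out (a sketch, under the reconstruction of Invariant 2.1 with the lower bound $1/(1+\epsilon)$ used above). Let $T$ denote the set of tight nodes and $\mathrm{OPT}$ a minimum vertex cover:

\[
|T| \;\le\; (1+\epsilon) \sum_{v \in T} W_v \;=\; (1+\epsilon) \sum_{v \in T} \sum_{u \in N_v} w(u, v) \;\le\; 2\,(1+\epsilon) \sum_{e \in E} w(e) \;\le\; 2\,(1+\epsilon)\, |\mathrm{OPT}|,
\]

where the second inequality holds because every edge is counted at most twice (once per tight endpoint), and the last one is LP duality: the size of any fractional matching is at most the size of any vertex cover.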

Making the algorithm dynamic. In the dynamic setting, all we need to ensure is that we maintain a partition of the node-set into levels that satisfies Invariant 2.1. By induction hypothesis, suppose that Invariant 2.1 is satisfied by every node until this point in time. Now, an edge $(u, v)$ is either inserted into or deleted from the graph. The former event increases the weight of each endpoint $x \in \{u, v\}$ by $w(u, v)$, whereas the latter event decreases the weight of each endpoint by $w(u, v)$. As a result, one or both of the endpoints might now violate Invariant 2.1. For ease of presentation, we say that a node $v$ is dirty if it violates Invariant 2.1. To be more specific, a node $v$ is dirty if either (a) $W_v > 1$, or (b) $\ell(v) \ge 1$ and $W_v < 1/(1+\epsilon)$. In case (a), we say that the node is up-dirty, whereas in case (b) we say that the node is down-dirty. To continue with our discussion, we noted that the insertion or deletion of an edge might make one or both of its endpoints dirty. In such a scenario, we call the subroutine described in Figure 1. Intuitively, this subroutine keeps changing the levels of the dirty nodes in a greedy manner till there is no dirty node (equivalently, till Invariant 2.1 is satisfied).

In a bit more detail, suppose that a node $v$ at level $\ell(v)$ is up-dirty. If our goal is to make this node satisfy Invariant 2.1, then we have to decrease its weight $W_v$. A greedy way to achieve this outcome is to increase its level by one, by setting $\ell(v) \leftarrow \ell(v) + 1$, without changing the level of any other node. This decreases the weights of all the edges incident on $v$ whose other endpoints lie at levels $\le \ell(v)$. The weight of every other edge remains unchanged. Hence, this decreases the weight $W_v$. Note that this step changes the weights of the neighbors of $v$ that lie at level $\ell(v)$ or below. These neighbors, therefore, might now become dirty. Such neighbors will be handled in some future iteration of the While loop. Furthermore, it might be the case that the node $v$ itself remains dirty even after this step, since the weight $W_v$ has not decreased by a sufficient amount. In such an event, the node $v$ itself will be handled again in a future iteration of the While loop. Next, suppose that the node $v$ is down-dirty. By an analogous argument, we need to increase the weight $W_v$ if we want to make the node satisfy Invariant 2.1. Accordingly, we decrease its level in step 5 of Figure 1. As in the previous case, this step might lead to some neighbors of $v$ becoming dirty, and they will be handled in future iterations of the While loop. If the node $v$ itself remains dirty after this step, it will also be handled in some future iteration of the While loop.

To summarize, there is no dirty node when the While loop terminates, and hence Invariant 2.1 is satisfied. But due to the cascading effect (whereby a given iteration of the While loop might create additional dirty nodes), it is not clear why this simple algorithm will have a small update time. In fact, it is by no means obvious that the While loop in Figure 1 is even guaranteed to terminate. The main result in [8] was that (a slight variant of) this algorithm actually has an amortized update time of $O(\log n/\epsilon^2)$. Before proceeding any further, however, we ought to highlight the data structures used to implement this algorithm.

1. While there exists some dirty node $v$:
2.     If the node $v$ is up-dirty, then              // We have $W_v > 1$.
3.         Move it up by one level by setting $\ell(v) \leftarrow \ell(v) + 1$.
4.     Else if the node $v$ is down-dirty, then       // We have $\ell(v) \ge 1$ and $W_v < 1/(1+\epsilon)$.
5.         Move it down one level by setting $\ell(v) \leftarrow \ell(v) - 1$.

Figure 1: Subroutine: FIX(.) is called after the insertion/deletion of an edge.

Data structures. Each node $v$ maintains its weight $W_v$ and level $\ell(v)$. This information is sufficient for a node to detect when it becomes dirty. In addition, each node $v$ maintains the following doubly linked lists. For every level $i > \ell(v)$, it maintains the list of edges incident on $v$ whose other endpoints lie at level $i$; every edge in this list has weight $(1+\epsilon)^{-i}$. The node $v$ also maintains the list of those incident edges whose other endpoints are at a level that is at most the level of $v$; every edge in this list has weight $(1+\epsilon)^{-\ell(v)}$. We refer to these lists as the neighborhood lists of $v$. Intuitively, there is one neighborhood list for each nonempty subset of edges incident on $v$ that have the same weight. For each edge $(u, v)$, the node $u$ maintains a pointer to its own position in the neighborhood list of $v$ it appears in, and vice versa. Using these pointers, a node can be inserted into or deleted from a neighborhood list in $O(1)$ time. We now bound the time required to update these data structures during one iteration of the While loop.
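The following sketch makes this bookkeeping concrete for the case of a node moving up one level (illustrative only; the class and field names are ours, and Python lists stand in for the doubly linked lists, so the O(1) pointer operations of the paper appear as explicit list operations here):

    from collections import defaultdict

    class Node:
        def __init__(self, name):
            self.name = name
            self.level = 0
            self.W = 0.0                       # current weight W_v
            self.up_lists = defaultdict(list)  # level i > level(v) -> neighbors at level i
            self.down_list = []                # neighbors at level <= level(v)

    def move_up_one_level(v, eps):
        """Move v from level(v) to level(v)+1 and update the neighborhood lists."""
        old, new = v.level, v.level + 1
        # Only edges to neighbors at level <= old change weight: each drops
        # from (1+eps)^(-old) to (1+eps)^(-new).
        delta = (1 + eps) ** (-old) - (1 + eps) ** (-new)
        for u in v.down_list:
            u.W -= delta
            v.W -= delta
            # In u's bookkeeping, v moves into the list for level `new`
            # (O(1) per neighbor with real pointers into a doubly linked list).
            if u.level == old:
                u.down_list.remove(v)
            else:
                u.up_lists[old].remove(v)
            u.up_lists[new].append(v)
        # Neighbors previously stored at level `new` are now at v's own level:
        # merge that single list into v's down-list (an O(1) splice in the paper).
        v.down_list.extend(v.up_lists.pop(new, []))
        v.level = new

Only the down-neighbors of v are touched, which is exactly the cost accounted for in Claim 2.2 below.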

Claim 2.2.

Consider a node $v$ that moves up from a level $\ell$ to level $\ell + 1$ during an iteration of the While loop in Figure 1. Then it takes $O(1 + |N_v(0, \ell)|)$ time to update the relevant data structures during that iteration, where $N_v(0, \ell)$ is the set of neighbors of $v$ that lie on or below level $\ell$.

Proof.

(Sketch) Consider the event where the node $v$ moves up from level $\ell$ to level $\ell + 1$. The key observation is this: if the node $v$ has to change its own position in the neighborhood lists of another node $u$ due to this event, then we must have $u \in N_v(0, \ell)$. And as far as changing the neighborhood lists of $v$ itself is concerned, all we need to do is to merge the list of neighbors of $v$ at level $\ell + 1$ with the list of neighbors of $v$ at levels $\le \ell$, which takes $O(1)$ time. ∎

Claim 2.3.

Consider a node $v$ that moves down from a level $\ell$ to level $\ell - 1$ during an iteration of the While loop in Figure 1. Then it takes $O(1 + |N_v(0, \ell)|)$ time to update the relevant data structures during that iteration, where $N_v(0, \ell)$ is the set of neighbors of $v$ that lie on or below level $\ell$.

Proof.

(Sketch) Consider the event where the node $v$ moves down from level $\ell$ to level $\ell - 1$. If the node $v$ has to change its own position in the neighborhood lists of another node $u$ due to this event, then we must have $u \in N_v(0, \ell - 1)$. On the other hand, in order to update the neighborhood lists of $v$ itself, we have to visit all the nodes $u \in N_v(0, \ell)$ one after the other and check their levels. For each such node $u$, if we find that $\ell(u) = \ell$, then we have to move $u$ from the list of neighbors of $v$ at levels $\le \ell(v)$ to the list of neighbors of $v$ at level $\ell$. Thus, the total time spent during this iteration is $O(1 + |N_v(0, \ell - 1)| + |N_v(0, \ell)|) = O(1 + |N_v(0, \ell)|)$. The last equality holds since $N_v(0, \ell - 1) \subseteq N_v(0, \ell)$. ∎

2.1 The main technical challenge: Can we bring down the update time to $O(1/\epsilon^2)$?

As we mentioned previously, it was shown in [8] that the dynamic algorithm described above has an amortized update time of $O(\log n/\epsilon^2)$. In order to prove this bound, the authors in [8] had to use a complicated potential function. Can we show that (a slight variant of) the same algorithm actually has an update time that is independent of $n$ for every fixed $\epsilon$? This seems to be quite a challenging goal, for the following reasons. For now, assume that $\epsilon$ is some small constant.

In the potential function developed in [8], whenever an edge $(u, v)$ is inserted into the graph, we create $\Theta(\log n/\epsilon^2)$ many tokens. For each endpoint $x \in \{u, v\}$ and each level $i \in \{0, \ldots, L\}$, we store $O(1/\epsilon)$ tokens for the node $x$ at level $i$. These tokens are used to account for the time spent on updating the data structures when a node moves up from a lower level to a higher level, that is, in dealing with up-dirty nodes. It immediately follows that if we only restrict ourselves to the time spent in dealing with up-dirty nodes, then we get an amortized update time of $O(\log n/\epsilon^2)$. This is because of the following simple accounting: insertion of an edge creates at most $O(\log n/\epsilon^2)$ many tokens, and each of these tokens is used to pay for one unit of computation performed by our algorithm while dealing with the up-dirty nodes. Next, it is also shown in [8] that, roughly speaking, over a sufficiently long time horizon the time spent in dealing with the down-dirty nodes is dominated by the time spent in dealing with the up-dirty nodes. This gives us an overall amortized update time of $O(\log n/\epsilon^2)$. From this very high level description of the potential function based analysis in [8], it seems intrinsically challenging to overcome the $\Omega(\log n)$ barrier. This is because nothing is preventing an edge from moving up $\Theta(\log n)$ levels after getting inserted, and according to [8] the only way we can bound this type of work performed by the algorithm is by charging it to the insertion of the edge itself. In recent years, attempts were made to overcome this barrier. The papers [6, 12], for example, managed to improve the amortized update time to $O(1)$, but only at the cost of increasing the approximation ratio from $2+\epsilon$ to some larger unspecified constant. The question of getting a $(2+\epsilon)$-approximation in an update time independent of $n$, however, remained wide open.
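To make the accounting concrete, here is a back-of-the-envelope version of the charging argument (a sketch, under the assumption used in the reconstruction above that each endpoint receives $O(1/\epsilon)$ tokens per level):

\[
\#\text{tokens per insertion} \;=\; \underbrace{2}_{\text{endpoints}} \cdot \underbrace{O\!\left(\log_{1+\epsilon} n\right)}_{\text{levels}} \cdot \underbrace{O(1/\epsilon)}_{\text{tokens per level}} \;=\; O\!\left(\frac{\log n}{\epsilon^{2}}\right),
\]

since $\log_{1+\epsilon} n = \Theta(\log n/\epsilon)$ for small $\epsilon$. Each token pays for one unit of work on up-dirty nodes, which recovers the $O(\log n/\epsilon^2)$ amortized bound of [8]; it also makes the bottleneck visible, namely that an edge may climb through all $\Theta(\log_{1+\epsilon} n)$ levels and the scheme has nothing to charge for that work except the insertion itself.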

It seems unlikely that we will just stumble upon a suitable potential function that proves an amortized bound independent of $n$ by trial and error: there are way too many options to choose from! What we instead need to look for is a systematic meta-method for finding the suitable potential function – something that will allow us to prove the best possible bound for the given algorithm. This is elaborated upon in Section 3.

3 Our technique: A thought experiment with a continuous setting

In order to search for a suitable potential function, we consider an idealized setting where the level of a node or an edge can take any (not necessarily integral) value in the continuous interval $[0, L]$, where $L = \ln n$. To ease notation, here we assume that the weight of an edge at level $\ell$ is given by $e^{-\ell}$, instead of being equal to $(1+\epsilon)^{-\ell}$. This makes it possible to assign each node to a (possibly fractional) level in such a way that the edge-weights form a maximal fractional matching, and the nodes $v$ with weight $W_v = 1$ form a $2$-approximate minimum vertex cover. We use the notations introduced in the beginning of Section 2. In the idealized setting, we ensure that the following invariant is satisfied.

Invariant 3.1.

For every node $v \in V$, we have $W_v \le 1$ if $\ell(v) = 0$, and $W_v = 1$ if $\ell(v) > 0$.

A static primal-dual algorithm. As in Section 2, under Invariant 3.1 the levels of the nodes have a natural primal-dual interpretation. To see this, consider the following static algorithm. We initiate a continuous process at time $t = 0$. At this stage, we set the weight of every edge to $e^{-L} = 1/n$. We say that a node $v$ is tight iff $W_v = 1$. Since the maximum degree of a node is at most $n - 1$, no node is tight at time $t = 0$. With the passage of time, the edge-weights keep increasing exponentially with $t$. During this process, whenever a node becomes tight we freeze (i.e., stop raising) the weights of all its incident edges. The process stops at time $t = L$. The level of a node $v$ is defined as $\ell(v) = L - t_v$, where $t_v$ is the time when it becomes tight during this process. If the node does not become tight till the very end, then we define $\ell(v) = 0$. When the process ends at time $t = L$, it is easy to check that every edge $(x, y)$ has weight $e^{-\max(\ell(x), \ell(y))}$ and that Invariant 3.1 is satisfied.

We claim that under Invariant 3.1 the set of tight nodes forms a $2$-approximate minimum vertex cover in $G$. To see why this is true, suppose that there is an edge between two nodes $x$ and $y$ with $W_x < 1$ and $W_y < 1$. According to Invariant 3.1, both the nodes are at level $0$. But this implies that the edge $(x, y)$ has weight $e^{0} = 1$, and hence $W_x, W_y \ge 1$, which leads to a contradiction. Thus, the set of tight nodes must be a vertex cover in $G$. Since $W_v = 1$ for every tight node $v$, and since every edge $(x, y)$ contributes the weight $w(x, y)$ to both $W_x$ and $W_y$, a simple counting argument implies that the edge-weights form a fractional matching in $G$ whose size is at least $1/2$ times the number of tight nodes. The claim now follows from the duality between maximum fractional matching and minimum vertex cover.

We will now describe how Invariant 3.1 can be maintained in a dynamic setting – when edges are getting inserted into or deleted from the graph. For ease of exposition, we will use Assumption 3.2. Since the level of a node can take any value in the continuous interval $[0, L]$, this does not appear to be too restrictive.

Assumption 3.2.

For any two distinct nodes $x, y \in V$, if $\ell(x), \ell(y) > 0$, then we have $\ell(x) \neq \ell(y)$.

Notations. Let $U_v = \{u \in N_v : \ell(u) > \ell(v)\}$ denote the set of up-neighbors of a node $v$, and let $D_v = \{u \in N_v : \ell(u) < \ell(v)\}$ denote the set of down-neighbors of $v$. Assumption 3.2 implies that $N_v = U_v \cup D_v$ for every node $v$ at a nonzero level. Finally, let $W_v^{\uparrow}$ (resp. $W_v^{\downarrow}$) denote the total weight of the edges incident on $v$ whose other endpoints are in $U_v$ (resp. $D_v$). We thus have $W_v = W_v^{\uparrow} + W_v^{\downarrow}$ and $W_v^{\downarrow} = |D_v| \cdot e^{-\ell(v)}$, since every edge joining $v$ to a down-neighbor has weight $e^{-\ell(v)}$. We will use these notations throughout the rest of this section.

Insertion or deletion of an edge $(u, v)$. We focus only on the case of an edge insertion, as the case of an edge deletion can be handled in an analogous manner. Consider the event where an edge $(u, v)$ is inserted into the graph. By the induction hypothesis, Invariant 3.1 is satisfied just before this event, and without any loss of generality suppose that $\ell(u) \ge \ell(v)$ at that time. For ease of exposition, we assume that $\ell(u), \ell(v) > 0$: the other case can be dealt with using similar ideas. Then, we have $W_u = W_v = 1$ just before the event (by Invariant 3.1) and $W_u, W_v > 1$ just after the event (since the new edge adds a positive weight to both $W_u$ and $W_v$). So the nodes $u$ and $v$ violate Invariant 3.1 just after the event. We now explain the process by which the nodes change their levels so as to ensure that Invariant 3.1 becomes satisfied again. This process consists of two phases – one for each endpoint. We now describe each of these phases.

Phase I: This phase is defined by a continuous process which is driven by the node $u$. Specifically, in this phase the node $u$ continuously increases its level so as to decrease its weight $W_u$. The process stops when the weight $W_u$ becomes equal to $1$. During the same process, every other node continuously changes its level so as to ensure that its own weight remains fixed. (To be precise, this statement does not apply to the nodes at level $0$. A node $z$ with $\ell(z) = 0$ remains at level $0$ as long as $W_z \le 1$, and starts moving upward only when its weight is about to exceed $1$. But, morally speaking, this does not add any new perspective to our discussion, and henceforth we ignore this case.) This creates a cascading effect which leads to a long chain of interdependent movements of nodes. To see this, consider an infinitesimal time-interval $[t, t + dt]$ during which the node $u$ increases its level from $\ell(u)$ to $\ell(u) + d\ell$. The weight of every edge $(u, z)$ with $z \in D_u$ decreases during this interval, whereas the weight of every other edge remains unchanged. Thus, during this interval, the upward movement of the node $u$ leads to a decrease in the weight $W_z$ of every neighbor $z \in D_u$. Each such node $z$ wants to nullify this effect and ensure that $W_z$ remains fixed. Accordingly, each such node $z$ decreases its level during the same infinitesimal time-interval from $\ell(z)$ to $\ell(z) - d\ell_z$, where $d\ell_z > 0$. The value of $d\ell_z$ is such that $W_z$ actually remains unchanged during the time-interval $[t, t + dt]$. Now, the weights of the neighbors of $z$ also get affected as $z$ changes its level, and as a result each such node also changes its level so as to preserve its own weight, and so on and so forth. We emphasize that all these movements of different nodes occur simultaneously, and in a continuous fashion. Intuitively, the set of nodes forms a self-adjusting system – akin to a spring. Each node moves in a way which ensures that its weight becomes (or remains equal to) a “critical value”. For the node $u$ this critical value is $1$, and for every other node (at a nonzero level) this critical value is equal to its current weight, which is also $1$ by Invariant 3.1. Thus, every node other than $v$ satisfies Invariant 3.1 when Phase I ends. At this point, we initiate Phase II described below.

Phase II: This phase is defined by a continuous process which is driven by the node $v$. Specifically, in this phase the node $v$ continuously increases its level so as to decrease its weight $W_v$. The process stops when $W_v$ becomes equal to $1$. As in Phase I, during the same process every other node continuously changes its level so as to ensure that its weight remains fixed. Clearly, Invariant 3.1 is satisfied when Phase II ends.

“Work”: A proxy for update time. We cannot implement the above continuous process using any data structure, and hence we cannot meaningfully talk about the update time of our algorithm in the idealized, continuous setting. To address this issue, we introduce the notion of work, which is defined as follows. We say that our algorithm performs $\delta$ units of work whenever it changes the level of an edge by $\delta$. Note that $\delta$ can take any arbitrary fractional value. To see how the notion of work relates to the notion of update time from Section 2, recall Claim 2.2 and Claim 2.3. They state that whenever a node $v$ at level $\ell$ moves up or down one level, it takes $O(1 + |N_v(0, \ell)|)$ time to update the relevant data structures. A moment’s thought will reveal that in the former case (when the node moves up) the total work done is equal to $|N_v(0, \ell)|$, and in the latter case (when the node moves down) the work done is equal to $|N_v(0, \ell - 1)|$. Since $N_v(0, \ell - 1) \subseteq N_v(0, \ell)$, we have $|N_v(0, \ell - 1)| \le |N_v(0, \ell)|$. Thus, the work done by the algorithm is upper bounded by (and closely related to) the time spent to update the data structures. In light of this observation, we now focus on analyzing the work done by our algorithm in the continuous setting.

3.1 Work done in handling the insertion or deletion of an edge

We focus on the case of an edge-insertion. The case of an edge-deletion can be analyzed using similar ideas. Accordingly, suppose that an edge $(u, v)$, where $\ell(u) \ge \ell(v)$, gets inserted into the graph. We first analyze the work done in Phase I, which is driven by the movement of $u$. Without any loss of generality, we assume that $u$ is changing its level in such a way that its weight $W_u$ is decreasing at unit rate. Every other node at a nonzero level wants to preserve its weight at its current value. Thus, we have:

(1)  $dW_u/dt = -1$, and
(2)  $dW_z/dt = 0$ for every node $z \neq u$ with $\ell(z) > 0$.

A note on how the sets $U_v$, $D_v$ and the weights $W_v^{\uparrow}$, $W_v^{\downarrow}$ change with time: We will soon write down a few differential equations, which capture the behavior of the continuous process unfolding in Phase I during an infinitesimally small time-interval $[t, t + dt]$. Before embarking on this task, however, we need to clarify the following important issue. Under Assumption 3.2, at time $t$ the (nonzero) levels of the nodes take distinct, finite values. Thus, we have $\ell_t(x) \neq \ell_t(y)$ for any two distinct nodes $x, y$ at nonzero levels, where $\ell_t(x)$ denotes the level of a node $x$ at time $t$. The level of a node can only change by an infinitesimally small amount during the time-interval $[t, t + dt]$. This implies that if $\ell_t(x) > \ell_t(y)$ for any two such nodes $x, y$, then we also have $\ell_{t + dt}(x) > \ell_{t + dt}(y)$. In words, while writing down a differential equation we can assume that the sets $U_v$ and $D_v$ remain unchanged throughout the infinitesimally small time-interval $[t, t + dt]$. (The sets $U_v$ and $D_v$ will indeed change over a sufficiently long, finite time-interval. The key observation is that we can ignore this change while writing down a differential equation for an infinitesimally small time-interval.) But this observation does not apply to the weights $W_v^{\uparrow}$ and $W_v^{\downarrow}$, for the weight of an edge will change if we move the level of its higher endpoint by an infinitesimally small amount.

Let $s_x$ denote the speed of a node $x$: it is the rate at which the node is changing its level. Let $r(x, y)$ denote the rate at which the weight of the edge $(x, y)$ is changing. Note that:

(3)  $r(x, y) = -s_x \cdot w(x, y)$ for every edge $(x, y)$ whose higher endpoint is $x$ (i.e., $y \in D_x$), since $w(x, y) = e^{-\ell(x)}$ for such an edge.

Consider a node $z \neq u$ with $\ell(z) > 0$. By (2), we have $dW_z/dt = 0$. Hence, we derive that:

$\sum_{y \in U_z} r(y, z) + \sum_{y \in D_z} r(z, y) = 0.$

Rearranging the terms in the above equality, we get:

(4)  $\sum_{y \in D_z} r(z, y) = -\sum_{y \in U_z} r(y, z).$

Now, consider the node $u$. By (1), we have $dW_u/dt = -1$. Hence, we get:

(5)  $\sum_{y \in D_u} r(u, y) = -1,$

since the nodes above $u$ do not move during Phase I, so the weights of the edges joining $u$ to its up-neighbors remain unchanged.

Conditions (4) and (5) are reminiscent of a flow conservation constraint. Indeed, the entire process can be visualized as follows. Let $f(x, y) = |r(x, y)|$ be the flow passing through an edge $(x, y)$. We pump one unit of flow into the node $u$ (this follows from (1)). This flow then splits up evenly among all the edges $(u, y)$ with $y \in D_u$ (this follows from (5) and (3)). As we sweep across the system down to lower and lower levels, we see the same phenomenon: the flow coming into a node $z$ from its up-neighbors splits up evenly among its down-neighbors (this follows from (4) and (3)). Our goal is to analyze the work done by our algorithm. Towards this end, let $P(x, y)$ denote the power of an edge $(x, y)$. This is the amount of work being done by the algorithm on the edge per time unit, i.e., the rate at which the level of the edge is changing. Thus, from (3), we get:

(6)  $P(x, y) = |s_x| = \dfrac{f(x, y)}{w(x, y)} = e^{\ell(x, y)} \cdot f(x, y)$, where $x$ denotes the higher endpoint of the edge.

Let $P_x$ denote the power of a node $x$. We define it to be the amount of work being done by the algorithm for changing the level of $x$ per time unit. This is the sum of the powers of the edges whose levels change due to the node $x$ changing its own level, i.e., the edges joining $x$ to its down-neighbors. Let $F_x = \sum_{y \in D_x} f(x, y)$. From (3), it follows that either $r(x, y) \ge 0$ for all $y \in D_x$, or $r(x, y) \le 0$ for all $y \in D_x$. In other words, every term in the sum $\sum_{y \in D_x} r(x, y)$ has the same sign. So the quantity $F_x$ denotes the total flow moving from the node $x$ to its down-neighbors $D_x$, and we derive that:

(7)  $P_x = \sum_{y \in D_x} P(x, y) = e^{\ell(x)} \cdot \sum_{y \in D_x} f(x, y) = e^{\ell(x)} \cdot F_x.$

The total work done by the algorithm per time unit in Phase I is equal to $\sum_x P_x$. We would like to upper bound this sum. We now make the following important observations. First, since the flow only moves downward after getting pumped into the node $u$ at unit rate, conditions (4) and (5) imply that:

(8)  $F_x \le 1$ for every node $x$; more generally, the total flow crossing any fixed level is at most $1$.

Now, suppose that we get extremely lucky, and we end up in a situation where the levels of all the nodes are integers (this was the case in Section 2). In this situation, as the flow moves down the system to lower and lower levels, the powers of the nodes decrease geometrically as per condition (7). Hence, applying (8) we can upper bound the sum $\sum_x P_x$ by the geometric series $\sum_{k = 0}^{\ell(u)} e^{k} = O\big(e^{\ell(u)}\big)$. This holds since:

(9)  $\sum_{x : \ell(x) = k} P_x \;=\; e^{k} \cdot \sum_{x : \ell(x) = k} F_x \;\le\; e^{k}$ for every integer level $k$.

Thus, in Phase I the algorithm performs $O\big(e^{\ell(u)}\big)$ units of work per time unit. Recall that Phase I was initiated after the insertion of the edge $(u, v)$, which increased the weight $W_u$ by (say) $\delta$. During this phase the node $u$ decreases its weight at unit rate, and the process stops when $W_u$ becomes equal to $1$. Thus, from the discussion so far we expect Phase I to last for $\delta$ time-units. Accordingly, we also expect the total work done by the algorithm in Phase I to be at most $O\big(e^{\ell(u)}\big) \cdot \delta$. Since $\delta = w(u, v) = e^{-\ell(u)}$, we expect that the algorithm will do at most $O(1)$ units of work in Phase I. A similar reasoning applies for Phase II as well. This gives an intuitive explanation as to why an appropriately chosen variant of the BHI15 algorithm [8] should have an $O(1)$ update time for every constant $\epsilon$.
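Putting these pieces together as a single displayed bound (a sketch, under the base-$e$ weight convention and the flow reconstruction used above):

\[
\text{work in Phase I} \;\le\; \underbrace{\delta}_{\text{duration}} \cdot \underbrace{\sum_{k = 0}^{\ell(u)} e^{k}}_{\text{power, by (9)}} \;\le\; e^{-\ell(u)} \cdot \frac{e^{\ell(u) + 1}}{e - 1} \;=\; \frac{e}{e - 1} \;=\; O(1),
\]

and symmetrically for Phase II. This is the $O(1)$-work intuition that the discretized analysis of Section 4 turns into an amortized bound.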

3.2 Towards analyzing the “real-world”, discretized setting

Towards the end of Section 3.1 we made a crucial assumption, namely, that the levels of the nodes are integers. It turns out that if we want to enforce this condition, then we can no longer maintain an exactly maximal fractional matching and get an approximation ratio of $2$. Instead, we will have to be satisfied with a fractional matching that is approximately maximal, and the corresponding vertex cover we get will be a $(2+\epsilon)$-approximate minimum vertex cover. Furthermore, in the idealized continuous setting we could get away with moving the level of a node $v$, whose weight $W_v$ has only slightly deviated from $1$, by an arbitrarily small amount and thereby doing an arbitrarily small amount of work on the node at any given time. This is why the intuition we got out of the above discussion also suggests that the overall update time should be $O(1)$ in the worst case. This will no longer be possible in the real world, where the levels of the nodes need to be integers. In the real world, a node can move to a different integral level only after its weight has changed by a sufficiently large amount, and the work done to move the node to a different level can be quite significant. This is why our analysis in the discretized, real world gets an amortized (instead of worst-case) upper bound of $O(1/\epsilon^2)$ on the update time of the algorithm.

Coming back to the continuous world, suppose that we pump an infinitesimally small amount of weight $\delta$ into a node $v$ at level $\ell(v)$ at unit rate. The process, therefore, lasts for $\delta$ time units. During this process, the level of the node $v$ increases by an infinitesimally small amount so as to ensure that its weight remains equal to $1$. The work done per time unit on the node $v$ is equal to $P_v$. Hence, the total work done on the node $v$ during this event is given by:

$\int_0^{\delta} P_v \, dt \;=\; \int_0^{\delta} \sum_{y \in D_v} e^{\ell(v)} f(v, y) \, dt \;=\; \int_0^{\delta} e^{\ell(v)} F_v \, dt \;=\; \int_0^{\delta} e^{\ell(v)} \Big| \sum_{y \in D_v} r(v, y) \Big| \, dt \;=\; \int_0^{\delta} e^{\ell(v)} \, dt \;=\; \delta \cdot e^{\ell(v)}.$

In this derivation, the first two steps follow from (6) and (7). The third step holds since $r(v, y) \le 0$ for all $y \in D_v$, as the node $v$ moves up to a higher level. The fourth step follows from (5), or rather from its analogue for the present process: the down-edges of $v$ must absorb the weight being pumped in at unit rate. Thus, we note that:

Observation 3.3.

To change the weight $W_v$ by $\delta$, we need to perform $\delta \cdot e^{\ell(v)}$ units of work on the node $v$.

The intuition derived from Observation 3.3 will guide us while we design a potential function for bounding the amortized update time in the “real-world”. This is shown in Section 4.

4 An overview of our algorithm and the analysis in the “real-world”

To keep the presentation as modular as possible, we describe the algorithm itself in Section 4.1, which happens to be almost the same as the BHI15 algorithm from Section 2, with one crucial twist. Accordingly, our main goal in Section 4.1 is to point out the difference between the new algorithm and the old one. We also explain why this difference does not impact in any significant manner the approximation ratio derived in Section 2. Moving forward, in Section 4.2 we present a very high level overview of our new potential function based analysis of the algorithm from Section 4.1, which gives the desired bound of $O(1/\epsilon^2)$ on the amortized update time. See Part II for the full version of the algorithm and its analysis.

4.1 The algorithm

We start by setting up the necessary notations. We use all the notations introduced in the beginning of Section 2. In addition, for every node $v$ and every level $i$, we let $W_v(i)$ denote what the weight of $v$ would have been if we were to place $v$ at level $i$, without changing the level of any other node. Note that $W_v(i)$ is a monotonically (weakly) decreasing function of $i$, for the following reason. As we increase the level of $v$ (say) from $i$ to $i + 1$, all its incident edges $(u, v)$ with $\ell(u) \le i$ decrease their weights, and the weights of all its other incident edges remain unchanged.

Up-dirty and down-dirty nodes. We use the same definition of a down-dirty node as in Section 2 (see the second paragraph after Invariant 2.1) – a node $v$ is down-dirty iff $\ell(v) \ge 1$ and $W_v < 1/(1+\epsilon)$. But we slightly change the definition of an up-dirty node. Specifically, here we say that a node $v$ is up-dirty iff $W_v(\ell(v) + 1) > 1$, i.e., iff its weight would remain larger than one even if it were moved up by one level. As before, we say that a node is dirty iff it is either up-dirty or down-dirty.

Handling the insertion or deletion of an edge. The pseudocode for handling the insertion or deletion of an edge remains the same as in Figure 1 – although the conditions which specify when a node is up-dirty have changed. As far as the time spent in implementing the subroutine in Figure 1 is concerned, it is not difficult to come up with suitable data structures so that Claim 2.2 and Claim 2.3 continue to hold.

Approximation ratio. Clearly, this new algorithm ensures that there is no dirty node when it is done with handling the insertion or deletion of an edge. We can no longer claim, however, that Invariant 2.1 is satisfied. This is because we have changed the definition of an up-dirty node. To address this issue, we make the following key observation: if a node $v$ is not up-dirty according to the new definition, then we must have $W_v \le 1 + \epsilon$. To see why this is true, suppose that we have a node $v$ with $W_v > 1 + \epsilon$ that is not up-dirty. If we move this node up by one level, then every edge incident on $v$ decreases its weight by at most a factor of $(1+\epsilon)$, and hence the weight $W_v$ also decreases by a factor of at most $(1+\epsilon)$. Therefore, we infer that $W_v(\ell(v) + 1) > 1$, and the node $v$ is in fact up-dirty. This leads to a contradiction. Hence, it must be the case that if a node $v$ is not up-dirty, then $W_v \le 1 + \epsilon$. This observation implies that if there is no dirty node, then the following conditions are satisfied: (1) $W_v \le 1 + \epsilon$ for all nodes $v \in V$, and (2) $W_v \ge 1/(1+\epsilon)$ for all nodes $v$ at levels $\ell(v) \ge 1$. Accordingly, we get a valid fractional matching if we scale down the edge-weights by a factor of $(1+\epsilon)$. As before, the set of nodes with $W_v \ge 1/(1+\epsilon)$ forms a valid vertex cover. A simple counting argument (see the paragraph after Invariant 2.1) implies that the size of this fractional matching is within a factor of $2(1+\epsilon)^2$ of the size of this vertex cover. Hence, we get an approximation ratio of $2(1+\epsilon)^2 = 2 + O(\epsilon)$. Basically, the approximation ratio degrades only by a factor of $(1+\epsilon)$ compared to the analysis in Section 2.

4.2 Bounding the amortized update time

Our first task is to find a discrete, real-world analogue of Observation 3.3 (which holds only in the continuous setting). This is done in Claim 4.1 below. It relates the time required to move a node up from level $\ell$ to level $\ell + 1$ with the change in its weight due to the same event.

Claim 4.1.

Suppose that a node $v$ is moving up from level $\ell$ to level $\ell + 1$ during an iteration of the While loop in Figure 1. Then it takes $O\big(1 + \epsilon^{-1} \cdot (1+\epsilon)^{\ell + 1} \cdot (W_v(\ell) - W_v(\ell + 1))\big)$ time to update the relevant data structures during this iteration.

Proof.

As the node $v$ moves up from level $\ell$ to level $\ell + 1$, the weight of every edge $(u, v)$ with $u \in N_v(0, \ell)$ decreases from $(1+\epsilon)^{-\ell}$ to $(1+\epsilon)^{-(\ell + 1)}$, whereas the weight of every other edge remains unchanged. Hence, it follows that:

$W_v(\ell) - W_v(\ell + 1) \;=\; |N_v(0, \ell)| \cdot \big( (1+\epsilon)^{-\ell} - (1+\epsilon)^{-(\ell + 1)} \big) \;=\; |N_v(0, \ell)| \cdot \epsilon \cdot (1+\epsilon)^{-(\ell + 1)}.$

Rearranging the terms in the above equality, we get:

$|N_v(0, \ell)| \;=\; \epsilon^{-1} \cdot (1+\epsilon)^{\ell + 1} \cdot \big( W_v(\ell) - W_v(\ell + 1) \big).$

The desired proof now follows from Claim 2.2. ∎

Next, consider a node $v$ that is moving down from level $\ell$ to level $\ell - 1$. We use a different accounting scheme to bound the time spent during this event. This is because the work done on the node $v$ during such an event is equal to $|N_v(0, \ell - 1)|$, but it takes $O(1 + |N_v(0, \ell)|)$ time to update the relevant data structures (see Claim 2.3 and the last paragraph before Section 3.1). Note that $N_v(0, \ell - 1) \subseteq N_v(0, \ell)$, and $|N_v(0, \ell - 1)|$ can be much smaller than $|N_v(0, \ell)|$. Thus, although it is possible to bound the work done during this event in a manner analogous to Claim 4.1, the bound obtained in that manner might be significantly less than the actual time spent in updating the data structures during this event. Instead, we bound the time spent during this event as specified in Claim 4.2 below.

Claim 4.2.

Consider a node $v$ moving down from level $\ell$ to level $\ell - 1$ during an iteration of the While loop in Figure 1. Then it takes $O\big(1 + (1+\epsilon)^{\ell - 1}\big)$ time to update the relevant data structures during this iteration.

Proof.

By Claim 2.3, it takes $O(1 + |N_v(0, \ell)|)$ time to update the relevant data structures when the node $v$ moves down from level $\ell$ to level $\ell - 1$. We will now show that $|N_v(0, \ell)| < (1+\epsilon)^{\ell - 1}$. To see why this is true, first note that the node $v$ moves down from level $\ell$ only if it is down-dirty at that level (see step 4 in Figure 1). Hence, we get: $W_v < 1/(1+\epsilon)$. When the node $v$ is at level $\ell$, every edge $(u, v)$ with $u \in N_v(0, \ell)$ has a weight $(1+\epsilon)^{-\ell}$. It follows that $|N_v(0, \ell)| \cdot (1+\epsilon)^{-\ell} \le W_v < 1/(1+\epsilon)$. Rearranging the terms in the resulting inequality, we get: $|N_v(0, \ell)| < (1+\epsilon)^{\ell - 1}$. ∎

Node potentials and energy. In order to bound the amortized update time, we introduce the notions of potentials and energy of nodes. Every node $v$ stores, at every level $i$, a nonnegative up-potential and down-potential, together with corresponding amounts of up-energy and down-energy. The up-potential and the up-energy at level $i$ are used to account for the time spent in moving the node $v$ up from level $i$ to level $i + 1$. Similarly, the down-potential and the down-energy at level $i$ are used to account for the time spent in moving the node $v$ down from level $i$ to level $i - 1$. Each unit of potential at level $i$ results in $(1+\epsilon)^{i}$ units of energy. Accordingly, we refer to this quantity as the conversion rate between potential and energy at level $i$:

(10)  conversion rate at level $i$ $= (1+\epsilon)^{i}$.

The potentials stored by a node $v$ across different levels depend on its weight $W_v$. To be more specific, they depend on whether $W_v \ge 1/(1+\epsilon)$ or $W_v < 1/(1+\epsilon)$. We first define the potentials stored by a node $v$ with weight $W_v \ge 1/(1+\epsilon)$. Throughout the following discussion, we crucially rely upon the fact that $W_v(i)$ is a monotonically (weakly) decreasing function of $i$. For any node $v$ with $W_v \ge 1/(1+\epsilon)$, let $k_v$ be the maximum level $i$ where $W_v(i) \ge 1/(1+\epsilon)$. The potentials are then defined as follows.