Simple dynamic algorithms for Maximal Independent Set and other problems

04/05/2018 ∙ by Manoj Gupta, et al. ∙ Universität Wien ∙ IIT Gandhinagar

We study three fundamental graph problems in the dynamic setting, namely, Maximal Independent Set (MIS), Maximum Matching and Maximum Flows. We report surprisingly simple and efficient algorithms for them in different dynamic settings. For MIS we improve the state of the art upper bounds, whereas for incremental Maximum Matching and incremental unit-capacity Maximum Flow, we match the state of the art lower bounds. Recently, Assadi et al. [STOC18] showed that fully dynamic MIS can be maintained in O(min{Δ, m^{3/4}}) amortized update time. We improve this bound to O(min{Δ, m^{2/3}}). In the incremental setting, we further improve this bound to O(min{Δ, √m}). Also, we show that a simple algorithm can maintain MIS optimally under fully dynamic vertex updates and decremental edge updates. Further, Assadi et al. [STOC18] reported hardness in achieving o(n) worst case update complexity for dynamic MIS. We circumvent the problem by proposing a model for dynamic MIS which does not maintain the MIS explicitly, but rather allows queries on whether a vertex belongs to some MIS of the graph. In this model we prove that fully dynamic MIS can be maintained in worst case O(min{Δ, √m}) update and query time. Finally, similar to Assadi et al. [STOC18], all our algorithms can be extended to the distributed setting with update complexity of O(1) rounds and adjustments. Dahlgaard [ICALP16] presented lower bounds of amortized Ω(n) update time for maintaining incremental unweighted Maximum Flow and incremental Maximum Cardinality Matching. We report trivial extensions of two classical algorithms, namely the incremental reachability and blossoms algorithms, which match these lower bounds. For completeness, we also report folklore algorithms for these problems in the fully dynamic setting requiring O(m) worst case update time.


1 Introduction

In the last two decades, there has been a flurry of results in the area of dynamic graph algorithms. The motivation behind studying such problems is that many graphs encountered in the real world keep changing with time. These changes can be in the form of addition and/or deletion of edges and/or vertices. The aim of a dynamic graph algorithm is to report the solution of the concerned graph problem after every such update in a graph. One can trivially use the best static algorithm to compute the solution from scratch after each such update. Hence, the aim is to update the solution more efficiently as compared to the best static algorithm. Various graph problems studied in the dynamic setting include connectivity [27, 28, 31], minimum spanning tree [14, 17, 36], reachability [12, 41, 39], shortest path [43, 5, 11], matching [8, 4, 26], etc. However, three important graph problems that are perhaps not sufficiently addressed in the literature include independent sets, maximum matching (exact) and maximum flows.

For a given graph G = (V, E) with n vertices and m edges where the maximum degree of a vertex is Δ, a set of vertices I ⊆ V is called an independent set if no two vertices in I share an edge in G, i.e., for all u, v ∈ I, (u, v) ∉ E. Computing the maximum cardinality independent set is known to be NP-Hard [21]. However, a simple greedy algorithm is known to compute a Maximal Independent Set (MIS) in O(n + m) time, where an MIS is any independent set I such that no proper superset I' ⊃ I is an independent set of the graph. Note that an MIS is not a good approximation of the maximum cardinality independent set, unlike other known problems such as maximal matching (which is a 2-approximation of maximum matching). (Footnote 1: Consider a star graph where a vertex v has an edge to all other vertices in V. The maximum independent set is V ∖ {v} of size n − 1, whereas {v} is a valid MIS. Here the approximation factor is n − 1.)

Fully Dynamic Maximal Independent Set

In the dynamic setting, one can trivially maintain an MIS in O(m) update time, by computing the MIS from scratch after every update. Moreover, the adjustment complexity of this algorithm can be Ω(n). (Footnote 2: The adjustment complexity refers to the number of vertices that enter or leave the MIS after an update.) Until recently no non-trivial algorithm was known to maintain MIS in o(m) time.

The problem of MIS has been extensively studied in the distributed setting [3, 22, 30]. The first dynamic MIS algorithm was given by Censor-Hillel et al. [9], showing the maintenance of dynamic MIS in expected amortized O(1) rounds with O(Δ) messages per update, where Δ is the maximum degree of a vertex in G. This also translates to a centralized algorithm with an amortized update time of O(Δ) [2]. Recently, Assadi et al. [2] presented a deterministic centralized fully dynamic MIS algorithm requiring O(m^{3/4}) amortized time and O(1) amortized adjustments per edge update. Further, the update time of the centralized algorithm was recently improved [37] for low arboricity graphs. In this paper, we present surprisingly simple deterministic algorithms that improve the results for general arboricity graphs.

Theorem 1.1 (Fully dynamic MIS (centralized)).

Given any graph G with n vertices and m edges, an MIS can be maintained in O(min{Δ, m^{2/3}}) amortized time and O(1) amortized adjustments per insertion or deletion of an edge or a vertex, where Δ is the maximum degree of a vertex in G.

Remark: A limitation of our result over Assadi et al. [2] is that it cannot be trivially extended to the distributed setting with efficient round and adjustment complexity. Trivial adaptation of our algorithm in the distributed model requires amortized O(m^{1/3}) rounds and adjustments per update (owing to recomputing the MIS over the heavy vertices from scratch), whereas that of [2] requires O(1) rounds and adjustments per update. The message complexity in both cases matches the update time of the centralized setting.

Additionally, we present some other minor results related to dynamic MIS, Maximum Flow, and Maximum Matching as follows.

  1. Hardness of dynamic MIS and Incremental MIS
We first discuss the hardness of different dynamic updates for maintaining an MIS and report that the hardest update for maintaining an MIS is that of edge insertion. For the remaining dynamic settings (fully dynamic vertex updates and decremental edge updates), a simple algorithm [2] maintains the MIS optimally, i.e., the total update time is of the order of the input size. Hence, in addition to fully dynamic edge updates, the only interesting dynamic setting is handling edge insertions, where we again present a very simple algorithm to prove the following.

    Theorem 1.2 (Incremental MIS).

Given any graph G with n vertices and m edges, an MIS can be maintained in O(min{Δ, √m}) amortized time and O(1) amortized adjustments per edge insertion, where Δ is the maximum degree of a vertex in G.

Remark: Both these algorithms (for incremental MIS and the remaining simple updates) trivially extend to the distributed CONGEST model with amortized O(1) rounds per update.

  2. Worst case bounds for dynamic MIS
All the previous results for dynamic MIS primarily focus on amortized guarantees. This was explained by Assadi et al. [2] by proving that the adjustment complexity (and hence update time) of any dynamic MIS algorithm must be Ω(n) in the worst case. Hence, in order to achieve better worst case bounds, we are required to consider a relaxed model. Thus, we relax the requirement to explicitly maintain the MIS after each update. Rather, we allow queries of the form In-Mis(v), which reports whether a vertex v is present in some MIS of the updated graph, ensuring the following properties. Firstly, the responses to all the queries after an update are consistent with some MIS of the updated graph. Secondly, the adjustment complexity of the algorithm is amortized O(1) per update (considering only updates), or worst case O(1) per update and query. Such a model has been previously studied for several problems including MIS [40, 1, 35]. In this relaxed model we show that

    Theorem 1.3 (Fully dynamic MIS (worst case)).

Given any graph G with n vertices and m edges, an MIS can be implicitly maintained under fully dynamic edge updates requiring O(1) adjustments per update and query, which allows queries of the form In-Mis(v), where both update and query require worst case O(min{Δ, √m}) time.

Remark: It trivially extends to the CONGEST model with O(1) rounds per update and query.

  3. Dynamic Maximum Flow and Maximum Matching

The maximum flow and maximum matching problems are among the most studied combinatorial optimization problems, having many practical applications. Recently, Dahlgaard [10] proved conditional lower bounds for partially dynamic problems including maximum flow and maximum matching. Assuming the correctness of the OMv conjecture, he proved that maintaining incremental Maximum Flow or Maximum Matching for unweighted graphs requires Ω(n) update time (even amortized). We report trivial extensions of two classical algorithms based on augmenting paths, namely the incremental reachability algorithm [29] and the blossoms algorithm [13, 19], which match these lower bounds.

For the sake of completeness, we also report the folklore algorithms to update the maximum flow of an unweighted (unit-capacity) graph and the maximum cardinality matching, in O(m) worst case time using simple reachability queries. To the best of our knowledge, this result is widely known but has so far not appeared in the literature.

2 Overview

Let the given graph be G = (V, E) with n vertices and m edges, where the maximum degree of a vertex is Δ. The degree of a vertex v shall be denoted by deg(v). We shall represent the currently maintained MIS by M. We shall now give a brief overview of our results and the difference of our approach from the current state of the art.

A trivial static algorithm can compute an MIS of a graph in O(n + m) time. It visits each vertex v and checks whether any of its neighbours is in M. If no such neighbour exists, it adds v to M, clearly taking O(1) time for each edge while visiting its endpoints.
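The greedy procedure above can be sketched in a few lines of Python (a minimal illustration; the adjacency-dictionary representation and the name greedy_mis are ours, not from the paper):

```python
def greedy_mis(adj):
    """Greedy static MIS: scan the vertices in any order and add a vertex
    to the MIS iff none of its neighbours was added before it.
    adj maps each vertex to an iterable of its neighbours."""
    mis = set()
    for v in adj:
        if not any(u in mis for u in adj[v]):
            mis.add(v)
    return mis
```

Each edge is inspected O(1) times across the two scans of its endpoints, giving the O(n + m) bound.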

Assadi et al. [2] described a simple dynamic algorithm (henceforth referred to as Simple) requiring O(Δ) amortized time per update, where essentially each vertex maintains a count of the number of its neighbours in M. They also described an improved algorithm requiring O(m^{3/4}) time, which is based on a complicated case analysis. The algorithm essentially divides the vertices into four sets based on their degrees, where the count of all the vertices except the low degree vertices L is maintained exactly. For vertices in L, instead some partial estimate of count is maintained, ignoring the high degree vertices in M. The key difference of this improved algorithm from Simple is the following: instead of adding a vertex to M only when none of its neighbours is in M (i.e. when the true value of count is zero), in some cases the algorithm adds a low degree vertex to M even when merely the partial estimate of count is zero. As a result, the algorithm may need to remove some vertices from M to ensure correctness of the algorithm.

Now, the simple algorithm Simple can be used to maintain an MIS under all possible graph updates. We first provide a mildly tighter analysis of Simple to demonstrate what kind of dynamic settings are harder for the MIS problem. We show that Simple solves the problem optimally for all kinds of updates except edge insertions. Hence, the two dynamic settings in which the MIS problem is interesting are the fully dynamic and the incremental settings under edge updates. We improve the state of the art for dynamic MIS in both these settings using very simple algorithms, which are essentially based on Simple as follows.

Our fully dynamic algorithm processes the low degree vertices (say, having degree < Δ_c) totally independently of the high degree vertices. Note that this simplifies the approach of [2], since the key difference of their algorithm from Simple is the partial independence of processing with respect to higher degree vertices. Hence, we maintain the MIS M_L of the subgraph G_L induced by the low degree vertices, irrespective of their high degree neighbours in M. Using Simple, the MIS of this subgraph can be maintained in O(Δ_c) amortized update time. After each update, the MIS of the high degree vertices not adjacent to any low degree vertex in M_L (i.e., having no low degree neighbour in M) can be recomputed from scratch using the trivial static algorithm. This gives a trade off, since the size of the subgraph induced by the high degree vertices (and hence the time taken by the static algorithm) decreases as Δ_c is increased. Hence, choosing an appropriate Δ_c results in O(min{Δ, m^{2/3}}) amortized update time. Note that recomputing the MIS for high degree vertices from scratch may lead to a larger adjustment and round complexity in the distributed setting.

In the incremental setting, we show that Simple indeed takes Θ(Δ) amortized time per update (see Appendix A). Moreover, we improve Simple using a simple modification which prioritizes the removal of low degree vertices from M instead of the arbitrary choice made by Simple. This simple modification improves the amortized update time to O(min{Δ, √m}), which is also shown to be tight (see Appendix A).

For the worst case complexity of a fully dynamic MIS algorithm, Assadi et al. [2] showed strong lower bounds, where a single update may lead to Ω(n) vertices entering or leaving M. Hence, for achieving better worst case bounds, our relaxed model essentially maintains explicitly merely an independent set rather than an MIS. Thus, the MIS is gradually built over several queries, allowing better worst case complexity. We present a simple algorithm for maintaining fully dynamic MIS in this model requiring O(min{Δ, √m}) worst case time for update and query.

3 Dynamic MIS

Assadi et al. [2] demonstrated a simple algorithm Simple for maintaining fully dynamic MIS using O(Δ) amortized time per update. We first briefly describe the algorithm and its analysis, followed by a tighter analysis that can be used to identify the kinds of dynamic updates for which it is harder to maintain an MIS.

3.1 Simple dynamic algorithm  [2]

The following algorithm is the most natural approach to study the dynamic MIS problem. It essentially maintains, for each vertex, a count of the number of its neighbours in the MIS M. It is easy to see that the count for each vertex can be initialized in O(n + m) time using the simple greedy algorithm for computing the MIS of the initial graph.

Now, under dynamic updates the count of each vertex needs to be maintained explicitly. Hence, whenever a vertex v enters or leaves M, the count of its neighbours is updated in O(deg(v)) time. On insertion of a vertex v, the count of v is computed in O(deg(v)) time. In case this count is zero, v is added to M. In case of deletion of a vertex v, an update is only required when v ∈ M, where v is simply removed from M. In case the count of any neighbour of v reduces to zero, it is added to M. In case of deletion of an edge (u, v), an update is required only when one of the endpoints (say u) is in M. If the count of v reduces to zero, it is added to M. Finally, on insertion of an edge (u, v), an update is required only in case both u, v ∈ M, where either one of them (say u) is removed from M. Again, if the count of any neighbour of u reduces to zero, it is added to M.

Notice that in case of each update at most one vertex may be removed from M and several vertices may be added to M, where both the addition and the removal of a vertex v take O(deg(v)) = O(Δ) time. Hence, whenever a vertex is removed from M, O(1) adjustments and O(Δ) work are charged for both this removal as well as the next insertion (if any) into M. The initial charge required is the sum of the degrees of all the vertices in M, which is O(m). As a result, a fully dynamic MIS can be maintained in O(1) amortized adjustments and O(Δ) amortized time per update.
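The algorithm described above can be sketched as follows (a minimal Python illustration restricted to edge updates; the class and method names are ours, not from [2]):

```python
class SimpleMIS:
    """Counter-based dynamic MIS (edge updates only, for brevity).
    Invariant: count[v] = number of neighbours of v currently in M."""

    def __init__(self, vertices):
        self.adj = {v: set() for v in vertices}
        self.count = {v: 0 for v in vertices}
        self.M = set(vertices)          # empty graph: every vertex is in the MIS

    def _add_to_M(self, v):
        self.M.add(v)
        for u in self.adj[v]:
            self.count[u] += 1

    def _remove_from_M(self, v):
        self.M.remove(v)
        for u in self.adj[v]:
            self.count[u] -= 1
            if self.count[u] == 0 and u not in self.M:
                self._add_to_M(u)       # neighbour freed up: it enters M

    def insert_edge(self, u, v):
        self.adj[u].add(v); self.adj[v].add(u)
        if u in self.M: self.count[v] += 1
        if v in self.M: self.count[u] += 1
        if u in self.M and v in self.M:
            self._remove_from_M(u)      # arbitrary choice of endpoint

    def delete_edge(self, u, v):
        self.adj[u].discard(v); self.adj[v].discard(u)
        if v in self.M:
            u, v = v, u                 # ensure the M-endpoint (if any) is u
        if u in self.M:
            self.count[v] -= 1
            if self.count[v] == 0:
                self._add_to_M(v)
```

Vertex insertion and deletion follow the same pattern: compute or discard the vertex's counter and propagate the O(deg(v)) counter changes to its neighbours.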

Theorem 3.1 (Fully dynamic MIS [2]).

Given any graph G with n vertices and m edges, an MIS can be maintained in O(1) amortized adjustments and O(Δ) amortized time per insertion or deletion of an edge or vertex, where Δ is the maximum degree of a vertex in G.

3.1.1 Tighter analysis

We shall now show a mildly tighter analysis of the algorithm, which is again very simple and apparent from the algorithm itself. Instead of generalizing the update time to O(Δ), we show that the update time of the algorithm is O(deg(v)), where v is the vertex that is removed from M, or the vertex inserted into the graph (for vertex insertions); otherwise the charged update time is O(1).

This analysis is again fairly straightforward using arguments similar to those of the previous section, which stated that a vertex is charged twice when it is removed from M to pay for the future cost of its insertion into M. The only difference in our analysis is as follows: since we charge the update not with the maximum degree Δ but with the exact degree of a vertex v, this degree may change between the time it was charged and the time the charge is used. In particular, consider a vertex v which was charged when removed from M or inserted into the graph. Now, on being inserted back into M, its new deg(v) can be much higher than its old deg(v). Hence, we need to explicitly associate this increase of degree to the update which led to it. To this end, we analyze the algorithm using the following potential function: Φ = Σ_{v ∉ M} deg(v). Thus, the amortized cost (time) of an update is the sum of the actual work done and the change in potential.

The insertion of a vertex v requires amortized O(deg(v)) time: O(deg(v)) actual work to add the edges, and an increase in Φ of at most 2 deg(v). The potential can increase by deg(v) because of v itself (if v ∉ M), and by up to deg(v) because of its neighbours not in M (the change in their degrees). For the remaining updates, note that the work done for the addition of a vertex v to M is always balanced by the corresponding decrease in potential by deg(v). We thus only focus on the work done for removing a vertex from M and the change in potential due to the change in the degrees of vertices. The deletion of a vertex v requires amortized O(1) time, as the time required to remove the edges of v is O(deg(v)) and the potential decreases by at least deg(v) (if v ∉ M then Φ reduces due to v, else it reduces due to the neighbours of v, none of which are in M). The insertion of an edge (u, v) requires amortized O(1) time if u and v are not both in M, for the O(1) work done to add the edge to the graph and the increase of at most 2 in Φ because of the degrees of its endpoints. In case both u, v ∈ M, the algorithm removes one vertex (say u) from M, which requires amortized O(deg(u)) time: O(deg(u)) actual time to update all the neighbours of u, and an increase in potential of at most deg(u) + 2, because of u and the endpoints of the new edge. The deletion of an edge again takes amortized O(1) time, for the O(1) work done to remove the edge and the decrease in Φ. Finally, since at most one vertex (if any) is removed from M during an update, the adjustment complexity is O(1) amortized, as the removal also accounts for the future insertion of the vertex into M. Thus, we have the following theorem.

Theorem 3.2.

Given any graph G with n vertices and m edges, an MIS can be maintained using O(1) amortized adjustments per update, where the amortized update time is O(deg(v)) if v is the only vertex (if any) that is removed from M, or the vertex inserted into G (vertex insertion); otherwise the amortized update time is O(1).

Remark: This does not imply a tighter amortized bound for the fully dynamic algorithm, but is merely described to aid the analysis of subsequent algorithms. Further, since the adjustment complexity is O(1) amortized, the algorithm can be trivially adapted to the distributed model [2] requiring O(1) amortized rounds per update.

3.1.2 Hardness of Dynamic MIS

Using Theorem 3.2 we can identify the hardest form of update in the case of dynamic MIS. Additionally, we use the fact that a vertex v is removed from M only in case of an edge insertion or when v itself is deleted. Consider the case of fully dynamic MIS under only vertex updates. Here each vertex v can enter M only once, either on being inserted or when all its neighbours in M are deleted. Thus, it requires O(deg(v)) total time throughout its lifetime, requiring O(n + m) time overall. Similarly, consider the case of decremental MIS under edge updates. Here again, each vertex can enter M exactly once, and never leave M. Hence, using Theorem 3.2, each update incurs an amortized cost of O(1), requiring O(n + m) total time. Thus, the algorithm works optimally under fully dynamic vertex updates and under decremental edge updates.

As a result, the only update for which the simple algorithm does not solve the problem optimally is that of edge insertions, leaving the two problems of incremental MIS and fully dynamic MIS under edge updates. Under the current analysis both these algorithms take O(mΔ) total time, which may not be optimal. Thus, we shall now focus on solving the two problems better than the improved fully dynamic MIS algorithm by Assadi et al. [2] requiring O(min{Δ, m^{3/4}}) amortized time.

4 Improved algorithm for Fully Dynamic MIS

In addition to the simple O(Δ) amortized time algorithm Simple, Assadi et al. [2] presented a substantially complex fully dynamic algorithm requiring O(m^{3/4}) amortized time per update. We show a very simple extension to Simple that achieves O(min{Δ, m^{2/3}}) amortized time per update. The core idea of the improved algorithm of Assadi et al. [2] is to give partial preference to low degree vertices when they are being inserted into M. More precisely, in some cases they allow a low degree vertex to be inserted into M despite having high degree neighbours in M. This is followed by the removal of these high degree neighbours from M to maintain the MIS property. We essentially use the same idea with a stronger preference order: the low degree vertices are always inserted into M despite having high degree neighbours in M.

The main idea is as follows. We divide the vertices of the graph into light vertices L having degree < Δ_c, and heavy vertices H having degree ≥ Δ_c, for some fixed threshold Δ_c. Let the subgraphs induced by the vertices in L and H be G_L and G_H respectively. We use the simple algorithm Simple to maintain the MIS M_L of G_L in O(Δ_c) amortized time. After every such update, we can afford to rebuild the MIS for G_H from scratch using the trivial static algorithm. Since the total number of heavy vertices can be at most 2m/Δ_c, the time taken to build the MIS for the heavy vertices is O((m/Δ_c)²). Choosing Δ_c = m^{2/3} we get an O(m^{2/3}) amortized update time algorithm. However, if Δ < Δ_c then the entire graph is present in G_L and we have an empty G_H. Hence, in this case our algorithm merely performs Simple on the whole graph, resulting in O(min{Δ, m^{2/3}}) amortized update time. Note that the main reason for the faster update of this algorithm (compared to Simple) is the following: when a heavy vertex enters or leaves M, it does not inform its light neighbours.

Implementation Details

We shall now describe a few low level details regarding the implementation of the algorithm. Each vertex of both G_L and G_H stores a count countL of its light neighbours in M, i.e., its neighbours in M_L. This countL can be maintained easily, as whenever a light vertex enters or leaves M, it informs all its neighbours (both heavy and light).

As we have mentioned before, we use algorithm Simple to maintain a maximal independent set for all the light vertices. Thus, at each step we may have to rebuild the MIS for the heavy vertices from scratch. While rebuilding the MIS for G_H we only have to consider those heavy vertices whose countL = 0, i.e., which do not have any light neighbour in M. We can rebuild the MIS of G_H from scratch in O((m/Δ_c)²) time using the trivial static algorithm. We have already argued that the number of heavy vertices is at most 2m/Δ_c. Thus, the update time of our algorithm for light vertices is O(Δ_c) amortized (by algorithm Simple) and the update time of our algorithm for heavy vertices is O((m/Δ_c)²). Choosing Δ_c = m^{2/3}, the update time of our algorithm becomes O(min{Δ, m^{2/3}}).

Finally, note that the value of Δ_c has to be changed during the fully dynamic procedure, because of the change in the value of m. This can be achieved using the standard technique of periodic rebuilding, which can be described using phases as follows. We choose Δ_c = m_0^{2/3}, where m_0 denotes the number of edges in the graph at the start of a phase. Hence at the start of the algorithm, m_0 = m. Whenever m decreases to m_0/2 or increases to 2m_0, we end our current phase and start a new phase. Also, we re-initialize countL and rebuild M, but using the new value of m_0, and hence a new Δ_c. Note that the O(m_0) cost of this rebuilding at the start of each phase is accounted to the Ω(m_0) edge updates during the preceding phase. Thus, we have the following theorem.
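The per-update rebuild over the heavy vertices can be sketched as follows (an illustrative fragment; the function name and the representation of countL as a dictionary are ours). Only heavy vertices with countL = 0 compete, and only heavy-heavy edges are scanned, which is what bounds the rebuild by O(|H| + |E_H|) = O((m/Δ_c)²):

```python
def rebuild_heavy_mis(heavy, heavy_adj, count_light):
    """Greedily rebuild the MIS over the heavy vertices from scratch.
    heavy: iterable of heavy vertices;
    heavy_adj: adjacency restricted to heavy-heavy edges;
    count_light[v]: number of light neighbours of v in M_L.
    A heavy vertex is eligible only if it has no light neighbour in M_L."""
    mis_h = set()
    for v in heavy:
        if count_light[v] == 0 and not any(u in mis_h for u in heavy_adj[v]):
            mis_h.add(v)
    return mis_h
```

The overall MIS after an update is then M_L ∪ mis_h.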

Theorem 1.1 (Fully dynamic MIS).

Given any graph G with n vertices and m edges, an MIS can be maintained in O(min{Δ, m^{2/3}}) amortized time per insertion or deletion, where Δ is the maximum degree of a vertex in G.

5 Improved Algorithm for Incremental MIS

We shall now consider the incremental MIS problem and show that if we restrict the updates to edge insertions only, we can achieve a faster amortized update time of O(min{Δ, √m}). In this algorithm, we again give preference to a vertex being in the MIS based on its degree. However, instead of dividing the vertices into sets L and H explicitly, we simply consider the exact degree of the vertex.

Recall that in case of insertion of an edge, the simple algorithm Simple updates M only when both the end vertices belong to M. This leads to one of the vertices (say u) being removed from M, and the count of its neighbours being updated. Several of these neighbours can now have count zero and hence enter M. However, using Theorem 3.2 we know that the amortized time required by the update is only O(deg(u)), the degree of the vertex removed from M.

Our main idea is to modify Simple to ensure that when an edge is inserted between two vertices in M, the end vertex with the lower degree is removed from M. Note that in the original Simple, the end vertex to be removed from the MIS is chosen arbitrarily. We show that this key difference of choosing the lower degree (instead of an arbitrary) end vertex to be removed from M proves crucial in improving the amortized update time. We further support this by worst case examples demonstrating the tightness of the upper bounds of the two algorithms (see Appendix A).
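The modification can be sketched as follows (a self-contained, insertion-only Python illustration; the naming is ours). It differs from the arbitrary-eviction rule in a single line: the choice of w:

```python
class IncrementalMIS:
    """Insertion-only counter-based MIS. On a conflict edge, the
    LOWER-degree endpoint is evicted from M (ties broken arbitrarily)."""

    def __init__(self, vertices):
        self.adj = {v: set() for v in vertices}
        self.count = {v: 0 for v in vertices}   # neighbours currently in M
        self.M = set(vertices)

    def insert_edge(self, u, v):
        self.adj[u].add(v); self.adj[v].add(u)
        if u in self.M: self.count[v] += 1
        if v in self.M: self.count[u] += 1
        if u in self.M and v in self.M:
            # key modification: evict the endpoint of lower degree
            w = u if len(self.adj[u]) <= len(self.adj[v]) else v
            self.M.remove(w)
            for x in self.adj[w]:
                self.count[x] -= 1
                if self.count[x] == 0 and x not in self.M:
                    self.M.add(x)                # freed neighbour re-enters M
                    for y in self.adj[x]:
                        self.count[y] += 1
```

Since an evicted vertex always has degree at most that of the surviving endpoint, high degree vertices are evicted only by other high degree vertices, which is the fact the analysis below exploits.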

5.0.1 Analysis

We shall now analyze our incremental MIS algorithm. Consider the insertion of an edge (u, v), where without loss of generality u is the lower degree vertex amongst u and v. Using Theorem 3.2, we know that the amortized update time of the algorithm is O(deg(u)) in case both u, v ∈ M, and O(1) otherwise. Hence, the total update time taken by the algorithm is the sum of these costs over all m insertions.

We shall analyze the time charged to a vertex u during two phases: when it is light (deg(u) ≤ √m), and when it becomes heavy (deg(u) > √m). If u is light, then we simply remove u from M in O(√m) amortized time (as deg(u) ≤ √m). If u is heavy, then v must also be heavy (as deg(v) ≥ deg(u)). However, there are only O(√m) heavy vertices in the graph. Hence, a heavy vertex can be removed O(√m) times from M due to its heavy neighbours. Further, the time taken for each such removal is O(deg_f(u)), where deg_f(u) is the final degree of u after the end of all updates. Thus, the total update time is calculated as follows.

T ≤ Σ_{light removals} O(√m) + Σ_{heavy u} O(√m) · O(deg_f(u))
  ≤ O(m√m) + O(√m) · Σ_{u} deg_f(u)
  ≤ O(m√m) + O(√m) · O(m)
  = O(m√m)

Hence, the total update time of our incremental algorithm is O(m√m), i.e., O(√m) amortized per insertion. Also, since it is simply a special case of the simple MIS algorithm Simple [2], we also have an adjustment complexity of O(1) amortized, and an upper bound of O(mΔ) on the total time, making the amortized update time O(min{Δ, √m}).

Theorem 1.2 (Incremental MIS).

Given any graph G with n vertices and m edges, an MIS can be maintained in O(min{Δ, √m}) amortized time and O(1) amortized adjustments per edge insertion, where Δ is the maximum degree of a vertex in G.

Note: This technique cannot be trivially extended to the fully dynamic setting. This is because the crucial fact exploited here is that the high degree vertices are removed fewer times from the MIS, which cannot be ensured in a fully dynamic environment. Also, being a special case of Simple, it can be trivially adapted to the CONGEST model with O(1) amortized rounds and adjustments per update.

We have seen that the key difference of choosing the lower degree (instead of an arbitrary) end vertex to be removed from M proves crucial in reducing the amortized update time. This argument is supported by worst case examples demonstrating the tightness of the upper bounds of the two algorithms (see Appendix A). Note that the example for Simple in the incremental setting also shows tightness for the fully dynamic case, as the incremental setting is merely a special case of the fully dynamic setting.

6 MIS with worst case guarantees

Assadi et al. [2] demonstrated using a worst case example that the adjustment complexity (the number of vertices which enter or leave M during an update) can be Ω(n) in the worst case. This justifies why the entire work on dynamic MIS is focused primarily on amortized bounds. However, in case we still want to achieve better worst case bounds, we are required to consider a relaxed model, where we settle for not maintaining the MIS explicitly. Rather, we allow queries to answer whether a vertex is present in the MIS after an update, such that the results of all the queries are consistent with some MIS of the graph.

Implicit Maintenance of MIS

We now formally define this model for the dynamic maintenance of an MIS. The model supports updates in the graph, such that the MIS is not explicitly maintained after each update. Additionally, the model allows queries of the form In-Mis(v), which report whether a vertex v is present in the MIS of the updated graph. Such a model essentially allows us to compute M partially, along with maintaining some information which ensures that the results of the queries are consistent with each other. Thus, this model allows us to achieve O(min{Δ, √m}) worst case bounds for both queries and updates for the dynamic maintenance of an MIS.

The underlying idea is as follows. Recall that in every update of Simple at most one vertex is removed from M and several vertices may be added to M (see Theorem 3.2). Thus, it is easier to maintain an independent set rather than a maximal independent set, by always processing the removal of a vertex from M but not the insertion of vertices into M. Hence, we only make sure that after each update no two vertices in M share an edge, which requires removing at most one vertex from M. Now, whenever a vertex is queried we verify whether it is already in M or can be moved to M, and respond accordingly. In order to resolve these queries efficiently, we again maintain a partial count of the neighbours of a vertex in M, as described below.

We consider the vertices with high degree (≥ √m) as heavy and the rest as light, resulting in at most 2√m heavy vertices in total. Our algorithm maintains count only for heavy vertices and not for light vertices. Thus, whenever a vertex enters or leaves M, it only informs its heavy neighbours. We shall now describe the update and query algorithms.

The update algorithm merely updates the count of the heavy end vertex (if any) in case of an edge deletion, or an edge insertion where both end vertices are not in M. For an edge insertion having both end vertices in M, it removes one of the vertices from M and updates the count of its heavy neighbours. Note that the update algorithm does not ensure that M is an MIS, but it necessarily ensures that M is an independent set, since it always removes a vertex from M if any of its neighbours is in M.

A vertex v may be added to M only when it is queried by In-Mis(v), as follows. If v ∈ M, then we simply report it. Else, there are two cases depending on whether v is light or heavy. If v is heavy and its count is zero, it implies that v has no neighbour in M. Hence, we simply add v to M and update the count of its heavy neighbours. However, if v is light we do not have the count of its neighbours in M. Hence, we check whether any neighbour of v is in M in O(deg(v)) = O(√m) time. If none of the neighbours of v are in M, then we add v to M and update the count of its heavy neighbours. This completes the query algorithm. It is easy to see that both the update and query algorithms require O(√m) worst case time to add or remove at most one vertex from M and visit its neighbours. Further, in case Δ < √m, this update and query time reduces to O(Δ), as each vertex can have at most Δ neighbours.
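The update and query procedures can be sketched as follows (a simplified Python illustration with a fixed heaviness threshold τ; all names are ours, the rebalancing of the threshold is omitted, and a vertex crossing the threshold initialises its counter eagerly rather than in the amortised fashion the paper uses):

```python
class ImplicitMIS:
    """Maintain an independent set I (not necessarily maximal); counters are
    kept only for heavy vertices (degree >= tau).  Queries may lazily add a
    vertex to I, giving O(min(Delta, sqrt(m))) worst case update/query time."""

    def __init__(self, n, tau):
        self.tau = tau
        self.adj = [set() for _ in range(n)]
        self.I = set(range(n))
        self.count = {}                 # v -> #neighbours of v in I (heavy v only)

    def _enter(self, v):
        self.I.add(v)
        for u in self.adj[v]:
            if u in self.count: self.count[u] += 1   # inform heavy neighbours only

    def _leave(self, v):
        self.I.remove(v)
        for u in self.adj[v]:
            if u in self.count: self.count[u] -= 1

    def insert_edge(self, u, v):
        self.adj[u].add(v); self.adj[v].add(u)
        for a, b in ((u, v), (v, u)):
            if a in self.count and b in self.I:
                self.count[a] += 1
            elif a not in self.count and len(self.adj[a]) >= self.tau:
                # a just became heavy: initialise its counter from scratch
                self.count[a] = sum(1 for w in self.adj[a] if w in self.I)
        if u in self.I and v in self.I:
            self._leave(u)              # keep I independent; never re-add here

    def delete_edge(self, u, v):
        self.adj[u].discard(v); self.adj[v].discard(u)
        for a, b in ((u, v), (v, u)):
            if a in self.count and b in self.I:
                self.count[a] -= 1

    def in_mis(self, v):
        if v in self.I:
            return True
        free = (self.count[v] == 0) if v in self.count \
               else not any(u in self.I for u in self.adj[v])
        if free:
            self._enter(v)              # vertex joins I lazily, at query time
        return free
```

Note that updates only ever shrink I, while queries grow it, so a single update or query moves at most one vertex.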

Now, in the fully dynamic setting the value of m may change significantly after sufficiently many updates. So we instead define the heavy status of a vertex using a threshold τ which is initialized as √m, and we ensure that m/2 ≤ τ² ≤ 2m. In case the value of m increases beyond 2τ², we simply need to make some heavy vertices light by discarding their corresponding value of count. This can be performed in a single step in O(√m) time as there are only O(√m) heavy vertices, and we update τ ← √m. However, in case the value of m decreases below τ²/2, we need to compute the value of count for those light vertices which will become heavy once τ is reduced. Since the total degree of all vertices is 2m, there can be only O(√m) such vertices. Hence, over the subsequent updates we can compute the count for one such vertex per update, requiring O(√m) time each. Since Ω(m) deletions occur before the value of m halves again and we update τ ← √m, all the new heavy vertices will have their value of count computed in time. Also, if an edge update changes the heavy status of its own endpoints, their corresponding count can be updated in O(√m) time.

In order to prove the correctness of our algorithm, we are required to prove that (1) the maintained set remains an independent set throughout the algorithm, and (2) all the neighbours of a vertex v are verified before answering a query In-Mis(v). The former clearly follows from the update algorithm; to prove the latter we look at heavy and light vertices separately. Since every vertex entering or leaving the set necessarily informs its heavy neighbours, the second condition clearly holds for heavy vertices. For light vertices, the second condition follows from the query algorithm, which explicitly scans their neighbourhood. Finally, since at most one vertex (if any) leaves the MIS in an update or enters the MIS in a query, the adjustment complexity is O(1) per operation. Even if we count adjustments against updates only, it remains amortized O(1). Thus we have the following theorem.

Theorem 6.1 (Fully dynamic MIS (worst case)).

Given any graph having n vertices and m edges, an MIS can be implicitly maintained under fully dynamic edge updates with O(1) adjustments per update and query, allowing queries of the form In-Mis(v), where both updates and queries require worst case O(min{Δ,√m}) time.

Note: The above described algorithm can be trivially adapted to the distributed model, requiring O(min{Δ,√m}) messages, and O(1) rounds and adjustments per update or query in the worst case.

Remark: The algorithm can also support vertex deletion, similar to edge insertion, in the same time. However, vertex insertion with an arbitrary number of incident edges cannot be handled within this bound, as merely reading the input may require Θ(n) time. Thus, allowing fully dynamic vertex updates requires O(n) worst case update time. However, in case the vertices are inserted without any incident edges, the same complexity of O(min{Δ,√m}) is also applicable for fully dynamic vertex updates.

7 Maximum Flow and Maximum Matching

A standard approach to both these problems uses the concept of augmenting paths, where finding an augmenting path allows us to increase the value of the solution by a single unit. Computing an augmenting path takes O(m) time for both problems, and in the unweighted case the maximum value of the solution is O(n). This directly gives an O(mn) time algorithm to compute the solution for both problems in the static setting.

However, maintaining them in the dynamic setting has not been explicitly examined. We report that both these bounds can be matched exactly by trivial extensions of the augmenting path algorithms in the incremental setting. Concretely, the incremental reachability algorithm [29] can be used to incrementally compute an augmenting path for maximum flow in total O(m) time. Similarly, the blossoms algorithm [19] (which is inherently incremental) can be used to incrementally compute an augmenting path for maximum matching in O(m) time. Once the augmenting path is found, the solution is updated and its value increases by one unit. Then we restart the computation of the augmenting path in the updated graph. This process can be repeated O(n) times, which is the maximum value of the solution. Hence, both problems can be solved in total O(mn) time, matching the lower bound by Dahlgaard [10]. Furthermore, fully dynamic algorithms for both these problems requiring O(m) time per update are known as folklore, though not explicitly stated in the literature. Both of these algorithms are also based on the augmenting path approach, and we state them for the sake of completeness. Refer to Appendix B and Appendix C for details.

8 Conclusion

We have presented several surprisingly simple algorithms for fundamental graph problems in the dynamic setting. These algorithms either improve the known upper bounds for the problem or match the known lower bounds. Additionally, we considered some relaxed settings under which such problems can be solved more efficiently. A common trait among all these algorithms is that they are extremely simple and use no complicated data structures, making them suitable even for classroom teaching of fundamental concepts such as amortization, and for introducing dynamic graph algorithms.

In the dynamic MIS problem, we also discussed the hardness of the problem in the dynamic setting. For most graph problems (such as connectivity, reachability, maximum flow, maximum matching, MST, DFS, BFS, etc.), vertex updates are found to be harder to handle than edge updates, and deletions harder than insertions. Surprisingly, the opposite holds for dynamic MIS, where except for edge insertions (the easiest update for most other problems), a trivial algorithm solves the problem optimally. Notably, this is also the case for a few other fundamental problems such as topological ordering, cycle detection, planarity testing, etc. We conjecture the reason for such behaviour to be the following fundamental property: if the solution of the problem is still valid (though possibly sub-optimal) after an update, the update should be easier to handle. This explains the behaviour of the problems mentioned above, both those for which edge insertions are easiest and those for which they are hardest.

Finally, in light of the above discussion, we propose some future directions of research on these problems. It seems that fully dynamic MIS under edge updates should not be much harder than the incremental setting. Hence, it would be interesting to see an algorithm maintaining fully dynamic MIS in amortized O(min{Δ,√m}) update time, which can preferably also be extended to the distributed model with amortized O(1) round and adjustment complexity. On the other hand, we believe decremental unit capacity maximum flow and maximum cardinality matching to be harder than the incremental setting. Hence, stronger lower bounds for these problems in the decremental setting would be interesting.

References

  • [1] Noga Alon, Ronitt Rubinfeld, Shai Vardi, and Ning Xie. Space-efficient local computation algorithms. In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2012, Kyoto, Japan, January 17-19, 2012, pages 1132–1139, 2012.
  • [2] Sepehr Assadi, Krzysztof Onak, Baruch Schieber, and Shay Solomon. Fully dynamic maximal independent set with sublinear update time. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2018, Los Angeles, CA, USA, June 25-29, 2018, pages 815–826, 2018.
  • [3] Leonid Barenboim, Michael Elkin, Seth Pettie, and Johannes Schneider. The locality of distributed symmetry breaking. In 53rd Annual IEEE Symposium on Foundations of Computer Science, FOCS 2012, New Brunswick, NJ, USA, October 20-23, 2012, pages 321–330, 2012.
  • [4] Surender Baswana, Manoj Gupta, and Sandeep Sen. Fully dynamic maximal matching in o(log n) update time. SIAM J. Comput., 44(1):88–113, 2015.
  • [5] Surender Baswana, Ramesh Hariharan, and Sandeep Sen. Improved decremental algorithms for maintaining transitive closure and all-pairs shortest paths. J. Algorithms, 62(2):74–92, 2007.
  • [6] Aaron Bernstein and Cliff Stein. Faster fully dynamic matchings with small approximation ratios. In Robert Krauthgamer, editor, Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2016, Arlington, VA, USA, January 10-12, 2016, pages 692–711. SIAM, 2016.
  • [7] Sayan Bhattacharya, Monika Henzinger, and Danupon Nanongkai. New deterministic approximation algorithms for fully dynamic matching. In Daniel Wichs and Yishay Mansour, editors, Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, Cambridge, MA, USA, June 18-21, 2016, pages 398–411. ACM, 2016.
  • [8] Sayan Bhattacharya, Monika Henzinger, and Danupon Nanongkai. Fully dynamic approximate maximum matching and minimum vertex cover in O(log n) worst case update time. In Philip N. Klein, editor, Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2017, Barcelona, Spain, Hotel Porta Fira, January 16-19, pages 470–489. SIAM, 2017.
  • [9] Keren Censor-Hillel, Elad Haramaty, and Zohar Karnin. Optimal dynamic distributed mis. In Proceedings of the 2016 ACM Symposium on Principles of Distributed Computing, PODC ’16, pages 217–226, 2016.
  • [10] Søren Dahlgaard. On the hardness of partially dynamic graph problems and connections to diameter. In 43rd International Colloquium on Automata, Languages, and Programming, ICALP 2016, July 11-15, 2016, Rome, Italy, pages 48:1–48:14, 2016.
  • [11] Camil Demetrescu and Giuseppe F. Italiano. A new approach to dynamic all pairs shortest paths. J. ACM, 51(6):968–992, 2004.
  • [12] Camil Demetrescu and Giuseppe F. Italiano. Mantaining dynamic matrices for fully dynamic transitive closure. Algorithmica, 51(4):387–427, 2008.
  • [13] Jack Edmonds. Paths, Trees, and Flowers, pages 361–379. Birkhäuser Boston, Boston, MA, 1987.
  • [14] David Eppstein, Zvi Galil, Giuseppe F. Italiano, and Amnon Nissenzweig. Sparsification - a technique for speeding up dynamic graph algorithms. J. ACM, 44(5):669–696, 1997.
  • [15] Shimon Even and Robert Endre Tarjan. Network flow and testing graph connectivity. SIAM J. Comput., 4(4):507–518, 1975.
  • [16] L. R. Ford and D. R. Fulkerson. Flows in Networks. Princeton University Press, Princeton, NJ, USA, 1962.
  • [17] Greg N. Frederickson. Data structures for on-line updating of minimum spanning trees, with applications. SIAM J. Comput., 14(4):781–798, 1985.
  • [18] Harold N. Gabow and Robert Endre Tarjan. A linear-time algorithm for a special case of disjoint set union. In Proceedings of the Fifteenth Annual ACM Symposium on Theory of Computing, STOC ’83, pages 246–251, 1983.
  • [19] Harold Neil Gabow. Implementation of Algorithms for Maximum Matching on Nonbipartite Graphs. PhD thesis, Stanford University, Stanford, CA, USA, 1974.
  • [20] Zvi Galil, Silvio Micali, and Harold N. Gabow. An O(EV log V) algorithm for finding a maximal weighted matching in general graphs. SIAM J. Comput., 15(1):120–130, 1986.
  • [21] Michael R. Garey and David S. Johnson. Computers and Intractability; A Guide to the Theory of NP-Completeness. W. H. Freeman & Co., New York, NY, USA, 1990.
  • [22] Mohsen Ghaffari. An improved distributed algorithm for maximal independent set. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2016, Arlington, VA, USA, January 10-12, 2016, pages 270–277, 2016.
  • [23] Andrew V. Goldberg, Sagi Hed, Haim Kaplan, Pushmeet Kohli, Robert Endre Tarjan, and Renato F. Werneck. Faster and more dynamic maximum flow by incremental breadth-first search. In Algorithms - ESA 2015 - 23rd Annual European Symposium, Patras, Greece, September 14-16, 2015, Proceedings, pages 619–630, 2015.
  • [24] Andrew V. Goldberg and Satish Rao. Beyond the flow decomposition barrier. J. ACM, 45(5), September 1998.
  • [25] Andrew V. Goldberg and Robert Endre Tarjan. Efficient maximum flow algorithms. Commun. ACM, 57(8):82–89, 2014.
  • [26] Manoj Gupta and Richard Peng. Fully dynamic (1+ε)-approximate matchings. In 54th Annual IEEE Symposium on Foundations of Computer Science, 2013.
  • [27] Monika Rauch Henzinger and Valerie King. Randomized fully dynamic graph algorithms with polylogarithmic time per operation. J. ACM, 46(4):502–516, 1999.
  • [28] Jacob Holm, Kristian de Lichtenberg, and Mikkel Thorup. Poly-logarithmic deterministic fully-dynamic algorithms for connectivity, minimum spanning tree, 2-edge, and biconnectivity. J. ACM, 48(4):723–760, 2001.
  • [29] Giuseppe F. Italiano. Amortized efficiency of a path retrieval data structure. Theor. Comput. Sci., 48(3):273–281, 1986.
  • [30] Peter Jeavons, Alex Scott, and Lei Xu. Feedback from nature: simple randomised distributed algorithms for maximal independent set selection and greedy colouring. Distributed Computing, 29(5):377–393, 2016.
  • [31] Bruce M. Kapron, Valerie King, and Ben Mountjoy. Dynamic graph connectivity in polylogarithmic worst case time. In SODA, pages 1131–1141, 2013.
  • [32] Jon Kleinberg and Eva Tardos. Algorithm Design. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 2005.
  • [33] Pushmeet Kohli and Philip H. S. Torr. Dynamic graph cuts and their applications in computer vision. In Computer Vision: Detection, Recognition and Reconstruction, pages 51–108. Springer Berlin Heidelberg, 2010.
  • [34] S. Kumar and P. Gupta. An incremental algorithm for the maximum flow problem. J. Math. Model. Algorithms, 2(1):1–16, 2003.
  • [35] Yishay Mansour, Aviad Rubinstein, Shai Vardi, and Ning Xie. Converting online algorithms to local computation algorithms. In Automata, Languages, and Programming - 39th International Colloquium, ICALP 2012, Warwick, UK, July 9-13, 2012, Proceedings, Part I, pages 653–664, 2012.
  • [36] Danupon Nanongkai, Thatchaphol Saranurak, and Christian Wulff-Nilsen. Dynamic minimum spanning forest with subpolynomial worst-case update time. In 58th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2017, Berkeley, CA, USA, October 15-17, 2017, pages 950–961, 2017.
  • [37] Krzysztof Onak, Baruch Schieber, Shay Solomon, and Nicole Wein. Fully dynamic MIS in uniformly sparse graphs. In 45th International Colloquium on Automata, Languages, and Programming, ICALP 2018, July 9-13, 2018, Prague, Czech Republic, pages 92:1–92:14, 2018.
  • [38] James B. Orlin. Max flows in o(nm) time, or better. In Symposium on Theory of Computing Conference, STOC’13, Palo Alto, CA, USA, June 1-4, 2013, pages 765–774, 2013.
  • [39] Liam Roditty and Uri Zwick. A fully dynamic reachability algorithm for directed graphs with an almost linear update time. SIAM J. Comput., 45(3):712–733, 2016.
  • [40] Ronitt Rubinfeld, Gil Tamir, Shai Vardi, and Ning Xie. Fast local computation algorithms. In Innovations in Computer Science - ICS 2010, Tsinghua University, Beijing, China, January 7-9, 2011. Proceedings, pages 223–238, 2011.
  • [41] Piotr Sankowski. Dynamic transitive closure via dynamic matrix inverse (extended abstract). In 45th Symposium on Foundations of Computer Science (FOCS 2004), 17-19 October 2004, Rome, Italy, Proceedings, pages 509–517, 2004.
  • [42] Shay Solomon. Fully dynamic maximal matching in constant update time. In IEEE 57th Annual Symposium on Foundations of Computer Science, FOCS 2016, 9-11 October 2016, Hyatt Regency, New Brunswick, New Jersey, USA, pages 325–334, 2016.
  • [43] Mikkel Thorup. Worst-case update times for fully-dynamic all-pairs shortest paths. In Proceedings of the 37th Annual ACM Symposium on Theory of Computing, Baltimore, MD, USA, May 22-24, 2005, pages 112–119, 2005.

Appendix A Tightness of Incremental Algorithms

We now present worst case examples demonstrating the tightness of the analysis of the arbitrary-removal algorithm and of our incremental MIS algorithm. These essentially highlight the difference between arbitrary removal and degree biased removal of an end vertex upon insertion of an edge between two vertices of the MIS.

A.1 Arbitrary removal

We start with an empty graph where all the vertices are in the MIS. Let the vertices be divided into two sets A and B (refer to Figure 1), where A has Θ(Δ) vertices a_1, a_2, … and B has the remaining vertices.

Figure 1: Worst case example for arbitrary removal

We shall divide the insertion of edges into phases, where in the i-th phase we add an edge from each vertex of B to a_i. And we always choose the vertex of B to be the one removed from the MIS. At the end of the phase, all the vertices of B are out of the MIS and are connected to a_i. The phase ends with the addition of an edge between a_i and a_{i+1}, which removes a_i from the MIS and hence all the vertices of B are moved back to the MIS. Since each vertex is allowed only Δ neighbours, and each phase adds a neighbour to each vertex in B, we stop after Θ(Δ) phases.

Hence, after the phases we have added all the edges between B and the first Θ(Δ) vertices of A, and the edges among consecutive vertices of A. Overall we add Θ(Δ·|B|) edges, which equals Θ(m) edges.

Using Theorem 3.2, the total number of edges processed during the i-th phase is the sum of the degrees of the vertices that were removed from the MIS, i.e., all the vertices in B and a_i. Now, in the i-th phase the degree of each vertex in B is i, being connected to a_1, …, a_i, and the degree of a_i is Θ(|B|). Hence, the total work in phase i is Θ(i·|B|). Thus, the total work done over all the phases is Σ_i Θ(i·|B|) = Θ(Δ²·|B|) = Θ(mΔ).

Thus, we have the following bound for arbitrary removal in the incremental (and hence fully dynamic) setting.

Theorem A.1.

For each value of n, m and Δ, there exists a sequence of Θ(m) edge insertions where the degree of each vertex is bounded by O(Δ), for which arbitrary removal requires Ω(mΔ) total time to maintain the MIS.

A.2 Degree biased removal

In this example, we consider our incremental MIS algorithm, which chooses the end vertex with the lower degree to be removed from the MIS when an edge is inserted between two of its vertices. We essentially modify the previous example to make sure that the vertices of B necessarily fall out of the MIS when connected to the vertices in A.

Let the vertices be divided into two sets A and B as before, with |A| and |B| both Θ(√m), and an additional set C of residual vertices which ensures that the degrees of the vertices in A are sufficiently high (refer to Figure 2). We connect each vertex of A with some vertices in C. Additionally, we have a vertex x, connected to all the vertices in A. We initialize the MIS with all the vertices of A and B.

Figure 2: Tightness Example for degree biased removal

Again, we divide the insertion of edges into phases, where in the i-th phase we add an edge from each vertex of B to a_i. Since the maximum degree of a vertex in B is O(√m), and the degree of each a_i is kept higher (through its neighbours in C), we always have the vertex of B removed from the MIS. At the end of the phase, all the vertices of B are out of the MIS and are connected to a_i. The phase ends with the addition of an edge incident to a_i whose other endpoint has higher degree, which removes a_i from the MIS and hence all the vertices of B are moved back to the MIS.

Hence, after the phases we have added all the edges between B and A, i.e., between each vertex of B and every vertex of A. The initial graph already had each vertex of A connected to some neighbours in C, and x connected to all the vertices of A. Overall we add Θ(|A|·|B|) edges, which equals Θ(m) edges.

Again, using Theorem 3.2, the total number of edges processed during the i-th phase is the sum of the degrees of the vertices that were removed from the MIS, i.e., all the vertices in B and a_i. Now, in the i-th phase the degree of each vertex in B is i, being connected to a_1, …, a_i, and the degree of a_i is Ω(|B|). Hence, the total work done in phase i is Ω(i·|B|). Thus, the total work done over all the phases is Σ_i Ω(i·|B|) = Ω(|A|²·|B|), which is Ω(m√m) for |A| = |B| = Θ(√m).

Thus, we have the following bound for our incremental MIS algorithm.

Theorem A.2.

For each value of m, there exists a sequence of Θ(m) edge insertions for which our incremental algorithm requires Ω(m√m) total time to maintain the MIS.

Remark: This example is similar to the previous one, with the exception that here the degree of the vertices in A should remain higher than the degree of a vertex in B at the end of all the phases. Hence |B| = O(√m), as otherwise the total number of edges would exceed m. Thus, the number of neighbours of a vertex of B cannot be increased beyond O(√m) despite choosing any larger value of Δ. As evident from the previous example, in the absence of such a restriction the amortized time can be raised to Θ(Δ), implying the significance of degree biased removal in the incremental algorithm.

Appendix B Unit capacity Maximum Flow

The maximum flow problem is one of the most studied combinatorial optimization problems, having many practical applications (see [25] for a survey). We are given a graph G = (V, E) having n vertices and m edges, where each edge e has a capacity c(e) (a positive real weight) associated with it. A flow from a source s to a sink t is an assignment f of flow to each edge that satisfies the following two constraints. Firstly, the capacity constraints imply that the flow on each edge is limited by its capacity, i.e., 0 ≤ f(e) ≤ c(e). Secondly, the conservation constraints imply that for each non-terminal vertex v (v ≠ s, t), the flow passing through v is conserved, i.e., Σ_{(u,v)∈E} f(u,v) = Σ_{(v,w)∈E} f(v,w). The amount of flow leaving s (equivalently, entering t) is referred to as the flow of the network, i.e., |f| = Σ_{(s,v)∈E} f(s,v). Clearly, in a unit capacity simple graph, f(e) ∈ {0,1} corresponding to each edge, so |f| is simply the number of flow-carrying edges leaving s (or entering t). The maximum flow problem evaluates the flow from s to t which maximizes the value of |f|.

Orlin [38] presented an algorithm to find the maximum flow in O(mn) time. For integral arc capacities (bounded by U), it can be evaluated in O(m · min{m^{1/2}, n^{2/3}} · log(n²/m) · log U) time [24], which further reduces to O(m · min{m^{1/2}, n^{2/3}}) time for unweighted (unit capacity) graphs [15]. In the dynamic setting, a few algorithms are known for maintaining maximum flow. Kumar and Gupta [34] showed that partially dynamic maximum flow can be maintained with update time proportional to the size of the affected part of the network, i.e., the number of vertices whose flow is affected. Other algorithms are by Goldberg et al. [23] and Kohli and Torr [33], which work very well in practice on computer vision problems but do not have strong asymptotic guarantees.

Recently, Dahlgaard [10] proved conditional lower bounds for partially dynamic problems, including maximum flow, under the OMv conjecture. In particular, for directed unweighted (unit capacity) graphs and undirected weighted graphs, no algorithm can maintain partially dynamic maximum flow in amortized o(n) update time; for directed and weighted sparse graphs he rules out even stronger bounds. We report a trivial extension of the incremental reachability algorithm [29] to solve the incremental unit capacity (unweighted) maximum flow problem, matching the corresponding lower bound.

We shall now describe some simple dynamic algorithms for maintaining maximum flow in unit capacity graphs. The main idea behind these algorithms is the classical augmenting path approach of the standard Ford-Fulkerson algorithm [16]. In the interest of completeness, before describing our incremental algorithm for maintaining maximum flow in amortized O(f) time per update (where f ≤ n−1 is the value of the maximum flow), we present the folklore algorithm supporting fully dynamic edge updates in O(m) time. To describe these algorithms succinctly, we first briefly review the augmenting path approach and the concept of residual graphs.

Residual Graphs and Augmenting paths

Given an unweighted (unit capacity) directed graph G, with a flow function f defined on each edge e, the residual graph is computed by changing the direction of every edge e for which f(e) = 1. Note that this may create two edges between the same endpoints. The residual graph allows a simple characterization of whether the current flow is indeed a maximum flow, because of the following property: if there exists an s-t path in the residual graph, referred to as an augmenting path, the value of the flow can be increased by pushing one unit of flow along this path [16]. This results in reversing the direction of each edge on the path in the updated residual graph. The flow reaches its maximum value when there does not exist any augmenting path, i.e., any s-t path, in the residual graph.

b.1 Fully Dynamic Maximum Flow

We now describe the folklore algorithm for maintaining the maximum flow of a unit capacity graph under fully dynamic edge updates. The flow is updated by the computation of a single augmenting path in the residual graph after the corresponding edge update, as follows.

  • Insertion of an edge: An edge insertion can either increase the maximum flow by one unit or leave it unchanged. Recall that when f is a maximum flow, the sink t is not reachable from the source s in the residual graph. Hence, if the inserted edge creates an s-t path in the residual graph, the maximum flow increases by exactly one unit. Thus, the algorithm finds an s-t path in the residual graph and pushes a flow of one unit along the path. If no such path is found, the maximum flow has not increased and the solution remains unchanged.

  • Deletion of an edge: If the deleted edge (u,v) does not carry any flow, the flow remains unchanged. Otherwise, the edge deletion can either decrease the maximum flow or simply make the current flow invalid, requiring us to reroute the flow without changing its value. Hence, on deletion of an edge (u,v) carrying flow, the algorithm first attempts to restore the flow by finding an alternate path from u to v in the residual graph. If such a path is found, the algorithm pushes a flow of one unit along the path, restoring the maximum flow in the graph. Otherwise, we have to send back one unit of flow each, from u to s and from t to v, to reduce the maximum flow by one unit. For this, an edge (s,t) is added and a path from u to v is found, which necessarily exists and contains the edge (s,t). Again, the algorithm pushes one unit of flow along the path, restoring the maximum flow in the graph. After updating the residual graph, we remove the extra added edge from the graph.
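The two update rules above can be sketched in Python as follows. The class name and the representation (a set of directed edges plus a set of flow-carrying edges) are our own; for the flow-reducing case, the sketch performs the two send-back pushes (u to s and t to v) as two separate residual searches instead of adding a temporary (s,t) edge, which is equivalent.

```python
from collections import deque

class DynamicUnitFlow:
    """Folklore fully dynamic unit-capacity max-flow sketch (names illustrative)."""
    def __init__(self, s, t):
        self.s, self.t = s, t
        self.edges = set()    # directed edges (u, v)
        self.flow = set()     # edges currently carrying one unit of flow

    def _residual_adj(self):
        adj = {}
        for (u, v) in self.edges:
            if (u, v) in self.flow:
                adj.setdefault(v, []).append(u)    # reversed residual edge
            else:
                adj.setdefault(u, []).append(v)    # forward residual edge
        return adj

    def _path(self, src, dst):                     # BFS in the residual graph
        adj, parent = self._residual_adj(), {src: None}
        q = deque([src])
        while q:
            u = q.popleft()
            if u == dst:
                path = []
                while u is not None:
                    path.append(u); u = parent[u]
                return path[::-1]
            for w in adj.get(u, []):
                if w not in parent:
                    parent[w] = u; q.append(w)
        return None

    def _augment(self, path):
        for a, b in zip(path, path[1:]):
            if (a, b) in self.edges and (a, b) not in self.flow:
                self.flow.add((a, b))              # push forward
            else:
                self.flow.discard((b, a))          # cancel a unit pushed earlier

    def value(self):
        return sum(e[0] == self.s for e in self.flow) - \
               sum(e[1] == self.s for e in self.flow)

    def insert(self, u, v):
        self.edges.add((u, v))
        p = self._path(self.s, self.t)             # flow grows iff t became reachable
        if p:
            self._augment(p)

    def delete(self, u, v):
        carried = (u, v) in self.flow
        self.edges.discard((u, v)); self.flow.discard((u, v))
        if not carried:
            return
        p = self._path(u, v)                       # try to reroute the lost unit
        if p:
            self._augment(p)
        else:                                      # send the unit back: u->s and t->v
            self._augment(self._path(u, self.s))
            self._augment(self._path(self.t, v))
```

Both send-back paths necessarily exist in the residual graph, since the deleted unit of flow travelled from s to u and from v to t, and residual edges run backwards along flow.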

Thus, each update can be performed in O(m) time, as it performs O(1) reachability queries (using BFS/DFS traversals) and updates the residual graph. Hence, we have the following theorem.

Theorem B.1 (Fully dynamic Unit Capacity Maximum Flow (folklore)).

Given an unweighted (unit-capacity) graph having n vertices and m edges, fully dynamic maximum flow under edge updates can be maintained in O(m) worst case time per update.

B.2 Incremental Maximum Flow

In the incremental setting, the unit-capacity maximum flow can be maintained using amortized O(f) update time, where f is the value of the maximum flow. Recall that in the fully dynamic algorithm, on insertion of an edge the flow is increased only if t becomes reachable from s in the residual graph as a result of the update. Trivially, verifying whether t is reachable after each update requires O(m) time per update, even when the flow is not increased. However, this can be computed more efficiently using the single source incremental reachability algorithm [29], requiring total O(m) time for every increase in the value of the flow. In the interest of completeness, let us briefly describe the incremental reachability algorithm [29] as follows.

Single source incremental reachability [29]

This algorithm essentially maintains a reachability tree T from the source vertex s. This tree is initialized with the single node s and grown to include all the vertices reachable from s. On insertion of an edge (u,v), an update is required only when u ∈ T and v ∉ T. In such a case, the edge is added to T, making v a child of u. Further, the update algorithm processes every outgoing edge (v,w) of v to find the vertices w ∉ T, which are added to T recursively using the same procedure. Thus, this process continues until all the vertices reachable from s are added to T. Clearly, the process takes total O(m) time, as each edge is processed O(1) times: once when it is inserted into the graph and once when its tail is added to T.
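A compact Python sketch of this structure follows; for brevity it stores only the reachable set (keeping parent pointers to recover the actual tree, and hence the augmenting path, is a one-line extension), and the names are our own.

```python
class IncrementalReach:
    """Single-source incremental reachability, in the style of [29]."""
    def __init__(self, s):
        self.s = s
        self.out = {}            # out[u]: heads of edges inserted so far with tail u
        self.reach = {s}         # vertices currently reachable from s

    def insert(self, u, v):
        self.out.setdefault(u, []).append(v)
        if u in self.reach and v not in self.reach:
            self._grow(v)        # the only case requiring any work

    def _grow(self, v):
        self.reach.add(v)        # v joins the reachability tree
        for w in self.out.get(v, []):
            if w not in self.reach:
                self._grow(w)    # each edge is traversed here at most once
```

Each edge is touched once at insertion and at most once when its tail enters the tree, giving the O(m) total bound stated above.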

Algorithm

The algorithm prominently uses the fact that the value of the maximum flow in a unit capacity simple graph is at most n−1, since the number of outgoing edges of s (or incoming edges of t) is at most n−1. Our algorithm is divided into stages, where at the end of each stage the value of the maximum flow increases by a single unit. Each stage starts by building an incremental single source reachability structure [29], i.e., the reachability tree T, from s on the residual graph. On insertion of an edge, the reachability tree is updated using the incremental reachability algorithm. The stage continues until t becomes reachable from s and is added to T. This gives the augmenting path, and we push a unit of flow along the path and update the residual graph. Thus, each stage requires total O(m) time for maintaining the incremental reachability structure [29], taking overall O(f·m) time (where f ≤ n−1), giving us the following result.

Theorem B.2 (Incremental Unit Capacity Maximum Flow).

Given an unweighted (unit-capacity) graph having n vertices and m edges, incremental maximum flow can be maintained in amortized O(f) update time, where f is the value of the maximum flow of the final graph.

Remark: At the end of a stage, it is necessary to rebuild the incremental reachability structure [29] from scratch, as it supports neither edge deletions nor edge reversals. This is required because when the flow is pushed along the augmenting path, the residual graph is updated by reversing the direction of each edge on the path.

Appendix C Maximum Cardinality Matching

Maximum Matching is one of the most prominently studied combinatorial graph problems, having many practical applications. For a given graph G with n vertices and m edges, a set of edges M ⊆ E is called a matching if no two edges in M share an end vertex. In the Maximum Matching problem, the aim is to compute the matching of maximum weight. In case the graph is unweighted, the problem is called Maximum Cardinality Matching. Micali and Vazirani presented an algorithm to compute a maximum cardinality matching in O(m√n) time. For the weighted case, the fastest algorithm is by Galil et al. [20], requiring O(mn log n) time, which improves upon the O(n³) time algorithm [19] for sparse graphs. In the dynamic setting, mainly the problem of maintaining approximate matchings (of cardinality within a constant factor of the maximum) has been extensively studied [42, 7, 6].

Recently, Dahlgaard [10] proved conditional lower bounds for partially dynamic problems, including maximum matching, under the OMv conjecture. For bipartite graphs (and hence general graphs), no algorithm can maintain partially dynamic maximum cardinality matching in amortized o(n) update time. We report a trivial extension of the classical blossoms algorithm [13, 19] to solve this problem for general unweighted graphs, matching the corresponding lower bound.

We first point out that maximum bipartite matching can be easily solved by a maximum flow algorithm using the standard reduction [32]. Hence, an incremental algorithm for maintaining unit capacity maximum flow also solves the problem of maintaining incremental maximum matching in bipartite graphs in the same bounds, i.e., amortized O(n) update time. This matches the lower bound of Ω(n) given by Dahlgaard [10] for bipartite matching. In this section, we show that the same upper bound can also be maintained for general graphs by using a trivial extension of the blossoms algorithm [13, 19] for finding an augmenting path, which is defined differently for maximum matching, as follows.

Augmenting paths

Given any graph G, a matching M is a set of edges such that no two edges in M share an endpoint. A vertex on which some edge from M (also called a matched edge) is incident is called a matched vertex; the remaining unmatched vertices are called free vertices. A given matching is a maximum matching if and only if there does not exist an augmenting path, which is defined as follows: a simple path which starts and ends at distinct free vertices, on which matched and unmatched edges alternate, so that all the intermediate vertices are matched vertices. In case such a path P exists, it can be used to increase the cardinality of the matching M by removing all the matched edges of P from M and adding all the unmatched edges of P to M.

C.1 Fully Dynamic Maximum Cardinality Matching

Computing an augmenting path starting from a free vertex x requires a single BFS traversal from x, requiring O(m) time, where the traversal alternates between exploring unmatched and matched edges on consecutive layers. This results in the trivial folklore fully dynamic maximum matching algorithm requiring O(m) worst case update time, as follows.

The algorithm essentially selects the starting vertex x of the augmenting path search based on the update. In case of vertex insertion, the inserted vertex is chosen as x. In case of deletion of a vertex v, M is not updated if v was a free vertex; else if v was matched to u, we start the computation of the augmenting path from u (as x). In case of insertion of an edge (u,v), M is not updated if both u and v are matched; if both are free, we simply add (u,v) to M; else the augmenting path is computed starting from the free vertex among u and v (as x). Finally, in case of deletion of an edge (u,v), M is not updated if (u,v) was not a matched edge; else the augmenting path is computed twice, starting from u and from v (as x) in the two computations. Thus, each update can be performed in O(m) time, resulting in the following theorem.

Theorem C.1 (Fully dynamic Maximum Cardinality Matching (folklore)).

Given a graph having n vertices and m edges, fully dynamic maximum cardinality matching under edge or vertex updates can be maintained in O(m) worst case time per update.
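The case analysis behind this folklore algorithm can be sketched in Python as follows. Note that the augmenting path search here is a plain alternating BFS without blossom contraction, so this sketch is only guaranteed correct on bipartite graphs (the general case needs the blossom machinery described in the next subsection); all names are our own.

```python
from collections import deque

def find_augmenting_path(adj, match, root):
    """Alternating BFS from the free vertex root (no blossom handling)."""
    parent = {root: None}
    q = deque([root])
    while q:
        u = q.popleft()                      # u sits on an even level
        for v in adj.get(u, ()):
            if v in parent:
                continue
            parent[v] = u                    # reached v via an unmatched edge
            if match.get(v) is None:         # v is free: augmenting path found
                path = []
                while v is not None:
                    path.append(v); v = parent[v]
                return path[::-1]
            w = match[v]                     # continue via v's matched edge
            if w not in parent:
                parent[w] = v
                q.append(w)
    return None

class DynamicMatching:
    def __init__(self):
        self.adj = {}
        self.match = {}                      # match[v]: partner of v, or None

    def _try_augment(self, x):
        if self.match.get(x) is not None:
            return                           # only free vertices start a search
        path = find_augmenting_path(self.adj, self.match, x)
        if path:                             # flip edges along the path
            for a, b in zip(path[0::2], path[1::2]):
                self.match[a], self.match[b] = b, a

    def insert_edge(self, u, v):
        self.adj.setdefault(u, set()).add(v)
        self.adj.setdefault(v, set()).add(u)
        if self.match.get(u) is None and self.match.get(v) is None:
            self.match[u], self.match[v] = v, u   # trivially extend the matching
        else:
            self._try_augment(u)
            self._try_augment(v)

    def delete_edge(self, u, v):
        self.adj[u].discard(v); self.adj[v].discard(u)
        if self.match.get(u) == v:           # a matched edge disappeared
            self.match[u] = self.match[v] = None
            self._try_augment(u)
            self._try_augment(v)
```

Each update triggers at most two O(m) path searches, matching the stated worst case bound.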

C.2 Incremental Maximum Cardinality Matching

In the incremental setting, Maximum Cardinality Matching can be maintained in amortized O(n) time per update. Note that this does not directly follow from the fully dynamic case, as we do not know the corresponding starting vertex of the augmenting path. However, for computing augmenting paths from any starting vertex in O(m) time, we can use the standard blossoms algorithm [13, 19]. It turns out that the blossoms algorithm trivially extends to the incremental setting, such that it requires O(m) time per increase in the cardinality of the maximum matching. In the interest of completeness, we briefly describe the blossoms algorithm as follows.

Blossom Augmenting Path algorithm [13, 19]

The algorithm essentially maintains a tree from each free vertex, where each node is either a single vertex or a set of vertices which occur in the form of a blossom. Further, it ensures that each vertex of the graph occurs in only one such tree. The vertices are called odd or even if they occur on the odd level or the even level of some tree, respectively (with free vertices at level zero), and unvisited otherwise. The children of even vertices are connected to them through unmatched edges, whereas those of odd vertices are connected through matched edges. Thus, all the paths from roots to leaves in such trees have alternating unmatched and matched edges, as required by an augmenting path.

The algorithm starts with all free vertices as even vertices of singleton trees, and the remaining vertices as unvisited. Then each edge (u, v) is considered one by one (and hence the algorithm naturally extends to the incremental setting), updating the trees as follows. If no end point is even, ignore the edge. Else, without loss of generality let u be an even vertex. If v is an unvisited vertex, simply add v as a child of u. Else, if v is an odd vertex, simply ignore the edge. However, if v is an even vertex, the action depends on the trees to which u and v belong. In case they belong to the same tree, we get a blossom, which is processed as follows. Find the lowest common ancestor w of u and v, and shrink all the vertices on the tree paths connecting w with u and v into a single blossom vertex B, which takes the place of w. The children of the even vertices in this blossom that are not themselves part of the blossom become the children of B. Also, each odd vertex in this blossom has now become even, so all its edges that were previously ignored are explored recursively using the same procedure. Note that the shrunken blossoms need to be maintained using a modified Disjoint Set Union data structure [18] (taking overall O(m·α(m,n)) time), allowing a vertex to quickly identify the blossom (and hence the node of the tree) it belongs to. Thus, on an insertion of (u, v), we first find the nodes (vertices or blossoms) of the current forest representing u and v, and the edge is processed accordingly.
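The union-find bookkeeping for shrunken blossoms can be sketched as below. The class `BlossomDSU` and its method names are hypothetical, and blossom detection itself is omitted: the sketch only shows how, after a shrink, every vertex of a blossom is mapped to the single tree node now representing it.

```python
class BlossomDSU:
    """Union-find mapping each vertex to the node (original vertex or
    shrunken blossom) currently representing it in the search forest."""

    def __init__(self, n):
        self.parent = list(range(n))   # union-find forest
        self.base = list(range(n))     # tree node represented by each set

    def find(self, x):
        while self.parent[x] != x:
            # path halving keeps find near-constant amortized
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def node_of(self, v):
        """Tree node (vertex or blossom) that currently contains v."""
        return self.base[self.find(v)]

    def shrink(self, vertices, blossom_node):
        """Merge all vertices of a newly found blossom into one set and
        record `blossom_node` as its representative tree node."""
        it = iter(vertices)
        first = next(it)
        for v in it:
            ra, rb = self.find(first), self.find(v)
            if ra != rb:
                self.parent[rb] = ra
        self.base[self.find(first)] = blossom_node
```

Nested blossoms are handled by the same mechanism: shrinking a set that already contains a blossom simply re-points its union-find root at the new, outermost blossom node.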

Finally, in case both u and v are even and they belong to different trees, we have found an augmenting path: from the root of the tree containing u down to u, through the edge (u, v), followed by the tree path from v up to its root. However, some vertices on this path may be shrunken blossoms, and hence to report the augmenting path they need to be un-shrunk. An alternative, simpler way is to start the augmenting path computation using the simple algorithm from the root of the tree containing u, as that root is necessarily one end vertex of an augmenting path. Thus, both the Blossom algorithm and the augmenting path reporting can be performed in O(m·α(m,n)) total time.

Algorithm

The algorithm prominently uses the fact that the maximum cardinality of a matching is at most ⌊n/2⌋, since each vertex can have at most one matched edge incident to it. The algorithm is again divided into stages, where at the end of each stage the Blossom algorithm detects an augmenting path and hence increases the cardinality of M by a single unit. Each stage starts by initiating the Blossom algorithm [13, 19] from scratch on the updated matching, and continues inserting edges until the algorithm detects an augmenting path. Thus, each stage requires O(m·α(m,n)) total time for computing the augmenting path and updating M, taking overall O(μ·m·α(m,n)) time, where μ is the value of the maximum cardinality matching of the final graph, giving us the following result.
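A minimal sketch of this stage structure follows, with hypothetical names and an exponential brute-force search substituted for the Blossom subroutine; it illustrates only the control flow (a stage ends exactly when an insertion raises the matching size by one) on tiny graphs, not the stated time bound.

```python
def max_matching_size(edges):
    """Brute-force maximum matching size; a stand-in for the Blossom
    search used here only to keep the sketch self-contained."""
    def rec(i, used):
        if i == len(edges):
            return 0
        best = rec(i + 1, used)          # skip edge i
        u, v = edges[i]
        if u not in used and v not in used:
            best = max(best, 1 + rec(i + 1, used | {u, v}))  # take it
        return best
    return rec(0, frozenset())

class IncrementalMatching:
    def __init__(self):
        self.edges = []
        self.size = 0     # current matching cardinality
        self.stages = 0   # completed stages, i.e. augmentations so far

    def insert_edge(self, u, v):
        self.edges.append((u, v))
        new_size = max_matching_size(self.edges)
        if new_size > self.size:         # stage ends: one augmentation
            self.size = new_size
            self.stages += 1
        return self.size
```

Since each insertion raises the matching size by at most one, the number of stages is bounded by μ, which is what yields the overall O(μ·m·α(m,n)) bound in the actual algorithm.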

Theorem C.2 (Incremental Maximum Cardinality Matching).

Given a graph having n vertices and m edges, incremental maximum cardinality matching can be maintained in O(μ·α(m,n)) amortized update time, where μ is the size of the maximum matching of the final graph.

Remark: The algorithm also works for the case of incremental vertex updates. Further, the Blossom algorithm can also be used for fully dynamic updates, requiring O(m·α(m,n)) update time. However, the algorithm described in Section C.1 is simpler and more intuitive.