1 Introduction
The problem of maintaining a large matching in the dynamic setting has received significant attention over the last two decades (see [DBLP:conf/stoc/OnakR10, DBLP:journals/siamcomp/BaswanaGS18, DBLP:conf/stoc/NeimanS13, DBLP:conf/focs/GuptaP13, DBLP:journals/siamcomp/BhattacharyaHI18, DBLP:conf/stoc/BhattacharyaHN16, DBLP:conf/focs/Solomon16, DBLP:conf/soda/BhattacharyaHN17, DBLP:conf/ipco/BhattacharyaCH17, DBLP:conf/icalp/CharikarS18, DBLP:conf/icalp/ArarCCSW18, DBLP:conf/soda/BernsteinFH19] and the references therein). After a long line of work, we now know how to maintain a maximal matching in fully dynamic graphs (i.e., graphs that undergo both edge insertions and deletions) extremely fast, that is, in polylogarithmic update time or better [DBLP:conf/focs/BaswanaGS11, DBLP:conf/focs/Solomon16, DBLP:conf/soda/BernsteinFH19]. This immediately gives a 2-approximation of the maximum matching. In sharp contrast, however, we have little understanding of the update-time complexity once we go below an approximation factor of 2. A famous open question of the area, asked first in the pioneering paper of Onak and Rubinfeld [DBLP:conf/stoc/OnakR10] from 2010 (and restated in multiple subsequent papers, e.g., in [BhattacharyaArxiv, Section 4], [DBLP:conf/soda/BernsteinS16, Section 7], and [DBLP:conf/icalp/CharikarS18, Section 1]), is:
“Can the approximation constant be made smaller than 2 for maximum matching [while having polylogarithmic update time]?” [DBLP:conf/stoc/OnakR10]
A decade later, we are still far from achieving a polylogarithmic update-time algorithm. The fastest current result for maintaining a matching with a better-than-2 approximation factor was presented by Bernstein and Stein [DBLP:conf/soda/BernsteinS16]; their algorithm remarkably achieves an (almost) $3/2$-approximation. It handles updates in $O(m^{1/4})$ time for any constant approximation parameter, where $m$ denotes the number of edges in the graph. However, a notable follow-up result of Bhattacharya, Henzinger, and Nanongkai [DBLP:conf/stoc/BhattacharyaHN16] hinted that we may be able to achieve a faster algorithm. They showed that in bipartite graphs, for any constant $\epsilon > 0$, there is a deterministic algorithm with amortized update time $O(\Delta^{\epsilon})$ that maintains a better-than-2 approximation of the size of the maximum matching (the algorithm maintains a fractional matching, but not an integral one).
In light of the result of Bhattacharya et al. [DBLP:conf/stoc/BhattacharyaHN16], two main questions remained open: First, is it possible to maintain the matching in addition to its size? Second, can the result be extended from bipartite graphs to general graphs? We resolve both questions in the affirmative:
Theorem 1.
For any constant $\epsilon > 0$, there is a randomized fully-dynamic algorithm that, with high probability, maintains a better-than-2 approximate maximum matching in $O(\Delta^{\epsilon} + \operatorname{polylog} n)$ worst-case update time under the standard oblivious adversary assumption. Here, $\Delta$ denotes the maximum degree in the graph, and the precise approximation-factor constant depends on $\epsilon$.
Compared to the algorithm of Bhattacharya et al. [DBLP:conf/stoc/BhattacharyaHN16], our algorithm, at the expense of using randomization, maintains the matching itself, handles general graphs, and improves the update time from amortized to worst-case. In addition, our algorithm is arguably simpler.
As with other randomized algorithms in the literature, we require the standard oblivious adversary assumption. The adversary here is all-powerful and knows the algorithm, but their updates must be independent of the random bits used by the algorithm. Equivalently, one can assume that the sequence of updates is fixed adversarially before the algorithm starts to operate.
2 Our Techniques
In this section, we provide an informal overview of the ideas used in our algorithm for Theorem 1 and the challenges that arise along the way.
A main intuition behind the efficient (randomized) 2-approximate algorithms in the literature is to essentially “hide” the matching from the adversary through the use of randomization. For instance, if we pick the edges of the matching randomly from the dense regions of the graph, where we have many choices, then it takes the adversary (who, recall, is unaware of our random bits) many trials to remove a matching edge. A natural algorithm with this behavior is random greedy maximal matching (RGMM), which processes the edges in a random order and greedily adds each one to the matching if possible. Indeed, it was recently shown by Behnezhad et al. [misfocs] that it takes only polylogarithmic time per edge update to maintain an RGMM.
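A minimal, self-contained sketch of the static RGMM construction described above (the graph representation and function names are our own illustration, not the paper's dynamic data structures):

```python
import random

def random_greedy_maximal_matching(edges, rng=random):
    """Process edges in a uniformly random order; greedily add each
    edge whose endpoints are both still unmatched."""
    order = list(edges)
    rng.shuffle(order)
    matched = set()   # vertices covered by the matching so far
    matching = []
    for u, v in order:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# The result is maximal: every edge has at least one matched endpoint.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
m = random_greedy_maximal_matching(edges, random.Random(7))
covered = {x for e in m for x in e}
assert all(u in covered or v in covered for u, v in edges)
```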
Unfortunately, exactly the feature of RGMM (and of previous algorithms based on the same intuition) that it matches the dense regions first prevents it from obtaining a better-than-2 approximation. A simple bad example is a perfect matching one side of which induces a clique: formally, a graph on $2n$ vertices $a_1, \dots, a_n, b_1, \dots, b_n$ whose edge set consists of a clique induced on $\{a_1, \dots, a_n\}$ and the edges $(a_i, b_i)$ for $i \in [n]$. The RGMM algorithm would pick almost all of its edges from the clique, and thus matches roughly half of the vertices, while the graph has a perfect matching.
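The gap on this example can be checked directly; the sketch below (with an inline copy of the greedy procedure, names ours) builds the clique-plus-pendant graph and observes that the greedy matching falls well short of the perfect matching of size $n$:

```python
import random

def rgmm(edges, rng):
    order = list(edges)
    rng.shuffle(order)
    matched, matching = set(), []
    for u, v in order:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

n = 100
clique = [(("a", i), ("a", j)) for i in range(n) for j in range(i + 1, n)]
pendants = [(("a", i), ("b", i)) for i in range(n)]  # a perfect matching of size n
m = rgmm(clique + pendants, random.Random(0))
# Greedy mostly matches inside the clique: roughly n/2 edges instead of n.
assert len(m) < 0.75 * n
```

Maximality forces every clique vertex to be matched (its pendant edge is always available), so the matching size is exactly $(n + p)/2$, where $p$ is the number of pendant edges used, which is small for random orders.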
To break this barrier of 2, our starting point is a slightly paraphrased variant of a streaming algorithm of Konrad et al. [streamingapprox]. The algorithm starts by constructing an RGMM $M$ of the input graph $G$. Unless $M$ is significantly larger than half the size of a maximum matching $\mathrm{opt}$ of $G$, nearly all edges of $M$ can be shown to belong to length-3 augmenting paths. Therefore, to break the approximation factor of 2, it suffices to pick a constant fraction of the edges in $M$ and discover a collection of vertex-disjoint length-3 paths augmenting them. Konrad et al. [streamingapprox] showed that this can be done by finding another RGMM, this time on a subgraph of $G$ whose edges have one endpoint that is matched in $M$ and one endpoint that is unmatched (though for these edges to augment well, it is crucial that not all such edges are included in this subgraph).
The algorithm outlined above shows how to obtain a better-than-2 approximate maximum matching by merely running two instances of RGMM. Given that we know how to maintain an RGMM in polylogarithmic update time due to [misfocs], one may wonder whether we can also get a similar update time for this algorithm. Unfortunately, the answer is negative! The reason is that the second-stage graph is adaptively determined based on matching $M$. In particular, a single edge update that changes matching $M$ may lead to the deletion/insertion of a vertex (along with its edges) in the second-stage graph. While we can handle edge updates in polylogarithmic time, the update time for vertex updates is still polynomial in the degree of the vertex being updated. Therefore, the algorithm, as stated, may require an update time as large as the maximum degree.
To get around the issue above, our first insight is a parametrized analysis of the update time depending on the structure of the edges in $M$. Suppose that matching $M$ is constructed by drawing a random rank independently on each edge and then iterating over the edges in increasing order of their ranks. We show that the whole update time (i.e., that of both the first- and the second-stage matchings) can be bounded by $\widetilde{O}(r_{\max}/r_{\min})$, where $r_{\max}$ and $r_{\min}$ respectively denote the maximum and minimum ranks among the edges of $M$.
The reason is as follows. For an edge update $e$, the probability that it causes an update to matching $M$ is upper bounded by $r_{\max}$. (If the rank of $e$ is larger than the highest rank ever in $M$, then $e \notin M$ and thus its insertion/deletion causes no update to $M$.) On the other hand, in case of an update to $M$, the cost of a vertex update to the second-stage graph can be bounded by its degree, which we show can be bounded by $\widetilde{O}(1/r_{\min})$ using a sparsification property of RGMM (Lemma 3.1) applied to the first-stage matching $M$. Thus, the overall update time is indeed $r_{\max} \cdot \widetilde{O}(1/r_{\min}) = \widetilde{O}(r_{\max}/r_{\min})$.
The analysis highlighted above shows that as the ranks of the edges in the first-stage matching get closer to each other, our update time improves. In general, this ratio can be polynomially large. A natural idea, however, is to partition $M$ into subsets $M_1, \dots, M_k$ such that the edges in each subset, more or less, have the same ranks (i.e., a max-over-min rank ratio of roughly $\Delta^{\epsilon}$). We can then individually construct a second-stage graph for each $M_i$, find an RGMM of it, and use it to augment $M_i$ (and thus $M$). Since there are only $k$ groups, there will be one that includes at least a $1/k$ fraction of the edges of $M$. Therefore, augmenting a constant fraction of the edges in this set alone would be enough to break the approximation factor of 2.
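The rank-based bucketing can be sketched as follows; the bucket boundaries (powers of a base playing the role of $\Delta^{\epsilon}$) and all names are our own illustration of the idea, not the paper's exact parameters:

```python
import math
import random

def partition_by_rank(matching_ranks, base):
    """Group matched edges so that within a group the max/min rank
    ratio is at most `base`: an edge with rank r goes to bucket
    floor(log_base(1/r))."""
    buckets = {}
    for edge, r in matching_ranks.items():
        i = int(math.log(1.0 / r, base))
        buckets.setdefault(i, []).append(edge)
    return buckets

rng = random.Random(1)
ranks = {("u%d" % j, "v%d" % j): rng.random() for j in range(1000)}
buckets = partition_by_rank(ranks, base=4.0)
# Within each bucket, ranks lie in (base^-(i+1), base^-i], so the
# max-over-min ratio is below `base`; pigeonhole gives a large bucket.
```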
However, another technical complication arises here. Once we choose to augment only a subset $M_i$ of the edges in $M$, we cannot bound the update time by $\widetilde{O}(r_{\max}/r_{\min})$ anymore. (The argument described before only works if we consider all the edges in $M$.) The reason is that, normally, in the second-stage graph we would like to have edges with one endpoint matched by $M_i$ and one endpoint that is unmatched in the whole matching $M$, so that if both endpoints of an edge of $M_i$ are matched in the matching of the second-stage graph, then we get a length-3 augmenting path of $M$. This makes the second-stage graph very sensitive to the precise set of vertices matched/unmatched in the whole matching $M$ (as opposed to only those matched in $M_i$), and this would prevent us from using the same argument to bound the update time.
To resolve this issue, on a high level, we also consider any vertex that is matched in $M$, but whose matching edge has rank higher than those in $M_i$, as “unmatched” while constructing the second-stage graph (see Algorithm 1). This allows us to argue that the update time remains bounded as before. The downside is that not all of the found length-3 paths will be actual augmenting paths of $M$. Fortunately, though, we are still able to argue that the algorithm finds sufficiently many actual augmenting paths for $M$, and thus achieves a better-than-2 approximation (see Section 5).
3 Preliminaries
For a graph $G$, we use $\mu(G)$ to denote the size of its maximum matching. Moreover, as is standard, for two matchings $M_1$ and $M_2$ of the same graph, we use $M_1 \oplus M_2$ to denote their symmetric difference, i.e., the graph including the edges that appear in exactly one of $M_1$ and $M_2$. For two disjoint subsets $A$ and $B$ of the vertex set of a graph $G$, we use $G[A, B]$ to denote the bipartite subgraph of $G$ whose vertex set is $A \cup B$ and which includes an edge of $G$ if and only if it has one endpoint in $A$ and one in $B$. Also, generally, for a subset $F$ of the edge set of $G$, we use $V(F)$ to denote the set of vertices with at least one incident edge in $F$.
Given a ranking $\pi$ that maps each edge of $G$ to a real in $[0, 1]$, the greedy maximal matching is obtained as follows: We iterate over the edges in increasing order of their ranks; upon visiting an edge $e$, if no edge incident to $e$ is already in the matching, $e$ joins the matching. Given this matching, for each edge $e$ we define the eliminator of $e$ to be the edge incident to $e$ that is in the matching and has the lowest rank; if $e$ itself is in the matching, then $e$ is its own eliminator.
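The eliminator notion can be made concrete with a short sketch (function and variable names are ours): process edges by increasing rank, then record for each edge the lowest-rank matching edge incident to it.

```python
def greedy_matching_with_eliminators(ranks):
    """ranks: dict mapping edge (u, v) -> rank in [0, 1].
    Returns (matching, eliminator) where eliminator[e] is the
    lowest-rank matching edge sharing an endpoint with e
    (e itself if e is matched)."""
    matching, matched, eliminator = set(), set(), {}
    for e in sorted(ranks, key=ranks.get):
        u, v = e
        if u not in matched and v not in matched:
            matching.add(e)
            matched.update(e)
    for e in ranks:
        u, v = e
        cands = [f for f in matching if u in f or v in f]
        eliminator[e] = min(cands, key=ranks.get) if cands else None
    return matching, eliminator

ranks = {("a", "b"): 0.1, ("b", "c"): 0.2, ("c", "d"): 0.3}
m, elim = greedy_matching_with_eliminators(ranks)
assert m == {("a", "b"), ("c", "d")}
assert elim[("b", "c")] == ("a", "b")  # the edge that blocked ("b","c")
```

Since matching edges are vertex-disjoint, a matched edge is always its own eliminator, in agreement with the definition above.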
The greedy maximal matching algorithm will be particularly useful when the ranking is random. We achieve this by picking each entry of the ranking (i.e., the rank of each edge) independently and uniformly at random from $[0, 1]$. Furthermore, it is not hard to see that these ranks need not be too long: the first $\Theta(\log n)$ bits of the ranks are enough to ensure that no two edges receive the same rank, with high probability. From now on, whenever we use the term “random ranking” we assume $\Theta(\log n)$-bit ranks from $[0, 1]$ are drawn independently and uniformly at random.
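To see why logarithmically many bits suffice, a standard birthday-style union bound over edge pairs gives (with $k$ the number of rank bits and $m \le n^2/2$ the number of edges):

```latex
\[
\Pr\bigl[\exists\, e \neq f:\ \pi(e) = \pi(f)\bigr]
\;\le\; \binom{m}{2}\, 2^{-k}
\;\le\; \frac{n^4}{8}\, 2^{-k}
\;\le\; n^{-c}
\qquad \text{for } k \ge (4 + c)\log_2 n .
\]
```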
We will make use of the following, by now standard, sparsification property of the random greedy maximal matching algorithm—see e.g. [DBLP:conf/spaa/BlellochFS12, DBLP:conf/icml/AhnCGMW15, DBLP:conf/podc/GhaffariGKMR18, maximalmatchingfocs, assadisoda, misfocs, DBLP:conf/mfcs/Konrad18].
Lemma 3.1.
Fix an arbitrary graph $G$ and a random ranking $\pi$ on its edge set. The following event holds w.h.p. over the randomization in $\pi$: For every rank $p$, the subgraph of $G$ including the edges whose eliminator has rank at least $p$ has maximum degree $O(\log n / p)$.
4 A Static Algorithm
In this section we describe a static algorithm for finding an approximate maximum matching. We show in Section 5 that the algorithm provides a better-than-2 approximation, and in Section 6 that it can be maintained in update time $O(\Delta^{\epsilon}) \cdot \operatorname{polylog}(n)$.
Intuitive explanation of the algorithm. We start with an RGMM $M$. After that, we partition the edge set of $M$ into subsets $M_1, \dots, M_k$ such that, roughly, the ratio of the maximum rank over the minimum rank within each subset is bounded. Then, focusing on each subset $M_i$, we try to augment the edges of $M_i$ by finding two random greedy matchings of subgraphs that are determined based on the set $M_i$. Roughly, each edge in either of these subgraphs has one endpoint that is matched by $M_i$—this is the edge to be augmented—and one endpoint that is either unmatched or matched after the edges in $M_i$ are processed in the greedy construction of $M$. Therefore, if for an edge $(u, v) \in M_i$, endpoint $u$ is matched to some vertex $a$ in the matching of one subgraph and $v$ is matched to some vertex $b$ in the matching of the other, and in addition $a$ and $b$ are unmatched in $M$, then $(a, u, v, b)$ will be a length-3 augmenting path of $M$.
Algorithm 1. Meta algorithm for finding a better-than-2 approximation of maximum matching.
5 Approximation Factor of Algorithm 1
In this section, we prove that the approximation factor of Algorithm 1 is at most $2 - \delta$ for some constant $\delta > 0$, given that $\epsilon$ is a constant.
We fix one arbitrary maximum matching of graph $G$ and denote it by $\mathrm{opt}$; recall that $|\mathrm{opt}| = \mu(G)$. Having this matching $\mathrm{opt}$, we call an edge of $M$ 3-augmentable if it is in a length-3 augmenting path in $M \oplus \mathrm{opt}$. Observe that since $\mathrm{opt}$ cannot be augmented, such an augmenting path must start and end with edges in $\mathrm{opt}$, and thus the edge of $M$ has to be in the middle.
The following lemma is crucial in the analysis of the approximation factor. Basically, it says that for any subset $M_i$ where most of the edges are 3-augmentable (which in fact should be the case for most of the subsets if $|M|$ is close to half the size of $\mathrm{opt}$), roughly a constant fraction of the edges in $M_i$ end up in length-3 augmenting paths found by the algorithm. We emphasize that this does not directly prove the bound on the approximation factor, as these length-3 augmenting paths may not necessarily be augmenting paths of the whole matching $M$.
Lemma 5.1.
For any $i$ and any parameter , if a fraction of the edges in $M_i$ are 3-augmentable, then, in expectation, there are at least edges in $M_i$ where both of their endpoints are matched in .
In order to prove this lemma, in Lemma 5.2 we recall a property of the greedy maximal matching algorithm under vertex sampling, originally due to [streamingapprox, streamingarXiv]. We note that the property we need is slightly stronger than the one proved in [streamingarXiv, Theorem 3], but it follows from a similar argument. Roughly, we need a lower bound on the number of matched vertices inside a specific vertex subset, while the previous statement only lower bounded the overall matching size. We provide the complete proof in Appendix A.
Lemma 5.2.
Let $G$ be a bipartite graph, $\pi$ be an arbitrary permutation over its edges, and $Q$ be an arbitrary matching of $G$. Fix any parameter and let be a subsample of one side including each vertex independently with probability . Define to be the number of edges in $Q$ whose endpoint in the sampled side is matched in the greedy matching of the subsampled graph; then
Proof of Lemma 5.1.
Fix a subset $M_i$ which includes 3-augmentable edges and denote by the subset of edges in $M_i$ that are 3-augmentable; implying that
(1) 
Recall that each such edge is the middle edge in a length-3 augmenting path in $M \oplus \mathrm{opt}$, where $\mathrm{opt}$ is a fixed maximum matching of $G$. Define set (resp. ) to be the subset of edges in $\mathrm{opt}$ where one of their endpoints is in (resp. ) and the other endpoint is in set (resp. ).
We say an edge is good if is matched in and also is matched in ; and use to denote the subset of edges in that are good. We first claim that
(2) 
where observe that here the expectation is only taken over the randomization in the partitioning into and . To see this, fix an edge and let and be the edges in $\mathrm{opt}$ that, along with it, form a length-3 augmenting path. Observe that the first event occurs if and the second if . Since the partition of and is chosen independently and uniformly at random, there is a probability that both these events occur. Linearity of expectation over every such edge proves (2).
Now, consider graphs and and observe that is a matching of and is a matching of . One can confirm that is a random subsample of where for each , . More importantly, whether for a vertex the event holds is independent of which other vertices are in . (Though we note that is not independent of those vertices in .) Similarly, can be regarded as a random subsample of wherein the vertices appear independently from each other. As a result, graph (resp. ) is essentially obtained by retaining a random subsample of the vertices in the (resp. ) partition of graph (resp. ). We can thus use Lemma 5.2 while fixing matching to infer that
(3) 
Similarly,
(4) 
Observe that each edge in has one endpoint in , which is sampled with probability . Combined with (3), this means that
(5) 
Similarly by (4),
(6) 
By (2), we have expected good edges. Out of these, each edge is sampled, i.e., and , with probability . Therefore, in expectation, there are a total of sampled good edges. We say a sampled good edge is wasted if is unmatched in or is unmatched in . Combined with (5) and (6), there are at most wasted edges. This means that the expected number of sampled good edges that are not wasted is at least
Moreover, by (1), . Plugging this into the equation above, we get that there are, in expectation, at least good edges that are not wasted, i.e., both of their endpoints are matched, as claimed in the lemma. ∎
The following claim shows that there is a subset $M_j$ that is “large enough” compared to the size of matching $M$ and is much larger than the total number of edges in the previous subsets $M_1, \dots, M_{j-1}$.
Claim 5.3.
There exists an integer $j$ such that
Proof.
Let $j$ be the smallest integer for which
(7) 
we show that both conditions of the claim hold for this $j$. First, we have to prove that there is a choice of $j$ satisfying (7). Suppose for the sake of contradiction that this is not the case; then:
Observe that the subsets $M_1, \dots, M_k$ partition the edges of $M$, and thus it should hold that ; implying that the equation above is indeed a contradiction, proving the existence of $j$.
The first inequality of the claim is automatically satisfied for $j$ due to (7), since
It thus only remains to prove the second inequality. For that, observe that since $j$ is the smallest integer satisfying (7), for any smaller index we have the reverse inequality. This means that
Combining this with (7) we get
implying the second inequality of the claim as well. ∎
Let us recall a folklore property: if the maximal matching is not already large enough, then most of the edges in it are 3-augmentable.
Observation 5.4 (folklore).
If , then at least edges in are 3-augmentable.
Proof.
See e.g. [streamingarXiv, Lemma 1] for a simple argument. ∎
We are now ready to analyze the approximation factor. We prove that for , the matching returned by Algorithm 1 has, in expectation, size at least . We first assume that , as otherwise matching $M$ already achieves the desired approximation factor. By Observation 5.4, this means that at least edges of $M$ are 3-augmentable; meaning that the number of edges in $M$ that are not 3-augmentable is at most
(8) 
Let $j$ be the integer satisfying Claim 5.3. By (8), there are at most edges in $M$, and thus in $M_j$, that are not 3-augmentable. Therefore,
# of 3-augmentable edges in  
First inequality of Claim 5.3.  
Since fraction of the edges in are 3augmentable, by Lemma 5.1, there are at least
edges in $M_j$ both of whose endpoints are matched. We would like to argue that these form length-3 augmenting paths, but note that such an edge may have an endpoint that is already matched when augmenting the earlier subsets. However, the crucial observation here is that, by the second inequality of Claim 5.3, we have , and each edge in can be connected to two edges in , so the number of these length-3 augmenting paths that are also augmenting paths in is at least
Each of these augmenting paths can be used to increase the size of the matching by one; therefore the final matching has size at least
which proves the claimed approximation factor so long as $\epsilon$ is a constant.
6 Dynamic Implementation of Algorithm 1
In this section, we describe how we can maintain the matching of Algorithm 1 in update time $O(\Delta^{\epsilon}) \cdot \operatorname{polylog}(n)$.
6.1 Tools
We borrow two black-box tools from previous works. The first is a simple corollary of the algorithm of Gupta and Peng [DBLP:conf/focs/GuptaP13]; see also [DBLP:conf/icalp/BernsteinS15] for a proof of this corollary.
Lemma 6.1 ([DBLP:conf/focs/GuptaP13]).
Let $\Delta$ be a fixed upper bound on the maximum degree of a graph at all times. Then we can maintain a $(1+\epsilon)$-approximate matching deterministically under edge insertions and deletions in worst-case time per update depending only on $\Delta$ and $\epsilon$.
We use Lemma 6.1 only for the last step of Algorithm 1, in which we need to maintain a near-maximum matching of the union of the maintained matchings, which is a graph of small maximum degree (each vertex belongs to at most one edge per maintained matching).
The second black-box result that we use is due to the recent algorithm of Behnezhad et al. [misfocs], which asserts that a random greedy maximal matching can be maintained efficiently.
Lemma 6.2 ([misfocs]).
Let $\pi$ be a random ranking where the rank of each edge is drawn uniformly at random upon its arrival. Then the greedy maximal matching with respect to $\pi$ can be maintained under edge insertions and deletions in polylogarithmic expected time per update (without amortization). Furthermore, for each update, the adjustment complexity is constant in expectation and polylogarithmic w.h.p.
6.2 Data Structures & Setup
Algorithm 1 computes two types of matchings: (1) matching $M$, which is a standard random greedy maximal matching of the whole graph $G$; and (2) matchings computed on specific subgraphs of $G$. Observe in Algorithm 1 that the matching computed for each subset $M_i$ is the union of two random greedy matchings. A crucial observation here is that the two subgraphs on which they are computed are, by definition, vertex-disjoint. Therefore, defining graph $G_i$ to be the union of these two subgraphs, the union of the two matchings is equivalent to a single random greedy maximal matching of $G_i$.
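The vertex-disjointness observation can be checked on a toy instance (helper and variable names ours): running the rank-greedy procedure on the union of two vertex-disjoint graphs, under one shared ranking, produces exactly the union of the two matchings computed separately.

```python
def greedy_by_rank(ranks):
    """Greedy maximal matching processing edges by increasing rank."""
    matching, matched = set(), set()
    for (u, v) in sorted(ranks, key=ranks.get):
        if u not in matched and v not in matched:
            matching.add((u, v))
            matched.update((u, v))
    return matching

ranks_a = {(1, 2): 0.3, (2, 3): 0.1}   # graph A on vertices {1, 2, 3}
ranks_b = {(4, 5): 0.2, (5, 6): 0.4}   # graph B on {4, 5, 6}, disjoint from A
union = {**ranks_a, **ranks_b}
# Disjointness means the greedy runs never interact:
assert greedy_by_rank(union) == greedy_by_rank(ranks_a) | greedy_by_rank(ranks_b)
```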
Now we have graphs $G_1, \dots, G_k$ and independently drawn rankings for them. Therefore, if a priori these graphs were fixed and remained unchanged after each edge insertion/deletion, we could use Lemma 6.2 to update each one of them in polylogarithmic expected time, requiring only a small total update time. However, as highlighted in Section 2, the challenge is that the vertex sets of the graphs $G_i$ are adaptively determined based on matching $M$. That is, a single edge update that changes matching $M$ may lead to many vertex insertions/deletions in the graphs $G_i$, which are generally much harder to handle than edge updates. Therefore, we need to be careful about what to maintain, and how, to ensure these vertex updates can be determined and handled efficiently.
Fixing the randomizations. To maintain the matching of Algorithm 1, we fix all the required randomizations. There are two types of randomization involved: (1) randomization on the edges, such as the random rankings and edge samplings of Algorithm 1; and (2) randomization on the vertices. We reveal the randomization on the vertex set in the preprocessing step, as the vertex set is static, but we reveal the randomization on the edges upon their arrival. For completeness, we list the precise random bits drawn below.
For each edge, we draw the following upon its arrival:
- Edge-sampling bit: drawn for each edge independently; it determines the outcome of the edge sampling in Algorithm 1.
- Rank: the rank of the edge in its ranking, drawn for each edge independently.
And for each vertex, in the preprocessing step, we draw:
- Partition bit: drawn for each vertex independently; its value determines which side of the vertex partition the vertex joins in Algorithm 1.
Data structures. For any vertex and any graph $G_i$, we maintain the following data structures:
- Matched-edge pointer: if the vertex is not part of graph $G_i$, or if it is unmatched in the matching of $G_i$, the pointer is null. Otherwise, it points to the edge incident to the vertex that is in the matching of $G_i$.
- Neighbor set: the set of neighbors of the vertex in graph $G_i$, stored as a self-balancing binary search tree in which each neighbor is indexed by the rank of its eliminator. (If the vertex is not in the vertex set of $G_i$, this set is simply empty.)
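To illustrate why this indexing helps (a sketch with our own names; the paper's structure is a balanced BST, here simulated with a sorted list), keeping each neighbor keyed by its eliminator rank lets us enumerate exactly the neighbors with eliminator rank at least a threshold in logarithmic time plus output size:

```python
import bisect

class RankIndexedNeighbors:
    """Neighbors of one vertex, kept sorted by eliminator rank."""
    def __init__(self):
        self._keys = []   # eliminator ranks, sorted
        self._vals = []   # neighbor ids, aligned with _keys

    def insert(self, elim_rank, neighbor):
        i = bisect.bisect_left(self._keys, elim_rank)
        self._keys.insert(i, elim_rank)
        self._vals.insert(i, neighbor)

    def remove(self, elim_rank, neighbor):
        i = bisect.bisect_left(self._keys, elim_rank)
        while self._vals[i] != neighbor:  # skip over equal ranks
            i += 1
        del self._keys[i], self._vals[i]

    def at_least(self, tau):
        """All neighbors whose eliminator rank is >= tau."""
        return self._vals[bisect.bisect_left(self._keys, tau):]

adj = RankIndexedNeighbors()
for r, v in [(0.9, "x"), (0.2, "y"), (0.5, "z")]:
    adj.insert(r, v)
assert adj.at_least(0.4) == ["z", "x"]
```

A list insert is linear-time, so a production version would use a real balanced tree; the query pattern, however, is the same.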
6.3 The Update Algorithm
We run $k$ instances of Lemma 6.2 for maintaining the greedy matchings of $G_1, \dots, G_k$. Moreover, we run a single instance of Lemma 6.1 on the edges in the union of these matchings.
As mentioned previously, a single edge update to graph $G$ may change the structure of the graphs $G_i$, and in particular may lead to vertex insertions or deletions in them. Therefore, our main focus in this section is to show how we can efficiently detect the vertices that join/leave the graphs $G_i$, along with their incident edges in these graphs. Before that, we need the following lemma. Its proof is a simple consequence of Lemma 3.1, and we thus defer it to Appendix B.
Lemma 6.3.
Suppose that an edge is inserted into or deleted from a graph $G_i$ for some $i$. After updating the matching of $G_i$ (e.g., by Lemma 6.2) and obtaining the list of edges that joined or left the matching, we can update the matched-edge pointers and neighbor sets accordingly in polylogarithmic expected time per changed edge.
Consider the insertion or deletion of an edge $e$. We use the following procedure to maintain our data structures and, finally, the matching returned by Algorithm 1.
Step 1: Updating $M$. We first update matching $M$. This is done by Lemma 6.2 in polylogarithmic expected time. After that, we also update the data structures where necessary using Lemma 6.3. There are two cases. If matching $M$ changes after the update, then we may have to update the vertex sets of the graphs $G_i$; this is the costly operation, and we handle it in the next steps. If $M$ does not change, the only remaining update is to check whether $e$ itself is part of a graph $G_i$ and reflect that; this takes only polylogarithmic time using Lemmas 6.2 and 6.3.
Step 2: Updating the vertex sets of $G_i$. The vertex set of each graph $G_i$ is composed of four disjoint subsets. One can confirm from Algorithm 1 that whether a vertex belongs to one of these sets (and which one, if so) is uniquely determined by the edge incident to it that is in matching $M$, or by the fact that no such edge exists. Therefore:
Observation 6.4.
If, after the update, a vertex leaves or is added to the vertex set of a graph $G_i$, then there must exist an edge incident to it that either joined or left matching $M$.
By Observation 6.4, to update the vertex sets, it suffices to iterate only over the vertices whose matching edge in $M$ has changed and determine which graph they should belong to. The procedure is a simple consequence of the way Algorithm 1 constructs these graphs, together with the randomizations fixed previously. We provide the details in Algorithm 2 for completeness.
Note that we only update the vertex sets in this step. In particular, for a vertex that, e.g., joins a graph $G_i$, we do not construct its adjacency list yet; this is postponed to the next step, after all the vertex sets are completely updated.
Algorithm 2. Updating the vertex sets of the graphs $G_i$.
Step 3: Updating the adjacency lists of the graphs $G_i$ and their matchings. The previous step updated the vertex sets. Here, we update the adjacency lists and the matchings of the graphs $G_i$. Precisely, we update the neighbor-set data structure for each affected vertex and each $G_i$ where necessary. Note that for the endpoints of the updated edge itself, the data structures were already updated in Step 1.
First, for any vertex that leaves a graph $G_i$, we immediately remove its incident edges from the graph one by one. Each of these removals is regarded as an edge deletion, and thus we can use Lemma 6.2 to update the matching of $G_i$. We then update the data structures accordingly using Lemma 6.3.
Next, for any vertex that is added to the vertex set of a graph $G_i$, we have to determine the set of its neighbors in this graph. To do so, we take the steps formalized as Algorithm 3. A crucial observation to note before reading the description of Algorithm 3 is stated below; its proof is a direct consequence of the greedy structure of RGMM, so we defer it to Appendix B.
Claim 6.5.
Suppose that the edge being inserted into/deleted from $G$ is part of matching $M$ (before the deletion if deleted, and after the insertion if inserted); note that if this were not the case, then updating $M$ would not change the vertex sets of the graphs $G_i$. Also assume that this edge belongs to subset $M_j$ of matching $M$ in Algorithm 1. Then this update may only affect the vertex sets of the graphs $G_i$ corresponding to subsets whose rank range is at least that of $M_j$. In particular, any graph $G_i$ whose subset contains only ranks lower than those of $M_j$ remains unchanged after the insertion or deletion of the edge.
Claim 6.5 is algorithmically useful in the following way. Suppose that the updated matching edge belongs to $M_j$, and let $\tau$ be the minimum rank considered to be in $M_j$ in Algorithm 1. Then we can remove from consideration all edges whose eliminator ranks are less than $\tau$, and the remaining graph will include all edges that we have to consider for the affected graphs $G_i$. By Lemma 3.1, this prunes the degrees to $O(\log n / \tau)$ and helps reduce the running time. See Algorithm 3 and Lemma 6.6 for the details.
Algorithm 3. Updating the adjacency lists of the graphs $G_i$.
Step 4: Updating the final matching. Finally, recall that we run Lemma 6.1 to maintain a near-maximum matching of the graph formed by the union of the maintained matchings, which includes our final matching. Throughout the updates above, we keep track of all edges that leave/join these matchings, and for each of them we update this final matching via Lemma 6.1.
6.4 Correctness & Running Time of Update Algorithm
In this section, as the title describes, we prove the correctness of the update algorithm above and analyze its running time. Namely, we prove the following lemma.
Lemma 6.6.
The update algorithm of the previous section correctly updates all data structures and the matching, and its expected running time per update, without amortization, is $O(\Delta^{\epsilon}) \cdot \operatorname{polylog}(n)$.
Before that, let us show how to turn this update time into $O(\Delta^{\epsilon} + \operatorname{polylog} n)$, as claimed by Theorem 1. To do so, given $\epsilon$, we run the algorithm with a smaller parameter, say $\epsilon/2$. Then the update time would be $O(\Delta^{\epsilon/2}) \cdot \operatorname{polylog}(n)$. Now, if $\Delta^{\epsilon/2} \ge \operatorname{polylog}(n)$, then we already have $O(\Delta^{\epsilon/2}) \cdot \operatorname{polylog}(n) = O(\Delta^{\epsilon})$. Otherwise, $\Delta^{\epsilon/2}$ is polylogarithmic and the whole update time is also polylogarithmic.
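In symbols, running the algorithm with parameter $\epsilon/2$ (a sketch of the rescaling argument; the exact constant is immaterial):

```latex
\[
\Delta^{\epsilon/2} \cdot \operatorname{polylog} n \;=\;
\begin{cases}
O(\Delta^{\epsilon}) & \text{if } \Delta^{\epsilon/2} \ge \operatorname{polylog} n,\\[2pt]
\operatorname{polylog} n & \text{if } \Delta^{\epsilon/2} < \operatorname{polylog} n,
\end{cases}
\qquad\text{so}\qquad
\Delta^{\epsilon/2} \cdot \operatorname{polylog} n
= O\!\left(\Delta^{\epsilon} + \operatorname{polylog} n\right).
\]
```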
As another note, in Theorem 1 we state that the update time is worst-case, but Lemma 6.6 bounds the expected update time. To turn this into a worst-case bound, we use the reduction of Bernstein et al. [DBLP:conf/soda/BernsteinFH19]. For the reduction to work, the crucial property is that the update-time bound must hold in expectation without any amortization, as is the case here.
Proof of Lemma 6.6.
It is easy to verify the correctness of Steps 1, 2, and 4, which are in fact quite fast and take only polylogarithmic time in total; we provide the necessary details for these steps at the end of this proof. The main component of the update algorithm, however, is Step 3. We thus first focus on this step and analyze its running time and correctness.
As before, assume that edge $e$ is updated. If matching $M$ does not change as a result of this update, then the vertex sets of all graphs $G_i$ remain unchanged. However, if updating $e$ changes $M$, then $e$ must be in $M$ at some point (before the deletion if deleted, after the insertion if inserted). As in Algorithm 3, we assume $e$ belongs to $M_j$ and let $\tau$ be the minimum possible rank in $M_j$. By Claim 6.5, the graphs corresponding to subsets with ranks lower than those of $M_j$ remain unchanged. Also, one can confirm from Algorithm 1 that all edges in the affected graphs have eliminator rank at least $\tau$ in $M$. Thus, for any vertex added to an affected graph, the set of its neighbors with eliminator rank at least $\tau$ indeed includes all edges incident to it that may belong to the graph. Moreover, by Lemma 3.1, this set has size at most $O(\log n / \tau)$, and it can be found efficiently since the neighbors are indexed by their eliminator rank. The overall update time required for Step 3 is thus
By Lemma 6.2, the number of edges that are updated in $M$ is w.h.p. polylogarithmic. It remains to determine the expected value of the second factor in the running time above. Consider the intervals of ranks assigned by Algorithm 1 to the subsets. For the edge $e$ that is updated, the probability that its rank falls in the $j$-th interval is upper bounded by the length of that interval. Moreover, given that the rank is in the $j$-th interval, the quantity $1/\tau$ is bounded accordingly. Thus:
This means that the overall update time required for Step 3 is as claimed.
Now, we focus on the other steps.
In Step 1, only matching $M$ and the data structures related to it are updated. These updates are correct and take only polylogarithmic expected time by Lemmas 6.2 and 6.3.
In Step 2, we detect the updates to the vertex sets of . This is done in Algorithm