1 Introduction
The maximum matching problem is one of the most widely-studied problems in computer science and operations research, with a long history and rich theory [5, 39]. On $n$-vertex, $m$-edge graphs, the state-of-the-art maximum matching algorithms require $O(m\sqrt{n})$ and $O(n^\omega)$ time [40, 42] (here $\omega$ is the matrix multiplication exponent [55]). For bipartite graphs, simpler algorithms with the same asymptotic running times are known [34, 42], as well as a faster, $\tilde{O}(m^{10/7})$-time algorithm, due to the recent breakthrough of Mądry [41] for the maximum flow problem. For approximate matchings, it has long been known that a matching admitting no augmenting paths of length $O(1/\epsilon)$ forms a $(1+\epsilon)$-approximate maximum matching (see [34]). The linear-time “blocking flow” subroutines of [34, 40] therefore result in an $O(m/\epsilon)$-time $(1+\epsilon)$-approximate maximum matching algorithm.
The maximum weight matching (MWM) problem has also garnered much interest over the years. For general weights, the seminal work of Edmonds and Karp [22] shows how to reduce the problem on bipartite graphs to the solution of $n$ nonnegative single-source shortest path instances. Relying on the Fibonacci heaps of Fredman and Tarjan [23], this approach yields the current fastest strongly-polynomial running time for the problem, $O(mn + n^2\log n)$. Gabow [24] later showed how to obtain the same running time for general graphs. For integer weights bounded by $W$, algorithms nearly matching the state-of-the-art for the unweighted problem, with either logarithmic or linear dependence on $W$, are known. (Indeed, a black-box reduction of Pettie [47] from maximum weight matching to the maximum matching problem shows that a linear dependence in $W$ is the largest possible gap between these two problems.) These include the algorithms of [25] and [49], as well as a recent algorithm for bipartite graphs [17]. For approximation algorithms, an algorithm nearly matching the unweighted problem’s guarantees is known, yielding a $(1-\epsilon)$-approximate maximum weight matching in $O(m\epsilon^{-1}\log \epsilon^{-1})$ time [21].
All of the above results pertain to the static problem; i.e., where the input is given once and we need to compute a maximum matching of this input. However, in many applications the graphs considered are inherently dynamic, with edges removed or added over time. One could of course address such changes by recomputing a solution from scratch, but this could be wasteful and time-consuming, and such applications may require immediately updating the solution, as having users wait on a solution to be recomputed is likely to be unsatisfactory. Consider for example point-to-point shortest path computation, a problem routinely solved by navigation systems: for such an application, the temporary closure of some road due to construction should not result in unresponsive GPS applications, busy recomputing the relevant data structures (see e.g., [6, 37, 33, 19, 20, 52, 53, 9, 26, 28, 2, 31, 29, 30, 32, 3, 4]). Therefore, for such applications we want to update our solution quickly after every update, using fast worst-case (rather than amortized) update time.
Returning to the maximum matching problem, we note that a maximum matching can be trivially updated in $O(m)$ time per edge update, by searching for a single augmenting path. Sankowski [48] showed how to maintain the value of the maximum matching in $O(n^{1.495})$ update time. (We emphasize that this algorithm does not maintain an actual matching, but only the optimal value, and it seems unlikely that such update times can be obtained for maintaining a matching of this value.) On the other hand, Abboud and Williams [1] and Kopelowitz et al. [38] presented lower bounds based on long-standing conjectures, showing that even maintaining the maximum matching value likely requires $\Omega(n^{\delta})$ update time for some constant $\delta > 0$.
Given these hardness results for exact solutions, one is naturally inclined to consider fast approximate solutions. Trivially updating a maximal matching (and therefore a $1/2$-approximate maximum matching) can be done using $O(n)$ worst-case update time. The goal is to obtain sublinear update times – ideally polylogarithmic (or even constant) – with as low an approximation ratio as possible.
The first nontrivial result for fully-dynamic maximum matching is due to Ivković and Lloyd [35], who presented a maximal matching algorithm with $O((n+m)^{\sqrt{2}/2})$ amortized update time. Note that this bound is sublinear only for sufficiently sparse graphs. The problem of approximate maximum matchings remained largely overlooked until 2010, when Onak and Rubinfeld [45] presented a fully-dynamic constant-approximate algorithm with polylogarithmic (amortized) update time. Additional results followed in quick succession.
Baswana et al. [7] showed how to maintain a maximal matching in $O(\log n)$ expected amortized update time, and polylogarithmic amortized update time w.h.p. This was recently improved by Solomon [50], who presented a maximal matching algorithm using $O(1)$ amortized update time w.h.p. For deterministic algorithms, Neiman and Solomon [43] showed how to maintain $3/2$-approximate matchings deterministically in $O(\sqrt{m})$ worst-case update time, a result later improved by Gupta and Peng [27], who obtain $(1+\epsilon)$-approximate matchings in $O(\sqrt{m}\,\epsilon^{-2})$ update time. This result was in turn refined by Peleg and Solomon [46], who obtained the same approximation ratio and update time as [27] with $\sqrt{m}$ replaced by the maximum arboricity of the graph (which is always at most $O(\sqrt{m})$). Bernstein and Stein [10, 11] and Bhattacharya et al. [12] presented faster polynomial update time algorithms (with higher approximation ratios), and Bhattacharya et al. [13] presented a $(2+\epsilon)$-approximate algorithm with polylogarithmic amortized update time. See Table 1 for an in-depth tabular exposition of previous work and our results. (For the sake of simplicity we only list bounds given in terms of $n$ and $m$. In particular, we do not state the results for arboricity-bounded graphs, which in the worst case (when the arboricity of the graph is $\Theta(\sqrt{m})$) are all outperformed by algorithms in this table, with the aforementioned algorithm of Peleg and Solomon [46] being the lone exception to this rule.) In §5 we discuss our results for MWM, a problem also widely studied in the dynamic setting (see, e.g., [7, 27, 50, 51]).
Note that in the previous paragraph we did not state whether the update times of the discussed algorithms were worst case or amortized. We now address this point. As evidenced by Table 1, previous fully-dynamic matching algorithms can be broadly divided into two classes according to their update times: polynomial update time algorithms and polylogarithmic amortized update time algorithms. The only related polylogarithmic worst-case update time algorithms known to date were fractional matching algorithms, due to Bhattacharya et al. [14]. We bridge this gap by presenting the first fully-dynamic integral matching (and weighted matching) algorithm with polylogarithmic worst-case update time and constant approximation ratio. In particular, our approach yields a $(2+\epsilon)$-approximate algorithm, within the time bounds of [14], but for integral matching. (Independently of our work, and using a different approach, Charikar and Solomon [16] obtained a $(2+\epsilon)$-approximate dynamic matching algorithm with polylogarithmic worst-case update time. For fixed $\epsilon$ their algorithm is slower than ours, and is arguably more complicated than our approach.)
Approx.  Update Time  det.  w.c.  notes  reference
✗  ✗  Onak and Rubinfeld (STOC ’10) [45]
✓  ✓  Bhattacharya et al. (SODA ’15) [12]
✓  ✗  Bhattacharya et al. (SODA ’15) [12]
✓  ✗  Bhattacharya et al. (STOC ’16) [13]
✗  ✓  w.h.p  This work
✓  ✗  Ivković and Lloyd (WG ’93) [36]
✗  ✗  w.h.p  Baswana et al. (FOCS ’11) [8]
✗  ✗  w.h.p  Solomon (FOCS ’16) [50]
✓  ✓  bipartite only  Bernstein and Stein (ICALP ’15) [10]
✓  ✗  Bernstein and Stein (SODA ’16) [11]
✓  ✓  Neiman and Solomon (STOC ’13) [44]
✓  ✓  Gupta and Peng (FOCS ’13) [27]
(All references are to the latest publication, with the first publication venue in parentheses.)
1.1 Our Contribution
Our main technical result requires the following natural definition of approximately-maximal fractional matchings.
Definition 1.1 (Approximately-Maximal Fractional Matching).
We say that a fractional matching $\vec{x}$ is $(c, d)$-approximately-maximal if every edge $e$ either has fractional value $x_e \geq 1/d$, or has one endpoint $v$ with sum of incident edges' values at least $1/c$, such that moreover all edges $e'$ incident on $v$ have $x_{e'} \leq 1/d$.
Note that this definition generalizes maximal fractional matchings (for which $c = 1$). The second condition required of $v$ above (i.e., $v$ having no incident edges $e'$ with $x_{e'} > 1/d$) may seem a little puzzling, but will prove important later; it can be safely ignored until §2.1 and §3.
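As a concrete illustration of the definition, the following Python sketch checks approximate maximality of a given fractional matching. The parameter names `high` (the per-edge value threshold) and `near_full` (the required sum of incident values) are stand-ins for the definition's thresholds, chosen here for readability.

```python
from collections import defaultdict

def is_approximately_maximal(x, high, near_full):
    """Check Definition 1.1 (sketch): every edge either has
    fractional value at least `high`, or has an endpoint whose
    incident values sum to at least `near_full` and whose incident
    edges all have value at most `high`."""
    load = defaultdict(float)      # sum of incident edge values
    incident = defaultdict(list)   # values of edges at each vertex
    for (u, v), val in x.items():
        load[u] += val
        load[v] += val
        incident[u].append(val)
        incident[v].append(val)
    for (u, v), val in x.items():
        if val >= high:
            continue
        if any(load[w] >= near_full and max(incident[w]) <= high
               for w in (u, v)):
            continue
        return False
    return True
```

For example, the all-halves fractional matching on a triangle passes the check (every vertex is saturated), while a single edge of tiny value fails it.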
Our main qualitative result, underlying our quantitative results, is a black-box reduction from integral matching algorithms to approximately-maximal fractional matching algorithms, stated in the following theorem.
Theorem 1.2.
Let $A$ be a fully-dynamic approximately-maximal fractional matching algorithm with update time $T$ which changes at most $C$ edge weights per update. Then, for every $\epsilon > 0$, there exists a randomized fully-dynamic approximate integral matching algorithm $A'$ with update time $O\big(T + C \cdot \mathrm{poly}(\log n, 1/\epsilon)\big)$ with high probability. Moreover, if $T$ and $C$ are worst-case bounds, so is the update time of Algorithm $A'$.

Now, one may wonder whether fully-dynamic approximately-maximal fractional matching algorithms with low worst-case update time and few edge weight changes exist for any nontrivial values of $T$ and $C$. Indeed, the recent algorithm of Bhattacharya et al. [14] is such an algorithm, as the following lemma asserts.
Lemma 1.3 ([14]).
For all $\epsilon > 0$, there exists a fully-dynamic approximately-maximal fractional matching algorithm with $O(\mathrm{poly}(\log n, 1/\epsilon))$ worst-case update time, using at most $O(\mathrm{poly}(\log n, 1/\epsilon))$ edge weight changes per update in the worst case.
We highlight the general approach of the algorithm of Bhattacharya et al. [14] in §2.1 to substantiate the bounds given in Lemma 1.3. Plugging the bounds of Lemma 1.3 into Theorem 1.2 immediately yields our result, given in the following theorem. (We note that previously and independently from [14], we obtained similar results to those of Theorem 1.4. After receiving and reading a preprint of [14], we realized that using [14] and our simple reduction we can obtain this theorem in a much simpler way, and we therefore present only this much simpler algorithm.)
Theorem 1.4.
For all $\epsilon > 0$, there exists a randomized fully-dynamic $(2+\epsilon)$-approximate integral matching algorithm with $O(\mathrm{poly}(\log n, 1/\epsilon))$ worst-case update time.
We recall that until now, for worst-case polylogarithmic update times, only fractional algorithms – algorithms which only approximate the value of the maximum matching – were known for this problem.
Finally, combined with the recent black-box reduction of Stubbs and Vassilevska Williams [51] from the weighted to the unweighted matching problem, our algorithm also yields the first fully-dynamic constant-approximate maximum weight matching algorithm with polylogarithmic worst-case update time.
Theorem 1.5.
For all $\epsilon > 0$, there exists a randomized fully-dynamic constant-approximate maximum weight matching algorithm with $O(\mathrm{poly}(\log n, 1/\epsilon))$ worst-case update time.
1.2 Our Techniques
Our framework for obtaining our main result combines three ingredients: approximately-maximal fractional matchings, kernels, and fast matching algorithms for bounded-degree graphs. We give a short exposition of these ingredients and conclude with how we combine all three.
Approximately-Maximal Fractional Matchings.
The first ingredient we rely on is approximately-maximal fractional matchings, introduced in the previous section. Recall that in such solutions, each edge either has high fractional value or has an endpoint whose sum of incident edge values is high. This approximate maximality condition implies that the fractional matching has high value compared to the maximum matching size; specifically, the fractional matching's value is at least a constant fraction of the maximum matching size (easily verifiable using LP duality). As we shall show, approximate maximality also allows one to use these fractional values to sample a subgraph in the support of this fractional matching which contains a large integral matching compared to $\mu(G)$, with high probability. We discuss the dynamic fractional matching algorithm of Bhattacharya et al. [14] and show that it maintains an approximately-maximal fractional matching in §2.1.
Kernels.
The second ingredient we rely on is the notion of kernels, introduced by [12]. Roughly speaking, a kernel is a low-degree subgraph $H$ of $G$ such that each edge of $G$ not taken into $H$ has at least one endpoint whose degree in $H$ is nearly the maximum degree allowed in $H$. Relying on Vizing's Theorem [54], we show in §2.2 that such a subgraph has maximum matching size at least a constant fraction (roughly half) of the matching size of $G$; this was previously only known for kernels of bipartite graphs, where it is easily verifiable via LP duality. (As a byproduct of our proof, we show how the algorithms of Bhattacharya et al. [12] can be made constant-approximate within the same time bounds. As this is tangential to our main result, we do not elaborate on this.) Efficiently maintaining a large matching can therefore be reduced to maintaining a low-degree kernel, given the last ingredient of our approach.
Bounded-Degree Matching.
Our approach in a nutshell.
Given the above ingredients, our framework is a simple and natural one. Throughout our algorithm's run, we run a fully-dynamic approximately-maximal fractional matching algorithm with efficient worst-case update time. Sampling edges independently according to their fractional values (times some logarithmic term in $n$, to guarantee concentration) allows us to sample a kernel $H$ of logarithmic maximum degree, with each non-sampled edge having at least one endpoint of nearly maximum degree in $H$, with high probability. As the obtained subgraph therefore has a maximum matching of size at least a constant fraction of the maximum matching of $G$, an approximate matching algorithm run on $H$ yields an approximate matching of $G$. We then maintain a matching in $H$ (which, by virtue of $H$'s bounded degree, we can do in logarithmic worst-case time) following each update to $H$ incurred by a change of some edge's fractional value by the dynamic fractional matching algorithm. The obtained integral algorithm's update time is dominated by two terms: the running time of the fractional algorithm, and the number of edge weight updates per update times the (logarithmic) time to handle each such update in $H$. This concludes the high-level analysis of the approximation ratio and update time of our approach, as given in Theorem 1.2.
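The sampling step above can be sketched as follows in Python. The sampling probability is assumed here to have the form $\min(1, d \cdot x_e)$ for a logarithmic degree parameter $d$ (the constant 3 below is an arbitrary stand-in, not the paper's choice), and the per-vertex degree cap mirrors the "allowable" lists used later in the reduction.

```python
import math
import random
from collections import defaultdict

def sample_kernel(x, n, seed=0):
    """Sample each edge e independently with probability
    min(1, d * x[e]), where d = O(log n) is the target kernel
    degree bound; cap each vertex at d kept edges."""
    rng = random.Random(seed)
    d = max(1, math.ceil(3 * math.log(n)))
    degree = defaultdict(int)
    kernel = []
    for (u, v), val in x.items():
        if rng.random() < min(1.0, d * val):
            # Keep at most d sampled edges per endpoint, mirroring
            # the per-vertex "allowable" lists of the reduction.
            if degree[u] < d and degree[v] < d:
                kernel.append((u, v))
                degree[u] += 1
                degree[v] += 1
    return kernel, d
```

Note that edges of full fractional value are always sampled, and the cap guarantees the kernel's maximum degree is logarithmic by construction.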
Wider applicability.
We stress that our framework is general, and can use any approximately-maximal fractional matching algorithm. Consequently, any improvement in the running time or the number of edge value changes for maintaining approximately-maximal fractional matchings yields a faster worst-case update time. Likewise, any fractional matching algorithm which maintains a “more maximal” fractional solution yields better approximation ratios.
2 Preliminaries
In this section we introduce some previous results which we will rely on in our algorithm and its analysis. We start by reviewing the approach of Bhattacharya et al. [14] for obtaining efficient fractional algorithms in §2.1. We then discuss the bounded-degree subgraphs we will consider, also known as kernels, in §2.2. Finally, we briefly outline the approximate worst-case update time algorithms we will rely on for our algorithm, in §2.3.
2.1 Hierarchical Partitions
In this section we review the approximately-maximal fractional matchings maintained by Bhattacharya et al. [14]. At a high level, this algorithm relies on the notion of hierarchical partitions, in which vertices are assigned some level (the partition here is given by the level sets), and edges are assigned a fractional value based on their endpoints' levels. Specifically, an edge is assigned a value exponentially small in its endpoints' maximum level. The levels (and therefore the edge weights) are updated in a way that guarantees feasibility, as well as guaranteeing that a vertex of high level has a high sum of incident edge weights. These conditions are sufficient to guarantee approximate maximality, as we shall soon show.
The hierarchical partition considered by Bhattacharya et al. [14], termed simply a nice partition, is described as follows. The definition uses certain constants and a function of them, satisfying the following.
(1) 
In our case, for some $\epsilon > 0$, we will fix these constants as functions of $\epsilon$. As we will be shooting for approximation algorithms with polylogarithmic update time, and our reduction's update time has polynomial dependence on $1/\epsilon$, we will assume without loss of generality that $\epsilon$ is not too small, so that for $n$ large enough the above condition holds.
Definition 2.1 (A nice partition [14]).
In a nice partition of a graph $G$, each vertex $v$ is assigned an integral level $\ell(v)$ in the set $\{0, 1, \ldots, L\}$. In addition, for each vertex $v$ and incident edge $e$, the shadow-level of $v$ with respect to $e$, denoted by $\ell_e(v)$, is a (positive) integer close to $\ell(v)$. Moreover, for each vertex $v$, we have
(2) 
The level of an edge $e = (u, v)$ is taken to be the maximum shadow-level of an endpoint of $e$ with respect to $e$; i.e., $\ell(e) = \max\{\ell_e(u), \ell_e(v)\}$. Let $W_v$ be the sum of weights of the edges incident on a vertex $v$. Then,

For every edge $e$, its weight $w(e)$ is exponentially small in its level $\ell(e)$.

For every node $v$, it holds that $W_v \leq 1$.

For every node $v$ with level $\ell(v) > 0$, it holds that $W_v$ is at least a fixed constant threshold.
The intuition behind this definition in Bhattacharya et al. [14] is to mimic the hierarchical partition of Bhattacharya et al. [12], termed an $(\alpha, \beta)$-decomposition there. Such decompositions are the special case of nice partitions where the shadow-level of a vertex with respect to each incident edge is precisely equal to the vertex's level; i.e., $\ell_e(v) = \ell(v)$ for every edge $e$ incident on $v$. The advantage of the more relaxed notion of shadow-levels is to allow a vertex to move between levels “slowly”, only notifying part of its incident edges of its level change between updates, and therefore only updating some of its edges' weights. This allows for maintaining the partition with fast worst-case update time, as shown in Bhattacharya et al. [14] (more on this below).
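To make the weight assignment concrete, the following sketch assumes weights of the form $\beta^{-\ell(e)}$, with $\ell(e)$ the maximum shadow-level of the edge's endpoints; the base `beta` here is an illustrative stand-in, not the partition's actual constant.

```python
def edge_weight(shadow_u, shadow_v, beta=2.0):
    """Weight of an edge: exponentially small in the maximum
    shadow-level of its endpoints (sketch; base beta assumed)."""
    return beta ** (-max(shadow_u, shadow_v))

def vertex_load(v, edges, shadow, beta=2.0):
    """Sum of weights of edges incident on v; shadow[(u, e)] is
    the shadow-level of vertex u with respect to edge e."""
    total = 0.0
    for e in edges:
        u, w = e
        if v in (u, w):
            total += edge_weight(shadow[(u, e)], shadow[(w, e)], beta)
    return total
```

For instance, a vertex at shadow-level 1 with two incident edges whose other endpoints sit at lower shadow-levels has load $2\beta^{-1}$, so with `beta=2.0` its load is exactly 1.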
The above intuition concerning nice partitions will not prove important for our analysis. The crucial property we will rely on is given by the following lemma, which asserts that the fractional matching associated with a nice partition is approximately-maximal.
Lemma 2.2.
Let $\epsilon > 0$. Consider a nice partition with its constants chosen appropriately as functions of $\epsilon$. Then, the fractional matching associated with this nice partition is approximately-maximal.
Proof.
Consider an edge $e$. If $e$'s level is low, then by definition its weight $w(e)$ is high, satisfying the first condition of approximate maximality. Alternatively, if $e$'s level is high, then some endpoint $v$ of $e$ has high shadow-level with respect to $e$, and therefore, by integrality of levels, $\ell(v)$ is high as well. In particular, $\ell(v) > 0$, and so by Property 3 of a nice partition, $W_v$ is at least the fixed threshold. On the other hand, by Equation (2), the shadow-level of $v$ with respect to every edge $e'$ incident on $v$ is high as well.
Therefore, by the definition of the edge weights, each edge $e'$ incident on $v$ has low weight, as required by the second condition of approximate maximality. ∎
The recent result of Bhattacharya et al. [14] for maintaining nice partitions with polylogarithmic worst-case update time, together with Lemma 2.2, immediately implies Lemma 1.3, restated below. We substantiate these bounds, with the dependence on $\epsilon$ stated explicitly, in §A, as Bhattacharya et al. [14] treated $\epsilon$ as fixed and so their results do not state these dependencies explicitly.
Lemma 1.3 (restated).
As we shall show, approximately-maximal fractional matchings allow us to sample a bounded-degree subgraph of $G$ containing a large matching compared to the maximum matching size of $G$, $\mu(G)$. For this we will require the notion of kernels, defined in §2.2.
2.2 Kernels
In this section we review the concept of kernels, first introduced by Bhattacharya et al. [12].
Definition 2.3 (Kernels [12]).
A kernel $H$ of a graph $G$, with degree bound $d$, is a subgraph of $G$ satisfying:

For each vertex $v$, the degree of $v$ in $H$ is at most $d$.

For each edge $(u, v) \in E(G) \setminus E(H)$, it holds that $\max\{\deg_H(u), \deg_H(v)\} \geq (1-\epsilon) d$.
The interest in finding a bounded-degree subgraph of $G$ may seem natural, as one may expect to be able to compute a matching quickly in such a subgraph due to its sparsity (we elaborate on this point in §2.3). The interest in satisfying the second property, on the other hand, may seem a little cryptic. However, combining both properties implies that the matching number of $H$, $\mu(H)$, is large in comparison with the matching number of $G$, $\mu(G)$.
Lemma 2.4.
Let $H$ be a kernel of $G$. Then $\mu(H)$ is at least a constant fraction (roughly half, for small $\epsilon$) of $\mu(G)$.
Proof.
Consider the following fractional matching solution supported on $H$: fix a maximum matching $M^*$ of $G$, assign each edge of $H \setminus M^*$ a value inversely proportional to $H$'s degree bound $d$, and assign each edge of $M^* \cap H$ the largest value keeping its endpoints' constraints satisfied.
This is a feasible fractional matching due to the degree bound of $H$ and the fractional values assigned to the other edges of a vertex incident on an edge of $M^*$ being at most $1/d$. To show that this fractional matching has high value, consider the vertex loads $y_v = \sum_{e \ni v} x_e$. On the one hand, by the handshake lemma, the fractional matching's value is half the sum of these loads. On the other hand, each edge of $M^* \cap H$ induces high load at its endpoints by construction, and each edge of $M^* \setminus H$ has at least one endpoint of degree nearly $d$ in $H$, implying a load close to one for that endpoint. As each vertex neighbors at most one edge of $M^*$, we obtain that the fractional matching's value is at least a constant fraction of $|M^*| = \mu(G)$.
Now, to show that $H$ contains a large integral matching, we rely on Vizing's Theorem [54], which asserts that every multigraph of maximum degree $\Delta$ and maximum edge multiplicity $\mu$ admits a proper $(\Delta + \mu)$-edge-coloring; i.e., a partition of the edge set into $\Delta + \mu$ edge-disjoint matchings. To use this theorem, we construct a multigraph on the same vertex set, with each edge $e$ replaced by a number of parallel copies proportional to its fractional value $x_e$ (scaled so that all multiplicities are integral). By construction, the number of edges in this multigraph is proportional to the fractional matching's value, and by feasibility of the fractional matching, this multigraph has bounded maximum degree. By Vizing's Theorem, the simple subgraph obtained by ignoring parallel edges corresponding to edges of $M^*$ can be properly edge-colored with a palette of size proportional to the maximum degree; this coloring can be extended to a proper coloring of the whole multigraph by coloring the parallel copies with unused colors from the same palette. Some color class is then a matching of the support of this multigraph (i.e., of $H$) of size at least the number of multigraph edges divided by the palette size, yielding the claimed bound. ∎
Lemma 2.4 and the algorithm of §2.3 immediately imply that the algorithms of Bhattacharya et al. [12] can be made constant-approximate within the same time bounds (up to $\epsilon$-dependent terms). As this was previously also observed in Bhattacharya et al. [13], we do not elaborate on this point here.
2.3 Nearly-Maximum Matchings in Degree-Bounded Graphs
In this short subsection we highlight one final component we will rely on for our algorithm: fast nearly-optimal matching algorithms whose worst-case update time is bounded in terms of the graph's maximum degree. Such algorithms were given by Gupta and Peng [27] and Peleg and Solomon [46]. More precisely, we have the following lemma. The bound for the algorithm of Peleg and Solomon [46] follows as the arboricity is always at most the maximum degree, while the bound for the algorithm of Gupta and Peng [27] is immediate upon inspecting this algorithm, as observed in [46].
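As a toy illustration of why bounded degree helps (this is not the actual algorithm of [27, 46]): even just maintaining a maximal matching under edge deletions costs only $O(\Delta)$ worst-case time per update, since a freed endpoint needs to scan at most $\Delta$ neighbors to rematch.

```python
def repair_after_deletion(adj, matched, u, v):
    """Toy sketch: after the matched edge (u, v) is deleted from a
    graph with adjacency lists `adj`, rematch each freed endpoint
    by scanning its remaining neighbors -- O(max degree) work."""
    matched.pop(u, None)
    matched.pop(v, None)
    for w in (u, v):
        if w in matched:   # already rematched via the other endpoint
            continue
        for z in adj.get(w, []):
            if z not in matched:
                matched[w] = z
                matched[z] = w
                break
    return matched
```

The algorithms of [27, 46] are considerably more involved, maintaining nearly-maximum (rather than merely maximal) matchings, but the degree bound plays the same role of capping the per-update work.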
3 Sampling Using Approximately-Maximal Matchings
In what follows we show that sampling edges independently, with probability roughly proportional to their assigned values according to an approximately-maximal fractional matching, yields a kernel of logarithmic maximum degree with high probability.
Lemma 3.1.
Let $\epsilon > 0$ and let $\vec{x}$ be an approximately-maximal fractional matching. Then, sampling each edge $e$ independently with probability
(3)  $p_e = \min\{1,\; x_e \cdot d\}$, where $d = \Theta(\log n)$ is the kernel degree parameter,
yields a subgraph $H$ which is a kernel of $G$ with high probability.
Proof.
For any vertex $v$, denote by $X_v$ the random variable corresponding to $v$'s degree in the sampled subgraph $H$. As before, denote by $W_v$ the sum of the fractional values of the edges incident on $v$.
First, we prove the degree upper bound; i.e., Property 1 of a kernel. As $\vec{x}$ is a fractional matching, we know that $W_v \leq 1$. Therefore, by Equation (3), we have $\mathbb{E}[X_v] \leq d$. By standard Chernoff bounds, as $d$ is logarithmic in $n$, the probability that $X_v$ exceeds its expectation by more than a constant factor is polynomially small.
Next, we prove that any edge not sampled into $H$ will, with high probability, be incident on some high-degree vertex of $H$; i.e., we show that $H$ satisfies Property 2 of a kernel. First, note that an edge $e$ with high fractional value will be sampled with probability one, given our sampling probability in Equation (3), therefore trivially satisfying Property 2 of a kernel. Conversely, an edge $e$ with low fractional value has some endpoint $v$ with high sum of incident values $W_v$, such that all edges $e'$ incident on $v$ have low value, since $\vec{x}$ is approximately maximal. Therefore, by Equation (3), each edge $e'$ incident on $v$ is sampled with probability precisely $x_{e'} \cdot d$. Consequently, $\mathbb{E}[X_v] = W_v \cdot d$ is high. By standard Chernoff bounds, as this expectation is logarithmic in $n$, the probability that $X_v$ falls below it by more than a constant factor is polynomially small.
Taking a union bound over the possible bad events corresponding to violating a property of a kernel, we find that with high probability

For each vertex $v$, it holds that $\deg_H(v) \leq d$.

For each edge $(u, v) \in E(G) \setminus E(H)$, it holds that $\max\{\deg_H(u), \deg_H(v)\} \geq (1-\epsilon) d$.
In other words, $H$ is a kernel of $G$ with high probability. ∎
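Both concentration steps in the proof above instantiate a standard multiplicative Chernoff bound; a generic form (the exact parameters used in the proof are elided in this version) is:

```latex
\Pr\big[\,|X - \mu| \geq \delta \mu\,\big] \;\leq\; 2\exp\!\left(-\frac{\delta^2 \mu}{3}\right),
\qquad 0 < \delta \leq 1,
```

where $X$ is a sum of independent random variables taking values in $[0,1]$ and $\mu = \mathbb{E}[X]$. With $\mu = \Theta(\log n)$, the failure probability is polynomially small in $n$, which suffices for a union bound over all vertices.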
4 Our Reduction
Given the previous sections, we are now ready to describe our reduction from fully-dynamic integral matching to approximately-maximal fractional matching, and to analyze its performance, given by Theorem 1.2, restated here.
Theorem 1.2 (restated).
Proof.
Our reduction works as follows. Whenever an edge is added to or removed from $G$, we update the approximately-maximal fractional matching, using algorithm $A$, in time $T$. We then sample each of the at most $C$ edges whose value has changed, independently, with the probability given by Equation (3). To control the maximum degree in the sampled subgraph, every vertex $v$ maintains a list of at most $d$ sampled edges “allowable” for use. (This list can be maintained dynamically in a straightforward manner by maintaining the list of all sampled edges of $v$ and having the shorter list consist of the first $d$ sampled edges of $v$.) We let $H$ be the graph induced by the sampled edges “allowed” by both their endpoints. Finally, we use a matching algorithm as in §2.3 to maintain a matching in the sampled subgraph $H$.
By Lemma 3.1, the subgraph $H$ is a kernel of $G$ with high probability (note that by the same lemma, all sampled edges appear in our $H$). By Lemma 2.4, this means that with high probability this kernel has matching number at least a constant fraction of $\mu(G)$. Therefore, an approximate maximum matching in $H$ is itself a (slightly worse) approximate maximum matching in $G$. Now, each of the $C$ changes to edge weights of the fractional matching incurs at most three updates to the kernel $H$: for every edge $e$ whose weight changes, this edge can be added to or removed from $H$ if it is sampled in or out; in the latter case, both of $e$'s endpoints can have a new edge added to their “allowable” edge lists in place of $e$, and therefore possibly added to $H$, in case these endpoints had more than $d$ sampled edges. On the other hand, the approximate matching algorithm of §2.3 requires worst-case time proportional to $H$'s maximum degree per update in $H$, which is logarithmic by $H$'s worst-case degree bound. Consequently, our algorithm maintains an approximate integral matching w.h.p. in $O(T + C \cdot \mathrm{poly}(\log n, 1/\epsilon))$ update time; moreover, this update time is worst case if the bounds $T$ and $C$ are both worst case. ∎
5 Applications to Maximum Weight Matchings
In this section we highlight the consequences of our results for fully-dynamic maximum weight matching. First, we discuss a new reduction of Stubbs and Vassilevska Williams [51].
Lemma 5.1 ([51]).
Let $A$ be a fully-dynamic approximate maximum cardinality matching algorithm with update time $T$. Then, there exists a fully-dynamic approximate maximum weight matching algorithm $A'$ whose update time is $T$ times a factor logarithmic in $n$ and polynomial in $1/\epsilon$. Furthermore, if Algorithm $A$ is deterministic, so is Algorithm $A'$, and if Algorithm $A$'s update time is worst case, so is Algorithm $A'$'s update time.
This reduction (which we elaborate on shortly), together with the state-of-the-art dynamic maximum matching algorithms, implies most of the current best bounds for dynamic maximum weight matching, given in Table 2 below.
Approx.  Update Time  det.  w.c.  reference
✗  ✗  Lemma 5.1 + Solomon (FOCS ’16) [50]
✓  ✗  Lemma 5.1 + Bhattacharya et al. (STOC ’16) [13]
✓  ✓  Lemma 5.1 + Bhattacharya et al. (SODA ’15) [12]
✗  ✓  Lemma 5.1 + This work
✓  ✓  Gupta and Peng (FOCS ’13) [27]
✓  ✗  Lemma 5.1 + Bernstein and Stein (SODA ’16) [11]
✓  ✓  Lemma 5.1 + Gupta and Peng (FOCS ’13) [27]
✓  ✓  Gupta and Peng (FOCS ’13) [27]
✓  ✓  Gupta and Peng (FOCS ’13) [27]
A somewhat more involved reduction, with a worse update time bound than that given in Lemma 5.1, was presented in [51], as that paper's authors sought to obtain a persistent matching, in the sense that the matching should not change completely after a single update (i.e., no more than $O(T)$ changes to the matching per edge update, if $T$ is the algorithm's update time). However, a simpler and more efficient reduction, yielding a non-persistent matching algorithm with the performance guarantees of Lemma 5.1, follows immediately from the driving observation of Stubbs and Vassilevska Williams [51] (and indeed, is discussed in [51]). This observation, made previously by Crouch and Stubbs [18] in the streaming setting, is as follows: denote by $E_i$ the edges of weights in the $i$-th geometric weight class, and let $M_i$ be an approximate maximum matching in the subgraph induced by $E_i$. Then, greedily constructing a matching by adding the edges of each $M_i$ in decreasing order of $i$ yields a constant-approximate maximum weight matching. Adding to this the observation that, if we are content with a constant-approximate (or worse) maximum weight matching, we may safely ignore all edges of weight less than a $1/\mathrm{poly}(n)$ fraction of the maximum edge weight (a trivial lower bound on the maximum weight matching's weight), we find that we can focus on polynomially-bounded weight ranges, noting that each edge belongs to at most two such ranges.
In each such polynomially-bounded range, the argument of [18, 51] implies that maintaining approximate matchings in the geometric weight classes within the range, and combining these greedily, results in a constant-approximate maximum weight matching within the range. Therefore, the range containing a near-maximum weight matching (such a range exists, by the above) yields a constant-approximate MWM overall. The only possible difficulty is in combining these matchings greedily dynamically. This is relatively straightforward to do in worst-case time proportional to the number of weight classes per change of the approximate matching algorithms, however, implying the bound of Lemma 5.1.
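The greedy combination step can be sketched as follows (a toy, non-dynamic rendering of the Crouch–Stubbs observation: scan the per-class matchings from heaviest class to lightest, keeping an edge whenever both its endpoints are still free).

```python
def combine_greedily(matchings_by_class):
    """matchings_by_class maps a weight-class index i (heavier
    classes have larger i) to a matching, given as a list of
    vertex pairs. Returns the greedy combination, scanning the
    classes in decreasing order of i."""
    used = set()
    combined = []
    for i in sorted(matchings_by_class, reverse=True):
        for (u, v) in matchings_by_class[i]:
            if u not in used and v not in used:
                combined.append((u, v))
                used.update((u, v))
    return combined
```

In the dynamic setting, a change in one class's matching only requires locally re-running this scan, which is what yields the per-change overhead proportional to the number of weight classes.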
As seen in Table 2, this reduction of Stubbs and Vassilevska Williams [51] implies a slew of improved bounds for fully-dynamic approximate maximum weight matching. Plugging our bounds of Theorem 1.4 for fully-dynamic maximum matching into the reduction of Lemma 5.1 similarly yields the first constant-approximate maximum weight matching algorithm with polylogarithmic worst-case update time, given in Theorem 1.5 below.
Theorem 1.5 (restated).
6 Conclusion and Future Work
In this work we presented a simple randomized reduction from approximate fully-dynamic integral matching to fully-dynamic approximately-maximal fractional matching, with a polylogarithmic slowdown. Using the recent algorithm of Bhattacharya et al. [14], our work yields the first fully-dynamic matching algorithms with faster-than-polynomial worst-case update time for any constant approximation ratio; specifically, it yields a $(2+\epsilon)$-approximate matching with polylogarithmic worst-case update time. Our work raises several natural questions and future research directions to explore.
Faster/“More Maximal” Fractional Algorithms.
Our reduction yields approximate algorithms with polylogarithmic update times whose approximation ratios and update times are determined by the approximately-maximal fractional matching algorithms they rely on. There are two avenues to pursue here: the first, in order to improve the update time of our approximate matching algorithm, would be to improve the update time of the fully-dynamic approximately-maximal fractional matching algorithm of Bhattacharya et al. [14]; the second would be to design fractional algorithms maintaining "more maximal" solutions, which by our reduction would yield better approximation ratios. Generally, faster approximately-maximal fractional matching algorithms would imply faster randomized approximate integral matching algorithms.
More Efficient Reduction.
Another natural question is whether the overhead of our reduction may be removed, yielding randomized integral matching algorithms with the same running time as their fractional counterparts.
Deterministic Reduction.
A further natural question is whether one can obtain a deterministic counterpart to our black-box reduction from integral matching to approximately-maximal fractional matching. Any such reduction with polylogarithmic update time would yield deterministic integral matching algorithms with worst-case polylogarithmic update times.
Maximal Matching.
Finally, a natural question arising from our work and prior work is whether a maximal matching can be maintained in worst-case polylogarithmic time (which would also imply a 2-approximate minimum vertex cover within the same time bounds). We leave this as a tantalizing open question.
Acknowledgments.
We thank Monika Henzinger for sharing a preprint of [14] with us, and Virginia Vassilevska Williams for sharing a preprint of [51] with us. The fifth author wishes to thank Seffi Naor for a remark which inspired Definition 1.1.
Appendix
Appendix A Properties of the Nice Partition of Bhattacharya et al. [14]
In this section we justify our use of the fully-dynamic algorithm of [14] for maintaining a nice partition as per Definition 2.1, where the worst-case update time for inserting or deleting an edge is $O(\log^3 n)$ for a fixed constant $\epsilon$. Our goal here is twofold: first, to claim that the number of edge weight changes in each update operation is bounded by $O(\log n)$ in the worst case, and second, that the update time is $O(\log^3 n)$ in the worst case. Although the worst-case update time in [14] is $O(\log^3 n)$ (ignoring $\epsilon$ factors), the number of changes of the edge weights during an update could in principle be much larger, potentially polynomial in $n$. This can happen if, for example, changes of edge weights are maintained implicitly, in an aggregated or lazy fashion. Specifically, perhaps the changes of edge weights are implicitly maintained by vertices changing their levels during an update call, so that a small change in the level of a vertex hides a polynomial number of changes of edge weights. Fortunately, this is not the case in [14], as we assert in the following two lemmas, justifying our use of the result of [14] for dynamically maintaining a nice partition. We note that this is crucial for our needs, as the runtime of our algorithm depends on the number of edge weight changes.
First, it is easy to observe that the number of changes to edge weights is bounded by the worst-case runtime of an update procedure, which is $O(\log^3 n)$ for a fixed constant $\epsilon$. According to Section 3.4 in [14], every edge $(u,v)$ explicitly maintains the values of its weight $w(u,v)$ and level $\ell(u,v)$, and thus it is straightforward that there are at most $O(\log^3 n)$ changes of edge weights per update, for constant $\epsilon$. We further prove in Lemma A.1 that the number of changes of edge weights is actually bounded by $O(\log n)$ per insert/delete, for a constant $\epsilon$.
Lemma A.1.
The algorithm in [14] dynamically maintains a nice partition while edges are inserted into and deleted from the graph. Let $w$ denote the edge weights just before an update (edge insertion or deletion), and let $w'$ denote the edge weights just after the update. Then the number of changes in edge weights (i.e., the number of edges $e$ such that $w_e \neq w'_e$) is bounded by $O(\log n)$.
Proof.
The dynamic nice partition is maintained in [14] as follows: before and after each edge insertion/deletion, every vertex is a "clean" vertex in exactly one of six states: UP, DOWN, UP-B, DOWN-B, SLACK and IDLE. Each state has several constraints associated with it, and it is immediate that if the constraints of the state of every vertex hold, then the resulting partition with its assigned levels and edge weights forms a nice partition as per Definition 2.1 (see [14, Lemma 3.2]).
Each insertion (or deletion) of an edge to (from) the graph increases (resp. decreases) the weight associated with each endpoint of the edge. The insertion/deletion of an edge may cause an endpoint of the edge to be marked as dirty, based on a set of rules (rules 3.1–3.3 in [14]). Intuitively, a dirty vertex requires some fixing; for example, if a vertex $v$ has a large weight, very close to $1$, then an insertion of an edge touching $v$ further increases the weight, making it even closer to $1$ (or even larger than $1$). To preserve the constraints of all the states, this requires fixing the dirty vertex $v$, e.g., by increasing the shadow level of $v$ with respect to some edge $(u,v)$ touching it, which reduces the weight of the edge and thus reduces the weight of the vertex $v$. As a side effect of reducing the weight of $(u,v)$, the weight of the vertex $u$ is also reduced, possibly causing $u$ to become a dirty vertex which requires fixing.
The above description forms a chain of activations (a vertex is said to be activated if its weight changes). Assume that the inserted/deleted edge touches a vertex $v$; then $v$ becomes activated, as its weight has changed. It may happen (according to rules 3.1–3.3 in [14]) that due to this change the vertex $v$ becomes dirty, and this requires fixing by calling the procedure FIXDIRTYNODE($v$). The procedure FIXDIRTYNODE($v$) fixes the vertex $v$ so that it becomes clean again, thereby guaranteeing that it satisfies its state's constraints.
During FIXDIRTYNODE($v$) either no edge changes its weight (and thus the chain of activations terminates), or exactly one edge $(v,u)$ has its weight changed. In the latter case, we say that $u$ incurs an induced activation, which may cause $u$ to become dirty as per rules 3.1–3.3 in [14]. In this case, the chain of activations continues, as we now call FIXDIRTYNODE($u$), which in turn may cause at most one edge $(u,u')$ to change its weight, whereupon the vertex $u'$ incurs an induced activation. The chain of activations continues in this manner: as long as there is a dirty vertex $x$, the procedure FIXDIRTYNODE($x$) is called, restoring $x$ to clean status but potentially changing the weight of at most one edge $(x,y)$, which may cause $y$ to become a dirty vertex, and so on. However, as stated in the conference version in [14, Theorem 6.2] and proved in the full version (see [15, Theorem 6.4 and Lemma 8.1]), the chain of activations comprises at most $O(\log n)$ activations; i.e., there are at most $O(\log n)$ calls to FIXDIRTYNODE() in the chain until all vertices are clean. Furthermore, according to Assumptions 1 and 2 in [14, Section 6], an insertion or deletion of an edge may cause at most two chains of activations (one per endpoint), and thus following an edge insertion or deletion there are at most $O(\log n)$ calls to FIXDIRTYNODE().
Observe that FIXDIRTYNODE is the only procedure that may cause a change in the weight of an edge (except for the insertion/deletion of the edge itself), and each call to FIXDIRTYNODE changes the weight of at most one edge. Thus, we conclude that the number of changes of edge weights during an insertion/deletion of an edge is at most the number of calls to FIXDIRTYNODE, and is therefore bounded by $O(\log n)$. We mention that the level of a node $v$ may change due to a call to the subroutine UPDATESTATUS($v$) (e.g., Cases 2a and 2b), but this does not cause a change in the weights of the edges incident on $v$. ∎
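The accounting in the proof above can be illustrated in code. The following Python sketch is not the actual data structure of [14]: the helpers `is_dirty` and `fix_dirty_node` are hypothetical stand-ins for rules 3.1–3.3 and the procedure FIXDIRTYNODE, and the threshold `CAP` plays the role of the fractional matching constraint. What the sketch demonstrates is only the bookkeeping of the proof: each fix changes the weight of exactly one edge and hands the chain of activations to that edge's other endpoint, so the number of edge weight changes never exceeds the number of fix calls.

```python
# Illustrative sketch of the chain-of-activations accounting in Lemma A.1.
# The dirtiness rule and the fixing procedure below are simplified,
# hypothetical stand-ins; the invariant of interest is that each call to
# fix_dirty_node changes at most ONE edge weight.

CAP = 1.0  # stand-in for the fractional matching constraint on vertex weight


def vertex_weight(v, w):
    """Total fractional weight of edges incident on vertex v."""
    return sum(x for e, x in w.items() if v in e)


def is_dirty(v, w):
    """Hypothetical cleanliness rule: v is dirty if its weight exceeds CAP."""
    return vertex_weight(v, w) > CAP


def fix_dirty_node(v, w):
    """Hypothetical fix: lower the weight of exactly one edge at v
    (so: at most one weight change per call), and return the other
    endpoint of that edge, which incurs an induced activation."""
    e = max((e for e in w if v in e), key=lambda e: w[e])
    w[e] /= 2.0                       # exactly one edge weight changes
    (u,) = set(e) - {v}
    return u


def insert_edge(u, v, w, x=0.6):
    """Insert edge {u, v} with weight x, then run the chain of activations.
    Returns (number of fix calls, number of edge weight changes)."""
    w[frozenset((u, v))] = x
    calls = changes = 0
    chain = [u, v]                    # at most two chains, one per endpoint
    while chain:
        a = chain.pop()
        if is_dirty(a, w):
            calls += 1
            b = fix_dirty_node(a, w)  # changes at most one edge weight ...
            changes += 1
            chain.append(b)           # ... and may make b dirty in turn
    return calls, changes
```

Running `insert_edge` on a small path graph shows the claimed relation directly: the number of weight changes equals the number of fix calls, and all vertices end up clean.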
Finally, we state the worst-case update time for maintaining the nice partition. While [14] proves that the worst-case update time is $O(\log^3 n)$ for constant $\epsilon$, we also make explicit the dependency of the update time on $\epsilon$, which is implicit in [14].
Lemma A.2.
The algorithm in [14] dynamically maintains a nice partition in $O(\log^3 n \cdot \mathrm{poly}(1/\epsilon))$ worst-case update time.
Proof.
The worst-case update time is bounded by the number of calls to FIXDIRTYNODE() times the runtime of a single such call. As mentioned in the proof of Lemma A.1, each edge insertion/deletion may invoke $O(\log n)$ calls to FIXDIRTYNODE(), and the runtime of each call is bounded by $O(\log^2 n \cdot \mathrm{poly}(1/\epsilon))$, since the worst-case runtime of FIXDIRTYNODE() is dominated by the runtime of the procedure FIXDOWN(), which takes $O(\log^2 n \cdot \mathrm{poly}(1/\epsilon))$ time according to [14, Lemma 5.1 and Lemma 5.4]. Thus, the worst-case update time of an insertion/deletion of an edge is $O(\log^3 n \cdot \mathrm{poly}(1/\epsilon))$. ∎
Combining Lemma A.2 and Lemma A.1 we immediately obtain Theorem 1.3, restated below.
See 1.3
References
 Abboud and Williams [2014] Abboud, A. and Williams, V. V. 2014. Popular conjectures imply strong lower bounds for dynamic problems. In Proceedings of the 55th Annual IEEE Symposium on Foundations of Computer Science (FOCS). 434–443.
 Abraham and Chechik [2013] Abraham, I. and Chechik, S. 2013. Dynamic decremental approximate distance oracles with stretch. CoRR abs/1307.1516.
 Abraham et al. [2016] Abraham, I., Chechik, S., Delling, D., Goldberg, A. V., and Werneck, R. F. 2016. On dynamic approximate shortest paths for planar graphs with worst-case costs. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms. SODA '16. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 740–753.
 Abraham et al. [2017] Abraham, I., Chechik, S., and Krinninger, S. 2017. Fully dynamic all-pairs shortest paths with worst-case update-time revisited. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms. SODA '17. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 440–452.
 Ahuja et al. [1993] Ahuja, R. K., Magnanti, T. L., and Orlin, J. B. 1993. Network flows: theory, algorithms, and applications.
 Ausiello et al. [1991] Ausiello, G., Italiano, G. F., Marchetti-Spaccamela, A., and Nanni, U. 1991. Incremental algorithms for minimal length paths. J. Algorithms 12, 4, 615–638.
 Baswana et al. [2011] Baswana, S., Gupta, M., and Sen, S. 2011. Fully dynamic maximal matching in $O(\sqrt{m})$ update time. In 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science. 383–392.
 Baswana et al. [2015] Baswana, S., Gupta, M., and Sen, S. 2015. Fully dynamic maximal matching in $O(\sqrt{m})$ update time. SIAM Journal on Computing 44, 1, 88–113.
 Baswana et al. [2007] Baswana, S., Hariharan, R., and Sen, S. 2007. Improved decremental algorithms for maintaining transitive closure and all-pairs shortest paths. Journal of Algorithms 62, 2, 74–92. Announced at STOC'02.
 Bernstein and Stein [2015] Bernstein, A. and Stein, C. 2015. Fully dynamic matching in bipartite graphs. In International Colloquium on Automata, Languages, and Programming. Springer, 167–179.
 Bernstein and Stein [2016] Bernstein, A. and Stein, C. 2016. Faster fully dynamic matchings with small approximation ratios. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms. Society for Industrial and Applied Mathematics, 692–711.
 Bhattacharya et al. [2015] Bhattacharya, S., Henzinger, M., and Italiano, G. F. 2015. Deterministic fully dynamic data structures for vertex cover and matching. In Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms. SIAM, 785–804.
 Bhattacharya et al. [2016] Bhattacharya, S., Henzinger, M., and Nanongkai, D. 2016. New deterministic approximation algorithms for fully dynamic matching. In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing. ACM, 398–411.
 Bhattacharya et al. [2017a] Bhattacharya, S., Henzinger, M., and Nanongkai, D. 2017a. Fully dynamic approximate maximum matching and minimum vertex cover in $O(\log^3 n)$ worst case update time. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms. SIAM, 470–489.
 Bhattacharya et al. [2017b] Bhattacharya, S., Henzinger, M., and Nanongkai, D. 2017b. Fully dynamic approximate maximum matching and minimum vertex cover in $O(\log^3 n)$ worst case update time. CoRR abs/1704.02844.
 Charikar and Solomon [2017] Charikar, M. and Solomon, S. 2017. Fully dynamic almost-maximal matching: Breaking the polynomial barrier for worst-case time bounds. arXiv preprint arXiv:1711.06883.
 Cohen et al. [2017] Cohen, M. B., Mądry, A., Sankowski, P., and Vladu, A. 2017. Negative-weight shortest paths and unit capacity minimum cost flow in $\tilde{O}(m^{10/7} \log W)$ time. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms. Society for Industrial and Applied Mathematics, 752–771.
 Crouch and Stubbs [2014] Crouch, M. and Stubbs, D. M. 2014. Improved streaming algorithms for weighted matching, via unweighted matching. In LIPIcs-Leibniz International Proceedings in Informatics. Vol. 28. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik.
 Demetrescu and Italiano [2004] Demetrescu, C. and Italiano, G. F. 2004. A new approach to dynamic all pairs shortest paths. J. ACM 51, 6, 968–992.
 Demetrescu and Italiano [2006] Demetrescu, C. and Italiano, G. F. 2006. Fully dynamic all pairs shortest paths with real edge weights. J. Comput. Syst. Sci. 72, 5, 813–837.
 Duan and Pettie [2014] Duan, R. and Pettie, S. 2014. Linear-time approximation for maximum weight matching. Journal of the ACM (JACM) 61, 1, 1.
 Edmonds and Karp [1972] Edmonds, J. and Karp, R. M. 1972. Theoretical improvements in algorithmic efficiency for network flow problems. Journal of the ACM (JACM) 19, 2, 248–264.
 Fredman and Tarjan [1987] Fredman, M. L. and Tarjan, R. E. 1987. Fibonacci heaps and their uses in improved network optimization algorithms. Journal of the ACM (JACM) 34, 3, 596–615.
 Gabow [1990] Gabow, H. N. 1990. Data structures for weighted matching and nearest common ancestors with linking. In Proceedings of the First Annual ACM-SIAM Symposium on Discrete Algorithms. Society for Industrial and Applied Mathematics, 434–443.
 Gabow and Tarjan [1989] Gabow, H. N. and Tarjan, R. E. 1989. Faster scaling algorithms for network problems. SIAM Journal on Computing 18, 5, 1013–1036.
 Grandoni and Williams [2012] Grandoni, F. and Williams, V. V. 2012. Improved distance sensitivity oracles via fast single-source replacement paths. In Proceedings of the 2012 IEEE 53rd Annual Symposium on Foundations of Computer Science. FOCS '12. IEEE Computer Society, Washington, DC, USA, 748–757.
 Gupta and Peng [2013] Gupta, M. and Peng, R. 2013. Fully dynamic approximate matchings. In Foundations of Computer Science (FOCS), 2013 IEEE 54th Annual Symposium on. IEEE, 548–557.
 Henzinger et al. [2013] Henzinger, M., Krinninger, S., and Nanongkai, D. 2013. Dynamic approximate all-pairs shortest paths: Breaking the O(mn) barrier and derandomization. In Proceedings of the 54th Annual Symposium on Foundations of Computer Science. FOCS. 538–547.
 Henzinger et al. [2014a] Henzinger, M., Krinninger, S., and Nanongkai, D. 2014a. Decremental single-source shortest paths on undirected graphs in near-linear total update time. In Proceedings of the 55th Annual Symposium on Foundations of Computer Science. FOCS. 146–155.
 Henzinger et al. [2014b] Henzinger, M., Krinninger, S., and Nanongkai, D. 2014b. Sublinear-time decremental algorithms for single-source reachability and shortest paths on directed graphs. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing (STOC). 674–683.
 Henzinger et al. [2014c] Henzinger, M., Krinninger, S., and Nanongkai, D. 2014c. A subquadratic-time algorithm for decremental single-source shortest paths. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms. SODA. 1053–1072.
 Henzinger et al. [2015] Henzinger, M., Krinninger, S., and Nanongkai, D. 2015. Improved algorithms for decremental single-source reachability on directed graphs. In Proceedings of the 42nd International Colloquium, ICALP. 725–736.
 Henzinger and King [2001] Henzinger, M. R. and King, V. 2001. Maintaining minimum spanning forests in dynamic graphs. SIAM J. Comput. 31, 2, 364–374.
 Hopcroft and Karp [1971] Hopcroft, J. E. and Karp, R. M. 1971. An $n^{5/2}$ algorithm for maximum matchings in bipartite graphs. In Switching and Automata Theory, 1971, 12th Annual Symposium on. IEEE, 122–125.
 Ivkovic and Lloyd [1993] Ivkovic, Z. and Lloyd, E. L. 1993. Fully dynamic maintenance of vertex cover. In Proceedings of the 19th International Workshop on Graph-Theoretic Concepts in Computer Science. Springer-Verlag, 99–111.
 Ivković and Lloyd [1994] Ivković, Z. and Lloyd, E. L. 1994. Fully dynamic maintenance of vertex cover. In Graph-Theoretic Concepts in Computer Science. Springer, 99–111.
 King [1999] King, V. 1999. Fully dynamic algorithms for maintaining all-pairs shortest paths and transitive closure in digraphs. In Proceedings of the 40th Annual Symposium on Foundations of Computer Science. FOCS. 81–91.
 Kopelowitz et al. [2016] Kopelowitz, T., Pettie, S., and Porat, E. 2016. Higher lower bounds from the 3SUM conjecture. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms. Society for Industrial and Applied Mathematics, 1272–1287.
 Lovász and Plummer [2009] Lovász, L. and Plummer, M. D. 2009. Matching theory. Vol. 367. American Mathematical Soc.
 Micali and Vazirani [1980] Micali, S. and Vazirani, V. V. 1980. An $O(\sqrt{|V|}\cdot|E|)$ algoithm for finding maximum matching in general graphs. In Foundations of Computer Science, 1980, 21st Annual Symposium on. IEEE, 17–27.
 Mądry [2013] Mądry, A. 2013. Navigating central path with electrical flows: From flows to matchings, and back. In Foundations of Computer Science (FOCS), 2013 IEEE 54th Annual Symposium on. IEEE, 253–262.
 Mucha and Sankowski [2004] Mucha, M. and Sankowski, P. 2004. Maximum matchings via gaussian elimination. In Foundations of Computer Science, 2004. Proceedings. 45th Annual IEEE Symposium on. IEEE, 248–255.
 Neiman and Solomon [2013] Neiman, O. and Solomon, S. 2013. Simple deterministic algorithms for fully dynamic maximal matching. In Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing. ACM, 745–754.
 Neiman and Solomon [2016] Neiman, O. and Solomon, S. 2016. Simple deterministic algorithms for fully dynamic maximal matching. ACM Transactions on Algorithms (TALG) 12, 1, 7.
 Onak and Rubinfeld [2010] Onak, K. and Rubinfeld, R. 2010. Maintaining a large matching and a small vertex cover. In Proceedings of the Forty-Second ACM Symposium on Theory of Computing. ACM, 457–464.
 Peleg and Solomon [2016] Peleg, D. and Solomon, S. 2016. Dynamic approximate matchings: a density-sensitive approach. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms. Society for Industrial and Applied Mathematics, 712–729.
 Pettie [2012] Pettie, S. 2012. A simple reduction from maximum weight matching to maximum cardinality matching. Information Processing Letters 112, 23, 893–898.
 Sankowski [2007] Sankowski, P. 2007. Faster dynamic matchings and vertex connectivity. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA). 118–126.
 Sankowski [2009] Sankowski, P. 2009. Maximum weight bipartite matching in matrix multiplication time. Theoretical Computer Science 410, 44, 4480–4488.
 Solomon [2016] Solomon, S. 2016. Fully dynamic maximal matching in constant update time. In Foundations of Computer Science (FOCS), 2016 IEEE 57th Annual Symposium on. IEEE, 325–334.
 Stubbs and Vassilevska Williams [2017] Stubbs, D. and Vassilevska Williams, V. 2017. Metatheorems for dynamic weighted matching. In Proceedings of the 2017 ACM Conference on Innovations in Theoretical Computer Science. ACM.
 Thorup [2004] Thorup, M. 2004. Fully-dynamic all-pairs shortest paths: Faster and allowing negative cycles. In SWAT. 384–396.
 Thorup [2005] Thorup, M. 2005. Worst-case update times for fully-dynamic all-pairs shortest paths. In Proceedings of the Thirty-Seventh Annual ACM Symposium on Theory of Computing (STOC). 112–119.
 Vizing [1964] Vizing, V. G. 1964. On an estimate of the chromatic class of a p-graph. Diskret. Analiz 3, 25–30.
 Williams [2012] Williams, V. V. 2012. Multiplying matrices faster than Coppersmith-Winograd. In Proceedings of the Forty-Fourth Annual ACM Symposium on Theory of Computing. ACM, 887–898.