We present a new dynamic matching sparsification scheme. From this scheme we derive a framework for dynamically rounding fractional matchings against adaptive adversaries. Plugging known dynamic fractional matching algorithms into our framework, we obtain numerous randomized dynamic matching algorithms which work against adaptive adversaries (the first such algorithms, as all previous randomized algorithms for this problem assumed an oblivious adversary). In particular, for any constant ϵ>0, our framework yields (2+ϵ)-approximate algorithms with constant update time or polylog worst-case update time, as well as (2-δ)-approximate algorithms in bipartite graphs with arbitrarily-small polynomial update time, with all these algorithms' guarantees holding against adaptive adversaries. All these results achieve polynomially better update-time-to-approximation tradeoffs than previously known to be achievable against adaptive adversaries.


## 1 Introduction

The field of dynamic graph algorithms studies the maintenance of solutions to graph-theoretic problems subject to graph updates, such as edge additions and removals. For any such dynamic problem, a trivial approach is to recompute a solution from scratch following each update, using a static algorithm. Fortunately, significant improvements over this naïve polynomial-time approach are often possible, and many fundamental problems admit polylogarithmic update time algorithms. Notable examples include minimum spanning tree and connectivity [47, 51, 46, 69] and spanners [9, 14, 34]. Many such efficient dynamic algorithms rely on randomization and the assumption of a weak, oblivious adversary, i.e., an adversary which cannot decide its updates adaptively based on the algorithm’s output. As recently pointed out by Nanongkai and Saranurak [58],

> It is a fundamental question whether the true source of power of randomized dynamic algorithms is the randomness itself or in fact the oblivious adversary assumption.

Static implications. As Mądry [57] observed, randomized dynamic algorithms’ assumption of an oblivious adversary renders them unsuitable for use as a black box for many static applications. For example, [33, 36] show how to approximate multicommodity flows by repeatedly routing flow along approximate shortest paths, where edges’ lengths are determined by their current congestion. These shortest path computations can be sped up by a dynamic shortest path algorithm, provided it works against an adaptive adversary (since edge lengths are determined by prior queries’ outputs). This application has motivated much work on faster deterministic dynamic shortest path algorithms [12, 13, 11, 44, 43], as well as a growing interest in faster randomized dynamic algorithms which work against adaptive adversaries [25, 26, 42].

Dynamic implications. The oblivious adversary assumption can also make a dynamic algorithm unsuitable for use by other dynamic algorithms, even ones which themselves assume an oblivious adversary! For example, for dynamic algorithms that use several copies of a dynamic subroutine whose inputs depend on each other's outputs, the different copies act as adaptive adversaries for one another, as the behavior of one copy can affect that of a second copy, which in turn can affect that of the first. (See [58].)

Faster algorithms that are robust to adaptive adversaries thus have the potential to speed up both static and dynamic algorithms. This motivated Nanongkai et al. [59], who studied dynamic MST, to ask whether there exist algorithms against adaptive adversaries for other well-studied dynamic graph problems, with similar guarantees to those known against oblivious adversaries.

In this paper we answer this latter question affirmatively for the dynamic matching problem, for which we give the first randomized algorithms that are robust to adaptive adversaries (and outperform known deterministic algorithms). As an application of these new algorithms, we present improved dynamic maximum weight matching algorithms.

### 1.1 Our Contributions

Our main contribution is a framework for dynamically rounding fractional matchings against adaptive adversaries. That is, we develop a method which, given a dynamically-changing fractional matching (i.e., a point in the fractional matching polytope), outputs a matching whose size is roughly the value of the fractional matching. This framework allows us to obtain dynamic matching algorithms robust to adaptive adversaries, including adversaries that see the algorithms' entire state after each update.

Key to our framework is a novel matching sparsification scheme, i.e., a method for computing a sparse subgraph which approximately preserves the maximum matching size. We elaborate on our sparsification scheme and dynamic rounding framework and their analyses in Section 1.2 and later sections. For now, we discuss some of the dynamic matching algorithms we obtain from applying our framework to various known dynamic fractional matching algorithms.

Our first result (applying our framework to [22]) is a (2+ϵ)-approximate matching algorithm with worst-case polylogarithmic update time against an adaptive adversary.


###### Theorem 1.1.

For every ϵ > 0, there exists a (Las Vegas) randomized (2+ϵ)-approximate matching algorithm with worst-case polylogarithmic update time w.h.p. against an adaptive adversary.

All algorithms prior to this work either assume an oblivious adversary or have polynomial worst-case update time, for any approximation ratio.

Our second result (applying our framework to [23]) yields constant-time (amortized) algorithms with the same approximation ratio as Theorem 1.1, also against an adaptive adversary.

###### Theorem 1.2.
For every ϵ > 0, there exists a randomized (2+ϵ)-approximate dynamic matching algorithm with constant amortized update time whose approximation and update time guarantees hold in expectation against an adaptive adversary.

No constant-time algorithms against adaptive adversaries were known before this work, for any approximation ratio. A corollary of Theorem 1.2, obtained by amplification, is the first algorithm against adaptive adversaries with logarithmic amortized update time and (2+ϵ)-approximation w.h.p.

Finally, our framework also lends itself to better-than-two approximation. In particular, plugging the fractional matching algorithm of [21] into our framework yields (2−δ)-approximate algorithms with arbitrarily-small polynomial update time against adaptive adversaries in bipartite graphs.


###### Theorem 1.3.

For any constant δ > 0, there exists a (2−δ)-approximate dynamic bipartite matching algorithm with arbitrarily-small polynomial expected update time against an adaptive adversary.

Similar results were recently achieved for general graphs by [10], though only assuming an oblivious adversary. All other (2−δ)-approximate algorithms are deterministic (and so do not need this assumption), but have larger polynomial update time.

As a warm-up to our randomized rounding framework, we get a family of deterministic algorithms with arbitrarily-small polynomial update time, with the following time-approximation tradeoff.


###### Theorem 1.4.

For any β > 1, there exists a deterministic O(β)-approximate matching algorithm with worst-case Õ(n^{1/β}) update time.

These algorithms include the first deterministic constant-approximate algorithms with arbitrarily-small polynomial worst-case update time. They also include the first deterministic algorithms with worst-case polylogarithmic update time and sublinear approximation ratio; previously, no deterministic algorithms with worst-case polylog update time were known for any sublinear approximation ratio.

##### Application to Weighted Matching.

Our dynamic matching algorithms have applications to dynamic maximum weight matching (MWM), by standard reductions (see [68, 3]). Since our matching algorithms work against adaptive adversaries, we can apply these reductions to our algorithms as a black box, and need not worry about the inner workings of these reductions. As an added bonus, the obtained MWM algorithms work against adaptive adversaries, since their constituent subroutines do. So, plugging any of our algorithms into the above reductions yields MWM algorithms against adaptive adversaries whose approximation ratio is roughly twice that of our dynamic matching algorithms, with a logarithmic slowdown. Many of the bounds obtained this way were not even known against oblivious adversaries.

### 1.2 Techniques

In this section we outline our sparsification scheme and framework for dynamic matching against adaptive adversaries. Specifically, we show how to use edge colorings—partitions of the edges into (few) matchings—to quickly round fractional matchings dynamically against adaptive adversaries. (An orthogonal approach was recently taken by Cohen et al. [27], who used matchings to round fractional edge colorings, in an online setting.) Before detailing these, we explain why the work of Gupta and Peng [41] motivates the study of dynamic matching sparsification.

In [41], Gupta and Peng present a (1+ϵ)-approximate algorithm with O(√m/ϵ²) update time, using a sparsifier and what they call the "stability" of the matching problem, which lends itself to lazy re-computation, as follows. Suppose we compute a matching M of size at least μ(G)/(1+ϵ), where μ(G) is the maximum matching size in G. Then, regardless of the updates during the following period of ϵ·μ(G) steps, the edges of M not deleted during the period remain a (1+O(ϵ))-approximate matching in the dynamic graph, since both the size of M and μ(G) can change by at most ϵ·μ(G) during such a period. So, for example, using a static O(m/ϵ)-time (1+ϵ)-approximate matching algorithm [56] every ϵ·μ(G) updates yields a (1+O(ϵ))-approximate dynamic matching algorithm with amortized update time O(m/(ϵ²·μ(G))). To obtain better update times from this observation, Gupta and Peng apply this idea to a sparsifier which contains a maximum matching of G and which they show how to maintain in O(√m) update time, using the algorithm of [60]. From this they obtain a (1+ϵ)-approximate matching algorithm with update time O(√m/ϵ²). We note that this lazy re-computation approach would even allow for polylogarithmic-time dynamic matching algorithms with constant approximation ratio, provided we could compute constant-approximate matching sparsifiers of (optimal) size O(μ(G)) in time Õ(μ(G)). (Note that any sparsifier containing a constant-approximate matching must have size Ω(μ(G)).)
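The stability-based lazy re-computation idea can be sketched in a few lines of Python. This is our illustration of the argument, not code from the paper; the greedy maximal-matching routine stands in for the static (1+ϵ)-approximate algorithm, and the rebuild schedule is simplified.

```python
def static_max_matching(edges):
    """Greedy maximal matching: a stand-in for the static
    approximate matching algorithm used in the argument."""
    matched, matching = set(), []
    for (u, v) in edges:
        if u not in matched and v not in matched:
            matched |= {u, v}
            matching.append((u, v))
    return matching

class LazyMatcher:
    """Recompute a matching only every ~eps*|M| updates; in between,
    just drop deleted matching edges. The surviving edges remain a
    near-optimal matching, since few updates happened."""
    def __init__(self, eps=0.1):
        self.eps = eps
        self.edges = set()
        self.matching = []
        self.updates_since_rebuild = 0

    def _maybe_rebuild(self):
        budget = max(1, int(self.eps * len(self.matching)))
        if self.updates_since_rebuild >= budget:
            self.matching = static_max_matching(list(self.edges))
            self.updates_since_rebuild = 0

    def insert(self, u, v):
        self.edges.add((u, v))
        self.updates_since_rebuild += 1
        self._maybe_rebuild()

    def delete(self, u, v):
        self.edges.discard((u, v))
        # surviving matched edges still form a matching in the graph
        self.matching = [e for e in self.matching if e != (u, v)]
        self.updates_since_rebuild += 1
        self._maybe_rebuild()
```

The amortized cost per update is the static algorithm's running time divided by the period length ϵ·|M|, exactly as in the accounting above.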

In this work we show how to use edge colorings to sample such size-optimal matching sparsifiers in optimal time. For simplicity, we describe our approach in terms of the subroutines needed to prove Theorem 1.1, deferring discussion of extensions to later sections.

Suppose we run the dynamic fractional matching algorithm of [22], maintaining a constant-approximate fractional matching x in deterministic worst-case polylogarithmic time. Also, for some ϵ > 0, we dynamically partition G's edges into subgraphs G_1, G_2, …, where G_i is the subgraph induced by edges of x-value x_e ∈ ((1+ϵ)^{−i}, (1+ϵ)^{−i+1}]. By the fractional matching constraint (∑_{e∋v} x_e ≤ 1) and since x_e > (1+ϵ)^{−i} for all edges e ∈ G_i, the maximum degree of any G_i is at most O((1+ϵ)^i). We can therefore edge-color each G_i with O((1+ϵ)^i) colors in deterministic worst-case O(log n) time per update in G_i, using [18]; i.e., logarithmic time per each of the polylogarithmically many changes which the fractional matching algorithm makes to x per update. Thus, edge coloring steps take worst-case polylogarithmic time per update. A simple averaging argument shows that the largest color class over these logarithmically many subgraphs G_i is a matching whose approximation ratio is only a logarithmic factor worse, and it can be maintained efficiently. Extending this idea further yields Theorem 1.4 (see Appendix A). So, picking a single color yields a fairly good approximation/time tradeoff. As we show, randomly combining a few colors yields space- and time-optimal constant-approximate matching sparsifiers.
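To make the bucketing-and-coloring step concrete, here is a small static Python sketch (our illustration only: the bucket boundary handling and the greedy coloring, which uses at most 2Δ−1 colors, stand in for the dynamic algorithm of [18]):

```python
import math
from collections import defaultdict

def bucket_index(x_e, eps):
    """Index i such that x_e lies (roughly) in ((1+eps)^-i, (1+eps)^-(i-1)];
    boundary handling is illustrative only."""
    return math.floor(-math.log(x_e, 1 + eps)) + 1

def greedy_edge_color(edges):
    """Greedy edge coloring: each edge gets the smallest color free at
    both endpoints, using at most 2*Delta - 1 colors overall."""
    used = defaultdict(set)     # vertex -> colors used at it
    colors = defaultdict(list)  # color -> list of edges (a matching)
    for (u, v) in edges:
        c = 0
        while c in used[u] or c in used[v]:
            c += 1
        used[u].add(c)
        used[v].add(c)
        colors[c].append((u, v))
    return colors

def largest_color_matching(frac_matching, eps=0.5):
    """frac_matching: dict edge -> x_e. Bucket edges by x-value,
    color each bucket, and return the single largest color class."""
    buckets = defaultdict(list)
    for e, x in frac_matching.items():
        buckets[bucket_index(x, eps)].append(e)
    best = []
    for edges in buckets.values():
        for matching in greedy_edge_color(edges).values():
            if len(matching) > len(best):
                best = matching
    return best
```

Each color class is a matching by construction, so returning the largest one mirrors the averaging argument above.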

To introduce our random sparsification scheme, we start by considering the sample of a single color among the roughly 2(1+ϵ)^i colors of the coloring of subgraph G_i. For each edge e of G_i, since x_e ∈ ((1+ϵ)^{−i}, (1+ϵ)^{−i+1}], when sampling a random color M among these colors, we sample the unique color containing e with probability proportional to x_e. Specifically, we have

 Pr[e∈M] = 1/(2(1+ϵ)^i) ∈ [x_e/(2(1+ϵ)), x_e/2).

Our approach will be to sample multiple colors without replacement from the coloring of each G_i. By linearity, this yields a subgraph H of G which contains each edge e with probability roughly

 p_e ≜ min{1, x_e · (log n)/ϵ²}. (1)

As shown by Arar et al. [3], sampling a subgraph H with each edge e belonging to H independently with probability p_e as above, with x taken to be the (2+ϵ)-approximate fractional matching of [22], yields a (2+ϵ)-approximate matching sparsifier. (We note that the simple argument implying that H contains a (1+ϵ)-approximate fractional matching with respect to x only implies a (3/2)·(2+ϵ)-approximation. This is due to the integrality gap of the fractional matching polytope, and in particular the fact that fractional matchings can be 3/2 times larger than the largest matching in a graph; see, e.g., a triangle.) We prove that sampling in this dependent manner yields as good a matching sparsifier as does independent sampling.

To bound the approximation ratio of our dependent-sampling-based sparsifiers, we appeal to the theory of negative association (see Section 2.1). In particular, we rely on the fact that sampling without replacement is a negatively-associated joint distribution. This implies sharp concentration of (weighted) degrees of vertices in H, which forms the core of our analysis of the approximation ratio of this sparsification scheme. In particular, we show that our matching sparsification yields sparsifiers with approximation ratio essentially equaling that of any "input" fractional matching in bipartite graphs, as well as (2+ϵ)-approximate sparsifiers in general graphs, using the fractional matchings of [22, 23].

Finally, to derive fast dynamic algorithms from this sparsification scheme, we note that our matching sparsifier H is the union of only polylogarithmically many matchings, and thus has size Õ(μ(G)). Moreover, sampling this sparsifier requires only polylogarithmically many random choices, followed by writing down H. Therefore, sampling H can be done in Õ(μ(G)) time (given the edge colorings, which we maintain dynamically). The space- and time-optimality of our sparsification scheme implies that we can maintain a matching with approximation ratio essentially equal to that of the obtained sparsifier, in worst-case polylogarithmic update time. In particular, we can re-sample such a sparsifier, and compute a (1+ϵ)-approximate matching in it, in Õ(μ(G)) time, after every period of ϵ·μ(G) updates. This results in polylogarithmic amortized time per update (which can be easily de-amortized). Crucially for our use, during such periods, M and μ(G) do not change by much, as argued before. In particular, during such short periods of few updates, an adaptive adversary—even one which sees the entire state of the algorithm after each update—cannot increase the approximation ratio by more than a 1+O(ϵ) factor compared to the approximation quality of the sparsifier. The above discussion yields our result of Theorem 1.1—a (2+ϵ)-approximate dynamic matching algorithm with worst-case polylogarithmic update time against adaptive adversaries. Generalizing this further, we design a framework for dynamically rounding fractional matchings against adaptive adversaries, underlying our results of Theorems 1.1, 1.2 and 1.3.

### 1.3 Related Work

Here we discuss the dynamic matching literature in more depth, contrasting it with the results obtained from our dynamic rounding framework.

The first non-trivial result for dynamic matching was given by Ivkovic and Lloyd [49], who showed how to maintain a maximal matching (which is a 2-approximate matching) in O((n+m)^{√2/2}) amortized update time. Much later, Sankowski [64] presented an O(n^{1.495}) update time algorithm for maintaining the value (size) of a maximum matching. Both update times are polynomial, and thus far from the gold standard for dynamic data structures – polylog update time. On the other hand, several works show that for exact maximum matching, polylog update time is impossible, assuming several widely-held conjectures, including the strong exponential time hypothesis and the 3-SUM conjecture [2, 45, 53, 1, 29]. A natural question is then whether polylog update time suffices to maintain an approximate maximum matching.

##### Polylog-time algorithms.

In seminal work, Onak and Rubinfeld [61] presented the first polylog-time algorithm for constant-approximate matching. Baswana et al. [7] improved this with an O(log n)-time maximal (and thus 2-approximate) matching algorithm. Some years later, a deterministic (2+ϵ)-approximate matching algorithm with polylogarithmic amortized update time was presented by Bhattacharya et al. [21]. Solomon [66] then gave a randomized maximal matching algorithm with constant amortized time. Recently, several randomized (2+ϵ)-approximate/maximal matching algorithms with worst-case polylog time were developed, with either the approximation ratio or the update time holding w.h.p. [24, 3, 14]. All prior randomized algorithms assume an oblivious adversary. Obtaining the same guarantees against an adaptive adversary remained open. Another line of work studied the dynamic maintenance of large fractional matchings in polylog update time, thus maintaining a good approximation of the maximum matching's value (though not a large matching) [19, 40, 22, 17, 23]. The best current bounds for this problem are deterministic (2+ϵ)-approximate fractional matching algorithms with polylogarithmic worst-case and constant amortized update times [22, 23]. Our randomized algorithms of Theorems 1.1 and 1.2 match these bounds, for integral matching, against adaptive adversaries.

##### Polytime algorithms.

Many sub-linear time dynamic matching algorithms have been developed over the years. These include (1+ϵ)-approximate algorithms with O(√m/ϵ²) worst-case update time [41, 63] (the former building on an O(√m)-time maximal matching algorithm of [60]), and constant-approximate algorithms with smaller polynomial worst-case update time [20]. The fastest polytime algorithm to date with worst-case update time is a (3/2+ϵ)-approximate O(m^{1/4}/ϵ^{2.5})-time algorithm in bipartite graphs [15] (similar amortized bounds are known in general graphs [16]). In contrast, we obtain algorithms with arbitrarily-small polynomial update time, yielding a constant approximation deterministically (Theorem 1.4), and even better-than-2 approximation in bipartite graphs against adaptive adversaries (Theorem 1.3). This latter bound matches bounds previously only known for dynamic fractional matching [21], and nearly matches a recent Δ^ϵ-time algorithm for general graphs, which assumes an oblivious adversary [10].

##### Matching sparsifiers.

Sparsification is a commonly-used algorithmic technique. In the area of dynamic graph algorithms it goes back more than twenty years [32]. For the matching problem in various computational models, multiple sparsifiers were developed [67, 6, 20, 41, 63, 5, 15, 16, 39, 54]. Unfortunately for dynamic settings, all these sparsifiers are either polynomially larger than μ(G), the maximum matching size in G, or were not known to be efficiently maintainable against adaptive adversaries. In this paper we show how to efficiently maintain a generalization of the matching kernels of [20], of size Õ(μ(G)), against adaptive adversaries.

## 2 Preliminaries

A matching M in a graph G = (V, E) is a subset M ⊆ E of vertex-disjoint edges. The cardinality of a maximum matching in G is denoted by μ(G). An edge coloring of G is a partition of its edge set into matchings (colors); that is, it is an assignment of colors to the edges of G such that each vertex is incident on at most one edge of each color. A fractional matching is a non-negative vector x⃗ = (x_e)_{e∈E} satisfying the fractional matching constraint, ∑_{e∋v} x_e ≤ 1 for every vertex v.

In a fully-dynamic setting the input is a dynamic graph G, initially empty, on a fixed set of n vertices V, subject to edge updates (additions and removals). An α-approximate matching algorithm maintains a matching M of size at least μ(G)/α. If the algorithm is deterministic, this bound holds for any sequence of updates. If the algorithm is randomized, this bound on M's size can hold in expectation or w.h.p., though here one must be more careful about the sequence of updates. The strongest guarantees for randomized algorithms are those which hold for sequences generated by an adaptive adversary.

##### Dynamic Edge Coloring.

An important ingredient in our matching algorithms are algorithms for the "complementary" problem of edge coloring, i.e., the problem of covering the graph's edge set with matchings (colors). Vizing's theorem [70] asserts that Δ+1 colors suffice to edge color any graph of maximum degree Δ. (Clearly, at least Δ colors are needed.) In dynamic graphs, a deterministic (2Δ−1)-edge-coloring algorithm with O(log n) worst-case update time is known [18]. Also, a 3Δ-edge-coloring can be trivially maintained in O(1) expected update time against an adaptive adversary, by picking random colors for each new edge until an available color is picked. (Dynamic algorithms using fewer colors are known, though they are slower [30]. Moreover, as the number of colors used only affects our update times by a constant factor, and does not affect our approximation ratio, the above simple (2Δ−1)- and 3Δ-edge-coloring algorithms suffice for our needs.)
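The trivial random-retry scheme can be sketched as follows (a minimal illustration, assuming a palette of 3Δ colors: each endpoint blocks fewer than Δ colors, so at least a third of the palette is free and each attempt succeeds with probability at least 1/3, giving O(1) expected attempts):

```python
import random

def insert_edge_color(color_of, used, u, v, max_deg):
    """Assign a random available color to the new edge (u, v) by
    rejection sampling from a palette of 3*max_deg colors.
    `used` maps each vertex to the set of colors at its edges."""
    palette = 3 * max_deg
    while True:
        c = random.randrange(palette)
        if c not in used[u] and c not in used[v]:
            used[u].add(c)
            used[v].add(c)
            color_of[(u, v)] = c
            return c
```

Deleting an edge simply frees its color at both endpoints, so the scheme works against an adaptive adversary: no color choice depends on future updates.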

### 2.1 Negative Association

For our randomized sparsification algorithms, we sample colors without replacement. To bound (weighted) sums of edges sampled this way, we rely on the following notion of negative dependence, introduced by Joag-Dev and Proschan [50] and Khursheed and Lai Saxena [52].

###### Definition 2.1 (Negative Association).

We say a joint distribution (X_1, …, X_n) is negatively associated (NA), or alternatively that the random variables X_1, …, X_n are NA, if for any non-decreasing functions g and h and disjoint subsets I, J ⊆ [n], we have

 Cov(g(X_i : i∈I), h(X_j : j∈J)) ≤ 0. (2)

One trivial example of NA variables are independent variables, for which Inequality (2) is satisfied with equality for any functions g and h. A more interesting example of NA distributions are permutation distributions, namely joint distributions (X_1, …, X_n) taking on all permutations of some fixed vector (v_1, …, v_n) with equal probability [50]. More elaborate NA distributions can be constructed from simple NA distributions as above by several NA-preserving operations, including scaling of variables by positive constants, and taking independent union [52, 50, 31]. (That is, if the joint distributions (X_1, …, X_n) and (Y_1, …, Y_m) are both NA and independent of each other, then the joint distribution (X_1, …, X_n, Y_1, …, Y_m) is also NA.)
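As a quick sanity check on the negative correlation of without-replacement samples (our illustration, not from the paper), one can compute the covariance of two inclusion indicators exactly by enumerating all k-subsets:

```python
from itertools import combinations

def sample_indicator_cov(n, k):
    """Exact covariance of the indicators that elements 0 and 1 appear
    in a uniformly random k-subset of {0, ..., n-1}, i.e., a sample of
    k elements drawn without replacement."""
    subsets = list(combinations(range(n), k))
    e1 = sum(0 in s for s in subsets) / len(subsets)
    e2 = sum(1 in s for s in subsets) / len(subsets)
    e12 = sum(0 in s and 1 in s for s in subsets) / len(subsets)
    return e12 - e1 * e2

# Each indicator has mean k/n, but joint inclusion is rarer than
# under independence, so the covariance is negative:
print(sample_indicator_cov(3, 2))  # ≈ -0.1111 (exactly -1/9)
```

Including one element uses up a slot of the sample, making every other element less likely to be included, which is precisely the intuition behind NA of permutation distributions.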

An immediate consequence of the definition of NA is that NA variables are negatively correlated. A stronger consequence is that NA variables satisfy E[∏_i e^{λX_i}] ≤ ∏_i E[e^{λX_i}] for all λ ∈ ℝ (see [31]), implying applicability of Chernoff-Hoeffding bounds to sums of NA variables.

###### Lemma 2.2 (Chernoff bounds for NA variables [31]).

Let X = ∑_i X_i be the sum of NA random variables X_1, …, X_n with X_i ∈ [0,1] for each i. Then for all δ ∈ (0,1) and κ ≥ E[X],

 Pr[X ≤ (1−δ)·E[X]] ≤ exp(−E[X]·δ²/2),
 Pr[X ≥ (1+δ)·κ] ≤ exp(−κ·δ²/3).

Another tail bound obtained from the above moment-generating function bound for NA variables is Bernstein's Inequality, which yields stronger bounds for sums of NA variables with bounded variance. See, e.g., [28].

###### Lemma 2.3 (Bernstein’s Inequality for NA Variables).

Let X = ∑_i X_i be the sum of NA random variables X_1, …, X_n with |X_i| ≤ M for each i. Then, with σ² ≜ ∑_i Var(X_i), for all a > 0,

 Pr[X > E[X] + a] ≤ exp(−a² / (2(σ² + aM/3))).

## 3 Edge-Color and Sparsify

In this section we present our edge-coloring-based matching sparsification scheme, together with some useful properties of this sparsifier needed to bound its quality. We then show how to implement this scheme in a dynamic setting against an adaptive adversary with only a 1+O(ϵ) loss in the approximation ratio. We start by defining our sparsification scheme in a static setting.

### 3.1 The sparsification scheme

Our edge-coloring-based sparsification scheme receives a fractional matching x as input, as well as a parameter ϵ > 0, a constant γ, and an integer d. It assumes access to a γΔ-edge-coloring algorithm for graphs of maximum degree Δ. For a logarithmic number of indices i, our algorithm considers the subgraphs G_i induced by edges with x-value in the range ((1+ϵ)^{−i}, (1+ϵ)^{−i+1}], and γΔ-edge-colors each such subgraph G_i. It then samples at most O(γ·d) colors without replacement in each such G_i. The output matching sparsifier H is the union of all these sampled colors. The algorithm's pseudocode is given in Algorithm 1, below.
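The scheme can be sketched statically in Python as follows (our illustration of Algorithm 1: the bucket boundaries, the greedy coloring, and the number of sampled colors are simplified stand-ins for the paper's exact choices):

```python
import math
import random
from collections import defaultdict

def sparsify(frac_matching, eps, d):
    """Edge-coloring-based sparsification sketch: bucket edges by
    x-value, edge-color each bucket greedily (at most 2*Delta - 1
    colors), and return the union of a few colors sampled without
    replacement from each bucket's coloring."""
    buckets = defaultdict(list)
    for e, x in frac_matching.items():
        i = math.floor(-math.log(x, 1 + eps)) + 1  # x in ~((1+eps)^-i, (1+eps)^-(i-1)]
        buckets[i].append(e)

    H = []
    for edges in buckets.values():
        # greedy edge coloring of this bucket
        used, colors = defaultdict(set), defaultdict(list)
        for (u, v) in edges:
            c = 0
            while c in used[u] or c in used[v]:
                c += 1
            used[u].add(c)
            used[v].add(c)
            colors[c].append((u, v))
        # sample colors without replacement; sampling min(d, #colors)
        # colors here is a simplification of the paper's choice
        palette = list(colors)
        for c in random.sample(palette, min(d, len(palette))):
            H.extend(colors[c])
    return H
```

Since each sampled color is a matching of size at most μ(G), the output consists of few matchings and is automatically sparse, mirroring Observation 3.1.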

We note the obvious sparsity of this matching sparsifier H, implied by its being the union of matchings in G, all of which have size at most μ(G), by definition.

###### Observation 3.1.

The number of edges in the subgraph H output by Algorithm 1 is at most

 |E(H)| = O((log(n/ϵ)/ϵ) · γ · d · μ(G)).

### 3.2 Basic properties of Algorithm 1

As we shall see in Section B.2, the subgraph H output by Algorithm 1 is a good matching sparsifier, in the sense that μ(H) ≥ μ(G)/α for some small α ≥ 1, provided the fractional matching x is a good approximate fractional matching. We will refer to this α as the approximation ratio of H. Our analysis of the approximation ratio of H will rely crucially on the following lemmas of this section.

Throughout our analysis we will focus on the run of Algorithm 1 on some fractional matching x with parameters ϵ, γ and d, and denote by H the output of this algorithm. For each edge e, we let X_e be an indicator random variable for the event that e belongs to this random subgraph H. We first prove that the probability of this event nearly matches p_e as given by Equation (1), with (log n)/ϵ² replaced by d. Indeed, the choice of the number of colors sampled in each G_i was made precisely with this goal in mind. The proof of the corresponding lemma below, which follows by simple calculation, is deferred to Appendix B.

###### Lemma 3.2.

Under the parameter choices of Algorithm 1, for every edge e, we have

 min{1, x_e·d}/(1+ϵ)² ≤ Pr[e∈H] ≤ min{1, x_e·d}·(1+ϵ). (3)

Moreover, if x_e·d ≥ 1, then Pr[e∈H] = 1.

Crucially for our analysis, where we concern ourselves with bounding weighted vertex degrees, the variables {X_e : e ∋ v} for the edges of any given vertex v are NA, as we establish in the following key lemma.

###### Lemma 3.3 (Negative Association of edges).

For any vertex v, the variables {X_e : e ∋ v} are NA.

To prove the above lemma, we will need to consider the following NA distributions.

###### Proposition 3.4.

Let e_1, …, e_N be some ordering of the elements of a universe U of size N. For each i ∈ [N], let X_i be an indicator for element e_i being sampled in a sample of k random elements drawn without replacement from U. Then X_1, …, X_N are NA.

###### Proof.

Randomly sampling k elements from U without replacement is equivalent to letting the vector (X_1, …, X_N) take on all permutations of a vector with k ones and N−k zeros, with equal probability. The proposition therefore follows from the NA of permutation distributions [50]. ∎

###### Proof of Lemma 3.3.

For each i, add a dummy edge at v for every color of G_i's coloring not used by (non-dummy) edges of v in G_i. Randomly sampling colors without replacement in the coloring of G_i then induces a random sample without replacement of the (dummy and non-dummy) edges of v in G_i. By Proposition 3.4, the variables {X_e : e ∋ v, e ∈ G_i} are NA (since subsets of NA variables are themselves NA). The sampling of colors in the different G_i is independent, so by the closure of NA under independent union, the variables {X_e : e ∋ v} are indeed NA. ∎

The negative correlation implied by negative association of these variables also implies that conditioning on a given edge being sampled into H only decreases the probability of any other edge being sampled into H. So, from Lemmas 3.2 and 3.3 we obtain the following.

###### Corollary 3.5.

For any vertex v and distinct edges e, e′ ∋ v, we have

 Pr[e∈H ∣ e′∈H] ≤ Pr[e∈H] ≤ min{1, x_e·d}·(1+ϵ).

Finally, we will need to argue that the negative association of the edges incident on any vertex holds even after conditioning on some edge appearing in H.

###### Lemma 3.6.

For any vertex v and edge e′ ∋ v, the variables {X_e : e ∋ v, e ≠ e′}, conditioned on e′ ∈ H, are NA.

This lemma's proof is essentially the same as that of Lemma 3.3, noting that if e′ is in H, then the unique color (matching) containing e′ in the edge coloring of its subgraph G_i must be sampled. This implies that the remaining colors sampled from the coloring of G_i also constitute a random sample without replacement, albeit a smaller sample from a smaller population (both smaller by one than their unconditional counterparts).

### 3.3 The Dynamic Rounding Framework

Here we present our framework for dynamically rounding fractional matchings.

Key to this framework is Observation 3.1, which implies that we can sample H using Algorithm 1 and compute a (1+ϵ)-approximate matching in H in O(|E(H)|/ϵ) time. This allows us to (nearly) attain the approximation ratio of this subgraph dynamically, against an adaptive adversary.


###### Theorem 3.7.

Let ϵ > 0, γ ≥ 1, and integer d be parameters of Algorithm 1. Let A be a constant-approximate dynamic fractional matching algorithm with update time T_f(n,m). Let α be the approximation ratio of the subgraph H output by Algorithm 1 with parameters ϵ, γ and d when run on the fractional matching maintained by A. Let C be a dynamic γΔ-edge-coloring algorithm with update time T_c(n,m). If the guarantees of A and C hold against an adaptive adversary, then there exists an α(1+O(ϵ))-approximate dynamic matching algorithm B against an adaptive adversary, with update time

 O(T_f(n,m)⋅T_c(n,m)+log(n/ϵ)⋅γ⋅d/ϵ³).

Moreover, if A and C have worst-case update times, so does B, and if the approximation ratio α of H holds w.h.p., then so does the approximation ratio of B.

Our approach for Theorem 3.7 is to maintain a fractional matching x using the given dynamic fractional matching algorithm, and edge colorings of the subgraphs G_i using the given dynamic edge-coloring algorithm. This requires T_c(n,m) time for each of the at most T_f(n,m) changes the fractional matching algorithm makes to x per update, or a total of O(T_f(n,m)·T_c(n,m)) time per update. In addition, we use Algorithm 1 to sample a good matching sparsifier H and compute a (1+ϵ)-approximate matching in H using a static O(|E(H)|/ϵ)-time algorithm. By Observation 3.1, both these steps take O(log(n/ϵ)·γ·d·μ(G)/ϵ²) time. If μ(G) ≤ 1/ϵ (which we can verify using our constant-approximate fractional matching), we simply spend this time on this computation each update and just use this (1+ϵ)-approximate matching in H as our matching. Otherwise, we "spread" the above computation over periods of ϵ·μ(G) updates, incurring a further O(log(n/ϵ)·γ·d/ϵ³) time per update. As each update can only affect the approximation ratio slightly, the matching computed at the beginning of the previous period yields an α(1+O(ϵ))-approximation. The formal description and analysis of this framework can be found in Section B.1.

Remark. We note that a log(n/ϵ) factor in the above running time is due to the size of the sparsifier H sampled at the beginning of a period being O((log(n/ϵ)/ϵ)·γ·d·μ(G)), with colors sampled from one edge coloring per each of the O(log(n/ϵ)/ϵ) subgraphs G_i. For some of the fractional matchings we apply our framework to, the sparsifier has a smaller size, and we only need to sample colors from a constant number of edge colorings to sample this subgraph. For these fractional matchings the update time of the above algorithm therefore improves by a log(n/ϵ) factor.

Theorem 3.7 allows us to obtain, dynamically and against an adaptive adversary, essentially the same approximation ratio as that of the subgraph H computed by Algorithm 1 in a static setting. The crux of our analysis will therefore be to bound the approximation ratio of H, which we now turn to.

## 4 Overview of analysis of Algorithm 1

In order to analyze the approximation ratio of the subgraph H output by Algorithm 1 (i.e., the ratio μ(G)/E[μ(H)]), we take two approaches, yielding different (incomparable) guarantees. One natural approach, which we take in Section 4.2, shows that Algorithm 1 run on an α-approximate fractional matching outputs a subgraph H which itself contains a fractional matching which is α(1+O(ϵ))-approximate in G. For bipartite graphs this implies that H contains an α(1+O(ϵ))-approximate integral matching. For general graphs, however, this only implies the existence of a (3/2)⋅α(1+O(ϵ))-approximate integral matching in H, due to the integrality gap of the fractional matching polytope in general graphs. Our second approach does not suffer this deterioration in the approximation ratio compared to the fractional matching, for a particular (well-studied) class of fractional matchings. We start with this second approach.

### 4.1 Sparsifiers from Approximately-Maximal Fractional Matchings

Here we show how to avoid the multiplicative factor of 3/2 implied by the integrality gap when sparsifying using (particularly well-structured) fractional matchings. To prove this improved approximation ratio we generalize the notion of kernels, introduced in [20] and later used by [21, 3]. In particular, we extend this definition to allow for distributions over subgraphs, as follows.

###### Definition 4.1.

(Kernels) A (c,d)-kernel of a graph G is a (random) subgraph H of G, drawn from a distribution 𝒟, satisfying:

1. For each vertex v, the degree of v in H is at most d, always.

2. For each edge (u,v) of G, it holds that E_{H∼𝒟}[max(deg_H(u), deg_H(v))] ≥ d/c.

If 𝒟 is a deterministic (single-subgraph) distribution, we say H is a deterministic kernel.

Such a graph is clearly sparse, containing at most n⋅d/2 edges. (Crucially for our needs, the kernels we compute even have size O(μ(G)⋅d).) As shown in [3], deterministic (c,d)-kernels have approximation ratio 2c(1+1/d). (Coincidentally, this proof also relies on edge colorings.) Generalizing this proof, we show that a randomized (c,d)-kernel has approximation ratio 2c(1+1/d) in expectation. The key difference is that now rather than comparing to the value of some fractional matching in H, we compare to some fractional matching’s expected value. As this generalization is rather minor, we defer its proof to Section B.2.1.
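For a concrete (deterministic) subgraph, the two kernel conditions can be verified directly. The following is a minimal sketch with hypothetical names; the degree threshold d/c follows the deterministic kernel definition of [3].

```python
from collections import defaultdict

def is_deterministic_kernel(G_edges, H_edges, c, d):
    """Check the two (c, d)-kernel conditions for a concrete subgraph H:
    (1) every vertex has degree at most d in H;
    (2) every edge of G has an endpoint of H-degree at least d/c.
    """
    deg = defaultdict(int)
    for u, v in H_edges:
        deg[u] += 1
        deg[v] += 1
    if any(x > d for x in deg.values()):
        return False
    return all(max(deg[u], deg[v]) >= d / c for u, v in G_edges)
```

For c = d = 1 this check accepts exactly the maximal matchings of G, matching the observation below that kernels generalize maximal matchings.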

###### Lemma 4.2.

Let H be a (c,d)-kernel of G for some c, d ≥ 1. Then μ(G) ≤ 2c(1+1/d)⋅E[μ(H)].

As we show, the subgraph H output by Algorithm 1, when run on well-structured fractional matchings, contains such a kernel. Specifically, we show that H contains a kernel, provided the input fractional matching x is approximately maximal, as in the following definition of Arar et al. [3].

###### Definition 4.3 (Approximately-Maximal Fractional Matching [3]).

A fractional matching x is (c,d)-approximately-maximal if every edge (u,v) either has fractional value x_{(u,v)} ≥ 1/c, or has one endpoint v with ∑_{e∋v} x_e ≥ 1/c and with all edges e incident on this v having value x_e ≤ 1/d.
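One reading of this definition can be checked mechanically; the following sketch uses our own names, with the thresholds 1/c and 1/d as in our reading of Definition 4.3 above.

```python
from collections import defaultdict

def is_approximately_maximal(x, c, d):
    """Check (c, d)-approximate-maximality of a fractional matching x,
    given as {(u, v): value}: every edge must have value >= 1/c, or an
    endpoint of fractional degree >= 1/c all of whose incident edges
    have value <= 1/d."""
    frac_deg = defaultdict(float)
    max_inc = defaultdict(float)   # largest x-value incident to a vertex
    for (u, v), val in x.items():
        for w in (u, v):
            frac_deg[w] += val
            max_inc[w] = max(max_inc[w], val)

    def good_endpoint(w):
        return frac_deg[w] >= 1 / c and max_inc[w] <= 1 / d

    return all(val >= 1 / c or good_endpoint(u) or good_endpoint(v)
               for (u, v), val in x.items())
```

For instance, the fractional matching assigning 1/2 to both edges of a path on three vertices is (1,2)-approximately-maximal (the middle vertex is saturated), but not (1,3)-approximately-maximal.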

Some syntactic similarity between definitions 4.1 and 4.3 should be apparent. For a start, both generalize maximal (integral or fractional) matchings, which are just the special case of c = 1. Both require an upper bound on the (weighted) degree of any vertex, and stipulate that some edges have an endpoint of high (weighted) degree. Indeed, this similarity does not stop there, and as shown in [3], sampling each edge of a (c,d)-approximately-maximal fractional matching independently with probability as in (1) yields a deterministic kernel w.h.p. As we show, sampling each edge with probability roughly min{1, x_e⋅d}, such that the edges are NA, as in Algorithm 1, yields the same kind of kernel, w.h.p.

###### Lemma 4.4.

Let ϵ, γ and d be as in Algorithm 1, with d sufficiently large. If x is a (c,d)-approximately-maximal fractional matching, then the subgraph H output by Algorithm 1 when run on x with ϵ, γ and d as above is a deterministic (c(1+O(ϵ)), d)-kernel, w.h.p.

The proof broadly relies on Chernoff bounds for the degrees of vertices in H, which are sums of NA variables by Lemma 3.3, as follows. The first kernel constraint (deg_H(v) ≤ d) is rather direct. For the second constraint, we rely on the approximate-maximality of x, noting that any edge (u,v) not sampled into H must have x-value at most roughly 1/d by Lemma 3.2, and so one of this edge’s endpoints, v, must satisfy ∑_{e∋v} x_e ≥ 1/c and x_e ≤ 1/d for all e ∋ v. Therefore, by Lemma 3.2, the expected degree of this vertex v in H is at least roughly d/c, and this degree is sharply concentrated, by Lemma 2.2. (See Section B.2.2 for a full proof.)
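The concentration step uses the fact that Chernoff–Hoeffding-type bounds apply unchanged to sums of NA variables. A standard form of the bound invoked above (our statement; the constants in the paper's version may differ) is:

```latex
% Chernoff bound for negatively associated (NA) variables:
% if X_1,\dots,X_n are NA with X_i \in [0,1] and \mu = E\big[\sum_i X_i\big], then
\Pr\Big[\Big|\sum_i X_i - \mu\Big| \ge \delta\mu\Big]
  \le 2\exp\!\left(-\frac{\delta^2 \mu}{3}\right),
  \qquad 0 < \delta \le 1.
% Applied to deg_H(v), a sum of NA edge-sampling indicators with
% expectation roughly d/c, this yields sharp concentration w.h.p.
% once d is a sufficiently large multiple of c\log n / \delta^2.
```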

In light of Lemma 4.4, we now turn to discussing further implications of Lemma 3.7.

As shown in [3], the output fractional matching of [22] is (c,d)-approximately-maximal, for parameters satisfying the conditions of Lemma 4.4. Therefore, plugging this worst-case update time deterministic algorithm into Lemma 3.7 in conjunction with the deterministic logarithmic-time edge-coloring algorithm of [20], we at first only obtain a Monte Carlo algorithm with guarantees similar to those of Theorem 1.1. However, since we can efficiently verify the high-probability events implying that H is a kernel, we can re-sample H whenever it is not a kernel. From this we recover the guarantees of Theorem 1.1 — a Las Vegas randomized dynamic (2+ϵ)-approximate matching algorithm with polylogarithmic worst-case update time w.h.p., which works against adaptive adversaries.

To obtain the constant-time algorithm of Theorem 1.2, we rely on the constant-time fractional matching algorithm of Bhattacharya and Kulkarni [23], which we show outputs a (c,d)-approximately-maximal matching for appropriate parameters (see Section B.2.3). We show that these fractional matchings also only define few subgraphs in Algorithm 1, as they only assign one of few distinct x-values to all edges. This implies in particular that Algorithm 1 can sample H from such x using only few random choices, yielding a small subgraph. Using a simple constant-expected-time edge-coloring algorithm, this improves the update time accordingly. This yields the first (2+ϵ)-approximate algorithms with amortized logarithmic update time against adaptive adversaries. Pleasingly, we can improve this bound further, and obtain a constant-time such algorithm. To this end, we show that sampling H using a (c,d)-approximately-maximal fractional matching with constant d and removing high-degree vertices yields a (randomized) kernel. (See Section B.2.3.) From the above we thus obtain the first constant-time (2+ϵ)-approximate algorithm against adaptive adversaries, as stated in Theorem 1.2.

### 4.2 Fractional Matching Sparsifiers

The approach we apply in this section to analyze Algorithm 1 consists of showing that the subgraph H obtained by running Algorithm 1 on any fractional matching x with appropriate choices of γ and d supports a fractional matching y with E[∑_e y_e] ≥ (1−O(ϵ))⋅∑_e x_e. That is, we prove H is a near-lossless fractional matching sparsifier.

###### Lemma 4.5.

(Algorithm 1 Yields Fractional Matching Sparsifiers) Let ϵ, γ and d be as in Algorithm 1. If H is a subgraph of G output by Algorithm 1 when run on a fractional matching x with parameters ϵ, γ and d as above, then H supports a fractional matching y of expected value at least

 E[∑_e y_e] ≥ ∑_e x_e (1−4ϵ).

The full proof of this lemma is deferred to Section B.2.4. We briefly sketch it here. We first define a fractional matching z in H, with each sampled edge e ∈ H assigned value z_e = x_e/Pr[e∈H]. By Lemma 3.2 this immediately implies that E[∑_e z_e] ≥ (1−O(ϵ))⋅∑_e x_e. Next, we define a fractional matching y, letting y_e = z_e, or y_e = 0 if e ∈ H and some endpoint v of e has its fractional matching constraint violated by z, i.e., ∑_{e'∋v} z_{e'} > 1. The only delicate point of the analysis is showing that for an edge e, conditioned on e being sampled into H, with probability 1−O(ϵ) both endpoints of e have their fractional matching constraint satisfied by z. For this, we note that the fractional degree of an endpoint v conditioned on e ∈ H is the sum of the variables z_{e'}⋅𝟙[e'∈H], which by closure of NA under scaling by positive constants and Lemma 3.6 are NA. Therefore, applying Bernstein’s Inequality (Lemma 2.3) to these low-variance NA variables, we find that with probability 1−O(ϵ), if e is sampled, then both endpoints of e have their fractional matching constraints under z satisfied, in which case y_e = z_e, and so we have E[∑_e y_e] ≥ ∑_e x_e (1−4ϵ), as claimed.
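The construction of y from z in the sketch above can be written out directly. This is an illustrative sketch with our own names: `p` stands for the sampling probabilities Pr[e ∈ H], and edges at a violated vertex are zeroed out wholesale.

```python
from collections import defaultdict

def round_preserving_fractional(H_edges, x, p):
    """Build the fractional matching y of the proof sketch:
    z_e = x_e / p_e for each sampled edge e (so E[z_e] = x_e over the
    sampling), then y_e = z_e unless some endpoint of e has fractional
    degree under z exceeding 1, in which case y_e = 0."""
    z = {e: x[e] / p[e] for e in H_edges}
    frac_deg = defaultdict(float)
    for (u, v), val in z.items():
        frac_deg[u] += val
        frac_deg[v] += val
    return {e: (val if frac_deg[e[0]] <= 1 and frac_deg[e[1]] <= 1 else 0.0)
            for e, val in z.items()}
```

The lemma's content is precisely that the zeroing in the last step loses only an O(ϵ) fraction of the value in expectation.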

For bipartite graphs, whose fractional matching polytopes are integral, Lemma 4.5 implies that running Algorithm 1 on an α-approximate fractional matching yields an α(1+O(ϵ))-approximate subgraph H. For example, plugging the better-than-2-approximate fractional matching algorithm of Bhattacharya et al. [21] into our dynamic matching framework, we obtain the first (2−δ)-approximate algorithms with arbitrarily-small polynomial update time against adaptive adversaries in bipartite graphs, as stated in Theorem 1.3.

## 5 Conclusions and Open Questions

This paper provides the first randomized dynamic matching algorithms which work against adaptive adversaries and outperform deterministic algorithms for this problem. It obtains a number of such algorithmic results, based on a new framework for rounding fractional matchings dynamically against an adaptive adversary. Our results suggest several follow-up questions; we state a few below.

##### Maximum Weight Matching (MWM).

The current best approximation for dynamic MWM with polylog worst-case update time against adaptive adversaries is (4+ϵ), obtained by applying the reduction of [68] to our algorithm of Theorem 1.1. This is far from the more familiar ratios of (2+ϵ) or (1+ϵ) known to be achievable efficiently for this problem in other models of computation, such as streaming [62, 38] and the CONGEST model of distributed computation [55, 37]. Attaining such bounds in a dynamic setting in polylog update time (even amortized, and against an oblivious adversary) remains a tantalizing open problem.

##### Better Approximation.

To date, no efficient (i.e., polylog update time) dynamic matching algorithm with approximation ratio better than two is known. As pointed out by Assadi et al. [4], efficiently improving on this ratio of two for maximum matching has been a longstanding open problem in many models, and has recently been proven impossible in an online setting [35]. Is the dynamic setting “easier” than the online setting, or is an approximation ratio of two the best achievable in polylog update time?

##### Acknowledgements.

This work has benefited from discussions with many people. In particular, the author would like to thank Anupam Gupta, Bernhard Haeupler, Seffi Naor and Cliff Stein for helpful discussions. The author would also like to thank Naama Ben-David, Bernhard Haeupler and Seffi Naor for comments on an earlier draft of this paper which helped improve its presentation. This work was supported in part by NSF grants CCF-1527110, CCF-1618280, CCF-1814603, CCF-1910588, NSF CAREER award CCF-1750808 and a Sloan Research Fellowship.

## Appendix A Warm Up: Deterministic Algorithms

Here we discuss our deterministic matching algorithms, obtained by generalizing the discussion in Section 1.1. First, we note that the (2Δ−1)-edge-coloring algorithm of [18] works for multigraphs.

###### Lemma A.1 ([18]).

For any dynamic multigraph with maximum degree Δ, there exists a deterministic (2Δ−1)-edge-coloring algorithm with worst-case update time O(log Δ).

Broadly, the algorithm of [18] relies on binary search, using the following simple observation. With a palette of 2Δ−1 colors, if we add an edge (u,v), then the total number of colors used by u and v for all their (at most 2Δ−2) edges other than (u,v), even counting repetitions, is at most 2Δ−2. That is, fewer than the number of colors in the entire palette, 2Δ−1. Consequently, either the lower or the upper half of the palette has fewer colors used by u and v (again, counting repetitions) than it has colors. This argument continues to hold recursively in the half in which u and v have used fewer colors than are available. With the appropriate data structures, this observation is easily implemented to support O(log Δ) worst-case update time for both edge insertions and deletions (see [18] for details). As the underlying binary-search argument above does not rely on simplicity of the graph, this algorithm also works for multigraphs.
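The binary search above can be sketched as follows. This is a simplified illustration, not the implementation of [18]: we rescan the color multiset at each level rather than maintaining the data structures that make each level O(1).

```python
def find_free_color(used_colors, palette_size):
    """Find a color in [0, palette_size) absent from `used_colors`, the
    multiset (as a list) of colors on the <= palette_size - 1 edges
    incident to u and v. Binary search: whichever half of the palette
    has fewer used colors (counting repetitions) than slots must
    contain a free color."""
    lo, hi = 0, palette_size  # candidate range [lo, hi)
    counts = used_colors
    while hi - lo > 1:
        mid = (lo + hi) // 2
        left = [c for c in counts if lo <= c < mid]
        # Invariant: #used colors in [lo, hi), with repetitions, is
        # < hi - lo; so one of the two halves is under-used.
        if len(left) < mid - lo:
            counts, hi = left, mid
        else:
            counts = [c for c in counts if mid <= c < hi]
            lo = mid
    return lo
```

Since `used_colors` has at most `palette_size - 1` entries, the invariant holds initially, and the range of size one reached at the end contains no used color.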

We now show how to use this simple edge-coloring algorithm in conjunction with dynamic fractional matching algorithms to obtain a family of deterministic algorithms that trade off approximation ratio for worst-case update time.

See 1.4

###### Proof.

We maintain in the background a 2.5-approximate fractional matching x, using a deterministic algorithm with worst-case polylogarithmic update time, such as that of [22] run with a small constant approximation parameter. Letting K be a parameter of our choosing, we define K multigraphs G_1, …, G_K whose union contains all edges in G of x-value at least 2^{−K}. Specifically, for each i ∈ [K] we let G_i be a multigraph whose edges are the edges of G of x-value at least 2^{−i}, with each such edge e having ⌊x_e⋅2^i⌋ parallel copies in G_i. So, for example, an edge with x-value of 2^{−i} will have a single parallel copy in G_i, and an edge with x-value of 1/2 will have 2^{i−1} parallel copies in G_i. By the fractional matching constraint (∑_{e∋v} x_e ≤ 1), the maximum degree in each graph G_i is at most Δ(G_i) ≤ 2^i. Therefore, using the edge coloring algorithm of [18] we can maintain a (2Δ(G_i)−1)-edge-coloring in each G_i deterministically in worst-case O(log Δ(G_i)) time per edge update in G_i. Since for any edge e a change to x_e causes at most O(2^K) parallel copies of e to be added to or removed from the multigraphs G_1, …, G_K, each x-value change performed by the fractional matching algorithm requires O(2^K⋅K) worst-case time. As the fractional algorithm has polylogarithmic update time (and therefore at most that many x-value changes per update), the overall update time of these subroutines is therefore at most O(2^K⋅K⋅polylog(n)). Our algorithm simply maintains as its matching the largest color class in any of these multigraphs. It remains to bound the approximation ratio of this approach.

First, we note that all edges not in any G_i, i.e., of x-value less than 2^{−K}, contribute at most 1/4 to ∑_e x_e, for a suitable choice of K = O(log n). So, as x is a 2.5-approximate fractional matching, we have that

 ∑_{e∈⋃_i G_i} x_e ≥ (1/2.5)⋅μ(G) − 1/4 ≥ (1/O(1))⋅μ(G),

where, as before, μ(G) is the maximum matching size in G. (Note that if μ(G) = O(1), any algorithm maintaining a maximal matching is trivially O(1)-approximate.) Therefore, as there are K multigraphs, at least one of these multigraphs G_i must have total x-value at least

 ∑_{e∈G_i} x_e ≥ (1/O(K))⋅(1/O(1))⋅μ(G) = (1/O(K))⋅μ(G).

But, as this multigraph has at least 2^{i−1}⋅∑_{e∈G_i} x_e edges (counting parallel copies), one of the colors (matchings) in its (2Δ(G_i)−1)-edge-coloring must have size at least

 |E(G_i)| / (2Δ(G_i)−1) ≥ (2^{i−1}⋅∑_{e∈G_i} x_e) / 2^{i+1} ≥ (1/O(K))⋅μ(G).

As this algorithm’s matching is the largest color class among the edge colorings of all the different G_i, it is O(K)-approximate, as claimed. ∎
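One consistent way to realize the bucketing described in the proof above is the following sketch (our own reading; the paper's exact bucketing may differ in details such as rounding):

```python
import math

def build_multigraph_buckets(x, K):
    """Place each edge of x-value >= 2^-K into multigraphs G_1..G_K:
    G_i holds every edge with x_e >= 2^-i, with floor(x_e * 2^i)
    parallel copies, so that the maximum degree of G_i is at most 2^i
    by the fractional matching constraint."""
    buckets = {i: {} for i in range(1, K + 1)}  # i -> {edge: multiplicity}
    for e, val in x.items():
        for i in range(1, K + 1):
            copies = math.floor(val * 2 ** i)
            if copies >= 1:  # equivalently, val >= 2^-i
                buckets[i][e] = copies
    return buckets
```

Note how this matches the example in the proof: an edge of x-value 2^{−i} gets a single copy in G_i, while an edge of x-value 1/2 gets 2^{i−1} copies in G_i.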

###### Corollary A.2.

There exists a deterministic -approximate matching algorithm with worst-case update time.

##### Remark 1:

We note that the algorithm of Theorem 1.4 requires super-linear space to store the multigraphs G_1, …, G_K and their relevant data structures, since a single edge in the graph may have x-value precisely 1/2, say, which means we represent this edge using 2^{i−1} parallel edges in each G_i. It would be interesting to see if its approximation to worst-case update time tradeoff can be matched by a deterministic algorithm requiring (near-)linear space.

##### Remark 2:

We note that the matching maintained by our deterministic algorithms can change completely between updates. For applications where this is undesirable, combining this algorithm with a recent framework of Solomon and Solomon [65] yields a dynamic matching of roughly the same size, while changing only few edges of the matching per update.

## Appendix B Deferred proofs of Section 3

Here we provide complete proofs of lemmas deferred from the main paper body, restated below for convenience.

First, we show that Algorithm 1 samples each edge e into H with the probability given by (1), with d replaced by ⌈d⌉, up to multiplicative 1±O(ϵ) terms.

See 3.2

###### Proof.

Let i ≥ 1 be the integer for which (1+ϵ)^{i−1} < 1/x_e ≤ (1+ϵ)^i. That is, the i for which e belongs to the subgraph G_i.

If ⌈(1+ϵ)^i⌉ ≤ ⌈d⌉, implying that γ⌈(1+ϵ)^i⌉ ≤ γ⌈d⌉, then Algorithm 1 samples all of the γ⌈(1+ϵ)^i⌉ colors in the edge coloring of G_i. Consequently, the edge is sampled with probability one. On the other hand, ⌈(1+ϵ)^i⌉ ≤ ⌈d⌉ also implies that 1/x_e ≤ (1+ϵ)^i ≤ (1+ϵ)⋅d, and therefore that min{1, x_e⋅d} ≥ 1/(1+ϵ). Thus, the edge is sampled with probability at most

 Pr[e∈H] = 1 ≤ min{1, x_e⋅d}⋅(1+ϵ),

and trivially sampled with probability at least

 Pr[e∈H] = 1 ≥ min{1, x_e⋅d}/(1+ϵ)².

Moreover, a similar argument shows that every edge e with x_e⋅d ≥ 1 is sampled with probability at least min{1, x_e⋅d}/(1+ϵ)². It remains to consider edges with x_e⋅d < 1, for which min{1, x_e⋅d} = x_e⋅d, and which in particular belong to subgraphs G_i with i satisfying ⌈(1+ϵ)^i⌉ > ⌈d⌉.

Now, if i satisfies ⌈(1+ϵ)^i⌉ > ⌈d⌉, then we sample γ⌈d⌉ of the γ⌈(1+ϵ)^i⌉ colors in the edge coloring of G_i. As such, the probability of e appearing in H is precisely the probability that the color of e is one of the sampled colors in this coloring, which happens with probability precisely

 Pr[e∈H] = γ⌈d⌉ / (γ⌈(1+ϵ)^i⌉) = ⌈d⌉/⌈(1+ϵ)^i⌉.

Now, since implies that , the probability of (which has