Dynamic Approximate Matchings with an Optimal Recourse Bound

03/15/2018 ∙ by Shay Solomon, et al.

In the area of dynamic matching algorithms, the number of changes to the matching per update step, or the recourse bound, is an important measure of quality. Nevertheless, the worst-case recourse bounds of almost all known dynamic approximate matching algorithms are prohibitively large, significantly larger than the corresponding update times. In this paper we fill in this gap via a surprisingly simple observation: Any dynamic algorithm for maintaining a β-approximate maximum matching with update time T, for any β > 1, T and ϵ > 0, can be transformed into an algorithm for maintaining a (β(1+ϵ))-approximate maximum matching with update time T + O(1/ϵ) and recourse bound O(1/ϵ). If the original update time T is amortized/worst-case, so is the resulting update time T + O(1/ϵ), while the recourse bound O(1/ϵ) holds in the worst-case. This transformation applies to the fully dynamic setting under edge updates and vertex updates. We complement this positive result by showing that, for β = 1+ϵ, the worst-case recourse bound provided by our transformation is optimal: For any ϵ = Ω(1/n), maintaining a (1+ϵ)-approximate maximum matching in an n-vertex graph requires Ω(1/ϵ) changes to the matching per step, even in the amortized sense and even in the incremental and decremental settings. As a corollary, several key dynamic approximate matching algorithms, with low update time bounds but poor recourse bounds, are strengthened to achieve near-optimal recourse bounds with no loss in the update time. Furthermore, although a direct application of this transformation may only hurt the update time, we demonstrate the usefulness of this transformation in achieving low update time bounds in some natural settings (where we might not care about recourse bounds).


1 Introduction

The study of graph algorithms is mostly concerned with the measure of (static) runtime. Given a graph optimization problem, the standard objective is to come up with a fast algorithm (possibly an approximation algorithm), and ideally complement it with a matching lower bound on the runtime of any (approximation) algorithm for solving the problem. As an example, computing (from scratch) a 2-approximate minimum vertex cover can be done trivially in linear time, whereas a better-than-2 approximation for the minimum vertex-cover cannot be computed in polynomial time under the unique games conjecture [33].

The current paper is motivated by a natural need arising in modern real-life networks, which are prone to either temporary or permanent changes. Such changes are sometimes part of the normal behavior of the network, as is the case, e.g., in dynamic networks, but this is not always the case, as in faulty networks where temporary and even permanent changes could be the result of failures of nodes and edges. Consider a large-scale network for which we need to solve, perhaps approximately, some graph optimization problem, and the underlying (perhaps approximate) solution (e.g., a minimum spanning tree) is being used for some practical purpose (e.g., for performing efficient broadcast protocols) throughout a long span of time. If the network changes over time, the quality of the used solution may degrade with time until it is too poor to be used in practice and it may even become infeasible.

Thus instead of the standard objectives of optimization and approximation, the questions that arise here concern reoptimization and reapproximation: Can we “efficiently” transform one given solution (the source) to a better one (the target) under real-life constraints? The efficiency of the transformation may be measured in terms of running time, but in some applications making (even small) changes to the currently used solution may incur huge costs, possibly much higher than the running time cost of computing (even from scratch) a better solution. In particular, this is often the case whenever the edges of the currently used solution are “hard-wired” in some physical sense (as in road networks). Various real-life constraints may be studied, and the one we focus on in this work is that at any step (or every few steps) throughout the transformation process, the current solution should be both feasible and of quality no worse (by much) than the source solution. This constraint is natural, as it might be prohibitively expensive and even infeasible to carry out the transformation process instantaneously. Instead, the transformation can be broken into phases, each performing at most k changes to the transformed solution, where k is an arbitrary parameter, so that the solution obtained at the end of each phase (which is to be used instead of the source solution) is both feasible and of quality no (much) worse than the source. The transformed solution is to eventually coincide with the target solution.

One may further strengthen the above constraint by requiring that the overall running time of the transformation process given the source and target solutions will be low, ideally near-linear in the size of the underlying graph and perhaps even in the size of the source and target solutions.

The arising reoptimization meta-problem is interesting not just from a practical viewpoint, but also from a theoretical perspective, since none of the known algorithms and hardness results apply to it directly; hence even the most basic and well-understood optimization problems become open in this setting. For example, for the vertex cover problem, if we are given a better-than-2 approximate target vertex cover, can we transform a naive 2-approximate cover into it efficiently, subject to the above constraints? This is an example of a problem that is computationally hard in the standard sense, yet might be easy from a reoptimization perspective. Conversely, might computationally easy problems, such as approximate maximum matching, be hard from a reoptimization perspective?

This meta-problem captures the tension between (1) the global objective of transforming one global solution to another (possibly very different) global solution, and (2) the local objective of transforming gradually, subject to the constraint of having a feasible and high quality solution throughout the process. A similar tension is captured by other models of computation that involve locality, including dynamic graph algorithms, distributed computing, property testing and local computation algorithms. It is therefore natural to expect that the rich body of work in these related areas of research could greatly advance our understanding of the meta-problem presented in this work. In light of the above, we anticipate that a thorough investigation of this meta-problem would be of both practical and theoretical importance.

In this work we study (approximate) maximum cardinality matching, maximum weight matching, and minimum spanning forest under this framework. For each of these problems we provide either an optimal or a near-optimal transformation, as summarized in the following theorems. The most technically challenging part of this work is the transformation for approximate maximum weight matching (Theorem 1.2 below).


Theorem 1.1 (Maximum cardinality matching).

For any source and target matchings M and M′ of the same graph G, one can gradually transform M into (a possibly superset of) M′ via a sequence of constant-time operations, grouped into phases consisting of at most 3 operations each, such that the matching obtained at the end of each phase throughout this transformation process is a valid matching for G of size at least min{|M|, |M′| − 1}.

Moreover, the runtime of this transformation is bounded above by O(|M| + |M′|).


Theorem 1.2 (Maximum weight matching).

For any source and target matchings M and M′ such that w(M′) ≥ w(M) (this assumption is merely for simplicity, since in case w(M′) < w(M), we can gradually transform M′ into M, and finally reverse the transformation; the weight throughout the process would be at least (1 − ϵ)·w(M′) = (1 − ϵ)·min{w(M), w(M′)}, with the same ϵ), and for any ϵ > 0, one can gradually transform M into (a possibly superset of) M′ via a sequence of constant-time operations, grouped into phases consisting of O(1/ϵ) operations each, such that the matching obtained at the end of each phase throughout this transformation process is a valid matching for G of weight at least (1 − ϵ)·w(M), where w(·) denotes the total weight of an edge set.

Moreover, the runtime of this transformation is near-linear in |M| + |M′|.


Theorem 1.3 (Minimum spanning forest).

For any source and target spanning forests F and F′ of the same weighted graph G, one can gradually transform F into F′ via a sequence of constant-time operations, grouped into phases consisting of two operations each, such that the spanning forest obtained at the end of each phase throughout this transformation process is a valid spanning forest for G of weight at most max{w(F), w(F′)}.

Moreover, the runtime of this transformation is near-linear in the size of G.

Although our positive results may lead to the impression that there exists an efficient gradual transformation process for any graph optimization problem, we briefly discuss in Section 7 two trivial hardness results, for the minimum vertex cover and maximum independent set problems.
1.1  A worst-case recourse bound for dynamic matching algorithms.  In the standard fully dynamic setting we start from an initially empty graph on a fixed set V of n vertices, and at each time step i a single edge is either inserted into the graph or deleted from it, resulting in graph G_i. There is also a similar setting, where we have vertex updates instead of (or in addition to) edge updates; the vertex update setting was mostly studied for bipartite graphs (see [21, 22, 12], and the references therein).

The problem of maintaining a large matching in fully dynamic graphs has been the subject of extensive study over the last decade; see, e.g., [37, 9, 36, 30, 38, 41, 17, 12]. The basic goal is to devise an algorithm for maintaining a large matching while keeping a tab on the update time, where the update time is the time required by the algorithm to update the matching at each step. One may try to optimize the amortized (i.e., average) update time of the algorithm or its worst-case (i.e., maximum) update time, where both these measures are defined with respect to a worst-case sequence of graphs.

“Maintaining” a matching with update time T translates into maintaining a data structure with update time T, which answers queries regarding the matching with a low, ideally constant, query time. For a queried vertex v the answer is the only matched edge incident on v, or null if v is free, while for a queried edge e the answer is whether e is matched or not. All queries made following the same update step i should be answered consistently with respect to the same matching, hereafter the output matching (at step i), but queries made in the next update step may be answered with respect to a completely different matching. Indeed, even if the worst-case update time is low, the output matching may change significantly from one update step to the next; some natural scenarios where the matching may change significantly per update step are discussed in Appendix A.
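To make this query interface concrete, the following minimal Python sketch (ours, not from the paper; the class and method names are hypothetical) stores the output matching of the current step and answers both query types in constant time.

class OutputMatching:
    """Answers queries against the output matching of the current update
    step; all queries between two consecutive updates are answered with
    respect to the same matching."""

    def __init__(self):
        self.mate = {}  # vertex -> its matched neighbor

    def set_matching(self, edges):
        """Install the output matching for the current update step."""
        self.mate = {}
        for u, v in edges:
            self.mate[u] = v
            self.mate[v] = u

    def query_vertex(self, v):
        """Return the matched edge incident on v, or None if v is free."""
        return (v, self.mate[v]) if v in self.mate else None

    def query_edge(self, u, v):
        """Return whether the edge (u, v) is matched."""
        return self.mate.get(u) == v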

The number of changes (or replacements) to the output matching per update step is an important measure of quality, sometimes referred to as the recourse bound, and the problem of optimizing it has received growing attention recently [27, 26, 22, 23, 12, 13]. Indeed, in some applications such as job scheduling, web hosting, streaming content delivery, data storage and hashing, a replacement of a matched edge by another one may be costly, possibly much more than the runtime of computing these replacements. Moreover, whenever the recourse bound is low, one can efficiently output all the changes to the matching following every update step, which is important for various applications. In particular, a low recourse bound is important when the matching algorithm is used as a black-box subroutine inside a larger data structure or algorithm [15, 1]; see Appendix A for more details. We remark that the recourse bound (generally defined as the number of changes to some underlying structure per update step) has been well studied in the areas of dynamic and online algorithms for a plethora of optimization problems (besides graph matching), such as MIS, set cover, flow and scheduling; refer to [29, 8, 5, 24, 28, 39], and the references therein.

While a low worst-case bound immediately yields the same or better amortized bound, either in terms of update time or recourse bound, the converse is not true, and indeed there is a strong separation between the state-of-the-art amortized versus worst-case bounds for dynamic matching algorithms. A similar separation exists for various other dynamic graph problems, such as spanning tree, minimum spanning tree and two-edge connectivity. In many practical scenarios, particularly in systems designed to provide real-time responses, a strict tab on the worst-case update time or on the worst-case recourse bound is crucial, thus an algorithm with a low amortized guarantee but a high worst-case guarantee is useless.

We shall focus on algorithms for maintaining large matchings with worst-case bounds. Despite the importance of the recourse bound measure, almost all known algorithms in the area of dynamic matchings (described in Section 1.1.1) with strong worst-case update time bounds provide no nontrivial worst-case recourse bounds whatsoever! In this paper we fill in this gap via a surprisingly simple yet powerful black-box reduction (throughout, β-MCM is a shortcut for β-approximate maximum cardinality matching):

Theorem 1.4.

Any dynamic algorithm for maintaining a β-MCM with update time T, for any β > 1, T and ϵ > 0, can be transformed into an algorithm for maintaining a (β(1+ϵ))-MCM with an update time of T + O(1/ϵ) and a worst-case recourse bound of O(1/ϵ). If the original time bound T is amortized/worst-case, so is the resulting time bound of T + O(1/ϵ), while the recourse bound O(1/ϵ) always holds in the worst-case. This applies to the fully dynamic setting (and thus to the incremental and decremental settings), under both edge and vertex updates.

The proof of Theorem 1.4 is carried out in two steps. In the first step we prove Theorem 1.1 by showing a simple transformation process for any two matchings M and M′ of the same static graph. If M′ is larger than M, then the size of the transformed matching may go down, but never below the original size |M|, and eventually it goes up to |M′|. The second step of the proof, which is also the key insight behind it, is that the gradual transformation process can be used essentially as is in the fully dynamic setting, both under edge updates and under vertex updates, while incurring a negligible loss to the size of the transformed matching, and moreover, to its approximation guarantee.

In Section 4 we complement the positive result provided by Theorem 1.4 by demonstrating that the O(1/ϵ) recourse bound is optimal (up to a constant factor) in the regime β = 1 + ϵ. In fact, the Ω(1/ϵ) lower bound on the recourse bound holds even in the amortized sense and even in the incremental (insertion only) and decremental (deletion only) settings. For larger values of β, taking ϵ to be a sufficiently small constant gives rise to an approximation guarantee arbitrarily close to β with a constant recourse bound.

A natural assumption.  We assume that at any update step i and for any integer k ≥ 1, arbitrary k edges of the output matching at step i can be output within time (nearly) linear in k; in particular, the entire matching at step i can be output within time (nearly) linear in the matching size. (Recall that the output matching at step i is the one with respect to which all queries following update step i are answered.) This assumption is natural, and indeed, all known algorithms in the area of dynamic matchings satisfy it. Moreover, this assumption trivially holds when the maximum matching size is (nearly) linear in the number of vertices n, which is the case, e.g., in near-regular graphs and sufficiently dense graphs: one can then simply query all n vertices, one by one, within total time that is (nearly) linear in the matching size.
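The last observation can be spelled out in one loop; the sketch below reuses the hypothetical OutputMatching interface from above and assumes comparable vertex identifiers (so each edge is reported exactly once, from its smaller endpoint).

def output_matching(vertices, dm):
    """List all matched edges using constant-time vertex queries; the
    total time is O(n), which is (nearly) linear in the matching size
    whenever that size is Omega(n)."""
    edges = []
    for v in vertices:
        e = dm.query_vertex(v)            # constant-time query
        if e is not None and v == min(e):
            edges.append(e)
    return edges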
1.1.1  Previous work and a new corollary.  In this section we provide a concise literature survey on dynamic approximate matching algorithms. (For a more detailed account, see [37, 9, 38, 41, 17], and the references therein.)

There is a large body of work on algorithms for maintaining large matchings with low amortized update time, but none of the papers in this context provides a low worst-case recourse bound. For example, in FOCS’14 Bosek et al. [21] showed that one can maintain a (1+ϵ)-MCM in the incremental vertex update setting with a total of O(m/ϵ) time and O(n/ϵ) matching replacements, where m and n denote the number of edges and vertices of the final graph, respectively. While the amortized recourse bound provided by [21] is O(1/ϵ), no nontrivial (i.e., o(n)) worst-case recourse bound is known for this problem. As another example, in STOC’16 Bhattacharya et al. [16] presented an algorithm for maintaining a (2+ϵ)-MCM in the fully dynamic edge update setting with an amortized update time of poly(log n, 1/ϵ). While the amortized recourse bound of the algorithm of [16] is dominated by its amortized update time, poly(log n, 1/ϵ), no algorithm for maintaining a (2+ϵ)-MCM with a similar amortized update time and a nontrivial worst-case recourse bound is known.

We next focus on algorithms with low worst-case update time.

In STOC’13 [36] Neiman and Solomon presented an algorithm for maintaining a maximal matching with a worst-case update time of O(√m), where m is the dynamic number of edges in the graph. A maximal matching provides a 2-MCM. [36] also provides a 3/2-MCM algorithm with the same update time. The algorithms of [36] provide a constant worst-case recourse bound. Remarkably, all other dynamic matching algorithms for general and bipartite graphs (described next) do not provide any nontrivial worst-case recourse bound.

In FOCS’13, Gupta and Peng [30] presented a scheme for maintaining approximate maximum matchings in fully dynamic graphs, yielding algorithms for maintaining a (1+ϵ)-MCM with worst-case update times of O(√m/ϵ²) and O(Δ/ϵ²) in general graphs and in graphs with degree bounded by Δ, respectively. The scheme of [30] was refined in SODA’16 by Peleg and Solomon [38] to provide a worst-case update time of O(α/ϵ²) for graphs with arboricity bounded by α. (A graph G = (V, E) has arboricity α if max_{U ⊆ V, |U| ≥ 2} ⌈|E(U)|/(|U| − 1)⌉ ≤ α, where E(U) = {(u, v) ∈ E : u, v ∈ U}.) Since the arboricity of any m-edge graph is O(√m) and is at most the maximum degree Δ, the result of [38] generalizes [30]. In ICALP’15 Bernstein and Stein [14] gave an algorithm for maintaining a (3/2+ϵ)-MCM for bipartite graphs with a worst-case update time of O(m^{1/4}/ϵ^{2.5}); to achieve this result, they employ the algorithm of [30] for graphs with degree bounded by Δ, for an appropriately chosen degree threshold Δ.

The algorithms of [36, 30, 14, 38] are deterministic. Charikar and Solomon [25] and Arar et al. [2] independently presented randomized algorithms for maintaining a (2+ϵ)-MCM with a worst-case update time of poly(log n, 1/ϵ). The algorithm of [2] employs the algorithm of [30] for graphs with degree bounded by Δ, for an appropriately chosen Δ = poly(log n, 1/ϵ).

The drawback of the scheme of [30] is that the worst-case recourse bounds of algorithms that follow it may be linear in the matching size. The algorithms of [30, 14, 38, 2] either follow the scheme of [30] or employ one of the aforementioned algorithms provided in [30] as a black-box, hence they all suffer from the same drawback. Although the algorithm of [25] does not use [30] in any way, it also suffers from this drawback, as discussed in Appendix A.
A corollary of Theorem 1.4.  As a corollary of Theorem 1.4, all the aforementioned algorithms [30, 14, 38, 25, 2] with low worst-case update time are strengthened to achieve a worst-case recourse bound of O(1/ϵ), with only an additive overhead of O(1/ϵ) to the update time. Since the update time of all these algorithms is larger than 1/ϵ, in this way we achieve a recourse bound of O(1/ϵ) with no loss whatsoever to the update time! Moreover, all known algorithms with low amortized update time can be strengthened in the same way; e.g., for the incremental vertex update setting, the algorithm of [21] is strengthened to maintain a (1+ϵ)-MCM with a total runtime of O(m/ϵ) and a worst-case (rather than amortized) recourse bound of O(1/ϵ). Since the recourse bound is an important measure of quality, this provides a significant contribution to the area of dynamic approximate matchings.
1.1.2  A generalization for weighted matchings.  The result of Theorem 1.4 can be generalized seamlessly for approximate maximum weight matching in graphs with bounded aspect ratio ψ (the aspect ratio of a weighted graph G = (V, E, w) is defined as ψ = max_{e∈E} w(e) / min_{e∈E} w(e)), by using the transformation provided by Theorem 1.2 rather than Theorem 1.1, as summarized in the following theorem.

Theorem 1.5.

Any dynamic algorithm for maintaining a β-approximate maximum weight matching (shortly, β-MWM) with update time T in a dynamic graph with aspect ratio always bounded by ψ, for any β > 1, T and ϵ > 0, can be transformed into an algorithm for maintaining a (β(1+ϵ))-MWM with an update time of T + O(ψ/ϵ) and a worst-case recourse bound of O(ψ/ϵ). If the original time bound is amortized/worst-case, so is the resulting time bound of T + O(ψ/ϵ), while the recourse bound always holds in the worst-case. This applies to the fully dynamic setting (and thus to the incremental and decremental settings), under both edge and vertex updates.


1.1.3  Scenarios with high recourse bounds.  There are various scenarios where high recourse bounds may naturally arise. In such scenarios our reductions (Theorems 1.4 and 1.5) can come into play to achieve low worst-case recourse bounds. Furthermore, although a direct application of our reductions may only hurt the update time, we demonstrate the usefulness of these reductions in achieving low update time bounds in some natural settings (where we might not care at all about recourse bounds); this, we believe, provides another strong motivation for our reductions. The details are deferred to Appendix A.
1.2  Related work.  Since “reoptimization” may be interpreted broadly, there is an extensive and diverse body of research devoted to various notions of reoptimization; see [42, 7, 20, 19, 6, 10, 11, 39, 18], and the references therein. The common goal in all previous work on reoptimization is to (efficiently) compute an exact or approximate solution to a new problem instance by using the solution for the old instance, where typically the solution for the new instance should be close to the original one under a certain distance measure. Our paper is inherently different from all previous work, and in a sense complementary to it, since our starting point is that some solution to the new problem instance is given, and the goal is to compute a gradual transformation process (subject to some constraints) between the two given solutions.
1.3  Organization.  The first few sections (Sections 2-4) are devoted to achieving a tight worst-case recourse bound for dynamic approximate maximum matching. We start (Section 2) by describing the scheme of [30] for dynamic approximate matchings. The proofs of Theorems 1.4 and 1.5 are in Section 3; in Section 3.1 we present a simple transformation process for maximum cardinality matching in static graphs, thus proving Theorem 1.1, and we adapt it to fully dynamic graphs in Sections 3.2 and 3.3 for proving Theorems 1.4 and 1.5, respectively. A lower bound of Ω(1/ϵ) on the recourse bound of (1+ϵ)-MCMs is provided in Section 4. In Section 5 we present a rather intricate transformation process for maximum weight matching, thus proving Theorem 1.2; the transformation for maximum cardinality matching can be viewed as a “warm up” for this one. The transformation process for minimum spanning forest is presented in Section 6, thus proving Theorem 1.3. A discussion appears in Section 7.

Various scenarios where high recourse bounds may naturally arise are discussed in Appendix A.

2 The scheme of [30]

This section provides a short overview of the scheme of [30]. Although such an overview is not required for proving Theorems 1.4 and 1.5, it is instructive to provide it, as it shows that the scheme of [30] is insufficient for providing any nontrivial worst-case recourse bound. Also, the scheme of [30] exploits a basic stability property of matchings, which we use for proving Theorems 1.4 and 1.5, thus an overview of this scheme may facilitate the understanding of our proof.
2.1  The amortization scheme of [30].  The stability property of matchings used in [30] is that the maximum matching size changes by at most 1 following each update step. Thus if we have a β-MCM, for any β ≥ 1, the approximation guarantee of the matching will remain close to β throughout a long update sequence. Formally, the following lemma is a simple adaptation of Lemma 3.1 from [30]; its proof is given in Appendix B for completeness. (Lemma 3.1 of [30] is stated for approximation guarantee 1 + ϵ and for edge updates, whereas Lemma 2.1 here holds for any approximation guarantee and also for vertex updates.)

Lemma 2.1.

Let 0 < ϵ < 1, and suppose that M_t is a β-MCM for G_t, for any β ≥ 1. For 1 ≤ i ≤ ϵ·|M_t|, let M_{t+i} denote the matching M_t after removing from it all edges that got deleted during updates t+1, ..., t+i. Then M_{t+i} is a β(1 + O(ϵ))-MCM for G_{t+i}.

For concreteness, we shall focus on the regime of approximation guarantee 1 + O(ϵ), and sketch the argument of [30] for maintaining a (1 + O(ϵ))-MCM in fully dynamic graphs. (As Lemma 2.1 applies to any approximation guarantee β ≥ 1, it is readily verified that the same argument carries over to any approximation guarantee.)

One can compute a (1+ϵ)-MCM M_t at a certain update step t, and then re-use the same matching throughout all update steps t+1, ..., t+ϵ·|M_t| (after removing from it all edges that got deleted from the graph between step t and the current step). By Lemma 2.1, assuming ϵ < 1, M_t provides a (1 + O(ϵ))-MCM for all graphs G_t, ..., G_{t+ϵ·|M_t|}. Next compute a fresh (1+ϵ)-MCM following update step t′ = t + ϵ·|M_t| and re-use it throughout all update steps t′+1, ..., t′+ϵ·|M_{t′}|, and repeat. In this way the static time complexity of computing a (1+ϵ)-MCM is amortized over ϵ·|M_t| update steps. As explained in Appendix C, the static computation time of an approximate matching is O(α·μ/ϵ), where α is the arboricity bound and μ is the maximum matching size. (This bound on the static computation time was established in [38]; it reduces to O(√m·μ/ϵ) and O(Δ·μ/ϵ) for general graphs and graphs of degree bounded by Δ, respectively, which match the bounds provided by [30].)
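The following Python sketch (ours) illustrates the amortization scheme just described; static_matching is a stand-in for the static approximate-matching routine, and edges are assumed to be given as 2-tuples.

import math

def lazy_scheme(updates, static_matching, eps):
    """Sketch of the amortization scheme of [30]: compute a fresh
    approximate matching, re-use it for ~eps*|M| update steps while
    deleting matched edges that leave the graph (by Lemma 2.1 this
    costs only a 1+O(eps) factor in the approximation), then repeat."""
    G, M, reuse = set(), set(), 0
    for op, e in updates:              # op is 'ins' or 'del'
        e = tuple(sorted(e))
        if op == 'ins':
            G.add(e)
        else:
            G.discard(e)
            M.discard(e)               # keep M a valid matching of G
        if reuse == 0:                 # window exhausted: recompute
            M = {tuple(sorted(f)) for f in static_matching(G)}
            reuse = max(1, math.floor(eps * len(M)))
        else:
            reuse -= 1
        yield M                        # output matching for this step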
2.2  A worst-case update time.  In the amortization scheme of [30] described above, a (1+ϵ)-MCM is computed from scratch, and is then re-used throughout ϵ·μ additional update steps, where μ denotes the current maximum matching size. The worst-case update time is thus the static computation time of an approximate matching, namely, O(α·μ/ϵ). To improve the worst-case guarantee, the tweak used in [30] is to simulate the static approximate matching computation within a “time window” of ϵ·μ consecutive update steps, so that following each update step the algorithm simulates only O(α/ϵ²) steps of the static computation. During this time window the gradually-computed matching, denoted by M_new, is useless, so the previously-computed matching, denoted by M_old, is re-used as the output matching. This means that each matching is re-used throughout a time window of twice as many update steps, hence the approximation guarantee increases from 1 + O(ϵ) to 1 + O(2ϵ), but we can reduce it back to 1 + O(ϵ) by a straightforward scaling argument. Note that the gradually-computed matching does not include edges that got deleted from the graph during the time window.
2.3  Recourse bounds.  Consider an arbitrary time window of ϵ·μ update steps used in the amortization scheme of [30], and note that the same matching is being re-used throughout the entire window. Hence there are no changes to the matching in the “interior” of the window except for those triggered by adversarial deletions, which may trigger at most one change to the matching per update step. On the other hand, at the start of any time window (except for the first), the output matching is switched from the old matching M_old to the new one M_new, which may require |M_old| + |M_new| replacements to the output matching at that time. Note that the amortized number of replacements per update step is quite low, being upper bounded by (|M_old| + |M_new|)/(ϵ·μ). In the regime of approximation guarantee 1 + O(ϵ), we have |M_old|, |M_new| = Θ(μ), hence the amortized recourse bound is bounded by O(1/ϵ). For a general approximation guarantee β, the naive amortized recourse bound is O(β/ϵ).

On the negative side, the worst-case recourse bound may still be as high as Θ(μ), even after performing the above tweak. Indeed, that tweak only causes the time windows to be twice longer, and it does not change the fact that once the computation of M_new finishes, the output matching is switched from the old matching M_old to the new one M_new instantaneously, which may require Θ(μ) replacements to the output matching at that time.

3 Proof of Theorems 1.4 and 1.5

This section is devoted to the proof of Theorem 1.4. At the end of this section we sketch the adjustments needed for deriving the result of Theorem 1.5, whose proof follows along similar lines to those of Theorem 1.4.
3.1  A simple transformation in static graphs.  This section is devoted to the proof of Theorem 1.1, which provides the first step in the proof of Theorem 1.4. We remark that this theorem can be viewed as a “warm up” for Theorem 1.2 for maximum weight matchings, which is deferred to Section 5, and is considerably more technically involved.

Let M and M′ be two matchings for the same graph G = (V, E). Our goal is to gradually transform M into (a possibly superset of) M′ via a sequence of constant-time operations to be described next, such that the matching obtained at any point throughout this transformation process is a valid matching for G of size at least min{|M|, |M′| − 1}. It is technically convenient to denote by M̃ the transformed matching, which is initialized as M at the outset, and is gradually transformed into M′; we refer to M and M′ as the source and target matchings, respectively. Each operation starts by adding a single edge of M′ to M̃ and then removing from M̃ the at most two edges incident on the newly added edge. It is instructive to assume that |M′| > |M|, as the motivation for applying this transformation, which will become clear in Section 3.2, is to increase the matching size; in this case the size of the transformed matching never goes below the size |M| of the source matching.

We say that an edge of M′ \ M̃ that is incident on at most one edge of M̃ is good, otherwise it is bad, being incident on two edges of M̃. Since M̃ has to be a valid matching throughout the transformation process, adding a bad edge to M̃ must trigger the removal of two edges from M̃. Thus if we keep adding bad edges to M̃, the size of M̃ may halve throughout the transformation process. The following lemma shows that if all edges of M′ \ M̃ are bad, the transformed matching M̃ is just as large as the target matching M′.

Lemma 3.1.

If all edges of M′ \ M̃ are bad, then |M̃| ≥ |M′|.

Proof:  Consider a bipartite graph H = (A ∪ B, E_H), where each vertex in A corresponds to an edge of M′ \ M̃ and each vertex in B corresponds to an edge of M̃ \ M′, and there is an edge between a vertex in A and a vertex in B iff the corresponding matched edges share a common vertex in the original graph. If all edges of M′ \ M̃ are bad, then any edge of M′ \ M̃ is incident on two edges of M̃, and since M′ is a valid matching, those two edges cannot be in M′, i.e., they correspond to vertices of B. In other words, the degree of each vertex in A is exactly 2. Also, the degree of each vertex in B is at most 2, as M′ is a valid matching. It follows that |A| ≤ |B|, or in other words |M′ \ M̃| ≤ |M̃ \ M′|, yielding |M̃| ≥ |M′|.    

The transformation process is carried out as follows. At the outset we initialize M̃ = M and compute the sets Good and Bad of good and bad edges in M′ \ M̃ within time O(|M| + |M′|) in the obvious way, and store them in doubly-linked lists. We keep mutual pointers between each edge of M̃ and its at most two incident edges in the corresponding linked lists Good and Bad. Then we perform a sequence of operations, where each operation starts by adding an edge of M′ \ M̃ to M̃, giving precedence to good edges (i.e., adding a bad edge to M̃ only when there are no good edges to add), and then removing from M̃ the at most two edges incident on the newly added edge. Following each such operation, we update the lists Good and Bad within constant time in the obvious way. This process is repeated until M′ \ M̃ = ∅, at which stage we have M′ ⊆ M̃. Note that the number of operations performed before emptying M′ \ M̃ is bounded by |M′|, since each operation removes at least one edge from M′ \ M̃. It follows that the total runtime of the transformation process is bounded by O(|M| + |M′|).
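The following Python sketch (ours) implements this transformation operation by operation. For brevity it re-scans the pending target edges to pick a good one instead of maintaining the doubly-linked lists Good and Bad, so its running time is quadratic rather than linear; the sequence of operations is the same.

def norm(e):
    """Normalize an edge to a sorted tuple, so that (u, v) == (v, u)."""
    return tuple(sorted(e))

def transform(M, M_target):
    """Gradually transform matching M into (a possibly superset of)
    M_target.  Each operation adds one target edge and removes the at
    most two current edges incident on it; good edges (at most one
    incident current edge) are always preferred, which keeps the size
    at least min(|M|, |M_target| - 1) throughout (Lemmas 3.1 and 3.2).
    Yields the matching after every operation."""
    cur = {norm(e) for e in M}
    mate = {}
    for u, v in cur:
        mate[u], mate[v] = v, u
    pending = [norm(e) for e in M_target if norm(e) not in cur]

    def incident(e):
        # current matched edges sharing a vertex with e (at most two)
        return {norm((x, mate[x])) for x in e if x in mate}

    while pending:
        pending.sort(key=lambda e: len(incident(e)))  # good edges first
        e = pending.pop(0)
        for f in incident(e):          # remove the conflicting edges
            cur.discard(f)
            for x in f:
                del mate[x]
        cur.add(e)
        u, v = e
        mate[u], mate[v] = v, u
        yield set(cur)

The 4-cycle example in Remark 3.3 below exercises this routine.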

It is immediate that M̃ remains a valid matching throughout the transformation process, as we pro-actively remove from it edges that share a common vertex with new edges added to it. To complete the proof of Theorem 1.1 it remains to prove the following lemma.

Lemma 3.2.

At any moment in time we have |M̃| ≥ min{|M|, |M′| − 1}.

Proof:  Suppose first that we only add good edges to M̃ throughout this process. In this case every edge addition to M̃ triggers at most one edge removal from it, hence the size of M̃ never goes down, and so it is always at least as large as that of the source matching M.

We henceforth assume that at least one bad edge is added to M̃. Recall that a bad edge is added to M̃ only when there are no good edges to add, and consider the last moment throughout the transformation process in which a bad edge addition occurs. Just before this addition we have |M̃| ≥ |M′| by Lemma 3.1, thus we have |M̃| ≥ |M′| − 1 after adding that edge to M̃ and removing the two edges incident on it from there. At any subsequent moment in time only good edges are added to M̃, each of which may trigger at most one edge removal from M̃, so the size of M̃ cannot decrease from that moment onwards.    

Remark 3.3.

When |M′| > |M|, it is possible to gradually transform M into M′ without ever being in deficit compared to the initial value of |M|, i.e., |M̃| ≥ |M| throughout the transformation process. However, if |M′| = |M|, this no longer holds true. To see this, consider M and M′ so that M ∪ M′ is a simple alternating cycle of any positive length. It is easy to verify that throughout any transformation process and until treating the last edge of the cycle, it must be that |M̃| ≤ |M| − 1.
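A concrete instance of the remark (our example): take M and M′ whose union is an alternating 4-cycle. Every edge of M′ is then bad, so the very first operation removes two edges and adds one, forcing the deficit.

# M and M' of equal size whose union is a single alternating 4-cycle:
M       = {(1, 2), (3, 4)}
M_prime = {(2, 3), (4, 1)}
# Both edges of M' touch two edges of M, so the first operation must
# add a bad edge; the size drops to |M| - 1 = 1 before recovering.
print([len(m) for m in transform(M, M_prime)])   # prints [1, 2]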


3.2  The fully dynamic setting.  In this section we provide the second step in the proof of Theorem 1.4, showing that the simple transformation process described in Section 3.1 for static graphs can be generalized to the fully dynamic setting, thus completing the proof of Theorem 1.4.

Consider an arbitrary dynamic algorithm, Algorithm A, for maintaining a β-MCM with an update time of T, for any β > 1 and T. The matching maintained by Algorithm A, denoted by M_i^A for time step i, may change significantly following a single update step. All that is guaranteed by Algorithm A is that it can update the matching following every update step within a time bound of T, either in the worst-case sense or in the amortized sense, following which queries regarding the matching can be answered in (nearly) constant time. Recall also that we assume that for any update step i and for any integer k ≥ 1, arbitrary k edges of the matching provided by Algorithm A at step i can be output within time (nearly) linear in k; in particular, the entire matching can be output within time (nearly) linear in the matching size.

Our goal is to output a matching M_i^out, for each time step i, possibly very different from M_i^A, which changes very slightly from one update step to the next. To this end, the basic idea is to use the matching provided by Algorithm A at a certain update step, and then re-use it (gradually removing from it edges that get deleted from the graph) throughout a sufficiently long window of consecutive update steps, while gradually transforming it into a larger matching, provided again by Algorithm A at some later step.

The gradual transformation process is obtained by adapting the process described in Section 3.1 for static graphs to the fully dynamic setting. Next, we describe this adaptation. We assume that β = 1 + ϵ; the case of a general β is addressed in Section 3.2.1.

Consider the beginning of a new time window, at some update step t. Denote the matching provided by Algorithm A at that stage by M_A and the matching output by our algorithm by M_out. Recall that the entire matching M_A can be output in time (nearly) linear in its size, and we henceforth assume that M_A is given as a list of edges. (For concreteness, we assume that the time needed for storing the edges of M_A in an appropriate list is O(|M_A|).) While M_A is guaranteed to provide a (1+ϵ)-MCM at any update step, including t, the approximation guarantee of M_out may be worse. Nevertheless, we will show (Lemma 3.4) that M_out provides a (1 + O(ϵ))-MCM for G_t. Under the assumption that β = 1 + ϵ, we thus have |M_A| ≤ (1 + O(ϵ))·|M_out|. The length of the time window is ⌈ϵ·|M_A|⌉, i.e., it starts at update step t + 1 and ends at update step t + ⌈ϵ·|M_A|⌉. During this time window, we gradually transform M_out into (a possibly superset of) M_A, using the transformation described in Section 3.1 for static graphs; recall that the matching maintained throughout this transformation process is denoted by M̃. We may assume that |M_A| = Ω(1/ϵ), where the constant hiding in the Ω-notation is sufficiently large; indeed, otherwise |M_A| = O(1/ϵ) and there is no need to apply the transformation process, as the trivial worst-case recourse bound is O(1/ϵ).

We will show (Lemma 3.4) that the output matching provides a (1 + O(ϵ))-MCM at any update step of the time window. Two simple adjustments are needed for adapting the transformed matching of the static setting to the fully dynamic setting:

  • To achieve a low worst-case recourse bound and guarantee that the overhead in the update time (with respect to the original update time) is low in the worst-case, we cannot carry out the entire computation at once (i.e., following a single update step), but should rather simulate it gradually over the entire time window of the transformation process. Specifically, recall that the transformation process for static graphs consists of two phases, a preprocessing phase in which the matching M_A is stored and the lists Good and Bad of good and bad edges in M_A \ M̃ are computed, and the actual transformation phase that transforms M̃, which is initialized as M_out, into (a possibly superset of) M_A. Each of these phases requires time O(|M_A| + |M_out|). The computation of the first phase is simulated in the first half of the update steps of the window, performing O(1/ϵ) computation steps and zero replacements to M̃ following every update step. The computation of the second phase is simulated in the second half of the update steps of the window, performing O(1/ϵ) computation steps and O(1/ϵ) replacements to M̃ following every update step.

  • Denote by M̃_i the matching obtained at the ith update step of the window by the resulting gradual transformation process, which simulates O(1/ϵ) computation steps and O(1/ϵ) replacements to the output matching following every update step. While M̃_i is a valid matching for the (static) graph G_t at the beginning of the time window, some of its edges may get deleted from the graph in subsequent update steps. Consequently, the matching that we shall output for graph G_{t+i}, denoted by M_{t+i}^out, is the one obtained from M̃_i by removing from it all edges that got deleted from the graph between steps t + 1 and t + i. (A simplified code sketch of this window mechanism is given after this list.)
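The following Python sketch (ours) compresses the window mechanism into a few lines: fresh_matching(i) is a hypothetical accessor for the black box's matching after update i, the two phases are merged into a single per-step operation budget, and rounding issues are ignored. It reuses the transform generator from Section 3.1.

import math

def low_recourse_wrapper(fresh_matching, updates, eps):
    """Yields an output matching after every update step; it changes by
    only O(1/eps) edges per step, because each window re-uses the
    previous output while the static transformation toward the black
    box's fresher matching is simulated a few operations at a time."""
    out = set()
    i = 0
    while i < len(updates):
        target = {tuple(sorted(e)) for e in fresh_matching(i)}
        window = max(1, math.floor(eps * len(target)))
        ops = transform(out, target)                  # Section 3.1 sketch
        budget = math.ceil(2 * len(target) / window)  # O(1/eps) per step
        deleted = set()
        while i < len(updates) and window > 0:
            op, e = updates[i]                        # ('ins'/'del', edge)
            if op == 'del':
                deleted.add(tuple(sorted(e)))
            for _ in range(budget):                   # simulate a few operations
                out = next(ops, out)
            yield out - deleted                       # drop deleted edges
            i, window = i + 1, window - 1
        out -= deleted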

Once the current time window terminates, a new time window starts, and the same transformation process is repeated, with the matching output at the end of the old window serving as M_out and the fresh matching of Algorithm A serving as M_A. Since all time windows are handled in the same way, it suffices to analyze the output matching of the current time window, and this analysis would carry over to the entire update sequence.

It is immediate that the output matching M_{t+i}^out is a valid matching for G_{t+i}, for any i. Moreover, since we make sure to simulate O(1/ϵ) computation steps and O(1/ϵ) replacements following every update step, the worst-case recourse bound of the resulting algorithm is bounded by O(1/ϵ) and the update time is bounded by T + O(1/ϵ), where this time bound is worst-case/amortized if the time bound of Algorithm A is worst-case/amortized.

It is left to bound the approximation guarantee of the output matching. Recall that β = 1 + ϵ, and write the approximation guarantee provided by Lemma 3.4 as 1 + ϵ′, with ϵ′ = O(ϵ). (We assume that ϵ is sufficiently small so that ϵ′ < 1. We need this restriction on ϵ′ to apply Lemma 2.1.)

Lemma 3.4.

M_t^out and M_A provide a (1 + O(ϵ))-MCM and a (1+ϵ)-MCM for G_t, respectively. Moreover, M_{t+i}^out provides a (1 + O(ϵ))-MCM for G_{t+i}, for any 1 ≤ i ≤ ⌈ϵ·|M_A|⌉.

Proof:  First, we bound the approximation guarantee of the matching output at the end of the window, which is obtained from the final transformed matching M̃ by removing from it all edges that got deleted from the graph throughout the time window. By the description of the transformation process, the final M̃ is a superset of M_A, hence the matching output at the end of the window is a superset of the matching obtained from M_A by removing from it all edges that got deleted throughout the time window. Since M_A is a (1+ϵ)-MCM for G_t, Lemma 2.1 implies that the matching output at the end of the window is a (1 + O(ϵ))-MCM for the graph at that step. More generally, this argument shows that the matching obtained at the end of any time window is a (1 + O(ϵ))-MCM for the graph at that step.

Next, we argue that the matching M_out obtained at the start of any time window (as described above) is a (1 + O(ϵ))-MCM for the graph at that step. This assertion is trivially true for the first time window, where both the matching and the graph are empty. For any subsequent time window, this assertion follows from the fact that the matching at the start of a new time window is the one obtained at the end of the old time window, for which we have already shown that the required approximation guarantee holds. It follows that M_t^out is a (1 + O(ϵ))-MCM for G_t.

Finally, we bound the approximation guarantee of the output matching in the entire time window. (It suffices to consider the interior of the window.) Lemma 3.2 implies that |M̃_i| ≥ min{|M_out|, |M_A| − 1}, for any 1 ≤ i ≤ ⌈ϵ·|M_A|⌉. We argue that M̃_i is a (1 + O(ϵ))-MCM for G_t. If |M̃_i| ≥ |M_out|, then this assertion follows from the fact that M_out provides such an approximation guarantee. We henceforth assume that |M̃_i| ≥ |M_A| − 1. Recall that |M_A| = Ω(1/ϵ), where the constants hiding in the Ω-notation are sufficiently large, hence removing a single edge from M_A cannot hurt the approximation guarantee by more than an additive factor of, say, ϵ/2, i.e., less than ϵ. Since M_A provides a (1+ϵ)-MCM for G_t, it follows that M̃_i is indeed a (1 + O(ϵ))-MCM for G_t, which completes the proof of the above assertion. Consequently, Lemma 2.1 implies that M_{t+i}^out, which is obtained from M̃_i by removing from it all edges that got deleted from the graph between steps t + 1 and t + i, is a (1 + O(ϵ))-MCM for G_{t+i}.    


3.2.1  A general approximation guarantee.  In this section we consider the case of a general approximation parameter β > 1. The bound on the approximation guarantee of the output matching provided by Lemma 3.4, which becomes β(1 + O(ϵ)) here, remains unchanged. Recalling that M_out is a β(1 + O(ϵ))-MCM, it follows that the size of M_A cannot be larger than that of M_out by more than a factor of β(1 + O(ϵ)). Consequently, the number of computation steps and replacements performed per update step, namely, O(|M_A|/(ϵ·|M_out|)), is no longer bounded by O(1/ϵ), but rather by O(β/ϵ). To achieve a bound of O(1/ϵ) for a general β, we shall use a matching M″ different from M_A, which includes a possibly small fraction of the edges of M_A. Recall that we can output arbitrary k edges of the matching M_A in time (nearly) linear in k, for any integer k. Let M″ be a matching that consists of (up to) 2|M_out| arbitrary edges of M_A; that is, if |M_A| > 2|M_out|, M″ consists of 2|M_out| arbitrary edges of M_A, otherwise M″ = M_A. We argue that M″ is a β(1 + O(ϵ))-MCM for G_t. Indeed, if |M_A| > 2|M_out| the approximation guarantee follows from the approximation guarantee of M_out and the fact that M″ is twice larger than M_out, whereas in the complementary case the approximation guarantee follows from that of M_A. In any case it is immediate that |M″| ≤ 2|M_out|. (For concreteness, we assume that the time needed for storing the edges of M″ in an appropriate list is O(|M″|).) We may henceforth carry out the entire transformation process with M″ taking the role of M_A, and in this way guarantee that the number of computation steps and replacements to the output matching performed per update step is reduced from O(β/ϵ) to O(1/ϵ).
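The capping step itself is a one-liner; the following sketch (our own function name) makes it explicit.

def capped_target(M_algo, M_out):
    """Return (up to) 2*|M_out| arbitrary edges of the black box's
    matching M_algo (Section 3.2.1).  The capped matching is still
    essentially as good an approximation, and transforming toward it
    over a window of ~eps*|M_out| steps costs only O(1/eps) operations
    per step instead of O(beta/eps)."""
    cap = 2 * len(M_out)
    M_algo = list(M_algo)
    return M_algo if len(M_algo) <= cap else M_algo[:cap]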
3.3  Proof of Theorem 1.5.  The proof of Theorem 1.5 is very similar to the one of Theorem 1.4. Specifically, we derive Theorem 1.5 by making a couple of simple adjustments to the proof of Theorem 1.4 given above, which we sketch next. First, instead of using the transformation of Theorem 1.1, we use the one of Theorem 1.2, whose proof appears in Section 5. Second, the stability property of unweighted matchings used in the proof of Theorem 1.4 is that the maximum matching size changes by at most 1 following each update step. This stability property enables us in the proof of Theorem 1.4 to consider a time window of ⌈ϵ·|M_A|⌉ update steps, so that any β-MCM computed at the beginning of the window will provide (after removing from it all the edges that get deleted from the graph) a β(1 + O(ϵ))-MCM throughout the entire window, for any β ≥ 1. It is easy to see that this stability property generalizes for weighted matchings, where the maximum matching weight may change by an additive factor of at most ψ following each update step (normalizing the minimum edge weight to 1). (Recall that the aspect ratio of the dynamic graph is always bounded by ψ.) In order to obtain a β(1 + O(ϵ))-MWM throughout the entire time window, it suffices to consider a time window of ⌈ϵ·w(M_A)/ψ⌉ update steps, i.e., a time window shorter than that used for unweighted matchings by a factor of ψ, and as a result the update time of the resulting algorithm will grow from T + O(1/ϵ) to T + O(ψ/ϵ) and the worst-case recourse bound will grow from O(1/ϵ) to O(ψ/ϵ).

4 Optimality of the Recourse Bound

In this section we show that an approximation guarantee of 1 + ϵ requires a recourse bound of Ω(1/ϵ), even in the amortized sense and even in the incremental (insertion only) and decremental (deletion only) settings. We only consider edge updates, but the argument extends seamlessly to vertex updates. This lower bound of Ω(1/ϵ) on the recourse bound does not depend on the update time of the algorithm in any way. Let us fix ϵ to be any parameter satisfying ϵ = Ω(1/n), where n is the (fixed) number of vertices.

Consider a simple path P of odd length ℓ = 2k − 1, for an integer k = ⌈c/ϵ⌉, where c is a sufficiently small constant. (Thus P spans at least two but no more than O(1/ϵ) vertices.) There is a single maximum matching for P, of size k, which is also the only (1+ϵ)-MCM for P. After adding one new edge to each end of P, the maximum matching for the old path P does not provide a (1+ϵ)-MCM for the new path P′ of length ℓ + 2: the old maximum matching consists of the k even-indexed edges of P′, whereas the only maximum matching (and the only (1+ϵ)-MCM) for P′ consists of its k + 1 odd-indexed edges. The only way to restore a (1+ϵ)-approximation guarantee is thus by removing all k edges of the matching and adding the k + 1 remaining edges instead. One may carry out this argument repeatedly until the length of the path reaches, say, 2ℓ. The amortized number of replacements to the matching per update step throughout this process is Ω(k) = Ω(1/ϵ). Moreover, the same amortized bound, up to a small constant factor, holds if we start from an empty path instead of a path of length ℓ. We then delete all edges of the final path and start again from scratch, which may reduce the amortized bound by another small constant factor. In this way we get an amortized recourse bound of Ω(1/ϵ) for the fully dynamic setting.
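The following back-of-the-envelope Python counter (ours) reproduces the arithmetic of this argument; it counts forced replacements rather than simulating any particular algorithm.

def amortized_recourse(eps, c=0.25):
    """A path of odd length about c/eps grows by one edge at each end
    per two insertions; for such short paths the unique maximum matching
    is the only (1+eps)-MCM, and it flips parity at every extension, so
    all matched edges must be replaced."""
    start = int(c / eps) | 1          # odd starting length
    length, replacements, updates = start, 0, 0
    while length + 2 <= 2 * start:
        old = (length + 1) // 2       # size of the unique max matching
        length += 2                   # one new edge at each end
        updates += 2
        new = (length + 1) // 2       # the new matching is disjoint
        replacements += old + new     # everything is swapped out and in
    return replacements / updates     # about 0.2/eps for c = 0.25

print(amortized_recourse(0.01))       # ~19 replacements per update step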

To adapt this lower bound to the incremental setting, we construct vertex-disjoint copies P_1, P_2, ... of the aforementioned incremental path, one after another, in the following way. Consider the jth copy P_j, from the moment its length becomes ℓ and until it reaches 2ℓ. If at any moment during this gradual construction of P_j, the matching restricted to P_j is not the (only) maximum matching for P_j, we halt the construction of P_j and move on to constructing the (j+1)th copy P_{j+1}, and then subsequent copies, in the same way. A copy whose construction started but was halted is called incomplete; otherwise it is complete. (There are also empty copies, whose construction has not started yet.) For any incomplete copy P_j, the matching restricted to it is not the maximum matching for P_j, hence its approximation guarantee is worse than 1 + ϵ; more precisely, the approximation guarantee provided by any matching other than the maximum matching for P_j is at least 1 + Cϵ, for a constant C that can be made as large as we want by decreasing the aforementioned constant c, or equivalently, by decreasing the length of the copies. (Recall that the length of each copy is Θ(1/ϵ).) If the matching restricted to P_j is changed to the maximum matching for P_j at some later moment in time, we return to that incomplete copy and resume its construction from where we left off, thereby temporarily suspending the construction of some other copy P_{j′}. The construction of P_j may get halted again, in which case we return to handling the temporarily suspended copy P_{j′}, otherwise we return to handling P_{j′} only after the construction of P_j is complete, and so forth. In this way we maintain the invariant that the approximation guarantee of the matching restricted to any incomplete copy (whose construction is not temporarily suspended) is at least 1 + Cϵ, for a sufficiently large constant C. While incomplete copies may get completed later on, a complete copy remains complete throughout the entire update sequence. At the end of the update sequence no copy is empty or temporarily suspended, i.e., any copy at the end of the update sequence is either incomplete or complete. The above argument implies that any complete copy has an amortized recourse bound of Ω(1/ϵ), over the update steps restricted to that copy. Observe also that at least a constant fraction of the copies must be complete at the end of the update sequence, otherwise the entire matching cannot provide a (1+ϵ)-MCM for the entire graph, i.e., the graph obtained from the union of these copies. It follows that the amortized recourse bound over the entire update sequence is Ω(1/ϵ).

The lower bound for the incremental setting can be extended to the decremental setting using a symmetric argument to the one given above.

5 Proof of Theorem 1.2

The setup is as follows. Let M and M′ be two matchings for the same weighted graph G = (V, E, w). We denote by w(S) the total weight of an edge set S, i.e., w(S) = Σ_{e ∈ S} w(e), and assume in what follows that M′ is an improvement over M, i.e., that w(M′) > w(M). Our goal is to gradually transform M into (a possibly superset of) M′ via a sequence of constant-time operations to be described next, such that the matching obtained at any point throughout this transformation process is a valid matching for G of weight at least (1 − ϵ)·w(M), where ϵ > 0 is the given parameter, and also at least w(M) − w_max, where w_max = max_{e ∈ M ∪ M′} w(e). It is technically convenient to denote by M̃ the transformed matching, which is initialized as M at the outset, and is gradually transformed into M′; we refer to M and M′ as the source and target matchings, respectively.

We achieve this goal in two steps. In the first step (Theorem 5.1) we show that the weight of the transformed matching never goes below w(M) − w_max, and in the second step (Theorem 1.2) we show that the weight never goes below (1 − ϵ)·w(M).

Though the proof of Theorem 1.2 is technically involved, the idea behind it is simple enough, and we believe it is instructive to give a high-level overview of it before getting into the actual technical details of the proof. The first observation is that M ⊕ M′ is a relatively easy-to-analyze graph. Specifically, it is a union of vertex-disjoint alternating simple paths and cycles (a path in M ⊕ M′ is alternating if it consists of an edge in M followed by an edge in M′ and so on, or vice versa), except for some possible isolated vertices, which do not affect the gradual transformation in question. As w(M′) is assumed to be greater than w(M), we can then find an “improving path” P in M ⊕ M′, in the sense that the total weight of the edges of M′ in P is greater than the total weight of the edges of M in P. We then show that it is easy to find a “minimum vertex” on P with the following property: we can walk in one direction from that minimum vertex in a cyclic order (even if P is not a cycle) along P, deleting the edges of M along P and adding the edges of M′ along P essentially one-by-one (this process of deleting the edges of M and adding the edges of M′ essentially one-by-one will be explained in detail in the proof). This will only increase the weight of M̃, except for a possible small loss, which we refer to as the deficit, and which is bounded above by w_max. We are now ready to state and prove Theorem 1.2, which we will do in steps.

Theorem 5.1.

One can gradually transform M into (a possibly superset of) M′ via a sequence of phases, each running in constant time (and thus making at most a constant number of changes to the matching), such that the matching obtained at the end of each phase throughout this transformation process is a valid matching for G of weight at least w(M) − w_max, where w_max = max_{e ∈ M ∪ M′} w(e).

Proof:  Recall that the transformed matching, denoted by M̃, is initialized to the source matching M. An edge in M′ \ M̃ is good if the sum of the weights of the edges in M̃ that are adjacent to it is smaller than its own weight; otherwise it is bad. (These notions generalize the respective notions from Section 3 for unweighted matchings.) Handling good edges is easy: For any good edge e, move it from M′ to M̃, and then delete from M̃ the at most two edges adjacent to e; thus the weight of M̃ may only increase as a result of handling a good edge. Since every edge in M is deleted at most once from M̃, and every edge in M′ is added at most once to M̃, the total time of handling the good edges throughout the algorithm is O(|M| + |M′|).

Next we describe an algorithm that proves Theorem 5.1. During the execution of this algorithm, some edges are moved from M′ to M̃, which triggers the removal of edges from M̃, and as a result some bad edges become good. Similarly to the treatment of Section 3 for the unweighted case, we can use a data structure for maintaining the good edges in M′ \ M̃, so that testing if there is a good edge and returning an arbitrary one if one exists can be done in O(1) time.

Just as in the unweighted case, here too we always try to handle good edges as described above, as long as one exists. The difference between the unweighted case and the weighted one is in how bad edges are handled: In the unweighted case bad edges are handled greedily (in an obvious way), whereas the weighted case calls for a much more intricate treatment, as described next.

We believe it is instructive to refer to the edges of M′ as red edges and to those of M as blue edges, and our transformation will delete blue edges from M̃ (recall that M̃ is initialized to M) and copy red edges from M′ to M̃ so that the invariant in Theorem 5.1 is always maintained.

We denote the symmetric difference of two sets A and B by A ⊕ B, i.e., A ⊕ B = (A \ B) ∪ (B \ A). We use the following well-known observation of Hopcroft and Karp [31].

Lemma 5.2.

M ⊕ M′ is a union of vertex-disjoint alternating blue-red (simple) paths and alternating blue-red (simple) cycles.

The colored weight of a subgraph H of M ⊕ M′, denoted by cw(H), is defined as the difference between the sum of weights of the red edges in H and the sum of weights of the blue edges in H.

Since w(M′) > w(M), we have cw(M ⊕ M′) > 0, hence the sum of the colored weights of the alternating blue-red paths and cycles in M ⊕ M′ is positive. The following lemma shows a reduction from a general M ⊕ M′ to the case where M ⊕ M′ is a simple blue-red path or cycle of positive colored weight.

Lemma 5.3.

If cw(M ⊕ M′) > 0, we may assume that M ⊕ M′ is an alternating blue-red path or cycle of positive colored weight.

Proof:  Let Q_1, ..., Q_t denote the simple alternating blue-red paths and cycles in M ⊕ M′, ordered so that the paths and cycles of positive colored weight appear first, and only then the paths and cycles of non-positive colored weight. In other words, the paths and cycles of positive colored weight have smaller indices than those of non-positive colored weight. As cw(M ⊕ M′) > 0 and the Q_j are ordered positive first, it easily follows that

cw(Q_1) + cw(Q_2) + ⋯ + cw(Q_i) > 0, for every 1 ≤ i ≤ t.    (1)

Therefore, after treating Q_1, ..., Q_{i−1}, the weight of M̃ has increased by cw(Q_1) + ⋯ + cw(Q_{i−1}), which is a positive value. When treating the next path Q_i, we add this non-negative value to the weight of the first red edge in Q_i, which allows us to view Q_i as having a positive colored weight. To complete the proof of this lemma, we note that the total transformation is obtained by concatenating all the transformations of Q_1, ..., Q_t in the order in which they appear.    

By Lemma 5.3, we may assume that M ⊕ M′ is a path P consisting of k pairs of blue-red edges (b_1, r_1), ..., (b_k, r_k), where r_i is adjacent to b_i and b_{i+1}, for 1 ≤ i ≤ k, and we allow for b_1 or b_{k+1} (or both) to be empty; we will make this assumption henceforth.

The algorithm iteratively changes M̃ by deleting from M̃, in iteration i, the blue edges b_i and b_{i+1} (if not previously deleted from M̃), for i = 1, ..., k, thus allowing for an addition of the red edge r_i to M̃. As this basic procedure is used repeatedly below, we include its pseudo-code, and note that implementing it can be easily done in time O(k). We also note that the procedure changes the auxiliary matching M̃, and by keeping all the intermediate values of M̃ throughout the process, over all the paths and cycles in M ⊕ M′, we obtain the entire transformation of M into M′.
Procedure Treat(P):
For i = 1 to k:

  1. Delete the blue edge b_i (if it exists) and the blue edge b_{i+1} (if it exists) from M̃.

  2. Add the edge r_i to M̃.
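For concreteness, here is a direct Python rendering of the procedure (our own, with 0-indexed lists; blues may contain None in place of a missing b_1 or b_{k+1}):

def treat(blues, reds, matching):
    """In iteration i, remove the blue edges b_i and b_{i+1} (when they
    exist and were not removed before) and only then add the red edge
    r_i, so the matching stays valid after every constant-time phase.
    Yields the intermediate matchings, one per phase."""
    for i, r in enumerate(reds):
        for b in (blues[i] if i < len(blues) else None,
                  blues[i + 1] if i + 1 < len(blues) else None):
            if b is not None:
                matching.discard(b)   # no-op if b was already removed
        matching.add(r)
        yield set(matching)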

For a path P, we denote by P_i the alternating blue-red i-prefix of P, for any 1 ≤ i ≤ k, which is the subpath of P that consists of the first i pairs of blue-red edges. The colored weight of the alternating blue-red i-prefix of P is given by

cw(P_i) = (w(r_1) − w(b_1)) + (w(r_2) − w(b_2)) + ⋯ + (w(r_i) − w(b_i)),

for 1 ≤ i ≤ k (a missing blue edge is counted as having weight 0), with the convention that P_0 is the empty path and cw(P_0) = 0. Let

i* = argmin_{0 ≤ i ≤ k} cw(P_i).    (2)
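Computing all the prefix colored weights, and with them the minimizing index i*, takes a single linear pass, as the following sketch (ours) shows:

def min_prefix(blues, reds, w):
    """Return (i_star, cw(P_{i_star})): the index of the blue-red prefix
    of minimum colored weight, with cw(P_0) = 0 for the empty prefix.
    A missing blue edge (None) contributes weight 0."""
    i_star, best, acc = 0, 0.0, 0.0
    for i in range(len(reds)):
        b = blues[i] if i < len(blues) else None
        acc += w(reds[i]) - (w(b) if b is not None else 0.0)
        if acc < best:
            i_star, best = i + 1, acc
    return i_star, best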

First case: i* = 0.

In this case we can add the red edges of P and delete the blue edges of P one after another by making a call to Procedure Treat(P). Concretely, after i iterations of the for loop in Procedure Treat(P), for 1 ≤ i ≤ k, the value of w(M̃) is changed by

cw(P_i) − w(b_{i+1}) ≥ cw(P_{i*}) − w_max = −w_max,

hence the value of w(M̃) never decreases by more than w_max. Moreover, by our assumptions that i* = 0 and cw(P) > 0, at the end of the execution of the procedure the value of w(M̃) is changed by cw(P) > 0. This shows that the invariant of Theorem 5.1 is always maintained.

Second case: i* ≥ 1.

Since i* ≥ 1 attains the minimum in (2), it follows that cw(P_{i*}) ≤ cw(P_0) = 0. Let P_pre denote the alternating blue-red i*-prefix of P, namely P_{i*}. Similarly, we define the alternating blue-red s-suffix of P, for any 0 ≤ s ≤ k, as the subpath of P that consists of the last s pairs of blue-red edges, and let P_suf denote the alternating blue-red (k − i*)-suffix of P, namely, the subpath consisting of the pairs (b_{i*+1}, r_{i*+1}), ..., (b_k, r_k).