1 Introduction
We initiate the study of parameterized distributed algorithms for graph optimization problems, which are fundamental in both the sequential and distributed settings. Broadly speaking, in these problems we aim to find some underlying graph structure (e.g., a set of nodes) that abides by a set of constraints (e.g., covers all edges) while minimizing the cost.
While parameterized algorithms have received much attention in the sequential setting, not even the most fundamental problems (e.g., Vertex Cover) have distributed parameterized counterparts. We present parameterized upper and lower bounds for fundamental problems such as Minimum Vertex Cover, Maximum Matching, and many more.
1.1 Motivation
In the sequential setting, some combinatorial optimization problems are known to have polynomial-time algorithms (e.g., Maximum Matching), while others are NP-complete (e.g., Minimum Vertex Cover). To deal with the hardness of finding an optimal solution to these problems, the field of parameterized complexity asks what is the best running time we can achieve with respect to some parameter of the problem, instead of the size of the input instance. This parameter, usually denoted by k, is typically taken to be the size of the solution. Thus, even a running time that is exponential in k may be acceptable for small values of k.
In the distributed setting, a network of nodes, represented by a communication graph G, aims to solve some graph problem with respect to G. Computation proceeds in synchronous rounds, in each of which every vertex can send a message to each of its neighbors. The running time of the algorithm is measured as the number of communication rounds it takes to finish. There are two primary communication models: LOCAL, which allows sending messages of unbounded size, and CONGEST, which limits messages to O(log n) bits (where n is the number of nodes).
The above definition implies that the notion of "hardness" in the distributed setting is different. Because we do not take the computation time of the nodes into account, we can solve any problem in O(D) rounds of the LOCAL model (where D is the diameter of the graph), and in O(m) rounds of the CONGEST model (where m is the number of edges). This is achieved by having every node in the graph learn the entire graph topology and then solve the problem locally. Indeed, there exist many "hard" problems in the distributed setting, where the dependence on n and D is rather large.
There are several lower bounds for distributed combinatorial optimization problems in both the LOCAL and CONGEST models. For example, [SHK11] provides lower bounds of Ω̃(√n + D) (where Ω̃ hides polylogarithmic factors in n) for a range of problems including MST, min-cut, min s-t cut, and many more. There are also bounds in the CONGEST model for problems such as approximating the network’s diameter [ACK16] and finding weighted all-pairs shortest paths [CKP17]. Recently, the first near-quadratic (in n) lower bounds were shown by [CKP17] for computing exact Minimum Vertex Cover and Maximum Independent Set.
The above shows that, similar to the sequential setting, the distributed setting also has many "hard" problems that can benefit from the parameterized complexity lens. Recently, the study of parameterized algorithms for problems such as Vertex Cover and Maximum Matching was also initiated in the streaming environment by [CCHM15, CCE16]. This provides further motivation for our work, showing that non-standard models of computation can indeed benefit from parameterized algorithms.
1.2 Our results
Given the above motivation, we consider the following fundamental problems (see Section 2 for formal definitions): Minimum Vertex Cover (MVC), Maximum Independent Set (MaxIS), Minimum Dominating Set (MDS), Minimum Feedback Vertex Set (MFVS), Maximum Matching (MaxM), Minimum Edge Dominating Set (MEDS), and Minimum Feedback Edge Set (MFES). We use P to denote this problem set.
The problems are considered in both the LOCAL and CONGEST models; we present lower bounds on the round complexity of solving parameterized problems in both models, together with optimal and near-optimal upper bounds. We also extend existing results [KMW16, CKP17] to the parameterized setting. Some of these extensions are rather direct, but they are presented to provide a complete picture of the parameterized distributed complexity landscape.
Perhaps the most surprising aspect of our contribution is that our results extend beyond the scope of parameterized problems. We show that any LOCAL algorithm that computes a (1+ε)-approximation for the above problems must take Ω(1/ε) rounds. Joined with the algorithm of [GKM17] and the lower bound of [KMW16], this settles the round complexity of such approximations. We also show that our parameterized approach reduces the runtime of exact and approximate CONGEST algorithms for MVC and MaxIS if the optimal solution is small, without knowing its size beforehand. Finally, we propose the first deterministic CONGEST algorithms that approximate these problems within a factor strictly smaller than 2; further, for one of them, no such randomized algorithm is known either.
We note that considering parameterized algorithms in the distributed setting presents interesting challenges and unique opportunities compared to the classical sequential environment. In essence, we consider the communication cost of solving a parameterized problem on a network. On one hand, we have many more resources (and unlimited computational power), but on the other, we need to deal with synchronization and bandwidth restrictions.
Parameterized problems
We consider combinatorial minimization (maximization) problems where the size of the solution is upper bounded (lower bounded) by k. A parameterized distributed algorithm is given a parameter k (which is known to all nodes), and must output a solution of size at most k (at least k for maximization problems) if such a solution exists. Otherwise, all vertices must output that no such solution exists. A similar definition is given for parameterized approximation problems (see Section 2 for more details).
1.2.1 Lower bounds
We show that MVC can be reduced to MFVS and MFES via standard reductions which also hold in the CONGEST model. There are no known results for MFVS and MFES in the distributed setting, so the above reductions, albeit simple, immediately imply that all existing lower bounds for MVC also apply to MFVS and MFES. Using the fact that MFVS and MFES have a global nature, we can achieve stronger lower bounds for these problems. Specifically, we show that no reasonable approximation can be achieved for MFVS and MFES in o(D) rounds in the LOCAL model. This is formalized in the following theorem.

For any k, any algorithm that solves MFVS or MFES on a graph with a solution of size k to within any bounded additive error must take Ω(D) rounds in the LOCAL model.
Our main result is a novel approximation lower bound for all problems in P. It states that any (1+ε)-approximation algorithm (deterministic or randomized) in the LOCAL model for any problem in P requires Ω(1/ε) rounds. Usually, lower bounds in the distributed setting are given as a function of the input (its size or maximum degree), and not as a function of the approximation ratio. Our lower bound also applies to MaxCut (where we wish to divide the vertices into two sets such that the number of edges whose endpoints are in different sets is maximized) and to MaxDiCut (where the graph is directed and we wish to divide the vertices into two sets such that the number of edges directed from the first set to the second is maximized), whose parameterized variants are not considered in this paper, and thus are not in P. We state the following theorem.

For any ε > 0, any Monte-Carlo LOCAL algorithm that computes a (1+ε)-multiplicative approximation with probability at least 2/3 for a problem in P requires Ω(1/ε) rounds.
Our lower bound also has implications for non-parameterized algorithms. The problem of finding a maximum matching in the distributed setting is a fundamental problem that has received much attention [LPR09, LPP15, Fis17, BCGS17]. Despite the existence of a variety of approximation algorithms for the problem, no non-trivial result is known for computing an exact solution.
Our lower bound also has implications for computing a (1+ε)-approximation of MaxM and related problems in the LOCAL model. Combined with the lower bound of [KMW16], it yields a lower bound for the problem as a function of both ε and n. Together with the result of [GKM17], which presents a matching upper bound, this settles the complexity of computing a (1+ε)-approximation.
Finally, we show a simple and generic way of extending lower bounds to the parameterized setting. The issue with many of the existing lower bounds (e.g., [KMW16, CKP17]) is that the size of the optimal solution in the lower-bound graphs is linear in n (up to polylogarithmic factors). Thus, it might be the case that when the solution is substantially smaller than the input, a much faster running time is achievable. We show that by simply attaching a large graph to the lower-bound graph, we can achieve the same lower bounds as a function of k, rather than n. This allows us to restate our approximation lower bound and the bounds of [KMW16] and [CKP17] as functions of k. We also show that these lower bounds hold for parameterized problems as defined in this paper.

There exists a family of graphs such that for any ε > 0, any Monte-Carlo LOCAL algorithm that computes a (1+ε)-multiplicative approximation with probability at least 2/3 for a parameterized problem in P requires Ω(1/ε) rounds. Here, the number of nodes can be arbitrarily larger than the solution size k.

There exists a constant c > 0 such that for any k, there exists a family of graphs such that any algorithm that solves parameterized MVC on these graphs in the CONGEST model requires a number of rounds near-quadratic in k, where the number of nodes can be arbitrarily larger than k.

There exists a family of graphs, for sufficiently large k, such that any algorithm that computes a constant approximation for parameterized MVC or MaxM on these graphs in the LOCAL model requires Ω(√(log k / log log k)) rounds, where the number of nodes can be arbitrarily larger than k.
1.2.2 Upper bounds
We first define the family of problems whose optimal solution is lower bounded by the graph diameter (DLB; see Section 2 for a formal definition). If the optimal solution size k is small, then for minimization DLB problems the diameter is also small, and we can learn the entire graph in O(k) LOCAL rounds. The challenge is actually the case when the optimal solution is large, and all vertices in the graph must output that no k-sized solution exists. Here we introduce an auxiliary result which we use as a building block for all of our algorithms.

There exists an O(k)-round deterministic algorithm in the CONGEST model that terminates with all vertices outputting SMALL if the diameter is at most k, and LARGE if the diameter is larger than 4k. If the diameter is between k and 4k, the vertices answer unanimously, but may return either SMALL or LARGE.
Using the above, we can check the diameter, have all vertices reject if it is too large, and otherwise have a leader learn the entire graph in O(k) rounds. As for maximization problems (such as MaxM and MaxIS), the challenge is somewhat different, as the parameter does not bound the diameter of legal instances. We first check whether the diameter is small or large using the above test. If it is small, we can learn the entire graph. Otherwise, we note that any maximal solution has size at least k and is therefore a legal solution to the parameterized problem. Thus, we can efficiently compute a large enough solution by having every node/edge which is a local minimum (according to identifiers) enter the independent set/matching. We repeat this k times and finish (this also works in the CONGEST model). We formalize this in the following theorem.

There exist O(k)-round LOCAL algorithms for MaxM, MaxIS, and any minimization problem in DLB.
Next, we consider the problems of MVC and MaxIS as case studies for the CONGEST model. We show a deterministic upper bound of O(k²) rounds for both (for MVC this is near-tight according to Theorem LABEL:thm:ckp_small_k). We also note that, as the complement of an independent set is a vertex cover, bounds for one problem translate to the other; this means that if k is large, the problem is easy to solve. In the CONGEST model, we first verify that the diameter is indeed small; if it is large, we proceed as we did in the LOCAL model for both problems. For MVC, we use a standard kernelization procedure to reduce the size of the graph. This is done by adding every node of degree larger than k into the cover. The remaining graph has a bounded diameter and a small number of edges; thus we use a leader node to collect the entire graph. The problem of MaxIS is more challenging, as we do not use existing kernelization techniques. Instead, we introduce a new augmentation-based approach for the parameterized problem.
We then show how, with the help of randomization, we can achieve a faster running time for both problems. This can be a substantial, up to quadratic, improvement, and it brings the round complexity to within a small factor of the lower bound.
Approximations
We also consider approximation algorithms in the CONGEST model for parameterized MVC and MaxIS. We make non-trivial use of the Fidelity Preserving Transformation framework [FKRS18] and simultaneously apply multiple reduction rules that reduce the parameter. Using this technique, we derive approximation algorithms that run faster than our exact algorithms. We summarize our other results in Table 1.
Table 1: A summary of our upper and lower bounds for each problem variant, in the LOCAL and CONGEST models.
Applications to non-parameterized algorithms
We show that our algorithms can also imply faster non-parameterized algorithms when the optimal solution is small, without needing to know its size. Specifically, we combine our exact and approximation algorithms for parameterized MVC and MaxIS with doubling and a partial binary search for the value of k. Additionally, our solutions can determine whether to run the existing non-parameterized algorithm or follow the parameterized approach. This results in an algorithm whose runtime is the minimum between that of current approaches and the number of rounds required for the binary search. Our results are presented in Table 2.
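As a minimal sketch of the doubling idea (all names here are hypothetical; the real algorithms interleave this with a partial binary search and with the fallback to the non-parameterized algorithm):

```python
def solve_without_knowing_k(try_with_budget, n):
    """Doubling search for the unknown optimum: run the parameterized
    algorithm with budgets k = 1, 2, 4, ... until it reports a solution.
    `try_with_budget(k)` stands in for a parameterized algorithm that
    returns a solution of size <= k, or None if no such solution exists."""
    k = 1
    while k < n:
        sol = try_with_budget(k)
        if sol is not None:
            return sol
        k *= 2
    return try_with_budget(n)

# Toy stand-in: pretend the (unknown) optimum has size 5.
OPT = 5
budgets = []
def try_with_budget(k):
    budgets.append(k)
    return list(range(OPT)) if k >= OPT else None

solution = solve_without_knowing_k(try_with_budget, n=100)
assert len(solution) == OPT
assert budgets == [1, 2, 4, 8]  # geometric budgets until the first success
```

Since the budgets grow geometrically, the total cost is dominated by the last (successful) call, whose budget is at most twice the optimum.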
We also present deterministic algorithms for these problems in the CONGEST model with an approximation ratio strictly better than 2. Namely, our algorithms terminate in a number of rounds that depends only on the size k of the optimal solution, which is not known to the algorithm. These are the first non-trivial deterministic approximation results of this kind for these problems.
Table 2: The round complexities of our exact and approximate algorithms.
1.3 Related work
Distributed Matching and Covering
Both MVC and MaxM have received significant attention in the distributed setting; we survey the results relevant to this paper, starting with existing lower bounds. In [CKP17], a family of graphs of increasing size is presented such that computing an exact minimum vertex cover for any graph in the family requires a near-quadratic (in n) number of rounds in the CONGEST model. In [KMW16], a family of graphs is introduced such that any constant approximation for MVC requires Ω(√(log n / log log n)) rounds in the LOCAL model. Both bounds hold for deterministic and randomized algorithms.
For MVC, no non-trivial exact distributed algorithms are known. As for approximations, an optimal (for constant ε) (2+ε)-approximate deterministic algorithm for the weighted variant in the CONGEST model is given in [BCS17]. [BEKS18] then improved the dependency on ε, which also results in a faster 2-approximation algorithm by an appropriate setting of ε. In the LOCAL model, a randomized (1+ε)-approximation in a polylogarithmic number of rounds is due to [GKM17].
For MaxM, there are no known lower bounds for the exact problem, while for approximations, the best known lower bound is due to [KMW16]. No non-trivial exact solution is known for the problem in either the LOCAL or the CONGEST model. As for approximations, much is known; we survey the results for the unweighted case. An optimal (for constant ε) randomized approximation in the CONGEST model is given in [BCGS17]. As for deterministic algorithms, the best known results in the LOCAL model are due to [Fis17], presenting a maximal matching algorithm and a (2+ε)-approximation algorithm, both with polylogarithmic round complexity. As for a deterministic maximal matching in the CONGEST model, to the best of our knowledge, the best known approach is to color the edges and go over each color class, resulting in a running time linear in the maximum degree, as proposed in [BCGS17].
Distributed parameterized algorithms
Parameterized distributed algorithms were previously considered for detection problems. Namely, in [KR17] it was shown that detecting paths and trees on k nodes can be done deterministically in a number of rounds that depends only on k in the Broadcast CONGEST model. Similar, albeit randomized, results were obtained independently by [EFF17] in the context of distributed property testing [CFSV16].
2 Preliminaries
In this paper, we consider the classic and parameterized variants of several popular graph packing and covering problems, in the LOCAL and CONGEST models. A solution to these problems is either a vertex set or an edge set. For vertex-set solutions, we require that each vertex knows whether it is in the solution or not. For edge-set problems, each vertex must know which of its edges are in the solution, and both endpoints of an edge must agree. Computation takes place in synchronous rounds, during which each node first receives messages from its neighbors, then performs local computation, and finally sends messages to its neighbors. The messages a node sends to different neighbors may differ from one another; the size of messages is unbounded in the LOCAL model and O(log n) bits in the CONGEST model. In both models, the communication graph is identical to the graph on which the problem is solved; that is, two nodes may send messages to each other only if they share an edge.
Let G = (V, E) (a directed graph in the case of MaxDiCut) denote the target graph. We consider S ⊆ V to be a feasible solution to the following vertex-set problems if:

Minimum Vertex Cover (MVC). Every edge of G has at least one endpoint in S.

Maximum Independent Set (MaxIS). No two vertices in S are adjacent in G.

Minimum Dominating Set (MDS). Every vertex of G is either in S or adjacent to a vertex in S.

Minimum Feedback Vertex Set (MFVS). The graph obtained from G by removing the vertices of S is acyclic.
Next, we call S ⊆ E a feasible solution to the following edge-set problems if:

Maximum Matching (MaxM). No two edges in S share an endpoint.

Minimum Edge Dominating Set (MEDS). Every edge of G shares an endpoint with some edge in S.

Minimum Feedback Edge Set (MFES). The graph obtained from G by removing the edges of S is acyclic.
We use P to denote the above set of problems, and we consider the parameterized variant of each problem in P. Given a parameter k, a parameterized algorithm computes a solution of size bounded by k if one exists; otherwise, the nodes must report that no such solution exists. We note that all nodes know k when the algorithm starts, and that the result may not be the optimal solution to the problem.
Definition 2.1.
An algorithm for a parameterized minimization (respectively, maximization) problem must find a solution of size at most k (respectively, at least k) if one exists. If no such solution exists, then all nodes must report so when the algorithm terminates.
We now generalize our definition to account for randomized and approximation algorithms.
Definition 2.2.
For α ≥ 1, an α-approximation algorithm for a minimization (respectively, maximization) problem must find a solution of size at most αk (respectively, at least k/α) if a solution of size k exists. Otherwise, all nodes must report that no k-sized solution exists.
Definition 2.3.
Given some α ≥ 1 and p ∈ (0, 1], an α-approximation Monte Carlo algorithm with success probability p for a problem terminates with an α-approximate solution with probability at least p.
We now define the notion of Diameter-Lower-Bounded (DLB) problems. Intuitively, this class contains all problems whose optimal solution size is at least linear in the diameter, which allows efficient algorithms for their parameterized variants. For example, DLB includes MVC, MaxIS, MDS, MaxM, and MEDS, but not MFVS and MFES. Roughly speaking, these problems admit efficient LOCAL parameterized algorithms, as the parameter limits the radius that each node needs to see in order to solve the problem.
Definition 2.4.
An optimization problem A is in DLB if for any input graph of diameter D, the size of an optimal solution to A is Ω(D).
3 Lower Bounds
In this section, we present lower bounds for a large family of classical and parameterized distributed graph problems.
3.1 Non-parameterized distributed problems
Here, we provide a construction that implies lower bounds for approximating all problems in P. Our lower bounds dictate that any algorithm that computes a (1+ε)-approximation, for any ε > 0, requires Ω(1/ε) rounds in the LOCAL model, even for randomized algorithms. We note that for all these problems, no super-logarithmic lower bound was known for such approximations (in contrast to what is known for exact solutions). Further, for some problems, such as MaxM, no such lower bound is known even for exact solutions in the CONGEST model.
We then generalize our approach and show that even in the parameterized variants of the problems (where the optimal solution is bounded by k), the same number of rounds is needed on arbitrarily large graphs (with n arbitrarily larger than k). Our approach is based on the observation that for any algorithm with few rounds, there exists an input graph consisting of many distinct long paths, such that the algorithm has an additive error on each of the paths, which accumulates over all paths in the construction. Intuitively, we show that for any set of node identifiers and any such algorithm, it is possible to assign identifiers to nodes such that the approximation ratio exceeds 1 + ε. Our goal in this section is to prove the following theorem.
Theorem 3.1.
For any ε > 0, any Monte-Carlo LOCAL algorithm that computes a (1+ε)-multiplicative approximation with probability at least 2/3 for a problem in P requires Ω(1/ε) rounds.
3.1.1 Basic Construction
We start with lower bounds for the non-parameterized variants of the problems, where the optimal solution may be of size Θ(n). For integer parameters p and l, the graph B(p, l) consists of p disjoint paths with l vertices each, whose initial nodes are connected to a central vertex c. We also consider the directed variant of B(p, l), in which each edge is oriented away from c; this variant is used for MaxDiCut. We present our construction in Figure 1.
B(p, l) has pl + 1 vertices and a diameter of 2l (assuming p ≥ 2). Observe that the optimal solutions of MVC, MaxIS, MDS, MaxM, and MEDS on B(p, l) (or on its directed variant, for MaxDiCut) have sizes of Θ(pl). For every path, we consider its longest subpaths of odd and of even length that do not include the path's first and last vertices. Given a path of vertices with assigned identifiers, we consider a reversal in the order of identifiers; for example, if the identifiers assigned to the path were 1, 2, 3, then those of its reversal would be 3, 2, 1. This reversal of identifiers along a path plays a crucial role in our lower bounds. Intuitively, if the number of rounds is small compared to the length of a subpath and we reverse it, the outputs of the middle vertices do not change, as they observe the mirror image of their previous view. We show that this implies that on either the original identifier assignment or its reversal, the algorithm must find a suboptimal solution on the i'th path (where the choice of whether to flip the odd or the even subpath depends on whether the output is a vertex set or an edge set). In turn, this sums up to a solution that is far from the optimum by an additive term of Ω(p). As the optimal solution is of size Θ(pl), this implies a multiplicative error of 1 + Ω(1/l). We then show that for arbitrarily large graphs with an optimal solution of this size, any algorithm with few rounds must have an additive error of Ω(p).
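As a concrete aid, the construction can be generated and sanity-checked with the following sketch (the builder and BFS helpers are our own, hypothetical names; vertex 0 plays the role of the central vertex c):

```python
from collections import deque

def build_construction(p, l):
    """The lower-bound graph: p disjoint paths with l vertices each, where
    the first vertex of every path is attached to a central vertex 0."""
    adj = {0: []}
    for i in range(p):
        start = 1 + i * l
        for j in range(l):
            adj[start + j] = []
        adj[0].append(start)
        adj[start].append(0)
        for j in range(1, l):
            adj[start + j].append(start + j - 1)
            adj[start + j - 1].append(start + j)
    return adj

def diameter(adj):
    """Exact diameter via BFS from every vertex (fine for small examples)."""
    best = 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        best = max(best, max(dist.values()))
    return best

adj = build_construction(p=4, l=5)
assert len(adj) == 4 * 5 + 1   # p*l path vertices plus the center
assert diameter(adj) == 2 * 5  # end of one path to the end of another
```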
Lemma 3.2.
Let p and l be sufficiently large integers, and let I be a set of node identifiers. For any deterministic LOCAL algorithm for MVC, MaxIS, MDS, MaxM, MEDS, MaxCut, or MaxDiCut that terminates on B(p, l) in a number of rounds small compared to l, there exists an assignment of vertex identifiers from I for which the algorithm has an additive error of Ω(p).
Proof.
First, let us characterize the optimal solutions for each of the problems on B(p, l) (or its directed variant, for MaxDiCut). For simplicity of presentation, we assume the path lengths are even, although the result holds for any p and l. In each case, the optimal solution is determined by the parity of the distance from the central vertex c; for example, the only optimal solution to MVC picks exactly the vertices whose distance from c is odd.
Next, consider a path and the case where the j'th vertex along it has identifier j. Consider a vertex near the middle of an inner subpath: within its view, it has nodes with smaller identifiers on one port (side) and larger identifiers on the other. If we reverse the subpath, the view of this vertex remains exactly the same: it observes the exact same topology and vertex identifiers in both cases. Since the algorithm is deterministic, its output must remain the same for both identifier assignments, even though its position along the path has changed! Similarly, its counterpart on the other side of the subpath's center also provides the same output in both cases, so the reversal effectively switches the outputs of two adjacent positions. This implies that the output of the algorithm is suboptimal on either the original assignment or the reversed one for MVC, MaxIS, MDS, MaxCut, and MaxDiCut. We illustrate this reversal in Figure 2. Repeating the argument with the subpath of the other parity, we get that its reversal switches the outputs of two adjacent edges, making the algorithm err for MaxM (as one of the two edges is in the optimal solution while the other is not).
For MEDS, every pair of vertices that share an edge must agree on whether the edge is in the solution or not. In an optimal solution, one of each pair of consecutive edges must be in the dominating set while the other must not. However, reversing the identifiers switches the roles of the two edges, changing which of them is added, and implying an error on either the original assignment or the reversed one.
As we showed that there exists an identifier assignment that “fools” the algorithm on every path, we conclude that the algorithm has an additive error of Ω(p). ∎
Lower bounds for MFVS and MFES follow from the reduction from MVC in Theorem LABEL:thm:_mvc_reduction.
Since the optimal solution to all problems on B(p, l) is of size Θ(pl), we have that such algorithms have an approximation ratio of at least 1 + Ω(1/l). Plugging in l = Θ(1/ε), we conclude the following.
Corollary 3.3.
For any ε > 0, any deterministic LOCAL algorithm that computes a (1+ε)-multiplicative approximation for any problem in P requires Ω(1/ε) rounds.
Next, we use Yao’s Minimax Principle [Yao77] to extend the lower bound to randomized algorithms and prove Theorem 3.1.
Proof of Theorem 3.1.
In the proof of Lemma 3.2, we showed that there is an identifier assignment that forces any deterministic algorithm to solve each path suboptimally. For randomized algorithms, this argument does not directly work. However, we show that by applying Yao’s Minimax Principle [Yao77] we can get similar bounds for Monte Carlo algorithms. That is, we show a probability distribution over inputs such that every deterministic algorithm that executes for the same number of rounds incurs a multiplicative error with high probability. Intuitively, we consider an input distribution that randomly selects for each path whether to use the original ordering or whether to reverse the relevant subpath (depending on the problem at hand, as in Lemma 3.2). In essence, this gives the algorithm a chance of at most half of finding the optimal solution on each path.
Formally, fix an initial vertex identifier assignment. For the vertex-set problems (respectively, the edge-set problems), consider the inputs obtained by reversing the relevant subpath of any subset of the p paths. Next, consider the uniform distribution that gives each of these 2^p inputs probability 2^{-p}. A similar argument to that of Lemma 3.2 shows that the probability that the algorithm computes the optimal solution on any fixed path is at most half. We now apply a simplified version of the Chernoff bound, which states that a binomial random variable falls below half of its expectation with probability that is exponentially small in the expectation. That is, any deterministic algorithm errs on Ω(p) of the paths with probability at least 1 − 2^{−Ω(p)} (for inputs chosen according to the above distribution). This means that on Ω(p) of the paths, the algorithm fails to find the optimal solution and incurs an additive error of at least one. Therefore, since the optimal solution is of size Θ(pl), the approximation ratio obtained by the deterministic algorithm is at least 1 + Ω(1/l). Thus, by the Minimax principle, any Monte Carlo algorithm with the same number of rounds also has an approximation ratio of at least 1 + Ω(1/l). Choosing l = Θ(1/ε) establishes the theorem. ∎
3.2 Lower bounds for MFVS and MFES
We first consider the problems of finding a minimum feedback vertex set and a minimum feedback edge set in a graph.
4 Upper Bound Warmup – Parameterized Diameter Approximation
In this section, we illustrate the concept of parameterized algorithms with the classic problem of diameter approximation. This procedure will also play an important role in all of our algorithms. Computing the exact diameter of a graph in the CONGEST model is costly; it is even known that approximating the diameter to within a factor better than 3/2 requires Ω̃(n) CONGEST rounds [ACK16, BK18]. Computing a 2-approximation of the diameter is straightforward in O(D) rounds, by taking the depth of a BFS tree rooted at an arbitrary vertex. However, we wish to devise algorithms whose round complexity is bounded by some function of k, even if no solution of size k exists. Therefore, we now show that it is possible to compute a constant-factor approximation for the parameterized version of the diameter computation problem.
Theorem 4.1.
There exists an O(k)-round deterministic algorithm in the CONGEST model that terminates with all vertices outputting SMALL if the diameter is at most k, and LARGE if the diameter is larger than 4k. If the diameter is between k and 4k, the vertices answer unanimously, but may return either SMALL or LARGE.
Proof.
Our algorithm starts with k rounds, in each of which every vertex broadcasts the minimal identifier it has learned about so far (initially, its own identifier). After this stage terminates, each vertex v has learned the minimal identifier in its k-hop neighborhood, denoted m_v.
Next follow 4k rounds in which each vertex broadcasts the minimal and the maximal m-values it has seen so far. When this ends, each vertex returns SMALL if its minimal and maximal values are equal, and LARGE otherwise. Clearly, the entire execution takes O(k) rounds.
For correctness, observe that if the diameter is bounded by k, then all m_v's are identical to the globally minimal identifier, and all vertices output SMALL. Next, assume that the diameter is larger than 4k, and fix some vertex v. The eccentricity of v is at least half the diameter, so there exists a vertex u whose distance from v is exactly 2k + 1; in particular, u is within the 4k-hop neighborhood of v. Since the k-hop neighborhoods of v and u are disjoint, we have m_v ≠ m_u, so after the second stage, v has seen two distinct m-values and outputs LARGE. Finally, if the diameter is between k and 4k, then during the second stage every vertex sees the m-values of all vertices; hence all vertices compute the same minimal and maximal values and answer unanimously. ∎
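As an illustration, this two-stage procedure can be simulated centrally; the sketch below (with hypothetical helper names, and vertex names doubling as identifiers) mirrors the k-round and 4k-round stages:

```python
def flood(adj, val, rounds, op):
    """One synchronous phase: in each round, every vertex applies `op`
    (min or max) over its own value and its neighbors' values."""
    for _ in range(rounds):
        val = {v: op([val[v]] + [val[u] for u in adj[v]]) for v in adj}
    return val

def diameter_check(adj, k):
    """Two-stage check: k rounds of min-ID flooding, then 4k rounds
    aggregating the minimal and maximal of the learned m-values."""
    m = flood(adj, {v: v for v in adj}, k, min)   # min ID within k hops
    lo = flood(adj, dict(m), 4 * k, min)
    hi = flood(adj, dict(m), 4 * k, max)
    return {v: "SMALL" if lo[v] == hi[v] else "LARGE" for v in adj}

def path(n):
    return {v: [u for u in (v - 1, v + 1) if 0 <= u < n] for v in range(n)}

assert all(o == "SMALL" for o in diameter_check(path(5), k=4).values())   # D <= k
assert all(o == "LARGE" for o in diameter_check(path(50), k=4).values())  # D > 4k
assert len(set(diameter_check(path(10), k=4).values())) == 1              # unanimous
```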
5 Parameterized Problems Upper Bounds
In this section, we present algorithms for the parameterized variants of several optimization problems.
5.1 LOCAL Algorithms
Our first result is for diameter-lower-bounded problems in the LOCAL model. We show that any minimization problem in DLB can be solved in O(k) LOCAL rounds. To that end, we first use Theorem 4.1 to check whether the diameter is at most ck, where c is a constant such that a diameter larger than ck implies that no solution of size k exists. If the diameter is larger than ck, the algorithm terminates and reports that no k-sized solution exists. Otherwise, we collect the entire graph at a leader vertex v, which computes the optimal solution. If the solution is of size at most k, v sends it to all vertices. If no solution of size k exists, v notifies the other vertices.
The above approach does not necessarily work for maximization problems, as the existence of a k-sized solution does not imply a bounded diameter. Nevertheless, we now show that MaxM and MaxIS have O(k)-round algorithms. For this, we first use Theorem 4.1 to check whether the diameter is small or large. If the diameter is small, we can still collect the graph and solve the problem locally. Otherwise, we use the fact that any maximal matching or independent set in a graph with a sufficiently large diameter must be of size at least k. Since a maximal matching or independent set may be too large to compute quickly, we run just k iterations of extending the solution. For MaxM, at each iteration, any edge such that neither of its endpoints is matched and which is a local minimum (with respect to the identifiers of its endpoints) joins the matching. We are guaranteed that the matching grows by at least a single edge at each iteration, and thus after k iterations the algorithm terminates. Similarly, for MaxIS, at each iteration, every vertex that neither belongs to nor neighbors the independent set, and is a local minimum among such vertices, enters the set. We summarize this in the following theorem.
Theorem 5.1.
There exist O(k)-round LOCAL algorithms for MaxM, MaxIS, and any minimization problem in DLB.
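The k-iteration local-minima procedure for MaxM can be sketched as a centralized round simulation (hypothetical helper names; vertex names double as identifiers, and the MaxIS variant is analogous):

```python
def k_rounds_matching(adj, k):
    """k iterations; in each one, every edge whose endpoints are both
    unmatched and whose (sorted) endpoint pair is lexicographically minimal
    among the conflicting unmatched edges joins the matching. Two winners
    never share an endpoint: if they did, each would have to be the minimum
    of a conflict set containing the other."""
    matched = set()
    matching = []
    for _ in range(k):
        candidates = {tuple(sorted((u, v))) for u in adj for v in adj[u]
                      if u not in matched and v not in matched}
        if not candidates:
            break
        winners = [e for e in candidates
                   if e == min(f for f in candidates if set(e) & set(f))]
        for u, v in winners:
            matched.update((u, v))
            matching.append((u, v))
    return matching

path6 = {v: [u for u in (v - 1, v + 1) if 0 <= u < 6] for v in range(6)}
assert len(k_rounds_matching(path6, 3)) == 3  # grows by >= 1 edge per round
assert len(k_rounds_matching(path6, 1)) == 1
```

The globally minimal candidate edge always wins, so each iteration adds at least one edge, matching the growth guarantee used in the proof sketch above.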
5.2 CONGEST algorithms for
Our first algorithm is deterministic and solves the exact variant of parameterized MVC. Intuitively, it works in two phases: first, it checks that the diameter is O(k); if it is not, the algorithm rejects. Knowing that the diameter is bounded, we proceed by computing a solution under the assumption that there exists a cover of size at most k. If this assumption holds, we are guaranteed to find such a solution. We run the above for just enough rounds to guarantee that if a k-sized solution exists, we find one. Finally, we check that the size of the solution returned by the algorithm is indeed bounded by k.
We first show a procedure that solves the problem if a solution of size $k$ exists. Note that if no such solution exists, this procedure may not terminate in time, or may compute a cover larger than $k$.
Lemma 5.2.
There exists a deterministic algorithm that, if a size-$k$ cover exists: (1) terminates in $O(k^2)$ CONGEST rounds and (2) finds such a cover.
Proof.
Given that there exists a size-$k$ cover, the diameter of the graph is bounded by $O(k)$. Therefore, we can compute a unique leader and a BFS tree rooted at that leader in $O(k)$ rounds. Our first observation is that every vertex $v$ with a degree larger than $k$ must be in any size-$k$ cover. Thus, every such vertex immediately goes into the cover and gets removed together with all of its adjacent edges. If a vertex has degree $0$, it terminates (without entering the cover). Denote the remaining graph by $G' = (V', E')$.
For our analysis, let us fix some vertex cover $C$ of $G'$ of size at most $k$ and denote the remaining vertices by $I = V' \setminus C$. We note that the set $I$ is an independent set. Thus, all edges in the graph are either between vertices in $C$ or between $C$ and $I$. We note that $|C| \le k$, and now we aim to bound the number of remaining edges. We now show that $|E'| \le k^2$.
As all vertices with degrees greater than $k$ have been added to the cover and removed, all remaining vertices have a degree of at most $k$. Because all remaining edges in the graph are of the form $(u, v) \in C \times C$ or $(u, v) \in C \times I$, we may immediately bound the number of remaining edges: $|E'| \le |C| \cdot k \le k^2$.
We can now learn the entire graph $G'$ at the leader in $O(k^2)$ rounds using pipelining over the BFS tree. The leader vertex computes the optimal cover for $G'$ and notifies all vertices in $V'$ whether they should join it. ∎
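The reduction rule in the proof can be sketched sequentially as follows; the adjacency-dictionary representation is an assumption of this sketch.

```python
# Sketch of the kernelization rule from Lemma 5.2: every vertex of degree
# greater than k must join any size-k cover. If a size-k cover exists, the
# remaining graph has at most k^2 edges.

def kernelize(adj, k):
    """adj: dict vertex -> set of neighbors (undirected graph).
    Returns (forced_cover, remaining_edges)."""
    # Rule: any vertex with more than k neighbors is in every size-k cover.
    forced = {v for v, ns in adj.items() if len(ns) > k}
    # Remove forced vertices together with all of their incident edges.
    remaining = {tuple(sorted((u, v)))
                 for u, ns in adj.items() if u not in forced
                 for v in ns if v not in forced}
    return forced, remaining

# Star K_{1,5} with k = 2: the center is forced, and no edges remain.
star = {0: {1, 2, 3, 4, 5}, 1: {0}, 2: {0}, 3: {0}, 4: {0}, 5: {0}}
print(kernelize(star, 2))
```

If more than $k$ vertices are forced, or more than $k^2$ edges remain, a distributed implementation can already report that no size-$k$ cover exists.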
Theorem 5.3.
There exists a deterministic algorithm for Minimum Vertex Cover that terminates in $O(k^2)$ rounds.
Proof.
Our algorithm first uses Theorem 4.1 to estimate the diameter. Specifically, we first apply it for the value $\alpha k$. If the vertices report LARGE, we follow the same approach as in the LOCAL algorithm and reject. Otherwise, we proceed knowing the diameter is bounded by $2\alpha k = O(k)$. For the case where the diameter is bounded, we can compute a unique leader and a BFS tree rooted at that leader in $O(k)$ rounds. Let $\tau = O(k^2)$ be such that the algorithm provided in Lemma 5.2 is guaranteed to terminate in $\tau$ rounds for any graph with a cover of size $k$. We run the algorithm for $\tau$ rounds; if the procedure did not terminate, all vertices report that no size-$k$ cover exists.
Finally, we count the number of nodes in the cover using the BFS tree and verify that the size of the solution is indeed bounded by $k$. If any node in the tree sees more than $k$ identifiers of vertices that joined the cover, it notifies all vertices that the solution is invalid and thus no size-$k$ solution exists. ∎
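The verification step can be sketched as a convergecast over the BFS tree. The children-list representation and the early-rejection rule below are illustrative assumptions.

```python
# Sketch: each node sums its children's counts plus its own membership bit;
# any node whose partial count exceeds k can already reject, so counts never
# need more than O(log k) bits.

def verify_cover_size(children, in_cover, root, k):
    """children: dict node -> list of child nodes in the BFS tree.
    in_cover: dict node -> bool. Returns True iff the cover has size <= k."""
    def subtree_count(v):
        total = int(in_cover[v])
        for c in children.get(v, []):
            sub = subtree_count(c)
            if sub > k:          # child subtree already too large: reject
                return k + 1
            total += sub
        return min(total, k + 1)  # cap: the exact value above k is irrelevant
    return subtree_count(root) <= k

tree = {0: [1, 2], 1: [3]}
print(verify_cover_size(tree, {0: True, 1: False, 2: True, 3: False}, 0, 2))
```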
A Randomized Algorithm
While we show a deterministic LOCAL algorithm for Minimum Vertex Cover that is optimal even if randomization is allowed, we have a multiplicative gap of $O(\log n)$ in our CONGEST round complexity. We now present a randomized algorithm with an $O(k + k^2\log k/\log n)$ round complexity, thereby reducing the gap to $O(\log k)$. This is achieved by the observation that while node identifiers are of length $O(\log n)$, we can replace each node identifier with an $O(\log k)$-bit fingerprint. Specifically, since there are at most $k^2 + k$ vertices in $G'$ (after our reduction rule), we can use $O(c\log k)$-bit fingerprints, for a constant $c$, and get that the probability of a collision (that two vertices have the same fingerprint) is at most $k^{-c}$. Next, we run our deterministic algorithm, where each vertex considers its fingerprint as an identifier. Observe that since $|E'| \le k^2$ and each edge encoding now requires $O(c\log k)$ bits (for a constant $c$), the overall number of bits sent to the leader is $O(k^2\log k)$. Since the diameter of the graph is $O(k)$, and $O(\log n)$ bits may be transmitted in every round on each edge, we use pipelining to get the round complexity below. Note that we only use fingerprints for the part of the algorithm which requires time quadratic in $k$. That is, checking the size of the diameter and validating the size of the solution are still done using the original identifiers.
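A back-of-the-envelope sanity check of the fingerprinting argument is below; the constant $c$ and the exact fingerprint length are illustrative assumptions.

```python
# Sketch: with at most k^2 + k surviving vertices and b = ceil(c*log2(k))-bit
# random fingerprints, a union bound over all vertex pairs gives a collision
# probability of about (k^2 + k)^2 / 2^(b+1).

import math
import random

def collision_bound(k, c):
    n_vertices = k * k + k
    b = math.ceil(c * math.log2(k))          # fingerprint length in bits
    pairs = n_vertices * (n_vertices - 1) / 2
    return pairs / 2 ** b                    # union bound over all pairs

def sample_collision(k, c, trials=200):
    """Empirical collision frequency for independent random fingerprints."""
    n_vertices = k * k + k
    b = math.ceil(c * math.log2(k))
    hits = 0
    for _ in range(trials):
        fps = [random.getrandbits(b) for _ in range(n_vertices)]
        hits += len(set(fps)) < n_vertices
    return hits / trials

# For c = 6 the union bound is roughly k^4 / k^6 = 1/k^2 (up to constants).
print(collision_bound(16, 6))
```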
Theorem 5.4.
For any constant $c$, there exists a randomized algorithm for Minimum Vertex Cover that terminates in $O(k + k^2\log k/\log n)$ rounds, while being correct with probability at least $1 - k^{-c}$.
Approximations
As we may add all nodes of degree more than $k$ to the cover, this bounds $\Delta$, the maximum degree in the remaining graph, by $k$. We can now apply the algorithm of [BEKS18], which runs in $O(\log\Delta/(\varepsilon\log\log\Delta))$ rounds and achieves a $(2+\varepsilon)$-approximation. This immediately results in a deterministic $O(\log k/(\varepsilon\log\log k))$-round $(2+\varepsilon)$-approximation algorithm in the CONGEST model. Further, since there exists a cover of size $k$, setting $\varepsilon = 1/k$ implies that the resulting cover is of size at most $(2+1/k)k = 2k+1$. Thus, we conclude that our algorithm computes a $(2+1/k)$-approximation for the problem in $O(k\log k/\log\log k)$ rounds. Unfortunately, while this succeeds if there indeed exists a solution of size $k$, validating the size of the solution takes $O(k)$ additional rounds.
We now expand the discussion and propose an algorithm that computes a $(2-\varepsilon)$-approximation. For $\varepsilon < 1$, this gives a better round complexity than our exact algorithm, while for $\varepsilon = O(1/\sqrt{k})$ it improves the approximation ratio of the above $(2+1/k)$-approximation while still terminating in $O(k)$ rounds. In Section 6, we use this algorithm to derive the first algorithm for the (nonparametric) problem that computes a better-than-$2$ approximation in $o(n^2)$ rounds.
Theorem 5.5.
For any $\varepsilon \in (0,1)$, there exists a deterministic CONGEST algorithm for Minimum Vertex Cover that computes a $(2-\varepsilon)$-approximation in $O(\varepsilon^2k^2 + k)$ rounds. For any constant $c$, there also exists a randomized algorithm that terminates in $O(\varepsilon^2k^2\log k/\log n + k)$ rounds and errs with probability at most $k^{-c}$.
Proof.
We utilize the framework of Fidelity Preserving Transformations [FKRS18]. Intuitively, if there exists a vertex cover of size $k$ in the original graph, and we remove two vertices that share an edge, then the new graph must have a cover of size at most $k-1$. This allows us to reduce the parameter at the cost of introducing error (we add both endpoints of the edge to the cover, while an optimal solution may only include one of them). This process is called a reduction step, as it reduces the parameter by $1$ and increases the size of the cover (compared with an optimal solution) by at most $1$. Roughly speaking, we repeat the process until the parameter reduces to $\varepsilon k$, at which point we run an exact algorithm on the remaining graph.
In [FKRS18], it is proved that for any $\delta \in [0,1]$, repeating a reduction step until the parameter reduces to $\delta k$ allows one to compute a $(2-\delta)$-approximation by finding an exact solution to the resulting subgraph and adding all vertices that had an edge reduced in the process. For our purpose, we set $\delta = \varepsilon$; thus, the exact algorithms only need to find a cover of size $\varepsilon k$.
Our algorithm begins by checking that the diameter is $O(k)$ and finding a leader vertex $v$. This is doable in $O(k)$ rounds, as having a vertex cover of size $k$ guarantees that the diameter is $O(k)$. We proceed with applying the reduction steps. To that end, we compute a maximal matching $M$ and send it to $v$, which requires $O(k)$ rounds. If $|M| \le (1-\varepsilon)k$, $v$ instructs all matched vertices to enter the cover, and the algorithm terminates with a solution of size at most $2(1-\varepsilon)k \le (2-\varepsilon)k$, as needed. If $|M| > (1-\varepsilon)k$, the leader selects an arbitrary submatching $M' \subseteq M$ of size $(1-\varepsilon)k$, and the reduction rules are simultaneously applied for every edge in $M'$. The remaining graph has a cover of size at most $\varepsilon k$, at which point we apply the exact algorithms. Finally, we validate the size of the solution as in the above algorithms. By Theorems 5.3 and 5.4, we get the stated runtime and establish the correctness of our algorithms. ∎
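A centralized sketch of this reduction-step scheme follows, with brute-force search standing in for the distributed exact algorithm; the thresholds follow the accounting above, and the data layout is assumed.

```python
# Sketch: maximal matching, then either take all matched vertices (small
# matching) or apply (1 - eps) * k reduction steps and solve the rest exactly.

from itertools import combinations
import math

def greedy_maximal_matching(edges):
    matched, matching = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

def exact_cover(edges, budget):
    """Brute force: smallest cover of size <= budget, or None."""
    vertices = sorted({x for e in edges for x in e})
    for size in range(budget + 1):
        for cand in combinations(vertices, size):
            s = set(cand)
            if all(u in s or v in s for u, v in edges):
                return s
    return None

def approx_cover(edges, k, eps):
    """Cover of size <= (2 - eps) * k, or None if no size-k cover exists."""
    matching = greedy_maximal_matching(edges)
    t = math.floor((1 - eps) * k)
    if len(matching) <= t:
        # All matched vertices form a cover of size <= 2(1 - eps)k.
        return {x for e in matching for x in e}
    reduced = matching[:t]                 # t reduction steps
    in_cover = {x for e in reduced for x in e}
    rest = [e for e in edges if e[0] not in in_cover and e[1] not in in_cover]
    exact = exact_cover(rest, k - t)       # residual budget ~ eps * k
    return None if exact is None else in_cover | exact
```

With `eps = 1` no reduction steps are applied and the procedure degenerates to the exact algorithm, mirroring the $\delta = \varepsilon$ trade-off in the text.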
5.3 CONGEST algorithms for Maximum Matching
Similar to Minimum Vertex Cover, we first design a deterministic algorithm for Maximum Matching that terminates in $O(k^2)$ rounds. Our algorithm first uses Theorem 4.1 to estimate the diameter. Specifically, we first apply it for the value $\alpha k$. If the vertices report LARGE (which means that the diameter is at least $\alpha k$), we follow the same approach as in the LOCAL algorithm and make $k$ iterations of the maximal matching algorithm. The large diameter ensures that any maximal matching is of size at least $k$, and thus we can terminate after these iterations.
In case the output of the diameter approximation was SMALL, we know that the diameter is at most $2\alpha k$. This allows us to compute a leader in $O(k)$ rounds. In this case, we also run the maximal matching algorithm for $k$ iterations, but now this might not be sufficient. That is, the size of the maximal matching might be smaller than $k$, while the size of the maximum matching is $k$ or larger. To address this issue, we augment [LP09] the matching until it is of size at least $k$, or until we cannot augment the matching any further, at which point we conclude that no matching of size $k$ exists.
Augmenting a maximal matching
We prove that given some maximal matching $M$, we can find a matching of size at least $k$, or determine that no such matching exists, in $O(k^2)$ rounds. If we reach a maximum matching of size smaller than $k$, the vertices report that no solution to the instance exists.
We first note that the subgraph $G_M$ induced by the matched nodes has at most $2k$ nodes and $O(k^2)$ edges. Also note that because the diameter is bounded, a leader can be elected in $O(k)$ rounds, where the communication is done over $G$. Now we state our algorithm.
Every matched node picks $2$ unmatched neighbors and sends them to the leader node. If it has fewer than $2$ such neighbors, it sends all of them. The leader node then decides upon an augmenting path and augments accordingly. This process repeats at most $k$ times. If the matching is sufficiently large, we output the matching; otherwise, all nodes output that no solution exists.
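The leader's augmentation loop can be sketched centrally as follows. The alternating-path BFS below ignores blossoms, which suffices for illustration but not for all graphs, and the graph/matching representation is an assumption of this sketch.

```python
# Sketch: repeatedly find an augmenting path (alternating unmatched/matched
# edges between two free vertices) and flip it, until the matching reaches
# size k or no augmenting path is found.

def find_augmenting_path(adj, mate):
    """BFS over alternating paths from free vertices. Returns a vertex list
    or None. Blossoms are ignored (illustration only)."""
    free = [v for v in adj if v not in mate]
    for s in free:
        parent = {s: None}
        frontier = [s]
        while frontier:
            nxt = []
            for u in frontier:                # reached via an even-length prefix
                for w in adj[u]:
                    if w in parent:
                        continue
                    parent[w] = u
                    if w not in mate:         # free vertex: path found
                        path, x = [], w
                        while x is not None:
                            path.append(x)
                            x = parent[x]
                        return path[::-1]
                    parent[mate[w]] = w       # extend through the matched edge
                    nxt.append(mate[w])
            frontier = nxt
    return None

def augment_to_size(adj, mate, k):
    """mate: dict pairing matched vertices both ways. Returns the augmented
    matching, or None if no matching of size k is found by this sketch."""
    while len(mate) // 2 < k:
        path = find_augmenting_path(adj, mate)
        if path is None:
            return None
        for a, b in zip(path[::2], path[1::2]):
            mate[a], mate[b] = b, a           # flip along the path
    return mate
```

Each flip increases the matching size by exactly one, so at most $k$ repetitions are needed, matching the round budget stated above.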
As for correctness, we first note that every augmenting path in $G$ has its two endpoints unmatched, while the rest of its nodes are matched. This implies that apart from the two endpoint edges, all edges of the augmenting path are in $G_M$. We prove the following lemma:
Lemma 5.6.
If there exists an augmenting path in $G$, then the algorithm finds one.
Proof.
Fix some augmenting path in $G$, and denote by $u, w$ its endpoints. Let $u', w'$ be the nodes adjacent to $u, w$ on the path, and let $S_{u'}, S_{w'}$ be the sets of unmatched neighbors chosen by the nodes $u', w'$ in the algorithm. Note that these sets cannot be empty. If $u \in S_{u'}$ and $w \in S_{w'}$, then the nodes chosen include exactly those of the augmenting path and we may augment. Otherwise, we have