1 Introduction
The maximal matching (MM) problem has been studied extensively in the field of distributed graph algorithms. In the LOCAL model of distributed computing [Linial1992, Peleg2000], the recent breakthrough by Balliu2019 provided lower bounds for the complexity of MM that are asymptotically tight in the maximum degree Δ. More precisely, for n-node graphs, the authors prove lower bounds of Ω(min{Δ, log n / log log n}) rounds for deterministic algorithms and Ω(min{Δ, log log n / log log log n}) rounds for randomized algorithms, while an upper bound of O(Δ + log* n) has been known for almost two decades due to a result by panconesi01simple.
In other words, it is possible to solve MM in linear-in-Δ time by paying only a small dependency on n, and we cannot do better as a function of Δ unless we pay a high dependency on n. The lower bound results have been obtained by using the so-called round elimination technique to show that on infinite regular colored trees the MM problem requires Ω(Δ) rounds. This way of proving the lower bounds guarantees that they hold even for the bipartite maximal matching (BMM) problem.
Our contributions
In this work, we prove truly tight bounds for the complexity of BMM, i.e., we prove that exactly rounds are required. Moreover, we also prove tight bounds for natural variants of MM by showing that our lower bound technique is robust to changes in the description of the considered problem, and by providing optimal algorithms to obtain tight upper bounds. As a byproduct, we obtain a much simplified version of the proof of the celebrated lower bounds presented in [Balliu2019], both in terms of the technical details and the intuition behind the proof. Finally, we consider our work an important step towards the automation of lower bounds: we introduce the notion of bounded automatic round elimination, a special variant of the round elimination technique amenable to automation, and show that our bounds can be obtained in a (semi-)automatable fashion via bounded automatic round elimination. This also provides another step towards a better understanding of the round elimination technique, a tool that is responsible for most of the lower bounds in the LOCAL model [Brandt2019, Balliu2019hardness, Linial1992, Naor1991, Balliu2019, Brandt2016, chang16exponential, binary, chang18complexity], but is still poorly understood.
Maximal matching and its variants
In this work, we will consider the following natural problem family (in the bipartite setting).
Definition 1.1.
(maximal matching) Given a graph , a set is an maximal matching if the following conditions hold:

Every node is incident to at most edges of ;

If a node is not incident to any edge of , then at least neighbors of are incident to at least one edge of .
This family of problems contains MM (by setting and ), and many interesting variants of it, obtained by relaxing the covering or the packing constraint of MM. While the results presented in [Balliu2019] showed asymptotically tight bounds for MM as a function of , no tight bounds are provided for relaxed variants of matchings.
How much easier does MM become if we allow each node to be matched with a constant number of neighbors, instead of just one? We know, for example, that in the non-bipartite setting Ω(log* n) rounds are required to solve MM [Linial1992], while as soon as we allow nodes to have two incident edges in the matching, the problem becomes solvable without any dependency on n [suomelabook], but it is not clear how this affects the dependency on Δ. How much easier does the problem become if we do not require strict maximality? We will address these kinds of questions in this work and prove truly tight bounds for MM and the whole family of maximal matching problems, in the bipartite setting. Note that tightness results of this kind require that the considered problems can be solved independently of n, which is the reason for our restriction to the bipartite setting.
Round elimination
In order to prove our lower bounds, we will make use of the round elimination technique, which works as follows. Start from a locally checkable problem , i.e., a problem where all nodes must output labels from some finite set, subject to some local constraints. Try to define a problem that is at least one round easier than (i.e., can be solved strictly faster in the LOCAL model). Then, repeat the process. If we prove that the result that we get after steps of this process cannot be solved in rounds of communication, then we directly obtain a lower bound of rounds for the original problem .
This is not a new technique: it has been used by Linial roughly 30 years ago to prove a lower bound for coloring a cycle [Linial1992]. However, after Linial’s result, the technique was apparently forgotten until it reemerged in 2016, when it was used to prove lower bounds for sinkless orientation, coloring, and the Lovász Local Lemma [Brandt2016, chang16exponential, chang18complexity]. Since then, round elimination has been used to prove lower bounds for a number of different problems [Brandt2019, Balliu2019hardness, Balliu2019, binary]. More importantly, though, this technique can be automated in a certain sense.
Automatic round elimination
In a recent breakthrough result, Brandt2019 showed that, under certain conditions, given any locally checkable problem , we can automatically define a problem that is exactly one round easier than . Conceptually, this tremendously simplifies the task of applying round elimination to prove lower bounds in the LOCAL model: instead of having to find each subsequent problem that is one round easier than the previous one by hand, we can do the same by just mechanically applying certain operations. The main issue with automatic round elimination, and the reason why the result by Brandt does not immediately provide new lower bounds for all kinds of problems, is the growing description complexity inherent in each of the round elimination steps: if we start from a problem defined via a set of labels (and certain output constraints), then the problem obtained after one step is defined via a label set whose elements are subsets of the original labels. In other words, even if is some simple problem that can be described in a compact way, the description of the problem can be exponentially larger, and it may be difficult for a human being to understand and feasibly deal with the problems obtained after applying only a few of these steps.
The main approach for dealing with this issue is to find good relaxations, i.e., to reduce the description complexity of the obtained problem without losing too much round complexity: the goal is to transform into a problem such that on one hand, has a much simpler description than , while on the other hand, is provably at least as easy as , but not much easier. If we can find such a relaxation after each round elimination step (and use the relaxed problem as the starting point for the next step), then the number of steps until we obtain a 0-round-solvable problem still constitutes a lower bound for the original problem.
Bounded automatic round elimination
One natural way to ensure that problems do not grow beyond some description complexity threshold is to fix a constant , and after each step of automatic round elimination, relax the problem in a way that ensures that the obtained problem uses at most labels in its description.
We propose the study of round elimination lower bounds that only require a constant number of labels as a major research program. Understanding better for which problems we can obtain lower bounds using this technique, and how good the lower bounds achieved in such a way can be, would not only help us to get a better handle on general automatic round elimination, but also has another advantage: the bound on the number of labels ensures (in some sense) that the lower bound can be found automatically by a computer. The basic idea is that while the automatic round elimination technique can in theory be used to define a problem that is exactly rounds easier than the original problem, for any , this may be really hard to do in practice, since in each step the problem that we obtain can have an exponentially larger description than the old one. However, if we bound the number of labels to some (reasonably small) constant , a computer can actually try all possible relaxations that give problems with at most labels. If at least one obtained relaxation results in a sufficiently hard problem, by repeating this process we can obtain lower bounds automatically also in practice. We discuss this automation in more detail in Section 7.
The bottom line is that understanding for which problems a round elimination proof with a restricted number of labels works would allow us to decide which problems to attack with the help of computers and for which problems this would simply be a waste of resources. Moreover, a more fine-grained understanding of which constant (depending on the chosen problem) is required to obtain the largest possible lower bound, or more generally, how the obtained lower bound depends on the chosen constant , would help to direct the use of resources in the most efficient way.
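To make the search space concrete, one simple and fully mechanical family of relaxations a computer can enumerate is obtained by merging labels (replacing one label by another everywhere). The sketch below is our own illustration, not code from the paper: it lists all ways to partition a label set into at most k groups, each partition being one candidate relaxation whose hardness would then have to be evaluated by further round elimination steps.

```python
from itertools import product

def merges_to_at_most_k(labels, k):
    """Enumerate all partitions of `labels` into at most k nonempty groups.

    Each partition corresponds to one candidate relaxation in which all
    labels inside a group are merged into a single label. A real search
    would additionally check how much round complexity each merge loses.
    """
    labels = sorted(labels)
    partitions = set()
    for assignment in product(range(k), repeat=len(labels)):
        groups = {}
        for label, group in zip(labels, assignment):
            groups.setdefault(group, []).append(label)
        # Store the partition as a set of sets, so that symmetric
        # assignments are counted only once.
        partitions.add(frozenset(frozenset(g) for g in groups.values()))
    return partitions
```

For three labels and k = 2 this yields the four partitions of a three-element set into at most two blocks; the number of candidates grows quickly with the alphabet size and k, which is one reason why keeping the label bound small is essential for a practical search.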
A very interesting example case is provided by the recent round lower bound for BMM by Balliu2019. (Note that bounds obtained by round elimination are usually bounds in Δ. In order to transform these bounds into bounds in n, we simply have to consider graphs with Δ suitably chosen as a function of n.) Here, the authors use automatic round elimination with the number of labels restricted by a constant as part of their proof, but this part only yields a lower bound of rounds, which is subsequently lifted to a bound of rounds by applying another technique on top of it. One important ingredient in our new lower bounds is to increase the chosen constant from to , which results in a direct round lower bound. Besides proving that bounded automatic round elimination can yield the full round lower bound, our result highlights how sensitive the achieved lower bound can be to the exact number of labels used. While the change from to labels increases the difficulty of finding good relaxations after each round elimination step, due to the substantially increased number of possible relaxations, the behavior of the obtained problem sequence is actually much easier to understand than the one given in [Balliu2019]. Together with the fact that we can omit the step in [Balliu2019] that lifts their round lower bound to rounds, we obtain a considerably simplified proof for both the deterministic round and the randomized round lower bound.
Upper bounds
Automatic round elimination is not only a tool for proving lower bounds; it can also be used to prove upper bounds. While, for technical reasons, the obtained upper bounds only hold on high-girth graphs, they can still give valuable insight into how a problem could possibly be solved in general. Furthermore, studying bounded automatic round elimination is also very interesting from an upper bound perspective, due to the fact that the bounded number of labels ensures that the resulting algorithm is bandwidth-efficient, i.e., we not only obtain an upper bound automatically, but the bound also directly applies to the CONGEST model! While we will not prove it, we remark that the upper bounds that we provide in this work can be obtained using bounded automatic round elimination.
Our results
We prove tight bounds for maximal matchings. Let
We will start by proving the following theorem.
Theorem 1.2.
The maximal matching problem requires exactly rounds for deterministic algorithms in the port numbering model, even on colored regular balanced trees.
While an upper bound for the port numbering model directly implies an upper bound for the LOCAL model as well, the same is not true for lower bounds. However, we will show that these bounds can be lifted to the LOCAL model, and we obtain truly tight bounds in the LOCAL model for all combinations of , , and . Moreover, we will examine how large can be, as a function of , such that our lower bounds still hold:
Theorem 1.3.
For any , and for large enough and , any randomized algorithm for bipartite maximal matching that fails with probability at most in the LOCAL model requires at least rounds, unless , or .

Note that this theorem implies tight bounds for problems such as the maximal matching problem, for graphs where is not too large as a function of . While a randomized lower bound directly implies the same deterministic lower bound, we will show that we can get better deterministic bounds by relaxing our tightness requirements in from truly tight to asymptotically tight.
Theorem 1.4.
For any , and for large enough and , any deterministic algorithm for bipartite maximal matching in the LOCAL model requires , unless .
1.1 Related Work
The maximal matching problem has been widely studied in the literature on distributed computing. We have known since the 80s that MM cannot be solved in constant time. In fact, the Ω(log* n) lower bound by Linial for coloring a cycle [Linial1992] implies the same lower bound for MM. Naor proved that this lower bound holds also for randomized algorithms [Naor1991].
Concerning upper bounds, we have also known since the 80s that MM can be solved in O(log n) rounds using a randomized algorithm [Israeli1986], and a lot of effort has gone into obtaining good deterministic complexities as well. The first polylogarithmic deterministic algorithm was provided by Hanckowiak1998, who showed that MM can be solved in rounds. The same authors later improved the upper bound to [Hanckowiak2001]. More recently, Fischer substantially improved this bound to O(log^2 Δ · log n) [fischer17improved]. If the degree of the input graph is small, there is a very efficient algorithm by panconesi01simple that runs in O(Δ + log* n) rounds and thus matches, as a function of n, Linial's lower bound. In the meantime, the upper bound on the randomized complexity of MM has been improved as well. Barenboim2012, Barenboim2016 and fischer17improved showed that MM can be solved in O(log Δ) time, by paying only an additive poly log log n dependency.
In 2004, Kuhn2004, Kuhn2006, kuhn16local substantially improved Linial's lower bound: they showed that MM cannot be solved in o(min{log Δ / log log Δ, √(log n / log log n)}) rounds, and this bound holds also for randomized algorithms. After that, different works made progress towards a better understanding of the complexity of MM as a function of Δ. First, Hirvonen2012 proved that if we are only given an edge coloring (but no IDs and no randomness), then this problem indeed requires Ω(Δ) rounds. Then, a similar technique was used to show an Ω(Δ) lower bound for fractional matchings, under the assumption that the running time of the algorithm does not depend on n at all [Goos2017]. Finally, a lower bound for the LOCAL model has been proved by Balliu2019, who showed that MM cannot be solved in o(Δ + log log n / log log log n) rounds by randomized algorithms and o(Δ + log n / log log n) rounds by deterministic ones.
2 Preliminaries
2.1 Model of Computing
In this work, we will mainly consider two different models of distributed computing, namely, the port numbering model and the LOCAL model.
In the port numbering model, we are given a graph , where nodes represent computing entities, and edges represent communication links. The computation is synchronous: all nodes start in the same round, and in each round each node can send a different message to each neighbor, receive messages sent by the neighbors, and perform some local computation. In this model, nodes are anonymous, that is, they have no IDs, but they can distinguish neighbors using port numbers. Each node has ports, and each edge is connected to a specific port. If an edge is connected to port of node and port of node , then node can decide to send a message to its port number , and the message will be received by node on port . That is, nodes can refer to ports.
Depending on the context, we may assume that nodes initially know the size n of the graph and the maximum degree Δ of the graph. Also, in a randomized algorithm in the port numbering model, each node is provided with an unbounded number of private random bits. We say that a randomized algorithm succeeds with high probability if the output of all nodes is globally correct with probability at least 1 − 1/n. The running time of an algorithm is the number of communication rounds required before all nodes output their part of the solution.
The LOCAL model is defined similarly. The only difference is that in this case nodes are not anonymous, that is, they are provided with unique IDs in {1, …, n^c} for some constant c.
2.2 Automatic Round Elimination
In order to obtain our lower bounds we will make use of the automatic round elimination framework developed in [Brandt2019], in the bipartite formulation used first in [Balliu2019]. For any problem we will consider, the input will be a regular bipartite graph. As we will prove lower bounds, this does not restrict the generality of our results. We refer to the nodes on one side of the bipartition as white nodes and the nodes on the other side as black nodes. Each node is aware of the bipartition, i.e., it knows whether it is a white or a black node.
Problems
In the bipartite round elimination framework (which, for simplicity, we will describe only for regular graphs), a problem is formally given by an alphabet , a white constraint , and a black constraint . Both and are collections of words of length over the alphabet , where technically each word is to be considered a multiset, i.e., the order of the elements in a word does not matter and the same element can appear multiple times in a word. A correct output for is a labeling of the edges of the input graph with one label from per edge such that

the white constraint is satisfied, i.e., for each white node , the output labels assigned to the edges incident to form a word from , and

the black constraint is satisfied, i.e., for each black node , the output labels assigned to the edges incident to form a word from .
Each word in is called a white configuration, and each word in a black configuration. To succinctly represent multiple configurations in one expression, we will make use of regular expressions, e.g., we will write to describe the collection of all configurations consisting of exactly one , and a or an at every other position. For simplicity, we will also call such a regular expression a (white or black) configuration. Where required, we will clarify which kind of configuration is considered by using the terms single configuration and condensed configuration (the latter indicating a regular expression). Moreover, we will use the term disjunction to refer to parts of a regular expression describing that each choice of a subset of labels is valid, such as . Notice that the encoding of a problem can look quite different if its configurations are condensed in different ways. This is just a syntactic difference: if two different sets of condensed configurations generate the same set of single configurations, then the two sets encode the same constraint.
We remark that even though we only consider regular graphs, it is possible to also encode locally checkable problems that are not only defined on regular graphs in a similar way. We also note that if we restrict attention to trees or highgirth graphs, any locally checkable problem can be described in this form. In fact, by increasing the number of labels, it is possible to encode any output constraint that depends on the constantradius neighborhood of each node. In the remainder of the paper, we will use the term “locally checkable problem” (or simply “problem”) to refer to problems of the above kind.
Example
Let us look at an example that shows how to encode BMM. In BMM we basically have to ensure two constraints: a node cannot have two incident edges in the matching, and if a node does not have any incident edge in the matching, then all its neighbors must have at least one.
We start by defining the white constraints as follows:
In other words, a white node either outputs on an edge (the matched edge) and on all the others, or it outputs (pointer) on all edges. We now need to ensure that the pointers reach only matched black nodes. Thus, we define black constraints as follows:
That is, a black node accepts a pointer only if one of its edges is labeled as . Clearly, a solution satisfying these constraints is a matching (two s are never allowed). Maximality is guaranteed by the following observations:

In order for white nodes to not be matched, they need to write on all edges. These s must reach matched black nodes, since on the black side s are accepted only if an is present.

In order for black nodes to not be matched, they need to have all edges marked with the label , and s are written by white nodes only if they are matched.
Technically, what we defined is not exactly BMM: given a solution for BMM, where the edges are either marked as part of the matching or not, we do not have edges marked with pointers. Nevertheless, white nodes can produce these pointers in 0 rounds. That is, the problem that we defined is equivalent to BMM.
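As a concrete sanity check, the encoding above can be verified mechanically. The snippet below is a sketch with hypothetical label names of our own choosing (M for the matched edge, O for the other edges of a matched white node, P for pointers; the paper's actual symbols may differ), for a regular bipartite graph of degree 3.

```python
D = 3  # degree of the regular bipartite graph (illustrative choice)

def is_white_config(labels):
    """White constraint: M on one edge and O on the rest (matched node),
    or P on every edge (unmatched node pointing at its neighbors)."""
    c = sorted(labels)
    return c == sorted(['M'] + ['O'] * (D - 1)) or c == ['P'] * D

def is_black_config(labels):
    """Black constraint: exactly one incident M (matched node, which may
    also receive pointers), or all edges labeled O (unmatched node; O is
    only written by matched white nodes, ensuring maximality)."""
    return list(labels).count('M') == 1 or all(x == 'O' for x in labels)
```

For instance, ['P', 'P', 'M'] is a valid black configuration (a matched black node receiving two pointers), while ['P', 'O', 'O'] is not: a pointer may only reach a matched black node.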
Algorithms
We will distinguish between white algorithms and black algorithms. In a white algorithm, each white node decides on the output labels for all incident edges whereas black nodes take part in the usual communication but have no say in deciding on the output; in a black algorithm, the roles are reversed. The white complexity (resp. black complexity) of a given problem is the usual time complexity of the problem where we restrict attention to white algorithms (resp. black algorithms). White and black complexities cannot differ by more than one round as any white node can inform any black neighbor of the intended output label for the connecting edge, and vice versa.
Notice that if we consider, e.g., a white algorithm, black nodes do not actually need to know the output given by white nodes. If we consider the more standard assumption where both nodes that are incident to the same edge know the output for that edge, we see that such an algorithm requires at most one round more than what is required by either a white or a black algorithm. However, in the bipartite round elimination framework, such algorithms require an extra step of argumentation which we omit for simplicity, by considering only white and black algorithms. We emphasize that the tightness of our MM bound does not depend on this choice, just the bound itself: in the setting where both endpoints of an edge have to know the output for that edge, the tight bound is , instead of .
Round Elimination Theorems
The automatic round elimination theorem given in [Brandt2019, Theorem 4.3], roughly speaking, states that for any locally checkable problem , there exists another locally checkable problem that can be solved exactly one round faster if we restrict attention to high-girth graphs, i.e., graphs whose shortest cycle is sufficiently long. A useful fact observed in [Balliu2019] is that the given proof also extends to the case of hypergraphs, and hence can also be phrased in the context of bipartite graphs, by interpreting nodes as one side of the bipartition and hyperedges as the other side. In order to satisfy the conditions of [Brandt2019, Theorem 4.3], we will restrict our attention to the class of regular, bipartite graphs with a girth of at least , for the remainder of the paper. We will see that our lower bounds hold already for this restricted graph class. Now, we can formulate [Brandt2019, Theorem 4.3] in our setting as follows.

For the reader interested in the technical subtleties of our rephrasing, four remarks are in order: 1) The edge orientations prescribed in [Brandt2019, Theorem 4.3] are simply given by the port numberings (of the nodes on one side of the bipartition) in our setting. 2) In our setting the nodes do not see the port numbering of adjacent nodes in a round algorithm, while the nodes in [Brandt2019] do see the edge orientations; however, it is straightforward to check that the proof is not affected by this change. 3) If can be solved in rounds, then the proof of [Brandt2019, Theorem 4.3] ensures that this also holds for ; hence we can replace the condition of a strictly positive complexity of by a minimum expression. 4) As the nodes on one side of the bipartition in our setting correspond to (hyper)edges in the original setting, the step from to (resp. ) in our setting corresponds either to the first step from to the intermediate problem given in the proof of [Brandt2019, Theorem 4.3], or to the step from to the final problem (depending on whether we consider white or black nodes as (hyper)edges).
Theorem 2.1 ([Brandt2019], rephrased).
Let be a locally checkable problem with white (resp. black) complexity . Then there exists a locally checkable problem (resp. ) with black (resp. white) complexity .
The problem is constructed explicitly in [Brandt2019]. Translated to our setting, we obtain from as follows.
Let , , and denote the alphabet, white constraint, and black constraint of . The alphabet of is simply the set of all subsets of . In order to describe the black constraint of , we first construct an intermediate collection of configurations over . Let be the collection of all configurations with such that for each choice of labels from , it holds that is a configuration in . We now obtain from by removing all configurations that are not maximal, i.e., for which it is possible to obtain another configuration from by adding elements to the (more precisely, at least one element to at least one of the ). Since the above removal process ensures that for each non-maximal configuration there always remains a "superconfiguration" in the collection, the order in which the non-maximal configurations are removed does not matter.
Similarly, to obtain , we first construct an intermediate collection of configurations over . Let be the collection of all configurations with such that there exists a choice of labels from such that is a configuration in . We now obtain from by removing each configuration that contains some set that does not appear in any of the configurations in the black constraint .
So, roughly speaking, apart from some simplifications on top of it, we obtain the new black constraint by “applying” the universal quantifier to the old black constraint, and the new white constraint by “applying” the existential quantifier to the old white constraint.
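Both quantifier steps can be carried out completely mechanically. The following sketch uses a data representation of our own choosing (a constraint is a set of sorted label tuples of length d, and set-labels are frozensets) and follows the description above: the universal quantifier plus maximality on the black side, the existential quantifier plus pruning of unused sets on the white side. It is meant for tiny alphabets only, since the construction inherently ranges over all subsets of the label set.

```python
from itertools import (chain, combinations, combinations_with_replacement,
                       permutations, product)

def re_step(alphabet, white, black, d):
    """One automatic round elimination step, as described above."""
    # New labels: all nonempty subsets of the old alphabet.
    s = sorted(alphabet)
    sets = [frozenset(c) for r in range(1, len(s) + 1)
            for c in combinations(s, r)]
    # Universal step: every choice of labels from the sets must form
    # an old black configuration.
    cand = [cfg for cfg in combinations_with_replacement(sets, d)
            if all(tuple(sorted(choice)) in black for choice in product(*cfg))]
    # Keep only maximal configurations: drop any configuration that another
    # candidate extends by adding elements (in some pairing of positions).
    def extends(big, small):
        return any(all(x >= y for x, y in zip(p, small))
                   for p in permutations(big))
    new_black = [c for c in cand
                 if not any(c2 != c and extends(c2, c) for c2 in cand)]
    # Existential step, restricted to the sets surviving on the black side.
    used = sorted(set(chain.from_iterable(new_black)), key=sorted)
    new_white = [cfg for cfg in combinations_with_replacement(used, d)
                 if any(tuple(sorted(choice)) in white for choice in product(*cfg))]
    return new_black, new_white
```

Running this on bipartite sinkless orientation with two hypothetical orientation labels and d = 3 (white constraint: at least one outgoing edge; black constraint: at least one outgoing edge) reproduces the known behavior: a single maximal black configuration consisting of one singleton set and two copies of the full label set.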
We define analogously to , with the only difference that the role of white and black is reversed. The following observation follows immediately from the definition of our functions and .
Observation 2.2.
Let be some problem, and assume we have already computed the black constraint of (resp. the white constraint of ). Then the white constraint of can be obtained by iterating through the white configurations of and replacing in each configuration each label by the disjunction of all sets that occur in the black constraint of and contain label . Similarly, the black constraint of can be obtained by iterating through the black configurations of and replacing in each configuration each label by the disjunction of all sets that occur in the white constraint of and contain label .
As the alphabet of a problem obtained by applying the function (resp. ) consists of sets of the original labels, we will need to be careful with notation. In order to clearly distinguish the set consisting of some labels from the disjunction , we will write it as . Moreover, we say that a configuration consisting of sets of labels can be extended to a configuration if can be obtained from by adding (potentially zero) elements to the sets .
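The extension relation can also be checked mechanically. The following sketch (again with our own representation of a set-configuration as a tuple of sets) tests whether one configuration arises from another by adding elements, trying all pairings of positions since configurations are multisets.

```python
from itertools import permutations

def can_extend(target, source):
    """Return True if `source` can be extended to `target`, i.e., `target`
    is obtained from `source` by adding (potentially zero) elements to the
    sets, under some matching of the positions."""
    return any(all(big >= small for big, small in zip(p, source))
               for p in permutations(target))
```

Note that the order of the sets inside a configuration is irrelevant: the check succeeds as long as some reordering matches position by position.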
Example
We will now show an example of the application of this technique. We will consider the bipartite sinkless orientation problem [Brandt2016], where the white constraint can be simply described as , and the black constraint can be described as . Intuitively, the label represents an edge oriented from black to white, while represents an edge oriented from white to black, and the constraints require both black and white nodes to have at least one outgoing edge: for white nodes and for black nodes.
We start by applying the universal quantifier on the black constraint. Essentially, we must forbid all words where all the sets contain a , otherwise it would be possible to pick the configuration that is not allowed by the black constraint. Hence, we obtain the following:
We can now apply maximality, and since in the disjunction the label strictly contains all the others, we get the following:
We can now apply the existential quantifier on the white constraint. The white constraint basically requires to be able to pick at least one , hence, we now need at least one set containing a :
By removing labels that are not used in the universal step, we get the following:
We can now rename the sets to obtain something more readable. We can use the following renaming:
Thus, the black constraint of can be described as , while the white constraint can be described as .
Relaxations
As mentioned in the introduction, a crucial part of successfully applying automatic round elimination is to find good relaxations of problems. We say that a problem is a white relaxation of a problem if there is a 0-round white algorithm that transforms an arbitrary correct output for into a correct output for . More specifically, in this 0-round algorithm, each white node sees, for all incident edges , the given output label (for ) on , and its task is to relabel each incident edge (possibly with the same label as before) such that the global output is correct. We define a black relaxation analogously. It immediately follows from the definition that the relaxation of a relaxation of a problem is again a relaxation of the problem.
A simple way to find a relaxation of a problem is to replace a fixed label everywhere by another label. Any white or black node can solve the new problem in 0 rounds given a solution to the old problem, by just performing the corresponding relabeling everywhere.
Observation 2.3.
Let be a problem, and labels. Then replacing all occurrences of label in both and by label results in a problem that is a relaxation of .
A more interesting way to find relaxations of problems given via white and black constraints starts by ordering the labels occurring in the constraints according to their strength, which roughly corresponds to their usefulness for outputting a correct configuration. Consider the black constraint of some problem . For two labels , we say that is at least as strong as (according to ) if for each appearing in some configuration in , replacing that by an results again in some configuration in . Equivalently, we say that is at least as weak as (according to ), and write . If is at least as weak as , but is not at least as weak as , then we say that is stronger than , and weaker than . If is at least as weak as , and is at least as weak as , we say that and are equally strong. We define these concepts and notations analogously for white constraints. Now, we can use the strengths of labels to find relaxations as follows.
Lemma 2.4.
Let be a problem, and labels such that is at least as strong as according to (resp. ). Then replacing an arbitrary number of labels in (resp. ) by label results in a problem that is a white (resp. black) relaxation of .
Proof.
For reasons of symmetry, it is sufficient to prove the lemma for the case of replacing the labels in . We obtain a 0-round algorithm as required in the definition of a relaxation as follows. Given a solution to (on each incident edge), each white node simply replaces as many incident occurrences of label by as have been replaced in the configuration that the solution of around corresponds to. This satisfies the white constraint by definition. But the black constraint is also satisfied, since replacing occurrences of label by preserves that a configuration is contained in due to the fact that is at least as strong as , and . ∎
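The strength relation itself is easy to test mechanically. The sketch below represents a constraint as a set of sorted label tuples and checks the definition directly; the labels M, O, P in the usage are our hypothetical names for the BMM encoding sketched earlier, not symbols taken from the paper.

```python
def at_least_as_strong(a, b, constraint):
    """Label a is at least as strong as label b (according to `constraint`,
    a set of sorted label tuples) if replacing any single occurrence of b
    by a always yields a configuration that is again in the constraint."""
    for cfg in constraint:
        for i, label in enumerate(cfg):
            if label == b:
                replaced = tuple(sorted(cfg[:i] + (a,) + cfg[i + 1:]))
                if replaced not in constraint:
                    return False
    return True
```

In the black constraint of the degree-3 BMM encoding, for example, O is at least as strong as P, since wherever a pointer is accepted an unmatched-edge label is accepted as well; the converse fails, because the all-O configuration does not tolerate a pointer.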
Recall that the labels used to describe a problem obtained by applying the function (resp. ) to some problem are sets of labels of . The following observation follows immediately from the definition of (resp. ).
Observation 2.5.
Let be a problem, and labels of the problem (resp. ). If , then is at least as strong as according to the white constraint of (resp. to the black constraint of ).
To visualize the strengths of the labels according to the black (resp. white) constraint of a problem, we can draw a diagram as follows. For any two labels , we draw an arrow from to if

and are equally strong, or

is stronger than and there is no label such that is stronger than , and is stronger than .
We call the obtained diagram the black (resp. white) diagram of .
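The arrow rule just described is again a finite computation once the strength relation is known. Below is a small sketch, assuming the relation has already been computed as a set of ordered pairs; the convention that arrows point from weaker to stronger labels is our assumption for illustration:

```python
def diagram_edges(labels, at_least_as_weak):
    """Arrows of the (black or white) diagram, given a precomputed
    strength relation: `at_least_as_weak` is the set of pairs (a, b)
    such that a is at least as weak as b w.r.t. the constraint.
    We draw a -> b if a and b are equally strong, or if b is strictly
    stronger than a with no label strictly in between (a Hasse-style
    cover relation)."""
    def strictly_weaker(a, b):
        return (a, b) in at_least_as_weak and (b, a) not in at_least_as_weak
    edges = []
    for a in labels:
        for b in labels:
            if a == b:
                continue
            if (a, b) in at_least_as_weak and (b, a) in at_least_as_weak:
                edges.append((a, b))      # equally strong
            elif strictly_weaker(a, b) and not any(
                    strictly_weaker(a, c) and strictly_weaker(c, b)
                    for c in labels if c not in (a, b)):
                edges.append((a, b))      # cover edge
    return edges
```

On a three-label chain (say, hypothetical labels "U" weaker than "P" weaker than "M"), the transitive pair is correctly suppressed, and only the two cover edges remain.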
Example
Consider the following problem (for the curious reader, this problem is exactly rounds easier than BMM). The white constraint is the following:
The black constraint is the following:
The black diagram of is the following:
Notice that there is an arrow from to , since each time is allowed in a black configuration, the label is allowed as well. We can thus simplify the problem according to Lemma 2.4 using labels and . In particular, we can replace all occurrences of with in the white constraint, and thus also get rid of in the black constraint. We obtain the following new white constraint:
The new black constraint is the following:
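The simplification performed in this example, replacing every occurrence of one label by a stronger one, can be prototyped in a few lines. The tuple-of-strings encoding and the label names in the test are illustrative, not the paper's actual labels:

```python
def relax_by_replacement(constraint, old, new):
    """Relaxation step in the spirit of Lemma 2.4: replace every
    occurrence of label `old` by label `new` in each configuration
    (valid whenever `new` is at least as strong as `old` w.r.t. the
    opposite constraint). Configurations are tuples of label names;
    sorting normalizes them, since configurations are multisets."""
    return {tuple(sorted(new if lab == old else lab for lab in cfg))
            for cfg in constraint}
```

Note that replacing a label this way can merge configurations that previously differed only in that label, which is exactly why the description of the problem shrinks.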
White-Black Dualism
So far, we have seen a number of definitions and results that have two versions: one for white nodes, algorithms, configurations, etc., and one for the black equivalent. In fact, the white and black versions are completely dual: they only differ in exchanging the role of white and black. Moreover, this behavior will hold throughout the entirety of the paper. Hence, we will use the following convention in the remainder of the paper.
Convention 2.6.
For simplicity, we will formulate any theorem and lemma for which there is a white and a black version only in one of the two versions. Moreover, by giving a theorem or lemma for which all ingredients are defined if we exchange the terms “white” and “black”, we implicitly state that its dual version holds as well. We will refer to the dual of a stated theorem or lemma by simply referring to the theorem or lemma in its original version; the context in which the statement is referred to indicates which version is meant.
From the port numbering model to the LOCAL model
The round elimination theorem can be used to get lower bounds for deterministic algorithms in the port numbering model, where it is assumed that nodes have no IDs and no access to random bits. The usual way [Balliu2019, binary] to lift lower bounds obtained with this technique to the LOCAL model is the following. First, we incorporate the analysis of failure probabilities into the round elimination theorem, i.e., we show that if can be solved in rounds using a white randomized algorithm with some failure probability , then can be solved in rounds using a black randomized algorithm with some failure probability , where is not much larger than . Using this version of the theorem, we can repeatedly apply round elimination until either we get a round solvable problem or the failure probability becomes too large. This yields a lower bound for randomized algorithms in the port numbering model, and since randomness can be used to generate unique IDs with high probability of success, the same lower bound holds in the LOCAL model as well.
Notice that we do not have to actually prove a randomized version of Theorem 2.1 ourselves: such a version of the round elimination theorem has been shown in [binary], where it has also been shown that, if the number of labels is bounded at each step, then a lower bound obtained with this technique can be lifted automatically to the LOCAL model.
Also, since a lower bound for randomized algorithms also implies a lower bound for deterministic algorithms, we immediately get, as a corollary, a lower bound for deterministic algorithms in the LOCAL model.
We can then get even better lower bounds for deterministic algorithms by exploiting known gap results: we know that for locally checkable problems some complexities are not possible, and if we get as lower bound a complexity that falls into one of these gaps, we immediately get as new lower bound the smallest complexity , larger than , for which the gap does not hold anymore.
3 Roadmap
In the breakthrough result of Balliu2019, the round elimination technique yielded only an lower bound for BMM, and different techniques were necessary to lift the result to an lower bound. One of our contributions is to show why a full lower bound was not achieved via round elimination: the relaxations performed at each step were too severe. In fact, the main issue of the approach in [Balliu2019] is the following. After each step of round elimination, some simplifying relaxations are performed in order to obtain a problem that can be described using just labels. By performing such simplifications, a special label called wildcard appears: this is a powerful label that can replace any other label in any configuration without invalidating the configuration. The issue is the following: by performing a round elimination step on a problem where valid configurations contain some number of wildcards, the resulting problem contains configurations with many more wildcards. In particular, the number of wildcards grows quadratically. Hence, after round elimination steps the obtained problem description contains so many wildcards that the problem is round solvable.
What we can show is that, if we allow just one more label at each step, hence instead of , then the number of wildcards does not grow at all; instead, we obtain linear growth in a parameter that controls how easy the problem is. In this way, for BMM, we can perform steps of round elimination before reaching a rounds solvable problem. Since we can also provide an upper bound with the same number of rounds, this implies that the simplifications that we perform do not make the problem easier at all: they produce a problem that is easier to describe but that has the same round complexity as the one before the respective simplification. We will actually prove a stronger result: using labels we can prove exact bounds for the whole family of bipartite maximal matchings.
Hence, the family of problems that we provide in the lower bound proof really captures the essence of BMM and its variants: for each given variant of BMM and any , we describe in a compact form a problem that is exactly rounds easier than the given variant.
Round elimination
We start by defining a family of problems , described by parameters. Then, we prove that the problem is exactly rounds easier than the bipartite maximal matching problem (recall that bipartite maximal matching is standard BMM). Then, we relate problems in the family: we will show that the problem is at least one round easier than (the results that we provide in the upper bound section will imply that is actually exactly one round easier than ). In this way, we get a full characterization of all the problems in the family. Crucially, all the problems of the family are described using only labels, and while the result of the round elimination technique may contain more than labels, we will provide relaxations that allow us to map these problems back to this family.
Lower bounds for bipartite maximal matchings
We will then prove that, for some values of , the problem cannot be solved in rounds, and we will then show what lower bounds this implies for bipartite maximal matchings. In particular, we will obtain tight lower bounds for the port numbering model.
Upper bounds
Then, we will prove upper bounds for the whole family of bipartite maximal matchings. These upper bounds will match exactly the lower bounds that we provided.
Lifting the bounds to the LOCAL model
At this point we have lower bounds for plenty of variants of matchings, for the port numbering model. We now need to lift these bounds to the LOCAL model. The first step is to prove a randomized lower bound for the port numbering model, that directly implies a lower bound for the LOCAL model as well, since it is possible to generate unique IDs in constant time. Then, while a randomized lower bound directly implies a deterministic lower bound, we will obtain a better deterministic lower bound using standard techniques.
Behind the scenes
We will then informally discuss what we mean by automatic lower and upper bounds. We will briefly explain how part of our results, concerning both lower and upper bounds, can actually be obtained automatically, not only in theory but also in practice.
Open questions
We will finally conclude with some open questions regarding round elimination, and in particular about how lower bounds obtained by using bounded automatic round elimination may differ from those obtained by using the standard version of round elimination for the same problem.
4 Lower Bounds
In this section, we will prove a lower bound for BMM that we will show to be tight in the LOCAL model in Sections 5 and 6. In particular, our lower bound holds even in the restricted setting of regular trees. In order to obtain this bound, we will define a family of problems that will help us to describe the behavior of BMM in the round elimination framework. As it turns out, our way of proving a lower bound via such a family of problems is very robust: we can also obtain tight lower bounds for very natural variants of BMM by generalizing our problem family. In fact, we believe that the general lower bound idea should also work for many other variants of BMM; however we will only give an explicit proof for the variants we consider to be the most natural extensions of BMM, namely those obtained by relaxing the packing and/or covering constraint used to define BMM. To be more precise, if we relax the packing constraint, we allow each node to be matched with up to many neighbors, for some parameter , and if we relax the covering constraint, we allow each unmatched node to have up to neighbors that are unmatched themselves, giving rise to the already defined notion of maximal matchings. BMM appears as a special case in this family, where we set and ; hence we will only give a general proof for the whole family and obtain the proof for BMM as a special case.
In order to obtain the desired lower bounds for all problems in this family, we define the aforementioned problem family (or more precisely, its extension that also describes the behavior of all bipartite maximal matching problems), parameterized (for each fixed ) by three values . For each and such that and , denote by (resp. ) the problem given by the white (resp. black) configurations
and the black (resp. white) configurations
The black diagram of is represented in Figure 1. For extreme values of additional arrows may be present.
We now proceed as follows. In Lemma 4.2 and Lemma 4.3 we will prove that is exactly two rounds easier than the bipartite maximal matching problem. In Lemma 4.4 we will prove that for most parameter values, the problems in the family are not round solvable. Finally, in Theorem 4.5 we will combine all the results of this section to prove a lower bound for the bipartite maximal matching problem. We will now start by relating problems in the family that we just defined, by showing in Lemma 4.1 that by applying Theorem 2.1 on and performing some relaxations, we obtain a problem that is still in the same family, but with different parameters.
Lemma 4.1.
For any , , , and , problem is a black relaxation of , where .
Proof.
Recall the definition of extending a configuration, and let . We start by showing that any configuration in the black constraint of can be extended to one of the configurations
Consider an arbitrary configuration in the black constraint of , and recall that the black constraint of is obtained by applying the universal quantifier to the white constraint of . We distinguish three cases.
If contains the set , then at most of the other sets in can contain an or since in each black configuration of there are at most labels from . Hence the remaining labels must be subsets of each. It follows that can be extended to .
If contains the set , then at most of the other sets in can contain an or since otherwise we would again be able to choose labels contained in from sets in which cannot result in a black configuration of , no matter which labels are picked from the remaining sets. Hence the remaining labels must be subsets of each, and similarly to the previous case, we see that can be extended to .
Consider the last remaining case, i.e., that contains neither the set nor the set . Since is a label that is at least as strong as according to the black constraint of , any set in that contains must also contain as otherwise adding to that particular set would still result in a black configuration of which would violate the maximality condition in the definition of . It follows that does not contain the set . By a similar argument to the previous one, any set in containing must also contain , and any set containing must also contain , due to being a label as strong as and being a label as strong as . Hence, it follows that each set in must contain the label , since if a set does not contain then it must not contain and as well, and this in turn would imply that the set is either or , and this case has already been covered. This implies that at most sets in can contain the label as otherwise we could choose times the label and times the label from the sets in which does not yield a black configuration of . Moreover, with an argumentation analogous to the one in the first case, we see that at most sets in can contain an or , and we can extend these sets to . The other sets must be subsets of , and at most of them can contain a . We extend these sets to . All other sets are extended to . It follows that can be extended to .
Now, Lemma 2.4 in conjunction with Observation 2.5 tells us that replacing the black constraint of by the configurations
will result in a black relaxation of . What is left to be done is to compute the white constraint of this relaxation. As we performed the above relaxation, formally we cannot directly apply Observation 2.2, but the same idea works: the definition of our function still ensures that we can obtain the white constraint of the relaxation by iterating through the white configurations of and replacing in each configuration each label by the disjunction of all sets that occur in the black constraint of the relaxation and contain label . We obtain that the white constraint is given by the configurations
Finally, renaming the sets in the black and white constraint of the relaxation according to
shows that the relaxation is identical to , thereby proving the lemma. ∎
In order to relate the maximal matching problem to the family , we now redefine the bipartite maximal matching problem in a way that conforms to the formalism used in Theorem 2.1. We will then prove that by applying Theorem 2.1 twice to this encoded version of maximal matching and performing some relaxations, we get some problem in the family . We will use labels: , , , and . The label represents a matched edge, and for both black and white nodes it can appear on at most incident edges. Also, we want unmatched white nodes to prove that they have enough matched neighbors. Thus, we require that they output at least pointers using the label . Then, they can output on all the other incident edges, where the label represents a wildcard. The pointers must be incident only to matched black nodes. In order to require that unmatched black nodes also have enough matched white neighbors, we limit the number of labels incident to an unmatched black node to . The label represents an unmatched edge, where neither a pointer nor a wildcard has been used. More formally, we define the bipartite maximal matching problem as follows.
For any and , denote by (resp. ) the problem given by the white (resp. black) configurations
and the black (resp. white) configurations
The black diagram of is represented in Figure 2. For extreme values of and additional arrows may be present.
We now argue that the problem that we just defined is the maximal matching problem previously defined in the introduction. In particular, this definition correctly encodes the bipartite maximal matching problem, where the solution is given by edges labeled . In fact, first, note that each node is incident to at most edges labeled , hence the packing constraint is not violated. Then, note that unmatched white nodes are incident to at least s, and since black nodes incident to a must also be incident to an , each white node has at most unmatched neighbors. Finally, note that unmatched black nodes are incident to at least s, and since white nodes incident to an must also be incident to an , each black node has at most unmatched neighbors. Thus, the covering constraint is not violated. We now need to show that, given a solution to the bipartite maximal matching problem, we can output a solution for this problem with the encoding that we defined. This can be performed in one round of communication, which white nodes need in order to learn which of their black neighbors are actually matched. Matched white nodes output on all matched edges and on all the others, while unmatched white nodes output on all edges connecting them to matched black nodes, and on all the others. This is a valid solution, since the number of s incident to each node is at most , at most neighbors of unmatched white nodes are also unmatched and thus white nodes are incident to at most s, and at most neighbors of unmatched black nodes are also unmatched and thus at most white nodes output on their ports. Notice that the bipartite maximal matching problem now has two different meanings:

The natural and intuitive version where both black and white nodes know which edges are part of the matching, and nothing else.

The locally checkable encoded version, where only white nodes need to know the output, but they need to know which edges are part of the matching and which edges must be marked with .
All the results that we prove in the paper about maximal matchings will refer to the locally checkable encoded version. As we have seen, there is at most a round difference between the two versions. We remark that it is possible to show that any algorithm that solves the problem in the original definition must in fact be aware of enough matched neighbors that it can output the required pointers without the round penalty, implying that these two problems are actually equivalent.
We now show how to connect the bipartite maximal matching problem (with the encoding described above) to the family of previously defined problems. In particular, we define an intermediate problem, and we apply Theorem 2.1 twice, once in Lemma 4.2 and once in Lemma 4.3, to show that is exactly two rounds easier than the bipartite maximal matching problem. The white diagram of the intermediate problem that we define is represented in Figure 3. For extreme values of and additional arrows may be present.
Lemma 4.2.
For any , and , the problem given by the black configurations
and the white configurations
is a black relaxation of .
Proof.
Similarly to the proof of Lemma 4.1, we start by showing that any configuration in the black constraint of can be extended to one of the configurations
Consider an arbitrary configuration in the black constraint of . We distinguish two cases.
If contains the set , then at most of the other sets in can contain an since in each black configuration of , there are at most labels . Hence, the remaining labels must be subsets of each, and it follows that can be extended to .
If does not contain the set , then each set in cannot contain as otherwise we could choose one and further labels that are all different from from the sets in , which does not yield a black configuration of . Using, again, the fact that in each black configuration of there are at most labels , it follows that can be extended to .
By applying Lemma 2.4 and Observation 2.5, we obtain that replacing the black constraint of by the configurations
will result in a black relaxation of . Computing the white configurations of the relaxation in an analogous manner to the approach in Lemma 4.1, we obtain the configurations
Finally, renaming the sets in the black and white constraint of the relaxation according to
shows that the relaxation is identical to , thereby proving the lemma. ∎
Lemma 4.3.
For any , and , problem is a white relaxation of .
Proof.
Similarly to the proof of Lemma 4.1, we start by showing that any configuration in the white constraint of can be extended to one of the configurations
Consider an arbitrary configuration in the white constraint of . We distinguish three cases.
Analogously to the argumentation in the case distinction in the proof of Lemma 4.1, we obtain that if contains the set , then can be extended to , and if contains the set , then can be extended to . Hence, consider the last remaining case, i.e., that contains neither the set nor the set . Again, analogously to the argumentation in the proof of Lemma 4.1, we see that each set in must contain the label , which in turn implies that at most sets in can contain the label . Since at most sets in can contain an , it follows that can be extended to .
Continuing analogously to the proof of Lemma 4.1, by applying Lemma 2.4 and Observation 2.5, and computing the black constraint, we obtain a white relaxation of that is given by the white configurations
and the black configurations
Now adding the black configuration (which can only relax the problem further) and renaming the sets according to
yields a relaxation that is identical to , thereby proving the lemma. ∎
We now prove that, if the parameters , , and are not too large, then the problem is not round solvable.
Lemma 4.4.
For any , , , and , there is no deterministic white algorithm that solves in rounds.
Proof.
For a contradiction, assume that such a white algorithm exists. Since the algorithm is deterministic and, in the port numbering model, each node has exactly the same information in the beginning, i.e., after rounds, each white node will necessarily choose the same configuration from the white constraint and output the labels contained in on the incident edges according to a fixed bijective function that maps the set of port numbers (or, in other words, the set of incident edges) to the (multi)set of labels in . For each edge , call the port number that the white endpoint of assigns to edge the white port of . From the above, it follows that for each label in , there is a fixed port number such that each edge whose white port equals will receive the output . Therefore, w.l.o.g., we can assume the following. If , then each edge whose white port equals receives output label ; if , then each edge whose white port equals receives output label ; if , then each edge whose white port equals receives output label . Note that for the second case, our choice of parameters ensures that contains at least one .
Now, consider a black node such that the white port of each edge incident to equals . Clearly, there exist input graphs in which such a node occurs. Depending on which of the white configurations was chosen as , the multiset consisting of the output labels of the edges incident to is , , or . As, due to our choice of parameters, none of these multisets is a black configuration of , we obtain a contradiction to the correctness of the algorithm, which proves the lemma. ∎
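The adversarial argument above reduces to a finite check on any explicitly given problem description: the chosen white configuration determines which label sits on each port, and a black node whose incident white ports all carry the same port number sees d copies of a single label. The sketch below brute-forces this condition; the tuple-of-strings encoding of configurations and the label names in the test are placeholders:

```python
from collections import Counter

def zero_round_white_solvable(white_constraint, black_constraint, d):
    """A zero-round deterministic white algorithm in the port numbering
    model must pick one fixed white configuration and assign its labels
    to ports. An adversary can build a black node all of whose incident
    white ports carry the same port number p, so that node sees d
    copies of whatever label sits on port p; correctness therefore
    forces d copies of *every* label of the chosen configuration to
    form a black configuration."""
    black = {frozenset(Counter(cfg).items()) for cfg in black_constraint}
    def monochrome_ok(lab):
        return frozenset(Counter([lab] * d).items()) in black
    return any(all(monochrome_ok(lab) for lab in set(cfg))
               for cfg in white_constraint)
```

This mirrors the structure of the proof: the algorithm is solvable in zero rounds exactly when some white configuration survives every monochromatic black node the adversary can construct.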
We finally combine all the results obtained above to prove a lower bound for the bipartite maximal matching problem in the port numbering model.
Theorem 4.5.
In the port numbering model, for any and , the white round complexity of bipartite maximal matching is at least
In particular, the white round complexity of BMM is at least .
Proof.
By Lemmas 4.1, 4.2, and 4.3, we obtain the following sequence of problems in the round elimination framework if we start with bipartite maximal matching:
Each problem in the sequence is obtained from the previous problem by applying the function or the function , and additionally relaxing the obtained problem. By Theorem 2.1, it follows that each problem has a black, resp. white complexity that is at least one round less than the white, resp. black, complexity of the previous problem, depending on which of the two functions was applied in the respective step.
With an analogous argumentation to the one in the proof of Lemma 4.4, we can see that has white round complexity if and only if or , and also has black round complexity if and only if or . As the theorem assumes and only promises a round lower bound for the case where , the theorem statement holds for these cases. Hence, assume that (and ).
Consider first the case that , i.e., . Due to the growth behavior of the parameters in the above sequence promised by Lemma 4.1, the problem obtained after steps, starting from , satisfies
By Lemma 4.4, we obtain that, for these parameters and , problem cannot be solved in rounds by a black algorithm, which implies that the complexity of bipartite maximal matching must be at least .
Now, consider the case that . Similarly to the previous case, we obtain that the problem in the above sequence obtained after steps satisfies
Again, by Lemma 4.4, we obtain that, for these parameters and , problem cannot be solved in rounds by a white algorithm, which implies that the complexity of bipartite maximal matching must be at least . ∎
Note that for the case , there is a trivial round upper bound, which is why we do not cover this case in Theorem 4.5.
5 Upper bounds
We will now prove upper bounds for the family of bipartite maximal matching problems. The algorithm we provide is very similar to the proposal algorithm of Hanckowiak1998, which works as follows: for , at round , white nodes propose to be matched to their th neighbor (if it exists), and unmatched black nodes accept one proposal and reject all the others. This algorithm requires rounds, but it can be made round faster by making black nodes propose and white nodes output. For the special case of BMM, i.e., bipartite maximal matching, the algorithm that we propose achieves the same running time as the proposal algorithm. The main difference is that, in the general case, our algorithm achieves a better intermediate state than the proposal algorithm, i.e., the partial solution that it maintains at each step is better than the one maintained by the original proposal algorithm, and this allows us to get exact bounds for the whole family of problems. In the following, we will consider only the case where , since for there exists a trivial rounds algorithm that simply puts each edge into the matching.
The high-level idea of the algorithm is the following. White nodes start by sending matching proposals to arbitrary black neighbors. Black nodes that receive proposals accept one and reject the others. Black nodes that did not receive any proposal send proposals to white neighbors that have not previously sent a proposal to them (but can be chosen arbitrarily apart from that). White nodes that receive proposals and are not matched accept one and reject the others. By repeating this procedure for a sufficiently large number of rounds, we can ensure that if a black or white node is still unmatched, then a large number of its neighbors are already matched. This is the main difference from the standard proposal algorithm, where only one side proposes: if we stop the execution of that algorithm before its natural termination and look at the partial solution it maintains, we see that the partial solution of the standard proposal algorithm maintains good guarantees for one side only.
We will now formally describe the procedure. Each node keeps track of the state of each incident edge, by maintaining four sets:

The set contains all free edges, i.e., the edges over which no request has ever been sent or received.

The set contains all edges that are already part of the matching.

The set contains all edges where the node sent a proposal.

The set contains all edges where the node received a proposal.
Note that the design of the algorithm will ensure that each edge will be used for at most one proposal in total, which implies that the set and the set are disjoint. In the beginning, each node initializes the set with all incident edges, and all the other sets as empty sets. Then, nodes apply the following round procedure repeatedly. In each round, the roles of active and passive are swapped between black and white nodes. Initially, white nodes are active.

Active nodes with do the following:

If contains at least one edge , send an acceptance over the edge , and put into the set .

Otherwise, remove edges from the set (or all edges if ), add them to the set , and send proposals over these edges.


Passive nodes with do the following:

If acceptances are received over a set of edges , add the elements of to .

If requests are received over a set of edges , add elements of to .

If at some point for some node , then terminates.
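As a sanity check, the procedure above can be simulated centrally. The sketch below implements only the simplest setting (matching capacity one, and a single proposal per active phase, whereas the procedure in the text sends batches of proposals and supports higher capacities); the node identifiers and graph encoding are illustrative:

```python
def alternating_proposals(white_adj, black_adj, rounds):
    """Toy, centralized simulation of the alternating proposal
    procedure. white_adj/black_adj map each node to its neighbor
    list; white nodes are active in even rounds, black nodes in odd
    rounds. Returns the set of matched (white, black) edges."""
    free = {v: list(nbrs) for v, nbrs in
            list(white_adj.items()) + list(black_adj.items())}
    received = {v: [] for v in free}   # proposals awaiting an answer
    matched = {v: None for v in free}
    matching = set()
    for r in range(rounds):
        active = white_adj.keys() if r % 2 == 0 else black_adj.keys()
        proposals = []
        for v in active:
            if matched[v] is not None:
                continue
            pending = [u for u in received[v] if matched[u] is None]
            if pending:                     # accept one proposal
                u = pending[0]
                matched[v], matched[u] = u, v
                matching.add((v, u) if v in white_adj else (u, v))
            elif free[v]:                   # otherwise propose
                proposals.append((v, free[v].pop()))
            received[v] = []                # stale proposals lapse
        for v, u in proposals:              # passive side records them
            received[u].append(v)
    return matching
```

Running the simulation for enough rounds on a small bipartite graph yields a valid matching that is also maximal, matching the guarantee argued below: an unmatched node has exhausted its free edges, so all the rejected proposals certify matched neighbors.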
In the following, we prove that, by applying this procedure times, we obtain a solution (known by white nodes) for the bipartite maximal matching problem, where .
Regarding the maximum number of incident edges in the matching, note that a node sends at most proposals in parallel, and if at least one is accepted, then the node stops participating in the following phases. Thus, a node is matched with at most neighbors.
Next, we show that, for each node , if is empty after steps, then neighbors of are matched (i.e., have nonempty ). Notice that a node that is still unmatched after steps has been active for steps, and in those steps it added a total of different edges to the set , and hence sent proposals. If all the proposals are rejected, it means that the receivers of those proposals are matched with different nodes, and thus if a node is unmatched, then at least neighbors are matched. Notice that black nodes may not know if they are matched or not: the proposals that they sent in round are received by white nodes that output the solution without informing black nodes about their decisions.
Hence, if we want to solve the bipartite maximal matching problem, we can set