1 Introduction
Locality of locally checkable problems.
One of the big themes in the theory of distributed graph algorithms is locality: given a graph problem, how far does an individual node need to see in order to be able to produce its own part of the solution? This idea is formalized as the time complexity in the LOCAL model [17, 22] of distributed computing.
While we are still very far from understanding the locality of all possible graph problems, there is one highly relevant family of graph problems that is now close to being completely characterized: locally checkable labeling problems, or in brief LCLs. In essence, LCLs are graph problems in which feasible solutions are easy to verify in a distributed setting—if a solution looks good in all local neighborhoods, it is also good globally. This family of problems was introduced in the seminal paper by Naor and Stockmeyer [19] in the 1990s, and while the groundwork for understanding the locality of LCLs was done already in the 1980s–1990s [14, 9, 20, 17, 18], most of the progress is from the past three years [3, 2, 4, 7, 6, 10, 11, 12, 13, 8, 5].
There are many relevant graph classes to study, but for our purposes the most interesting case is general bounded-degree graphs. We only assume that there is some constant upper bound Δ on the maximum degree of the graph, and other than that there is no promise about the structure of the input graph. If there are n nodes, the nodes will have unique identifiers from {1, 2, …, poly(n)}, and initially each node knows n, Δ, its own identifier, and its own degree; everything else it has to learn through communication.
For bounded-degree graphs, the state of the art is summarized in Figure 1. The figure represents the landscape of all possible distributed time complexities of LCL problems, both for deterministic and randomized algorithms. There are infinite families of problems with distinct time complexities, but there are also large gaps: for example, Chang et al. [7] showed that there is no LCL with a time complexity between ω(log* n) and o(log n). For deterministic algorithms, the work of characterizing the possible time complexities of LCL problems is near-complete.
Role of randomness.
What we aim to understand is how much randomness helps with LCLs. As shown in Figure 1, there are some problems in which randomness helps exponentially. The most prominent example is sinkless orientation: its deterministic complexity is Θ(log n), while its randomized complexity is Θ(log log n) [4, 7, 11].
On the one hand, it is known that there is at most an exponential gap between deterministic and randomized complexities [7]. On the other hand, there are also lower bounds that exclude many possible combinations of deterministic and randomized time complexities. As illustrated in Figure 1, the work by Chang and Pettie [6] and Fischer and Ghaffari [10] implies that there is no LCL with deterministic complexity Θ(log n) and randomized complexity, e.g., Θ(√log n): any LCL with randomized complexity o(log n) can already be solved in 2^{O(√(log log n))} randomized rounds. If a problem can be solved in deterministic logarithmic time, then either randomness helps a lot or not at all.
Sinkless orientation and closely related problems, such as Δ-coloring and the algorithmic Lovász local lemma, are currently the only LCLs for which randomness is known to help. Indeed, all results known prior to our work are compatible with the following conjecture:
Conjecture.
If the deterministic complexity of an LCL is Θ(log n), then its randomized complexity is either Θ(log n) or at most 2^{O(√(log log n))}. Otherwise the randomized complexity is asymptotically equal to the deterministic complexity.
In particular, randomness helps exponentially or not at all.
We show that the conjecture is false: there are LCL problems that benefit from randomness, but only polynomially. We show how to construct, e.g., an LCL with deterministic complexity Θ(log² n) rounds and randomized complexity Θ(log n · log log n) rounds.
Technique: padding.
The main technical idea is to introduce the concept of padding in the construction of LCL problems; the basic idea is inspired by the padding technique in classical computational complexity theory [1, Sect. 2.6].
We start with an LCL problem Π and a suitable family of gadgets F. Then we use the gadgets to construct a new graph problem Π' such that both the deterministic and the randomized complexity of Π' are higher than those of Π. More concretely, let Π be the problem of finding a sinkless orientation, with randomized complexity Θ(log log n) and deterministic complexity Θ(log n), and let F be a suitable family of treelike graphs. By applying F to Π, we obtain Π' in which both the randomized and the deterministic time complexity have increased by a factor of Θ(log n); hence the randomized complexity of Π' is Θ(log n · log log n) and the deterministic complexity is Θ(log² n). By applying F to Π' recursively, we can then further obtain randomized complexity Θ(log^k n · log log n) and deterministic complexity Θ(log^(k+1) n) for any constant k.
Figure 2 shows what we would ideally like to do: given a hard instance G for Π, we replace each node with a suitable gadget to obtain a hard instance G' for Π'. The intuition here is that padding increases distances, so if all gadgets happened to be trees of depth d, then solving Π' on G' is exactly Θ(d) times as hard as solving Π on G.
This would be easy to implement if we had a promise that the input is of a suitable form, but with a promise one can trivially construct LCLs with virtually any complexity. The key challenge is implementing the idea so that Π' is an LCL in the strict sense and so that we can control its distributed time complexity in the family of all bounded-degree graphs. Some challenges we need to address include:
- What if we have an input graph that is not of the right form, i.e., it does not consist of valid gadgets connected to each other?
- What if we have an input graph in which the gadgets have different depths?
The first challenge we overcome by making the gadgets locally checkable. In essence, a node will be able to see within bounded distance whether it is part of an invalid gadget, and it is also able to construct a locally checkable proof of error. The LCL Π' is defined so that we have to either solve the original problem Π or produce locally checkable proofs of errors. This ensures that:
- An algorithm solving Π' cannot cheat and claim that the input is invalid if this is not the case.
- The adversary who constructs the input never benefits from a construction that contains invalid gadgets, as they will in essence result in “don’t care” nodes that only make solving Π' easier.
See e.g. [15, 16] for more details on the concept of locally checkable proofs; in our case it will be essential that errors have a locally checkable proof with a constant number of bits per node, so that we can interpret it as an LCL.
The second challenge we overcome by choosing the original problem Π and the gadget family F so that the worst-case input that the adversary can construct is essentially of the following form:
- Start with a k-node graph G that is a worst-case input for Π.
- Replace each node with an s-sized gadget, which has depth Θ(log s).
This way, in the worst case the adversary can construct a graph with n = Θ(ks) nodes, and if solving Π on G took T_rand(k) rounds for randomized algorithms and T_det(k) rounds for deterministic algorithms, then solving Π' on G' will take Θ(log s · T_rand(k)) rounds for randomized algorithms and Θ(log s · T_det(k)) rounds for deterministic algorithms. We can show that a different balance between the size of G and the depth of each gadget will not result in a harder instance; both much larger and much smaller gadgets will only make the problem easier.
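As a sanity check of this balancing argument, the following sketch numerically maximizes the adversary's payoff, under the simplifying assumptions that a gadget of size s has depth log₂ s and that the base problem costs log₂ k rounds deterministically and log₂ log₂ k rounds randomized on a k-node graph (illustrative stand-ins for the asymptotic bounds; all names here are ours):

```python
import math

def det_cost(n, s):
    # adversary's payoff: gadget depth (log s) times the deterministic
    # cost of the base problem on the k = n/s underlying nodes (log k)
    k = n / s
    return math.log2(s) * math.log2(k)

def rand_cost(n, s):
    # same, with the randomized base cost (log log k); guard small values
    k = n / s
    return math.log2(s) * math.log2(max(math.log2(k), 2))

n = 2 ** 40
# try gadget sizes s = 2^i for i = 1..39
det_best = max(range(1, 40), key=lambda i: det_cost(n, 2 ** i))
print(det_best)  # 20, i.e. s = sqrt(n): the balanced split maximizes hardness
```

The deterministic optimum sits at the balanced split s = √n, while the randomized payoff is maximized by somewhat larger gadgets; neither extreme (very large or very small gadgets) helps the adversary.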
Discussion and open questions.
If we write d(n) for the deterministic complexity and r(n) for the randomized complexity of a given LCL, we have now seen that we can engineer LCLs that satisfy, e.g., any of the following:
- d(n) = Θ(log² n) and r(n) = Θ(log n · log log n),
- d(n) = Θ(log³ n) and r(n) = Θ(log² n · log log n),
- in general, d(n) = Θ(log^(k+1) n) and r(n) = Θ(log^k n · log log n) for any constant k ≥ 1.
However, if we look at the ratio d(n)/r(n), we see that all examples with r(n) = o(d(n)) happen to satisfy d(n)/r(n) = O(log n).
The main open question is whether we can construct LCLs with d(n)/r(n) = ω(log n).
This question is closely connected to the complexity of network decomposition, which is a long-standing open question: Ghaffari et al. [12] implies that, in the context of LCLs, any randomized algorithm running in time r(n) can be transformed into a deterministic algorithm running in time O(r(n) · ND(n)), where ND(n) is the time required to compute a network decomposition in graphs of size poly(n) with a deterministic distributed algorithm. The best known upper bound for ND(n) is 2^{O(√log n)}, due to Panconesi and Srinivasan [21]. The existence of any LCL with d(n)/r(n) = ω(log n) would therefore imply a superlogarithmic lower bound for network decomposition; such a bound is currently not known.
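To get a feeling for the bound 2^{O(√log n)}: it grows faster than any polylogarithm, yet slower than any polynomial. A small arithmetic sanity check (pure arithmetic, no claims about the algorithms themselves):

```python
import math

def ps_bound(n):
    # the shape of the Panconesi–Srinivasan bound: 2^sqrt(log2 n)
    return 2 ** math.sqrt(math.log2(n))

# grows faster than log n ...
assert ps_bound(2 ** 400) > math.log2(2 ** 400)  # 2^20 vs. 400

# ... but is n^{o(1)}: the effective exponent sqrt(log n)/log n vanishes
exponents = [math.sqrt(math.log2(n)) / math.log2(n) for n in (2**16, 2**64, 2**256)]
assert exponents[0] > exponents[1] > exponents[2]
```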
2 Preliminaries
Model.
The LOCAL model is synchronous, that is, the computation proceeds in synchronous rounds. In each round, each entity sends messages to its neighbors, receives messages from them, and performs some computation based on the data it receives. In this model, the size of messages can be arbitrarily large, and the computational power of an entity is not bounded. The time complexity of an algorithm running in the LOCAL model is given by the number of rounds that the entities need in order to solve the problem.
The LOCAL model is equivalent to a model where each entity running a T-round algorithm: (i) gathers its radius-T neighborhood, i.e., the entity learns the structure of the network around it up to distance T, along with the inputs that the entities in this neighborhood might have; (ii) performs some computation based on the data that has been gathered; (iii) produces its own local output.
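This equivalence is easy to mimic in a centralized simulator: a T-round algorithm is simply a function mapping each node's radius-T view to its output. A minimal sketch (the graph encoding and the toy output function are our own illustrations):

```python
from collections import deque

def ball(adj, v, T):
    """Return the set of nodes within distance T of v (plain BFS)."""
    seen, frontier = {v}, deque([(v, 0)])
    while frontier:
        u, d = frontier.popleft()
        if d == T:
            continue
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                frontier.append((w, d + 1))
    return seen

def run_local(adj, T, out):
    """Each node outputs a function of its radius-T view only."""
    return {v: out(v, ball(adj, v, T)) for v in adj}

# toy 2-round algorithm: output the largest identifier within distance 2
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # a path 0-1-2-3
res = run_local(adj, 2, lambda v, view: max(view))
print(res)  # {0: 2, 1: 3, 2: 3, 3: 3}
```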
A distributed network is represented by a graph with nodes and edges, where a node represents a specific entity of the network, and there is an edge between two nodes if and only if there is a communication link between the entities that they represent. We denote a graph by G = (V, E), where V is the set of nodes and E the set of edges. The degree of a node is the number of its incident edges. The incident edges are numbered, that is, we assume that a node v of degree deg(v) has ports numbered from 1 to deg(v), to which its incident edges are connected. Each node, when receiving a message, knows the port from which the message arrives. We denote by Δ the maximum degree of the graph.
For technical reasons, we deviate from the usual assumptions and allow G to be disconnected and to contain self-loops and parallel edges. While all upper and lower bounds that we present hold in this larger class of graphs, our final results hold for simple graphs as well.
Locally checkable labeling problems.
LCL problems are defined on constant-degree graphs, i.e., graphs where Δ = O(1). Each node has an input label from a constant-size set Σ_in, and must produce an output label from a constant-size set Σ_out. The output must be locally checkable, that is, there must exist a constant-time distributed algorithm that can check the correctness of a solution. If the solution is globally correct, this algorithm must accept at all nodes; otherwise it must reject at at least one node. A distributed algorithm solving an LCL problem in time T(n) is an algorithm that, for any graph G with n nodes, given n and Δ, runs in T(n) rounds and outputs a label for each node, such that the LCL constraints are satisfied at each node. For randomized algorithms, we require a global high probability of success, that is, the probability that the solution is wrong must be at most 1/n.

An example of an LCL problem is the proper (Δ+1)-coloring of the nodes of a graph: all nodes have the same input, that is, a special character denoting the empty input label, and they must produce as output a color in {1, …, Δ+1}. In a proper coloring it must hold that, for any pair of neighbors, their colors are different. It is easy to see that, if the graph is properly colored, each node will see a proper solution locally; otherwise there will be two neighboring nodes with the same color, noticing the error. Many other natural problems fall into the category of LCLs, such as edge coloring, maximal matching, maximal independent set, sinkless orientation, etc.
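The constant-time verifier for this coloring example can be sketched as follows; the adjacency-list encoding and function names are ours:

```python
def check_coloring(adj, color, delta):
    """Local LCL verifier for proper (delta+1)-coloring: each node
    accepts iff its color is in range and differs from all neighbors'."""
    def accepts(v):
        return (color[v] in range(1, delta + 2)
                and all(color[v] != color[u] for u in adj[v]))
    return {v: accepts(v) for v in adj}

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}        # a triangle, delta = 2
ok = check_coloring(adj, {0: 1, 1: 2, 2: 3}, 2)
assert all(ok.values())                         # proper coloring: all accept
bad = check_coloring(adj, {0: 1, 1: 1, 2: 3}, 2)
assert not all(bad.values())                    # two neighbors share a color
```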
Deviating from the common way of writing inputs and outputs of LCLs only on nodes (or, occasionally, edges), we will write inputs and outputs on nodes, edges, and node-edge pairs. This allows us to conveniently assign different labels to each half of an edge, something we will make use of in Section 4. For technical reasons we restrict our considerations to the subclass of LCLs where the local constraints determining whether a solution is correct can be checked “on nodes and edges”. Note that almost all commonly studied LCL problems can be reformulated in this form, by requiring each node to return, apart from its own output, also the outputs of all nodes at a constant distance. Formally, these node-edge-checkable LCLs, or neLCLs, are defined as follows.
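In code, such node-edge pairs (half-edges) are simply (v, e) tuples, two per edge; this hypothetical encoding is used only for illustration:

```python
def half_edges(edges):
    """Each undirected edge {u, v} yields two node-edge pairs (half-edges)."""
    H = []
    for e in edges:
        u, v = e
        H.append((u, e))
        H.append((v, e))
    return H

E = [(0, 1), (1, 2)]
H = half_edges(E)
assert len(H) == 2 * len(E)
# labels can now be assigned independently to each half of an edge:
label = {h: "O" if h[0] == min(h[1]) else "I" for h in H}
```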
Node-edge-checkable LCLs.
Let H = { (v, e) : v ∈ V, e ∈ E, and v is an endpoint of e } be the set of incident node-edge pairs. The input to an neLCL is given by assigning an input label to each element of V ∪ E ∪ H; a solution to an neLCL is given by each node v assigning an output label to itself and to each “incident” element of E ∪ H, where for each edge e = {u, v}, nodes u and v have to choose the same output label for e. Apart from the sets Σ_in and Σ_out of input and output labels, an neLCL is defined by a set C_V of node constraints and a set C_E of edge constraints, where C_V describes for each node v which output label configurations on v and its incident edges and node-edge pairs are correct (depending on the input labels on those nodes, edges, and node-edge pairs), and C_E describes for each edge e = {u, v} which output label configurations on u, v, e, (u, e), and (v, e) are correct (again, depending on the input labels on those elements). Note that C_V and C_E do not depend on the choice of v or e in the above description, or on the port numbers or identifiers assigned to the edges or nodes of the graph.
As an example, let us see how sinkless orientation can be formulated as an neLCL. Each node v has to output on each incident edge, or more precisely on each pair (v, e) ∈ H, either the label O (outgoing) or the label I (incoming). The constraint on nodes is that there must exist an incident edge labeled O. This guarantees that no node is a sink. The constraint on edges is that, whenever one endpoint is labeled O, the other endpoint must be labeled I, and vice versa. This guarantees that the edges are oriented consistently. Note that in this example, the constraints are independent of any input labels. See Figure 3 for an illustration.
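These two constraints can be checked mechanically. A sketch of the verifier, with our own encoding of half-edge labels O and I:

```python
def check_sinkless(adj_edges, out):
    """adj_edges[v] = list of incident edges; out[(v, e)] in {'O', 'I'}.
    Node constraint: some incident half-edge labeled 'O' (no sink).
    Edge constraint: the two halves of an edge carry opposite labels."""
    node_ok = {v: any(out[(v, e)] == "O" for e in es)
               for v, es in adj_edges.items()}
    edge_ok = {}
    for v, es in adj_edges.items():
        for e in es:
            u = e[0] if e[1] == v else e[1]   # the other endpoint
            edge_ok[e] = out[(v, e)] != out[(u, e)]
    return all(node_ok.values()) and all(edge_ok.values())

# a directed 3-cycle 0 -> 1 -> 2 -> 0 is a valid sinkless orientation
es = {(0, 1), (1, 2), (0, 2)}
adj_edges = {v: [e for e in es if v in e] for v in range(3)}
out = {(0, (0, 1)): "O", (1, (0, 1)): "I",
       (1, (1, 2)): "O", (2, (1, 2)): "I",
       (2, (0, 2)): "O", (0, (0, 2)): "I"}
assert check_sinkless(adj_edges, out)
```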
3 Padded LCLs
In this section we provide a technique that constructs new LCLs in a black-box manner. More precisely, given an neLCL Π and a collection of graphs with certain properties, so-called gadgets, we can construct a new neLCL Π' with changed deterministic and randomized complexities. Informally, the idea is that the hard graphs for Π' are so-called padded graphs, i.e., graphs obtained by taking some graph G and replacing each node of G with a gadget, thereby “padding” G. See Figure 2 for an example.
Our new neLCL Π' is constructed in a way that ensures that in such a padded graph, solving Π' is equivalent to solving Π on the underlying initial graph G. Moreover, the padding itself will make sure that the distances between nodes of G increase; in other words, simulating an algorithm that solves Π on G incurs an additional communication overhead. Consequently, the hard graphs for Π' are given by those instances where the underlying graph G belongs to the hard graphs for Π and the size of the gadgets used in the padding is finely balanced such that (1) the underlying graph is large enough (as a function of the number of nodes of the padded graph) to ensure a sufficiently large runtime for solving Π on G, and (2) the gadgets are large enough to ensure a sufficiently large communication overhead.
We will start the section by defining gadgets and families thereof; in particular, we will describe the special properties that will enable us to define Π' and prove that it has the desired complexities. Then we will give a formal definition of padded graphs which, intuitively, are the key concept for the subsequent definition of the new neLCL Π', even though, formally, they do not appear in the definition. After defining Π', we will conclude the section by showing how the complexity of the new neLCL Π' is related to the complexity of the old neLCL Π.
The exact relation between the complexities of the two neLCLs (which relies on the subsequently defined concept of a gadget family) is given in Theorem 1. Let d_Π(n), resp. r_Π(n), denote the deterministic, resp. randomized, complexity of an LCL Π on instances of size n. Then the following holds.
Theorem 1.
Let f be a function such that, for each n, we have f(n+1) ≤ f(n) + 1, and there exists some constant c with f(n) ≤ n^(1/c) for all n. For each neLCL problem Π and each gadget family F_f, there exists an neLCL problem Π' with deterministic complexity O(f(n) · d_Π(n)) and Ω(f(n) · d_Π(n)), and randomized complexity O(f(n) · r_Π(n)) and Ω(f(n) · r_Π(n)).
3.1 Gadgets
Definition 2.
An (s, d)-gadget is a (labeled) connected graph G that satisfies the following:
- The number of nodes is s.
- There are exactly Δ special nodes, labeled P_1, …, P_Δ, called ports. All other nodes are labeled ⊥.
- The diameter of G, and hence also the pairwise distances between the ports, are at most d.
Let f be some function. A gadget family F_f is a set of graphs satisfying the following:
- Each G ∈ F_f is an (s, d)-gadget for some s and d.
- For each n, there exists some G ∈ F_f with Θ(n) nodes such that the pairwise distances between the ports are all in Θ(f(n)). Let this gadget be G_n.

- There is an neLCL Π_F with the following properties, where G denotes the input graph for Π_F.
  - The output label set for Π_F is {OK} ∪ Σ_err, for some finite set Σ_err.
  - If G ∈ F_f, then the unique (globally) correct solution for Π_F uses only the output label OK.
  - If G ∉ F_f, then there exists a (globally) correct solution for Π_F that uses only output labels from Σ_err.
  - There is a deterministic distributed algorithm A_F that, given an upper bound N on n, where n is the number of nodes of G, solves Π_F in O(N) rounds. Moreover, if G ∉ F_f, then A_F uses only output labels from Σ_err. We call the (global) output of A_F a locally checkable proof of error.
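As a toy example of such a family, consider path gadgets: s nodes on a path whose two endpoints are the ports, so d = s - 1. Checking membership centrally is then easy; this is only an illustration of what the verdict OK versus error means, not the actual distributed algorithm with its locally checkable error proofs (all names are ours):

```python
def is_path_gadget(adj, labels):
    """Check: adj is a path whose endpoints are labeled 'P1'/'P2' and
    whose inner nodes are labeled '_' (assumes adj is one connected
    component); corresponds to verdict OK vs. an error label."""
    degs = {v: len(adj[v]) for v in adj}
    ends = [v for v, d in degs.items() if d == 1]
    if len(ends) != 2 or any(d > 2 for d in degs.values()):
        return False
    if sorted(labels[v] for v in ends) != ["P1", "P2"]:
        return False
    return all(labels[v] == "_" for v in adj if v not in ends)

adj = {0: [1], 1: [0, 2], 2: [1]}
assert is_path_gadget(adj, {0: "P1", 1: "_", 2: "P2"})       # valid gadget
assert not is_path_gadget(adj, {0: "P1", 1: "P2", 2: "_"})   # port mislabeled
```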

3.2 Padded graphs
Intuitively, a padded graph is a graph obtained by starting from some arbitrary graph G and replacing each node with a gadget from F_f. We now formally define the family of padded graphs for a given graph G.
Definition 3.
Given a graph G with maximum degree Δ and a gadget family F_f, the graph family P(G, F_f) is the set of all graphs that can be obtained by the following process.
Start from G. For each node v, pick a gadget from F_f, where different gadgets may be picked for different nodes. Let G_v be the gadget chosen for node v. The final graph is the union of the G_v (over all v), augmented by the following additional edges: for any edge of G connecting port i of u to port j of v, add an edge between node P_i of G_u and node P_j of G_v. Moreover, in the final graph we label each edge already present in the union of the G_v with int, and each edge that has been added in the augmentation step with port.
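A compact sketch of this process, using path gadgets whose two endpoints play the role of the ports (so it only handles underlying graphs of maximum degree 2, such as paths); we label gadget-internal edges "int" and the connecting edges "port", and all naming is our own:

```python
import itertools

def pad(G_edges, gadget_size):
    """Replace each node of G by a path gadget; connect gadget endpoints.
    Returns (edges, labels): internal edges labeled 'int', added ones 'port'."""
    nodes = sorted({v for e in G_edges for v in e})
    gid = {}                      # (original node, index in gadget) -> fresh id
    fresh = itertools.count()
    for v in nodes:
        for i in range(gadget_size):
            gid[(v, i)] = next(fresh)
    edges, lab = [], {}
    for v in nodes:               # gadget-internal path edges
        for i in range(gadget_size - 1):
            e = (gid[(v, i)], gid[(v, i + 1)])
            edges.append(e)
            lab[e] = "int"
    for (u, v) in G_edges:        # one port edge per original edge
        e = (gid[(u, gadget_size - 1)], gid[(v, 0)])
        edges.append(e)
        lab[e] = "port"
    return edges, lab

edges, lab = pad([(0, 1), (1, 2)], 4)   # pad a 3-node path with 4-node gadgets
assert sum(1 for e in edges if lab[e] == "port") == 2
assert sum(1 for e in edges if lab[e] == "int") == 3 * 3
```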
3.3 New LCL
Given an LCL Π and a gadget family F_f, in this section we define a new LCL Π' that, informally, can be described as follows. Each edge e of the input graph for Π' is assigned a special label that indicates whether e belongs to a gadget or to “the underlying graph”, denoted by G. Intuitively, G is the graph obtained by contracting the connected components induced by the edges labeled as belonging to a gadget. For each such connected component, there are two possibilities: either it constitutes a gadget from our gadget family F_f, in which case we call it a valid gadget, or it does not, in which case we call it an invalid gadget.
In each invalid gadget, Π' can be solved correctly by the contained nodes providing a locally checkable proof of the invalidity of the gadget. Consider the graph G* obtained by deleting all gadgets where the contained nodes proved an error. Assuming that all invalid gadgets have been claimed to be invalid by their contained nodes (we do not require that nodes in an invalid gadget actually choose this option) and consequently deleted, the obtained graph G* may still not be a padded graph as described in Section 3.2. In fact, while padded graphs satisfy that a gadget corresponding to a node of degree δ has exactly δ port nodes connected to port nodes of other gadgets, G* may have some port nodes that were connected to removed gadgets; thus the valid port nodes are an arbitrary subset of {P_1, …, P_Δ}. This implies that we can transform G* to a valid padded graph in a natural way, by just mapping the (say, δ) valid port nodes to the ports from 1 to δ. We will actually require nodes to produce such a mapping, and to mark each port node as valid or invalid (see Figure 4 for an example). Then, Π' is solved correctly if the nodes solve Π on the graph obtained from G* by contracting all valid gadgets.
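The remapping of the surviving ports is just the monotone bijection onto an initial segment; a one-line sketch (ours):

```python
def remap_ports(valid_ports):
    """Map the sorted valid port indices monotonically onto 1..len(valid_ports)."""
    return {p: i + 1 for i, p in enumerate(sorted(valid_ports))}

assert remap_ports({2, 5, 7}) == {2: 1, 5: 2, 7: 3}
```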
Some care is needed to ensure that the above rough outline can be expressed in terms of neLCL constraints and to deal with the subtleties introduced thereby. We now proceed by defining Π'.
Let the neLCL Π be given by input label set Σ_in, output label set Σ_out, node constraint set C_V, and edge constraint set C_E. Let F_f be an arbitrary gadget family and let Π_F be as described in Definition 2. Recall that the graphs in F_f are labeled. Let Σ_F = {P_1, …, P_Δ, ⊥} denote the labels used for labeling the graphs in F_f, which will be part of our input labels for Π'. Let Σ'_out, C'_V, and C'_E denote the output labels, node constraints, and edge constraints for Π', respectively. In particular, we have Π' = (Σ'_in, Σ'_out, C'_V, C'_E).
W.l.o.g., we can (and will) assume that, both in Π and in Π_F, each element of V ∪ E ∪ H is assigned exactly one input label (and each will receive exactly one output label), as we can encode multiple labels in one label and add an “empty label” for the case that no label was assigned. However, for convenience, we might deviate from this underlying encoding in the description of the new neLCL Π'. We now give a formal definition of Π'. We will later provide an informal explanation of each part.
Input labels.
- Each node has a label in Σ_in × Σ_F, where Σ_F = {P_1, …, P_Δ, ⊥} are the node labels of the gadgets.
- Each edge has a label in Σ_in × {int, port}.
- Each element of H has a label in Σ_in.
Output labels.
- Each node must label itself with a tuple (a, b, c), where a ∈ {OK} ∪ Σ_err is an output for Π_F, b ∈ {valid, invalid, ⊥} marks port validity, and c is either ⊥ or a label from a finite set that encodes the valid ports, the inputs, and the outputs of the virtual node corresponding to the node's gadget (made precise in the constraints below).
- Each edge must be labeled with either ⊥ or a label from Σ_out.
- Each element of H must be labeled with either ⊥ or a label from Σ_out.
Constraints.

1. Each edge with input label port has to be labeled ⊥, and each edge with input label int has to be labeled with a label from Σ_out. Each (v, e) ∈ H has to be labeled ⊥ if e has input label port, and with a label from Σ_out if e has input label int.

2. On each connected component of the subgraph induced by the edges labeled int, the neLCL Π_F has to be solved correctly. Put in a local way, for each node v the node constraints of Π_F have to be satisfied, where we ignore each edge incident to v that is labeled port, and for each edge with input label int the edge constraints of Π_F have to be satisfied.
Remark.
Here, as in the following descriptions, we will consider the labels defined above as a collection of several labels in the canonical way; e.g., each edge has several input labels, one from each of the relevant sets. Also, for simplicity, we will not explicitly mention which of the labels are relevant for the respective constraint if this is clear from the context. For instance, the (only) labels the above constraint for solving Π_F talks about (apart from the labels from {int, port} that determine which edges are considered for the constraint) are the input and output labels for Π_F.

3. Each node v has to be labeled invalid if and only if v has input label P_i for some i and, either there is no incident edge labeled port, or there are at least two incident edges labeled port. Otherwise v has to be labeled either valid or ⊥.

4. For each edge e = {u, v} with input label port the following holds: if u and v are labeled P_i and P_j for some i, j, respectively, and the Π_F-output of both u and v is OK, then the port-validity output of u and v cannot be invalid; if u is labeled P_i for some i and at least one of u and v has input label ⊥ or a Π_F-output from Σ_err, then the port-validity output of u cannot be valid.

5. For each node v with incident edges e_1, …, e_δ, if at least one of v, e_1, …, e_δ, or the corresponding node-edge pairs is assigned an output label from Σ_err, and none of the node constraints mentioned above are violated, then the node constraint for v is always satisfied, irrespective of the conditions below. If all of the mentioned elements are assigned the output label OK, resp. output labels from Σ_out, then the following conditions have to be satisfied for v, where c_v = (ports_v, in_v, out_v) denotes the virtual-node part of the output label assigned to v (the list of valid ports, the copied inputs, and the virtual outputs):
- If v is labeled P_i for some i, then i is an element of ports_v if and only if the port-validity output of v is valid.
- If v is labeled P_1, then in_v contains v's input label from Σ_in as the input of the virtual node.
- If v is labeled P_i for some i, and i ∈ ports_v, then for any incident edge e labeled port, the entries of in_v for port i coincide with e's input label from Σ_in and with (v, e)'s input label from Σ_in, respectively.
- The output label of v encodes a configuration that satisfies the node constraints from C_V. More precisely, let σ be the bijection that monotonically maps the elements of ports_v to the indices 1, …, |ports_v|, and consider a (hypothetical) node v* of degree |ports_v| with incident edges e*_1, …, e*_|ports_v|. Then labeling v*, the edges e*_σ(i), and the pairs (v*, e*_σ(i)) with the input labels recorded in in_v and the output labels recorded in out_v, respectively, yields a correct node configuration at v* according to C_V.
6. Similarly, for each edge e = {u, v}, if at least one of u, v, e, (u, e), (v, e) is assigned an output label from Σ_err and none of the node constraints mentioned above are violated, then the edge constraint for e is always satisfied, irrespective of the conditions below. If all of the mentioned elements are assigned the output label OK, resp. output labels from Σ_out, then the following conditions have to be satisfied for e, where c_u = (ports_u, in_u, out_u) and c_v = (ports_v, in_v, out_v) denote the virtual-node parts of the output labels assigned to u and v, respectively:
- If e is labeled int, then c_u = c_v.
- If e is labeled port, and u and v are labeled P_i and P_j for some i, j, respectively, then i ∈ ports_u and j ∈ ports_v, and, for a (hypothetical) edge e* = {u*, v*}, labeling u*, v*, e*, (u*, e*), and (v*, e*) with the input labels and output labels recorded in c_u and c_v for ports i and j, respectively, yields a correct edge configuration at e* according to C_E.

Informal description.

Input labels. Elements of H can be intuitively seen as endpoints of an edge; thus we will refer to them as “half-edges”. Each node, each edge, and each half-edge has an input for Π and an input for Π_F. Also, each node (resp. edge) may have a special label indicating whether it is a port node (resp. port edge).

Output labels. Each node must produce a tuple (a, b, c). The labeling a must be a valid output for Π_F. The labeling b is used to indicate the (in)correctness of the port connections. The labeling c is the one that actually contains a solution for Π. First, c contains a list of the valid ports of the gadget. Then, c contains a copy of all the inputs of the port nodes, as well as the inputs of their edges and half-edges, that is, everything that is needed to know the input of a virtual node. Finally, c contains the output of the virtual node, described as node, edge, and half-edge outputs. All these labels will be useful for checking the validity of the output for Π in a local manner.

Constraints. For each aforementioned constraint, we provide an informal description, following the same order.

- We require that outputs for Π_F do not cross gadget boundaries. Thus, we require that port edges and their half-edges are labeled ⊥, while everything else must actually contain outputs for Π_F.

- Each connected component, obtained by removing port edges from the graph, must provide a valid solution for Π_F.

- Port nodes may avoid outputting errors only in the case in which they are connected to exactly one other port node, and both of them are in a correct gadget.

- Nodes claiming that the gadget is correct must:
  - produce a list of the valid ports of the gadget;
  - copy the node input of the node labeled P_1, which will be treated as the input for the virtual node (this is an arbitrary choice, but since nodes may be provided with different inputs for Π, we need the nodes to agree on some specific input for the virtual node);
  - copy the edge and half-edge inputs of the port nodes to the output;
  - produce outputs that are correct w.r.t. the constraints of Π.


- On edges, we first check that nodes of the same gadget are giving the same output. Then, port edges check that the edge constraints for Π are satisfied on the virtual edges.

3.4 Upper and lower bounds
We now proceed by showing upper and lower bounds for the defined neLCL Π', which together imply Theorem 1. Intuitively, in order to solve Π', nodes can do the following. They start exploring the graph to see if they are in a valid gadget. If they see that their gadget is invalid, then they can produce a locally checkable proof of error. Otherwise, they need to solve the original problem, by first seeing which ports are connected to exactly one valid gadget (on all other ports they can output invalid or ⊥), and then simulating the algorithm for the original problem on the graph obtained by contracting the valid gadgets to single nodes and ignoring invalid gadgets.
3.4.1 Upper bound
Lemma 4.
Problem Π' can be solved in O(f(n) · d_Π(n)) rounds deterministically, and in O(f(n) · r_Π(n)) rounds randomized.
Proof.
Let A_F be the algorithm guaranteed by Definition 2, able to produce a locally checkable proof of the (in)validity of the gadget or, equivalently, to solve the neLCL Π_F. Each node starts by executing A_F on the connected component of the subgraph obtained by ignoring edges labeled port, which can be done in O(n) time, where n denotes the number of nodes of the input graph. For simplicity, we will refer to these connected components as gadgets, where we say that a gadget is valid if A_F returns the label OK everywhere in the gadget, and invalid if A_F returns at least one label from Σ_err. Each node v then outputs the labels returned by A_F on itself and on its incident edges and elements of H, thereby providing the part of the output labels corresponding to Π_F in the description of the output labels. Since A_F solves Π_F, this takes care of Constraint 2; by outputting ⊥ on all edges with input label port and all associated elements of H, we see that also Constraint 1 is satisfied.
If v is a node labeled ⊥, then it outputs ⊥ as its port-validity label. If v is labeled P_i for some i, then it gathers its constant-radius neighborhood and checks whether it is a “valid” port: if v has no incident edge labeled port, or at least two incident edges labeled port, then it outputs invalid. If v has exactly one incident edge labeled port, then it checks whether v itself or the other endpoint of the edge is labeled ⊥ or outputs an element of Σ_err after executing A_F. If one of these conditions is satisfied, then v outputs invalid, otherwise it outputs valid. This takes care of Constraints 3 and 4.
If a gadget is invalid, then by Constraints 5 and 6, the constraint for each node and edge in the gadget is satisfied, and we simply complete the outputs for all nodes in the gadget in an arbitrary way that conforms to the output label specifications. Hence, what remains is to assign to each node in a valid gadget the virtual-node part of the output label in a way that ensures that Constraints 5 and 6 are satisfied. This is the part where, intuitively, we solve the original problem Π on the graph obtained by ignoring all invalid gadgets and contracting the valid gadgets to single nodes, which are then connected by the edges labeled port. We proceed as follows, considering only nodes in valid gadgets. Each node collects all input and hitherto produced output information contained in its gadget and the gadget's constant-radius neighborhood, and uses the obtained knowledge to determine the first part of the virtual-node output (the list of valid ports and the copied inputs) in a way that conforms to Constraint 5. The choices for the mentioned labels immediately follow from the conditions in Constraint 5 (or can be freely chosen, for some labels).
For determining the second part of the virtual-node output (corresponding to the actual outputs in the solution of Π), each node solves the original problem as follows:
- If the aim is a deterministic algorithm for Π', then gather the radius-O(f(n) · d_Π(n)) neighborhood; if the aim is a randomized algorithm, then gather the radius-O(f(n) · r_Π(n)) neighborhood.
- Construct a (partial) virtual graph by contracting each valid gadget to a single node and deleting all nodes in invalid gadgets (note that this may result in a graph with parallel edges and/or self-loops, which is why, in our model, we allow graphs to contain these). For each virtual node v, assign port numbers from 1 to deg(v) to the incident edges in the only way that respects the order of the indices of the gadget's nodes the corresponding edges are connected to.
- Assign, as the identifier of a virtual node, the smallest identifier in its associated gadget.
- Compute a valid solution for Π for the current virtual node and its incident edges and elements of H, where the indices indicate the corresponding port for the respective edge or element of H.
Now, each node in a valid gadget transforms the output of the virtual node corresponding to its gadget into the desired tuple as follows: recall the bijection σ defined in Constraint 5, and record, for each valid port i, the outputs computed for port σ(i) of the virtual node. Note that, by construction, the recorded configuration is consistent. Now it is straightforward (if somewhat cumbersome) to check that this completion of the output satisfies the last bullet of Constraint 5 and the first of Constraint 6. The remaining second bullet of Constraint 6 follows from the fact that the computed outputs form a valid solution to the neLCL Π.
We need to show that, given their radius neighborhood, resp. radius neighborhood in the randomized case, nodes can actually find a valid solution for the original problem . To this end, we want to show that after collecting this neighborhood nodes can see up to a radius of at least , resp. , in the virtual graph, and that any virtual graph has size at most . This follows from the following observations:

In the worst case, a gadget has diameter ; this worst case occurs when a single gadget contains all the nodes of the graph.

In the worst case for the size of the virtual graph, each gadget has only constant size; even then, the virtual graph has at most nodes.
Hence, each node can simulate a round, resp. round, algorithm for (whose existence is guaranteed by 's time complexity) on the virtual graph and thus find a valid solution for , as required. Note that this simulated algorithm assumes that its input graph (i.e., the virtual graph) has size ; this assumption is consistent with the view of each node since we allow disconnected graphs, which is important in case a node sees the whole virtual graph, as the virtual graph is then interpreted as a connected component of an node graph. It follows that, in the randomized case, the failure probability of the obtained algorithm for is upper bounded by the failure probability of the used algorithm for : the algorithm for fails only if the algorithm for would fail on an node graph that contains the virtual graph as a connected component. In particular, the obtained randomized algorithm gives a correct output w.h.p.
Since the gathering process dominates the execution time, the time complexity of the obtained algorithm is in the deterministic case, and in the randomized case. Note that this algorithm also works on graphs containing self-loops and parallel edges. ∎
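The contraction step used in the procedure above can be sketched as follows. The interfaces are simplifying assumptions: explicit gadget ids, a validity predicate, and a list containing only the inter-gadget edges, whereas in the construction itself this information is derived from the input labels.

```python
# Sketch of the virtual-graph construction (assumed interfaces). Valid
# gadgets are contracted to single virtual nodes, invalid gadgets are
# deleted, and surviving inter-gadget edges may become parallel edges or
# self-loops, which the multigraph adjacency representation keeps.
from collections import defaultdict

def build_virtual_graph(nodes, inter_edges, gadget_of, is_valid):
    """nodes: iterable of node ids; inter_edges: list of (u, v) pairs
    connecting ports of (possibly the same) gadgets; gadget_of: node ->
    gadget id; is_valid: gadget id -> bool."""
    virtual_nodes = {gadget_of(v) for v in nodes if is_valid(gadget_of(v))}
    adj = defaultdict(list)  # multigraph: one entry per edge instance
    for u, v in inter_edges:
        gu, gv = gadget_of(u), gadget_of(v)
        if is_valid(gu) and is_valid(gv):
            adj[gu].append(gv)   # gu == gv yields a self-loop entry
            if gu != gv:
                adj[gv].append(gu)
    return virtual_nodes, adj
```

An edge between two ports of the same valid gadget becomes a self-loop on the corresponding virtual node, matching the remark that the model must allow self-loops and parallel edges.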
3.4.2 Lower bound
Lemma 5.
Let be a function such that, for each , we have and there exists some with . Solving problem requires rounds deterministically and rounds randomized.
Proof.
We start by proving the randomized lower bound. For a contradiction, assume that there is a round randomized algorithm that solves w.h.p. Recall the gadget family used to define . Let be the largest integer with such that there exists a gadget with nodes such that the pairwise distances between the ports of are all in . Let be an arbitrary graph with nodes, and consider the padded graph obtained by choosing gadget for each node of . Let be the node graph obtained by adding isolated nodes to .
Consider what happens if the nodes in simulate on . Since each node of has been expanded into a valid gadget, a valid solution for found by on the subgraph of yields a valid solution for on , by the definition of problem . Hence, due to the properties of our function , we have transformed into an algorithm for , and the failure probability of on graphs of size is upper bounded by the failure probability of on graphs of size . It follows that is correct w.h.p. Moreover, in order to simulate , it suffices for each node of to collect its radius neighborhood, due to the runtime of and the definition of . By Definition 2 and the definition of , we have ; therefore, the runtime of is . This contradicts the definition of and proves the randomized lower bound. The deterministic lower bound is proved analogously; the only difference is that we do not have to worry about failure probabilities. ∎
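The padded-graph construction from the proof can be sketched as follows. The gadget is treated as an opaque graph, and the `port` function, which assigns one gadget node to each incident edge of an original node, is a hypothetical interface standing in for the paper's port definition.

```python
# Sketch of the padded-graph construction (hypothetical interface): every
# node of G is replaced by a disjoint copy of the gadget, and each original
# edge of G is rewired to connect the corresponding ports of the two copies.
def pad_graph(g_nodes, g_edges, gadget_nodes, gadget_edges, port):
    """port(v, i): the gadget node used as v's port for the i-th edge of G.
    Copies are kept disjoint by tagging gadget nodes with their owner."""
    nodes = [(v, x) for v in g_nodes for x in gadget_nodes]
    # Internal edges of every gadget copy.
    edges = [((v, a), (v, b)) for v in g_nodes for a, b in gadget_edges]
    # Rewire each original edge of G between the two corresponding ports.
    for i, (u, v) in enumerate(g_edges):
        edges.append(((u, port(u, i)), (v, port(v, i))))
    return nodes, edges
```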
4 A gadget family
In this section, we present a gadget family and prove that it satisfies the properties described in Definition 2. That is, we prove the following theorem.
Theorem 6.
There exists a gadget family.
Informally, each gadget in the gadget family is composed of subgadgets. Each subgadget is a complete binary tree to which we add horizontal edges, creating a path through the nodes of each level. The bottom-right node of each subgadget is a port (see Figure 5). Then, we add a node that we call the center and connect it to the root of each subgadget (see Figure 6). Finally, we add constant-size input labels to the gadget to make its structure locally checkable.
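Under an assumed (level, index) coordinate convention, one subgadget from the informal description above can be sketched as follows: a complete binary tree of height `h`, horizontal path edges along each level, and the bottom-right node marked as the port. The concrete coordinates are an illustration, not necessarily the paper's exact convention.

```python
# Sketch of one subgadget (assumed coordinates): a complete binary tree of
# height h whose nodes are (level, index) pairs, plus horizontal edges
# forming a path through each level; the port is the bottom-right node.
def build_subgadget(h):
    """Nodes are (l, i) with 0 <= l <= h and 0 <= i < 2**l."""
    nodes = [(l, i) for l in range(h + 1) for i in range(2 ** l)]
    edges = []
    for l in range(h):
        for i in range(2 ** l):
            edges.append(((l, i), (l + 1, 2 * i)))      # left child
            edges.append(((l, i), (l + 1, 2 * i + 1)))  # right child
    for l in range(h + 1):
        for i in range(2 ** l - 1):
            edges.append(((l, i), (l, i + 1)))          # horizontal path edge
    port = (h, 2 ** h - 1)                              # bottom-right node
    return nodes, edges, port
```

A gadget would then consist of several such subgadgets whose roots are all attached to an additional center node.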
As stated in Definition 2, must be a neLCL. For the sake of readability, we will first define as a constant-radius checkable LCL. Then, we will show how to modify it to obtain a neLCL .
4.1 Subgadget
For any parameter , it is possible to construct subgadgets of height . Let be the coordinates of a node of the subgadget. For any node , it holds that and . Let and be two nodes with coordinates and , respectively, such that and . There is an edge between and if and only if:


, or

.
Subgadget labels.
We make a subgadget locally checkable by adding constant-size labels in the following way. First, each node has labels:


, where ;

, where , if and .
Moreover, each edge has a label on both endpoints, and . Each label is chosen as follows:


if ;

if ;

if ;

if ;

if .
See Figure 5 for an example of a subgadget.
4.2 Local checkability of a subgadget
Let be labels. We denote by the node reached from by following edges labeled . Each node of the subgadget checks the following local constraints.

Each node checks the following to guarantee some basic properties:

there are no self-loops or parallel edges;

for any two incident edges , ;

it must be labeled for some , and its neighbors must be labeled as well;

if is labeled and , then .


Each node checks the following to guarantee a correct internal structure of the subgadget:

for each edge , if then , and vice versa;

for each edge , if then or , and vice versa;

, if the path exists;

, if the path exists.


Each node checks the following to guarantee the correct boundaries of the subgadget:

does not have an incident edge labeled if and only if neither does, if it exists;

does not have an incident edge labeled if and only if neither does, if it exists;

if does not have an incident edge with label and it has an incident edge labeled , then ;

if does not have an incident edge labeled and it has an incident edge labeled , then ;

if does not have incident edges labeled and , then it is the root of the subgadget and it has only two incident edges with labels and .
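A per-node check in the spirit of the internal-structure constraints above can be sketched as follows. It works directly on assumed (level, index) coordinate labels rather than on the edge labels, so it illustrates local checkability of the tree-plus-path structure without reproducing the paper's exact constraint set.

```python
# Sketch of a local check (assumed coordinate labels): a node accepts iff
# every incident edge is a tree edge (parent/child) or a horizontal edge
# on the same level, with respect to the (level, index) convention.
def check_node_locally(coord, neighbor_coords):
    """coord = (l, i) of the checking node; neighbor_coords = the
    coordinate labels of its neighbors."""
    l, i = coord

    def allowed(other):
        lo, io = other
        if lo == l + 1 and io in (2 * i, 2 * i + 1):
            return True   # edge to a child
        if lo == l - 1 and io == i // 2:
            return True   # edge to the parent
        if lo == l and abs(io - i) == 1:
            return True   # horizontal edge on the same level
        return False

    return all(allowed(c) for c in neighbor_coords)
```

As in the constraints above, such a check is purely local: each node inspects only its own label and the labels across its incident edges.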
