1 Introduction
In this work, we show that the landscape of distributed time complexity is much more diverse than what was previously known. We present a general technique for constructing distributed graph problems with a wide range of different time complexities. In particular, our work answers many of the open questions of Chang and Pettie [5], and disproves one of their conjectures.
1.1 The LOCAL model
We explore here one of the standard models of distributed computing, the LOCAL model [26, 21]. In this model, we say that a graph problem (e.g., graph colouring) is solvable in time $T$ if each node can output its own part of the solution (e.g., its own colour) based on its radius-$T$ neighbourhood. We focus on deterministic algorithms – even though most of our results have direct randomised counterparts – and as usual, we assume that each node is labelled with an $O(\log n)$-bit unique identifier. We give the precise definitions in Section 2.
1.2 LCL problems
The most important family of graph problems from the perspective of the LOCAL model is the class of LCL problems (locally checkable labellings) [22]. Informally, LCL problems are graph problems that can be solved in constant time with a nondeterministic algorithm in the LOCAL model, and the key research question is what the time complexity of solving LCL problems with deterministic algorithms is. Examples of LCL problems include the problem of finding a proper vertex colouring with $c$ colours for a constant $c$: if you guess a solution nondeterministically, you can easily verify it with a constant-time distributed algorithm by having each node check its constant-radius neighbourhood. As usual, we will focus on bounded-degree graphs. We give the precise definitions in Section 2.
1.3 State of the art
Already in the 1990s, it was known that there are LCL problems with time complexities $O(1)$, $\Theta(\log^* n)$, and $\Theta(n)$ on $n$-node graphs [21, 8]. It is also known that these are the only possibilities in the case of cycles and paths [22]. For example, the problem of finding a $2$-colouring of a path is inherently global, requiring time $\Theta(n)$, while the problem of finding a $3$-colouring of a path can be solved in time $\Theta(\log^* n)$.
While some cases (e.g., oriented grids) are now completely understood [4], the case of general graphs is currently under active investigation. LCL problems with deterministic time complexities of $\Theta(\log n)$ [3, 6, 15] and $\Theta(n^{1/k})$ for all $k \ge 1$ [5] have been identified only very recently. It was shown by Chang et al. that there are no LCL problems with complexities between $\omega(\log^* n)$ and $o(\log n)$ [6]. Classical symmetry-breaking problems like maximal matching, maximal independent set, $(\Delta+1)$-colouring, and $(2\Delta-1)$-edge colouring have complexity $\Theta(\log^* n)$ [1, 2, 23, 13]. Some classical problems are now also known to have intermediate complexities, even though tight bounds are still missing: $\Delta$-colouring and $(2\Delta-2)$-edge colouring require $\Omega(\log n)$ rounds [3, 7], and can be solved in polylogarithmic time [24]. Some gaps have been conjectured; for example, Chang and Pettie [5] conjecture that there are no LCL problems with complexity between $\omega(\log\log^* n)$ and $o(\log^* n)$. See Table 1 for an overview of the state of the art.
Complexity  |  Status  |  Reference

$O(1)$  |  exists  |  trivial
$\omega(1)$, $o(\log\log^* n)$  |  does not exist  |  [22]
$\omega(\log\log^* n)$, $o(\log^* n)$  |  ?  |
$\Theta(\log^* n)$  |  exists  |  [21, 8]
$\omega(\log^* n)$, $o(\log n)$  |  does not exist  |  [6]
$\Theta(\log n)$  |  exists  |  [3, 6, 15]
$\omega(\log n)$, $n^{o(1)}$  |  ?  |
$\Theta(n^{1/k})$  |  exists  |  [5]
$\Theta(n)$  |  exists  |  trivial
The picture changes for randomised algorithms, especially in the region between $\Theta(\log\log n)$ and $\Theta(\log n)$. In the lower end of this region, it is known that there are no LCLs with randomised complexity between $\omega(\log^* n)$ and $o(\log\log n)$ [6], but e.g. sinkless orientation has a randomised complexity of $\Theta(\log\log n)$ [3, 15], and it is known that no LCL problem belongs to this complexity class in the deterministic world. In the higher end of this region, it is known that all LCLs solvable in $o(\log n)$ randomised time can be solved in the time it takes to solve a relaxed variant of the Lovász local lemma [5]. The current best algorithm gives a running time of $2^{O(\sqrt{\log\log n})}$ [11].
So far we have discussed the complexity of LCL problems in the strict classical sense, in graphs of maximum degree $\Delta = O(1)$. Many of these problems have been studied also in the case of a general $\Delta$. The best deterministic algorithms for maximal independent set and $(\Delta+1)$-colouring run in time $2^{O(\sqrt{\log n})}$ [25]. Maximal matching [10] and $(2\Delta-1)$-edge colouring [12] can be solved in polylogarithmic deterministic time. Corresponding randomised solutions are exponentially faster: $O(\log \Delta) + 2^{O(\sqrt{\log\log n})}$ rounds for maximal independent set [14], $O(\sqrt{\log \Delta}) + 2^{O(\sqrt{\log\log n})}$ for $(\Delta+1)$-colouring [18], and similar speedups are known for maximal matching [10] and edge colouring [12, 9]. Some lower bounds are known in the general case: maximal independent set and maximal matching require $\Omega\bigl(\min\bigl\{\log \Delta/\log\log \Delta,\ \sqrt{\log n/\log\log n}\bigr\}\bigr)$ time [20].
1.4 Contributions
Based on the known results related to LCL problems, it seemed reasonable to conjecture that there might be only three distinct nonempty time complexity classes below $n^{o(1)}$, namely $O(1)$, $\Theta(\log^* n)$, and $\Theta(\log n)$. There are very few candidate LCL problems that might have any other time complexity, and in particular the gap between $\omega(\log\log^* n)$ and $o(\log^* n)$ seemed to be merely an artefact of the current Ramsey-based proof techniques (see Chang and Pettie [5] for a more detailed discussion of this region).
Our work changes the picture completely: we show how to construct infinitely many LCL problems in the regions in which the existence of any LCL problems was an open question. We present a general technique that enables us to produce time complexities of the form $\Theta(f(n))$ and $\Theta(f(\log^* n))$ for a wide range of functions $f$, as long as $f$ is sublinear and at least logarithmic. See Table 2 for some examples of time complexities that we can construct with our technique.
“Low”  “High”  

trivial  [21, 8]  
gap  [22, 5]  gap  [6]  
this work  [3, 6, 15]  
this work  this work  
this work  this work  
[5]  
this work  this work  
[21, 8]  trivial 
The table also highlights another surprise: the structure of “low” complexities below $\Theta(\log^* n)$ and the structure of “high” complexities above $\Theta(\log n)$ now look very similar.
1.5 Proof ideas
On a high level, we start by defining a simple model of computation, called a link machine here. We emphasise that link machines are completely unrelated to distributed computing; they are simply a specific variant of the classical register machines. A link machine has registers that can hold unbounded positive natural numbers, and a finite program (a sequence of instructions). The machine supports the following instructions: resetting a register to $1$, adding two registers, comparing two registers for equality, and skipping operations based on the result of a comparison.
We say that a link machine has growth $g$ if the following holds: if we reset all registers to value $1$, and then run the program of the machine repeatedly for $t$ times, then the maximum of the register values is $\Theta(g(t))$. For example, a link machine whose program consists of the single instruction $x \leftarrow x + x$ has growth $g(t) = 2^t$.
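To make the definition concrete, here is a minimal sketch of running a one-instruction link machine and measuring its growth; the instruction encoding is our own illustration, not a listing from this paper.

```python
# Minimal illustration of link machine growth (our own encoding): a single
# register x with the one-instruction program "x <- x + x", started from
# value 1 and executed repeatedly.

def run_doubling_machine(t):
    """Return the register values after t executions of the program."""
    regs = {"x": 1}
    for _ in range(t):
        regs["x"] = regs["x"] + regs["x"]  # the addition instruction x <- x + x
    return regs

# After t executions the maximum register value is 2**t,
# so this machine has growth g(t) = 2**t.
print(run_doubling_machine(5)["x"])  # 32
```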
Now assume that we have the following ingredients:


A link machine $M$ of growth $g$.

An LCL problem $\Pi_0$ for directed cycles, with a known time complexity.
We show how to construct a new LCL problem $\Pi$ in which the “relevant” instances are graphs with the following structure:


There is a directed cycle $C$ in which we need to solve the original problem $\Pi_0$.

The cycle is augmented with an additional structure of multiple layers of “shortcuts”, and the lengths of the shortcuts correspond to the values of the registers of machine $M$.
Therefore if we take $h$ steps away from cycle $C$, we will find shortcuts of length $\Theta(g(h))$. In particular, if two nodes $u$ and $v$ are within distance $g(h)$ from each other along cycle $C$, we can reach $v$ from $u$ in $O(h)$ steps along graph $G$.
In essence, we have compressed the distances and made problem $\Pi$ easier to solve than $\Pi_0$, in a manner that is controlled precisely by the function $g$. For example, if $g(h) = 2^h$, then distance $d$ along $C$ corresponds to distance $O(\log d)$ in graph $G$. If $\Pi_0$ had a time complexity of $\Theta(n)$, we obtain a problem with a time complexity of $\Theta(\log n)$.
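As a back-of-the-envelope illustration of this compression, the following sketch assumes growth $g(h) = 2^h$; the numbers are ours, not the paper's.

```python
import math

# With shortcuts of length 2**h at height h, covering distance d along the
# cycle takes about 2*log2(d) steps: climb to the height where one shortcut
# spans the whole distance, take it, and climb back down.

def steps_to_cover(d):
    """Rough step count to cover cycle-distance d when g(h) = 2**h."""
    if d <= 1:
        return d
    h = math.ceil(math.log2(d))  # smallest height h with g(h) >= d
    return 2 * h + 1             # h steps up, one shortcut, h steps down

for d in (2, 1024, 10**6):
    print(d, "->", steps_to_cover(d))
```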
Notice that these results could not be achieved by just attaching shortcuts of lengths $g(1), g(2), \dots$ directly to every node of the cycle, since the lengths of the shortcuts would not be locally checkable, and moreover, it would not be true that a node can reach every other node within a certain distance.
1.6 Some technical details
Plenty of care is needed in order to make sure that

$\Pi$ is indeed a well-defined LCL problem: feasible solutions can be verified by checking the radius-$O(1)$ neighbourhoods,

$\Pi$ is solvable within the claimed time bounds also in arbitrary bounded-degree graphs and not just in “relevant” instances that have the appropriate structure of a cycle plus shortcuts,

there is no way to cheat and solve $\Pi$ asymptotically faster than the claimed complexity.
There is a fine balance between these goals: to make sure $\Pi$ is solvable efficiently in adversarial instances, we want to modify the definition so that for unexpected inputs it is permitted to produce the output “this is an invalid instance”, but this easily opens loopholes for cheating – what if all nodes always claim that the input is invalid?
We address these issues with the help of ideas from locally checkable proofs and proof labelling schemes [17, 19], both for inputs and for outputs:

Locally checkable inputs: Relevant instances carry a locally checkable proof. If the instance is not relevant, it can be detected locally.

Locally checkable outputs: If the algorithm claims that the input is invalid, it also has to prove it. If the proof is wrong, it can be detected locally.
In essence, we define $\Pi$ so that the algorithm has two possibilities in all local neighbourhoods: solve $\Pi_0$ or prove that the input is invalid. This requirement can now be encoded as a bona fide LCL problem.
As a minor twist, we have slightly modified the above scheme so that we replace the impossibility of cheating with the hardness of cheating. Our problem $\Pi$ is designed so that an algorithm could, in principle, construct a convincing proof claiming that the input is invalid (at least for some valid inputs). However, to avoid detection, the algorithm would need to spend so much time constructing such a proof that it could equally well solve the original problem directly.
1.7 Significance
The complexity classes and the gaps in the time hierarchy of LCL problems have recently played a key role in the field of distributed computing. The classes have served as a source of inspiration for algorithm design (e.g. the line of research related to the sinkless orientation problem [3, 6, 15] and the follow-up work [16] that places many other problems in the same complexity class), and the gaps have directly implied nontrivial algorithmic results (e.g. the problem of colouring $d$-dimensional grids [4]). The recently identified gaps [4, 6, 5, 11] have looked very promising; it has seemed that a complete characterisation of the complexities might be within reach of the state-of-the-art techniques, and that the resulting hierarchy might be sparse and natural.
In essence, our work shows that the free lunch is over. The deterministic complexities in general bounded-degree graphs do not seem to provide any further gaps that we could exploit. Any of the currently known upper bounds might be tight. To discover new gaps, we will need to restrict the setting further, e.g. by studying restricted graph families such as grids and trees [4, 5], or by focusing on restricted families of problems. Indeed, this is our main open question: what is the broadest family of LCL problems that contains the standard primitives (e.g., colourings and orientations) but for which there are large gaps in the distributed time hierarchy?
2 Preliminaries
Let us first fix some terminology. We work with directed graphs $G = (V, E)$, which are always assumed to be simple, finite, and connected. We denote the number of nodes by $n = |V|$. The number of edges on a shortest path from node $u$ to node $v$ is denoted by $\operatorname{dist}(u, v)$. A labelling of a graph with labels from a set $X$ is a mapping $V \to X$. Given a labelled graph, the radius-$r$ neighbourhood of a node $v$ consists of the subgraph induced by the nodes within distance $r$ from $v$, together with the restriction of the labelling to these nodes. We denote the set of natural numbers by $\mathbb{N}$.
2.1 Model of computation
Our setting takes place in the LOCAL model [26, 21] of distributed computing. We have a graph $G$, where each node is a computational unit and all nodes run the same deterministic algorithm $A$. We work with bounded-degree graphs; hence $A$ can depend on an upper bound $\Delta$ for the maximum degree of $G$.
Initially, the nodes are not aware of the graph topology – they can learn information about it by communicating with their neighbours. To break symmetry, nodes have access to $O(\log n)$-bit unique identifiers, given as a labelling. We will also assume that the nodes are given the number of nodes $n$ as input (for most of our results, e.g. a polynomial upper bound on $n$ is sufficient). In addition, nodes can be given a task-specific local input labelling. We will often refer to directed edges, but for our purposes the directions are just additional information that is encoded in the input labelling. We emphasise that the directions of the edges do not affect communication; they are just additional information that the nodes can use.
The communication takes place in synchronous communication rounds. In each round, each node


sends a message to each of its neighbours,

receives a message from each of its neighbours,

performs local computation based on the received messages.
Each node is required to eventually halt and produce its own local output. We do not limit the amount of local computation in each round, nor the size of messages; the only resource of interest is the number of communication rounds until all the nodes have halted.
Note that in $T$ rounds of communication, each node can gather all information in its radius-$T$ neighbourhood, and hence a $T$-round algorithm is simply a mapping from radius-$T$ neighbourhoods to local outputs.
2.2 Graph problems
In the framework of the LOCAL model, the same graph serves both as the communication graph and as the problem instance. In addition to the graph topology, the problem instance can contain local input labels. To solve a graph problem, each node is required to produce an output label so that all the labels together define a valid output.
More formally, let $\Sigma$ and $\Gamma$ be sets of input and output labels, respectively. A graph problem is a function $\Pi$ that maps each graph $G$ and input labelling $i$ to a set $\Pi(G, i)$ of valid solutions; each solution is a function that assigns an output label to each node. We say that algorithm $A$ solves graph problem $\Pi$ if for each graph $G$, each input labelling $i$ of $G$, and any setting of the unique identifiers, the mapping $s$ defined by setting $s(v)$ to be the local output of node $v$ for each node $v$ is in the set $\Pi(G, i)$. Note that the unique identifiers are given as a separate labelling; the set of valid solutions depends only on the task-specific input labelling $i$. When $\Sigma$ and $\Gamma$ are clear from the context, we denote a graph problem simply by $\Pi$.
Let $T \colon \mathbb{N} \to \mathbb{N}$. Suppose that algorithm $A$ solves problem $\Pi$, and that for each input graph $G$, each input labelling $i$ and any setting of the unique identifiers, each node needs at most $T(n)$ communication rounds to halt. Then we say that algorithm $A$ solves problem $\Pi$ in time $T$, or that the time complexity of $A$ is $T$. The time complexity of problem $\Pi$ is defined to be the slowest-growing function $T$ such that there exists an algorithm solving $\Pi$ in time $T$.
In this work, we consider an important subclass of graph problems, namely locally checkable labelling (LCL) problems [22]. A graph problem $\Pi$ is an LCL problem if the following conditions hold:


The label sets $\Sigma$ and $\Gamma$ are finite.

There exists a LOCAL algorithm with constant time complexity such that, given any candidate output labelling as an additional input labelling, the algorithm can determine whether the labelling is a valid solution: if it is, all nodes output “yes”; otherwise at least one node outputs “no”.
That is, an LCL problem is one where the input and output labels are of constant size, and for which the validity of a candidate solution can be checked in constant time.
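As a toy illustration of this definition (our own example, not one of the constructions of this paper), proper 3-colouring is locally checkable: each node inspects only its radius-1 neighbourhood, and some node rejects every invalid colouring.

```python
# Local verification of a proper 3-colouring: constant-size labels,
# constant-radius checks.

def node_accepts(colouring, neighbours, v):
    """Radius-1 check at node v: its colour is in {0,1,2} and differs
    from the colours of all its neighbours."""
    return colouring[v] in (0, 1, 2) and all(
        colouring[v] != colouring[u] for u in neighbours[v]
    )

def all_accept(colouring, neighbours):
    return all(node_accepts(colouring, neighbours, v) for v in neighbours)

# A 4-cycle 0-1-2-3-0:
nbrs = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(all_accept({0: 0, 1: 1, 2: 0, 3: 2}, nbrs))  # True: proper colouring
print(all_accept({0: 0, 1: 0, 2: 1, 3: 2}, nbrs))  # False: nodes 0 and 1 clash
```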
3 Link machines
A link machine $M$ consists of a constant number of registers, labelled with arbitrary strings, and a program $P$. The program is a sequence of instructions, where each instruction is one of the following for some registers $a$, $b$, and $c$:


Addition: $a \leftarrow b + c$.

Reset: $a \leftarrow 1$.

Conditional execution: if $a = b$ (or if $a \neq b$), execute the next $k$ instructions, otherwise skip them.
The registers can store unbounded natural numbers. For convenience, we will generally identify the link machine with its program.
An execution of the link machine is a single run through the program, modifying the values of the registers according to the instructions in the obvious way. Generally, we consider computing with link machines in a setting where


all registers start from value $1$, and

we are interested in the maximum value over all registers after $t$ executions of $P$.
Specifically, for a register $a$, we denote by $a(t)$ the value of register $a$ after $t$ full executions of the link machine program, starting from all registers set to $1$. We say that a link machine with registers $a_1, a_2, \dots, a_k$ has growth $g$ if, starting from all registers set to $1$, we have $\max_i a_i(t) = \Theta(g(t))$ for all $t$. While $g$ does not need to be a bijection, we use the notation $g^{-1}$ to denote the generalised inverse defined by setting $g^{-1}(s) = \min \{ t : g(t) \ge s \}$ for all $s$.
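Since the instruction set is so small, the whole model fits in a few lines of code. The following interpreter is a hedged sketch: the tuple encoding of instructions is our own, as the paper fixes no concrete syntax.

```python
# Interpreter for the link machine instruction set described above.
# Instructions: ("add", a, b, c) for a <- b + c; ("reset", a) for a <- 1;
# ("ifeq", a, b, k) / ("ifne", a, b, k): if the test fails, skip the
# next k instructions.

def execute(program, regs):
    """One full execution of the program, mutating regs in place."""
    pc = 0
    while pc < len(program):
        ins = program[pc]
        if ins[0] == "add":
            _, a, b, c = ins
            regs[a] = regs[b] + regs[c]
        elif ins[0] == "reset":
            regs[ins[1]] = 1
        else:  # "ifeq" or "ifne"
            _, a, b, k = ins
            holds = (regs[a] == regs[b]) if ins[0] == "ifeq" else (regs[a] != regs[b])
            if not holds:
                pc += k  # skip the next k instructions
        pc += 1

def growth(program, registers, t):
    """Maximum register value after t executions, all registers starting at 1."""
    regs = {r: 1 for r in registers}
    for _ in range(t):
        execute(program, regs)
    return max(regs.values())

# A doubling machine has growth 2**t:
print(growth([("add", "x", "x", "x")], ["x"], 6))  # 64
```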
3.1 Working with link machine programs
Composition.
Consider two link machines $M_1$ and $M_2$ with corresponding programs $P_1$ and $P_2$. By relabelling if necessary, we can assume that the programs do not share any registers. Moreover, assume $M_1$ has a register we call the output register of $M_1$, and $M_2$ has a register we call the input register of $M_2$. We define the composition of $M_1$ and $M_2$ as the program that runs $P_1$, feeds the output register of $M_1$ into the input register of $M_2$, and then runs $P_2$.
Note that the growth of the composed program at step $t$ is the maximum of the growths of the two programs at step $t$, and the growth of $P_2$ can be affected by the input given by $M_1$ to $M_2$. The basic idea is to use this construct so that $M_1$ produces an output register that depends on $t$, which is then used by $M_2$ to produce a composed growth function.
We define the composition of multiple link machine programs with specified input and output registers similarly.
Note that the growth of a link machine is at most $2^{O(t)}$: a program of constant length can increase the maximum register value only by a constant factor in each execution, since each addition at most doubles it. As we will see later, this constraint is necessary, since otherwise we would contradict known results regarding gaps in LCL complexities.
3.2 Building blocks
We now define our basic building blocks, that is, small programs that can be composed to obtain more complicated functions. These building blocks are summarised in Table 3. In all our cases, we will assume that the value of the input register grows over the course of the execution; otherwise the semantics of a building block are undefined.
Program  Input  Output  Growth 

count  –  
–  
exp  
log 
Link machine programming conventions.
We use the following shorthands when writing link machine programs:

We write conditional executions as ifthen constructs, with the conditional execution skipping all enclosed instructions if the test fails. We also use ifelse constructs, as these can be implemented in an obvious way.

We write sums with multiple summands, constant multiplications, and constant additions as single instructions, as these can be easily simulated by multiple instructions and a constant number of extra registers.
Counting.
Our first program count simply produces a linear output: it keeps one auxiliary register at its initial value $1$ and adds it to the output register in every execution.
Clearly, program count has growth $\Theta(t)$.
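The listing of count is not reproduced above; a direct simulation of the natural implementation (one register held at $1$, added to the output every execution – our reconstruction) behaves as follows.

```python
# Plausible reconstruction of count: register `one` is never written, so it
# keeps its initial value 1; register `out` gains 1 per execution.

def count(t):
    one, out = 1, 1
    for _ in range(t):
        out = out + one  # the instruction out <- out + one
    return out

# out(t) = t + 1, i.e. count has growth Theta(t).
print(count(10))  # 11
```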
Polynomials.
Next, we define a sequence of programs for computing polynomial functions. For any fixed $k$, we define the program $\mathrm{poly}_k$ so that it maintains registers holding the powers of the current step count.
We now have that the output register grows as the $k$th power of the number of executions, and by the binomial theorem each power can be updated using only sums of the old register values with constant coefficients. Moreover, $\mathrm{poly}_k$ has growth $\Theta(t^k)$.
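The $\mathrm{poly}_k$ listing is likewise not reproduced; below is a sketch of one implementation consistent with the binomial-theorem remark. The register names and the exact invariant $r[j] = (t+1)^j$ are our assumptions.

```python
from math import comb

# Registers r[0..k] maintain r[j] = (t+1)**j using only constant-coefficient
# sums of old register values, via (x+1)**j = sum_i C(j, i) * x**i.
# Multi-summand sums with constant coefficients are allowed as shorthand
# by the programming conventions above.

def poly(k, t):
    r = [1] * (k + 1)  # (0+1)**j = 1 for every j
    for _ in range(t):
        r = [sum(comb(j, i) * r[i] for i in range(j + 1)) for j in range(k + 1)]
    return r[k]

# The output register holds (t+1)**k, so poly_k has growth Theta(t**k).
print(poly(3, 4))  # 125 = (4+1)**3
```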
Roots.
We define two versions of a program computing a $k$th root. The first one does not take an input and has the advantage of a sublinear growth of $\Theta(t^{1/k})$.
Observe that, started from all registers set to $1$, the register values remain bounded by the value of the largest register. Moreover, for the largest register to increase from $s$ to $s+1$, the values of the registers visit $\Theta(s^{k-1})$ intermediate configurations. This implies that the growth of the largest register is $\Theta(t^{1/k})$.
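The $\mathrm{root}_k$ listing is not shown; one implementation matching the configuration-counting argument is a mixed-radix “odometer” (the digit scheme below is our reconstruction), which uses only increments, resets, and equality tests.

```python
# Odometer sketch of root_k: k-1 digit registers each count up to the
# current output value; the output register advances only when every
# digit wraps around, so it advances ever more rarely.

def root(k, t):
    """Output register after t executions; grows like t**(1/k)."""
    out = 1
    d = [1] * (k - 1)
    for _ in range(t):
        d[0] += 1
        for j in range(k - 1):
            if d[j] == out + 1:       # equality test: digit j wraps
                d[j] = 1
                if j + 1 < k - 1:
                    d[j + 1] += 1
                else:
                    out += 1          # all digits wrapped: advance output
    return out

# out ~ (k * t) ** (1/k):
print(root(2, 100))  # 14
```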
The second version of the $k$th root program takes an input register $x$ and computes an output of order $x^{1/k}$. We define this program as follows:
Clearly, we have that , and by the properties of the output register is . The growth of is .
Exponentials.
The program exp computes an exponential function in the input register :
We have that , and . Moreover, the growth of exp is .
Logarithms.
The program log computes a logarithm of the input register :
Clearly, we have that , and . Starting from the valid starting configuration, the register only takes values that are powers of two, and . Thus, we have . By construction, the growth of log is .
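The log listing is also not reproduced; the sketch below is our reconstruction, assuming the input register grows by one per execution (e.g. fed by count). It keeps a register $p$ that doubles whenever the input catches up with it, and counts the doublings.

```python
# log sketch: p stays a power of two and doubles each time the growing
# input register x reaches it; out counts the doublings, so out = Theta(log x),
# while every register stays linear in t (growth Theta(t)).

def log_machine(t):
    x, p, out = 1, 1, 1
    for _ in range(t):
        if p == x:        # equality test from the instruction set
            p = p + p     # p <- p + p: next power of two
            out = out + 1
        x = x + 1         # input register, assumed to grow by one per execution
    return x, out

x, out = log_machine(1000)
print(x, out)  # 1001 11
```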
3.3 Composed functions
By composing our building block functions, we can now construct more complicated functions, which will then be used to obtain problems of various complexities. The constructions we use are listed in Table 4; the values of output registers and the functions computed by these programs follow directly from the results in Section 3.2.
Notice that for all the considered programs there is a register that is always at least as large as all the other registers. Thus, we can refer to it as the register of maximum value.
Program  Growth  

Remark 1.
While exploring the precise power of link machines is left as an open question, we point out that, in this paper, we do not list every possible complexity that one can achieve with link machines. Indeed, there are many more time complexities that can be realised; for example, one could define a building block that performs a multiplication, or add support for negative numbers and subtractions.
4 Link machine encoding graphs
In this section, we show how to encode link machine executions as locally checkable graphs. Fix a link machine $M$ with $k$ registers and a program of length $\ell$ whose growth is nondecreasing, and let $t$ be a positive integer. The basic idea is that we encode the computation of $M$ during its first $t$ executions as follows:

We start from a grid graph that wraps around in the horizontal direction, as shown in Figure 1, with the dimensions chosen large enough to avoid parallel edges or self-loops. This allows us to ‘count’ in two dimensions; one is used for time, and the other for the values of the registers of $M$. The grid is consistently oriented so that we can locally distinguish between the two dimensions, and all grid edges are labelled with either ‘up’, ‘down’, ‘left’ or ‘right’ to make this locally checkable.

We add horizontal edges to the grid graph to encode the values of the registers. Specifically, at level $i$ of the graph, the horizontal edges encode the values the registers take during the $i$th execution of the link machine program, with edge labels specifying which register values the edges are encoding (see Figure 2).
The labels should be thought of as input labels; as we will see later, they will allow us to recognise valid link machine encoding graphs in the sense of locally checkable proofs. We will make this construction more formal below.
4.1 Formal definition
Let $M$ be a link machine with growth $g$ as above. We formally define the link machine encoding graphs for $M$ as the graphs obtained from the construction we describe below.
Grid structure.
The construction starts with a 2-dimensional grid graph of height $h$ and width $w$. Let $v_{i,j}$ denote the node on the $i$th row and the $j$th column, where $1 \le i \le h$ and $1 \le j \le w$. The grid wraps around along the horizontal axis, that is, we also add the edges $(v_{i,w}, v_{i,1})$ for all $i$.
We add horizontal link edges to the graph according to the state of the machine $M$. That is, we say that for a node $v_{i,j}$, a link edge of length $d$ is an edge from $v_{i,j}$ to the node $d$ positions to the right on the same row. Let $a(i, p)$ denote the value of the register $a$ after executing the full program of $M$ for $i$ times, and then executing the first $p$ instructions of the program. For each $i$, register $a$, and $p$, we add a link edge of length $a(i, p)$ to all nodes on the corresponding level if it does not already exist.
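The construction can be sketched programmatically as follows; this is a simplification of the text (one link edge length per row, and the indices and row-to-execution mapping are our own).

```python
# Sketch of an encoding graph: an h-by-w grid wrapping around horizontally,
# plus "link edges" on row i whose lengths are register values after i
# executions of the machine.

def encoding_graph(h, w, register_values):
    """register_values[i] = register values to encode on row i."""
    edges = set()
    for i in range(h):
        for j in range(w):
            edges.add(((i, j), (i, (j + 1) % w)))    # 'right' grid edge
            if i + 1 < h:
                edges.add(((i, j), (i + 1, j)))      # 'up' grid edge
            for d in register_values[i]:             # link edges of length d
                edges.add(((i, j), (i, (j + d) % w)))
    return edges

# Doubling machine: register value 2**(i+1) on row i.
E = encoding_graph(h=3, w=16, register_values=[[2], [4], [8]])
print(len(E))  # 48 right + 32 up + 48 link edges = 128
```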
Local labels.
In addition to the graph structure, we add constant-size labels to the graph as follows. First, each node has a set of labels for each incident edge, added according to the following rules if the corresponding edge is present (note that a single edge may have multiple labels):


The grid edge to the node on the right is labelled with ‘right’.

The grid edge to the node on the left is labelled with ‘left’.

The grid edge to the node below is labelled with ‘down’.

The grid edge to the node above is labelled with ‘up’.

For each register $a$ and each program position $p$, the link edge of the corresponding length is labelled with the pair $(a, p)$.
Consider the set of labels that each node associates with each of its incident edges. When we later define graph problems, we assume these labels to be implicitly encoded in the local input label given to the node.
Input.
Also, each node is provided with an input .
4.2 Local checkability
We show that the labels described in Section 4.1 constitute a locally checkable proof of the graph being a link machine encoding graph. That is, there is a LOCAL algorithm in which all nodes accept a labelled graph if and only if it is a link machine encoding graph.
Local constraints.
We first specify a set of local constraints that are checked by the nodes locally. All the constraints depend on constant-radius neighbourhoods of the nodes, so they can be implemented in the LOCAL model in $O(1)$ rounds. In the following, we write $v(\ell_1, \ell_2, \dots)$ for the node reached from $v$ by following the edges with the specified labels. The full constraints are now as follows:

Each node checks that the basic properties of the labelling are correct:

All possible edge labels are present exactly once, except possibly one of ‘up’ and ‘down’.

The direction labels , , , and are on different edges if present.


Grid constraints ensure the validity of the grid structure:

Each node checks that each of its direction-labelled edges has the opposite label at the other end.

If there is an edge labelled , check that .

If there is not an edge labelled , check that also nodes and do not have edges labelled .

If there is not an edge labelled , check that also nodes and do not have edges labelled .


Nodes check that the values of the registers are correctly initialised on the link edges:

Nodes that do not have an edge labelled with ‘down’ check that the register values are initialised to $1$, that is, the corresponding labels are on the same edge for all registers.

Nodes that have an edge labelled with ‘down’ check that the registers are copied correctly from the level below, that is, the corresponding labels are on the same edge for all registers.


Nodes check that the program execution is encoded correctly as follows. Each instruction is processed in order, from the first to the last. The $p$th instruction is checked as follows, depending on the type of the instruction:

If the instruction is a reset:

Register is correctly set to : the labels and are on the same edge.

None of the other registers changed, that is, the labels before and after the instruction are on the same edge for all registers except the target register.


If the instruction is an addition:

Register is set correctly: .

None of the other registers changed, that is, the labels before and after the instruction are on the same edge for all registers except the target register.


If the instruction is an if statement comparing registers $a$ and $b$, check whether the labels of $a$ and $b$ are on the same edge, and if this does not match the condition of the if statement, check that the following instructions are not executed:

None of the registers change during the skipped steps, that is, for all registers, the corresponding labels are on the same edge.

Skip the checks for the next instructions.



If no edges are labelled ‘up’, check that the link edges corresponding to the register with the maximum value form cycles.
Correctness.
It is clear that link machine encoding graphs satisfy the constraints 1–5 specified above. Conversely, we would like to show that any graph satisfying these constraints is a link machine encoding graph, but it turns out this is not exactly the case.
It might happen that the register values exceed the width of the grid, the edges “wrap around”, and the correspondence between the edge lengths and the register values gets lost. However, for this to happen, one has to have a row in which some register value is at least the width of the grid.
In order to characterise the graph family captured by the local constraints, we define that a graph is an extended link machine encoding graph if


$G$ is an $h \times w$ grid for some $h$ and $w$ that wraps around horizontally but not vertically,

$G$ satisfies the local constraints of link machine encoding graphs, and

there is an $h' \le h$ such that up to row $h'$ the edge lengths are in one-to-one correspondence with the register values of the first $h'$ executions of link machine $M$.
Note that a link machine encoding graph is trivially an extended link machine encoding graph, as we can simply choose $h' = h$. The intuition is that extended link machine encoding graphs have a good bottom part of the appropriate dimensions, and on top of that there might be any number of additional rows of arbitrary garbage.
Lemma 2. Every graph that satisfies the local constraints and contains a node without an edge labelled ‘up’ is an extended link machine encoding graph.
Proof.
By constraints (1) and (2), the graph is a grid graph that wraps around horizontally. By the assumption that there is a node without an edge labelled ‘up’ and by constraint (2), the grid cannot wrap around vertically. Hence $G$ is an $h \times w$ grid that wraps around horizontally, for some values $h$ and $w$, and by assumption it satisfies the local constraints of link machine encoding graphs.
Constraints (3) and (4) ensure that the link edges and the corresponding labels follow the link machine encoding graph specification, as long as the register values remain smaller than the width of the grid.
Constraint (5) ensures that the register values cannot remain below the width of the grid in every row, as in the top row we must have link edges that form a cycle wrapping around the entire grid at least once. Hence at some point we must reach a row where the register values are at least the width of the grid, and this is sufficient for $G$ to satisfy the definition of an extended link machine encoding graph. ∎
5 LCL constructions
Let $M$ be a link machine with nondecreasing growth $g$, and let $\Pi_0$ be an LCL problem on directed cycles – for concreteness, $\Pi_0$ will either be $2$-colouring (complexity $\Theta(n)$) or a variant of $3$-colouring (complexity $\Theta(\log^* n)$). To simplify the construction, we will assume that $\Pi_0$ is solvable on directed cycles with one-sided algorithms, i.e., with algorithms in which each node only looks at its successors. We now construct an LCL problem $\Pi$ with complexity related to $g$, as outlined in the introduction:

If a node sees a graph that locally looks like a link machine encoding graph for $M$, and the node is on the bottom row of the grid, it will need to solve problem $\Pi_0$ on the directed cycle formed by the bottom row of the grid. As will be shown later, in $t$ steps, a node on the bottom row of the grid sees all nodes within distance $\Theta(g(t))$ on the bottom cycle, so this can be solved in a correspondingly compressed number of rounds.

If a node sees something that does not look like a link machine encoding graph, it is allowed to report an error; the node must also provide an error pointer towards an error it sees in the graph. A key technical point here is to ensure that it is not too easy to claim that there is an error somewhere far away, even if in reality we have a valid link machine encoding graph. We address this by ensuring that error pointer chains can only go right and then up; they cannot disappear without meeting an error, the part that is pointing right must be properly coloured, and the part that is pointing up copies the input given to the node that is witnessing the error. If some nodes claim that the error is somewhere far up, we will eventually reach the highest layer of the graph and catch the cheaters. Also, nodes cannot blindly point up, because they need to mark themselves with the input of the witness. If all bottom-level nodes claim that the error is somewhere to the right, we do not necessarily catch cheaters, but the nodes did not gain anything, as they had to produce a proper colouring for the long chain of error pointers.
There are some subtleties in both of these points, which we address in detail below.
5.1 The problem
Formally, we specify the problem $\Pi$ as follows. The input label set for $\Pi$ is the set of labels used in the link machine encoding graph labelling for $M$ as described in Section 4. The possible output labels are the following:


output labels of the problem $\Pi_0$,

an error label ,

an error pointer, pointing either right or up, with a counter mod 2 and an input label,

an empty output .
The correctness of the output labelling is defined as follows.

If the input labelling does not locally satisfy the constraints of link machine encoding graphs for $M$ (see Section 4.2), either at the node itself or at one of its neighbours, then the only valid output is the error label. Otherwise, the node must produce one of the other labels.

If the output of a node is one of the labels of $\Pi_0$, then the following must hold:

The node does not have an incident edge with label ‘down’ in the input labelling.

If any adjacent nodes have outputs from the label set of $\Pi_0$, then the local constraints of $\Pi_0$ must be satisfied.


If the output of a node is empty, then it must have an incident edge with label ‘down’ in the input labelling.

If the output of a node is an error pointer, then the following must hold:

Node has only one outgoing error pointer.

The error pointer points either right or up if the node has an edge labelled ‘up’ in the input, and right if it does not.

The node at the other end of the pointer either outputs an error label , or an error pointer.

The counters of nodes outputting error pointers form a 2-colouring in the subgraph induced by those nodes.

The nodes outputting error pointer have the same label as the next node in the chain. If is the last node in the chain, holds, where is the witness outputting .

These conditions are clearly locally checkable, so is a valid problem.
5.2 Time complexity
We now prove the following bounds for the time complexity of problem ; recall that , where is the growth of link machine . In the following, denotes the number of nodes in the input graph, and is the smallest number satisfying . The intuition here is that the “worst-case instances” of size will be grids of width approximately and height approximately .
Theorem 3.
Problem can be solved in rounds.
Theorem 4.
Problem cannot be solved in rounds.
5.2.1 Upper bound – proof of Theorem 3
We start by observing that the link machine encoding graphs essentially provide a ‘speedup’ in terms of how quickly the nodes on the bottom cycle can see other nodes on the bottom cycle. Recall that , where is the growth of link machine ; also recall that .
Lemma 5.
Let be a link machine encoding graph for , and let and be nodes on the bottom cycle. If can be reached from in steps following edges labelled with , then can be reached from in steps following edges labelled with , , or register labels.
Proof.
Starting from a node on the cycle, in steps it is possible to see a node on the cycle that is steps away: take steps up, steps right along shortcuts, and steps down. We will use a similar procedure to go to a node at any distance .
Let be the smallest value such that . By assumption, . Since , we have for large enough graphs, and hence .
We find a path from to by a greedy procedure. First, go up for steps. Recall that at height there are shortcuts of length . Go right along shortcuts until the distance to the column of is less than (taking one more shortcut would bring us to a node that is to the right of ). This takes at most steps. Next, step down and perform a greedy descent to get to the column of . At each level , if the remaining distance to the column of is at least , take steps along the longest shortcut until the distance is less than . Since , the length of the shortcuts at level is at most a constant factor times the length at level , and hence this number of steps is bounded by . Finally, step down. We either reach the column of or the bottom cycle, and in either case the distance to the column of is less than .
Since , we have . We take a total of at most steps up and down, at most steps right at level , and steps for each of the levels, for a total of steps. ∎
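As an illustration (not part of the formal construction), the climb-and-descend routine in the proof above can be sketched in a toy model. Here we assume, purely for concreteness, that the shortcut length at level j is base**j (geometric growth); the real shortcut lengths depend on the link machine and its growth.

```python
def greedy_route(d, base=2):
    """Estimate the number of hops needed to cover horizontal distance d
    in a hierarchy where level j offers shortcuts of length base**j
    (a hypothetical geometric growth; the real lengths depend on the
    link machine)."""
    if d == 0:
        return 0
    # Climb to the lowest level whose shortcut is at least as long as d.
    top = 0
    while base ** top < d:
        top += 1
    steps = top  # upward moves
    remaining = d
    for level in range(top, -1, -1):
        length = base ** level
        hops, remaining = divmod(remaining, length)
        steps += hops  # rightward shortcut moves at this level
        if level > 0:
            steps += 1  # one downward move to the next level
    assert remaining == 0  # greedy descent covers the distance exactly
    return steps
```

In this toy model the total step count is logarithmic in d, mirroring the speedup the lemma provides: each level contributes at most a constant number of rightward hops plus one vertical move.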
Lemma 6.
If a node that does not have an edge labelled sees no errors within distance for a sufficiently large constant , then it can produce a valid output for the problem .
Proof.
First consider a global problem, i.e., . If we explore the grid up and to the right for steps, do not encounter any errors, and the grid does not wrap around, then we discover a grid fragment of dimensions at least for some constant . Such a grid fragment would contain nodes, and for a sufficiently large this would contradict the assumption that the input has nodes. Hence we must either encounter errors (which by assumption is not the case), or the grid has to wrap around cleanly without any errors, in which case we also see the entire bottom row and can solve there by brute force.
Second, consider the case . By similar reasoning, the node can gather a grid fragment of dimensions . In particular, it can see a fragment of length of the bottom row. Furthermore, we have : to see this, note that is nondecreasing, , and hence . Therefore, in rounds, for a sufficiently large , we can gather a fragment of the bottom row that spans up to distance at least , and this is enough to solve . ∎
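The counting step in the first case can be spelled out as follows; we write T for the exploration radius and c for the constant in the fragment dimensions (both names are ours, introduced only for this sketch).

```latex
% Hedged sketch of the counting step: an error-free, non-wrapping
% exploration for T steps yields a grid fragment with at least
(cT)^2
% nodes for some constant c > 0, so
(cT)^2 \le n \quad\Longrightarrow\quad T \le \tfrac{1}{c}\sqrt{n};
% exploring beyond this bound without seeing an error or a clean
% wrap-around would contradict the graph having n nodes.
```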
As a consequence, we obtain an upper bound for the complexity of .
See 3
Proof.
The idea of the algorithm that solves the described problem is the following. First, each node gathers its constant-radius neighbourhood and checks whether there is a local error:

If a node witnesses a local error, it marks itself as a ‘witness’.

If a node is either a witness itself or adjacent to a witness, it marks itself as a near-witness and outputs .
Now let , for a large enough constant . Each node on the bottom cycle that does not have an edge labelled attempts to gather full information about an rectangle above and to the right of node , that is, a rectangle composed of the bottommost nodes of the first columns to the right of . By Lemma 5, in rounds we can either successfully gather the entire rectangle if it is error-free, or discover the nearest column that contains a near-witness:

If the entire rectangle is error-free, we can solve on the bottom row by Lemma 6.

Otherwise, we find the nearest column containing a near-witness . In this case, node outputs its modulo-2 distance from that column and the input of the witness, and produces a path of error pointers that spans a sequence of edges labelled followed by a sequence of edges labelled , reaching . Note that this path is unique and always exists, since all columns before the nearest one containing a witness must be fault-free (up to height ), and if a witness is in the same column as , the lowest one can always be reached by a fault-free path spanning only edges labelled .
Finally, nodes that are not on the bottom cycle and do not see bottom-cycle nodes that produce error pointer paths output empty labels.
Clearly, this produces a valid solution to on extended link machine encoding graphs, since they satisfy Lemma 6. Moreover, if there are no witnesses and every node has an edge labelled , all nodes produce empty outputs, which is valid.
Now consider a graph that is not an extended link machine encoding graph. A node explores the graph for rounds. If the node satisfies the requirements of Lemma 6, then it produces a valid solution for the problem . Otherwise, the node sees a witness. If a node decides to produce an error pointer towards a near-witness , then all the nodes on the error path will produce an error pointer towards . This follows from the observation that, on valid fragments, nodes on the same row reach the same height while exploring the graph, due to the rectangular exploration. Thus, if outputs a pointer towards , then all the intermediate nodes will output a pointer, and these pointers will correctly produce a path from to with the right modulo-2 distance and labelling. ∎
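The mod-2 counter carried along a pointer chain can be illustrated with a small toy model. The node names and the restriction to purely horizontal chains are our own simplifications; the actual chains also have a vertical part, which this sketch ignores.

```python
def bottom_row_outputs(n, error_cols, T):
    """Toy model of the pointer chains on a cycle of n bottom nodes:
    a node in an error column outputs 'Err'; a node that sees an error
    column within distance T to its right points right, carrying its
    distance to that column mod 2; all other nodes solve the problem.
    (Hypothetical simplification: the vertical part of the chain is
    ignored.)"""
    out = []
    for v in range(n):
        if v in error_cols:
            out.append(("Err", None))
            continue
        # distance to the nearest error column strictly to the right
        d = next((k for k in range(1, T + 1)
                  if (v + k) % n in error_cols), None)
        out.append(("solve", None) if d is None else ("ptr-right", d % 2))
    return out
```

Adjacent nodes on a chain carry different counters, so the counters form a 2-colouring of the chain, matching the correctness condition on error pointers.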
5.2.2 Lower bound – proof of Theorem 4
Next, we prove that the upper bound of Theorem 3 is tight. The worst-case instances are truncated link machine encoding graphs, defined as follows: take a valid link machine encoding graph and remove rows from the top until holds, where is the length of the bottom cycle. The basic idea is to show that on truncated link machine encoding graphs, any algorithm has two choices, both of them equally bad:

[noitemsep]

We can solve problem on the bottom cycle, but this requires time .

We can report an error, but this also requires time .
Note that truncated link machine encoding graphs have errors, and hence it is fine for a node to report an error. However, all witnesses are on the top row (or next to it), and constructing a correctly labelled error pointer chain from the bottom row to the top row takes time linear in the height of the construction. We will formalise this intuition in what follows.
Lemma 7.
Let be a truncated link machine encoding graph, be a node on the bottom row, and let be any function satisfying . Let be the set of all nodes that can see in steps. Then is contained in the subgraph induced by the columns within distance of .
Proof.
By the construction of the link machine encoding graphs, the maximum distance in columns that we can reach in steps is bounded by . Since , we have for any positive integer . Thus, for any , there is such that for all , which implies the claim. ∎
See 4
Proof.
The nodes on the bottom row have the following possible outputs:

At least one node produces an error pointer . Then there must be a chain of pointers all the way to a near-witness near the top row, and the chain has to be labelled with the input of . The distance from to is , and the claim follows.

None of the nodes on the bottom row produce an error pointer , but at least one of them produces an error pointer . In that case, all nodes on the bottom row must output , and the bottom cycle has to be properly coloured.

None of the nodes produce any error pointers. Then all nodes on the bottom row must solve problem .
As colouring the bottom row is at least as hard as solving problem on the bottom row, it suffices to argue that the third case requires rounds. The proof is by simulation: we assume a faster algorithm for and use it to speed up the corresponding problem on cycles.
Let be an algorithm for with running time . The algorithm has to solve the problem on the bottom cycle. Now, given a cycle of length as input, we create a virtual link machine encoding graph on top of the cycle as follows: each node creates the nodes in its column, with their identifiers defined to be the identifier of the bottom node padded with the node’s height, encoded in bits. To simulate an algorithm with running time in this virtual graph, each node needs to learn the identifiers of all nodes in its radius neighbourhood in the virtual graph. By Lemma 7, the columns of those nodes are contained within distance in the virtual graph. Thus, we can recover the identifiers of the nodes by scanning the cycle up to distance . Now each node can apply and find a solution for on the cycle in time , by outputting the output of the node at the bottom of the virtual column created by node . This yields an algorithm with running time on cycles of length , a contradiction. ∎
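The identifier-padding step of the simulation can be sketched as follows; the packing of the height into the low-order bits is our own choice of encoding, but any injective pairing of (bottom identifier, height) would do.

```python
def virtual_id(base_id, height, height_bits):
    """Pad a bottom node's identifier with the node's height, encoded
    in a fixed number of bits, to name the virtual nodes in its column.
    Distinct (base_id, height) pairs yield distinct virtual identifiers,
    and the padding adds only height_bits bits to the identifier length."""
    if not 0 <= height < (1 << height_bits):
        raise ValueError("height does not fit in height_bits bits")
    return (base_id << height_bits) | height
```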
Remark 8.
Note that we did not use the fact that our algorithms are deterministic in this proof; in fact, a similar argument applies to randomised algorithms. This is because, as we will see later, we consider problems that are equally hard for randomised and deterministic algorithms. Moreover, on truncated link machine encoding graphs, the only way to cheat with error pointers is to produce a 2-colouring or to copy the input of nodes that are far away in the graph, that is, to solve problems that are equally hard for randomised and deterministic algorithms.
5.3 Instantiating the construction
We consider the problems and defined on cycles as follows:
 (colouring):

Output a proper colouring.
 (safe colouring):

Given an input in , label the nodes with such that

[noitemsep]

input nodes are labelled with ,

input nodes are labelled with , , or ,

is never adjacent to ,

is never adjacent to ,

is never adjacent to , , or .
In essence, if we have an all input, we can produce an all output, and if we have an all input, we can produce an all output. However, if we have a mixture of s and s, we must properly colour each contiguous chain of s. The worst-case instance is a cycle with only one , in which case we must properly colour a chain of length .
The time complexity of this problem is , and it can be solved with one-sided algorithms. It is also clearly an problem. Note that, unlike colouring, safe colouring is always solvable for any input (including odd cycles).
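A one-sided solver in the spirit of safe colouring can be sketched with a toy model. The labels 'a', 'b', 'A', 'B', '1', '2' below are hypothetical stand-ins for the problem's actual label sets, and the two-colour scheme is a simplification of the full constraints.

```python
def safe_colour(inputs):
    """Toy one-sided solver on a cycle, with hypothetical labels:
    each node's input is 'a' or 'b'.  An all-'b' input gets the uniform
    safe output 'B'; otherwise 'a'-nodes output 'A' and each run of
    'b'-nodes is properly 2-coloured by the parity of the distance to
    the nearest 'a' on the left (each node only ever looks in one
    direction, hence "one-sided")."""
    n = len(inputs)
    if all(x == 'b' for x in inputs):
        return ['B'] * n
    out = []
    for v in range(n):
        if inputs[v] == 'a':
            out.append('A')
        else:
            # distance to the nearest 'a' strictly to the left
            d = next(k for k in range(1, n) if inputs[(v - k) % n] == 'a')
            out.append('1' if d % 2 == 1 else '2')
    return out
```

This also exhibits the worst case discussed above: with a single 'a' on the cycle, the last node in the chain must look a distance linear in the cycle length to find it.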

We now instantiate our construction using the link machines defined in Section 3. The general recipe of these instantiations will be the following:

We start with a link machine program with growth , and compute the function that controls the speedup.

Next, we observe that there can be link machine encoding graphs with nodes in which the bottom cycle has length satisfying , and in which the nodes of the bottom cycle see no errors within distance .
By considering each of the composite functions of Table 4, and by applying Theorems 3 and 4, we obtain all of the new time complexities listed in Table 2.
Theorem 9.
There exist problems of complexities

[noitemsep]

,

,
where , , and are positive integer constants satisfying and .
Proof.
Let with growth . We have

[noitemsep]

, and

.
When is we obtain:

[noitemsep]



.
By setting and the claim follows.
When is we obtain:

[noitemsep]


.
The claim follows by setting the values of and appropriately. ∎
Theorem 10.
There exist problems of complexities

[noitemsep]

, and

,
where and are positive integer constants such that .
Proof.
Let with growth . We have

[noitemsep]

and ,

.
Thus, the problem has complexity

[noitemsep]

when is , and

when is . ∎
Theorem 11.
There exist problems of complexities

[noitemsep]

, and

,
where and are positive integer constants such that .
Proof.
Let with growth