1 Introduction
What can be computed within logarithmic complexity has been one of the most fundamental questions in distributed and parallel computing since the initiation of the study of parallel algorithms in the eighties. Various problems were shown back then to belong to the class NC, i.e., the class of problems that can be solved in polylogarithmic time by a polynomial number of machines. This includes several fundamental problems, namely, coloring, Maximal Independent Set and Maximal Matching. All of these problems admit deterministic logarithmic parallel algorithms. In the distributed setting, however, these problems turned out to be much more challenging, if one aims at a deterministic solution. The first deterministic polylogarithmic solution was an $O(\log^4 n)$-time algorithm for the problem of Maximal Matching, obtained by Hanckowiak, Karonski and Panconesi [21]. More recently, polylogarithmic deterministic coloring was obtained by Barenboim and Elkin [3]. The problem of edge-coloring was provided with a polylogarithmic-round deterministic algorithm by Fischer, Ghaffari and Kuhn [15]. Recently, a plethora of results were published in this field, with various improvements to the aforementioned algorithms. See, e.g., [4, 1, 14, 23], and references therein. In a very recent breakthrough, a wide class of problems was solved using a deterministic polylogarithmic number of rounds [26], including coloring and Maximal Independent Set. This was achieved by providing an efficient algorithm for the Network Decomposition problem, which is complete in this class.
1.1 Our Results
In the current paper we investigate yet another distributed setting, namely, the sleeping setting. Several variants of this setting have attracted the attention of researchers recently [5, 8, 10, 13, 18, 22]. The particular setting and complexity measure we consider in this paper were introduced by Chatterjee, Gmyr and Pandurangan [8] in PODC'20. This sleeping setting is similar to the standard distributed setting [24], but has an additional capability, as follows. In the sleeping setting, the vertices of the network graph can decide in each round to be in one of two states: a "sleep" state or an "awake" state. If all the vertices are awake all the time, the setting is identical to the standard setting. However, a vertex has the capability of entering a "sleep" state, in which it cannot receive or send messages in the network, nor can it perform internal computations. Consequently, such rounds do not consume the resources of that vertex, and shall not be counted towards a complexity measurement that aims at optimizing resource consumption. Indeed, in this setting a main complexity measurement takes into account only awake rounds. Specifically, the worst-case awake complexity of an algorithm in the sleeping setting is the worst-case number of rounds in which any single vertex is awake. In PODC'20, Chatterjee, Gmyr and Pandurangan [8] presented a randomized Maximal Independent Set algorithm with expected node-averaged awake complexity of $O(1)$. Its high-probability node-averaged awake complexity is $O(\log n)$, and its worst-case awake complexity is polylogarithmic. This work raised the following two important questions:
(1) Can MIS be solved within deterministic logarithmic awake complexity?
(2) Can additional problems be solved within such complexity?
In the current paper we answer these questions in the affirmative. Much more generally, we show that any decidable problem can be solved within deterministic logarithmic awake complexity in the distributed sleeping setting. Here, a decidable problem is any computational problem that has a sequential deterministic algorithm that provides a correct solution within a finite sequential running time (as large as one wishes).
Note that undecidable problems in the sequential setting are also undecidable in the different variants of distributed settings.
For the purpose of solving decidable problems, we present a new structure, namely, a Distributed Layered Tree (DLT). We show that if one is able to compute a distributed layered tree, then any decidable problem can be solved within an additional awake complexity of $O(1)$. This is because a DLT allows each vertex to obtain all the information of the input graph in a constant number of awake rounds, and then any decidable problem can be solved locally and consistently by all vertices using a sequential algorithm. We also prove that the DLT problem itself can be solved in $O(\log n)$ awake rounds. In particular, this provides a deterministic logarithmic solution to the fundamental Broadcast problem. This improves the best previously-known awake complexity of this problem in the sleeping setting, due to Chang et al. [7], by at least a quadratic factor. We note that the broadcast algorithm of Chang et al. was devised for settings with additional requirements, i.e., it is more general than Broadcast in the sleeping setting. Nevertheless, it was still the state-of-the-art even in the sleeping model. Our improvement applies specifically to the sleeping model.
A natural question is how difficult the construction of a DLT is. We answer this by proving a lower bound of $\Omega(\log n)$ awake rounds for the DLT problem. This lower bound is obtained by a simple but powerful tool: a reduction from message-complexity lower bounds in the standard model. With this lower bound, given that the DLT problem itself is a decidable problem, we obtain a tight deterministic bound of $\Theta(\log n)$ worst-case awake time on the class of decidable DLT-hard problems (a DLT-hard problem is a problem whose solution provides a DLT within $O(1)$ additional awake rounds) in the sleeping model.
An additional direction of ours is the analysis of a class we define as the O-LOCAL class. This is the class of problems that can be solved using an acyclic orientation of the edges, by choosing a solution for each vertex after all vertices reachable from it have computed their solutions, and as a function of these solutions. A notable example is coloring, where each vertex can select its color once all neighbors on outgoing edges have selected their own colors, such that the color does not conflict with any of them. Another example is MIS, where each decision is made after all outgoing neighbors have made their decisions. We show that any problem that belongs to this class has deterministic worst-case awake complexity of $O(\log \Delta + \log^* n)$.
In addition to the number of awake rounds, which is the main complexity measurement in this setting, we are also interested in optimizing the overall number of communication rounds. Since the DLT can be used to solve any decidable problem, it follows that certain such problems require $\Omega(D)$ communication rounds, where $D$ is the graph diameter. (These are the global problems of the ordinary distributed setting. For example, leader election is such a problem.) We investigate how close we can get to this lower bound with an algorithm for the distributed layered tree problem that still has logarithmic awake complexity. While our basic algorithm requires a considerably larger number of communication rounds, a more sophisticated version requires a number of communication rounds that is much closer to this lower bound. This comes at a price of increasing the worst-case awake complexity, but only by a small factor.
1.2 The Sleeping Setting
The sleeping setting represents the need for energy-efficient algorithms in ad hoc, wireless and sensor networks. In such networks, the energy consumption depends on the amount of time a vertex is actively communicating or performing calculations. More importantly, significant energy is spent even if a node is idle, but awake. Experiments have shown that the energy consumption in an idle state is only slightly smaller than when a node is active [11, 30]. This is in contrast to the sleeping state, in which energy consumption decreases drastically. Thus, if a node may enter a sleeping mode to save energy during the course of an algorithm, the energy consumption of the network during the execution can be improved significantly.
The sleeping model is a formulation of this premise, and is a generalization of the traditional LOCAL model. In the sleeping model, similarly to the LOCAL model, a communication network is represented by an $n$-vertex graph $G = (V, E)$, where vertices represent processors, and edges represent communication links. There is a global clock, and computation proceeds in synchronous discrete rounds. In each round a vertex can be in either of two states, "sleep" or "awake". (In the LOCAL model the vertices are always in the "awake" state.) If a vertex is in the "awake" state in a certain round, it can perform local computations as well as send and receive messages in that round. On the other hand, in a round of a "sleep" state, a vertex cannot send or receive messages, and messages sent to it by other vertices are lost. It also cannot perform any internal computations. A vertex decides a priori about entering a "sleeping" state. That is, in order to enter a sleeping state in a certain round $t$, either the vertex decides about it in an awake round $t' < t$, or such a decision is hard-coded in the algorithm and is known before its execution. Nodes in the "sleep" state consume almost no energy, and thus shall not be counted towards the energy-efficiency analysis.
Initially, vertices know the number of vertices $n$, or an upper bound $N \geq n$ on it. The IDs of vertices are unique and belong to the set $\{1, 2, \ldots, N\}$. Even if $N \gg n$, the awake complexity of our algorithms is not affected at all. The overall number of clock rounds, however, may be affected. In this case $n$ should be replaced by $N$ in the clock-complexity bounds. In some of our algorithms, however, the dependency on $n$ and $N$ can be made as mild as the log-star function. See Section 4.
The main efficiency measures in the sleeping model. The measurements for the performance of an algorithm in the sleeping model were first introduced by Chatterjee, Gmyr and Pandurangan in [8]. For a distributed algorithm with input graph $G = (V, E)$ in the sleeping model, two types of complexity measurements are defined. One is the node-averaged awake complexity, in which, for a node $v$, one defines $a(v)$ as the number of rounds $v$ spends in the "awake" state until the end of the algorithm. The node-averaged awake complexity is the average $\frac{1}{n} \sum_{v \in V} a(v)$.
The second efficiency measurement is the worst-case awake complexity. This is defined as the maximum, over all nodes, of the number of awake rounds of a node, and is a stronger requirement than the node-averaged awake complexity. In this paper we focus entirely on the worst-case efficiency measurement.
1.3 Our Techniques
1.3.1 Upper Bound
Our main technical tool is the construction of a Distributed Layered Tree (DLT, for short). A DLT is a rooted tree where the vertices are labeled, such that each vertex has a greater label than that of its parent, according to a given order. Moreover, in a DLT each vertex knows its own label and the label of its parent. This knowledge of the label of the parent is not trivial in the sleeping model, since passing this information between a parent and a child requires both of them to be in an awake state. This knowledge and hierarchy of labels throughout the tree make DLTs very powerful structures in the sleeping setting. Indeed, once such a tree is computed, the information of the entire graph can be learned by all vertices within $O(1)$ awake complexity, as follows. For a non-root vertex $v$, let $\ell_v$ and $\ell_p$ be the labels of $v$ and of the parent $p$ of $v$ in the DLT, respectively. Each non-root vertex $v$ is awake only in rounds $\ell_p$ and $\ell_v$. The root awakes only in the round of its own label. This way the root is able to perform a broadcast to all the vertices of the tree. Each vertex $v$ receives the information originating at the root in round $\ell_p$ (this information has arrived at the parent of $v$ in an earlier stage) and passes it to its children in round $\ell_v$. Indeed, in this round both $v$ and all its children are awake. In a similar way, a convergecast in the DLT can be performed. We choose some value $B$ which is greater than all vertex labels in the DLT. Each vertex $v$ awakens in rounds $B - \ell_v$ and $B - \ell_p$. This way, information from the leaves propagates towards the root. In round $B - \ell_v$ a vertex receives information from all its children, and in a later stage, in round $B - \ell_p$, the vertex forwards the information to its parent. Note that indeed $B - \ell_v < B - \ell_p$, since $\ell_v > \ell_p$, according to the definition of a DLT. A formal proof for the running time of the broadcast and convergecast procedures in a DLT can be found in Lemma 2.1 in Section 2.
Thus, it becomes possible to perform broadcast and convergecast in the tree within 2 awake rounds per vertex each. A broadcast and a convergecast in the tree allow each vertex to obtain the entire information stored in the tree. Since the tree spans the input graph, the entire information of the graph is obtained. Then, any decidable problem can be solved by running the same deterministic algorithm internally in all vertices of the graph. Finally, each vertex deduces its part in the solution. This execution, performed internally, does not require any additional rounds of distributed communication and is counted as a single awake round in terms of the sleeping model. To summarize this discussion, a DLT makes it possible to solve any decidable problem within 5 awake rounds per vertex.
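The two-awake-rounds broadcast schedule described above can be simulated in a few lines of Python (our own illustration; the tree, vertex names and labels are hypothetical):

```python
# Sketch: simulating the DLT broadcast schedule. Each non-root vertex v
# is awake only in rounds label[parent[v]] (listen) and label[v] (send);
# the root is awake only in the round of its own label.

def broadcast_receive_rounds(parent, label, root):
    """Return, per vertex, the round in which it holds the root's message."""
    received = {root: label[root]}      # the root "has" the message in its round
    # Labels strictly increase from parent to child, so processing vertices
    # in increasing label order visits every parent before its children.
    for v in sorted(parent, key=label.get):
        p = parent[v]
        assert label[v] > label[p], "DLT property violated"
        received[v] = label[p]          # v listens exactly when p transmits
    return received

# A small hypothetical DLT (labels grow along every root-to-leaf path):
parent = {'a': 'r', 'b': 'r', 'c': 'a'}
label = {'r': 1, 'a': 3, 'b': 4, 'c': 7}
print(broadcast_receive_rounds(parent, label, 'r'))
```

Each vertex is awake at most twice, yet the message reaches everyone: a vertex's listening round $\ell_p$ always precedes its own sending round $\ell_v$, by the DLT label property.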
The ability to solve any decidable problem within a constant awake complexity suggests that the computation of a DLT is an ultimate goal in the sleeping setting. Thus, establishing the efficiency of this construction is of great interest. We construct a DLT within $O(\log n)$ awake rounds as follows. We begin with singleton DLTs, where each vertex of the input graph is a DLT in a trivial way. Then, we perform connection phases in which the trees are merged. Each phase requires at most $O(1)$ awake rounds from each vertex. The number of DLTs is at least halved in each phase. After $O(\log n)$ phases, a single tree remains. This tree contains all the vertices of the input graph. Thus, it is a DLT of the entire input graph $G$.
The high-level idea of our algorithm is somewhat similar to the celebrated algorithm of Gallager, Humblet and Spira for minimum spanning trees [17], but the construction is fundamentally different. While GHS finds an existing subgraph that is an MST, our technique gradually builds trees that contain new data. These are new ID assignments that make it possible to make progress with tree formation. In each iteration trees are merged and IDs are reassigned, until a single DLT of the entire input graph is achieved. This tree has the desired IDs, according to the definition of the DLT, as a result of the ID recomputation made in each iteration.
1.3.2 Lower Bound
Once we establish an upper bound on the awake complexity of constructing a DLT, we turn to examining lower bounds. We note that ordinary lower-bound techniques may not work for the sleeping setting. This is because the standard techniques in the distributed setting deal with what information can be obtained within a certain number of rounds. That is, within $r$ rounds, for an integer $r$, each vertex can learn its $r$-hop neighborhood. Then, arguments of indistinguishability of views are used. (That is, vertices that must make distinct decisions are unable to do so if their $r$-hop neighborhoods are identical. In this case, $r$ rounds are not sufficient to solve a certain problem.) However, such arguments do not work in the sleeping setting. Indeed, within $O(1)$ awake rounds the entire graph can be learned on certain occasions. Thus, algorithms with few awake rounds are not limited to obtaining knowledge of bounded-radius neighborhoods.
As a consequence of the latter phenomenon, we investigate alternative ways to prove lower bounds. We introduce a quite powerful technique that allows one to translate lower bounds on message complexity into lower bounds on awake rounds in the sleeping setting. Indeed, if $c \cdot n \log n$ messages must be sent in a ring network to solve a certain problem, for a constant $c > 0$, then $\Omega(\log n)$ awake rounds are required for any algorithm that solves the problem in the sleeping setting. Otherwise, if $o(\log n)$ awake rounds were possible, all the messages of each round could be concatenated, and thus each vertex would send up to $o(\log n)$ messages to each of its two neighbors in the ring during the rounds it is awake. The number of messages per vertex would become $o(\log n)$, and the overall number of messages passed would thus be smaller than $c \cdot n \log n$, which contradicts the assumption that at least $c \cdot n \log n$ messages must be sent. A formal proof of this claim can be found in Lemma 3.1 in Section 3.
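The counting step of this reduction is simple arithmetic, illustrated by the following Python sketch (the concrete numbers are stand-ins of our own choosing):

```python
# Sketch: the counting step of the reduction. In a ring, each vertex has
# 2 neighbors; if it is awake in at most k rounds, it sends at most one
# (concatenated) message per neighbor per awake round, hence at most
# 2*k messages in total, i.e., at most 2*n*k messages network-wide.

def max_messages(n, awake_rounds):
    return 2 * n * awake_rounds

n = 1024
lower_bound = n * 10        # stand-in for an n*log2(n) message bound (log2(1024) = 10)
k = 3                       # a hypothetical awake bound well below log2(n)
assert max_messages(n, k) < lower_bound   # contradiction with the message bound
print(max_messages(n, k), "<", lower_bound)
```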
We employ this idea with the known lower bound of $\Omega(n \log n)$ on the message complexity of leader election in rings [16]. Since a DLT allows, in particular, solving leader election, we deduce that $\Omega(\log n)$ awake rounds are required. Otherwise, it would be possible to solve leader election in rings with $o(n \log n)$ messages. We note that the lower bound of [16] holds for a given number of rounds, assuming the IDs are sufficiently large. Our upper bounds on awake complexity, on the other hand, do not rely on the range of IDs, but only on the number of vertices in the graph. No matter how large the IDs are, on an $n$-vertex graph the awake complexity for constructing a DLT is $O(\log n)$. The overall number of clock rounds (awake and asleep) does depend on the range of identifiers, but the dependency can be made as low as the log-star function, by using the coloring algorithm of Linial [24]. Our algorithm is applicable also with a proper coloring of components, not necessarily with distinct IDs. Consequently, for any given number of clock rounds (awake and asleep), which upper bounds the ordinary running time of an algorithm, there exists a sufficiently large range of IDs, such that the awake complexity of our algorithm is tight.
1.3.3 Improved Upper Bound for O-LOCAL Problems
O-LOCAL problems are those that can be solved sequentially, according to an acyclic orientation provided with the input graph, such that each vertex's decision is made after all vertices reachable from it on oriented paths have made their own decisions, and as a function of these decisions. (Note that directing edges from endpoints of smaller IDs to larger IDs provides such an orientation.) For this kind of problems, we employ a technique that is quite different from that of a DLT, and obtain awake complexity of $O(\log \Delta + \log^* n)$. We still employ a tree construction, but this time it is more sophisticated than a DLT. On the other hand, it is constructed internally by each vertex, and is the same in all vertices. The algorithm starts with a distributed computation of an $O(\Delta^2)$-coloring of the input graph within $O(\log^* n)$ time. Then, each vertex constructs internally a binary search tree whose leaves are the possible colors. (The colors are not consecutive, and inner nodes have integer values between the values of the colors.) Next, each vertex decides to wake up in the rounds whose numbers appear on the path from the leaf of its color to the root. We prove that one of these rounds occurs after all neighbors of smaller colors have made their decisions. Moreover, by that round these vertices have communicated their decisions to the vertex. Consequently, it may compute its own decision. Since the depth of the tree is $O(\log \Delta)$, this requires only $O(\log \Delta)$ awake rounds per vertex.
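To make the wake-up schedule concrete, here is a small Python sketch of one possible binary-search-tree numbering (our own illustration, not necessarily the paper's exact construction): colors are even numbers, inner nodes take the odd values in between, and a vertex of color $c$ wakes only in the rounds on the root-to-leaf path of $c$. Any two vertices then share the round of the lowest common ancestor of their colors, and that round's number separates the two colors.

```python
# Sketch: a vertex of color c (an even number) wakes in the rounds whose
# numbers lie on the root-to-leaf path of c, in a balanced BST over the
# even range [lo, hi]; inner nodes carry the odd values in between.

def wake_rounds(c, lo, hi):
    """Node values on the path from the BST root over [lo, hi] to leaf c."""
    path = []
    while lo < hi:
        mid = (lo + hi) // 2
        if mid % 2 == 0:
            mid += 1                  # inner nodes take odd values
        path.append(mid)
        if c < mid:
            hi = mid - 1
        else:
            lo = mid + 1
    path.append(c)                    # the leaf: the color itself
    return path

# Two hypothetical neighbors with colors 4 < 10, colors drawn from [0, 14]:
p1, p2 = wake_rounds(4, 0, 14), wake_rounds(10, 0, 14)
shared = max(set(p1) & set(p2))       # the round of the colors' common ancestor
assert 4 <= shared <= 10              # the shared round separates the two colors
print(p1, p2, shared)
```

The path length is logarithmic in the size of the color space, which is where the $O(\log \Delta)$ awake-round bound comes from.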
1.4 Related Work
The distributed LOCAL model was formalized by Linial in his seminal paper [24] from 1987. This paper also provides a deterministic $O(\Delta^2)$-coloring algorithm with $O(\log^* n)$ round-complexity, as well as a matching lower bound. Since then, a plethora of distributed graph algorithms have been obtained in numerous works. See, e.g., the survey [27] and references therein.
The sleeping setting has been intensively studied in the area of computer networks [9, 25, 28, 29]. In distributed computing, the problem of broadcast in sleeping radio networks was studied by King, Phillips, Saya and Young [22]. The problem of clock synchronization in networks with sleeping processors and variable initial wake-up times was studied by Bradonjic, Kohler and Ostrovsky [6], and by Barenboim, Dolev and Ostrovsky [2]. A special type of the sleeping model, in which processors are initially awake and eventually enter a permanent sleeping state, was formalized by Feuilloley [12, 13]. An important efficiency measurement in this setting is the vertex-averaged awake complexity. This setting was further studied by Barenboim and Tzur [5], who obtained various symmetry-breaking algorithms with improved vertex-averaged complexity.
The awake complexity of various problems has also been studied in radio models of general graphs (rather than unit disk graphs). In particular, several important results were achieved by Chang et al. in PODC'18 [7]. That work considered Broadcast and related problems in several radio models, which can be seen as the sleeping model with additional restrictions. Specifically, in the model that is the closest to the sleeping model, the vertices are able to either transmit or listen in an awake round, but not both. (In other words, this is a half-duplex communication model, while the sleeping model is full-duplex.) There are also even more restricted models studied in [7], in which vertices cannot receive messages from multiple neighbors in parallel.
Since the results for Broadcast in [7] are applicable to the sleeping model, and they are the state-of-the-art even in the model that we consider in the current paper, a comparison between them and our results is in order. The Broadcast algorithm of [7] with the best deterministic awake complexity has an awake complexity that depends on the largest identifier $N$, and is at least quadratic in $\log n$. Our results, on the other hand, provide a deterministic Broadcast algorithm in the sleeping setting with awake complexity of $O(\log n)$. This is at least a quadratic improvement in the sleeping setting. We stress that our awake complexity is not affected by the size of identifiers, and remains $O(\log n)$, no matter how large $N$ is. The Broadcast algorithm of [7] constructs trees that partition the graph into layers, but these trees are very different from our DLTs, both in their structure and in the techniques for achieving them. Specifically, in [7] each vertex in layer $i$ of the tree has a neighbor in layer $i - 1$. On the other hand, a DLT does not necessarily have this property. (This is because layer $i$ in a DLT consists of all vertices labeled $i$ in the tree, which are not necessarily at distance $i$ from the root.) In addition, the tree construction in [7] is based on ruling sets, while our techniques are considerably different.
The class O-LOCAL of problems that we mentioned in Section 1.3.3 is inspired by the class P-SLOCAL, which was first defined by Ghaffari, Kuhn and Maus [19]. This class consists of all problems that can be solved as follows. Given an acyclic orientation, the output of each vertex is determined sequentially, according to the orientation, after the vertices on its outgoing edges have made their decisions. The decision of each vertex is based on information from a polylogarithmic radius around it. The O-LOCAL class is similar to the P-SLOCAL class, except that instead of examining a polylogarithmic-radius neighborhood around a vertex, only its neighbors on outgoing edges are examined. (And, more generally, all vertices on consistently oriented paths emanating from a vertex are inspected.)
2 Distributed Layered Trees
In this section we describe our method, with which one can solve any decidable distributed problem in the sleeping model. We describe the construction of a certain kind of spanning tree, called a Distributed Layered Tree, defined as follows. Each vertex $v$ in the tree is labeled with a label $\ell_v$. These labels must have some predefined order, such that they can be mapped to natural numbers. The labeling is such that the label of each vertex, besides the root, is larger than the label of its parent, and each vertex knows the label of its parent. These two requirements on the spanning tree allow us to perform broadcast across it in a fashion where each vertex is awake for exactly 2 rounds. The same is true for a convergecast procedure. Consequently, given such a tree, the root can learn the entire input graph, compute a solution for any decidable problem internally, and broadcast it to all vertices, all within a constant number of awake rounds. This is done in the following way. For a broadcast procedure, we start with a message from the root of the tree, propagated through the tree, where each vertex is awake for just 2 rounds. Namely, a vertex $v$ awakes in round $\ell_p$ and round $\ell_v$, where $p$ is the parent of $v$ in the spanning tree. In other rounds $v$ is asleep. This ensures that a message sent from the root propagates in the tree and eventually arrives at all vertices of the graph, while each vertex awakens exactly twice. In the same manner we perform the convergecast procedure, where each child $v$ sends its message to its parent in round $B - \ell_p$, for some value $B$ greater than all labels. Again, each vertex is active for only two rounds, specifically, $B - \ell_v$ and $B - \ell_p$.
Using this method, the root of the spanning tree can compute the solution for any decidable problem deterministically and broadcast this solution back through the tree to all the vertices of the input graph with $O(1)$ worst-case awake complexity in the sleeping model. Therefore, our main goal is obtaining a distributed sleeping algorithm for computing such a layered tree. We begin with a formal definition of notation and of a Distributed Layered Tree. Note that we refer here to a lexicographic order. For the sake of the definition we do not fix this lexicographic order, since it does not matter for the definition of a DLT. One can build a DLT with any order that can be mapped to natural numbers. We define a lexicographic order that serves our purposes in Section 2.1.
Vertex label $\ell_v$. The label of a vertex $v$ is denoted by $\ell_v$. The labels are taken from a range of labels which has a lexicographic order.
Tree label $\ell_T$. The label of a tree $T$ is denoted by $\ell_T$. The labels are taken from a range of labels which has a lexicographic order. The label of a tree is defined to be the label of the root of the tree. Hence, that label can always be found in the memory of the root.
Definition 2.1.
A Distributed Layered Tree (DLT). A DLT is a rooted, oriented, labeled spanning tree with two properties, with respect to some lexicographic order:

1. For each vertex $v$ with parent $p$, the label of $v$ is greater than the label of $p$ in the lexicographic order.

2. Each vertex $v$ has knowledge of the label of its parent $p$.
As we show in the next lemma, DLTs are useful for distributing data across the graph in an efficient way.
Lemma 2.1.
Given a DLT, the procedures of broadcast and convergecast across the entire tree take exactly 2 awake rounds per vertex each in the sleeping model.
Proof.
Broadcast. Each vertex $v$ is awake in rounds $\ell_p$ and $\ell_v$. As a vertex $v$ awakes in round $\ell_v$ and broadcasts a message, each of its children in the tree is awake in round $\ell_v$ as well, and thus can receive the message from its parent. Therefore, a message sent by the root propagates throughout the tree until it reaches all leaves.
Convergecast. Let $B$ be an integer, such that $B > \ell_v$ for all $v \in V$, and $B$ is known to all vertices. Each vertex $v$ is awake in rounds $B - \ell_v$ and $B - \ell_p$. If a child $v$ has a message to pass to its parent $p$, it awaits round $B - \ell_p$ and then sends the message to $p$. In that round $p$ is awake and ready to receive the messages from all its children. Each vertex already has knowledge of the subtree rooted at it by the time it forwards its message, since $\ell_u > \ell_v$ for each vertex $u$ in the subtree of which $v$ is the root, and thus the round $B - \ell_u$ comes before the round $B - \ell_v$. Thus the messages propagate up the tree all the way to the root.
∎
We note that in the proof above we require an integer $B$ that is larger than the labels of all vertices in the tree. Since the labels are required to be taken from a range with a lexicographic order, and we have knowledge of the size of this range, $B$ can be chosen appropriately. We describe the lexicographic order in Section 2.1, as well as how each vertex gains knowledge of the ranges from which labels are selected.
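The convergecast schedule of Lemma 2.1 can be checked with a short Python simulation (our own illustration; the tree and labels are hypothetical):

```python
# Sketch: the convergecast schedule. With B larger than every label,
# vertex v is awake in rounds B - label[v] (collect from its children)
# and B - label[parent[v]] (send upward), so information flows from
# larger labels (earlier rounds) toward the root (later rounds).

def convergecast_order(parent, label, B):
    """Round in which each non-root vertex forwards its aggregated data."""
    send_round = {v: B - label[parent[v]] for v in parent}
    collect_round = {v: B - label[v] for v in label}
    for v, p in parent.items():
        # v sends in the very round in which p collects, and only after
        # v itself has collected from its own children.
        assert send_round[v] == collect_round[p]
        assert collect_round[v] < send_round[v]
    return send_round

parent = {'a': 'r', 'b': 'r', 'c': 'a'}
label = {'r': 1, 'a': 3, 'b': 4, 'c': 7}
B = 8                      # any value exceeding all labels works
print(convergecast_order(parent, label, B))
```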
2.1 The Connection Phases
Our algorithm for constructing a DLT starts with a graph where each vertex is considered to be a singleton tree. The initial label $\ell_{T_v}$ of each such tree $T_v$ is determined by the ID of its vertex $v$, that is, $\ell_{T_v} = ID(v)$. The vertex label is set as $\ell_v = (ID(v), 0)$. (Two coordinates are used, since trees are going to be merged and have many vertices. Then the left-hand coordinate is going to be the same for all vertices in a tree, while the right-hand coordinates may differ. Also, distinct trees will have distinct left-hand coordinates.) Each of these singleton trees is a DLT in a trivial way. Our goal is merging these trees in stages, so that eventually a single DLT remains that spans the entire input graph. Assume we have a forest of DLTs. Initially, we have a forest of single-vertex trees. During the connection phases, we enforce a rule regarding the representation of the labels of the vertices. Let $T$ be a tree in our forest. The DLT label of $T$, $\ell_T$, is an integer number. The label of each vertex $v \in T$ is set as $\ell_v = (\ell_T, d_v)$, where $d_v$ is a number assigned to $v$, as described in Section 2.1.1. In what follows we describe the ordering of the vertex labels. The left coordinate is considered to be the more significant one among the two. That is, $(a_1, b_1) < (a_2, b_2)$ iff $a_1 < a_2$, or $a_1 = a_2$ and $b_1 < b_2$. Note that the requirements on labels hold. That is, the labels have an ordering, and the root of the tree can deduce the tree label from its own label by referring to the first coordinate. Next, we describe how our algorithm produces connections between the trees. We do this in two stages.
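Under this ordering, the pair labels behave exactly like Python tuples, whose comparison is lexicographic with the left coordinate most significant; a minimal sanity check:

```python
# The pair labels compare like Python tuples: the left (tree-label)
# coordinate is the more significant one.
a, b, c = (5, 9), (7, 0), (5, 2)
assert a < b               # different tree labels: the left coordinate decides
assert c < a               # same tree label: the level (right coordinate) decides
print(sorted([a, b, c]))   # sorted by tree label first, then by level
```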
2.1.1 Connection Stage One: Several DLTs into a Single DLT
In this stage our goal, for each tree $T$, is finding an edge that connects $T$ to a neighboring DLT $T'$, such that $\ell_{T'} < \ell_T$. Using this edge we connect $T$ with $T'$, such that $T'$ becomes the parent of $T$. In the case that $T$ is a single vertex $v$, we simply choose a neighbor of $v$ in $G$ with a label smaller than that of $v$. We note that there may be a case in which no such edge is found, since $T$ is a DLT with a locally-minimal label. We handle such a case in Section 2.1.2. If $T$ contains more than one vertex, we perform a convergecast from all the neighbors of all vertices in $T$ to the root of $T$. Consequently, the root of $T$ learns the structure of $T$ and the set of edges that connect $T$ with other trees. The root chooses an edge which connects some $u \in T$ to a vertex $w \in T'$, where $T'$ is a neighboring DLT of $T$. As explained above, the choice is made such that $\ell_{T'} < \ell_T$. Recall that at this point $\ell_{T'}$ appears in the first coordinate of all the labels of the vertices in $T'$, and hence the root of $T$ has knowledge of the labels of all its neighboring DLTs.
Internally, the root of $T$ calculates a new label arrangement for the vertices of $T$, such that the vertex $u$ that was chosen above (the one connected to $T'$) becomes the new root, and $T$ remains a DLT under the new label arrangement. This requires a new orientation of $T$. This new oriented tree is denoted $\widehat{T}$. The label arrangement of $\widehat{T}$ is obtained in the following way. Note that the DLT label of $T$ is $\ell_T$. The new root $u$ sets its label to be $(\ell_T, 0)$. The rest of the vertices in $\widehat{T}$ are assigned labels in the following way. Each vertex $v$ is assigned the label $(\ell_T, d_v)$, where $d_v$ is the distance of $v$ from $u$ in $\widehat{T}$. Specifically, $d_v$ is the level of $v$ in $\widehat{T}$. It follows that each level of $\widehat{T}$ has labels smaller than the labels in the next level of the tree. Once this internal computation is done, the root sends the resulting label arrangement in a broadcast procedure to all vertices in $T$. Note that $u$ becomes the new root, instead of the previous root, from now until further notice.
But now we are posed with the problem that neighbors of the vertices of the tree do not know the new labels of their neighbors, which are required for the next connection phase, as well as for the second requirement of a DLT. This is solved by awakening the entire graph for one round. Each vertex sends its knowledge to all of its neighbors, while receiving the knowledge from all of its neighbors, in the same round. Then all vertices of the graph switch to the asleep status. The round in which this happens is determined by all vertices before the beginning of the execution. Once all vertices determine the starting time of such a phase, they all wake up after $R$ rounds, for a value $R$ that is known in advance and bounds from above the execution time of this phase. In the following lemma we prove that this connection procedure connects several DLTs into one DLT.
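The internal relabeling step can be sketched as a BFS from the chosen vertex (our own Python illustration; vertex names and the tree label are hypothetical): the new root takes level 0, and every other vertex takes its BFS distance as the second coordinate, so labels strictly increase along each new parent-child edge.

```python
# Sketch: after choosing the connecting vertex u, the root recomputes
# labels internally. u becomes the new root, and every vertex gets label
# (tree_label, distance from u), which preserves the DLT property.
from collections import deque

def relabel(adj, u, tree_label):
    """BFS from the new root u; return per-vertex labels and parents."""
    label, parent = {u: (tree_label, 0)}, {u: None}
    queue = deque([u])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in label:
                label[w] = (tree_label, label[v][1] + 1)
                parent[w] = v
                queue.append(w)
    return label, parent

# A small hypothetical tree, given as an adjacency list:
adj = {'u': ['x'], 'x': ['u', 'y', 'z'], 'y': ['x'], 'z': ['x']}
label, parent = relabel(adj, 'u', 42)
assert all(label[v] > label[parent[v]] for v in label if parent[v])
print(label)
```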
Lemma 2.2.
Let $C$ be a connected component in our graph produced at this stage. Then $C$ is a DLT.
Proof.
First, $C$ is a tree, since each edge in $C$ is oriented from a vertex with a higher label to a vertex with a lower label. Thus, consistently oriented cycles are not possible. Moreover, each vertex has a single parent, the one it chose to connect to. Thus, cycles are not possible at all. (A cycle is either consistently oriented, or contains a vertex that selects two neighbors in the cycle.) Furthermore, there is a tree $T_{min}$ with the minimal DLT label among all the DLTs that compose $C$. The root of $T_{min}$ did not choose any edge to connect through, since its label is a local minimum. This local-minimum label appears in the first coordinate of all the labels of the vertices of $T_{min}$, making them local minima in their respective neighborhoods. Thus, the root of $T_{min}$ is the root of the tree $C$.
Secondly, we made sure that each child has a label greater than the label of its parent by the way we chose the connections. Let be two neighbors in such that is the parent of . If and originated from the same tree at the beginning of the connection phase then the root of assigned them new labels in the connection phase such that . Otherwise, , and becomes the root of its original tree and sets its own label as . Since we chose such that we have that in this case as well.
Thirdly, by awakening all the vertices in the graph for one round and sending labels to 1-hop neighborhoods, we made sure that vertices have knowledge of the labels in their 1-hop neighborhood (including their respective parents in the new tree).
Thus, is a tree, where each child has a label greater than the label of its parent and each child knows the label of its parent. Thus, by definition, is a DLT.
∎
We would like to note here that at the end of each connection phase, each DLT in the new forest has a distinct DLT label. This is due to the fact that the DLT label of a merged component is set from one of the labels of the trees composing it. Hence, if each DLT had a distinct label at the start of the connection phase, this property is clearly preserved once the connection phase is done, and thus also holds at the start of the next connection phase. Note that we start the algorithm with each DLT having a distinct DLT label, since we set those labels according to the IDs of the vertices of the graph.
2.1.2 Connection Stage Two: Connecting Local Minima
We now make a second connection step as part of the connection phase. The motivation for this step is that there might be trees whose DLT labels are local minima, and thus those trees did not connect to any other tree. They might also not have been chosen by any other tree. If this is the case, these trees made it through stage one of the connection phase without connecting to any other component.
Since we would like to at least halve the number of trees in each phase, and thus have at most O(log n) connection phases, we connect these local-minimum-label trees to other components. Consider such a local-minimum-label tree. First, we would like to verify that it indeed has no connection to other components from the previous stage. If some other component selected it earlier, we no longer need to handle it, even though it is a minimum-label tree.
We only need to handle trees with no connection to other trees. We check this by performing a convergecast in each connected component, sending signals from the leaves to the root. If the root received a signal from vertices that previously did not belong to its tree, then the tree is not problematic, as it was chosen by another tree in the first stage. This signal can be, for example, the label of a leaf. Since the label contains the root of the tree to which the leaf belonged (before the trees were merged), the root can deduce that the leaf was not part of its tree before the first stage of the connection phase. In this case we do nothing with this tree.
The other option is that the tree was not connected to any other component. In this case, we add such a connection. To this end, the tree chooses an edge to a neighboring DLT arbitrarily. Since this is the only edge connecting it to other DLTs, the result is a tree. A simple move now would be to make it a subtree of the chosen DLT. But we note that it might not be the only local-minimum tree that arbitrarily chose that DLT to connect to. Since there can be more than one local-minimum DLT that chooses the same DLT, we need to make the chosen DLT the parent of all those DLTs which chose to connect to it. Otherwise, it would become the child DLT of more than one DLT, which breaks the structure of the directed tree we aspire to.
Therefore, in order for the result to be a DLT, we make the local-minimum tree a sub-DLT of the chosen DLT instead. That is, given the edge connecting the two trees, we turn the local-minimum tree's endpoint of this edge into the root of its tree and make it the child of the other endpoint. Doing so poses a problem, since the child's label is smaller than its parent's, which violates the requirements for a DLT in the resulting component. We solve this by waking up the entire graph for a single round and having the two endpoints exchange information.
After this round, the information about the local-minimum-label trees that asked to join is located in vertices of the component. This information is then delivered to the root of the component by a convergecast procedure. The root performs label reassignment in the same way as in Section 2.1.1. Specifically, a single BFS is computed over all vertices of the component and the local-minimum-label trees that connected to it. Then these labels are reassigned according to the levels of the BFS tree. This results in labels of the form where the first coordinate is the same for all vertices in the connected component and the second coordinate is the level of the vertex in the BFS tree. Note that the structure of the labels and the label arrangement is the same as described in Section 2.1.1. Thus, the proof of Lemma 2.2 applies here for the new connected component and its labeling. Therefore, the component is a DLT. This completes the description of the connection stages. Their steps are summarized in Algorithm 1. In what follows, we analyze the algorithm.
2.2 Analysis
We turn to analyze our algorithm for spanning a DLT on the input graph. We prove several lemmas and conclude with the main result for our scheme.
Lemma 2.3.
Every connected component at the end of a connection phase is a DLT.
Proof.
Since our connection phase is divided into two stages, we also divide our proof into two parts. First, consider a connected component at the end of stage one of the connection phase, composed of a set of DLTs. We define an auxiliary graph in which each vertex corresponds to one of these DLTs, and two vertices are connected by an edge if there is an edge between the respective DLTs in the component. Each vertex of the auxiliary graph is assigned an ID equal to the label of the root of its corresponding DLT. Consider two DLTs such that one oriented an edge towards the other; then the former is a child of the latter in the auxiliary graph, and its ID is greater. This gives us an acyclic orientation in which each vertex has a single outgoing edge (towards its parent). Thus, the auxiliary graph is an oriented tree. Moreover, it is a DLT, since each child has an ID greater than that of its parent. We then consider, for each child DLT, the edge that originally connects it to its parent DLT in the component. We made sure that, internally, each DLT reassigns its labels so as to root itself at the endpoint of this edge, while preserving the DLT property in the new assignment. Therefore, the component is a DLT in which each vertex is by itself a DLT. Thus, the component is a DLT.
Next, consider the set of DLTs that connect to the component in stage two of the connection phase. We again view each DLT as a vertex. First, since each such vertex has an ID which is a local minimum, no two such vertices are neighbors, and therefore no two of them can choose each other. Furthermore, we made sure that these vertices are not connected to any other vertex in stage one of the connection phase; thus they form an independent set. The component is a DLT, and each of these local-minimum DLTs connects to it through a single edge that is oriented towards the component. We again have an acyclic orientation in which each vertex has a single parent. Thus, the result is a tree. Viewing the DLTs as vertices again, our algorithm assigns IDs that guarantee that the induced graph on this vertex set is a DLT. Then, internally, each joining tree is re-rooted and the labels of its vertices are reassigned to preserve the DLT property. Thus, the resulting component is a DLT of DLTs, and hence a DLT itself.
∎
Next, we analyze the awake complexity of the algorithm. Our claim is that a connection phase halves the number of DLTs in the forest. This is quite straightforward. If a DLT is not connected in stage one of the connection phase, stage two considers it as problematic and makes sure it connects to another DLT. Thus, every DLT connects to another DLT, and hence the number of DLTs is at least halved.
The next lemma analyzes the performance of a connection phase.
Lemma 2.4.
Each vertex is awake for at most O(1) rounds in each connection phase.
Proof.
We show this step by step over the algorithm.

Each vertex sends the edge towards the neighbor with the minimal label up to the root of its DLT. This is a convergecast procedure, which we showed takes 2 awake rounds for each vertex.

The root chooses an edge and broadcasts it to all vertices in the DLT. A broadcast also takes 2 awake rounds for each vertex.

Each DLT, internally, reassigns the labels of its vertices. This can be done with one convergecast procedure and one broadcast procedure. This step takes up to four awake rounds.

A local-minimum DLT chooses a component to connect to and then receives the label of that component. We wake up the entire graph for exactly one round. Each vertex adds a single awake round to its awake count.

A local-minimum DLT, internally, reassigns the labels of its vertices. Again, this can be done using one convergecast procedure and one broadcast procedure. Each vertex is awake for at most 4 rounds.

All vertices in the graph wake up for one round to learn the new labels of their 1-hop neighborhoods. Another single awake round is added to each vertex's awake count.
Overall, each vertex is awake for O(1) rounds in each connection phase. ∎
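The convergecast steps above can be sketched as follows; this is a minimal centralized simulation under our assumptions that each vertex knows its level and the tree height, that a vertex receives from its children in one round and sends to its parent in the next, and that the aggregated value is a minimum (all names are ours):

```python
def convergecast_schedule(level, height):
    """Awake rounds of a vertex at the given level: receive from the
    children in round height - level - 1, send to the parent in round
    height - level (leaves only send)."""
    return {r for r in (height - level - 1, height - level) if r >= 0}

def simulate_convergecast(parent, level, value):
    """parent: child -> parent map (root maps to None).  Each vertex
    forwards the minimum value seen so far in its single sending round."""
    height = max(level.values())
    best = dict(value)  # running minimum known to each vertex
    for r in range(height + 1):
        for v in parent:
            if r in convergecast_schedule(level[v], height) and parent[v] is not None:
                if r == height - level[v]:  # v's sending round
                    best[parent[v]] = min(best[parent[v]], best[v])
    return best
```

Each vertex appears in at most two rounds of the schedule, matching the 2-awake-round cost charged per convergecast in the proof.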
We can now conclude the analysis of our DLT spanning algorithm. Since the number of DLTs is at least halved in each phase, there are O(log n) such phases, and the following theorem is directly obtained from Lemma 2.4.
Theorem 2.5.
A DLT for any input graph can be deterministically computed in O(log n) awake rounds in the sleeping model.
As shown in Lemma 2.1, the actions of convergecast and broadcast on the resulting spanning tree each require only O(1) awake time in the sleeping model. Thus, we can collect the topology of the entire input graph at the root of our spanning tree, calculate the solution to the problem deterministically, and broadcast the solution to all vertices through our spanning tree, again in O(1) awake time. This places an upper bound on the class of all decidable problems, as we conclude in the following theorem.
Theorem 2.6.
Any decidable problem can be solved within O(log n) worst-case deterministic awake complexity in the sleeping model.
3 A Tight Bound for DLT
In this section we prove that the complexity of DLT in the sleeping model is Θ(log n). The proof is by a reduction from leader election on rings. For the latter problem, it is known that a certain number of messages must be sent in the network in order to solve it [16]. In what follows, we prove that this lower bound on messages implies a lower bound on awake rounds in the sleeping model. Before presenting the proof, we need the following lemma, which demonstrates a connection between the number of messages an algorithm produces and its complexity in the sleeping model.
Lemma 3.1.
Any algorithm which requires at least messages for its execution has an awake complexity of at least in the sleeping model.
Proof.
Let the number of messages that must be sent during the execution of be where is a constant. We show that there is at least one vertex that must be awake for at least rounds. Assume for contradiction that all vertices in are awake for less than rounds. Each vertex sends at most messages (one across each adjacent edge) in a single awake round. If more than one message per edge per round is required, all these messages can be concatenated into a single message. Thus, each vertex sends less than messages, and the overall number of messages in the execution is less than . This is a contradiction. Therefore, there must be a vertex that is awake for rounds. We conclude that the awake round complexity of in the sleeping model is also . Given that there are at least messages and vertices and at most edges, on average, each vertex is awake for at least rounds. Thus, the running time of (in the worst case and average case) is at least . ∎
Remark: An algorithm that requires this many messages has the corresponding awake complexity not only at the worst vertex, but also on average over the vertices. (Such an average complexity is referred to as vertex-averaged complexity [8].) Indeed, if the vertex-averaged awake complexity were lower, then the sum of awake rounds over all vertices, and hence the number of messages, would be too small, according to the proof of Lemma 3.1.
Next, we employ Lemma 3.1 in order to prove that DLT requires Ω(log n) awake complexity in the sleeping model. We show this for a ring graph by a reduction from the leader election problem.
Theorem 3.2.
Let n be an arbitrarily large integer, and consider any deterministic algorithm for the DLT problem. Then there is an ID assignment from a sufficiently large range, as a function of the algorithm's running time, such that the algorithm requires Ω(log n) awake complexity in the sleeping model.
Proof.
The proof is by contradiction. Assume that there is an algorithm with a smaller awake-round complexity, for ID assignments from an arbitrarily large range. Then, by Lemma 3.1, the algorithm uses fewer messages than the bound stated below. Let the input be an n-vertex cycle graph; its maximum degree is 2. We execute the algorithm on this cycle in the ordinary (non-sleeping) model and obtain a DLT of the cycle. Now, the root can be elected as the leader, and the other vertices know that they are not the root. In a DLT they also know the ID of the root. Thus, we have an algorithm for leader election which employs fewer messages than the known lower bound.
According to [16], the leader election problem requires messages, if vertex IDs are chosen from a set of sufficiently large size , where is the Ramsey function and is the running time of the algorithm. This is a contradiction.
∎
It follows that any problem whose solution can be used to elect a leader within O(1) additional awake rounds requires Ω(log n) awake complexity. We denote the class of such problems DLT-hard problems. Theorems 2.5 and 3.2 directly give rise to the following theorem.
Theorem 3.3.
The class of DLT-hard problems has a deterministic tight complexity bound of Θ(log n) in the sleeping model.
4 Solving Oriented-Local Problems
In this section we devise an algorithm for solving the class of Oriented-Local problems. This class contains all problems which, given an acyclic orientation of the edge set of the graph, can be solved as follows. Each vertex awaits the outputs of all neighbors on outgoing edges, and then computes its own output as a function of these outputs. (Vertices with no outgoing edges produce an output immediately.) We define this class formally.
Definition 4.1.
The class of 1-hop Oriented Local Problems (1-OLOCAL) consists of all problems that, given an acyclic orientation of the edge set of the graph, can be solved in the following way. Consider a vertex, and the set of neighbors in its 1-hop neighborhood which precede it in the orientation, i.e., the vertices connected by outgoing edges from it. Given the solutions of the problem at these neighbors, the vertex can internally calculate its own solution.
The class of Oriented Local Problems (OLOCAL) is a generalization of 1-OLOCAL, where the relevant set contains all vertices on directed paths that emanate from the vertex, rather than only its immediate neighbors on such paths.
As one can tell, a solution for a problem in this class depends on a given orientation. Such an orientation can be calculated or given as an input to the algorithm. In this work we assume that no orientation is given, and we must calculate one as part of the solution. We note that MIS and vertex-coloring are examples of well-studied problems which fall into the class of 1-OLOCAL problems.
We start with the vertex-coloring algorithm of Linial, which runs in O(log* n) time [24]. This gives us an orientation of the edges in descending order, i.e., each edge is oriented towards the endpoint with the smaller color. We have all vertices of the graph awake during the entire coloring algorithm. Let the number of colors of Linial's algorithm be upper-bounded by a power of 2. At the next stage, each vertex internally builds a binary search tree with one leaf per color. The root of the tree receives the label in the middle of the label range, leaving an equal number of values on each side of the tree. We choose the middle of the lower half of the range for the left child of the root, and the middle of the upper half for the right child. We continue this recursively, so that each node of the tree obtains a unique value from the range.
Now we recolor the vertices of the input graph using the following mapping. The recoloring is performed by all vertices in parallel, with no communication whatsoever. We map the original colors to the set of values appearing in the leaves of the binary tree; the mapping is the same in all vertices. Specifically, the i-th color is mapped to the label of the i-th leftmost leaf of the tree. Consequently, all vertices that were initially colored by the i-th color switch their color to the label of the corresponding leaf of the tree. Note that each pair of neighbors select distinct leaves of the tree, since their original colors are distinct. Therefore, the coloring after the mapping is proper as well.
Next, all vertices of the graph switch to the sleeping state, and we start solving a given 1-OLOCAL problem. For the sake of simplicity, we proceed with the problem of MIS, but our method can be applied to any 1-OLOCAL problem, as will be obvious from the description of the algorithm. The scheme is as follows. Each vertex employs its color in the coloring, and the respective leaf in the binary tree whose value equals this color. Consider the path from this leaf to the root of the binary tree, and the set of values appearing on it. Note that some of these values may be greater than the vertex's color, while others may be smaller. The vertex awakes at each round whose number appears in this set, and sends a message to its awake neighbors about its state, e.g., whether it is in the MIS, not in the MIS, or undecided. It also receives such messages from its awake neighbors in these rounds. Recall that one of these rounds is the round whose number equals the color of the vertex. In that round the vertex decides whether to join the MIS, according to the information received over outgoing edges. The neighbors on such edges have smaller colors, and thus have made their decisions before that round. Indeed, the following lemma proves that the vertex has all the information from vertices of lower colors when its decision round arrives.
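The color-to-leaf mapping and the resulting wake-up schedule can be illustrated with an implicit binary search tree; `K` (our name for the power-of-two bound on the number of colors) and the function names are our notation, not the paper's:

```python
def leaf_value(color, K):
    """Map color i (1..K) to the i-th leftmost leaf of the complete BST
    whose 2K-1 nodes carry the in-order labels 1..2K-1; the leaves are
    exactly the odd values."""
    return 2 * color - 1

def path_to_root(v, K):
    """Keys on the root-to-leaf search path towards leaf value v.  A vertex
    with this leaf awakes exactly in the rounds whose numbers appear here."""
    lo, hi = 1, 2 * K - 1
    path = []
    while True:
        mid = (lo + hi) // 2  # key held by the current node
        path.append(mid)
        if mid == v:
            return path
        lo, hi = (lo, mid - 1) if v < mid else (mid + 1, hi)
```

Two neighbors with distinct colors are both awake in the round of their leaves' lowest common ancestor, whose key lies between the two leaf values; by that round the smaller-colored vertex has already passed its own decision round.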
Lemma 4.1.
At round , which is mapped to the color of , all vertices with colors smaller than that of have already made a decision. Furthermore, their decisions have been passed to in a previous round.
Proof.
We prove the lemma by induction on the colors of the orientation.
Base: The first color of the orientation is mapped to the leftmost leaf of the tree, and hence to the first awakening round of the algorithm. Vertices with the first color of the orientation have no outgoing edges, and need not wait for decisions of any of their neighbors. As they wake at the first round, they decide to be in the MIS and go back to sleep.
Step: Consider a vertex that awakes in its decision round, and assume by induction that all of its neighbors with lower colors have already decided whether they are in the MIS.
Consider the neighbors of the vertex with colors smaller than its own, and the rounds mapped to the colors of these neighbors.
These rounds all precede the vertex's decision round. Thus, in the binary tree, the vertex's leaf and the leaf of each such neighbor have a lowest common ancestor, whose value lies strictly between the two leaf values. (See Figure 1.) This is because a lowest common ancestor of two leaves must have these leaves in distinct subtrees rooted at its children. Otherwise, if both leaves belong to the same subtree of a child of a common ancestor, it is not the lowest one.
Consider such a neighbor with a smaller color. Its leaf must be in the subtree rooted at the left child of the common ancestor, while the vertex's leaf is in the subtree rooted at the right child of that ancestor. Both vertices awaken in the round corresponding to the ancestor's value, according to our algorithm. By that round, since the neighbor's decision round precedes it, the neighbor has already decided whether it is in the MIS, by the induction hypothesis. Thus, the neighbor sends a message with its decision in that round. The vertex, whose own decision round comes later, simply receives the message and awaits its decision round. (During this waiting period, the vertex may communicate with additional neighbors.) When its decision round finally arrives, all neighbors with lower colors have made decisions and sent them in the rounds corresponding to their common ancestors with the vertex in the binary tree. Thus, the vertex has learned the decisions of all neighbors with smaller colors than its own. Finally, it makes its decision in its decision round according to all these decisions. This concludes the proof of the lemma. ∎
For a problem in 1-OLOCAL, a vertex can make its decision in its decision round. For example, the following decisions are made in some well-studied 1-OLOCAL problems. For MIS, a vertex joins the MIS if all neighbors with lower colors are not in the MIS. For vertex-coloring, a vertex chooses a new color from its palette that has not yet been chosen as a new color by its neighbors with lower old colors (i.e., colors according to the initial orientation).
The depth of a binary tree is logarithmic in the number of its leaves. Thus, the length of a path from a leaf to the root is logarithmic in the number of colors. A vertex only awakens in rounds corresponding to keys appearing along its leaf-to-root path, and thus awakens in a logarithmically bounded number of rounds. This provides us with the complexity of our algorithm in the following theorem.
Theorem 4.2.
Any 1-OLOCAL problem can be solved in deterministic awake complexity in the sleeping model.
Note that each vertex is able to accumulate all information received from outgoing neighbors and pass it later to incoming neighbors, when these neighbors ask for its output. Consequently, each vertex learns the information from all vertices reachable from it in the orientation. Thus, each vertex is able to produce an output not only as a function of its outgoing neighbors' outputs, but as a function of the outputs of all vertices reachable from it. In other words, any OLOCAL problem can be solved this way. Hence, we obtain the following corollary.
Corollary 4.3.
Any OLOCAL problem can be solved in deterministic awake complexity in the sleeping model.
5 Lower Number of Clock Rounds
In the standard model, any decidable problem can be solved in O(n) clock rounds, by spanning a tree from the vertex with the smallest ID and using broadcast and convergecast actions. This is a known fact. Also, some decidable problems are known to have a lower bound of Ω(n) rounds, for example, 2-coloring of a path [24]. One may see that the method we devised for the DLT is beneficial in the sleeping model, but notice that each connection phase requires activation of vertices over many clock rounds. This is because each vertex waits for the round which equals its ID, which is a vector of size 2, each of whose coordinates comes from a large range. In this section we show that a connection phase can be performed more efficiently with respect to the number of clock rounds. This shows that our method does not require the entire graph to wait for a large number of clock rounds to complete, when one observes all the vertices as a whole, as one would do in the standard model. This comes at a small additional price in the awake times of vertices. More specifically, we show that each connection phase takes significantly fewer clock rounds than in our original method, while each vertex is awake for slightly more rounds during each phase.
The main idea of this algorithm is to form trees of bounded height in each connection phase. This is in contrast to our original algorithm, where a tree obtained from merging several DLTs may be arbitrarily high, which increases the waiting time of each vertex until it awakes. In the current version, however, we partition the set of trees that would like to join a common component into smaller components of bounded height. This limits the waiting time of a vertex until it needs to awake. Nevertheless, each such component still contains at least two DLTs of the previous phase. Thus, the number of DLTs at least halves in each phase, and the algorithm completes within O(log n) connection phases. Moreover, when each tree of the previous phase is considered as a vertex in a component of the current phase, the height of the component is guaranteed to be bounded by a constant. Consequently, the merging of trees in the component requires far fewer clock rounds than in our original algorithm. In order to partition a component of large height into components of bounded height, we employ a 3-coloring algorithm on the component tree, and then perform partitioning and merging using these colors.
We start by assuming that at the beginning of each connection phase, in each connected component, the ID vector of each vertex is composed of the ID of the root of the component (recall that each component is a DLT at the start of a connection phase) and the distance of the vertex from that root. We will end our connection phase with each newly formed DLT having this property. Then, indeed, broadcast and convergecast actions in such a DLT take few clock rounds: for a broadcast, each vertex awakes in the round equal to its distance from the root and in the following round, and for a convergecast, each vertex awakes in the two analogous rounds counted from the height of the tree downward. The above property trivially holds at the beginning of our spanning algorithm, as each vertex is its own DLT and simply constructs its ID from its own identifier with distance zero. We now describe a single connection phase, again starting with the above property assumed and ending with the above property achieved.
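The broadcast schedule induced by the (root, distance) labels can be sketched as follows; this is a minimal simulation under our assumption that a vertex at distance d receives from its parent in round d and forwards to its children in round d + 1 (round numbering and names are ours):

```python
def broadcast_schedule(depth):
    """Awake rounds of a vertex at the given distance from the root:
    receive in round `depth`, forward in round `depth + 1`."""
    return (depth, depth + 1)

def simulate_broadcast(parent, depth):
    """parent: child -> parent map (root maps to None); depth: vertex ->
    distance from the root.  Returns which vertices received the message
    and how many rounds each vertex was awake."""
    received = {v: (depth[v] == 0) for v in parent}  # root holds the message
    awake_count = {v: 0 for v in parent}
    rounds = 1 + max(depth.values())
    for r in range(rounds + 1):
        for v in parent:
            if r in broadcast_schedule(depth[v]):
                awake_count[v] += 1
                p = parent[v]
                # receiving round: the parent is in its forwarding round now
                if p is not None and r == depth[v] and received[p]:
                    received[v] = True
    return received, awake_count
```

The total number of clock rounds is proportional to the height of the tree, while every vertex is awake for only two rounds, matching the claim that broadcast and convergecast are cheap under this labeling.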
We focus on stage one of the connection phase, as described in Subsection 2.1.1. Stage two remains unchanged. We start with each existing DLT choosing a parent DLT as described in that subsection. The result is a new connected component which is a tree of connected trees. That is, the component can be viewed as a directed tree over a set of vertices, each of which is a DLT. Let us consider the auxiliary graph in which each directed tree of the component is regarded as a vertex. That is, there is a vertex in the auxiliary graph for each corresponding DLT in the component, and the ID of such a vertex equals the label of its corresponding DLT. There is an edge between two vertices if the corresponding DLTs are connected in the component, and this edge is directed in the same direction as the edge connecting the corresponding DLTs. Thus, the auxiliary graph is a directed tree, just as the component is a directed tree of DLTs. Each ID in the auxiliary graph comes from the same range as the DLT labels. To simplify the discussion ahead, for each DLT composing the component we refer to its corresponding vertex in the auxiliary graph, and vice versa.
We 3-color the directed tree using the algorithm of Goldberg, Plotkin and Shannon [20]. After each step of this coloring algorithm, we pause so that each DLT can convergecast and broadcast internally. This way, all vertices of a DLT have the knowledge that its auxiliary vertex should have for the continuation of the coloring algorithm. Since our assumption allows each DLT to perform the broadcast and convergecast actions in few clock rounds, the overall number of clock rounds required to finish the coloring is small. Each vertex has to be awake for at most a constant number of rounds for spreading information between consecutive steps of the coloring algorithm, so each vertex is awake for at most a number of rounds proportional to the number of coloring steps during the coloring.
Now our goal is to break the tree into a forest of trees, each of depth at most 3. The steps of this procedure are described below. This procedure allows us to obtain the desired structure within a sufficiently small number of clock rounds. To keep track of the number of clock rounds that pass and the number of awake rounds of each vertex, we analyze these complexity measures in each step of the following four-step procedure.

Each vertex of color 1 chooses its parent in the tree as a parent and sends a message to that parent that it was chosen. For this communication we awake all vertices for a single awakening round, so that both endpoints of the edge connecting the two DLTs can communicate. Both vertices mark themselves as "connected". (In the original graph, this means convergecast and broadcast actions inside the two corresponding DLTs.) This takes few clock rounds and O(1) awake rounds for each vertex.

Each vertex of color 2 which is not yet marked "connected", and whose parent is not marked "connected", chooses its parent in the tree as a parent. Again, it notifies that parent that it was chosen, and both are marked "connected". This takes few clock rounds and O(1) awake rounds for each vertex.

Repeat the same as in step 2 for color 3.

Every vertex which is still not marked "connected" connects to its parent in the tree.
We note that the above process takes few clock rounds to finish and adds O(1) awake rounds to each vertex. Together with the coloring, the total numbers of clock rounds and awake rounds so far remain small. Consider the forest created by this process. We prove the following lemma.
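The four steps above can be sketched centrally as follows; the data layout (a `parent` map for the tree of DLTs and a proper 3-coloring `color`) and the function name are our own:

```python
def bounded_depth_merge(parent, color):
    """Steps 1-4 of the partition.  parent: vertex -> parent in the tree of
    DLTs (root maps to None); color: a proper 3-coloring of that tree.
    Returns each vertex's chosen parent in the resulting forest."""
    chosen = {v: None for v in parent}
    connected = set()
    for c in (1, 2, 3):
        for v in parent:
            if color[v] != c or parent[v] is None:
                continue
            # step 1 is unconditional; steps 2-3 require both endpoints
            # to be still unconnected
            if c == 1 or (v not in connected and parent[v] not in connected):
                chosen[v] = parent[v]
                connected.update((v, parent[v]))
    # step 4: still-unconnected vertices join their parent's component
    for v in parent:
        if v not in connected and parent[v] is not None:
            chosen[v] = parent[v]
    return chosen
```

Since vertices act color class by color class, the forest this produces has bounded depth, which is exactly what Lemma 5.1 below asserts.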
Lemma 5.1.
Each is of depth at most 3.
Proof.
For any tree of the forest to be of depth greater than 3, there must be a vertex in it with a great-grandchild. Let u, v, w, x be vertices, where v is the child of u, w is the child of v, and x is the child of w (thus, x is a great-grandchild of u). Let us assume that v chose u before step 4. We show that x cannot even be a grandchild of v, let alone a great-grandchild of u. Since v chose before step 4, w itself could not have been marked "connected" at that point. Also, since the colors of v and w are different in the 3-coloring, w did not choose v as a parent simultaneously when v chose u as a parent. Thus, when the turn of w to act arrived, v was already marked "connected", and w cannot have chosen it as a parent. Nor would it have done so in step 4, since only unconnected vertices act in that step. In particular, x cannot be a great-grandchild of u.
According to what we have just shown, until step 4, all trees are of depth at most 2. The set of vertices that remain unconnected before step 4 is independent, and each of them can only connect to a tree constructed in steps 1-3. Thus, the depth of each such tree can grow by at most 1 in step 4, reaching a depth of at most 3, as required.
∎
Note that since step 4 guarantees that unconnected vertices become connected, each tree in the forest contains at least two of the original DLTs, and we at least halve the number of connected components from the start of the connection phase.
Next, we return to considering the component, which is now partitioned into subtrees of DLTs. A DLT remains connected to its parent only if there is a tree of the forest in which it is a child of that parent. We have thus partitioned the component into smaller components. Consider such a tree. Each DLT in it can be in one of three states: either it is the root of the tree; or it is a leaf; or it lies exactly between the root and a leaf.
We can now finally change the IDs in each tree to achieve the property with which we began the connection phase. W.l.o.g., we assume that the tree is of depth 3. The process is as follows:

The root DLT remains unchanged. We wake up its vertices and all vertices in the DLTs marked as middle for one round. The root DLT sends its ID and its height to the root of each middle DLT. Recall that all vertices of the root DLT have this knowledge internally. All vertices sleep again.

In each middle DLT, the root internally calculates new IDs of the form described above, where the second coordinate is the distance of the vertex from the new root; this can be calculated because the root knows its own distance from the new root. We perform a broadcast inside the DLT within few clock rounds and O(1) awake rounds. All vertices sleep again. Note that now all vertices of the middle DLTs know the ID of the new root and their distance from it.

We repeat the above two steps, where each middle DLT transmits to all its children (those marked as leaves), and the roots of these leaf DLTs reassign labels internally in their respective trees (similarly to what is described in step 2).
Overall, the above process takes few additional clock rounds and O(1) additional awake rounds for each vertex. When the process terminates, each label of each vertex in the newly formed DLT is of the form described above, where the second coordinate is the vertex's distance from the new root, which is exactly the property we wanted to preserve and which allows us to move on to the next connection phase.
The number of connection phases needed remains O(log n), since we guarantee that each DLT is connected to at least one other DLT in each phase. Each connection phase performs the coloring process, in which each vertex is awake for a bounded number of stages, each of which can take several clock rounds due to the dissemination of knowledge inside each connected component that functions as a vertex of the auxiliary tree. Thus, the total number of clock rounds for finding a DLT of the input graph improves significantly over the original algorithm, while the awake complexity in the sleeping model grows only slightly, since each dissemination requires only O(1) awake rounds per vertex, as shown in Lemma 2.1. Thus, we improved the number of clock rounds significantly while paying a small price in awake complexity. The final result is a DLT in which each vertex has knowledge of its distance from the root. Thus, we can perform the broadcast and convergecast procedures using a number of clock rounds proportional to the height of the tree, which is optimal.
Theorem 5.2.
There is a deterministic algorithm for the DLT problem with worst-case awake time that terminates within clock rounds.
6 Sleeping in the CONGEST Model
In this section we show that our construction of a DLT can also be achieved in the CONGEST model. This is of great importance for some well-studied problems in which the exchanged information is of restricted size. We define a class of problems CCONGEST as follows.
Definition 6.1.
The class of problems CCONGEST, or the Congested Combinations Class, is the class of problems which have solutions with the following properties:

The solution can be expressed using up to O(log n) bits.

Given two solutions on two subgraphs G1 and G2, one can compute the solution on the graph G1 ∪ G2 using some sequential algorithm, without incurring further communication.
Note that the fact that the solutions can be expressed as in the definition does not mean the problem is easy to compute in the distributed setting. It may well be that the problem has a high lower bound, and thus admits a solution less efficient than constructing a DLT. Examples of problems in CCONGEST are leader election, computing the exact number of edges, and computing the average degree.
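To make the combination property of Definition 6.1 concrete, here is a minimal sketch for one of the listed examples, counting the exact number of edges. The function name and the sample values are ours, not from the paper; the point is only that each solution fits in O(log n) bits (a single integer) and that two solutions for edge-disjoint subgraphs combine sequentially, with no further communication.

```python
# Sketch (assumption: the two subgraphs are edge-disjoint, so edge
# counts simply add). Each solution is one integer, i.e., O(log n) bits.

def combine_edge_counts(sol_g1: int, sol_g2: int) -> int:
    """Combine two edge-count solutions on edge-disjoint subgraphs
    G1 and G2 into the solution on G1 ∪ G2 -- a purely local step."""
    return sol_g1 + sol_g2

# Hypothetical subgraphs with 5 and 7 edges respectively:
print(combine_edge_counts(5, 7))  # 12 edges in the combined graph
```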
We thus need to show that such a DLT can be computed in the CONGEST model. We do so by going over the steps of our DLT algorithm, as we did in Lemma 2.4, and showing that each step can be performed in a congested network. As we will see, the steps that need the most attention are those in which a DLT reassigns its labels internally in order to reorient the edges (steps 3 and 5 in Lemma 2.4). To this end, we devise a sub-procedure that allows this reassignment to occur in the sleeping model while using messages of size at most O(log n) bits. Recall that the assignment of labels is done by defining a new root and reorienting the edges of the DLT accordingly. For a DLT , the labels are of the form , where is the distance of the vertex from the new root of .
Reassigning Labels in the DLT in a Congested Network. In the version suggested in Section 2, we aggregate all the knowledge at the root of the DLT, locally compute an assignment, and broadcast it to the vertices of the DLT. This requires messages of large size. Instead, we aggregate only the distance from the new root. This is done as follows.
Let be the root of the DLT and let be the new root from which we wish to reassign the labels. Let be the vertices in on the path from to (going up the tree). We start by performing a broadcast in which only these vertices take part. When it is the turn of to be active, it sends a message to , its parent, announcing that it is the new root. Thus, when it is the turn of to send a message to its parent, it sends a message to announcing that it is at distance 1 from the new root. We continue in this manner, where each sends its own distance from to . Note that the distances are bounded by n; thus we use only O(log n) bits per message. See Fig. 3 for an example. This discussion is summarized in the next lemma.
Lemma 6.1.
The reassignment of labels in a connection phase in a DLT can be done using messages of size O(log n) bits.
Now we move on to propagating the new label assignment in . Each vertex, in its turn, simply sends its distance from to all of its children. Note that at the start of this stage, each vertex in the set knows its distance from and uses that distance in the message. Since, again, the distances we send are bounded by n, we use only O(log n) bits per message. See Figure 4. We conclude this section with the following theorem.
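The two phases above — distances flowing up the path to the old root, then flowing down to every child — can be sketched as a small sequential simulation. This is our own illustration, not the paper's code; the tree representation (a child-to-parent map plus a children map) and the function name are assumptions. Each simulated message carries only a distance bounded by n, i.e., O(log n) bits.

```python
# Sketch (assumptions: `parent` maps each non-root vertex to its parent
# under the old orientation; `children` is the reverse adjacency;
# `reassign_labels` is our name for the two-phase procedure).

def reassign_labels(parent, children, new_root):
    dist = {new_root: 0}
    # Upward phase: walk the path from the new root to the old root;
    # each path vertex forwards its distance (one small message per edge).
    v = new_root
    while v in parent:
        dist[parent[v]] = dist[v] + 1
        v = parent[v]
    # Downward phase: every vertex sends its distance to its children;
    # a child not yet labeled lies one step further from the new root.
    stack = [v]  # v is now the old root
    while stack:
        u = stack.pop()
        for c in children.get(u, []):
            if c not in dist:
                dist[c] = dist[u] + 1
            stack.append(c)
    return dist

# Toy tree: old root 0 with children 1, 2; vertex 1 has child 3.
parent = {1: 0, 2: 0, 3: 1}
children = {0: [1, 2], 1: [3]}
print(reassign_labels(parent, children, new_root=3))
# {3: 0, 1: 1, 0: 2, 2: 3}
```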
Theorem 6.2.
The construction of a DLT can be done in a congested network in deterministic worst-case awake time in the sleeping model.
Proof.
We show that each step of the algorithm can be executed using messages of O(log n) bits.

Each vertex sends the edge leading to its neighbor with the minimal label towards the root of its DLT. Each vertex passes to its parent the edge with the minimal label seen so far in its subtree, including the edge it itself chose. Since each vertex aggregates exactly one edge, we communicate only O(log n) bits across each edge of the DLT.

The root chooses an edge and broadcasts it to all vertices in the DLT. Again, each vertex propagates a single edge to its children in the DLT, so we communicate only O(log n) bits across each edge of the DLT.

Each DLT internally reassigns the labels of its vertices. By Lemma 6.1, this requires messages of size at most O(log n) bits.

A local minima DLT chooses a component to connect to and then receives the label of that component. As in step 1, this can be done using only O(log n) bits across each edge.

A local minima DLT internally reassigns the labels of its vertices. By Lemma 6.1, this requires messages of size at most O(log n) bits.

All vertices wake up for one round to learn the new labels of their 1-hop neighborhood. Two labels are sent across each edge, which can be done using only O(log n) bits.
∎
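The convergecast of step 1 — each vertex forwarding a single minimum-label edge to its parent, so that only one edge's worth of bits crosses each tree edge — can be sketched sequentially as follows. The names and the tuple representation (label, edge) are our assumptions, not the paper's notation.

```python
# Sketch (assumptions: `children` is the tree's downward adjacency;
# `candidate[v]` is the (label, edge) pair vertex v chose itself).
# Tuples compare lexicographically, so min() picks the smallest label.

def convergecast_min_edge(children, candidate, root):
    """Bottom-up aggregation: each vertex forwards exactly one edge,
    the minimum-label edge seen in its subtree, to its parent."""
    best = dict(candidate)
    def visit(v):
        for c in children.get(v, []):
            visit(c)
            best[v] = min(best[v], best[c])  # one edge per message
        return best[v]
    return visit(root)

# Toy DLT: root 0 with children 1 and 2.
children = {0: [1, 2]}
candidate = {0: (5, "e0"), 1: (2, "e1"), 2: (9, "e2")}
print(convergecast_min_edge(children, candidate, 0))  # (2, 'e1')
```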
7 Conclusion
In this work we investigated the strength of Distributed Layered Trees in the sleeping model. We showed that the computation of such trees is complete, and thus any decidable problem can be solved within logarithmic awake complexity. This raises the question of finding nontrivial subclasses of decidable problems that can be solved more efficiently than by using a DLT. We addressed this question by defining the OLOCAL class of problems and showing that it can indeed be solved more efficiently in the sleeping model. Since the CONGEST model is of great interest in the field of distributed networks, we investigated it as well, and obtained a class of problems that can be solved within logarithmic awake complexity using only messages of logarithmic size.
Another important aspect is the number of ordinary clock rounds of an algorithm with good awake complexity. While the simpler version of our algorithm has a quadratic number of clock rounds, the more sophisticated variant gets much closer to the optimal number of rounds.
Overall, we showed the strength of the sleeping model and the possibility of a significant energy conservation for distributed networks.
Acknowledgements The authors are grateful to the anonymous reviewers for helpful comments.
References
 1. A. Balliu, F. Kuhn, and D. Olivetti. Distributed edge coloring in time quasipolylogarithmic in delta. In PODC '20: ACM Symposium on Principles of Distributed Computing, Virtual Event, Italy, August 3–7, 2020, pages 289–298. ACM, 2020.
 2. L. Barenboim, S. Dolev, and R. Ostrovsky. Deterministic and energy-optimal wireless synchronization. ACM Trans. Sens. Networks, 11(1):13:1–13:25, 2014.
 3. L. Barenboim and M. Elkin. Deterministic distributed vertex coloring in polylogarithmic time. J. ACM, 58(5):23:1–23:25, 2011.
 4. L. Barenboim, M. Elkin, and T. Maimon. Deterministic distributed (delta + o(delta))-edge-coloring, and vertex-coloring of graphs with bounded diversity. In Proceedings of the ACM Symposium on Principles of Distributed Computing, PODC 2017, Washington, DC, USA, July 25–27, 2017, pages 175–184. ACM, 2017.
 5. L. Barenboim and Y. Tzur. Distributed symmetry-breaking with improved vertex-averaged complexity. In Proceedings of the 20th International Conference on Distributed Computing and Networking, ICDCN 2019, Bangalore, India, January 4–7, 2019, pages 31–40. ACM, 2019.
 6. M. Bradonjic, E. Kohler, and R. Ostrovsky. Near-optimal radio use for wireless network synchronization. Theor. Comput. Sci., 453:14–28, 2012.
 7. Y. Chang, V. Dani, T. P. Hayes, Q. He, W. Li, and S. Pettie. The energy complexity of broadcast. In C. Newport and I. Keidar, editors, Proceedings of the 2018 ACM Symposium on Principles of Distributed Computing, PODC 2018, Egham, United Kingdom, July 23–27, 2018, pages 95–104. ACM, 2018.
 8. S. Chatterjee, R. Gmyr, and G. Pandurangan. Sleeping is efficient: MIS in O(1)-rounds node-averaged awake complexity. In PODC '20: ACM Symposium on Principles of Distributed Computing, Virtual Event, Italy, August 3–7, 2020, pages 99–108. ACM, 2020.
 9. J. Deng, Y. S. Han, W. R. Heinzelman, and P. K. Varshney. Scheduling sleeping nodes in high density cluster-based sensor networks. Mob. Networks Appl., 10(6):825–835, 2005.
 10. C. Dwork, J. Y. Halpern, and O. Waarts. Performing work efficiently in the presence of faults. SIAM J. Comput., 27(5):1457–1491, 1998.
 11. L. M. Feeney and M. Nilsson. Investigating the energy consumption of a wireless network interface in an ad hoc networking environment. In Proceedings IEEE INFOCOM 2001, The Conference on Computer Communications, Twentieth Annual Joint Conference of the IEEE Computer and Communications Societies, pages 1548–1557. IEEE Computer Society, 2001.
 12. L. Feuilloley. How long it takes for an ordinary node with an ordinary ID to output? In Structural Information and Communication Complexity – 24th International Colloquium, SIROCCO 2017, Porquerolles, France, June 19–22, 2017, Revised Selected Papers, volume 10641 of Lecture Notes in Computer Science, pages 263–282. Springer, 2017.
 13. L. Feuilloley. How long it takes for an ordinary node with an ordinary ID to output? Theor. Comput. Sci., 811:42–55, 2020.
 14. M. Fischer. Improved deterministic distributed matching via rounding. Distributed Comput., 33(3–4):279–291, 2020.
 15. M. Fischer, M. Ghaffari, and F. Kuhn. Deterministic distributed edge-coloring via hypergraph maximal matching. In 58th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2017, Berkeley, CA, USA, October 15–17, 2017, pages 180–191. IEEE Computer Society, 2017.
 16. G. N. Frederickson and N. A. Lynch. Electing a leader in a synchronous ring. J. ACM, 34(1):98–115, 1987.
 17. R. G. Gallager, P. A. Humblet, and P. M. Spira. A distributed algorithm for minimumweight spanning trees. ACM Trans. Program. Lang. Syst., 5(1):66–77, 1983.
 18. L. Gasieniec, E. Kantor, D. R. Kowalski, D. Peleg, and C. Su. Time efficient k-shot broadcasting in known topology radio networks. Distributed Comput., 21(2):117–127, 2008.
 19. M. Ghaffari, F. Kuhn, and Y. Maus. On the complexity of local distributed graph problems. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, Montreal, QC, Canada, June 19–23, 2017, pages 784–797. ACM, 2017.
 20. A. V. Goldberg, S. A. Plotkin, and G. E. Shannon. Parallel symmetry-breaking in sparse graphs. SIAM J. Discret. Math., 1(4):434–446, 1988.
 21. M. Hanckowiak, M. Karonski, and A. Panconesi. On the distributed complexity of computing maximal matchings. In Proceedings of the Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, 25–27 January 1998, San Francisco, California, USA, pages 219–225. ACM/SIAM, 1998.
 22. V. King, C. A. Phillips, J. Saia, and M. Young. Sleeping on the job: Energy-efficient and robust broadcast for radio networks. Algorithmica, 61(3):518–554, 2011.
 23. F. Kuhn. Faster deterministic distributed coloring through recursive list coloring. In S. Chawla, editor, Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms, SODA 2020, Salt Lake City, UT, USA, January 5–8, 2020, pages 1244–1259. SIAM, 2020.
 24. N. Linial. Distributive graph algorithms: Global solutions from local data. In 28th Annual Symposium on Foundations of Computer Science, Los Angeles, California, USA, 27–29 October 1987, pages 331–335. IEEE Computer Society, 1987.
 25. M. Peng, Y. Xiao, and P. P. Wang. Error analysis and kernel density approach of scheduling sleeping nodes in cluster-based wireless sensor networks. IEEE Trans. Veh. Technol., 58(9):5105–5114, 2009.
 26. V. Rozhon and M. Ghaffari. Polylogarithmic-time deterministic network decomposition and distributed derandomization. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, STOC 2020, Chicago, IL, USA, June 22–26, 2020, pages 350–363. ACM, 2020.
 27. J. Suomela. Survey of local algorithms. ACM Comput. Surv., 45(2):24:1–24:40, 2013.
 28. C. Titouna, A. M. Guéroui, M. Aliouat, A. A. A. Ari, and A. Adouane. Distributed fault-tolerant algorithm for wireless sensor networks. Int. J. Commun. Networks Inf. Secur., 9(2), 2017.
 29. L. Wang, J. Yan, T. Han, and D. Deng. On connectivity and energy efficiency for sleeping-schedule-based wireless sensor networks. Sensors, 19(9):2126, 2019.
 30. R. Zheng and R. Kravets. On-demand power management for ad hoc networks. Ad Hoc Networks, 3(1):51–68, 2005.