Scheduling periodic messages on a shared link

02/13/2020
by   Maël Guiraud, et al.

Cloud-RAN is a recent architecture for mobile networks in which the processing units are located in distant data-centers, whereas until now they were attached to antennas. The main challenge, to fulfill protocol time constraints, is to guarantee a low latency for the periodic messages sent from each antenna to its processing unit and back. The problem we address is to find a sending scheme for these periodic messages without contention nor buffering. We focus on a simple but common star-shaped topology, where all contentions are on a single link shared by all antennas. For messages of arbitrary size, we show that there is always a solution as soon as the load of the network is less than 40%. Moreover, we explain how to restrict our study to messages of size 1 without increasing the global latency too much. For messages of size 1, we prove that it is always possible to schedule them when the load is less than 61%, using a polynomial-time algorithm. Moreover, using a simple random greedy algorithm, we show that almost all instances of a given load admit a solution, explaining why most greedy algorithms work so well in practice.



I Introduction

Next generations of mobile network architectures are evolving toward centralized radio network architectures called C-RAN, for Cloud Radio Access Network, to reduce energy consumption costs [1] and, more generally, the total cost of ownership. The main challenge for this type of architecture is to reach a latency compatible with transport protocols [2]. The latency is measured between the sending of a message by a Remote Radio Head (RRH) and the reception of the answer, computed by real-time virtualized network functions of a BaseBand Unit (BBU) in the cloud. For example, LTE standards require processing functions like HARQ (Hybrid Automatic Repeat reQuest) within a few milliseconds [3]. In 5G, some services need an end-to-end latency as low as 1 ms [4, 5]. The specificity of the C-RAN context is not only the latency constraint, but also the periodicity of the data transfer in the fronthaul network between RRHs and BBUs: messages need to be emitted and received every millisecond [3]. Our aim is to operate a C-RAN on a low-cost shared switched network.

Statistical multiplexing, even with a large bandwidth, does not satisfy the latency requirements of C-RAN [6, 7]. The current solution [8, 9] is to use dedicated circuits for the fronthaul. Each pair of end-points, an RRH on one side and a BBU on the other side, is connected through direct fiber or full optical switches. This eliminates all contentions since each message flow has its own link, but it is extremely expensive and does not scale in the case of a mobile network composed of a large number of base stations.

The question we address is the following: is it possible to schedule periodic messages on a shared link without using buffers? Eliminating this source of latency leaves us with more time budget for latency due to the physical length of the routes in the network, and thus allows for wider deployment areas. Our proposed solution is to compute beforehand a periodic and deterministic sending scheme which completely avoids contention. This kind of deterministic approach has gained some traction recently: Deterministic Networking is under standardization in the IEEE 802.1 TSN group [10], as well as in the IETF DetNet working group [11]. Several patents on concepts and mechanisms for DetNet have already been published, see for example [12, 13].

The algorithmic problem studied in the present article, called Periodic Message Assignment or pma, is the following. Given a period, a message size and, for each message, a delay between the two contention points, choose an offset (departure time in the period) for each message, so that it goes through the two contention points when they are free. It is similar to the two-machine flow shop scheduling problem [14] with periodicity. The periodicity adds more constraints, since messages from consecutive periods can interact. The objective is usually to minimize the makespan, or schedule length, but in our periodic variant it is infinite. Hence, we choose to look for any periodic schedule without buffering, which minimizes the trip time of each message.

To our knowledge, all previously studied periodic scheduling problems are quite different from the one we present. In some, the aim is to minimize the number of processors on which the periodic tasks are scheduled [15, 16], while our problem corresponds to a single processor and a constraint similar to makespan minimization. In cyclic scheduling [17], the aim is to minimize the period of a schedule to maximize the throughput, while our period is fixed. The train timetabling problem [18] and in particular the periodic event scheduling problem [19] are generalizations of our problem, since they take into account a fixed period and can express the fact that two trains (like two messages) should not cross. However, they are much more general: the trains can vary in size and speed, the network can be more complex than a single track, and there are precedence constraints. Hence, the numerous variants of train scheduling problems are very hard to solve (and always NP-hard); they usually allow for some delay, and most of the work is devoted to devising good algorithms using branch and bound, mixed integer programming, genetic algorithms, etc. [18]

In previous articles by the authors, generalizations of pma allowing buffers are studied on a single link [6] or on a cycle [20]. Heuristics (using scheduling algorithms) and FPT algorithms are used there to find a sending scheme with minimal latency, while here we only look for sending schemes without any additional latency. More complex problems of computing schedules for time-sensitive networks have been practically solved, using mixed integer programming [21, 22] or an SMT solver [23], but without theoretical guarantees on the quality of the produced solutions. Typical applications cited in these works (besides C-RAN) are sensor networks communicating periodically inside a car or a plane, or logistic problems in production lines.

Organization of the paper

In Sec. II, we present the model and the problem pma. In Sec. III, we present several greedy algorithms and prove that they always find a solution to pma for increasing values of the load. Then, we illustrate their surprisingly good performance on random inputs in Sec. III-D. It turns out that the message size can be assumed to be one for a small price in added latency, as explained in Sec. IV. Hence, we develop a deterministic and a probabilistic algorithm for this special case in Sec. V, which work for much higher loads than the algorithms for large message sizes, and we illustrate their performance on random inputs in Sec. V-C.

II Model

In this article, we model a simple network (see Fig. 1) in which periodic messages flow through a single link. The answer to each message is then sent back through the same bidirectional link. The model and problem can easily be generalized to any network, that is, any directed acyclic graph with any number of contention points, see [6]. We choose here to present the simplest non-trivial such network, for which we can still obtain some theoretical results.

Time is discretized and the process we consider is periodic, with fixed integer period P. We use the notation [P] for the set {0, …, P − 1}. In the C-RAN network we model, represented in Fig. 1, all messages are of the same nature, hence they all have the same size, denoted by τ. This size corresponds to the time needed to send a message through a contention point of the network, here a link shared by all antennas. We denote by n the number of messages, which are numbered from 0 to n − 1. A message i is characterized by its delay d_i: if message i arrives at the link at time t, then it comes back to the other end of the link on its return trip at time t + d_i.


Fig. 1: An example of a network with two contention points

Since the process we describe is periodic, we may consider any interval of P units of time to represent the state of our system. Describing the messages going through the two contention points during such an interval completely defines the periodic process. We call the representation of this interval of time at the first contention point the first period, and at the second contention point the second period.

An offset of a message is a choice of the time at which it arrives at the first contention point. Consider a message i of offset o_i: it uses the interval of time [o_i, o_i + τ − 1] in the first period and [o_i + d_i, o_i + d_i + τ − 1] in the second period, both modulo P. We say that two messages i and j collide if their intervals intersect in either the first or the second period. We say that message i uses time t in the first period (resp. in the second period) if t belongs to its interval in the first period (resp. in the second period).

We want to send all messages so that there is no collision on the shared link. In other words, we look for a way to send the messages without using buffering, hence limiting the latency to the physical length of the links. An assignment is a choice of an offset for each message such that no pair of messages collides, as shown in Fig. 2. Formally, an assignment is a function from the messages to their offsets in [P].

We call Periodic Message Assignment, or pma, the problem studied in this article, which asks, given an instance of n messages, a period P and a size τ, to find an assignment or to decide that there is none.

Fig. 2: An instance of pma and one of its assignments

The complexity of pma is not yet known. However, we have proven that, when parameterized by the number of messages, the problem is FPT [7]. A slight generalization of pma, with several contention points but each message only going through two of them, as in pma, is NP-hard [7]. If the shared link is not bidirectional, that is, there is a single contention point and each message goes through it twice, the problem is also NP-hard [24]. Hence, we conjecture that pma is NP-hard.

To overcome the supposed hardness of pma, we study it when the load of the system is small enough. The load is defined as the number of units of time used in a period by all messages divided by the period; hence the load is equal to nτ/P. Our aim is to prove that, for small load, there is always an assignment and that it can be found by a polynomial-time algorithm.

III Greedy Algorithms for Large Messages

In this section, we study the case of large messages. When modeling real problems, it is relevant to have a large τ when the transmission time of a single message is large with regard to its delay.

A partial assignment A is a function defined from a subset S of the messages to [P]. We say that |A|, the cardinal of the domain of A, is its size. We say that a message in S is scheduled (by A), and a message not in S is unscheduled. We only consider partial assignments such that no pair of scheduled messages collides. If A has domain S, i ∉ S and o ∈ [P], we define the extension of A to the message i by the offset o, denoted by A[i → o], as the function defined as A on S and mapping i to o.

All presented algorithms build an assignment incrementally, by growing the size of the domain of a partial assignment. Moreover, the algorithms of this section are greedy, since once an offset is chosen for a message, it is never changed.

III-A First Fit

Assume that for some partial assignment A, the message j has offset o: it uses all times from o to o + τ − 1 in the first period. If another message takes some offset o' before o, then the last time it uses in the first period is o' + τ − 1, which must be less than o; this implies that o' ≤ o − τ. If o' is larger than o, to avoid a collision with message j, it must be at least o + τ. Hence the message j forbids the 2τ − 1 offsets o − τ + 1, …, o + τ − 1 for messages still not scheduled, because of this use of time in the first period. The same reasoning can be done for the second period, which again forbids 2τ − 1 offsets. Hence, if k messages are already scheduled, then at most 2k(2τ − 1) offsets are forbidden for each unscheduled message. Note that this is an upper bound on the number of forbidden offsets, since the same offset can be forbidden twice, because of a message on the first and one on the second period.

Let ΔU(A) be the maximum number of forbidden offsets when extending A. Formally, if A is defined over S, ΔU(A) is the maximum, over all values of the delay d, of the number of offsets forbidden by A for an unscheduled message of delay d. The previous paragraph shows that ΔU(A) is always bounded by 2(2τ − 1)|A|.

The first algorithm deals with the routes in the order they are given: for each unscheduled route, it tests all offsets from 0 to P − 1 until one does not create a collision with the current partial assignment. We call this algorithm First Fit. Remark that if ΔU(A) < P, then whatever the delay of the route we want to extend A with, it is possible to find an offset. Since ΔU(A) ≤ 2(2τ − 1)|A| and |A| ≤ n − 1, First Fit (or any greedy algorithm) will always succeed when 2(2τ − 1)(n − 1) < P, that is, roughly, when the load is less than 1/4. It turns out that First Fit always creates compact assignments (as defined in [6]), that is, a message is always next to another one in one of the two periods. Hence, we can prove a better bound on ΔU(A) when A is built by First Fit, as stated in the following theorem.

Theorem 1.

First Fit always solves pma positively on instances of load less than 1/3.

Proof.

To prove the theorem, we show by induction on the size of A that ΔU(A) ≤ (3τ − 1)|A| + τ − 1. For |A| = 1, it is clear, since a single message forbids at most 2(2τ − 1) offsets, as explained before. Now, assume the bound holds for A and consider a route i such that First Fit builds A[i → o] from A. By definition of First Fit, o is the smallest offset which does not create a collision, hence every offset smaller than o is already forbidden by A. Among the 2τ − 1 offsets forbidden in the first period by the choice of o as offset for i, the τ − 1 offsets smaller than o are thus already forbidden by A. Hence at most 3τ − 1 new offsets are forbidden, that is, ΔU(A[i → o]) ≤ ΔU(A) + 3τ − 1, which proves the induction. Therefore, while the load is less than 1/3, we have ΔU(A) ≤ (3τ − 1)(n − 1) + τ − 1 < 3τn < P, and First Fit can always schedule the next message, which proves the theorem. ∎
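As an illustration, here is a minimal sketch of First Fit in Python, assuming the model above (period P, messages of size τ occupying τ consecutive times in each period, collisions checked modulo P). The function names are ours, not from the authors' implementation.

```python
# Illustrative sketch of First Fit (not the authors' implementation).
# P: period, tau: message size, delays: list of per-message delays d_i.

def intervals_overlap(a, b, tau, P):
    """Do two length-tau intervals of time starting at a and b overlap modulo P?"""
    diff = (a - b) % P
    return diff < tau or diff > P - tau

def collides(o1, d1, o2, d2, tau, P):
    """Collision in the first period or in the second period."""
    return (intervals_overlap(o1, o2, tau, P)
            or intervals_overlap(o1 + d1, o2 + d2, tau, P))

def first_fit(delays, tau, P):
    """Return a list of offsets, or None if some message cannot be placed."""
    offsets = []
    for d in delays:
        placed = None
        for o in range(P):  # try offsets 0, 1, ..., P-1 in order
            if all(not collides(o, d, offsets[j], delays[j], tau, P)
                   for j in range(len(offsets))):
                placed = o
                break
        if placed is None:
            return None
        offsets.append(placed)
    return offsets
```

`first_fit` returns None when some message cannot be placed, which the analysis above guarantees cannot happen at low load.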

III-B Meta-Offset

The second method is described in [6] and achieves the same bound on the load by a different approach, which we recall here because we use it in the more involved algorithm of the next section. The idea is to restrict the possible offsets at which messages can be scheduled. It seems counter-intuitive, since it artificially decreases the number of possible offsets for new messages. However, it reduces the number of forbidden offsets, when the algorithm is designed accordingly. A meta-offset is an offset of value iτ, with i an integer from 0 to ⌊P/τ⌋ − 1. We call Meta-Offset the greedy algorithm which works as First Fit, but considers only meta-offsets when scheduling new messages.

Let ΔM(A) be the maximal number of meta-offsets forbidden by A when extending it. By definition, two messages with different meta-offsets cannot collide in the first period. Hence, ΔM(A) can be bounded by 3|A|, since a scheduled message forbids its own meta-offset in the first period and at most two meta-offsets in the second period, and we obtain the following theorem.

Theorem 2 (Proposition 3 of [6]).

Meta-Offset always solves pma positively on instances of load less than 1/3.
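A sketch of Meta-Offset under the same assumptions: it is First Fit restricted to offsets that are multiples of τ (written `tau` below). Names are illustrative.

```python
# Illustrative sketch of Meta-Offset: First Fit restricted to offsets
# that are multiples of tau (the meta-offsets).

def overlap(a, b, tau, P):
    diff = (a - b) % P
    return diff < tau or diff > P - tau

def meta_offset(delays, tau, P):
    """Return a list of offsets (all multiples of tau), or None on failure."""
    offsets = []
    for d in delays:
        placed = None
        for m in range(P // tau):          # meta-offsets 0, tau, 2*tau, ...
            o = m * tau
            if all(not (overlap(o, offsets[j], tau, P)
                        or overlap(o + d, offsets[j] + delays[j], tau, P))
                   for j in range(len(offsets))):
                placed = o
                break
        if placed is None:
            return None
        offsets.append(placed)
    return offsets
```

Messages placed at distinct meta-offsets can only collide in the second period, which is what makes the bound on forbidden meta-offsets smaller.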

III-C Tuples and meta-intervals

We now propose a more involved family of greedy algorithms, which solves pma positively for larger loads. We try to combine the good properties of the two previous algorithms: the compactness of the assignments produced by First Fit and the use of meta-offsets to reduce collisions in the first period. The idea is to schedule several messages at once, using meta-offsets, to maximize the compactness of the obtained solution. We first describe the algorithm which schedules pairs of messages and then explain briefly how to extend it by scheduling tuples of messages instead of pairs.

We first prove a lemma which allows us to assume that the period is a multiple of τ, since it makes the analysis of our algorithms much simpler and tighter. It only changes the load from nτ/P to at most nτ/(P − τ + 1): the difference is less than nτ²/(P(P − τ)), and thus very small for large P.

Lemma 3.

Let I be an instance of pma with messages of size τ and period P. There is an instance I' with the same number of messages of size τ and whose period is the largest multiple of τ at most P, such that an assignment of I' can be transformed into an assignment of I in polynomial time.

Proof.

Write P = qτ + r with 0 ≤ r < τ. We define the instance I' as follows: its period is P' = qτ and each delay of I is rounded to a multiple of τ. With this choice, P' is the largest multiple of τ at most P. Consider an assignment A' of the instance I'. Keeping the same offsets is also valid with the original delays of I: the intervals of time used in the first and second period begin at the same positions up to the rounding, which cannot create collisions. We then use a compactification procedure as in [7]. The first message is positioned at offset zero. The first time it uses in the second period is a multiple of τ, since its delay is by definition a multiple of τ. Then, all other messages are translated to the left, by removing increasing integers from their offsets, until there is a collision. This guarantees that some message is in contact with the first one in the first or the second period, which implies that its offset is a multiple of τ. This procedure can be repeated until we get a solution such that all positions of messages in the first and second period are multiples of τ. From such a solution, we obtain an assignment of I. ∎

From now on, we always assume that P is a multiple of τ; the load is then nτ/P. We are interested in the remainder modulo τ of the delay of each message, which we denote by r_i, and we assume from now on that the messages are sorted by increasing r_i. A compact pair, as shown in Fig. 3, is a pair of messages (i, j), with r_i ≤ r_j, such that we can put them next to each other in the second period using meta-offsets. We denote by g the gap between the two messages in the first period, defined as the difference of offsets which makes the two messages adjacent in the second period. We require that g ≥ τ, so that there is no collision in the first period if we schedule these two messages such that the difference of their offsets is the gap.

Fig. 3: Representation of a compact pair scheduled using meta-offsets
Lemma 4.

Given three messages, two of them always form a compact pair.

Proof.

If the first two messages, or the first and the third message, form a compact pair, then we are done. If not, then by definition of a compact pair, the second and the third messages have the same remainder and form a compact pair of gap τ. ∎

We call Compact Pairs the following greedy algorithm. From the messages in order of increasing r_i, we build a sequence of at least ⌊n/3⌋ compact pairs using Lemma 4. They are then scheduled in the order they have been built, using meta-offsets only. If at some point all compact pairs are scheduled, or the current one cannot be scheduled, the remaining messages are scheduled as in Meta-Offset. The analysis of the algorithm relies on the evaluation of the number of forbidden meta-offsets. In the first phase of the algorithm, one should evaluate the number of meta-offsets forbidden when scheduling a compact pair, which we denote by ΔP(A). In the second phase, we need to evaluate ΔM(A). When scheduling a message in the second phase, a scheduled compact pair forbids only three meta-offsets in the second period. If the messages of a pair were scheduled independently, they would forbid four meta-offsets, which explains the improvement over Meta-Offset. We first give a simple lemma, whose proof can be read from Fig. 4, which allows us to bound ΔP(A) and to lower bound the number of scheduled compact pairs.

Lemma 5.

A compact pair already scheduled by Compact Pairs forbids at most four meta-offsets in the second period to another compact pair, when the latter is scheduled by Compact Pairs.

Fig. 4: Representation of the meta-offsets forbidden by an already scheduled compact pair (in blue) when scheduling a new compact pair (in red)
Theorem 6.

Compact Pairs always solves pma positively on instances of load less than 3/8.

Proof.

Let p be the number of compact pairs scheduled in the first phase. When scheduling a new pair, the positions of the messages on the first period forbid at most 4p meta-offsets for a compact pair. Indeed, each of the 2p scheduled messages can collide with each of the two messages which form the compact pair. On the second period, we can use Lemma 5 to bound the number of forbidden meta-offsets by 4p. Hence, we have established that during the first phase, the partial solution A satisfies ΔP(A) ≤ 8p. This first phase continues while there are possible meta-offsets for compact pairs, which is the case when ΔP(A) < P/τ, that is, while 8p < P/τ. In the second phase, a scheduled compact pair forbids 3 meta-offsets in the second period and 2 in the first. Hence, if we let s be the number of messages scheduled in the second phase to give the assignment A, we have ΔM(A) ≤ 5p + 3s. The algorithm can always schedule messages while ΔM(A) is less than P/τ, thus while 5p + 3s < P/τ.

Hence, we can guarantee p ≥ P/(8τ), and then s ≥ (P/τ − 5p)/3 = P/(8τ), so the number of messages scheduled is at least 2p + s = 3P/(8τ). Remark that we need to be able to schedule two thirds of these messages as compact pairs, which is possible by Lemma 4. Therefore, an assignment is always produced when the load is less than 3/8. ∎

The algorithm can be improved by forming compact tuples instead of compact pairs. A compact k-tuple is a sequence of k messages (with increasing r_i) for which meta-offsets can be chosen so that there is no collision, the messages in the second period are in order, and two consecutive messages of the tuple use contiguous intervals of time in the second period.

The algorithm Compact k-tuples works by scheduling compact k-tuples using meta-offsets while possible, then scheduling compact (k−1)-tuples, and so on down to single messages. The theorem we give is obtained for a small fixed k; taking k arbitrarily large and using more refined bounds on the number of forbidden meta-offsets is not enough to improve the bound on the load, and works only for larger instances.

Lemma 7.

Given enough messages, k of them always form a compact k-tuple.

Proof.

We prove the property by induction on k. We have already proved it for k = 2 in Lemma 4. Now, assume that we have found a compact (k−1)-tuple among the first messages, and consider the next messages. If k of them have the same remainder, then they form a compact k-tuple and we are done. Otherwise, there are many different values of the remainder among those messages. By the pigeonhole principle, one of these values allows us to extend the compact (k−1)-tuple, since a compact (k−1)-tuple forbids only a bounded number of values of the remainder. This proves the induction and the lemma. ∎

Theorem 8.

Compact k-tuples always solves pma positively on instances of load less than 2/5, for instances with n large enough.

Proof.

Lemma 7 ensures that there are enough compact k-tuples if there are enough messages, thus n should be large enough. We need the following fact, which generalizes Lemma 5: a scheduled compact k-tuple forbids a bounded number of meta-offsets in the second period when scheduling a compact l-tuple, and fewer when the remainders of the messages in the l-tuple are larger than the remainders in the k-tuple. This allows us to lower bound the number of scheduled l-tuples, for l from k down to 1, by bounding Δ_l(A), the number of forbidden meta-offsets when placing an l-tuple. If we denote by p_l the number of compact l-tuples scheduled by the algorithm, we obtain for each l an inequality bounding Δ_l(A) in terms of the p_j for j ≥ l, the inequality for l = 1 being slightly better. A bound on each p_l can then be computed, since the corresponding phase can be extended while Δ_l(A) < P/τ. A numerical computation of the p_l's shows that the algorithm always finds a solution when the load is less than 2/5. ∎

The code computing the p_l's can be found on the authors' website at https://yann-strozecki.github.io/. To make the algorithm Compact k-tuples work, we need enough messages to be able to produce enough compact k-tuples; the number of messages theoretically required by Lemma 7 is large. This bound can be improved by a better algorithm to find compact tuples than the one described in Lemma 7. On random instances, the probability that a small set of messages contains no compact k-tuple is low, and we can just build the tuples greedily. Therefore, for random instances, forming compact k-tuples is almost never a problem and the algorithm works even for small n.

III-D Experimental Results

We present results of simulations of the algorithms presented above, to experimentally assess their performance on random instances. The implementation in C of these algorithms can be found on the authors' website at https://yann-strozecki.github.io/. We experiment with several periods and message sizes. For each set of parameters, we try every possible load by changing the number of messages, and give the success rate of each algorithm. The success rate is measured on instances of pma generated by drawing uniformly and independently the delays of each message in [P].

The algorithms we consider are the following:

  • First Fit

  • Meta Offset

  • Compact Pairs

  • Greedy Uniform (see Sec. V)

  • Exact Resolution using an algorithm from [6]

On a regular laptop, all algorithms terminate in less than a second when solving instances with many messages, except the exact resolution, whose complexity is exponential in the number of routes (but polynomial in the rest of the parameters). Hence, the theoretically best value given by the exact resolution is only available in the experiment with the fewest messages.
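The experimental protocol described above can be sketched as follows, with First Fit as the tested greedy algorithm and delays drawn uniformly in [P] as in the text (parameters and names are illustrative, not the authors' C code):

```python
# Illustrative sketch of the experimental protocol: draw the delays
# uniformly in [P], run a greedy algorithm (First Fit here) and report
# the fraction of solved instances.
import random

def overlap(a, b, tau, P):
    diff = (a - b) % P
    return diff < tau or diff > P - tau

def first_fit_ok(delays, tau, P):
    offsets = []
    for d in delays:
        o = next((o for o in range(P)
                  if all(not (overlap(o, offsets[j], tau, P)
                              or overlap(o + d, offsets[j] + delays[j], tau, P))
                         for j in range(len(offsets)))), None)
        if o is None:
            return False
        offsets.append(o)
    return True

def success_rate(n, tau, P, trials=100, seed=0):
    """Fraction of random instances with n messages solved by First Fit."""
    rng = random.Random(seed)
    return sum(first_fit_ok([rng.randrange(P) for _ in range(n)], tau, P)
               for _ in range(trials)) / trials
```

Sweeping n for fixed τ and P (the load being nτ/P) reproduces the success-rate curves discussed below.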

Fig. 5: Experiment for the first set of parameters
Fig. 6: Experiment for the second set of parameters
Fig. 7: Experiment for the third set of parameters

For the three sets of parameters, the algorithms have the same relative performances. Meta-Offset and Greedy Uniform perform the worst and have almost equal success rates. Remark that they succeed on all random instances up to loads much higher than their worst-case guarantee, while it is easy to build an instance of pma which makes them fail at that load. The difference between the worst-case analysis and the average-case analysis is explained for Greedy Uniform in Sec. V.

First Fit performs better than Meta-Offset, while they have the same worst case. Compact Pairs, which is the best theoretically, also performs the best in the experiments, finding assignments for the highest loads. As shown in Fig. 5 and Fig. 6, it appears that the size of the messages has little impact on the success rate of the algorithms. Comparing Fig. 7 and Fig. 5 shows that with more messages, the transition between a success rate of 1 and a success rate of 0 is sharper. Finally, the results of Exact Resolution in Fig. 7 show that the greedy algorithms are far from always finding a solution when one exists. Moreover, we have found an instance with no assignment, whose load gives an upper bound on the highest load for which there is always a solution to pma.

IV From Large Messages to Messages of Size One

In this section, we explain how we can restrict ourselves to the study of pma with small τ, and even τ = 1, if we are willing to increase the load or to accept some buffering in our original problem.

IV-A Reduction without buffering

We give a reduction from an instance I of pma to another one with the same period and number of messages, but in which the size of the messages is doubled. From I, we build I', where the messages have size 2τ and each delay is rounded to a multiple of 2τ. The instance I' has a load twice as large as I. On the other hand, all its delays are multiples of 2τ, hence solving pma on I' is equivalent to solving an instance with messages of size 1.

We show that an assignment A' of I' can be transformed into an assignment A of I. Consider a message with offset o in A': it uses all times between o and o + 2τ − 1 in the first period, and similarly in the second period. Depending on the rounding of its delay, we choose its offset in A so that the message of I is scheduled "inside" the corresponding message of I' in the second period, see Fig. 8. There is no collision in the assignment A: all messages in the second period use times which are used by the same message in A'. In the first period, a message scheduled by A uses either the first half of the times used by the same message in A', or the positions just before, which are at worst the second half of the times used by another message in A' and thus are not used in A.

Fig. 8: Building the assignment A from the assignment A'

IV-B Reduction using buffering

We have presented our problem with a single degree of freedom per message: its offset. We could also allow a message to be buffered for some time b between the two contention points, which amounts to changing its delay d to d + b. The quality of the solutions obtained for such a modified instance of pma is worse, since the buffering adds latency to the messages. We now describe how we can make a trade-off between the added latency and the size of the messages, knowing that having smaller messages helps to schedule instances with higher load.

All messages are buffered long enough so that their delays all have the same remainder modulo τ. It costs at most τ − 1 units of time of buffering per message, which is not so good, since algorithms optimizing the latency do better for random instances, see [7]. However, it is much better than buffering for a time up to P, the only amount for which we are guaranteed to find an assignment, whatever the instance. We can choose the remainder of the message with the longest route as the reference remainder, hence this message needs zero buffering. However, the message with the second longest route may then need a buffering of up to τ − 1, thus the worst-case increase of the total latency is τ − 1 per message. When all delays are changed so that each is a multiple of τ, we have an easy reduction to the case of τ = 1, by dividing all values by τ.

We can do the same kind of transformation by buffering all messages so that each delay is a multiple of τ/2. The cost in terms of latency is then at most τ/2 − 1 per message, but the reduction yields messages of size 2. For small sizes of messages, it is easy to get better algorithms for pma, in particular for τ = 1, as we show in the next section. Here, we show how to adapt Compact Pairs to the case of τ = 2.

Theorem 9.

Compact Pairs on instances with τ = 2 always solves pma positively on instances of load less than 4/11.

Proof.

We assume w.l.o.g. that there are at least as many messages with an even delay as with an odd delay. We schedule compact pairs of messages with even delay, then we schedule the messages with odd delay as single messages. The worst case is when there are as many messages of the two types. In the first phase, if we schedule p pairs, the number of forbidden meta-offsets is at most 8p. In the second phase, if we schedule s additional messages, the number of forbidden meta-offsets is bounded by 5p + 3s. Hence, both conditions are satisfied, and we can always schedule all messages, when the load is less than 4/11. ∎

We now prove that we can do the previous reduction while optimizing the average added latency, which reduces the average added latency by a factor of two. The only degree of freedom in the reduction is the choice of the reference remainder c, since all other delays are then modified to have the same remainder. Define the average buffer time for a choice of reference c as b(c) = (1/n) Σ_i ((c − d_i) mod τ). If we sum b(c) for c from 0 to τ − 1, the contribution of each message is (0 + 1 + ⋯ + (τ − 1))/n = τ(τ − 1)/(2n). Since there are n messages, the sum of the b(c) over all c is τ(τ − 1)/2. There is at least one term of the sum less than the average, hence there is a c such that b(c) ≤ (τ − 1)/2. In other words, the average buffer time for a message, with this choice of reference, is at most (τ − 1)/2.
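The averaging argument above can be turned into a direct computation: try every reference remainder c and keep the one minimizing the total buffering. A sketch, with names of our own choosing:

```python
# Illustrative sketch: choose the reference remainder c minimizing the
# total buffering (c - d_i) mod tau, as in the averaging argument above.

def best_reference(delays, tau):
    """Return (c, buffers) minimizing the sum of the buffer times."""
    best_c, best_bufs = None, None
    for c in range(tau):
        bufs = [(c - d) % tau for d in delays]
        if best_bufs is None or sum(bufs) < sum(best_bufs):
            best_c, best_bufs = c, bufs
    return best_c, best_bufs
```

After buffering, every delay d_i + b_i has remainder c modulo τ, and the averaging argument guarantees that the mean of the returned buffers is at most (τ − 1)/2.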

V Messages of Size One

When τ = 1, any greedy algorithm finds a solution to pma when the load is less than 1/2, since ΔU(A) ≤ 2k, where k is the number of scheduled messages. We now give a method which always finds a solution for loads of up to 61%.

V-A Deterministic algorithm

To go above a load of 1/2, we use an algorithm which optimizes a potential measuring how many offsets are available for all messages, scheduled or not. Messages are scheduled while possible, using any greedy algorithm. Then, when the unscheduled messages have no free offset, we use a swap operation, defined later, which improves the potential. When the potential is high enough, it ensures that there are two messages whose offsets can be changed so that a new message can be scheduled.

The algorithm is not greedy, since we allow exchanging a scheduled message with an unscheduled one. It cannot work online, since it requires knowing all the delays of the messages in advance.

Definition 1.

The potential of a message of delay d, for a partial assignment, is the number of times t such that t is used in the first period and t + d (mod P) is used in the second period, where P is the period.

The potential of a message counts favorable configurations in terms of forbidden offsets. Indeed, when t is used in the first period and t + d (mod P) is used in the second period, the same offset is forbidden twice for a message of delay d. Hence, the potential of a message determines its number of possible offsets, as stated in the following lemma.

Lemma 10.

Given a partial assignment of size i and an unscheduled message of potential u, there are P - 2i + u possible offsets which can be used to schedule it while extending the assignment.
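Under the same assumed model (period P, a message of delay d at offset o occupying o and (o + d) mod P), the potential and the offset count can be checked numerically; this sketch and its names are mine:

```python
def potential(delay, used_first, used_second, P):
    """Number of used first-period times t whose shift t + delay (mod P)
    is also used in the second period: each such t forbids an offset
    'twice', wasting only one offset instead of two."""
    return sum((t + delay) % P in used_second for t in used_first)

def free_offsets(delay, used_first, used_second, P):
    """Offsets at which a message of the given delay can still be placed."""
    return [o for o in range(P)
            if o not in used_first and (o + delay) % P not in used_second]

# With i scheduled messages and potential u, exactly 2*i - u offsets are
# forbidden, so P - 2*i + u offsets remain free.
```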

For our algorithm, we need a global measure of the quality of a partial assignment, which we try to increase when the algorithm fails to schedule new messages. We call this measure the potential of the assignment; it is the sum of the potentials of all the messages in the instance.

Definition 2.

The potential of a position t, for a partial assignment S, is the number of messages of delay d in the instance such that t + d (mod P) is used in the second period by a route scheduled by S.

Instead of decomposing the global potential as a sum over the messages, it can be understood as a sum over the positions, which gives the next two lemmas.

Lemma 11.

The sum of the potentials of all the positions used in the first period by the messages scheduled by the assignment is equal to the potential of the assignment.

By definition of the potential of the positions, we have the following simple invariant.

Lemma 12.

The sum of the potentials of all the positions, for a partial assignment scheduling i messages out of n, is ni.

As a consequence of this lemma, the algorithm we propose guarantees to obtain at least half this value through an exchange mechanism that we call a swap. Let S be a partial assignment of size i and let m be an unscheduled message of delay d. Assume that m cannot be used to extend S. The swap operation is the following: select a free position t in the first period, remove from S the message which uses the position t + d (mod P) in the second period, and extend S with m at offset t. We denote this operation by Swap(t).
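Under the assumed model, the swap operation is a one-step eviction; in this sketch of mine, `schedule` maps a message id to its (delay, offset) pair:

```python
def do_swap(t, new_id, new_delay, schedule, P):
    """Swap(t): t is free in the first period, but (t + new_delay) % P is
    occupied in the second period by some scheduled message.  Evict that
    message and place the new one at offset t; the set of used
    second-period times is unchanged."""
    target = (t + new_delay) % P
    evicted = next(m for m, (d, o) in schedule.items()
                   if (o + d) % P == target)
    del schedule[evicted]
    schedule[new_id] = (new_delay, t)
    return evicted
```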

Lemma 13.

Let S be a partial assignment scheduling i messages out of n and let m be an unscheduled message. If m cannot be used to extend S, then either the potential of S is at least ni/2, or there is a free position t such that Swap(t) strictly increases the potential.

Proof.

The positions of the first period can be partitioned into two parts: the set U of positions used by some scheduled message and the set F of unused positions. By Lemma 12, since U and F partition the positions, the sums of the potentials of the positions over U and over F add up to ni. Moreover, by Lemma 11, the sum over U is equal to the potential of S.

By hypothesis, since m cannot be scheduled, for all t in F, the time t + d (mod P) is used in the second period. We now define the function f which associates to each t in F the position of U used in the first period by the scheduled route occupying t + d (mod P) in the second period. The function f is an injection from F to U. Remark now that, if we compare S to the assignment after Swap(t), the same positions are used in the second period. Hence, the potential of each position stays the same after the swap. As a consequence, the operation Swap(t) adds the potential of the position t and removes the potential of the position f(t).

Assume now, to prove our lemma, that no operation Swap(t) increases the potential. Then for all t in F, the potential of t is at most the potential of f(t). Since f is an injection from F to U, the sum of the potentials over F is at most the sum over U. As the two sums add up to ni, the potential of S, which is the sum over U, is at least ni/2. ∎

Consider an assignment of size i and a message of delay d. For each time t of the period, consider t in the first period and t + d (mod P) in the second period; since i times are used in each period, t and t + d are both used for at least 2i - P values of t. The potential of the message is thus at least 2i - P; when the message cannot be scheduled, Lemma 10 gives exactly zero possible offsets, hence its potential is equal to 2i - P.

Hence, the potential of any assignment of size i is at least n(2i - P). As a consequence, the method of Lemma 13 guarantees a non-trivial potential when ni/2 is larger than n(2i - P), that is, when the load is less than 2/3. Any algorithm relying only on the potential and the Swap operation thus cannot be guaranteed to work for larger loads, unless we improve on the Swap method or on its analysis. In particular, the unused positions of the first period are not taken into account and could help improve the analysis.

The Swap Algorithm

We now describe the Swap algorithm precisely. It schedules unscheduled messages while possible, then applies Swap operations while they increase the potential. When the potential is maximal, it tries to schedule a new message by moving one or two scheduled messages to new offsets. If it fails to find such messages, the algorithm stops; otherwise it continues, repeating the same steps. We now analyze the algorithm and show that it always succeeds when the load is small enough.
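The loop just described can be organized as in the following simplified sketch, my own rendering under the assumed model: greedy scheduling plus potential-increasing swaps, without the final step which moves one or two scheduled messages, so it may give up where the full algorithm succeeds.

```python
def swap_algorithm(delays, P):
    """Simplified sketch: schedule greedily; when stuck, apply one swap
    that strictly increases the global potential, then continue.
    Returns {message id: offset}, or None when even swapping stalls."""
    schedule = {}  # id -> (delay, offset)

    def periods():
        first = {o for _, o in schedule.values()}
        second = {(o + d) % P for d, o in schedule.values()}
        return first, second

    def total_potential():
        first, second = periods()
        return sum((t + d) % P in second for d in delays for t in first)

    pending = list(range(len(delays)))
    while pending:
        m, d = pending[0], delays[pending[0]]
        first, second = periods()
        offs = [o for o in range(P)
                if o not in first and (o + d) % P not in second]
        if offs:
            schedule[m] = (d, offs[0])
            pending.pop(0)
            continue
        # Stuck: every free first-period time t has (t + d) % P occupied
        # in the second period; look for a potential-increasing Swap(t).
        base, found = total_potential(), None
        for t in sorted(set(range(P)) - first):
            target = (t + d) % P
            ev = next(r for r, (d2, o2) in schedule.items()
                      if (o2 + d2) % P == target)
            saved = schedule.pop(ev)
            schedule[m] = (d, t)
            gain = total_potential() > base
            del schedule[m]
            schedule[ev] = saved
            if gain:
                found = (ev, t)
                break
        if found is None:
            return None  # the full algorithm would now move 1-2 messages
        ev, t = found
        del schedule[ev]
        schedule[m] = (d, t)
        pending.pop(0)
        pending.append(ev)
    return {m: o for m, (_, o) in schedule.items()}
```

Since the potential strictly increases at each swap and never decreases when a message is scheduled, the loop terminates.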

Theorem 14.

The Swap algorithm solves pma positively for instances with messages of size one and load less than 61%.

Proof.

We determine for which number of messages n the Swap algorithm always succeeds. We only need to study the case where n - 1 messages are scheduled and we try to schedule the last one, since the intermediate steps are easier to realize.

Assume now that the unscheduled message has delay d. For each offset t, consider the pair formed by the time t in the first period and the time t + d (mod P) in the second period. Since the message cannot be scheduled, there are three cases. First, t is unused in the first period but t + d is used in the second period. If the message using the time t + d in the second period can be scheduled elsewhere, so that the unscheduled message can use the offset t, then the algorithm succeeds; otherwise that message has no other possible offset, which means its potential is minimal. The second case is symmetric: t is used in the first period but t + d is unused in the second period. Finally, in the third case, t is used in the first period and t + d is used in the second period; the number of such values of t is the potential of the unscheduled message. If the two messages using the times t and t + d can be rescheduled so that the offset t becomes available for the unscheduled message, then the algorithm succeeds. This is always possible when the potentials of the two messages are large enough, and since each potential is bounded, it is satisfied as soon as the sum of the two potentials is large enough.

If the algorithm was unable to schedule the last message by moving two scheduled messages, the previous analysis gives an upper bound on twice the potential of the assignment.

By Lemma 13, we have a lower bound on the potential, hence the algorithm must succeed when this lower bound exceeds the upper bound of the previous analysis.

Expanding, we obtain a second-degree inequality in the load.

Solving this inequality yields the stated bound on the load. ∎

The analysis of the potential is not optimal, hence we conjecture that the method works for loads up to 2/3. Moreover, on random instances, we expect the potential to be higher than the stated bound and to be better spread over the messages, which would make the algorithm work for larger loads.

V-B Random Algorithm for Random Instances

We would like to better understand the behavior of our algorithms on instances drawn uniformly at random. To this end, we analyze the following algorithm, called Greedy Uniform: for each message in order, choose one of its possible offsets uniformly at random and use it to extend the partial assignment.
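Greedy Uniform is a few lines under the assumed model (period P, a message of delay d at offset o occupying o and (o + d) mod P); the Monte Carlo wrapper estimating its success probability is my addition:

```python
import random

def greedy_uniform(delays, P, rng):
    """For each message in order, pick one of its possible offsets
    uniformly at random; fail if some message has no possible offset."""
    used_first, used_second = set(), set()
    for d in delays:
        offs = [o for o in range(P)
                if o not in used_first and (o + d) % P not in used_second]
        if not offs:
            return False
        o = rng.choice(offs)
        used_first.add(o)
        used_second.add((o + d) % P)
    return True

def success_rate(n, P, trials, seed=0):
    """Empirical success probability over uniformly random delays."""
    rng = random.Random(seed)
    wins = sum(greedy_uniform([rng.randrange(P) for _ in range(n)], P, rng)
               for _ in range(trials))
    return wins / trials
```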

We analyze Greedy Uniform on random instances: we assume that the delays of all messages are drawn independently and uniformly at random in the period. We compute the probability of success of Greedy Uniform over all random choices of the algorithm and all possible instances. It turns out that this probability, for a fixed load strictly less than one, goes to one when the period grows. For a given assignment, we are only interested in its trace: the sets of times which are used in the first and in the second period. Hence, if i messages are scheduled in a period of size P, a trace is a pair of subsets of the period, each of size i. We now show that these traces are produced uniformly.

Theorem 15.

The distribution of traces of assignments produced by Greedy Uniform when it succeeds, from instances drawn uniformly at random, is also uniform.

Proof.

We prove this by induction on the number of messages. It is clear for a single message, since its delay is uniformly drawn and all offsets can be used. Assume now the theorem true for some number of messages. By the induction hypothesis, Greedy Uniform has produced a uniform trace from the first messages. Hence we must prove that, if we draw the delay of the next message at random, extending the trace by a random possible offset produces the uniform distribution on the larger traces.

If we draw an offset uniformly at random among all offsets and then either extend the trace by scheduling the last message at this offset or fail, the distribution over the larger traces is the same as the one produced by Greedy Uniform. Indeed, all offsets which can be used to extend the trace have the same probability of being drawn. Since all delays are drawn independently, we can assume that, given a trace, we first draw an offset uniformly, then draw the delay of the added message uniformly, and add it to the trace when possible. This proves that all extensions of a given trace are equiprobable. Thus, all larger traces are equiprobable, since each can be formed from the same number of smaller traces by removing one used time from the first period and one from the second. This proves the induction step and the theorem. ∎

Let p_i be the probability that Greedy Uniform fails when scheduling the (i+1)-th message, assuming it has not failed before.

Lemma 16.
Proof.

The probability of failing is independent of the delay of the message. Indeed, the operation which adds one to all used times of the second period is a bijection on the set of traces of a given size, and it is equivalent to removing one from the delay of the message. We can thus assume that the delay is zero.

Let A be the set of times used in the first period by the first i messages and B the set of times used in the second period. We can assume that A is fixed, since all such sets are equiprobable and B is independent from A. Since the delay is zero, there is no possible offset for the message if and only if every time outside A is in B, that is, B has been drawn so that it contains the complement of A. The probability to draw a set B of size i which contains these P - i fixed elements is (i choose 2i - P) / (P choose i). ∎
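The counting at the end of the proof can be made concrete. The formula below is my reading of the binomial ratio (with i scheduled messages, period P, and the delay normalized to zero), checked against brute-force enumeration:

```python
from itertools import combinations
from math import comb

def fail_prob(P, i):
    """Assumed reconstruction of Lemma 16: step i + 1 fails iff the
    random second-period trace B (size i) contains the P - i times left
    free in the first period; that forces 2*i >= P, and then B's
    remaining 2*i - P elements are free choices among A's i elements."""
    if 2 * i < P:
        return 0.0
    return comb(i, 2 * i - P) / comb(P, i)

def brute_force(P, i):
    """Enumerate all traces B to count the blocking ones directly."""
    free = set(range(i, P))          # complement of A = {0, ..., i-1}
    hits = sum(1 for B in combinations(range(P), i) if free <= set(B))
    return hits / comb(P, i)
```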

From Lemma 16, we can derive the probability of success of Greedy Uniform by a simple product.

Theorem 17.

The probability, over all instances with n messages and period P, that Greedy Uniform solves pma positively is the product of the (1 - p_i) for i from 0 to n - 1.

If we fix the load strictly below one, we can bound p_i using Stirling's formula. The probability that Greedy Uniform fails is then bounded by a quantity whose limit is zero when the period goes to infinity, which explains why Greedy Uniform performs well for large periods.

V-C Experimental results

We present results of simulations of the algorithms of the previous section, to experimentally assess their performance on random instances. The settings are as in Sec. III-D. The evaluated algorithms are:

  • First Fit

  • Greedy Uniform

  • Greedy Potential, which schedules the messages in arbitrary order, choosing the possible offset which maximizes the potential of the unscheduled messages

  • Swap

  • Exact Resolution
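Among these, Greedy Potential admits a compact sketch; this rendering, under the same assumed model and with ties broken toward the smallest offset, is mine:

```python
def greedy_potential(delays, P):
    """Schedule messages in order; among the possible offsets of the
    current message, pick one maximizing the total potential of the
    still-unscheduled messages (i.e. their remaining freedom)."""
    used_first, used_second = set(), set()

    def pot(d):
        return sum((t + d) % P in used_second for t in used_first)

    for k, d in enumerate(delays):
        offs = [o for o in range(P)
                if o not in used_first and (o + d) % P not in used_second]
        if not offs:
            return False
        rest = delays[k + 1:]
        best = None
        for o in offs:
            # Tentatively place the message, score, then undo.
            used_first.add(o)
            used_second.add((o + d) % P)
            score = sum(pot(d2) for d2 in rest)
            used_first.discard(o)
            used_second.discard((o + d) % P)
            if best is None or score > best[0]:
                best = (score, o)
        o = best[1]
        used_first.add(o)
        used_second.add((o + d) % P)
    return True
```

On delays [2, 2, 0] with P = 4, a smallest-offset First Fit gets stuck on the last message, while the potential-guided choice leaves room and succeeds.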

As in Sec. III-D, the success rate on random instances is much better than the worst-case analysis suggests. In Fig. 9, all algorithms succeed on all instances at low loads. Greedy Uniform behaves exactly as proved in Theorem 17, with a very small variance. The performances of Swap and of its simpler variant Greedy Potential, which optimizes the potential in a greedy way, are much better than those of First Fit or Greedy Uniform. Remarkably, Swap always finds an assignment up to fairly high loads. Swap is extremely close to Exact Resolution, but it fails to find some assignments at high loads, as shown in Fig. 10.

Fig. 9: Success rate
Fig. 10: Success rate

Vi Conclusion

In this article, we have proved that there is always a solution to pma, and that it can be found in polynomial time, for messages of arbitrary size when the load is less than 40%, and for messages of size one when the load is less than 61%. Moreover, the performance of the presented algorithms on average instances is shown to be excellent, empirically, and also theoretically for Greedy Uniform. Hence, we can use the simple algorithms presented here to schedule C-RAN messages without buffering nor additional latency, if we are willing to use only half the bandwidth of the shared link.

Many questions on pma remain open, the first being its complexity. Moreover, we plan to adapt the Swap algorithm to messages of larger sizes. While the potential can be defined in the same way, it is not yet clear that it would help as much in this setting. We could also analyze the behavior of the presented algorithms over random instances with larger messages. For instance, it is possible to form very long compact tuples over random instances, and we could use that to obtain an algorithm which works with high probability for higher loads. First Fit or Meta Offset can easily be adapted to network topologies with more than a single link, but we could also try to adapt Compact Tuples and Swap. Finally, to capture networks carrying several types of messages, it would be interesting to allow different message sizes, which seemingly makes methods based on meta-offsets useless.
