Relay-Assisted and QoS Aware Scheduling to Overcome Blockage in mmWave Backhaul Networks

Yong Niu, et al. (Beijing Jiaotong University, Tsinghua University, NetEase, Inc.), 12/29/2018

In scenarios where small cells are densely deployed, millimeter wave (mmWave) wireless backhaul networks are widely used. However, mmWave links are easily blocked by obstacles, and forwarding the data of blocked flows remains a significant challenge. To ensure backhauling capacity, the quality of service (QoS) requirements of flows should be satisfied. In this paper, we investigate the problem of optimal scheduling to maximize the number of flows satisfying their QoS requirements, with relays exploited to overcome blockage. To obtain a practical solution, we propose a relay-assisted and QoS aware scheduling scheme for backhaul networks, called RAQS. It consists of a relay selection algorithm and a transmission scheduling algorithm. The relay selection algorithm selects non-repeating relays with high link rates for the blocked flows, which helps to satisfy the QoS requirements of flows as soon as possible. Then, according to the relay selection results, the transmission scheduling algorithm exploits concurrent transmissions to satisfy the QoS requirements of as many flows as possible. Extensive simulations show that RAQS can effectively overcome the blockage problem and increase the number of completed flows and the network throughput compared with other schemes. In particular, the impact of the relay selection parameter is also investigated to further guide the relay selection.


I Introduction

With the rapid growth of mobile data demand, it is becoming a trend to densely deploy small cells underlying the homogeneous macrocells to improve network capacity. This kind of network is usually referred to as a heterogeneous cellular network (HCN) [1]. Because of the huge available bandwidth in the millimeter wave (mmWave) band, such as the 60 GHz band and the E-band (71-76 GHz and 81-86 GHz), mmWave wireless backhaul communication can provide multi-gigabit transmission rates and support numerous high-speed data services. Compared with fiber-based backhaul, it is more cost-effective, more flexible, and easier to deploy. Thus, it has become a candidate solution for the fifth generation (5G) of mobile communication.

Compared with electromagnetic waves at lower frequencies, mmWave communication has three main characteristics: high propagation loss, directivity, and vulnerability to obstacles [2, 3, 4]. Directional antennas are usually adopted to combat the high propagation loss, and beamforming techniques are used to direct the beams of the transmitter and receiver towards each other. With directional communication, the multi-user interference (MUI) between different links is reduced, and thus concurrent transmissions (i.e., spatial reuse) can be fully utilized to improve the transmission efficiency and increase the network capacity.

Motivation: However, because of this vulnerability to obstacles, the flows (i.e., the traffic transmitted between two stations) in the mmWave band are easily blocked, which seriously affects the user experience of delay-sensitive applications, e.g., high-definition television (HDTV). For the blocked flows, how to ensure data transmission has become an urgent problem. Besides, some bandwidth-intensive applications supported by mmWave networks, such as uncompressed video streaming, should be provided with multi-Gbps throughput to guarantee the transmission quality [5]. Therefore, in order to ensure backhauling capacity, the quality of service (QoS) requirements of flows should also be taken into account. Here, the QoS requirement of a flow means its minimum throughput requirement.

Main Contributions: According to the above analysis, in this paper, we aim at solving the blockage problem and maximizing the number of flows satisfying their QoS requirements. We propose a relay-assisted and QoS aware (RAQS) scheduling scheme to overcome blockage. In this scheme, we consider actively deploying relays in the densely deployed small-cell scenario to forward data for the blocked flows in the backhaul network. A relay node has a simple structure and is easy to deploy; it has lower cost and is more flexible than a base station (BS). Furthermore, using relays can reduce the traffic load of the BSs and the complexity of scheduling.

In RAQS, we optimize the relay selection for the blocked flows. Since the number of slots in one superframe is limited and all nodes are assumed to be half-duplex, not every relay is beneficial to achieving the flows' QoS requirements. Therefore, the relay path should have a high rate so that the demand of one flow can be completed as soon as possible and more slots can be saved for other flows. Besides, different blocked flows should choose different relays to reduce node contention and allow more flows to transmit concurrently, which helps satisfy the QoS requirements of more flows. After establishing the relay paths, we propose an efficient and low-complexity scheduling algorithm for the case where relay paths and backhaul paths coexist. It fully exploits concurrent transmissions to satisfy the QoS requirements of flows. The contributions of this paper are summarized as follows.

  • We formulate the optimal concurrent transmission scheduling problem of the mmWave backhaul network, with both relay paths and backhaul paths considered, as a mixed integer nonlinear programming (MINLP) problem. In order to ensure backhauling capacity and guarantee fairness, we aim at maximizing the number of flows with their QoS requirements satisfied.

  • We design a relay selection algorithm to select appropriate relay(s) for the blocked flow(s). The rates of the selected relay paths are high, and different blocked flows select different relays. In this way, we can make full use of concurrent transmissions to achieve the QoS requirements of flows within the limited time of one superframe.

  • To achieve a practical solution, we propose a heuristic algorithm to solve the joint scheduling problem of relay paths and backhaul paths. The interference between concurrent flows and the difference between the two-hop relay path and the one-hop backhaul path are considered to meet the QoS requirements of more flows and improve the network throughput.

  • We conduct extensive simulations in the mmWave band to evaluate the performance of our RAQS scheme. The results demonstrate that our scheme keeps the number of completed flows and the system throughput at a high and stable level. In particular, we also investigate the impact of the relay selection parameter on system performance.

The rest of this paper is organized as follows. Section II introduces the related work. Section III introduces the system model and assumptions. In Section IV, we formulate the optimal scheduling problem when relay paths and backhaul paths coexist as an MINLP, and in Section V, the relay selection algorithm and the corresponding scheduling algorithm in RAQS are described in detail. In Section VI, we analyze the impact of the interference threshold choice on the performance of our scheme. Finally, we present the simulation results in Section VII and conclude this paper in Section VIII.

II Related Work

Recently, mmWave networks in scenarios where small cells are densely deployed have gained much attention. Taori et al. [6] considered a time-division multiplexing (TDM)-based scheduling scheme for the backhaul network, but the directivity of mmWave and concurrent transmissions are not exploited. Qiao et al. [7] proposed a slot resource sharing scheme in the mmWave 5G cellular network, where D2D communications and concurrent transmissions are enabled to improve network capacity. However, to simplify the problem, only non-interfering links are allocated to each time slot to share the resources [7]. Later, Qiao et al. [5] proposed an STDMA-based scheme in mmWave WPANs, where both non-interfering and interfering links are allowed to transmit concurrently. With the QoS requirements of flows considered, the main idea of the scheme is that a flow is scheduled if scheduling it can increase the system throughput. In this way, the number of flows satisfying their QoS requirements is also maximized. Based on [5], Zhu et al. [8] proposed a maximum QoS-aware independent set scheduling algorithm named MQIS for the mmWave backhaul network. In MQIS, a QoS aware priority is exploited to further increase the number of flows satisfying their QoS requirements and the system throughput. In [9], a joint transmission scheduling algorithm for the radio access and backhaul of small cells in the mmWave band was proposed. However, all the schemes mentioned above ([6], [5], [8], and [9]) do not consider the scenario where flows may be blocked. In [10], both D2D communications and concurrent transmissions are exploited to improve the energy efficiency of multicast transmission in mmWave small cells, where power control is performed after concurrent transmission and D2D transmission scheduling to reduce energy consumption while ensuring the achieved throughput; however, the blockage problem is not considered. In [11], multi-hop D2D transmissions are exploited to optimize the transmission scheduling from the base station to the service points, where mobility information is considered. In [12], full duplex mmWave communication is utilized to achieve a better quality of service guarantee in terms of throughput for flows in mmWave backhaul networks. In [13], concurrent transmissions, a multi-level antenna codebook, and D2D communications are jointly exploited to improve the throughput of multicast transmissions in mmWave small cells.

There is also some literature focusing on the flow blockage problem. Genc et al. [14] tried to rely on reflections from walls and other surfaces to overcome obstruction. Singh et al. [15] used strategically placed reflectors to provide alternate paths for the blocked paths. Nevertheless, in these schemes, the power efficiency is reduced because of the power loss on the reflective surface and the extra path loss caused by the longer transmission path. In [16], the authors resolved flow blockage by switching the beam path from a LOS link to an NLOS link, but NLOS transmissions suffer from huge attenuation compared with LOS transmissions. In [17], an analog beam selection scheme with low complexity is proposed, and furthermore, a beam switching scheme based on channel state information is proposed to overcome the blockage problem.

In [7], Qiao et al. proposed a relaying mechanism to reduce the link outage probability by replacing a blocked link with an alternative path. Niu et al. [1] and Qiao et al. [18] used other non-PNC (piconet controller) stations in WPANs to establish multi-hop paths to overcome blockage. However, Qiao et al. [18] focus more on how to exploit multiple short hops to improve the flow throughput and balance the traffic load across the network. Singh et al. [19] proposed a novel multihop medium access control (MAC) architecture for the 60 GHz in-room WPAN. In this architecture, if the LOS path between the access point (AP) and the wireless terminal (WT) is obstructed, the AP intelligently chooses a WT in the neighboring sectors (with expected LOS connectivity to the lost WT) to act as a relay for future data transfers. However, it does not consider the QoS requirements of flows. In [20], Leong et al. proposed a 3D pyramid network infrastructure consisting of a single AP with four (but not restricted to four) active relays operating in parallel to overcome obstructions, but they also do not consider the QoS requirements of flows and do not discuss a specific relay selection algorithm. Owing to the limited slot resources and the half-duplex nature of nodes, we must note that not every relay is beneficial to satisfying the QoS requirements of flows.

In this paper, we develop a relay-assisted and QoS aware scheduling (RAQS) scheme, which exploits independent relay nodes to solve the blockage problem in mmWave backhaul networks while taking the QoS requirements of flows into account. Specifically, it consists of a relay selection algorithm and a transmission scheduling algorithm for the case where relay paths and backhaul paths coexist.

III System Model and Assumptions

Fig. 1: The small cells densely deployed underlying the macrocell.

We consider a scenario where small cells are densely deployed underlying the homogeneous macrocell. As shown in Figure 1, the network includes BSs and relays. One or more BSs are connected to the backbone network via the macrocell and are called gateways [21]. A backhaul network controller (BNC) resides on one of the gateways. The BNC synchronizes and coordinates the data transmission in the backhaul network [2]. It can obtain the QoS requirement of each flow and the location of each BS or relay. The BSs are connected through backhaul links in the mmWave band to form a mesh network. When there is a traffic demand between two BSs, we say there is a flow between them. Each BS or relay is equipped with an electronically steerable directional antenna so that directional transmissions can be performed between transmitters and receivers. When a flow is blocked, it can be forwarded through the surrounding relay nodes. For simplicity but without loss of generality, we consider only two-hop relay paths in the mmWave band. In order to achieve a high transmission rate, we assume that line-of-sight (LOS) transmissions can be achieved between the optional relays and the sources (or the destinations) of the blocked flows. Of course, the original flows (i.e., the unblocked flows) also perform LOS transmissions. Each node (BS or relay) is assumed to be half-duplex, so the flows sharing a common node cannot be transmitted simultaneously.

III-A MAC Frame Structure

Fig. 2: The structure of one superframe.

In our algorithm, time is divided into a series of superframes [2]. As shown in Figure 2, each superframe consists of two phases: scheduling phase and transmission phase [22]. In the scheduling phase, BNC receives the transmission request of each flow, selects relay(s) for the blocked flow(s) and makes the scheduling decision. Then, it broadcasts the scheduling decision to the whole network. In the transmission phase, time is further divided into equal time slots (TS). In every TS, some flows can be transmitted concurrently (either through a relay path or through a backhaul path) according to the scheduling decision.

III-B Received Power

In this paper, we use a popular LOS path loss model for mmWave, as described in [23]. The received power at the destination node r_i from the source node s_i of link i can be expressed as

P_r(i) = k_0 P_t G_t(s_i, r_i) G_r(s_i, r_i) d_{s_i r_i}^{-n},    (1)

where k_0 is a factor proportional to (λ/4π)^2, with λ denoting the wavelength; P_t represents the transmission power of the transmitter; G_t(s_i, r_i) and G_r(s_i, r_i) represent the transmit antenna gain and the receive antenna gain in the direction from s_i to r_i, respectively; d_{s_i r_i} denotes the distance between s_i and r_i; and n is the path loss exponent [5].

Similarly, the received interference from the source node s_j of link j at the destination node r_i of link i can be expressed as

I(j, i) = ρ k_0 P_t G_t(s_j, r_i) G_r(s_j, r_i) d_{s_j r_i}^{-n},    (2)

where ρ is the multi-user interference (MUI) factor between different links, which is related to the cross correlation of signals from different links [8].

III-C Data Rate

With the reduction of the multipath effect for directional mmWave links, the data rate of link i can be estimated according to Shannon's channel capacity [21]:

R_i = η W log_2 ( 1 + P_r(i) / ( N_0 W + Σ_{j ∈ S, j ≠ i} I(j, i) ) ),    (3)

where η ∈ (0, 1) is a factor that describes the efficiency of the transceiver design, W is the channel bandwidth, and N_0 is the one-sided power spectral density of white Gaussian noise [5]. Here, j indexes the links in the set S of links that are transmitted simultaneously with link i. In fact, the actual rate of one link can only be determined once the scheduling decision is made, i.e., once it is known which links are scheduled in the same slot as link i.
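To make the link model concrete, the following Python sketch evaluates (1)-(3) for a single 60 GHz link with one concurrent interferer. It is only an illustration of the formulas above; the helper names, antenna gains, and distances are assumed values for the example, not parameters from the paper.

```python
import math

def received_power(k0, p_t, g_t, g_r, d, n):
    """LOS received power, as in (1): P_r = k0 * P_t * G_t * G_r * d^(-n)."""
    return k0 * p_t * g_t * g_r * d ** (-n)

def link_rate(p_r, interference_sum, eta, bandwidth, n0):
    """Shannon-based rate estimate, as in (3):
    R = eta * W * log2(1 + P_r / (N0 * W + total MUI))."""
    sinr = p_r / (n0 * bandwidth + interference_sum)
    return eta * bandwidth * math.log2(1 + sinr)

# Example: a 50 m LOS link at 60 GHz with 20 dBi antenna gains, interfered by
# one concurrent link received through sidelobes (0 dBi) over 80 m.
wavelength = 3e8 / 60e9                      # about 5 mm
k0 = (wavelength / (4 * math.pi)) ** 2       # proportionality factor in (1)
n0 = 10 ** (-134 / 10) * 1e-3 / 1e6          # -134 dBm/MHz -> W/Hz
p_r = received_power(k0, p_t=1.0, g_t=100.0, g_r=100.0, d=50.0, n=2)
mui = 1.0 * received_power(k0, 1.0, 1.0, 1.0, 80.0, 2)   # rho = 1, as in Table I
print(link_rate(p_r, mui, eta=0.5, bandwidth=1.2e9, n0=n0) / 1e9, "Gbps")
```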

IV Problem Formulation

In this section, we formulate the optimal scheduling problem when relay paths and backhaul paths coexist as an MINLP.

It is assumed that there are N flows in the network and each flow f has its own QoS requirement q_f. If a flow is blocked, it selects a relay path to forward its data. An original (unblocked) flow is still transmitted through its backhaul path.

For flow f, the maximum number of hops of the selected path is denoted as H_f. If it chooses the relay path, H_f = 2; if it is still transmitted on the backhaul path, H_f = 1. We use a binary variable a^h_{f,i} to indicate whether the hth hop of the relay path of flow f is scheduled in the ith slot (i = 1, 2, ..., K): if it is, a^h_{f,i} = 1; otherwise, a^h_{f,i} = 0. The source and destination of the hth hop of the relay path of flow f are denoted by s^h_f and d^h_f, respectively. Similarly, b_{f,i} indicates whether the backhaul path of flow f is scheduled in the ith slot, and s_f and d_f denote the source and destination of the backhaul path of flow f. Besides, the binary variable δ^{h,p}_{f,l} = 1 means that the hth hop of the relay path of flow f and the pth hop of the relay path of flow l are adjacent (i.e., they share a common node); ε_{f,l} = 1 means that the backhaul paths of flows f and l are adjacent; and θ^{h}_{f,l} = 1 means that the hth hop of the relay path of flow f and the backhaul path of flow l are adjacent.

In this paper, we aim at maximizing the number of flows satisfying their QoS requirements, i.e., the number of completed flows. This is because many applications in the mmWave band require multi-Gbps throughput to guarantee transmission quality, so the QoS requirements of flows should be taken into account. However, since the number of slots in the transmission phase is limited, if we blindly aim at increasing the total network throughput, the limited slot resources are always allocated to the flows with high transmission rates, and the flows with low transmission rates will hardly be scheduled, which is unfair. Therefore, maximizing the number of completed flows ensures both backhauling capacity and fairness.

A blocked flow transmitted on a relay path is called a completed flow only when its QoS requirement is achieved on both hops. This can be expressed as min(T^1_f, T^2_f) ≥ q_f, where T^1_f and T^2_f represent the actual throughput of the first and the second hop of the relay path of flow f, respectively. An original flow transmitted on a backhaul path is called a completed flow when its QoS requirement is achieved on its single hop, which can be expressed as T^0_f ≥ q_f, where T^0_f represents the actual throughput of the backhaul path of flow f. Specifically, the throughput of the link currently being scheduled in the superframe for flow f can be expressed as

T_f = ( Σ_{i=1}^{K} c_{f,i} R_{f,i} Δt ) / ( T_s + K Δt ).    (4)

Here, T_s is the duration of the scheduling phase and Δt is the duration of one slot. R_{f,i} denotes the actual rate of flow f in the ith slot, with the interference from other concurrently scheduled flows taken into account. We use a scheduling vector c_i = (c_{1,i}, c_{2,i}, ..., c_{N,i}) to indicate which flow(s) is (are) scheduled in the ith slot: c_{f,i} = 1 means flow f is scheduled in this slot, and c_{f,i} = 0 means it is not. According to (18), R_{f,i} can be calculated as

R_{f,i} = η W log_2 ( 1 + P_r(f) / ( N_0 W + Σ_{l ≠ f} c_{l,i} I(l, f) ) ),    (5)

where P_r(f) and I(l, f) follow (1) and (2) for the links currently serving flows f and l.
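As a small illustration of (4), the following Python sketch computes the per-superframe throughput of one hop from its slot-by-slot schedule and rates; the toy numbers (1200 active slots at 3 Gbps) are assumed for the example only.

```python
def flow_throughput(schedule, rates, t_s, delta_t):
    """Throughput of one hop over a superframe, as in (4):
    T = (sum_i c_i * R_i * delta_t) / (T_s + K * delta_t)."""
    k = len(schedule)
    transmitted_bits = sum(c * r * delta_t for c, r in zip(schedule, rates))
    return transmitted_bits / (t_s + k * delta_t)

# Toy example: a hop active in 1200 of K = 3000 slots at 3 Gbps.
K = 3000
sched = [1] * 1200 + [0] * (K - 1200)
rate = [3e9] * K
print(flow_throughput(sched, rate, t_s=850e-6, delta_t=18e-6) / 1e9, "Gbps")
# Roughly 1.2 Gbps: enough to complete a flow with q_f = 1 Gbps.
```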

For convenience, we use a binary variable I_f to indicate whether flow f is completed: I_f = 1 indicates that flow f is completed, and I_f = 0 indicates that it is not. Therefore, the optimal scheduling problem (P) when the relay paths and backhaul paths coexist can be formulated as follows.

max Σ_{f=1}^{N} I_f    (6)

For a blocked flow transmitted on a relay path,

I_f ≤ min(T^1_f, T^2_f) / q_f.    (7)

For an original flow transmitted on a backhaul path,

I_f ≤ T^0_f / q_f.    (8)

Now let us analyze the constraints. First, due to the half-duplex nature of the nodes, adjacent links cannot be transmitted simultaneously. Three cases are included: 1) when two adjacent flows are both blocked and transmitted on relay paths, the constraint can be expressed as

a^h_{f,i} + a^p_{l,i} ≤ 1,  if δ^{h,p}_{f,l} = 1,  ∀ i;    (9)

2) when two adjacent flows are both unblocked and transmitted on backhaul paths, the constraint can be expressed as

b_{f,i} + b_{l,i} ≤ 1,  if ε_{f,l} = 1,  ∀ i;    (10)

3) when one flow is transmitted on a relay path and its adjacent flow is transmitted on a backhaul path, the constraint can be expressed as

a^h_{f,i} + b_{l,i} ≤ 1,  if θ^{h}_{f,l} = 1,  ∀ i.    (11)

Second, if flow f selects the relay path, due to the inherent order of transmission, different hops of the same path cannot be scheduled concurrently, which can be expressed as

Σ_{h=1}^{H_f} a^h_{f,i} ≤ 1,  ∀ i.    (12)

Third, on the relay path, the hth hop should be scheduled ahead of the (h+1)th hop due to the inherent transmission order, which can be expressed as

Σ_{i=1}^{t} a^{h+1}_{f,i} ≤ Σ_{i=1}^{t-1} a^{h}_{f,i},  ∀ t.    (13)

Note that constraint (13) is a group of constraints, since t varies from 1 to K. Besides, h varies from 1 to H_f − 1, which ensures that each prior hop is scheduled ahead of the hop behind it.

Finally, each flow can select at most one path. In other words, in one slot it is transmitted either on the backhaul path or on one hop of the relay path, and a blocked flow that has not selected a relay node cannot be transmitted at all. This can be expressed as

Σ_{h=1}^{H_f} a^h_{f,i} + b_{f,i} ≤ 1,  ∀ i,    (14)

where, for a blocked flow without a selected relay, a^h_{f,i} = 0 and b_{f,i} = 0 for all h and i.

This is a mixed integer nonlinear programming (MINLP) problem and is NP-hard; it is difficult to solve in polynomial time. Therefore, we should propose an efficient and practical algorithm to solve it.
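To make the constraints concrete, the sketch below checks a candidate schedule against (9)-(14) under a simplified encoding; the data structures (per-slot sets of (flow, hop) pairs, with hop 0 for the backhaul path) are assumptions made for this illustration and are not from the paper.

```python
from collections import defaultdict

def feasible(schedule, paths):
    """schedule[i] is the set of (flow, hop) pairs active in slot i; hop 0 is
    the backhaul path, hops 1 and 2 the relay path.  paths[flow][hop] is the
    (source, destination) node pair of that hop."""
    hop1_count = defaultdict(int)   # slots already used by first relay hops
    hop2_count = defaultdict(int)   # slots already used by second relay hops
    for active in schedule:
        nodes_in_use, flows_in_slot = set(), set()
        for flow, hop in active:
            # (12) and (14): at most one hop / one path per flow per slot.
            if flow in flows_in_slot:
                return False
            flows_in_slot.add(flow)
            # (9)-(11): half-duplex nodes, so no node in two concurrent links.
            src, dst = paths[flow][hop]
            if src in nodes_in_use or dst in nodes_in_use:
                return False
            nodes_in_use.update((src, dst))
            # (13): the second hop never runs ahead of the first hop.
            if hop == 2:
                hop2_count[flow] += 1
                if hop2_count[flow] > hop1_count[flow]:
                    return False
        for flow, hop in active:
            if hop == 1:
                hop1_count[flow] += 1
    return True
```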

V The Proposed RAQS Scheme

In this section, we describe the proposed relay-assisted and QoS aware scheduling scheme (RAQS). It consists of two parts: the first part is a relay selection algorithm for the blocked flows, and the second part is a heuristic transmission scheduling algorithm for the case where relay paths and backhaul paths coexist. Concurrent transmissions are fully exploited and the QoS requirements of flows are explicitly considered in both parts. In particular, it is assumed that the blockage persists throughout the transmission. Note that in this paper, when the QoS requirement of one flow is achieved, the flow is called a completed flow.

V-A Relay Selection Algorithm

When a flow is blocked, we need to select a relay for it to forward its data. There are multiple relay nodes in the scenario; however, not every relay is beneficial to achieving the QoS requirements of more flows. On one hand, if the rate of the relay link is too low, then even if the flow is transmitted throughout the transmission phase, it may not be able to achieve its QoS requirement within the limited superframe time. On the other hand, if multiple blocked flows select the same relay, more node contention may arise because every node is half-duplex. This is not conducive to concurrent transmissions and thus reduces the transmission efficiency; as a result, it is also not beneficial to satisfying more flows' QoS requirements.

To guarantee a high rate, according to (1) and (18), the selected relay(s) cannot be too far from the source and the destination of the blocked flow. As shown in Fig. 3, if the flow between s_f and d_f is blocked, we draw two circles with s_f and d_f as the centers, respectively. The radii of both circles are equal to the distance between s_f and d_f. The relay nodes that fall within the overlapping region of the two circles (excluding the borders) form the initial candidate relay set for flow f, which is denoted as Can1_f.

Fig. 3: The selection of initial candidate relays for the blocked flow from s_f to d_f.

In order to further guarantee the transmission rate of flow f, we then select relay(s) from Can1_f according to the time that the relay path takes to transmit a certain amount of data. Only the relays whose required time meets the following condition can be selected:

( 1 / R^0_f ) / ( 1 / R^1_f + 1 / R^2_f ) > β,    (15)

where R^0_f denotes the rate of the backhaul path when the flow is not blocked, and R^1_f and R^2_f denote the rates of the first and the second hop of the relay path, respectively. All rates here are calculated without interference, because the scheduling decision has not been made yet and the interference cannot be determined. β is called the relay selection parameter, which can be adjusted according to the actual situation. When a certain amount of data D is transmitted, the backhaul path takes D / R^0_f, while the two-hop relay path takes D / R^1_f + D / R^2_f. So the left side of (15) is the time ratio between the backhaul path and the relay path; to simplify the subsequent description, we call it TR (time ratio). The relay set for flow f selected in this way is denoted as Can2_f. If there is more than one relay in Can2_f, we choose the relay with the maximum TR in Can2_f and denote it by Can3_f.
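The following Python sketch puts the two filtering steps together for a single blocked flow: the geometric Can1 set, the TR test of (15) for Can2, and the maximum-TR choice Can3. The rate(a, b) helper, which should return the interference-free LOS rate between two positions (e.g., via (1) and (3)), is an assumed input rather than something specified by the paper.

```python
import math

def time_ratio(r_backhaul, r_hop1, r_hop2):
    """TR in (15): (D / R0) / (D / R1 + D / R2); independent of the amount D."""
    return (1.0 / r_backhaul) / (1.0 / r_hop1 + 1.0 / r_hop2)

def select_relay(src, dst, relays, rate, beta):
    """Return the chosen relay id (Can3) for the blocked flow src -> dst,
    or None if no relay passes the TR test.  relays maps ids to positions."""
    d = math.dist(src, dst)
    # Can1: relays strictly inside the overlap of the two circles of radius d.
    can1 = {r: pos for r, pos in relays.items()
            if math.dist(pos, src) < d and math.dist(pos, dst) < d}
    # Can2: relays whose two-hop path is fast enough relative to the backhaul.
    r0 = rate(src, dst)
    can2 = {r: time_ratio(r0, rate(src, pos), rate(pos, dst))
            for r, pos in can1.items()}
    can2 = {r: tr for r, tr in can2.items() if tr > beta}
    # Can3: the relay with the maximum time ratio, if any remain.
    return max(can2, key=can2.get) if can2 else None
```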

It is worth noting that if, in Can3, different flows select the same relay, the half-duplex constraint may cause more node contention, which is harmful to concurrent transmissions and to achieving the QoS requirements of more flows. Therefore, we only assign the repeated relay to the flow that needs it most.

1 Input: The existing candidate relay set array Can2; The existing selected path array P;
2          The two flows , that select the same relay ;
3 Output: The new selected path array P and new Can2;
4 set the length of , the length of ;
5 if  then
6       , is the flow with a higher ; removes from ; ;
7      
8else if  then
9       ; removes from ;
10       if  then
11             the suboptimal relay ;
12             if  has been assigned to  then
13                   , iterate Algorithm 1;
14                  
15            
16      else
17             ;
18            
19      
20else if  then
21       , removes from ;
22       if  then
23             the suboptimal relay ;
24             if  has been assigned to  then
25                   , iterate Algorithm 1;
26                  
27            
28      else
29             ;
30            
31      
32else
33       , is the flow with a higher ; the other removes from ;
34       if  then
35             the suboptimal relay ;
36             if  has been assigned to  then
37                   , iterate Algorithm 1;
38                  
39            
40      else
41             ;
42            
43      
Algorithm 1 Eliminating the Repeated Relay

The algorithm for eliminating a repeated relay is shown in Algorithm 1. We allocate the relay according to the number of relays in Can2, because the relays in Can2 are the relays with high rates. For a flow, if there is only one relay in its Can2, we consider that it needs the relay most; otherwise, a higher TR means the flow needs the relay more. In addition, we denote the selected path array as P. If a blocked flow f picks out a relay, P(f) is set to the selected relay's ID (1, 2, ...); otherwise P(f) = 0. The initial P is equal to Can3. Specifically, the following three cases are included. 1) If the two flows that select the same relay both have only one candidate relay in Can2, we assign the relay to the flow with the higher TR; the other flow then cannot be transmitted, as described in line 6. 2) If one of the two flows has only one candidate relay in Can2 and the other has more than one, we assign the relay to the former; the latter removes the repeated relay from its Can2 and selects the suboptimal relay (i.e., the relay with the second highest TR in Can2), as shown in lines 7-22. 3) If both flows have more than one candidate relay in Can2, we again assign the relay to the flow with the higher TR; the other removes the repeated relay from its Can2 and selects the suboptimal relay, as shown in lines 23-30. If the suboptimal relay has already been assigned to another flow, we iteratively apply the above rules until an unused relay is selected for the flow or its Can2 becomes empty. An empty Can2 means the flow has no path to transmit on.
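As a compact illustration of this de-duplication logic, the Python sketch below resolves one pair of flows that picked the same relay. It simplifies Algorithm 1 by looping over the losing flow's remaining candidates instead of re-invoking the procedure, and all data structures (can2 as per-flow {relay: TR} maps, path entries with 0 meaning no path, an assigned set) are assumptions of this sketch.

```python
def resolve_repeated_relay(f1, f2, relay, can2, tr, path, assigned):
    """Give the contested relay to the flow that needs it most (cases 1-3 in
    the text), then move the other flow to its best unused fallback relay."""
    if len(can2[f2]) == 1 and len(can2[f1]) > 1:
        winner, loser = f2, f1
    elif len(can2[f1]) == 1 and len(can2[f2]) > 1:
        winner, loser = f1, f2
    else:  # both have one candidate, or both have several: higher TR wins
        winner, loser = (f1, f2) if tr[f1] >= tr[f2] else (f2, f1)
    path[winner] = relay
    assigned.add(relay)
    # The losing flow falls back to its suboptimal (second highest TR) relay.
    del can2[loser][relay]
    while can2[loser]:
        backup = max(can2[loser], key=can2[loser].get)
        if backup not in assigned:
            path[loser] = backup
            assigned.add(backup)
            return
        del can2[loser][backup]   # already taken by another flow, try the next
    path[loser] = 0               # Can2 is empty: the blocked flow has no path
```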

V-B The Proposed Transmission Scheduling Algorithm

After selecting a proper relay for each blocked flow, as indicated by constraint (14), we propose a heuristic scheduling algorithm to solve the joint scheduling problem of relay paths and backhaul paths. In order to fully exploit concurrent transmissions and let more flows achieve their QoS requirements, the concept of the contention graph in [24] is used. The contention graph reflects the global information of the contention residing in the network [8]. Besides, the difference between the two-hop relay path and the one-hop backhaul path is fully considered.

Fig. 4: The contention graph with contention between link 1 and link 2 and contention between link 1 and link 4.

In the contention graph, each vertex represents one link (a relay link or a backhaul link). If two links share a common node, or the interference that one link imposes on another is larger than a threshold σ, as shown in (16), we say there is contention between them and add an edge between the two vertices. Links that share a common node cannot be scheduled concurrently due to constraints (9), (10), and (11). For example, as shown in Figure 4, there is contention between link 1 and link 2, which indicates that link 1 and link 2 share a common node or the interference between them is severe. Links with contention cannot be scheduled simultaneously. In contrast, there is no contention between link 1 and link 3, so they can be scheduled in the same slot. The number of edges of a vertex is called the degree of the link. For instance, the degree of link 1 is 2, and the degree of link 3 is 0.

I(l, l') > σ  or  I(l', l) > σ.    (16)
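A minimal Python sketch of the contention graph construction is shown below; interference(u, v) is assumed to return the power that link u's transmitter leaks into link v's receiver, as in (2).

```python
import itertools

def build_contention_graph(links, interference, sigma):
    """links maps a link id to its (source, destination) node pair.  Returns
    adjacency sets; the degree of a link is the size of its set, as used by
    the scheduling algorithm described below."""
    graph = {l: set() for l in links}
    for u, v in itertools.combinations(links, 2):
        share_node = bool(set(links[u]) & set(links[v]))
        if share_node or interference(u, v) > sigma or interference(v, u) > sigma:
            graph[u].add(v)
            graph[v].add(u)
    return graph
```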

The transmission scheduling algorithm is shown in Algorithm 2. First, in line 3, the BNC receives the transmission request of each flow with its QoS requirement q_f, and then it calculates the total number of slots t_f that flow f needs to achieve its QoS requirement on its selected path. If the flow is transmitted on the relay path, the total number of slots is the sum of the slots spent on each hop, t_f = t^1_f + t^2_f, where t^1_f and t^2_f are the numbers of slots spent on the first and the second hop of the relay path, respectively. If the flow is transmitted on the backhaul path, t_f equals the number of slots spent on its single hop, t^0_f. Specifically, the number of slots spent on the current hop h can be calculated as

t^h_f = ⌈ q_f (T_s + K Δt) / ( R^h_f Δt ) ⌉,    (17)

where R^h_f is the rate of the current hop without interference from other links. The numerator represents the total number of bits that need to be transmitted in one superframe, and the denominator represents the number of bits that flow f can transmit in one slot on the current hop.

The flows that need too many slots are removed from the scheduling set, as shown in line 4. This is based on the fact that the number of slots in the transmission phase is limited: if a flow is scheduled throughout the transmission phase but still cannot achieve its QoS requirement, the slots are wasted. Next, in line 6, we initialize the unscheduled headmost hop h_f and the maximum number of hops H_f for each flow. h_f is set to 1 at the beginning, which ensures that the first hop of a flow transmitted on a relay path is scheduled first, as required by constraints (12) and (13). In line 7, we also initialize a scheduling matrix C, which denotes the scheduling decisions over the K slots.

1 Input:The final selected path array P;
2 Output:The scheduling matrix C;
3 BNC receives the transmission request of each flow with their QoS requirements and calculates of each flow;
4 remove ; = the number of remaining flows;
5 Initialization: = 1 and for each flow;  ;
6 for slot  do
7       if  or one hop of some flow is newly completed then
8             generate G of all flows in the current hop and remove invalid flows from G;
9             while  do
10                   = the set of remaining flows in G;
11                   obtain for the flows in ;
12                   ;
13                   if  then
14                         ;
15                         if  then
16                               , is the number of slots spent in the current th hop;
17                              
18                        else
19                               select ;
20                              
21                        
22                  else
23                         select ;
24                        
25                  ;
26                   remove and its neighbors from G;
27            
28      else
29             ;
30            
31      if any  then
32             if  then
33                   ;
34                  
35            else
36                   ;
37                  
38            
39      
Algorithm 2 The Transmission Scheduling Algorithm

We make the scheduling decision slot by slot. If it is the first slot, or some flows have newly achieved their QoS requirements in their current hop, we use the method in [24] to generate the contention graph G of all flows in their current hops. The flows that have been completed and those that are ongoing do not need to be judged again, and to avoid contention, the neighbor(s) of ongoing flow(s) in the contention graph should not be scheduled. These three kinds of flows are called invalid flows and are removed from G, as shown in line 8. Then, in lines 9-22, we make the scheduling decision based on G. While G is not empty, we prefer to select a flow whose current hop equals 2, as shown in line 12, because this means its first hop has been finished; if we do not schedule the second hop, the slots used on the first hop are wasted. If there are multiple flows whose current hops equal 2, the flow with the minimal degree is preferred, as shown in lines 13-14. A smaller degree means less node contention or smaller interference between this link and other links, which is beneficial to concurrent transmissions and to satisfying more flows' QoS requirements, i.e., the objective in (6). If there are still multiple flows with the same minimal degree, we select the flow that needs the least number of slots in its current hop, as shown in lines 15-16: the faster one flow achieves its QoS requirement, the more slots can be saved for other flows. The process of selecting a transmission flow is shown in lines 12-21. The newly selected flow and its neighbor(s) are then removed from the contention graph, as shown in line 22. We repeat these steps until the contention graph becomes empty; in this way, we pick out all the flows that are scheduled in the current slot. If no hop of any flow is newly completed, as shown in line 24, we still use the scheduling decision of the last slot. At the end of each slot, as shown in lines 25-29, we check whether some flows have achieved their QoS requirements in their current hop. If a flow has achieved it and the current hop is its maximum hop, the flow is completed and will never be scheduled again, which helps to save slots and satisfy more flows' QoS requirements. If it is not the maximum hop, the hop index is increased by 1. As soon as one hop of some flow is completed, the contention graph of the current hops needs to be regenerated and the number of slots needed in the current hop needs to be recalculated. We iteratively make decisions with the method described above until all slots are used.
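To summarize the control flow, here is a greatly simplified Python sketch of the scheduling loop. It is not the paper's implementation: flow completion is tracked with the interference-free slot counts of (17) instead of the actual per-slot rates, the degree is taken from the graph as initially built, and the inputs (per-flow 'qos' and per-hop 'rate' fields, a contention_graph() helper over the flows' current hops) are assumptions made for the illustration.

```python
import math

def raqs_schedule(flows, K, t_s, delta_t, contention_graph):
    """Pick the set of concurrently transmitting flows for each of the K slots.
    flows[f] = {'qos': bits/s, 'rate': [interference-free rate per hop]};
    contention_graph(hops) returns adjacency sets keyed by flow id for the
    links of the given current hops."""
    frame = t_s + K * delta_t
    # (17): slots needed per hop; drop flows that cannot finish within K slots.
    need = {f: [math.ceil(flows[f]['qos'] * frame / (r * delta_t))
                for r in flows[f]['rate']] for f in flows}
    active = {f: {'hop': 0, 'left': need[f][0]}
              for f in flows if sum(need[f]) <= K}
    decisions, current, rebuild = [], set(), True
    for _ in range(K):
        if rebuild:
            rebuild = False
            graph = contention_graph({f: active[f]['hop'] for f in active})
            for f in list(current):            # ongoing flows keep their slots;
                for g in graph.pop(f, set()):  # their neighbours must wait
                    graph.pop(g, None)
            while graph:
                # Prefer second hops, then low degree, then few required slots.
                f = min(graph, key=lambda x: (-active[x]['hop'],
                                              len(graph[x]), active[x]['left']))
                current.add(f)
                for g in graph.pop(f):
                    graph.pop(g, None)
        decisions.append(sorted(current))
        for f in list(current):
            active[f]['left'] -= 1
            if active[f]['left'] == 0:         # this hop is newly completed
                rebuild = True
                current.discard(f)
                if active[f]['hop'] + 1 < len(need[f]):
                    active[f]['hop'] += 1      # relay flow moves to its 2nd hop
                    active[f]['left'] = need[f][active[f]['hop']]
                else:
                    del active[f]              # flow completed
    return decisions
```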

To estimate the algorithm complexity, observe that the outer for loop has K iterations and the inner while loop has at most N iterations in the worst case. Thus, the scheduling algorithm has a complexity of O(KN), which can be implemented in practice.

Parameter | Symbol | Value
Transmission power | P_t | 1000 mW
Path loss exponent | n | 2
MUI factor | ρ | 1
Transceiver efficiency factor | η | 0.5
System bandwidth | W | 1200 MHz
Background noise | N_0 | -134 dBm/MHz
Slot time | Δt | 18 μs
Scheduling phase time | T_s | 850 μs
Number of slots in transmission phase | K | 3000
Half-power beamwidth | θ_-3dB |
TABLE I: Simulation parameters.

VI Performance Analysis

In this section, we analyze the impact of the interference threshold choice on the performance of our scheme. To fully reap the benefits of concurrent transmissions, the sum of the transmission rates of the links scheduled in the same time slot should be maximized. This sum can be regarded as the throughput in one time slot and has a big impact on the system performance. We denote the set of concurrent links scheduled in the ith slot as S_i. For one link l ∈ S_i, we can obtain its transmission rate as

R_l = η W log_2 ( 1 + P_r(l) / ( N_0 W + Σ_{l' ∈ S_i, l' ≠ l} I(l', l) ) ).    (18)

The sum of the transmission rates of the links scheduled in the ith slot can be obtained as

R_sum(S_i) = Σ_{l ∈ S_i} R_l.    (19)

As stated before, concurrent links should have no contention. As indicated in (16), the interference between concurrent links is less than or equal to σ. Thus, the sum rate satisfies

R_sum(S_i) ≥ Σ_{l ∈ S_i} η W log_2 ( 1 + P_r(l) / ( N_0 W + (|S_i| − 1) σ ) ).    (20)

The right side of (20) can be regarded as a lower bound on the sum rate. To maximize the sum rate, we can optimize the interference threshold σ to maximize this lower bound, which can be expressed as

σ* = arg max_σ Σ_{l ∈ S_i} η W log_2 ( 1 + P_r(l) / ( N_0 W + (|S_i| − 1) σ ) ).    (21)

To maximize the lower bound, we should maximize the right side of (20). Writing γ_l = P_r(l) / (N_0 W + (|S_i| − 1) σ), the bound equals η W log_2 Π_{l ∈ S_i} (1 + γ_l). The number of concurrent links |S_i| is determined by the threshold σ. When σ increases, more links have no contention between each other; thus |S_i| increases, and the number of product terms increases, but each product term decreases. When σ decreases, |S_i| also decreases; the number of product terms decreases, while each product term increases. Therefore, both a too large and a too small σ decrease the sum rate. There should be an optimal value of σ that maximizes the sum rate, which is consistent with the performance evaluation results in Fig. 7 and Fig. 8.

On the other hand, since the function log_2(1 + x) is concave, by Jensen's inequality we can obtain

Σ_{l ∈ S_i} log_2 ( 1 + γ_l ) ≤ |S_i| log_2 ( 1 + (1 / |S_i|) Σ_{l ∈ S_i} γ_l ).    (22)

The equality holds when γ_l is equal for each link l ∈ S_i. When Σ_{l ∈ S_i} γ_l and |S_i| are fixed, more uniform γ_l achieves a higher sum rate and thus better network performance. With the transmission power fixed, more uniform link lengths achieve better performance. Therefore, the relays should be deployed so that the relay link lengths are similar to the backhaul link lengths. Accordingly, β should be set near 0.5 to achieve better network performance, which is also indicated in Fig. 9 and Fig. 10.

VII Performance Evaluation

VII-A Simulation Setup

In the simulations, as the algorithm performance depends on the locations of the BSs and relays, we consider a scenario in which 10 BSs are uniformly and randomly distributed in a square area, and the relay nodes obey a spatial Poisson distribution. The number of flows is set to 10, and the sources and destinations of the 10 flows are randomly selected. The QoS requirement of each flow is uniformly distributed between 1 Gbps and 3 Gbps [8]. The blocked flow(s) is (are) also randomly set, and the carrier frequency is 60 GHz. Both BSs and relays have the same transmission power P_t. In particular, we use the realistic antenna model in [25]; the gain of a directional antenna, in units of dB, can be expressed as follows.

G(θ) = G_0 − 3.01 ( 2θ / θ_-3dB )^2  for 0° ≤ |θ| ≤ θ_ml / 2;  G(θ) = G_sl  for θ_ml / 2 ≤ |θ| ≤ 180°.    (23)

Here θ denotes an angle within the range [−180°, 180°]. G_0 is the maximum antenna gain, which can be expressed as G_0 = 10 log ( 1.6162 / sin(θ_-3dB / 2) )^2. θ_-3dB denotes the half-power beamwidth. θ_ml denotes the main lobe width in units of degrees, which can be expressed as θ_ml = 2.6 θ_-3dB. The sidelobe gain is G_sl = −0.4111 ln(θ_-3dB) − 10.579 [21]. To better simulate the real scenario, we choose the other relevant parameters as shown in Table I, most of which are the same as those in [5].
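The antenna model above translates directly into a short Python helper; the sketch below assumes the beamwidth and the off-boresight angle are both given in degrees and uses the constants stated above.

```python
import math

def antenna_gain_db(theta_deg, theta_3db_deg):
    """Directional antenna gain of (23): parabolic main lobe of width
    2.6 * theta_-3dB, constant sidelobe gain elsewhere."""
    g0 = 10 * math.log10((1.6162 / math.sin(math.radians(theta_3db_deg / 2))) ** 2)
    main_lobe = 2.6 * theta_3db_deg
    g_sl = -0.4111 * math.log(theta_3db_deg) - 10.579
    if abs(theta_deg) <= main_lobe / 2:
        return g0 - 3.01 * (2 * theta_deg / theta_3db_deg) ** 2
    return g_sl

# Example: boresight and 90-degree off-axis gains for a 30-degree beamwidth.
print(antenna_gain_db(0, 30), antenna_gain_db(90, 30))
```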

VII-B Schemes for Comparison and Metrics for Evaluation

In the simulations, we compare our RAQS algorithm with the following three schemes:

1) MQIS: The maximum QoS-aware independent set based scheduling algorithm. In this algorithm, concurrent transmissions and a QoS aware priority are exploited to achieve more successfully scheduled flows and higher network throughput [8]. To the best of our knowledge, MQIS achieves the best performance in terms of the number of flows satisfying their QoS requirements and the system throughput; however, it does not provide a solution to the blockage problem.

2) STDMA: The spatial-time division multiple access algorithm [5]. In this algorithm, a flow is scheduled if scheduling it can increase the system throughput. It likewise does not provide a solution to the blockage problem.

3) Random relay: For the blocked flows, the random relay selection algorithm selects the final relay(s) uniformly at random, without any further optimization.

Two metrics, the number of completed flows and the system throughput, are used for evaluation. Only when a flow achieves its QoS requirement on all hops of its selected path can it be called a completed flow. The number of completed flows is the number of such flows in the system at the end of the simulation. The system throughput is the throughput of all flows in the network, including the throughput of uncompleted flows.

In particular, we simulate these two metrics under different numbers of blocked flows and different interference thresholds in the contention graph. Besides, the impact of the relay selection parameter β in (15) is also simulated. The simulations are repeated 100 times to obtain the average results.

VII-C Simulation Results

Fig. 5: Number of completed flows under different number of blocked flows.
Fig. 6: System throughput under different number of blocked flows.

To evaluate the impact of the number of blocked flows on the system performance, we plot the number of completed flows and the system throughput for the four schemes in Figure 5 and Figure 6, respectively. In the simulations, the interference threshold σ is set to 0.01 and the relay selection parameter β is set to 0.53. From the results, we can observe that both the number of completed flows and the system throughput of all schemes decrease when the number of blocked flows increases. However, compared with MQIS and STDMA, the proposed algorithm always has significantly better performance. This is because when a flow is blocked, we can use a well-performing relay to forward its data, whereas MQIS and STDMA do not provide a solution to the blockage problem. Besides, compared with the random relay selection algorithm, our scheme maintains higher and more stable performance. This is because, when we select the relays, the rate of the relay link and the node contention are considered, so the flows can be completed faster and concurrent transmissions can be fully exploited to improve the transmission efficiency. No matter how many flows are blocked, we can always select proper relays for them. Specifically, when the number of blocked flows equals 10, our scheme improves the number of completed flows by 37.0% and the system throughput by 47.8% compared with the random relay selection algorithm.

Fig. 7: Number of completed flows under different thresholds.
Fig. 8: System throughput under different thresholds.

In order to investigate the impact of the threshold on the system performance and find the optimal threshold, the two metrics under different interference thresholds are shown in Figure 7 and Figure 8. Here, the number of blocked flows is set to 5 and β is set to 0.53. From the results, we can observe that the performance of the proposed algorithm and of the random algorithm changes significantly with the threshold. When the threshold is small, the difference between the two algorithms is negligible. This is because if the threshold is too small, flows are considered to be in contention even when the interference between them is small, and thus concurrent transmissions cannot be fully utilized; at this point, the threshold is the main limiting factor. However, when the threshold increases, the proposed algorithm achieves better performance than the random relay scheme. This is mainly because we select relays with high link rates, which helps to satisfy the QoS requirements of more flows in the limited time, and different blocked flows select different relays, which is beneficial to exploiting concurrent transmissions. When σ grows beyond the optimal value, the performance of both schemes decreases. This is because if the threshold is too big, flows can be scheduled simultaneously even when the interference between them is large; as a result, the link rates become low and the transmissions become inefficient. When the threshold is bigger than 10, the interference between flows cannot reach this value, so the threshold becomes ineffective and the curves of both algorithms become flat. Therefore, under the simulation conditions in this paper, σ = 0.01 is the optimal threshold. With this threshold, the proposed scheme improves the number of completed flows by 14.1% and the system throughput by 16.1% compared with the random relay selection algorithm. Compared with MQIS, which does not provide a solution to the blockage problem, our algorithm always has obvious advantages. As for STDMA, because it does not involve the threshold, its performance does not change at all.

The impact of the relay selection parameter β on our protocol is shown in Figure 9 and Figure 10. Here, the threshold is set to 0.01. On one hand, when the number of blocked flows is small, the impact of β is not obvious; the greater the number of blocked flows, the greater the effect of β. On the other hand, the smaller the value of β, the higher the probability of selecting a relay for the blocked flow(s), and so the better the performance. However, when β is less than a certain value, the performance improvement is no longer obvious. Considering that when β is too small, the number of relays in Can2 is larger and choosing the final relay(s) in P is more complex, we should select a proper β according to the actual conditions. For example, based on our simulation results, β = 0.53 is a proper choice when 5 flows are blocked: on one hand, the number of completed flows and the system throughput remain at a high level, and on the other hand, it is not very complex to find the final relay(s).

Fig. 9: Number of completed flows under different relay selection parameters.
Fig. 10: System throughput under different relay selection parameters.

VIII Conclusion

In this paper, we propose a relay-assisted and QoS aware scheduling (RAQS) scheme for the blockage problem in mmWave backhaul networks. First, we propose a relay selection algorithm to forward the data of blocked flow(s), which selects non-repeating relays with high link rates for different blocked flows. Then we propose a heuristic scheduling algorithm to solve the joint scheduling problem of relay paths and backhaul paths, in which both concurrent transmissions and the QoS requirements of flows are fully taken into account; the difference between the relay path and the backhaul path is also considered. Extensive simulations show that our scheduling algorithm can effectively overcome the blockage problem and keep the number of completed flows (i.e., the flows satisfying their QoS requirements on all hops) and the system throughput at a high and stable level. In addition, the impact of the relay selection parameter is simulated to further guide the relay selection.

In future work, we will consider other aspects of the flows, such as delay, and investigate the delay performance of the proposed scheme. Besides, we will also investigate the utilization of full duplex technology in the mmWave band to improve network performance.

References

  • [1] Y. Niu, C. Gao, Y. Li, L. Su, and D. Jin, “Exploiting multi-hop relaying to overcome blockage in directional mmWave small cells,” Journal of Communications and Networks, vol. 18, no. 1, pp. 364–374, Jun. 2016.
  • [2] Y. Niu, Y. Li, D. Jin, et al., “A survey of millimeter wave (mmWave) communications for 5G: opportunities and challenges,” Wireless Netw., vol. 21, no. 8, pp. 1–20, Apr. 2015.
  • [3] R. He, B. Ai, G. L. Stuber, G. Wang, and Z. Zhong, “Geometrical based modeling for millimeter wave MIMO mobile-to-mobile channels,” IEEE Transactions on Vehicular Technology, to be published, 2018.
  • [4] J. Zhang, L. Dai, X. Li, Y. Liu, and L. Hanzo, “On Low-Resolution ADCs in Practical 5G Millimeter-Wave Massive MIMO Systems,” IEEE Communications Magazine, to appear, 2018.
  • [5] J. Qiao, L. Cai, X. Shen, et al., “STDMA-based scheduling algorithm for concurrent transmissions in directional millimeter wave networks,” in Proc. IEEE ICC (Ottawa, Canada), Jun. 10-15, 2012, pp. 5221–5225.
  • [6] R. Taori and A. Sridharan, “Point-to-multipoint in-band mmwave backhaul for 5G networks,” IEEE Communications Magazine, vol. 53, no. 1, pp. 195–201, Jan. 2015.
  • [7] J. Qiao, X. Shen, J. W. Mark, et al., “Enabling device-to-device communications in millimeter-wave 5G cellular networks,” IEEE Communications Magazine, vol. 53, no. 1, pp. 209–215, Jan. 2015.
  • [8] Y. Zhu, Y. Niu, J. Li, et al., “QoS-aware scheduling for small cell millimeter wave mesh backhaul,” in Proc. IEEE ICC (Kuala Lumpur Malaysia), May 23-27, 2016, pp. 1–6.
  • [9] Y. Niu, C. Gao, Y. Li, et al., “Exploiting Device-to-Device Communications in Joint Scheduling of Access and Backhaul for mmWave Small Cells,” IEEE Journal on Selected Areas in Communications, vol. 33, no. 10, pp. 2052–2069, Oct. 2015.
  • [10] Y. Niu, Y. Liu, Y. Li, X. Chen, Z. Zhong, Z. Han, “Device-to-Device Communications Enabled Energy Efficient Multicast Scheduling in mmWave Small Cells,” IEEE Transactions on Communications, vol. 66, no. 3, pp. 1093–1109, Mar. 2018.
  • [11] Y. Liu, X. Chen, Y. Niu, B. Ai, Y. Li, and D. Jin, “Mobility-Aware Transmission Scheduling Scheme for mmWave Cells,” IEEE Transactions on Wireless Communications, vol. 17, no. 9, pp. 5991–6004, Sept. 2018.
  • [12] W. Ding, Y. Niu, H. Wu, Y. Li, Z. Zhong, “QoS-aware Full-duplex Scheduling for Millimeter Wave Wireless Backhaul Networks,” IEEE Access, vol. 6, pp. 25313–25322, Apr. 2018.
  • [13] Y. Niu, L. Yu, Y. Li, Z. Zhong, “Device-to-Device communications enabled multicast scheduling for mmWave small cells using multi-level codebooks,” IEEE Transactions on Vehicular Technology, to appear, 2018.
  • [14] Z. Genc, U. H. Rizvi, E. Onur, et al., “Robust 60 GHz Indoor Connectivity: Is It Possible with Reflections?,” in IEEE VTC-Spring (Taipei China), May, 16-19, 2010, pp. 1–5.
  • [15] C. Yiu and S. Singh, “Empirical capacity of mmWave WLANs,” IEEE J. Sel. Areas Commun., vol. 27, no. 8, pp. 1479–1487, 2009.
  • [16] X. An, C. S. Sum, R. V. Prasad, et al., “Beam switching support to resolve link-blockage problem in 60 GHz WPANs,” in Proc. IEEE PIMRC (Tokyo, Japan), Sep. 13-16, 2009, pp. 390–394.
  • [17] Y. Niu, Z. Feng, M. Chen, Y. Li, Z. Zhong, and B. Ai, “Low Complexity and Robust Codebook-Based Analog Beamforming for Millimeter Wave MIMO Systems,” in IEEE Access, vol. 5, pp. 19824–19834, 2017.
  • [18] J. Qiao, L. Cai, X. Shen et al., “Enabling multi-hop concurrent transmissions in 60 GHz wireless personal area networks,” IEEE Trans. Wireless Commun., vol. 10, no. 11, pp. 3824–3833, Nov. 2011.
  • [19] S. Singh, F. Ziliotto, U. Madhow, et al., “Millimeter Wave WPAN: Cross-Layer Modeling and Multi-Hop Architecture,” in Proc. IEEE INFOCOM (Anchorage, USA), May 6-12, 2007, pp. 2336–2340.
  • [20] C. S. C. Leong, B. S. Lee, A. R. Nix, et al., “A robust 60 GHz wireless network with parallel relaying,” in Proc. IEEE ICC (Paris, France), Jun. 20-24, 2004, pp. 3528–3532.
  • [21] Y. Niu, C. Gao, Y. Li, et al., “Energy-Efficient Scheduling for mmWave Backhauling of Small Cells in Heterogeneous Cellular Networks,” IEEE Transactions on Vehicular Technology, vol. 66, no. 3, pp. 2674–2687, Mar. 2017.
  • [22] I. K. Son, S. Mao, M. Gong, et al., “On frame-based scheduling for directional mmWave WPANs,” in Proc. IEEE INFOCOM (Orlando USA), Mar. 25-30, 2012, pp. 2149-2157.
  • [23] T. Rappaport, Wireless Communications: Principles and Practice. Upper Saddle River, NJ: Prentice Hall PTR, 1996.
  • [24] H. Luo, S. Lu, and V. Bharghavan, “A new model for packet scheduling in multihop wireless networks,” in Proc. 6th Annu. Int. Conf. Mobile Comput. Netw., 2000, pp. 76–86.
  • [25] Q. Chen, X. Peng, J. Yang, et al., “Spatial reuse strategy in mmWave WPANs with directional antennas,” in Proc. IEEE GLOBECOM (Anaheim CA USA), Dec. 3-7, 2012, pp. 5392–5397.