End-to-End Learning of Proactive Handover Policy for Camera-Assisted mmWave Networks Using Deep Reinforcement Learning

04/09/2019
by   Yusuke Koda, et al.

For mmWave networks, this paper proposes an image-to-decision proactive handover framework that directly maps camera images to a handover decision. With the help of camera images, the proposed framework enables proactive handover, i.e., a handover is triggered before a temporal variation in the received power induced by obstacles, even if the variation is so rapid that it cannot be predicted from a time series of received power. Furthermore, the direct mapping provides scalability with respect to the number of obstacles. This paper shows that the optimal mapping can be learned via deep reinforcement learning (RL) by proving that the decision process in the proposed framework is a Markov decision process. For the deep RL, this paper designs a neural network (NN) architecture that allows a network controller to successfully learn the use of both lower-dimensional observations in the state information and higher-dimensional image observations. Evaluations based on experimentally obtained camera images and received powers indicate that the handover policy learned in the proposed framework outperforms the policy learned in a received power-based handover framework.



I Introduction

Millimeter-wave (mmWave) communications are expected to play an important role in next-generation wireless networks, such as fifth-generation mobile networks[1, 2, 3, 4]. The exploitation of wider spectrum bands in the mmWave band enables multi-gigabit data transmission and thereby supports communication services that require such data rates, for example, high-definition and ultra-high-definition television[2].

However, mmWave communication links are vulnerable to link blockage. Owing to the propagation characteristics of mmWaves, a blockage penalizes the mmWave link budget by 20–30 dB[5, 6]. Thus, communication links become intermittent when blockage occurs, and this problem is critical for supporting the aforementioned communication services.

To overcome the blockage problem, a handover between multiple base stations (BSs) is a promising scheme[7, 8, 9, 10, 11]. The study in [7] proposed a handover scheme in which the network controller forms a handover decision based on an optimized threshold on the number of failed transmissions. In [8], we experimentally demonstrated a performance improvement over a mmWave communication system that does not conduct handovers. These schemes are referred to as reactive handovers because a handover is triggered only after a degradation in the current link quality, such as the packet error rate or throughput, has been observed. Thus, the schemes cannot avoid the temporal degradation in the link quality that occurs until the link blockage is detected in the current connection.

To avoid even the aforementioned temporal degradations in the link quality, we have conceptualized a camera-assisted proactive handover system for mmWave networks[12, 13, 14]. The camera images contain geographical information on obstacles that block either the line-of-sight (LOS) path or non-LOS (NLOS) paths, thereby allowing network controllers to predict future blockages. Specifically, we used depth images, whose pixels measure the distance between obstacles and a camera[15]; depth images provide geometric relations among the components within a scene, and in the following discussion we consider that depth images are available to a network controller. In the camera-assisted proactive handover system, network controllers perform handovers before the blockages degrade the data rate of the mmWave links. Our experimental results have demonstrated that the proactive handover achieves higher throughput than a conventional reactive handover[12]. Hence, to maximize the advantage of mmWave networks, camera images can be a key decision criterion for performing a handover.

To maximally utilize the camera-assisted proactive handover, it is necessary to optimize the handover decision rules. Extant studies, including [12, 13, 14], demonstrated a performance gain over reactive handovers by using a heuristic handover policy. The heuristic assumed that the candidate BS provides the same data rate as the associated BS and that a handover does not involve a service disruption. Thus, in other situations, the heuristic is not optimal.

An issue in determining the optimal decision rule is how to make camera images contribute to the performance maximization. One solution involves explicitly estimating geographical information on obstacles, such as their locations, velocities, or shapes, from the images. We have discussed the optimal handover policy that uses the location and velocity of a pedestrian who blocks the mmWave link[16]. However, that solution is limited by the assumption that a single pedestrian blocks the mmWave link. It does not deal with situations in which more pedestrians block the mmWave link or in which the obstacles are not humans, such as cars and baggage. To deal with such situations, we are forced to adapt the system design to the number of obstacles and their shapes.

Motivated by the issues detailed above, we have proposed the use of camera images without explicitly estimating geographical information on obstacles[17, 18]. In [18], we directly estimated, from camera images, the future received power of the mmWave signals transmitted to a BS via a supervised learning technique. The direct method allows a network controller to consider geographical information on the obstacles within the images when making handover decisions, while it does not require adapting the system design to the number of obstacles or their shapes. The objective of the previous study [18] was to demonstrate the feasibility of the received power estimation method. Although the estimated received power can facilitate proactive handover decisions, performance maximization was beyond the scope of that study. Hence, the method in [18] does not provide a solution as to how the network controller maximizes the performance.

To consider the performance maximization in camera-assisted proactive handover, a Markov decision process (MDP)[19] provides a useful mathematical framework. An MDP allows us to analyze the optimal action selection that maximizes expected rewards. Previous studies [20, 21] used MDPs and analyzed the optimal cell selection in heterogeneous wireless networks with the objective of maximizing the weighted sum of network bandwidth and network delay. The studies in [22, 23] analyzed the optimal cell selection in mmWave networks that maximizes the throughput or the total data received at a mobile terminal. When a decision process is modeled as an MDP, reinforcement learning (RL) provides optimal action selection without any prior knowledge of the transition probabilities of the available information. Previous studies [24, 25, 26, 27] applied RL algorithms to learn optimal cell selection. However, the aforementioned studies considered decision processes in which the decision maker uses a current network state, such as channel information, received power, network bandwidth, or user locations. To the best of our knowledge, extant studies have not detailed a decision process wherein the decision maker uses camera images for handover control.

This paper designs a decision process in which a network controller makes handover decisions on the basis of camera images without estimating geographical information on obstacles. We confirm that the process is an MDP, based on which we directly map camera images to handover decisions with the goal of maximizing a performance metric. To learn the optimal mapping without prior knowledge of the transition probabilities of the camera images, we utilize RL. The RL algorithms applied in [24, 25] are computationally impractical for our problem setting because of the high dimensionality of camera images. Hence, we use a recent machine learning advancement, deep RL[28], which is reported to handle input information of such high dimensionality.

The contributions of this paper are as follows:

  • We propose an image-to-decision proactive handover (I2D-PH) framework that directly maps camera images to a handover decision. The I2D-PH framework exhibits two distinct features. First, the proposed framework triggers a handover ahead of a temporal variation in the received power induced by obstacles, even if the variation is so rapid that it cannot be predicted from a time series of received power. Second, the direct mapping provides scalability with respect to the number of obstacles. This scalability cannot be achieved by handover frameworks that explicitly use the geographical locations of obstacles.

  • We formulate the decision process in the I2D-PH by designing the state such that it includes camera images; we then confirm that the process is an MDP. Thus, the optimal mapping in the I2D-PH exists and can be learned via deep RL.

  • While performing deep RL on the designed decision process, whose state comprises both lower-dimensional observations and higher-dimensional image observations, we design a neural network (NN) architecture that has separate parameters for the lower-dimensional observations and the image observations. The architecture allows a network controller to successfully learn how to use both the lower-dimensional observations and the image observations for handover control.

  • We demonstrate the feasibility of the proposed framework via evaluations based on experimentally obtained camera images and received powers. We also demonstrate a performance gain of the handover policy learned in the proposed framework relative to the policy learned in a received power-based handover framework.

The rest of this paper is organized as follows. Section II presents a system model of the camera-assisted proactive handover in mmWave networks. Section III formulates the decision process in the proactive handover as an MDP. Section IV describes the designed NN architecture that is used for deep RL. Section V evaluates the performance achieved by the proposed framework. Finally, Section VI provides concluding remarks.

II I2D-PH Framework

II-A System Model

We consider a mmWave network in which multiple mmWave BSs and a station (STA) are deployed. There are obstacles that block either the LOS path or the NLOS paths between the STA and the BS associated with the STA. It should be noted that the obstacles are not limited to humans; examples include baggage, cars, and industrial robots.

In this paper, we assume that the positions of the STA and BSs are quasi-static, i.e., the variation in the positions occurs over a larger time scale than the learning procedure. The assumption is motivated by our focus on solving link blockage problems caused by moving obstacles. Other problems, such as the variation in the positions of the STA and BSs during the learning procedure, are beyond the scope of this paper. The assumption is reasonable for certain scenarios, such as transmitting data to a monitor in an office or to digital signage in a railway station concourse.

After a handover is decided, the communication between the BS and the STA can be disrupted because of the necessary association procedure[29]. We define the duration in which the communication is disrupted as the service disruption time $T_{\mathrm{d}}$.

Fig. 1: System model and learning procedure.

II-B Direct Mapping of Camera Images to Handover Decisions

The network controller directly maps consecutive camera images to handover decisions without explicitly estimating the positions and velocities of the obstacles from the images. The framework triggers handovers based on the image pixels and their variations, which reflect the positions and velocities of each obstacle within the images. Thus, the framework can deal with the blockages caused by each obstacle within the images irrespective of the number of obstacles.

It should be noted that the network controller could instead estimate the positions and velocities of the obstacles from consecutive camera images and seek the optimal mapping from the estimated values to handover decisions, as in [16]. However, such a framework is limited to obstacles whose positions and velocities can be estimated. Hence, it does not necessarily deal with the blockages caused by every obstacle within the images.

II-C Learning Procedure for Optimal Mapping

The network controller learns the optimal mapping of consecutive camera images to handover decisions via deep RL. Fig. 1 shows the learning procedure. In the learning procedure, the network controller obtains camera images, performs a handover in a trial-and-error fashion, and subsequently obtains a reward—a performance metric in the mmWave link such as received power, throughput, or data rate. Based on the history of the camera images, the handover decision, and the reward, the network controller learns the optimal mapping that maximizes the expected sum of rewards. The learning procedure continues for the predefined duration.

III Model Formulation

III-A Markov Decision Process

An MDP is a special case of a stochastic decision process. A stochastic decision process consists of the following four elements: a state set $\mathcal{S}$, an action set $\mathcal{A}$, a reward function $r$, and transition probabilities $p$. At each decision epoch $t \in \{0, 1, 2, \ldots\}$, a decision maker observes state information $s_{t} \in \mathcal{S}$. Subsequently, the decision maker selects an action $a_{t} \in \mathcal{A}_{s_{t}}$ on the basis of the policy $\pi$, where $\mathcal{A}_{s_{t}} \subseteq \mathcal{A}$ denotes the set of possible actions when the state $s_{t}$ is observed. Given the current state $s_{t}$ and the selected action $a_{t}$, the state transitions to $s_{t+1}$ at the next decision epoch according to the transition probability; thereafter, the decision maker is given a reward $r_{t} = r(s_{t}, a_{t}, s_{t+1})$.

The stochastic decision process is an MDP if and only if the state transition does not depend on the states and actions prior to the current decision epoch[19]. In an MDP, the transition probability is defined as $p\colon \mathcal{S} \times \mathcal{A} \to \mathcal{P}(\mathcal{S})$, where $\mathcal{P}(\mathcal{S})$ denotes the collection of the probability distributions over $\mathcal{S}$.

The goal of the decision maker is to determine the optimal policy $\pi^{\star}$ that maximizes the total expected discounted reward. The optimal policy satisfies the following condition:

$v_{\pi^{\star}}(s) \geq v_{\pi}(s)$   (1)

for all policies $\pi$ and all $s \in \mathcal{S}$, where $v_{\pi}(s) \coloneqq \mathbb{E}_{\pi}\bigl[\sum_{t = 0}^{\infty} \gamma^{t} r_{t} \,\big|\, s_{0} = s\bigr]$ and $\gamma \in [0, 1)$ denotes the discount factor. In an MDP wherein $\mathcal{S}$ and $\mathcal{A}$ are both countable non-empty sets, there exists at least one optimal policy[19].

To obtain the optimal policy in an MDP, it is sufficient to obtain the optimal action-value function $Q^{\star}$. The optimal action-value function is defined as follows:

$Q^{\star}(s, a) \coloneqq \mathbb{E}\bigl[r_{t} + \gamma\, v_{\pi^{\star}}(s_{t+1}) \,\big|\, s_{t} = s,\, a_{t} = a\bigr],$   (2)

where $\mathbb{E}[\cdot]$ denotes the expectation operator under the transition probability $p$ and $v_{\pi^{\star}}$ denotes the left-hand side in (1). This is attributed to the fact that the optimal action-value function is related to the optimal policy as follows[19]:

$\pi^{\star}(s) \in \operatorname*{arg\,max}_{a \in \mathcal{A}_{s}} Q^{\star}(s, a).$   (3)

In other words, the policy that selects the action that maximizes $Q^{\star}(s, a)$ is optimal. In this paper, the optimal action-value function is learned via deep RL[28].
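As a concrete illustration of the selection rule in (3), the following minimal Python sketch returns the greedy action given a learned action-value function; the callable `q_star` and the argument names are hypothetical and are not part of the paper's implementation.

```python
def greedy_action(q_star, state, feasible_actions):
    """Return the action in A_s that maximizes Q*(s, a), as in (3).

    q_star: callable (state, action) -> estimated optimal action value.
    feasible_actions: the set A_s of actions available in `state`.
    """
    return max(feasible_actions, key=lambda a: q_star(state, a))
```

In the proposed framework, the role of `q_star` is played by the NN described in Section IV.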

III-B Decision Process for I2D-PH

We formulate the decision process in which the network controller forms handover decisions in camera-assisted mmWave networks by defining the states, actions, and rewards. Then, we confirm that the decision process is an MDP by demonstrating that the state transition depends only on the current state and action under an assumption.

III-B1 States

In order for the network controller to leverage camera images for making handover decisions, we design the states such that they include consecutive camera images. Let the number of consecutive camera images be denoted as $J$. We set the state set as follows:

$\mathcal{S} \coloneqq \mathcal{X}^{J} \times \mathcal{B} \times \mathcal{K}.$   (4)

In (4), $\mathcal{X}$ denotes the set of all possible images, $\mathcal{B} \coloneqq \{1, 2, \ldots, N_{\mathrm{BS}}\}$ denotes the set of the BS indices, and $\mathcal{K} \coloneqq \{0, 1, \ldots, \lfloor T_{\mathrm{d}} / \Delta t \rfloor\}$ denotes the set of the numbers of remaining decision epochs until the service disruption time ends, where $N_{\mathrm{BS}}$ denotes the number of deployed BSs, $\Delta t$ denotes the interval between successive decision epochs, and $\lfloor \cdot \rfloor$ denotes the floor function.

Let $s_{t} = (\boldsymbol{x}_{t-J+1}, \ldots, \boldsymbol{x}_{t}, b_{t}, k_{t})$ denote the state at decision epoch $t$. The element $\boldsymbol{x}_{t-j}$ for $j = 0, 1, \ldots, J-1$ is set as the image observed at decision epoch $t - j$. The element $b_{t}$ is set as the index of the BS associated with the STA. The element $k_{t}$ is set as the number of remaining decision epochs that the network controller experiences until the handover process is completed; when the decision epoch is not within the service disruption time, $k_{t}$ is set as zero.
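For illustration, the state in (4) can be held in a simple container as in the following sketch; the field names are illustrative and are not the paper's notation.

```python
from dataclasses import dataclass
from typing import Tuple

import numpy as np


@dataclass(frozen=True)
class HandoverState:
    """State s_t = (x_{t-J+1}, ..., x_t, b_t, k_t) as defined in (4)."""
    images: Tuple[np.ndarray, ...]  # J consecutive depth images
    bs_index: int                   # b_t: index of the BS associated with the STA
    remaining_epochs: int           # k_t: epochs left in the service disruption time (0 if none)
```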

III-B2 Actions

We let the set of possible actions for a state $s = (\boldsymbol{x}_{1}, \ldots, \boldsymbol{x}_{J}, b, k)$ be as follows:

$\mathcal{A}_{s} \coloneqq \begin{cases} \mathcal{B}, & k = 0; \\ \{\,b\,\}, & k > 0. \end{cases}$   (5)

In other words, the controller selects one of the BSs when the decision epoch is not within the service disruption time; otherwise, the controller can select only the index of the BS to which the handover is being performed.

III-B3 Reward

We set the reward as a performance metric in the link provided by the BS currently associated with the STA, except that when the next decision epoch is within the service disruption time, we set the reward as zero:

$r_{t} \coloneqq \begin{cases} 0, & k_{t+1} > 0; \\ R_{a_{t}}(t+1), & \text{otherwise}. \end{cases}$   (6)

In (6), $R_{a_{t}}(t+1)$ denotes the performance metric in the link provided by BS $a_{t}$ at decision epoch $t+1$. In the performance evaluation, we set $R_{a_{t}}(t+1)$ as the data rate in the link provided by BS $a_{t}$, as discussed in Section V.

III-B4 State Transition

The state transition to the next state is as follows. Let the state at epoch $t$ be $s_{t} = (\boldsymbol{x}_{t-J+1}, \ldots, \boldsymbol{x}_{t}, b_{t}, k_{t})$. Evidently, the $J$ consecutive images at $t+1$ are determined by concatenating the image $\boldsymbol{x}_{t+1}$ observed at $t+1$ with the current images and removing the oldest image $\boldsymbol{x}_{t-J+1}$. Based on the definition of the state, the term $b_{t+1}$ is determined as follows:

$b_{t+1} = a_{t}.$   (7)

The term $k_{t+1}$ is determined as follows:

$k_{t+1} = \begin{cases} \lfloor T_{\mathrm{d}} / \Delta t \rfloor, & k_{t} = 0 \text{ and } a_{t} \neq b_{t}; \\ \max(k_{t} - 1,\, 0), & \text{otherwise}. \end{cases}$   (8)

We show that the aforementioned state transition depends only on $s_{t}$ and $a_{t}$ by considering the transitions of the images, $b_{t}$, and $k_{t}$. First, the consecutive-image transition exhibits a first-order Markov property under the assumption that the process of observing the image sequence $(\boldsymbol{X}_{t})_{t}$ is a Markov chain of order $J$:

$\Pr\bigl(\boldsymbol{X}_{t+1} \,\big|\, \boldsymbol{X}_{t}, \boldsymbol{X}_{t-1}, \ldots\bigr) = \Pr\bigl(\boldsymbol{X}_{t+1} \,\big|\, \boldsymbol{X}_{t}, \ldots, \boldsymbol{X}_{t-J+1}\bigr),$   (9)

where $\boldsymbol{X}_{t}$ denotes the random variable representing the image observed at decision epoch $t$. This is because, under the assumption, the process of observing $J$ consecutive images is the first-order representation of the original Markov chain[30]. Thus, the transition of the consecutive images depends only on the current images $(\boldsymbol{x}_{t-J+1}, \ldots, \boldsymbol{x}_{t})$, which are elements of $s_{t}$. Second, from (7), the transition of $b_{t}$ is determined from $a_{t}$. Third, from (8), $k_{t+1}$ is determined from $k_{t}$ and $b_{t}$ (elements of $s_{t}$) and from $a_{t}$. The three aforementioned transitions depend only on the elements of $s_{t}$ and on $a_{t}$, and therefore the overall state transition depends only on $s_{t}$ and $a_{t}$.

The assumption (9) is reasonable given that the image transition depends predominantly on the recent images. For example, if an obstacle moves at a uniform speed, then the image transition depends on the two most recent images, i.e., assumption (9) holds for $J = 2$. Similarly, when an obstacle moves with uniform acceleration, (9) holds for $J = 3$. Even if the obstacle movement is more complicated, the assumption is expected to hold if we increase the number of images $J$. In the performance evaluation, we set $J$ to two because the obstacles move at an approximately constant velocity.

It should be noted that we learn the optimal action-value function via deep RL[28] without knowing the transition probabilities. To learn the optimal policy, we only require transition samples $(s_{t}, a_{t}, r_{t}, s_{t+1})$ that can be obtained while making decisions in the learning procedure. (One alternative approach would be to use algorithms that require knowledge of the transition probability, such as dynamic programming[19]. However, estimating the transition probability in our problem setting, in particular the transition probability of the entire image, requires a more intensive procedure such as supervised learning[31]. By using deep RL, we skip this procedure and directly learn the optimal policy from the transition samples $(s_{t}, a_{t}, r_{t}, s_{t+1})$.)
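The per-epoch dynamics in (6)-(8) can be summarized in a short sketch of a single decision epoch. The function below is a minimal illustration under the notation introduced above, reusing the `HandoverState` container sketched earlier; the helper `data_rate` and the argument names are assumptions rather than the paper's code.

```python
def step(state, action, next_image, data_rate, disruption_epochs):
    """Advance the decision process by one epoch.

    state: HandoverState at epoch t.
    action: index of the BS selected at epoch t (an element of A_s).
    next_image: depth image observed at epoch t + 1.
    data_rate: callable bs_index -> performance metric R_b(t + 1) of BS b.
    disruption_epochs: floor(T_d / Delta t), the length of the service disruption.
    """
    # Image part of the state: append the new image and drop the oldest one.
    images = state.images[1:] + (next_image,)

    # (7): the selected BS becomes the associated BS.
    next_bs = action

    # (8): deciding a handover starts the disruption counter; otherwise it counts down.
    if state.remaining_epochs == 0 and action != state.bs_index:
        next_k = disruption_epochs
    else:
        next_k = max(state.remaining_epochs - 1, 0)

    # (6): zero reward during the service disruption time, otherwise the link's data rate.
    reward = 0.0 if next_k > 0 else data_rate(next_bs)

    return HandoverState(images, next_bs, next_k), reward
```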

III-C Example

This section details an example of the temporal transition of the decision process. Consider that at decision epoch $t$, the state is $s_{t} = (\boldsymbol{x}_{t-J+1}, \ldots, \boldsymbol{x}_{t}, 1, 0)$, i.e., the camera images are available, the STA is associated with BS 1, and the decision epoch is not within the service disruption time. If the controller selects an action $a_{t} \neq 1$, i.e., a handover is performed, then the state transitions to $s_{t+1} = (\boldsymbol{x}_{t-J+2}, \ldots, \boldsymbol{x}_{t+1}, a_{t}, \lfloor T_{\mathrm{d}} / \Delta t \rfloor)$. The controller is subsequently given a reward of zero because $k_{t+1} > 0$ (see (6)). In this case, until the service disruption time ends, the controller selects the action $a_{t}$, is given a reward of zero, and the last element of the state decreases by one at each epoch. Conversely, if the controller selects the action $a_{t} = 1$, i.e., the handover is not performed, then the state transitions to $s_{t+1} = (\boldsymbol{x}_{t-J+2}, \ldots, \boldsymbol{x}_{t+1}, 1, 0)$ and the controller is given the reward $R_{1}(t+1)$.

IV Neural Network Architecture

Fig. 2: NN architecture for approximating the optimal action-value function defined in (2). With the exception of the output layer, the architecture is identical to that used in [18]. The architecture is a combination of a convolutional NN (CNN), which deals with images, and long short-term memory (LSTM), which deals with sequential inputs[31].

In deep RL, a NN is trained such that it is a good approximation of the optimal action-value function in (2)[28]. We focus on the NN architecture designed to perform deep RL on the decision process discussed in the previous section. (The NN is trained via the method discussed in [28]; for details of the training, please refer to [28].)

We design the NN architecture such that the NN has separate outputs for each possible combination of $b$, $k$, and $a$, as shown in Fig. 2. This design allows us to divide the parameters into two parts: the parameters associated with the camera images and those associated with the other, low-dimensional observations $b$, $k$, and $a$. Let $Q(s, a; \theta)$ be the NN, where $s \in \mathcal{S}$, $a \in \mathcal{A}_{s}$, and $\theta$ denotes the parameters of the NN. In the architecture, the NN is expressed as follows:

$Q(s, a; \theta) = \boldsymbol{w}_{b, k, a}^{\mathsf{T}}\, \boldsymbol{h}(\boldsymbol{x}_{t-J+1}, \ldots, \boldsymbol{x}_{t}; \theta_{\mathrm{image}}),$   (10)

where $\boldsymbol{h}(\cdot\,; \theta_{\mathrm{image}})$ denotes the output values of the layer prior to the output layer and $\boldsymbol{w}_{b, k, a}$ denotes the parameters in the output layer corresponding to the combination of $b$, $k$, and $a$. The parameters $\theta_{\mathrm{image}}$ used to obtain the output values are associated with the camera images, and the parameters in the output layer, $\boldsymbol{w}_{b, k, a}$, are associated with the low-dimensional observations $b$, $k$, and $a$.

The motivation for the architecture is that it is necessary to use the observations $b$ and $k$ for handover control. In our MDP setting, the state consists of $J$ consecutive images with thousands of elements and of $(b, k)$ with only two elements. If we let the input of the NN be the entire state and thereby process the camera images and $(b, k)$ with the same parameters, then the variation in $(b, k)$ does not significantly impact the NN output values. This is because NNs generally estimate feature representations of their overall inputs; thus, they do not propagate the variation in one or two elements of the inputs to the output[31]. Hence, the controller could ignore the variation in $(b, k)$ while making a handover decision.
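The following PyTorch sketch illustrates the separation in (10): a shared CNN and LSTM encode the image sequence, and a separate linear output head per $(b, k)$ pair produces the Q-values over the actions. The layer sizes, kernel sizes, and pooling stage are assumptions chosen for illustration; the paper specifies only the CNN-plus-LSTM structure of [18] and the per-$(b, k, a)$ output layer.

```python
import torch
import torch.nn as nn


class I2DQNetwork(nn.Module):
    """Q-network sketch: image encoder shared across (b, k), one output head per (b, k)."""

    def __init__(self, num_bs: int = 2, num_k: int = 4, img_channels: int = 1):
        # num_k is |K| = floor(T_d / Delta t) + 1; the default is a placeholder value.
        super().__init__()
        self.num_bs, self.num_k = num_bs, num_k
        # Shared convolutional encoder applied to each depth image (theta_image in (10)).
        self.cnn = nn.Sequential(
            nn.Conv2d(img_channels, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),       # -> 32 * 4 * 4 = 512 features
        )
        self.lstm = nn.LSTM(input_size=512, hidden_size=128, batch_first=True)
        # One linear head per (b, k) combination, each emitting |A| = num_bs Q-values (w_{b,k,a}).
        self.heads = nn.ModuleList(nn.Linear(128, num_bs) for _ in range(num_bs * num_k))

    def forward(self, images: torch.Tensor, b: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
        # images: (batch, J, C, H, W); b (zero-based) and k: (batch,) integer tensors.
        batch, seq_len = images.shape[:2]
        feats = self.cnn(images.flatten(0, 1)).view(batch, seq_len, -1)
        _, (h_n, _) = self.lstm(feats)                        # h_n: (1, batch, 128), i.e., h in (10)
        h = h_n.squeeze(0)
        q_all = torch.stack([head(h) for head in self.heads], dim=1)  # (batch, num_bs*num_k, num_bs)
        idx = (b * self.num_k + k).view(-1, 1, 1).expand(-1, 1, q_all.size(-1))
        return q_all.gather(1, idx).squeeze(1)                # Q-values for the observed (b, k)
```

Selecting the greedy BS then amounts to taking the arg max over the returned Q-values, matching the rule in (3).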

It should be noted that we employ the NN architecture of [18] with the exception of the output layer. The architecture is reported to facilitate the prediction of the future performance of a mmWave communication link from camera images. Hence, it is expected that the architecture also facilitates learning the optimal action-value function, which is the expected sum of the performance metrics in our MDP setting.

V Performance Evaluation

Fig. 3: Considered mmWave links.
Fig. 4: Top view of the measurement environment (left) and measurement setup showing the mmWave transmitter, measurement device and camera (right). The measurement device and mmWave transmitter correspond to BS 1 and the STA in Fig. 3, respectively.

V-A Evaluated Scenario

We consider that two BSs and an STA are deployed as shown in Fig. 3. The STA is initially associated with the BS that observes the higher received power of the two when there are no obstacles within the deployed area. We term the BS that is initially associated with the STA as BS 1 and the other as BS 2. BS 2 is a candidate BS for the case in which the link between BS 1 and the STA is blocked by obstacles.

We assume that BS 2 is free from blockages. The assumption is reasonable given that a network controller is likely to perform a handover to a BS that is not blocked by any obstacles. In the following discussion, we consider that BS 2 is at a position where pedestrians cannot block the path between the STA and BS 2 and the received power at BS 2 is constant over time.

V-B Measurement Setup

We conduct the measurement as in [32] and obtain received powers and camera images. We deploy a mmWave transmitter, a measurement device, and a camera as shown in Fig. 4. The transmitter and a measurement device are considered as the STA and BS 1, respectively. The mmWave transmitter transmits signals at the carrier frequency of 60.48 GHz and subsequently the measurement device measures the power of a part of the signals[32]. The transmitted signals are considered as uplink signals from the STA to BS 1. In this environment, two pedestrians walk along the moving path in Fig. 4 and block the path between the transmitter and measurement device. Tables I and II summarize the experimental equipment and parameters associated with the experiment, respectively.

TABLE I: Experimental Equipment
mmWave transmitter: Dell Wireless Dock D5000
Spectrum analyzer: Tektronix RSA306
Down-converter: Sivers IMA FC2221V
Antenna: Sivers IMA horn antenna, 24 dBi
Depth camera: Microsoft Kinect for Windows (Model: 1656)

TABLE II: Measurement Parameters
Channel: 60.48 GHz
Sampling frequency: 56 MHz
Transmit antenna gain: 10 dBi [33]
Receive antenna gain: 24 dBi
Measurement bandwidth: 40 MHz

V-C Simulation Procedure of Decision Process

We divide the camera images and received powers into two parts so that learning and performance evaluation are based on different data. Let the obtained camera images and received powers be denoted by $\{\boldsymbol{x}_{i}\}_{i \in \mathcal{I}}$ and $\{P_{i}\}_{i \in \mathcal{I}}$, respectively, where $\boldsymbol{x}_{i}$ denotes the $i$th image, $P_{i}$ denotes the received power obtained simultaneously, and $\mathcal{I}$ denotes the set of the time indices. We divide $\mathcal{I}$ into the two subsets $\mathcal{I}_{\mathrm{learn}}$ and $\mathcal{I}_{\mathrm{eval}}$, where $\mathcal{I}_{\mathrm{learn}} \cap \mathcal{I}_{\mathrm{eval}} = \emptyset$. We use $\{\boldsymbol{x}_{i}\}_{i \in \mathcal{I}_{\mathrm{learn}}}$ and $\{P_{i}\}_{i \in \mathcal{I}_{\mathrm{learn}}}$ to learn the optimal action-value function and use $\{\boldsymbol{x}_{i}\}_{i \in \mathcal{I}_{\mathrm{eval}}}$ and $\{P_{i}\}_{i \in \mathcal{I}_{\mathrm{eval}}}$ to evaluate the learned policy.

We simulate the decision process in the learning procedure using $\{\boldsymbol{x}_{i}\}_{i \in \mathcal{I}_{\mathrm{learn}}}$ and $\{P_{i}\}_{i \in \mathcal{I}_{\mathrm{learn}}}$. A decision epoch is set as a time step at which an image is obtained. The decision process starts at the time step at which the $J$th image is observed. The STA is initially associated with BS 1, and the time at which the process starts is not within a service disruption time, i.e., $b = 1$ and $k = 0$. Thus, the initial state is set as $(\boldsymbol{x}_{1}, \ldots, \boldsymbol{x}_{J}, 1, 0)$. The action is selected according to the $\epsilon$-greedy policy[28]; then, the next state is set such that it includes the image $\boldsymbol{x}_{J+1}$, $b$, and $k$, where $b$ and $k$ are determined based on the selected action as discussed in Section III-B. The procedure is iterated and ends when the state includes the last image in $\mathcal{I}_{\mathrm{learn}}$.
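A minimal sketch of this simulated learning episode over the recorded data is given below. The `agent.select_action` and `agent.store` interfaces are hypothetical stand-ins for the greedy action selection and the replay memory of [28], `step` and `HandoverState` are the sketches from Section III-B, and `rates` is a data-rate callable such as the one sketched after the reward definition below.

```python
import random


def run_learning_episode(images, powers, agent, epsilon, j, disruption_epochs, rates):
    """Replay the recorded images and received powers once, acting epsilon-greedily."""
    state = HandoverState(tuple(images[:j]), bs_index=1, remaining_epochs=0)
    for t in range(j - 1, len(images) - 1):
        # Feasible actions per (5); the two-BS case of the evaluation is hard-coded here.
        feasible = [1, 2] if state.remaining_epochs == 0 else [state.bs_index]
        action = (random.choice(feasible) if random.random() < epsilon
                  else agent.select_action(state, feasible))
        next_state, reward = step(state, action, images[t + 1],
                                  lambda bs: rates(bs, powers[t + 1]), disruption_epochs)
        agent.store(state, action, reward, next_state)   # transition sample (s_t, a_t, r_t, s_{t+1})
        state = next_state
```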

The performance metric in (6) for BS 1, i.e., the data rate provided by BS 1 when it is associated with the STA, is calculated by the Shannon capacity formula from the obtained received power value:

$R_{1}(t) = W \log_{2}\!\left(1 + \frac{P_{t}}{N_{0} W}\right),$

where $W$ denotes the measurement bandwidth and $N_{0}$ denotes the noise spectral density. The performance metric for BS 2 is set as a constant value (150 Mbit/s; see Table III) on the basis of the assumption that the received power at BS 2 is constant over time.
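A sketch of this data-rate calculation is given below; the thermal noise spectral density of about -174 dBm/Hz is an assumed value, not a figure reported in the paper, and the function names are illustrative.

```python
import math

BANDWIDTH_HZ = 40e6            # measurement bandwidth W (Table II)
NOISE_DENSITY_DBM_HZ = -174.0  # assumed thermal noise spectral density N_0


def data_rate_bs1(received_power_dbm: float) -> float:
    """Data rate of the BS 1 link via the Shannon capacity formula, in bit/s."""
    noise_dbm = NOISE_DENSITY_DBM_HZ + 10 * math.log10(BANDWIDTH_HZ)
    snr = 10 ** ((received_power_dbm - noise_dbm) / 10)
    return BANDWIDTH_HZ * math.log2(1 + snr)


def rates(bs_index: int, received_power_dbm: float) -> float:
    """Performance metric R_b: measured-power-based rate for BS 1, a constant for BS 2 (Table III)."""
    return data_rate_bs1(received_power_dbm) if bs_index == 1 else 150e6
```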

We evaluate the performance of the learned policy by simulating a decision process with the same procedure as the learning procedure, except that we use $\{\boldsymbol{x}_{i}\}_{i \in \mathcal{I}_{\mathrm{eval}}}$ and $\{P_{i}\}_{i \in \mathcal{I}_{\mathrm{eval}}}$ and the action is selected according to the greedy policy[19]. We calculate the time average of the reward as a performance metric of the learned policy.

We iterate the learning and evaluation by using the same data set. We evaluate the policy that achieves the highest average reward throughout the iterations. Parameters associated with the deep RL are summarized in Table III.

TABLE III: Parameters Associated with RL
Discount factor $\gamma$: 0.99
Number of obtained images: 16860
Number of images used for learning: 13500
Number of iterations of learning and evaluation: 1000
Exploration rate $\epsilon$: 1–0.01 (decreased by 0.01 per iteration)
Number of input images $J$: 2
Number of received power values: 2
Number of pixels in an input image:
Interval between successive decision epochs $\Delta t$:  ms
Data rate that BS 2 provides: 150 Mbit/s (const.)
Minibatch size[28]: 32
Frequency of updating the target network[28]: 10000

V-D Compared Framework

We compare the proposed framework with a received power-based framework. We design the received power-based framework by formulating the decision process as a similar MDP in which the images in the definition of the state are replaced with received powers. Let $P_{t-J+1}, \ldots, P_{t}$ denote the received powers observed at BS 1 in the $J$ most recent time steps. The state in this MDP is set as follows:

$s_{t} = (P_{t-J+1}, \ldots, P_{t}, b_{t}, k_{t}).$   (11)

The received power-based framework does not trigger a handover unless a pedestrian causes variations in the received power, whereas our proposed framework triggers a handover before such variations with the help of camera images. The following subsection confirms these characteristics of both frameworks and numerically evaluates the advantage of the proposed framework.

It should be noted that the handover policy in the received power-based framework is learned with deep RL using a NN different from that in Fig. 2. We simplify the NN architecture because the input of the NN in the received power-based framework comprises only a few elements (four elements in the evaluation). We replace the combination of the CNN and LSTM in Fig. 2 with a fully connected multi-layer perceptron with eight hidden units and 32 output units, where the two layers are activated by rectified linear units[31].
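A sketch of this simplified encoder is given below; the four-element input (the two received powers, $b$, and $k$) and the attachment of the Fig. 2 output heads on top of the 32-dimensional feature follow the description above, but the exact wiring is an assumption.

```python
import torch.nn as nn

# Replacement for the CNN + LSTM encoder of Fig. 2 in the received power-based
# framework: a fully connected network with eight hidden units and 32 output
# units, both layers activated by ReLU. The per-(b, k, a) output heads of
# Fig. 2 are then attached on top of the 32-dimensional feature.
power_encoder = nn.Sequential(
    nn.Linear(4, 8), nn.ReLU(),
    nn.Linear(8, 32), nn.ReLU(),
)
```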

V-E Results

Fig. 5: Time series of data rate in the condition that the service disruption time is  s, and the corresponding camera images.
Fig. 6: Average data rate vs. service disruption time $T_{\mathrm{d}}$.
Fig. 7: Time series of data rate in the condition where the service disruption time is  s. Our image-based framework performed handovers at 23.49 s and 24.09 s, while the received power-based framework performed handovers at 23.73 s and 24.27 s.

We show an example of the time-varying data rate in the case where the service disruption time is  s in Fig. 5. The pedestrians walk in front of the mmWave transmitter at approximately 41.5 s and 43.9 s. Simultaneously, the data rate provided by BS 1 degrades from approximately 200 Mbit/s to 30 Mbit/s. Our framework successfully selects the BS that provides the higher data rate at each decision epoch and thereby maximizes the data rate.

It should be noted that we learn the policy shown in Fig. 5 without explicitly estimating the positions and velocities of the two pedestrians. The result demonstrates the feasibility of the direct mapping from camera images to a handover decision. Thus, given that the direct mapping is scalable with respect to the number of obstacles, we expect that the framework can be scaled up to an arbitrary number of pedestrians. (Scaling up to an arbitrary number of pedestrians might require more intensive learning because the pixel values vary more intensively than in our evaluation; for example, more camera images and received power data, or more advanced deep RL techniques[34, 35], would be required. However, even in such conditions, the basic idea discussed in this paper is applicable to learning the handover policy.)

We compare the performance of the policy learned in our framework with that of the policy learned in the received power-based framework in Fig. 6, which shows the average data rate obtained in the evaluation procedure. The handover policy learned in our framework achieves a data rate higher than or equal to that of the policy learned in the received power-based framework.

We confirm from Fig. 7 that our framework triggers a handover in a proactive fashion. Fig. 7 shows an example of the time-varying data rate provided by our framework and by the received power-based framework when the service disruption time is  s. Our proposed framework successfully triggers handovers prior to the variation in the data rate provided by BS 1, while the received power-based framework triggers handovers after the variation.

VI Conclusion

We proposed an image-to-decision proactive handover framework, which directly maps camera images to a handover decision to achieve scalability with respect to the number of obstacles. We formulated the decision process of the proposed framework and confirmed that the optimal mapping in the proposed framework can be learned via deep RL by revealing that the designed decision process is an MDP. Furthermore, we developed a NN architecture that has separate parameters for image observations and lower-dimensional observations so that the network controller learns to use the lower-dimensional observations.

We performed evaluations based on experimentally obtained camera images and received powers. The evaluation demonstrated the feasibility of the direct mapping by revealing that the optimal handover policy can be learned without explicitly estimating the positions and velocities of the obstacles. The framework can be extended to conditions in which more pedestrians block the mmWave links because the basic concept of this paper is applicable to such conditions, although a more sophisticated RL technique might be required. The evaluation also indicated that our image-based framework triggered a handover several hundred milliseconds earlier than the received power-based framework, which led to better performance in terms of the average data rate.

It should be noted that our proposed framework can trigger a handover earlier than the received power-based framework even when the delay in obtaining images exceeds that in obtaining received powers. (In a general scenario, there is a difference between the interval at which images are obtained, approximately 30 ms, and that at which received powers are obtained, typically less than one millisecond. Thus, obtaining camera images can be delayed relative to obtaining received powers.) This is due to the difference in the manner in which camera images and received powers vary. Camera images vary according to the obstacle movements irrespective of whether an obstacle is blocking a mmWave link. This variation allows our proposed framework to predict future link blockages, which leads to an earlier handover. Conversely, the received power varies only when an obstacle is blocking a mmWave link. Hence, even when the received power is obtained earlier than the camera images, the received power-based framework triggers a handover only when an obstacle begins to block the mmWave link; thereby, a handover in the received power-based framework can be delayed relative to that in the proposed framework.

References

  • [1] K. Sakaguchi, E. M. Mohamed, H. Kusano, M. Mizukami, S. Miyamoto, R. E. Rezagah, K. Takinami, K. Takahashi, N. Shirakawa, H. Peng, T. Yamamoto, and S. Namba, “Millimeter-wave wireless LAN and its extension toward 5G heterogeneous networks,” IEICE Trans. Commun., vol. E98-B, no. 10, pp. 1932–1947, Oct. 2015.
  • [2] Y. Niu, Y. Li, D. Jin, and A. V. Vasilakos, “A survey of millimeter wave communications (mmWave) for 5G: Opportunities and challenges,” Wireless Netw., vol. 21, no. 8, pp. 2657–2676, Nov. 2015.
  • [3] C. Dehos, J. Gonzàlez, A. De Domenico, D. Kténas, and L. Dussopt, “Millimeter wave access and backhauling: The solution to the exponential data traffic increase in 5G mobile communication systems?” IEEE Commun. Mag., vol. 52, no. 9, pp. 88–95, Sep. 2014.
  • [4] P. Wang, Y. Li, L. Song, and B. Vucetic, “Multi-gigabit millimeter wave wireless communications for 5G: From fixed access to cellular networks,” IEEE Commun. Mag., vol. 53, no. 1, pp. 168–178, Jan. 2015.
  • [5] K. Haneda, “Channel models and beamforming at millimeter-wave frequency bands,” IEICE Trans. Commun., vol. E98-B, no. 5, pp. 755–772, May 2015.
  • [6] G. R. MacCartney and T. S. Rappaport, “A flexible millimeter-wave channel sounder with absolute timing,” IEEE J. Sel. Areas Commun., vol. 35, no. 6, pp. 1402–1418, Jun. 2017.
  • [7] X. Zhang, S. Zhou, X. Wang, D. Zhu, and M. Lei, “Improving network throughput in 60 GHz WLANs via multi-AP diversity,” in Proc. IEEE ICC 2012, Ottawa, Canada, Jun. 2012, pp. 4803–4807.
  • [8] Y. Oguma, R. Arai, T. Nishio, K. Yamamoto, and M. Morikura, “Implementation and evaluation of reactive base station selection for human blockage in mmWave communications,” in Proc. APCC 2015, Kyoto, Japan, Oct. 2015, pp. 1–6.
  • [9] M. Umehira, G. Saito, S. Takeda, T. Miyajima, and K. Kagoshima, “Feasibility of RSSI based access network detection for multi-band WLAN using 2.4/5 GHz and 60 GHz,” in Proc. WPMC 2014, Sydney, Australia, Sep. 2014, pp. 1–6.
  • [10] M. Polese, M. Giordani, M. Mezzavilla, S. Rangan, and M. Zorzi, “Improved handover through dual connectivity in 5G mmWave mobile networks,” IEEE J. Sel. Areas Commun., vol. 35, no. 9, pp. 2069–2084, Sep. 2017.
  • [11] Y. Sun, G. Feng, S. Qin, Y. C. Liang, and T. S. P. Yum, “The SMART handoff policy for millimeter wave heterogeneous cellular networks,” IEEE Trans. Mobile Comput., vol. 17, no. 6, pp. 1456–1468, Jun. 2018.
  • [12] Y. Oguma, R. Arai, T. Nishio, K. Yamamoto, and M. Morikura, “Proactive base station selection based on human blockage prediction using RGB-D cameras for mmWave communications,” in Proc. IEEE GLOBECOM 2015, San Diego, USA, Dec. 2015, pp. 1–6.
  • [13] Y. Oguma, T. Nishio, K. Yamamoto, and M. Morikura, “Proactive handover based on human blockage prediction using RGB-D cameras for mmWave communications,” IEICE Trans. Commun., vol. E99-B, no. 8, pp. 1734–1744, Oct. 2016.
  • [14] T. Nishio, R. Arai, K. Yamamoto, and M. Morikura, “Proactive traffic control based on human blockage prediction using RGB-D cameras for millimeter-wave communications,” in Proc. IEEE CCNC 2015, Las Vegas, NV, USA, Jan. 2015, pp. 152–153.
  • [15] D. Eigen, C. Puhrsch, and R. Fergus, “Depth map prediction from a single image using a multi-scale deep network,” in Proc. NIPS 2014, Montréal, Canada, Dec. 2014, pp. 1–9.
  • [16] Y. Koda, K. Yamamoto, T. Nishio, and M. Morikura, “Reinforcement learning based predictive handover for pedestrian-aware mmWave networks,” in Proc. IEEE INFOCOM Workshops 2018, Honolulu, HI, USA, Apr. 2018, pp. 1–6.
  • [17] H. Okamoto, T. Nishio, M. Morikura, and K. Yamamoto, “Machine-learning-based throughput estimation using images for mmWave communications,” in Proc. IEEE VTC2017-Spring, Sydney, Australia, Jun. 2017, pp. 1–6.
  • [18] T. Nishio, H. Okamoto, K. Nakashima, Y. Koda, K. Yamamoto, M. Morikura, Y. Asai, and R. Miyatake, “Proactive received power prediction using machine learning and depth images for mmWave networks,” arXiv preprint arXiv:1803.09698, Jul. 2018.
  • [19] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction.   MIT Press, Cambridge, MA, 1998.
  • [20] E. S. Navarro, Y. Lin, and V. Wong, “An MDP-based vertical handoff decision algorithm for heterogeneous wireless networks,” IEEE Trans. Veh. Technol., vol. 57, no. 2, pp. 1243–1254, Mar. 2008.
  • [21] B. Chang and J. Chen, “Cross-layer-based adaptive vertical handoff with predictive RSS in heterogeneous wireless networks,” IEEE Trans. Veh. Technol., vol. 57, no. 6, pp. 3679–3692, Nov. 2008.
  • [22] M. Mezzavilla, S. Goyal, S. Panwar, S. Rangan, and M. Zorzi, “An MDP model for optimal handover decisions in mmWave cellular networks,” in Proc. EUCNC 2016, Athens, Greece, Jun. 2016, pp. 100–105.
  • [23] S. Zang, W. Bao, P. L. Yeoh, H. Chen, Z. Lin, B. Vucetic, and Y. Li, “Mobility handover optimization in millimeter wave heterogeneous networks,” in Proc. IEEE ISCIT, Cairns, Australia, Sep. 2017, pp. 1–6.
  • [24] H. Tabrizi, G. Farhadi, and J. Cioffi, “Dynamic handoff decision in heterogeneous wireless systems: Q-learning approach,” in Proc. IEEE ICC 2012, Ottawa, Canada, Jun. 2012, pp. 3217–3222.
  • [25] C. Dhahri and T. Ohtsuki, “Q-learning cell selection for femtocell networks: Single and multi-user case,” in Proc. IEEE GLOBECOM 2012, Anaheim, CA, USA, Dec. 2012, pp. 4975–4980.
  • [26] X. Tan, X. Luan, Y. Cheng, A. Liu, and J. Wu, “Cell selection in two-tier femtocell networks using Q-learning algorithm,” in Proc. ICACT 2014, PyeongChang, Korea, Feb. 2014, pp. 1036–1040.
  • [27] C. Dhahri and T. Ohtsuki, “Adaptive Q-learning cell selection method for open-access femtocell networks: Multi-user case,” IEICE Trans. Commun., vol. 97, no. 8, pp. 1679–1688, Aug. 2014.
  • [28] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis, “Human-level control through deep reinforcement learning,” Nature, vol. 518, pp. 529–533, Feb. 2015.
  • [29] W. Jiao, P. Jiang, and Y. Ma, “Fast handover scheme for real-time applications in mobile WiMAX,” in Proc. IEEE ICC 2017, Glasgow, Scotland, Jun. 2017, pp. 6038–6042.
  • [30] T. W. Anderson and L. A. Goodman, “Statistical inference about Markov chains,” Ann. Math. Stat., vol. 28, no. 1, pp. 89–110, Mar. 1957.
  • [31] I. Goodfellow, Y. Bengio, and A. Courville, Deep learning.   MIT Press, 2016.
  • [32] Y. Koda, K. Yamamoto, T. Nishio, and M. Morikura, “Measurement method of temporal attenuation by human body in off-the-shelf 60 GHz WLAN with HMM-based transmission state estimation,” Wireless Commun. Mobile Comput., vol. 2018, no. 7846936, pp. 1–9, Apr. 2018.
  • [33] T. Nitsche, G. Bielsa, A. Loch, and J. Widmer, “Boon and bane of 60 GHz networks: Practical insights into beamforming, interference, and frame level operation,” in Proc. ACM CoNEXT 2015, Heidelberg, Germany, Dec. 2015, pp. 1–6.
  • [34] H. Van Hasselt, A. Guez, and D. Silver, “Deep reinforcement learning with double Q-learning.” in Proc. AAAI 2016, Phoenix, AZ, USA, Feb. 2016, pp. 1–5.
  • [35] T. Schaul, J. Quan, I. Antonoglou, and D. Silver, “Prioritized experience replay,” in Proc. ICLR 2016, San Juan, PR, USA, May 2016, pp. 1–21.