Deep Learning on Multimodal Sensor Data at the Wireless Edge for Vehicular Network

01/12/2022
by   Batool Salehi, et al.
Northeastern University

Beam selection for millimeter-wave links in a vehicular scenario is a challenging problem, as an exhaustive search among all candidate beam pairs cannot be assuredly completed within short contact times. We solve this problem via a novel approach that expedites beam selection by leveraging multimodal data collected from sensors such as LiDAR, camera images, and GPS. We propose individual-modality and distributed fusion-based deep learning (F-DL) architectures that can execute locally as well as at a mobile edge computing (MEC) center, with a study of the associated tradeoffs. We also formulate and solve an optimization problem that considers practical beam-searching, MEC processing, and sensor-to-MEC data delivery latency overheads for determining the output dimensions of the above F-DL architectures. Results from extensive evaluations conducted on publicly available synthetic and home-grown real-world datasets reveal a 95-96% reduction in beam selection time over classical RF-only beam sweeping for the two datasets, respectively. F-DL also outperforms the state-of-the-art techniques by 20-22% in predicting the top-10 best beam pairs.



I Introduction

Emerging vehicular systems are equipped with a variety of sensors that generate vast amounts of data and require multi-Gbps transmission rates [8]. These sensor inputs may be needed for safety-critical vehicle operation as well as for gaining situational awareness while in motion, and they need to be processed in a timely manner at a mobile edge computing (MEC) center to generate driving directives. Such a large data transfer volume within short contact times can quickly saturate the sub-6 GHz band. Thus, the millimeter-wave (mmWave) band is widely considered the ideal candidate for vehicle-to-everything (V2X) communications [35], given the promise of 2 GHz wide channels and vast under-utilized spectrum resources in the 57-72 GHz band. However, transmission in the mmWave band has associated challenges related to severe attenuation and penetration loss. Phased arrays with directional beamforming can compensate for these issues by focusing RF energy at the receiver [38].

Fig. 1: Our fusion pipeline exploits GPS, camera and LiDAR sensor data to restrict the beam selection to the top-K beam pairs.

Hence, in the so-called beam selection process, the nodes on either end of the link attempt to converge to the optimal beam pair, where each beam pair is a tuple of transmitter and receiver beam indices, by mutually exploring the available space uniformly partitioned into discrete sectors [22]. However, exploring all possible beam directions in the existing IEEE 802.11ad [56] and 5G New Radio (5G-NR) [13] standards can consume up to tens of milliseconds and must be repeated constantly during vehicular mobility [26, 41]. To address this problem, we propose to exploit side out-of-band information to restrict the search to a subset of the most likely beam pair candidates. As shown in Table I, reducing the number of beam pairs from 60 to 30 decreases the beam selection overhead by 50% and 80% for the IEEE 802.11ad and 5G-NR standards, respectively.

I-A Use of Sensors to Aid the Beam Selection

Due to the directional transmissions at mmWave band, the beam selection process can be interpreted as locating the paired user or detecting the strongest reflection in the case of line of sight (LOS) and non-line of sight (NLOS) path, respectively. Hence, the location of the transmitter, receiver, and potential obstacles are the key factors in beam initialization. Interestingly, this information is also embedded in the situational state of the environment that can be acquired through monitoring sensor devices.

Standard | Time (ms), 30 beam pairs | Time (ms), 60 beam pairs
802.11ad | 9.09 | 18.18
5G-NR | 4.68 | 24.37
TABLE I: The reduction in beam selection time when reducing the beam search space from 60 to 30 beam pairs.

Fig. 1 shows our scenario of interest with a moving vehicle and a road-side base station (BS) attempting to find the best beam pair in the presence of multiple reflectors and blocking objects. We assume the state of the environment is captured by a combination of GPS (Global Positioning System) and LiDAR (Light Detection and Ranging) sensors in the moving vehicle, the latter providing a 3-D representation of the surroundings, and a camera at the BS. We use a sub-6 GHz data channel for exchanging this sensor data between the vehicle and the MEC. We then propose to use these non-RF sensor data to suggest a subset of "top-K" beam pairs and consequently speed up the beam selection. The candidate set of selected beam pairs is communicated to both the BS and the vehicle over the sub-6 GHz control channel. After this, both the vehicle and the BS execute the standards-defined beam-searching algorithms, but only on the subset of top-K suggested beam pairs.

It should be noted that, with the widespread adoption of IoT devices, multiple sensors are now available as standard installations in the majority of electronic devices as well as in fixed roadside infrastructure [15]. LiDAR sensors are an indispensable part of modern vehicles, used for automated driving or collision avoidance [34]. GPS data are regularly collected and transmitted as part of basic safety message frames in V2X applications [11], and surveillance cameras have been in use for decades with the growth of smart cities [28]. The Yole Développement report anticipates that the global market for GPS, radar, cameras, and LiDARs will grow substantially between 2020 and 2025 [57].

I-B Deep Learning on Multimodal Sensor Data

While using sensor data for out-of-band beam selection is an exciting new approach, there are several challenges that need to be addressed. First, since the physical environment influences signal propagation in ways that are hard to model computationally in real time, hand-engineering discriminative features from such sensor data is infeasible, as there is a vast multitude of factors impacting signal propagation. Second, a systematic approach is required to properly join the information from sensor modalities with different properties in order to predict the optimality of each beam pair. Note that while the beam pair can be inferred through basic geometry under ideal LOS conditions, such an approach fares poorly in scenarios with multiple reflections, such as NLOS situations. Third, since the sensors are not all available at one site, but on both the vehicle and the BS, secondary channels are required to maintain connectivity between the vehicle and the MEC. The communication constraints in these secondary channels need to be fully accounted for: the relaying cost of data exchange, especially for massive LiDAR point clouds, might undermine the performance with respect to end-to-end latency. Finally, the beam search dimension K is a control parameter that needs to be set prior to starting the beam-searching process. Hence, an algorithm is required to select the appropriate K to fully determine the system design.

Our approach directly addresses these challenges. First, we design a fusion-based deep learning (F-DL) framework operating on all these different modalities to predict a subset of top-K beam pairs that includes the globally optimal solution with high probability. Additionally, we adopt a distributed inference scheme to compress the raw data into high-level extracted features at the vehicle to reduce the overhead on the wireless backchannel, accounting for end-to-end latency in the selection of the optimal beam. Finally, we take into account the prediction from our proposed F-DL framework along with mmWave channel efficiency to properly adjust the beam search space K on a case-by-case basis.

I-C Summary of Contributions

Our main contributions are as follows:

  • We design deep learning architectures that predict the set of top-K beam pairs using non-RF sensor data such as GPS, camera, and LiDAR, wherein the processing steps are split between the source sensor and the MEC. We validate the improvement achieved by fusing available modalities versus unimodal data on a simulation dataset as well as a home-grown real-world dataset. Our results show that fusion improves the prediction accuracy by 3.32-43.9%. The proposed fusion network exhibits a 20-22% improvement in top-10 accuracy with respect to state-of-the-art techniques.

  • We formulate an optimization problem to appropriately select the set of candidate beam pairs, which takes into account mmWave channel efficiency while trying to maximize the alignment probability, i.e., the probability that the optimum beam pair is included within the suggested subset. Thus, the control variable K is not arbitrarily chosen, but tightly coupled to scenario constraints.

  • We rigorously analyze the end-to-end latency of our proposed non-RF beam selection method and compare it with the state-of-the-art standard for mmWave communication, namely 5G-NR, demonstrating that the beam selection time decreases by 95-96% on average while maintaining 97.95% of the throughput, considering all the control/data signaling overhead for both approaches.

II Related Work

Leveraging out-of-band data, both in the RF and non-RF domains, can speed up beam selection. RF-based out-of-band beam selection is possible via simultaneous multi-band channel measurements, when there exists a mapping between the mmWave channel and the channel state information (CSI) from another band [44]. However, this method does not support simultaneous beamforming at both the transmitter and receiver ends. As opposed to the RF-only approach, non-RF out-of-band beam selection leverages data from different sensors and generates a mutual decision for both transmitter and receiver. Fig. 2 summarizes the emphasis of this paper and different beam selection strategies.

Fig. 2: Deterministic and ML-aided beam selection strategies.

II-A Traditional

II-A1 In-band RF

Yang et al. [54] adopt a hierarchical search strategy where the mmWave channel is first probed with comparatively wider beams formed by a reduced number of antenna elements. The beam width is then narrowed until the best beam is obtained. Wang et al. [50] show that mmWave links preserve sparsity even across locations in mobile V2X scenarios. Hence, they utilize the angle of departure (AoD) to search for beams only within this range, thereby reducing beam selection overhead.

II-A2 Out-of-band RF

Steering with eyes closed [32] exploits omni-directional transmissions in the legacy 2.4/5 GHz band to infer the LOS direction between the communicating devices and speed up mmWave beam selection. González-Prelcic et al. [16] exploit side information derived from RAdio Detection And Ranging (RADAR) data to adapt the beams in a vehicle-to-infrastructure network, where a compressive covariance estimation approach is used to establish a mapping between the RADAR and mmWave bands.

II-B ML-based

II-B1 RF-only

He et al. [19] design a deep learning based channel estimation approach using iterative signal recovery, wherein the channel matrix is regarded as a noisy 2D natural image. Learned denoising-based approximate message passing (LDAMP) neural networks are applied to the input for channel estimation. Hashemi et al. [18] model mmWave beam selection as a multi-armed bandit (MAB) problem and use reinforcement learning to maximize the directivity gain (i.e., received energy) of the beam alignment policy.

II-B2 ML using a single non-RF modality

Va et al. [46] consider a setting where the location of all vehicles on the road, including the target receiver, is used as input to a machine learning algorithm to infer the best beam configuration. The vision-aided mmWave beam tracking scheme in [1] models a dynamic outdoor mmWave communication setting where the sequence of previous beams and visual images are used to predict future best beam pairs.

II-B3 ML with sensor fusion

The proposed setting by Klautau et al. [24] and Dias et al. [9] comes closest to ours with GPS and LiDAR being used as the side information for LOS detection and also reducing the overhead in a vehicular setting.

The state of the art [24, 9] does not consider deep learning based fusion for more than two non-RF modalities to fully exploit the latent features within the data. The GPS coordinates are only used in the preprocessing pipeline to identify the target receiver. There has also not been any effort to decouple the expert knowledge for dynamically reducing the beam search space depending on specific user constraints. Our proposed method exploits a customized deep learning fusion approach that is carefully designed to maximize the beam selection accuracy. Moreover, complemented by an algorithm that automatically chooses a dynamic subset of beam pairs, our method can run end-to-end without any hand engineering.

III System Model and Overview

In this section, we first review classical beam selection and discuss its limitations. We then propose to use non-RF data from multiple sensors to facilitate, and accelerate, beam selection.

III-A Beam Selection Problem Formulation

We denote the codebooks of the transmitter and receiver radios as:

C_t = {f_1, ..., f_{M_t}},   C_r = {w_1, ..., w_{M_r}},     (1)

where M_t and M_r are the number of transmitter and receiver codebook elements, respectively. Each element of the codebook represents a particular beam orientation that can be utilized by the radio. Thus, the set of all possible beam pairs is:

B = {(i, j) : 1 ≤ i ≤ M_t, 1 ≤ j ≤ M_r},     (2)

with |B| = M_t · M_r = M. For a specific beam pair (i, j), the normalized signal power is obtained as:

y_(i,j) = |w_j^H H f_i|^2,     (3)

where H is the channel matrix and (·)^H is the conjugate transpose operator. The weights f_i and w_j indicate the corresponding beam weight vectors associated with the codebook elements i and j, respectively. The goal of the beam selection process is to identify the best beam configuration, (i*, j*), that maximizes the normalized signal power, given by:

(i*, j*) = argmax_{(i,j) ∈ B} y_(i,j).     (4)

In classical beam selection, such as the approach defined in the IEEE 802.11ad [31] and 5G-NR [14] standards, the transmitter and receiver sweep all beam pairs sequentially in order to select the best beam pair.
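To make the classical procedure concrete, the following sketch (our illustration, not the paper's code; the DFT-style codebook construction and the random channel matrix are placeholder assumptions) evaluates the normalized signal power of Eq. (3) for every pair in B and returns the maximizer of Eq. (4):

import numpy as np

def dft_codebook(n_antennas, n_beams):
    # Columns are unit-norm beamforming vectors (DFT-style beams).
    angles = np.arange(n_beams) / n_beams
    steer = np.exp(-2j * np.pi * np.outer(np.arange(n_antennas), angles))
    return steer / np.sqrt(n_antennas)

def exhaustive_beam_search(H, F_t, W_r):
    # H: (N_r, N_t) channel matrix; F_t, W_r: codebooks with beams as columns.
    best_pair, best_power = None, -np.inf
    for i in range(F_t.shape[1]):
        for j in range(W_r.shape[1]):
            power = np.abs(W_r[:, j].conj().T @ H @ F_t[:, i]) ** 2  # Eq. (3)
            if power > best_power:
                best_pair, best_power = (i, j), power
    return best_pair, best_power  # Eq. (4)

# Toy example: 32 transmit beams and 8 receive beams (256 pairs), random channel.
rng = np.random.default_rng(0)
H = rng.standard_normal((8, 32)) + 1j * rng.standard_normal((8, 32))
F_t, W_r = dft_codebook(32, 32), dft_codebook(8, 8)
print(exhaustive_beam_search(H, F_t, W_r))

The nested loop over all M_t · M_r pairs is exactly the cost that the exhaustive sweep pays on the air interface, which motivates the subset selection described next.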

III-B Subset Selection

While exhaustively searching through all candidate options ensures beam alignment, the typical time to complete the entire procedure is on the order of 10 ms for IEEE 802.11ad [56] and 5 ms for 5G-NR [13] with only 30 beam pairs. To address this, we propose a beam selection framework that uses out-of-band data to identify a subset of candidate beams, which are subsequently swept to select the one that maximizes the normalized signal power. More specifically, the key algorithmic component of our system amounts to proposing a means for identifying a subset S ⊆ B of K beam pairs such that (i*, j*) ∈ S with high probability. Formally, assuming that we have a probability distribution P for the optimal pair (i*, j*), we wish to find:

S_K* = argmax_{S ⊆ B, |S| = K} P((i*, j*) ∈ S).     (5)

Having obtained S_K*, we then restrict the search for the optimal pair to this set. Our solution uses a neural network that leverages out-of-band data to determine the probability distribution P. The parameter K establishes a trade-off between throughput performance, obtained by the best beam in S_K*, and latency, as a larger K results in more processing time to search through the candidate options. Thus, our end-to-end design includes a means for appropriately determining K, where the boundary case K = 1 corresponds to selecting only the single most likely beam pair.

Overall, this auxiliary parameter K enables users to adjust the system according to their specific constraints for establishing low-latency or ultra-reliable communication. Moreover, it gives the flexibility to account for adjacent beam patterns with relatively similar performance, or for irregular radiation patterns under NLOS conditions.

III-C System Overview

Overall our framework consists of three main components.

  • Data Preprocessing: For the collected data to be effective, it is crucial to mark the transmitter, target receiver, and blocking objects. Thus, we apply the preprocessing steps described in Sec. IV to the image and LiDAR data.

  • Beam Prediction using Fusion-based Deep Learning: Given the multimodal sensor data, we design an F-DL architecture that predicts the optimality of each beam pair. Our approach consists of custom-designed feature extractors for each sensor modality, followed by a fusion network that joins the information for the final prediction. Our proposed fusion approach is presented in Sec. V.

  • Top-K Beam Pair Construction: We select K, the beam search space dimension, by defining an optimization problem (see Sec. VI) that takes into account the mmWave channel efficiency and the probability of including the globally optimum beam pair.

In summary, our proposed beam selection approach runs end-to-end in four steps. First, the sensors at the vehicle collect GPS and LiDAR data, and the camera at the BS captures an image. The collected raw data is then preprocessed on site. Second, with the feature extractors for GPS and LiDAR deployed at the vehicle, the high-level features are generated locally and shared with the MEC over the sub-6 GHz data channel. This approach avoids sharing unnecessary amounts of data and helps mitigate potential privacy concerns. The high-level features of the image are generated in parallel at the BS. Third, given the extracted features of all three modalities at the MEC, our method suggests a set of top-K candidates for sweeping. The subset of beam pairs is shared with the vehicle over the sub-6 GHz control channel. Finally, beam sweeping runs in the mmWave band (60 GHz) over the reduced search space of the selected top-K candidates to select the best beam pair and establish the link.

III-D Sensor Modalities

The details of the three sensor modalities are given below:

  • GPS: This sensor generates readings in the decimal degrees (DD) format, where latitude and longitude are expressed as floating-point numbers with 5-digit precision. Each measurement results in two numbers that together pinpoint the location on the earth's surface. We do not assume any satellite link outages due to terrain or man-made structures.

  • Image: This sensor captures still RGB images of the environment. Although images allow comprehensive environmental assessment, they are impacted by low-light conditions and obstructions (such as a different vehicle in the LOS path).

  • LiDAR: This sensor generates a 3-D representation of the environment by emitting pulsed laser beams. The distance of each individual object from the origin (i.e., the sensor location) is calculated based on reflection times. The raw LiDAR point clouds are data intensive (about 1.5 MB even in sparse settings), necessitating processing at the vehicle itself.

IV Data Preprocessing

In this section, we describe our preprocessing pipeline for image and LiDAR.

IV-A Processing Images

The raw images collected at the BS provide a snapshot of the objects present in the scene. Here, it is crucial to detect the region of the target receiver among the other vehicles, which correspond to blocking objects. Hence, we design a preprocessing step as follows. First, we employ a multi-object detection approach that enables us to flexibly distinguish the spatial boundaries of different vehicle types in the same frame. Second, given the type of target vehicle, we separate the region of the target receiver from the blocking vehicles. The background, with its static walls and buildings, is invariant over different scenes, does not affect the decision, and can therefore be removed. In summary, our approach (i) detects multiple vehicle types present in the same scene, (ii) separates the receiver and obstacle regions, and (iii) removes the static background. Since the focus of this paper is not directly on image processing, we include the details of our custom designed approach in Appendix A. The output of this image preprocessing step is a bit map of the raw input camera image, and it serves as the input to our fusion pipeline.

IV-B Processing LiDAR Point Clouds

Fig. 3: The LiDAR preprocessing pipeline.

The raw LiDAR point cloud is a collection of points that correspond to the locations of detected objects in the environment. Directly exploiting the raw point cloud (with a varying number of points depending on traffic density) not only comes with a huge computational cost but also raises ML architecture design challenges, as the input to a neural network should preferably be of fixed size. Hence, we use a preprocessing step, shown in Fig. 3 and first proposed in [24], that considers a limited spatial zone along each axis. This space corresponds to the coverage range of the BS along the x, y, and z axes. We then construct a 3-D histogram that corresponds to a quantized 3-D representation of the space. The histogram bin size along the three spatial dimensions can be set based on the desired resolution, and the LiDAR points fall into the corresponding bins of the histogram based on their location. Since the BS is fixed in our setting, it always occupies the same cell of the histogram, which is marked with a dedicated indicator value. The cell of the target receiver is obtained from the GPS data and marked with a second indicator value, while the remaining occupied cells are marked with a third indicator value that implies the presence of obstacles. This leads to a compact 3-D representation of the environment that we use as input for our pipeline.
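A minimal sketch of this quantization step is shown below (our illustration; the zone limits, bin counts, and indicator values are assumptions, not the exact settings reported in Sec. VII-B2):

import numpy as np

def lidar_to_grid(points, tx_pos, rx_pos,
                  lims=((-50, 50), (0, 200), (0, 10)),
                  bins=(20, 200, 10)):
    """Quantize a raw LiDAR point cloud into a fixed-size 3-D occupancy grid."""
    grid = np.zeros(bins, dtype=np.int8)
    mins = np.array([l[0] for l in lims], dtype=float)
    spans = np.array([l[1] - l[0] for l in lims], dtype=float)

    def to_cell(xyz):
        idx = ((np.asarray(xyz) - mins) / spans * np.array(bins)).astype(int)
        return tuple(np.clip(idx, 0, np.array(bins) - 1))

    # Mark obstacle cells from the point cloud (assumed indicator value 1).
    for p in points:
        grid[to_cell(p)] = 1
    # Overwrite the (fixed) BS cell and the GPS-derived receiver cell
    # with distinct indicators (assumed values -1 and -2).
    grid[to_cell(tx_pos)] = -1
    grid[to_cell(rx_pos)] = -2
    return grid

# Toy usage with a random point cloud.
pts = np.random.uniform([-50, 0, 0], [50, 200, 10], size=(1000, 3))
g = lidar_to_grid(pts, tx_pos=(0, 100, 5), rx_pos=(10, 40, 1))
print(g.shape, np.unique(g))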

V Beam Prediction Using Fusion-Based Deep Learning

In the second step of our proposed framework, we design a multimodal data fusion pipeline to combine the available sensing modalities together and predict the optimality of each beam pair. First, we describe the methodology for training the fusion pipeline, followed by the proposed distributed inference approach as shown in Fig. 4.

V-A Training Phase

We define the data matrices for GPS, LiDAR and images as X_c, X_l and X_i, respectively, each containing N training samples. Furthermore, d_l and d_i give the dimensionality of a preprocessed LiDAR and image sample, while a GPS coordinate has 2 elements. We consider the label matrix Y ∈ {0, 1}^{N×M} to represent the one-hot encoding over the M beam pairs, where the entry of the optimum beam pair is set to 1 and the rest are set to 0, as per Eq. (4). As mentioned in Sec. III-A, we have one optimal beam pair per sample, so we opted for one-hot encoding, which allows exactly one class per sample. Overall, we design a fusion framework to combine the different data modalities that contains two main components: (i) base unimodal networks and (ii) the fusion network.


Base Unimodal Neural Network: We use the base unimodal neural networks to (i) benchmark the performance of our fusion-based approach against what can be achieved using only a single sensor type, and (ii) extract latent features from the penultimate (second-to-last) layer of each, which we use as input to our fusion network.

A deep neural network (DNN) can be considered as a combination of a non-linear feature extractor followed by a softmax classifier, i.e., the layers from the first up to the penultimate layer of the DNN constitute the feature extractor [52]. The feature extractor maps an input to a point in a multi-dimensional space called the latent embedding space. The dimension of this high-level data representation is equal to the number of neurons in the penultimate layer. Then, in the final layer, the softmax activation function maps the high-level representation of the input data to a probability distribution over classes. As a result, the penultimate layer captures the unique properties of the input data through a latent embedding space that is the key to making the final decision.

In this work, we propose to use the output of the unimodal feature extractors as the high-level data representation of each sensor modality. We assume that the penultimate layer of all three unimodal networks has q neurons. As a result, each sensor modality input maps to a vector of dimension q after passing through its feature extractor. We denote the feature extractors of the modalities as f_c, f_l and f_i for coordinate, LiDAR, and image data, respectively, each parametrized by a weight vector θ_m for m ∈ {c, l, i}. We refer to the output of these feature extractors as the latent embedding of each modality. Formally,

z_c = f_c(x_c; θ_c),     (6a)
z_l = f_l(x_l; θ_l),     (6b)
z_i = f_i(x_i; θ_i),     (6c)

where z_c, z_l and z_i denote the extracted latent embeddings for input data x_c, x_l and x_i, respectively. We then apply a tanh activation on the extracted latent features to regularize them to the range [-1, 1]. Note that the input to the base unimodal networks may contain negative values, which motivates the choice of tanh as the regularization function.
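For concreteness, a minimal Keras sketch of one base unimodal network and its feature-extractor view follows (the layer sizes are illustrative assumptions and do not reproduce the exact architectures of Fig. 8; here the tanh regularization is folded into the penultimate dense layer):

from keras.models import Model
from keras.layers import Input, Conv1D, Flatten, Dense

def build_gps_unimodal(num_beam_pairs=256, embedding_dim=256):
    # Full unimodal network: feature extractor followed by a softmax head.
    inp = Input(shape=(2, 1))                           # latitude, longitude
    x = Conv1D(20, kernel_size=2, activation='relu')(inp)
    x = Flatten()(x)
    x = Dense(1024, activation='relu')(x)
    # Penultimate layer: its tanh output is the latent embedding z_c in [-1, 1].
    z = Dense(embedding_dim, activation='tanh', name='latent_embedding')(x)
    out = Dense(num_beam_pairs, activation='softmax')(z)
    return Model(inp, out)

model = build_gps_unimodal()
# Feature-extractor view: everything up to (and including) the penultimate layer.
extractor = Model(model.input, model.get_layer('latent_embedding').output)

The same pattern applies to the LiDAR and image branches, only with 3-D and 2-D convolutional stacks in place of the 1-D layers.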


Fusion Neural Network: Each of the modalities captures different aspects of the environment. For instance, the GPS coordinates provide the precise location of the target receiver, but they are blind to shifts of the other objects in the environment and fail to provide any information about the dimensions of the vehicles. LiDAR accuracy degrades in bright sunshine with many reflections [21]. Hence, fusing different modalities can compensate for partial or inaccurate information and increase the robustness of the prediction.

Given the latent feature embeddings of all modalities, we propose a fusion approach as follows. Feature concatenation is an effective strategy for feature-level fusion in machine learning [33]. Hence, our proposed fusion method concatenates the latent feature embeddings from each unimodal network to account for all sensor modalities simultaneously. Thus, given z_c, z_l and z_i, we first concatenate them and generate the combined latent feature matrix as:

z = [z_c ; z_l ; z_i].     (7)

Moreover, using multiple layers after the concatenation of extracted features allows our fusion architecture to learn the relevance of the modalities jointly, and therefore it intelligently assigns higher weights to the features of the more relevant modalities. We pass the combined latent feature matrix z to another convolutional neural network (CNN), which we refer to as the fusion network, to properly learn the relation between the extracted latent embeddings and the corresponding optimum beam pair. We denote the fusion network as g, parametrized by θ_g. Finally, we use a softmax activation function to predict the optimality of each beam pair as:

s = σ(g(z; θ_g)),     (8)

where σ denotes the softmax activation function, defined as σ(a)_b = e^{a_b} / Σ_{b'} e^{a_{b'}}, and s_b indicates the predicted score of beam pair b. Note that s forms a probability distribution, with s_b ≥ 0 and Σ_b s_b = 1. We train this network offline using a cross-entropy penalty, over data in which the optimal pair is one-hot encoded.
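The fusion head of Eqs. (7)-(8) can be sketched in Keras as follows (our illustration; the convolutional block sizes are assumptions, as the exact fusion architecture is the one depicted in Fig. 8(d)):

from keras.models import Model
from keras.layers import Input, Concatenate, Reshape, Conv1D, Flatten, Dense, Dropout

def build_fusion_network(embedding_dim=256, num_beam_pairs=256):
    # Latent embeddings z_c, z_l, z_i coming from the three unimodal extractors.
    z_c = Input(shape=(embedding_dim,))
    z_l = Input(shape=(embedding_dim,))
    z_i = Input(shape=(embedding_dim,))
    z = Concatenate()([z_c, z_l, z_i])                  # Eq. (7)
    x = Reshape((3 * embedding_dim, 1))(z)
    x = Conv1D(16, kernel_size=7, activation='relu')(x)
    x = Flatten()(x)
    x = Dense(512, activation='relu')(x)
    x = Dropout(0.25)(x)
    s = Dense(num_beam_pairs, activation='softmax')(x)  # Eq. (8): score per beam pair
    return Model([z_c, z_l, z_i], s)

fusion = build_fusion_network()
fusion.compile(optimizer='adam', loss='categorical_crossentropy')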

Fig. 4: Proposed fusion framework. In the training phase, the pipeline is trained offline, and during the distributed inference, the trained model is disseminated over the system.

V-B Distributed Inference Phase

Unlike the training phase, which occurs offline, inference needs to occur in real time. To that end, the MEC receives instantaneous data from the three sensor modalities, which is passed to the trained fusion pipeline for predicting the top-K beam pairs. Since the sensors are not co-located, to accelerate inference, we distribute the ML architecture taking into account the limitations of the channel delivering the sensor data to the MEC. Our distributed inference scheme is illustrated in Fig. 1. The trained base unimodal networks for GPS coordinates and LiDAR are deployed at the vehicle to locally generate the high-level latent embeddings z_c and z_l. The extracted features are then concatenated as z_cl = [z_c ; z_l] and sent over the sub-6 GHz data channel. Similarly, the base (unimodal) network of the image generates the features for this modality at the BS, which are then combined with z_cl at the MEC as z = [z_cl ; z_i]. Note that this methodology results in the same combined latent feature matrix as Eq. (7); we analyze the improvement in end-to-end latency with this distributed inference approach in Sec. VIII. Finally, given the latent feature embeddings of all modalities available at the MEC, we use the fusion network g, followed by a softmax activation, to predict the score of each beam pair according to Eq. (8). Fig. 4 depicts the dissemination of the fusion pipeline over the system.

VI Top-K Beam Pair Construction

The proposed fusion pipeline outputs a softmax score for each of the M possible beam pairs given the different sensor modalities. Recall that our goal is to identify a subset S of K beam pairs such that (i*, j*) ∈ S with high probability. We describe in this section how the neural network outputs are used for that purpose, as well as how we select the parameter K.

VI-A Selection Problem Formulation

Consider the softmax score vector s output by the neural network via Eq. (8). Recalling that s provides a probability distribution for (i*, j*) over B, the top-K beam configuration set of Eq. (5) becomes:

S_K = argmax_{S ⊆ B, |S| = K} Σ_{(i,j) ∈ S} s_(i,j).     (9)

Hence, given the scores s and the parameter K, S_K can be easily constructed by sorting s and identifying the top-K elements.
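Constructing S_K therefore reduces to a single sort of the score vector, as in the following sketch (beam pairs are assumed to be indexed by a flattened index over 32 transmitter and 8 receiver beams):

import numpy as np

def top_k_beam_pairs(scores, k, n_rx_beams=8):
    """Return the K beam pairs (i, j) with the largest softmax scores (Eq. (9))."""
    flat = np.argsort(scores)[::-1][:k]          # indices of the K largest scores
    return [(b // n_rx_beams, b % n_rx_beams) for b in flat]

# Example: pick the top-5 of 256 scores (32 TX beams x 8 RX beams).
scores = np.random.dirichlet(np.ones(256))
print(top_k_beam_pairs(scores, k=5))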

VI-B Selecting K

The parameter K establishes a tradeoff between the probability that the optimal beam pair is in S_K and the time it takes to determine the best (but possibly sub-optimal) beam within S_K. This suggests selecting K by optimizing an objective of the form:

maximize over K:   P((i*, j*) ∈ S_K) − c(K),

where c(K) is a penalty increasing with the latency incurred by the choice of K. We discuss how to set these terms, and the additional constraints we introduce, in this section.

Modeling Probability of Inclusion. A simple way to model the probability of the event (i*, j*) ∈ S_K is via the softmax scores s, as in Eq. (9). We observed however that this tends to overestimate the probability of this event in practice: even if softmax scores are good for selecting the set S_K quickly and efficiently, a more careful approach is warranted when selecting K.

To that end, we leverage the empirical distribution of scores in our training set. In particular, given a score vector s, let

Σ_K(s) = Σ_{b ∈ S_K} s_b     (10)

be the sum of the K largest scores in s. Let x be a sample selected uniformly at random from our training set, let s' be the corresponding softmax output layer associated with x, and let b* be the optimal pair associated with this sample. Then, given a score vector s generated at runtime and the corresponding Σ_K(s), we estimate the probability of the event (i*, j*) ∈ S_K via:

P((i*, j*) ∈ S_K) ≈ P̂_K(s),     (11)
P̂_K(s) = P( b* ∈ S_K(s') | Σ_K(s') ≤ Σ_K(s) ),     (12)

where the probability is w.r.t. the random sample in the dataset. Intuitively, this captures the empirical probability that the optimal pair is in a random top-K set constructed over the training set, conditioned on restricting these sets by bounding the quantity Σ_K(s') to be at most Σ_K(s). In some sense, this allows us to link softmax scores to the variability of confidence in the construction of S_K, which itself depends on different LOS/NLOS conditions, vehicular traffic patterns, etc.; the training set is used to statistically quantify this variability.

We note that Eq. (12) can be computed efficiently via Bayes' rule, without the need to access the training set at runtime. In particular, denoting the event A = {b* ∈ S_K(s')}, P̂_K(s) is equal to:

P̂_K(s) = P(Σ_K(s') ≤ Σ_K(s) | A) · P(A) / P(Σ_K(s') ≤ Σ_K(s)).     (13)

The constituent cumulative distribution functions can be computed directly from the dataset for each K, and then used at runtime.
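The sketch below illustrates our reading of this estimator (variable names are ours); for clarity it evaluates the conditional probability of Eq. (12) directly over the training set rather than through the precomputed CDFs of Eq. (13):

import numpy as np

def sum_top_k(scores, k):
    # Sigma_K(s): sum of the K largest softmax scores (Eq. (10)).
    return np.sort(scores)[::-1][:k].sum()

def inclusion_probability(train_scores, train_optimal, runtime_scores, k):
    """Estimate P(optimal pair in S_K) via the empirical distribution of Eq. (12).

    train_scores : (N, M) softmax outputs over the training set
    train_optimal: (N,)   index of the true optimal beam pair per sample
    """
    sigma_run = sum_top_k(runtime_scores, k)
    sigmas = np.array([sum_top_k(s, k) for s in train_scores])
    # Event A: the optimal pair of a training sample lies in its own top-K set.
    in_top_k = np.array([opt in np.argsort(s)[::-1][:k]
                         for s, opt in zip(train_scores, train_optimal)])
    mask = sigmas <= sigma_run           # condition Sigma_K(s') <= Sigma_K(s)
    if mask.sum() == 0:
        return in_top_k.mean()           # fall back to the unconditional estimate
    return in_top_k[mask].mean()         # empirical conditional probability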

Incorporating Latency. Since the transmitter and receiver sweep all suggested beam pairs in S_K, we include as a second term in the objective the mmWave channel efficiency, defined as:

η(K) = (T_B − T_TB(K)) / T_B,     (14)

with T_B and T_TB(K) being the total time for which a certain beam pair remains valid and the end-to-end latency imposed by our proposed fusion-based beam selection approach, respectively. We precisely analyze the end-to-end latency of our proposed beam selection approach in Sec. VIII. Note that T_TB(K) is an increasing function of K. Hence, the mmWave channel efficiency is a decreasing function of K.

Optimization. Combining the above terms, the final optimization problem we solve to determine K, given a run-time score vector s, is:

K* = argmax_K  P̂_K(s) + λ · η(K)     (15a)
s.t.  η(K) ≥ 0,     (15b)
      K ∈ {1, ..., M}.     (15c)

In Eq. (15), the first term in the objective pushes the algorithm towards higher values of K that ensure alignment, i.e., that the optimum beam pair is included in the suggested beams. On the contrary, the second term avoids selecting unnecessarily high values of K. The control parameter λ in Eq. (15) weights the relative importance of the two terms in the objective function.

Inputs: softmax score vector s generated by the F-DL framework in Sec. V; control parameter λ
Output: S_K*
1: Compute the probability of inclusion P̂_K(s) via Eq. (13)
2: Compute the channel efficiency η(K) via Eq. (14)
3: K* ← argmax_K P̂_K(s) + λ · η(K) (Eq. (15))
4: Construct S_K* according to Eq. (9)
Algorithm 1 Top-K Beam Pair Selection
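Putting the pieces together, Algorithm 1 can be sketched as follows, reusing the helper functions sketched earlier; the beam validity time and the latency model t_e2e_ms(k) are placeholder assumptions supplied by the caller:

import numpy as np

def channel_efficiency(k, t_valid_ms, t_e2e_ms):
    # Eq. (14): fraction of the beam validity time left after selection overhead.
    return (t_valid_ms - t_e2e_ms(k)) / t_valid_ms

def select_k(runtime_scores, train_scores, train_optimal,
             lam, t_valid_ms, t_e2e_ms, k_max=256):
    # Exhaustively evaluate the objective of Eq. (15) over candidate K values.
    best_k, best_obj = 1, -np.inf
    for k in range(1, k_max + 1):
        p_inc = inclusion_probability(train_scores, train_optimal, runtime_scores, k)
        obj = p_inc + lam * channel_efficiency(k, t_valid_ms, t_e2e_ms)
        if obj > best_obj:
            best_k, best_obj = k, obj
    return best_k, top_k_beam_pairs(runtime_scores, best_k)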

VII Dataset Description and DNN Architectures

In this section, we introduce the two datasets that we use to evaluate the F-DL framework. The Raymobtime dataset [25] is one of the most widely used comprehensive multimodal datasets and has been the basis of many state-of-the-art techniques. However, to give more perspective on the applicability of the proposed F-DL architecture, we also collect our own "real-world" multimodal data, which includes real sensors, an urban environment, and RF ground-truth. Further, we detail the preprocessing and implementation steps used in the proposed framework.

VII-A Datasets

VII-A1 Simulation Dataset

The Raymobtime multimodal dataset virtually captures, with high fidelity, a V2X deployment in the urban canyon region of Rosslyn, Virginia for different types of traffic. A static roadside BS is placed at a fixed height, alongside moving buses, cars, and trucks. The traffic is generated using the Simulator for Urban MObility (SUMO) software [29], which allows flexibility in changing the vehicular movement patterns. The image and LiDAR sensor data are collected with Blender [5], a 3D computer graphics software toolkit, and the Blender Sensor Simulation (BlenSor) [17] software, respectively. For a so-called scene, the framework designates one active receiver out of three possible vehicle types, i.e., car, bus, and truck. For each scene, (i) the receiver vehicle collects the LiDAR point cloud and the GPS coordinates, (ii) a camera at the BS takes a picture, and (iii) the channel quality of the different beam pairs is generated using Remcom's Wireless InSite ray-tracing software [36]. The BS and receiver vehicle have uniform linear arrays (ULAs) with element spacing defined in terms of the signal wavelength λ. The number of codebook elements for the BS and the receiver is 32 and 8, respectively, leading to 256 beam pairs. The gap between two consecutive scenes is 30 seconds, which corresponds to a sampling rate of 2 samples/minute. A Python orchestrator is responsible for the data flow across the system to ensure the different software operations are synchronized.

Dataset | # of Samples | LOS | NLOS | NLOS Percentage
S008 | 11194 | 6482 | 4712 | 42%
S009 | 9638 | 1473 | 8165 | 85%
TABLE II: Statistics of the S008 and S009 datasets.

The simulation is repeated for the same scenario with two different traffic rates. We refer to these datasets as S008 and S009, which correspond to regular and rush-hour traffic, respectively. Since there are more vehicles in S009, the number of NLOS cases is higher. Tab. II reports the number of LOS and NLOS cases for both datasets. We use the S008 dataset for training and validation and S009 as the testing set. Fig. 5 illustrates the distribution of the classes over S008 and S009. We observe that the dataset is highly imbalanced, i.e., there is a huge variation in the number of samples per class, a property that is expected due to the sparsity of mmWave links.

(a) S008
(b) S009
Fig. 5: Distribution of S008 and S009 datasets.

VII-A2 Real-World NEU Dataset

This dataset contains multimodal sensor observations collected in the greater metropolitan area of Boston. The experiment setting is an outdoor urban road with two-way traffic, surrounded by high-rise buildings on both sides. An autonomous vehicle equipped with GPS (sampling rate 1 Hz) and Velodyne LiDAR [48] (sampling rate 10 Hz) sensors establishes a connection with a mmWave base station located on a road-side cart. The RF ground-truth is acquired using Talon AD7200 60 GHz mmWave routers with a codebook of 64 beam configurations [43]. Each dataset sample includes the synchronized recordings of the GPS and LiDAR sensors along with the ground-truth RF measurements. The data collection vehicle maintains speeds between 10-20 mph, following the speed limit of inner-city roads. The dataset spans four categories, namely LOS passing and NLOS blockage by a pedestrian, a static car, and a moving car, with 10853 samples (116.7 GB) overall (see Tab. IV). Fig. 6 shows a top view of the experiment setting. The dataset is collected over three days with different levels of humidity and weather conditions. The weather information for the data collection days is presented in Tab. III [53]. In particular, the humidity and maximum wind speed vary between 53-75% and 8-17 mph, respectively, resulting in a rich representation of weather in the dataset and better generalization.

The NEU dataset is collected to expand the feasibility study of the F-DL architecture. However, to resemble the envisioned V2X architecture, the considered framework requires tower-mounted base stations equipped with a camera. As we did not have access to such infrastructure, we collect the NEU dataset with LiDAR and GPS sensors deployed in a car. This does not diminish the applicability of the collected dataset, as the processed fused features from LiDAR and GPS are transmitted from the car to the mmWave base station following the same architecture as shown in Fig. 1. Hence, we argue that the NEU dataset can be considered a solid reference dataset for the beam selection task, considering the scarcity of real datasets for mmWave experiments. We pledge to release the collected real-world dataset to the community upon acceptance of this paper in our public dataset repository [12].

Fig. 6: The NEU dataset collection environment includes four categories: LOS passing, NLOS by pedestrian, NLOS by static car, and NLOS by moving car.
Day | Temperature (F) | Humidity | Max Wind Speed (mph) | Atmospheric Pressure (Hg) | Precipitation (Inches)
1 | 53-75 | 48-74% | 17 | 30.13 | 2.90
2 | 59-67 | 75-87% | 13 | 30 | 3
3 | 56-68 | 54-84% | 8 | 30.37 | 3.10
TABLE III: Weather forecast on the three days of data collection.
Category | Speed (mph) | Scenarios | Samples
LOS passing | 10 | - | 1568
NLOS by pedestrian | 15 | standing; walking right to left; walking left to right | 4791
NLOS by static car | 15 | in front | 1506
NLOS by moving car | 20 | 15 mph same lane; 15 mph opposite lane | 2988
TABLE IV: Summary of the different categories of the NEU dataset.

VII-B Preprocessing

VII-B1 Image

To construct the dataset for the image preprocessing classifier, we manually identify and crop bounding-box samples of buses, cars, trucks, and background, and quantize them by following the steps described in Appendix A. We label these as background (0), bus (1), car (2), and truck (3). The constructed dataset contains 22482 samples per class on average. We then train a classifier as follows. The input crops are first passed to a convolutional layer with 20 filters of kernel size (15, 15), followed by a max-pooling layer with a pool size of (3, 3) and stride of (2, 2). The output is fed to two consecutive dense layers with 128 and 4 neurons (the number of classes). Our trained classifier achieves 84% accuracy in separating the samples of each class. In the Raymobtime dataset, the camera generates (540, 960, 3) RGB images. We empirically choose a window size of 40 and a stride of 3 for our task, which results in an output bit map of size (101, 185). Fig. 7 shows a sample from the dataset and the generated bit map. Note that the multi-object detection algorithm can be easily extended to any type of vehicle by including samples of the new vehicles in the training set [10, 7]. We evaluate the delay cost of image preprocessing in Sec. VIII-A.
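A Keras sketch of the described crop classifier follows (the filter count, kernel size, pooling parameters, and dense widths come from the text above; the activations, crop size, and loss are our assumptions):

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

def build_crop_classifier(window=40, n_classes=4):
    model = Sequential([
        # 20 filters of kernel size (15, 15) on a (window, window, 3) RGB crop.
        Conv2D(20, kernel_size=(15, 15), activation='relu',
               input_shape=(window, window, 3)),
        MaxPooling2D(pool_size=(3, 3), strides=(2, 2)),
        Flatten(),
        Dense(128, activation='relu'),
        Dense(n_classes, activation='softmax'),   # background, bus, car, truck
    ])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model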

(a) Raw image

(b) Generated bit map
Fig. 7: An example of input and output of image preprocessing.

VII-B2 LiDAR

The maximum LiDAR range is set to 100 meters in the Raymobtime dataset, and the zone of space is limited along each axis, with the static base station located at a fixed, known position within this Cartesian coordinate system. Moreover, the histogram bin sizes along the three spatial dimensions are chosen to yield a (20, 200, 10) grid (see Sec. VIII-B). By following the steps described in Sec. IV-B, we generate a compact representation of the environment in which the BS, target vehicle, and obstacles are identified and marked with different indicators. For the NEU dataset, we use a maximum LiDAR range of 80 meters and map the LiDAR point clouds to a compact representation in the same manner.

VII-C Implementation Details

Our proposed fusion pipeline consists of three unimodal networks, one per modality, followed by a fusion network, as presented in Fig. 4. We first design each unimodal network, tuned to each dataset, which takes either raw (for coordinates) or preprocessed (LiDAR and image) data as input and generates the latent embeddings to be fed to the fusion network. For the GPS unimodal network, we design a model that uses 1-D convolutional layers (see Fig. 8(a)). This enables capturing the correlation between latitude and longitude simultaneously. Our custom-designed model for the preprocessed images (see Sec. IV-A) is inspired by ResNet [20], which uses identity connections to avoid the vanishing gradient problem commonly seen in deep architectures by creating a direct path for the gradient during backpropagation. Each such identity block contains 2 convolutional layers and an identity shortcut that skips these 2 layers, followed by a max-pooling layer, as shown in Fig. 8(e). For the LiDAR input, we also design a model structure similar to ResNet (see Fig. 8(c)). Note that while the inputs to the image and LiDAR models are 2D and 3D, respectively, the majority of their elements are zero due to the filtering of irrelevant data during preprocessing. We also use max-pooling layers after the convolutional layers for feature down-sampling, and a dropout of 0.25 after the fully-connected layers to avoid overfitting.
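A minimal sketch of one such identity block in Keras is given below (the filter count and kernel size are illustrative; only the structure, two convolutional layers bypassed by an identity shortcut and followed by max-pooling, follows the text):

from keras.layers import Conv2D, MaxPooling2D, Add, Activation

def identity_block(x, filters, kernel_size=(3, 3)):
    # Two convolutional layers whose output is added back to the block input.
    # Assumes x already has `filters` channels so the shapes match for Add().
    shortcut = x
    y = Conv2D(filters, kernel_size, padding='same', activation='relu')(x)
    y = Conv2D(filters, kernel_size, padding='same')(y)
    y = Add()([y, shortcut])          # identity shortcut (ResNet-style)
    y = Activation('relu')(y)
    return MaxPooling2D(pool_size=(2, 2))(y)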

The representation capacity of each network, including the latent embedding generators, scales with the number of classes in each dataset, 256 and 64 for Raymobtime and NEU, respectively. Though increasing the number of neurons generally improves the representation capacity of the base unimodal architectures, we find that having a number of neurons equal to the number of classes is sufficient for our task. We design a fusion network, as depicted in Fig. 8(d), that takes as input the concatenated latent embeddings of each modality. Ultimately, the last dense layer, with as many neurons as classes, outputs the predicted score of each beam pair. For all models, we use the categorical cross-entropy loss with a batch size of 32 and 100 and 400 training epochs for the Raymobtime and NEU datasets, respectively, with early stopping of patience 10. Moreover, we apply kernel regularizers on the dense layers. We use Adam [23] as the optimizer and initialize the learning rate to 0.0001. We tuned all the hyperparameters on a validation set by holding out 17% and 10% of the training data of each dataset, respectively.
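The training setup described above corresponds roughly to the following sketch (the l1/l2 regularizer strengths, the choice of regularizer type, and the use of validation_split are assumptions; model and data are supplied by the caller):

from keras import regularizers
from keras.callbacks import EarlyStopping
from keras.layers import Dense
from keras.optimizers import Adam

# Dense layers carry kernel regularizers (l1/l2 strengths are placeholders).
regularized_dense = Dense(256, activation='relu',
                          kernel_regularizer=regularizers.l1_l2(l1=1e-5, l2=1e-4))

def train_model(model, X_train, Y_train, epochs=100, val_fraction=0.17):
    # Adam with an initial learning rate of 0.0001, categorical cross-entropy,
    # batch size 32, and early stopping with patience 10 on the held-out split.
    model.compile(optimizer=Adam(lr=1e-4), loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model.fit(X_train, Y_train, batch_size=32, epochs=epochs,
                     validation_split=val_fraction,
                     callbacks=[EarlyStopping(patience=10)])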

(a)  GPS
(b) Image
(c) LiDAR
(d) Fusion
(e) Identity
Fig. 8: Proposed architectures for unimodal and fusion networks.

VIII End-to-End Latency Analysis with Distributed Inference

In this section, we explore the important design details and performance trade-offs related to centralized/distributed inference. Moreover, we answer the following question: What is the end-to-end latency of beam selection with our proposed method?

VIII-A Data Collection and Preprocessing

Current LiDAR sensors support pulse rates, i.e., the number of discrete laser "shots" per second that the LiDAR fires, of 50,000 to 150,000 pulses per second, and 35 cm precision can be achieved at a density of 8 pulses per unit area [6]. The GPS sensor data does not require any preprocessing, and the LiDAR preprocessing has a negligible latency that can be further reduced by exploiting parallel processing. For the image sensor data, we measure the delay of our proposed object detection algorithm described in Appendix A by passing a single sample 100 times and calculating the average time required for generating the bit maps. Overall, our preprocessing pipeline runs in 1.30 ms on average (T_P). Note that the image preprocessing is applied to the Raymobtime dataset only.

VIII-B Sharing Features between Vehicle and MEC

Data collected at the vehicle can incur different relaying costs to the MEC, depending upon the sensor modality. The GPS coordinates, both latitude and longitude, can be expressed in 6 Bytes, while the raw LiDAR point cloud requires 1-1.5 MBytes for a complete transfer. One possible approach is to relay the GPS measurements as is, while subjecting the LiDAR data to the additional preprocessing step discussed in Sec. IV-B. This step maps the raw LiDAR point cloud to a grid representation of size (20, 200, 10) that can be represented with 320 KBytes (78% less than the raw LiDAR point cloud) for the Raymobtime dataset. Using the aforementioned preprocessing reduces the data from 0.9 MByte to 64 KByte for the NEU dataset as well. We can further improve the data transmission speed from the vehicle to the MEC by sending the fused high-level latent embeddings of LiDAR and GPS. Recall that we extract this information at an intermediate layer of the neural network (see Sec. V-A). With our proposed distributed inference design, the raw coordinate and LiDAR data are translated to a compact feature array that is expressed with only 4 KBytes and 1 KByte for the Raymobtime and NEU datasets, respectively (over 99% smaller than the raw data), which is even more compressed and requires less bandwidth on the sub-6 GHz channel.

Tab. V lists the number of bytes and the minimum/maximum experienced delay when transmitting the compressed extracted features of the coordinates and LiDAR over the sub-6 GHz data channel. The achievable throughput is assumed to be 3-27 Mbps and 4.4-75 Mbps for 802.11p [49] and single-input single-output (SISO) LTE [42], respectively.

Additionally, the fused features are difficult to interpret by third parties and provide a level of abstraction over the raw data. From Tab. V, we observe that the data channel delay reduces drastically with distributed inference. Without loss of generality, we use the maximum imposed delay of control signaling from the vehicle to the MEC, 1.332 ms for the Raymobtime and 0.33 ms for the NEU dataset (T_F), to calculate the overall end-to-end latency.

Method | Raymobtime # Bytes | Min Reqd. time (ms) 802.11p / LTE | Max Reqd. time (ms) 802.11p / LTE | NEU # Bytes | Min Reqd. time (ms) 802.11p / LTE | Max Reqd. time (ms) 802.11p / LTE
Preprocessed | 326 KB | 12.07 / 4.34 | 108.66 / 74.09 | 64 KB | 2.37 / 0.85 | 21.33 / 14.55
High-level fused features | 4 KB | 0.148 / 0.053 | 1.332 / 0.90 | 1 KB | 0.037 / 0.013 | 0.33 / 0.225
TABLE V: The time required for sharing the data with the MEC under different data sharing strategies for the Raymobtime and NEU datasets.

VIII-C Inference and Sharing Selected Beams with the Vehicle

In order to evaluate the inference delay, we pass the input data, i.e., the latent embeddings of all modalities, through our pipeline and measure the prediction time by recording timestamps before and after the prediction, which gives the average inference time of our proposed fusion approach. On the other hand, sending the selected beams from the MEC to the vehicle over the sub-6 GHz control channel requires at most 2 KB (256 elements) and 0.5 KB (64 elements) for the Raymobtime and NEU datasets, respectively. Together, the inference and beam-sharing steps result in a cumulative delay, denoted T_I, for each dataset. Similar to the previous section, we consider the highest imposed delay, corresponding to the IEEE 802.11p standard, as our reference.

VIII-D Impact on Beam-Sweeping Latency: Case Study in 5G-NR

We first discuss the time requirement of exhaustive beam search in 5G-NR standard. Next, we calculate the required time for sweeping only the selected beam pairs by following the same norms as 5G-NR standard.

VIII-D1 Beam Selection Latency in 5G-NR

For evaluating a 5G-NR standard-compliant beam selection process in the mmWave band, we consider a transmitter-receiver pair with codebook sizes M_t and M_r, respectively. With analog beamforming, we have a total of M = M_t · M_r combinations (see Sec. III). During the initial access, the gNodeB and the user exchange a number of messages to find the best beam pair. In particular, the gNodeB sequentially transmits synchronization signals (SS) in each codebook element. Meanwhile, the receiver also tunes its array to receive in different codebook elements until all possible beam configurations are swept. The SS transmitted in a certain beam configuration is referred to as an SS block, with multiple SS blocks from different beam configurations grouped into one SS burst. The NR standard defines that the SS burst duration (T_SS) is fixed to 5 ms, transmitted with a periodicity (T_p) of 20 ms [4]. In the mmWave band, a maximum of 32 SS blocks fit within an SS burst, which allows 32 different beam pairs to be explored within one SS burst. Hence, in order to explore all beam pair combinations, a total of M SS blocks need to be transmitted. Given the limit on SS blocks within an SS burst, the total time to explore all M beam pairs (T_5G-NR) can be expressed as:

T_5G-NR = (⌈M/32⌉ − 1) · T_p + ((M − 32 · (⌈M/32⌉ − 1)) / 32) · T_SS,     (16)

where T_p and T_SS correspond to the periodicity and the SS burst duration, respectively. Note that if a certain number of beam pairs are not explored within the first SS burst (i.e., M > 32), there is an increasing delay given the separation between SS bursts. On the other hand, exploring a number of pairs smaller than 32 introduces the same overhead as if a total of 32 options were searched, given that the SS burst has a fixed duration of T_SS. Similarly, this can be extended to any number of beam pairs that is not a multiple of 32.

VIII-D2 Improvement in Latency through the Proposed Approach

Our proposed approach reduces the beam search space from M to a subset of the K most likely beam candidates, derived from Algorithm 1. We recall that the NR standard allows up to 32 beam pairs to be swept within one SS burst. Thus, we define the time to explore one single beam pair as t_b = T_SS / 32. Then, the required time for sweeping the selected top-K beam pairs can be expressed as:

T_S(K) = K · t_b.     (17)
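The latency expressions of Eqs. (16)-(17) can be evaluated with a few lines of code (a sketch; the SS burst duration of 5 ms and periodicity of 20 ms follow the values used above, and Eq. (17) assumes the selected K pairs fit within one burst):

import math

T_SS_MS = 5.0    # SS burst duration (ms)
T_P_MS = 20.0    # SS burst periodicity (ms)
BEAMS_PER_BURST = 32

def exhaustive_sweep_time_ms(m):
    # Eq. (16): time to sweep all m beam pairs across consecutive SS bursts.
    full_bursts = math.ceil(m / BEAMS_PER_BURST) - 1
    remaining = m - full_bursts * BEAMS_PER_BURST
    return full_bursts * T_P_MS + (remaining / BEAMS_PER_BURST) * T_SS_MS

def top_k_sweep_time_ms(k):
    # Eq. (17): sweeping only the K suggested pairs, one beam per T_SS/32 slot.
    return k * (T_SS_MS / BEAMS_PER_BURST)

print(exhaustive_sweep_time_ms(30))   # ~4.69 ms (cf. Table I)
print(exhaustive_sweep_time_ms(60))   # ~24.38 ms (cf. Table I)
print(exhaustive_sweep_time_ms(256))  # all 256 pairs of the Raymobtime setup
print(top_k_sweep_time_ms(10))        # ~1.56 ms for a top-10 subset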

VIII-E End-to-End Latency Calculation

Considering the aforementioned four steps, the overall beam selection overhead of our proposed data fusion approach with distributed inference (T_TB(K)) is expressed as:

T_TB(K) = T_P + T_F + T_I + T_S(K),     (18)

where the first three terms are fixed overheads, independent of K, for the Raymobtime and NEU datasets. Note that distributed inference plays a pivotal role in reducing the overhead associated with sharing the situational state of the vehicle with the MEC (T_F). We validate the improvement in overall beam selection time of the proposed distributed inference approach (Eq. 18) over the traditional brute-force approach of the state-of-the-art 5G-NR standard (Eq. 16) in Sec. IX-E.

IX Results and Discussions

Modalities | Top-1 Acc. | Top-2 Acc. | Top-5 Acc. | Top-10 Acc. | Top-25 Acc. | Top-50 Acc. | Weighted Recall | Weighted Precision | Weighted F1 score | KL divergence
Coordinates | 12.32% | 31.51% | 55.61% | 77.93% | 88.5% | 95.14% | 2% | 12% | 3% | 3.02
Image | 12.39% | 26.84% | 55.38% | 71.65% | 88.05% | 95.01% | 7% | 12% | 3% | 2.9051
LiDAR | 46.23% | 64.67% | 82.43% | 89.95% | 96.11% | 98.13% | 47% | 46% | 45% | 0.1738
Coordinates, Image | 25.76% | 44.88% | 74.18% | 86.29% | 94.78% | 97.89% | 21% | 26% | 22% | 0.5432
Coordinates, LiDAR | 55.42% | 74.54% | 85.51% | 91.41% | 96.75% | 98.56% | 55% | 55% | 54% | 0.1357
Image, LiDAR | 54.52% | 73.08% | 84.83% | 91.23% | 96.78% | 98.50% | 55% | 55% | 54% | 0.1428
Coordinates, Image, LiDAR | 56.22% | 74.08% | 85.53% | 91.11% | 96.56% | 98.60% | 55% | 56% | 55% | 0.1314
TABLE VI: Performance of the proposed unimodal and fusion networks when trained on S008 and tested on S009 (Raymobtime dataset).

In this section, we present the results of our proposed method using the datasets described in Sec. VII-A. We use Keras 2.1.6 with the TensorFlow backend (version 1.9.0) to implement our proposed beam selection approach. The source code for our implementation is available in [40]. To judge the efficiency of the proposed beam selection approach on the multi-class, highly-imbalanced, multimodal Raymobtime [25] and NEU datasets, we use four evaluation metrics that capture the performance from different aspects: top-K accuracy, weighted F1 score, KL divergence, and throughput ratio. We provide the detailed definitions of these metrics in Appendix B. We first analyze the performance of the proposed fusion deep learning method on the Raymobtime dataset, and then further justify the improvement in beam selection accuracy on the real-world NEU dataset in Sec. IX-F.

IX-A Performance of Base Unimodal Architectures

We assess the performance of beam selection when relying on unimodal data only. First, we preprocess the raw image and LiDAR data using the methods proposed in Sec. IV. Then, we normalize the data and feed it to the corresponding base unimodal architectures presented in Fig. 8, followed by a softmax activation at the output layer. The experimental results for predicting the top-K beam pairs are presented in Tab. VI for each proposed unimodal architecture. In the table, we report the top-K (K = 1, 2, 5, 10, 25, 50) accuracy along with the weighted recall, precision and F1 score, and the KL divergence between the predicted and true labels on the Raymobtime dataset. We observe that LiDAR outperforms coordinates and images in all metrics with 46.23% top-1 accuracy, making it the best single modality. Moreover, to justify the improvement achieved by the image preprocessing step described in Appendix A, we compare the weighted recall on raw and preprocessed image data. Interestingly, we observed that with raw images the model always predicts the class with the highest occurrence in the training set, which results in a weighted recall of 0.01%. Intuitively, in the case of raw images, the model cannot find a relation between the input image and the labels, since from a raw image perspective any vehicle captured in the image could be the target receiver. On the other hand, using the image preprocessing step increases the weighted recall to 7%, as presented in Tab. VI.

IX-B Performance of Fusion Framework

The results of fusion on different combinations of unimodal data are presented in Tab. VI for the Raymobtime dataset. We observe that fusion increases the beam prediction accuracy for all combinations. Moreover, the best result is achieved when all modalities are fused together, with an improvement of roughly 10% in top-1 accuracy (56.22% vs. 46.23%) compared with the best unimodal data, i.e., LiDAR. The improvement with fusion can also be observed by tracking the validation accuracy during training. Fig. 9 compares the top-1 validation accuracy of the fusion of all three modalities with LiDAR-only (the best single modality). We observe that although the top-1 validation accuracy of fusion is lower in the early epochs, it outperforms LiDAR after five epochs.

Since the dataset is highly imbalanced, we report results using metrics like weighted precision, recall, and F1 score to confirm the improvement. Furthermore, we use the KL divergence metric to measure the overall performance of the fusion pipeline; the lower the divergence, the greater the similarity between the true and predicted labels. We also use the KL divergence to show the relative entropy between the train (S008) and test (S009) label distributions (shown in Fig. 5). We obtain a KL divergence of 0.57, signifying high relative entropy between the train/test label distributions. From Tab. VI, we observe that the fusion of all unimodal data leads to the lowest KL score. Hence, we deduce that fusion of all three modalities is the most successful scheme for capturing the label distribution of the test set, and we choose the proposed fusion-based approach comprising all three modalities as the beam selector for the rest of the performance evaluation.

Fig. 9: Comparing top-1 validation accuracies of LiDAR-only and fusion with all three modalities on the Raymobtime dataset.

IX-C Studying the Impact of K

To analyze the impact of different values of K on the overall performance, we note that failing to include the optimum beam pair within the suggested subset results in a drop in the received signal power. Hence, we choose the throughput ratio (see Appendix B) as our metric to assess the QoS of the system. Intuitively, the throughput ratio is the ratio of the average throughput achieved when sweeping only the K beam pairs predicted by the model to what could be achieved with an exhaustive search. Fig. 10(a) compares the throughput ratio and normalized beam selection accuracy with K varying from 1 to 30 for the Raymobtime dataset. As expected, both increase with K, since a larger K is more likely to include the optimum beam pair. We observe that the gap between the accuracy and the throughput ratio starts at 16.90% for K=1 and decreases as K increases. We do not observe a significant improvement in throughput ratio beyond a certain K; however, the accuracy keeps improving for larger K. Note that while increasing K improves the quality of service (QoS), it also results in higher beam selection overhead. Hence, it is crucial to balance this tradeoff, as done in the dynamic top-K beam pair selection algorithm proposed in Sec. VI.
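For illustration, the top-K restriction itself reduces to ranking the fusion network's softmax output; the following NumPy sketch (with an assumed 256-entry codebook and a random stand-in for the model output) shows the subset that would be swept.

```python
import numpy as np

def top_k_beam_pairs(softmax_probs, k):
    """Return indices of the K most likely beam pairs, best first.

    softmax_probs: 1-D array whose length equals the codebook size (e.g., 256).
    """
    order = np.argsort(softmax_probs)[::-1]   # sort in descending probability
    return order[:k]

# Example: sweep only the predicted subset instead of all 256 pairs.
probs = np.random.dirichlet(np.ones(256))     # stand-in for a model output
subset = top_k_beam_pairs(probs, k=10)
```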

Fig. 10: (a) Comparison of throughput ratio and beam selection accuracy with varying K. (b) LOS/NLOS accuracy. (c) Analysis of throughput ratio, accuracy and average selected K for different values of the control parameter in Eq. (15).

IX-D Impact of LOS and NLOS

The presence of obstacles leads to massive drops in channel quality, given the high attenuation in the mmWave band; users may thus experience a considerable reduction in a QoS that can otherwise reach tens of Gbps. In the LOS scenario, the corresponding best beam pair distinctively outperforms the others. However, a blockage in the LOS path causes unexpected beams to achieve the highest signal strength through multiple reflections. We show this in Fig. 10(b), which compares the accuracy of our proposed fusion scheme when the test samples are separated into LOS and NLOS cases in the Raymobtime dataset. As expected, prediction for the complex reflections of NLOS links is more challenging, showing a maximum drop of 8.3% in beam selection accuracy relative to LOS scenarios.

IX-E Impact on Beam Selection Speed

As discussed in Sec. VIII-D1, the 5G-NR standard defines a brute-force beam sweeping process that sequentially explores all possible directions. In addition, according to Eq. (16), only up to 32 directions can be explored within one SS burst, which introduces additional waiting time within a single beam selection round. To reduce this overhead, we propose a solution that selects a reduced set of beam pairs and performs a brute-force search only over that set. Furthermore, given that the confidence of our prediction model varies across scenarios, we propose an algorithm that selects K flexibly to avoid unnecessary overhead.

In the Raymobtime dataset, the road length is 200 meters and the BS is located at its midpoint. On the other hand, the 3-dB beam width of a uniform linear array antenna is approximately inversely proportional to the number of elements [37], which determines the angular span of each beam in the transmitter and receiver codebooks. Hence, the overall BS coverage angle and the contact time, i.e., the time that the vehicle remains in the span of one beam, follow from the height of the BS and the velocity of the vehicle [30]. Consequently, the vehicle stays in the coverage region of each beam pair only briefly while moving at 32 km/h (the average speed on urban roads), and the beam selection process needs to be repeated within that interval. In Fig. 10(c), we analyze the impact of the control parameter in Eq. (15) on the throughput ratio, the accuracy, and the average selected K. We observe that all three decrease as the control parameter grows. Intuitively, a larger control parameter gives more weight to the second term in Eq. (15), which forces the algorithm to be faster and choose a lower K, resulting in lower QoS and beam selection accuracy. Interestingly, for the smallest control-parameter values the objective in Eq. (15) mainly maximizes the alignment probability by increasing K, and yet the average selected K does not reach the full 256-entry codebook: our proposed fusion method already achieves close to 100% top-K accuracy at that point, so it does not need to sweep any further beam pairs.
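As a rough, back-of-the-envelope illustration of how such a beam selection deadline arises, the sketch below recomputes a contact time from an assumed array size and BS height together with the 32 km/h vehicle speed; the closest-point-of-approach geometry is a simplification of ours and is not the exact expression adopted from [30] and [37].

```python
import numpy as np

# Illustrative parameters (assumed, not the exact values used in the paper).
n_elements = 16          # ULA elements at the transmitter
bs_height_m = 5.0        # BS mounting height
velocity_mps = 32 / 3.6  # 32 km/h average urban speed

# 3-dB beamwidth of a half-wavelength-spaced ULA, ~0.886*lambda/(N*d) = 1.772/N rad.
beamwidth_rad = 1.772 / n_elements

# Approximate beam footprint length on the road at the closest point of approach,
# and the time the vehicle stays inside it (the "contact time").
footprint_m = 2 * bs_height_m * np.tan(beamwidth_rad / 2)
contact_time_s = footprint_m / velocity_mps
print(f"beamwidth ~{np.degrees(beamwidth_rad):.1f} deg, contact time ~{contact_time_s*1e3:.0f} ms")
```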

The control parameter in Eq. (15) enables us to slide between different accuracy and overhead operating points. Fig. 11 shows that the dynamic selection approach achieves an average throughput ratio of 95.37% and 97.95% while targeting 90% and 95% accuracy, respectively. This implies that the capacity of the proposed F-DL approach is only 4.63% lower than that of the 5G-NR standard when targeting 90% accuracy, for instance. Moreover, the dynamic selection approach incurs the corresponding beam sweeping overhead (Eq. (17)) and overall beam selection delay reported in Fig. 11. Note that the beam selection delay of our proposed dynamic selection method in Fig. 11 corresponds to the end-to-end latency of the proposed F-DL method given in Eq. (18), whereas the 5G-NR standard beam selection procedure requires a far longer exhaustive sweep. Therefore, we observe a 96% reduction in overall beam selection overhead while retaining 97.95% relative throughput at 95% accuracy. Furthermore, we compare the proposed algorithm for constructing the subset (Algorithm 1), which selects K dynamically per sample, with the fixed-K baseline (Fig. 10(a)). Note that the fixed K is a posterior choice derived after observing all test samples, whereas the dynamic selection chooses K for each test sample independently. From this figure, we observe that the proposed dynamic selection approach outperforms the fixed one, providing faster beam selection with closely competing relative throughput while targeting the same accuracy. We use the same standard, i.e., 5G-NR, for a fair comparison (see Fig. 11). Note that our algorithm can be trivially extended to other exhaustive beam search standards, such as IEEE 802.11ad, by modifying Eq. (17); this does not negate the improvement achieved by restricting the beam selection to a lower-dimensional space.
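To make the per-sample selection concrete, the sketch below chooses K by trading the predicted alignment probability against a lambda-weighted sweep cost; this objective is our paraphrase of the spirit of Eq. (15) and Algorithm 1, not a verbatim implementation.

```python
import numpy as np

def dynamic_k(softmax_probs, lam, k_max=256):
    """Pick K maximizing (alignment probability) - lam * (normalized sweep cost)."""
    ranked = np.sort(softmax_probs)[::-1]        # descending probabilities
    cum_prob = np.cumsum(ranked)                 # P(best pair is within the top-K)
    ks = np.arange(1, k_max + 1)
    objective = cum_prob[:k_max] - lam * ks / k_max
    return int(ks[np.argmax(objective)])

# A larger lam yields a smaller K: faster sweeps at the cost of accuracy and QoS.
```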

Fig. 11: Comparison of relative throughput and end-to-end beam selection time (Eq. (18)) of the proposed approaches, dynamic K (Algorithm 1) and fixed K (Eq. (9)), with the 5G-NR standard. The actual beam selection time for 5G-NR is scaled here for better visibility and comparison.

IX-F Real-world Implementation

We validate the performance of the proposed fusion deep learning method on the home-grown NEU dataset. As mentioned in Sec. VII-A2, due to infrastructural limitations we use only the LiDAR and GPS branches of the proposed F-DL (presented in Sec. V, Fig. 4) for this set of experiments. Tab. VII compares the beam selection accuracy using individual sensor inputs against the case where the information from the GPS and LiDAR sensors is fused. We observe that fusion improves the top-1 prediction accuracy from 74.86% for the best single modality, i.e., LiDAR, to 78.18% for the fusion of GPS and LiDAR. The weighted F1 score also increases by 3.6%, denoting better handling of the class imbalance in the ground-truth beam labels, which is common for mmWave beams.
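For intuition, a minimal two-branch late-fusion classifier in PyTorch is sketched below; the layer widths, the flattened LiDAR input, and the 64-way output are illustrative placeholders rather than the exact architecture of Fig. 4.

```python
import torch
import torch.nn as nn

class GpsLidarFusion(nn.Module):
    """Minimal two-branch late-fusion classifier over 64 beam pairs (illustrative sizes)."""
    def __init__(self, lidar_dim=20 * 20 * 20, num_beams=64):
        super().__init__()
        self.gps_branch = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 32))
        self.lidar_branch = nn.Sequential(nn.Linear(lidar_dim, 256), nn.ReLU(), nn.Linear(256, 64))
        self.head = nn.Sequential(nn.Linear(32 + 64, 128), nn.ReLU(), nn.Linear(128, num_beams))

    def forward(self, gps, lidar):
        # Concatenate the latent embeddings of both branches before classification.
        z = torch.cat([self.gps_branch(gps), self.lidar_branch(lidar.flatten(1))], dim=1)
        return self.head(z)   # logits; apply softmax to rank the top-K beam pairs
```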

IX-G Accuracy and End-to-End Latency Analysis

The Raymobtime and NEU datasets have 256 and 64 possible beam pairs, respectively; hence, sweeping the entire codebook incurs the corresponding 5G-NR overheads explained in Sec. VIII-D1. The proposed beam selection method, in contrast, restricts the beam search space to a subset of K beam pairs. We study the trade-off between accuracy and end-to-end beam selection time (Eq. (18)) versus K in Fig. 12 for both datasets. For the Raymobtime dataset, the accuracy saturates while the end-to-end latency keeps increasing with K. For the NEU dataset, the accuracy starts at 78.18% for K=1 and then saturates at larger K, while the beam selection time keeps growing. Overall, Fig. 12 highlights the importance of the K selection method in choosing an appropriate K and avoiding unnecessary overhead imposed on the system.

Fig. 12: Beam selection accuracy and end-to-end beam selection time versus K on the (a) Raymobtime and (b) NEU datasets.
Modalities          | Top-1 Accuracy | Top- Accuracy | Top- Accuracy | Weighted F1 score
Coordinates         | 39.94%         | 54.39%        | 81.05%        | 33.63%
LiDAR               | 74.86%         | 89.04%        | 97.57%        | 75.02%
Coordinates, LiDAR  | 78.18%         | 91.02%        | 98.02%        | 78.62%
TABLE VII: Performance of the proposed unimodal and fusion methods on the real-world NEU dataset.

IX-H Comparison with the State-of-the-art

In Tab. VIII, we compare the performance of our proposed models to the state-of-the-art DL-based approaches by Klautau et al. [24] and Dias et al. [9], both evaluated on the Raymobtime dataset. To the best of our knowledge, these are the only methods that consider scenarios equivalent to the ones in this paper; in particular, LiDAR sensor data collected on vehicles is used for beam prediction under both LOS and NLOS conditions. Other works that consider different evaluation metrics [46, 51, 47, 3], camera images under LOS-only scenarios [1, 45, 55], or RF data [2] are kept out of the comparison. As shown in Tab. VIII, the proposed LiDAR model and the F-DL architecture outperform the state-of-the-art [24, 9] by 18.95-20.45% and 20.11-21.61%, respectively, in top-10 accuracy. Moreover, both Klautau et al. [24] and Dias et al. [9] keep only the codebook elements that are seen at least 100 times in the training set.

Methods                 | Dataset            | # Beams | Modalities        | Inference   | Top-1  | Top-   | Top-   | Top-
Dias et al. [9]         | Raymobtime (S007)  | 264     | LiDAR             | Centralized |        |        |        |
Klautau et al. [24]     | Raymobtime (S008)  | 240     | LiDAR             | Centralized |        |        |        |
Proposed LiDAR Network  | Raymobtime (S008)  | 256     | LiDAR             | Centralized |        |        |        |
Proposed F-DL           | Raymobtime (S008)  | 256     | GPS, Image, LiDAR | Distributed | 56.22% | 74.08% | 85.53% | 91.11%
Proposed F-DL           | NEU                | 64      | GPS, LiDAR        | Distributed | 78.18% | 91.02% | 98.02% | 99.37%
TABLE VIII: Comparison of the proposed best-performing unimodal and F-DL architectures with two benchmark DL-based approaches on the Raymobtime dataset [25], with additional results on the real-world NEU dataset.

IX-I Discussion

We summarize below interesting observations from the experimental results:

  • When LiDAR and GPS sensors are deployed on the vehicle and their data is transmitted to the BS over a sub-6 GHz channel, unreliable wireless conditions may affect the actual delivery of that data at the MEC, whereas cameras at the BS may have a reliable fiber connection to the MEC. Hence, in case of unreliable channel conditions or faulty sensors, our fusion framework is still able to make predictions from whatever sensor modalities remain available. This robustness is valuable even when there is no immediate accuracy gain from fusing a specific modality.

  • The proposed beam selection technique with dynamically chosen K automatically selects the top-K best beam pairs, with performance close to a fixed K when the latter is identified via expert knowledge. Thus, our approach eliminates the need for expert domain knowledge (knowing what K is needed to achieve a certain accuracy) by automating the beam selection process.

  • We show that it is possible to reduce the beam-selection overhead of the practical and emerging 5G-NR standard by 95–96%, while maintaining 97.95% relative throughput.

X Conclusions

Increasing softwarization and the ability to automatically configure parameters [27] within 5G-and-beyond networks will necessitate ML-based methods distributed at the MEC. In this paper, we propose an ML-aided fast beam selection technique in which multimodal non-RF sensor data is exploited to reduce the search space for identifying the best-performing mmWave beam pairs. Our proposed fusion method exploits the latent embeddings from each unimodal feature representation, and the overall framework is evaluated in realistic emulated settings. We observe around a 20-22% increase in top-10 accuracy over the state-of-the-art using the proposed F-DL architecture, and a 95-96% decrease in beam selection time compared to the exhaustive search defined by the 5G-NR standard in high-mobility urban scenarios. In future work, we plan to extend this framework to multiple-receiver scenarios, incorporate federated learning among the sensors, and implement network compression and pruning for feasible deployment on IoT edge devices.

Appendix A Object Detection Algorithm

Our proposed image preprocessing step combines a standard multi-object detection approach with a refinement step in which each detected object is assigned a unique indicator according to its role, i.e., target receiver or obstacle. It consists of a classifier that predicts the presence of objects in small bounding boxes. In the training phase, we separately label examples of the valid items in the environment. We then quantize the samples by filtering the images with a moving square-shaped window; starting from the top-left corner of the image, and after generating the first crop, we shift the window by a fixed stride. This process results in a dataset of cropped samples for each of the possible items in the environment. Since the dimensions of the items vary, we end up with a different number of samples per class. To obtain a balanced dataset, we augment the minority classes by applying different lighting conditions until every class has the same number of samples. We split the final balanced dataset in (70%, 15%, 15%) proportions and train the classifier.

Similarly, in the testing phase, we quantize the image by sweeping it with a window of the same dimension and step size. Next, we feed each crop to the trained classifier and arrange the predictions in the same order as the crops were generated. This process yields a quantized representation of the image in which each element gives the classifier's prediction for the object in the corresponding window. We refer to this representation as the bit map of the raw input camera image; its shape is determined by the input image dimensions, the window size, and the stride.
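A minimal sketch of this bit-map construction is given below; the window size, stride, and classifier interface are assumptions for illustration, and a practical pipeline would batch the crops (or use the fully convolutional variant discussed next) instead of looping.

```python
import numpy as np

def build_bit_map(image, classifier, window=32, stride=16):
    """Slide a window over the image and record the class predicted for each crop.

    image: (H, W, 3) array; classifier: callable mapping a crop to an integer label.
    Returns an array of shape ((H - window)//stride + 1, (W - window)//stride + 1).
    """
    h, w = image.shape[:2]
    rows = (h - window) // stride + 1
    cols = (w - window) // stride + 1
    bit_map = np.zeros((rows, cols), dtype=np.int64)
    for r in range(rows):
        for c in range(cols):
            crop = image[r * stride:r * stride + window, c * stride:c * stride + window]
            bit_map[r, c] = classifier(crop)
    return bit_map
```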

We can refine the bit map further if the specific vehicle type is also transmitted directly by the receiver, e.g., as part of the basic safety message in the IEEE 802.11p standard. Given the generated bit map and the reported type of the target vehicle, we (i) keep the label of the legitimate receiver vehicle type and (ii) map all other vehicles to obstacles. This process designates the potential location of the target receiver, as well as the locations of obstacles, far more explicitly than the raw images do. Finally, to address the concern that the image preprocessing may introduce significant delay, since it requires multiple forward passes, we convert the trained model to an equivalent fully convolutional network. We have previously explored such an approach in [39], which enables us to generate the entire bit map in a single forward pass.

Appendix B Evaluation Metrics

Top-K accuracy calculates the percentage of times that the model includes the correct beam pair among its top-K predictions. Formally, given a Boolean predicate $P$, let $\mathbb{1}[P]$ be 1 if $P$ is true and 0 otherwise. Moreover, given the ground-truth beam pair $t_i$ and the vector of prediction probability scores $\hat{p}_i$ for the $i$-th test sample, top-K accuracy is defined as:

$$\text{Top-}K\ \text{accuracy} = \frac{1}{N}\sum_{i=1}^{N} \mathbb{1}\big[\, t_i \in \operatorname{top-}K(\hat{p}_i) \,\big] \qquad (19)$$

where $N$ denotes the number of test samples. Note that for $K=1$ we recover the conventional top-1 accuracy, where only the highest-probability prediction is taken into account.
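Eq. (19) translates directly into a few lines of NumPy; the array shapes assumed below are for illustration only.

```python
import numpy as np

def top_k_accuracy(probs, true_labels, k):
    """Fraction of samples whose ground-truth beam pair is among the K highest scores.

    probs: (N, num_beams) prediction scores; true_labels: (N,) ground-truth indices.
    """
    top_k = np.argsort(probs, axis=1)[:, ::-1][:, :k]   # K best indices per sample
    hits = (top_k == true_labels[:, None]).any(axis=1)
    return hits.mean()
```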

The F1 score measures a model's ability to perform under an imbalanced class distribution. It is the harmonic mean of precision and recall, $F_1 = 2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}$. Precision denotes how many of the predicted labels are actually in the ground truth, while recall denotes how many of the actual labels are predicted. To combine the per-class F1 scores into a multi-class version, we weight the F1 score of each class by the number of samples in that class; weighted precision and recall are calculated in a similar manner.

KL divergence measures the divergence of the predicted probability distribution from the true one. Given the one-hot encoding of the ground-truth labels $y$ and the prediction $\hat{y}$, the KL divergence is defined as $D_{\mathrm{KL}}(y \,\|\, \hat{y}) = \sum_{i} y_i \log \frac{y_i}{\hat{y}_i}$.

Finally, we evaluate the performance of our fusion-based beam selector in terms of the achieved throughput ratio, defined as the ratio of the total throughput obtained with the best beam pair within the predicted top-K subset to the total throughput obtained with the overall best beam pair (as defined in Sec. III-A and III-B), summed over the $N$ test samples.
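The throughput ratio can be computed analogously once per-sample achievable rates are available; the sketch below assumes a matrix of per-beam-pair rates (e.g., log2(1 + SNR)) for every test sample, which is an assumption about the data layout rather than part of the metric itself.

```python
import numpy as np

def throughput_ratio(rates, predicted_top_k):
    """Ratio of throughput using the best pair within the predicted top-K subset
    to the throughput of the overall best pair, summed over all test samples.

    rates: (N, num_beams) achievable rate per beam pair, e.g. log2(1 + SNR).
    predicted_top_k: (N, K) integer indices of the predicted top-K beam pairs.
    """
    best_in_subset = np.take_along_axis(rates, predicted_top_k, axis=1).max(axis=1)
    best_overall = rates.max(axis=1)
    return best_in_subset.sum() / best_overall.sum()
```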

References

  • [1] M. Alrabeiah, J. Booth, A. Hredzak, and A. Alkhateeb (2020) ViWi Vision-Aided mmWave Beam Tracking: Dataset, Task, and Baseline Solutions. arXiv preprint arXiv:2002.02445. Cited by: §II-B2, §IX-H.
  • [2] M. Alrabeiah, A. Hredzak, and A. Alkhateeb (2020) Millimeter Wave Base Stations with Cameras: Vision-Aided Beam and Blockage Prediction. In 2020 IEEE 91st Vehicular Technology Conference (VTC2020), pp. 1–5. Cited by: §IX-H.
  • [3] J. C. Aviles and A. Kouki (2016) Position-aided mm-wave beam training under nlos conditions. IEEE Access 4, pp. 8703–8714. Cited by: §IX-H.
  • [4] C. N. Barati, S. Dutta, S. Rangan, and A. Sabharwal (2020) Energy and Latency of Beamforming Architectures for Initial Access in mmWave Wireless Networks. Journal of the Indian Institute of Science, pp. 1–22. Cited by: §VIII-D1.
  • [5] (Website) Cited by: §VII-A1.
  • [6] J. Carter, K. Schmid, K. Waters, L. Betzhold, B. Hadley, R. Mataosky, and J. Halleran (2012) An Introduction to LiDAR Technology, Data, and Applications. NOAA Coastal Services Center 2. Cited by: §VIII-A.
  • [7] H. Cheng, X. H. Jiang, Y. Sun, and J. Wang (2001) Color image segmentation: advances and prospects. Pattern recognition 34 (12), pp. 2259–2281. Cited by: §VII-B1.
  • [8] J. Choi, V. Va, N. González-Prelcic, R. Daniels, C. R. Bhat, and R. W. Heath (2016) Millimeter-Wave Vehicular Communication to Support Massive Automotive Sensing. IEEE Communications Magazine 54 (12), pp. 160–167. Cited by: §I.
  • [9] M. Dias, A. Klautau, N. González-Prelcic, and R. W. Heath (2019) Position and LIDAR-aided mmwave Beam Selection using Deep Learning. In 2019 IEEE 20th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), pp. 1–5. Cited by: §II-B3, §II-B3, §IX-H, TABLE VIII.
  • [10] P. F. Felzenszwalb and D. P. Huttenlocher (2004) Efficient graph-based image segmentation. International Journal of Computer Vision 59 (2), pp. 167–181. Cited by: §VII-B1.
  • [11] A. Festag (2015) Standards for Vehicular Communication—from IEEE 802.11 p to 5G. e & i Elektrotechnik und Informationstechnik 132 (7), pp. 409–416. Cited by: §I-A.
  • [12] (Website) Cited by: §VII-A2.
  • [13] M. Giordani, M. Polese, A. Roy, D. Castor, and M. Zorzi (2018) A Tutorial on Beam Management for 3GPP NR at mmWave Frequencies. IEEE Communications Surveys & Tutorials 21 (1), pp. 173–196. Cited by: §I, §III-B.
  • [14] M. Giordani, M. Polese, A. Roy, D. Castor, and M. Zorzi (2019) Standalone and Non-standalone Beam Management for 3GPP NR at mmWaves. IEEE Communications Magazine 57 (4), pp. 123–129. Cited by: §III-A.
  • [15] N. González-Prelcic, A. Ali, V. Va, and R. W. Heath (2017) Millimeter-Wave Communication with Out-of-Band Information. IEEE Communications Magazine 55 (12), pp. 140–146. Cited by: §I-A.
  • [16] N. González-Prelcic, R. Méndez-Rial, and R. W. Heath (2016) Radar Aided Beam Alignment in MmWave V2I Communications Supporting Antenna Diversity. In 2016 Information Theory and Applications Workshop (ITA), pp. 1–7. Cited by: §II-A2.
  • [17] M. Gschwandtner, R. Kwitt, A. Uhl, and W. Pree (2011) BlenSor: blender sensor simulation toolbox. In Advances in Visual Computing, pp. 199–208. Cited by: §VII-A1.
  • [18] M. Hashemi, A. Sabharwal, C. E. Koksal, and N. B. Shroff (2018) Efficient Beam Alignment in Millimeter Wave Systems Using Contextual Bandits. In IEEE INFOCOM 2018-IEEE Conference on Computer Communications, pp. 2393–2401. Cited by: §II-B1.
  • [19] H. He, C. Wen, S. Jin, and G. Y. Li (2018) Deep Learning-Based Channel Estimation for Beamspace mmWave Massive MIMO Systems. IEEE Wireless Communications Letters 7 (5), pp. 852–855. Cited by: §II-B1.
  • [20] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep Residual Learning for Image Recognition. In CVPR, pp. 770–778. Cited by: §VII-C.
  • [21] R. Heinzler, P. Schindler, J. Seekircher, W. Ritter, and W. Stork (2019) Weather influence and classification with automotive lidar sensors. In 2019 IEEE Intelligent Vehicles Symposium (IV), pp. 1527–1534. Cited by: §V-A.
  • [22] Junyi Wang, Zhou Lan, Chang-woo Pyo, T. Baykas, Chin-sean Sum, M. A. Rahman, Jing Gao, R. Funada, F. Kojima, H. Harada, and S. Kato (2009) Beam Codebook Based Beamforming Protocol for Multi-Gbps Millimeter-Wave WPAN Systems. IEEE Journal on Selected Areas in Communications 27 (8), pp. 1390–1399. External Links: Document Cited by: §I.
  • [23] D. P. Kingma and J. Ba (2014) Adam: A Method for Stochastic Optimization. External Links: Link Cited by: §VII-C.
  • [24] A. Klautau, N. González-Prelcic, and R. W. Heath (2019) LIDAR Data for Deep Learning-Based mmWave Beam-Selection. IEEE Wireless Communications Letters 8 (3), pp. 909–912. External Links: Document Cited by: §II-B3, §II-B3, §IV-B, §IX-H, TABLE VIII.
  • [25] A. Klautau, P. Batista, N. González-Prelcic, Y. Wang, and R. W. Heath (2018) 5G MIMO Data for Machine Learning: Application to Beam-selection Using Deep Learning. In 2018 Information Theory and Applications Workshop (ITA), pp. 1–9. Cited by: §VII, TABLE VIII, §IX.
  • [26] L. Kong, M. K. Khan, F. Wu, G. Chen, and P. Zeng (2017) Millimeter-Wave Wireless Communications for IoT-Cloud Supported Autonomous Vehicles: Overview, Design, and Challenges. IEEE Communications Magazine 55 (1), pp. 62–68. Cited by: §I.
  • [27] K. Li, U. Muncuk, M. Y. Naderi, and K. R. Chowdhury (2021) ISense: intelligent object sensing and robot tracking through networked coupled magnetic resonant coils. IEEE Internet of Things Journal 8 (8), pp. 6637–6648. External Links: Document Cited by: §X.
  • [28] X. Liu, W. Liu, H. Ma, and H. Fu (2016) Large-scale vehicle re-identification in urban surveillance videos. In 2016 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6. Cited by: §I-A.
  • [29] P. A. Lopez, M. Behrisch, L. Bieker-Walz, J. Erdmann, Y. Flötteröd, R. Hilbrich, L. Lücken, J. Rummel, P. Wagner, and E. Wiessner (2018) Microscopic Traffic Simulation using SUMO. In International Conference on Intelligent Transportation Systems (ITSC), Vol. , pp. 2575–2582. Cited by: §VII-A1.
  • [30] G. R. Muns, K. V. Mishra, C. B. Guerra, Y. C. Eldar, and K. R. Chowdhury (2019) Beam Alignment and Tracking for Autonomous Vehicular Communication using IEEE 802.11 ad-based Radar. In IEEE INFOCOM 2019-IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), pp. 535–540. Cited by: §IX-E.
  • [31] T. Nitsche, C. Cordeiro, A. B. Flores, E. W. Knightly, E. Perahia, and J. C. Widmer (2014) IEEE 802.11 ad: Directional 60 GHz Communication for Multi-Gigabit-per-Second Wi-Fi. IEEE Communications Magazine 52 (12), pp. 132–141. Cited by: §III-A.
  • [32] T. Nitsche, A. B. Flores, E. W. Knightly, and J. Widmer (2015) Steering with Eyes Closed: mm-Wave Beam Steering without In-band Measurement. In 2015 IEEE Conference on Computer Communications (INFOCOM), pp. 2416–2424. Cited by: §II-A2.
  • [33] J. Perez-Rua, V. Vielzeuf, S. Pateux, M. Baccouche, and F. Jurie (2019-06) MFAS: Multimodal Fusion Architecture Search. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §V-A.
  • [34] C. Premebida, G. Monteiro, U. Nunes, and P. Peixoto (2007) A lidar and vision-based approach for pedestrian and vehicle detection and tracking. In 2007 IEEE intelligent transportation systems conference, pp. 1044–1049. Cited by: §I-A.
  • [35] I. Rasheed, F. Hu, Y. Hong, and B. Balasubramanian (2020) Intelligent Vehicle Network Routing With Adaptive 3D Beam Alignment for mmWave 5G-Based V2X Communications. IEEE Transactions on Intelligent Transportation Systems (), pp. 1–13. Cited by: §I.
  • [36] (Website) Cited by: §VII-A1.
  • [37] M. A. Richards (2014) Fundamentals of Radar Signal Processing. McGraw-Hill Education. Cited by: §IX-E.
  • [38] W. Roh, J. Seol, J. Park, B. Lee, J. Lee, Y. Kim, J. Cho, K. Cheun, and F. Aryanfar (2014) Millimeter-Wave Beamforming as an Enabling Technology for 5G Cellular Communications: Theoretical Feasibility and Prototype Results. IEEE communications magazine 52 (2), pp. 106–113. Cited by: §I.
  • [39] B. Salehi, M. Belgiovine, S. G. Sanchez, J. Dy, S. Ioannidis, and K. Chowdhury (2020) Machine learning on camera images for fast mmwave beamforming. In 2020 IEEE 17th International Conference on Mobile Ad Hoc and Sensor Systems (MASS), pp. 338–346. Cited by: Appendix A.
  • [40] B. Salehi, G. Reus-Muns, D. Roy, Z. Wang, and T. Jian (2021) Beam Selection. Note: https://github.com/Batool-Salehi/beam_selection Cited by: §IX.
  • [41] S. G. Sanchez and K. R. Chowdhury (2020) Robust 60GHz Beamforming for UAVs: Experimental Analysis of Hovering, Blockage and Beam Selection. IEEE Internet of Things Journal. Cited by: §I.
  • [42] S. Sesia, I. Toufik, and M. Baker (2011) LTE-the UMTS Long Term Evolution: From Theory to Practice. John Wiley & Sons. Cited by: §VIII-B.
  • [43] D. Steinmetzer, D. Wegemer, M. Schulz, J. Widmer, and M. Hollick (2017) Compressive Millimeter-Wave Sector Selection in Off-the-Shelf IEEE 802.11ad Devices. International Conference on emerging Networking EXperiments and Technologies (CoNEXT). External Links: Document Cited by: §VII-A2.
  • [44] S. Sur, I. Pefkianakis, X. Zhang, and K. Kim (2017) Wifi-Assisted 60 GHz Wireless Networks. In Proceedings of the 23rd Annual International Conference on Mobile Computing and Networking (MobiCom), pp. 28–41. Cited by: §II.
  • [45] Y. Tian, G. Pan, and M. Alouini (2020) Applying deep-learning-based computer vision to wireless communications: methodologies, opportunities, and challenges. IEEE Open Journal of the Communications Society. Cited by: §IX-H.
  • [46] V. Va, J. Choi, T. Shimizu, G. Bansal, and R. W. Heath (2017) Inverse Multipath Fingerprinting for Millimeter Wave V2I Beam Alignment. IEEE Transactions on Vehicular Technology 67 (5), pp. 4042–4058. Cited by: §II-B2, §IX-H.
  • [47] V. Va, T. Shimizu, G. Bansal, and R. W. Heath (2016) Beam design for beam switching based millimeter wave vehicle-to-infrastructure communications. In 2016 IEEE International Conference on Communications (ICC), pp. 1–6. Cited by: §IX-H.
  • [48] (Website) Cited by: §VII-A2.
  • [49] Q. Wang, S. Leng, H. Fu, and Y. Zhang (2011) An IEEE 802.11p-Based Multichannel MAC Scheme With Channel Coordination for Vehicular Ad Hoc Networks. IEEE Transactions on Intelligent Transportation Systems 13 (2), pp. 449–458. Cited by: §VIII-B.
  • [50] S. Wang, J. Huang, and X. Zhang (2020) Demystifying Millimeter-Wave V2X: Towards Robust and Efficient Directional Connectivity Under High Mobility. In Proceedings of the 26th Annual International Conference on Mobile Computing and Networking (MobiCom), pp. 1–14. Cited by: §II-A1.
  • [51] Y. Wang, A. Klautau, M. Ribero, M. Narasimha, and R. W. Heath (2018) MmWave Vehicular Beam Training with Situational Awareness by Machine Learning. In 2018 IEEE Globecom Workshops (GC Wkshps), Vol. , pp. 1–6. External Links: Document Cited by: §IX-H.
  • [52] Z. Wang, B. Salehi, A. Gritsenko, K. Chowdhury, S. Ioannidis, and J. Dy (2020) Open-World Class Discovery with Kernel Networks. arXiv preprint arXiv:2012.06957. Cited by: §V-A.
  • [53] (Website) Cited by: §VII-A2.
  • [54] Z. Xiao, T. He, P. Xia, and X. Xia (2016) Hierarchical Codebook Design for Beamforming Training in Millimeter-Wave Communication. IEEE Transactions on Wireless Communications 15 (5), pp. 3380–3392. Cited by: §II-A1.
  • [55] W. Xu, F. Gao, S. Jin, and A. Alkhateeb (2020) 3D scene-based beam selection for mmwave communications. IEEE Wireless Communications Letters 9 (11), pp. 1850–1854. Cited by: §IX-H.
  • [56] Y. Yaman and P. Spasojevic (2016) Reducing the LOS Ray Beamforming Setup Time for IEEE 802.11 ad and IEEE 802.15. 3c. In MILCOM 2016-2016 IEEE Military Communications Conference, pp. 448–453. Cited by: §I, §III-B.
  • [57] Yole Développement. External Links: Link Cited by: §I-A.