Security Concerns on Machine Learning Solutions for 6G Networks in mmWave Beam Prediction

6G – sixth generation – is the latest cellular technology, currently under development for wireless communication systems. In recent years, machine learning algorithms have been applied widely in various fields, such as healthcare, transportation, energy, and autonomous cars. These algorithms have also been used in communication technologies to improve system performance in terms of frequency spectrum usage, latency, and security. With the rapid development of machine learning techniques, especially deep learning, it is critical to consider security when applying these algorithms. While machine learning algorithms offer significant advantages for 6G networks, security concerns about Artificial Intelligence (AI) models have so far typically been ignored by the scientific community. However, security is also a vital part of AI algorithms, because the AI model itself can be poisoned by attackers. This paper proposes a mitigation method, based on adversarial learning, for adversarial attacks against proposed 6G machine learning models for millimeter-wave (mmWave) beam prediction. The main idea behind adversarial attacks against machine learning models is to produce faulty results by manipulating trained deep learning models for 6G applications such as mmWave beam prediction. We also present the performance of the adversarial learning mitigation method for 6G security in the mmWave beam prediction application under a fast gradient sign method attack. The mean square error (MSE) of the defended model under attack is very close to that of the undefended model without attack.




1 Introduction

Cellular networks have been the most popular wireless communication technology in the last three decades (1G-2G in the early 1990s, 3G in the early 2000s, 4G in the 2010s, 5G in the 2020s), supporting high data rates over long distances for voice and data. They have been significantly improved over time to meet requirements in terms of data transmission speed and number of users, with the successive versions called 1G, 2G (GSM), 2.5G (GPRS and EDGE), 3G (UMTS), 3.5G (HSPA), 4G (WiMAX and LTE), 4.5G (LTE Advanced Pro), 5G, and 6G. Cellular systems typically operate over land areas, called cells, served by fixed transceiver stations, i.e., base stations (BSs), in various frequency bands from 850 MHz to 95 GHz [18]. The latest cellular technologies (4G/5G/6G) support higher data rates, i.e., approximately 33.88 Mbps, 1,100 Mbps, and 1 Tbps, respectively, with latency on the order of milliseconds. However, they still suffer from congestion and reduced network performance because the frequency spectrum is shared with other mobile users.

Introducing 5G, with its super-fast data speeds, is a breakthrough and a significant transformation in mobile networking and data communication. It offers data transmission speeds 20 times faster than 4G networks and delivers less than a millisecond of data latency [24], [6], [7]. The main difference of 5G is the use of a new digital technology called massive multiple-input multiple-output (MIMO), which uses multiple targeted beams to spotlight users [19], [12]. Authors in [25] investigate several MIMO architectures as well as MIMO and beamforming solutions for 5G technology. According to their results, precise antenna array calibration with large-scale antenna arrays is needed for multi-user MIMO (MU-MIMO). MIMO also enables more devices to be used within the same geographic area: around 4,000 devices per square kilometer for 4G, versus around one million for 5G [5]. 6G is the latest version in this series, following up on 4G and 5G. It promises mobile data speeds 100 times faster and lower latency than the 5G network, i.e., approximately 1 Tbps and 1 ms, respectively. It is still hard to say exactly what 6G will be, but it is clear that, in the near future, 6G will provide connectivity for cars, drones, mobile devices, IoT devices, homes, industries, and many more. One of the fundamental differences of 6G technologies is the use of artificial intelligence (AI) and edge computing to make data communication networks more sophisticated [11], [2]. The benefits of AI algorithms provide novel solutions for massive MIMO systems involving a large number of antennas and beam arrays. In [21], where a beam codeword consists of analogue phase-shifted values applied to the antenna elements to form an analogue beam, base beam selection with deep learning algorithms is proposed, using channel state information for the sub-6 GHz links. In addition to beam prediction, the location and size of vehicles are used to predict the optimal beam pair in [27]. Location-based beamforming solutions are more suitable for line-of-sight (LOS) communication; on the other hand, the same locations with non-line-of-sight (NLOS) transmission need different beamforming solutions.

In the literature, most studies have focused on communication methods to increase cellular technologies' performance, but usually ignore the security and privacy issues of integrating currently emerging AI tools into 6G. It is expected that 6G networks will provide better performance than 5G ones and satisfy emerging services and applications. Authors in [30] introduce AI as a key enabler and provide a comprehensive review of 6G networks, including usage scenarios, requirements, and promising technologies. The review indicates that promising technologies such as blockchain-based spectrum sharing and quantum communications and computing can significantly improve the efficiency and security of 6G's spectrum usage compared with conventional techniques. The study [15] discusses the key trends and AI-powered methodologies for 6G network design and optimization. Authors in [29] introduce and analyze the key technologies and application scenarios brought by 6G networks. However, there is also reason to be concerned about security risks: 6G carries over existing risks and introduces new ones, which must be addressed to ensure its secure and safe use. The study [17] addresses the fundamental principles of 6G security, discusses major technologies related to 6G security, and presents several open issues. Authors in [26] investigate the fundamental security and privacy challenges associated with each key technology and potential application, i.e., real-time intelligent edge, distributed AI, intelligent radio, and 3D intercoms, for 6G networks. The study in [8] proposes a framework incorporating context-awareness in quality of security (QoSec) that leverages physical layer security (PLS) for 6G networks; the framework is used to identify the required security level and to propose adaptive, dynamic, and risk-aware security solutions.
The key component of 6G is the integration of AI, i.e., self-learning architectures based on self-supervised algorithms, to improve the performance of tomorrow's wireless cellular systems [28]. It is expected that a secure AI-powered structure can protect privacy in 6G. However, AI itself may be attacked or abused, resulting in privacy violations. The authors in [14] also indicate that some attackers can simply replace a legitimate model with an already poisoned model prepared ahead of time, i.e., attacking beneficial AI in such a way that the AI works against its own system. The study in [22] provides a comprehensive survey of ML and privacy in 6G, with a view to further promoting the development of 6G and privacy protection technologies. With the use of deep learning (DL) algorithms in 6G's physical layer functions, such as channel estimation, modulation recognition, and channel state information (CSI) feedback, the physical layer faces new challenges caused by adversarial attacks. The authors in [16] investigate the impact of possible adversarial attacks on DL-based CSI feedback. According to their results, an adversarial attack may have a destructive effect on DL-based CSI feedback, and transmitted data can easily be tampered with by malicious attackers using adversarial perturbations, due to the broadcast nature of wireless communication.

To sum up, integrating ML algorithms into 6G and beyond technologies introduces potential security problems. Most studies focus on building ML algorithms for 6G communication problems, while security concerns are ignored. The proposed method in [2] showed promising results for mmWave beam prediction for several base stations (BSs) with multiple users, using deep learning algorithms for different environmental scenarios. On the other hand, none of the proposed deep learning methods can work under attack. Based on this shortcoming in the literature, in this paper we address the security problem of ML applications for beamforming prediction. We consider two research questions: (i) Is the proposed deep learning-based mmWave beam prediction model vulnerable to adversarial attacks? (ii) Can iterative adversarial training mitigate such attacks? To answer these questions, we first implement the beam prediction algorithm using a deep learning model. Second, we attack the beam prediction algorithm with the Fast Gradient Sign Method (FGSM), a basic but powerful attack on deep learning models, and compare the mean square error (MSE) of the undefended deep learning model with that of the model under FGSM attack; the MSE increases by a factor of about 40.14 under attack. Third, we propose an adversarial-training-based mmWave beam prediction model to protect against the FGSM adversarial ML attack. In this manner, in addition to beam prediction, our new ML model learns the attack's noise injection patterns and trains itself with manipulated input data, which is denoted adversarial training.

1.1 Contributions

In this paper, our aim is to make deep learning-based mmWave beam prediction more secure against attacks on the deep learning model. This is important because the future of wireless communication is expected to rely on AI models. Attacks against AI models are different from well-known attacks on wireless physical layer security, which, as the name suggests, exploit properties of the physical layer such as interference, thermal noise, channel information, and jamming. The purpose of an attack on wireless physical layer security is to make the transmitted signal unpredictable in order to decrease the secrecy capacity, so that legitimate users cannot demodulate the transmitted signal. The purpose of attacks against deep learning models, on the other hand, is to manipulate the transmitted data: the attacker imitates the legitimate user. Accordingly, we develop an attack model at the base station that imitates the transmitted signal of the user. We choose one of the most common and powerful attack methods for deep learning models, the Fast Gradient Sign Method (FGSM). This attack model maximizes the loss value of the classifier by adding a modest noise vector. While the traditional FGSM attack only uses real numbers to manipulate data, we modified the FGSM attack model to change both the amplitude and phase values of the transmitted signal using complex numbers. Our main contributions in this paper are listed below:

  • We show that the undefended deep learning-based mmWave beam prediction model is vulnerable to craftily designed adversarial noise.

  • We modified the FGSM attack to manipulate the transmitted signal in the complex domain, perturbing both amplitude and phase values. As a result, the system's achievable rate performance became inoperable.

  • We trained the undefended deep learning-based mmWave beam prediction model with adversarial training using the FGSM attack. As a result, the system's achievable rate performance became very close to that of the undefended model without attack.

We implemented the proposed model with three scenarios: outdoor, outdoor with LOS and blocked users, and indoor. Each scenario is executed under three cases: undefended, undefended under attack, and secure model.
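The complex-domain FGSM modification described in the contributions can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the gradient is assumed to be given, and perturbing the real and imaginary parts of the signal independently is how we read "changing both amplitude and phase values ... with complex numbers".

```python
import numpy as np

def complex_fgsm_perturb(x, grad, eps):
    """Apply an FGSM-style perturbation to a complex-valued input.

    The real and imaginary parts are perturbed independently with the
    sign of the corresponding gradient components, so both the
    amplitude and the phase of x are affected.
    """
    noise = np.sign(grad.real) + 1j * np.sign(grad.imag)
    return x + eps * noise

# Toy example: a two-sample complex "signal" and a made-up gradient.
x = np.array([1.0 + 2.0j, -0.5 + 0.25j])
grad = np.array([0.3 - 0.1j, -2.0 + 4.0j])
x_adv = complex_fgsm_perturb(x, grad, eps=0.01)
```

Note that each real and imaginary component of the adversarial signal moves by exactly ±ε, which is what makes the perturbation hard to notice for small budgets.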

1.2 Organization

The rest of the paper is organized as follows: Section 2 provides background information about adversarial machine learning and adversarial-training-based mitigation methods. Section 3 presents our system overview. Section 4 evaluates the proposed mitigation method for deep learning-based mmWave beam prediction vulnerabilities, and Section 5 concludes this paper.

2 Preliminaries

2.1 Using Machine Learning Models to estimate RF beamforming vectors

Using the benefits of ML algorithms gives a novel solution for massive MIMO channel training and for scanning a large number of narrow beams. The beams depend on environmental conditions, such as user and BS locations, furniture, trees, and buildings. It is too difficult to capture all these environmental conditions in a closed-form equation. A good alternative is to use omni and quasi-omni beam patterns to predict the best RF beamforming vectors; using these beam patterns makes it possible to take into account the reflection and diffraction of the pilot signal. In this paper, we use the machine learning models for mmWave beam prediction from [2], along with their mathematical formulation.

The deep learning solution consists of two stages: training and prediction. First, the deep learning model learns the beams from the omni-received pilots. Second, the model uses what it has learned to predict the RF beamforming vector for the current condition.

2.1.1 Training Steps:

The user sends uplink training pilot sequences for each beam coherence time. The BSs combine the received pilot sequences on the RF beamforming vectors and feed them to the cloud. The cloud uses the received sequences from all the BSs as the input of the deep learning algorithm and computes the achievable rate in (1) for every RF beamforming vector to represent the desired outputs, where the channel coefficients for the omni-beams and for each BS at each subcarrier appear as defined in [2].

2.1.2 Learning Steps:

In this stage, the trained deep learning model is used to predict the RF beamforming vectors. First, the user sends an uplink pilot sequence. The BSs combine these sequences and send them to the cloud. Then, the cloud uses the trained deep learning model to predict the best RF beamforming vectors that maximize the achievable rate for each BS. Finally, the BSs use the predicted RF beamforming vectors to estimate the effective channel.
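The prediction stage above amounts to picking, for each BS, the beamforming codeword with the highest predicted achievable rate. A minimal sketch of that selection step (array shapes and variable names are our own, not from the paper):

```python
import numpy as np

def select_beams(predicted_rates):
    """Pick the best RF beamforming codeword per BS.

    predicted_rates: array of shape (num_bs, num_codewords) holding the
    achievable rate the cloud's model predicts for each candidate beam.
    Returns the index of the rate-maximizing codeword for each BS.
    """
    return predicted_rates.argmax(axis=1)

# Two BSs, three candidate codewords each.
rates = np.array([[0.1, 0.9, 0.3],
                  [0.7, 0.2, 0.4]])
best = select_beams(rates)  # codeword 1 for BS 0, codeword 0 for BS 1
```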

2.2 Attack to Machine Learning Algorithms: Adversarial Machine Learning

Adversarial machine learning is an attack technique that attempts to fool neural network models by supplying craftily manipulated input with a small difference [13]. Attackers apply model evasion attacks for phishing attacks, spam, and executing malware code in an analysis environment [1]. There are also some advantages to attackers in the misclassification and misdirection of models. In such attacks, the attacker does not change the training instances. Instead, he tries to make small perturbations to input instances at the model's inference time so that the new input instance seems safe (i.e., normal behaviour) [10]. We mainly concentrate on this kind of adversarial attack in this study. There are many attack methods for deep learning models, and the Fast Gradient Sign Method (FGSM) is the most straightforward and powerful attack type. We only focus on the FGSM attack, but our solution for preventing this attack can be applied to other adversarial machine learning attacks. FGSM works by utilizing the gradients of the neural network to create an adversarial example that evades the model. For an input instance x, the FGSM utilizes the gradients of the loss value for the input instance to build a new instance that maximizes the loss value of the classifier hypothesis h. This new instance is named the adversarial instance. We can summarize the FGSM as follows:

x_adv = x + ε · sign(∇x ℓ(θ, x, y))

By adding a suitably modest noise vector whose elements are equal to the sign of the gradient of the cost function with respect to the input x, the attacker can easily manipulate the output of a deep learning model. Figure 1 shows the details of the FGSM attack.

Figure 1: FGSM attack steps. The input vector is poisoned with loss maximization direction.
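To make the gradient step concrete, here is a minimal self-contained FGSM sketch on a toy linear regression model with a squared-error loss, where the input gradient is available in closed form. This illustrates the method only; it is not the paper's beamforming model, and the weights and inputs are made up.

```python
import numpy as np

def fgsm(x, y, w, eps):
    """FGSM for a linear model h(x) = w.x with loss (h(x) - y)^2.

    The gradient of the loss with respect to the input x is
    2 * (w.x - y) * w; the adversarial instance moves every input
    feature by eps in the loss-increasing direction.
    """
    grad_x = 2.0 * (w @ x - y) * w
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0])      # fixed "trained" weights
x = np.array([0.5, 0.5])       # legitimate input
y = 0.0                        # target output
x_adv = fgsm(x, y, w, eps=0.1)

loss_clean = (w @ x - y) ** 2      # 0.25
loss_adv = (w @ x_adv - y) ** 2    # 0.64, strictly larger
```

Even with a perturbation of only ±0.1 per feature, the loss more than doubles, which is the effect the attacker exploits.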

2.3 Attack to Training Steps: Adversarial Training

Adversarial training is a widely recommended defense technique that involves generating adversarial instances using the gradient of the victim classifier and then re-training the model with these adversarial instances and their respective labels. This technique has been demonstrated to be efficient in defending models against adversarial attacks.
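A minimal sketch of this re-training loop, again on a toy linear regression model so the FGSM gradient is closed-form. The paper trains a deep network on beamforming data; the model, data, and hyper-parameters here are illustrative assumptions only.

```python
import numpy as np

def adversarial_train(X, y, eps=0.05, lr=0.05, epochs=200):
    """Adversarial training for h(x) = w.x under an FGSM attacker.

    At every epoch, FGSM examples are crafted against the *current*
    weights and mixed into the batch before the gradient step, which
    is the re-training idea described above.
    """
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        residual = X @ w - y
        # FGSM on each training instance (input gradient = 2 * r_i * w)
        X_adv = X + eps * np.sign(2.0 * residual[:, None] * w)
        X_all = np.vstack([X, X_adv])
        y_all = np.concatenate([y, y])
        # one gradient-descent step on the mixed (clean + adversarial) batch
        w -= lr * 2.0 * X_all.T @ (X_all @ w - y_all) / len(y_all)
    return w

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, -0.5]])
y = X @ np.array([1.0, 2.0])           # noiseless linear targets
w_def = adversarial_train(X, y)
mse_clean = np.mean((X @ w_def - y) ** 2)
```

The defended weights still fit the clean data well while having seen the attacker's perturbation pattern during training.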

Let us first consider a common classification problem with training instances x of dimension d and a label space Y. We assume the classifier h has been trained to minimize a loss function ℓ as follows:

min_θ Σ_i ℓ(h_θ(x_i), y_i)     (3)
Given a classifier model h and an input instance x with corresponding output y, an adversarial instance x* is an input such that:

h(x*) ≠ y   subject to   d(x, x*) < ε     (4)

where d(·, ·) is the distance metric between two input instances, the original input x and the adversarial version x*. Most practical adversarial attacks transform Equation (4) into the following optimization problem:

x* = argmax over d(x, x*) < ε of ℓ(h(x*), y)     (5)

where ℓ is the loss function between the predicted output h(x*) and the correct label y. In order to mitigate such attacks, at each training step, the conventional training procedure from Equation (3) is replaced with a min-max objective function that minimizes the expected value of the maximum loss, as follows:

min_θ E[ max over d(x, x*) < ε of ℓ(h_θ(x*), y) ]
3 System Model

A mmWave communication system employs a massive number of antennas with beamforming to control the wave-front direction by weighting the magnitude and phase at each antenna. We assume that each BS has one RF chain, providing an analogue beamforming architecture that is not as expensive and complex as other approaches [3]. The mmWave communication system model is given in Figure 2, with a number of BSs serving one mobile user equipped with a single antenna. A centralized/cloud processing unit is used to connect all BSs and perform the processing.

Figure 2: Block diagram of the mmWave beamforming system.

The downlink received signal at each subcarrier is expressed in terms of the channel vector between each BS and the user, the transmitted complex baseband signal, and additive white Gaussian noise (AWGN) with a given variance at that subcarrier. In the transmitted complex baseband signal of each subcarrier and BS, the data symbol is first precoded by a code vector at each subcarrier on each base station. Then, every BS applies analogue beamforming with a beam steering vector to obtain the downlink transmitted signal. The beam steering vector is defined for each BS antenna in terms of a quantized angle.

To support mobile users, the beamforming vectors must be constantly recalculated within the channel coherence time, which depends on user mobility and channel multi-path components. The beams stay aligned for the beam coherence duration, which is normally shorter than the channel coherence time [23]. The beam coherence duration decreases for users with higher mobility, which leads to a lower data rate for the same beamforming vectors and beam training overhead. Thus, the effective achievable rate in Eq. (10) accounts for this overhead: the beamforming vectors are redesigned during the initial training portion of each beam coherence interval, and the remainder is used for data transmission with the redesigned beamforming vectors.
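The effective achievable rate described above scales the raw rate by the fraction of the beam coherence interval left after training. A small sketch under that reading (the variable names are ours, since the equation's notation did not survive extraction):

```python
def effective_rate(rate, t_train, t_beam):
    """Effective achievable rate: within each beam coherence interval
    of length t_beam, the first t_train is spent redesigning the
    beamforming vectors and only the remainder carries data."""
    assert 0 <= t_train <= t_beam
    return (1.0 - t_train / t_beam) * rate

# Higher mobility -> shorter beam coherence -> lower effective rate.
r_slow = effective_rate(10.0, t_train=0.1, t_beam=1.0)   # ~9.0
r_fast = effective_rate(10.0, t_train=0.1, t_beam=0.25)  # ~6.0
```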

3.1 Adversarial Training

Figure 3 shows the adversarial training process. After the model is trained, adversarial inputs are created using the model itself, combined with legitimate users' information, and added to the training set. When the model reaches a steady state, the training process is completed. In this way, the model both predicts the RF beamforming codeword for legitimate users and, at the same time, is immune to the craftily designed noise added to its input.

Figure 3: The diagram of RF beamforming codeword adversarial training.

3.2 Capability of the Attacker

We assume that the attacker's primary purpose is to manipulate the RF model by applying carefully crafted noise to the input data. In a real-world scenario, this white-box setting is the most desirable choice for an attacker who does not want to risk being caught. The drawback is that it requires the attacker to access the model from outside to generate adversarial examples. After manipulating the input data, the attacker can exploit the RF beamforming codeword prediction model's vulnerabilities in the same manner as in an adversary's sandbox environment. The prediction model mispredicts the adversarial instances when the attacker can convert some of the model's outputs to other outputs (i.e., wrong predictions).

However, to prevent this noise addition from being easily noticed, the attacker must solve an optimization problem to determine which regions of the input data (i.e., the beamforming inputs) must be modified. By solving this optimization problem using one of the available attack methods [1], the attacker aims to reduce the prediction performance on the manipulated data as much as possible. In this study, to limit the maximum allowed perturbation for the attacker, we used a maximum-difference norm, i.e., a limit on the difference between the original and adversarial instances. Figure 4 shows the attack scenario: the attacker takes a legitimate input, creates a noise vector with a budget ε, and sums the input instance and the craftily designed noise to create the adversarial input.

Figure 4: RF Beamforming manipulation process.
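The perturbation budget described above can be enforced by clipping the crafted noise so that no input element moves more than ε from its legitimate value, matching the maximum-difference limit. The function and variable names here are ours, for illustration:

```python
import numpy as np

def project_to_budget(x, x_adv, eps):
    """Project an adversarial candidate back into the attacker's budget:
    every element of the returned instance differs from the legitimate
    input x by at most eps (a maximum-difference constraint)."""
    return x + np.clip(x_adv - x, -eps, eps)

x = np.zeros(3)
candidate = np.array([0.50, -0.02, 0.03])  # first element exceeds the budget
x_adv = project_to_budget(x, candidate, eps=0.05)
```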

4 Experiments

In the experiments, we tested our model with three different cases for three different scenarios. The cases are given as:

  • Case 1: Undefended model: We implement the undefended deep learning-based mmWave beam prediction model, which is vulnerable to attacks.

  • Case 2: Undefended model under FGSM attack: We attack the undefended model with FGSM to obtain the achievable rate of the ML model under attack. This is the worst case for the model and must be overcome.

  • Case 3: Secure model: We adversarially train the deep learning-based mmWave beam prediction model against the FGSM attack.

The outcomes of these three cases allow us to compare the model's performance under attack with the undefended and secure cases. We also implemented the proposed model for different scenarios, including outdoor and indoor environments, with the details below [9]:

Scenario 1 - Outdoor scenario: This is an outdoor scenario of two streets with an intersection, as given in Figure 6. The scenario includes 18 BSs with 16×16 uniform planar arrays (UPAs) and more than one million users, uniformly distributed in 3 user grids, each user with a single dipole antenna. The operating frequency is 60 GHz.

Scenario 2 - Outdoor scenario with LOS and blocked users: This is also an outdoor scenario, with LOS and blocked users, as given in Figure 10. There is a single BS with LOS connections to some of the users and NLOS connections to the others. The operating frequency is 3.5 GHz.

Scenario 3 - Indoor scenario with distributed massive MIMO: This is an indoor room scenario with 64 antennas tiling part of the ceiling at a height of 2.5 m from the floor, as given in Figure 11. The operating frequencies are 2.4 GHz and 2.5 GHz.

Our motivation in these cases is to maximize the system's effective achievable rate under attack, as in Eq. (10). The experiments are performed using Python scripts and the ML libraries Keras, TensorFlow, and Scikit-learn, on the following machine: 2.8 GHz Quad-Core Intel Core i7 with 16 GB of RAM. For all scenarios, two models, undefended and adversarially trained, were built to obtain prediction results. The first model (i.e., the undefended model) is trained without any input poisoning and was used with legitimate users (for Case 1) and adversaries (for Case 2). The second model (i.e., the adversarially trained model) was used under the FGSM attack. The hyper-parameters, such as the number of hidden layers, the number of neurons in the hidden layers, the activation function, the loss function, and the optimization method, are the same for both models.

The model architecture and selected hyper-parameters are given in Table 1 and Table 2, respectively.

Layer type                Layer information

Fully Connected + ReLU
Fully Connected + ReLU    100
Fully Connected + ReLU    100
Fully Connected + TanH    1
Table 1: Model architecture

Parameter        Value
Optimizer        Adam
Learning rate    0.01
Batch Size       100
Dropout Ratio    0.25
Epochs           10
Table 2: Millimeter-wave beam prediction model parameters
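The paper builds this network with Keras; purely as an illustration of Tables 1 and 2, the forward pass can be sketched in plain numpy as below. The width of the first hidden layer is not legible in Table 1, so 100 is an assumption, and dropout (applied only during training) is omitted from this inference-time sketch.

```python
import numpy as np

def build_mlp(in_dim, hidden=100, seed=0):
    """Weights for the Table 1 stack: three Fully Connected + ReLU
    layers followed by a 1-unit Fully Connected + TanH output."""
    rng = np.random.default_rng(seed)
    sizes = [in_dim, hidden, hidden, hidden, 1]
    return [(0.1 * rng.standard_normal((m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    h = x
    for W, b in layers[:-1]:
        h = np.maximum(h @ W + b, 0.0)   # Fully Connected + ReLU
    W, b = layers[-1]
    return np.tanh(h @ W + b)            # Fully Connected + TanH

layers = build_mlp(in_dim=64)
out = forward(layers, np.ones((5, 64)))  # batch of 5 dummy inputs
```

The TanH output keeps every prediction in [-1, 1], so targets would be normalized to that range before training.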

4.1 Research Questions

We consider the following two research questions (RQs):

  • RQ1: Is the deep learning-based RF beamforming codeword predictor vulnerable to adversarial machine learning attacks?

  • RQ2: Is the iterative adversarial training approach an effective mitigation method for adversarial attacks on beamforming prediction?

4.2 RF Beamforming Data Generator

We employed the generic deep learning dataset for millimeter-wave and massive MIMO applications (DeepMIMO) as the data generator in our experiments [4]. We conduct experiments on the mmWave communication and massive MIMO datasets from this publicly available repository, and implemented the proposed mitigation method using the Keras and TensorFlow libraries in the Python environment.

4.3 Results for RQ1

Figure 5 shows the training history of the beamforming prediction model with 35,000 training instances. The model is trained with clean (non-perturbed) instances.

Figure 5: The beamforming prediction model training history.
Figure 6: Scenario 1- Outdoor scenario [9].

Figures 7-9 show the original undefended and the defended model under the FGSM attack. Here, genie-aided coordinated beamforming gives the optimal beamforming vectors with no training overhead, and baseline coordinated beamforming is calculated with conventional communication system tools [2]. According to the simulation results, the deep learning model's predictions are very close to the original values. We used a maximum-difference norm as the distance metric, which bounds the allowable perturbation amount for each item in the input vector. The green area in the figures shows the acceptable range between the optimal and overhead limits. As can be seen from the figures, the predictive performance of the vulnerable models falls below the green zone even for small epsilon values. For the models made robust with adversarial training to show low performance (i.e., to fall below the green zone), the attacker must use a very high epsilon value; a high epsilon value (i.e., more noise) would cause the attacker to be exposed. Therefore, we can say that the adversarial training method protects the deep learning model against the FGSM attack.

(a) Undefended
(b) Defended
Figure 7: Beamforming codeword deep learning model results for Scenario-1 (O1) for different values of .
(a) Undefended
(b) Defended
Figure 8: Beamforming codeword deep learning model results for Scenario-2 (I1_2p5) for different values of .
(a) Undefended
(b) Defended
Figure 9: Beamforming codeword deep learning model results for Scenario-3 (I3_60) for different values of .

According to the results, the undefended RF beamforming codeword prediction model is vulnerable to the FGSM attack: the MSE of the model under attack is approximately 40 times (i.e., 40.14 times) higher.

Figure 10: Scenario 2- Outdoor scenario with LOS and blocked users [9].
Figure 11: Scenario 3- Indoor scenario with distributed massive MIMO [9].

4.4 Results for RQ2

Adversarial training is a popularly advised defense mechanism in which adversarial instances are generated using the victim model's loss function and the model is then re-trained with the newly generated adversarial instances and their respective outputs. This approach has proved effective in protecting deep learning models from adversarial machine learning attacks. Figure 12 shows the MSE performance results for all scenarios under the FGSM attack. According to the figure, the defended (adversarially trained) model's MSE values become steady after a specific epsilon value, whereas the undefended model's MSE values continue to increase.

Figure 12: The performance results for all scenarios.

Table 3 shows the beamforming codeword prediction results for all scenarios for different values of ε. The overhead (lower-limit) values for each scenario are 2.86 for O1, 17.81 for I1_2p5, and 9.41 for I3_60. According to the table, the attacker can manipulate the deep learning model with ε = 0.06 for the O1 scenario: the undefended model's prediction result is 2.85426, which is lower than the overhead value of 2.86. Similarly, the ε values for successful attacks are 0.05 for I1_2p5 and 0.03 for I3_60.

ε    O1 Undef.  O1 Def.  I1_2p5 Undef.  I1_2p5 Def.  I3_60 Undef.  I3_60 Def.
0.00 3.11594 3.00674 18.21316 18.23962 9.79951 9.84233
0.01 3.07667 2.88551 18.16247 18.23461 9.61264 9.70870
0.02 3.03850 2.85323 18.07253 18.22359 9.45940 9.57484
0.03 2.99991 2.83773 17.98412 18.20262 9.22683 9.38894
0.04 2.94373 2.87103 17.90667 18.17553 9.06634 9.26890
0.05 2.90929 2.83736 17.80715 18.14055 9.02542 9.15959
0.06 2.85426 2.89408 17.77563 18.09840 8.89587 9.12542
0.07 2.81551 2.83269 17.69611 18.05923 8.84408 9.02040
0.08 2.81535 2.89524 17.61604 18.03128 8.73619 9.05983
0.09 2.76199 2.80589 17.57838 17.98539 8.80412 9.12413
0.10 2.75456 2.86601 17.47225 17.94672 8.68579 9.11946
0.20 2.65541 2.83044 17.01810 17.83537 8.61330 9.51652
0.30 2.58553 2.84275 16.61877 17.71839 8.62319 9.53898
0.40 2.56208 2.84106 16.36651 17.68828 8.56926 9.64404
0.50 2.57365 2.84363 16.26341 17.61559 8.61028 9.77017
Table 3: Beamforming codeword prediction results for all scenarios for different values of ε.
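The successful-attack thresholds quoted above can be read mechanically from Table 3: the smallest ε at which the undefended prediction drops below the scenario's overhead value. A sketch using the undefended O1 column of the table:

```python
def first_successful_eps(eps_values, predictions, overhead):
    """Return the smallest eps whose undefended prediction falls
    below the overhead (lower-limit) value, or None if the attack
    never succeeds within the tested budget."""
    for eps, pred in zip(eps_values, predictions):
        if pred < overhead:
            return eps
    return None

# Undefended O1 column of Table 3 (eps 0.00 .. 0.10).
eps_values = [0.00, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.10]
o1_undef = [3.11594, 3.07667, 3.03850, 2.99991, 2.94373,
            2.90929, 2.85426, 2.81551, 2.81535, 2.76199, 2.75456]
eps_star = first_successful_eps(eps_values, o1_undef, overhead=2.86)  # 0.06
```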

4.5 Threats to Validity

A key external validity threat relates to the generalization of results [20]. We used only the RF beamforming dataset in our experiments, and we need more case studies to generalize the results. However, the dataset does reflect different types of millimeter-wave beams.

Our key construct validity threat relates to the selection of the attack type, FGSM. Nevertheless, note that this attack comes from the literature [20] and has been applied in several deep learning usage domains. In the future, we will conduct dedicated empirical studies to investigate more adversarial machine learning attacks systematically.

Our main conclusion validity threat concerns finding the best attack budget ε for manipulating the legitimate user’s signal to poison the beamforming prediction model. To mitigate this threat, we repeated each experiment 20 times to reduce the probability that the results were obtained by chance. In standard neural network training, all weights are initialized uniformly at random and then updated through optimization to fit the problem. Because training starts from a random initialization, the optimization may become stuck in a local minimum. To reduce this risk, we repeated the training 20 times to find the ε value that gives the best attack result. In each repetition, the weights were initialized uniformly at random with different values, so if the optimization failed to find the global minimum in one run, it is likely to find it in another run that starts from a different initialization.
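The restart procedure above can be sketched as a generic multi-restart loop; `train_model` and its mock loss surface are stand-ins, not the paper’s actual training pipeline:

```python
import random

def train_model(seed):
    """Stand-in for one training run: returns (seed, objective_value).

    The 'model' is just the seed and the objective is a mock loss surface
    with several local minima, to illustrate only the restart logic.
    """
    rng = random.Random(seed)
    x = rng.uniform(-2, 2)                  # mock random weight initialization
    score = (x * x - 1) ** 2 + 0.1 * x      # mock non-convex objective
    return seed, score

# Repeat training 20 times with different random initializations and keep
# the run that gives the best (lowest) objective, as described above.
runs = [train_model(seed) for seed in range(20)]
best_seed, best_score = min(runs, key=lambda r: r[1])
print(best_seed, best_score)
```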

5 Conclusions and Future Works

This study highlights cyber-security issues related to the vulnerabilities of RF beamforming codeword prediction models by addressing the following research questions: (1) Is the deep learning-based RF beamforming codeword predictor vulnerable to adversarial machine learning attacks? (2) Is the iterative adversarial training approach a viable mitigation method for adversarial attacks on beamforming prediction? We performed experiments with the DeepMIMO O1, I1_2p5, and I3_60 ray-tracing scenarios to answer these questions. Our results confirm that the original model is vulnerable to a modified FGSM-type attack, and that iterative adversarial training is an effective mitigation method. Our empirical results also show that iterative adversarial training increases the RF beamforming prediction performance and yields a more accurate predictor. As future work, the outcomes of this study can be further developed in other studies to gain more insight into the field of 6G, where adversarial ML-based cyber-security issues will become increasingly important.
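The iterative adversarial training mitigation referred to above can be sketched generically: in each iteration, craft FGSM examples against the current model and retrain on a mixture of clean and adversarial samples. Below, a least-squares linear model stands in for the beam predictor, and all names are illustrative assumptions rather than the paper’s implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                   # toy inputs
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=200)     # noisy toy targets

def fgsm(X, w, y, eps):
    # Sign-of-gradient perturbation of the inputs for squared-error loss.
    grad = 2.0 * (X @ w - y)[:, None] * w[None, :]
    return X + eps * np.sign(grad)

def fit(X, y):
    # Least-squares fit stands in for one round of model training.
    return np.linalg.lstsq(X, y, rcond=None)[0]

w = fit(X, y)
for _ in range(5):                       # iterative adversarial training
    X_adv = fgsm(X, w, y, eps=0.1)       # attack the current model
    X_mix = np.vstack([X, X_adv])        # mix clean and adversarial data
    y_mix = np.concatenate([y, y])
    w = fit(X_mix, y_mix)                # retrain on the mixture

mse_clean = np.mean((X @ w - y) ** 2)
mse_adv = np.mean((fgsm(X, w, y, 0.1) @ w - y) ** 2)
print(mse_clean, mse_adv)
```

The design choice mirrored here is that each retraining round sees attacks generated against the *current* model, so the defense tracks the attacker rather than a fixed perturbation set.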


References

  • [1] M. Aladag, F. O. Catak, and E. Gul (2019) Preventing data poisoning attacks by using generative models. In 2019 1st International Informatics and Software Engineering Conference (UBMYK), pp. 1–5.
  • [2] A. Alkhateeb, S. Alex, P. Varkey, Y. Li, Q. Qu, and D. Tujkovic (2018) Deep learning coordinated beamforming for highly-mobile millimeter wave systems. IEEE Access 6, pp. 37328–37348.
  • [3] A. Alkhateeb, J. Mo, N. Gonzalez-Prelcic, and R. W. Heath (2014) MIMO precoding and combining solutions for millimeter-wave systems. IEEE Communications Magazine 52 (12), pp. 122–131.
  • [4] A. Alkhateeb (2019) DeepMIMO: a generic deep learning dataset for millimeter wave and massive MIMO applications. arXiv:1902.06435.
  • [5] F. Boccardi, R. W. Heath, A. Lozano, T. L. Marzetta, and P. Popovski (2014) Five disruptive technology directions for 5G. IEEE Communications Magazine 52 (2), pp. 74–80.
  • [6] E. Catak and L. Durak-Ata (2016) Waveform design considerations for 5G wireless networks. Towards 5G Wireless Networks – A Physical Layer Perspective, pp. 27–48.
  • [7] E. Catak and L. Durak-Ata (2017) Adaptive filterbank-based multi-carrier waveform design for flexible data rates. Computers & Electrical Engineering 61, pp. 184–194.
  • [8] A. Chorti, A. N. Barreto, S. Kopsell, M. Zoli, M. Chafii, P. Sehier, G. Fettweis, and H. V. Poor (2021) Context-aware security for 6G wireless: the role of physical layer security. arXiv preprint arXiv:2101.01536.
  • [9] (2021) Website.
  • [10] O. Faruk Tuna, F. Ozgur Catak, and M. Taner Eskil (2021) Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples. arXiv preprint arXiv:2102.04150.
  • [11] T. Huang, W. Yang, J. Wu, J. Ma, X. Zhang, and D. Zhang (2019) A survey on green 6G network: architecture and technologies. IEEE Access 7, pp. 175758–175768.
  • [12] V. Jungnickel, K. Manolakis, W. Zirwas, B. Panzner, V. Braun, M. Lossow, M. Sternad, R. Apelfröjd, and T. Svensson (2014) The role of small cells, coordinated multipoint, and massive MIMO in 5G. IEEE Communications Magazine 52 (5), pp. 44–51.
  • [13] A. Kurakin, I. Goodfellow, and S. Bengio (2016) Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236.
  • [14] M. Kuzlu, C. Fair, and O. Guler (2021) Role of artificial intelligence in the internet of things (IoT) cybersecurity. Discover Internet of Things 1 (1), pp. 1–14.
  • [15] K. B. Letaief, W. Chen, Y. Shi, J. Zhang, and Y. A. Zhang (2019) The roadmap to 6G: AI empowered wireless networks. IEEE Communications Magazine 57 (8), pp. 84–90.
  • [16] Q. Liu, J. Guo, C.-K. Wen, and S. Jin (2020) Adversarial attack on DL-based massive MIMO CSI feedback. Journal of Communications and Networks 22 (3), pp. 230–235.
  • [17] Y. Lu (2020) Security in 6G: the prospects and the relevant technologies. Journal of Industrial Integration and Management 5 (03), pp. 271–289.
  • [18] T. S. Rappaport, S. Sun, R. Mayzus, H. Zhao, Y. Azar, K. Wang, G. N. Wong, J. K. Schulz, M. Samimi, and F. Gutierrez (2013) Millimeter wave mobile communications for 5G cellular: it will work!. IEEE Access 1, pp. 335–349.
  • [19] W. Roh, J. Seol, J. Park, B. Lee, J. Lee, Y. Kim, J. Cho, K. Cheun, and F. Aryanfar (2014) Millimeter-wave beamforming as an enabling technology for 5G cellular communications: theoretical feasibility and prototype results. IEEE Communications Magazine 52 (2), pp. 106–113.
  • [20] P. Runeson, M. Höst, R. Austen, and B. Regnell (2012) Case study research in software engineering – guidelines and examples. John Wiley and Sons Inc., United States. ISBN 978-1-118-10435-4.
  • [21] M. S. Sim, Y. Lim, S. H. Park, L. Dai, and C. Chae (2020) Deep learning-based mmWave beam selection for 5G NR/6G with sub-6 GHz channel information: algorithms and prototype validation. IEEE Access 8, pp. 51634–51646.
  • [22] Y. Sun, J. Liu, J. Wang, Y. Cao, and N. Kato (2020) When machine learning meets privacy in 6G: a survey. IEEE Communications Surveys & Tutorials 22 (4), pp. 2694–2724.
  • [23] V. Va, J. Choi, and R. W. Heath (2017) The impact of beamwidth on temporal channel variation in vehicular channels and its implications. IEEE Transactions on Vehicular Technology 66 (6), pp. 5014–5029.
  • [24] H. Viswanathan and P. E. Mogensen (2020) Communications in the 6G era. IEEE Access 8, pp. 57063–57074.
  • [25] F. W. Vook, A. Ghosh, and T. A. Thomas (2014) MIMO and beamforming solutions for 5G technology. In 2014 IEEE MTT-S International Microwave Symposium (IMS2014), pp. 1–4.
  • [26] M. Wang, T. Zhu, T. Zhang, J. Zhang, S. Yu, and W. Zhou (2020) Security and privacy in 6G networks: new areas and new challenges. Digital Communications and Networks 6 (3), pp. 281–291.
  • [27] Y. Wang, A. Klautau, M. Ribero, M. Narasimha, and R. W. Heath (2018) MmWave vehicular beam training with situational awareness by machine learning. In 2018 IEEE Globecom Workshops (GC Wkshps), pp. 1–6.
  • [28] Y. Xiao, G. Shi, Y. Li, W. Saad, and H. V. Poor (2020) Toward self-learning edge intelligence in 6G. IEEE Communications Magazine 58 (12), pp. 34–40.
  • [29] C. Yizhan, W. Zhong, H. Da, and L. Ruosen (2020) 6G is coming: discussion on key candidate technologies and application scenarios. In 2020 International Conference on Computer Communication and Network Security (CCNS), pp. 59–62.
  • [30] Z. Zhang, Y. Xiao, Z. Ma, M. Xiao, Z. Ding, X. Lei, G. K. Karagiannidis, and P. Fan (2019) 6G wireless networks: vision, requirements, architecture, and key technologies. IEEE Vehicular Technology Magazine 14 (3), pp. 28–41.