
A Robust Adversarial Network-Based End-to-End Communications System With Strong Generalization Ability Against Adversarial Attacks

03/03/2021
by   Yudi Dong, et al.
Oklahoma State University
Stevens Institute of Technology

We propose a novel defensive mechanism based on a generative adversarial network (GAN) framework to defend against adversarial attacks in end-to-end communications systems. Specifically, we utilize a generative network to model a powerful adversary and enable the end-to-end communications system to combat the generative attack network via a minimax game. We show that the proposed system not only works well against white-box and black-box adversarial attacks but also possesses excellent generalization capabilities to maintain good performance under no attacks. We also show that our GAN-based end-to-end system outperforms the conventional communications system and the end-to-end communications system with/without adversarial training.



Code repository: GAN-based-E2E-communications-system-for-defense-against-adversarial-attack

I Introduction

Deep neural networks (DNNs) bring wireless communications into a new era of deep learning and artificial intelligence. One insightful idea is end-to-end learning of communications systems [o2017introduction], which re-designs the physical layer by employing a neural network in place of multiple independent blocks at the transmitter and the receiver. In particular, an autoencoder architecture [goodfellow2016deep] is used for end-to-end communications, where an encoder neural network (NN) at the transmitter and a decoder NN at the receiver replace the conventional signal processing tasks. By jointly training the transmitter NN and the receiver NN, the end-to-end communications system can achieve global optimization and considerable performance improvements [o2017introduction].

However, neural networks have an inherent vulnerability to adversarial attacks [goodfellow2014explaining]: a neural network model can be driven to a false output by adding a small perturbation to its input. Such a perturbation, called an adversarial perturbation, is an elaborately designed vector based on the receptive fields of the inputs to the neural network model. This vulnerability threatens the robustness and security of almost all deep learning-based systems, including end-to-end learning based communications systems. A recent work [sadeghi2019physical] investigates adversarial attacks against autoencoder end-to-end communications systems, crafting universal adversarial perturbations using a fast gradient method (FGM) [goodfellow2014explaining]. By leveraging the broadcast nature of the wireless channel, attackers can inject adversarial perturbations into the input of the receiver NN, which causes a significantly more negative impact on end-to-end learning based systems than on conventional communications systems [sadeghi2019physical].

A direct defensive method against adversarial attacks is to train the end-to-end system with adversarial perturbations, which is called adversarial training [goodfellow2014explaining]. However, adversarial training only works for the specific adversarial perturbations that were included in the training; against new and varied adversarial perturbations, it may provide no defense at all [tramer2019adversarial]. Moreover, adversarial training degrades the generalization ability of neural networks [raghunathan2019adversarial], which can lead to poor performance on unperturbed/clean inputs. Therefore, a more effective defense mechanism is desired for robust deep learning of end-to-end communications systems.

To this end, in this paper, we propose to integrate the GAN framework [NIPS2014gan] into the autoencoder based end-to-end communications system to defend against various adversarial attacks. We utilize a generative network as an adversary to generate adversarial perturbations that can fool the receiver NN into recovering a false message. By leveraging the great computational capacity of neural networks, the generative network can generate varied and powerful perturbations. Meanwhile, the discriminative network is the decoder NN of the end-to-end system, which is responsible for recovering the correct message from both the clean signal and the signal perturbed by the generative network. The generative network and the discriminative network are trained in a confrontation game, in which the generative network becomes a powerful adversary while the discriminative network (i.e., the decoder NN) becomes a robust defender.

The main contributions of our paper are as follows.

  • This work is the first to address the security and robustness issues induced by adversarial attacks in end-to-end communications systems: we build a robust and defensive GAN-based end-to-end communications system by jointly and adversarially training an autoencoder network against a generative attack network.

  • Unlike adversarial training, which struggles to achieve defense and generalization capacity simultaneously, the proposed approach can effectively defend against various adversarial attacks, including white-box and black-box attacks, while retaining excellent generalization performance with low error rates on clean inputs.

  • Consensus optimization is utilized in the training of the proposed end-to-end system, which ensures a stable and impartial minimax game for training a defensive end-to-end communications system.

II Preliminaries

In this section, we introduce the preliminary studies regarding autoencoder based end-to-end communications systems and adversarial attacks. Also, we discuss the attack model and the method of crafting adversarial perturbations for attacking an end-to-end communications system.

Fig. 1: Illustration of an end-to-end autoencoder communications system.

II-A Autoencoder Based End-to-End Communications System

Fig. 1 illustrates a typical end-to-end autoencoder communications system [o2017introduction], which is implemented in this paper. Specifically, a message $s$ that needs to be transmitted is chosen from a message set $\mathcal{M} = \{1, 2, \ldots, M\}$, where $M = 2^k$ and $k$ is the number of bits per message. The message $s$ is first preprocessed as a one-hot binary vector $\mathbf{1}_s \in \mathbb{R}^M$, whose $s$-th element is equal to one and all others are zero. The one-hot message then goes through the encoder NN to perform a mapping $f_{\theta_E}: \mathcal{M}_{oh} \to \mathbb{R}^{2n}$, which generates the output signal $\mathbf{x} = f_{\theta_E}(\mathbf{1}_s)$, where $f_{\theta_E}$ refers to the encoder model parameterized by $\theta_E$, $\mathcal{M}_{oh}$ is the message set after one-hot encoding, $n$ refers to the number of channel uses, and the $2n$ real dimensions are a concatenation of the real and imaginary parts of the transmitted signal. Considering the hardware constraints of a transmitter, we restrict the energy of the transmitted signal as $\|\mathbf{x}\|_2^2 \leq n$. Next, an additive white Gaussian noise (AWGN) channel is used for the transmission of $\mathbf{x}$, yielding the received signal $\mathbf{y} = \mathbf{x} + \mathbf{n}$, where $\mathbf{n}$ is the noise term. We assign the fixed variance $\sigma^2 = (2 R E_b/N_0)^{-1}$ in the AWGN channel, where $R = k/n$, computed from the bit number $k$ and the channel uses $n$, is the data rate of our communications system and $E_b/N_0$ is the energy per bit to noise power spectral density ratio. Finally, the decoder NN performs a mapping $g_{\theta_D}: \mathbb{R}^{2n} \to \mathcal{M}$ to recover the estimated message $\hat{s}$, where $g_{\theta_D}$ is the decoder model parameterized by $\theta_D$. In particular, the softmax layer of the decoder NN generates a probability vector $\mathbf{p} \in (0, 1)^M$, and the estimated message $\hat{s}$ is set as the index of the highest value in the output vector $\mathbf{p}$.
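As a concrete illustration of the signal model above, the following minimal NumPy sketch (not the authors' implementation; the linear encoder weights `W` are a random stand-in for a trained encoder NN) shows the one-hot mapping, the energy normalization, and the AWGN noise variance $\sigma^2 = (2RE_b/N_0)^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(0)

k, n = 4, 7                  # bits per message, channel uses (as in the Hamming(7,4) comparison)
M = 2 ** k                   # message set size
R = k / n                    # data rate in bits per channel use

def one_hot(s):
    """Map a message index s in {0, ..., M-1} to a one-hot vector."""
    v = np.zeros(M)
    v[s] = 1.0
    return v

def encode(s, W):
    """Toy linear 'encoder NN': one-hot -> 2n real symbols, scaled so ||x||^2 = n."""
    x = W @ one_hot(s)
    return np.sqrt(n) * x / np.linalg.norm(x)

def awgn(x, ebno_db):
    """AWGN channel with noise variance sigma^2 = (2 * R * Eb/N0)^-1."""
    ebno = 10 ** (ebno_db / 10)
    sigma = np.sqrt(1.0 / (2 * R * ebno))
    return x + sigma * rng.normal(size=x.shape)

W = rng.normal(size=(2 * n, M))   # random stand-in for trained encoder weights
y = awgn(encode(3, W), ebno_db=7.0)
print(y.shape)                    # (14,) -- the 2n real dimensions seen by the decoder
```

A trained decoder $g_{\theta_D}$ would then map `y` back to a message; here only the shapes and the energy constraint are exercised.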

II-B Adversarial Attacks

Fig. 2: Adversarial attacks against an end-to-end autoencoder communications system.

Neural networks have a natural vulnerability to adversarial attacks, where an input with adversarial perturbations can lead a well-trained neural network to output a wrong answer with high confidence [goodfellow2014explaining]. An adversarial perturbation is a carefully crafted vector or matrix with small values, which are imperceptible to humans but to which neural networks are highly sensitive. Due to this property, the security and robustness of deep learning-based systems are compromised by adversarial attacks. In our case, an autoencoder based end-to-end communications system can be easily fooled by physical adversarial attacks [sadeghi2019physical]. As shown in Fig. 2, attackers can leverage the broadcast nature of the channel and emit an interfering adversarial perturbation signal into the channel. The perturbed received signal forces the decoder NN to produce an incorrect output. Under adversarial attacks, autoencoder communications systems suffer more significant performance degradation than conventional communications systems [sadeghi2019physical]. According to the knowledge available to the attacker, adversarial attacks can be divided into white-box attacks and black-box attacks [yuan2019adversarial]. In white-box attacks, an attacker has complete knowledge of the NN model. In black-box attacks, attackers only observe the output of the decoder model but have no information about the NN model itself.

II-C Attack Model: Crafting Adversarial Perturbations

Fig. 3: BLER performance comparison of the autoencoder end-to-end system and conventional scheme (BPSK modulation with Hamming coding) under adversarial attacks.
Fig. 4: The proposed adversarial network based approach for robust end-to-end communications system.

To perform white-box attacks on the decoder NN model $g_{\theta_D}$ that generates the estimated message $\hat{s}$, we need to find an adversarial perturbation $\boldsymbol{\delta}$ such that the perturbed input $\mathbf{y} + \boldsymbol{\delta}$ results in an incorrect output, which is described as

$$g_{\theta_D}(\mathbf{y} + \boldsymbol{\delta}) \neq g_{\theta_D}(\mathbf{y}), \quad \text{s.t. } \|\boldsymbol{\delta}\|_2 \leq \epsilon. \qquad (1)$$

To solve the problem (1) of generating adversarial perturbations, the FGM method [goodfellow2014explaining] is commonly used to obtain an optimal $\ell_2$-norm constrained perturbation,

$$\boldsymbol{\delta} = \epsilon \, \frac{\nabla_{\mathbf{y}} L(g_{\theta_D}(\mathbf{y}), s)}{\|\nabla_{\mathbf{y}} L(g_{\theta_D}(\mathbf{y}), s)\|_2}, \qquad (2)$$

where $\epsilon$ is a small scaling coefficient, $L$ denotes the loss function, and $\nabla_{\mathbf{y}} L$ is the gradient of the loss function with respect to the input $\mathbf{y}$. However, FGM requires knowledge of the message $s$, which is unknown during the transmission process. Therefore, Sadeghi et al. introduce an input-agnostic FGM [sadeghi2019physical] to generate a universal perturbation that works for all messages in $\mathcal{M}$. This method is used in this paper for crafting adversarial perturbations. For black-box attacks, attackers cannot obtain any information about our autoencoder. Thus, attackers design white-box perturbations based on a substitute autoencoder system that is fully open to them. These adversarial perturbations are also effective against other, unknown autoencoder systems due to the transferability of adversarial attacks [yuan2019adversarial]. We use this general approach to perform black-box attacks on the autoencoder system and the proposed system.
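The FGM computation of Eq. (2) can be sketched as follows. This is a simplified single-message version (the paper uses the input-agnostic, universal variant), with a random linear decoder `V` standing in for a trained decoder NN:

```python
import numpy as np

rng = np.random.default_rng(1)
M, d = 16, 14                      # messages, received-signal dimension (2n)
V = rng.normal(size=(M, d))        # random stand-in for a trained linear decoder

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss_grad(y, s):
    """Gradient of the cross-entropy loss w.r.t. the decoder input y (linear decoder)."""
    p = softmax(V @ y)
    t = np.zeros(M)
    t[s] = 1.0
    return V.T @ (p - t)

def fgm(y, s, eps):
    """L2-norm-constrained FGM: scale the loss gradient to norm eps, as in Eq. (2)."""
    g = loss_grad(y, s)
    return eps * g / np.linalg.norm(g)

y = rng.normal(size=d)
delta = fgm(y, s=3, eps=0.5)
print(np.linalg.norm(delta))       # perturbation power is fixed by eps
```

Because `delta` points along the loss gradient, even this small-norm vector increases the decoder's loss on the perturbed input.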

Fig. 3 shows the block-error-rate (BLER) of an autoencoder end-to-end communications system with $n = 7$ channel uses and $k = 4$ bits per message, and the BLER of a conventional communications system using binary phase-shift keying (BPSK) modulation and a Hamming (7,4) code with hard-decision (HD) decoding [o2017introduction, sadeghi2019physical]. The BLER is calculated as the ratio of incorrectly decoded blocks to all transmitted blocks, i.e., $\Pr(\hat{s} \neq s)$; a smaller BLER indicates better system performance. We can see that the autoencoder outperforms the conventional scheme when there is no attack. However, under adversarial attacks using the input-agnostic FGM, the performance of the autoencoder is degraded far more significantly, to the point where it is worse than the conventional scheme. This paper addresses this issue induced by adversarial attacks in the end-to-end communications system.
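The BLER metric itself is straightforward to compute; a minimal helper:

```python
import numpy as np

def bler(s_true, s_hat):
    """Block error rate: fraction of messages decoded incorrectly."""
    s_true, s_hat = np.asarray(s_true), np.asarray(s_hat)
    return float(np.mean(s_true != s_hat))

print(bler([1, 2, 3, 4], [1, 2, 0, 4]))   # → 0.25
```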

III End-to-End Communications System Using Adversarial Networks

We integrate the GAN framework [NIPS2014gan, Wang2019ICLR] into an autoencoder communications system. As shown in Fig. 4, a neural network $h_{\theta_G}$ parameterized by $\theta_G$ is added as the generative network, and the decoder NN serves as the discriminative network. In our case, the purposes of the generative and discriminative networks differ from those in the original GAN. Here the generative network acts as an adversary that models and generates the adversarial perturbation $\boldsymbol{\delta} = h_{\theta_G}(\mathbf{y}, \epsilon)$ based on the input $\mathbf{y}$ and a scaling factor $\epsilon$. The discriminative network tries to estimate the correct message from both the clean signal $\mathbf{y}$ and the perturbed signal $\mathbf{y} + \boldsymbol{\delta}$. The generative and discriminative networks are trained jointly and adversarially: the generative network generates ever more powerful adversarial perturbations, while the discriminative network still correctly estimates messages from the heavily perturbed signals. With the proposed adversarial network-based approach, the autoencoder communications system obtains a strong capability to defend against adversarial attacks.

III-A Objective Function

The intuition behind an ideal defensive method is to find a solution for the decoder NN that simultaneously has a small loss on the clean inputs and a small loss on the inputs with adversarial perturbations,

$$L_c = L(g_{\theta_D}(\mathbf{y}), s), \qquad (3)$$

$$L_p = L(g_{\theta_D}(\mathbf{y} + \boldsymbol{\delta}), s). \qquad (4)$$

However, it is hard to find a single solution that minimizes both $L_c$ and $L_p$; there is a trade-off between them. Traditional adversarial training usually achieves either a small loss on the clean inputs or a small loss on the perturbed inputs, which causes the model to lose either defense ability or generalization ability.

To satisfy the above two requirements, we seek an optimal parameter $\theta_D$ of the decoder NN that minimizes both the loss between the output for the clean signal $\mathbf{y}$ and the ground truth $s$, and the loss between the output for the perturbed signal $\mathbf{y} + \boldsymbol{\delta}$ and the ground truth $s$, where the objective of the decoder NN is

$$\min_{\theta_D} \; L(g_{\theta_D}(\mathbf{y}), s) + L(g_{\theta_D}(\mathbf{y} + \boldsymbol{\delta}), s). \qquad (5)$$

In our approach, we model the adversarial perturbation using the generative neural network $h_{\theta_G}$, and the objective of the decoder NN becomes

$$\min_{\theta_D} \; L(g_{\theta_D}(\mathbf{y}), s) + L(g_{\theta_D}(\mathbf{y} + h_{\theta_G}(\mathbf{y}, \epsilon)), s), \qquad (6)$$

where $h_{\theta_G}(\mathbf{y}, \epsilon)$ is the generated adversarial perturbation. In order to enable the decoder NN to handle as many perturbation types as possible, we want the generative neural network to be a powerful adversary, so the generative network parameter $\theta_G$ is trained to maximize the loss between the output for the perturbed signal and the ground truth $s$,

$$\max_{\theta_G} \; L(g_{\theta_D}(\mathbf{y} + h_{\theta_G}(\mathbf{y}, \epsilon)), s). \qquad (7)$$

Finally, we jointly train the decoder NN (i.e., the discriminative network) and the generative network to find a solution of a minimax game between $\theta_D$ and $\theta_G$,

$$\min_{\theta_D} \max_{\theta_G} \; L(g_{\theta_D}(\mathbf{y}), s) + L(g_{\theta_D}(\mathbf{y} + h_{\theta_G}(\mathbf{y}, \epsilon)), s), \qquad (8)$$

so that the discriminative network is capable of countering a powerful adversary while retaining good generalization performance.
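To make the alternating minimax updates of Eq. (8) concrete, here is a heavily simplified NumPy toy (assumptions: a single fixed received signal, a linear decoder, and a perturbation vector optimized directly in place of the generative network $h_{\theta_G}$ — a sketch of the training dynamics, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(2)
M, d, eps = 8, 10, 1.0                # messages, signal dimension, perturbation budget
y = rng.normal(size=d)                # fixed received signal
s = 3                                 # ground-truth message
t = np.zeros(M); t[s] = 1.0           # one-hot target

V = 0.1 * rng.normal(size=(M, d))     # decoder parameters (theta_D), linear stand-in
delta = 0.1 * rng.normal(size=d)      # perturbation, optimized directly (stands in for h_G)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr = 0.1
for _ in range(200):
    # Generator step (Eq. (7)): gradient *ascent* on the perturbed loss.
    p_pert = softmax(V @ (y + delta))
    delta = delta + lr * (V.T @ (p_pert - t))
    nrm = np.linalg.norm(delta)
    if nrm > eps:                     # keep the perturbation power within the budget
        delta *= eps / nrm
    # Discriminator step (Eq. (6)): gradient *descent* on clean + perturbed loss.
    p_clean = softmax(V @ y)
    p_pert = softmax(V @ (y + delta))
    V -= lr * (np.outer(p_clean - t, y) + np.outer(p_pert - t, y + delta))

print(int(np.argmax(V @ y)))          # decoder's decision on the clean signal
```

The clean-loss term in the decoder update is what preserves generalization: the decoder is pushed to decode both the clean and the adversarially perturbed signal correctly, while the adversary keeps moving the perturbation to the worst bounded direction.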

III-B Consensus Optimization for GAN Training

The stability and convergence of GAN training is a challenging task, suffering from non-convergence, mode collapse, and diminished gradients. In this paper, we adopt a consensus optimization approach [mescheder2017numerics] that regularizes gradients to stabilize the GAN training.

Denote the objective of the discriminative network (i.e., Eq. (6)) as $L_D(\theta_D)$ and the objective of the generative network (i.e., Eq. (7)) as $L_G(\theta_G)$. The gradient vector field of this minimax game is defined as

$$v(\theta_D, \theta_G) = \begin{pmatrix} -\nabla_{\theta_D} L_D(\theta_D) \\ \nabla_{\theta_G} L_G(\theta_G) \end{pmatrix}. \qquad (9)$$

The GAN training seeks a solution of $v(\theta_D, \theta_G) = 0$. However, the eigenvalues of the Jacobian of $v$ can have zero real part or a very large imaginary part [mescheder2017numerics], which results in the convergence failure of GAN training. To this end, we add a regularization term to the objectives of the discriminative network and the generative network, respectively. The new gradient vector field is obtained as [mescheder2017numerics]

$$w(\theta_D, \theta_G) = v(\theta_D, \theta_G) - \gamma \nabla \Big( \tfrac{1}{2} \, \| v(\theta_D, \theta_G) \|^2 \Big), \qquad (10)$$

where $\gamma$ is a constant regularization parameter. This added regularization term helps the two networks reach a consensus optimum with better convergence.
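The effect of the consensus regularizer in Eq. (10) can be seen on the classic toy minimax problem $\min_x \max_y \, xy$ (not the paper's system; just an illustration of why the regularized field converges where plain simultaneous gradient steps do not):

```python
import numpy as np

def vanilla_field(x, y):
    """Simultaneous descent/ascent field for the game min_x max_y x*y."""
    return np.array([-y, x])

def consensus_field(x, y, gamma):
    """Regularized field w = v - gamma * grad(0.5 * ||v||^2), the Eq. (10) form.
    Here 0.5*||v||^2 = 0.5*(x^2 + y^2), so its gradient is (x, y)."""
    v = vanilla_field(x, y)
    grad_reg = np.array([x, y])
    return v - gamma * grad_reg

# Plain simultaneous gradient steps orbit (and slowly leave) the equilibrium at
# the origin; the consensus-regularized steps spiral into it.
z_plain = np.array([1.0, 1.0])
z_cons = np.array([1.0, 1.0])
for _ in range(500):
    z_plain = z_plain + 0.1 * vanilla_field(*z_plain)
    z_cons = z_cons + 0.1 * consensus_field(*z_cons, gamma=0.5)

print(np.linalg.norm(z_plain), np.linalg.norm(z_cons))
```

The Jacobian of the vanilla field has purely imaginary eigenvalues (the failure mode described above); the regularizer shifts their real parts negative, which is what restores convergence.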

IV Evaluation Results

In this section, we evaluate the proposed GAN-based end-to-end communications system by comparing it with the conventional communications system (Section IV-C) and with the autoencoder end-to-end communications system under regular training and adversarial training (Section IV-D). To examine the robustness of these systems, we calculate their BLER performance under different scenarios involving white-box attacks, black-box attacks, and no attacks.

IV-A Neural Network Architecture

We implement our adversarial network based approach in two different end-to-end communications systems: a multilayer perceptron (MLP) based system and a convolutional neural network (CNN) based system, whose architectures are given in Table I and Table II, respectively. The encoder NN and decoder NN used in these two systems are the same as in [sadeghi2019physical]. For the design of the generative network, one notable rule is that the depth (i.e., number of layers) of the generative network and the decoder NN (i.e., the discriminative network) should be similar, which yields an equal competition between the generative network and the discriminative network and hence better performance.

IV-B Experiment Setup

In the experiments under white-box attacks, the proposed system uses the network architecture listed in Table I, and the autoencoder system uses the same MLP encoder and MLP decoder listed in Table I. The conventional communications system uses BPSK modulation and Hamming coding with HD decoding. The adversarial perturbations for attacking these three systems are generated using FGM [sadeghi2019physical] based on the MLP decoder. In the experiments under black-box attacks, the proposed system uses the network architecture listed in Table II, and the autoencoder system uses the same CNN encoder and CNN decoder in Table II. The conventional communications system again uses BPSK modulation and Hamming coding with HD decoding. The adversarial perturbations for black-box attacks are generated from the MLP decoder. In addition, the proposed system and the autoencoder system are both sufficiently trained with the same hyper-parameters on TensorFlow-GPU.

Name    | Encoder NN       | Decoder NN  | Generative Network
Layers  | FC+eLU           | FC+ReLU     | Conv1d+ReLU
        |                  |             | Conv1d+ReLU+Flatten
        | FC+Linear+Norm   | FC+Softmax  | FC+Linear
        |                  |             | Normalization ()
TABLE I: NN architectures used in our approach (MLP based).
Name    | Encoder NN           | Decoder NN           | Generative Network
Layers  | FC+eLU               | Conv2d+ReLU          | Conv1d+ReLU+BN
        | Conv1d+ReLU+Flatten  | Conv2d+ReLU+Flatten  | Conv1d+ReLU+BN+Flatten
        | FC+Linear            | FC+ReLU              | FC+Linear
        | Normalization ()     | FC+Softmax           | Normalization ()
TABLE II: NN architectures used in our approach (CNN-based).

IV-C Proposed Approach versus Conventional Communications System

(a) BLER under white-box attacks
(b) BLER under black-box attacks
Fig. 5: BLER performance comparison of the proposed GAN-based end-to-end communications system and the conventional communications system.

We first compare our proposed GAN-based communications system with the conventional communications system under adversarial attacks and under no attack. In Figure 5, we can see that under no attacks the performance of our proposed system is better than that of the conventional communications system. When we attack both systems with white-box attacks, as shown in Figure 5(a), our system mitigates the effect of the attacks and outperforms the conventional communications system. When performing black-box attacks, in Figure 5(b), our system shows considerable defense capacity: its performance significantly exceeds that of the conventional system and is very close to its own performance under no attacks.

IV-D Proposed Approach versus Autoencoder End-to-End Communications System

(a) BLER under white-box attacks
(b) BLER under black-box attacks
Fig. 6: BLER performance comparison of the proposed GAN-based end-to-end communications system and the autoencoder end-to-end communications system with regular training and adversarial training.

Next, we compare our proposed system with the autoencoder end-to-end system under regular training and adversarial training, respectively. Regular training means that we train the autoencoder end-to-end system using clean inputs only; adversarial training means that we train the autoencoder system with both clean inputs and inputs with adversarial perturbations. From the white-box attack results shown in Figure 6(a), we can see that the regular training based autoencoder system has no capability to defend against white-box attacks and exhibits the highest error rate. The adversarial training based autoencoder system defends against white-box attacks successfully, obtaining large performance improvements over the regular training based autoencoder system. Adversarial training is effective against white-box perturbations because it augments the training data with the same perturbations beforehand. Our proposed system achieves a performance similar to adversarial training, indicating a good defense against white-box attacks. Notably, adversarial training causes considerable performance degradation when there is no attack, indicating that it degrades the generalization ability of the autoencoder. In contrast, our proposed system maintains good performance under no attacks, demonstrating strong generalization ability. From the black-box attack results shown in Figure 6(b), our proposed system still shows good defensive ability against black-box perturbations, whereas adversarial training fails to defend and yields a high error rate. This is because the perturbations used for the black-box attacks differ from those used during adversarial training, and adversarial training does not work well against unknown perturbations. In contrast, our system can defend against various unknown perturbations. Similarly, Figure 6(b) also shows that adversarial training degrades performance under no attacks, while our system retains good generalization performance.

V Conclusions

This paper presents a novel GAN-based defense approach for end-to-end learning of communications systems, which uses a generative network to model powerful adversarial perturbations and jointly trains the end-to-end communications system against the generative attack network. Our approach learns an end-to-end communications system that is robust to various adversarial perturbations, including both white-box and black-box attacks, without degrading the generalization performance of the system. In our evaluation, the GAN-based communications system shows better performance and defense capability than the classical communications scheme and than the end-to-end communications system with regular training or adversarial training.

References