Mixture GAN For Modulation Classification Resiliency Against Adversarial Attacks

05/29/2022
by Eyad Shtaiwi, et al.

Automatic modulation classification (AMC) using deep neural networks (DNNs) outperforms traditional classification techniques, even in challenging wireless channel environments. However, adversarial attacks degrade the accuracy of DNN-based AMC by injecting well-designed perturbations into the wireless channel. In this paper, we propose a novel generative adversarial network (GAN)-based countermeasure to safeguard DNN-based AMC systems against adversarial examples. The GAN-based defense aims to eliminate adversarial perturbations from received signals before they are fed to the DNN-based classifier. Specifically, we demonstrate the resiliency of our proposed defense GAN against the fast gradient sign method (FGSM), one of the most potent algorithms for crafting perturbed signals. The existing defense-GAN was designed for image classification and does not work in our case, where the above-mentioned communication system is considered. Thus, our proposed countermeasure deploys a GAN with a mixture of generators to overcome the mode collapse problem that a typical GAN faces on the radio signal classification task. Simulation results show the effectiveness of our proposed defense GAN, which improves the accuracy of DNN-based AMC under adversarial attacks to 81%.
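The FGSM attack mentioned in the abstract perturbs each input sample by a small step epsilon in the direction of the sign of the loss gradient with respect to the input. The sketch below illustrates this rule on a toy logistic model rather than the paper's DNN; the function names, the model, and all numeric values are illustrative assumptions, not the authors' implementation.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon):
    """Craft an FGSM adversarial example for a toy logistic model p = sigmoid(w.x + b).

    For binary cross-entropy loss, the gradient w.r.t. input component x_i
    is (p - y) * w_i; FGSM moves each component epsilon in that gradient's sign:
        x_adv = x + epsilon * sign(grad_x loss(x, y))
    """
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = sigmoid(z)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + epsilon * sign(g) for xi, g in zip(x, grad)]

# Toy feature vector standing in for received I/Q samples (illustrative only).
x = [0.5, -1.2, 0.3]
w = [1.0, -0.5, 2.0]
x_adv = fgsm_perturb(x, y=1, w=w, b=0.0, epsilon=0.1)
```

Note that the perturbation magnitude is bounded by epsilon per sample, which is what makes FGSM-style attacks hard to detect in the received waveform.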
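The defense idea described above (a defense-GAN with a mixture of generators) can be caricatured as: project the possibly perturbed signal onto the range of each generator, keep the closest reconstruction, and classify that instead of the raw input. The sketch below uses 1-D latent, linear "generators" purely for illustration; the function names, generator forms, and hyperparameters are assumptions, not the paper's architecture.

```python
def reconstruct(gen, x, steps=200, lr=0.1):
    """Find z minimizing ||gen(z) - x||^2 by gradient descent (1-D latent).

    Each toy generator is a list of (a, b) pairs, so gen(z)[i] = a_i * z + b_i;
    d/dz sum_i (a_i*z + b_i - x_i)^2 = 2 * sum_i a_i * (a_i*z + b_i - x_i).
    """
    z = 0.0
    for _ in range(steps):
        grad = 2.0 * sum(a * (a * z + b - xi) for (a, b), xi in zip(gen, x))
        z -= lr * grad
    return [a * z + b for a, b in gen]

def defend(generators, x):
    """Return the reconstruction closest to x across the mixture of generators."""
    return min((reconstruct(g, x) for g in generators),
               key=lambda r: sum((ri - xi) ** 2 for ri, xi in zip(r, x)))

# Two toy generators covering different signal "modes" (illustrative only).
gen1 = [(1.0, 0.0), (1.0, 0.0)]   # gen1(z) = [z, z]
gen2 = [(1.0, 0.0), (-1.0, 0.0)]  # gen2(z) = [z, -z]
clean = defend([gen1, gen2], [2.0, 2.2])
```

Using several generators, each responsible for part of the signal space, is what lets the mixture avoid the mode collapse a single generator suffers when it must cover all modulation types at once.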

Related research

02/01/2021 - Robust Adversarial Attacks Against DNN-Based Wireless Communication Systems
Deep Neural Networks (DNNs) have become prevalent in wireless communicat...

02/27/2019 - Communication without Interception: Defense against Deep-Learning-based Modulation Detection
We consider a communication scenario, in which an intruder, employing a ...

08/12/2022 - Scale-free Photo-realistic Adversarial Pattern Attack
Traditional pixel-wise image attack algorithms suffer from poor robustne...

02/04/2020 - Minimax Defense against Gradient-based Adversarial Attacks
State-of-the-art adversarial attacks are aimed at neural network classif...

11/14/2018 - Deep Neural Networks based Modrec: Some Results with Inter-Symbol Interference and Adversarial Examples
Recent successes and advances in Deep Neural Networks (DNN) in machine v...

04/17/2019 - Adversarial Defense Through Network Profiling Based Path Extraction
Recently, researchers have started decomposing deep neural network model...

04/25/2020 - Robust and accurate feature selection for humanoid push recovery and classification: deep learning approach
This current work describes human push recovery data classification usin...
