Maximum Likelihood Distillation for Robust Modulation Classification

11/01/2022
by Javier Maroto, et al.

Deep Neural Networks are extensively used in communication systems, and in Automatic Modulation Classification (AMC) in particular. However, they are highly susceptible to small adversarial perturbations that are carefully crafted to change the network's decision. In this work, we build on knowledge distillation and adversarial training to construct more robust AMC systems. We first show the importance of training-data quality for both the accuracy and the robustness of the model. We then propose using the Maximum Likelihood function, which can solve the AMC problem in offline settings, to generate better training labels. These labels teach the model to be uncertain in challenging conditions, which increases the accuracy and, when combined with adversarial training, the robustness of the model. Interestingly, we observe that this performance gain transfers to online settings, where the Maximum Likelihood function cannot be used in practice. Overall, this work highlights the potential of learning to be uncertain in difficult scenarios, compared to directly removing label noise.
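The core idea of the abstract can be illustrated with a small sketch: in an offline setting, the Maximum Likelihood function yields a posterior over candidate modulations given the received symbols, and this posterior can serve as a soft training label that is naturally uncertain at low SNR. The sketch below assumes an AWGN channel and hypothetical BPSK/QPSK candidate sets; it is an illustration of the general principle, not the paper's actual implementation.

```python
import numpy as np

def ml_soft_labels(y, constellations, noise_var):
    """Soft label over candidate modulations for received symbols y,
    assuming AWGN and equiprobable symbols within each constellation."""
    log_probs = []
    for points in constellations:
        # Squared distance from each received symbol to each constellation point
        d2 = np.abs(y[:, None] - points[None, :]) ** 2          # shape (N, M)
        ll = -d2 / noise_var - np.log(len(points))              # per-point log-lik
        # log-sum-exp over constellation points, summed over symbols
        m = ll.max(axis=1, keepdims=True)
        log_probs.append((m[:, 0] + np.log(np.exp(ll - m).sum(axis=1))).sum())
    log_probs = np.array(log_probs)
    # Softmax over classes -> posterior used as a soft training label
    log_probs -= log_probs.max()
    p = np.exp(log_probs)
    return p / p.sum()

# Hypothetical candidate sets (unit average power): BPSK and QPSK
bpsk = np.array([1 + 0j, -1 + 0j])
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 128)
# BPSK transmission over AWGN (per-component noise std 0.3)
y = bpsk[bits] + rng.normal(0, 0.3, 128) + 1j * rng.normal(0, 0.3, 128)
labels = ml_soft_labels(y, [bpsk, qpsk], noise_var=2 * 0.09)
```

At high SNR the posterior concentrates on the true class, while at low SNR it spreads mass across plausible modulations; training on these soft labels, rather than hard one-hot labels, is what teaches the model to be uncertain in challenging conditions.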

Related research

- 05/28/2021 · SafeAMC: Adversarial training for robust modulation recognition models
- 03/14/2022 · On the benefits of knowledge distillation for adversarial robustness
- 11/03/2021 · LTD: Low Temperature Distillation for Robust Adversarial Training
- 03/19/2021 · Noise Modulation: Let Your Model Interpret Itself
- 06/03/2022 · Adversarial Unlearning: Reducing Confidence Along Adversarial Directions
- 08/15/2023 · SEDA: Self-Ensembling ViT with Defensive Distillation and Adversarial Training for robust Chest X-rays Classification
- 03/27/2021 · On the benefits of robust models in modulation recognition
