Adversarial Machine Learning and Defense Game for NextG Signal Classification with Deep Learning

12/22/2022
by Yalin E. Sagduyu, et al.

This paper presents a game-theoretic framework to study the interactions of attack and defense for deep learning-based NextG signal classification. NextG systems, such as those envisioned to serve a massive number of IoT devices, can employ deep neural networks (DNNs) for various tasks such as user equipment identification, physical layer authentication, and detection of incumbent users (such as in the Citizens Broadband Radio Service (CBRS) band). By training another DNN as the surrogate model, an adversary can launch an inference (exploratory) attack to learn the behavior of the victim model, predict successful operation modes (e.g., channel access), and jam them. A defense mechanism can increase the adversary's uncertainty by introducing controlled errors in the victim model's decisions (i.e., poisoning the adversary's training data). This defense is effective against an attack but reduces performance when there is no attack. The interactions between the defender and the adversary are formulated as a non-cooperative game, where the defender selects the probability of defending or the defense level itself (i.e., the ratio of falsified decisions), and the adversary selects the probability of attacking. The defender's objective is to maximize its reward (e.g., throughput or transmission success ratio), whereas the adversary's objective is to minimize this reward along with its attack cost. The Nash equilibrium strategies are determined as the operation modes in which no player can unilaterally improve its utility when the other player's strategy is fixed. Fictitious play is formulated, where each player plays the game repeatedly in response to the empirical frequency of the opponent's actions. The performance in Nash equilibrium is compared to the fixed attack and defense cases, and the resilience of NextG signal classification against attacks is quantified.
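As an illustration of the formulation described above, the following Python sketch simulates fictitious play in a minimal two-action attack/defense game. The payoff matrix, the attack cost, and the restriction to pure "defend vs. no defense" and "attack vs. no attack" actions are illustrative assumptions for this sketch, not the paper's actual utilities; it only shows how each player's empirical strategy can converge toward a mixed Nash equilibrium when best-responding to the opponent's observed action frequencies.

```python
import numpy as np

# Illustrative (hypothetical) payoffs for a 2x2 attack/defense game.
# Rows: defender actions (0 = no defense, 1 = defend, i.e., falsify some decisions).
# Cols: adversary actions (0 = no attack, 1 = attack).
# Entries: defender reward, e.g., a normalized transmission success ratio.
R_D = np.array([[1.0, 0.2],   # no defense: full reward without attack, low reward under attack
                [0.7, 0.6]])  # defend: controlled errors cost reward but blunt the attack
attack_cost = 0.3             # assumed cost of launching an attack (e.g., jamming energy)

# Adversary utility: negative of the defender reward, minus the cost when attacking.
R_A = -R_D - attack_cost * np.array([[0.0, 1.0],
                                     [0.0, 1.0]])

def fictitious_play(R_D, R_A, iterations=10_000):
    """Each player best-responds to the empirical frequency of the opponent's past actions."""
    d_counts = np.ones(2)  # defender action counts (uniform prior)
    a_counts = np.ones(2)  # adversary action counts (uniform prior)
    for _ in range(iterations):
        a_freq = a_counts / a_counts.sum()
        d_freq = d_counts / d_counts.sum()
        d_action = np.argmax(R_D @ a_freq)  # defender best response to adversary's empirical mix
        a_action = np.argmax(d_freq @ R_A)  # adversary best response to defender's empirical mix
        d_counts[d_action] += 1
        a_counts[a_action] += 1
    return d_counts / d_counts.sum(), a_counts / a_counts.sum()

defend_mix, attack_mix = fictitious_play(R_D, R_A)
print("Defender empirical strategy (no defense, defend):", defend_mix.round(3))
print("Adversary empirical strategy (no attack, attack):", attack_mix.round(3))
```

With these assumed payoffs, neither pure strategy profile is stable (the defender prefers not to defend when there is no attack, while the adversary prefers to attack an undefended system), so the empirical frequencies settle at a mixed equilibrium in which both defense and attack are used probabilistically.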
