Practical Adversarial Attacks Against AI-Driven Power Allocation in a Distributed MIMO Network

01/23/2023
by   Omer Faruk Tuna, et al.

In distributed multiple-input multiple-output (D-MIMO) networks, power control is crucial for optimizing the spectral efficiencies of users, and max-min fairness (MMF) power control is a commonly used strategy because it provides uniform quality of service to all users. The optimal MMF power-control solution requires high-complexity operations, so deep-neural-network-based artificial intelligence (AI) solutions have been proposed to reduce this complexity. Although quite accurate models can be obtained with AI, such models have intrinsic vulnerabilities to adversarial attacks, in which carefully crafted perturbations are applied to the input of the AI model. In this work, we show that threats against the target AI model, which may originate from malicious users or radio units, can substantially degrade network performance through a successful adversarial sample, even under the most constrained circumstances. We also demonstrate that the risk associated with these adversarial attacks is higher than that of conventional attack threats. Detailed simulations reveal the effectiveness of adversarial attacks and the necessity of smart defense techniques.
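The core threat described above, a small, carefully crafted perturbation of the model's input that degrades the predicted power allocation, can be illustrated with a minimal fast-gradient-sign-method (FGSM) style sketch. Everything here is a hypothetical stand-in, not the paper's setup: the "true" MMF power map is taken to be a linear function `W` of the channel features, and the learned AI model is an imperfect linear approximation `M`, so the loss gradient can be written analytically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: the "true" max-min fair power map is a linear
# function W of the channel features, and the learned AI model is an
# imperfect approximation M of it (dimensions chosen for illustration).
W = rng.normal(size=(4, 8))              # true mapping: 8 features -> 4 user powers
M = W + 0.1 * rng.normal(size=(4, 8))    # imperfect learned model

x = rng.normal(size=8)                   # clean channel measurement
y = W @ x                                # ground-truth MMF power allocation

def loss(x_in):
    """Squared error of the model's allocation against the true one."""
    r = M @ x_in - y
    return float(r @ r)

# FGSM-style attack: one step along the sign of the input gradient of the
# loss, under an L-infinity budget eps on the perturbation.
eps = 0.05                               # attacker's perturbation budget
grad = 2.0 * M.T @ (M @ x - y)           # analytic gradient of the quadratic loss
x_adv = x + eps * np.sign(grad)          # adversarial channel measurement

print(f"clean loss:       {loss(x):.4f}")
print(f"adversarial loss: {loss(x_adv):.4f}")
```

For this convex quadratic loss the FGSM step never decreases the loss, so the adversarial measurement is guaranteed to be allocated worse than the clean one, despite each input entry being shifted by at most `eps`.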

