Hiding in Plain Sight: Differential Privacy Noise Exploitation for Evasion-resilient Localized Poisoning Attacks in Multiagent Reinforcement Learning

07/01/2023
by Md Tamjid Hossain, et al.

Lately, differential privacy (DP) has been introduced in cooperative multiagent reinforcement learning (CMARL) to safeguard the agents' privacy against adversarial inference during knowledge sharing. Nevertheless, we argue that the noise introduced by DP mechanisms may inadvertently give rise to a novel poisoning threat, specifically in the context of private knowledge sharing during CMARL, which remains unexplored in the literature. To address this gap, we present an adaptive, privacy-exploiting, and evasion-resilient localized poisoning attack (PeLPA) that capitalizes on the inherent DP noise to circumvent anomaly detection systems and hinder the optimal convergence of the CMARL model. We rigorously evaluate the proposed PeLPA attack in diverse environments, encompassing both non-adversarial and multiple-adversarial contexts. Our findings reveal that, in a medium-scale environment, a PeLPA attack with a 20% attacker ratio increases the average steps to goal by 50.69%. Furthermore, under similar conditions, PeLPA results in a 1.4x and 1.6x increase in the computational time to attain the optimal reward, and a 1.18x and 1.38x slower convergence, as the attacker ratio grows from 20%.
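The core idea — that a perturbation hidden inside the expected spread of DP noise can slip past a statistical anomaly detector — can be illustrated with a toy sketch. This is not the paper's PeLPA algorithm; the Laplace mechanism parameters, the variance-threshold detector, and the perturbation rule below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical DP setting (not from the paper): Laplace mechanism with
# sensitivity 1.0 and privacy budget epsilon, applied to shared Q-values.
epsilon = 0.5
scale = 1.0 / epsilon  # Laplace scale b = sensitivity / epsilon

q_true = rng.normal(0.0, 1.0, size=100)        # benign knowledge to share
dp_noise = rng.laplace(0.0, scale, size=100)   # privacy-preserving noise
benign_share = q_true + dp_noise

# Adversary crafts a perturbation whose magnitude stays well inside the
# expected spread of the DP noise, so a deviation-based detector cannot
# distinguish the poison from legitimate privacy noise.
poison = -0.5 * scale * np.sign(q_true)        # pull Q-values toward zero
poisoned_share = q_true + dp_noise + poison

def flagged(share, q_ref, k=3.0):
    """Toy anomaly detector: flag entries deviating from the reference
    by more than k times the DP-noise std (sqrt(2) * b for Laplace)."""
    return np.abs(share - q_ref) > k * np.sqrt(2) * scale

# Both benign and poisoned shares trigger (almost) no flags, even though
# the poisoned share systematically biases the recipient's Q-values.
print("benign flagged:  ", int(flagged(benign_share, q_true).sum()))
print("poisoned flagged:", int(flagged(poisoned_share, q_true).sum()))
```

The point of the sketch is the trade-off the abstract highlights: a tighter privacy budget (smaller epsilon) means larger DP noise, which gives the attacker a wider envelope in which to hide the perturbation.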

