Low-Loss Subspace Compression for Clean Gains against Multi-Agent Backdoor Attacks

03/07/2022
by Siddhartha Datta, et al.

Recent work on the multi-agent backdoor attack demonstrated the backfiring effect, a natural defense against backdoor attacks in which backdoored inputs are classified randomly. However, this defense comes with the side effect of low accuracy on clean labels, which motivates this paper's construction of multi-agent backdoor defenses that maximize accuracy on clean labels while minimizing accuracy on poison labels. Building on agent dynamics and low-loss subspace construction, we contribute three defenses that yield improved multi-agent backdoor robustness.
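The abstract does not spell out the defenses, but the core idea of low-loss subspace construction can be illustrated with a minimal sketch: two trained solutions are connected by a line in weight space, and points sampled inside that subspace can sit in a shared low-loss basin. All names below (`toy_loss`, `w_a`, `w_b`, `subspace_point`) are illustrative assumptions, not code from the paper, and a quadratic loss stands in for a real model's clean-label loss surface.

```python
import numpy as np

def toy_loss(w, target):
    """Quadratic stand-in for a model's clean-label loss surface."""
    return float(np.sum((w - target) ** 2))

def subspace_point(w_a, w_b, alpha):
    """Affine combination of two endpoint solutions (alpha in [0, 1])."""
    return (1.0 - alpha) * w_a + alpha * w_b

# Two agents' solutions perturbed to either side of a shared basin centre.
target = np.array([1.0, 1.0, 1.0])
w_a = target + np.array([0.1, -0.1, 0.0])
w_b = target + np.array([-0.1, 0.1, 0.0])

# The midpoint of the 1-D subspace lands closer to the basin centre than
# either endpoint, so its clean loss is lower than either agent's.
mid = subspace_point(w_a, w_b, 0.5)
print(toy_loss(w_a, target), toy_loss(mid, target))
```

The intuition this toy example captures is that averaging or sampling weights inside a low-loss subspace can dilute any single agent's backdoor perturbation while preserving accuracy on clean labels, since the clean-loss basin is shared across agents but the poisoned directions generally are not.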
