Navigation as the Attacker Wishes? Towards Building Byzantine-Robust Embodied Agents under Federated Learning

11/27/2022
by   Yunchao Zhang, et al.
0

Federated embodied agent learning protects the data privacy of individual visual environments by keeping data locally at each client (the individual environment) during training. However, since the local data is inaccessible to the server under federated learning, attackers may easily poison the training data of the local client to build a backdoor in the agent without notice. Deploying such an agent raises the risk of potential harm to humans, as the attackers may easily navigate and control the agent as they wish via the backdoor. Towards Byzantine-robust federated embodied agent learning, in this paper, we study the attack and defense for the task of vision-and-language navigation (VLN), where the agent is required to follow natural language instructions to navigate indoor environments. First, we introduce a simple but effective attack strategy, Navigation as Wish (NAW), in which the malicious client manipulates local trajectory data to implant a backdoor into the global model. Results on two VLN datasets (R2R and RxR) show that NAW can easily navigate the deployed VLN agent regardless of the language instruction, without affecting its performance on normal test sets. Then, we propose a new Prompt-Based Aggregation (PBA) to defend against the NAW attack in federated VLN, which provides the server with a ”prompt” of the vision-and-language alignment variance between the benign and malicious clients so that they can be distinguished during training. We validate the effectiveness of the PBA method on protecting the global model from the NAW attack, which outperforms other state-of-the-art defense methods by a large margin in the defense metrics on R2R and RxR.

READ FULL TEXT

page 1

page 5

page 12

page 13

research
03/28/2022

FedVLN: Privacy-preserving Federated Vision-and-Language Navigation

Data privacy is a central problem for embodied agents that can perceive ...
research
01/19/2023

On the Vulnerability of Backdoor Defenses for Federated Learning

Federated Learning (FL) is a popular distributed machine learning paradi...
research
11/26/2019

Local Model Poisoning Attacks to Byzantine-Robust Federated Learning

In federated learning, multiple client devices jointly learn a machine l...
research
11/29/2018

Analyzing Federated Learning through an Adversarial Lens

Federated learning distributes model training among a multitude of agent...
research
03/31/2023

Secure Federated Learning against Model Poisoning Attacks via Client Filtering

Given the distributed nature, detecting and defending against the backdo...
research
07/05/2022

Defending against the Label-flipping Attack in Federated Learning

Federated learning (FL) provides autonomy and privacy by design to parti...
research
08/08/2023

Pelta: Shielding Transformers to Mitigate Evasion Attacks in Federated Learning

The main premise of federated learning is that machine learning model up...

Please sign up or login with your details

Forgot password? Click here to reset