Pelta: Shielding Transformers to Mitigate Evasion Attacks in Federated Learning

08/08/2023
by Simon Queyrut, et al.

The main premise of federated learning is that machine learning model updates are computed locally in order to preserve user data privacy, since raw data never leaves the perimeter of the user's device. This mechanism assumes that the aggregated global model is broadcast to collaborating, non-malicious nodes. Without proper defenses, however, compromised clients can easily probe the model held in their local memory in search of adversarial examples. In image-based applications, for instance, adversarial examples are images perturbed imperceptibly (to the human eye) so that the local model misclassifies them; these can later be presented to a victim node's counterpart model to replicate the attack. To mitigate such malicious probing, we introduce Pelta, a novel shielding mechanism leveraging trusted hardware. By harnessing the capabilities of Trusted Execution Environments (TEEs), Pelta masks part of the back-propagation chain rule, which attackers typically exploit to craft malicious samples. We evaluate Pelta on a state-of-the-art ensemble model and demonstrate its effectiveness against the Self-Attention Gradient adversarial Attack.
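To make the gradient-masking idea concrete, the following is a minimal PyTorch sketch, not the paper's implementation: it assumes the front layers of the model run inside a TEE, so their activations are visible to the client but the chain rule is cut at the enclave boundary, and a white-box gradient attacker cannot obtain the loss gradient with respect to the input. The names ShieldedBlock, OpenBlock, and the layer sizes are illustrative assumptions.

```python
# Sketch of the shielding idea: a "shielded" front block (simulating layers
# kept inside a TEE) exposes activations but severs the autograd path back to
# the attacker's input, so an FGSM-style gradient attack cannot be computed.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ShieldedBlock(nn.Module):
    """Front layers assumed to execute inside the enclave (hypothetical split)."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(784, 256)  # stand-in for the shielded layers

    def forward(self, x):
        h = F.relu(self.embed(x))
        # Simulate the enclave boundary: gradients stop here and never
        # reach the input tensor held in untrusted memory.
        return h.detach()


class OpenBlock(nn.Module):
    """Remaining layers left in untrusted client memory."""

    def __init__(self):
        super().__init__()
        self.head = nn.Linear(256, 10)

    def forward(self, h):
        return self.head(h)


shielded, open_part = ShieldedBlock(), OpenBlock()
x = torch.rand(1, 784, requires_grad=True)  # attacker-controlled sample
y = torch.tensor([3])                       # target label

loss = F.cross_entropy(open_part(shielded(x)), y)
loss.backward()

# Without the shield, x.grad would give the perturbation direction for an
# evasion attack; with the shielded block, no gradient reaches the input.
print("input gradient available:", x.grad is not None)  # -> False
```

In the actual system the cut would be enforced by the TEE rather than by `detach()`, which here only emulates the fact that back-propagation through the protected layers is never revealed to the untrusted host.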
