Mitigating Adversarial Attacks in Federated Learning with Trusted Execution Environments

09/13/2023
by Simon Queyrut, et al.

The main premise of federated learning (FL) is that machine learning model updates are computed locally to preserve user data privacy: by design, user data never leaves the perimeter of its device. Once the updates are aggregated, the model is broadcast to all nodes in the federation. However, without proper defenses, compromised nodes can probe the model inside their local memory in search of adversarial examples, which can lead to dangerous real-world scenarios. For instance, in image-based applications, adversarial examples are images that appear only slightly perturbed to the human eye yet are misclassified by the local model. These adversarial images are later presented to a victim node's counterpart model to replay the attack. Typical examples of such dissemination strategies are altered traffic signs (patch attacks) that autonomous vehicles no longer recognize, or seemingly unaltered samples that poison the local dataset of the FL scheme to undermine its robustness. Pelta is a novel shielding mechanism that leverages Trusted Execution Environments (TEEs) to reduce the ability of attackers to craft adversarial samples. Pelta masks inside the TEE the first part of the back-propagation chain rule, which attackers typically exploit to craft malicious samples. We evaluate Pelta on state-of-the-art accurate models using three well-established datasets: CIFAR-10, CIFAR-100 and ImageNet. We show the effectiveness of Pelta in mitigating six state-of-the-art white-box adversarial attacks, including Projected Gradient Descent, the Momentum Iterative Method, Auto Projected Gradient Descent, and the Carlini & Wagner attack. In particular, to the best of our knowledge, Pelta constitutes the first attempt at defending an ensemble model against the Self-Attention Gradient attack. Our code is available to the research community at https://github.com/queyrusi/Pelta.
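To make the masking idea concrete, below is a minimal, hypothetical sketch in Python/PyTorch (not the authors' implementation): a shielded first block stands in for the layers executed inside the TEE, and the enclave boundary is simulated by cutting the computation graph there, so a white-box Projected Gradient Descent step that needs the input gradient cannot proceed. All names used here (ShieldedFirstBlock, HostModel, pgd_step) are illustrative assumptions, not identifiers from Pelta's codebase.

# Minimal sketch (not the authors' code): simulating how masking the first part
# of the back-propagation chain rule inside a TEE blocks white-box gradient attacks.
import torch
import torch.nn as nn

class ShieldedFirstBlock(nn.Module):
    """Hypothetical stand-in for the layers executed inside the enclave.
    Gradients w.r.t. its input are never exposed to the untrusted host."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.layer = nn.Linear(in_dim, hidden_dim)

    def forward(self, x):
        with torch.no_grad():                 # the enclave computes the forward pass...
            h = torch.relu(self.layer(x))
        # ...but the chain rule is cut at the enclave boundary: the attacker can
        # at best recover gradients w.r.t. h, never w.r.t. the raw input x.
        return h.requires_grad_()

class HostModel(nn.Module):
    """Layers that remain in untrusted host memory."""
    def __init__(self, hidden_dim, n_classes):
        super().__init__()
        self.head = nn.Linear(hidden_dim, n_classes)

    def forward(self, h):
        return self.head(h)

def pgd_step(x, y, first_block, host_model, eps=8/255, alpha=2/255):
    """One white-box PGD step: perturb x along the sign of the input gradient."""
    x = x.clone().detach().requires_grad_(True)
    logits = host_model(first_block(x))
    loss = nn.functional.cross_entropy(logits, y)
    loss.backward()
    if x.grad is None:                        # chain rule was masked inside the TEE
        raise RuntimeError("no input gradient available: attack step cannot proceed")
    x_adv = x + alpha * x.grad.sign()
    return torch.min(torch.max(x_adv, x - eps), x + eps).detach()

if __name__ == "__main__":
    x = torch.rand(4, 32)
    y = torch.randint(0, 10, (4,))
    shielded, host = ShieldedFirstBlock(32, 64), HostModel(64, 10)
    try:
        pgd_step(x, y, shielded, host)
    except RuntimeError as err:
        print("PGD blocked:", err)            # the gradient never reaches the input

In this toy setup the attacker still receives gradients with respect to the activations at the enclave boundary, mirroring the idea that only the first part of the chain rule is masked; a standard gradient-sign step on the raw input nonetheless breaks down.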
