FLAIRS: FPGA-Accelerated Inference-Resistant Secure Federated Learning

08/01/2023
by Huimin Li, et al.

Federated Learning (FL) has become very popular since it enables clients to train a joint model collaboratively without sharing their private data. However, FL has been shown to be susceptible to backdoor and inference attacks: in the former, an adversary injects manipulated updates into the aggregation process; in the latter, an adversary leverages clients' local models to deduce their private data. Contemporary solutions to the security concerns of FL are either impractical for real-world deployment due to high performance overheads or tailored to specific threats, for instance privacy-preserving aggregation or backdoor defenses. Given these limitations, our research explores the advantages of the FPGA-based computing paradigm, which overcomes the performance bottlenecks of software-only solutions while mitigating both backdoor and inference attacks. We utilize FPGA-based enclaves to address inference attacks during the aggregation process of FL, and we adopt an advanced backdoor-aware aggregation algorithm on the FPGA to counter backdoor attacks. We implemented and evaluated our method on the Xilinx VMK-180 board, yielding a significant speed-up of around 300 times on the IoT-Traffic dataset and more than 506 times on the CIFAR-10 dataset.
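To make the aggregation step concrete, the sketch below shows plain federated averaging combined with per-update L2-norm clipping, a common backdoor-mitigation heuristic. This is an illustration of the general idea only, not the paper's actual backdoor-aware algorithm or its FPGA implementation; all function names here are hypothetical.

```python
# Illustrative only: FedAvg with norm clipping, a generic backdoor-mitigation
# heuristic. NOT the FLAIRS aggregation algorithm or its FPGA design.
import numpy as np

def clip_update(update, max_norm=1.0):
    """Scale a client update down so its L2 norm does not exceed max_norm."""
    norm = np.linalg.norm(update)
    if norm > max_norm:
        update = update * (max_norm / norm)
    return update

def aggregate(global_model, client_updates, max_norm=1.0):
    """Average the clipped client updates and apply them to the global model."""
    clipped = [clip_update(u, max_norm) for u in client_updates]
    return global_model + np.mean(clipped, axis=0)

# Example: three clients, the last of which sends an outsized (possibly
# poisoned) update; clipping bounds its influence on the aggregate.
global_model = np.zeros(4)
updates = [np.full(4, 0.1), np.full(4, 0.2), np.full(4, 50.0)]
new_model = aggregate(global_model, updates, max_norm=1.0)
```

Norm clipping bounds each client's contribution regardless of magnitude, which is why it is a frequent baseline defense against poisoned updates; the paper's backdoor-aware algorithm is more sophisticated, and its point is that such defenses can run efficiently inside an FPGA enclave.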

Related research
- FedPerm: Private and Robust Federated Learning by Parameter Permutation (08/16/2022)
- FLGUARD: Secure and Private Federated Learning (01/06/2021)
- Meta Federated Learning (02/10/2021)
- DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection (01/03/2022)
- Fed-LSAE: Thwarting Poisoning Attacks against Federated Cyber Threat Detection System via Autoencoder-based Latent Space Inspection (09/20/2023)
- Federated Learning Attacks Revisited: A Critical Discussion of Gaps, Assumptions, and Evaluation Setups (11/05/2021)
- martFL: Enabling Utility-Driven Data Marketplace with a Robust and Verifiable Federated Learning Architecture (09/03/2023)
