Enhanced Security and Privacy via Fragmented Federated Learning

07/13/2022
by Najeeb Moharram Jebreel, et al.

In federated learning (FL), a set of participants share updates computed on their local data with an aggregator server that combines them into a global model. However, reconciling accuracy with privacy and security remains a challenge for FL. On the one hand, good updates sent by honest participants may reveal their private local information, whereas poisoned updates sent by malicious participants may compromise the model's availability and/or integrity. On the other hand, enhancing privacy by distorting updates damages accuracy, whereas enhancing it by aggregating updates damages security because the server can no longer filter out individual poisoned updates. To tackle this accuracy-privacy-security conflict, we propose fragmented federated learning (FFL), in which participants randomly exchange and mix fragments of their updates before sending them to the server. To achieve privacy, we design a lightweight protocol that allows participants to privately exchange and mix encrypted fragments of their updates so that the server can neither obtain individual updates nor link them to their originators. To achieve security, we design a reputation-based defense tailored to FFL that builds trust in participants and their mixed updates based on the quality of the fragments they exchange and the mixed updates they send. Since the exchanged fragments' parameters keep their original coordinates and attackers can be neutralized, the server can correctly reconstruct the global model from the received mixed updates without accuracy loss. Experiments on four real data sets show that FFL prevents semi-honest servers from mounting privacy attacks, effectively counters poisoning attacks, and preserves the accuracy of the global model.
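The key property behind the "no accuracy loss" claim is easy to illustrate: because exchanged fragments keep their original coordinates and are merely redistributed among participants, averaging the mixed updates yields the same aggregate as averaging the original updates. The sketch below is a minimal NumPy simulation of this idea under assumed names (n_participants, n_fragments, frag_len); it omits the paper's encryption and reputation components and is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_participants = 4   # FL clients (illustrative)
dim = 12             # length of a flattened model update
n_fragments = 3      # fragments per update (dim must be divisible)
frag_len = dim // n_fragments

# Each participant computes a local update; random vectors stand in here.
updates = [rng.normal(size=dim) for _ in range(n_participants)]

# For every fragment slot, redistribute ownership with a random permutation:
# each participant ends up with a full-length "mixed" update whose pieces
# come from different originators but keep their original coordinates.
mixed = [np.empty(dim) for _ in range(n_participants)]
for f in range(n_fragments):
    lo, hi = f * frag_len, (f + 1) * frag_len
    perm = rng.permutation(n_participants)   # sender assigned to each receiver
    for receiver, sender in enumerate(perm):
        mixed[receiver][lo:hi] = updates[sender][lo:hi]

# Since every fragment is used exactly once and stays at its coordinates,
# the server's average over mixed updates equals the average over the
# original updates, so the global model is reconstructed without loss.
assert np.allclose(np.mean(updates, axis=0), np.mean(mixed, axis=0))
print("aggregate preserved by fragment mixing")
```

In the actual protocol, the fragments would be exchanged encrypted among peers so the server cannot link a mixed update to its originator, and the server would additionally score participants and their mixed updates with the reputation-based defense before aggregating.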

Related research

05/12/2020  A Secure Federated Learning Framework for 5G Networks
Federated Learning (FL) has been recently proposed as an emerging paradi...

11/08/2021  BARFED: Byzantine Attack-Resistant Federated Averaging Based on Outlier Elimination
In federated learning, each participant trains its local model with its ...

02/07/2022  Preserving Privacy and Security in Federated Learning
Federated learning is known to be vulnerable to security and privacy iss...

07/16/2020  Data Poisoning Attacks Against Federated Learning Systems
Federated learning (FL) is an emerging paradigm for distributed training...

04/04/2022  Towards Privacy-Preserving and Verifiable Federated Matrix Factorization
Recent years have witnessed the rapid growth of federated learning (FL),...

09/26/2021  MixNN: Protection of Federated Learning Against Inference Attacks by Mixing Neural Network Layers
Machine Learning (ML) has emerged as a core technology to provide learni...

12/28/2022  XMAM: X-raying Models with A Matrix to Reveal Backdoor Attacks for Federated Learning
Federated Learning (FL) has received increasing attention due to its pri...
