Byzantine-Resilient Secure Federated Learning

07/21/2020
by Jinhyun So, et al.

Secure federated learning is a privacy-preserving framework to improve machine learning models by training over large volumes of data collected by mobile users. This is achieved through an iterative process where, at each iteration, users update a global model using their local datasets. Each user then masks its local model via random keys, and the masked models are aggregated at a central server to compute the global model for the next iteration. As the local models are protected by random masks, the server cannot observe their true values. This presents a major challenge for the resilience of the model against adversarial (Byzantine) users, who can manipulate the global model by modifying their local models or datasets. Towards addressing this challenge, this paper presents the first single-server Byzantine-resilient secure aggregation framework (BREA) for secure federated learning. BREA is based on an integrated stochastic quantization, verifiable outlier detection, and secure model aggregation approach to guarantee Byzantine-resilience, privacy, and convergence simultaneously. We provide theoretical convergence and privacy guarantees and characterize the fundamental trade-offs in terms of the network size, user dropouts, and privacy protection. Our experiments demonstrate convergence in the presence of Byzantine users, and comparable accuracy to conventional federated learning benchmarks.
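The mask-and-aggregate step described in the abstract is easiest to see on a toy example. The sketch below is not BREA's actual construction (which uses verifiable secret sharing and outlier detection over a finite field); it only illustrates the two ingredients the abstract names for the upload path: stochastic quantization of local models into integers, and random masks that cancel when the server sums the uploads. The field size, quantization scale, and all helper names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

FIELD = 2**31 - 1   # toy prime field size (illustrative assumption)
SCALE = 1024        # quantization scale (illustrative assumption)

def stochastic_quantize(w):
    """Map real-valued weights to field elements with unbiased stochastic rounding."""
    scaled = w * SCALE
    low = np.floor(scaled)
    # Round up with probability equal to the fractional part, so E[q] = scaled.
    q = low + (rng.random(w.shape) < (scaled - low))
    return q.astype(np.int64) % FIELD

def pairwise_masks(num_users, dim):
    """Toy masking: user i adds s_ij for each j > i and subtracts s_ji for each j < i,
    so the masks sum to zero mod FIELD and cancel in the server-side aggregate."""
    masks = [np.zeros(dim, dtype=np.int64) for _ in range(num_users)]
    for i in range(num_users):
        for j in range(i + 1, num_users):
            s = rng.integers(0, FIELD, dim)
            masks[i] = (masks[i] + s) % FIELD
            masks[j] = (masks[j] - s) % FIELD
    return masks

def decode(x):
    """Map field elements back to signed integers (centered decoding)."""
    x = x % FIELD
    return np.where(x > FIELD // 2, x - FIELD, x)

# Each user quantizes its local model and uploads only the masked version.
num_users, dim = 4, 6
local_models = [rng.normal(size=dim) for _ in range(num_users)]
quantized = [stochastic_quantize(w) for w in local_models]
masks = pairwise_masks(num_users, dim)
masked_uploads = [(q + m) % FIELD for q, m in zip(quantized, masks)]

# The server never sees an individual model, yet the sum of the masked uploads
# equals the sum of the quantized models because the masks cancel.
aggregate = np.sum(masked_uploads, axis=0) % FIELD
assert np.array_equal(aggregate, np.sum(quantized, axis=0) % FIELD)
print(decode(aggregate) / SCALE / num_users)  # approximate average of the local models
```

BREA additionally performs verifiable outlier detection on the protected models before aggregation so that Byzantine updates can be filtered out; that step, and the secret-sharing machinery that makes it verifiable, is omitted from this sketch.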


Related research

12/23/2021  Sparsified Secure Aggregation for Privacy-Preserving Federated Learning
Secure aggregation is a popular protocol in privacy-preserving federated...

07/18/2021  RobustFed: A Truth Inference Approach for Robust Federated Learning
Federated learning is a prominent framework that enables clients (e.g., ...

06/24/2022  zPROBE: Zero Peek Robustness Checks for Federated Learning
Privacy-preserving federated learning allows multiple users to jointly t...

05/24/2022  Byzantine-Robust Federated Learning with Optimal Statistical Rates and Privacy Guarantees
We propose Byzantine-robust federated learning protocols with nearly opt...

06/03/2019  Secure Distributed On-Device Learning Networks With Byzantine Adversaries
The privacy concern exists when the central server has the copies of dat...

09/30/2020  Secure Aggregation with Heterogeneous Quantization in Federated Learning
Secure model aggregation across many users is a key component of federat...

02/20/2023  Byzantine-Resistant Secure Aggregation for Federated Learning Based on Coded Computing and Vector Commitment
In this paper, we propose an efficient secure aggregation scheme for fed...
