Preserving Privacy and Security in Federated Learning

02/07/2022
by Truc Nguyen, et al.

Federated learning is known to be vulnerable to both security and privacy issues. Existing research has focused either on preventing poisoning attacks from users or on protecting the privacy of users' model updates. However, integrating these two lines of research remains a crucial challenge since they often conflict with one another with respect to the threat model. In this work, we develop a framework to combine secure aggregation with defense mechanisms against poisoning attacks from users, while maintaining their respective privacy guarantees. We leverage a zero-knowledge proof protocol to let users run the defense mechanisms locally and attest the results to the central server without revealing any information about their model updates. Furthermore, we propose a new secure aggregation protocol for federated learning using homomorphic encryption that is robust against malicious users. Our framework enables the central server to identify poisoned model updates without violating the privacy guarantees of secure aggregation. Finally, we analyze the computation and communication complexity of our proposed solution and benchmark its performance.
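
To give a rough feel for the secure-aggregation idea the abstract refers to, the sketch below shows how a server can sum encrypted model updates so that only the aggregate is ever decrypted. It is a toy, single-key Paillier example built on the python-paillier (`phe`) library; the centralized key generation, the library choice, and the example update vectors are assumptions made purely for illustration and do not reflect the robust, malicious-user-tolerant protocol described in the paper.

```python
# Toy sketch of additively homomorphic aggregation with Paillier (phe library).
# NOTE: not the paper's protocol -- key generation is centralized here only
# for brevity; a real deployment would keep the secret key away from the server.
from phe import paillier

# Key pair generated once for demonstration purposes.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each user's local model update, flattened into a small vector of floats
# (hypothetical example values).
user_updates = [
    [0.10, -0.20, 0.05],   # user 1
    [0.30,  0.10, -0.15],  # user 2
    [-0.05, 0.25, 0.40],   # user 3
]

# Users encrypt their updates coordinate-wise before sending them to the server.
encrypted_updates = [
    [public_key.encrypt(w) for w in update] for update in user_updates
]

# The server adds ciphertexts coordinate-wise; it never sees any individual update.
aggregate = encrypted_updates[0]
for enc_update in encrypted_updates[1:]:
    aggregate = [a + b for a, b in zip(aggregate, enc_update)]

# Decrypting the sum (done by the key holder) reveals only the aggregate.
summed = [private_key.decrypt(c) for c in aggregate]
average = [s / len(user_updates) for s in summed]
print("aggregated update:", average)
```

Because Paillier ciphertexts can be added without decryption, the server learns only the sum of the updates, which is the property the paper combines with locally run, zero-knowledge-attested poisoning defenses.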
