PRECAD: Privacy-Preserving and Robust Federated Learning via Crypto-Aided Differential Privacy

10/22/2021
by Xiaolan Gu, et al.

Federated Learning (FL) allows multiple participating clients to train machine learning models collaboratively by keeping their datasets local and only exchanging model updates. Existing FL protocol designs have been shown to be vulnerable to attacks that aim to compromise data privacy and/or model robustness. Recently proposed defenses have focused on ensuring either privacy or robustness, but not both. In this paper, we develop a framework called PRECAD, which simultaneously achieves differential privacy (DP) and enhances robustness against model poisoning attacks with the help of cryptography. Using secure multi-party computation (MPC) techniques (e.g., secret sharing), noise is added to the model updates by the honest-but-curious server(s) (instead of each client) without revealing clients' inputs, which achieves the benefit of centralized DP, namely a better privacy-utility tradeoff than local-DP-based solutions. Meanwhile, a crypto-aided secure validation protocol verifies that each client's model-update contribution is bounded, without leaking privacy. We show analytically that the noise added to ensure DP also provides enhanced robustness against malicious model submissions. We experimentally demonstrate that our PRECAD framework achieves a better privacy-utility tradeoff and enhances robustness of the trained models.
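To make the abstract's central idea more concrete, below is a minimal, self-contained sketch (Python/NumPy) of how additive secret sharing can let two non-colluding, honest-but-curious servers release only a noisy sum of clipped client updates. This is an illustration under assumed parameters, not the authors' actual PRECAD protocol: the clipping bound, noise scale, fixed-point encoding, and two-server setup are all assumptions, and the paper's crypto-aided validation of each client's contribution bound is omitted here (plain client-side clipping stands in for it).

```python
# Illustrative sketch only: additive secret sharing of clipped client updates
# between two non-colluding servers, with server-side Gaussian noise so that
# only the noisy aggregate is ever reconstructed. Not the PRECAD protocol itself.
import numpy as np

rng = np.random.default_rng(0)

CLIP_NORM = 1.0    # assumed per-client contribution bound (validated securely in PRECAD)
NOISE_SIGMA = 0.5  # assumed noise multiplier, calibrated to CLIP_NORM and the DP budget
MODULUS = 2**32    # additive secret sharing over the integers mod 2^32
SCALE = 2**16      # fixed-point scaling factor for encoding real-valued updates

def clip(update):
    """Bound a client's contribution by clipping its L2 norm to CLIP_NORM."""
    return update * min(1.0, CLIP_NORM / np.linalg.norm(update))

def share(update):
    """Split a clipped update into two additive shares, one per server."""
    encoded = np.mod(np.round(update * SCALE).astype(np.int64), MODULUS)
    share0 = rng.integers(0, MODULUS, size=update.shape, dtype=np.int64)
    share1 = np.mod(encoded - share0, MODULUS)
    return share0, share1

def encode_noise(std, dim):
    """Fixed-point encode a fresh Gaussian noise vector for in-protocol addition."""
    return np.mod(np.round(rng.normal(scale=std, size=dim) * SCALE).astype(np.int64), MODULUS)

def decode(vec_mod):
    """Map an aggregated (mod MODULUS) vector back to signed fixed-point reals."""
    signed = np.where(vec_mod >= MODULUS // 2, vec_mod - MODULUS, vec_mod)
    return signed / SCALE

# --- Toy round: 5 clients submit 4-dimensional model updates ---
dim = 4
updates = [rng.normal(size=dim) for _ in range(5)]

server0_sum = np.zeros(dim, dtype=np.int64)
server1_sum = np.zeros(dim, dtype=np.int64)
for u in updates:
    s0, s1 = share(clip(u))               # neither server sees u in the clear
    server0_sum = np.mod(server0_sum + s0, MODULUS)
    server1_sum = np.mod(server1_sum + s1, MODULUS)

# Each server adds half of the Gaussian noise to its aggregated share, so only
# the *noisy sum* of clipped updates is ever reconstructed (central-DP-style
# noise, but added by the servers rather than by each client).
half_std = NOISE_SIGMA * CLIP_NORM / np.sqrt(2)
server0_sum = np.mod(server0_sum + encode_noise(half_std, dim), MODULUS)
server1_sum = np.mod(server1_sum + encode_noise(half_std, dim), MODULUS)

noisy_aggregate = decode(np.mod(server0_sum + server1_sum, MODULUS))
print(noisy_aggregate)
```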

Related research:

06/22/2023  DP-BREM: Differentially-Private and Byzantine-Robust Federated Learning with Client Momentum
08/06/2020  On the relationship between (secure) multi-party computation and (secure) federated learning
09/07/2023  Byzantine-Robust Federated Learning with Variance Reduction and Differential Privacy
05/09/2022  Protecting Data from all Parties: Combining FHE and DP in Federated Learning
09/21/2021  DeSMP: Differential Privacy-exploited Stealthy Model Poisoning Attacks in Federated Learning
03/07/2023  Amplitude-Varying Perturbation for Balancing Privacy and Utility in Federated Learning
06/06/2023  FedVal: Different good or different bad in federated learning
