DP-BREM: Differentially-Private and Byzantine-Robust Federated Learning with Client Momentum

06/22/2023
by Xiaolan Gu, et al.

Federated Learning (FL) allows multiple participating clients to train machine learning models collaboratively by keeping their datasets local and only exchanging gradient or model updates with a coordinating server. Existing FL protocols have been shown to be vulnerable to attacks that aim to compromise data privacy and/or model robustness. Recently proposed defenses focus on ensuring either privacy or robustness, but not both. In this paper, we focus on simultaneously achieving differential privacy (DP) and Byzantine robustness for cross-silo FL, based on the idea of learning from history. Robustness is achieved via client momentum, which averages each client's updates over time, thus reducing the variance of the honest clients and exposing the small malicious perturbations of Byzantine clients that are undetectable in a single round but accumulate over time. In our initial solution, DP-BREM, the DP property is achieved by adding noise to the aggregated momentum, and we account for the privacy cost of the momentum, in contrast to conventional DP-SGD, which accounts for the privacy cost of the gradient. Since DP-BREM assumes a trusted server (who can obtain clients' local models or updates), we further develop the final solution, called DP-BREM+, which achieves the same DP and robustness properties as DP-BREM without a trusted server by utilizing secure aggregation techniques, where DP noise is securely and jointly generated by the clients. Our theoretical analysis of the convergence rate and experimental results under different DP guarantees and attack settings demonstrate that our proposed protocols achieve a better privacy-utility tradeoff and stronger Byzantine robustness than several baseline methods.
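The core mechanism described above can be sketched in a few lines: each client maintains a momentum (an exponential moving average of its local gradients), and the server clips and averages the client momenta before adding Gaussian noise calibrated to the clipped momentum rather than the raw gradient. This is a minimal illustrative sketch, not the paper's actual protocol; the function names, the momentum coefficient `beta`, and the noise calibration below are our own simplifying assumptions (in particular, real DP accounting and the secure joint noise generation of DP-BREM+ are omitted).

```python
import numpy as np

rng = np.random.default_rng(0)


def client_momentum_update(momentum, gradient, beta=0.9):
    """Exponential moving average of a client's local gradients.

    Averaging over rounds reduces the variance of honest clients, so a
    Byzantine client's small per-round perturbation must accumulate over
    time to have an effect, which makes it easier to expose.
    """
    return beta * momentum + (1.0 - beta) * gradient


def aggregate_with_dp(client_momenta, clip_norm=1.0, noise_mult=1.0):
    """Server-side aggregation sketch in the DP-BREM spirit.

    Each client momentum is clipped to bound its sensitivity, the clipped
    momenta are averaged, and Gaussian noise calibrated to the *momentum*
    (not the raw gradient, as in conventional DP-SGD) is added.
    """
    clipped = []
    for m in client_momenta:
        norm = np.linalg.norm(m)
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        clipped.append(m * scale)
    avg = np.mean(clipped, axis=0)
    # Noise std scales with clip_norm / n because averaging over n clients
    # divides the per-client sensitivity by n (simplified calibration).
    sigma = noise_mult * clip_norm / len(client_momenta)
    return avg + rng.normal(0.0, sigma, size=avg.shape)
```

A round then consists of each client updating its momentum locally and sending it (or, in DP-BREM+, a secret share of it) for noisy aggregation; the noisy aggregate drives the global model update.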
