Towards the Flatter Landscape and Better Generalization in Federated Learning under Client-level Differential Privacy

05/01/2023
by Yifan Shi, et al.

To defend against inference attacks and mitigate sensitive information leakage in Federated Learning (FL), client-level Differentially Private FL (DPFL) is the de-facto standard for privacy protection: local updates are clipped and random noise is added. However, existing DPFL methods tend to produce a sharp loss landscape and exhibit poor robustness to weight perturbation, resulting in severe performance degradation. To alleviate these issues, we propose a novel DPFL algorithm named DP-FedSAM, which leverages gradient perturbation to mitigate the negative impact of DP. Specifically, DP-FedSAM integrates the Sharpness-Aware Minimization (SAM) optimizer to generate locally flat models with improved stability and weight-perturbation robustness, which yields local updates with small norms and robustness to DP noise, thereby improving performance. To further reduce the magnitude of the random noise while achieving better performance, we propose DP-FedSAM-top_k, which adopts a local update sparsification technique. From the theoretical perspective, we present a convergence analysis that investigates how our algorithms mitigate the performance degradation induced by DP. We also give rigorous privacy guarantees with Rényi DP, a sensitivity analysis of the local updates, and a generalization analysis. Finally, we empirically confirm that our algorithms achieve state-of-the-art (SOTA) performance compared with existing SOTA baselines in DPFL.
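
To make the mechanism described above concrete, the following is a minimal, hypothetical PyTorch sketch of one client-side step: a SAM-style two-pass gradient update, followed by clipping and Gaussian noising of the local update, with an optional top-k sparsification in the spirit of DP-FedSAM-top_k. The function names and hyperparameters (rho, clip_norm, sigma, top_k) are illustrative assumptions, not the authors' implementation.

```python
import torch

def sam_local_step(model, loss_fn, batch, rho=0.05, lr=0.1):
    # One SAM step (sketch): perturb the weights along the ascent direction,
    # compute the gradient at the perturbed point, then update the original weights.
    x, y = batch
    params = list(model.parameters())

    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, params)
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12

    with torch.no_grad():
        eps = [rho * g / grad_norm for g in grads]   # ascent direction
        for p, e in zip(params, eps):
            p.add_(e)                                # w <- w + eps

    perturbed_loss = loss_fn(model(x), y)
    sam_grads = torch.autograd.grad(perturbed_loss, params)

    with torch.no_grad():
        for p, e, g in zip(params, eps, sam_grads):
            p.sub_(e)                                # undo the perturbation
            p.sub_(lr * g)                           # descent step with the SAM gradient


def privatize_update(local_update, clip_norm=1.0, sigma=1.0, top_k=None):
    # Client-level DP on a flattened local update (sketch): optional top-k
    # sparsification, L2 clipping to bound sensitivity, then Gaussian noise.
    delta = local_update.clone()
    if top_k is not None:
        k = min(top_k, delta.numel())
        mask = torch.zeros_like(delta)
        mask[delta.abs().topk(k).indices] = 1.0
        delta = delta * mask                         # keep the k largest-magnitude coordinates
    scale = torch.clamp(clip_norm / (delta.norm() + 1e-12), max=1.0)
    delta = delta * scale                            # clip so that ||delta|| <= clip_norm
    noise = torch.normal(0.0, sigma * clip_norm, size=delta.shape)
    return delta + noise                             # noisy update sent to the server
```

In a full DPFL round, the server would average such noisy updates over the sampled clients; the top-k variant reduces the number of noised coordinates, which is how, per the abstract, DP-FedSAM-top_k further reduces the magnitude of the injected random noise.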

Related research

Make Landscape Flatter in Differentially Private Federated Learning (03/20/2023)
To defend the inference attacks and mitigate the sensitive information l...

Federated Learning with Sparsified Model Perturbation: Improving Accuracy under Client-Level Differential Privacy (02/15/2022)
Federated learning (FL) that enables distributed clients to collaborativ...

DP-BREM: Differentially-Private and Byzantine-Robust Federated Learning with Client Momentum (06/22/2023)
Federated Learning (FL) allows multiple participating clients to train m...

Improving the Model Consistency of Decentralized Federated Learning (02/08/2023)
To mitigate the privacy leakages and communication burdens of Federated ...

On Privacy and Personalization in Cross-Silo Federated Learning (06/16/2022)
While the application of differential privacy (DP) has been well-studied...

Amplitude-Varying Perturbation for Balancing Privacy and Utility in Federated Learning (03/07/2023)
While preserving the privacy of federated learning (FL), differential pr...

Differential Privacy for Adaptive Weight Aggregation in Federated Tumor Segmentation (08/01/2023)
Federated Learning (FL) is a distributed machine learning approach that ...
