Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy

06/25/2021
by Xinwei Zhang, et al.

Providing privacy protection has been one of the primary motivations of Federated Learning (FL). Recently, there has been a line of work on incorporating the formal privacy notion of differential privacy into FL. To guarantee client-level differential privacy in FL algorithms, the clients' transmitted model updates have to be clipped before privacy noise is added. This clipping operation is substantially different from its counterpart, gradient clipping, in centralized differentially private SGD, and it has not been well understood. In this paper, we first empirically demonstrate that clipped FedAvg can perform surprisingly well even with substantial data heterogeneity when training neural networks, partly because the clients' updates become similar for several popular deep architectures. Based on this key observation, we provide a convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between the clipping bias and the distribution of the clients' updates. To the best of our knowledge, this is the first work that rigorously investigates theoretical and empirical issues regarding the clipping operation in FL algorithms.
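
As a concrete illustration of the clip-then-add-noise step described above, the following is a minimal NumPy sketch of one aggregation round of clipped FedAvg with client-level DP noise. It is not the paper's exact algorithm: the function name dp_fedavg_round, the flat 1-D representation of each client's model update, and the simple noise calibration (standard deviation noise_multiplier * clip_norm / num_clients) are assumptions made for this sketch; a real deployment would set the noise scale via a privacy accountant.

import numpy as np

def dp_fedavg_round(client_updates, clip_norm, noise_multiplier, rng=None):
    """One aggregation round of clipped FedAvg with client-level DP noise.

    client_updates   : list of 1-D numpy arrays (each client's model delta)
    clip_norm        : clipping threshold C applied to each client's update
    noise_multiplier : sigma; noise std is sigma * C / num_clients (assumed calibration)
    """
    rng = np.random.default_rng() if rng is None else rng
    num_clients = len(client_updates)

    clipped = []
    for delta in client_updates:
        norm = np.linalg.norm(delta)
        # Scale the whole update down so its L2 norm is at most clip_norm.
        scale = min(1.0, clip_norm / (norm + 1e-12))
        clipped.append(delta * scale)

    # Average the clipped updates, then add Gaussian noise whose scale is
    # calibrated to the per-client sensitivity clip_norm / num_clients.
    avg = np.mean(clipped, axis=0)
    noise_std = noise_multiplier * clip_norm / num_clients
    return avg + rng.normal(0.0, noise_std, size=avg.shape)

# Example: three clients with toy 4-dimensional updates.
updates = [np.array([0.5, -1.0, 2.0, 0.0]),
           np.array([3.0, 0.1, -0.2, 1.0]),
           np.array([-0.4, 0.4, 0.9, -1.5])]
noisy_avg = dp_fedavg_round(updates, clip_norm=1.0, noise_multiplier=0.5)

The point mirrored from the abstract is that each client's entire model update, rather than a per-sample gradient as in centralized DP-SGD, is scaled to have L2 norm at most clip_norm before averaging; this per-client scaling is what introduces the clipping bias whose dependence on the distribution of the clients' updates the paper analyzes.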


Related research

FeO2: Federated Learning with Opt-Out Differential Privacy (10/28/2021)
Federated Learning with Sparsified Model Perturbation: Improving Accuracy under Client-Level Differential Privacy (02/15/2022)
Adap DP-FL: Differentially Private Federated Learning with Adaptive Noise (11/29/2022)
Locally Differentially Private Federated Learning: Efficient Algorithms with Tight Risk Bounds (06/17/2021)
(Amplified) Banded Matrix Factorization: A unified approach to private training (06/13/2023)
On Privacy and Personalization in Cross-Silo Federated Learning (06/16/2022)
Subject Granular Differential Privacy in Federated Learning (06/07/2022)
