Locally Differentially Private Federated Learning: Efficient Algorithms with Tight Risk Bounds

by Andrew Lowy et al.

Federated learning (FL) is a distributed learning paradigm in which many clients with heterogeneous, unbalanced, and often sensitive local data collaborate to learn a model. Local Differential Privacy (LDP) provides a strong guarantee that each client's data cannot be leaked during or after training, without relying on a trusted third party. While LDP is often believed to be too stringent to allow for satisfactory utility, our paper challenges this belief. We consider a general setup with unbalanced, heterogeneous data, disparate privacy needs across clients, and unreliable communication, where a random number/subset of clients is available each round. We propose three LDP algorithms for smooth (strongly) convex FL; each is a noisy variation of distributed minibatch SGD. One is accelerated and one involves novel time-varying noise, which we use to obtain the first non-trivial LDP excess risk bound for the fully general non-i.i.d. FL problem. Specializing to i.i.d. clients, our risk bounds interpolate between the best known and/or optimal bounds in the centralized setting and the cross-device setting, where each client represents just one person's data. Furthermore, we show that in certain regimes, our convergence rate (nearly) matches the corresponding non-private lower bound or outperforms state-of-the-art non-private algorithms ("privacy for free"). Finally, we validate our theoretical results and illustrate the practical utility of our algorithm with numerical experiments.
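To make the algorithmic template concrete, here is a minimal sketch of LDP distributed minibatch SGD on a synthetic federated least-squares problem. All names, the noise scale `sigma`, the clipping threshold, and the client-availability probability are illustrative assumptions for this sketch, not the calibration or step sizes derived in the paper; the key structural features — each client clips and noises its gradient locally before sending it, and only a random subset of clients participates each round — follow the setup described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def client_gradient(w, X, y, batch_size, clip, sigma):
    """One client's clipped, locally noised minibatch gradient.

    Noise is added on the client, before communication, so the server
    never sees the raw gradient (the LDP requirement)."""
    idx = rng.choice(len(X), size=batch_size, replace=False)
    Xb, yb = X[idx], y[idx]
    g = Xb.T @ (Xb @ w - yb) / batch_size              # least-squares gradient
    g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))  # clip to bound sensitivity
    return g + rng.normal(0.0, sigma, size=g.shape)    # local Gaussian noise

def ldp_minibatch_sgd(clients, dim, rounds=200, lr=0.1,
                      batch_size=16, clip=5.0, sigma=0.1, avail=0.8):
    """Server loop: average noisy client gradients, take an SGD step."""
    w = np.zeros(dim)
    for _ in range(rounds):
        # Unreliable communication: a random subset of clients shows up.
        active = [c for c in clients if rng.random() < avail]
        if not active:
            continue
        grads = [client_gradient(w, X, y, batch_size, clip, sigma)
                 for X, y in active]
        w -= lr * np.mean(grads, axis=0)
    return w

# Synthetic heterogeneous clients sharing one ground-truth model.
dim = 5
w_star = np.arange(1.0, 6.0)
clients = []
for _ in range(10):
    X = rng.normal(size=(100, dim))
    y = X @ w_star + 0.1 * rng.normal(size=100)
    clients.append((X, y))

w_hat = ldp_minibatch_sgd(clients, dim)
print(w_hat)
```

The time-varying-noise and accelerated variants from the paper modify this skeleton (decaying `sigma` across rounds, momentum on the server update) but keep the same local-noise structure.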
