Locally Differentially Private Federated Learning: Efficient Algorithms with Tight Risk Bounds

06/17/2021 ∙ by Andrew Lowy, et al.

Federated learning (FL) is a distributed learning paradigm in which many clients with heterogeneous, unbalanced, and often sensitive local data collaborate to learn a model. Local Differential Privacy (LDP) provides a strong guarantee that each client's data cannot be leaked during or after training, without relying on a trusted third party. While LDP is often believed to be too stringent to allow for satisfactory utility, our paper challenges this belief. We consider a general setup with unbalanced, heterogeneous data, disparate privacy needs across clients, and unreliable communication, where a random number/subset of clients is available each round. We propose three LDP algorithms for smooth (strongly) convex FL; each is a noisy variation of distributed minibatch SGD. One is accelerated and one involves novel time-varying noise, which we use to obtain the first non-trivial LDP excess risk bound for the fully general non-i.i.d. FL problem. Specializing to i.i.d. clients, our risk bounds interpolate between the best known and/or optimal bounds in the centralized setting and the cross-device setting, where each client represents just one person's data. Furthermore, we show that in certain regimes, our convergence rate (nearly) matches the corresponding non-private lower bound or outperforms state-of-the-art non-private algorithms ("privacy for free"). Finally, we validate our theoretical results and illustrate the practical utility of our algorithm with numerical experiments.
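To make the core idea concrete, here is a minimal sketch of LDP noisy distributed minibatch SGD on a least-squares objective. This is an illustrative toy, not the paper's algorithms: the Gaussian-mechanism noise calibration, clipping threshold, step size, and client losses are all assumptions chosen for the example. The key LDP ingredient it shows is that each client clips and perturbs its own gradient locally before the server ever sees it, so no trusted aggregator is needed.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_noise_std(clip, epsilon, delta):
    """Illustrative Gaussian-mechanism noise scale for one clipped gradient."""
    return clip * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

def ldp_minibatch_sgd(client_data, w0, steps=200, lr=0.1,
                      clip=1.0, epsilon=5.0, delta=1e-5):
    """Noisy distributed minibatch SGD: each client clips and perturbs its
    own gradient (local DP) before the server averages and takes a step."""
    w = w0.copy()
    sigma = gaussian_noise_std(clip, epsilon, delta)
    for _ in range(steps):
        noisy_grads = []
        for X, y in client_data:
            g = X.T @ (X @ w - y) / len(y)                      # local least-squares gradient
            g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))   # clip to bound sensitivity
            g = g + rng.normal(0.0, sigma, size=g.shape)        # local Gaussian noise
            noisy_grads.append(g)
        w -= lr * np.mean(noisy_grads, axis=0)                  # server sees only noisy grads
    return w
```

In this sketch the server only ever receives already-noised gradients, which is what distinguishes the local model of DP from central DP, where a trusted curator adds noise after aggregation.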






Code Repositories


Code for the paper "Locally Differentially Private Federated Learning: Efficient Algorithms with Tight Risk Bounds," by Andrew Lowy & Meisam Razaviyayn
