
Locally Differentially Private Federated Learning: Efficient Algorithms with Tight Risk Bounds

06/17/2021
by Andrew Lowy and Meisam Razaviyayn

Federated learning (FL) is a distributed learning paradigm in which many clients with heterogeneous, unbalanced, and often sensitive local data collaborate to learn a model. Local Differential Privacy (LDP) provides a strong guarantee that each client's data cannot be leaked during and after training, without relying on a trusted third party. While LDP is often believed to be too stringent to allow for satisfactory utility, our paper challenges this belief. We consider a general setup with unbalanced, heterogeneous data, disparate privacy needs across clients, and unreliable communication, where a random number/subset of clients is available each round. We propose three LDP algorithms for smooth (strongly) convex FL; each is a noisy variation of distributed minibatch SGD. One is accelerated and one involves novel time-varying noise, which we use to obtain the first non-trivial LDP excess risk bound for the fully general non-i.i.d. FL problem. Specializing to i.i.d. clients, our risk bounds interpolate between the best known and/or optimal bounds in the centralized setting and the cross-device setting, where each client represents just one person's data. Furthermore, we show that in certain regimes, our convergence rate (nearly) matches the corresponding non-private lower bound or outperforms state-of-the-art non-private algorithms (“privacy for free”). Finally, we validate our theoretical results and illustrate the practical utility of our algorithm with numerical experiments.
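The paper's three algorithms are not reproduced here, but the template the abstract says they build on, distributed minibatch SGD in which each client clips and perturbs its own gradient locally before communicating, can be sketched as follows. This is a minimal, illustrative NumPy sketch: the per-client least-squares model, the client availability probability, and the clip norm, noise scale, and other hyperparameters are assumptions made for illustration, not values from the paper, and the simple per-round noise scale is not a substitute for the paper's privacy accounting.

```python
# Illustrative sketch of locally differentially private distributed minibatch SGD
# (Gaussian mechanism). NOT the paper's exact algorithms; all constants are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic heterogeneous client data: each client holds its own (X_i, y_i).
num_clients, n_per_client, dim = 10, 200, 5
w_true = rng.normal(size=dim)
clients = []
for i in range(num_clients):
    # Shift each client's feature distribution to mimic non-i.i.d. local data.
    X = rng.normal(loc=0.5 * i / num_clients, size=(n_per_client, dim))
    y = X @ w_true + 0.1 * rng.normal(size=n_per_client)
    clients.append((X, y))

def clipped_avg_grad(w, X, y, clip):
    """Average of per-example gradients of 0.5 * (x @ w - y)**2, each clipped to L2 norm <= clip."""
    residual = X @ w - y
    grads = residual[:, None] * X                              # one gradient row per example
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads *= np.minimum(1.0, clip / np.maximum(norms, 1e-12))  # per-example clipping
    return grads.mean(axis=0)

# Hypothetical hyperparameters; a real deployment would calibrate sigma to a
# target (epsilon, delta) budget across all communication rounds.
clip, sigma, lr, rounds, batch = 1.0, 0.5, 0.1, 300, 32

w = np.zeros(dim)
for t in range(rounds):
    # Unreliable communication: a random subset of clients is available each round.
    available = [c for c in clients if rng.random() < 0.8] or [clients[0]]
    noisy_grads = []
    for X, y in available:
        idx = rng.choice(n_per_client, size=batch, replace=False)
        g = clipped_avg_grad(w, X[idx], y[idx], clip)
        # Gaussian noise is added on the client, before anything is sent to the server.
        g += (sigma * clip / batch) * rng.normal(size=dim)
        noisy_grads.append(g)
    # The (untrusted) server only ever sees noisy gradients; it averages them and steps.
    w -= lr * np.mean(noisy_grads, axis=0)

print("distance to ground-truth model:", np.linalg.norm(w - w_true))
```

The design point the sketch illustrates is that noise is injected on the client side, before any gradient leaves the device, so no trusted aggregator is required; per the abstract, the paper's accelerated and time-varying-noise algorithms are noisy variations of this same distributed minibatch SGD template.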


Related Research

06/25/2021

Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy

Providing privacy protection has been one of the primary motivations of ...
03/13/2022

Private Non-Convex Federated Learning Without a Trusted Server

We study differentially private (DP) federated learning (FL) with non-co...
05/30/2022

FedAUXfdp: Differentially Private One-Shot Federated Distillation

Federated learning suffers in the case of non-iid local datasets, i.e., ...
11/03/2022

Single SMPC Invocation DPHelmet: Differentially Private Distributed Learning on a Large Scale

Distributing machine learning predictors enables the collection of large...
10/03/2022

β-Stochastic Sign SGD: A Byzantine Resilient and Differentially Private Gradient Compressor for Federated Learning

Federated Learning (FL) is a nascent privacy-preserving learning framewo...
05/03/2022

Privacy Amplification via Random Participation in Federated Learning

Running a randomized algorithm on a subsampled dataset instead of the en...
09/17/2020

FLAME: Differentially Private Federated Learning in the Shuffle Model

Differentially private federated learning has been intensively studied. ...

Code Repositories

Locally-Differentially-Private-Federated-Learning

Code for the paper "Locally Differentially Private Federated Learning: Efficient Algorithms with Tight Risk Bounds," by Andrew Lowy & Meisam Razaviyayn
