LOCKS: User Differentially Private and Federated Optimal Client Sampling

12/26/2022
by Ajinkya K Mulay, et al.

With changes in privacy laws, there is often a hard requirement that client data remain on the device rather than being sent to the server. Consequently, most processing happens on the device, and only a processed output is shared with the server. Such mechanisms are typically built by combining differential privacy with federated learning. Differential privacy adds noise to client outputs and thus degrades the quality of each iteration, while the distributed setting introduces additional complexity along with communication and performance overhead. Because these costs accrue with every round, reducing the number of iterations is essential. In this work, we provide an analytical framework for studying the convergence guarantees of gradient-based distributed algorithms. We show that our private algorithm minimizes the expected gradient variance in approximately d^2 rounds, where d is the dimensionality of the model. We then discuss and propose novel ways to improve the convergence rate and reduce this overhead using Importance Sampling (IS) and gradient diversity. Finally, we suggest alternative frameworks that may be better suited to exploiting client sampling techniques such as IS and gradient diversity.
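The two ingredients the abstract combines, per-client differential privacy and Importance Sampling of clients, can be illustrated with a minimal sketch. The code below is a hypothetical illustration, not the paper's LOCKS algorithm: the function name dp_gradient, the parameters clip_norm and noise_multiplier, and the norm-proportional sampling distribution are assumptions chosen for clarity, and the formal (epsilon, delta) accounting is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_gradient(grad, clip_norm=1.0, noise_multiplier=1.0):
    # Gaussian mechanism: clip the per-client gradient to bound its
    # sensitivity, then add calibrated Gaussian noise. The privacy
    # accounting (mapping noise_multiplier to epsilon, delta) is omitted.
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)

# --- one hypothetical communication round ---
d, n_clients, m = 10, 50, 5            # model dim, total clients, clients sampled
client_grads = [rng.normal(size=d) for _ in range(n_clients)]

# Each client privatizes its gradient locally before anything leaves the device.
private_grads = [dp_gradient(g) for g in client_grads]

# Importance Sampling: pick clients with probability proportional to the
# (noisy) gradient norm, so high-signal clients are sampled more often.
norms = np.array([np.linalg.norm(g) for g in private_grads])
p = norms / norms.sum()
idx = rng.choice(n_clients, size=m, replace=True, p=p)

# Inverse-probability weighting keeps the aggregate an unbiased estimate
# of the mean gradient over all clients.
agg = np.mean([private_grads[i] / (n_clients * p[i]) for i in idx], axis=0)
```

Sampling proportional to gradient norm is one common IS heuristic for reducing the variance of the aggregated update; the inverse-probability weights keep the estimator unbiased, since each sampled gradient is divided by its selection probability.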

research
11/22/2019

Federated Learning with Bayesian Differential Privacy

We consider the problem of reinforcing federated learning with formal pr...
research
07/11/2018

Differentially-Private "Draw and Discard" Machine Learning

In this work, we propose a novel framework for privacy-preserving client...
research
05/26/2022

Aggregating Gradients in Encoded Domain for Federated Learning

Malicious attackers and an honest-but-curious server can steal private c...
research
02/22/2022

Differential Secrecy for Distributed Data and Applications to Robust Differentially Secure Vector Summation

Computing the noisy sum of real-valued vectors is an important primitive...
research
02/02/2023

Fed-GLOSS-DP: Federated, Global Learning using Synthetic Sets with Record Level Differential Privacy

This work proposes Fed-GLOSS-DP, a novel approach to privacy-preserving ...
research
06/07/2022

Shuffled Check-in: Privacy Amplification towards Practical Distributed Learning

Recent studies of distributed computation with formal privacy guarantees...
research
05/28/2023

LLMs Can Understand Encrypted Prompt: Towards Privacy-Computing Friendly Transformers

Prior works have attempted to build private inference frameworks for tra...
