Differentially-Private "Draw and Discard" Machine Learning

07/11/2018
by Vasyl Pihur, et al.

In this work, we propose a novel framework for privacy-preserving client-distributed machine learning. It is motivated by the desire to achieve differential privacy guarantees in the local model of privacy in a way that satisfies practical systems constraints through asynchronous client-server communication while providing attractive model-learning properties. We call it "Draw and Discard" because it relies on random sampling of models for load distribution (scalability), which also provides additional server-side privacy protection and improves model quality through averaging. We present the mechanics of the client and server components of "Draw and Discard" and demonstrate how the framework can be applied to learning Generalized Linear Models. We then analyze the privacy guarantees provided by our approach against several types of adversaries and present experimental results that support the framework's viability in practical deployments.
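The abstract's description of the mechanics can be sketched in a few lines of Python. This is an illustrative toy, not the paper's implementation: the pool size `k`, the single-SGD-step linear-model client, and the Gaussian noise used for privatization are all assumptions made for the example. The server keeps `k` model instances; a client draws one at random, applies a noisy local update, and the server discards a random instance and inserts the update in its place.

```python
import random

class DrawAndDiscardServer:
    """Toy server: maintains a pool of k model instances (k is an
    illustrative choice). Clients draw an instance at random; returned
    updates replace a uniformly random (discarded) instance."""

    def __init__(self, init_weights, k=10):
        # k independent copies of the model weights
        self.pool = [list(init_weights) for _ in range(k)]

    def draw(self):
        # Random sampling spreads load across instances and, over time,
        # averages client noise across the pool
        return list(random.choice(self.pool))

    def discard_and_insert(self, updated_weights):
        # Discard a uniformly random instance and insert the client update
        self.pool[random.randrange(len(self.pool))] = list(updated_weights)

def client_update(weights, x, y, lr=0.1, noise_scale=1.0):
    """Toy client step for a linear model: one SGD step on (x, y), then
    additive Gaussian noise on each weight (stand-in for the calibrated
    noise a real local-DP mechanism would use)."""
    pred = sum(w * xi for w, xi in zip(weights, x))
    grad = [(pred - y) * xi for xi in x]
    return [w - lr * g + random.gauss(0.0, noise_scale)
            for w, g in zip(weights, grad)]
```

A round trip then looks like: `w = server.draw()`, `w = client_update(w, x, y)`, `server.discard_and_insert(w)`. The pool size never changes, so server-side state stays bounded regardless of how many clients report asynchronously.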


Related research

12/26/2022 | LOCKS: User Differentially Private and Federated Optimal Client Sampling
With changes in privacy laws, there is often a hard requirement for clie...

02/22/2023 | Multi-Message Shuffled Privacy in Federated Learning
We study differentially private distributed optimization under communica...

06/07/2018 | Privacy-Preserving Identification via Layered Sparse Code Design: Distributed Servers and Multiple Access Authorization
We propose a new computationally efficient privacy-preserving identifica...

07/19/2021 | Renyi Differential Privacy of the Subsampled Shuffle Model in Distributed Learning
We study privacy in a distributed learning framework, where clients coll...

09/06/2021 | Statistical Privacy Guarantees of Machine Learning Preprocessing Techniques
Differential privacy provides strong privacy guarantees for machine lear...

06/14/2022 | Private Set Matching Protocols
We introduce Private Set Matching (PSM) problems, in which a client aims...

06/27/2019 | Privacy-Preserving Distributed Learning with Secret Gradient Descent
In many important application domains of machine learning, data is a pri...
