
Differentially Private Bayesian Learning on Distributed Data

by Mikko Heikkilä et al.

Many applications of machine learning, for example in health care, would benefit from methods that can guarantee the privacy of data subjects. Differential privacy (DP) has become established as a standard for protecting learning results. However, standard DP algorithms require a single trusted party to have access to the entire dataset, which is a clear weakness. We consider DP Bayesian learning in a distributed setting, where each party holds only a single sample or a few samples of the data. We propose a learning strategy based on a secure multi-party sum function for aggregating summaries from the data holders, combined with the Gaussian mechanism for DP. Our method builds on asymptotically optimal and practically efficient DP Bayesian inference, and the distributed setting adds only a rapidly diminishing extra cost.
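The core idea sketched in the abstract — parties secret-share noisy summaries so that only the DP-protected sum is ever revealed — can be illustrated with a minimal toy example. This is an assumption-laden sketch, not the paper's actual protocol: it uses simple additive secret sharing over a fixed-point ring, and it splits the Gaussian-mechanism noise evenly across parties so the aggregate noise reaches the target variance without any single trusted party. All names and parameters (`MOD`, `SCALE`, `sigma`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
MOD = 2**32      # ring for additive secret sharing
SCALE = 2**16    # fixed-point encoding factor

def encode(x):
    # map a real value into the ring as a fixed-point integer
    return int(round(x * SCALE)) % MOD

def decode(v):
    # map a ring element back to a signed fixed-point real value
    v = int(v) % MOD
    if v >= MOD // 2:
        v -= MOD
    return v / SCALE

def make_shares(x, n):
    """Split an encoded value into n additive shares mod MOD."""
    shares = [int(s) for s in rng.integers(0, MOD, size=n - 1)]
    last = (encode(x) - sum(shares)) % MOD
    return shares + [last]

# Each party holds a clipped summary in [-1, 1] (bounded sensitivity).
summaries = [0.3, -0.7, 0.5, 0.9]
n = len(summaries)
sigma = 1.0  # target std of the total Gaussian-mechanism noise on the sum

# Each party perturbs its own summary with 1/n of the total noise
# variance, so the aggregated noise is N(0, sigma^2) in total.
noisy = [s + rng.normal(0, sigma / np.sqrt(n)) for s in summaries]

# Parties secret-share their noisy summaries; aggregator j sums the
# j-th share of every party, and the partial sums reconstruct only
# the DP-protected total, never any individual summary.
share_matrix = [make_shares(x, n) for x in noisy]
partials = [sum(col) % MOD for col in zip(*share_matrix)]
dp_sum = decode(sum(partials) % MOD)
```

Any single share (or any incomplete set of partial sums) is uniformly distributed on the ring, so an aggregator learns nothing about an individual party's summary; only the final combination reveals the noisy sum.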
