
Differentially Private Markov Chain Monte Carlo

by Mikko A. Heikkilä, et al.

Recent developments in differentially private (DP) machine learning and DP Bayesian learning have enabled learning under strong privacy guarantees for the training data subjects. In this paper, we further extend the applicability of DP Bayesian learning by presenting the first general DP Markov chain Monte Carlo (MCMC) algorithm whose privacy guarantees are not subject to unrealistic assumptions on Markov chain convergence and that is applicable to posterior inference in arbitrary models. Our algorithm is based on a decomposition of the Barker acceptance test that allows evaluating the Rényi DP privacy cost of the accept-reject choice. We further show how to improve the DP guarantee through data subsampling and approximate acceptance tests.
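To illustrate the starting point of the method, here is a minimal sketch of the (non-private) Barker acceptance test the paper builds on: a proposal is accepted when the log-posterior difference plus standard logistic noise is positive, which is equivalent to accepting with probability 1/(1 + exp(-Δ)). The DP decomposition in the paper replaces part of this logistic noise with Gaussian noise whose privacy cost can be accounted for; that part is not reproduced here. The target, proposal scale, and helper names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def barker_step(theta, log_post, proposal_std=0.5):
    """One MCMC step with the Barker acceptance test.

    Accept the proposal with probability 1 / (1 + exp(-delta)),
    implemented via the equivalent check delta + V > 0 with
    V ~ Logistic(0, 1). The DP variant in the paper decomposes V
    into a Gaussian part (the privacy noise) plus a correction.
    """
    prop = theta + rng.normal(0.0, proposal_std)  # symmetric random-walk proposal (assumed)
    delta = log_post(prop) - log_post(theta)      # log acceptance ratio
    if delta + rng.logistic() > 0:                # Barker test
        return prop
    return theta

# Toy target: standard normal log-posterior (illustrative only)
log_post = lambda t: -0.5 * t**2

theta, samples = 0.0, []
for _ in range(5000):
    theta = barker_step(theta, log_post)
    samples.append(theta)
```

On this toy target the chain's sample mean and standard deviation should approach 0 and 1; the Barker test has slightly lower acceptance probability than Metropolis-Hastings but admits the noise decomposition exploited for DP.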




Differentially Private Hamiltonian Monte Carlo

Markov chain Monte Carlo (MCMC) algorithms have long been the main workh...

Optimal Local Bayesian Differential Privacy over Markov Chains

In the literature of data privacy, differential privacy is the most popu...

Exact Privacy Guarantees for Markov Chain Implementations of the Exponential Mechanism with Artificial Atoms

Implementations of the exponential mechanism in differential privacy oft...

Statistic Selection and MCMC for Differentially Private Bayesian Estimation

This paper concerns differentially private Bayesian estimation of the pa...

Markov Chain Monte Carlo-Based Machine Unlearning: Unlearning What Needs to be Forgotten

As the use of machine learning (ML) models is becoming increasingly popu...

Differentially Private Kolmogorov-Smirnov-Type Tests

The test statistics for many nonparametric hypothesis tests can be expre...

Differentially Private Learning with Margin Guarantees

We present a series of new differentially private (DP) algorithms with d...