
Differentially Private Markov Chain Monte Carlo

01/29/2019
by Mikko A. Heikkilä et al.

Recent developments in differentially private (DP) machine learning and DP Bayesian learning have enabled learning under strong privacy guarantees for the training data subjects. In this paper, we further extend the applicability of DP Bayesian learning by presenting the first general DP Markov chain Monte Carlo (MCMC) algorithm whose privacy guarantees do not rely on unrealistic assumptions about Markov chain convergence and which is applicable to posterior inference in arbitrary models. Our algorithm is based on a decomposition of the Barker acceptance test that allows evaluating the Rényi DP privacy cost of the accept-reject choice. We further show how to improve the DP guarantee through data subsampling and approximate acceptance tests.
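The sketch below shows only the non-private Barker acceptance test, written in the noisy-threshold form that the decomposition mentioned above starts from: a proposal is accepted iff the log posterior ratio plus a standard logistic random variable is positive, which is equivalent to accepting with probability equal to the logistic sigmoid of that ratio. The paper's DP algorithm further splits the logistic noise into a Gaussian component (whose Rényi DP cost can be accounted) plus a correction term, and combines this with subsampling; neither of those steps is reproduced here. All names (`log_post`, `barker_mcmc`, `proposal_std`) are illustrative, not taken from the paper.

```python
# Minimal, non-private Barker-test MCMC on a toy 1-D Gaussian model.
# Accept iff delta + V > 0 with V ~ Logistic(0, 1), which is equivalent to
# accepting with probability sigmoid(delta), delta = log posterior ratio.
import numpy as np

rng = np.random.default_rng(0)

def log_post(theta, data):
    # Toy model: unit-variance Gaussian likelihood with a flat prior.
    return -0.5 * np.sum((data - theta) ** 2)

def barker_mcmc(data, n_iters=5000, proposal_std=0.5, theta0=0.0):
    theta = theta0
    samples = []
    for _ in range(n_iters):
        prop = theta + proposal_std * rng.normal()
        # Log posterior ratio; the symmetric proposal terms cancel.
        delta = log_post(prop, data) - log_post(theta, data)
        # Barker test in noisy-threshold form: draw standard logistic noise
        # and accept iff the noisy log ratio is positive.
        if delta + rng.logistic() > 0:
            theta = prop
        samples.append(theta)
    return np.array(samples)

if __name__ == "__main__":
    data = rng.normal(loc=1.5, size=100)
    samples = barker_mcmc(data)
    print("posterior mean estimate:", samples[1000:].mean())
```

In the DP variant described in the abstract, part of the logistic noise above would be replaced by Gaussian noise added to a (subsampled) log-likelihood ratio, so that the accept-reject decision can be analysed with the Gaussian mechanism under Rényi DP; that accounting is the paper's contribution and is not implemented in this sketch.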

Related research

06/17/2021
Differentially Private Hamiltonian Monte Carlo
Markov chain Monte Carlo (MCMC) algorithms have long been the main workh...

06/22/2022
Optimal Local Bayesian Differential Privacy over Markov Chains
In the literature of data privacy, differential privacy is the most popu...

04/03/2022
Exact Privacy Guarantees for Markov Chain Implementations of the Exponential Mechanism with Artificial Atoms
Implementations of the exponential mechanism in differential privacy oft...

03/24/2022
Statistic Selection and MCMC for Differentially Private Bayesian Estimation
This paper concerns differentially private Bayesian estimation of the pa...

02/28/2022
Markov Chain Monte Carlo-Based Machine Unlearning: Unlearning What Needs to be Forgotten
As the use of machine learning (ML) models is becoming increasingly popu...

08/12/2022
Differentially Private Kolmogorov-Smirnov-Type Tests
The test statistics for many nonparametric hypothesis tests can be expre...

04/21/2022
Differentially Private Learning with Margin Guarantees
We present a series of new differentially private (DP) algorithms with d...