Verifying Pufferfish Privacy in Hidden Markov Models

08/04/2020
by Depeng Liu, et al.

Pufferfish is a Bayesian privacy framework for designing and analyzing privacy mechanisms. It refines differential privacy, the current gold standard in data privacy, by allowing explicit prior knowledge in the privacy analysis. Through these privacy frameworks, a number of privacy mechanisms have been developed in the literature. In practice, privacy mechanisms often need to be modified or adjusted for specific applications, and their privacy risks have to be re-evaluated under different circumstances. Moreover, computing devices can only approximate continuous noise through floating-point computation, which is discrete in nature. Privacy proofs can thus be complicated and prone to errors. Such tedious tasks can be burdensome to average data curators. In this paper, we propose an automatic verification technique for Pufferfish privacy. We use hidden Markov models to specify and analyze discretized Pufferfish privacy mechanisms. We show that the Pufferfish verification problem in hidden Markov models is NP-hard. Using Satisfiability Modulo Theories (SMT) solvers, we propose an algorithm to analyze privacy requirements. We implement our algorithm in a prototypical tool called FAIER and present several case studies. Surprisingly, our case studies show that naïve discretization of well-established privacy mechanisms often fails, as witnessed by counterexamples generated by FAIER. For the discretized Above Threshold mechanism, we show that naïve discretization results in no privacy at all. Finally, we compare our approach with a testing-based approach on several case studies, and show that our verification technique can be combined with testing for the purpose of (i) efficiently certifying counterexamples and (ii) obtaining a better lower bound for the privacy budget ε.
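For intuition about the verification question, the following is a minimal brute-force sketch in Python, not the paper's SMT-based algorithm: a toy two-state hidden Markov model with illustrative transition and emission matrices, two initial distributions pi_a and pi_b standing for a secret pair, and a search for an observation sequence whose likelihood ratio exceeds e^ε. All numbers and names (T, E, pi_a, pi_b, sequence_prob, violates) are illustrative assumptions introduced here; the paper's algorithm instead encodes such a search as SMT constraints, and FAIER automates it.

    # Brute-force sketch of the Pufferfish verification question on a toy HMM.
    # The model below is an illustrative assumption, not a model from the paper.
    import itertools
    import math

    # Transition matrix: T[s][t] = P(next state t | current state s)
    T = [[0.5, 0.5],
         [0.3, 0.7]]

    # Emission matrix: E[s][o] = P(observation o | state s)
    E = [[0.9, 0.1],
         [0.2, 0.8]]

    # Two initial distributions, one per secret in the secret pair.
    pi_a = [1.0, 0.0]
    pi_b = [0.0, 1.0]

    def sequence_prob(pi, obs):
        """P(observation sequence obs) under initial distribution pi (forward algorithm)."""
        alpha = [pi[s] * E[s][obs[0]] for s in range(len(pi))]
        for o in obs[1:]:
            alpha = [sum(alpha[s] * T[s][t] for s in range(len(alpha))) * E[t][o]
                     for t in range(len(alpha))]
        return sum(alpha)

    def violates(eps, max_len=4):
        """Search for an observation sequence whose likelihood ratio exceeds e^eps."""
        bound = math.exp(eps)
        for length in range(1, max_len + 1):
            for obs in itertools.product(range(2), repeat=length):
                pa, pb = sequence_prob(pi_a, obs), sequence_prob(pi_b, obs)
                if pa > bound * pb or pb > bound * pa:
                    return obs, pa, pb
        return None

    print(violates(eps=1.0))

Any sequence returned by this search is a counterexample to the ε bound for the chosen secret pair; exhaustively ruling such sequences out is what makes verification hard, which is why the paper resorts to SMT solving rather than enumeration.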


