
Differential Privacy with Higher Utility through Non-identical Additive Noise

02/07/2023
by   Gokularam Muthukrishnan, et al.
Indian Institute of Technology Madras

Differential privacy is typically ensured by perturbation with additive noise sampled from a known distribution. Conventionally, independent and identically distributed (i.i.d.) noise samples are added to each coordinate. In this work, we propose to add noise that is independent, but not identically distributed (i.n.i.d.), across the coordinates. In particular, we study the i.n.i.d. Gaussian and Laplace mechanisms and obtain the conditions under which these mechanisms guarantee privacy. The optimal choice of parameters that ensures these conditions is derived theoretically. Theoretical analyses and numerical simulations show that the i.n.i.d. mechanisms achieve higher utility for the given privacy requirements compared to their i.i.d. counterparts.
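As a minimal sketch of the idea (assuming NumPy; the function name and the per-coordinate noise scales below are illustrative choices, not the optimal parameters derived in the paper), an i.n.i.d. additive-noise mechanism simply draws each noise coordinate with its own scale, with the i.i.d. mechanism recovered as the special case of equal scales:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_mechanism_inid(x, sigmas):
    """Perturb each coordinate of x with zero-mean Gaussian noise whose
    standard deviation may differ per coordinate (i.n.i.d. noise).
    Passing equal sigmas reproduces the conventional i.i.d. mechanism."""
    x = np.asarray(x, dtype=float)
    sigmas = np.asarray(sigmas, dtype=float)
    assert sigmas.shape == x.shape, "one noise scale per coordinate"
    return x + rng.normal(loc=0.0, scale=sigmas)

x = np.array([1.0, 2.0, 3.0])

# i.i.d. case: the same scale on every coordinate
y_iid = gaussian_mechanism_inid(x, [0.5, 0.5, 0.5])

# i.n.i.d. case: per-coordinate scales (illustrative values; the paper
# derives the optimal scales subject to the privacy constraint)
y_inid = gaussian_mechanism_inid(x, [0.3, 0.5, 0.9])
```

The utility gain studied in the paper comes from choosing the per-coordinate scales optimally under the privacy constraint, rather than forcing them to be equal.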


12/19/2022

Grafting Laplace and Gaussian distributions: A new noise mechanism for differential privacy

The framework of Differential privacy protects an individual's privacy w...
12/07/2020

A bounded-noise mechanism for differential privacy

Answering multiple counting queries is one of the best-studied problems ...
10/26/2022

Local Graph-homomorphic Processing for Privatized Distributed Systems

We study the generation of dependent random numbers in a distributed fas...
05/25/2022

Additive Logistic Mechanism for Privacy-Preserving Self-Supervised Learning

We study the privacy risks that are associated with training a neural ne...
02/26/2021

Private and Utility Enhanced Recommendations with Local Differential Privacy and Gaussian Mixture Model

Recommendation systems rely heavily on users behavioural and preferentia...
10/08/2020

Duff: A Dataset-Distance-Based Utility Function Family for the Exponential Mechanism

We propose and analyze a general-purpose dataset-distance-based utility ...