
Differential Privacy with Higher Utility through Non-identical Additive Noise

by   Gokularam Muthukrishnan, et al.
Indian Institute Of Technology, Madras

Differential privacy is typically ensured by perturbation with additive noise sampled from a known distribution. Conventionally, independent and identically distributed (i.i.d.) noise samples are added to each coordinate. In this work, we propose to add noise that is independent, but not identically distributed (i.n.i.d.), across the coordinates. In particular, we study the i.n.i.d. Gaussian and Laplace mechanisms and obtain the conditions under which these mechanisms guarantee privacy. The optimal choices of parameters that ensure these conditions are derived theoretically. Theoretical analyses and numerical simulations show that the i.n.i.d. mechanisms achieve higher utility for the given privacy requirements than their i.i.d. counterparts.
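To illustrate the core idea, the following is a minimal sketch of an additive-noise mechanism in which each coordinate receives Gaussian noise with its own scale. The function name and the example scale vectors are hypothetical illustrations; the paper derives the optimal per-coordinate scales, which are not reproduced here.

```python
import random

def inid_gaussian_mechanism(x, sigmas):
    """Perturb each coordinate of x with zero-mean Gaussian noise.

    Unlike the conventional i.i.d. mechanism, the noise standard
    deviation may differ per coordinate (i.n.i.d. noise). `sigmas`
    is a hypothetical per-coordinate scale vector; choosing it to
    satisfy the privacy conditions is the subject of the paper.
    """
    if len(x) != len(sigmas):
        raise ValueError("need one noise scale per coordinate")
    return [xi + random.gauss(0.0, si) for xi, si in zip(x, sigmas)]

# i.i.d. baseline: every coordinate uses the same scale
iid_out = inid_gaussian_mechanism([1.0, 2.0, 3.0], [0.5, 0.5, 0.5])

# i.n.i.d. variant: a different scale per coordinate
inid_out = inid_gaussian_mechanism([1.0, 2.0, 3.0], [0.2, 0.5, 0.9])
```

The i.i.d. mechanism is recovered as the special case where all entries of the scale vector are equal.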

