
Partial sensitivity analysis in differential privacy

by   Tamara T. Mueller, et al.
Technische Universität München

Differential privacy (DP) quantifies the privacy loss incurred when individuals' data is subjected to algorithmic processing such as machine learning, and provides objective privacy guarantees. However, while techniques such as individual Rényi DP (RDP) allow for granular, per-person privacy accounting, few works have investigated the impact of each input feature on the individual's privacy loss. Here we extend the view of individual RDP by introducing a new concept we call partial sensitivity, which leverages symbolic automatic differentiation to determine the influence of each input feature on the gradient norm of a function. We experimentally evaluate our approach on queries over private databases, where we obtain a feature-level contribution of private attributes to the DP guarantee of individuals. Furthermore, we explore our findings in the context of neural network training on synthetic data by investigating the partial sensitivity of input pixels in an image classification task.
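The abstract's core idea, determining how much each input feature contributes to the gradient norm of a function via symbolic automatic differentiation, can be sketched as follows. This is a minimal illustration, not the paper's actual method: the query function, the three-attribute record, and the use of sympy are all assumptions made here for concreteness.

```python
import sympy as sp

# Symbolic input record with three private attributes (hypothetical example).
x1, x2, x3 = sp.symbols("x1 x2 x3")
record = [x1, x2, x3]

# Hypothetical database query: mean of the squared attributes.
query = (x1**2 + x2**2 + x3**2) / 3

# Gradient of the query with respect to the record, and its L2 norm.
# In DP, this norm bounds the query's sensitivity to the individual.
grad = [sp.diff(query, xi) for xi in record]
grad_norm = sp.sqrt(sum(g**2 for g in grad))

# "Partial sensitivity" in the spirit of the abstract: the influence of
# each individual attribute on the gradient norm, derived symbolically.
partial_sens = [sp.simplify(sp.diff(grad_norm, xi)) for xi in record]

# Evaluate at a concrete record to get per-feature contributions.
subs = {x1: 1.0, x2: 2.0, x3: 3.0}
print([float(p.subs(subs)) for p in partial_sens])
```

For this toy query the gradient norm is (2/3)·‖x‖₂, so each feature's contribution is proportional to its own magnitude; richer queries or neural-network losses would yield less uniform, feature-specific attributions.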


How Do Input Attributes Impact the Privacy Loss in Differential Privacy?

Differential privacy (DP) is typically formulated as a worst-case privac...

Sensitivity analysis in differentially private machine learning using hybrid automatic differentiation

In recent years, formal methods of privacy protection such as differenti...

An automatic differentiation system for the age of differential privacy

We introduce Tritium, an automatic differentiation-based sensitivity ana...

Efficient Privacy-Preserved Processing of Multimodal Data for Vehicular Traffic Analysis

We estimate vehicular traffic states from multimodal data collected by s...

Individual Sensitivity Preprocessing for Data Privacy

The sensitivity metric in differential privacy, which is informally defi...