
Understanding Compressive Adversarial Privacy

by Xiao Chen, et al.
Stanford University

Designing a data sharing mechanism without sacrificing too much privacy can be viewed as a game between data holders and malicious attackers. This paper describes a compressive adversarial privacy framework that captures the trade-off between data privacy and utility. We characterize the optimal data releasing mechanism through convex optimization when assuming that both the data holder and the attacker can only modify the data using linear transformations. We then build a more realistic data releasing mechanism that relies on a nonlinear compression model while the attacker uses a neural network. We demonstrate in a series of empirical applications that this compressive adversarial privacy framework can protect sensitive information.
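The linear setting described above can be illustrated with a minimal sketch: the data holder releases a linearly compressed view of the data, and the attacker fits the best linear estimator of the sensitive attribute from the release. The specific data model, dimensions, and projection below are illustrative assumptions, not the paper's actual construction, which is obtained via convex optimization.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 2000, 6, 4  # samples, raw feature dim, released dim (assumed values)

# Synthetic data: a sensitive attribute s and a useful attribute u both
# leak into the raw features X through separate columns.
s = rng.normal(size=n)            # sensitive attribute the attacker wants
u = rng.normal(size=n)            # useful attribute the holder wants to keep
X = rng.normal(size=(n, d))
X[:, 0] += 2.0 * s                # column 0 carries the sensitive signal
X[:, 1] += 2.0 * u                # column 1 carries the useful signal

def attacker_mse(Z, target):
    """Best linear attacker: least-squares fit of the target from the release Z."""
    w, *_ = np.linalg.lstsq(Z, target, rcond=None)
    return float(np.mean((Z @ w - target) ** 2))

# Naive release: random linear compression to k dimensions.
W_rand = rng.normal(size=(d, k))
Z_rand = X @ W_rand

# Privacy-aware release (heuristic stand-in for the optimized mechanism):
# compress within the subspace orthogonal to the direction along which
# the features correlate with s, estimated from the data itself.
c = X.T @ s / n                   # empirical feature-s covariance direction
c /= np.linalg.norm(c)
P = np.eye(d) - np.outer(c, c)    # projector that removes that direction
Z_priv = X @ (P @ W_rand)

mse_rand_s = attacker_mse(Z_rand, s)   # attacker does well: s leaks through
mse_priv_s = attacker_mse(Z_priv, s)   # near Var(s): attacker gains little
mse_priv_u = attacker_mse(Z_priv, u)   # utility survives the projection
print(f"attacker MSE on s (random / private): {mse_rand_s:.3f} / {mse_priv_s:.3f}")
print(f"estimator MSE on u (private release): {mse_priv_u:.3f}")
```

Against the privacy-aware release, the attacker's error rises toward the prior variance of `s`, while the useful attribute `u` remains recoverable, which is the privacy-utility trade-off the framework formalizes.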



