
Understanding Compressive Adversarial Privacy

09/21/2018
by Xiao Chen, et al.
Stanford University

Designing a data-sharing mechanism that does not sacrifice too much privacy can be viewed as a game between data holders and malicious attackers. This paper describes a compressive adversarial privacy framework that captures the trade-off between data privacy and utility. We characterize the optimal data-releasing mechanism via convex optimization under the assumption that both the data holder and the attacker modify the data only through linear transformations. We then build a more realistic data-releasing mechanism that relies on a nonlinear compression model while the attacker uses a neural network. We demonstrate in a series of empirical applications that this compressive adversarial privacy framework can protect sensitive information.
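To make the game concrete, the sketch below illustrates the kind of alternating minimax training that the nonlinear setting implies, written in PyTorch. The architectures, losses, synthetic data, and trade-off weight lam are illustrative assumptions rather than the authors' exact setup: a compressor network releases a low-dimensional representation that a utility head can still use for a public task, while an attacker network trained on the released representation tries to recover a sensitive attribute, and the data holder's objective penalizes the attacker's success.

# Minimal sketch of the data holder vs. attacker game (assumed architecture,
# losses, and synthetic data; not the paper's exact setup).
import torch
import torch.nn as nn

torch.manual_seed(0)

DIM_X, DIM_Z = 20, 4   # raw and compressed dimensions (assumed)
N = 512                # synthetic sample count

# Synthetic data: x is the raw record, y a public label, s a sensitive attribute.
x = torch.randn(N, DIM_X)
y = (x[:, :5].sum(dim=1) > 0).float().unsqueeze(1)    # public task label
s = (x[:, 5:10].sum(dim=1) > 0).float().unsqueeze(1)  # sensitive attribute

compressor = nn.Sequential(nn.Linear(DIM_X, 16), nn.ReLU(), nn.Linear(16, DIM_Z))
utility_head = nn.Linear(DIM_Z, 1)   # data holder's model for the public task
attacker = nn.Sequential(nn.Linear(DIM_Z, 16), nn.ReLU(), nn.Linear(16, 1))

opt_holder = torch.optim.Adam(
    list(compressor.parameters()) + list(utility_head.parameters()), lr=1e-3)
opt_attacker = torch.optim.Adam(attacker.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # privacy/utility trade-off weight (assumed)

for step in range(2000):
    z = compressor(x)  # the released, compressed representation

    # Attacker step: learn to infer the sensitive attribute from the release.
    attacker_loss = bce(attacker(z.detach()), s)
    opt_attacker.zero_grad()
    attacker_loss.backward()
    opt_attacker.step()

    # Data-holder step: keep utility high while making the attacker fail.
    utility_loss = bce(utility_head(z), y)
    privacy_loss = bce(attacker(z), s)
    holder_loss = utility_loss - lam * privacy_loss
    opt_holder.zero_grad()
    holder_loss.backward()
    opt_holder.step()

The alternating updates approximate the minimax game described above; when the compressor and attacker are restricted to linear maps, the same trade-off reduces to the convex program that the paper characterizes analytically.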


Related research

06/06/2020
Compressive analysis and the Future of Privacy
Compressive analysis is the name given to the family of techniques that ...

11/20/2020
Deep Directed Information-Based Learning for Privacy-Preserving Smart Meter Data Release
The explosion of data collection has raised serious privacy concerns in ...

06/20/2021
A compressive multi-kernel method for privacy-preserving machine learning
As the analytic tools become more powerful, and more data are generated ...

05/02/2019
Passive and active attackers in noiseless privacy
Differential privacy offers clear and strong quantitative guarantees for...

08/28/2018
Privacy-preserving Neural Representations of Text
This article deals with adversarial attacks towards deep learning system...

02/15/2021
Genomic Data Sharing under Dependent Local Differential Privacy
Privacy-preserving genomic data sharing is prominent to increase the pac...