Theoretical Guarantees for Model Auditing with Finite Adversaries

11/08/2019
by Mario Diaz, et al.

Privacy concerns have led to the development of privacy-preserving approaches for learning models from sensitive data. Yet, in practice, even models learned with privacy guarantees can inadvertently memorize unique training examples or leak sensitive features. To identify such privacy violations, existing model auditing techniques use finite adversaries, defined as machine learning models with (a) access to some finite side information (e.g., a small auditing dataset), and (b) finite capacity (e.g., a fixed neural network architecture). Our work investigates the requirements under which an unsuccessful attempt to identify privacy violations by a finite adversary implies that no stronger adversary can succeed at the task. We do so via parameters that quantify the capabilities of the finite adversary, including the size of the neural network it employs and the amount of side information it has access to, as well as the regularity of the (possibly privacy-preserving) audited model.
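To make the notion of a finite adversary concrete, the following is a minimal sketch of such an audit in Python, framed as a membership-inference attack. This is an illustration, not the paper's construction: the function `finite_adversary_audit`, the scikit-learn attacker, and all parameter choices are assumptions. The adversary's side information is a small labeled auditing set, and its capacity is fixed by a small MLP architecture.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def finite_adversary_audit(audited_model, members, non_members,
                           hidden_layers=(32,), seed=0):
    """Hypothetical membership-inference audit with a finite adversary.

    Side information: the audited model's confidence vectors on a small
    labeled auditing set (members vs. non-members of its training data).
    Capacity: a fixed, small MLP architecture given by `hidden_layers`.
    """
    X = np.vstack([audited_model.predict_proba(members),
                   audited_model.predict_proba(non_members)])
    y = np.concatenate([np.ones(len(members)), np.zeros(len(non_members))])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=seed)
    attacker = MLPClassifier(hidden_layer_sizes=hidden_layers,
                             max_iter=1000, random_state=seed)
    attacker.fit(X_tr, y_tr)
    # Held-out attack accuracy near 0.5 means this finite adversary
    # failed to detect a violation; the paper studies when that failure
    # also rules out stronger adversaries.
    return attacker.score(X_te, y_te)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic data; the first half trains the audited model.
    X = rng.normal(size=(400, 5))
    y = (X[:, 0] + 0.5 * rng.normal(size=400) > 0).astype(int)
    model = LogisticRegression(max_iter=1000).fit(X[:200], y[:200])
    print("attack accuracy:", finite_adversary_audit(model, X[:200], X[200:]))
```

Under this framing, the paper's question becomes: if the held-out attack accuracy stays near 0.5 for this fixed architecture and auditing set, what conditions on the network size, the amount of side information, and the regularity of the audited model guarantee that no stronger adversary would do better?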


