Anonymizing Machine Learning Models

07/26/2020
by Abigail Goldsteen et al.

There is a well-known tension between the need to analyze personal data to drive business and privacy concerns. Many data protection regulations, including the EU General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), impose strict restrictions and obligations on companies that collect or process personal data. Moreover, machine learning models themselves can be used to derive personal information, as demonstrated by recent membership and attribute inference attacks. Anonymized data, however, is exempt from data protection principles and obligations. Thus, models built on anonymized data are also exempt from any privacy obligations, in addition to providing better protection against such attacks on the training data. However, learning on anonymized data typically results in a significant degradation in accuracy. We address this challenge by guiding our anonymization using the knowledge encoded within the model and targeting it to minimize the impact on the model's accuracy, a process we call accuracy-guided anonymization. We demonstrate that by focusing on the model's accuracy rather than on generic information loss, our method outperforms state-of-the-art k-anonymity methods in terms of the achieved utility, particularly for high values of k and large numbers of quasi-identifiers. We also demonstrate that our approach is as effective at preventing membership inference attacks as alternative approaches based on differential privacy. This shows that model-guided anonymization can, in some cases, be a legitimate substitute for such methods, while averting some of their inherent drawbacks, such as complexity, performance overhead, and applicability only to specific model types. Unlike methods that add noise during training, our approach requires no modification to the training algorithm itself.
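To make the idea concrete, below is a minimal sketch of what accuracy-guided k-anonymization might look like, assuming scikit-learn and a purely numeric feature matrix. The function name `anonymize`, its signature, and the use of a decision tree to form the record groups are illustrative assumptions, not the authors' published implementation.

```python
# A minimal sketch of accuracy-guided k-anonymity (illustrative, not the
# authors' actual code). The idea: instead of minimizing generic
# information loss, group records so that the target model's predictions
# stay as uniform as possible within each group, then generalize the
# quasi-identifiers per group.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def anonymize(X, model, quasi_ids, k):
    """Return a copy of X in which each quasi-identifier value is replaced
    by a per-group representative, with every group holding >= k records."""
    X_anon = X.astype(float)  # copy; assumes numeric quasi-identifiers
    # 1. Use the trained model's own predictions as labels, so the
    #    grouping is guided by the knowledge encoded within the model.
    y_pred = model.predict(X)
    # 2. Partition the quasi-identifier space with a decision tree whose
    #    leaves each contain at least k records (the k-anonymity constraint).
    tree = DecisionTreeClassifier(min_samples_leaf=k)
    tree.fit(X[:, quasi_ids], y_pred)
    leaves = tree.apply(X[:, quasi_ids])
    # 3. Generalize: within each leaf, replace quasi-identifier values
    #    with a single representative (here, the column mean).
    for leaf in np.unique(leaves):
        members = leaves == leaf
        for q in quasi_ids:
            X_anon[members, q] = X[members, q].mean()
    return X_anon
```

Because every leaf contains at least k records, and all records in a leaf share the same quasi-identifier values after generalization, the output satisfies k-anonymity over the quasi-identifiers; at the same time, the tree's split criterion keeps the target model's predictions homogeneous within each group, which is what limits the loss of model accuracy.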


