Anonymizing Machine Learning Models

by Abigail Goldsteen, et al.

There is a known tension between the need to analyze personal data to drive business and privacy concerns. Many data protection regulations, including the EU General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), set out strict restrictions and obligations on companies that collect or process personal data. Moreover, machine learning models themselves can be used to derive personal information, as demonstrated by recent membership and attribute inference attacks. Anonymized data, however, is exempt from data protection principles and obligations. Thus, models built on anonymized data are also exempt from any privacy obligations, in addition to providing better protection against such attacks on the training data. However, learning on anonymized data typically results in a significant degradation in accuracy. We address this challenge by guiding our anonymization using the knowledge encoded within the model and targeting it to minimize the impact on the model's accuracy, a process we call accuracy-guided anonymization. We demonstrate that by focusing on the model's accuracy rather than on generic information loss, our method outperforms state-of-the-art k-anonymity methods in terms of the achieved utility, particularly with high values of k and large numbers of quasi-identifiers. We also demonstrate that our approach achieves results similar to alternative approaches based on differential privacy in its ability to prevent membership inference attacks. This shows that model-guided anonymization can, in some cases, be a legitimate substitute for such methods, while averting some of their inherent drawbacks, such as complexity, performance overhead, and being fitted to specific model types. Unlike methods that rely on adding noise during training, our approach does not require any modifications to the training algorithm itself.
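The core idea of accuracy-guided anonymization can be illustrated with a minimal sketch (not the authors' implementation): let a model trained on the quasi-identifiers define the generalization cells, so that records grouped together are ones the model treats similarly, and require each cell to contain at least k records. Here a decision tree with `min_samples_leaf=k` serves as the hypothetical guiding model, and each leaf's records are replaced by the leaf's representative values.

```python
# Hypothetical sketch of accuracy-guided k-anonymization; the real method
# is described in the paper, this only illustrates the principle.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def anonymize(X_qi, y, k):
    """Generalize quasi-identifier columns X_qi so each group has >= k rows."""
    # min_samples_leaf=k guarantees every leaf (generalization cell)
    # contains at least k training records
    tree = DecisionTreeClassifier(min_samples_leaf=k, random_state=0)
    tree.fit(X_qi, y)
    leaves = tree.apply(X_qi)  # cell id (leaf index) for each record
    X_anon = X_qi.astype(float).copy()
    for leaf in np.unique(leaves):
        mask = leaves == leaf
        # replace quasi-identifier values with the cell's representative (mean),
        # so all records in the cell become indistinguishable
        X_anon[mask] = X_qi[mask].mean(axis=0)
    return X_anon

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # 3 synthetic quasi-identifier features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic target labels
X_anon = anonymize(X, y, k=10)
# every distinct anonymized row now appears at least k times
_, counts = np.unique(X_anon, axis=0, return_counts=True)
print(counts.min())
```

Because the cells follow the splits a classifier would make anyway, the generalization tends to preserve decision-relevant structure better than generic information-loss-minimizing k-anonymity.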






