ML Privacy Meter: Aiding Regulatory Compliance by Quantifying the Privacy Risks of Machine Learning

07/18/2020
by   Sasi Kumar Murakonda, et al.

When building machine learning models using sensitive data, organizations should ensure that the data processed in such systems is adequately protected. For projects involving machine learning on personal data, Article 35 of the GDPR mandates a Data Protection Impact Assessment (DPIA). Beyond the threat of illegitimate access to data through security breaches, machine learning models pose an additional privacy risk: they can indirectly reveal information about the training data through their predictions and parameters. Guidance released by the Information Commissioner's Office (UK) and the National Institute of Standards and Technology (US) emphasizes this threat from models and recommends that organizations account for and estimate these risks to comply with data protection regulations. Hence, there is an immediate need for a tool that can quantify the privacy risk to data from models. In this paper, we focus on this indirect leakage of training data from machine learning models. We present ML Privacy Meter, a tool that quantifies the privacy risk to data from models using state-of-the-art membership inference attack techniques. We discuss how this tool can help practitioners comply with data protection regulations when deploying machine learning models.
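To make the idea of quantifying membership inference risk concrete, the sketch below shows a minimal loss-threshold style audit: a model is trained on "member" data, and per-example losses on members versus held-out non-members are compared to see how well loss alone distinguishes the two groups. This is an illustrative assumption-laden example, not the actual ML Privacy Meter API; the dataset, model, and threshold-free AUC metric are all placeholders chosen to keep the snippet self-contained.

```python
# Illustrative loss-based membership inference audit (not the ML Privacy Meter API).
# Idea: if the model's per-example loss is systematically lower on training data
# than on unseen data, an attacker can infer membership from the loss signal.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic dataset: first half used for training (members), second half held out.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_mem, y_mem = X[:1000], y[:1000]
X_non, y_non = X[1000:], y[1000:]

model = LogisticRegression(max_iter=1000).fit(X_mem, y_mem)

def per_example_loss(model, X, y):
    """Cross-entropy loss of the true label for each example."""
    probs = model.predict_proba(X)
    p_true = probs[np.arange(len(y)), y]
    return -np.log(np.clip(p_true, 1e-12, None))

loss_mem = per_example_loss(model, X_mem, y_mem)
loss_non = per_example_loss(model, X_non, y_non)

# Attack score: lower loss suggests membership. An AUC near 0.5 means little
# membership leakage; an AUC near 1.0 indicates high privacy risk.
scores = np.concatenate([-loss_mem, -loss_non])
labels = np.concatenate([np.ones(len(loss_mem)), np.zeros(len(loss_non))])
print("Membership inference AUC:", roc_auc_score(labels, scores))
```

In practice, tools such as ML Privacy Meter apply stronger, state-of-the-art attacks than this simple loss threshold, but the reported risk metrics follow the same logic: how well can an adversary separate training members from non-members.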


Related research

09/14/2022 – Data Privacy and Trustworthy Machine Learning
The privacy risks of machine learning models is a major concern when tra...

08/06/2020 – Data Minimization for GDPR Compliance in Machine Learning Models
The EU General Data Protection Regulation (GDPR) mandates the principle ...

09/08/2022 – Privacy of Autonomous Vehicles: Risks, Protection Methods, and Future Directions
Recent advances in machine learning have enabled its wide application in...

04/07/2021 – Evaluating Medical IoT (MIoT) Device Security using NISTIR-8228 Expectations
How do healthcare organizations (from small Practices to large HDOs) eva...

12/09/2020 – Risk Management Framework for Machine Learning Security
Adversarial attacks for machine learning models have become a highly stu...

11/15/2020 – Towards Compliant Data Management Systems for Healthcare ML
The increasing popularity of machine learning approaches and the rising ...

05/30/2023 – Quantifying Overfitting: Evaluating Neural Network Performance through Analysis of Null Space
Machine learning models that are overfitted/overtrained are more vulnera...
