Certified Data Removal from Machine Learning Models

11/08/2019
by   Chuan Guo, et al.

Good data stewardship requires the removal of data at the request of the data's owner. This raises the question of whether, and how, a trained machine-learning model, which implicitly stores information about its training data, should be affected by such a removal request. Is it possible to "remove" data from a machine-learning model? We study this problem by defining certified removal: a very strong theoretical guarantee that a model from which data is removed cannot be distinguished from a model that never observed the data to begin with. We develop a certified-removal mechanism for linear classifiers and empirically study the learning settings in which this mechanism is practical.
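The abstract only sketches the removal mechanism, so the following is a minimal, illustrative Python sketch of a one-step Newton ("influence-function") update that approximately removes a single point from an L2-regularized logistic regression model. The objective, function names, and parameters are assumptions chosen for illustration, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def newton_remove(w, X, y, idx, lam):
    """Approximately remove training point `idx` from fitted weights `w`.

    Assumed objective: sum_i log(1 + exp(-y_i * w^T x_i)) + (lam / 2) * ||w||^2,
    with labels y_i in {-1, +1} and `w` the minimizer on the full data.
    """
    x_j, y_j = X[idx], y[idx]

    # Gradient of the removed point's loss term at the current weights.
    grad_j = -y_j * sigmoid(-y_j * (x_j @ w)) * x_j

    # Hessian of the objective restricted to the remaining points.
    keep = np.ones(len(y), dtype=bool)
    keep[idx] = False
    X_r = X[keep]
    p = sigmoid(X_r @ w)
    S = p * (1.0 - p)
    H = (X_r * S[:, None]).T @ X_r + lam * np.eye(X.shape[1])

    # Because `w` minimizes the full objective, the leave-one-out gradient
    # at `w` equals -grad_j; one Newton step therefore moves by H^{-1} grad_j.
    return w + np.linalg.solve(H, grad_j)
```

This update only approximates retraining from scratch; in the paper's setting, training also perturbs the objective with a random linear term so that the small residual left by such an approximate update can be masked, which is what makes the removal certifiable. The sketch above omits that masking step.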

