To Be Forgotten or To Be Fair: Unveiling Fairness Implications of Machine Unlearning Methods

02/07/2023
by   Dawen Zhang, et al.

The right to be forgotten (RTBF) is motivated by people's desire not to be perpetually disadvantaged by their past deeds. To honor it, data deletion must be deep and permanent: deleted data should also be removed from machine learning models trained on it. Researchers have proposed machine unlearning algorithms that aim to erase specific data from trained models more efficiently than retraining from scratch. However, these methods change how data is fed into the model and how training is done, which may subsequently compromise AI ethics from the fairness perspective. To help software engineers make responsible decisions when adopting these unlearning methods, we present the first study of machine unlearning methods that reveals their fairness implications. We designed and conducted experiments on two typical machine unlearning methods (SISA and AmnesiacML) along with a retraining method (ORTR) as a baseline, using three fairness datasets under three different deletion strategies. Experimental results show that under non-uniform data deletion, SISA leads to better fairness than ORTR and AmnesiacML, while initial training and uniform data deletion do not necessarily affect the fairness of any of the three methods. These findings expose an important research problem in software engineering and can help practitioners better understand the potential trade-offs on fairness when considering solutions for RTBF.
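The abstract does not spell out how the studied methods operate, so the sketch below is only a rough illustration of the kind of mechanism involved: SISA-style sharded training, where deletion requests trigger retraining of only the affected shards. The toy dataset, shard count, and logistic-regression constituents are illustrative assumptions, not the paper's experimental setup, and the fairness effect is only hinted at in a comment.

```python
# Minimal sketch of SISA-style sharded unlearning, illustrating why
# non-uniform deletion can skew the data seen by a constituent model.
# All concrete choices here (data, shard count, model) are assumptions
# for illustration, not the configuration used in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1200, 8))                  # toy features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # toy labels

n_shards = 4
shards = np.array_split(np.arange(len(X)), n_shards)  # disjoint shards

def train_shard(idx):
    """Train one constituent model on a single shard."""
    return LogisticRegression(max_iter=200).fit(X[idx], y[idx])

models = [train_shard(idx) for idx in shards]

def predict(x):
    """Aggregate constituent models by majority vote."""
    votes = np.array([m.predict(x) for m in models])
    return (votes.mean(axis=0) >= 0.5).astype(int)

def unlearn(sample_ids):
    """Honor a deletion request: retrain only the shards that held the
    deleted points, leaving the other constituents untouched."""
    for s, idx in enumerate(shards):
        kept = np.setdiff1d(idx, sample_ids)
        if len(kept) < len(idx):                # this shard was affected
            shards[s] = kept
            models[s] = train_shard(kept)

# Non-uniform deletion: requests concentrated in one shard leave that
# constituent trained on a much smaller, potentially skewed subset --
# one route by which group fairness of the aggregate can drift.
unlearn(shards[0][:200])
print(predict(X[:5]))
```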
