Explainability for identification of vulnerable groups in machine learning models

03/01/2022
by Inga Strumke, et al.

If a prediction model identifies vulnerable individuals or groups, the use of that model may become an ethical issue. But can we know that this is what a model does? Machine learning fairness as a field is focused on the just treatment of individuals and groups under information processing with machine learning methods. While considerable attention has been given to mitigating discrimination against protected groups, vulnerable groups have not received the same attention. Unlike protected groups, which can be regarded as always vulnerable, a vulnerable group may be vulnerable in one context but not in another. This raises new challenges concerning how and when to protect vulnerable individuals and groups under machine learning. Methods from explainable artificial intelligence (XAI), in contrast, do consider contextual issues and are concerned with answering the question "why was this decision made?". Neither existing fairness methods nor existing explainability methods allow us to ascertain whether a prediction model identifies vulnerability. We discuss this problem and propose approaches for analysing prediction models in this respect.
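To make the question concrete, one simple probe (not the paper's proposed method, just an illustrative sketch on synthetic data with hypothetical variable names) is to check whether a model's predicted-risk distribution separates a subgroup even when group membership is never given to the model directly:

```python
# Illustrative sketch: does a trained model implicitly "identify" a group?
# All data and names here are synthetic assumptions, not from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical group membership; the model never sees this column.
group = rng.integers(0, 2, n)

# Feature x0 is correlated with group membership, x1 is independent noise.
x0 = group + rng.normal(0.0, 0.5, n)
x1 = rng.normal(0.0, 1.0, n)
X = np.column_stack([x0, x1])

# The outcome depends on the group-correlated feature.
y = (x0 + rng.normal(0.0, 0.5, n) > 0.5).astype(int)

model = LogisticRegression().fit(X, y)
p = model.predict_proba(X)[:, 1]

# If predicted risk differs sharply between members and non-members,
# the model has effectively learned a proxy for the group.
gap = p[group == 1].mean() - p[group == 0].mean()
print(f"mean predicted-risk gap between groups: {gap:.2f}")
```

A large gap does not by itself establish that the model treats the group unjustly, which is precisely why the paper argues that such checks need contextual interpretation rather than a fixed fairness criterion.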


Related research:

- 06/08/2022: Challenges in Applying Explainability Methods to Improve the Fairness of NLP Models
- 01/12/2023: Against Algorithmic Exploitation of Human Vulnerabilities
- 02/08/2023: Participatory Systems for Personalized Prediction
- 10/10/2022: fAux: Testing Individual Fairness via Gradient Alignment
- 08/14/2019: Stop the Open Data Bus, We Want to Get Off
- 01/24/2020: Privacy for All: Demystify Vulnerability Disparity of Differential Privacy against Membership Inference Attack
- 07/04/2022: A Framework for Auditing Multilevel Models using Explainability Methods
