Explaining Models: An Empirical Study of How Explanations Impact Fairness Judgment

01/23/2019
by   Jonathan Dodge, et al.

Ensuring the fairness of machine learning systems is a human-in-the-loop process. It relies on developers, users, and the general public to identify fairness problems and make improvements. To facilitate this process, we need effective, unbiased, and user-friendly explanations that people can confidently rely on. Towards that end, we conducted an empirical study with four types of programmatically generated explanations to understand how they impact people's fairness judgments of ML systems. In an experiment involving more than 160 Mechanical Turk workers, we show that: 1) certain explanations are considered inherently less fair, while others can enhance people's confidence in the fairness of the algorithm; 2) different fairness problems (such as model-wide fairness issues versus case-specific fairness discrepancies) may be more effectively exposed through different styles of explanation; 3) individual differences, including prior positions on and judgment criteria for algorithmic fairness, impact how people react to different styles of explanation. We conclude with a discussion of providing personalized and adaptive explanations to support fairness judgments of ML systems.

