FairLay-ML: Intuitive Remedies for Unfairness in Data-Driven Social-Critical Algorithms

07/11/2023
by Normen Yu, et al.

This thesis explores open-source machine learning (ML) model explanation tools to understand whether they allow a layperson to visualize, understand, and suggest intuitive remedies to unfairness in ML-based decision-support systems. Machine learning models trained on datasets biased against minority groups are increasingly used to guide life-altering social decisions, creating an urgent need to examine their logic for unfairness. Because this problem affects vast populations, it is critical that laypeople – not only subject-matter experts in social justice or machine learning – understand the nature of unfairness within these algorithms and the trade-offs involved in remedying it. Existing research on fairness in machine learning focuses mostly on mathematical definitions and tools for understanding and remedying unfair models, with several works explicitly citing user-interactive tools as necessary future work. This thesis presents FairLay-ML, a proof-of-concept GUI that provides intuitive explanations for unfair logic in ML models by combining existing research tools (e.g., Local Interpretable Model-Agnostic Explanations, LIME) with an existing ML-focused GUI framework (Python Streamlit). We test FairLay-ML using models of varying accuracy and fairness generated by an unfairness-detection tool, Parfait-ML, and validate our results using Themis. Our study finds that the technology stack behind FairLay-ML makes it easy to install and delivers real-time black-box explanations of pre-trained models to users. Furthermore, the explanations provided translate into actionable remedies.
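As a rough illustration of the kind of per-decision explanation FairLay-ML surfaces (the thesis itself uses the LIME library wrapped in a Streamlit front end), a LIME-style local surrogate can be sketched with scikit-learn alone. All names, the synthetic dataset, and the kernel choice below are illustrative assumptions, not the thesis's actual pipeline:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Stand-in for a pre-trained, opaque decision-support model
# (in the thesis, such models come from Parfait-ML).
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def explain_instance(model, x, X_train, n_samples=1000, seed=0):
    """LIME-style local explanation: perturb the instance, weight the
    perturbations by proximity, and fit a linear surrogate to the
    black-box model's predicted probabilities."""
    rng = np.random.default_rng(seed)
    scale = X_train.std(axis=0)
    # Sample points in a neighborhood around the instance being explained.
    Z = x + rng.normal(scale=scale, size=(n_samples, x.size))
    probs = model.predict_proba(Z)[:, 1]
    # RBF proximity kernel: nearby perturbations count more.
    dists = np.linalg.norm((Z - x) / scale, axis=1)
    weights = np.exp(-(dists ** 2) / 2.0)
    surrogate = Ridge(alpha=1.0).fit(Z, probs, sample_weight=weights)
    return surrogate.coef_  # per-feature local influence on the decision

coefs = explain_instance(model, X[0], X)
for i, c in enumerate(coefs):
    print(f"feature_{i}: {c:+.3f}")
```

In FairLay-ML the analogous per-feature weights are rendered in a Streamlit page, so a user can see, for a single decision, which input features (e.g., a protected attribute) pushed the prediction and in which direction.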
