Visual Identification of Problematic Bias in Large Label Spaces

01/17/2022
by   Alex Bäuerle, et al.

While the need for well-trained, fair ML systems keeps growing, measuring fairness for modern models and datasets is becoming increasingly difficult as they grow at an unprecedented pace. One key challenge in scaling common fairness metrics to such models and datasets is the requirement of exhaustive ground-truth labeling, which cannot always be met. This often rules out the application of traditional analysis metrics and systems. At the same time, ML-fairness assessments cannot be made purely algorithmically, as fairness is a highly subjective matter. Thus, domain experts need to be able to extract and reason about bias throughout models and datasets to make informed decisions. While visual analysis tools are of great help when investigating potential bias in DL models, none of the existing approaches have been designed for the specific tasks and challenges that arise in large label spaces. Addressing the lack of visualization work in this area, we propose guidelines for designing visualizations for such large label spaces, considering both technical and ethical issues. Our proposed visualization approach can be integrated into classical model and data pipelines, and we provide an open-source implementation of our techniques as a TensorBoard plug-in. With our approach, different models and datasets for large label spaces can be systematically and visually analyzed and compared to make informed fairness assessments that tackle problematic bias.


