Data quality dimensions for fair AI

05/11/2023
by Camilla Quaresmini, et al.

AI systems are not intrinsically neutral: biases trickle into any kind of technological tool. When dealing with people in particular, AI algorithms reflect technical errors that originate in mislabeled data. Because they feed wrong and discriminatory classifications, perpetuating structural racism and marginalization, these systems are not systematically guarded against bias. In this article we consider the problem of bias in AI systems from the point of view of Information Quality dimensions. We illustrate potential improvements of a bias mitigation tool for gender classification errors, referring to two typically difficult contexts: the classification of non-binary individuals and the classification of transgender individuals. Identifying the data quality dimensions to implement in a bias mitigation tool may help achieve greater fairness. Hence, we propose to consider this issue in terms of completeness, consistency, timeliness and reliability, and offer some theoretical results.
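As an illustration only, not taken from the paper, the sketch below shows how the four dimensions named above could be approximated on a toy table of gender labels. The column names, the self_reported field used as a reference value, and the two-year freshness window are all hypothetical assumptions.

```python
# Hypothetical sketch (not from the paper): simple proxies for the four data
# quality dimensions the authors name, applied to a toy table of gender labels.
# Column names, the self_reported reference field, and thresholds are assumptions.
from datetime import datetime, timezone
import pandas as pd

records = pd.DataFrame({
    "subject_id":    [1, 2, 3, 4],
    "label":         ["female", "male", None, "non-binary"],          # annotator label
    "self_reported": ["female", "non-binary", "male", "non-binary"],  # reference value
    "labeled_at":    pd.to_datetime(
        ["2021-01-10", "2019-06-01", "2023-03-15", "2022-11-20"], utc=True),
})

# Completeness: share of records with a non-missing label.
completeness = records["label"].notna().mean()

# Consistency: share of labels that agree with the self-reported gender
# (computed only where both values are present).
both = records.dropna(subset=["label", "self_reported"])
consistency = (both["label"] == both["self_reported"]).mean()

# Timeliness: share of labels refreshed within a chosen window (here 2 years),
# since a label assigned in the past may no longer reflect the person's identity.
now = datetime.now(timezone.utc)
age_days = (now - records["labeled_at"]).dt.days
timeliness = (age_days < 2 * 365).mean()

# Reliability: crudely approximated here as the product of the other proxies;
# this is only a placeholder, not the paper's treatment of the dimension.
reliability = completeness * consistency * timeliness

print(f"completeness={completeness:.2f}, consistency={consistency:.2f}, "
      f"timeliness={timeliness:.2f}, reliability={reliability:.2f}")
```

In practice, reliability would need its own operationalization (for example annotator agreement or source trustworthiness) rather than the crude product used in this sketch.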


Related research

02/22/2022
Speciesist bias in AI – How AI applications perpetuate discrimination and unfair outcomes against animals
Massive efforts are made to reduce biases in both data and algorithms in...

12/13/2021
Anatomizing Bias in Facial Analysis
Existing facial analysis systems have been shown to yield biased results...

05/31/2023
Bias Mitigation Methods for Binary Classification Decision-Making Systems: Survey and Recommendations
Bias mitigation methods for binary classification decision-making system...

03/07/2023
"If I Had All the Time in the World": Ophthalmologists' Perceptions of Anchoring Bias Mitigation in Clinical AI Support
Clinical needs and technological advances have resulted in increased use...

07/28/2020
Data, Power and Bias in Artificial Intelligence
Artificial Intelligence has the potential to exacerbate societal bias an...

03/20/2023
Bias mitigation techniques in image classification: fair machine learning in human heritage collections
A major problem with using automated classification systems is that if t...
