No computation without representation: Avoiding data and algorithm biases through diversity

02/26/2020
by Caitlin Kuhlman, et al.

The emergence and growth of research on ethics in AI, and in particular on algorithmic fairness, has roots in an essential observation: structural inequalities in society are reflected in the data used to train predictive models and in the design of objective functions. While research aiming to mitigate these issues is inherently interdisciplinary, the design of unbiased algorithms and fair socio-technical systems is a key desired outcome that depends on practitioners from the fields of data science and computing. However, these computing fields broadly suffer from the same under-representation issues found in the datasets we analyze. This disconnect affects the design of both the desired outcomes and the metrics by which we measure success. If the ethical AI research community accepts this disconnect, we tacitly endorse the status quo and contradict the goals of non-discrimination and equity that work on algorithmic fairness, accountability, and transparency seeks to address. Therefore, we advocate in this work for diversifying computing as a core priority of the field and of our efforts to achieve ethical AI practices. We draw connections between the lack of diversity within academic and professional computing and the type and breadth of the biases encountered in datasets, machine learning models, problem formulations, and interpretations of results. Examining the current fairness/ethics in AI literature, we highlight cases where this lack of diverse perspectives has been foundational to inequitable treatment of data from underrepresented and protected groups. We also look to other professional communities, such as law and health, where disparities have been reduced both through the educational diversity of trainees and within professional practice. We use these lessons to develop recommendations that provide concrete steps for the computing community to increase diversity.


