A toolkit of dilemmas: Beyond debiasing and fairness formulas for responsible AI/ML

Approaches to fair and ethical AI have recently fallen under the scrutiny of the emerging, chiefly qualitative, field of critical data studies, which emphasizes such interventions' lack of sensitivity to context and to complex social phenomena. We draw on some of these lessons to introduce a tripartite decision-making toolkit, informed by dilemmas encountered in the pursuit of responsible AI/ML. These are: (a) the opportunity dilemma between the availability of data shaping problem statements and problem statements shaping data; (b) the trade-off between scalability and contextualizability (too much data versus too specific data); and (c) the epistemic positioning between pragmatic technical objectivism and reflexive relativism in acknowledging the social. This paper advocates for situated reasoning and creative engagement with the dilemmas surrounding responsible algorithmic/data-driven systems, and for going beyond the formulaic bias-elimination and ethics-operationalization narratives found in the fair-AI literature.

