Conservative AI and social inequality: Conceptualizing alternatives to bias through social theory

07/16/2020
by Mike Zajko, et al.

In response to calls for greater interdisciplinary involvement from the social sciences and humanities in the development, governance, and study of artificial intelligence systems, this paper presents one sociologist's view on the problem of algorithmic bias and the reproduction of societal bias. Discussions of bias in AI cover much of the same conceptual terrain that sociologists studying inequality have long understood using more specific terms and theories. Concerns over reproducing societal bias should be informed by an understanding of the ways that inequality is continually reproduced in society – processes that AI systems are either complicit in, or can be designed to disrupt and counter. The contrast presented here is between conservative and radical approaches to AI, with conservatism referring to dominant tendencies that reproduce and strengthen the status quo, while radical approaches work to disrupt systemic forms of inequality. The limitations of conservative approaches to class, gender, and racial bias are discussed as specific examples, along with the social structures and processes that biases in these areas are linked to. Societal issues can no longer be out of scope for AI and machine learning, given the impact of these systems on human lives. This requires engagement with a growing body of critical AI scholarship that goes beyond biased data to analyze structured ways of perpetuating inequality, opening up the possibility for radical alternatives.


Related research:

Data, Power and Bias in Artificial Intelligence (07/28/2020)

Pseudo AI Bias (10/14/2022)

Revealing Neural Network Bias to Non-Experts Through Interactive Counterfactual Examples (01/07/2020)

Bound by the Bounty: Collaboratively Shaping Evaluation Processes for Queer AI Harms (07/15/2023)

Does "AI" stand for augmenting inequality in the era of covid-19 healthcare? (04/30/2021)

Detecting race and gender bias in visual representation of AI on web search engines (06/26/2021)

Interdisciplinary Approaches to Understanding Artificial Intelligence's Impact on Society (12/11/2020)
