AI Development for the Public Interest: From Abstraction Traps to Sociotechnical Risks

02/04/2021
by   McKane Andrus, et al.

Despite interest in communicating ethical problems and social contexts within the undergraduate curriculum to advance Public Interest Technology (PIT) goals, interventions at the graduate level remain largely unexplored. This may be due to the conflicting ways in which distinct Artificial Intelligence (AI) research tracks conceive of their interface with social contexts. In this paper we trace the historical emergence of sociotechnical inquiry in three distinct subfields of AI research: AI Safety, Fair Machine Learning (Fair ML), and Human-in-the-Loop (HIL) Autonomy. We show that for each subfield, perceptions of PIT stem from the particular dangers posed by past integrations of technical systems within a normative social order. We further interrogate how these histories shape each subfield's response to conceptual traps, as defined in the Science and Technology Studies literature. Finally, through a comparative analysis of these currently siloed fields, we present a roadmap for a unified approach to sociotechnical graduate pedagogy in AI.

