
AI Development for the Public Interest: From Abstraction Traps to Sociotechnical Risks

02/04/2021
by McKane Andrus, et al.

Despite interest in communicating ethical problems and social contexts within the undergraduate curriculum to advance Public Interest Technology (PIT) goals, interventions at the graduate level remain largely unexplored. This may be due to the conflicting ways through which distinct Artificial Intelligence (AI) research tracks conceive of their interface with social contexts. In this paper, we track the historical emergence of sociotechnical inquiry in three distinct subfields of AI research: AI Safety, Fair Machine Learning (Fair ML), and Human-in-the-Loop (HIL) Autonomy. We show that for each subfield, perceptions of PIT stem from the particular dangers faced by past integration of technical systems within a normative social order. We further interrogate how these histories dictate the response of each subfield to conceptual traps, as defined in the Science and Technology Studies literature. Finally, through a comparative analysis of these currently siloed fields, we present a roadmap for a unified approach to sociotechnical graduate pedagogy in AI.

Related research:

Hard Choices in Artificial Intelligence (06/10/2021)
As AI systems are integrated into high stakes social domains, researcher...

Automating Ambiguity: Challenges and Pitfalls of Artificial Intelligence (06/08/2022)
Machine learning (ML) and artificial intelligence (AI) tools increasingl...

Anticipation Next – System-sensitive technology development and integration in work contexts (03/01/2021)
When discussing future concerns within socio-technical systems in work c...

Artificial intelligence in government: Concepts, standards, and a unified framework (10/31/2022)
Recent advances in artificial intelligence (AI) and machine learning (ML...

Artificial Concepts of Artificial Intelligence: Institutional Compliance and Resistance in AI Startups (03/02/2022)
Scholars and industry practitioners have debated how to best develop int...

Manifestations of Xenophobia in AI Systems (12/15/2022)
Xenophobia is one of the key drivers of marginalisation, discrimination,...

The Anatomy of a Modular System for Media Content Analysis (02/25/2014)
Intelligent systems for the annotation of media content are increasingly...