Hard Choices in Artificial Intelligence: Addressing Normative Uncertainty through Sociotechnical Commitments

11/20/2019
by Roel Dobbe, et al.

As AI systems become prevalent in high-stakes domains such as surveillance and healthcare, researchers now examine how to design and implement them safely. However, it remains unclear what potential harms these systems pose to stakeholders in complex social contexts, and how those harms should be addressed. In this paper, we explain the inherent normative uncertainty in debates about the safety of AI systems. We then treat this uncertainty as a problem of vagueness by examining its place in the design, training, and deployment stages of AI system development. We adopt Ruth Chang's theory of intuitive comparability to illustrate the dilemmas that manifest at each stage. We then discuss how stakeholders can navigate these dilemmas by incorporating distinct forms of dissent into the development pipeline, drawing on Elizabeth Anderson's work on the epistemic powers of democratic institutions. We outline a framework of sociotechnical commitments to formal, substantive, and discursive challenges that address normative uncertainty across stakeholders, and we propose that those responsible for development cultivate the related virtues.


