Playing the Blame Game with Robots

02/08/2021
by   Markus Kneer, et al.

Recent research shows, somewhat astonishingly, that people are willing to ascribe moral blame to AI-driven systems when they cause harm [1]-[4]. In this paper, we explore the moral-psychological underpinnings of these findings. Our hypothesis was that people ascribe moral blame to AI systems because they consider them capable of entertaining inculpating mental states (what the law calls mens rea). To explore this hypothesis, we created a scenario in which an AI system runs a risk of poisoning people by using a novel type of fertilizer. Manipulating the computational (or quasi-cognitive) abilities of the AI system in a between-subjects design, we tested people's willingness to ascribe knowledge of a substantial risk of harm (i.e., recklessness) and blame to the AI system. Furthermore, we investigated whether the ascription of recklessness and blame to the AI system would influence the perceived blameworthiness of the system's user (or owner). In an experiment with 347 participants, we found (i) that people are willing to ascribe blame to AI systems in contexts of recklessness, (ii) that blame ascriptions depend strongly on the willingness to attribute recklessness, and (iii) that the latter, in turn, depends on the perceived "cognitive" capacities of the system. Furthermore, our results suggest (iv) that the higher the computational sophistication of the AI system, the more blame is shifted from the human user to the AI system.
