Human Perceptions on Moral Responsibility of AI: A Case Study in AI-Assisted Bail Decision-Making

02/01/2021
by Gabriel Lima, et al.

How to attribute responsibility for the actions of autonomous artificial intelligence (AI) systems has been widely debated across the humanities and social sciences. This work presents two experiments (N=200 each) that measure people's perceptions of eight notions of moral responsibility ascribed to AI and human agents in the context of bail decision-making. Using vignettes adapted from real-life cases, our experiments show that AI agents are held causally responsible and blamed to a degree similar to human agents for an identical task. However, people perceived these agents' moral responsibility differently: human agents were ascribed present-looking and forward-looking notions of responsibility to a higher degree than AI agents. We also found that people expect both AI and human decision-makers and advisors to justify their decisions, regardless of the agent's nature. We discuss policy and HCI implications of these findings, such as the need for explainable AI in high-stakes scenarios.

