Explaining the Punishment Gap of AI and Robots

03/13/2020
by Gabriel Lima, et al.

The European Parliament's proposal to create a new legal status for artificial intelligence (AI) and robots brought into focus the idea of electronic legal personhood. This discussion, however, is hugely controversial. While some scholars argue that the proposed status could contribute to the coherence of the legal system, others say that it is neither beneficial nor desirable. Against this backdrop, we conducted a survey (N=3315) to understand online users' perceptions of the legal personhood of AI and robots. We observed how the participants assigned responsibility, awareness, and punishment to AI, robots, humans, and various entities that could be held liable under existing doctrines. We also asked whether the participants thought that punishing electronic agents fulfills the same legal and social functions as human punishment. The results suggest that even though people do not assign any mental state to electronic agents and are not willing to grant AI and robots physical independence or assets, which are the prerequisites of criminal or civil liability, they do consider them responsible for their actions and worthy of punishment. The participants also did not think that punishment or liability of these entities would achieve the primary functions of punishment, leading to what we define as the punishment gap. Therefore, before we recognize electronic legal personhood, we must first discuss proper methods of satisfying the general population's demand for punishment.


