Responsible AI and Its Stakeholders

04/23/2020
by Gabriel Lima, et al.

Responsible Artificial Intelligence (AI) proposes a framework that holds all stakeholders involved in the development of AI responsible for their systems. It fails, however, to accommodate the possibility of holding AI itself responsible, which could close some legal and moral gaps concerning the deployment of autonomous and self-learning systems. We discuss three notions of responsibility (i.e., blameworthiness, accountability, and liability) for all stakeholders, including AI, and suggest the roles of jurisdiction and the general public in this matter.


