Meaningful human control over AI systems: beyond talking the talk

The concept of meaningful human control has been proposed to address responsibility gaps and mitigate them by establishing conditions that enable proper attribution of responsibility to humans (e.g., users, designers and developers, manufacturers, legislators). However, the relevant discussions around meaningful human control have so far not resulted in clear requirements for researchers, designers, and engineers. As a result, there is no consensus on how to assess whether a designed AI system is under meaningful human control, which makes it difficult in practice to develop AI-based systems that remain under meaningful human control. In this paper, we address the gap between philosophical theory and engineering practice by identifying four actionable properties that AI-based systems must have to be under meaningful human control. First, a system in which humans and AI algorithms interact should have an explicitly defined domain of morally loaded situations within which the system ought to operate. Second, humans and AI agents within the system should have appropriate and mutually compatible representations. Third, responsibility attributed to a human should be commensurate with that human's ability and authority to control the system. Fourth, there should be explicit links between the actions of the AI agents and the actions of humans who are aware of their moral responsibility. We argue that these four properties are necessary for AI systems under meaningful human control, and we provide possible directions for incorporating them into practice. We illustrate these properties with two use cases: automated vehicles and AI-based hiring. We believe these four properties will help practically minded professionals take concrete steps toward designing and engineering AI systems that facilitate meaningful human control and responsibility.
