Bad, mad and cooked apples: Responsibility for unlawful targeting in human-AI military teams

10/31/2022
by Susannah Kate Devitt, et al.

A nation's responsibility is to anticipate and protect the wellbeing of its personnel in conflict, including protection from moral injury and from unjust attribution of responsibility for their actions. This position paper considers responsibility for unlawful killings by human-AI teams, drawing on a metaphor from Neta Crawford's chapter, "When Soldiers Snap: Bad Apples and Mad Apples", in Accountability for Killing: Moral Responsibility for Collateral Damage in America's Post-9/11 Wars. The paper contends that although militaries may have some bad apples responsible for war crimes, and some mad apples unable to be responsible for their actions during a conflict, militaries may increasingly cook their good apples by placing them in untenable decision-making environments with AI. A cooked apple may be pushed beyond reasonable limits, suffering a loss of situational awareness, cognitive overload, and a loss of agency and autonomy that leads to automation bias. In these cases, moral, and perhaps even legal, responsibility for unlawful deaths may be contested, risking operators becoming moral crumple zones and/or suffering moral injury from being part of larger human-AI systems authorised by the state. Nations are responsible for minimising risks to humans within reasonable bounds, and for complying with legal obligations, in human-AI military teams and in the military systems used to make or implement decisions. The paper suggests that best-practice work health and safety (WHS) frameworks might be drawn on during development, acquisition, and training, ahead of deploying systems in conflict, to predict and mitigate the risks of human-AI military teams.
