Bad, mad and cooked apples: Responsibility for unlawful targeting in human-AI military teams

10/31/2022
by Susannah Kate Devitt, et al.

A nation's responsibility is to anticipate and protect human wellbeing in conflict, including protection from moral injury and from unjust attribution of responsibility for one's actions. This position paper considers responsibility for unlawful killings by human-AI teams, drawing on a metaphor from Neta Crawford's chapter "When Soldiers Snap: Bad Apples and Mad Apples" in Accountability for Killing: Moral Responsibility for Collateral Damage in America's Post-9/11 Wars. The paper contends that, although militaries may have some bad apples responsible for war crimes and some mad apples unable to be responsible for their actions during a conflict, increasingly militaries may "cook" their good apples by placing them in untenable decision-making environments with AI. A cooked apple may be pushed beyond reasonable limits, suffering loss of situational awareness, cognitive overload, and loss of agency and autonomy that in turn lead to automation bias. In such cases, moral and perhaps even legal responsibility for unlawful deaths may be contested: cooked apples risk becoming moral crumple zones and/or suffering moral injury from being part of larger human-AI systems authorised by the state. Nations are responsible for minimising risks to humans within reasonable bounds, and for compliance with legal obligations, in human-AI military teams and in the military systems used to make or implement decisions. The paper suggests that best-practice work health and safety (WHS) frameworks might be drawn on in development, acquisition and training, ahead of the deployment of systems in conflicts, to predict and mitigate the risks of human-AI military teams.
