Modeling Moral Choices in Social Dilemmas with Multi-Agent Reinforcement Learning

01/20/2023
by Elizaveta Tennant, et al.

Practical uses of Artificial Intelligence (AI) in the real world have demonstrated the importance of embedding moral choices into intelligent agents. They have also highlighted that defining top-down ethical constraints on AI according to any one type of morality is extremely challenging and can pose risks. A bottom-up learning approach may be more appropriate for studying and developing ethical behavior in AI agents. In particular, we believe that an interesting and insightful starting point is the analysis of emergent behavior of Reinforcement Learning (RL) agents that act according to a predefined set of moral rewards in social dilemmas. In this work, we present a systematic analysis of the choices made by intrinsically motivated RL agents whose rewards are based on moral theories. We aim to design reward structures that are simplified yet representative of a set of key ethical systems. Therefore, we first define moral reward functions that distinguish between consequence- and norm-based agents, between morality based on societal norms or internal virtues, and between single- and mixed-virtue (e.g., multi-objective) methodologies. Then, we evaluate our approach by modeling repeated dyadic interactions between learning moral agents in three iterated social dilemma games (Prisoner's Dilemma, Volunteer's Dilemma, and Stag Hunt). We analyze the impact of different types of morality on the emergence of cooperation, defection, or exploitation, and the corresponding social outcomes. Finally, we discuss the implications of these findings for the development of moral agents in artificial and mixed human-AI societies.
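To make the setup concrete, below is a minimal sketch of how intrinsic moral rewards could be attached to repeated dyadic play of the three games. This is not the paper's implementation: the payoff values, the `penalty` constant, the reciprocity norm, and the tabular Q-learning agents are all illustrative assumptions.

```python
import random

# Illustrative payoff matrices for the three iterated games, as
# (row reward, column reward). Actions: 0 = Cooperate, 1 = Defect.
# The paper's exact payoff values may differ.
PAYOFFS = {
    "prisoners_dilemma":  {(0, 0): (3, 3), (0, 1): (0, 4),
                           (1, 0): (4, 0), (1, 1): (1, 1)},
    "stag_hunt":          {(0, 0): (4, 4), (0, 1): (0, 3),
                           (1, 0): (3, 0), (1, 1): (1, 1)},
    "volunteers_dilemma": {(0, 0): (3, 3), (0, 1): (2, 4),
                           (1, 0): (4, 2), (1, 1): (1, 1)},
}

def consequentialist_reward(game, a_me, a_other):
    """Utilitarian-style intrinsic reward: the sum of both players' payoffs."""
    r_me, r_other = PAYOFFS[game][(a_me, a_other)]
    return r_me + r_other

def norm_based_reward(a_me, a_other_prev, penalty=-5.0):
    """Deontological-style intrinsic reward: a penalty for defecting against
    an opponent who cooperated on the previous move (an assumed norm)."""
    return penalty if (a_me == 1 and a_other_prev == 0) else 0.0

class QAgent:
    """Tabular Q-learner whose state is the opponent's previous action."""
    def __init__(self, lr=0.1, gamma=0.9, eps=0.1):
        self.q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def act(self, state):
        if random.random() < self.eps:  # epsilon-greedy exploration
            return random.choice((0, 1))
        return max((0, 1), key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in (0, 1))
        td_error = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.lr * td_error

# One dyad: a consequentialist agent paired with a norm-based agent.
game = "prisoners_dilemma"
cons, deon = QAgent(), QAgent()
prev = (0, 0)  # assume mutual cooperation before the first round
for _ in range(10_000):
    a1 = cons.act(prev[1])  # each agent conditions on the other's last move
    a2 = deon.act(prev[0])
    r1 = consequentialist_reward(game, a1, a2)
    r2 = norm_based_reward(a2, prev[0])
    cons.update(prev[1], a1, r1, a2)
    deon.update(prev[0], a2, r2, a1)
    prev = (a1, a2)
```

A purely selfish baseline would simply return `r_me` from the payoff matrix; comparing the policies learned under selfish, consequentialist, and norm-based rewards is what exposes differences in emergent cooperation, defection, and exploitation across the three games.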
