Cognitive Models as Simulators: The Case of Moral Decision-Making

by Ardavan S. Nobandegani et al.

To achieve desirable performance, current AI systems often require huge amounts of training data. This is especially problematic in domains where collecting data is both expensive and time-consuming, e.g., where AI systems must interact with humans numerous times to collect feedback from them. In this work, we substantiate the idea of cognitive models as simulators: having AI systems interact with, and collect feedback from, cognitive models instead of humans, thereby making their training both less costly and faster. Here, we leverage this idea in the context of moral decision-making by having reinforcement learning (RL) agents learn about fairness through interacting with a cognitive model of the Ultimatum Game (UG), a canonical task in the behavioral and brain sciences for studying fairness. Interestingly, these RL agents learn to rationally adapt their behavior depending on the emotional state of their simulated UG responder. Our work suggests that using cognitive models as simulators of humans is an effective approach for training AI systems, presenting an important way for computational cognitive science to contribute to AI.
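The training loop described above can be sketched in miniature. The snippet below is a hedged illustration, not the paper's actual model: `SimulatedResponder` is a hypothetical stand-in for a cognitive model of the UG responder (its anger-dependent acceptance threshold and all parameter values are assumptions), and the proposer is a simple tabular bandit-style Q-learner over discrete offer fractions.

```python
import random

# Hypothetical cognitive model of a UG responder (an illustrative stand-in,
# NOT the paper's model): the responder accepts an offer when it meets a
# fairness threshold that rises with the responder's anger level.
class SimulatedResponder:
    def __init__(self, anger=0.0):
        self.anger = anger  # emotional state in [0, 1]

    def respond(self, offer_fraction):
        threshold = 0.2 + 0.3 * self.anger  # angrier -> demands a larger share
        return offer_fraction >= threshold  # True = accept the offer

# Tabular epsilon-greedy Q-learner choosing among discrete offer fractions;
# the proposer keeps (1 - offer) only when the simulated responder accepts.
def train_proposer(responder, offers=(0.1, 0.2, 0.3, 0.4, 0.5),
                   episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {o: 0.0 for o in offers}  # estimated value of each offer
    for _ in range(episodes):
        if rng.random() < epsilon:
            offer = rng.choice(offers)      # explore
        else:
            offer = max(q, key=q.get)       # exploit current best offer
        reward = (1.0 - offer) if responder.respond(offer) else 0.0
        q[offer] += alpha * (reward - q[offer])  # incremental value update
    return max(q, key=q.get)  # learned offer policy

# The agent learns to offer more to an angrier simulated responder.
calm_offer = train_proposer(SimulatedResponder(anger=0.0))
angry_offer = train_proposer(SimulatedResponder(anger=1.0))
```

Under these assumed parameters, the learner converges to the smallest offer the simulated responder will accept, so the trained offer rises with the responder's anger, mirroring the rational adaptation the abstract describes.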




Neuro-Nav: A Library for Neurally-Plausible Reinforcement Learning

In this work we propose Neuro-Nav, an open-source library for neurally plausible ...

Cognitive science as a source of forward and inverse models of human decisions for robotics and control

Those designing autonomous systems that interact with humans will invariably ...

Using Cognitive Models to Train Warm Start Reinforcement Learning Agents for Human-Computer Interactions

Reinforcement learning (RL) agents in human-computer interaction applications ...

Thought Cloning: Learning to Think while Acting by Imitating Human Thinking

Language is often considered a key aspect of human thinking, providing u...

Retrospective End-User Walkthrough: A Method for Assessing How People Combine Multiple AI Models in Decision-Making Systems

Evaluating human-AI decision-making systems is an emerging challenge as ...

Bayesian Reinforcement Learning with Limited Cognitive Load

All biological and artificial agents must learn and make decisions given...

Decoding the Enigma: Benchmarking Humans and AIs on the Many Facets of Working Memory

Working memory (WM), a fundamental cognitive process facilitating the te...
