Warmth and competence in human-agent cooperation

01/31/2022
by Kevin R. McKee, et al.

Interaction and cooperation with humans are overarching aspirations of artificial intelligence (AI) research. Recent studies demonstrate that AI agents trained with deep reinforcement learning are capable of collaborating with humans. These studies primarily evaluate human compatibility through "objective" metrics such as task performance, obscuring potential variation in the levels of trust and subjective preference that different agents garner. To better understand the factors shaping subjective preferences in human-agent cooperation, we train deep reinforcement learning agents in Coins, a two-player social dilemma. We recruit participants for a human-agent cooperation study and measure their impressions of the agents they encounter. Participants' perceptions of warmth and competence predict their stated preferences for different agents, above and beyond objective performance metrics. Drawing inspiration from social science and biology research, we subsequently implement a new "partner choice" framework to elicit revealed preferences: after playing an episode with an agent, participants are asked whether they would like to play the next round with the same agent or to play alone. As with stated preferences, social perception better predicts participants' revealed preferences than does objective performance. Given these results, we recommend human-agent interaction researchers routinely incorporate the measurement of social perception and subjective preferences into their studies.
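The abstract does not spell out the rules of Coins, but in the common formulation of this two-player social dilemma (an assumption here; the paper's exact parameters may differ), each player has a colour, any coin pickup earns the collector a small reward, and collecting a coin of the partner's colour imposes a larger cost on the partner, so mutually "selfish" collection leaves both players worse off than mutual restraint. A minimal sketch of that payoff rule:

```python
def coins_payoffs(collector_colour, coin_colour, cost=2):
    """Payoff changes (collector, other player) when a coin is picked up.

    Assumed values: +1 to the collector for any pickup, and -`cost` to the
    other player when the coin matches that player's colour. These numbers
    are illustrative, not taken from the paper.
    """
    collector_delta = 1  # any pickup rewards the collector
    # Taking the partner's coin penalises the partner, creating the dilemma
    other_delta = -cost if coin_colour != collector_colour else 0
    return collector_delta, other_delta
```

Under this rule, an episode of mutual defection (each player always taking both colours) yields lower joint return than mutual cooperation (each taking only their own colour), which is what makes the game a social dilemma rather than a pure coordination task.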


