Learning Zero-Shot Cooperation with Humans, Assuming Humans Are Biased

02/03/2023
by Chao Yu, et al.

There is a recent trend of applying multi-agent reinforcement learning (MARL) to train agents that can cooperate with humans in a zero-shot fashion, without using any human data. The typical workflow is to first repeatedly run self-play (SP) to build a policy pool and then train the final adaptive policy against this pool. A crucial limitation of this framework is that every policy in the pool is optimized with respect to the environment reward function, which implicitly assumes that the adaptive policy's test-time partners will be optimizing precisely the same reward as well. However, human objectives are often substantially biased by individual preferences and can differ greatly from the environment reward. We propose a more general framework, Hidden-Utility Self-Play (HSP), which explicitly models human biases as hidden reward functions in the self-play objective. By approximating the space of hidden rewards with linear functions, HSP provides an effective technique for generating an augmented policy pool containing biased policies. We evaluate HSP on the Overcooked benchmark. Empirical results show that HSP achieves higher rewards than baselines when cooperating with learned human models, manually scripted policies, and real humans, and that the HSP policy is rated as the most assistive based on human feedback.
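To make the pool-construction step concrete, below is a minimal, hypothetical sketch of how biased partner policies could be generated under the linear-reward assumption the abstract describes. All names here (EVENT_FEATURES, sample_hidden_reward, self_play) are illustrative assumptions rather than the authors' released code, and the self_play stub stands in for a full MARL training run.

```python
# Hypothetical sketch of HSP-style policy-pool construction, assuming
# hidden human rewards are linear in a set of event-based features.
import numpy as np

# Assumed Overcooked-like event features phi(s, a); the linear
# approximation defines a hidden reward r_w(s, a) = w . phi(s, a).
EVENT_FEATURES = ["onion_pickup", "dish_pickup", "soup_delivery", "movement"]

def sample_hidden_reward(rng, scale=3.0):
    """Sample a weight vector w that defines one biased hidden reward."""
    return rng.uniform(-scale, scale, size=len(EVENT_FEATURES))

def biased_reward(w, event_counts):
    """Linear hidden reward: dot product of weights and event counts."""
    phi = np.array([event_counts.get(e, 0) for e in EVENT_FEATURES], float)
    return float(w @ phi)

def self_play(w, iters=10):
    """Placeholder for running self-play that optimizes the biased
    reward r_w; a real implementation would train a MARL policy here."""
    return {"reward_weights": w, "trained_iters": iters}

rng = np.random.default_rng(0)
# Each sampled weight vector yields one biased partner for the pool.
policy_pool = [self_play(sample_hidden_reward(rng)) for _ in range(8)]
```

In the actual method, the final adaptive policy is then trained against this augmented pool, so it learns to cooperate with partners holding diverse hidden utilities rather than only reward-optimal ones.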
