Measuring collaborative emergent behavior in multi-agent reinforcement learning

by Sean L. Barton, et al.

Multi-agent reinforcement learning (RL) has important implications for the future of human-agent teaming. We show that improved performance with multi-agent RL is not a guarantee of the collaborative behavior thought to be important for solving multi-agent tasks. To address this, we present a novel approach for quantitatively assessing collaboration in continuous spatial tasks with multi-agent RL. Such a metric is useful for measuring collaboration between computational agents and may serve as a training signal for collaboration in future RL paradigms involving humans.
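To make the idea of a quantitative collaboration measure concrete, here is a minimal illustrative sketch, assuming agents are described by position time series in a continuous spatial task. The `coordination_score` function and the correlation-based measure are hypothetical stand-ins for exposition, not the metric proposed in the paper.

```python
import numpy as np

def coordination_score(traj_a, traj_b):
    """Toy coordination measure: mean absolute Pearson correlation
    between two agents' position time series, per spatial dimension.

    This is an illustrative, hypothetical metric -- it only captures
    linear co-movement, not the directed influence a real
    collaboration measure would need to assess.
    """
    traj_a = np.asarray(traj_a, dtype=float)
    traj_b = np.asarray(traj_b, dtype=float)
    scores = []
    for d in range(traj_a.shape[1]):
        a, b = traj_a[:, d], traj_b[:, d]
        # Degenerate (constant) trajectories contribute zero coordination.
        if a.std() == 0 or b.std() == 0:
            scores.append(0.0)
        else:
            scores.append(abs(np.corrcoef(a, b)[0, 1]))
    return float(np.mean(scores))

# Two agents moving in near-lockstep score close to 1;
# an agent paired with independent noise scores much lower.
t = np.linspace(0, 4 * np.pi, 200)
leader = np.stack([np.cos(t), np.sin(t)], axis=1)
rng = np.random.default_rng(0)
follower = leader + 0.05 * rng.normal(size=leader.shape)
independent = rng.normal(size=leader.shape)

print(coordination_score(leader, follower))     # high (near 1)
print(coordination_score(leader, independent))  # low
```

A score like this could, in principle, be logged alongside task reward during training to check whether performance gains actually coincide with coordinated behavior, which is the gap the paper highlights.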




