Q-CP: Learning Action Values for Cooperative Planning

03/01/2018
by Francesco Riccio, et al.

Research on multi-robot systems has demonstrated promising results in manifold applications and domains. Still, efficiently learning effective robot behaviors is very difficult, due to unstructured scenarios, high uncertainties, and large state dimensionality (e.g., hyper-redundant robots and groups of robots). To alleviate this problem, we present Q-CP, a cooperative model-based reinforcement learning algorithm that exploits action values to both (1) guide the exploration of the state space and (2) generate effective policies. Specifically, we exploit Q-learning to tackle the curse of dimensionality in the iterations of a Monte-Carlo Tree Search. We implement and evaluate Q-CP on different stochastic cooperative (general-sum) games: (1) a simple cooperative navigation problem among three robots, (2) a cooperation scenario between a pair of KUKA youBots performing hand-overs, and (3) a coordination task between two mobile robots entering a door. The obtained results show the effectiveness of Q-CP in the chosen applications, where action values drive the exploration and reduce the computational demand of the planning process while achieving good performance.
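The core idea described above — learned action values steering the node selection of a tree search — can be sketched in a minimal, self-contained form. The toy 1-D navigation domain, the constants, and the names `step`, `qcp_select`, and `plan` below are illustrative assumptions, not the authors' implementation:

```python
import math
import random

random.seed(0)

# Toy 1-D chain domain (an assumption for illustration): the agent starts
# at state 0 and must reach state 9; each non-goal step costs -0.01.
N_STATES = 10
ACTIONS = (-1, +1)
GOAL = N_STATES - 1

def step(s, a):
    s2 = max(0, min(GOAL, s + a))
    return s2, (1.0 if s2 == GOAL else -0.01), s2 == GOAL

# Phase 1: learn rough action values with tabular Q-learning.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.95, 0.2
for _ in range(500):
    s = 0
    for _ in range(50):
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: Q[(s, a)])
        s2, r, done = step(s, a)
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2
        if done:
            break

# Phase 2: a tree-search loop whose selection rule is seeded with the
# learned Q-values, so high-value actions are expanded first and fewer
# simulations are needed.
def qcp_select(s, counts, c=0.2):
    """Pick the action maximising Q(s, a) plus a UCB-style exploration bonus."""
    total = sum(counts[(s, a)] for a in ACTIONS) + 1
    return max(ACTIONS,
               key=lambda a: Q[(s, a)]
               + c * math.sqrt(math.log(total) / (counts[(s, a)] + 1)))

def plan(s0, iters=200, horizon=30):
    counts = {(s, a): 0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(iters):
        s = s0
        for _ in range(horizon):
            a = qcp_select(s, counts)
            counts[(s, a)] += 1
            s, _, done = step(s, a)
            if done:
                break
    # Return the most-visited root action, as in standard MCTS.
    return max(ACTIONS, key=lambda a: counts[(s0, a)])
```

In this sketch the Q-values act as a prior inside the UCB term, so simulations concentrate on promising branches instead of exploring uniformly — the mechanism by which the abstract says action values "reduce the computational demand of the planning process".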
