Model based Multi-agent Reinforcement Learning with Tensor Decompositions

10/27/2021
by Pascal Van Der Vaart et al.

A central challenge in multi-agent reinforcement learning is generalizing over intractably large state-action spaces. Inspired by Tesseract [Mahajan et al., 2021], this position paper investigates generalization to unexplored state-action pairs by modeling the transition and reward functions as tensors of low CP-rank. Initial experiments on synthetic MDPs show that using tensor decompositions in a model-based reinforcement learning algorithm can lead to much faster convergence when the true transition and reward functions are indeed of low rank.
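To make the low-rank idea concrete, below is a minimal sketch (not code from the paper) of how a transition tensor for a two-agent tabular MDP can be built from CP factors. All dimensions, factor names, and the normalization step are illustrative assumptions.

    import numpy as np

    # Illustrative sizes: 8 states, 4 actions per agent, CP rank 3.
    S, A1, A2, R = 8, 4, 4, 3

    rng = np.random.default_rng(0)
    # One nonnegative factor matrix per tensor mode:
    # current state, agent-1 action, agent-2 action, next state.
    U = rng.random((S, R))    # current-state factors
    V = rng.random((A1, R))   # agent-1 action factors
    W = rng.random((A2, R))   # agent-2 action factors
    X = rng.random((S, R))    # next-state factors

    # CP model: T[s, a1, a2, s'] = sum_r U[s,r] V[a1,r] W[a2,r] X[s',r]
    T = np.einsum('sr,ar,br,tr->sabt', U, V, W, X)

    # Normalize over next states so each T[s, a1, a2, :] is a distribution.
    T /= T.sum(axis=-1, keepdims=True)
    assert np.allclose(T.sum(axis=-1), 1.0)

    # A reward tensor of the same CP form would use factors over (s, a1, a2).

The point of the low-rank assumption: the full tensor above has S * A1 * A2 * S = 1024 entries, but the CP model is parameterized by only R * (2S + A1 + A2) = 72 numbers. Observed transitions constrain the shared factors, which is what allows the model to generalize to state-action pairs it has never visited, provided the true dynamics are indeed (approximately) low-rank.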

Related research

10/27/2021
Reinforcement Learning in Factored Action Spaces using Tensor Decompositions
We present an extended abstract for the previously published work TESSER...

05/31/2021
Tesseract: Tensorised Actors for Multi-Agent Reinforcement Learning
Reinforcement Learning in large action spaces is a challenging problem. ...

09/15/2022
Mean-Field Approximation of Cooperative Constrained Multi-Agent Reinforcement Learning (CMARL)
Mean-Field Control (MFC) has recently been proven to be a scalable tool ...

03/20/2023
Deceptive Reinforcement Learning in Model-Free Domains
This paper investigates deceptive reinforcement learning for privacy pre...

05/27/2020
Tensor Decomposition for Multi-agent Predictive State Representation
Predictive state representation (PSR) uses a vector of action-observatio...

06/11/2020
Scalable Multi-Agent Reinforcement Learning for Networked Systems with Average Reward
It has long been recognized that multi-agent reinforcement learning (MAR...

04/27/2021
Learning Fair Canonical Polyadical Decompositions using a Kernel Independence Criterion
This work proposes to learn fair low-rank tensor decompositions by regul...
