Market-Based Reinforcement Learning in Partially Observable Worlds

05/15/2001
by Ivo Kwee, et al.

Unlike traditional reinforcement learning (RL), market-based RL is in principle applicable to worlds described by partially observable Markov Decision Processes (POMDPs), where an agent needs to learn short-term memories of relevant previous events in order to execute optimal actions. Most previous work, however, has focused on reactive settings (MDPs) instead of POMDPs. Here we reimplement a recent approach to market-based RL and for the first time evaluate it in a toy POMDP setting.
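To make the "market" idea concrete, here is a minimal toy sketch in the spirit of a Holland-style bucket brigade: condition-action rules bid a fraction of their strength for control, the auction winner pays its bid to the previous winner, and external reward goes to the final winner, so credit flows backward through the economy. This is an illustrative assumption-laden sketch of the general mechanism on a simple reward chain, not the specific algorithm reimplemented in the paper; all constants and names are invented for illustration.

```python
import random

# Toy bucket-brigade market sketch (illustrative only, not the paper's
# algorithm): each (state, action) rule bids a fraction of its strength;
# the winner pays its bid to the previous winner, and external reward
# goes to the final winner, so credit propagates backward.

CHAIN_LEN = 4        # states 0..3; external reward 1.0 on reaching state 3
ACTIONS = (1, -1)    # move right / move left (position clipped at 0)
BID_FRACTION = 0.1   # each rule bids 10% of its current strength
NOISE = 0.02         # small auction jitter provides exploration

def run_episode(strength, rng):
    state, prev_rule = 0, None
    for _ in range(20):
        # sealed-bid auction among this state's rules; jitter breaks
        # near-ties, but the payment is the deterministic bid itself
        bids = {(state, a): BID_FRACTION * strength[(state, a)]
                            + rng.uniform(0.0, NOISE)
                for a in ACTIONS}
        rule = max(bids, key=bids.get)
        bid = BID_FRACTION * strength[rule]
        strength[rule] -= bid
        if prev_rule is not None:
            strength[prev_rule] += bid   # pay the rule that set the stage
        prev_rule = rule
        state = min(max(state + rule[1], 0), CHAIN_LEN - 1)
        if state == CHAIN_LEN - 1:
            strength[rule] += 1.0        # external reward to the last winner
            return True
    return False

rng = random.Random(0)
strength = {(s, a): 1.0 for s in range(CHAIN_LEN) for a in ACTIONS}
successes = sum(run_episode(strength, rng) for _ in range(200))
print("successful episodes:", successes)
print("strength of 'move right' at state 2:", round(strength[(2, 1)], 2))
```

After training, the rule that directly earns the external reward accumulates strength, and its growing bids in turn pay the rules that lead to it; in a POMDP variant, rules would additionally condition on (and write to) a short-term memory register, which is exactly the capability the abstract highlights.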


