State Advantage Weighting for Offline RL

10/09/2022
by Jiafei Lyu, et al.

We present state advantage weighting for offline reinforcement learning (RL). In contrast to the action advantage A(s,a) commonly adopted in QSA learning, we leverage the state advantage A(s,s') and QSS learning for offline RL, thereby decoupling actions from values. The agent is expected to reach high-reward states, and the action is determined by how the agent can reach the corresponding next state. Experiments on D4RL datasets show that our proposed method achieves strong performance against common baselines. Furthermore, our method generalizes well when transferring from offline to online training.
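
The abstract outlines the core recipe: learn Q(s, s') and V(s), weight next-state prediction by the state advantage A(s, s') = Q(s, s') - V(s), and recover actions from chosen states. Below is a minimal, hypothetical PyTorch sketch of that idea; every network name, shape, loss choice, and the temperature beta are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of state advantage weighting (QSS-style learning).
# We learn Q(s, s') and V(s), form the state advantage A(s, s') = Q(s, s') - V(s),
# use exp(A / beta) to weight a next-state proposal model, and recover the
# action with a separately trained inverse dynamics model I(s, s') -> a.

def mlp(in_dim, out_dim, hidden=256):
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )

state_dim, action_dim, beta = 17, 6, 3.0  # placeholder sizes / temperature

q_net = mlp(2 * state_dim, 1)             # Q(s, s'): value of moving s -> s'
v_net = mlp(state_dim, 1)                 # V(s): state value
planner = mlp(state_dim, state_dim)       # proposes a desirable next state s'
inv_dyn = mlp(2 * state_dim, action_dim)  # inverse dynamics: (s, s') -> a

def saw_losses(s, s_next, a, r, done, gamma=0.99):
    """Losses for one batch of offline transitions (s, a, r, s', done)."""
    ss = torch.cat([s, s_next], dim=-1)

    # TD target for Q(s, s'), bootstrapped through V(s') (one common choice).
    with torch.no_grad():
        target = r + gamma * (1.0 - done) * v_net(s_next)
    q_loss = ((q_net(ss) - target) ** 2).mean()

    # Fit V(s) toward Q(s, s'); plain mean squared error is used for brevity,
    # though offline methods often prefer expectile regression here.
    with torch.no_grad():
        q_val = q_net(ss)
    v_loss = ((v_net(s) - q_val) ** 2).mean()

    # State-advantage-weighted regression: imitate dataset next states more
    # strongly when A(s, s') = Q(s, s') - V(s) is large.
    with torch.no_grad():
        adv = q_net(ss) - v_net(s)
        weight = torch.clamp(torch.exp(adv / beta), max=100.0)
    plan_err = ((planner(s) - s_next) ** 2).mean(dim=-1, keepdim=True)
    plan_loss = (weight * plan_err).mean()

    # Inverse dynamics recovers the action that achieves the chosen s'.
    inv_loss = ((inv_dyn(ss) - a) ** 2).mean()

    return q_loss, v_loss, plan_loss, inv_loss

def act(s):
    """Deployment: propose a high-advantage next state, then infer the action."""
    with torch.no_grad():
        s_target = planner(s)
        return inv_dyn(torch.cat([s, s_target], dim=-1))
```

At deployment, the planner proposes a desirable next state and the inverse dynamics model maps it to an action, which is the sense in which actions are decoupled from values.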

research 06/07/2021
Believe What You See: Implicit Constraint Approach for Offline Multi-Agent Reinforcement Learning
Learning from datasets without interaction with environments (Offline Le...

research 06/13/2023
A Simple Unified Uncertainty-Guided Framework for Offline-to-Online Reinforcement Learning
Offline reinforcement learning (RL) provides a promising solution to lea...

research 05/27/2019
LAW: Learning to Auto Weight
Example weighting algorithm is an effective solution to the training bia...

research 05/02/2023
Leveraging Factored Action Spaces for Efficient Offline Reinforcement Learning in Healthcare
Many reinforcement learning (RL) applications have combinatorial action ...

research 05/29/2019
Advantage Amplification in Slowly Evolving Latent-State Environments
Latent-state environments with long horizons, such as those faced by rec...

research 04/24/2023
Efficient Halftoning via Deep Reinforcement Learning
Halftoning aims to reproduce a continuous-tone image with pixels whose i...

research 10/12/2022
Semi-Supervised Offline Reinforcement Learning with Action-Free Trajectories
Natural agents can effectively learn from multiple data sources that dif...
