
Advantage Amplification in Slowly Evolving Latent-State Environments

05/29/2019
by Martin Mladenov, et al. (Google)

Latent-state environments with long horizons, such as those faced by recommender systems, pose significant challenges for reinforcement learning (RL). In this work, we identify and analyze several key hurdles for RL in such environments, including belief state error and small action advantage. We develop a general principle of advantage amplification that can overcome these hurdles through the use of temporal abstraction. We propose several aggregation methods and prove they induce amplification in certain settings. We also bound the loss in optimality incurred by our methods in environments where latent state evolves slowly and demonstrate their performance empirically in a stylized user-modeling task.
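As a rough, hypothetical illustration of the amplification idea (not the paper's method or code): when the latent state evolves slowly, committing to an action for k consecutive steps lets a small per-step advantage accumulate, while value-estimation noise (e.g., from belief-state error) stays roughly fixed, so the better option becomes easier to distinguish. The toy numbers and the decision_accuracy helper below are assumptions made purely for illustration.

```python
import numpy as np

# Toy sketch (assumed setup, not the paper's algorithm): repeating an action
# for k steps ("temporal aggregation") amplifies a small per-step advantage
# relative to a fixed amount of value-estimation noise.

rng = np.random.default_rng(0)

per_step_advantage = 0.02   # hypothetical true one-step advantage of the better action
noise_std = 0.05            # hypothetical std of value-estimation error (e.g. belief error)

def decision_accuracy(k, trials=100_000):
    """Probability of choosing the better action when committing to it for
    k consecutive steps. With a slowly evolving latent state, the per-step
    advantage roughly accumulates over k steps, while the estimation noise
    on each option's value does not grow with k."""
    gap = k * per_step_advantage                      # amplified advantage
    noise = rng.normal(0.0, noise_std, size=(trials, 2))
    est_better = gap + noise[:, 0]                    # noisy estimate of the better option
    est_worse = noise[:, 1]                           # noisy estimate of the worse option
    return float(np.mean(est_better > est_worse))

for k in (1, 5, 20):
    print(f"k={k:>2}: P(correct choice) = {decision_accuracy(k):.3f}")
```

Under these assumed numbers, the probability of picking the better action should climb toward 1 as k grows, which is the intuition behind amplifying advantages via temporal abstraction.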


09/11/2019
RecSim: A Configurable Simulation Platform for Recommender Systems
We propose RecSim, a configurable platform for authoring simulation envi...

06/02/2019
The Principle of Unchanged Optimality in Reinforcement Learning Generalization
Several recent papers have examined generalization in reinforcement lear...

10/09/2022
State Advantage Weighting for Offline RL
We present state advantage weighting for offline reinforcement learning ...

12/05/2021
Benchmark for Out-of-Distribution Detection in Deep Reinforcement Learning
Reinforcement Learning (RL) based solutions are being adopted in a varie...

01/20/2023
Generative Slate Recommendation with Reinforcement Learning
Recent research has employed reinforcement learning (RL) algorithms to o...

02/15/2022
User-Oriented Robust Reinforcement Learning
Recently, improving the robustness of policies across different environm...