
Using Contrastive Samples for Identifying and Leveraging Possible Causal Relationships in Reinforcement Learning

by Harshad Khadilkar, et al.
Tata Consultancy Services and IIT Bombay

A significant challenge in reinforcement learning is quantifying the complex relationship between actions and long-term rewards. The effects may manifest themselves over a long sequence of state-action pairs, making them hard to pinpoint. In this paper, we propose a method that links transitions exhibiting significant deviations in state to unusually large variations in subsequent rewards. Such transitions are marked as possible causal effects, and the corresponding state-action pairs are added to a separate replay buffer. In addition, we include contrastive samples corresponding to transitions from a similar state but with differing actions. Including this Contrastive Experience Replay (CER) during training is shown to outperform standard value-based methods on 2D navigation tasks. We believe that CER can be useful for a broad class of learning tasks, including any off-policy reinforcement learning algorithm.
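The abstract describes the CER mechanism at a high level: flag transitions whose state change and subsequent reward deviation are both unusually large, store them in a separate buffer, and pair them with contrastive transitions taken from a similar state but with a different action. A minimal sketch of that idea is below. Note that the concrete flagging criterion (z-scores against buffer statistics), the state-similarity threshold, and the sampling mix are illustrative assumptions, not the paper's exact formulation.

```python
import random
import numpy as np

class ContrastiveExperienceReplay:
    """Sketch of the CER idea: a main replay buffer plus a separate
    buffer of possible causal transitions and their contrastive pairs.

    Flagging criterion here is an assumption: a transition is marked
    when both its state change and its reward are outliers (z-score)
    relative to the main buffer's statistics.
    """

    def __init__(self, state_thresh=2.0, reward_thresh=2.0, sim_eps=0.5):
        self.main = []    # standard replay buffer: (s, a, r, s_next)
        self.causal = []  # flagged transitions + contrastive samples
        self.state_thresh = state_thresh
        self.reward_thresh = reward_thresh
        self.sim_eps = sim_eps  # state-similarity radius (assumed)

    def add(self, s, a, r, s_next):
        self.main.append((s, a, r, s_next))
        if self._is_significant(s, s_next, r):
            self.causal.append((s, a, r, s_next))
            # Contrastive samples: transitions from a similar state
            # but with a differing action.
            for (s2, a2, r2, s2n) in self.main:
                if a2 != a and np.linalg.norm(
                        np.asarray(s2) - np.asarray(s)) < self.sim_eps:
                    self.causal.append((s2, a2, r2, s2n))

    def _is_significant(self, s, s_next, r):
        # Need enough history to estimate buffer statistics.
        if len(self.main) < 10:
            return False
        deltas = [np.linalg.norm(np.asarray(sn) - np.asarray(ss))
                  for (ss, _, _, sn) in self.main]
        rewards = [rr for (_, _, rr, _) in self.main]
        d = np.linalg.norm(np.asarray(s_next) - np.asarray(s))
        z_state = (d - np.mean(deltas)) / (np.std(deltas) + 1e-8)
        z_reward = (r - np.mean(rewards)) / (np.std(rewards) + 1e-8)
        return z_state > self.state_thresh and abs(z_reward) > self.reward_thresh

    def sample(self, batch_size, causal_frac=0.25):
        # Mix flagged/contrastive transitions into each training batch;
        # the 25% mixing ratio is an assumption for illustration.
        k = min(int(batch_size * causal_frac), len(self.causal))
        batch = random.sample(self.causal, k) if k else []
        batch += random.sample(self.main,
                               min(batch_size - k, len(self.main)))
        return batch
```

Because the method only changes how experience is stored and sampled, a buffer like this can sit in front of any off-policy learner (e.g. DQN): the agent's update rule is untouched, and batches simply over-represent transitions flagged as possibly causal, together with their contrastive counterparts.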
