Counterfactual Data Augmentation using Locally Factored Dynamics
Many dynamic processes, including common scenarios in robotic control and reinforcement learning (RL), involve a set of interacting subprocesses. Though the subprocesses are not independent, their interactions are often sparse, and the dynamics at any given time step can often be decomposed into locally independent causal mechanisms. Such local causal structures can be leveraged to improve the sample efficiency of sequence prediction and off-policy reinforcement learning. We formalize this by introducing local causal models (LCMs), which are induced from a global causal model by conditioning on a subset of the state space. We propose an approach to inferring these structures given an object-oriented state representation, as well as a novel algorithm for model-free Counterfactual Data Augmentation (CoDA). CoDA uses local structures and an experience replay buffer to generate counterfactual experiences that are causally valid in the global model. We find that CoDA significantly improves the performance of RL agents in locally factored tasks, including the batch-constrained and goal-conditioned settings.
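The core CoDA operation described in the abstract can be illustrated with a minimal sketch. The setup below is hypothetical (the factorization into two subprocesses, the `coda_swap` helper, and the toy transitions are all illustrative assumptions, not the paper's implementation): when two replayed transitions are locally factored, meaning the subprocesses did not interact during either transition, swapping one subprocess's components between them yields a new counterfactual transition that remains causally valid in the global model.

```python
import numpy as np

def coda_swap(t1, t2, factor):
    """Swap one locally independent factor between two transitions.

    Each transition is a dict with 'obs', 'action', 'next_obs', where each
    value is a list of per-subprocess arrays. Validity of the resulting
    counterfactual assumes the chosen factor did not interact with the
    others in either source transition (the local-factorization condition).
    """
    counterfactual = {k: [np.copy(x) for x in t1[k]] for k in t1}
    for key in ("obs", "action", "next_obs"):
        counterfactual[key][factor] = np.copy(t2[key][factor])
    return counterfactual

# Two real transitions from a toy environment with two non-interacting
# subprocesses (e.g. two objects driven by disjoint action dimensions).
t_a = {"obs": [np.array([0.0]), np.array([1.0])],
       "action": [np.array([0.1]), np.array([0.0])],
       "next_obs": [np.array([0.1]), np.array([1.0])]}
t_b = {"obs": [np.array([5.0]), np.array([2.0])],
       "action": [np.array([0.0]), np.array([-0.5])],
       "next_obs": [np.array([5.0]), np.array([1.5])]}

# Counterfactual: subprocess 0 evolves as in t_a, subprocess 1 as in t_b.
t_cf = coda_swap(t_a, t_b, factor=1)
print(t_cf["next_obs"])  # [array([0.1]), array([1.5])]
```

Repeating this swap over many pairs drawn from the replay buffer multiplies the effective amount of experience without additional environment interaction, which is the sample-efficiency gain the abstract refers to.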