Transfer Reinforcement Learning under Unobserved Contextual Information

03/09/2020
by   Yan Zhang, et al.

In this paper, we study a transfer reinforcement learning problem where the state transitions and rewards are affected by the environmental context. Specifically, we consider a demonstrator agent that has access to a context-aware policy and can generate transition and reward data based on that policy. These data constitute the experience of the demonstrator. Then, the goal is to transfer this experience, excluding the underlying contextual information, to a learner agent that does not have access to the environmental context, so that it can learn a control policy using fewer samples. It is well known that disregarding the causal effect of the contextual information can introduce bias in the transition and reward models estimated by the learner, resulting in a suboptimal learned policy. To address this challenge, in this paper, we develop a method to obtain causal bounds on the transition and reward functions using the demonstrator's data, which we then use to obtain causal bounds on the value functions. Using these value function bounds, we propose new Q-learning and UCB-Q learning algorithms that converge to the true value function without bias. We provide numerical experiments for robot motion planning problems that validate the proposed value function bounds and demonstrate that the proposed algorithms can effectively make use of the data from the demonstrator to accelerate the learning process of the learner.
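The core idea of constraining value estimates with bounds can be illustrated with a minimal sketch. This is not the paper's algorithm: the MDP below is a made-up 5-state chain, and the "causal" bounds (which the paper derives from the demonstrator's data) are replaced by the trivial range implied by the reward magnitude. Initializing Q at the upper bound gives optimism-driven exploration, loosely echoing the UCB-Q flavor of the proposed methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 5-state chain MDP (hypothetical, for illustration only).
# Actions: 0 = left, 1 = right; reward 1 for reaching the rightmost state.
n_states, n_actions = 5, 2
gamma, alpha, epsilon = 0.9, 0.1, 0.05

def step(s, a):
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward

# Stand-in bounds on Q. The paper derives causal bounds from the
# demonstrator's data; here we use the loose a-priori range
# 0 <= Q <= r_max / (1 - gamma) just to show the mechanism.
q_lo = np.zeros((n_states, n_actions))
q_hi = np.full((n_states, n_actions), 1.0 / (1.0 - gamma))

# Optimistic initialization at the upper bound drives exploration.
Q = q_hi.copy()

s = 0
for _ in range(20000):
    a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
    s_next, reward = step(s, a)
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])
    # Project the estimate back into the bounds after every update.
    Q[s, a] = np.clip(Q[s, a], q_lo[s, a], q_hi[s, a])
    s = 0 if s_next == n_states - 1 else s_next  # restart after the goal

print(Q.argmax(axis=1))  # greedy policy per state
```

After training, the greedy policy moves right in every non-terminal state; the projection step keeps every estimate inside the bounds throughout learning, which is the mechanism the causal bounds would exploit.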


research
06/07/2021

Learning without Knowing: Unobserved Context in Continuous Transfer Reinforcement Learning

In this paper, we consider a transfer Reinforcement Learning (RL) proble...
research
09/19/2019

Value function estimation in Markov reward processes: Instance-dependent ℓ_∞-bounds for policy evaluation

Markov reward processes (MRPs) are used to model stochastic phenomena ar...
research
01/05/2022

A Generalized Bootstrap Target for Value-Learning, Efficiently Combining Value and Feature Predictions

Estimating value functions is a core component of reinforcement learning...
research
06/23/2022

World Value Functions: Knowledge Representation for Learning and Planning

We propose world value functions (WVFs), a type of goal-oriented general...
research
08/28/2022

Goal-Conditioned Q-Learning as Knowledge Distillation

Many applications of reinforcement learning can be formalized as goal-co...
research
10/25/2021

Operator Augmentation for Model-based Policy Evaluation

In model-based reinforcement learning, the transition matrix and reward ...
research
08/24/2023

Intentionally-underestimated Value Function at Terminal State for Temporal-difference Learning with Mis-designed Reward

Robot control using reinforcement learning has become popular, but its l...
