Provably Feedback-Efficient Reinforcement Learning via Active Reward Learning

04/18/2023
by Dingwen Kong, et al.

An appropriate reward function is of paramount importance in specifying a task in reinforcement learning (RL). Yet, it is known to be extremely challenging in practice to design a correct reward function for even simple tasks. Human-in-the-loop (HiL) RL allows humans to communicate complex goals to the RL agent by providing various types of feedback. However, despite achieving great empirical successes, HiL RL usually requires too much feedback from a human teacher and also suffers from insufficient theoretical understanding. In this paper, we address this issue from a theoretical perspective, aiming to provide provably feedback-efficient algorithmic frameworks that use human-in-the-loop feedback to specify the rewards of given tasks. We provide an active-learning-based RL algorithm that first explores the environment without a specified reward function and then asks a human teacher only a few queries about the rewards of the task at selected state-action pairs. After that, the algorithm is guaranteed to return a nearly optimal policy for the task with high probability. We show that, even in the presence of random noise in the feedback, the algorithm takes only O(H dim_R^2) queries on the reward function to provide an ϵ-optimal policy for any ϵ > 0. Here H is the horizon of the RL environment, and dim_R specifies the complexity of the function class representing the reward function. In contrast, standard RL algorithms must query the reward function at Ω(poly(d, 1/ϵ)) state-action pairs, where d depends on the complexity of the environmental transition.
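The query-efficient loop described in the abstract — explore reward-free, then actively query a noisy human teacher only where the reward estimate is uncertain — can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the `noisy_teacher` reward, the count-based uncertainty rule, and all names here are hypothetical stand-ins (the paper's method uses confidence sets over a reward function class, whose size is captured by dim_R).

```python
import random

random.seed(0)

def noisy_teacher(state, action, noise=0.2):
    """Stand-in for human feedback: a hypothetical true reward
    plus zero-mean random noise, as in the paper's noisy-feedback setting."""
    true_r = 1.0 if state == action else 0.0  # illustrative reward only
    return true_r + random.uniform(-noise, noise)

def active_reward_learning(pairs, budget):
    """Spend a fixed query budget, each round querying the pair whose
    current estimate is least certain (here: fewest samples so far).
    Repeated queries are averaged to denoise the feedback."""
    samples = {sa: [] for sa in pairs}
    for _ in range(budget):
        sa = min(samples, key=lambda k: len(samples[k]))  # most uncertain pair
        samples[sa].append(noisy_teacher(*sa))
    return {sa: sum(v) / len(v) for sa, v in samples.items() if v}

# State-action pairs collected during a reward-free exploration phase.
visited = [(s, a) for s in range(3) for a in range(3)]
estimates = active_reward_learning(visited, budget=450)
```

With the budget spread over 50 queries per pair, the averaged estimates concentrate around the latent rewards despite the per-query noise, which is the intuition behind needing only a polynomial number of reward queries.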
