Adversarially Guided Subgoal Generation for Hierarchical Reinforcement Learning

01/24/2022
by Vivienne Huiling Wang, et al.

Hierarchical reinforcement learning (HRL) proposes to solve difficult tasks by performing decision-making and control at successively higher levels of temporal abstraction. However, off-policy training in HRL often suffers from non-stationary high-level decision making, since the low-level policy is constantly changing. In this paper, we propose a novel HRL approach that mitigates this non-stationarity by adversarially forcing the high-level policy to generate subgoals compatible with the current instantiation of the low-level policy. In practice, the adversarial learning can be implemented by training, concurrently with the high-level policy, a simple discriminator network that determines how compatible a subgoal is with the current low-level policy. Experiments with state-of-the-art algorithms show that our approach significantly improves the learning efficiency and overall performance of HRL on various challenging continuous control tasks.
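To make the discriminator idea concrete, below is a minimal PyTorch-style sketch of how a compatibility discriminator could be trained alongside a high-level policy. The class name SubgoalDiscriminator, the choice of positive/negative examples, and the loss functions are illustrative assumptions for exposition, not the authors' exact formulation.

```python
import torch
import torch.nn as nn

class SubgoalDiscriminator(nn.Module):
    """Scores how compatible a proposed subgoal is with the current
    low-level policy, given the state it was issued from (illustrative)."""
    def __init__(self, state_dim, subgoal_dim, hidden_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + subgoal_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, state, subgoal):
        return self.net(torch.cat([state, subgoal], dim=-1))


def discriminator_loss(disc, state, compatible_subgoal, proposed_subgoal):
    """Binary-classification loss: subgoals the current low-level policy can
    plausibly achieve (e.g. states it actually reached) are labeled positive,
    and the high-level policy's raw proposals are labeled negative."""
    bce = nn.BCEWithLogitsLoss()
    pos = disc(state, compatible_subgoal)
    neg = disc(state, proposed_subgoal.detach())
    return bce(pos, torch.ones_like(pos)) + bce(neg, torch.zeros_like(neg))


def adversarial_penalty(disc, state, proposed_subgoal):
    """Term added to the high-level policy objective so that generated
    subgoals are pushed toward regions the discriminator deems compatible."""
    logits = disc(state, proposed_subgoal)
    return nn.functional.binary_cross_entropy_with_logits(
        logits, torch.ones_like(logits))
```

In this sketch, the discriminator and the high-level policy are updated in alternation: the discriminator learns to separate compatible from proposed subgoals, while the adversarial penalty regularizes subgoal generation toward what the current low-level policy can actually execute.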
