Soft Options Critic

05/23/2019
by Elita Lobo, et al.

The option-critic paper and several variants have successfully demonstrated the use of the options framework, proposed by Sutton et al., to scale learning and planning in hierarchical tasks. Although most of these frameworks use entropy as a regularizer to improve exploration, they do not maximize entropy along with returns at every time step. In this paper we investigate the effect of maximizing the entropy of each option's policy and of the inter-option policy in the options framework. We adopt the architecture of the recently introduced soft actor-critic algorithm to enable learning of robust options in continuous and discrete action spaces in an off-policy manner, which also makes the method sample-efficient. We derive a soft options improvement theorem and propose a novel soft-options framework that maximizes the entropy of actions and options in a constrained manner. Our experiments show that maximizing the entropy of actions and options in this constrained manner, even with a high learning rate, does not harm the main objective of maximizing returns; the resulting method outperforms the vanilla option-critic framework on most hierarchical tasks. We also observe faster recovery when the environment is subject to perturbations.
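As a rough sketch of the objective described above (the notation here is assumed for illustration, not taken from the paper): following the maximum-entropy formulation of soft actor-critic, the discounted return can be augmented with entropy bonuses for both the active option's intra-option policy \pi_{o_t} and the inter-option policy \pi_\Omega, weighted by temperatures \alpha and \beta:

J(\pi, \pi_\Omega) = \mathbb{E}\Big[ \textstyle\sum_t \gamma^t \big( r(s_t, a_t) + \alpha\, \mathcal{H}\big(\pi_{o_t}(\cdot \mid s_t)\big) + \beta\, \mathcal{H}\big(\pi_\Omega(\cdot \mid s_t)\big) \big) \Big]

The constrained maximization mentioned in the abstract plausibly mirrors soft actor-critic's automatic temperature adjustment, in which \alpha and \beta would be tuned by dual gradient descent to keep each policy's entropy near a target value rather than being fixed by hand.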
