Improving Exploration in Soft-Actor-Critic with Normalizing Flows Policies

06/06/2019
by Patrick Nadeem Ward, et al.

Deep Reinforcement Learning (DRL) algorithms for continuous action spaces are known to be brittle with respect to hyperparameters as well as sample inefficient. Soft Actor-Critic (SAC) proposes an off-policy deep actor-critic algorithm within the maximum entropy RL framework which offers greater stability and empirical gains. The choice of policy distribution, a factored Gaussian, is motivated by its easy re-parametrization rather than its modeling power. We introduce Normalizing Flow policies within the SAC framework that learn more expressive classes of policies than simple factored Gaussians. We also present a series of stabilization tricks that enable effective training of these policies in the RL setting. We show empirically on continuous grid world tasks that our approach increases stability and is better suited to difficult exploration in sparse reward settings.
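The core idea, a normalizing flow pushing SAC's reparametrized Gaussian sample through an invertible map so the action density stays tractable, can be sketched as follows. This is a hypothetical minimal illustration with a single planar flow (not the authors' architecture or code); the function names and parameters `u`, `w`, `b` are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def planar_flow(z, u, w, b):
    """Apply f(z) = z + u * tanh(w.z + b); return f(z) and log|det df/dz|.

    Invertibility of a planar flow requires u.w >= -1; we assume the
    chosen parameters satisfy this.
    """
    h = np.tanh(w @ z + b)
    action = z + u * h
    # By the matrix determinant lemma:
    # det(I + u (1 - h^2) w^T) = 1 + (1 - h^2) * (u.w)
    psi = (1.0 - h ** 2) * w
    log_det = np.log(np.abs(1.0 + u @ psi))
    return action, log_det

def sample_action(mu, log_std, u, w, b):
    """Reparametrized sample from the flow policy, with its log-probability.

    The base sample z ~ N(mu, diag(sigma^2)) comes from the usual factored
    Gaussian; the change-of-variables formula corrects its log-density:
    log p(a) = log N(z; mu, sigma) - log|det df/dz|.
    """
    std = np.exp(log_std)
    eps = rng.standard_normal(mu.shape)
    z = mu + std * eps  # reparametrization trick, as in standard SAC
    base_logp = -0.5 * np.sum(eps ** 2 + 2.0 * log_std + np.log(2.0 * np.pi))
    action, log_det = planar_flow(z, u, w, b)
    return action, base_logp - log_det

# Example: a 2-D action sampled through one flow layer.
mu, log_std = np.zeros(2), np.zeros(2)
u, w, b = np.array([0.5, -0.3]), np.array([1.0, 0.2]), 0.1
action, logp = sample_action(mu, log_std, u, w, b)
```

In practice `u`, `w`, `b` would be outputs of the policy network (possibly state-conditioned) and several flow layers would be stacked; the tractable `logp` is what lets the maximum entropy objective be optimized exactly.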
