How do Offline Measures for Exploration in Reinforcement Learning behave?

10/29/2020
by Jakob J. Hollenstein, et al.

Sufficient exploration is paramount for the success of a reinforcement learning agent. Yet, exploration is rarely assessed in an algorithm-independent way. We compare the behavior of three data-based, offline exploration metrics described in the literature on simple, intuitive distributions and highlight problems to be aware of when using them. We propose a fourth metric, uniform relative entropy, and implement it using either a k-nearest-neighbor or a nearest-neighbor-ratio estimator, highlighting that implementation choices have a profound impact on these measures.
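The abstract names a k-nearest-neighbor estimator as one implementation of the uniform relative entropy metric but does not spell it out. The sketch below is one plausible reading, not the authors' reference code: for a uniform reference distribution over a known box, D_KL(p || uniform) = log(volume) - H(p), so a Kozachenko-Leonenko kNN estimate of H(p) suffices. The box bounds `lo`/`hi`, the function names, and the choice k=3 are illustrative assumptions.

```python
# Sketch of a kNN-based "uniform relative entropy" estimate for a batch of
# visited states (assumptions: uniform reference over an axis-aligned box,
# Kozachenko-Leonenko entropy estimator; not the paper's exact code).
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def knn_entropy(x, k=3):
    """Kozachenko-Leonenko differential-entropy estimate (in nats)."""
    n, d = x.shape
    tree = cKDTree(x)
    # Distance to the k-th nearest neighbor; index 0 is the query point itself.
    eps = tree.query(x, k=k + 1)[0][:, -1]
    log_c_d = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)  # log unit-ball volume
    return digamma(n) - digamma(k) + log_c_d + d * np.mean(np.log(eps))

def uniform_relative_entropy(states, lo, hi, k=3):
    """D_KL(p || uniform over box [lo, hi]); 0 means perfectly uniform coverage."""
    log_volume = np.sum(np.log(np.asarray(hi, float) - np.asarray(lo, float)))
    return log_volume - knn_entropy(states, k=k)

# States spread over the whole box score near 0; clumped states score much higher.
rng = np.random.default_rng(0)
spread = rng.uniform(0.0, 1.0, size=(5000, 2))
clumped = rng.uniform(0.0, 0.2, size=(5000, 2))
print(uniform_relative_entropy(spread, [0, 0], [1, 1]))   # approx. 0 nats
print(uniform_relative_entropy(clumped, [0, 0], [1, 1]))  # approx. log(1/0.04) = 3.2 nats
```

One sensitivity the abstract alludes to is visible even here: the estimate depends on k and on the assumed support volume, so two implementations of the same metric can rank the same data differently.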
