Learned Belief Search: Efficiently Improving Policies in Partially Observable Settings

06/16/2021
by Hengyuan Hu, et al.

Search is an important tool for computing effective policies in single- and multi-agent environments, and has been crucial for achieving superhuman performance in several benchmark fully and partially observable games. However, one major limitation of prior search approaches for partially observable environments is that the computational cost scales poorly with the amount of hidden information. In this paper we present Learned Belief Search (LBS), a computationally efficient search procedure for partially observable environments. Rather than maintaining an exact belief distribution, LBS uses an approximate auto-regressive counterfactual belief that is learned as a supervised task. In multi-agent settings, LBS uses a novel public-private model architecture for underlying policies in order to efficiently evaluate these policies during rollouts. In the benchmark domain of Hanabi, LBS can obtain 55%–91% of the benefit of exact search while reducing compute requirements by 35.8×–4.6×, allowing it to scale to larger settings that were inaccessible to previous search methods.
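To make the abstract's central idea concrete, below is a minimal sketch of what "an approximate auto-regressive counterfactual belief learned as a supervised task" can look like in PyTorch. This is an illustrative reconstruction, not the paper's implementation: the class name AutoRegressiveBelief, the LSTM decoder, and the sizes (NUM_TOKENS, HAND_SIZE, obs_dim) are all assumptions. The sketch factorizes the belief over hidden slots (e.g. a player's own cards in Hanabi) as a product of conditionals, trains it with teacher-forced cross-entropy on hidden states recorded from play, and samples it slot by slot to seed search rollouts.

```python
import torch
import torch.nn as nn

NUM_TOKENS = 25  # hypothetical size of the hidden-feature vocabulary (e.g. card types)
HAND_SIZE = 5    # hypothetical number of hidden slots to predict (e.g. cards in a hand)

class AutoRegressiveBelief(nn.Module):
    """Predicts the hidden state one slot at a time, each slot conditioned
    on an encoding of the observed history and the slots fixed so far."""

    def __init__(self, obs_dim: int, hid: int = 128):
        super().__init__()
        self.obs_enc = nn.Linear(obs_dim, hid)             # encodes the observation
        self.tok_emb = nn.Embedding(NUM_TOKENS + 1, hid)   # +1 for a BOS token
        self.rnn = nn.LSTM(hid, hid, batch_first=True)
        self.head = nn.Linear(hid, NUM_TOKENS)

    def next_logits(self, obs, prev):
        # obs: [B, obs_dim]; prev: [B, t] tokens already fixed.
        bos = torch.full((obs.size(0), 1), NUM_TOKENS,
                         dtype=torch.long, device=obs.device)
        inp = self.tok_emb(torch.cat([bos, prev], dim=1))
        h0 = self.obs_enc(obs).unsqueeze(0)                # seed the LSTM state
        out, _ = self.rnn(inp, (h0, torch.zeros_like(h0)))
        return self.head(out[:, -1])                       # logits for slot t

    def loss(self, obs, target):
        # Supervised training: teacher-forced cross-entropy against the
        # true hidden state recorded during play.
        total = 0.0
        for t in range(HAND_SIZE):
            total = total + nn.functional.cross_entropy(
                self.next_logits(obs, target[:, :t]), target[:, t])
        return total / HAND_SIZE

    @torch.no_grad()
    def sample(self, obs):
        # Draw one plausible hidden state; search calls this per rollout
        # instead of enumerating the exact belief distribution.
        tokens = torch.empty(obs.size(0), 0, dtype=torch.long, device=obs.device)
        for _ in range(HAND_SIZE):
            probs = self.next_logits(obs, tokens).softmax(dim=-1)
            tokens = torch.cat([tokens, torch.multinomial(probs, 1)], dim=1)
        return tokens
```

Under these assumptions, drawing one hidden state costs HAND_SIZE forward passes regardless of how large the exact belief's support is, which is where the compute savings the abstract reports would come from: exact belief tracking scales with the amount of hidden information, while sampling from a learned model does not.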
