NNCFR: Minimize Counterfactual Regret with Neural Networks

05/26/2021
by   Huale Li, et al.

Counterfactual Regret Minimization (CFR) is a popular method for finding approximate Nash equilibria in two-player zero-sum games with imperfect information. CFR solves games by traversing the full game tree iteratively, which limits its scalability to larger games. Previously, applying CFR to large-scale games required three steps: first, the large-scale game is abstracted into a small-scale game; second, CFR is used to solve the abstract game; finally, the solution strategy is mapped back to the original large-scale game. However, this process requires considerable expert knowledge, and the accuracy of the abstraction depends closely on that knowledge. In addition, abstraction discards information, which ultimately affects the accuracy of the solution strategy. To address this problem, a recent method, Deep CFR, alleviates the need for abstraction and expert knowledge by applying deep neural networks directly to CFR in the full game. In this paper, we introduce Neural Network Counterfactual Regret Minimization (NNCFR), an improved variant of Deep CFR that converges faster by constructing a dueling network as the value network. Moreover, an evaluation module is designed by combining the value network with Monte Carlo sampling, which reduces the approximation error of the value network. In addition, a new loss function is designed for training the policy network in the proposed NNCFR, which makes the policy network more stable. Extensive experiments show that NNCFR converges faster and performs more stably than Deep CFR, and outperforms Deep CFR in terms of exploitability and head-to-head performance on test games.
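To make the dueling-network idea concrete, the sketch below shows a minimal dueling-style value network in PyTorch. It is not the paper's exact architecture; the layer sizes, hidden dimension, and class/parameter names (DuelingValueNet, input_dim, num_actions, hidden_dim) are illustrative assumptions. The key point is the split into a scalar state-value stream and a per-action advantage stream, recombined with a mean-subtracted advantage as in the standard dueling formulation.

```python
import torch
import torch.nn as nn

class DuelingValueNet(nn.Module):
    """Illustrative dueling-style value network (hypothetical sketch, not the
    authors' exact model): a shared trunk feeds separate state-value and
    advantage streams, which are recombined into per-action value estimates."""

    def __init__(self, input_dim: int, num_actions: int, hidden_dim: int = 128):
        super().__init__()
        # Shared feature trunk over the encoded information state.
        self.trunk = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.value_stream = nn.Linear(hidden_dim, 1)             # scalar state value
        self.advantage_stream = nn.Linear(hidden_dim, num_actions)  # per-action advantages

    def forward(self, infostate: torch.Tensor) -> torch.Tensor:
        h = self.trunk(infostate)
        v = self.value_stream(h)                   # shape: (batch, 1)
        a = self.advantage_stream(h)               # shape: (batch, num_actions)
        # Subtracting the mean advantage keeps the value/advantage
        # decomposition identifiable, as in dueling DQN.
        return v + a - a.mean(dim=1, keepdim=True)

# Example usage with assumed dimensions:
# net = DuelingValueNet(input_dim=64, num_actions=4)
# estimates = net(torch.randn(32, 64))   # (32, 4) per-action estimates
```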
