Using Graph Convolutional Networks and TD(λ) to play the game of Risk
Risk is a 6-player game with significant randomness and large game-tree complexity, which makes creating an effective agent challenging. Previous AIs rely on high-level handcrafted features to determine agent decision-making. In this project, I create D.A.D, a Risk agent that uses temporal-difference reinforcement learning to train a Deep Neural Network, including a Graph Convolutional Network, to evaluate player positions; this evaluation is used in a game-tree search to select moves. The approach requires minimal handcrafting of knowledge into the AI: input features are kept as low-level as possible so that the network can extract useful, sophisticated features itself, even when starting from a random initialisation. I also tackle the non-determinism of Risk by introducing a new method of interpreting attack moves, which is necessary for the search. The result is an AI which wins 35 games to 5 against the best inbuilt AIs in Lux Delux, a Risk variant.
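The abstract does not give implementation details, but the core idea of combining a graph convolution over the territory graph with a TD(λ) value update can be sketched as follows. This is a minimal illustration under assumptions of my own: a toy 4-territory adjacency matrix, a single fixed-weight GCN layer with mean pooling, and a linear value head whose weights are the only parameters trained by TD(λ). All names (`gcn_features`, `td_lambda_step`) and shapes are hypothetical, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy territory graph (hypothetical): adjacency with self-loops,
# symmetrically normalised as in a standard GCN layer.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)
d = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(np.outer(d, d))      # D^{-1/2} (A + I) D^{-1/2}

W = rng.normal(scale=0.1, size=(3, 8))        # GCN weights (fixed in this sketch)

def gcn_features(X):
    """One graph-convolution layer, ReLU, then mean pooling to a graph vector."""
    H = np.maximum(A_norm @ X @ W, 0.0)
    return H.mean(axis=0)

# Linear value head trained with TD(lambda) and accumulating eligibility traces.
w = np.zeros(8)       # value-head weights
e = np.zeros(8)       # eligibility trace
alpha, gamma, lam = 0.1, 0.95, 0.8

def td_lambda_step(X_t, X_t1, reward):
    """One TD(lambda) update from state features X_t to successor X_t1."""
    global w, e
    phi_t = gcn_features(X_t)
    phi_t1 = gcn_features(X_t1)
    delta = reward + gamma * (w @ phi_t1) - (w @ phi_t)   # TD error
    e = gamma * lam * e + phi_t                           # decay and accumulate trace
    w = w + alpha * delta * e                             # weight update

# Example transition: random node features stand in for per-territory
# inputs such as troop counts and ownership (hypothetical encoding).
X_t = rng.normal(size=(4, 3))
X_t1 = rng.normal(size=(4, 3))
td_lambda_step(X_t, X_t1, reward=1.0)
print(w.shape)  # (8,)
```

In the full system the value network would be differentiated end-to-end and queried inside the game-tree search; here the fixed GCN and linear head only illustrate how the graph structure and the TD(λ) trace interact.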