
Polygames: Improved Zero Learning

01/27/2020
by Tristan Cazenave, et al.

Since DeepMind's AlphaZero, Zero learning has quickly become the state-of-the-art method for many board games. It can be improved using a fully convolutional structure (no fully connected layers). Using such an architecture plus global pooling, we can create bots that are independent of the board size. Training can be made more robust by keeping track of the best checkpoints seen so far and training against them. Using these features, we release Polygames, our framework for Zero learning, together with its library of games and checkpoints. We won against strong human players at the game of Hex on a 19x19 board, which was often said to be intractable for Zero learning, and at Havannah. We also took several first places at the TAAI competitions.
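The board-size independence follows from keeping every parameterized layer spatially agnostic. Below is a minimal PyTorch-style sketch of the idea for an AlphaZero-like policy/value network; the class name, layer sizes, and channel counts are our own illustration, not Polygames' actual architecture:

```python
import torch
import torch.nn as nn

class PolicyValueNet(nn.Module):
    """Sketch of a board-size-independent policy/value network.

    The trunk and policy head are fully convolutional, so the same
    weights apply to any HxW board; the value head collapses the
    spatial dimensions with global average pooling, so its parameter
    count is also independent of the board size.
    """

    def __init__(self, in_channels: int, hidden: int = 64) -> None:
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.ReLU(),
        )
        # Policy head: a 1x1 convolution yields one logit per board cell.
        self.policy_head = nn.Conv2d(hidden, 1, 1)
        # Value head: global pooling removes the HxW dependence, then a
        # 1x1 convolution acts as a per-channel linear map to a scalar.
        self.value_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(hidden, 1, 1),
            nn.Tanh(),
        )

    def forward(self, x: torch.Tensor):
        h = self.trunk(x)
        policy_logits = self.policy_head(h).flatten(1)  # shape (B, H*W)
        value = self.value_head(h).flatten(1)           # shape (B, 1)
        return policy_logits, value

# The same weights work on 9x9 and 19x19 inputs alike:
net = PolicyValueNet(in_channels=3)
for size in (9, 19):
    policy, value = net(torch.randn(1, 3, size, size))
```

Because no layer's weights depend on the spatial extent of the input, the same network can be trained on small boards and evaluated on larger ones at play time.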
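The checkpoint-based robustness idea amounts to maintaining a small pool of the strongest checkpoints found so far and sampling self-play opponents from it. A hedged sketch follows; CheckpointPool and its scoring interface are hypothetical names of ours, not Polygames' API:

```python
import random

class CheckpointPool:
    """Keep the best checkpoints seen so far and sample past opponents
    from them, so the learner keeps training against strong earlier
    versions instead of only its latest self."""

    def __init__(self, capacity: int = 8) -> None:
        self.capacity = capacity
        self.pool: list[tuple[float, str]] = []  # (eval score, checkpoint path)

    def maybe_add(self, score: float, path: str) -> None:
        # Insert, then keep only the top-`capacity` checkpoints by score.
        self.pool.append((score, path))
        self.pool.sort(key=lambda item: item[0], reverse=True)
        del self.pool[self.capacity:]

    def sample_opponent(self) -> str:
        return random.choice(self.pool)[1]
```

Training against the pool rather than only the latest network helps guard against cyclic regressions, where a new checkpoint beats its immediate predecessor but loses to an older one.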
