Application of Self-Play Reinforcement Learning to a Four-Player Game of Imperfect Information

by Henry Charlesworth et al.

We introduce a new virtual environment for simulating the card game "Big 2". This is a four-player game of imperfect information with a relatively complicated action space: players may play combinations of 1, 2, 3, 4, or 5 cards from an initial starting hand of 13 cards. As such, it poses a challenge for many current reinforcement learning methods. We then use the recently proposed "Proximal Policy Optimization" (PPO) algorithm to train a deep neural network to play the game, learning purely via self-play, and find that it reaches a level that outperforms amateur human players after only a relatively short amount of training time and without needing to search a tree of future game states.
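The abstract's core training method is PPO's clipped surrogate objective. As a hedged illustration (not the paper's actual implementation, whose network architecture and hyperparameters are not given here), the clipping mechanism can be sketched in a few lines of NumPy; the `eps=0.2` default is an assumption taken from common PPO practice:

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate objective from PPO (Schulman et al., 2017).

    ratio:     pi_new(a|s) / pi_old(a|s) for each sampled action
    advantage: estimated advantage for each sampled action
    eps:       clipping range; 0.2 is a commonly used default
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # PPO maximizes the minimum of the two terms, which removes the
    # incentive to push the probability ratio outside [1-eps, 1+eps]
    return np.minimum(unclipped, clipped).mean()

# A ratio of 1.5 with positive advantage is clipped back to 1.2
print(ppo_clip_objective(np.array([1.5]), np.array([1.0])))  # 1.2
```

In a self-play setting like the one described, the same network would generate trajectories for all four players, and this objective would be applied to the collected (ratio, advantage) pairs at each update.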



