Deep Pepper: Expert Iteration based Chess agent in the Reinforcement Learning Setting

06/02/2018
by Sai Krishna G. V., et al.

Building an almost-perfect chess-playing agent has been a long-standing challenge in the field of Artificial Intelligence, and recent advances suggest that we are approaching that goal. In this project, we present methods for faster training of self-play style algorithms, give the mathematical details of the algorithm used, outline several potential future directions, and survey the relevant prior work in computer chess. Deep Pepper uses embedded chess knowledge to accelerate training of the engine relative to a "tabula rasa" system such as Alpha Zero. We also release our code to promote further research.
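To make the expert-iteration / self-play idea behind the abstract concrete, the Python sketch below shows a minimal version of the loop: a search-based "expert" produces improved move distributions, and an "apprentice" policy is trained to imitate them, with the improved policy then guiding the next round of self-play. Everything here is illustrative and not from the paper: the toy single-heap Nim game stands in for chess, a tabular softmax policy stands in for the neural network, and plain policy rollouts stand in for MCTS.

import math
import random
from collections import defaultdict

# Toy game standing in for chess: single-heap Nim. Players alternately
# remove 1-3 stones; whoever takes the last stone wins.
HEAP_SIZE = 10
ACTIONS = (1, 2, 3)

def legal_actions(heap):
    return [a for a in ACTIONS if a <= heap]

def step(heap, action):
    """Apply a move; return (next_heap, game_over)."""
    nxt = heap - action
    return nxt, nxt == 0

# Apprentice: tabular softmax policy over actions, one logit row per state.
logits = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

def policy(heap):
    acts = legal_actions(heap)
    exps = {a: math.exp(logits[heap][a]) for a in acts}
    z = sum(exps.values())
    return {a: exps[a] / z for a in acts}

# Expert: estimate each root action's win rate with rollouts that follow the
# current apprentice policy (a simple stand-in for MCTS search).
def expert_policy(heap, n_rollouts=64):
    wins = {a: 1e-6 for a in legal_actions(heap)}
    for a in legal_actions(heap):
        for _ in range(n_rollouts):
            h, done = step(heap, a)
            root_won = done
            turn = 0  # 0: opponent to move next, 1: root player to move next
            while not done:
                probs = policy(h)
                act = random.choices(list(probs), weights=list(probs.values()))[0]
                h, done = step(h, act)
                if done:
                    root_won = (turn == 1)  # current mover took the last stone
                turn ^= 1
            wins[a] += root_won
    z = sum(wins.values())
    return {a: w / z for a, w in wins.items()}  # normalized search policy

# Expert iteration: self-play games, nudging the apprentice toward the
# expert's (search-improved) move distribution at every visited state.
LEARNING_RATE = 0.5
for iteration in range(30):
    heap, done = HEAP_SIZE, False
    while not done:
        target = expert_policy(heap)
        cur = policy(heap)
        for a, p in target.items():
            # Gradient step on cross-entropy between target and softmax policy.
            logits[heap][a] += LEARNING_RATE * (p - cur[a])
        act = random.choices(list(target), weights=list(target.values()))[0]
        heap, done = step(heap, act)

# Greedy apprentice move for each heap size after training.
print({h: max(policy(h), key=policy(h).get) for h in range(1, HEAP_SIZE + 1)})

Deep Pepper's departure from a pure tabula rasa setup would correspond, in this sketch, to initializing the apprentice or the expert's evaluations with prior (embedded) domain knowledge rather than starting from uniform logits.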


