MazeBase: A Sandbox for Learning from Games

11/23/2015
by Sainbayar Sukhbaatar, et al.

This paper introduces MazeBase: an environment for simple 2D games, designed as a sandbox for machine learning approaches to reasoning and planning. Within it, we create 10 simple games embodying a range of algorithmic tasks (e.g. if-then statements or set negation). A variety of neural models (fully connected, convolutional network, memory network) are deployed via reinforcement learning on these games, with and without a procedurally generated curriculum. Despite the tasks' simplicity, the performance of the models is far from optimal, suggesting directions for future development. We also demonstrate the versatility of MazeBase by using it to emulate small combat scenarios from StarCraft. Models trained on the MazeBase version can be directly applied to StarCraft, where they consistently beat the in-game AI.
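To make the setup concrete, below is a minimal sketch of the kind of 2D grid game and agent-environment loop the abstract describes: a single "go to goal" task with a Gym-style reset/step interface and a random-policy rollout as a baseline. All class, method, and parameter names here are illustrative assumptions for exposition; they are not MazeBase's actual API (the original release was Lua/Torch), and the reward shaping is a guess, not the paper's.

```python
# Illustrative sketch only: a minimal 2D grid game with a reset/step
# interface. Names and reward values are assumptions, not MazeBase's API.
import random

class GotoGame:
    """Agent must reach a randomly placed goal on an N x N grid."""
    def __init__(self, size=5, max_steps=20):
        self.size = size
        self.max_steps = max_steps

    def reset(self):
        self.agent = (random.randrange(self.size), random.randrange(self.size))
        self.goal = (random.randrange(self.size), random.randrange(self.size))
        self.t = 0
        return self._obs()

    def _obs(self):
        # Observation: agent and goal coordinates, normalized to [0, 1].
        ax, ay = self.agent
        gx, gy = self.goal
        return [ax / self.size, ay / self.size, gx / self.size, gy / self.size]

    def step(self, action):
        # Actions: 0=up, 1=down, 2=left, 3=right.
        dx, dy = [(0, -1), (0, 1), (-1, 0), (1, 0)][action]
        x = min(max(self.agent[0] + dx, 0), self.size - 1)
        y = min(max(self.agent[1] + dy, 0), self.size - 1)
        self.agent = (x, y)
        self.t += 1
        done = self.agent == self.goal or self.t >= self.max_steps
        reward = 1.0 if self.agent == self.goal else -0.05  # small step penalty
        return self._obs(), reward, done

# Random-policy rollout: the baseline any learned model should beat.
env = GotoGame()
obs, done, total = env.reset(), False, 0.0
while not done:
    obs, reward, done = env.step(random.randrange(4))
    total += reward
print("episode return:", total)
```

In this framing, the paper's fully connected, convolutional, and memory-network models would replace the random action choice with a learned policy trained by reinforcement learning, and a procedurally generated curriculum would correspond to gradually increasing parameters such as the grid size or episode length.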


