
Learning Chess Blindfolded: Evaluating Language Models on State Tracking

02/26/2021
by Shubham Toshniwal, et al.

Transformer language models have made tremendous strides in natural language understanding tasks. However, the complexity of natural language makes it challenging to ascertain how accurately these models are tracking the world state underlying the text. Motivated by this issue, we consider the task of language modeling for the game of chess. Unlike natural language, chess notations describe a simple, constrained, and deterministic domain. Moreover, we observe that an appropriate choice of chess notation allows the world state to be probed directly, without requiring any additional probing-related machinery. We find that: (a) with enough training data, transformer language models can learn to track pieces and predict legal moves with high accuracy when trained solely on move sequences; (b) for small training sets, providing access to board state information during training can yield significant improvements; and (c) the success of transformer language models depends on access to the entire game history, i.e., "full attention" — approximating this full attention results in a significant performance drop. We propose this testbed as a benchmark for future work on the development and analysis of transformer language models.
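As a rough illustration of why move-level chess notation exposes the world state directly, the sketch below replays a sequence of UCI-style moves (each token names its start and end squares) with a plain square-to-piece dictionary. This is not the paper's method, just a minimal, simplified reconstruction: function names are illustrative, and castling, en passant, and promotion are omitted. The point is that ground truth for piece-tracking and legal-move probes falls out of the notation itself.

```python
# Minimal sketch (no chess library; simplified rules) showing that
# UCI-style move tokens let us recover the exact board state an LM
# trained on move sequences would have to track implicitly.

def start_position():
    """Square -> piece map for the initial position (uppercase = White)."""
    board = {}
    back = "RNBQKBNR"
    for i, file in enumerate("abcdefgh"):
        board[file + "1"] = back[i]          # White back rank
        board[file + "2"] = "P"              # White pawns
        board[file + "7"] = "p"              # Black pawns
        board[file + "8"] = back[i].lower()  # Black back rank
    return board

def replay(moves):
    """Apply UCI moves; each token's first two chars are the source
    square and the next two the destination square."""
    board = start_position()
    for uci in moves:
        src, dst = uci[:2], uci[2:4]
        board[dst] = board.pop(src)  # move piece; a capture overwrites dst
    return board

# Piece-tracking probe: after e4 e5 Nf3 Nc6, which piece sits on f3?
state = replay(["e2e4", "e7e5", "g1f3", "b8c6"])
print(state["f3"])                    # "N" -- the knight moved by g1f3
print("e4" in state, "e2" in state)   # True False -- the pawn advanced
```

Because a probe like "which piece is on f3?" is answered by the notation's own bookkeeping, evaluating a model's next-token predictions against this reconstructed state requires no trained probing classifier.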

