
An Overview of Natural Language State Representation for Reinforcement Learning

by Brielen Madureira et al.

A suitable state representation is a fundamental part of the learning process in Reinforcement Learning. In various tasks, the state can either be described by natural language or be natural language itself. This survey outlines the strategies used in the literature to build natural language state representations. We appeal for more linguistically interpretable and grounded representations, careful justification of design decisions and evaluation of the effectiveness of different approaches.
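As a concrete illustration of the idea, one common baseline for building a natural language state representation is a bag-of-words encoding, which maps a text observation to a fixed-size count vector over a vocabulary. The sketch below is not from the survey; the corpus, function names, and the text-game observations are hypothetical, chosen only to show the encoding step an RL agent could consume.

```python
# Minimal sketch (assumptions: a text-based environment whose observations
# are short English strings; a bag-of-words state representation).
from collections import Counter


def build_vocab(observations):
    """Assign each distinct word in the corpus a stable index."""
    vocab = {}
    for obs in observations:
        for word in obs.lower().split():
            vocab.setdefault(word, len(vocab))
    return vocab


def encode_state(observation, vocab):
    """Encode one text observation as a word-count vector over the vocab.

    Out-of-vocabulary words are silently dropped, a known weakness of
    this representation that motivates learned embeddings instead.
    """
    counts = Counter(observation.lower().split())
    return [counts.get(word, 0) for word in vocab]


# Hypothetical observations from a text-adventure environment
corpus = ["you are in a dark room", "a door leads north"]
vocab = build_vocab(corpus)
state = encode_state("you open the door", vocab)
```

Representations like this are cheap and interpretable (each dimension is a word) but lose word order and generalize poorly, which is part of why the survey contrasts them with grounded, learned alternatives.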




Natural Language State Representation for Reinforcement Learning

Recent advances in Reinforcement Learning have highlighted the difficult...

Reinforcement Learning and Bandits for Speech and Language Processing: Tutorial, Review and Outlook

In recent years, reinforcement learning and bandits have transformed a w...

lilGym: Natural Language Visual Reasoning with Reinforcement Learning

We present lilGym, a new benchmark for language-conditioned reinforcemen...

The Thing That We Tried Didn't Work Very Well: Deictic Representation in Reinforcement Learning

Most reinforcement learning methods operate on propositional representat...

Visually grounded models of spoken language: A survey of datasets, architectures and evaluation techniques

This survey provides an overview of the evolution of visually grounded m...

Towards Pragmatic Production Strategies for Natural Language Generation Tasks

This position paper proposes a conceptual framework for the design of Na...

A Compare-Propagate Architecture with Alignment Factorization for Natural Language Inference

This paper presents a new deep learning architecture for Natural Languag...