
An Overview of Natural Language State Representation for Reinforcement Learning

07/19/2020
by Brielen Madureira, et al.

A suitable state representation is a fundamental part of the learning process in Reinforcement Learning. In various tasks, the state can either be described by natural language or be natural language itself. This survey outlines the strategies used in the literature to build natural language state representations. We appeal for more linguistically interpretable and grounded representations, careful justification of design decisions and evaluation of the effectiveness of different approaches.


Related research:

10/02/2019: Natural Language State Representation for Reinforcement Learning
Recent advances in Reinforcement Learning have highlighted the difficult...

10/24/2022: Reinforcement Learning and Bandits for Speech and Language Processing: Tutorial, Review and Outlook
In recent years, reinforcement learning and bandits have transformed a w...

11/03/2022: lilGym: Natural Language Visual Reasoning with Reinforcement Learning
We present lilGym, a new benchmark for language-conditioned reinforcemen...

12/12/2012: The Thing That We Tried Didn't Work Very Well: Deictic Representation in Reinforcement Learning
Most reinforcement learning methods operate on propositional representat...

04/27/2021: Visually grounded models of spoken language: A survey of datasets, architectures and evaluation techniques
This survey provides an overview of the evolution of visually grounded m...

10/23/2022: Towards Pragmatic Production Strategies for Natural Language Generation Tasks
This position paper proposes a conceptual framework for the design of Na...

12/30/2017: A Compare-Propagate Architecture with Alignment Factorization for Natural Language Inference
This paper presents a new deep learning architecture for Natural Languag...