An Overview of Natural Language State Representation for Reinforcement Learning

07/19/2020
by Brielen Madureira, et al.

A suitable state representation is a fundamental part of the learning process in Reinforcement Learning. In various tasks, the state can either be described by natural language or be natural language itself. This survey outlines the strategies used in the literature to build natural language state representations. We appeal for more linguistically interpretable and grounded representations, careful justification of design decisions and evaluation of the effectiveness of different approaches.
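One family of strategies the survey covers is mapping a textual observation to a fixed-size vector that an RL agent can use as its state. As a minimal, illustrative sketch (the vocabulary, corpus, and function names below are invented for this example and are not taken from the paper), a bag-of-words encoding looks like this:

```python
import numpy as np

def build_vocab(corpus):
    """Map each distinct lowercase token in the corpus to an index."""
    tokens = sorted({tok for text in corpus for tok in text.lower().split()})
    return {tok: i for i, tok in enumerate(tokens)}

def bow_state(observation, vocab):
    """Encode a text observation as a bag-of-words count vector.

    Out-of-vocabulary tokens are silently dropped; the result has a
    fixed length equal to the vocabulary size, as an RL state requires.
    """
    state = np.zeros(len(vocab), dtype=np.float32)
    for tok in observation.lower().split():
        if tok in vocab:
            state[vocab[tok]] += 1.0
    return state

# Toy text-game observations (illustrative only).
corpus = ["you are in a dark room", "a troll blocks the door"]
vocab = build_vocab(corpus)
s = bow_state("the troll is in the room", vocab)
# counts: "the" x2, plus "troll", "in", "room"; "is" is out-of-vocabulary
```

Such count vectors are linguistically shallow, which is exactly the kind of design decision the survey argues should be justified and evaluated against more interpretable, grounded alternatives.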


Related research

- Natural Language State Representation for Reinforcement Learning (10/02/2019): Recent advances in Reinforcement Learning have highlighted the difficult...
- Reinforcement Learning and Bandits for Speech and Language Processing: Tutorial, Review and Outlook (10/24/2022): In recent years, reinforcement learning and bandits have transformed a w...
- lilGym: Natural Language Visual Reasoning with Reinforcement Learning (11/03/2022): We present lilGym, a new benchmark for language-conditioned reinforcemen...
- The Thing That We Tried Didn't Work Very Well: Deictic Representation in Reinforcement Learning (12/12/2012): Most reinforcement learning methods operate on propositional representat...
- A Compare-Propagate Architecture with Alignment Factorization for Natural Language Inference (12/30/2017): This paper presents a new deep learning architecture for Natural Languag...
- Can we trust the evaluation on ChatGPT? (03/22/2023): ChatGPT, the first large language model (LLM) with mass adoption, has de...
- Visually grounded models of spoken language: A survey of datasets, architectures and evaluation techniques (04/27/2021): This survey provides an overview of the evolution of visually grounded m...
