- Playing Text-Adventure Games with Graph-Based Deep Reinforcement Learning
  Text-based adventure games provide a platform on which to explore reinfo...
  12/04/2018 ∙ by Prithviraj Ammanabrolu, et al.
- Extracting Action Sequences from Texts Based on Deep Reinforcement Learning
  Extracting action sequences from texts in natural language is challengin...
  03/07/2018 ∙ by Wenfeng Feng, et al.
- Deep Reinforcement Learning with a Combinatorial Action Space for Predicting Popular Reddit Threads
  We introduce an online popularity prediction and tracking task as a benc...
  06/12/2016 ∙ by Ji He, et al.
- A Deep Reinforcement Learning Chatbot
  We present MILABOT: a deep reinforcement learning chatbot developed by t...
  09/07/2017 ∙ by Iulian V. Serban, et al.
- Reinforcement Learning with External Knowledge and Two-Stage Q-functions for Predicting Popular Reddit Threads
  This paper addresses the problem of predicting popularity of comments in...
  04/20/2017 ∙ by Ji He, et al.
- Towards Solving Text-based Games by Producing Adaptive Action Spaces
  To solve a text-based game, an agent needs to formulate valid text comma...
  12/03/2018 ∙ by Ruo Yu Tao, et al.
- Language Understanding for Text-based Games Using Deep Reinforcement Learning
  In this paper, we consider the task of learning control policies for tex...
  06/30/2015 ∙ by Karthik Narasimhan, et al.
Deep Reinforcement Learning with a Natural Language Action Space
This paper introduces a novel architecture for reinforcement learning with deep neural networks designed to handle state and action spaces characterized by natural language, as found in text-based games. Termed a deep reinforcement relevance network (DRRN), the architecture represents action and state spaces with separate embedding vectors, which are combined with an interaction function to approximate the Q-function in reinforcement learning. We evaluate the DRRN on two popular text games, showing superior performance over other deep Q-learning architectures. Experiments with paraphrased action descriptions show that the model is extracting meaning rather than simply memorizing strings of text.
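To make the abstract concrete: the DRRN embeds the state text and each candidate action text with separate networks and scores every state-action pair with an interaction function between the two embeddings. The sketch below is a minimal illustration of that idea in PyTorch, not the paper's implementation; the bag-of-words inputs, layer sizes, and network depth are illustrative assumptions, and the inner product is used as the interaction function.

```python
# Minimal DRRN-style Q-function sketch (assumed details: bag-of-words text
# features, two-layer embedding networks, inner-product interaction).
import torch
import torch.nn as nn


class DRRN(nn.Module):
    def __init__(self, vocab_size: int, hidden_dim: int = 100):
        super().__init__()
        # Separate embedding networks for the state text and the action text.
        self.state_net = nn.Sequential(
            nn.Linear(vocab_size, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        self.action_net = nn.Sequential(
            nn.Linear(vocab_size, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, state_bow: torch.Tensor, action_bows: torch.Tensor) -> torch.Tensor:
        """Return Q(s, a) for one state and a variable-size set of actions.

        state_bow:   (vocab_size,)             bag-of-words vector of the state text
        action_bows: (num_actions, vocab_size) bag-of-words vectors of the action texts
        """
        s = self.state_net(state_bow)      # (hidden_dim,)
        a = self.action_net(action_bows)   # (num_actions, hidden_dim)
        # Interaction function: inner product of state and action embeddings.
        return a @ s                       # (num_actions,) Q-values


if __name__ == "__main__":
    # Toy usage: score four candidate actions and act greedily.
    vocab_size, num_actions = 1000, 4
    drrn = DRRN(vocab_size)
    state = torch.rand(vocab_size)
    actions = torch.rand(num_actions, vocab_size)
    q_values = drrn(state, actions)
    best_action = int(q_values.argmax())
```

Because the action side is embedded per candidate, the same network handles a different number of admissible actions at every game state, which is what distinguishes this setup from a fixed-output-layer deep Q-network.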