
Learning to Query Internet Text for Informing Reinforcement Learning Agents

by Kolby Nottingham, et al.
University of California, Irvine

Generalization to out-of-distribution tasks in reinforcement learning is a challenging problem. One successful approach improves generalization by conditioning policies on task or environment descriptions that provide information about the current transition or reward functions. Previously, these descriptions were often expressed as generated or crowd-sourced text. In this work, we begin to tackle the problem of extracting useful information from natural language found in the wild (e.g. internet forums, documentation, and wikis). These natural, pre-existing sources are large and noisy, and they present novel challenges compared to previous approaches. We propose to address these challenges by training reinforcement learning agents to learn to query these sources as a human would, and we experiment with how and when an agent should query. To address the how, we demonstrate that pretrained QA models perform well at executing zero-shot queries in our target domain. Using information retrieved by a QA model, we then train an agent to learn when it should execute queries. We show that our method correctly learns to execute queries to maximize reward in a reinforcement learning setting.
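The when-to-query decision can be illustrated with a minimal tabular Q-learning sketch. This is entirely a toy construction of our own, not the paper's environment, method, or QA model: a two-armed bandit whose rewarding arm is hidden each episode, plus a "query" action that pays a small cost to reveal the arm, standing in for asking a pretrained QA model about external text. All names and constants below are illustrative assumptions.

```python
import random
from collections import defaultdict

QUERY_COST = 0.1    # penalty for issuing a query (stands in for query latency/cost)
ACTIONS = range(3)  # 0/1: pull an arm, 2: query the "QA model"

def run_episode(q, eps=0.1, alpha=0.05, gamma=0.9):
    """One episode of epsilon-greedy tabular Q-learning."""
    goal = random.randint(0, 1)  # hidden rewarding arm, resampled each episode
    state = "unknown"
    for _ in range(2):  # at most one query, then one pull
        if random.random() < eps:
            a = random.choice(list(ACTIONS))
        else:
            a = max(ACTIONS, key=lambda x: q[(state, x)])
        if a == 2 and state == "unknown":
            # Query: small penalty, but the goal arm becomes observable.
            r, next_state, done = -QUERY_COST, f"goal={goal}", False
        else:
            # Querying again (or blind in a revealed state) wastes the pull.
            arm = a if a < 2 else random.randint(0, 1)
            r, next_state, done = float(arm == goal), state, True
        best_next = 0.0 if done else max(q[(next_state, x)] for x in ACTIONS)
        q[(state, a)] += alpha * (r + gamma * best_next - q[(state, a)])
        state = next_state
        if done:
            break

random.seed(0)
# Optimistic initialization: unvisited actions look promising, so each
# action is tried at least once without extra exploration machinery.
q = defaultdict(lambda: 1.0)
for _ in range(10_000):
    run_episode(q)

# The greedy first action in the "unknown" state becomes the query:
# paying 0.1 and then earning a guaranteed 1.0 beats a 50/50 blind pull
# (0.9 * 1.0 - 0.1 = 0.8 > 0.5).
best_first = max(ACTIONS, key=lambda a: q[("unknown", a)])
```

The point of the sketch is that querying is just another action whose value the agent estimates from reward, so "when to query" falls out of ordinary value learning: the query action dominates exactly when the information it buys is worth more than its cost.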


