Exploiting Language Instructions for Interpretable and Compositional Reinforcement Learning

01/13/2020
by Michiel van der Meer, et al.

In this work, we present an alternative approach to making an agent compositional through the use of a diagnostic classifier. Motivated by the need for explainable agents in automated decision making, we attempt to interpret the latent space of an RL agent to identify its current objective within a complex language instruction. Results show that the classification process changes the hidden states in a way that makes them more easily interpretable, but also shifts zero-shot performance on novel instructions. Lastly, we limit the supervisory signal on the classification and observe a similar but less pronounced effect.
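
To make the abstract concrete, below is a minimal sketch of what such a diagnostic classifier typically looks like in practice: a linear probe trained to predict the agent's current sub-instruction from its recurrent hidden state. This is not the authors' implementation; the PyTorch framing, the dimensions, and every name here (DiagnosticClassifier, probe_loss, HIDDEN_DIM, NUM_SUBGOALS) are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical sizes; nothing in the abstract fixes these values.
HIDDEN_DIM = 128    # size of the agent's recurrent latent state
NUM_SUBGOALS = 4    # number of sub-instructions in a compositional command


class DiagnosticClassifier(nn.Module):
    """Linear probe: maps an agent hidden state to logits over sub-goals."""

    def __init__(self, hidden_dim: int, num_subgoals: int):
        super().__init__()
        self.probe = nn.Linear(hidden_dim, num_subgoals)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.probe(h)


def probe_loss(probe: DiagnosticClassifier,
               hidden_states: torch.Tensor,
               subgoal_labels: torch.Tensor) -> torch.Tensor:
    """Cross-entropy between predicted and annotated current sub-goals.

    If this loss is also backpropagated into the agent, the probe acts as an
    auxiliary supervisory signal rather than a purely passive diagnostic;
    detaching hidden_states would plausibly correspond to the abstract's
    limited-supervision variant (an assumption, not a claim about the paper).
    """
    logits = probe(hidden_states)
    return nn.functional.cross_entropy(logits, subgoal_labels)


if __name__ == "__main__":
    probe = DiagnosticClassifier(HIDDEN_DIM, NUM_SUBGOALS)
    h = torch.randn(32, HIDDEN_DIM)             # stand-in batch of latent states
    y = torch.randint(0, NUM_SUBGOALS, (32,))   # stand-in sub-goal labels
    loss = probe_loss(probe, h, y)
    loss.backward()
    print(f"probe loss: {loss.item():.3f}")
```

In this framing, probe accuracy measures how linearly decodable the current objective is from the latent state, which is one common way to operationalize the interpretability claim in the abstract.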

Related research

09/08/2023 · Compositional Learning of Visually-Grounded Concepts Using Reinforcement
Deep reinforcement learning agents need to be trained over millions of e...

06/15/2017 · Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning
As a step towards developing zero-shot task generalization capabilities ...

08/15/2023 · A^2Nav: Action-Aware Zero-Shot Robot Navigation by Exploiting Vision-and-Language Ability of Foundation Models
We study the task of zero-shot vision-and-language navigation (ZS-VLN), ...

11/20/2019 · Zero-Shot Semantic Parsing for Instructions
We consider a zero-shot semantic parsing task: parsing instructions into...

09/29/2022 · Does Zero-Shot Reinforcement Learning Exist?
A zero-shot RL agent is an agent that can solve any RL task in a given e...

09/28/2021 · Identifying Reasoning Flaws in Planning-Based RL Using Tree Explanations
Enabling humans to identify potential flaws in an agent's decision makin...
