Privileged Information Dropout in Reinforcement Learning

Using privileged information during training can improve the sample efficiency and performance of machine learning systems. This paradigm has been applied to reinforcement learning (RL), primarily in the form of distillation or auxiliary tasks, and less commonly in the form of augmenting the inputs of agents. In this work, we investigate Privileged Information Dropout for achieving the latter, which can be applied equally to value-based and policy-based RL algorithms. Within a simple partially-observed environment, we demonstrate that Privileged Information Dropout outperforms alternatives for leveraging privileged information, including distillation and auxiliary tasks, and can successfully utilise different types of privileged information. Finally, we analyse its effect on the learned representations.
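To make the idea of augmenting an agent's inputs with privileged information concrete, the sketch below shows one plausible arrangement: a policy (or value) network with a separate branch for privileged features that is active, with dropout, during training and zeroed out at evaluation time, when the privileged signal is unavailable. This is an illustrative PyTorch example under our own assumptions (standard dropout on the privileged branch, hypothetical class and parameter names), not the paper's exact formulation of Privileged Information Dropout.

import torch
import torch.nn as nn

class PrivilegedDropoutPolicy(nn.Module):
    """Illustrative network: the observation is always available, while
    privileged features (e.g. the full environment state) are only seen
    during training and are stochastically dropped so the policy does
    not come to rely on them."""

    def __init__(self, obs_dim, priv_dim, n_actions, p_drop=0.5):
        super().__init__()
        self.obs_encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU())
        self.priv_encoder = nn.Sequential(nn.Linear(priv_dim, 64), nn.ReLU())
        # Dropout applied to the privileged branch only.
        self.priv_dropout = nn.Dropout(p=p_drop)
        self.head = nn.Linear(128, n_actions)

    def forward(self, obs, priv=None):
        h_obs = self.obs_encoder(obs)
        if priv is not None and self.training:
            # Training: privileged features pass through dropout.
            h_priv = self.priv_dropout(self.priv_encoder(priv))
        else:
            # Evaluation/deployment: privileged input is unavailable,
            # so its branch is replaced by zeros.
            h_priv = torch.zeros_like(h_obs)
        return self.head(torch.cat([h_obs, h_priv], dim=-1))

# Usage: the output can serve as action logits (policy-based) or Q-values (value-based).
policy = PrivilegedDropoutPolicy(obs_dim=8, priv_dim=4, n_actions=3)
obs = torch.randn(32, 8)
priv = torch.randn(32, 4)
logits_train = policy(obs, priv)   # training: privileged info dropped out stochastically
policy.eval()
logits_test = policy(obs)          # test time: no privileged info

Because the same network is used with and without the privileged branch, the sketch applies equally to value-based and policy-based algorithms, which is the property the abstract highlights.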
