What makes useful auxiliary tasks in reinforcement learning: investigating the effect of the target policy

04/01/2022
by   Banafsheh Rafiee, et al.

Auxiliary tasks have been argued to be useful for representation learning in reinforcement learning. Although many auxiliary tasks have been empirically shown to accelerate learning on the main task, it is not yet clear what makes an auxiliary task useful. Some of the most promising results come from the pixel-control, reward-prediction, and next-state-prediction auxiliary tasks; however, the empirical results are mixed, showing substantial improvements in some cases and only marginal ones in others. Careful investigation of how auxiliary tasks help the learning of the main task is necessary. In this paper, we take a step toward that goal by studying the effect of the target policy on the usefulness of auxiliary tasks formulated as general value functions. General value functions consist of three core elements: 1) a policy, 2) a cumulant, and 3) a continuation function. Our focus on the target policy of the auxiliary tasks is motivated by the fact that the target policy determines both the behavior the agent wants to make predictions about and the state-action distribution the agent is trained on, which in turn affects learning on the main task. Our study provides insight into questions such as: Does a greedy policy yield bigger improvements than other policies? Is it best to set the auxiliary-task policy to be the same as the main-task policy? Does the choice of target policy substantially affect the performance gain, or do simple strategies, such as using a uniformly random policy, work just as well? Our empirical results suggest that: 1) auxiliary tasks with the greedy policy tend to be useful; 2) most policies, including a uniformly random policy, tend to improve over the baseline; and 3) surprisingly, the main-task policy tends to be less useful than other policies.
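The three core GVF elements named above (policy, cumulant, continuation function) can be illustrated with a minimal off-policy TD(0) learner. This is a hedged sketch, not the paper's implementation: the tabular setting, the function names, and the uniformly random target policy are all illustrative assumptions.

```python
import numpy as np

N_STATES, N_ACTIONS = 5, 2

def gvf_td_update(v, s, a, s_next, behavior_probs, target_probs,
                  cumulant, gamma, alpha=0.1):
    """One off-policy TD(0) update of the GVF value estimate v.

    The three GVF elements appear explicitly:
      target_probs -- the target policy pi(a|s) (the paper's focus),
      cumulant     -- the signal accumulated instead of the reward,
      gamma        -- the continuation function (state-dependent discount).
    An importance-sampling ratio corrects for the mismatch between the
    behavior policy and the target policy.
    """
    rho = target_probs[s, a] / behavior_probs[s, a]  # importance ratio
    c = cumulant(s, a, s_next)
    td_error = c + gamma(s_next) * v[s_next] - v[s]
    v[s] += alpha * rho * td_error
    return v

# Illustrative auxiliary task: predict the discounted frequency of taking
# action 0 under a uniformly random target policy, with constant
# continuation 0.9 (so the fixed point is 0.5 / (1 - 0.9) = 5 everywhere).
rng = np.random.default_rng(0)
v = np.zeros(N_STATES)
behavior = np.full((N_STATES, N_ACTIONS), 0.5)  # uniform behavior policy
target = np.full((N_STATES, N_ACTIONS), 0.5)    # uniform target policy
cumulant = lambda s, a, s_next: float(a == 0)
gamma = lambda s_next: 0.9

for _ in range(1000):
    s = int(rng.integers(N_STATES))
    a = int(rng.integers(N_ACTIONS))
    s_next = int(rng.integers(N_STATES))  # toy random transitions
    v = gvf_td_update(v, s, a, s_next, behavior, target, cumulant, gamma)
```

Swapping in a greedy or main-task target policy only changes `target_probs`, which is what makes the target policy a clean axis to vary when studying auxiliary-task usefulness.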


