Attentive Multi-Task Deep Reinforcement Learning

07/05/2019
by Timo Bram, et al.

Sharing knowledge between tasks is vital for efficient learning in a multi-task setting. However, most research so far has focused on the easier case where knowledge transfer is not harmful, i.e., where knowledge from one task cannot negatively impact performance on another task. In contrast, we present an attention-based approach to multi-task deep reinforcement learning that does not require any a priori assumptions about the relationships between tasks. Our attention network automatically groups task knowledge into sub-networks at state-level granularity. It thereby achieves positive knowledge transfer where possible and avoids negative transfer in cases where tasks interfere. We test our algorithm against two state-of-the-art multi-task/transfer learning approaches and show comparable or superior performance while requiring fewer network parameters.
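The abstract describes an attention network that softly assigns each state to a pool of sub-networks, so that related tasks can share parameters while interfering tasks route to separate capacity. The following is a minimal sketch of that idea in PyTorch; the class name AttentiveMultiTaskNet, the layer sizes, and the soft-attention routing are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class AttentiveMultiTaskNet(nn.Module):
    """Sketch: per-state soft attention over a pool of shared sub-networks.

    Hypothetical reconstruction of the idea in the abstract, not the
    paper's reference implementation.
    """

    def __init__(self, state_dim, action_dim, num_subnets=4, hidden=64):
        super().__init__()
        # Pool of sub-networks that can hold shared or task-specific knowledge.
        self.subnets = nn.ModuleList(
            nn.Sequential(
                nn.Linear(state_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, action_dim),
            )
            for _ in range(num_subnets)
        )
        # Attention head: maps the current state to a distribution over
        # sub-networks, so routing is decided at state-level granularity.
        self.attention = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_subnets),
        )

    def forward(self, state):
        # (batch, num_subnets): soft assignment of each state to sub-networks.
        weights = torch.softmax(self.attention(state), dim=-1)
        # (batch, num_subnets, action_dim): stacked sub-network outputs.
        outputs = torch.stack([net(state) for net in self.subnets], dim=1)
        # Attention-weighted mixture: interfering tasks can route to disjoint
        # sub-networks, while related tasks can reuse the same one.
        return (weights.unsqueeze(-1) * outputs).sum(dim=1)

# Example usage with made-up dimensions:
net = AttentiveMultiTaskNet(state_dim=8, action_dim=4)
q_values = net(torch.randn(32, 8))  # shape: (32, 4)
```

Because the attention weights depend on the state rather than on a fixed task label, the grouping of knowledge into sub-networks is learned end-to-end instead of being specified in advance.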

Related research

09/26/2013 · Sample Complexity of Multi-task Reinforcement Learning
Transferring knowledge across a sequence of reinforcement-learning tasks...

09/12/2018 · Multi-task Deep Reinforcement Learning with PopArt
The reinforcement learning community has made great strides in designing...

05/26/2020 · Visual Interest Prediction with Attentive Multi-Task Transfer Learning
Visual interest affect prediction is a very interesting area of rese...

06/02/2023 · Efficient Multi-Task and Transfer Reinforcement Learning with Parameter-Compositional Framework
In this work, we investigate the potential of improving multi-task train...

11/07/2022 · Curriculum-based Asymmetric Multi-task Reinforcement Learning
We introduce CAMRL, the first curriculum-based asymmetric multi-task lea...

04/23/2018 · Dropping Networks for Transfer Learning
In natural language understanding, many challenges require learning rela...

09/23/2020 · Worst-Case-Aware Curriculum Learning for Zero and Few Shot Transfer
Multi-task transfer learning based on pre-trained language encoders achi...
