Multitask Soft Option Learning

04/01/2019, by Maximilian Igl et al.

We present Multitask Soft Option Learning (MSOL), a hierarchical multitask framework based on Planning as Inference. MSOL extends the concept of options, using separate variational posteriors for each task, regularized by a shared prior. This allows fine-tuning of options for new tasks without forgetting their learned policies, leading to faster training without reducing the expressiveness of the hierarchical policy. Additionally, MSOL avoids several instabilities during training in a multitask setting and provides a natural way to learn not only intra-option policies but also their terminations. We demonstrate empirically that MSOL significantly outperforms both hierarchical and flat transfer-learning baselines in challenging multitask environments.
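
To make the core idea concrete, below is a minimal PyTorch sketch, not the authors' implementation, of the KL regularizer that ties a task-specific posterior option to the shared prior. All names (OptionPolicy, soft_option_penalty) and the temperatures alpha and beta are illustrative assumptions; in the paper this penalty enters a planning-as-inference objective rather than a bare loss term.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical, kl_divergence


class OptionPolicy(nn.Module):
    """One option: an intra-option action policy plus a termination head.

    Hypothetical module for illustration; MSOL keeps one such posterior
    per task and one shared prior of the same structure.
    """

    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh())
        self.action_head = nn.Linear(hidden, n_actions)  # intra-option policy logits
        self.term_head = nn.Linear(hidden, 2)            # continue / terminate logits

    def forward(self, obs):
        h = self.body(obs)
        return (Categorical(logits=self.action_head(h)),
                Categorical(logits=self.term_head(h)))


def soft_option_penalty(posterior, prior, obs, alpha=0.01, beta=0.01):
    """KL penalty pulling a task-specific posterior toward the shared prior.

    alpha and beta are assumed temperatures weighting the action and
    termination KLs. Gradients reach both networks, so the posterior is
    pulled toward the prior while the prior is distilled toward the
    task posteriors.
    """
    post_act, post_term = posterior(obs)
    prior_act, prior_term = prior(obs)
    kl_act = kl_divergence(post_act, prior_act).mean()
    kl_term = kl_divergence(post_term, prior_term).mean()
    return alpha * kl_act + beta * kl_term


# Usage sketch: one posterior per task, one shared prior.
prior = OptionPolicy(obs_dim=8, n_actions=4)
posterior = OptionPolicy(obs_dim=8, n_actions=4)
obs = torch.randn(32, 8)
penalty = soft_option_penalty(posterior, prior, obs)  # add to the task's policy loss
```

Because the penalty keeps each task's options close to a shared prior rather than freezing them, options can be fine-tuned per task without forgetting, which is the transfer mechanism the abstract describes.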

Related research:

Matching options to tasks using Option-Indexed Hierarchical Reinforcement Learning (06/12/2022)
The options framework in Hierarchical Reinforcement Learning breaks down...

SOAC: The Soft Option Actor-Critic Architecture (06/25/2020)
The option framework has shown great promise by automatically extracting...

Stay Alive with Many Options: A Reinforcement Learning Approach for Autonomous Navigation (01/30/2021)
Hierarchical reinforcement learning approaches learn policies based on h...

Flexible Option Learning (12/06/2021)
Temporal abstraction in reinforcement learning (RL) offers the promise ...

DAC: The Double Actor-Critic Architecture for Learning Options (04/29/2019)
We reformulate the option framework as two parallel augmented MDPs. Unde...

Benchmark Environments for Multitask Learning in Continuous Domains (08/14/2017)
As demand drives systems to generalize to various domains and problems, ...

The Logical Options Framework (02/24/2021)
Learning composable policies for environments with complex rules and tas...
