Curriculum Learning of Multiple Tasks

12/03/2014
by Anastasia Pentina, et al.

Sharing information between multiple tasks enables algorithms to achieve good generalization performance even from small amounts of training data. However, in realistic multi-task learning scenarios not all tasks are equally related to each other, so it can be advantageous to transfer information only between the most related tasks. In this work we propose an approach that processes multiple tasks sequentially, sharing information between consecutive tasks, instead of solving all tasks jointly. We then address the question of curriculum learning of tasks, i.e. finding the best order in which the tasks should be learned. Our approach uses a generalization-bound criterion to choose the task order that optimizes the average expected classification performance over all tasks. Our experimental results show that learning multiple related tasks sequentially can be more effective than learning them jointly, that the order in which tasks are solved affects the overall performance, and that our model is able to automatically discover a favourable order of tasks.
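
The idea of learning tasks sequentially with transfer between consecutive tasks, while choosing the order by a performance criterion, can be illustrated with a small sketch. The code below is a minimal illustration, not the authors' algorithm: it assumes linear least-squares classifiers with biased regularization toward the previous task's weight vector, and it replaces the paper's generalization-bound criterion with a simple proxy score (training error plus a weight-transfer penalty) used to greedily pick the next task.

```python
import numpy as np

def train_task(X, y, w_prev, lam=1.0):
    """Least-squares classifier with biased regularization: penalizing
    ||w - w_prev||^2 transfers information from the previously solved task
    (a hypothetical stand-in for the models used in the paper)."""
    d = X.shape[1]
    w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * w_prev)
    # Proxy score: empirical loss plus transfer penalty
    # (assumption: stands in for the paper's generalization bound).
    score = np.mean((X @ w - y) ** 2) + (lam / len(y)) * np.sum((w - w_prev) ** 2)
    return w, score

def greedy_curriculum(tasks, d, lam=1.0):
    """Greedily choose the next task with the smallest proxy score,
    chaining the learned weights from one task to the next."""
    remaining = list(range(len(tasks)))
    order, w_prev = [], np.zeros(d)
    while remaining:
        scored = [(train_task(*tasks[t], w_prev, lam), t) for t in remaining]
        ((w_best, _), t_best) = min(scored, key=lambda s: s[0][1])
        order.append(t_best)
        remaining.remove(t_best)
        w_prev = w_best
    return order

# Usage on synthetic tasks: related tasks share a common weight vector,
# perturbed by increasing amounts of noise.
rng = np.random.default_rng(0)
d, n = 5, 40
w_star = rng.normal(size=d)
tasks = []
for shift in [0.1, 0.5, 1.0]:
    X = rng.normal(size=(n, d))
    y = np.sign(X @ (w_star + shift * rng.normal(size=d)))
    tasks.append((X, y))
print(greedy_curriculum(tasks, d))
```

In this sketch the curriculum tends to start with the task whose classifier is easiest to fit from the current starting point and then moves outward, which mirrors the paper's observation that the order in which tasks are solved matters.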

Related research

- Learning Task Grouping and Overlap in Multi-task Learning (06/27/2012): In the paradigm of multi-task learning, multiple related prediction ta...
- Multi-Task Learning for Sequence Tagging: An Empirical Study (08/13/2018): We study three general multi-task learning (MTL) approaches on 11 sequen...
- Variational Multi-Task Learning with Gumbel-Softmax Priors (11/09/2021): Multi-task learning aims to explore task relatedness to improve individu...
- Learning Sparse Sharing Architectures for Multiple Tasks (11/12/2019): Most existing deep multi-task learning models are based on parameter sha...
- An Empirical Comparison of Syllabuses for Curriculum Learning (09/27/2018): Syllabuses for curriculum learning have been developed on an ad-hoc, per...
- Information Maximizing Curriculum: A Curriculum-Based Approach for Training Mixtures of Experts (03/27/2023): Mixtures of Experts (MoE) are known for their ability to learn complex c...
- Learning Multi-Tasks with Inconsistent Labels by using Auxiliary Big Task (01/07/2022): Multi-task learning is to improve the performance of the model by transf...
