AutoSeM: Automatic Task Selection and Mixing in Multi-Task Learning

04/08/2019
by Han Guo, et al.

Multi-task learning (MTL) has achieved success over a wide range of problems, where the goal is to improve the performance of a primary task using a set of relevant auxiliary tasks. However, when the usefulness of the auxiliary tasks w.r.t. the primary task is not known a priori, the success of MTL models depends on the correct choice of these auxiliary tasks and also a balanced mixing ratio of these tasks during alternate training. These two problems could be resolved via manual intuition or hyper-parameter tuning over all combinatorial task choices, but this introduces inductive bias or is not scalable when the number of candidate auxiliary tasks is very large. To address these issues, we present AutoSeM, a two-stage MTL pipeline, where the first stage automatically selects the most useful auxiliary tasks via a Beta-Bernoulli multi-armed bandit with Thompson Sampling, and the second stage learns the training mixing ratio of these selected auxiliary tasks via a Gaussian Process based Bayesian optimization framework. We conduct several MTL experiments on the GLUE language understanding tasks, and show that our AutoSeM framework can successfully find relevant auxiliary tasks and automatically learn their mixing ratio, achieving significant performance boosts on several primary tasks. Finally, we present ablations for each stage of AutoSeM and analyze the learned auxiliary task choices.
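
For readers who want a concrete picture of the two stages, the sketch below shows (in Python) a Beta-Bernoulli multi-armed bandit with Thompson Sampling for auxiliary-task selection, followed by a Gaussian-Process-based Bayesian optimization loop for the mixing ratio. This is not the authors' implementation: the candidate task list, the stand-in functions `validation_gain` and `validation_score`, the selection threshold, and all hyper-parameters are illustrative assumptions standing in for "train briefly and measure the change on the primary task's dev set".

```python
# Minimal sketch of the two AutoSeM stages (illustrative, not the authors' code).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

# ---- Stage 1: auxiliary-task selection via a Beta-Bernoulli bandit (Thompson Sampling) ----
candidate_tasks = ["MNLI", "QNLI", "QQP", "SST-2"]   # illustrative candidate auxiliary tasks
alpha = np.ones(len(candidate_tasks))                # Beta posterior "successes" per task
beta = np.ones(len(candidate_tasks))                 # Beta posterior "failures" per task

def validation_gain(arm: int) -> bool:
    """Hypothetical stand-in: did a short MTL run with this auxiliary task
    improve the primary task's dev score? Replace with real training."""
    return rng.random() < 0.5

for _ in range(100):                                 # bandit rounds
    samples = rng.beta(alpha, beta)                  # Thompson-sample a utility per arm
    arm = int(np.argmax(samples))                    # try the most promising task
    reward = validation_gain(arm)                    # noisy binary utility signal
    alpha[arm] += reward                             # conjugate Beta update
    beta[arm] += 1 - reward

posterior_mean = alpha / (alpha + beta)
selected = [t for t, p in zip(candidate_tasks, posterior_mean) if p > 0.5]  # threshold is an assumption

# ---- Stage 2: mixing-ratio search via Gaussian-Process Bayesian optimization ----
def validation_score(ratio: np.ndarray) -> float:
    """Hypothetical stand-in: primary-task dev score after training with this
    mixing ratio over [primary] + selected auxiliary tasks."""
    return float(-np.sum((ratio - 0.3) ** 2) + 0.01 * rng.standard_normal())

dim = 1 + len(selected)                              # primary task + selected auxiliaries
X = rng.dirichlet(np.ones(dim), size=5)              # a few random ratios to seed the GP
y = np.array([validation_score(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(20):                                  # Bayesian-optimization iterations
    gp.fit(X, y)
    cand = rng.dirichlet(np.ones(dim), size=256)     # random candidate mixing ratios
    mu, sigma = gp.predict(cand, return_std=True)
    x_next = cand[int(np.argmax(mu + 1.96 * sigma))] # upper-confidence-bound acquisition
    X = np.vstack([X, x_next])
    y = np.append(y, validation_score(x_next))

best_ratio = X[int(np.argmax(y))]                    # best mixing ratio found
print(dict(zip(["primary"] + selected, best_ratio)))
```

A natural design choice, reflected in the sketch, is to tie the bandit's binary reward to whether a short multi-task run improves the primary task's dev score; stage 2 then only optimizes ratios over the tasks that survive stage 1, which keeps the Gaussian Process's input dimension small.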

Related research

07/02/2020 · A Brief Review of Deep Multi-task Learning and Auxiliary Task Learning
Multi-task learning (MTL) optimizes several learning tasks simultaneousl...

09/13/2021 · GradTS: A Gradient-Based Automatic Auxiliary Task Selection Method Based on Transformer Networks
A key problem in multi-task learning (MTL) research is how to select hig...

06/19/2018 · Dynamic Multi-Level Multi-Task Learning for Sentence Simplification
Sentence simplification aims to improve readability and understandabilit...

12/02/2021 · Transfer Learning in Conversational Analysis through Reusing Preprocessing Data as Supervisors
Conversational analysis systems are trained using noisy human labels and...

09/18/2023 · Task Selection and Assignment for Multi-modal Multi-task Dialogue Act Classification with Non-stationary Multi-armed Bandits
Multi-task learning (MTL) aims to improve the performance of a primary t...

02/01/2021 · Many Hands Make Light Work: Using Essay Traits to Automatically Score Essays
Most research in the area of automatic essay grading (AEG) is geared tow...

05/10/2023 · Efficient Training of Multi-task Neural Solver with Multi-armed Bandits
Efficiently training a multi-task neural solver for various combinatoria...
