Pseudo-task Augmentation: From Deep Multitask Learning to Intratask Sharing---and Back

03/11/2018
by Elliot Meyerson, et al.

Deep multitask learning boosts performance by sharing learned structure across related tasks. This paper adapts ideas from deep multitask learning to the setting where only a single task is available. The method is formalized as pseudo-task augmentation, in which models are trained with multiple decoders for each task. Pseudo-tasks simulate the effect of training towards closely-related tasks drawn from the same universe. In a suite of experiments, pseudo-task augmentation is shown to improve performance on single-task learning problems. When combined with multitask learning, further improvements are achieved, including state-of-the-art performance on the CelebA dataset, showing that pseudo-task augmentation and multitask learning have complementary value. All in all, pseudo-task augmentation is a broadly applicable and efficient way to boost performance in deep learning systems.
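To make the multi-decoder idea concrete, below is a minimal sketch of how a model with several decoders for a single task might be wired up and trained. It assumes a PyTorch-style implementation; the class name PseudoTaskModel, the layer sizes, and the simple loss-averaging scheme are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn

class PseudoTaskModel(nn.Module):
    """Sketch: one shared encoder, several decoders for a single task.

    Each decoder defines a pseudo-task: the same labels, but a distinct
    output mapping, so the shared encoder receives multiple training
    signals as if it were trained on several closely related tasks.
    """
    def __init__(self, in_dim, hidden_dim, out_dim, num_decoders=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Multiple lightweight decoders (pseudo-tasks) for the same task.
        self.decoders = nn.ModuleList(
            nn.Linear(hidden_dim, out_dim) for _ in range(num_decoders)
        )

    def forward(self, x):
        h = self.encoder(x)
        return [decoder(h) for decoder in self.decoders]


def pseudo_task_loss(outputs, targets):
    """Average the task loss over all decoders: the shared encoder gets a
    gradient from every pseudo-task, each decoder only from its own loss."""
    criterion = nn.CrossEntropyLoss()
    losses = [criterion(out, targets) for out in outputs]
    return torch.stack(losses).mean()


# Usage with random data (shapes chosen only for illustration).
model = PseudoTaskModel(in_dim=32, hidden_dim=64, out_dim=10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
loss = pseudo_task_loss(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()

The sketch only shows the basic shared-encoder, multi-decoder structure; in the paper, the benefit also depends on how the decoders' training trajectories are kept diverse, which is not modeled here.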
