Auxiliary Task Reweighting for Minimum-data Learning

10/16/2020
by Baifeng Shi, et al.

Supervised learning requires a large amount of training data, limiting its application where labeled data is scarce. To compensate for data scarcity, one possible approach is to use auxiliary tasks to provide additional supervision for the main task. Assigning and optimizing the importance weights of different auxiliary tasks remains a crucial and largely understudied research question. In this work, we propose a method to automatically reweight auxiliary tasks in order to reduce the data requirement of the main task. Specifically, we formulate the weighted likelihood function of the auxiliary tasks as a surrogate prior for the main task. By adjusting the auxiliary task weights to minimize the divergence between this surrogate prior and the true prior of the main task, we obtain a more accurate prior estimate, minimizing the amount of training data required for the main task and avoiding a costly grid search. In multiple experimental settings (e.g. semi-supervised learning, multi-label classification), we demonstrate that our algorithm exploits limited labeled data of the main task, together with auxiliary tasks, more effectively than previous task-reweighting methods. We also show that in extreme cases with only a few extra examples (e.g. few-shot domain adaptation), our algorithm yields a significant improvement over the baseline.
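The abstract's core idea, adjusting task weights so the weighted auxiliary signal matches the main task's true prior, can be illustrated with a gradient-alignment proxy: choose weights on the simplex that minimize the distance between the weighted sum of auxiliary gradients and the main-task gradient. The sketch below is a minimal toy version of that idea with quadratic losses, not the paper's exact algorithm; all function names and the two synthetic tasks (one agreeing with the main task, one conflicting) are illustrative assumptions.

```python
import numpy as np

def task_grad(theta, target):
    # gradient of the quadratic loss 0.5 * ||theta - target||^2
    return theta - target

def reweight(aux_grads, main_grad, w, lr=0.1):
    # one projected-gradient step on ||sum_i w_i g_i - g_main||^2,
    # a proxy for matching the surrogate prior to the true prior
    G = np.stack(aux_grads)        # (num_tasks, dim)
    resid = G.T @ w - main_grad    # mismatch of the weighted aux gradient
    w = w - lr * (G @ resid)       # gradient of the mismatch w.r.t. w
    w = np.clip(w, 0.0, None)      # project back onto the simplex
    return w / w.sum()

rng = np.random.default_rng(0)
theta = rng.normal(size=3)
main_target = np.zeros(3)
# toy setup: auxiliary task 0 shares the main task's optimum, task 1 conflicts
aux_targets = [np.zeros(3), 5.0 * np.ones(3)]
w = np.full(2, 0.5)

for _ in range(200):
    g_main = task_grad(theta, main_target)
    g_aux = [task_grad(theta, t) for t in aux_targets]
    w = reweight(g_aux, g_main, w)
    theta = theta - 0.1 * (g_main + sum(wi * gi for wi, gi in zip(w, g_aux)))

print(w)  # the helpful auxiliary task ends up with most of the weight
```

With this setup the weight of the conflicting task collapses toward zero, so the combined update is dominated by supervision that agrees with the main task, which is the effect the reweighting scheme is after.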


Related research

11/18/2019 · Constructing Multiple Tasks for Augmentation: Improving Neural Image Classification With K-means Features
Multi-task learning (MTL) has received considerable attention, and numer...

11/08/2021 · TAGLETS: A System for Automatic Semi-Supervised Learning with Auxiliary Data
Machine learning practitioners often have access to a spectrum of data: ...

09/22/2018 · Semi-Supervised Sequence Modeling with Cross-View Training
Unsupervised representation learning algorithms such as word2vec and ELM...

04/16/2021 · Pareto Self-Supervised Training for Few-Shot Learning
While few-shot learning (FSL) aims for rapid generalization to new conce...

11/13/2019 · Neural Duplicate Question Detection without Labeled Training Data
Supervised training of neural models to duplicate question detection in ...

10/28/2020 · MultiMix: Sparingly Supervised, Extreme Multitask Learning From Medical Images
Semi-supervised learning via learning from limited quantities of labeled...

03/29/2022 · Parameterized Consistency Learning-based Deep Polynomial Chaos Neural Network Method for Reliability Analysis in Aerospace Engineering
Polynomial chaos expansion (PCE) is a powerful surrogate model-based rel...
