- Constructing Multiple Tasks for Augmentation: Improving Neural Image Classification With K-means Features
  Multi-task learning (MTL) has received considerable attention, and numer...
- Semi-Supervised Sequence Modeling with Cross-View Training
  Unsupervised representation learning algorithms such as word2vec and ELM...
- Neural Duplicate Question Detection without Labeled Training Data
  Supervised training of neural models to duplicate question detection in ...
- Discriminative Consistent Domain Generation for Semi-supervised Learning
  Deep learning based task systems normally rely on a large amount of manu...
- Adapting Auxiliary Losses Using Gradient Similarity
  One approach to deal with the statistical inefficiency of neural network...
- Improving Limited Labeled Dialogue State Tracking with Self-Supervision
  Existing dialogue state tracking (DST) models require plenty of labeled ...
- Pseudo Shots: Few-Shot Learning with Auxiliary Data
  In many practical few-shot learning problems, even though labeled exampl...

Auxiliary Task Reweighting for Minimum-data Learning
Supervised learning requires a large amount of training data, limiting its application where labeled data is scarce. To compensate for data scarcity, one possible method is to utilize auxiliary tasks to provide additional supervision for the main task. Assigning and optimizing the importance weights for different auxiliary tasks remains a crucial and largely understudied research question. In this work, we propose a method to automatically reweight auxiliary tasks in order to reduce the data requirement on the main task. Specifically, we formulate the weighted likelihood function of auxiliary tasks as a surrogate prior for the main task. By adjusting the auxiliary task weights to minimize the divergence between the surrogate prior and the true prior of the main task, we obtain a more accurate prior estimation, achieving the goal of minimizing the required amount of training data for the main task and avoiding a costly grid search. In multiple experimental settings (e.g., semi-supervised learning, multi-label classification), we demonstrate that our algorithm can effectively utilize limited labeled data of the main task with the benefit of auxiliary tasks compared with previous task reweighting methods. We also show that under extreme cases with only a few extra examples (e.g., few-shot domain adaptation), our algorithm results in significant improvement over the baseline.
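As a rough sketch of the idea (not the authors' released implementation), the snippet below maintains one weight per auxiliary task and updates those weights so that the weighted auxiliary gradient aligns with the main-task gradient, a cheap stand-in for minimizing the divergence between the surrogate prior and the true prior described in the abstract. The toy quadratic losses and all names (`task_loss`, `alpha`, `targets`) are hypothetical.

```python
# Hypothetical sketch of auxiliary-task reweighting (not the paper's code).
import torch

torch.manual_seed(0)
dim, n_aux = 16, 3
theta = torch.randn(dim, requires_grad=True)    # shared model parameters
alpha = torch.zeros(n_aux, requires_grad=True)  # auxiliary task weights (logits)

# Toy quadratic losses standing in for real task objectives; index 0 is the
# main task, indices 1..n_aux are the auxiliaries (all assumptions).
targets = [torch.randn(dim) for _ in range(1 + n_aux)]

def task_loss(k):
    return 0.5 * (theta - targets[k]).pow(2).sum()

opt_theta = torch.optim.SGD([theta], lr=0.1)
opt_alpha = torch.optim.SGD([alpha], lr=0.05)

for step in range(200):
    # Main-task gradient w.r.t. theta (no graph kept, so it acts as a constant).
    g_main = torch.autograd.grad(task_loss(0), theta)[0]

    # Alpha-weighted auxiliary loss; keep the graph so alpha receives gradients.
    w = torch.softmax(alpha, dim=0)
    aux_loss = sum(w[k] * task_loss(k + 1) for k in range(n_aux))
    g_aux = torch.autograd.grad(aux_loss, theta, create_graph=True)[0]

    # Move alpha to increase the cosine similarity between the auxiliary and
    # main gradients -- a proxy for shrinking the surrogate/true prior gap.
    misalign = -torch.nn.functional.cosine_similarity(g_aux, g_main, dim=0)
    opt_alpha.zero_grad()
    misalign.backward()
    opt_alpha.step()

    # Train theta on the main loss plus the re-weighted auxiliary losses.
    w = torch.softmax(alpha.detach(), dim=0)
    total = task_loss(0) + sum(w[k] * task_loss(k + 1) for k in range(n_aux))
    opt_theta.zero_grad()
    total.backward()
    opt_theta.step()
```

Gradient alignment here is only a convenient proxy; the paper's actual objective, per the abstract, is the divergence between the weighted-likelihood surrogate prior and the main task's true prior.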