Self-Supervised Generalisation with Meta Auxiliary Learning

01/25/2019
by Shikun Liu, et al.

Learning with auxiliary tasks has been shown to improve the generalisation of a primary task. However, this comes at the cost of manually labelling additional tasks which may or may not be useful for the primary task. We propose a new method which automatically learns labels for an auxiliary task, such that any supervised learning task can be improved without requiring access to additional data. The approach is to train two neural networks: a label-generation network to predict the auxiliary labels, and a multi-task network to train the primary task alongside the auxiliary task. The loss for the label-generation network incorporates the multi-task network's performance, and so this interaction between the two networks can be seen as a form of meta learning. We show that our proposed method, Meta AuXiliary Learning (MAXL), outperforms single-task learning on 7 image datasets by a significant margin, without requiring additional auxiliary labels. We also show that MAXL outperforms several other baselines for generating auxiliary labels, and is even competitive when compared with human-defined auxiliary labels. The self-supervised nature of our method leads to a promising new direction towards automated generalisation. The source code is available at <https://github.com/lorenmt/maxl>.

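As a rough illustration of the two-network loop described above, here is a minimal, hypothetical PyTorch sketch, not the authors' released implementation (see the repository linked above). The class names `MultiTaskNet` and `LabelGenNet`, the network sizes, the single-step SGD look-ahead, and all hyper-parameters are illustrative assumptions, and the paper's label hierarchy mask and entropy regulariser are omitted.

```python
# Minimal, hypothetical MAXL-style training step (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call


class MultiTaskNet(nn.Module):
    """Shared backbone with a primary head and an auxiliary head."""

    def __init__(self, in_dim=784, hidden=256, n_primary=10, n_aux=20):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.primary_head = nn.Linear(hidden, n_primary)
        self.aux_head = nn.Linear(hidden, n_aux)

    def forward(self, x):
        h = self.backbone(x)
        return self.primary_head(h), self.aux_head(h)


class LabelGenNet(nn.Module):
    """Predicts soft auxiliary labels for each input (the paper additionally
    conditions on the primary class via a hierarchy mask; omitted here)."""

    def __init__(self, in_dim=784, hidden=256, n_aux=20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_aux)
        )

    def forward(self, x):
        return F.softmax(self.net(x), dim=-1)


def soft_ce(logits, soft_targets):
    """Cross-entropy against soft (generated) auxiliary targets."""
    return -(soft_targets * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()


def maxl_step(multi_net, label_net, opt_multi, opt_label, x, y, inner_lr=0.1):
    # (1) Update the multi-task network on the primary loss plus the
    # auxiliary loss, using generated auxiliary targets (detached here).
    aux_targets = label_net(x).detach()
    p_logits, a_logits = multi_net(x)
    loss_multi = F.cross_entropy(p_logits, y) + soft_ce(a_logits, aux_targets)
    opt_multi.zero_grad()
    loss_multi.backward()
    opt_multi.step()

    # (2) Meta update of the label-generation network: simulate one SGD step
    # of the multi-task network under the current (non-detached) auxiliary
    # labels, then backpropagate the resulting primary loss through that
    # simulated step into the label-generation network.
    params = dict(multi_net.named_parameters())
    aux_targets = label_net(x)  # keep the graph this time
    p_logits, a_logits = multi_net(x)
    inner_loss = F.cross_entropy(p_logits, y) + soft_ce(a_logits, aux_targets)
    grads = torch.autograd.grad(inner_loss, list(params.values()), create_graph=True)
    fast_params = {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}
    p_logits_new, _ = functional_call(multi_net, fast_params, (x,))
    meta_loss = F.cross_entropy(p_logits_new, y)
    opt_label.zero_grad()
    meta_loss.backward()
    opt_label.step()
    opt_multi.zero_grad()  # discard gradients the meta step left on the multi-task net


# Toy usage with random data; shapes and learning rates are illustrative only.
multi_net, label_net = MultiTaskNet(), LabelGenNet()
opt_multi = torch.optim.SGD(multi_net.parameters(), lr=0.1)
opt_label = torch.optim.SGD(label_net.parameters(), lr=1e-3)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
maxl_step(multi_net, label_net, opt_multi, opt_label, x, y)
```

The key design choice this sketch tries to capture is that the label-generation network is trained by differentiating through a simulated update of the multi-task network (a second-order gradient), which is how its loss comes to incorporate the multi-task network's performance on the primary task.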