Diverse Distributions of Self-Supervised Tasks for Meta-Learning in NLP

11/02/2021
by Trapit Bansal, et al.

Meta-learning considers the problem of learning an efficient learning process that can leverage its past experience to accurately solve new tasks. However, the efficacy of meta-learning crucially depends on the distribution of tasks available for training, which is often assumed to be known a priori or constructed from limited supervised datasets. In this work, we aim to provide task distributions for meta-learning by considering self-supervised tasks automatically proposed from unlabeled text, enabling large-scale meta-learning in NLP. We design multiple distributions of self-supervised tasks by considering important aspects of task diversity, difficulty, type, domain, and curriculum, and investigate how they affect meta-learning performance. Our analysis shows that all these factors meaningfully alter the task distribution, with some inducing significant improvements in the downstream few-shot accuracy of the meta-learned models. Empirically, results on 20 downstream tasks show significant improvements in few-shot learning: up to +4.2 points of accuracy (on average) over the previous unsupervised meta-learning method, and performance comparable to supervised methods on the FewRel 2.0 benchmark.
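
To make the idea of "tasks proposed from unlabeled text" concrete, the sketch below shows one common construction in the spirit of earlier SMLMT-style unsupervised meta-learning: a cloze-style episode in which a few words act as class labels and sentences containing them, with the word masked, act as examples. This is a minimal illustration under that assumption, not the authors' released code; the function and parameter names (build_cloze_episode, n_way, k_shot) are invented for the example.

```python
import random
from collections import defaultdict

def build_cloze_episode(sentences, n_way=4, k_shot=5, mask_token="[MASK]", seed=0):
    """Build one N-way, K-shot cloze-style classification episode from raw
    sentences: each "class" is a target word, and each example is a sentence
    with that word masked out. Names here are illustrative, not the paper's code."""
    rng = random.Random(seed)

    # Index sentences by the (whitespace-tokenized) words they contain.
    word_to_sents = defaultdict(list)
    for sent in sentences:
        for word in set(sent.split()):
            word_to_sents[word].append(sent)

    # A word can serve as a class only if enough sentences contain it.
    candidates = [w for w, s in word_to_sents.items() if len(s) >= k_shot]
    if len(candidates) < n_way:
        raise ValueError("not enough frequent words to form an episode")

    class_words = rng.sample(candidates, n_way)

    episode = []  # list of (masked_sentence, class_label) pairs
    for label, word in enumerate(class_words):
        for sent in rng.sample(word_to_sents[word], k_shot):
            masked = " ".join(mask_token if tok == word else tok
                              for tok in sent.split())
            episode.append((masked, label))
    rng.shuffle(episode)
    return class_words, episode
```

A meta-learner would sample many such episodes, and the paper's contribution is to vary how they are proposed (word choice, difficulty, task type, domain, and curriculum) to shape the resulting task distribution.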

Related research

11/10/2019 · Learning to Few-Shot Learn Across Diverse Natural Language Classification Tasks
Self-supervised pre-training of transformer models has shown enormous su...

09/17/2020 · Self-Supervised Meta-Learning for Few-Shot Natural Language Classification Tasks
Self-supervised pre-training of transformer models has revolutionized NL...

03/22/2023 · Meta-augmented Prompt Tuning for Better Few-shot Learning
Prompt tuning is a parameter-efficient method, which freezes all PLM par...

04/01/2022 · On the Efficiency of Integrating Self-supervised Learning and Meta-learning for User-defined Few-shot Keyword Spotting
User-defined keyword spotting is a task to detect new spoken terms defin...

06/21/2019 · Meta-learning of textual representations
Recent progress in AutoML has led to state-of-the-art methods (e.g., Au...

10/08/2019 · When Does Self-supervision Improve Few-shot Learning?
We present a technique to improve the generalization of deep representat...

06/27/2023 · Unsupervised Episode Generation for Graph Meta-learning
In this paper, we investigate Unsupervised Episode Generation methods to...
