Pre-training Text Representations as Meta Learning

04/12/2020
by Shangwen Lv, et al.

Pre-training text representations has recently been shown to significantly improve the state of the art in many natural language processing tasks. The central goal of pre-training is to learn text representations that are useful for subsequent tasks. However, existing approaches are optimized by minimizing a proxy objective, such as the negative log-likelihood of language modeling. In this work, we introduce a learning algorithm that directly optimizes the model's ability to learn text representations for effective learning of downstream tasks. We show that there is an intrinsic connection between multi-task pre-training and model-agnostic meta-learning (MAML) with a sequence of meta-train steps: the standard multi-task learning objective adopted in BERT is a special case of our learning algorithm in which the depth of the meta-train loop is zero. We study the problem in two settings, unsupervised pre-training and supervised pre-training, with different pre-training objectives to verify the generality of our approach. Experimental results show that our algorithm brings improvements and learns better initializations for a variety of downstream tasks.
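To make the connection concrete, the sketch below illustrates how a first-order, MAML-style pre-training update with k inner meta-train steps per task collapses to ordinary multi-task pre-training when k = 0. This is a minimal illustration under assumed names, not the authors' implementation: the task batches, loss functions, learning rates, and the first-order approximation are placeholders chosen for brevity.

```python
# Hypothetical sketch: multi-task pre-training with k meta-train steps
# (k = 0 recovers the standard multi-task objective, as in BERT).
import copy
import torch


def meta_pretrain_step(model, tasks, k, inner_lr, meta_optimizer):
    """One meta-update over a batch of pre-training tasks.

    model          -- shared encoder being pre-trained
    tasks          -- iterable of (support_batch, query_batch, loss_fn) triples
    k              -- depth of the inner meta-train loop; k = 0 is plain multi-task learning
    inner_lr       -- learning rate for inner-loop adaptation
    meta_optimizer -- optimizer over the shared initialization
    """
    meta_optimizer.zero_grad()
    for support, query, loss_fn in tasks:
        # First-order approximation: adapt a copy of the model for k steps.
        adapted = copy.deepcopy(model)
        for _ in range(k):
            inner_loss = loss_fn(adapted, support)
            grads = torch.autograd.grad(
                inner_loss, adapted.parameters(), allow_unused=True
            )
            with torch.no_grad():
                for p, g in zip(adapted.parameters(), grads):
                    if g is not None:
                        p -= inner_lr * g
        # Evaluate the adapted parameters on the query batch and accumulate
        # gradients with respect to the shared initialization (first-order).
        outer_loss = loss_fn(adapted, query)
        outer_loss.backward()
        with torch.no_grad():
            for p, p_adapted in zip(model.parameters(), adapted.parameters()):
                if p_adapted.grad is not None:
                    p.grad = (
                        p_adapted.grad.clone()
                        if p.grad is None
                        else p.grad + p_adapted.grad
                    )
    meta_optimizer.step()
```

With k = 0 the adapted copy is identical to the shared model, so the accumulated gradients are exactly those of the summed multi-task losses; with k >= 1 the outer loss is evaluated after per-task adaptation, so the shared initialization is optimized for how well it can be fine-tuned on downstream tasks.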


