Intermediate-Task Transfer Learning with Pretrained Models for Natural Language Understanding: When and Why Does It Work?

05/01/2020
by Yada Pruksachatkun, et al.

While pretrained models such as BERT have shown large gains across natural language understanding tasks, their performance can be improved by further training the model on a data-rich intermediate task, before fine-tuning it on a target task. However, it is still poorly understood when and why intermediate-task training is beneficial for a given target task. To investigate this, we perform a large-scale study on the pretrained RoBERTa model with 110 intermediate-target task combinations. We further evaluate all trained models with 25 probing tasks meant to reveal the specific skills that drive transfer. We observe that intermediate tasks requiring high-level inference and reasoning abilities tend to work best. We also observe that target task performance is strongly correlated with higher-level abilities such as coreference resolution. However, we fail to observe more granular correlations between probing and target task performance, highlighting the need for further work on broad-coverage probing benchmarks. We also observe evidence that the forgetting of knowledge learned during pretraining may limit our analysis, highlighting the need for further work on transfer learning methods in these settings.
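
To make the two-stage procedure concrete, below is a minimal sketch of intermediate-task transfer with a pretrained RoBERTa model, written against the Hugging Face transformers and datasets libraries. The task pairing (MNLI as the data-rich intermediate task, RTE as the target task), the hyperparameters, and the helper functions tokenize_pairs and finetune are illustrative assumptions for this sketch, not the exact configuration used in the study.

# Minimal sketch of intermediate-task transfer learning (illustrative only).
# Stage 1: fine-tune RoBERTa on a data-rich intermediate task (here, MNLI).
# Stage 2: continue from that checkpoint on the target task (here, RTE).
from datasets import load_dataset
from transformers import (
    RobertaTokenizerFast,
    RobertaForSequenceClassification,
    Trainer,
    TrainingArguments,
)

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-large")

def tokenize_pairs(dataset, col_a, col_b):
    """Tokenize a sentence-pair dataset into model inputs."""
    return dataset.map(
        lambda ex: tokenizer(
            ex[col_a], ex[col_b],
            truncation=True, padding="max_length", max_length=128,
        ),
        batched=True,
    )

def finetune(model, train_dataset, output_dir, epochs=3):
    """Run a standard fine-tuning pass and return the trained model."""
    args = TrainingArguments(
        output_dir=output_dir,
        num_train_epochs=epochs,
        per_device_train_batch_size=16,
        learning_rate=1e-5,
    )
    Trainer(model=model, args=args, train_dataset=train_dataset).train()
    return model

# Stage 1: intermediate-task training (MNLI has 3 labels).
mnli = tokenize_pairs(load_dataset("glue", "mnli", split="train"),
                      "premise", "hypothesis")
intermediate = RobertaForSequenceClassification.from_pretrained(
    "roberta-large", num_labels=3
)
finetune(intermediate, mnli, "out/intermediate").save_pretrained("out/intermediate")

# Stage 2: target-task fine-tuning starts from the intermediate checkpoint;
# the classification head is re-initialized to match the target label space.
rte = tokenize_pairs(load_dataset("glue", "rte", split="train"),
                     "sentence1", "sentence2")
target = RobertaForSequenceClassification.from_pretrained(
    "out/intermediate", num_labels=2, ignore_mismatched_sizes=True
)
finetune(target, rte, "out/target")

The key design point the sketch illustrates is that stage 2 reuses the encoder weights from the intermediate checkpoint but re-initializes the classification head, so whatever skills the model acquired on the intermediate task are carried into target-task fine-tuning.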


