Identifying the Limits of Cross-Domain Knowledge Transfer for Pretrained Models

04/17/2021
by Zhengxuan Wu, et al.

There is growing evidence that pretrained language models improve task-specific fine-tuning not just for the languages seen in pretraining, but also for new languages and even non-linguistic data. What is the nature of this surprising cross-domain transfer? We offer a partial answer through a systematic exploration of how much transfer occurs when models are denied any information about word identity via random scrambling. In four classification tasks and two sequence labeling tasks, we evaluate baseline models, LSTMs using GloVe embeddings, and BERT. We find that only BERT shows high rates of transfer into our scrambled domains, and only for the classification tasks, not the sequence labeling ones. Our analyses seek to explain why transfer succeeds for some tasks but not others, to isolate the separate contributions of pretraining versus fine-tuning, and to quantify the role of word frequency. These findings help explain where and why cross-domain transfer occurs, and they can guide future studies and practical fine-tuning efforts.
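
For intuition, the scrambling manipulation can be pictured as a fixed random permutation of the vocabulary applied consistently across a dataset: token frequencies and co-occurrence structure are preserved, but every word's identity, and hence any pretrained knowledge tied to it, becomes uninformative. The sketch below is a minimal illustration of that idea, not the authors' released code; the function name and the token-id data format are assumptions.

```python
import random

def scramble_vocabulary(corpus, seed=0):
    """Remap every token id in `corpus` through one fixed random
    permutation of the vocabulary. `corpus` is a list of sentences,
    each a list of integer token ids. (Illustrative sketch only.)"""
    rng = random.Random(seed)
    # Collect the vocabulary actually used in the corpus.
    vocab = sorted({tok for sent in corpus for tok in sent})
    # Build a random bijection vocab -> vocab.
    shuffled = list(vocab)
    rng.shuffle(shuffled)
    mapping = dict(zip(vocab, shuffled))
    # Apply the same mapping everywhere, so frequency and
    # co-occurrence statistics survive but word identity does not.
    return [[mapping[tok] for tok in sent] for sent in corpus]

# Toy example: two "sentences" of token ids.
corpus = [[5, 7, 5, 9], [7, 2]]
print(scramble_vocabulary(corpus))
```

Because the permutation is a bijection applied uniformly, a model fine-tuned on scrambled data can still exploit distributional regularities; what it cannot do is look up what any token meant during pretraining, which is exactly the information these transfer experiments withhold.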
