How transfer learning impacts linguistic knowledge in deep NLP models?

05/31/2021
by Nadir Durrani et al.

Transfer learning from pre-trained neural language models towards downstream tasks has been a predominant theme in NLP recently. Several researchers have shown that deep NLP models learn a non-trivial amount of linguistic knowledge, captured at different layers of the model. We investigate how fine-tuning towards downstream NLP tasks impacts the learned linguistic knowledge. We carry out a study across the popular pre-trained models BERT, RoBERTa, and XLNet using layer- and neuron-level diagnostic classifiers. We find that for some GLUE tasks the network relies on core linguistic information and preserves it deeper in the network, while for others it forgets this information. Linguistic information is distributed across the pre-trained language models but becomes localized to the lower layers post fine-tuning, reserving the higher layers for task-specific knowledge. The pattern varies across architectures, with BERT retaining linguistic information relatively deeper in the network compared to RoBERTa and XLNet, where it is predominantly delegated to the lower layers.
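
The core methodology described here, layer-wise diagnostic classification (probing), extracts frozen representations from each layer of a model and trains a lightweight classifier on them to predict a linguistic property such as part of speech. The sketch below illustrates this general recipe with Hugging Face transformers and scikit-learn; the model name, toy word/tag data, and probe configuration are illustrative assumptions, not the authors' exact experimental setup.

```python
# Minimal sketch of a layer-wise diagnostic (probing) classifier.
# Assumes: bert-base-cased as the probed model and a hypothetical toy
# POS-tagging task; the authors' actual data and probes differ.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased", output_hidden_states=True)
model.eval()

# Toy supervision: words paired with coarse POS tags (hypothetical data).
words = ["dogs", "run", "quickly", "green", "ideas", "sleep"]
labels = ["NOUN", "VERB", "ADV", "ADJ", "NOUN", "VERB"]

def layer_representation(word, layer):
    """Mean sub-token embedding of `word` at the given layer."""
    enc = tokenizer(word, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).hidden_states[layer]  # (1, seq_len, dim)
    # Drop [CLS]/[SEP]; average the remaining sub-token vectors.
    return hidden[0, 1:-1].mean(dim=0).numpy()

# Train one linear probe per layer (0 = embeddings, 1..12 = transformer
# layers for bert-base). Higher probe accuracy suggests the layer encodes
# the probed linguistic property more accessibly.
for layer in range(model.config.num_hidden_layers + 1):
    X = [layer_representation(w, layer) for w in words]
    probe = LogisticRegression(max_iter=1000).fit(X, labels)
    acc = accuracy_score(labels, probe.predict(X))
    print(f"layer {layer:2d}: train accuracy = {acc:.2f}")
```

Running the same per-layer probes on a model before and after fine-tuning, and comparing the accuracy curves, is how a study of this kind can tell whether linguistic knowledge is retained deeper in the network or pushed down to the lower layers.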


