On the Interplay Between Fine-tuning and Composition in Transformers

05/31/2021
by   Lang Yu, et al.

Pre-trained transformer language models have shown remarkable performance on a variety of NLP tasks. However, recent research has suggested that phrase-level representations in these models reflect heavy influences of lexical content, but lack evidence of sophisticated, compositional phrase information. Here we investigate the impact of fine-tuning on the capacity of contextualized embeddings to capture phrase meaning information beyond lexical content. Specifically, we fine-tune models on an adversarial paraphrase classification task with high lexical overlap, and on a sentiment classification task. After fine-tuning, we analyze phrasal representations in controlled settings following prior work. We find that fine-tuning largely fails to benefit compositionality in these representations, though training on sentiment yields a small, localized benefit for certain models. In follow-up analyses, we identify confounding cues in the paraphrase dataset that may explain the lack of composition benefits from that task, and we discuss potential factors underlying the localized benefits from sentiment training.
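The fine-tuning setup described in the abstract can be sketched as a minimal pair-classification training loop. The toy encoder below is a hypothetical stand-in for a pre-trained transformer such as BERT, kept self-contained so the example runs without downloading weights; all names, dimensions, and hyperparameters are illustrative, not the paper's actual configuration.

```python
# Minimal sketch: fine-tuning an encoder for binary paraphrase
# classification. ToyEncoder is a hypothetical stand-in for a
# pre-trained transformer; in practice it would be a model like BERT.
import torch
import torch.nn as nn

torch.manual_seed(0)
VOCAB, DIM = 100, 16

class ToyEncoder(nn.Module):
    """Stand-in for a pre-trained encoder (hypothetical)."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)

    def forward(self, ids):
        # Mean-pool token embeddings into a single phrase vector.
        return self.emb(ids).mean(dim=1)

class ParaphraseClassifier(nn.Module):
    """Encode both sentences, classify the concatenated pair."""
    def __init__(self):
        super().__init__()
        self.encoder = ToyEncoder()
        self.head = nn.Linear(2 * DIM, 2)  # paraphrase vs. not

    def forward(self, a, b):
        pair = torch.cat([self.encoder(a), self.encoder(b)], dim=-1)
        return self.head(pair)

model = ParaphraseClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Toy sentence pairs as token-id tensors, with paraphrase labels.
a = torch.randint(0, VOCAB, (8, 5))
b = torch.randint(0, VOCAB, (8, 5))
y = torch.randint(0, 2, (8,))

losses = []
for _ in range(50):
    opt.zero_grad()
    loss = loss_fn(model(a, b), y)
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

After fine-tuning, the paper's analysis would probe the encoder's phrase representations directly (e.g. on controlled paraphrase pairs with high lexical overlap) rather than relying on the classification head.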


Related research:

- Assessing Phrasal Representation and Composition in Transformers (10/08/2020)
- Understanding Transformers for Bot Detection in Twitter (04/13/2021)
- A Closer Look at How Fine-tuning Changes BERT (06/27/2021)
- Fine-tuning Tree-LSTM for phrase-level sentiment classification on a Polish dependency treebank. Submission to PolEval task 2 (11/03/2017)
- Still a Pain in the Neck: Evaluating Text Representations on Lexical Composition (02/27/2019)
- Pretraining and Fine-Tuning Strategies for Sentiment Analysis of Latvian Tweets (10/23/2020)
- Automatic Pharma News Categorization (12/28/2021)
