How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings

09/02/2019
by Kawin Ethayarajh, et al.

Replacing static word embeddings with contextualized word representations has yielded significant improvements on many NLP tasks. However, just how contextual are the contextualized representations produced by models such as ELMo and BERT? Are there infinitely many context-specific representations for each word, or are words essentially assigned one of a finite number of word-sense representations? For one, we find that the contextualized representations of all words are not isotropic in any layer of the contextualizing model. While representations of the same word in different contexts still have a greater cosine similarity than those of two different words, this self-similarity is much lower in upper layers. This suggests that upper layers of contextualizing models produce more context-specific representations, much like how upper layers of LSTMs produce more task-specific representations. In all layers of ELMo, BERT, and GPT-2, on average, less than 5% of the variance in a word's contextualized representations can be explained by a static embedding for that word, providing some justification for the success of contextualized representations.
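To make the two measures in the abstract concrete, the following is a minimal Python sketch (using NumPy, with a hypothetical word_reps array standing in for one word's contextualized vectors across contexts). It illustrates average pairwise cosine similarity between occurrences of the same word ("self-similarity") and the share of variance captured by the first principal component of those vectors; it is an illustration under these assumptions, not the paper's exact implementation, which additionally adjusts the metrics for anisotropy.

import numpy as np

def self_similarity(reps):
    # reps: (n_contexts, dim) contextualized vectors for one word.
    # Average cosine similarity over all distinct pairs of contexts.
    normed = reps / np.linalg.norm(reps, axis=1, keepdims=True)
    sims = normed @ normed.T
    n = reps.shape[0]
    return (sims.sum() - n) / (n * (n - 1))  # exclude the diagonal

def variance_explained_by_first_pc(reps):
    # Fraction of variance captured by the first principal component,
    # i.e. how well a single static vector could stand in for the word.
    centered = reps - reps.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)  # singular values
    return (s[0] ** 2) / (s ** 2).sum()

# Toy usage: random vectors standing in for, e.g., 100 occurrences of a
# word in a 768-dimensional layer (hypothetical data, not paper results).
word_reps = np.random.default_rng(0).normal(size=(100, 768))
print(self_similarity(word_reps))
print(variance_explained_by_first_pc(word_reps))

On this reading, a low variance-explained value means no single static vector summarizes a word's contextualized representations well, which is the sense in which the abstract's "less than 5%" figure supports contextualized over static embeddings.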


Related research

11/09/2020 - Catch the "Tails" of BERT
Recently, contextualized word embeddings outperform static word embeddin...

08/27/2018 - Dissecting Contextual Word Embeddings: Architecture and Representation
Contextual word representations derived from pre-trained bidirectional l...

12/17/2019 - The performance evaluation of Multi-representation in the Deep Learning models for Relation Extraction Task
Single implementing, concatenating, adding or replacing of the represent...

07/12/2022 - Using Paraphrases to Study Properties of Contextual Embeddings
We use paraphrases as a unique source of data to analyze contextualized ...

04/25/2020 - Quantifying the Contextualization of Word Representations with Semantic Class Probing
Pretrained language models have achieved a new state of the art on many ...

12/17/2020 - BERT Goes Shopping: Comparing Distributional Models for Product Representations
Word embeddings (e.g., word2vec) have been applied successfully to eComm...

08/31/2021 - Effectiveness of Deep Networks in NLP using BiDAF as an example architecture
Question Answering with NLP has progressed through the evolution of adva...
