Spying on your neighbors: Fine-grained probing of contextual embeddings for information about surrounding words

05/04/2020
by Josef Klafka, et al.

Although models using contextual word embeddings have achieved state-of-the-art results on a host of NLP tasks, little is known about exactly what information these embeddings encode about the context words that they are understood to reflect. To address this question, we introduce a suite of probing tasks that enable fine-grained testing of contextual embeddings for encoding of information about surrounding words. We apply these tasks to examine the popular BERT, ELMo and GPT contextual encoders, and find that each of our tested information types is indeed encoded as contextual information across tokens, often with near-perfect recoverability. However, the encoders vary in which features they distribute to which tokens, how nuanced their distributions are, and how robust the encoding of each feature is to distance. We discuss implications of these results for how different types of models break down and prioritize word-level context information when constructing token embeddings.
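A probing task of this kind can be sketched as training a simple supervised classifier ("probe") on a token's embedding to predict a feature of a neighboring word. The example below is a minimal illustration, not the paper's actual setup: synthetic vectors stand in for real BERT/ELMo/GPT embeddings, with the neighbor's (hypothetical) binary feature injected linearly so a logistic-regression probe can recover it.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_probe(X, y, lr=0.5, epochs=300):
    """Logistic-regression probe trained by plain gradient descent."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)          # probe's predicted probabilities
        w -= lr * (X.T @ (p - y) / n)   # gradient of the log-loss w.r.t. w
        b -= lr * (p - y).mean()        # gradient w.r.t. the bias
    return w, b

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))       # stand-ins for contextual token embeddings
w_true = rng.normal(size=32)         # hypothetical direction encoding the neighbor's feature
y = (X @ w_true > 0).astype(float)   # binary label, e.g. "is the next word a noun?"

w, b = train_probe(X, y)
acc = float((((X @ w + b) > 0) == (y > 0.5)).mean())
```

High probe accuracy is then read as evidence that the target feature is (linearly) recoverable from the embedding; in real experiments the labels come from annotations of the surrounding words rather than a planted direction.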

