Learning to Remove: Towards Isotropic Pre-trained BERT Embedding

04/12/2021
by Yuxin Liang, et al.

Pre-trained language models such as BERT have become a common choice for natural language processing (NLP) tasks. Research on word representation shows that isotropic embeddings can significantly improve performance on downstream tasks. However, we measure and analyze the geometry of the pre-trained BERT embedding space and find that it is far from isotropic. The word vectors are not centered around the origin, and the average cosine similarity between two random words is much higher than zero, which indicates that the word vectors are distributed in a narrow cone, deteriorating the representation capacity of the embedding. We propose a simple yet effective method to fix this problem: remove several dominant directions of the BERT embedding with a set of learnable weights. We train the weights on word similarity tasks and show that the processed embedding is more isotropic. Our method is evaluated on three standardized tasks: word similarity, word analogy, and semantic textual similarity. In all tasks, the word embedding processed by our method consistently outperforms the original embedding (with an average improvement of 13%) as well as baseline methods. Our method also proves more robust to changes in hyperparameters.
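The abstract describes two concrete operations: measuring anisotropy via the average cosine similarity between random word pairs, and softly removing dominant directions of the embedding matrix with a set of per-direction weights. The sketch below is not the authors' released code; it is a minimal NumPy illustration that uses a random stand-in matrix in place of BERT's input embeddings and fixed placeholder weights where the paper instead learns them on word-similarity data.

```python
# Minimal sketch (not the authors' code): diagnose anisotropy of an
# embedding matrix and softly remove its dominant directions.
import numpy as np

def avg_cosine_similarity(emb, n_pairs=10000, seed=0):
    """Average cosine similarity between randomly sampled word pairs.
    Values far above zero indicate the vectors occupy a narrow cone."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, emb.shape[0], size=n_pairs)
    j = rng.integers(0, emb.shape[0], size=n_pairs)
    a = emb[i] / np.linalg.norm(emb[i], axis=1, keepdims=True)
    b = emb[j] / np.linalg.norm(emb[j], axis=1, keepdims=True)
    return float(np.mean(np.sum(a * b, axis=1)))

def remove_dominant_directions(emb, weights):
    """Center the embeddings, then subtract the top-k principal directions,
    each scaled by a weight in [0, 1] (1 = full removal, 0 = keep)."""
    k = len(weights)
    centered = emb - emb.mean(axis=0)
    # Top-k principal directions via SVD of the centered matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    directions = vt[:k]                    # shape (k, dim)
    projections = centered @ directions.T  # shape (vocab, k)
    return centered - (projections * weights) @ directions

if __name__ == "__main__":
    # Stand-in for an embedding matrix (vocab x hidden size); the constant
    # offset makes the toy data anisotropic, like the real BERT embeddings.
    toy = np.random.randn(10000, 768).astype(np.float32) * 0.02 + 0.5
    print("before:", avg_cosine_similarity(toy))
    # Placeholder weights; the paper learns these on word-similarity tasks.
    processed = remove_dominant_directions(toy, weights=np.array([1.0, 0.8, 0.5]))
    print("after: ", avg_cosine_similarity(processed))
```

Printing the average cosine similarity before and after the removal step shows the intended effect: the toy matrix starts close to 1 (a narrow cone) and drops toward 0 once the offset and dominant directions are taken out.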

