SensePOLAR: Word sense aware interpretability for pre-trained contextual word embeddings

01/11/2023
by Jan Engler et al.

Adding interpretability to word embeddings represents an area of active research in text representation. Recent work has explored the potential of embedding words via so-called polar dimensions (e.g. good vs. bad, correct vs. wrong). Examples of such recent approaches include SemAxis, POLAR, FrameAxis, and BiImp. Although these approaches provide interpretable dimensions for words, they have not been designed to deal with polysemy, i.e. they cannot easily distinguish between different senses of words. To address this limitation, we present SensePOLAR, an extension of the original POLAR framework that enables word-sense aware interpretability for pre-trained contextual word embeddings. The resulting interpretable word embeddings achieve a level of performance that is comparable to original contextual word embeddings across a variety of natural language processing tasks, including the GLUE and SQuAD benchmarks. Our work removes a fundamental limitation of existing approaches by offering users sense-aware interpretations for contextual word embeddings.
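The core idea behind polar-dimension approaches such as POLAR and SensePOLAR can be sketched as projecting a word's embedding onto an axis defined by an antonym pair. The snippet below is a minimal illustration of that projection only, not the authors' implementation; the vectors are randomly generated placeholders, whereas the real framework uses pre-trained (contextual) embeddings and many curated antonym pairs.

```python
import numpy as np

# Minimal sketch of the polar-projection idea: score a word embedding
# along an interpretable axis defined by an antonym pair (good vs. bad).
# All vectors here are toy placeholders, not real embeddings.

rng = np.random.default_rng(0)
dim = 8

good = rng.normal(size=dim)                      # placeholder embedding for "good"
bad = rng.normal(size=dim)                       # placeholder embedding for "bad"
word = 0.8 * good + 0.2 * rng.normal(size=dim)   # a word leaning toward "good"

# Polar axis: normalized direction pointing from "bad" to "good".
axis = (good - bad) / np.linalg.norm(good - bad)

# The dot product gives the word's position on the interpretable axis:
# positive values lean toward "good", negative values toward "bad".
score = float(word @ axis)
print(f"good-vs-bad score: {score:.3f}")
```

Stacking such scores over many antonym pairs yields a vector whose dimensions are individually interpretable; a sense-aware variant would compute the projection from a contextual embedding of the word in a specific sentence rather than from a static vector.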


research
01/27/2020

The POLAR Framework: Polar Opposites Enable Interpretability of Pre-Trained Word Embeddings

We introduce POLAR - a framework that adds interpretability to pre-train...
research
09/18/2019

Cross-Lingual Contextual Word Embeddings Mapping With Multi-Sense Words In Mind

Recent work in cross-lingual contextual word embedding learning cannot h...
research
11/05/2019

Incremental Sense Weight Training for the Interpretation of Contextualized Word Embeddings

We present a novel online algorithm that learns the essence of each dime...
research
06/23/2020

Supervised Understanding of Word Embeddings

Pre-trained word embeddings are widely used for transfer learning in nat...
research
09/18/2021

Augmenting semantic lexicons using word embeddings and transfer learning

Sentiment-aware intelligent systems are essential to a wide array of app...
research
04/02/2019

Identification, Interpretability, and Bayesian Word Embeddings

Social scientists have recently turned to analyzing text using tools fro...
research
11/14/2019

Sparse associative memory based on contextual code learning for disambiguating word senses

In recent literature, contextual pretrained Language Models (LMs) demons...
