The POLAR Framework: Polar Opposites Enable Interpretability of Pre-Trained Word Embeddings

01/27/2020
by Binny Mathew et al.

We introduce POLAR - a framework that adds interpretability to pre-trained word embeddings via the adoption of semantic differentials. Semantic differentials are a psychometric construct for measuring the semantics of a word by analysing its position on a scale between two polar opposites (e.g., cold – hot, soft – hard). The core idea of our approach is to transform existing, pre-trained word embeddings via semantic differentials to a new "polar" space with interpretable dimensions defined by such polar opposites. Our framework also allows for selecting the most discriminative dimensions from a set of polar dimensions provided by an oracle, i.e., an external source. We demonstrate the effectiveness of our framework by deploying it to various downstream tasks, in which our interpretable word embeddings achieve a performance that is comparable to the original word embeddings. We also show that the interpretable dimensions selected by our framework align with human judgement. Together, these results demonstrate that interpretability can be added to word embeddings without compromising performance. Our work is relevant for researchers and engineers interested in interpreting pre-trained word embeddings.
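The core transformation can be sketched in a few lines. The toy vectors, dimensionality, and use of a pseudo-inverse for the change of basis below are illustrative assumptions, not the paper's exact implementation: each polar dimension is defined by the difference vector of an antonym pair, and a word's polar coordinates are obtained by expressing its original embedding in that basis.

```python
import numpy as np

# Toy "pre-trained" embeddings (hypothetical 4-dim vectors; the paper uses
# real embeddings such as GloVe or word2vec with hundreds of dimensions).
emb = {
    "cold": np.array([0.9, 0.1, 0.0, 0.2]),
    "hot":  np.array([-0.8, 0.2, 0.1, 0.1]),
    "soft": np.array([0.1, 0.8, -0.1, 0.0]),
    "hard": np.array([0.0, -0.9, 0.2, 0.1]),
    "ice":  np.array([0.7, 0.3, 0.1, 0.0]),
}

# Semantic differentials: each polar axis is a pair of opposites.
pairs = [("cold", "hot"), ("soft", "hard")]

# Direction matrix: one row per axis, the difference of the pair's vectors.
D = np.stack([emb[a] - emb[b] for a, b in pairs])

def to_polar(vec):
    """Express `vec` in the polar basis by solving D.T @ x ~= vec
    in the least-squares sense via the pseudo-inverse."""
    return np.linalg.pinv(D.T) @ vec

polar_ice = to_polar(emb["ice"])
# polar_ice[0] is the coordinate on the cold-hot axis; a positive value
# means "ice" leans toward "cold" in this toy setup.
```

Each coordinate of the resulting vector is directly readable as a position on a named scale (cold–hot, soft–hard), which is what makes the transformed embedding interpretable.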


Related research

- SensePOLAR: Word sense aware interpretability for pre-trained contextual word embeddings (01/11/2023)
  Adding interpretability to word embeddings represents an area of active ...

- Incremental Sense Weight Training for the Interpretation of Contextualized Word Embeddings (11/05/2019)
  We present a novel online algorithm that learns the essence of each dime...

- Interpretable Word Embeddings via Informative Priors (09/03/2019)
  Word embeddings have demonstrated strong performance on NLP tasks. Howev...

- Analytical Methods for Interpretable Ultradense Word Embeddings (04/18/2019)
  Word embeddings are useful for a wide variety of tasks, but they lack in...

- Embodying Pre-Trained Word Embeddings Through Robot Actions (04/17/2021)
  We propose a promising neural network model with which to acquire a grou...

- Supervised Understanding of Word Embeddings (06/23/2020)
  Pre-trained word embeddings are widely used for transfer learning in nat...

- Leader: Prefixing a Length for Faster Word Vector Serialization (09/29/2020)
  Two competing file formats have become the de facto standards for distri...
