Supervised Understanding of Word Embeddings

06/23/2020
by Halid Ziya Yerebakan, et al.

Pre-trained word embeddings are widely used for transfer learning in natural language processing. Embeddings are continuous, distributed representations of words that preserve their similarities in compact Euclidean spaces. However, the dimensions of these spaces do not offer any clear interpretation. In this study, we obtain supervised projections in the form of linear keyword-level classifiers trained on word embeddings. We show that the method creates interpretable projections of the original embedding dimensions. Activations of the trained classifier nodes correspond to a subset of the words in the vocabulary; thus, they behave like dictionary features while having the merit of continuous-valued output. Additionally, such dictionaries can be grown iteratively over multiple rounds by adding expert labels for top-scoring words to an initial collection of keywords. The same classifiers can also be applied to aligned word embeddings in other languages to obtain corresponding dictionaries. In our experiments, we show that initializing higher-order networks with these classifier weights yields more accurate models for downstream NLP tasks. We further demonstrate the usefulness of supervised dimensions in revealing the polysemous nature of a keyword of interest by projecting its embedding with the learned classifiers in different subspaces.
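The keyword-level classifier construction described above can be sketched in a few lines of Python. The snippet below is a minimal illustration, not the authors' implementation: the toy vocabulary, the random stand-in embedding matrix, the seed keyword set, and the choice of scikit-learn's LogisticRegression as the linear classifier are all assumptions made here for clarity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical inputs: a vocabulary and a matrix of pre-trained word
# embeddings (one row per word), e.g. loaded from GloVe or word2vec.
vocab = ["fever", "cough", "headache", "table", "car", "river", "nausea", "bridge"]
emb = np.random.randn(len(vocab), 300)  # stand-in for real 300-d embeddings

# Seed dictionary: an initial collection of expert-chosen keywords.
seed_keywords = {"fever", "cough", "headache"}

# Labels: 1 for seed keywords, 0 for the remaining vocabulary.
y = np.array([1 if w in seed_keywords else 0 for w in vocab])

# Linear keyword-level classifier on the embedding space; its weight
# vector defines one supervised, interpretable projection dimension.
clf = LogisticRegression(max_iter=1000).fit(emb, y)

# Continuous activation for every word in the vocabulary: behaves like
# a dictionary feature, but with graded membership scores.
scores = clf.decision_function(emb)

# Iterative growth: rank non-seed words by score, show the top-scoring
# candidates to an expert, add the accepted ones, and retrain.
order = np.argsort(-scores)
candidates = [vocab[i] for i in order if vocab[i] not in seed_keywords]
print("Candidate keywords to review:", candidates[:5])
```

The learned weight vector is the supervised dimension itself: per the abstract, the same classifier can be applied to aligned embeddings in another language to induce a corresponding dictionary, or used to initialize the weights of a higher-order network for a downstream task.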


