Imparting Interpretability to Word Embeddings

07/19/2018
by Aykut Koc, et al.

As a ubiquitous method in natural language processing, word embeddings are extensively employed to map the semantic properties of words into dense vector representations. They capture semantic and syntactic relations among words, but the vectors are only meaningful relative to each other; neither a vector nor its individual dimensions has any absolute, interpretable meaning. We introduce an additive modification to the objective function of the embedding learning algorithm that encourages the embedding vectors of words semantically related to a predefined concept to take larger values along a specified dimension, while leaving the original semantic learning mechanism mostly unaffected. In other words, we align words that are already determined to be related along predefined concepts. We thereby impart interpretability to the word embedding by assigning meaning to its vector dimensions. The predefined concepts are derived from an external lexical resource, which in this paper is Roget's Thesaurus. We observe that alignment along the chosen concepts is not limited to words in the Thesaurus and extends to other related words as well. We quantify the extent of interpretability and meaning assignment from our experimental results, and we demonstrate that the resulting vector space preserves semantic coherence using word-analogy and word-similarity tests. These tests show that the interpretability-imparted word embeddings obtained by the proposed framework do not sacrifice performance on common benchmarks.
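To make the idea concrete, the toy PyTorch sketch below shows one way such an additive term could be attached to a GloVe-style objective. It is a minimal illustration, not the authors' published formulation: the concept groups in `concept_words`, the penalty strength `lambda_interp`, and the exact functional form of the added term (a negative mean of the targeted dimension) are all illustrative assumptions.

```python
# Minimal sketch: GloVe-style loss plus an additive interpretability term.
# Assumptions (not taken from the paper): the concept lexicon below, the
# penalty strength, and the negative-mean form of the added term.
import torch

vocab_size, dim = 1000, 50                              # toy sizes
W = torch.randn(vocab_size, dim, requires_grad=True)    # word vectors
C = torch.randn(vocab_size, dim, requires_grad=True)    # context vectors
b_w = torch.zeros(vocab_size, requires_grad=True)       # word biases
b_c = torch.zeros(vocab_size, requires_grad=True)       # context biases

# Hypothetical concept lexicon mapping concept index -> word indices.
# In the paper, these groups come from Roget's Thesaurus.
concept_words = {0: [3, 17, 42], 1: [5, 8], 2: [7, 99, 123]}
lambda_interp = 0.1                                     # strength of the added term

def glove_weight(x, x_max=100.0, alpha=0.75):
    """Standard GloVe weighting function f(X_ij)."""
    return torch.clamp(x / x_max, max=1.0) ** alpha

def loss(i, j, x_ij):
    """GloVe loss for a batch of co-occurrence triples (i, j, X_ij),
    plus the additive interpretability term on the word vectors."""
    pred = (W[i] * C[j]).sum(dim=1) + b_w[i] + b_c[j]
    base = (glove_weight(x_ij) * (pred - torch.log(x_ij)) ** 2).mean()

    # Additive term: reward words in concept k for taking large values
    # along dimension k; the paper's exact functional form may differ.
    interp = torch.tensor(0.0)
    for k, words in concept_words.items():
        interp = interp - W[words, k].mean()
    return base + lambda_interp * interp

# Usage on a dummy batch of co-occurrence counts:
i = torch.tensor([1, 2, 3])
j = torch.tensor([4, 5, 6])
x = torch.tensor([10.0, 2.0, 30.0])
total = loss(i, j, x)
total.backward()   # gradients flow through both terms
```

Because the interpretability term is simply added to the base objective, the original co-occurrence fitting is left mostly intact, which matches the abstract's claim that semantic coherence is preserved on word-analogy and word-similarity benchmarks.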

Related research

11/01/2017 | Semantic Structure and Interpretability of Word Embeddings
Dense word embeddings, which encode semantic meanings of words to low di...

08/14/2018 | Embedding Grammars
Classic grammars and regular expressions can be used for a variety of pu...

06/09/2019 | Probing for Semantic Classes: Diagnosing the Meaning Content of Word Embeddings
Word embeddings typically represent different meanings of a word in a si...

06/17/2020 | On the Learnability of Concepts: With Applications to Comparing Word Embedding Algorithms
Word Embeddings are used widely in multiple Natural Language Processing ...

12/16/2019 | Scale-dependent Relationships in Natural Language
Natural language exhibits statistical dependencies at a wide range of sc...

09/05/2018 | Firearms and Tigers are Dangerous, Kitchen Knives and Zebras are Not: Testing whether Word Embeddings Can Tell
This paper presents an approach for investigating the nature of semantic...

05/29/2018 | Quantum-inspired Complex Word Embedding
A challenging task for word embeddings is to capture the emergent meanin...
