Neural Vector Conceptualization for Word Vector Space Interpretation

04/02/2019
by Robert Schwarzenberg, et al.

Distributed word vector spaces are considered hard to interpret, which hinders the understanding of natural language processing (NLP) models. In this work, we introduce a new method to interpret arbitrary samples from a word vector space. To this end, we train a neural model to conceptualize word vectors: given a vector, it activates the higher-order concepts it recognizes in that vector. In contrast to prior approaches, our model operates in the original vector space and is capable of learning non-linear relations between word vectors and concepts. Furthermore, we show that it produces considerably less entropic concept activation profiles than the popular cosine similarity.
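The paper's own implementation is not reproduced here. As a rough sketch of the idea, assuming pre-trained word vectors (e.g. 300-dimensional) and a fixed inventory of higher-order concepts, a non-linear conceptualization model and a cosine-similarity baseline could be compared on the entropy of their concept activation profiles roughly as follows. All class names, layer sizes, and the softmax normalization are illustrative assumptions, not the authors' architecture.

```python
# Illustrative sketch only -- not the authors' code. Assumes 300-d word
# vectors and a fixed set of 100 higher-order concepts; all sizes are
# arbitrary placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NeuralConceptualizer(nn.Module):
    """Maps a word vector to activations over higher-order concepts."""

    def __init__(self, vec_dim=300, hidden_dim=512, n_concepts=100):
        super().__init__()
        # Non-linear mapping that operates directly in the original
        # word vector space (no projection into a new space).
        self.net = nn.Sequential(
            nn.Linear(vec_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, n_concepts),
        )

    def forward(self, word_vec):
        # Concept activation profile: one probability per concept.
        return F.softmax(self.net(word_vec), dim=-1)


def profile_entropy(probs, eps=1e-12):
    """Shannon entropy of a concept activation profile (lower = sharper)."""
    return -(probs * (probs + eps).log()).sum(dim=-1)


def cosine_profile(word_vec, concept_vecs):
    """Baseline: cosine similarity of a word vector to concept vectors,
    renormalized with softmax so entropies are comparable."""
    sims = F.cosine_similarity(word_vec.unsqueeze(0), concept_vecs, dim=-1)
    return F.softmax(sims, dim=-1)


if __name__ == "__main__":
    torch.manual_seed(0)
    model = NeuralConceptualizer()
    vec = torch.randn(300)            # stand-in for a pre-trained word vector
    concepts = torch.randn(100, 300)  # stand-in concept representatives
    print("model profile entropy :", profile_entropy(model(vec)).item())
    print("cosine profile entropy:", profile_entropy(cosine_profile(vec, concepts)).item())
```

A lower-entropy profile concentrates its activation mass on fewer concepts, which is the property the abstract contrasts against the cosine-similarity baseline.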

Related research

Issues in evaluating semantic spaces using word analogies (06/24/2016)
The offset method for solving word analogies has become a standard evalu...

Towards Understanding Linear Word Analogies (10/11/2018)
A surprising property of word vectors is that vector algebra can often b...

Text Similarity in Vector Space Models: A Comparative Study (09/24/2018)
Automatic measurement of semantic text similarity is an important task i...

Deriving a Representative Vector for Ontology Classes with Instance Word Vector Embeddings (06/08/2017)
Selecting a representative vector for a set of vectors is a very common ...

High Fidelity Vector Space Models of Structured Data (01/09/2019)
Machine learning systems regularly deal with structured data in real-wor...

Fuzzy paraphrases in learning word representations with a lexicon (11/02/2016)
A synonym of a polysemous word is usually only the paraphrase of one sen...

Semantic Spaces (05/13/2016)
Any natural language can be considered as a tool for producing large dat...
