Predicting Concreteness and Imageability of Words Within and Across Languages via Word Embeddings
The notions of concreteness and imageability, traditionally important in psycholinguistics, are gaining significance in semantic-oriented natural language processing tasks. In this paper we investigate the predictability of these two concepts via supervised learning, using word embeddings as explanatory variables. We perform predictions both within and across languages by exploiting collections of cross-lingual embeddings aligned to a single vector space. We show that the notions of concreteness and imageability are highly predictable both within and across languages, with a moderate loss of up to 20% in correlation when predicting across languages. We further show that cross-lingual transfer via word embeddings is more efficient than simple transfer via bilingual dictionaries.
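As a minimal illustration of the within-language setup described above, one can fit a regressor from pre-trained embedding vectors to human concreteness ratings. This is a sketch under assumptions, not the authors' actual pipeline: the file names, embedding source, and the choice of ridge regression are all illustrative.

```python
# Sketch: predicting concreteness ratings from word embeddings.
# Assumptions (not from the paper): gensim KeyedVectors for embeddings,
# a hypothetical ratings file "concreteness.tsv" (word<TAB>rating),
# and ridge regression as the supervised model.
import csv

import numpy as np
from gensim.models import KeyedVectors
from scipy.stats import spearmanr
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Load pre-trained embeddings (the path is a placeholder).
vectors = KeyedVectors.load_word2vec_format("embeddings.vec")

# Load word -> concreteness rating pairs, keeping in-vocabulary words only.
words, ratings = [], []
with open("concreteness.tsv", encoding="utf-8") as f:
    for word, rating in csv.reader(f, delimiter="\t"):
        if word in vectors:
            words.append(word)
            ratings.append(float(rating))

X = np.stack([vectors[w] for w in words])  # embeddings as explanatory variables
y = np.array(ratings)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = Ridge().fit(X_train, y_train)

# Evaluate with the correlation between predicted and gold ratings.
rho, _ = spearmanr(model.predict(X_test), y_test)
print(f"Spearman correlation: {rho:.3f}")
```

The cross-lingual setting would follow the same pattern: train the regressor on ratings in one language, then apply it to another language's words, provided both languages' embeddings have been aligned to a single vector space.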