To represent natural language on computers, researchers have used distributed word representations, in which a word is represented as an n-dimensional real-valued vector under the hypothesis that some or all of its dimensions capture the word's semantic meaning. These representations have worked well in various NLP tasks, replacing one-hot representations Turian et al. (2010). Two major algorithms for learning distributed word representations are CBOW (Continuous Bag-of-Words) and skip-gram Mikolov et al. (2013b). Both learn the representations with a single-hidden-layer neural network. The difference is that CBOW predicts a center word from its neighbor words, whereas skip-gram predicts the neighbor words from the center word. Both algorithms therefore depend on word order, since their objective is to maximize the probability of neighbor words occurring given a center word. This raises a problem: the resulting word representations carry no information to distinguish synonyms from antonyms. For example, worthy and desirable should be mapped close together in the vector space, while agree and disagree should be mapped far apart, even though both pairs occur in very similar contexts. Researchers have addressed this problem mainly by using semantic lexicons Faruqui et al. (2014); Mrkšić et al. (2016); Speer et al. (2017); Vulić et al. (2017); Camacho-Collados et al. (2015). One successful work is Faruqui's retrofitting (the retrofitting code is available at https://github.com/mfaruqui/retrofitting), which can be summarized as pulling the word vectors of synonyms close together by weighted averaging on a fixed vector space (explained in Section 2.1). Retrofitting greatly improves word similarity between synonyms, and the result not only corresponds with human intuition about words but also performs better on document classification tasks compared to the original word embeddings Kiela et al. (2015). Building on the idea of retrofitting, we hypothesize that semantic lexicons can enrich not only the word representations but also the vector space itself (our code is available at https://github.com/HwiyeolJo/Extrofitting). We call our method extrofitting: it retrofits word vectors by expanding their dimensions.
2.1 Retrofitting

Retrofitting Faruqui et al. (2014) is a post-processing method that enriches word vectors using synonyms from semantic lexicons. The algorithm learns the word embedding matrix $Q$ with the objective function:

$$\Psi(Q) = \sum_{i=1}^{n}\Big[\alpha_i \lVert q_i - \hat{q}_i \rVert^2 + \sum_{(i,j)\in E} \beta_{ij} \lVert q_i - q_j \rVert^2\Big]$$

where an original word vector is $\hat{q}_i$, its synonym vector is $q_j$, the inferred word vector is $q_i$, and $E$ is the set of synonym pairs in the lexicon. The hyperparameters $\alpha$ and $\beta$ control the relative strengths of the associations. Each $q_i$ can be derived by the following online update:

$$q_i = \frac{\sum_{j:(i,j)\in E} \beta_{ij} q_j + \alpha_i \hat{q}_i}{\sum_{j:(i,j)\in E} \beta_{ij} + \alpha_i}$$
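As an illustration, the online update can be sketched in a few lines of NumPy. The dictionary-based interface is our own; setting $\alpha_i = 1$ and $\beta_{ij}$ to the inverse degree of word $i$ is one common choice, used here as an assumption:

```python
import numpy as np

def retrofit(word_vecs, synonym_graph, alpha=1.0, iters=10):
    """Sketch of the retrofitting online update.

    word_vecs: dict word -> original vector (q_hat).
    synonym_graph: dict word -> list of synonyms (the edge set E).
    beta_ij is set to 1/degree(i); alpha weighs attachment to the
    original vector.
    """
    new_vecs = {w: v.copy() for w, v in word_vecs.items()}
    for _ in range(iters):
        for word, neighbors in synonym_graph.items():
            neighbors = [n for n in neighbors if n in new_vecs]
            if word not in new_vecs or not neighbors:
                continue
            beta = 1.0 / len(neighbors)
            # q_i <- (sum_j beta_ij q_j + alpha_i q_hat_i) / (sum_j beta_ij + alpha_i)
            num = beta * sum(new_vecs[n] for n in neighbors) + alpha * word_vecs[word]
            new_vecs[word] = num / (beta * len(neighbors) + alpha)
    return new_vecs
```

Each update moves a word vector toward the (weighted) centroid of its synonyms while staying anchored to its original position.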
2.2 Linear Discriminant Analysis (LDA)
LDA Welling (2005) is a dimension reduction algorithm that projects data into a different vector space while minimizing the loss of class information as much as possible. The algorithm finds a linear subspace that minimizes the distance between data points within the same class and maximizes the distance between different classes. It can be summarized as follows:
Calculating the between-class scatter matrix $S_b$ and the within-class scatter matrix $S_w$.
When we denote the data as $x$ and the classes as $c \in \{1, \dots, C\}$, $S_b$ and $S_w$ can be formulated as follows:

$$S_b = \sum_{c=1}^{C} N_c (\mu_c - \mu)(\mu_c - \mu)^\top \qquad S_w = \sum_{c=1}^{C} \sum_{i \in c} (x_i - \mu_c)(x_i - \mu_c)^\top$$

where the overall mean of the data is $\mu$, the mean within class $c$ is denoted by $\mu_c$, and $N_c$ is the number of data points in class $c$.
Maximizing the objective function $J(U)$.
The objective function that we should maximize can be defined as

$$J(U) = \frac{|U^\top S_b U|}{|U^\top S_w U|}$$

and its solution can be reduced to finding a $U$ that satisfies

$$S_w^{-1} S_b \, u_i = \lambda_i u_i.$$

Therefore, $U$ is derived by eigen-decomposition of $S_w^{-1} S_b$: choosing the eigenvectors $u_i$ with the top-$K$ eigenvalues $\lambda_i$ and composing the transform matrix $U = [u_1, \dots, u_K]$.
Transforming the data onto the new vector space.
Using the transform matrix $U$, we can get the transformed data $y$ by

$$y = U^\top x.$$
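The three steps above can be sketched with NumPy. This is a minimal illustration, not a production implementation; a pseudo-inverse guards against a singular within-class scatter matrix:

```python
import numpy as np

def lda_transform(X, y, k):
    """LDA dimension reduction via eigen-decomposition of Sw^{-1} Sb.

    X: (n_samples, d) data, y: class labels, k: target dimension.
    """
    classes = np.unique(y)
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        diff = (mu_c - mu)[:, None]
        Sb += len(Xc) * diff @ diff.T        # between-class scatter
        Sw += (Xc - mu_c).T @ (Xc - mu_c)    # within-class scatter
    # Solve Sw^{-1} Sb u = lambda u and keep the top-k eigenvectors
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1][:k]
    U = eigvecs[:, order].real               # (d, k) transform matrix
    return X @ U                             # y = U^T x for each row
```

The returned data lie in a k-dimensional space where same-class points are pulled together and different classes pushed apart.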
3 Enriching Representations of Word Vector and The Vector Space
3.1 Expanding Word Vector with Enrichment
We simply enrich the word vectors by expanding their dimensions: we append one or more dimensions to the original vectors, filled with a representative value $r_w$, for which we take the mean. We denote an original word vector as $w \in \mathbb{R}^D$, where $D$ is the dimensionality of the word vectors. Then the representative value can be formulated as $r_w = \frac{1}{D}\sum_{d=1}^{D} w_d$. Intuitively, the more additional dimensions we expand, the more the word vectors strengthen their own meaning. Likewise, the ratio of the number of expanded dimensions to the number of original dimensions affects the meaning of the word vectors.
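A minimal sketch of this expansion step (the function and variable names are ours):

```python
import numpy as np

def expand(vecs, extra_dims=1):
    """Append extra dimension(s) to D-dim word vectors, each filled
    with the vector's representative value r_w = (1/D) * sum_d w_d."""
    r = vecs.mean(axis=1, keepdims=True)          # (V, 1) representative values
    return np.hstack([vecs] + [r] * extra_dims)   # (V, D + extra_dims)
```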
3.2 Transferring Semantic Knowledge
To transfer semantic knowledge onto the representative value $r_w$, we again take a simple approach: we average all the representative values of each synonym group and substitute the average for each previous value. We take the synonym pairs from the lexicons introduced in Section 4.2. The transferred representative value can be formulated as $r_s = \frac{1}{|s|}\sum_{w \in s} r_w$, where the lexicon $L = \{s_1, \dots, s_N\}$ consists of synonym groups $s$, and $|s|$ is the number of synonyms in the group. This manipulation pulls the representations of synonym pairs close to one another.
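The transfer step can be sketched as follows: given an expanded embedding matrix whose last column holds the representative values, every word in a synonym group receives the group average. The dictionary-based interface is an assumption for illustration:

```python
import numpy as np

def transfer_synonym_knowledge(expanded, vocab_index, lexicon):
    """Replace each word's representative value (the appended last
    dimension) with the average over its synonym group.

    expanded: (V, D+1) matrix from the expansion step.
    vocab_index: dict word -> row index. lexicon: list of synonym groups.
    """
    out = expanded.copy()
    for group in lexicon:
        rows = [vocab_index[w] for w in group if w in vocab_index]
        if rows:
            # r_s = (1/|s|) * sum of the group's representative values
            out[rows, -1] = expanded[rows, -1].mean()
    return out
```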
3.3 Enriching Vector Space
With the enriched word vectors and the semantic knowledge, we perform Linear Discriminant Analysis for dimension reduction while clustering the synonyms from the semantic knowledge. LDA finds a new vector space that clusters and differentiates the labeled data, which are the synonym pairs in this experiment. We can get the extrofitted word embedding matrix as follows:

$$W' = \mathrm{LDA}(W, c)$$

where $W$ is the word embedding matrix composed of the expanded word vectors and $c$ is the index of the synonym pair each word belongs to, used as its class label. We implement our method in Python 2.7 with scikit-learn Pedregosa et al. (2011).
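This projection step can be sketched with scikit-learn's LDA implementation. The label scheme (one class index per synonym group, with singleton labels for words absent from the lexicon) follows the description above, while the function signature is ours:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def extrofit(expanded, labels, out_dim):
    """Project the expanded embedding matrix back to out_dim with LDA,
    using synonym-pair indices as class labels.

    Note: scikit-learn requires out_dim <= min(n_classes - 1, n_features),
    which holds in practice since the number of synonym groups is large.
    """
    lda = LinearDiscriminantAnalysis(n_components=out_dim)
    return lda.fit_transform(expanded, labels)
```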
4 Experiment Data
4.1 Pretrained Word Vectors
GloVe Pennington et al. (2014) has many variants with respect to word dimension, number of tokens, and training sources. We use glove.6B, trained on Wikipedia+Gigaword, and glove.42B.300d, trained on Common Crawl. The other pretrained GloVe vectors do not fit our experiments because they have different word dimensions or are case-sensitive. We also use 300-dimensional Word2Vec Mikolov et al. (2013a) vectors trained with negative sampling on the GoogleNews corpus. Fasttext Bojanowski et al. (2016) is an extension of Word2Vec that utilizes subword information to represent an original word. We use 300-dimensional pretrained Fasttext vectors trained on Wikipedia with skip-gram (wiki.en.vec).
4.2 Semantic Lexicons
We borrow the semantic lexicons from retrofitting Faruqui et al. (2014). Faruqui et al. extracted the synonyms from PPDB Ganitkevitch et al. (2013) by finding words that share more than two corresponding words in another language. Retrofitting also used the WordNet Miller (1995) database, which groups words into sets of synonyms (synsets). We use two versions of the WordNet lexicon: one consisting of synonyms only (WordNet_syn) and the other additionally including hypernyms and hyponyms (WordNet_all). Lastly, synonyms were extracted from FrameNet Baker et al. (1998), which contains more than 200,000 manually annotated sentences linked to semantic frames. Faruqui et al. regarded words as synonyms if they can be grouped under any of the frames.
4.3 Evaluation Data
We evaluate our method on word similarity tasks using four different datasets. MEN-3k Bruni et al. (2014) consists of 3,000 word pairs rated from 0 to 50. WordSim-353 Finkelstein et al. (2001) consists of 353 word pairs rated from 0 to 10. SimLex-999 Hill et al. (2015) includes 999 word pairs rated from 0 to 10. RG-65 Rubenstein and Goodenough (1965) has 65 word pairs scored from 0 to 4. MEN-3k and WordSim-353 are split into train (or dev) and test sets, but we combine them, since we use them solely for evaluation. Other datasets contain many out-of-vocabulary words, so we leave them for future work.
[Table 3: top-10 nearest words (with cosine similarity scores) of each cue word, by method.]
5 Experiments on Word Similarity Task
The word similarity task calculates Spearman's correlation Daniel (1990) between human similarity ratings of word pairs and the similarities computed from the word vectors. We first apply extrofitting to GloVe vectors trained on different data sources and present the results in Table 1.
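The evaluation protocol can be sketched as follows; computing model similarity as cosine similarity and skipping out-of-vocabulary pairs are common conventions, assumed here:

```python
import numpy as np
from scipy.stats import spearmanr

def evaluate_similarity(word_vecs, pairs):
    """Spearman correlation between human similarity ratings and
    cosine similarity of word vectors.

    word_vecs: dict word -> vector. pairs: (w1, w2, human_score) triples.
    """
    human, model = [], []
    for w1, w2, score in pairs:
        if w1 in word_vecs and w2 in word_vecs:
            v1, v2 = word_vecs[w1], word_vecs[w2]
            cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
            human.append(score)
            model.append(cos)
    return spearmanr(human, model)[0]
```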
The result shows that although fewer words are extrofitted with FrameNet than with the other lexicons, its performance is on par with them. We can also confirm that our method improves the performance of the original pretrained word vectors.
Next, we perform extrofitting on GloVe vectors of different word dimensions and compare the performance with retrofitting. We use the WordNet lexicon for both retrofitting and extrofitting, to compare the performances in the ideal environment for retrofitting. We present the results in Table 2. Our method outperforms retrofitting on some of the word similarity tasks, MEN-3k and WordSim-353. We believe extrofitting is less powerful on SimLex-999 and RG-65 because all word pairs in those datasets are included in the WordNet lexicon. Since retrofitting forces word similarity to improve by weighted averaging the word vectors, it is prone to overfit the semantic lexicons. Extrofitting also uses synonyms to improve word similarity, but it works differently: it projects the synonyms close together in a new vector space while pushing them away from the other words. Therefore, our method produces more generalized word representations than retrofitting. We plot the top-100 nearest words using t-SNE Maaten and Hinton (2008), as shown in Figure 1. Retrofitting strongly collects synonyms together, whereas extrofitting weakly disperses the words, at some loss in cosine similarity score. However, the result of extrofitting can be interpreted as generalization: the word vectors strengthen their own meaning by moving away from each other while still keeping synonyms relatively close together (see Table 3). When we list the top-10 nearest words, extrofitting shows more favorable results than retrofitting. We also observe that extrofitting can even be applied to words that are not included in the semantic lexicons.
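The qualitative comparison can be reproduced with a small sketch that collects a cue word's nearest neighbors by cosine similarity and projects them to 2-D with t-SNE. The function is illustrative, not the authors' plotting code:

```python
import numpy as np
from sklearn.manifold import TSNE

def nearest_words_2d(word_vecs, cue, k=100):
    """Return the k nearest neighbors of a cue word (by cosine
    similarity) together with their 2-D t-SNE coordinates."""
    words = list(word_vecs)
    M = np.stack([word_vecs[w] for w in words])
    M = M / np.linalg.norm(M, axis=1, keepdims=True)   # unit-normalize
    cue_v = word_vecs[cue] / np.linalg.norm(word_vecs[cue])
    order = np.argsort(-(M @ cue_v))[:k]               # top-k by cosine
    coords = TSNE(n_components=2, perplexity=min(30, k - 1),
                  init="random", random_state=0).fit_transform(M[order])
    return [words[i] for i in order], coords
```

Plotting `coords` for the original, retrofitted, and extrofitted embeddings side by side reproduces the kind of comparison shown in Figure 1.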
Lastly, we apply extrofitting to other well-known pretrained word vectors trained by different algorithms (see Section 4.1). The results are presented in Table 4. Extrofitting can also be applied to Word2Vec and Fasttext, enriching their word representations except on WordSim-353 and RG-65, respectively. We find that our method can distort well-established word embeddings. Nevertheless, our results are noteworthy in that extrofitting can be applied to other kinds of pretrained word vectors for further enrichment.
We propose a post-processing method that enriches not only word representations but also their vector space using semantic lexicons, which we call extrofitting. Our method takes a simple approach: (i) expanding the word dimensions, (ii) transferring semantic knowledge onto the word vectors, and (iii) projecting the vector space with enrichment. We show that our method outperforms another post-processing method, retrofitting, on some word similarity tasks. Our method is robust with respect to the dimensionality of the word vectors and the size of the vocabulary, and it has only one explainable hyperparameter: the number of dimensions to expand. Further, our method does not depend on the order of the synonym pairs. As future work, we will study our method further to generalize and improve its performance. First, we can experiment on other word similarity datasets for generalization. Second, we can utilize an autoencoder Bengio et al. (2009) for non-linear projection with a constraint preserving the spatial information of each dimension of the word vectors.
We thank Jaeyoung Kim for discussing this idea with us. We also greatly appreciate the reviewers' critical comments.
- Baker et al. (1998) Collin F Baker, Charles J Fillmore, and John B Lowe. 1998. The berkeley framenet project. In Proceedings of the 17th international conference on Computational linguistics-Volume 1, pages 86–90. Association for Computational Linguistics.
- Bengio et al. (2009) Yoshua Bengio et al. 2009. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127.
- Bojanowski et al. (2016) Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. arXiv preprint arXiv:1607.04606.
- Bruni et al. (2014) Elia Bruni, N. Tram, Marco Baroni, et al. 2014. Multimodal distributional semantics. The Journal of Artificial Intelligence Research, 49:1–47.
- Camacho-Collados et al. (2015) José Camacho-Collados, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. Nasari: a novel approach to a semantically-aware representation of items. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 567–577.
- Daniel (1990) Wayne W Daniel. 1990. Spearman rank correlation coefficient. Applied nonparametric statistics, pages 358–365.
- Faruqui et al. (2014) Manaal Faruqui, Jesse Dodge, Sujay K Jauhar, Chris Dyer, Eduard Hovy, and Noah A Smith. 2014. Retrofitting word vectors to semantic lexicons. arXiv preprint arXiv:1411.4166.
- Finkelstein et al. (2001) Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001. Placing search in context: The concept revisited. In Proceedings of the 10th international conference on World Wide Web, pages 406–414. ACM.
- Ganitkevitch et al. (2013) Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. Ppdb: The paraphrase database. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 758–764.
- Hill et al. (2015) Felix Hill, Roi Reichart, and Anna Korhonen. 2015. Simlex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics, 41(4):665–695.
- Kiela et al. (2015) Douwe Kiela, Felix Hill, and Stephen Clark. 2015. Specializing word embeddings for similarity or relatedness. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2044–2048.
- Maaten and Hinton (2008) Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579–2605.
- Mikolov et al. (2013a) Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
- Mikolov et al. (2013b) Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119.
- Miller (1995) George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39–41.
- Mrkšić et al. (2016) Nikola Mrkšić, Diarmuid O Séaghdha, Blaise Thomson, Milica Gašić, Lina Rojas-Barahona, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting word vectors to linguistic constraints. arXiv preprint arXiv:1603.00892.
- Pedregosa et al. (2011) F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830.
- Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543.
- Rubenstein and Goodenough (1965) Herbert Rubenstein and John B Goodenough. 1965. Contextual correlates of synonymy. Communications of the ACM, 8(10):627–633.
- Speer et al. (2017) Robert Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In AAAI, pages 4444–4451.
- Turian et al. (2010) Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 384–394. Association for Computational Linguistics.
- Vulić et al. (2017) Ivan Vulić, Nikola Mrkšić, and Anna Korhonen. 2017. Cross-lingual induction and transfer of verb classes based on word vector space specialisation. arXiv preprint arXiv:1707.06945.
- Welling (2005) Max Welling. 2005. Fisher linear discriminant analysis. Department of Computer Science, University of Toronto, 3(1).