We present a simple and efficient unsupervised method for pairwise matching of documents from heterogeneous collections. Following Gong et al. (2018), we consider two document collections heterogeneous if their documents differ systematically with respect to vocabulary and/or level of abstraction. With these defining differences, there often also comes a difference in length, which, however, by itself does not make document collections heterogeneous. Examples include collections in which expert answers are mapped to non-expert questions (e.g. InsuranceQA by Feng et al. (2015)), but also so-called community QA collections (Blooma and Kurian (2011)), where the lexical mismatch between Q and A documents is often less pronounced than the length difference.
Like many other approaches, the proposed method is based on word embeddings as universal meaning representations, and on vector cosine as the similarity metric. However, instead of computing pairs of document representations and measuring their similarity, our method assesses the document-pair similarity on the basis of selected pairwise word similarities. This has the following advantages, which make our method a viable candidate for practical, real-world applications: efficiency, because pairwise word similarities can be efficiently (pre-)computed and cached, and transparency, because the selected words from each document are available as evidence for what the similarity computation was based on.
We demonstrate our method with the Concept-Project matching task (Gong et al. (2018)), which is described in the next section.
2 Task, Data Set, and Original Approach
The Concept-Project matching task is a binary classification task where each instance is a pair of heterogeneous documents: one concept, which is a short science curriculum item from NGSS (https://www.nextgenscience.org), and one project, which is a much longer science project description for school children from ScienceBuddies (https://www.sciencebuddies.org). The publicly available data set (https://github.com/HongyuGong/Document-Similarity-via-Hidden-Topics) contains labelled pairs involving unique concepts and unique projects; duplicates among the original labelled pairs were removed. A pair is annotated as positive if the project matches the concept, and as negative otherwise. The annotation was done by undergraduate engineering students. Gong et al. (2018) do not provide any specification or annotation guidelines for the semantics of the ’matches’ relation to be annotated. Instead, they create gold standard annotations based on a majority vote over three manual annotations. Figure 1 provides an example of a matching C-P pair. The concept labels can be very specific, potentially introducing vocabulary that is not present in the actual concept descriptions. The extent to which this information is used by Gong et al. (2018) is not entirely clear, so we experiment with several setups (cf. Section 4).
2.1 Gong et al. (2018)’s Approach
The approach by Gong et al. (2018) is based on the idea that the longer document in the pair is reduced to a set of topics which capture the essence of the document in a way that eliminates the effect of a potential length difference. In order to overcome the vocabulary mismatch, these topics are not based on words and their distributions (as in LSI (Deerwester et al. (1990)) or LDA (Blei et al. (2003))), but on word embedding vectors. Matching is then done by measuring the cosine similarity between the topic vectors and the words of the short document. Gong et al. (2018) motivate their approach mainly with the length-mismatch argument, which they claim makes approaches relying on document representations (incl. vector averaging) unsuitable. Accordingly, they use Doc2Vec (Le and Mikolov (2014)) as one of their baselines, and show that its performance is inferior to their method. They do not, however, provide a much simpler averaging-based baseline. As a second baseline, they use Word Mover’s Distance (Kusner et al. (2015)), which is based on word-level distances rather than the distance of global document representations, but which also fails to be competitive with their topic-based method. Gong et al. (2018) use two different sets of word embeddings: one (topic_wiki) was trained on a full English Wikipedia dump, the other (wiki_science) on a smaller subset of the former dump which only contained science articles.
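To make the topic-based idea concrete, the following sketch approximates it under our own assumptions: the original system differs in detail, and the use of SVD to extract topic directions, the plain `emb` dictionary of word vectors, and the scoring by maximum absolute cosine are all illustrative choices, not the authors' implementation.

```python
import numpy as np

def topic_match_score(short_doc, long_doc, emb, k=2):
    # Illustrative approximation: reduce the long document to k "topic"
    # directions via SVD of its stacked word vectors, then score each
    # word of the short document against the closest topic direction
    # and average. (The original method differs in detail.)
    M = np.stack([emb[w] for w in long_doc if w in emb])
    _, _, vt = np.linalg.svd(M, full_matrices=False)
    topics = vt[:k]                      # orthonormal topic directions
    sims = []
    for w in short_doc:
        if w in emb:
            v = emb[w] / np.linalg.norm(emb[w])
            sims.append(float(np.max(np.abs(topics @ v))))
    return sum(sims) / len(sims)
```

Because the topic directions absorb the long document's redundancy, the score is independent of how often the long document repeats its vocabulary.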
3 Our Method
We develop our method as a simple alternative to that of Gong et al. (2018). We aim at comparable or better classification performance, but with a simpler model. Also, we design the method in such a way that it provides human-interpretable results in an efficient way. One common way to compute the similarity of two documents (i.e. word sequences) is to first average over the word embeddings of each sequence, and to then compute the cosine similarity between the two averages. In the first step, weighting can be applied by multiplying each vector with the TF, IDF, or TF*IDF score of its pertaining word. We implement this standard measure (AVG_COS_SIM) as a baseline for both our method and the method by Gong et al. (2018). It yields a single scalar similarity score. The core idea of our alternative method is to turn the above process upside down: we first compute the cosine similarity of selected pairs of words from the two documents, and then average over the similarity scores (cf. also Section 6). More precisely, we implement a measure TOP_n_COS_SIM_AVG as the average of the highest pairwise cosine similarities of the top-ranking words in the two documents. Ranking, again, is done by TF, IDF, and TF*IDF. For each ranking, we take the top-ranking words from both documents, compute their pairwise similarities, rank these by decreasing similarity, and average over the top n similarities. This measure yields both a scalar similarity score and a list of word tuples, which represent the qualitative aspects of the two documents on which the similarity score is based.
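The two measures can be sketched as follows. This is a minimal sketch, assuming a plain Python dictionary `emb` mapping words to numpy vectors; ranking here uses plain term frequency as a stand-in for the TF/IDF/TF*IDF variants described above.

```python
import numpy as np

def cos(u, v):
    # Cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def avg_cos_sim(doc_a, doc_b, emb, weight=None):
    # Baseline: average the (optionally weighted) word vectors of each
    # document, then compare the two averages with cosine similarity.
    def avg(doc):
        w = weight or (lambda word, d: 1.0)
        vecs = [w(t, doc) * emb[t] for t in doc if t in emb]
        return np.mean(vecs, axis=0)
    return cos(avg(doc_a), avg(doc_b))

def top_n_cos_sim_avg(doc_a, doc_b, emb, n):
    # Our measure: take the top-n ranked words of each document (term
    # frequency as illustrative ranking), compute all pairwise cosine
    # similarities, and average the n highest ones. Returns the score
    # plus the selected word pairs as human-readable evidence.
    def top(doc):
        ranked = sorted(set(doc), key=doc.count, reverse=True)
        return [t for t in ranked if t in emb][:n]
    pairs = [(cos(emb[a], emb[b]), a, b)
             for a in top(doc_a) for b in top(doc_b)]
    pairs.sort(reverse=True)
    score = sum(s for s, _, _ in pairs[:n]) / n
    return score, [(a, b) for _, a, b in pairs[:n]]
```

Note that the evidence list returned by `top_n_cos_sim_avg` is exactly the transparency benefit argued for above: the word pairs document what the score is based on.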
All experiments are based on off-the-shelf word-level resources: We employ WOMBAT (Müller and Strube (2018)) for easy access to the 840B GloVe (Pennington et al. (2014)) and the GoogleNews (https://code.google.com/archive/p/word2vec/) Word2Vec (Mikolov et al. (2013)) embeddings. These embedding resources, while slightly outdated, are still widely used. However, they cannot handle out-of-vocabulary tokens due to their fixed, word-level lexicon. Therefore, we also use a pretrained English fastText model (https://dl.fbaipublicfiles.com/fasttext/vectors-crawl/cc.en.300.bin.gz) (Bojanowski et al. (2017); Grave et al. (2018)), which also includes subword information. IDF weights for several million different words were obtained from the English Wikipedia dump provided by the Polyglot project (Al-Rfou et al. (2013)). All resources are case-sensitive, i.e. they might contain different entries for words that only differ in case (cf. Section 5).
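IDF weights of the kind used above can be derived from document frequencies. The toy sketch below illustrates this on a handful of documents; the paper's actual weights come from a full Wikipedia dump, and since the exact IDF variant is not specified here, the plain logarithmic formula is an assumption.

```python
import math
from collections import Counter

def idf_weights(documents):
    # IDF from document frequencies: words occurring in fewer documents
    # get higher weight. Assumes the plain variant idf(w) = log(N / df(w));
    # other variants add smoothing terms.
    df = Counter()
    for doc in documents:
        df.update(set(doc))          # count each word once per document
    n_docs = len(documents)
    return {w: math.log(n_docs / df[w]) for w in df}
```

A word that occurs in every document thus receives weight 0 and effectively drops out of the weighted averages and rankings.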
We run experiments in different setups, varying both the input representation (GloVe vs. Google vs. fastText embeddings, TF weighting, and IDF weighting) for concepts and projects, and the extent to which concept descriptions are used: For the latter, Label means only the concept label (first and second row in the example), Description means only the textual description of the concept, and Both means the concatenation of Label and Description. For the projects, we always use both label and description. For the project descriptions, we extract only the last column of the original file (CONTENT), and remove user comments and some boilerplate. Each instance in the resulting data set is a tuple consisting of a concept, a project, and a label, where concept and project are bags of words, with case preserved and function words removed (we use the list provided by Gong et al. (2018), with an additional entry for cannot), and the label marks the pair as either matching or non-matching.
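The preprocessing just described can be sketched as follows; the regex tokenizer and the tiny function-word set are illustrative stand-ins (the actual list is the one provided by Gong et al. (2018)).

```python
import re

# Illustrative subset only; the real list is Gong et al. (2018)'s,
# extended with "cannot".
FUNCTION_WORDS = {"the", "a", "of", "and", "to", "in", "cannot"}

def to_bag_of_words(text):
    # Tokenize naively, preserve case, and drop function words
    # (matched case-insensitively). The result is a bag (list) of
    # content-word tokens.
    tokens = re.findall(r"[A-Za-z]+", text)
    return [t for t in tokens if t.lower() not in FUNCTION_WORDS]
```

Preserving case at this stage is deliberate: the pretrained embedding resources are case-sensitive (cf. Section 5).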
Our method is unsupervised, but we need to define a threshold parameter T which controls the minimum similarity that a concept and a project description should have in order to be considered a match. Also, the TOP_n_COS_SIM_AVG measure has a parameter n which controls how many ranked words are used from each document, and how many similarity scores are averaged to create the final score. Parameter tuning experiments were performed on a random subset of our data set. Note that Gong et al. (2018) used only a subset of their instances as tuning data. The tuning data results of the best-performing parameter values for each setup can be found in Tables 1 and 2. The top F scores per type of concept input (Label, Description, Both) are given in bold. For AVG_COS_SIM and TOP_n_COS_SIM_AVG, we determined the threshold values (T) on the tuning data by doing a simple step search over a range of threshold values. For TOP_n_COS_SIM_AVG, we additionally varied the value of n in fixed steps.
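The step search over threshold values can be sketched as below. The F-score computation is standard; the candidate threshold grid is an illustrative choice, since the exact search range is not reproduced here.

```python
def f1(tp, fp, fn):
    # Standard F score from true positives, false positives, false negatives.
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def tune_threshold(scores, labels, thresholds):
    # Step search: classify each instance as a match iff its similarity
    # score reaches the candidate threshold, and keep the threshold with
    # the best F score on the tuning data. `labels` are booleans.
    best_t, best_f = None, -1.0
    for t in thresholds:
        tp = sum(s >= t and y for s, y in zip(scores, labels))
        fp = sum(s >= t and not y for s, y in zip(scores, labels))
        fn = sum(s < t and y for s, y in zip(scores, labels))
        f = f1(tp, fp, fn)
        if f > best_f:
            best_t, best_f = t, f
    return best_t, best_f
```

For TOP_n_COS_SIM_AVG, the same loop would simply be nested inside a second loop over candidate values of n.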
The top tuning data scores for AVG_COS_SIM (Table 1) show that the Google embeddings with TF*IDF weighting yield the top F score for all three concept input types. Somewhat expectedly, the best overall F score is produced in the setting Both, which provides the most information. Actually, this is true for all four weighting schemes for both GloVe and Google, while fastText consistently yields its top F scores in the Label setting, which provides the least information. Generally, the level of performance of the simple baseline measure AVG_COS_SIM on this data set is rather striking.
For TOP_n_COS_SIM_AVG, the tuning data results (Table 2) are somewhat more varied: First, there is no single best-performing set of embeddings: Google yields the best F score for the Label setting, while GloVe (though only barely) leads in the Description setting. This time, it is fastText which produces the best F score in the Both setting, which is also the best overall tuning data F score for TOP_n_COS_SIM_AVG. While the difference to the Google result for Label is only minimal, it is striking that the best overall score is again produced in the ’richest’ setting, i.e. the one involving both TF and IDF weighting and the most informative input.
We then selected the best-performing parameter settings for every concept input and ran experiments on the held-out test data. Since the original data split used by Gong et al. (2018) is unknown, we cannot exactly replicate their settings, but we also perform ten runs using randomly selected subsets of our test data, and report average P, R, F, and standard deviation. The results can be found in Table 3. For comparison, the two top rows provide the best results of Gong et al. (2018).
The first interesting finding is that the AVG_COS_SIM measure again performs very well: In all three settings, it beats both the system based on general-purpose embeddings (topic_wiki) and the one that is adapted to the science domain (topic_science), with, again, the Both setting yielding the best overall result. Note that our Both setting is probably the one most similar to the concept input used by Gong et al. (2018). This result corroborates our findings on the tuning data, and clearly contradicts the (implicit) claim made by Gong et al. (2018) regarding the infeasibility of document-level matching for documents of different lengths. The second, more important finding is that our proposed TOP_n_COS_SIM_AVG measure is also very competitive, as it also outperforms both systems by Gong et al. (2018) in two out of three settings. It only fails in the setting using only the Description input (recall that this setup was only minimally superior to the next best one on the tuning data). This is all the more important as we exclusively employ off-the-shelf, general-purpose embeddings, while Gong et al. (2018) reach their best results with a much more sophisticated system and with embeddings that were custom-trained for the science domain. Thus, while the performance of our proposed TOP_n_COS_SIM_AVG method is superior to the approach by Gong et al. (2018), it is itself outperformed by the ’baseline’ AVG_COS_SIM method with appropriate weighting. However, apart from raw classification performance, our method also aims at providing human-interpretable information on how a classification was done. In the next section, we perform a detailed analysis on a selected setup.
5 Detail Analysis
The similarity-labelled word pairs from concept and project description which are selected during classification with the TOP_n_COS_SIM_AVG measure provide a way to qualitatively evaluate the basis on which each similarity score was computed. We see this as an advantage over average-based comparison (like AVG_COS_SIM), since it provides a means to check the plausibility of the decision. Here, we are mainly interested in the overall best result, so we perform a detailed analysis on the best-performing Both setting only (fastText, TF*IDF weighting). Since the Concept-Project matching task is a binary classification task, its performance can be qualitatively analysed by providing examples for instances that were classified correctly (True Positive (TP) and True Negative (TN)) or incorrectly (False Positive (FP) and False Negative (FN)).
Table 4 shows the concept and project words from selected instances (one TP, FP, TN, and FN case each) of the tuning data set. Concept and project words are ordered alphabetically, with concept words appearing more than once being grouped together. The number of word pairs is determined by the selected value of n. The bottom line in each column provides the average similarity score as computed by the TOP_n_COS_SIM_AVG measure. This value is compared against the threshold T: the similarity is higher than T in the TP and FP cases, and lower otherwise. Without going into too much detail, it can be seen that the selected words provide a reasonable idea of the gist of the two documents. Another observation relates to the effect of using unstemmed, case-sensitive documents as input: the top-ranking words often contain inflectional variants (e.g. enzyme and enzymes, level and levels in the example), and words differing in case only can also be found. Currently, these are treated as distinct (though semantically similar) words, mainly for compatibility with the pretrained GloVe and Google embeddings. However, since our method puts a lot of emphasis on individual words, in particular those coming from the shorter of the two documents (the concept), results might be improved by merging these words (and their respective embedding vectors) (see Section 7).
Table 4 (excerpt): average similarity scores per case: TP: .447, FP: .367, TN: .195, FN: .278.
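The merging idea mentioned above is future work, not part of the implemented system; as a speculative sketch, entries differing only in case could be collapsed by averaging their vectors.

```python
import numpy as np

def merge_case_variants(emb):
    # Speculative sketch (cf. Section 7): collapse embedding entries
    # that differ only in case by averaging their vectors under a
    # lowercased key. Inflectional variants would need a separate
    # (e.g. lemmatization-based) grouping step.
    groups = {}
    for word, vec in emb.items():
        groups.setdefault(word.lower(), []).append(vec)
    return {key: np.mean(vecs, axis=0) for key, vecs in groups.items()}
```

Whether averaging is the right combination operator, as opposed to, e.g., keeping the vector of the more frequent variant, is an open question.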
6 Related Work
While in this paper we apply our method to the Concept-Project matching task only, the underlying task of matching text sequences to each other is much more general. Many existing approaches follow the so-called compare-aggregate framework (Wang and Jiang (2017)). As the name suggests, these approaches collect the results of element-wise matchings (comparisons) first, and create the final result by aggregating these results later. Our method can be seen as a variant of compare-aggregate which is characterized by extremely simple methods for comparison (cosine vector similarity) and aggregation (averaging). Other approaches, like He and Lin (2016) and Wang and Jiang (2017), employ much more elaborate supervised neural network methods. Also, on a simpler level, the idea of averaging similarity scores (rather than scoring averaged representations) is not new: Camacho-Collados and Navigli (2016) use the average of pairwise word similarities to compute their compactness score.
7 Conclusion and Future Work
We presented a simple method for semantic matching of documents from heterogeneous collections as a solution to the Concept-Project matching task by Gong et al. (2018). Although much simpler, our method clearly outperformed the original system in most input settings. Another result is that, contrary to the claim made by Gong et al. (2018), the standard averaging approach does indeed work very well even for heterogeneous document collections, if appropriate weighting is applied. Due to its simplicity, we believe that our method can also be applied to other text matching tasks, including more ’standard’ ones which do not necessarily involve heterogeneous document collections. This seems desirable because our method offers additional transparency by providing not only a similarity score, but also the subset of words on which the similarity score is based. Future work includes a detailed error analysis, and the exploration of methods to combine complementary information about (grammatically or orthographically) related words from word embedding resources. Also, we are currently experimenting with a pretrained ELMo (Peters et al. (2018)) model as another word embedding resource. ELMo takes word embeddings a step further by dynamically creating contextualized vectors from input word sequences (normally sentences). Our initial experiments have been promising, but since ELMo tends to yield different, context-dependent vectors for the same word in the same document, ways still have to be found to combine them into single, document-wide vectors without (fully) sacrificing their context-awareness.
The code used in this paper is available at https://github.com/nlpAThits/TopNCosSimAvg.
Acknowledgements The research described in this paper was funded by the Klaus Tschira Foundation. We thank the anonymous reviewers for their useful comments and suggestions.
- Al-Rfou et al. (2013) Al-Rfou, R., B. Perozzi, and S. Skiena (2013, August). Polyglot: Distributed word representations for multilingual NLP. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, Sofia, Bulgaria, pp. 183–192. Association for Computational Linguistics.
- Blei et al. (2003) Blei, D. M., A. Y. Ng, and M. I. Jordan (2003). Latent Dirichlet allocation. Journal of Machine Learning Research 3, 993–1022.
- Blooma and Kurian (2011) Blooma, M. J. and J. C. Kurian (2011). Research issues in community based question answering. In P. B. Seddon and S. Gregor (Eds.), Pacific Asia Conference on Information Systems, PACIS 2011: Quality Research in Pacific Asia, Brisbane, Queensland, Australia, 7-11 July 2011, pp. 29. Queensland University of Technology.
- Bojanowski et al. (2017) Bojanowski, P., E. Grave, A. Joulin, and T. Mikolov (2017). Enriching word vectors with subword information. TACL 5, 135–146.
- Camacho-Collados and Navigli (2016) Camacho-Collados, J. and R. Navigli (2016). Find the word that does not belong: A framework for an intrinsic evaluation of word vector representations. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, RepEval@ACL 2016, Berlin, Germany, August 2016, pp. 43–50. Association for Computational Linguistics.
- Deerwester et al. (1990) Deerwester, S. C., S. T. Dumais, T. K. Landauer, G. W. Furnas, and R. A. Harshman (1990). Indexing by latent semantic analysis. JASIS 41(6), 391–407.
- Feng et al. (2015) Feng, M., B. Xiang, M. R. Glass, L. Wang, and B. Zhou (2015). Applying deep learning to answer selection: A study and an open task. In 2015 IEEE Workshop on Automatic Speech Recognition and Understanding, ASRU 2015, Scottsdale, AZ, USA, December 13-17, 2015, pp. 813–820. IEEE.
- Gong et al. (2018) Gong, H., T. Sakakini, S. Bhat, and J. Xiong (2018). Document similarity for texts of varying lengths via hidden topics. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 2341–2351. Association for Computational Linguistics.
- Grave et al. (2018) Grave, E., P. Bojanowski, P. Gupta, A. Joulin, and T. Mikolov (2018). Learning word vectors for 157 languages. In N. Calzolari, K. Choukri, C. Cieri, T. Declerck, S. Goggi, K. Hasida, H. Isahara, B. Maegaard, J. Mariani, H. Mazo, A. Moreno, J. Odijk, S. Piperidis, and T. Tokunaga (Eds.), Proceedings of the Eleventh International Conference on Language Resources and Evaluation, LREC 2018, Miyazaki, Japan, May 7-12, 2018. European Language Resources Association (ELRA).
- He and Lin (2016) He, H. and J. J. Lin (2016). Pairwise word interaction modeling with deep neural networks for semantic similarity measurement. In K. Knight, A. Nenkova, and O. Rambow (Eds.), NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016, pp. 937–948. The Association for Computational Linguistics.
- Kusner et al. (2015) Kusner, M. J., Y. Sun, N. I. Kolkin, and K. Q. Weinberger (2015). From word embeddings to document distances. In F. R. Bach and D. M. Blei (Eds.), Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, Volume 37 of JMLR Workshop and Conference Proceedings, pp. 957–966. JMLR.org.
- Le and Mikolov (2014) Le, Q. V. and T. Mikolov (2014). Distributed representations of sentences and documents. In Proceedings of the 31th International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014, Volume 32 of JMLR Workshop and Conference Proceedings, pp. 1188–1196. JMLR.org.
- Mikolov et al. (2013) Mikolov, T., I. Sutskever, K. Chen, G. S. Corrado, and J. Dean (2013). Distributed representations of words and phrases and their compositionality. In Proceedings of Advances in Neural Information Processing Systems 26. Lake Tahoe, Nev., 5–8 December 2013, pp. 3111–3119.
- Müller and Strube (2018) Müller, M. and M. Strube (2018). Transparent, efficient, and robust word embedding access with WOMBAT. In D. Zhao (Ed.), COLING 2018, The 27th International Conference on Computational Linguistics: System Demonstrations, Santa Fe, New Mexico, August 20-26, 2018, pp. 53–57. Association for Computational Linguistics.
- Pennington et al. (2014) Pennington, J., R. Socher, and C. D. Manning (2014). GloVe: Global vectors for word representation. In A. Moschitti, B. Pang, and W. Daelemans (Eds.), Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, pp. 1532–1543. ACL.
- Peters et al. (2018) Peters, M. E., M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer (2018). Deep contextualized word representations. In M. A. Walker, H. Ji, and A. Stent (Eds.), Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pp. 2227–2237. Association for Computational Linguistics.
- Wang and Jiang (2017) Wang, S. and J. Jiang (2017). A compare-aggregate model for matching text sequences. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.