Issues in evaluating semantic spaces using word analogies

06/24/2016 ∙ by Tal Linzen ∙ École Normale Supérieure

The offset method for solving word analogies has become a standard evaluation tool for vector-space semantic models: it is considered desirable for a space to represent semantic relations as consistent vector offsets. We show that the method's reliance on cosine similarity conflates offset consistency with largely irrelevant neighborhood structure, and propose simple baselines that should be used to improve the utility of the method in vector space evaluation.


1 Introduction

Vector space models of semantics (VSMs) represent words as points in a high-dimensional space [Turney and Pantel2010]. There is considerable interest in evaluating VSMs without needing to embed them in a complete NLP system. One such intrinsic evaluation strategy that has gained in popularity in recent years uses the offset approach to solving word analogy problems [Levy and Goldberg2014, Mikolov et al.2013c, Mikolov et al.2013a, Turney2012]. This method assesses whether a linguistic relation — for example, between the base and gerund form of a verb (debug and debugging) — is consistently encoded as a particular linear offset in the space. If that is the case, estimating the offset using one pair of words related in a particular way should enable us to go back and forth between other pairs of words that are related in the same way, e.g., scream and screaming in the base-to-gerund case (Figure 1).

Since VSMs are typically continuous spaces, adding the offset between debug and debugging to scream is unlikely to land us exactly on any particular word. The solution to the analogy problem is therefore taken to be the word closest in cosine similarity to the landing point. Formally, if the analogy is given by

$a : a^{*} \;::\; b : b^{*}$   (1)

where in our example $a$ is debug, $a^{*}$ is debugging and $b$ is scream, then the proposed answer to the analogy problem is

$\hat{b}^{*} = \operatorname*{argmax}_{x \in V} \; \cos(x,\; a^{*} - a + b)$   (2)

where $V$ is the vocabulary and

$\cos(x, y) = \frac{x \cdot y}{\lVert x \rVert \, \lVert y \rVert}$   (3)
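To make the procedure concrete, the following is a minimal sketch in Python (numpy), assuming a row-normalized embedding matrix E, a vocab dictionary mapping words to row indices, and a words list mapping indices back to words; these names and the helper itself are illustrative, not part of the paper or of any particular library.

```python
import numpy as np

def solve_analogy(E, vocab, words, a, a_star, b, exclude_inputs=False):
    """Solve a : a* :: b : ___ by returning the word whose vector is closest
    in cosine similarity to a* - a + b (Equation 2). Setting exclude_inputs
    removes a, a* and b from the candidate pool, as most studies do."""
    target = E[vocab[a_star]] - E[vocab[a]] + E[vocab[b]]
    target = target / np.linalg.norm(target)   # unit length, so dot product = cosine
    sims = E @ target                          # cosine similarity to every vocabulary word
    if exclude_inputs:
        for w in (a, a_star, b):
            sims[vocab[w]] = -np.inf           # bar the input words from being returned
    return words[int(np.argmax(sims))]

# e.g. solve_analogy(E, vocab, words, "debug", "debugging", "scream", exclude_inputs=True)
```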

Figure 1: Using the vector offset method to solve the analogy task [Mikolov et al.2013c].

The central role of cosine similarity in this method raises the concern that the method does not only evaluate the consistency of the offsets $a^{*} - a$ and $b^{*} - b$ but also the neighborhood structure of the landing point $a^{*} - a + b$. For instance, if $b$ and $b^{*}$ are very similar to each other (as scream and screaming are likely to be), the nearest word to $a^{*} - a + b$ may simply be the nearest neighbor of $b$. If, in a given set of analogies, the nearest neighbor of $b$ tends to be $b^{*}$, the method may give the correct answer regardless of the consistency of the offsets (Figure 2).

In this note we assess to what extent the performance of the offset method provides evidence for offset consistency despite its potentially problematic reliance on cosine similarity. We use two methods. First, we propose new baselines that perform the task without using the offset and argue that the performance of the offset method should be compared to those baselines. Second, we measure how the performance of the method is affected by reversing the direction of each analogy problem (Figure 3). If the method truly measures offset consistency, this reversal should not affect its accuracy.

Figure 2: When $a^{*} - a$ is small and $b$ and $b^{*}$ are close, the expected answer may be returned even when the offsets are inconsistent (here screaming is closest to $a^{*} - a + b$).

Figure 3: Reversing the direction of the task.

2 Analogy functions

We experiment with the following functions. In all of the methods, every word in the vocabulary can serve as a guess, except when $a$, $a^{*}$ or $b$ are explicitly excluded as noted below. Since the size of the vocabulary is typically very large, chance performance, or the probability of a random word in the vocabulary being the correct guess, is extremely low.

Vanilla:

This function implements the offset method literally (Equation 2).

Add:

The $\hat{b}^{*}$ obtained from Equation 2 is often trivial (typically equal to $b$). In practice, most studies exclude $a$, $a^{*}$ and $b$ from consideration:

$\hat{b}^{*} = \operatorname*{argmax}_{x \in V \setminus \{a,\, a^{*},\, b\}} \; \cos(x,\; a^{*} - a + b)$   (4)

Only-b:

This method ignores both $a$ and $a^{*}$ and simply returns the nearest neighbor of $b$:

$\hat{b}^{*} = \operatorname*{argmax}_{x \in V \setminus \{a,\, a^{*},\, b\}} \; \cos(x,\; b)$   (5)

As shown in Figure 2, this baseline is likely to give a correct answer in cases where $a^{*} - a$ is small and $b^{*}$ happens to be the nearest neighbor of $b$.

Ignore-a:

This baseline ignores $a$ and returns the word that is most similar to both $a^{*}$ and $b$:

$\hat{b}^{*} = \operatorname*{argmax}_{x \in V \setminus \{a,\, a^{*},\, b\}} \; \cos(x,\; a^{*} + b)$   (6)

A correct answer using this method indicates that $b^{*}$ is closest to a point that lies mid-way between $a^{*}$ and $b$ (i.e., the point that maximizes the similarity to both words).

Add-opposite:

This function takes the logic behind the Only-b baseline a step further: if the neighborhood of $b$ is sufficiently sparse, we will get the correct answer even if we go in the opposite direction from the offset $a^{*} - a$:

$\hat{b}^{*} = \operatorname*{argmax}_{x \in V \setminus \{a,\, a^{*},\, b\}} \; \cos(x,\; b - (a^{*} - a))$   (7)

Multiply:

Levy and Goldberg (2014) show that Equation 2 is equivalent to adding and subtracting cosine similarities, and propose replacing it with multiplication and division of similarities:

$\hat{b}^{*} = \operatorname*{argmax}_{x \in V \setminus \{a,\, a^{*},\, b\}} \; \frac{\cos(x,\, a^{*}) \, \cos(x,\, b)}{\cos(x,\, a) + \varepsilon}$   (8)

where $\varepsilon$ is a small constant that prevents division by zero.

Reverse (Add):

This is simply Add applied to the reverse analogy problem: if the original problem is debug : debugging :: scream : ___, the reverse problem is debugging : debug :: screaming : ___. A substantial difference in accuracy between the two directions in a particular type of analogy problem (e.g., base-to-gerund compared to gerund-to-base) would indicate that the neighborhoods of one of the word categories (e.g., gerund) tend to be sparser than the neighborhoods of the other type (e.g., base).

Reverse (Only-b):

This baseline is equivalent to Only-b, but applied to the reverse problem: it returns the nearest neighbor of $b^{*}$ (in the notation of the original analogy problem).
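As an illustration of how the functions above differ, here is a sketch of all of them in the same numpy setting as before (row-normalized matrix E, vocab dict, words list). The shift of cosines into [0, 1] in Multiply and the value of epsilon are practical assumptions in the spirit of Levy and Goldberg (2014), not details taken from this note.

```python
import numpy as np

def _best(scores, vocab, words, exclude):
    """Return the highest-scoring word outside the excluded set."""
    scores = scores.copy()
    for w in exclude:
        scores[vocab[w]] = -np.inf
    return words[int(np.argmax(scores))]

def add(E, vocab, words, a, a_star, b):                    # Equation 4
    t = E[vocab[a_star]] - E[vocab[a]] + E[vocab[b]]
    return _best(E @ (t / np.linalg.norm(t)), vocab, words, (a, a_star, b))

def only_b(E, vocab, words, a, a_star, b):                 # Equation 5
    return _best(E @ E[vocab[b]], vocab, words, (a, a_star, b))

def ignore_a(E, vocab, words, a, a_star, b):               # Equation 6
    t = E[vocab[a_star]] + E[vocab[b]]
    return _best(E @ (t / np.linalg.norm(t)), vocab, words, (a, a_star, b))

def add_opposite(E, vocab, words, a, a_star, b):           # Equation 7
    t = E[vocab[b]] - (E[vocab[a_star]] - E[vocab[a]])
    return _best(E @ (t / np.linalg.norm(t)), vocab, words, (a, a_star, b))

def multiply(E, vocab, words, a, a_star, b, eps=1e-3):     # Equation 8
    sim = lambda w: (E @ E[vocab[w]] + 1.0) / 2.0          # cosines shifted into [0, 1]
    return _best(sim(a_star) * sim(b) / (sim(a) + eps), vocab, words, (a, a_star, b))

def reverse_add(E, vocab, words, a, a_star, b_star):
    """Add applied to the reverse problem a* : a :: b* : ___ ."""
    return add(E, vocab, words, a_star, a, b_star)

def reverse_only_b(E, vocab, words, a, a_star, b_star):
    """Only-b applied to the reverse problem: the nearest neighbor of b*."""
    return only_b(E, vocab, words, a_star, a, b_star)
```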

3 Experimental setup

Analogy problems:

Common capitals: athens : greece (506)
All capitals: abuja : nigeria (4524)
US cities: chicago : illinois (2467)
Currencies: algeria : dinar (866)
Nationalities: albania : albanian (1599)
Gender: boy : girl (506)
Plurals: banana : bananas (1332)
Base to gerund: code : coding (1056)
Gerund to past: dancing : danced (1560)
Base to third person: decrease : decreases (870)
Adj. to adverb: amazing : amazingly (992)
Adj. to comparative: bad : worse (1332)
Adj. to superlative: bad : worst (1122)
Adj. un- prefixation: acceptable : unacceptable (812)
Table 1: The analogy categories of Mikolov et al. (2013a), with an example pair and the number of problems per category.

We use the analogy dataset proposed by Mikolov et al. (2013a). This dataset, which has become a standard VSM evaluation set [Baroni et al.2014, Faruqui et al.2015, Schnabel et al.2015, Zhai et al.2016], contains 14 categories; see Table 1 for a full list. A number of these categories, sometimes referred to as “syntactic”, test whether the structure of the space captures simple morphological relations, such as the relation between the base and gerund form of a verb (scream : screaming). Others evaluate the knowledge that the space encodes about the world, e.g., the relation between a country and its currency (latvia : lats). A final category that does not fit neatly into either of those groups is the relation between masculine and feminine versions of the same concept (groom : bride). We follow Levy and Goldberg (2014) in calculating separate accuracy measures for each category.
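A sketch of per-category evaluation, assuming the dataset is stored in the common questions-words.txt layout (a line of the form ": category-name" introduces each category, followed by one four-word problem per line); the function name, the lowercasing choice, and the solver interface are illustrative assumptions.

```python
from collections import defaultdict

def per_category_accuracy(path, solver):
    """solver(a, a_star, b) should return a single guessed word."""
    correct, total = defaultdict(int), defaultdict(int)
    category = None
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.startswith(":"):                  # e.g. ": capital-common-countries"
                category = line[1:].strip()
                continue
            a, a_star, b, b_star = line.lower().split()
            total[category] += 1
            if solver(a, a_star, b) == b_star:
                correct[category] += 1
    return {c: correct[c] / total[c] for c in total}

# e.g. per_category_accuracy("questions-words.txt",
#                            lambda a, a_s, b: add(E, vocab, words, a, a_s, b))
```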

Semantic spaces:

In addition to comparing the performance of the analogy functions within a single VSM, we seek to understand to what extent this performance can differ across VSMs. To this end, we selected three VSMs out of the set of spaces evaluated by Linzen et al. (2016). All three spaces were produced by the skip-gram with negative sampling algorithm implemented in word2vec [Mikolov et al.2013b], and were trained on the concatenation of ukWaC [Baroni et al.2009] and a 2013 dump of the English Wikipedia.

The spaces, which we refer to as w2, w5 and w10, differed only in their context window parameters. In w2, the window consisted of two words on either side of the focus word. In w5, it included five words on either side of the focus word, and was “dynamic”, that is, it was expanded if any of the context words were excluded for low or high frequency (for details, see Levy et al. (2015)). Finally, the context in w10 was a dynamic window of ten words on either side. All other hyperparameters were set to standard values.
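For readers who want a comparable setup, the sketch below trains three skip-gram spaces with gensim's word2vec implementation, varying only the window size. It is an approximation: the toy corpus, dimensionality and other hyperparameters are placeholders (the paper trains on ukWaC plus a 2013 Wikipedia dump), the parameter names assume gensim 4.x, and gensim's window handling is not identical to the dynamic windows described above.

```python
from gensim.models import Word2Vec

# Toy corpus for illustration only; the actual spaces were trained on
# the concatenation of ukWaC and a 2013 English Wikipedia dump.
corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["the", "dog", "slept", "on", "the", "rug"]]

spaces = {
    name: Word2Vec(sentences=corpus,
                   sg=1, negative=5,          # skip-gram with negative sampling
                   window=win,                # the only parameter varied across spaces
                   vector_size=100,           # placeholder dimensionality
                   min_count=1)               # keep every word in the toy corpus
    for name, win in [("w2", 2), ("w5", 5), ("w10", 10)]
}
```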

4 Results

Baselines:

Figure 4: Accuracy of all functions on space w5.

Figure 4 shows the success of all of the analogy functions in recovering the intended analogy target $b^{*}$ in space w5. In line with Levy and Goldberg (2014), there was a slight advantage for Multiply over Add (mean difference in accuracy: .03), as well as dramatic variability in accuracy across categories. This variability cuts across the distinction between the world-knowledge and morphological categories; performance on currencies and adjectives-to-adverbs was poor, while performance on capitals and comparatives was high.

Although Add and Multiply always outperformed the baselines, the margin varied widely across categories. The most striking case is the plurals category, where the accuracy of Only-b was very high and even Add-opposite achieved a respectable accuracy. Taking $a^{*}$ but not $a$ into account (Ignore-a) outperformed Only-b in ten out of 14 categories. Finally, the poor performance of Vanilla confirms that $a$, $a^{*}$ and $b$ must be excluded from the pool of potential answers for the offset method to work. When these words were not excluded, the nearest neighbor of $a^{*} - a + b$ was $b$ in 93% of the cases and $a^{*}$ in 5% of the cases (it was never $a$).

Reversed analogies:

Accuracy decreased in most categories when the direction of the analogy was reversed. The changes in the accuracy of Add between the original and reversed problems were correlated across categories with the changes in the performance of the Only-b baseline before and after reversal. The fact that the performance of a baseline that ignores the offset was a reliable predictor of the performance of the offset method again suggests that the offset method, when applied to the Mikolov et al. (2013a) sets, jointly evaluates the consistency of the offsets and the probability that $b^{*}$ is the nearest neighbor of $b$.
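The correlation just described might be computed along the following lines, assuming dicts of per-category accuracies (for example, as produced by the per_category_accuracy sketch in Section 3); this is an illustrative reconstruction, not the paper's code.

```python
from scipy.stats import pearsonr

def reversal_correlation(acc_add, acc_add_rev, acc_onlyb, acc_onlyb_rev):
    """Correlate, across categories, the change in Add accuracy under reversal
    with the change in the Only-b baseline under reversal."""
    cats = sorted(acc_add)
    add_delta = [acc_add[c] - acc_add_rev[c] for c in cats]
    onlyb_delta = [acc_onlyb[c] - acc_onlyb_rev[c] for c in cats]
    r, p = pearsonr(add_delta, onlyb_delta)
    return r, p
```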

The most dramatic decrease was in the US cities category (.69 to .17). This is plausibly due to the fact that the city-to-state relation is a many-to-one mapping; as such, the offsets derived from two specific city-state pairs, e.g., Sacramento : California and Chicago : Illinois, are unlikely to be exactly the same. Another sharp decrease was observed in the common capitals category (.9 to .53), even though that category is presumably a one-to-one mapping.

Comparison across spaces:

Space   Add   Add - Ignore-a   Add - Only-b
w2   .53   .41   .42
w5   .60   .29   .36
w10   .58   .26   .33
Table 2: Overall scores and the advantage of Add over two of the baselines across spaces.

The overall accuracy of Add was similar across spaces, with a small advantage for w5 (Table 2). Yet the breakdown of the results by category (Figure 5) shows that the similarity in average performance across the spaces obscures differences across categories: w2 performed much better than w10 in some of the morphological inflection categories (e.g., .7 compared to .44 for the base-to-third-person relation), whereas w10 had a large advantage in some of the world-knowledge categories (e.g., .68 compared to .42 in the US cities category). The advantage of smaller window sizes in capturing “syntactic” information is consistent with previous studies [Redington et al.1998, Sahlgren2006]. Note also that overall accuracy figures are potentially misleading in light of the considerable variability in the number of analogies in each category (see Table 1): the “all capitals” category has a much greater effect on overall accuracy than gender, for example.
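To illustrate the point about unbalanced categories, the small sketch below contrasts a micro-average (every problem weighted equally, so large categories such as "all capitals" dominate) with a macro-average (every category weighted equally); accs and sizes are assumed to be dicts keyed by category, as in Table 1.

```python
def micro_macro(accs, sizes):
    """accs: category -> accuracy; sizes: category -> number of problems."""
    micro = sum(accs[c] * sizes[c] for c in accs) / sum(sizes.values())
    macro = sum(accs.values()) / len(accs)
    return micro, macro
```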

Figure 5: Comparison across spaces. The leftmost panel shows the accuracy of Add, and the next two panels show the improvement in accuracy of Add over the baselines.

Spaces also differed in how much Add improved over the baselines. The overall advantage over the baselines was highest for w2 and lowest for w10 (Table 2). In particular, although accuracy was similar across spaces in the nationalities and common capitals categories, much more of this accuracy was already captured by the Ignore-a baseline in w10 than in w2 (Figure 5).

5 Discussion

The success of the offset method in solving word analogy problems has been taken to indicate that systematic relations between words are represented in the space as consistent vector offsets [Mikolov et al.2013c]. The present note has examined potential difficulties with this interpretation. A literal (“vanilla”) implementation of the method failed to perform the task: the nearest neighbor of $a^{*} - a + b$ was almost always $b$ or $a^{*}$.[1]

[1] A human with any reasonable understanding of the analogy task is likely to also exclude $a$, $a^{*}$ and $b$ as possible responses, of course. However, such heuristics that are baked into an analogy solver, while likely to improve its performance, call into question the interpretation of the solver's success as evidence for the geometric organization of the underlying semantic space.

Even when those candidates were excluded, some of the success of the method on the analogy sets that we considered could also be obtained by baselines that ignored $a$, or even both $a$ and $a^{*}$. Finally, reversing the direction of the analogy affected accuracy substantially, even though the same offset was involved in both directions.

The performance of the baselines varied widely across analogy categories. Baseline performance was poor in the adjective-to-superlative relation, and was very high in the plurals category (even when both $a$ and $a^{*}$ were ignored). This suggests that analogy problems in the plurals category may not measure whether the space encodes the singular-to-plural relation as a vector offset, but rather whether the plural form of a noun tends to be close in the vector space to its singular form. Baseline performance varied across spaces as well; in fact, the space with the weakest overall performance (w2) showed the largest increases over the baselines, and therefore the most evidence for consistent offsets.

We suggest that future studies employing the analogy task report the performance of the simple baselines proposed here, in particular Only-b and possibly also Ignore-a. Other methods for evaluating the consistency of vector offsets may be less vulnerable to trivial responses and neighborhood structure, and should be considered instead of the offset method [Dunbar et al.2015].

Our results also highlight the difficulty in comparing spaces based on accuracy measures averaged across heterogeneous and unbalanced analogy sets [Gladkova et al.2016]. Spaces with similar overall accuracy can vary in their success on particular categories of analogies; effective representations of “world-knowledge” information are likely to be useful for different downstream tasks than effective representations of formal linguistic properties. Greater attention to the fine-grained strengths of particular spaces may lead to the development of new spaces that combine these strengths.

Acknowledgments

I thank Ewan Dunbar, Emmanuel Dupoux, Omer Levy and Benjamin Spector for comments and discussion. This research was supported by the European Research Council (grant ERC-2011-AdG 295810 BOOTPHON) and the Agence Nationale pour la Recherche (grants ANR-10-IDEX-0001-02 PSL and ANR-10-LABX-0087 IEC).

References

  • [Baroni et al.2009] Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky wide web: a collection of very large linguistically processed web-crawled corpora. Language Resources and Evaluation, 43(3):209–226.
  • [Baroni et al.2014] Marco Baroni, Georgiana Dinu, and Germán Kruszewski. 2014. Don’t count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 238–247.
  • [Dunbar et al.2015] Ewan Dunbar, Gabriel Synnaeve, and Emmanuel Dupoux. 2015. Quantitative methods for comparing featural representations. In Proceedings of the 18th International Congress of Phonetic Sciences.
  • [Faruqui et al.2015] Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1606–1615, Denver, Colorado, May–June. Association for Computational Linguistics.
  • [Gladkova et al.2016] Anna Gladkova, Aleksandr Drozd, and Satoshi Matsuoka. 2016. Analogy-based detection of morphological and semantic relations with word embeddings: what works and what doesn’t. In Proceedings of the NAACL Student Research Workshop, pages 8–15, San Diego, California, June. Association for Computational Linguistics.
  • [Levy and Goldberg2014] Omer Levy and Yoav Goldberg. 2014. Linguistic regularities in sparse and explicit word representations. In Proceedings of the Eighteenth Conference on Computational Language Learning, pages 171–180.
  • [Levy et al.2015] Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211–225.
  • [Linzen et al.2016] Tal Linzen, Emmanuel Dupoux, and Benjamin Spector. 2016. Quantificational features in distributional word representations. In Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics (*SEM 2016).
  • [Mikolov et al.2013a] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. Proceedings of ICLR.
  • [Mikolov et al.2013b] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119.
  • [Mikolov et al.2013c] Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013c. Linguistic regularities in continuous space word representations. In Proceedings of NAACL-HLT, pages 746–751.
  • [Redington et al.1998] Martin Redington, Nick Chater, and Steven Finch. 1998. Distributional information: A powerful cue for acquiring syntactic categories. Cognitive Science, 22(4):425–469.
  • [Sahlgren2006] Magnus Sahlgren. 2006. The Word-Space Model: Using distributional analysis to represent syntagmatic and paradigmatic relations between words in high-dimensional vector spaces. Ph.D. thesis, Stockholm University.
  • [Schnabel et al.2015] Tobias Schnabel, Igor Labutov, David Mimno, and Thorsten Joachims. 2015. Evaluation methods for unsupervised word embeddings. In Proceedings of EMNLP.
  • [Turney and Pantel2010] Peter D. Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37(1):141–188.
  • [Turney2012] Peter D. Turney. 2012. Domain and function: A dual-space model of semantic relations and compositions. Journal of Artificial Intelligence Research, pages 533–585.
  • [Zhai et al.2016] Michael Zhai, Johnny Tan, and Jinho D. Choi. 2016. Intrinsic and extrinsic evaluations of word embeddings. In Thirtieth AAAI Conference on Artificial Intelligence.