Swivel: Improving Embeddings by Noticing What's Missing

02/06/2016 ∙ Noam Shazeer, et al. ∙ Google

We present Submatrix-wise Vector Embedding Learner (Swivel), a method for generating low-dimensional feature embeddings from a feature co-occurrence matrix. Swivel performs approximate factorization of the point-wise mutual information matrix via stochastic gradient descent. It uses a piecewise loss with special handling for unobserved co-occurrences, and thus makes use of all the information in the matrix. While this requires computation proportional to the size of the entire matrix, we make use of vectorized multiplication to process thousands of rows and columns at once to compute millions of predicted values. Furthermore, we partition the matrix into shards in order to parallelize the computation across many nodes. This approach results in more accurate embeddings than can be achieved with methods that consider only observed co-occurrences, and can scale to much larger corpora than can be handled with sampling methods.




1 Introduction

Dense vector representations of words have proven to be useful for natural language tasks such as determining semantic similarity, parsing, and translation. Recently, work by Mikolov et al. (2013a) and others has inspired an investigation into the construction of word vectors using stochastic gradient descent methods. Models tend to fall into one of two categories: matrix factorization or sampling from a sliding window; Baroni et al. (2014) refer to these as “count” and “predict” methods, respectively.

In this paper, we present the Submatrix-wise Vector Embedding Learner

(Swivel), a “count-based” method for generating low-dimensional feature embeddings from a co-occurrence matrix. Swivel uses stochastic gradient descent to perform a weighted approximate matrix factorization, ultimately arriving at embeddings that reconstruct the point-wise mutual information (PMI) between each row and column feature. Swivel uses a piecewise loss function to differentiate between observed and unobserved co-occurrences.

Swivel is designed to work in a distributed environment. The original co-occurrence matrix (which may contain millions of rows and millions of columns) is “sharded” into smaller submatrices, each containing thousands of rows and columns. These can be distributed across multiple workers, each of which uses vectorized matrix multiplication to rapidly produce predictions for millions of individual PMI values. This allows the computation to be distributed across a cluster of computers, resulting in an efficient way to learn embeddings.

This paper is organized as follows. First, we describe related word embedding work and note how two popular methods are similar to one another in their optimization objective. We then discuss Swivel in detail, and describe experimental results on several standard word embedding evaluation tasks. We conclude with analysis of our results and discussion of the algorithm with regard to the other approaches.

2 Related Work

While there are a number of interesting approaches to creating word embeddings, Skipgram Negative Sampling (Mikolov et al., 2013a) and GloVe (Pennington et al., 2014) are two relatively recent approaches that have received quite a bit of attention. These methods compress the distributional structure of the raw language co-occurrence statistics, yielding compact representations that retain the properties of the original space. The intrinsic quality of the embeddings can be evaluated in two ways. First, words with similar distributional contexts should be near to one another in the embedding space. Second, manipulating the distributional context directly by adding or removing words ought to lead to similar translations in the embedded space, allowing “analogical” traversal of the vector space.

Skipgram Negative Sampling. The word2vec program released by Mikolov et al. (2013a) generates word embeddings by sliding a window over a large corpus of text. The “focus” word in the center of the window is trained to predict each “context” word that surrounds it by 1) maximizing the dot product between the sampled words’ embeddings, and 2) minimizing the dot product between the focus word and a randomly sampled non-context word. This method of training is called skipgram negative sampling (SGNS).

Levy and Goldberg (2014) examine SGNS and suggest that the algorithm is implicitly performing weighted low-rank factorization of a matrix whose cell values are related to the point-wise mutual information between the focus and context words. Point-wise mutual information (PMI) is a measure of association between two events i and j, defined as follows:

pmi(i, j) = log [ P(i, j) / ( P(i) P(j) ) ]    (1)

In the case of language, the frequency statistics of co-occurring words in a corpus can be used to estimate the probabilities that comprise PMI. Let x_ij be the number of times that focus word i co-occurs with context word j, x_i* be the total number of times that focus word i appears in the corpus, x_*j be the total number of times that context word j appears in the corpus, and |D| be the total number of co-occurrences. Then we can re-write (1) as:

pmi(i, j) = log [ x_ij |D| / ( x_i* x_*j ) ]

It is important to note that, in the case that x_ij is zero – i.e., no co-occurrence of i and j has been observed – PMI is infinitely negative.
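For illustration, here is a minimal sketch of computing the PMI matrix from a toy count matrix with NumPy; the counts are made up, and the zero count produces exactly the negative infinity described above:

```python
import numpy as np

x = np.array([[10.0, 2.0, 0.0],
              [3.0, 5.0, 1.0]])          # x[i, j]: co-occurrence count of row i, col j

row_sums = x.sum(axis=1, keepdims=True)  # x_{i*}: row marginals
col_sums = x.sum(axis=0, keepdims=True)  # x_{*j}: column marginals
total = x.sum()                          # |D|: total co-occurrences

with np.errstate(divide="ignore"):       # allow log(0) -> -inf for unobserved pairs
    pmi = np.log(x * total) - np.log(row_sums) - np.log(col_sums)

print(pmi)                               # pmi[0, 2] is -inf
```

Note how the unobserved cell yields an unusable value, which is why Swivel treats such cells with a separate loss.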

SGNS can be seen as producing two matrices, W for focus words and W̃ for context words, such that their product W W̃^T approximates the observed PMI between respective word/context pairs. Given a specific focus word i and context word j, SGNS minimizes the magnitude of the difference between w_i^T w̃_j and pmi(i, j), tempered by a monotonically increasing weighting function f(x_ij) of the observed co-occurrence count x_ij.

Because SGNS slides a sampling window through the entire training corpus, a significant drawback of the algorithm is that it requires training time proportional to the size of the corpus.

GloVe. Pennington et al.’s 2014 GloVe is an approach that instead works from the precomputed corpus co-occurrence statistics. The authors posit several constraints that should lead to preserving the “linear directions of meaning”. Based on ratios of conditional probabilities of words in context, they suggest that a natural model for learning such linear structure should minimize the following cost function for a given focus word i and context word j:

L(i, j) = f(x_ij) ( w_i^T w̃_j + b_i + b̃_j − log x_ij )²

Here, b_i and b̃_j are bias terms that are specific to each focus word and each context word, respectively. Again, f(x_ij) is a function that weights the cost according to the frequency of the co-occurrence count x_ij. Using stochastic gradient descent, GloVe learns the model parameters for W, W̃, b, and b̃: it selects a pair of words observed to co-occur in the corpus, retrieves the corresponding embedding parameters, computes the loss, and back-propagates the error to update the parameters. GloVe therefore requires training time proportional to the number of observed co-occurrence pairs, allowing it to scale independently of corpus size.

Although GloVe was developed independently from SGNS (and, as far as we know, without knowledge of Levy and Goldberg’s 2014 analysis), it is interesting how similar these two models are.

  • Both seek to minimize the difference between the model’s estimate and the log of the co-occurrence count. GloVe has additional free “bias” parameters that, in SGNS, are pegged to the corpus frequency of the individual words. Empirically, it can be observed that the bias terms are highly correlated to the frequency of the row and column features in a trained GloVe model.

  • Both weight the loss according to the frequency of the co-occurrence count such that frequent co-occurrences incur greater penalty than rare ones. (This latter similarity is reminiscent of weighted alternating least squares (Hu et al., 2008), which treats the weight as a confidence estimate that favors accurate estimation of certain parameters over uncertain ones.)

Levy et al. (2015) note these algorithmic similarities. In their controlled empirical comparison of several different embedding approaches, results produced by SGNS and GloVe differ only modestly.

There are subtle differences, however. The negative sampling regime of SGNS ensures that the model does not place features whose co-occurrence isn’t observed in the corpus near to one another in the embedding space. This is distinctly different from GloVe, which trains only on the observed co-occurrence statistics. The GloVe model incurs no penalty for placing features near to one another whose co-occurrence has not been observed. As we shall see in Section 4, this can result in poor estimates for uncommon features.

3 Swivel

Swivel is an attempt to have our cake and eat it, too. Like GloVe, it works from co-occurrence statistics rather than by sampling; like SGNS, it makes use of the fact that many co-occurrences are unobserved in the corpus. Like both, Swivel performs a weighted approximate matrix factorization of the PMI between features. Furthermore, Swivel is designed to work well in a distributed environment such as DistBelief (Dean et al., 2012).

At a high level, Swivel begins with an m × n co-occurrence matrix between m row and n column features. Each row feature and each column feature is assigned a d-dimensional embedding vector. The vectors are grouped into blocks, where each (row block, column block) pair defines a submatrix “shard” of the co-occurrence matrix. Training proceeds by selecting a shard (and thus, its corresponding row block and column block), and performing a matrix multiplication of the associated vectors to produce an estimate of the PMI values for each co-occurrence. This is compared with the observed PMI, with special handling for the case where no co-occurrence was observed and the PMI is undefined. Stochastic gradient descent is used to update the individual vectors and minimize the difference.

As will be discussed in more detail below, splitting the matrix into shards allows the problem to be distributed across many workers in a way that allows for utilization of high-performance vectorized hardware, amortizes the overhead of transferring model parameters, and distributes parameter updates evenly across the feature embeddings.

3.1 Construction

To begin, an m × n co-occurrence matrix X is constructed, where each cell x_ij in the matrix contains the observed co-occurrence count of row feature i with column feature j. The marginal counts of each row feature (x_i* = Σ_j x_ij) and each column feature (x_*j = Σ_i x_ij) are computed, as well as the overall sum of all the cells in the matrix, |D| = Σ_{i,j} x_ij. As with other embedding methods, Swivel is agnostic both to the domain from which the features are drawn and to the exact set of features that are used. Furthermore, the “feature vocabulary” used for the rows need not necessarily be the same as that which is used for the columns.

The rows are sorted in descending order of feature frequency, and are then collected into k-element row blocks, where k is chosen based on computational efficiency considerations discussed below. This results in n/k row blocks whose elements are selected by choosing rows that are congruent mod n/k. For example, with n total rows and block size k, every (n/k)-th row is selected to form a row block: the first row block contains rows (0, n/k, 2n/k, …), the second row block contains rows (1, n/k + 1, 2n/k + 1, …), and so on. Since the rows were originally frequency-sorted, this construction results in each row block containing a mix of common and rare row features.

The process is repeated for the columns to yield m/k column blocks. As with the row blocks, each column block contains a mix of common and rare column features.

For each (row block, column block) pair (i, j), we construct a submatrix shard X^{(i,j)} from the original co-occurrence matrix by selecting the appropriate co-occurrence cells: cell (a, b) of the shard contains the co-occurrence count of the a-th feature in row block i with the b-th feature in column block j.

This results in (n/k) × (m/k) shards in all. Typically, the vast majority of their cells are zero. Figure 1 illustrates this process: lighter pixels represent more frequent co-occurrences, which naturally tend to occur for more frequent features.
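The block-assignment-by-congruence scheme can be sketched with a toy example; the sizes here are illustrative, not the values used in the paper:

```python
import numpy as np

n, k = 16, 4                     # n rows, k rows per block -> n/k row blocks
num_blocks = n // k

# Row r goes to block (r mod n/k); with n/k = 4, block b holds rows b, b+4, b+8, ...
row_blocks = [np.arange(b, n, num_blocks) for b in range(num_blocks)]

# Stand-in co-occurrence matrix (square here, so columns shard the same way).
counts = np.arange(n * n, dtype=float).reshape(n, n)

def shard(i, j):
    """Submatrix shard for (row block i, column block j)."""
    return counts[np.ix_(row_blocks[i], row_blocks[j])]

print(row_blocks[0])      # [ 0  4  8 12]
print(shard(0, 1).shape)  # (4, 4)
```

Because the rows were frequency-sorted before blocking, each block (and thus each shard) mixes common and rare features, as in Figure 1.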

Figure 1: The matrix re-organization process creates shards with a mixture of common and rare features, which naturally leads to a mixture of large and small co-occurrence values (brighter and darker pixels, respectively).

3.2 Training

Prior to training, the two d-dimensional feature embedding matrices are initialized with small, random values. (The particular initialization distribution was chosen arbitrarily, and we did not investigate the effects of other initialization schemes.) W is the n × d matrix of embeddings for the row features (e.g., words); W̃ is the m × d matrix of embeddings for the column features (e.g., word contexts).

Training then proceeds iteratively as follows. A submatrix shard X^{(i,j)} is chosen at random, along with the row embedding vectors W_i from row block i and the column embedding vectors W̃_j from column block j. The matrix product W_i W̃_j^T is computed to produce predicted PMI values, which are then compared to the observed PMI values for shard X^{(i,j)}. The error between the predicted and actual values is used to compute gradients, which are accumulated for each row and column. Figure 2 illustrates this process. The gradient descent is dampened using Adagrad (Duchi et al., 2011), and the process repeats until the error no longer decreases appreciably.
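As a self-contained sketch of one such per-shard update (toy sizes, squared-error loss applied to every cell for brevity, no parameter server; not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
k, d = 8, 4                               # shard size and embedding dimension (toy)
W = rng.normal(scale=0.1, size=(k, d))    # row-feature embeddings for this block
C = rng.normal(scale=0.1, size=(k, d))    # column-feature embeddings for this block
pmi = rng.normal(size=(k, k))             # observed PMI values for the shard
f = np.ones((k, k))                       # confidence weights f(x_ij), here uniform
gsq_W, gsq_C = np.zeros_like(W), np.zeros_like(C)  # Adagrad accumulators
lr = 0.1

losses = []
for _ in range(200):
    pred = W @ C.T                        # predict every PMI cell in one matmul
    err = f * (pred - pmi)                # weighted residual
    losses.append(0.5 * np.sum(err * (pred - pmi)))
    gW, gC = err @ C, err.T @ W           # gradients of the weighted squared error
    gsq_W += gW ** 2
    gsq_C += gC ** 2
    W -= lr * gW / np.sqrt(gsq_W + 1e-8)  # Adagrad-dampened updates
    C -= lr * gC / np.sqrt(gsq_C + 1e-8)
```

The key point is that a single matrix multiplication produces predictions for every cell of the shard at once.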

Figure 2: A shard is selected for training. The corresponding row vectors and column vectors are multiplied to produce estimates that are compared to the observed PMI derived from the count statistics. Error is computed and back-propagated.

Although each shard is considered separately, it shares row embedding parameters with all other shards in the same row block, and column embedding parameters with all other shards in the same column block. By storing the parameters in a central parameter server (Dean et al., 2012), it is possible to distribute training by processing several shards in parallel on different worker machines. An individual worker selects a shard, retrieves the appropriate embedding parameters, performs the cost and gradient computation, and then communicates the parameter updates back to the parameter server. We do this in a lock-free fashion (Recht et al., 2011) using Google’s asynchronous stochastic gradient descent infrastructure DistBelief (Dean et al., 2012).

Even on a very fast network, transferring the parameters between the parameter server and a worker machine is expensive: for each shard, we must retrieve (and then update) k × d parameter values each for the row and column embeddings. Fortunately, this cost is amortized over the k² individual estimates that are computed by the matrix multiplication. Choosing a reasonable value for k is therefore a balancing act between compute and network throughput: the larger the value of k, the more we amortize the cost of communicating with the parameter server. And up to a point, vectorized matrix multiplication is essentially a constant-time operation on a high-performance GPU. Clearly, this is all heavily dependent on the underlying hardware fabric; we achieved good performance in our environment with k = 4096.
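A quick back-of-the-envelope calculation makes the amortization concrete; the block size and embedding dimension here are illustrative assumptions:

```python
# Parameters moved vs. PMI cells estimated for a single k x k shard.
k, d = 4096, 300                 # block size and embedding dimension (assumed)

params_transferred = 2 * k * d   # k row vectors plus k column vectors
cells_estimated = k * k          # one estimate per cell of the shard

print(params_transferred)                    # 2457600
print(cells_estimated)                       # 16777216
print(cells_estimated / params_transferred)  # ~6.8 estimates per parameter moved
```

Doubling k doubles the parameters transferred but quadruples the cells estimated, which is why larger blocks amortize communication better.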

3.3 Training Objective and Cost Function

Swivel approximates the observed PMI of row feature i and column feature j with the dot product w_i^T w̃_j. It uses a piecewise loss function that treats observed and unobserved co-occurrences distinctly. Table 1 summarizes the piecewise cost function, and Figure 3 shows the two loss variants as functions of w_i^T w̃_j for an arbitrary objective value of 2.0.

x_ij > 0, squared error: L(i, j) = f(x_ij) ( w_i^T w̃_j − pmi(i, j) )². The model must accurately reconstruct observed PMI, subject to our confidence in x_ij.
x_ij = 0, “soft hinge”: L(i, j) = log [ 1 + exp( w_i^T w̃_j − pmi*(i, j) ) ]. The model must not over-estimate the PMI of common features whose co-occurrence is unobserved.
Table 1: Training objective and cost functions. pmi* refers to the “smoothed” PMI function described in the text, where the actual value of x_ij = 0 is replaced with 1.

Observed co-occurrences. For co-occurrences that have been observed (x_ij > 0), we’d like w_i^T w̃_j to accurately estimate pmi(i, j), subject to how confident we are in the observed count x_ij. Swivel computes the weighted squared error between the embedding dot product and the PMI of feature i and feature j:

L(i, j) = f(x_ij) ( w_i^T w̃_j − pmi(i, j) )²

This encourages w_i^T w̃_j to correctly estimate the observed PMI, as Figure 3 illustrates. The loss is modulated by a monotonically increasing confidence function f(x_ij): the more frequently a co-occurrence is observed, the more the model is required to accurately approximate pmi(i, j). We experimented with several different variants for f(x_ij), and discovered that a linear transformation of x_ij^{1/2} produced good results.

Unobserved Co-occurrences. Unfortunately, if feature i and feature j are never observed together, then x_ij = 0, pmi(i, j) = −∞, and the squared error cannot be computed.

What would we like the model to do in this case? Treating x_ij as a sample, we can ask: how significant is it that its observed value is zero? If the two features i and j are rare, their co-occurrence could plausibly have gone unobserved simply because we haven’t seen enough data. On the other hand, if features i and j are common, this is less likely: it becomes significant that a co-occurrence hasn’t been observed, so perhaps we ought to consider that the features are truly anti-correlated. In either case, we certainly don’t want the model to over-estimate the PMI between features, and so we can encourage the model to respect an upper bound on its PMI estimate w_i^T w̃_j.

We address this by smoothing the PMI value as if a single co-occurrence had been observed (i.e., computing the PMI as if x_ij were 1), and using an asymmetric cost function that penalizes over-estimation of the smoothed PMI. The following “soft hinge” cost function (plotted as the dotted line in Figure 3) accomplishes this:

L(i, j) = log [ 1 + exp( w_i^T w̃_j − pmi*(i, j) ) ]

Here, pmi*(i, j) refers to the smoothed PMI computation where x_ij’s actual count of 0 is replaced with 1. This loss penalizes the model for over-estimating the objective value; however, it applies negligible penalty – i.e., is non-committal – if the model under-estimates it.

Numerically, the soft hinge behaves as follows. If features i and j are common, the marginal terms x_i* and x_*j are large. In order to minimize the loss, the model must produce a small – or even negative – value for w_i^T w̃_j, thus capturing the anti-correlation between features i and j. On the other hand, if features i and j are rare, then the marginal terms are also small, so the model is allowed much more latitude with respect to w_i^T w̃_j before incurring serious penalty. In this way, the “soft hinge” loss enforces an upper bound on the model’s estimate for pmi(i, j) that reflects our confidence in the unobserved co-occurrence.
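The piecewise loss for a single cell can be sketched as follows; the confidence weight f here is a placeholder (a bare square root rather than the linear transformation used in the paper), and the counts are toy values:

```python
import math

def swivel_cell_loss(dot, x_ij, x_i, x_j, D, f=lambda x: x ** 0.5):
    """Piecewise Swivel loss for one cell: squared error if observed, soft hinge if not."""
    if x_ij > 0:
        pmi = math.log(x_ij * D / (x_i * x_j))          # observed PMI
        return f(x_ij) * (dot - pmi) ** 2               # weighted squared error
    smoothed_pmi = math.log(D / (x_i * x_j))            # PMI as if x_ij were 1
    return math.log1p(math.exp(dot - smoothed_pmi))     # "soft hinge"

# Over-estimating the smoothed PMI of an unobserved pair is penalized heavily...
high = swivel_cell_loss(dot=3.0, x_ij=0, x_i=100, x_j=100, D=10000)
# ...while under-estimating it costs almost nothing.
low = swivel_cell_loss(dot=-3.0, x_ij=0, x_i=100, x_j=100, D=10000)
print(high > low)  # True
```

This mirrors the asymmetry in Figure 3: the hinge is steep above the objective and nearly flat below it.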

Figure 3: Loss as a function of the predicted value w_i^T w̃_j, evaluated for an arbitrary objective value of 2.0. The solid line shows the squared error; the dotted line shows the “soft hinge”.

4 Experiments

         WS353 Sim.  WS353 Rel.  MEN    M.Turk  Rare Words  SimLex  Google  MSR
SGNS     0.737       0.592       0.743  0.686   0.467       0.397   0.692   0.592
GloVe    0.651       0.541       0.738  0.627   0.386       0.360   0.716   0.578
Swivel   0.748       0.616       0.762  0.720   0.483       0.403   0.739   0.622
CBOW     0.700       0.527       0.708  0.664   0.439       0.358   0.667   0.570
Table 2: Performance of SGNS, GloVe, and Swivel vectors across the tasks used by Levy et al. (2015), with CBOW included for reference. Columns are WordSim353 Similarity and Relatedness, Bruni et al.’s MEN, Radinsky et al.’s Mechanical Turk, Luong et al.’s Rare Words, Hill et al.’s SimLex-999, and the Google and MSR analogy tasks. Word similarity tasks report Spearman’s ρ with human annotation; analogy tasks report accuracy. In all cases, larger numbers indicate better performance.

We performed several experiments to evaluate the embeddings produced by Swivel.

Corpora. Following Pennington et al. (2014), we produced 300-dimensional embeddings from an August 2015 Wikipedia dump combined with the Gigaword5 corpus. The corpus was tokenized, lowercased, and split into sentences. Punctuation was discarded, but hyphenated words and contractions were preserved. The resulting training data included 3.3 billion tokens across 89 million sentences. The most frequent 397,312 unigram tokens were used to produce the vocabulary, and the same vocabulary is used for all experiments.

Baselines. In order to ensure a careful comparison, we re-created embeddings using these corpora with the publicly available word2vec (https://code.google.com/p/word2vec) and GloVe (http://nlp.stanford.edu/projects/glove) programs as our baselines.

word2vec was configured to generate skipgram embeddings using five negative samples. We set the window size so that it would include ten tokens to the left of the focus and ten tokens to the right, and ran it for five iterations over the input corpus. Since the least frequent word in the corpus occurs 65 times, training samples the rarest words at least 300 times each. Since the same vocabulary is used for both word and context features, we modified word2vec to emit both word and context embeddings. We experimented with adding each word vector to its corresponding context vector (Pennington et al., 2014); however, best performance was achieved using the word vector alone, as was originally reported by Mikolov et al. (2013a).

GloVe was similarly configured to use its “symmetric” co-occurrence window spanning ten tokens to the left of the focus word and ten tokens to the right. We ran GloVe for 100 training epochs using the default parameter settings for the initial learning rate (0.05), the weighting exponent (α = 0.75), and the weighting function cut-off (x_max = 100). GloVe produces both word and context vectors: unlike word2vec, the sum of the word vector with its corresponding context vector produced slightly better results than the word vector alone. (This was also noted by Pennington et al. (2014).)

Our results for these baselines vary slightly from those reported elsewhere. We speculate that this may be due to differences in corpora, preprocessing, and vocabulary selection, and simply note that this evaluation should at least be internally consistent.

Swivel Training. The unigram vocabulary was used for both row and column features. Co-occurrence was computed by examining ten words to the left and ten words to the right of the focus word. As with GloVe, co-occurrence counts were accumulated using a harmonically scaled window: for example, a token that is three tokens away from the focus was counted as 1/3 of an occurrence. (We experimented with both linear and uniform scaling windows, and neither performed as well.) GloVe and Swivel were therefore trained from identical co-occurrence statistics. Following Levy and Goldberg’s 2014 suggestion that SGNS factors a shifted PMI matrix, we also experimented with shifting the objective PMI value by a small amount. Specifically, Levy and Goldberg (2014) suggest that the SGNS PMI objective is shifted by the log of the number of negative samples drawn from the unigram distribution. Since we’d configured word2vec with five negative samples, we experimented with shifting the PMI objective by log 5 (about 1.61). This did not yield significantly different results than just using the original PMI objective.
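The harmonically scaled window counting described above can be sketched as follows (a toy illustration, not the actual extraction tool): a context token t positions from the focus contributes 1/t of a count.

```python
from collections import Counter

def cooccurrence_counts(tokens, window=10):
    """Harmonically scaled co-occurrence counts within a symmetric window."""
    counts = Counter()
    for i, focus in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                counts[(focus, tokens[j])] += 1.0 / abs(i - j)  # 1/t per pair
    return counts

counts = cooccurrence_counts("the cat sat on the mat".split())
print(counts[("cat", "on")])  # two tokens apart -> 0.5
```

Summing these fractional counts over the corpus yields the co-occurrence matrix from which the PMI objective is computed.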

We trained the model for a million “steps”, where each step trains an individual submatrix shard. Given a vocabulary size of roughly 400,000 words and k = 4096, there are approximately 100 row blocks and 100 column blocks, yielding roughly 10,000 shards overall. Each shard was therefore sampled about 100 times.

We experimented with several different weighting functions of the form b₀ + b₁ x_ij^α to modulate the squared error based on cell frequency, and found that a linear transformation of x_ij^{1/2} (i.e., α = 1/2) yielded good results.

Finally, once the embeddings were produced, we discovered that adding the word vector to its corresponding context vector produced better results than the word vector alone, just as it did with GloVe.

Evaluation. We evaluated the embeddings using the same datasets that were used by Levy et al. (2015). For word similarity, we used WordSim353 (Finkelstein et al., 2001) partitioned into WordSim Similarity and WordSim Relatedness (Zesch et al., 2008; Agirre et al., 2009); Bruni et al.’s 2012 MEN dataset; Radinsky et al.’s 2011 Mechanical Turk dataset; Luong et al.’s 2013 Rare Words dataset; and Hill et al.’s 2014 SimLex-999 dataset. These datasets contain word pairs with human-assigned similarity scores: the word vectors are evaluated by ranking the pairs according to their cosine similarities and measuring the correlation with the human ratings using Spearman’s ρ. Out-of-vocabulary words are ignored.
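This evaluation protocol can be sketched with toy vectors and ratings; the rank correlation below ignores tie handling for brevity:

```python
import numpy as np

emb = {"cat": np.array([1.0, 0.0]),
       "dog": np.array([0.9, 0.1]),
       "car": np.array([0.0, 1.0])}

pairs = [("cat", "dog"), ("cat", "car"), ("dog", "car")]
human = [9.0, 1.0, 2.0]                   # toy human similarity ratings

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def spearman(a, b):
    # Rank both lists, then take the Pearson correlation of the ranks
    # (equivalent to Spearman's rho when there are no ties).
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return float(np.corrcoef(ra, rb)[0, 1])

model = [cosine(emb[a], emb[b]) for a, b in pairs]
print(spearman(model, human))             # ranks agree perfectly here -> 1.0
```

Only the ordering of the pairs matters, which is why a rank correlation is used rather than comparing raw similarity values to raw ratings.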

The analogy tasks present queries of the form “A is to B as C is to X”: the system must predict X from the entire vocabulary. As with Levy et al. (2015), we evaluated Swivel using the MSR and Google datasets (Mikolov et al., 2013b, a). The former contains syntactic analogies (e.g., “good is to best as smart is to smartest”); the latter contains a mix of syntactic and semantic analogies (e.g., “Paris is to France as Tokyo is to Japan”). The evaluation metric is the number of queries for which the embedding that maximizes the cosine similarity is the correct answer. As with Mikolov et al. (2013a), any query terms are discarded from the result set and out-of-vocabulary words are scored as losses.
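The analogy scoring rule can be sketched with a tiny hand-made vocabulary: answer “a is to b as c is to ?” by maximizing cosine similarity to b − a + c, discarding the query terms from the candidates.

```python
import numpy as np

emb = {"man":   np.array([1.0, 0.0, 0.1]),
       "woman": np.array([1.0, 1.0, 0.1]),
       "king":  np.array([0.2, 0.0, 1.0]),
       "queen": np.array([0.2, 1.0, 1.0])}

def analogy(a, b, c):
    """Predict x such that a : b :: c : x, excluding the query terms."""
    target = emb[b] - emb[a] + emb[c]
    target = target / np.linalg.norm(target)
    best, best_sim = None, -np.inf
    for word, v in emb.items():
        if word in (a, b, c):             # query terms are discarded
            continue
        sim = float(target @ (v / np.linalg.norm(v)))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

print(analogy("man", "woman", "king"))  # queen
```

Accuracy on the benchmark is simply the fraction of queries for which this argmax returns the gold answer.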

Results. The results are summarized in Table 2. Embeddings produced by word2vec’s CBOW are also included for reference. As can be seen, Swivel outperforms GloVe, SGNS, and CBOW on both the word similarity and analogy tasks. We also note that, except for the Google analogy task, SGNS outperforms GloVe.

Our hypothesis is that this occurs because both SGNS and Swivel take unobserved co-occurrences into account, but GloVe does not. Swivel incorporates information about unobserved co-occurrences directly, including them in among the predictions and applying the “soft hinge” loss to avoid over-estimating the feature pair’s PMI. SGNS indirectly models unobserved co-occurrences through negative sampling. GloVe, on the other hand, only trains on positive co-occurrence data.

We hypothesize that by not taking the unobserved co-occurrences into account, GloVe is under-constrained: there is no penalty for placing unobserved but unrelated embeddings near to one another. Quantitatively, the fact that both SGNS and Swivel out-perform GloVe by a large margin on Luong et al.’s 2013 Rare Words evaluation seems to support this hypothesis. Inspection of some very rare words (Table 3) shows that, indeed, SGNS and Swivel have produced reasonable neighbors, but GloVe has not.

bootblack (vocabulary rank 393,709)
  SGNS: shoeshiner, newsboy, shoeshine, stage-struck, bartender, bellhop, waiter, housepainter, tinsmith
  GloVe: redbull, 240, align=middle, 18, 119, dannit, concurrence/dissent, 320px, dannitdannit
  Swivel: newsboy, shoeshine, stevedore, bellboy, headwaiter, stowaway, tibbs, mister, tramp

chigger (vocabulary rank 373,844)
  SGNS: chiggers, webworm, hairballs, noctuid, sweetbread, psyllids, rostratus, narrowleaf, pigweed
  GloVe: dannit, dannitdannit, upupidae, bungarus, applause., .774, amolops, maxillaria, paralympic.org
  Swivel: mite, chiggers, mites, batatas, infestation, jigger, infested, mumbo, frog’s

decretal (vocabulary rank 374,123)
  SGNS: decretals, ordinatio, sacerdotalis, constitutiones, theodosianus, canonum, papae, romanae, episcoporum
  GloVe: regesta, agatho, afl.com.au, dannitdannit, dannit, emptores, beatifications, 18, 545
  Swivel: decretals, decretum, apostolicae, sententiae, canonum, unigenitus, collectio, fidei, patristic

tuxedoes (vocabulary rank 396,973)
  SGNS: tuxedos, ballgowns, tuxes, well-cut, cable-knit, open-collared, organdy, high-collared, flouncy
  GloVe: hairnets, dhotis, speedos, loincloths, zekrom, shakos, mortarboards, caftans, nightwear
  Swivel: ballgowns, tuxedos, tuxes, cummerbunds, daywear, bridesmaids’, gowns, strapless, flouncy
Table 3: Nearest neighbors for some very rare words.

To be fair, GloVe was explicitly designed to capture the relative geometry in the embedding space: the intent was to optimize for performance on analogies rather than on word similarity. Nevertheless, we see that word frequency has a marked effect on analogy performance, as well. Figure 4 plots analogy task accuracy against the base-10 log of the mean frequency of the four words involved.

Figure 4: Analogy accuracy as a function of the log mean frequency of the four words.

To produce the plot, we considered both the MSR and Google analogies. For each analogy, we computed the mean frequency of the four words involved, and then bucketed it with other analogies that have similar mean frequencies. Each bucket contains at least 100 analogies.

Notably, Swivel performs better than SGNS at all word frequencies, and better than GloVe on all but the most frequent words. GloVe under-performs SGNS on rare words, but begins to out-perform SGNS as the word frequency increases. We hypothesize that GloVe is fitting the common words at the expense of rare ones.

It is also interesting to note that all algorithms tend to perform poorly on the most frequent words. This is probably because very frequent words (a) tend to appear in many contexts, making it difficult to determine an accurate point representation, and (b) tend to be polysemous, appearing as both verbs and nouns and exhibiting subtle gradations in meaning (e.g., man and time).

5 Discussion

Swivel grew out of a need to build embeddings over larger feature sets and more training data. We wanted an algorithm that could both handle a large amount of data and produce good estimates for both common and rare features.

Statistics vs. Sampling. Like GloVe, Swivel trains from co-occurrence statistics: once the co-occurrence matrix is constructed, training Swivel requires computational resources in proportion to the matrix size. This allows Swivel to handle much more data than can be practically processed with a sampling method like SGNS, which requires training time in proportion to the size of the corpus.

Unobserved Co-occurrences. Our experiments indicate that GloVe pays a performance cost for only training on observed co-occurrences. In particular, the model may produce unstable estimates for rare features since there is no penalty for placing features near one another whose co-occurrence isn’t observed.

Nevertheless, computing values for every pair of features potentially entails significantly more computation than is required by GloVe, whose training complexity is proportional to the number of non-zero entries in the co-occurrence matrix. Swivel mitigates this in two ways.

First, it makes use of vectorized hardware to perform matrix multiplication of thousands of embedding vectors at once. Performing about a dozen matrix multiplications per GPU compute unit per second is typical: we have observed that a single GPU can estimate about 200 million cell values per second for 1024-dimensional embedding vectors.

Second, the blocked matrix shards can be separately processed by several worker machines to allow for coarse-grained parallelism. The block structure amortizes the overhead of transferring embedding parameters to and from the parameter server across millions of individual estimates. We found that Swivel did, in fact, parallelize easily in our environment, and have been able to run experiments that use hundreds of concurrent worker machines.

Piecewise Loss. It seems fruitful to consider the co-occurrence matrix as itself containing estimates rather than point values. A corpus is really just a sample of language, and so a co-occurrence matrix derived from a corpus itself contains samples whose values are uncertain.

We used a weighted piecewise loss function to capture this uncertainty. If a co-occurrence was observed, we can produce a PMI estimate, and we require the model to fit it more or less accurately based on the observed co-occurrence frequency. If a co-occurrence was not observed, we simply require that the model avoid over-estimating a smoothed PMI value. While this works well, it does seem ad hoc: we hope that future investigation can yield a more principled approach.

6 Conclusion

Swivel produces low-dimensional feature embeddings from a co-occurrence matrix. It optimizes an objective that is very similar to that of SGNS and GloVe: the dot product of a word embedding with a context embedding ought to approximate the observed PMI of the two words in the corpus.

Unlike SGNS, Swivel’s computational requirements depend on the size of the co-occurrence matrix, rather than the size of the corpus. This means that it can be applied to much larger corpora.

Unlike GloVe, Swivel explicitly considers all the co-occurrence information – including unobserved co-occurrences – to produce embeddings. In the case of unobserved co-occurrences, a “soft hinge” loss prevents the model from over-estimating PMI. This leads to demonstrably better embeddings for rare features without sacrificing quality for common ones.

Swivel capitalizes on vectorized hardware, and uses block structure to amortize parameter transfer cost and avoid contention. This results in the ability to handle very large co-occurrence matrices in a scalable way that is easy to parallelize.

Acknowledgments

We would like to thank Andrew McCallum, Samy Bengio, and Julian Richardson for their thoughtful comments on this work.

References

  • Agirre et al. (2009) Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Paşca, and Aitor Soroa. A study on similarity and relatedness using distributional and wordnet-based approaches. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 19–27. Association for Computational Linguistics, 2009.
  • Baroni et al. (2014) Marco Baroni, Georgiana Dinu, and Germán Kruszewski. Don’t count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, volume 1, pages 238–247, 2014.
  • Bruni et al. (2012) Elia Bruni, Gemma Boleda, Marco Baroni, and Nam-Khanh Tran. Distributional semantics in technicolor. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 136–145. Association for Computational Linguistics, 2012.
  • Dean et al. (2012) Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Andrew Senior, Paul Tucker, Ke Yang, Quoc V Le, et al. Large scale distributed deep networks. In Advances in Neural Information Processing Systems, pages 1223–1231, 2012.
  • Duchi et al. (2011) John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159, 2011.
  • Finkelstein et al. (2001) Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. Placing search in context: The concept revisited. In Proceedings of the 10th international conference on World Wide Web, pages 406–414. ACM, 2001.
  • Hill et al. (2014) Felix Hill, Roi Reichart, and Anna Korhonen. Simlex-999: Evaluating semantic models with (genuine) similarity estimation. arXiv preprint arXiv:1408.3456, 2014.
  • Hu et al. (2008) Yifan Hu, Yehuda Koren, and Chris Volinsky. Collaborative filtering for implicit feedback datasets. In Eighth IEEE International Conference on Data Mining (ICDM ’08), pages 263–272. IEEE, 2008.
  • Levy and Goldberg (2014) Omer Levy and Yoav Goldberg. Neural word embedding as implicit matrix factorization. In Advances in Neural Information Processing Systems, pages 2177–2185, 2014.
  • Levy et al. (2015) Omer Levy, Yoav Goldberg, and Ido Dagan. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211–225, 2015.
  • Luong et al. (2013) Minh-Thang Luong, Richard Socher, and Christopher D Manning. Better word representations with recursive neural networks for morphology. CoNLL-2013, 104, 2013.
  • Mikolov et al. (2013a) Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013a.
  • Mikolov et al. (2013b) Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. Linguistic regularities in continuous space word representations. In HLT-NAACL, pages 746–751, 2013b.
  • Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global vectors for word representation. In Proceedings of the Empirical Methods in Natural Language Processing (EMNLP 2014), pages 1532–1543, 2014.
  • Radinsky et al. (2011) Kira Radinsky, Eugene Agichtein, Evgeniy Gabrilovich, and Shaul Markovitch. A word at a time: computing word relatedness using temporal semantic analysis. In Proceedings of the 20th international conference on World wide web, pages 337–346. ACM, 2011.
  • Recht et al. (2011) Benjamin Recht, Christopher Re, Stephen Wright, and Feng Niu. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems, pages 693–701, 2011.
  • Zesch et al. (2008) Torsten Zesch, Christof Müller, and Iryna Gurevych. Using wiktionary for computing semantic relatedness. In AAAI, volume 8, pages 861–866, 2008.