Towards a Better Understanding of Predict and Count Models

S. Sathiya Keerthi et al., November 6, 2015

In a recent paper, Levy and Goldberg pointed out an interesting connection between prediction-based word embedding models and count models based on pointwise mutual information. Under certain conditions, they showed that both models end up optimizing equivalent objective functions. This paper explores this connection in more detail and lays out the factors leading to differences between these models. We find that the most relevant differences from an optimization perspective are (i) predict models work in a low-dimensional space where embedding vectors can interact heavily; (ii) since predict models have fewer parameters, they are less prone to overfitting. Motivated by the insights from our analysis, we show how count models can be regularized in a principled manner and provide closed-form solutions for L1 and L2 regularization. Finally, we propose a new embedding model with a convex objective and the additional benefit of being intelligible.

1 Introduction

Distributional semantic models [12], also called count models [1], have been popular in the computational linguistics literature for several decades. In the last few years, however, predict models such as the Skip-Gram model and the Continuous Bag-Of-Words model have become the de facto standard for word modeling [5, 6]. These methods have their origins in neural language modeling [7], where the goal was to improve classic language models. A recent evaluation study [1] suggested that predict models are superior to count models on a range of word similarity tasks. Newer work, however, attributes a large part of the performance difference between count and predict models to a lack of proper hyperparameter optimization [4]. This has prompted the need for a better understanding of the differences between the two types of models.

To this end, Levy and Goldberg [2] made an interesting connection between two key models: a traditional count model based on pointwise mutual information (PMI) and a predict model, namely the skip-gram model with negative sampling (SGNS). The key result is that the SGNS model is equivalent to a shifted version of the PMI method, where all values are shifted by $\log k$, with $k$ the number of negative samples. However, the proof assumes that the input and output dimensions are the same; hence the word vectors will still be of the size of the vocabulary. As we show in our analysis, this assumption has important implications for subsequent steps, such as dimensionality reduction methods like singular value decomposition (SVD).
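
For reference, their result is commonly stated as the following identity between the learned embeddings and corpus statistics (a standard restatement, with $\vec{w}$ and $\vec{c}$ the word and context vectors, $k$ the number of negative samples, and $\#(\cdot)$ the co-occurrence counts defined in Section 2):

$$\vec{w} \cdot \vec{c} \;=\; \mathrm{PMI}(w, c) - \log k \;=\; \log \frac{\#(w, c)\,|D|}{\#(w)\,\#(c)} - \log k.$$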

The aim of this paper is to analyze the connections between count and predict models in a more detailed fashion. In particular, we investigate the differences between SGNS and PMI methods more deeply. We make several observations that aid this understanding. The first useful result is that the Shifted PMI model comes out as an explicit version of SGNS in which the context word vectors are fixed at their one-hot representations. We point out two differences between the Shifted PMI model and the SGNS model from an optimization perspective. First, SGNS usually works in lower dimensions, making it possible for all word vectors to interact. Second, count models usually have more parameters, which may cause them to overfit the data.

We use the insights from our analysis to propose several interesting extensions to classic word embedding models. For example, our analysis allows regularization to be added to PMI methods in a systematic way. We also propose a new convex word vector model, based on CBOW, which offers interpretability and a well-defined training objective.

In short, we draw connections between existing count and prediction models and augment them in several ways. However, in order to fully understand the factors that differentiate various methods, a comprehensive set of experiments is needed. We plan to carry out these experiments as part of future work.

2 Notations

Consider a corpus $D$ that is a sequence of words over a vocabulary $V$. We will use $i$ when referring to the index of a target word and $j$ when referring to the index of a context word. We define the exact notion of context in Section 2.1. Note that $i$ and $j$ are integers taking values from $1$ to $|V|$.

Each word and each context word has a vectorial representation. We will use $\vec w_i$ and $\vec c_j$ to denote the word vector and the context vector corresponding to word $i$ and context word $j$. For a vector $\vec v$, $[\vec v]_m$ will denote the $m$-th component of $\vec v$. Let $W$ and $C$ denote the sets of all word vectors and all context word vectors. One can also think of $W$ and $C$ as matrices whose $i$-th and $j$-th rows are given by $\vec w_i$ and $\vec c_j$ respectively. We will use $W$ and $C$ to refer synonymously to the sets and the matrices.

Let us use $\#$ to denote a count and weighting function. In the simplest case, for example, $\#(i,j)$ could denote the number of occurrences of the (word, context word) pair $(i, j)$ in the corpus $D$. Let us also define $\#(i) = \sum_j \#(i,j)$ and $\#(j) = \sum_i \#(i,j)$, the total counts for word $i$ and context word $j$.

2.1 Defining context

An important design choice in count models is the definition of context. The definition of context determines which co-occurrences are considered when building a model, along with the importance given to each individual co-occurrence. A common way of defining context is based on choosing a window around each occurrence of a word in $D$. There are many options for customizing this window; for example, it can be either symmetric or asymmetric around the word. There can also be weights (e.g., dependent on the relative positions of the word and the context word) associated with each occurrence. One may also include down-weighting of common words, possibly both for the current word and for the context words (Mikolov et al. [6] used down-weighting of common words; it is unclear whether Levy and Goldberg [2] used it). All these choices determine the value of $\#$. Note that, since the weightings can be real-valued, $\#(i,j)$, $\#(i)$ and $\#(j)$ can in general take non-negative real values.

If the window is symmetric around the word and the weighting functions are symmetric as well, then

  1. $\#$ is symmetric, i.e., $\#(i,j) = \#(j,i)$ for all $i, j$, and

  2. $\#(i) = \#(j)$ if $i$ and $j$ denote the same word.

We will refer to this as the symmetry condition. In Section 3.2.1 we will revisit this condition and see that it helps our understanding of the non-uniqueness and interchangeability of word vectors and context word vectors.

Example. Consider an asymmetric window that includes $n_l$ words to the left and $n_r$ words to the right of the word at position $t$. In other words, the context of position $t$ is the set of positions $\{t - n_l, \ldots, t - 1, t + 1, \ldots, t + n_r\}$.

Take one (word, context word) occurrence pair $(w_t, w_{t'})$, where $t'$ lies in the window of $t$. Let $p_t$ be the probability with which the word $w_t$ is chosen for inclusion into the training set. For example, Mikolov et al. [6] suggest the following down-sampling probability:

$$p_t = \sqrt{\delta / f(w_t)}, \qquad (1)$$

where $f(w)$ is the relative frequency of word $w$ in the corpus $D$, and $\delta$ is a threshold (equivalently, $w_t$ is discarded with probability $1 - \sqrt{\delta / f(w_t)}$, as in [6]). Let $q_{t'}$ be the weight associated with $w_{t'}$ as a context word; here again we may want to down-sample frequent words similarly to (1). Let $r_{t,t'}$ be the weight associated with the position of the context word in the context window. Using all these individual weights we can define the overall weight for the pair:

$$\mathrm{weight}(t, t') = p_t \, q_{t'} \, r_{t,t'}. \qquad (2)$$

Accumulating all this information over the entire corpus gives us the total value for each (word, context word) pair:

$$\#(i, j) = \sum_{(t, t'):\; w_t = i,\; w_{t'} = j} \mathrm{weight}(t, t'). \qquad (3)$$

Now $\#(i)$ and $\#(j)$ can be formed from $\#(i,j)$ as described at the beginning of this section.
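
As an illustration of how such weighted counts can be accumulated in practice, the following sketch builds $\#(i,j)$, $\#(i)$ and $\#(j)$ from a tokenized corpus. The specific choices for the down-sampling weight and the positional weight are illustrative assumptions, not the exact functions used above.

```python
from collections import Counter, defaultdict
import math

def cooccurrence_counts(corpus, window=2, threshold=1e-3):
    """Accumulate weighted co-occurrence counts #(i, j), #(i), #(j).

    `corpus` is a list of tokens.  The down-sampling weight p and the
    positional weight r below are illustrative stand-ins for the weights
    of equations (1)-(3), not the exact functions of the paper.
    """
    total = len(corpus)
    freq = Counter(corpus)
    f = {w: c / total for w, c in freq.items()}                 # relative frequency f(w)
    p = {w: min(1.0, math.sqrt(threshold / f[w])) for w in f}   # assumed down-sampling weight

    pair = defaultdict(float)                                   # #(i, j)
    for t, w in enumerate(corpus):
        for off in range(-window, window + 1):
            if off == 0 or not 0 <= t + off < total:
                continue
            c = corpus[t + off]
            r = 1.0 / abs(off)                                  # positional weight, decays with distance
            pair[(w, c)] += p[w] * p[c] * r                     # overall weight of this occurrence

    word = defaultdict(float)                                   # #(i) = sum_j #(i, j)
    ctx = defaultdict(float)                                    # #(j) = sum_i #(i, j)
    for (w, c), v in pair.items():
        word[w] += v
        ctx[c] += v
    return pair, word, ctx

# Example usage on a toy corpus.
counts = cooccurrence_counts("the cat sat on the mat the cat".split())
```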

3 Background

Remember that the goal of this paper is to investigate the differences between count and prediction models. To do this, we start by summarizing two special instances of count and prediction models. We first introduce PMI models as representatives of count models in Section 3.1, and then discuss the Skip-Gram model with Negative Sampling (SGNS) in Section 3.2.

3.1 PMI models

PMI models have been used extensively in distributional semantic models [12] to compute similarities between words. As the name implies, the key quantity in these models is Pointwise Mutual Information (PMI). Its definition and its maximum-likelihood (MLE) estimate from data are given by

$$\mathrm{PMI}(i, j) = \log \frac{P(i, j)}{P(i)\, P(j)} \approx \log \frac{\#(i,j)\, |D|}{\#(i)\, \#(j)}, \qquad (4)$$

where $\#$ is the simple count function and $|D| = \sum_{i,j} \#(i,j)$. For a given word $i$, PMI represents it by forming a vector whose components are $\mathrm{PMI}(i, j)$ for all $j$.

Note that $\mathrm{PMI}(i,j)$ is not defined when $\#(i,j) = 0$. To circumvent this problem, Positive PMI (PPMI) replaces all negative values with zero, i.e., $\mathrm{PPMI}(i,j) = \max(\mathrm{PMI}(i,j), 0)$.

More recently, Levy and Goldberg [2] have defined shifted variants of the PMI and PPMI metrics. Shifted PMI (SPMI) is defined as $\mathrm{SPMI}(i,j) = \mathrm{PMI}(i,j) - \log k$, and Shifted Positive PMI (SPPMI) is defined as $\mathrm{SPPMI}(i,j) = \max(\mathrm{PMI}(i,j) - \log k, 0)$, where $k$ is some positive integer. PMI and PPMI are, respectively, special cases of Shifted PMI and Shifted PPMI with $k$ set to $1$. We will return to Shifted PMI and Shifted PPMI in the next section, where we discuss the equivalence of certain methods.

Since all vectors produced by PMI have the vocabulary size as their dimensionality, dimensionality reduction techniques are often applied to the resulting matrix to decrease memory and computational requirements.
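
For concreteness, the following sketch builds the Shifted Positive PMI matrix from the counts of the previous snippet; the function name and the dictionary-based representation are our own choices.

```python
import numpy as np

def sppmi_matrix(pair, word, ctx, vocab, k=5):
    """Build the Shifted Positive PMI matrix M[i, j] = max(PMI(i, j) - log k, 0).

    `pair`, `word`, `ctx` are the weighted counts #(i, j), #(i), #(j) from the
    previous sketch; `vocab` maps each word to a row/column index.  Cells with
    #(i, j) = 0 stay at zero, which is exactly where the PPMI truncation acts.
    """
    M = np.zeros((len(vocab), len(vocab)))
    D = sum(pair.values())                          # |D|, the total weight mass
    for (w, c), n_wc in pair.items():
        pmi = np.log(n_wc * D / (word[w] * ctx[c]))
        M[vocab[w], vocab[c]] = max(pmi - np.log(k), 0.0)
    return M

# Example usage, continuing from the previous snippet.
pair, word, ctx = counts
vocab = {w: i for i, w in enumerate(word)}
M = sppmi_matrix(pair, word, ctx, vocab, k=5)
```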

3.2 Skip-Gram with Negative Sampling (SGNS)

SGNS [6] is a popular predict model that aims to predict which words occur with a given context word. One can derive the SGNS model from a binary classification setting where a target variable specifies whether a word $i$ occurs with a context word $j$. SGNS tries to solve this task with the logistic loss applied to the input score $s_{ij} = \vec w_i \cdot \vec c_j$.

Remember that our corpus is a sequence of words, $D = (w_1, \ldots, w_{|D|})$. To keep notation simple, here we assume that the context is defined by all the words within a window around each position $t$. Let the corresponding sequence of sets of context words be $C_1, \ldots, C_{|D|}$. To construct the training set for the SGNS model, one forms one training example for each position $t$ and each context word $j \in C_t$. For each such positive example, a set $N_{t,j}$ of $k$ negative context words is chosen at random, and the logistic loss function is applied. This loss is the aggregation of the losses for one positive example and $k$ negative examples. More formally, the loss of one training example is:

$$\ell_{t,j} = \ell(s_{ij}, 1) + \sum_{j' \in N_{t,j}} \ell(s_{ij'}, -1), \quad \text{where } i \text{ is the word at position } t, \qquad (5)$$

where $\ell(s, y)$ is the logistic loss for score $s$ and target $y$. The loss over the entire corpus is then the sum of all individual losses:

$$L = \sum_{t=1}^{|D|} \sum_{j \in C_t} \ell_{t,j}. \qquad (6)$$

Let us refer to this way of writing the training objective (sum over occurrences in the corpus) as the corpus format.

Levy and Goldberg [2] showed that, by accumulating data over the co-occurrences of words and context words, the objective function in (6) can be rewritten as (Levy and Goldberg [2] write the problem as the maximization of a likelihood; hence the objective here and the one in [2] are negatives of each other):

$$L(W, C) = \sum_{i=1}^{|V|} \sum_{j=1}^{|V|} L_{ij}, \qquad (7)$$
$$L_{ij} = \#(i,j)\, \ell(s_{ij}, 1) + \frac{k\, \#(i)\, \#(j)}{|D|}\, \ell(s_{ij}, -1). \qquad (8)$$

Let us refer to this (equivalent) way of writing the training objective as the co-occurrence format. Note that in the co-occurrence format we do not compute a loss for each token occurrence individually; instead, we compute a single loss term for each unique (word, context word) pair.

In (8) we will from now on also consider loss functions other than the logistic loss of the original formulation. We consider general loss functions $\ell(s, y)$, where $y \in \{-1, +1\}$ is the target variable, with the logistic loss being a special case. This generalization allows us to define custom loss functions that can incorporate domain knowledge or other side information. Table 1 lists a set of popular loss functions.

Loss name        $\ell(s, y)$
Logistic         $\log(1 + e^{-ys})$
Squared          $(y - s)^2$
Squared Hinge    $\max(0, 1 - ys)^2$
Hinge            $\max(0, 1 - ys)$
Huber            $(1 - ys)^2$ if $ys \ge -1$; $-4ys$ otherwise
Table 1: A list of popular loss functions $\ell(s, y)$, with target $y \in \{-1, +1\}$.
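
The following sketch implements the losses of Table 1 (assuming targets $y \in \{-1, +1\}$) together with the per-pair objective of (8); it is meant only to make the co-occurrence format concrete, and the weight of the negative part is the expected number of negative samples per pair.

```python
import numpy as np

# Loss functions l(s, y) of Table 1, with target y in {-1, +1}.
def logistic(s, y):
    return np.log1p(np.exp(-y * s))

def squared(s, y):
    return (y - s) ** 2

def squared_hinge(s, y):
    return np.maximum(0.0, 1.0 - y * s) ** 2

def hinge(s, y):
    return np.maximum(0.0, 1.0 - y * s)

def pair_objective(s, n_ij, n_i, n_j, D, k, loss=logistic):
    """Per-pair term of the co-occurrence format (8):
    #(i,j) * l(s, +1) + (k * #(i) * #(j) / |D|) * l(s, -1)."""
    return n_ij * loss(s, +1) + (k * n_i * n_j / D) * loss(s, -1)
```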
Observation 1

Unlike SGNS, not all models can be reduced to the co-occurrence format; an example is the CBOW model [5]. In fact, even for the skip-gram model, the reduction to the co-occurrence format is not possible if we use the traditional softmax or the hierarchical softmax [7] approach to speed up training. In general, models reducible to the co-occurrence format must be based only on the counts of words, of context words, and of their co-occurrences. That property gives them a close relationship to count models [1, 2, 10]. Another model that can be expressed in the co-occurrence form is GloVe [9].

3.2.1 Remarks on symmetry

If we assume symmetric windows for counting co-occurrences, as outlined in Section 2.1, then word and context vectors can be swapped at optimality.

Observation 2

Suppose $(W^*, C^*)$ is a minimizer of $L$ in (7). Then $(C^*, W^*)$ is also a minimizer of $L$; in other words, if at optimality the word vectors and the context word vectors are completely swapped, optimality still holds. Thus, depending on how the numerical optimization of (7) is initialized, $W$ and $C$ can end up being completely swapped.

Similar swap properties hold for other models in the co-occurrence format, e.g., GloVe [9].

Observation 3

Note that Observation 2 does not mean that $W^* = C^*$. If $L$ had been a convex function then, using the fact that any convex combination of optimal solutions is also optimal, one could show the existence of an optimal solution with $W = C$. However, the objective function in (7) is far from convex. Such non-convex objectives are usually associated with optima in which the symmetric components take very different values. More generally, even when the symmetry condition does not hold, this discussion indicates that SGNS can end up at different optima depending on the initialization of the optimization process.

Observation 4

Mikolov et al. [6] derive (7) starting from the problem of predicting a context word given a word. However, when using negative sampling, the symmetry condition implies that we get exactly the same formulation and solution as for a model that predicts a word given a context word. It is useful to note that this comment does not hold if the traditional softmax, the hierarchical softmax [7], or noise-contrastive estimation [8] is used to model probabilities with the associated negative log-likelihood loss.

4 Connecting count and predict models

In this section, we revisit the idea that Levy and Goldberg [2] used to connect PMI models with SGNS, and we extend it. The central step is to solve SGNS explicitly in closed form; we can then see that the explicit solution of SGNS corresponds to the vectors constructed by a Shifted PMI model.

4.1 Solving SGNS in closed form

Here, we extend the analysis of Levy and Goldberg [2] and provide closed-form solutions for SGNS under a broad class of loss functions. As Levy and Goldberg showed, the closed-form solution of the SGNS objective is equivalent to the solution constructed by the Shifted PMI model. In contrast to Levy and Goldberg, we apply an approximate quadratic analysis to the SGNS objective, which gives more detailed insight for a broad class of loss functions.

The central idea of the analysis is to assume that, given a sufficient number of dimensions, each score $s_{ij}$ can be minimized independently of all other scores. Let us define

$$L_{ij}(s) = \#(i,j)\, \ell(s, 1) + \frac{k\, \#(i)\, \#(j)}{|D|}\, \ell(s, -1), \qquad (9)$$

i.e., the per-pair term of (8) viewed as a function of a free scalar $s$. To get $s^*_{ij}$, the minimizer of $L_{ij}$, we solve

$$L'_{ij}(s) = \#(i,j)\, \ell'(s, 1) + \frac{k\, \#(i)\, \#(j)}{|D|}\, \ell'(s, -1) = 0. \qquad (10)$$

The second-order Taylor series of $L_{ij}$ around $s^*_{ij}$ is given by

$$L_{ij}(s) \approx L_{ij}(s^*_{ij}) + L'_{ij}(s^*_{ij})(s - s^*_{ij}) + \tfrac{1}{2} L''_{ij}(s^*_{ij})(s - s^*_{ij})^2 \qquad (11)$$
$$= \#(i,j)\, \ell(s^*_{ij}, 1) + \frac{k\, \#(i)\, \#(j)}{|D|}\, \ell(s^*_{ij}, -1) + \tfrac{1}{2} L''_{ij}(s^*_{ij})(s - s^*_{ij})^2, \qquad (12)$$

since $L'_{ij}(s^*_{ij}) = 0$, which, in terms of $W$ and $C$, is

$$L(W, C) \approx \mathrm{const} + \tfrac{1}{2} \sum_{i,j} L''_{ij}(s^*_{ij})\, (\vec w_i \cdot \vec c_j - s^*_{ij})^2. \qquad (13)$$

Let

$$\lambda_{ij} = L''_{ij}(s^*_{ij}). \qquad (14)$$

For the loss functions listed in Table 1, $\lambda_{ij}$ takes the form

$$\lambda_{ij} = \#(i,j)\, \ell''(s^*_{ij}, 1) + \frac{k\, \#(i)\, \#(j)}{|D|}\, \ell''(s^*_{ij}, -1). \qquad (15)$$

Table 2 gives details for the various loss functions.

Observation 5

When $\#(i,j) = 0$, the first terms in (8), (10) and (12) involving $\#(i,j)$ go away, causing $s^*_{ij}$ to take the extreme value of $-\infty$ for the logistic loss and $-1$ for the other losses. Consider the expressions for $s^*_{ij}$ in Table 2 to verify this. This issue does not arise for SGNS because it operates with reduced-dimensional embeddings of words and context words, in which the information associated with the various (word, context word) pairs interacts heavily. We believe that this is a crucial difference between SGNS and PMI methods.

Loss            $s^*_{ij}$                                                                   PosCondition
Logistic        $\log\dfrac{\#(i,j)\,|D|}{k\,\#(i)\,\#(j)}$                                   $\#(i,j)\,|D| > k\,\#(i)\,\#(j)$
Squared         $\dfrac{\#(i,j)\,|D| - k\,\#(i)\,\#(j)}{\#(i,j)\,|D| + k\,\#(i)\,\#(j)}$      $\#(i,j)\,|D| > k\,\#(i)\,\#(j)$
Squared Hinge   same as Squared                                                              same as Squared
Hinge           $+1$ if $\#(i,j)\,|D| > k\,\#(i)\,\#(j)$; $-1$ otherwise                      NA
Huber           same as Squared                                                              same as Squared
Table 2: Expressions for $s^*_{ij}$ and the PosCondition for the various losses. PosCondition refers to the condition under which $s^*_{ij} > 0$. For the logistic loss, $s^*_{ij} = \mathrm{SPMI}(i,j)$.
Observation 6

For building semantic representations that are used for computing similarity, it is often undesirable to have negative components in the word vectors. PosCondition in the last column of Table 2 checks when the components are positive. This condition turns out to be the same for all losses, which is interesting. There is no difference between the squared loss, the squared hinge loss and the Huber loss: in the interval $[-1, 1]$ all these losses are identical, and the minimizer of $L_{ij}$ always occurs in $[-1, 1]$. These three losses have an expression for $s^*_{ij}$ that is quite different from that of the logistic loss. The hinge loss prefers to set $s^*_{ij}$ at the extremes: $+1$ or $-1$.
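
A quick numerical check of the kind of closed-form minimizers discussed above is given below; the two reference formulas are the ones derived here for the logistic and squared losses with targets $\pm 1$, and the counts are made-up numbers.

```python
import numpy as np

def argmin_on_grid(objective, lo=-10.0, hi=10.0, n=200001):
    s = np.linspace(lo, hi, n)
    return s[np.argmin(objective(s))]

# Made-up counts for a single (i, j) pair.
n_ij, n_i, n_j, D, k = 12.0, 300.0, 500.0, 1.0e5, 5
a, b = n_ij, k * n_i * n_j / D            # weights of the positive and negative parts

# Logistic loss: the minimizer should equal log(a / b), i.e. SPMI(i, j).
obj_logistic = lambda s: a * np.log1p(np.exp(-s)) + b * np.log1p(np.exp(s))
print(argmin_on_grid(obj_logistic), np.log(a / b))

# Squared loss: the minimizer should equal (a - b) / (a + b), which lies in [-1, 1].
obj_squared = lambda s: a * (1 - s) ** 2 + b * (-1 - s) ** 2
print(argmin_on_grid(obj_squared), (a - b) / (a + b))
```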

4.2 Connecting SGNS and Shifted PMI

One of the key results of Levy and Goldberg [2] is that the vectors created by Shifted PMI are a solution to the SGNS objective. We use the quadratic analysis of Section 4.1 to state this more cleanly.

Let $e_j$ denote the unit vector in $\mathbb{R}^{|V|}$ whose $j$-th component is $1$ and whose other components are zero. Suppose we use a one-hot representation for the context vectors, i.e., we fix $\vec c_j = e_j$ for all $j$ (with embedding dimension $d = |V|$). Thus, we are fixing $C = I$.

Observation 7

Suppose we fix each context vector to the one-hot representation given above. Then only $W$ remains as the set of variables, and the following hold.

  (a) The minimizer of (8) is given by $\vec w_i = \vec s^*_i$, where $\vec s^*_i$ is the vector with $s^*_{ij}$, $j = 1, \ldots, |V|$, as components.

  (b) For the logistic loss, $\vec s^*_i$ is the Shifted PMI representation of word $i$ as defined in Section 3.1.

  (c) Also, $W = \{\vec s^*_i\}$ and $C = I$ together form an optimal solution of SGNS.

In other words, we have a closed-form solution for SGNS (assuming $d = |V|$). Though Levy and Goldberg [2] do not mention the above construction, it is a simple observation that follows easily from their analysis.

Proof of Observation 7. Let us take one pair $(i, j)$. By the way we defined $\vec c_j$, we have $s_{ij} = \vec w_i \cdot e_j = [\vec w_i]_j$. In the objective $L$ given by (7), the variable $[\vec w_i]_j$ occurs only in the term $L_{ij}$. Therefore, the minimizer is $[\vec w_i]_j = s^*_{ij}$, which proves part (a) of Observation 7. Part (c) follows from the fact that the pair $(W, C)$ constructed above is such that $L_{ij}$ is minimized for every $(i, j)$ pair, and it is not possible to do better than that.
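
The construction of Observation 7 can be written down directly; the sketch below fixes the context vectors to one-hot vectors and fills each word vector with the corresponding Shifted PMI values (logistic loss), using the count dictionaries from Section 2.

```python
import numpy as np

def spmi_solution(pair, word, ctx, vocab, k=5):
    """Observation 7 construction (a sketch): fix each context vector to the
    one-hot vector e_j and set each word vector to its row of the Shifted PMI
    matrix.  Because c_j is one-hot, the score w_i . c_j is simply W[i, j],
    i.e. the per-pair minimizer s*_ij for the logistic loss.  Unobserved pairs
    get -inf entries, which is the practical problem Shifted PPMI removes.
    """
    V = len(vocab)
    D = sum(pair.values())
    C = np.eye(V)                        # fixed one-hot context vectors
    W = np.full((V, V), -np.inf)         # word vectors = rows of the SPMI matrix
    for (w, c), n_wc in pair.items():
        W[vocab[w], vocab[c]] = np.log(n_wc * D / (word[w] * ctx[c])) - np.log(k)
    return W, C
```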

5 Count and predict models: differences

Empirically, there seems to be evidence that predict and count models perform differently (e.g., [1]). This is interesting, since they all consider the same input data, namely the co-occurrences of words in text. What are the reasons for this? Although the previous section pointed out a strong connection between SGNS and PMI methods (with Observation 7 even indicating a near-equivalence), we believe there are two main differences between the two kinds of methods, pertaining to (a) the dimension of the embeddings, and (b) $C$ being fixed at the one-hot representation. Let us now expand on these two factors.

Observation 8

Dimension of embeddings. As already mentioned in Observation 5, the small embedding dimension used by SGNS for words can be crucial for learning good word vectors; for example, co-occurring words can influence each other's representations. The full dimension used by PMI methods does not allow this to happen; the full dimension also has the disadvantage of suffering from overfitting due to an excessive number of variables.

Observation 9

One-hot representation for $C$. Similar to the previous observation, there is a difference in which variables are optimized. SGNS operates with both $W$ and $C$ as variables. As shown in Section 4.2, PMI methods, on the other hand, correspond to using a fixed one-hot representation for $C$. An important consequence of such a representation is that it does not allow closely related context words to influence each other. Future work should empirically investigate the role of this factor.

5.1 Further differences in Shifted PPMI

Recall that the solution for SGNS given by Shifted PMI is unusable in practice, since it has entries that are $-\infty$. Also, the Shifted PMI solution has a high number of dimensions, which might not be desirable in practice. Levy and Goldberg present two heuristics to remedy these problems. First, they propose omitting negative terms from the objective function to make the solution feasible. Second, they suggest a subsequent SVD step to reduce the dimensionality of the Shifted PPMI matrix. Below we discuss each of these heuristics and its consequences in more detail.

5.1.1 Omitting terms

Since in practice we cannot work with vectors that have $-\infty$ entries, Levy and Goldberg propose Shifted PPMI instead of Shifted PMI to remedy this problem. Shifted PPMI corresponds to leaving out all terms from (7) that have negative Shifted PMI values. This is equivalent to a modified SGNS method in which, during negative sampling, examples not satisfying the PosCondition, i.e., those with $\#(i,j)\,|D| \le k\,\#(i)\,\#(j)$, are left out.

There are two issues with the above approach. First, we can no longer guarantee optimality for this solution. Second, it also seems inconsistent with the main idea behind negative sampling, which is to sample from unobserved pairs. Levy and Goldberg [2] make the following statement in the second paragraph of Section 5.1 of their paper: "We observe that Shifted PPMI is indeed a near-perfect approximation of the optimal solution, even though it discards a lot of information when considering only positive cells." However, this does not explain the role of the discarded terms. In particular, when training SGNS with a low number of embedding dimensions, discarding those terms could be of real importance.

5.1.2 Applying SVD

Levy and Goldberg also propose applying SVD to the SPPMI matrix in order to obtain low-dimensional embeddings. If we follow this step, we again lose some of the optimality we had with the Shifted PMI solution. More formally, consider the SPPMI matrix $M$ whose $(i,j)$-th entry is $\mathrm{SPPMI}(i,j)$. To form word vectors of dimension $d$ lower than $|V|$, Levy and Goldberg suggest applying SVD to the matrix $M$. Let $M = U \Sigma V^T$ denote the SVD. If one is interested in an embedding of dimension $d$, the best rank-$d$ approximation of $M$, $M_d = U_d \Sigma_d V_d^T$, is used. To form word vectors, one can use either $W = U_d \Sigma_d$ or the "symmetric version", $W^{\mathrm{sym}} = U_d \Sigma_d^{1/2}$.
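
A sketch of this SVD step is shown below; `symmetric=True` corresponds to the symmetric variant $U_d \Sigma_d^{1/2}$ and `symmetric=False` to $U_d \Sigma_d$.

```python
import numpy as np

def svd_embeddings(M, d, symmetric=True):
    """Truncated SVD of an SPPMI matrix M (a sketch of the heuristic above).

    With symmetric=False the word vectors are U_d * Sigma_d, which preserves
    the dot products of the rank-d approximation of M; with symmetric=True
    they are U_d * sqrt(Sigma_d), the 'symmetric' variant, which does not.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    U_d, s_d = U[:, :d], s[:d]
    return U_d * np.sqrt(s_d) if symmetric else U_d * s_d
```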

Observation 10

Levy and Goldberg [2] propose that the SVD be done on $M$ or one of its variants (they recommend using the matrix corresponding to Shifted PPMI). However, if remaining faithful to SGNS is the aim, (13) indicates that the weighting term $\lambda_{ij}$ should also be included and that we should solve

$$\min_{W, C}\; \sum_{i,j} \lambda_{ij}\, (\vec w_i \cdot \vec c_j - s^*_{ij})^2. \qquad (16)$$

It is non-trivial to solve this weighted low-rank approximation problem; an SVD-based approach will not work.

Let us now look at the limiting full-dimensional case, i.e., $d = |V|$, to point out some relations and differences among the methods in the PMI class.

Observation 11

We can use the full SVD of $M$ and define word vectors $W = U \Sigma$ and $W^{\mathrm{sym}} = U \Sigma^{1/2}$. Note that these word vectors are not the same as those of Shifted PPMI, which uses $M$ itself as word vectors. However, because $W W^T = U \Sigma^2 U^T = M M^T$, the dot product between any two word vectors is identical for $W$ and $M$. On the other hand, in general, $W^{\mathrm{sym}} (W^{\mathrm{sym}})^T = U \Sigma U^T \neq M M^T$. What this means is that SVD is consistent with Shifted PPMI, but Symmetric SVD is not consistent with Shifted PPMI.

Levy and Goldberg [2] recommend Shifted PPMI and refer to the spectral word vectors for it as SVD and Symmetric SVD. Their experiments showed the symmetric version to yield better results than SGNS.

5.2 Summary

Figure 1: Differences between the various methods. Here "discarding" refers to leaving out certain negative examples that satisfy some condition, after they have been created.

The above subsections went into various ways of connecting SGNS and PMI methods and also brought out various differences. Figure 1 gives a rough and quick view of the various morphings of SGNS into different PMI methods. The use of a small embedding dimension by SGNS, as opposed to the full-dimensional one-hot representation used by PMI methods, is probably the most important difference. The discarding of certain negative examples, the approximation of the original non-linear objective by a quadratic, and the leaving out of the weighting factors $\lambda_{ij}$ from the quadratic objective are additional differences. Carefully designed experiments are needed to understand the individual effect of each difference on the quality of the resulting word vectors. The various differences also imply that, even though the methods at the two ends of the figure involve low-dimensional embeddings, they could be vastly different, for the various reasons given in this section.

6 Extensions

In the following two subsections, we propose two extensions to the basic PMI model. First, we show how to add regularization to PMI models and give explicit solutions for $L_2$ and $L_1$ regularization. We hope that regularization will help with issues caused by data sparsity. After that, we propose a new convex model for word embeddings that is not only easy to learn, but also yields intuitive word vectors, since each dimension corresponds to exactly one context word.

6.1 Adding regularization to PMI methods

Let us continue with the formulation of Section 4.1 and add regularization. If overfitting is one of the causes of the inferior performance of PMI methods compared to SGNS, then regularization should help improve performance. Consider the decoupled determination of $s_{ij}$. Our modified objective is now

$$\min_{s_{ij}}\; L_{ij}(s_{ij}) + R(s_{ij}),$$

where $R$ is the regularization term. As we argued in Section 4.1, each $s_{ij}$ is decoupled from the other variables. Hence, we can solve each one-dimensional optimization problem in isolation, even with regularization added. We now derive closed-form solutions for $L_2$ and $L_1$ regularization.

6.1.1 $L_2$ regularization

Figure 2: The determination of the $L_2$-regularized solution. Note that, even though the curve is a nonlinear function, the straight line from A to B approximates that part of the curve well.

We can down-weight frequently occurring words and context words and add a standard $L_2$ regularizer as follows:

(17)

We will discuss the $L_1$ regularizer later. Let us also focus only on the logistic loss. After dividing through by a normalization constant, we get

(18)

Setting the derivative of the objective to zero and simplifying, we find that $s^*_{ij}$ is the solution of

(19)

The line from A to B in Figure 2 approximates the nonlinear term in (19) nicely in the region where the optimal $s_{ij}$ lies. The equation of this line is given by

(20)

Solving for $s_{ij}$, we get the optimal value as

(21)

6.1.2 $L_1$ regularization

Similarly, it is easy to work out closed-form expressions for $s^*_{ij}$ under $L_1$ regularization. In this case, many of the optimal values go to zero: the larger the value of the regularization constant $\gamma$, the larger the number of zeros. There are three cases to consider. For $L_1$ regularization, the value $g_0$, the derivative of the loss term at $s_{ij} = 0$, plays a key role.

Case 1. $|g_0| \le \gamma$. For this case it is easy to check that, at $s_{ij} = 0$, the left-hand derivative of the $L_1$-regularized objective is non-positive and the right-hand derivative is non-negative. Hence $s^*_{ij} = 0$ is the optimal solution.

Case 2. $g_0 < -\gamma$. For this case, the right-hand derivative at $s_{ij} = 0$ is negative, and so the optimal value is positive. The optimum is found by solving the stationarity condition for $s_{ij} > 0$, which yields

(22)

Case 3. $g_0 > \gamma$. For this case, the left-hand derivative at $s_{ij} = 0$ is positive, and so the optimal value is negative. The optimum is found by solving the stationarity condition for $s_{ij} < 0$, which yields

(23)

6.1.3 Discussion

Adding regularization to count models has two benefits. First, while the non-regularized solution (see Table 2; for the logistic loss it is the same as SPMI) is unusable because $s^*_{ij}$ goes to an extreme value when $\#(i,j) = 0$, regularized solutions are always well defined. Second, we expect regularization to help if overfitting is degrading the performance of PMI models. Future work is needed to empirically study and validate the usefulness of regularization in count models.
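
Since the closed forms above rely on the linear approximation tied to Figure 2, the sketch below instead solves each decoupled one-dimensional regularized problem by a simple grid search; the weights and the form of the regularizer are assumptions made for illustration.

```python
import numpy as np

def regularized_score(n_ij, n_i, n_j, D, k, gamma, penalty="l2"):
    """Numerically minimize the decoupled per-pair objective
        #(i,j) l(s, +1) + (k #(i) #(j) / |D|) l(s, -1) + gamma * penalty(s)
    for the logistic loss, by a simple grid search.  The paper derives
    (approximate) closed forms instead; this only exposes the structure."""
    a, b = n_ij, k * n_i * n_j / D
    s = np.linspace(-20.0, 20.0, 400001)
    obj = a * np.log1p(np.exp(-s)) + b * np.log1p(np.exp(s))
    obj = obj + (gamma * s ** 2 if penalty == "l2" else gamma * np.abs(s))
    return s[np.argmin(obj)]

# With L1 regularization, a large enough gamma drives the score exactly to zero.
print(regularized_score(3.0, 300.0, 500.0, 1.0e5, 5, gamma=0.5, penalty="l1"))
print(regularized_score(3.0, 300.0, 500.0, 1.0e5, 5, gamma=5.0, penalty="l1"))
```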

6.2 A convex formulation for word vectors

Motivated by the analysis in Section 4.1, we develop a new and simple convex formulation for determining word vectors. Similar to the skip-gram model, we phrase our model as the task of predicting a word conditioned on its context. However, instead of learning representations for both words and context words, we fix the context vectors to a one-hot representation. This makes our objective function convex. Another advantage is that the resulting models are intelligible: each entry of a word vector refers to the weight given to one context word. Also, to obtain compact representations, we use $L_1$ regularization on the word vectors instead of learning embeddings in lower-dimensional spaces as traditional predict models do.

Let us describe the formulation in some more detail. Let $m$ denote the dimension of the vector representation of the context. Depending on the context modeling employed, $m$ can be one to several times $|V|$. Let $z \in \mathbb{R}^m$ denote the context representation; $z$ is a function of the context $c$, written as $z(c)$. The weight vectors $\vec w_i$ are also points in $\mathbb{R}^m$. The score of a positive example $(i, z)$ is $\vec w_i \cdot z$. Remember that the input corpus is a sequence of words, $D = (w_1, \ldots, w_{|D|})$. Let the corresponding sequence of contexts be $z_1, \ldots, z_{|D|}$. We solve the following minimization problem.

$$\min_{W}\; \sum_{t=1}^{|D|} L_t(W; z_t) \;+\; \gamma \sum_{i=1}^{|V|} \|\vec w_i\|_1 \qquad (24)$$

In the above, $\gamma$ is a regularization constant (one can also think of schemes for making $\gamma$ dependent on $i$, for example, making it dependent on the number of context words to which word $i$ is attached), which can be tuned to balance the sparsity of $W$ against performance; the term $L_t$ includes the effect of positive as well as negative examples (see (25) and (26) below). Other forms of regularization are also possible, for example a combination of $L_1$ and $L_2$ regularization (a generalized elastic net).

Context. We can define context in several ways. Here are a few possibilities:

  1. A simple way is to have a context representation $z \in \mathbb{R}^{|V|}$, with each dimension corresponding to one word in the vocabulary $V$. Similar to the SGNS model, we only assume individual interactions between words and context words, i.e., each $z_t$ contains exactly one non-zero entry. This way, we can rewrite the objective in the co-occurrence form and obtain all word vectors in closed form by accumulating the statistics for each (word, context word) pair.

  2. We can do better than option 1 above and pair a word with a whole window of context words simultaneously, i.e., $z_t$ can now have multiple non-zero entries. A bag-of-words representation can be used to represent the context input using the context words. For example, we can set $[z_t]_j = \beta$ if context word $j$ occurs in the context window and zero otherwise, where $\beta$ is some weighting that is a function of how far the occurrence of $j$ is from the word at position $t$.

  3. If we want to make use of word order, we can define context inputs whose dimension is the context window length times $|V|$: one block of $|V|$ inputs is used for each context position, and the blocks are concatenated in the proper order of occurrence in the window.

  4. Depending on the purpose for which the word vectors are developed, we can also use dependency-tree-based context features [3]. Again, we can use one-hot representations of these features.

Learning. Except for option 1, aggregating at the level of individual examples (to convert the objective into the co-occurrence form (7)), as well as forming a closed-form solution, is in general infeasible. This means we have to train our model using the objective in corpus format. To do the training, we can choose from various optimization strategies. We can employ: (a) a proper softmax over all words; (b) hierarchical softmax [7]; (c) noise-contrastive estimation (NCE) [8]; or (d) negative sampling as in SGNS [5].

We now present examples for two of the four optimization strategies mentioned above. With softmax, the term $L_t$ in (24) is

$$L_t = -\vec w_{i_t} \cdot z_t + \log \sum_{i=1}^{|V|} \exp(\vec w_i \cdot z_t), \qquad (25)$$

where $i_t$ denotes the word at position $t$. Let us now consider optimization via negative sampling. Let $N_t$ denote a randomly sampled set of negative words for position $t$. Then

$$L_t = \ell(\vec w_{i_t} \cdot z_t,\, 1) + \sum_{i' \in N_t} \ell(\vec w_{i'} \cdot z_t,\, -1), \qquad (26)$$

where $\ell$ is the logistic loss.
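
A minimal sketch of training this convex model with negative sampling (option (d)) and $L_1$ regularization is given below; the bag-of-words context of option 2, plain SGD with a soft-thresholding step, and all constants are our own illustrative choices, not the paper's prescription.

```python
import numpy as np

def train_convex(corpus, vocab, window=2, k=5, gamma=1e-4, lr=0.05, epochs=2, seed=0):
    """Minimal sketch of the convex model of Section 6.2, trained with negative
    sampling and L1 regularization."""
    rng = np.random.default_rng(seed)
    V = len(vocab)
    W = np.zeros((V, V))                     # one weight vector per word, in R^V
    ids = np.array([vocab[w] for w in corpus])

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    for _ in range(epochs):
        for t, i in enumerate(ids):
            lo, hi = max(0, t - window), min(len(ids), t + window + 1)
            ctx = np.concatenate([ids[lo:t], ids[t + 1:hi]])
            if ctx.size == 0:
                continue
            z = np.zeros(V)
            np.add.at(z, ctx, 1.0)           # bag-of-words context representation z_t
            # One positive example (the word at position t) and k random negatives.
            for word, y in [(i, 1.0)] + [(j, -1.0) for j in rng.integers(0, V, k)]:
                grad = -y * sigmoid(-y * (W[word] @ z)) * z     # logistic-loss gradient
                W[word] -= lr * grad
                # L1 proximal (soft-thresholding) step to encourage sparsity.
                W[word] = np.sign(W[word]) * np.maximum(np.abs(W[word]) - lr * gamma, 0.0)
    return W

# Example usage on a toy corpus.
tokens = "the cat sat on the mat the cat".split()
vocab = {w: idx for idx, w in enumerate(dict.fromkeys(tokens))}
W = train_convex(tokens, vocab)
```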

Observation 12

One of the advantages of the convex formulation described above is interpretability. If the $j$-th component of some $\vec w_i$ is large and positive, it can be directly translated into the role played by the $j$-th context term for word $i$. This direct interpretability also means that domain knowledge may be easy to infuse into the formulation by specifying constraints and extra regularization on the weights.

Observation 13

The model developed in this section is in between conventional low-dimensional word embeddings (no interpretability, allows for higher-order effects) and distributional word representations (full interpretability, no higher-order effects). Here, the term "higher-order effects" refers to the effects (influences) that occur during the joint optimization of context and word representations; e.g., two words can be made similar because they occur in contexts that are themselves similar.

7 Conclusion

In this report, we showed how to explicitly solve the SGNS objective for a broad range of loss functions. This step allowed us to connect Shifted PMI models to SGNS models under general loss functions. Furthermore, we pointed out two important differences between the Shifted PMI model and the SGNS model. First, in the SGNS model, far fewer embedding dimensions are used in practice, making the SGNS model less prone to overfitting. Second, in the PMI model the context vectors are fixed; hence, there is also no interaction between context vectors and word vectors.

Finally, we proposed two extensions to existing models. First, we showed how we can incorporate regularization into PMI models to alleviate overfitting. Second, we presented a new embedding model that not only has a convex objective, but also results in intelligible embeddings. Future work is needed to empirically validate our proposed methods and extensions.

References

  • [1] M. Baroni, G. Dinu, and G. Kruszewski. Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. ACL, 2014.
  • [2] O. Levy and Y. Goldberg. Neural word embedding as implicit matrix factorization. NIPS, 2014.
  • [3] O. Levy and Y. Goldberg. Linguistic regularities in sparse and explicit word representations. CoNLL, 2014.
  • [4] O. Levy, Y. Goldberg, and I. Dagan. Improving distributional similarity with lessons learned from word embeddings. TACL, 2015.
  • [5] T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781, 2013.
  • [6] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. NIPS, 2013.
  • [7] A. Mnih and G. Hinton. A scalable hierarchical distributed language model. NIPS, 2009.
  • [8] A. Mnih and K. Kavukcuoglu. Learning word embeddings efficiently with noise-contrastive estimation. NIPS, 2013.
  • [9] J. Pennington, R. Socher, and C. D. Manning. GloVe: Global vectors for word representation. EMNLP, 2014.
  • [10] T. Shi and Z. Liu. Linking GloVe with word2vec. arXiv:1411.5595, 2014.
  • [11] K. Stratos, M. Collins, and D. Hsu. Model-based word embeddings from decompositions of count matrices. ACL, 2015.
  • [12] P. Turney and P. Pantel. From frequency to meaning: Vector space models of semantics. JAIR, 2010.