Learning the Dimensionality of Word Embeddings

11/17/2015 · Eric Nalisnick et al. · University of California, Irvine; Princeton University

We describe a method for learning word embeddings with data-dependent dimensionality. Our Stochastic Dimensionality Skip-Gram (SD-SG) and Stochastic Dimensionality Continuous Bag-of-Words (SD-CBOW) are nonparametric analogs of Mikolov et al.'s (2013) well-known 'word2vec' models. Vector dimensionality is made dynamic by employing techniques used by Côté & Larochelle (2016) to define an RBM with an infinite number of hidden units. We show qualitatively and quantitatively that SD-SG and SD-CBOW are competitive with their fixed-dimension counterparts while providing a distribution over embedding dimensionalities, which offers a window into how semantics distribute across dimensions.

1 Introduction

Word Embeddings (WEs) (Bengio et al., 2003; Mnih and Hinton, 2009; Turian et al., 2010; Mikolov et al., 2013) have received widespread attention for their ability to capture surprisingly detailed semantic information without supervision. However, despite their success, WEs still have deficiencies. One flaw is that the embedding dimensionality must be set by the user and thus requires some form of cross-validation. Yet, this issue is not just one of implementation. Words naturally vary in their semantic complexity, and since vector dimensionality is standardized across the vocabulary, it is difficult to allocate an appropriate number of parameters to each word. For instance, the meaning of the word race varies with context (e.g. a competition vs. an anthropological classification), but the meaning of regatta is rather specific and invariant. It seems unlikely that the representations of race and regatta could contain the same number of parameters without one overfitting or underfitting.

To better capture the semantic variability of words, we propose a novel extension to the popular Skip-Gram and Continuous Bag-of-Words models (Mikolov et al., 2013) that allows vectors to have stochastic, data-dependent dimensionality. By employing the same mathematical tools that allow the definition of an Infinite Restricted Boltzmann Machine (Côté and Larochelle, 2016), we define two log-bilinear energy-based models named Stochastic Dimensionality Skip-Gram (SD-SG) and Stochastic Dimensionality Continuous Bag-of-Words (SD-CBOW) after their fixed-dimensional counterparts. During training, SD-SG and SD-CBOW allow word representations to grow naturally based on how well they can predict their context. This behavior, among other things, enables the vectors of specific words to use few dimensions (since their context is reliably predictable) and the vectors of vague or polysemous words to elongate to capture as wide a semantic range as needed. As far as we are aware, this is the first word embedding method that allows vector dimensionality to be learned and to vary across words.

2 Fixed Dimension Word Embeddings

We first review the original Skip-Gram (SG) and Continuous Bag-of-Words (CBOW) architectures (Mikolov et al., 2013) before describing our novel extensions. In the following model definitions, let $\mathbf{w}_i \in \mathbb{R}^{d}$ be a $d$-dimensional, real-valued vector representing the $i$th input word $w_i$, and let $\mathbf{c}_j$ be a vector representing the $j$th context word $c_j$ appearing in a $2K$-sized window around an instance of $w_i$ in some training corpus $\mathcal{D}$.

2.1 Skip-Gram

SG learns a word’s embedding via maximizing the log probability of that word’s context (i.e. the words occurring within a fixed-sized window around the input word). Training the SG model reduces to maximizing the following objective function:

$$\mathcal{L}_{SG} = \sum_{w_i \in \mathcal{D}} \sum_{\substack{-K \le j \le K \\ j \neq 0}} \log p(c_{i+j} \mid w_i) = \sum_{w_i \in \mathcal{D}} \sum_{\substack{-K \le j \le K \\ j \neq 0}} \log \frac{\exp\{\mathbf{w}_i^{T}\mathbf{c}_{i+j}\}}{\sum_{v=1}^{V_c} \exp\{\mathbf{w}_i^{T}\mathbf{c}_{v}\}} \qquad (1)$$

where $V_c$ is the size of the context vocabulary. Stochastic gradient descent is used to update not only $\mathbf{w}_i$ but also $\mathbf{c}_{i+j}$ and $\mathbf{c}_{v}$. A hierarchical softmax or negative sampling is commonly used to approximate the normalization term in the denominator (Mikolov et al., 2013).
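As a concrete illustration of Equation 1, the following NumPy sketch computes the exact softmax term for a single word-context pair. The toy vocabulary sizes, array names, and the helper function are our own illustrative choices, not part of the original word2vec implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
V_w, V_c, d = 1000, 1000, 50               # toy vocabulary sizes and embedding dimension
W = rng.normal(scale=0.1, size=(V_w, d))   # input-word vectors w_i
C = rng.normal(scale=0.1, size=(V_c, d))   # context-word vectors c_j

def sg_log_prob(w_idx, c_idx):
    """log p(c | w) under the Skip-Gram softmax of Eq. 1, computed without approximation."""
    scores = C @ W[w_idx]                  # inner products with every context word
    log_norm = np.log(np.exp(scores - scores.max()).sum()) + scores.max()  # stable log-sum-exp
    return scores[c_idx] - log_norm

print(sg_log_prob(3, 17))                  # one term of the SG objective
```

In practice the denominator would be approximated with negative sampling or a hierarchical softmax, as noted above; the exact softmax is shown only to keep the sketch short.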

2.2 Continuous Bag-of-Words

CBOW can be viewed as the inverse of SG: the context words $c_{i-K}, \ldots, c_{i+K}$ serve as input in the prediction of a center word $w_i$. The CBOW objective function is then written as

$$\mathcal{L}_{CBOW} = \sum_{w_i \in \mathcal{D}} \log p(w_i \mid c_{i-K}, \ldots, c_{i+K}) = \sum_{w_i \in \mathcal{D}} \log \frac{\exp\{\bar{\mathbf{c}}_{i}^{T}\mathbf{w}_{i}\}}{\sum_{v=1}^{V_w} \exp\{\bar{\mathbf{c}}_{i}^{T}\mathbf{w}_{v}\}} \qquad (2)$$

where $\bar{\mathbf{c}}_{i} = \frac{1}{2K}\sum_{-K \le j \le K,\, j \neq 0} \mathbf{c}_{i+j}$ is the averaged context vector, $V_w$ is the size of the word vocabulary, and $K$ is defined as above for SG. Again, the denominator is approximated via negative sampling or a hierarchical softmax.
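A matching, self-contained sketch for Equation 2: the context vectors are averaged before scoring every candidate center word. The averaging convention and toy setup are our illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
V_w, d = 1000, 50
W = rng.normal(scale=0.1, size=(V_w, d))   # center-word vectors
C = rng.normal(scale=0.1, size=(V_w, d))   # context-word vectors

def cbow_log_prob(context_idxs, w_idx):
    """log p(w | context) under the CBOW softmax of Eq. 2, with an averaged context vector."""
    c_bar = C[context_idxs].mean(axis=0)   # average the 2K context-word vectors
    scores = W @ c_bar                     # score every candidate center word
    log_norm = np.log(np.exp(scores - scores.max()).sum()) + scores.max()
    return scores[w_idx] - log_norm

print(cbow_log_prob([1, 2, 4, 5], 3))
```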

3 Word Embeddings with Stochastic Dimensionality

Having introduced the fixed-dimensional embedding techniques SG and CBOW, we now define extensions that make vector dimensionality a random variable. In order for embedding growth to be unconstrained, word vectors $\mathbf{w}$ and context vectors $\mathbf{c}$ are considered infinite dimensional and are initialized such that the first few dimensions are non-zero and the rest are zero.

3.1 Stochastic Dimensionality Skip-Gram

Define the Stochastic Dimensionality Skip-Gram (SD-SG) model to be the following joint Gibbs distribution over $w_i$, $c_j$, and a random positive integer $z$ denoting the maximum index over which to compute the embedding inner product: $p(w_i, c_j, z) = \frac{1}{Z} e^{-E(w_i, c_j, z)}$, where $Z = \sum_{w}\sum_{c}\sum_{z=1}^{\infty} e^{-E(w, c, z)}$, also known as the partition function. Define the energy function as $E(w_i, c_j, z) = \sum_{k=1}^{z}\left[-w_{i,k} c_{j,k} + \lambda\left(w_{i,k}^{2} + c_{j,k}^{2}\right) + \log a\right]$, where $a > 1$ and $\lambda$ is a weight on the L2 penalty. Notice that SD-SG is essentially the same as SG except for three modifications: (1) the inner product index $z$ is a random variable, (2) an L2 term is introduced to penalize vector length, and (3) an additional $\log a$ term is introduced to ensure the infinite sum over dimensions results in a convergent geometric series (Côté and Larochelle, 2016). This convergence, in turn, yields a finite partition function; see the Appendix for the underlying assumptions and derivation.
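To make the energy function concrete, here is a minimal NumPy sketch of $E(w, c, z)$ as reconstructed above (random index $z$, an L2 penalty weighted by `lam`, and a per-dimension $\log a$ term). The exact form and the constants `a` and `lam` reflect our reading of the text, not the authors' reference implementation.

```python
import numpy as np

a, lam = 1.1, 0.01   # assumed values: a > 1 gives the convergent geometric tail, lam weights the L2 penalty

def energy(w_vec, c_vec, z):
    """E(w, c, z) = sum over the first z dimensions of -w_k*c_k + lam*(w_k^2 + c_k^2) + log(a)."""
    w, c = w_vec[:z], c_vec[:z]
    return float(np.sum(-w * c + lam * (w ** 2 + c ** 2)) + z * np.log(a))

def unnormalized_joint(w_vec, c_vec, z):
    """Unnormalized joint p(w, c, z), proportional to exp{-E(w, c, z)}."""
    return float(np.exp(-energy(w_vec, c_vec, z)))

rng = np.random.default_rng(0)
w_vec = np.concatenate([rng.normal(scale=0.1, size=10), np.zeros(90)])  # non-zero in only the first 10 dims
c_vec = np.concatenate([rng.normal(scale=0.1, size=10), np.zeros(90)])
print([round(energy(w_vec, c_vec, z), 4) for z in (1, 5, 10, 20)])      # grows by log(a) per dim once w, c are zero
```

Once $z$ passes the last non-zero dimension, each extra dimension adds exactly $\log a$ to the energy, which is what makes the tail of the partition function a geometric series.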

For SD-SG’s learning objective, ideally, we would like to marginalize out $z$:

$$\log p(c_j \mid w_i) = \log \sum_{z=1}^{\infty} p(c_j, z \mid w_i) = \log\left[\sum_{z=1}^{l} e^{-E(w_i, c_j, z)} + \frac{e^{-E(w_i, c_j, l)}}{a-1}\right] - \log \sum_{c}\sum_{z=1}^{\infty} e^{-E(w_i, c, z)} \qquad (3)$$

where $l$ is the maximum index at which $\mathbf{w}_i$ or $\mathbf{c}_j$ contains a non-zero value. $l$ must exist under the sparsity assumption that makes the partition function finite (see Appendix), but that assumption gives no information about $l$’s value. One work-around is to fix $l$ at some constant, but that would restrict the model and make $l$ a crucial hyperparameter.

A better option is to sample $z$ values and rely on the randomness to make learning dynamic yet tractable. This way $l$ can grow arbitrarily (i.e. it is the observed maximum of the samples) while the vectors retain a finite number of non-zero elements. Thus we write the loss in terms of an expectation:

$$\mathcal{L}_{SD\text{-}SG} = \mathbb{E}_{p(z \mid w_i, c_j)}\left[\log \frac{p(c_j, z \mid w_i)}{p(z \mid w_i, c_j)}\right] \qquad (4)$$

Notice that this is the evidence bound widely used for variational inference, except here there is equality, not a bound, because we have set the variational distribution to the posterior $p(z \mid w_i, c_j)$, which is tractable. The sampling we desire then comes about via a score function estimate of the gradient:

$$\nabla \mathcal{L}_{SD\text{-}SG} \approx \frac{1}{S}\sum_{s=1}^{S}\left[\nabla \log p(c_j, \hat{z}_s \mid w_i) + \log p(c_j \mid w_i)\, \nabla \log p(\hat{z}_s \mid w_i, c_j)\right] \qquad (5)$$

where samples $\hat{z}_s$ are drawn from $p(z \mid w_i, c_j)$. Note the presence of the $\log p(c_j \mid w_i)$ term, the very term that we said was problematic in Equation 3 because $l$ was not known. We can compute this term in the Monte Carlo objective by setting $l$ to be the largest value sampled up to that point in training. The presence of $\log p(c_j \mid w_i)$ is a boon because, since it does not depend on $z$, there is no need for control variates to stabilize the typically high-variance term $\nabla \log p(\hat{z}_s \mid w_i, c_j)$.

Yet there is still a problem in that $z \in \{1, 2, \ldots\}$ is unbounded, and therefore a very large dimensionality (say, a few thousand or more) could be sampled, resulting in the gradient incurring painful computational costs. To remedy this situation, if a value greater than the current value of $l$ is sampled, we set $z = l + 1$, restricting the model to grow only one dimension at a time. Constraining growth in this way is computationally efficient since $z$ can be drawn from an ($l+1$)-dimensional multinoulli distribution whose first $l$ parameters are proportional to $e^{-E(w_i, c_j, z)}$ and whose ($l+1$)th parameter is proportional to the mass of the geometric tail. The intuition is that the model can sample a dimension less than or equal to $l$ if $l$ is already sufficiently large, or draw the ($l+1$)th option if not, choosing to increase the model’s capacity. The hyperparameter $a$ controls this growing behavior: as $a$ approaches one from the right, the expected dimensionality approaches infinity.
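The growth mechanism can be sketched as follows: weight the options $z = 1, \ldots, l$ by $e^{-E(w_i, c_j, z)}$, give the ($l+1$)th "grow one dimension" option the closed-form mass of the geometric tail, normalize, and sample. The energy form, constants, and helper names follow the earlier sketch and are our assumptions rather than the authors' code.

```python
import numpy as np

def sample_z(w_vec, c_vec, l, a=1.1, lam=0.01, rng=None):
    """Draw z from an (l+1)-way multinoulli over {1, ..., l} plus a 'grow to l+1' option.

    Weights for z <= l are exp{-E(w, c, z)}; the (l+1)th weight is the geometric tail
    sum_{z > l} exp{-E(w, c, z)} = exp{-E(w, c, l)} / (a - 1), assuming w and c are zero beyond l.
    """
    if rng is None:
        rng = np.random.default_rng()

    def neg_energy(z):
        w, c = w_vec[:z], c_vec[:z]
        return float(np.sum(w * c - lam * (w ** 2 + c ** 2)) - z * np.log(a))

    weights = np.exp([neg_energy(z) for z in range(1, l + 1)])
    tail = np.exp(neg_energy(l)) / (a - 1.0)
    probs = np.append(weights, tail)
    probs /= probs.sum()
    return int(rng.choice(np.arange(1, l + 2), p=probs))

# Toy usage: vectors with 10 active dimensions; drawing z = 11 would grow the vectors by one dimension.
rng = np.random.default_rng(0)
w_vec = np.concatenate([rng.normal(scale=0.1, size=10), np.zeros(90)])
c_vec = np.concatenate([rng.normal(scale=0.1, size=10), np.zeros(90)])
print([sample_z(w_vec, c_vec, l=10, rng=rng) for _ in range(10)])
```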

3.2 Stochastic Dimensionality Continuous Bag-of-Words

The Stochastic Dimensionality Continuous Bag-of-Words (SD-CBOW) model is a conditional Gibbs distribution over a center word $w_i$ and a random positive integer $z$ denoting the maximum index as before, given multiple context words $c_{i-K}, \ldots, c_{i+K}$: $p(w_i, z \mid c_{i-K}, \ldots, c_{i+K}) = \frac{1}{Z_{\mathbf{c}}} e^{-E(w_i, \bar{\mathbf{c}}_i, z)}$, where $Z_{\mathbf{c}} = \sum_{w}\sum_{z=1}^{\infty} e^{-E(w, \bar{\mathbf{c}}_i, z)}$ and $\bar{\mathbf{c}}_i$ is the averaged context vector from Equation 2. The energy function is defined just as for the SD-SG and admits a finite partition function using the same arguments. The primary difference between SD-SG and SD-CBOW is that the latter assumes all words appearing together in a window have the same vector dimensionality. The SD-SG, on the other hand, assumes only that word-context pairs share a dimensionality.

As with SD-SG, learning SD-CBOW’s parameters is done via a Monte Carlo objective. Define the SD-CBOW objective as

$$\mathcal{L}_{SD\text{-}CBOW} = \mathbb{E}_{p(z \mid w_i, c_{i-K}, \ldots, c_{i+K})}\left[\log \frac{p(w_i, z \mid c_{i-K}, \ldots, c_{i+K})}{p(z \mid w_i, c_{i-K}, \ldots, c_{i+K})}\right] \qquad (6)$$

Again we use a score function estimate of the gradient, analogous to Equation 5, to produce dynamic vector growth, with samples drawn from $p(z \mid w_i, c_{i-K}, \ldots, c_{i+K})$. Vectors are constrained to grow only one dimension at a time, as done for the SD-SG, by sampling from an ($l+1$)-dimensional multinoulli whose final parameter is the mass of the geometric tail.

Model      WordSim-353   MEN
SG-50      0.607         0.650
CBOW-50    0.609         0.659
SD-CBOW    0.614         0.660
SD-SG      0.620         0.674
CBOW-200   0.643         0.712
SG-200     0.696         0.736
(a) Semantic similarity (Spearman’s rank correlation)
[Figure 1, panels (b)-(d): histogram of expected vector dimensionalities and per-word distributions over dimensionality; plots not reproduced here.]
Figure 1: Subfigure (a) shows results for semantic similarity tasks. Spearman’s rank correlation between model and human scores is calculated for the WordSim-353 and MEN datasets. Subfigure (b) shows a histogram of the expected vector dimensionalities after training SD-CBOW. Subfigures (c) and (d) show the distribution over dimensionalities SD-SG learned for the words race and player.

4 Related Work

As mentioned in the Introduction, we are aware of no previous work that defines embedding dimensionality as a random variable whose distribution is learned from data. Yet, there is much existing work on increasing the flexibility of embeddings via auxiliary parameters. For instance, Vilnis and McCallum (2015) represent each word with a multivariate Gaussian distribution. The Gaussian embedding’s mean parameter serves much like a traditional vector representation, and the method’s novelty arises from the covariance parameter’s ability to capture word specificity (Vilnis and McCallum, 2015). Other related work proposes using multiple embeddings per word in order to handle polysemy and homonymy (Huang et al., 2012; Reisinger and Mooney, 2010; Neelakantan et al., 2014; Tian et al., 2014; Bartunov et al., 2016). Bartunov et al. (2016)’s AdaGram model is closest in spirit to SD-SG and SD-CBOW in that it uses the Dirichlet Process to learn an unconstrained, data-dependent number of embeddings for each word. Yet, in contrast to SD-SG/-CBOW, the dimensionality of each embedding is still user-specified.

5 Evaluation

We evaluate SD-SG and SD-CBOW quantitatively and qualitatively against the original SG and CBOW. For all experiments, models were trained on a one-billion-word subset of Wikipedia (6/29/08 snapshot). The same learning rates (one for CBOW, one for SG), number of negative samples (5), context window size (6), and number of training epochs (1) were used for all models. SD-SG and SD-CBOW were initialized to ten dimensions.

Quantitative Evaluation. We test each model’s ability to rank word pairs according to their semantic similarity, a task commonly used to gauge the quality of WEs. We evaluate our embeddings on two standard test sets: WordSim-353 (Finkelstein et al., 2001) and MEN (Bruni et al., 2014). As is typical for this evaluation, we measure Spearman’s rank correlation between the similarity scores produced by the model and those produced by humans. The correlations for all models are reported in Subtable (a) of Figure 1. We see that the SD-SG and SD-CBOW perform better than their 50-dimensional counterparts but worse than their 200-dimensional counterparts. All scores are relatively competitive though, separated by less than 0.1 in rank correlation.

Qualitative Evaluation. Observing that the SD-SG and SD-CBOW models perform comparably to finite versions somewhere between 50 and 200 dimensions, we qualitatively examine their distributions over vector dimensionalities. Subfigure (b) of Figure 1 shows a histogram of the expected dimensionality, i.e. $\mathbb{E}[z \mid w]$, of each vector after training the SD-CBOW model. As expected, the distribution is long-tailed, and vague words occupy the tail while specific words are found in the head. As shown by the annotations, the word photon has an expected dimensionality of 19 while the homograph race has 150. Note that expected dimensionality correlates with word frequency (multi-sense words, by definition, can be used more frequently) but does not follow it strictly. For instance, the word william is the 482nd most frequently occurring word in the corpus but has an expected length of 62, which is closer to the lengths of much rarer words (around 20-40 dimensions) than to similarly frequent words.

In subfigures (c) and (d) of Figure 1, we plot the distribution $p(z \mid w)$ for two homographs, race (c) and player (d), as learned by SD-SG, in order to examine whether their multiple meanings are conspicuous in their distributions over dimensionalities. For race, we see that the distribution does indeed have at least two modes: the first, at around 70 dimensions, represents car racing, as determined by computing nearest neighbors with that dimension as a cutoff, while the second, at around 100 dimensions, encodes the anthropological meaning.

6 Conclusions

We propose modest modifications to SG and CBOW that allow embedding dimensionality to be learned from data in a probabilistically principled way. Our models preserve performance on semantic similarity tasks while providing a view, via the distribution $p(z \mid w)$, into how embeddings utilize their parameters and distribute semantic information.

References

  • Bartunov et al. (2016) Sergey Bartunov, Dmitry Kondrashkin, Anton Osokin, and Dmitry Vetrov. 2016. Breaking sticks and ambiguities with adaptive skip-gram. In International Conference on Artificial Intelligence and Statistics.
  • Bengio et al. (2003) Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. The Journal of Machine Learning Research 3:1137–1155.
  • Bruni et al. (2014) Elia Bruni, Nam Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of Artificial Intelligence Research 49(1):1–47.
  • Côté and Larochelle (2016) Marc-Alexandre Côté and Hugo Larochelle. 2016. An infinite restricted Boltzmann machine. Neural Computation.
  • Finkelstein et al. (2001) Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001. Placing search in context: The concept revisited. In Proceedings of the 10th International Conference on World Wide Web (WWW ’01), pages 406–414.
  • Huang et al. (2012) Eric H. Huang, Richard Socher, Christopher D. Manning, and Andrew Y. Ng. 2012. Improving word representations via global context and multiple word prototypes. In Annual Meeting of the Association for Computational Linguistics (ACL).
  • Mikolov et al. (2013) Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119.
  • Mnih and Hinton (2009) Andriy Mnih and Geoffrey E. Hinton. 2009. A scalable hierarchical distributed language model. In Advances in Neural Information Processing Systems, pages 1081–1088.
  • Neelakantan et al. (2014) Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, and Andrew McCallum. 2014. Efficient non-parametric estimation of multiple embeddings per word in vector space. In Conference on Empirical Methods in Natural Language Processing.
  • Reisinger and Mooney (2010) Joseph Reisinger and Raymond J. Mooney. 2010. Multi-prototype vector-space models of word meaning. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 109–117.
  • Tian et al. (2014) Fei Tian, Hanjun Dai, Jiang Bian, Bin Gao, Rui Zhang, Enhong Chen, and Tie-Yan Liu. 2014. A probabilistic model for learning multi-prototype word embeddings. In Proceedings of COLING, pages 151–160.
  • Turian et al. (2010) Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 384–394.
  • Vilnis and McCallum (2015) Luke Vilnis and Andrew McCallum. 2015. Word representations via Gaussian embedding. In International Conference on Learning Representations.

Appendix

A Finite Partition Function

Stochastic Dimensionality Skip-Gram’s partition function, containing a sum over all countably infinite values of $z$, would seem to be divergent and thus incomputable. However, it is not, due to two key properties first proposed by Côté and Larochelle (2016) to define a Restricted Boltzmann Machine with an infinite number of hidden units (iRBM). They are:

  1. Sparsity penalty: The L2 penalty in $E$ (i.e. the $\lambda w_{i,k}^{2}$ and $\lambda c_{j,k}^{2}$ terms) ensures the word and context vectors must have a finite two-norm under iterative gradient-based optimization with a finite initial condition. In other words, no proper optimization method could converge to the infinite solution if all $\mathbf{w}$ and $\mathbf{c}$ vectors are initialized to have a finite number of non-zero elements (Côté and Larochelle, 2016).

  2. Per-dimension constant penalty: The energy function’s $\log a$ term results in the contribution of dimensions greater than $l$ becoming a convergent geometric series. This is discussed further below.

With those two properties in mind, consider the conditional distribution of $z$ given an input and a context word:

$$p(z \mid w_i, c_j) = \frac{e^{-E(w_i, c_j, z)}}{\sum_{z'=1}^{\infty} e^{-E(w_i, c_j, z')}} \qquad (7)$$

Again, the denominator looks problematic due to the infinite sum, but notice the following:

$$\begin{aligned}
\sum_{z'=1}^{\infty} e^{-E(w_i, c_j, z')} &= \sum_{z'=1}^{l} e^{-E(w_i, c_j, z')} + \sum_{z'=l+1}^{\infty} e^{-E(w_i, c_j, z')} \\
&= \sum_{z'=1}^{l} e^{-E(w_i, c_j, z')} + e^{-E(w_i, c_j, l)} \sum_{m=1}^{\infty} \left(\frac{1}{a}\right)^{m} \\
&= \sum_{z'=1}^{l} e^{-E(w_i, c_j, z')} + \frac{e^{-E(w_i, c_j, l)}}{a-1}
\end{aligned} \qquad (8)$$

The sparsity penalty allows the sum to be split as it is in step #2 into a finite term ($\sum_{z'=1}^{l} e^{-E(w_i, c_j, z')}$) and an infinite sum ($\sum_{z'=l+1}^{\infty} e^{-E(w_i, c_j, z')}$) at an index $l$ such that $w_{i,k} = c_{j,k} = 0$ for all $k > l$. After $e^{-E(w_i, c_j, l)}$ is factored out of the second term, all remaining $w_{i,k} c_{j,k}$ and $\lambda(w_{i,k}^{2} + c_{j,k}^{2})$ terms are zero. A few steps of algebra then reveal the presence of a convergent geometric series. Intuitively, we can think of the second term, $\frac{e^{-E(w_i, c_j, l)}}{a-1}$, as quantifying the data’s need to expand the model’s capacity given $w_i$ and $c_j$.
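As a quick numerical sanity check on Equation 8 (under the reconstructed energy and the assumed constants used in the earlier sketches), the closed-form tail can be compared against a directly truncated version of the infinite sum:

```python
import numpy as np

a, lam, l = 1.1, 0.01, 10                   # assumed constants, as in the earlier sketches
rng = np.random.default_rng(0)
w = np.concatenate([rng.normal(scale=0.1, size=l), np.zeros(10000)])  # zero beyond dimension l
c = np.concatenate([rng.normal(scale=0.1, size=l), np.zeros(10000)])

def neg_energy(z):
    return float(np.sum(w[:z] * c[:z] - lam * (w[:z] ** 2 + c[:z] ** 2)) - z * np.log(a))

truncated_tail = sum(np.exp(neg_energy(z)) for z in range(l + 1, 5000))  # brute-force tail, summed far past convergence
closed_form_tail = np.exp(neg_energy(l)) / (a - 1.0)
print(truncated_tail, closed_form_tail)      # the two agree to many decimal places
```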