Balancing the composition of word embeddings across heterogeneous data sets

01/14/2020
by   Stephanie Brandl, et al.

Word embeddings capture semantic relationships based on contextual information and are the basis for a wide variety of natural language processing applications. Notably, these relationships are learned solely from the data, and consequently the composition of the data impacts the semantics of the embeddings, which arguably can lead to biased word vectors. Given qualitatively different data subsets, we aim to align the influence of the individual subsets on the resulting word vectors while retaining their quality. To this end, we propose a criterion to measure the shift towards a single data subset and develop approaches to meet both objectives. We find that a weighted average of the two subset embeddings balances the influence of those subsets while word-similarity performance decreases. We further propose a promising optimization approach to balance influences and quality of word embeddings.


1 Introduction

The advent of word embeddings (Mikolov et al., 2013a; Pennington et al., 2014) has shifted the entire field of Natural Language Processing (NLP) from sparse representations, such as Bag-of-Words, to dense, vectorial representations that have proven capable of capturing meaningful syntactic and semantic concepts. Word embeddings are widely used in, e.g., text classification (Joulin et al., 2016) and machine translation (Mikolov et al., 2013b). Consequently, word embeddings have a crucial impact on downstream applications; moreover, such models inherit (hidden) assumptions and properties of the data.

Text corpora for training word embeddings are typically composed of subsets with different properties. Properties can manifest, e.g., as U.K./U.S. English, but can also be induced by the authors, e.g., texts written by different genders, in different periods of time, or in different contexts such as arts and politics. While the primary intention is to capture semantic and syntactic information from the data in the best possible way, ideally by learning from as much data as possible, we argue that, on second thought, it is desirable to influence the composition of the data (sub)sets.

Given a corpus where one category outnumbers the other, joint word embeddings will exhibit a bias towards the former; yet this might not reflect actual word semantics appropriately, or may simply be undesired.

Consider, for example, a transfer-learning setting in which word embeddings trained on a large data set are to be fine-tuned on a small task-specific data set. Or, in order to capture the semantics of cultural diversity, several smaller newspaper data sets with different foci could be added to a large base newspaper data set with a Euro-/US-centered focus.

The problem of bias is aggravated because word embeddings are often used as a starting point for downstream tasks, whose models usually operate in a black-box manner and whose decision making is therefore difficult to inspect.

Typical state-of-the-art embedding learning algorithms do not distinguish between different data subsets and thus merge their properties in an incidental manner. A notable exception is the work of Goikoetxea et al. (2016), which shows how text-based and WordNet-based (Miller, 1995) embeddings can be combined to improve embedding quality, yet does not align the contributions of the individual data sets. For more details on related work we refer to Section 2.

In this contribution we investigate whether and how the influence of individual subsets can be aligned while retaining embedding quality w.r.t. word vectors learned on all the data. To this end we propose a measure for the retained semantics of a subset in the final embedding and compare a total of 9 different combination methods (1-9), which are explained in detail in Section 4. The combinations vary in that they are (1) trained on the complete data set, (2-4) created without consideration of the data distribution, following Goikoetxea et al. (2016), and (5-9) created with consideration of the data distribution (our approaches).

2 Related work

Various authors combined text-based word embeddings with additional resources, as for instance wordnet-based information, embeddings trained by different algorithms or additional data sets (Goikoetxea et al., 2016; Rothe and Schütze, 2017; Speer and Chin, 2016; Henriksson et al., 2014). The main goal in those articles is to improve the quality of word embeddings overall.
However, to the best of our knowledge, no one has so far systematically addressed the influence that subsets have on a combined embedding in order to balance the impact of different data sets after their composition while retaining the quality of the word embeddings.

3 Evaluating the influence of data subsets on word embeddings

         New York Times                                   Wikipedia
         ρ_a   ρ_b    Δρ    ρ̄   Analogy-test (in %)      ρ_a   ρ_b    Δρ    ρ̄   Analogy-test (in %)
                                 n=1    n=5    n=10                               n=1    n=5    n=10
(a)      1.00  0.15  0.85  0.57  1.61   6.82   9.33      1.00  0.20  0.80  0.60   7.47  29.05  36.69
(b)      0.15  1.00 -0.85  0.57  1.55   9.66  13.29      0.20  1.00 -0.80  0.60  19.37  53.54  62.12
(1)      0.20  0.42 -0.21  0.31  3.82  16.05  20.91      0.27  0.55 -0.27  0.41  21.21  60.58  69.13
(2) AVG  0.24  0.44 -0.21  0.34  2.67  11.24  15.31      0.32  0.54 -0.22  0.43  19.21  53.15  62.20
(3) CON  0.30  0.39 -0.09  0.34  1.61  12.16  16.03      0.39  0.45 -0.06  0.42  13.70  52.78  61.71
(4) PCA  0.28  0.36 -0.08  0.32  1.94   9.94  13.52      0.36  0.43 -0.07  0.40  17.08  48.44  57.10
(5) SAMP 0.22  0.30 -0.08  0.26  4.29  18.46  23.48      0.30  0.45 -0.15  0.38  20.36  60.86  69.13
(6) WAVG 0.31  0.31 -0.01  0.31  2.76  10.74  14.10      0.41  0.42 -0.01  0.42  16.79  48.26  57.56
(7)      0.21  0.37 -0.16  0.29  2.57  14.32  18.69      0.28  0.48 -0.20  0.38  14.71  53.94  63.01
(8)      0.23  0.37 -0.14  0.30  2.35  12.32  16.28      0.29  0.47 -0.19  0.38  14.85  52.95  62.25
(9)      0.30  0.32 -0.01  0.31  2.67  10.94  14.20      0.38  0.42 -0.04  0.40  15.33  47.87  57.38

Table 1: Evaluation of the different embeddings on both data sets. Within each data set, the first group is trained with GloVe on the different subsets, the second group are embeddings created without and the last group with consideration of the data distribution. For Δρ = ρ_a − ρ_b we hope for a value close to 0; for the analogy test, higher values mean better performance. The measures are described in detail in Section 3.

Considering how embeddings encode word contexts, we illustrate the influence of data subsets on the final embedding using two real-world data sets.

New York Times 1990-2016: The New York Times data set (NYT, https://sites.google.com/site/zijunyaorutgers/) contains headlines and lead texts of news articles published online and offline in the New York Times between 1990 and 2016, with a total of 99,872 documents. Political offices as well as sports teams are discussed in close relation to their current representatives, players, hot topics, and scores, so their context changes over time. As word embeddings are mainly based on the contexts of a word, the connotations and vectorial representations of such words are influenced by those changes. We investigate the influence of these changes on common word embeddings by splitting this data set into two subsets, the first one covering 1990-1999 (33,383 articles) and the second one 2000-2016 (62,058 articles).

English Wikipedia: The Wikipedia data set (Wiki) contains articles from the English Wikipedia snapshot of April 1st, 2019. We select 12,236 articles from the category Arts as well as 24,473 articles from the category Politics to analyse the individual influence of those two fields on joint word embeddings.

As a first example, we consider the word shooting, whose nearest neighbors (NNs) in both category groups of the Wiki data set are shown in Fig. 1. Clearly, within Politics, shooting mostly refers to the firing of a gun, while for Arts it rather relates to a photo or movie shooting. When we train embeddings on the joint data set, the new vector reflects both realities, but is biased towards Politics due to the larger number of articles (23/100 and 51/100 common neighbors with the embeddings from Arts and Politics, respectively).

Figure 1: 2D tSNE embeddings of the word shooting with its NNs in different embeddings trained on Wiki: (a), (b), (8) in red, blue and orange, respectively.

Given this intuition, we would like to quantify the retained influence of the data subsets (a) and (b) on a composed embedding E. Inspired by the Jaccard index, we compare the neighboring words of a given embedding trained on a subset with those of the composed embedding. In more detail, given the sets of NNs N_n(w, E_1) and N_n(w, E_2) for two embeddings E_1 and E_2 of a word w, the ratio of shared nearest words is:

    o_n(w; E_1, E_2) = |N_n(w, E_1) ∩ N_n(w, E_2)| / |N_n(w, E_1) ∪ N_n(w, E_2)|        (1)

and we denote the average over all words as ρ(E_1, E_2). For instance, ρ(E_a, E) = 0.5 would mean that words in E_a and E share on average half of their NNs. We will use ρ_a = ρ(E_a, E) and ρ_b = ρ(E_b, E) to indicate the retained influence of the according subsets on a resulting embedding E. We use the cosine similarity to compute NNs for neighborhoods of different sizes n.
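As an illustration, this overlap measure can be sketched in a few lines of NumPy; the function name nn_overlap and the random toy matrices are our own assumptions, not the paper's implementation:

```python
import numpy as np

def nn_overlap(E1, E2, k=10):
    """Average Jaccard overlap of the k cosine nearest neighbors of each
    word in two embedding matrices (rows = words, shared vocabulary)."""
    def knn(E):
        # cosine similarity via row-normalized dot products
        N = E / np.linalg.norm(E, axis=1, keepdims=True)
        S = N @ N.T
        np.fill_diagonal(S, -np.inf)          # a word is not its own neighbor
        return np.argsort(-S, axis=1)[:, :k]  # indices of the k NNs per word
    A, B = knn(E1), knn(E2)
    overlaps = [len(set(a) & set(b)) / len(set(a) | set(b)) for a, b in zip(A, B)]
    return float(np.mean(overlaps))

rng = np.random.default_rng(0)
E = rng.normal(size=(50, 8))                  # toy embedding: 50 words, 8 dims
print(nn_overlap(E, E, k=5))                  # identical embeddings -> 1.0
```

For identical embeddings the measure equals 1, consistent with the value of 1.00 reported in Table 1 for a subset embedding compared with itself.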

4 Methods

We use a number of different embeddings that can be divided into three groups: merging the data before learning the embeddings, static merging algorithms, and dynamic merging approaches.

Baselines - (a), (b), (1) As baselines we train word embeddings with GloVe (Pennington et al., 2014) on the NYT articles from (a) 1990-1999 and (b) 2000-2016, as well as on (1) the merged data; the resulting embeddings are denoted E_a, E_b, and E. We further train word embeddings with GloVe on Wiki for (a) Arts, (b) Politics, and (1) the merged data, denoted analogously.
GloVe embeddings are trained as 50-dimensional word embeddings on both NYT and Wiki. We choose a context window size of 15 for NYT and 5 for Wiki, as the latter data set is considerably larger than NYT. We select one vocabulary for each data set and consider only words that occur at least 40 (NYT) and 250 (Wiki) times in the whole data set, which leads to vocabularies of size 21,398 (NYT) and 19,936 (Wiki).
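The frequency cutoff used for vocabulary selection can be sketched as follows; build_vocab and the toy documents are hypothetical illustrations, not the actual preprocessing code:

```python
from collections import Counter

def build_vocab(tokenized_docs, min_count):
    """Keep only words occurring at least min_count times over the whole corpus."""
    counts = Counter(w for doc in tokenized_docs for w in doc)
    return sorted(w for w, c in counts.items() if c >= min_count)

docs = [["the", "shooting", "of", "the", "film"],
        ["the", "shooting", "near", "the", "capitol"]]
print(build_vocab(docs, min_count=2))   # ['shooting', 'the']
```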

Static merging - (2), (3), (4) In contrast to (1), which merges the data before learning, the following approaches merge already-trained embeddings and were proposed by Goikoetxea et al. (2016). Given the embeddings E_a and E_b of the subsets, method (2) is to average them, i.e. (E_a + E_b)/2, (3) is to concatenate them to a 100-dimensional embedding, and (4) extends (3) by extracting the 50 most informative dimensions using PCA. (3) and (4) obtained good results in Goikoetxea et al. (2016).
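A minimal sketch of the three static merging operations, assuming two trained 50-dimensional embedding matrices over a shared vocabulary; the function names are our own, and PCA is realized here via an SVD of the mean-centered concatenation:

```python
import numpy as np

def merge_avg(Ea, Eb):
    return (Ea + Eb) / 2                      # (2) AVG

def merge_con(Ea, Eb):
    return np.hstack([Ea, Eb])                # (3) CON: 50+50 -> 100 dims

def merge_pca(Ea, Eb, dim=50):
    X = np.hstack([Ea, Eb])                   # (4) PCA on the concatenation
    X = X - X.mean(axis=0)                    # center before the SVD
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:dim].T                     # keep the dim most informative axes

rng = np.random.default_rng(1)
Ea, Eb = rng.normal(size=(100, 50)), rng.normal(size=(100, 50))
print(merge_avg(Ea, Eb).shape, merge_con(Ea, Eb).shape, merge_pca(Ea, Eb).shape)
# (100, 50) (100, 100) (100, 50)
```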

Dynamic merging - (5), (6), (7), (8), (9) We found that the previously presented embeddings are biased towards the larger subset, i.e. ρ_b > ρ_a. To alleviate this we propose the following approaches. A first attempt (5) is to upsample the smaller subset to the size of the larger one. This leads to embeddings with a high score in analogy tests but a decrease in the average overlap ρ̄. We further intend to balance the impact of the subsets by taking an average that is weighted by their inverse proportions (6): E_WAVG = β E_a + (1 − β) E_b with β = n_b / (n_a + n_b), where n_a and n_b denote the subset sizes.
Unfortunately, we found that this approach results in embeddings of inferior quality. We therefore define an optimization problem that, on the one hand, optimizes the GloVe loss to obtain embeddings of good quality and, on the other hand, balances the influence of the respective subsets by regularizing the distance of the solution to the weighted embeddings E_WAVG. Given the co-occurrence matrix X and the GloVe weighting function f (Pennington et al., 2014), the embeddings are created by optimizing:

    min_{E, Ẽ, b, b̃}  ‖ √f(X) ⊙ ( E Ẽᵀ + b 1ᵀ + 1 b̃ᵀ − log X ) ‖²_F  +  λ ‖ E − E_WAVG ‖²_F        (2)

where √ and log are applied element-wise and ⊙ denotes a point-wise multiplication. The regularization parameter λ allows trading off between embedding quality and a balanced influence. We restrict the solution space to the "rectangle" between E_a and E_b and leave exploring an unconstrained version to future work. We optimize Eq. 2 with gradient descent, using Adam, and stop the optimization after a fixed number of steps. We have implemented this in PyTorch.
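A simplified sketch of this optimization in plain NumPy (rather than the paper's PyTorch/Adam setup): the bias terms and the rectangle constraint are omitted, and all names as well as the toy co-occurrence matrix are illustrative assumptions:

```python
import numpy as np

def glove_weight(x, x_max=100.0, alpha=0.75):
    """The GloVe weighting function f of Pennington et al. (2014)."""
    return np.minimum((x / x_max) ** alpha, 1.0)

def balanced_glove(X, E_w, lam=0.1, steps=400, lr=0.02, seed=0):
    """Gradient descent on
        sum_ij f(X_ij) * (e_i . c_j - log X_ij)^2 + lam * ||E - E_w||_F^2,
    a simplified form of Eq. (2); returns the embedding and the loss history."""
    rng = np.random.default_rng(seed)
    V, d = E_w.shape
    E = E_w.copy()                                 # start at the weighted average
    C = rng.normal(size=(V, d)) / np.sqrt(d)       # context ("tilde") vectors
    W, logX = glove_weight(X), np.log(X)
    losses = []
    for _ in range(steps):
        D = E @ C.T - logX                         # residuals
        R = W * D                                  # f-weighted residuals
        losses.append(float((R * D).sum() + lam * ((E - E_w) ** 2).sum()))
        gE = 2 * R @ C + 2 * lam * (E - E_w)       # gradient w.r.t. word vectors
        gC = 2 * R.T @ E                           # gradient w.r.t. context vectors
        E -= lr * gE
        C -= lr * gC
    return E, losses

rng = np.random.default_rng(1)
V, d = 30, 10
X = rng.uniform(1.0, 50.0, size=(V, V))
X = (X + X.T) / 2                                  # toy symmetric co-occurrence counts
E_w = rng.normal(size=(V, d)) / np.sqrt(d)         # stand-in for the weighted average (6)
E_bal, losses = balanced_glove(X, E_w, lam=0.1)
print(round(losses[0], 1), round(losses[-1], 1))   # the loss decreases
```

The regularizer pulls the solution towards E_WAVG, so larger lam trades embedding quality for balanced influence.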

(a) 90          90/00  W-AVG    (b) 00          90/00  W-AVG
war               0      0      war               0      0
vietnam           3      1      ii                2      2
persian           5      3      iraq              1      6
gulf              9      4      vietnam           3      1
era              17      5      fight             4     10
bidding           -      7      combat           13     11
ii                2      2      wag               -     21
veteran           -      8      terrorism        18     15
cold             16      9      enemy            21     22
confrontation     -      -      hero              -      -
capture           -     16      invasion         12      -

Table 2: The 10 NNs of the word war in E_a and E_b are displayed in columns 1 and 4 (NYT). Columns 2 and 3 show the position each word receives after merging, for E (column "90/00") and for the weighted average W-AVG; columns 5 and 6 show the same for the NNs of E_b.

5 Results

Figure 2: Values of ρ_a and ρ_b for different weighting parameters of W-AVG.

We evaluate the quality of the obtained embeddings by measuring their performance on analogy tests (Mikolov et al., 2013b), and we evaluate how well the influence of the subsets is balanced by measuring the overlaps of common neighbors ρ_a and ρ_b, their average ρ̄, and their difference Δρ (see Section 3), averaged over small and large neighborhood sizes n. Results for all methods, evaluated and averaged over the entire vocabulary, are summarized in Table 1.
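For reference, an analogy test in the style of Mikolov et al. (2013b) can be sketched as below, using the 3CosAdd offset; the vocabulary, vectors, and function name are toy assumptions for illustration:

```python
import numpy as np

def analogy(E, vocab, a, b, c, topn=1):
    """Answer 'a is to b as c is to ?' via the offset  v_b - v_a + v_c."""
    idx = {w: i for i, w in enumerate(vocab)}
    N = E / np.linalg.norm(E, axis=1, keepdims=True)   # cosine via unit rows
    q = N[idx[b]] - N[idx[a]] + N[idx[c]]
    sims = N @ (q / np.linalg.norm(q))
    for w in (a, b, c):                                # exclude the question words
        sims[idx[w]] = -np.inf
    return [vocab[i] for i in np.argsort(-sims)[:topn]]

# toy embedding in which the gender offset is shared by both word pairs
vocab = ["king", "queen", "man", "woman", "apple"]
E = np.array([[1.0, 1.0, 0.1],    # king  = royal + male
              [1.0, -1.0, 0.1],   # queen = royal + female
              [0.0, 1.0, 0.2],    # man
              [0.0, -1.0, 0.2],   # woman
              [0.3, 0.2, -0.9]])  # apple (unrelated)
print(analogy(E, vocab, "man", "woman", "king"))  # ['queen']
```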

Balanced influence: First we note that the embeddings of the subsets (a) and (b) have only few NNs in common. Furthermore, when trained on both subsets, the embeddings (1) show a clear shift towards the larger subset (b). Qualitatively this can also be observed in Table 2, where we depict the NNs of the word "war" in subset (a) and subset (b), together with the position of each word in the ranked neighbors of (1) in the column "90/00". We observe that most of the NNs of (a) are not present among the first NNs of (1), while for (b) the set of the first 4 NNs is identical to that of (1). Moreover, we note that the static merging approaches (2), (3) & (4) exhibit the same shift (see Table 1).
We try to increase the influence of the smaller subset by upsampling it (5) to the size of the larger one before training GloVe embeddings. This leads to the same (or even better) quality of the word embeddings as (1) but also results in a decreased average overlap ρ̄. To alleviate this we propose a weighted average (6) in order to account for the subset proportions. The results in Table 1 indicate that this simple approach indeed yields, in terms of our measure, balanced embeddings. This can also be observed exemplarily in Table 2, where the NNs of (6) correlate much more strongly with the NNs of the respective subsets.

Unfortunately, as we will see, the embedding quality suffers when performing a weighted average. With the aim of meeting both desiderata, balanced influence of the subsets and quality of the embeddings, we proposed an optimization procedure (7-9). From Table 1 we read that the resulting embeddings for different regularization strengths are balanced, but surprisingly the influence of the respective subsets decreases in comparison to (1). As a control experiment we consider the embeddings given by a weighted average between (1) and (6) (Figure 2), where this drop in influence cannot be observed. Yet none of these averaged embeddings achieves both good performance and balance, which justifies the use of an optimization procedure.

Embedding quality: We measure the embedding quality by means of analogy tests. The embeddings trained on all the data (1) perform best in this context, hinting that it is beneficial to leverage as much information from the data as possible. The statically merged embeddings (2), (3), (4) do not perform as well on our task, in contrast to the results of Goikoetxea et al. (2016).
Furthermore, we note that the weighted average (6) also results in a decrease in embedding quality. In contrast, we find that our optimization approach is able to achieve both embedding quality and a balanced influence of the subsets.

6 Discussion

Considering that text corpora are often composed of subsets, embedding learners merge them in an incidental manner, either by merging the text before or the word vectors after training. We argue that this can lead to undesired shifts in the embedded semantics, and we propose a measure for this shift as well as approaches to balance the composition of the subsets.

Our preliminary results show that one can indeed level the impact of different subsets. A weighted average of the subset embeddings yields balanced word embeddings, yet their quality decreases. The proposed optimization routine results in word vectors of good quality with balanced, yet decreased, influence of the subsets.

As future work we aim to extend our empirical results and to investigate the proposed optimization routine in more detail, e.g., by removing the constraints. As additional experiments we would like to investigate the influence of the different combination methods on downstream tasks, such as classification of sub-categories of the Wikipedia articles. This will further our understanding of the workings of the combination methods in comparison to the analogy tests, which are not data-slice specific. As an alternative to the current regularization, which minimizes the distance to another, presumably balanced embedding, we would like to develop a (differentiable) regularization term that is more closely related to our measure ρ. Adapting the work of Berman et al. (2018), which proposes surrogate losses for the Jaccard index, seems to be a promising direction for this goal.

An interesting question posed by our results is how the merging of data subsets impacts the resulting embedding semantics, considering that many NNs of the merged embedding E are not NNs in the subset embeddings E_a and E_b.

Acknowledgments

This work was supported by the Federal Ministry of Education and Research (BMBF) for the Berlin Big Data Center BBDC (01IS14013A) and for the MALT III project (01IS17058). We thank L. Ruff, T. Schnake, O. Eberle and S. Dogadov for fruitful discussions. We also thank the reviewers from ACL and the Workshop on Ethical, Social and Governance Issues in AI at NeurIPS 2018 for their valuable comments.

References