NeuralDenoising
Neural-based Noise Filtering from Word Embeddings
Word embeddings have been demonstrated to benefit a wide range of NLP tasks. Yet, there is room for improvement in the vector representations, because current word embeddings typically contain unnecessary information, i.e., noise. We propose two novel models to improve word embeddings by unsupervised learning, in order to yield word denoising embeddings. The word denoising embeddings are obtained by strengthening salient information and weakening noise in the original word embeddings, based on a deep feed-forward neural network filter. Results from benchmark tasks show that the filtered word denoising embeddings outperform the original word embeddings.
Word embeddings aim to represent words as low-dimensional dense vectors. In comparison to distributional count vectors, word embeddings address the problematic sparsity of word vectors and have achieved impressive results in many NLP tasks such as sentiment analysis (e.g., Kim, 2014), word similarity (e.g., Pennington et al., 2014), and parsing (e.g., Lazaridou et al., 2013). Moreover, word embeddings are attractive because they can be learned in an unsupervised fashion from unlabeled raw corpora. There are two main approaches to creating word embeddings. The first approach makes use of neural-based techniques to learn word embeddings, such as the Skip-gram model
[Mikolov et al.2013]. The second approach is based on matrix factorization [Pennington et al.2014], building word embeddings by factorizing word-context co-occurrence matrices.

In recent years, a number of approaches have focused on improving word embeddings, often by integrating lexical resources. For example, Adel and Schütze (2014) applied coreference chains to Skip-gram models in order to create word embeddings for antonym identification. Pham et al. (2015) proposed an extension of the Skip-gram model that integrates synonyms and antonyms from WordNet. Their extended Skip-gram model outperformed a standard Skip-gram model both on general semantic tasks and on distinguishing antonyms from synonyms. In a similar spirit, Nguyen et al. (2016) integrated distributional lexical contrast into every single context of a target word in a Skip-gram model for training word embeddings. The resulting word embeddings were used in similarity tasks and to distinguish between antonyms and synonyms. Faruqui et al. (2015) improved word embeddings without relying on lexical resources, by applying ideas from sparse coding to transform dense word embeddings into sparse word embeddings. The dense vectors in their models can be transformed into sparse overcomplete vectors or sparse binary overcomplete vectors. They showed that the resulting vector representations were more similar to interpretable features in NLP and outperformed the original vector representations on several benchmark tasks.
In this paper, we aim to improve word embeddings by reducing their noise. The hypothesis behind our approaches is that word embeddings contain unnecessary information, i.e., noise. We start out with the idea of learning word embeddings as suggested by Mikolov et al. (2013), relying on the distributional hypothesis [Harris1954] that words with similar distributions have related meanings. We target those distributional contexts in the embedded vectors of words that decrease the value of the vector representations. For instance, consider the sentence "the quick brown fox gazing at the cloud jumped over the lazy dog". The context jumped can be used to predict the words fox, cloud and dog within a window of 5 words; however, a cloud cannot jump. The context jumped is therefore considered noise in the embedded vector of cloud. We propose two novel models to smooth word embeddings by filtering noise: we strengthen salient contexts and weaken unnecessary contexts.
The first proposed model is referred to as the complete word denoising embeddings model (CompEmb). Given a set of original word embeddings, we use a filter to learn a denoising matrix, and then project the original word embeddings into this denoising matrix to produce a set of complete word denoising embeddings. The second proposed model is referred to as the overcomplete word denoising embeddings model (OverCompEmb). We make use of a sparse coding method to transform an input set of original word embeddings into a set of overcomplete word embeddings (the "overcomplete process"). We then apply a filter to train a denoising matrix, and thereafter project the original word embeddings into the denoising matrix to generate a set of overcomplete word denoising embeddings. The key idea in our models is to use a filter for learning the denoising matrix. The architecture of the filter is a feed-forward, non-linear and parameterized neural network with a fixed depth that can be used to learn the denoising matrices and reduce noise in word embeddings. Using state-of-the-art word embeddings as input vectors, we show that the resulting word denoising embeddings outperform the original word embeddings on several benchmark tasks such as word similarity and word relatedness tasks, synonym detection, and noun phrase classification. Furthermore, the implementation of our models is made publicly available at https://github.com/nguyenkh/NeuralDenoising.
The remainder of this paper is organized as follows: Section 2 presents the two proposed models, the loss function, and the sparse coding technique for overcomplete vectors. In Section 3, we demonstrate the experiments on evaluating the effects of our word denoising embeddings and on tuning the hyperparameters, and we analyze the effects of the filter depth. Finally, Section 4 concludes the paper.

In this section, we present the two contributions of this paper. Figure 1 illustrates our two models to learn denoising for word embeddings. The first model, at the top, the complete word denoising embeddings model "CompEmb" (Section 2.1), filters noise from word embeddings X to produce complete word denoising embeddings X_c, in which the vector length of X_c in comparison to X is unchanged after denoising (hence "complete"). The second model, at the bottom of the figure, the overcomplete word denoising embeddings model "OverCompEmb" (Section 2.2), filters noise from word embeddings X to yield overcomplete word denoising embeddings X_o, in which the vector length of X_o tends to be greater than the vector length of X (hence "overcomplete").
For the notation, let X ∈ ℝ^{V×L} be an input set of word embeddings, in which V is the vocabulary size and L is the vector length of X. Furthermore, X^{oc} ∈ ℝ^{V×K} denotes the overcomplete word embeddings, in which K is the vector length of X^{oc} (K > L); finally, D is the pre-trained dictionary (Section 2.4).
In this subsection, we aim to reduce noise in the given input word embeddings X by learning a denoising matrix Q_c. The complete word denoising embeddings X_c are then generated by projecting X into Q_c. More specifically, given an input X, we seek to optimize the following objective function:
$\hat{Q}_c = \arg\min_{Q_c,\,S} \; \frac{1}{2}\, \lVert X - F(X;\, Q_c, S) \rVert_2^2 \;+\; \lambda\, \lVert Q_c \rVert_1$   (1)
where F is a filter; S is a lateral inhibition matrix; and λ is a regularization hyperparameter. Inspired by studies on sparse modeling, the matrix S is chosen to be symmetric with zeros on the diagonal.
The goal of this matrix is to implement excitatory interaction between neurons, and to increase the convergence speed of the neural network [Szlam et al.2011]. More concretely, the matrices Q_c and S are initialized as Q_c = D^T / Λ and S = I − D^T D / Λ, in which I is the identity matrix and Λ is the Lipschitz constant, an upper bound on the largest eigenvalue of D^T D.

The underlying idea for reducing noise is to make use of a filter F to learn the denoising matrix Q_c; hence, we design the filter F as a non-linear, parameterized, feed-forward architecture with a fixed depth n that can be trained to approximate X, as in Figure 1(a). As a result, noise in the word embeddings is filtered through the n layers of the filter F. The filter F is encoded as a recursive function by iterating over the fixed depth n, as the following recursive Equation 2 shows:
$Z^{(0)} = f(X Q_c); \qquad Z^{(k)} = f\!\left(X Q_c + Z^{(k-1)} S\right),\; k = 1, \dots, n; \qquad F(X;\, Q_c, S) = Z^{(n)}$   (2)
where f is a non-linear activation function. The matrices Q_c and S are learned to produce the lowest possible error in a given number of iterations. The matrix Q_c, in the architecture of the filter F, acts as a controllable matrix that filters unnecessary information from the embedded vectors and imposes restrictions to further reduce the computational burden (e.g., solving a low-rank approximation problem or keeping most terms at zero [Gregor and LeCun2010]). Moreover, the initialization of the matrices Q_c and S enables a highly efficient minimization of the objective function in Equation 1, due to the pre-trained dictionary D that carries the information for reconstructing X.

The architecture of the filter F is a recursive feed-forward neural network with the fixed depth n, so the number of stages n plays a significant role in controlling the approximation of X. The effects of n will be discussed in Section 3.4. Once F is trained, the complete word denoising embeddings X_c are obtained by projecting X into Q_c, as shown in the following Equation 3:
$X_c = f(X \hat{Q}_c)$   (3)
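As a concrete illustration, the recursive filter of Equation 2 and the projection of Equation 3 can be sketched in NumPy. This is a minimal sketch, not the paper's Theano implementation: the tanh non-linearity, the toy dimensions, and the randomly initialized matrices (stand-ins for the learned parameters) are all assumptions.

```python
import numpy as np

def filter_F(X, Q, S, n, f=np.tanh):
    """Recursive filter of Equation 2: Z0 = f(XQ); Zk = f(XQ + Z_{k-1} S)."""
    B = X @ Q              # the linear term XQ is shared by every stage
    Z = f(B)               # stage 0
    for _ in range(n):     # stages 1 .. n
        Z = f(B + Z @ S)
    return Z

# Toy dimensions (assumptions): V = 5 words, vector length L = 4, depth n = 3.
rng = np.random.default_rng(0)
V, L, n = 5, 4, 3
X = rng.normal(size=(V, L))                 # stand-in input embeddings
Q = rng.normal(scale=0.1, size=(L, L))      # stand-in denoising matrix
S = 0.1 * (np.ones((L, L)) - np.eye(L))     # symmetric, zero diagonal

Z = filter_F(X, Q, S, n)       # output of the depth-n filter
X_c = np.tanh(X @ Q)           # Equation 3: projection into the denoising matrix
print(Z.shape, X_c.shape)      # (5, 4) (5, 4)
```

In the trained model, the two matrices would come from minimizing Equation 1; here they merely demonstrate the data flow and the shapes involved.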
We now introduce our method to reduce noise in the given input word embeddings by means of overcomplete vectors. To obtain overcomplete word embeddings, we first apply the sparse coding method of Section 2.4 to transform the given input word embeddings X into overcomplete word embeddings X^{oc}. Secondly, we use the overcomplete word embeddings X^{oc} as intermediate word embeddings to optimize the objective function: we make use of the pre-trained dictionary D and X^{oc} to learn the denoising matrix Q_o by minimizing the following Equation 4:
$\hat{Q}_o = \arg\min_{Q_o,\,S} \; \frac{1}{2}\, \lVert X^{oc} - F(X;\, Q_o, S) \rVert_2^2 \;+\; \lambda\, \lVert Q_o \rVert_1$   (4)
The initialization of the parameters Q_o, S, and D follows the same procedure as described in Section 2.1, with the same interpretation of the filter architecture in Figure 1(b). The overcomplete word denoising embeddings X_o are then generated by projecting X into the denoising matrix Q_o and applying the non-linear activation function f, as in the following Equation 5:
$X_o = f(X \hat{Q}_o)$   (5)
For each pair of term vectors x and x̂, we make use of the cosine similarity to measure the similarity between x and x̂ as follows:

$\cos(x, \hat{x}) = \dfrac{x \cdot \hat{x}}{\lVert x \rVert\, \lVert \hat{x} \rVert}$   (6)
Let Δ be the difference between x and x̂, equivalently Δ = 1 − cos(x, x̂). We then optimize the objective function in Equation 1 by minimizing Δ; the same loss function is also applied to optimize the objective function in Equation 4. Training is done through Stochastic Gradient Descent with the Adadelta update rule [Zeiler2012].

Sparse coding is a method to represent vectors as a sparse linear combination of elementary atoms of a given dictionary. The underlying assumption of sparse coding is that the input vectors can be reconstructed accurately as a linear combination of a few basis vectors and a small number of non-zero coefficients [Olshausen and Field1996].
The goal is to approximate a dense vector x ∈ ℝ^L by a sparse linear combination of a few columns of a matrix D ∈ ℝ^{L×K}, in which K is the new vector length and the matrix D is called the dictionary. Concretely, given V input vectors x_1, …, x_V of L dimensions, learning the dictionary D and the sparse vectors a_i can be formulated as the following minimization problem:
$\min_{D,\,A} \; \sum_{i=1}^{V} \frac{1}{2}\, \lVert x_i - D a_i \rVert_2^2 + \alpha\, \lVert a_i \rVert_1$   (7)
Each a_i ∈ ℝ^K carries the decomposition coefficients of x_i, and α represents a scalar that controls the sparsity level of a_i. The dictionary D is typically learned by minimizing Equation 7 over the input vectors X. In the case of overcomplete representations, the vector length K is chosen such that K > L.
In the method of overcomplete word denoising embeddings (Section 2.2), our approach makes use of overcomplete word embeddings X^{oc} as the intermediate word embeddings, reconstructed by applying a sparse coding method to the word embeddings X. The overcomplete word embeddings are then utilized to optimize Equation 4. To obtain overcomplete word embeddings and dictionaries, we use the SPAMS package (http://spams-devel.gforge.inria.fr) to implement sparse coding for word embeddings and to train the dictionaries D.
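The paper trains its dictionaries with SPAMS; purely for illustration, the same kind of decomposition as in Equation 7 can be sketched with scikit-learn's DictionaryLearning as a stand-in (the toy dimensions, the penalty value, and the solver choice are assumptions, not the paper's settings):

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
V, L, K = 20, 5, 15            # K > L: overcomplete code length
X = rng.normal(size=(V, L))    # stand-in for the input word embeddings

# Learn a dictionary of K atoms and sparse codes, minimising
# 0.5 * ||x_i - a_i D||^2 + alpha * ||a_i||_1 for each vector.
dl = DictionaryLearning(n_components=K, alpha=0.5,
                        transform_algorithm="lasso_lars", random_state=0)
A = dl.fit_transform(X)        # sparse overcomplete codes, one row per word
D = dl.components_             # learned dictionary, one atom per row

print(A.shape, D.shape)        # (20, 15) (15, 5)
```

Note that scikit-learn uses the row convention X ≈ A D, so the returned dictionary holds one atom per row rather than per column.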
As input word embeddings, we rely on two state-of-the-art word embedding methods: word2vec [Mikolov et al.2013] and GloVe [Pennington et al.2014]. We use the word2vec tool (https://code.google.com/p/word2vec/) and the web corpus ENCOW14A [Schäfer and Bildhauer2012, Schäfer2015], which contains approximately 14.5 billion tokens, in order to train Skip-gram models with 100 and 300 dimensions. For the GloVe method, we use pre-trained vectors with 100 and 300 dimensions (http://www-nlp.stanford.edu/projects/glove/) that were trained on 6 billion words from Wikipedia and English Gigaword. The tanh function is used as the non-linear activation function f in both approaches. The fixed depth n of the filter F is set to 3; further hyperparameters are chosen as discussed in Section 3.2. To train the networks, we use the Theano framework [Theano Development Team2016] to implement our models with a mini-batch size of 100. Regularization is applied via dropout rates of 0.5 and 0.2 for the input and output layers (without tuning), respectively.
In both methods of denoising word embeddings, the regularization penalty λ in Equations 1 and 4 is set to 0.5 without tuning. The method of learning overcomplete word denoising embeddings relies on the intermediate word embeddings X^{oc} to minimize the objective function in Equation 4. The sparsity of X^{oc} depends on the regularization α in Equation 7, and the vector length of X^{oc} is implied as K = γL. Therefore, we aim to tune α and γ such that X^{oc} represents the nearest approximation of the original vector representations X. We perform a grid search over α and γ, developing on the WordSim353 word similarity task (discussed in Section 3.3). The hyperparameter tuning is illustrated in Figures 2(a) and 2(b) for the sparsity α and the overcomplete vector length factor γ, respectively. In both approaches, we set γ to 10 for the length of the overcomplete word embeddings, and α to the best value found by the grid search.
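The grid search itself is straightforward to sketch. In the snippet below, `wordsim353_score` is a hypothetical placeholder for building overcomplete embeddings with the given hyperparameters and scoring them on WordSim353; its toy scoring surface exists only to make the sketch runnable.

```python
import itertools

def wordsim353_score(sparsity, gamma):
    """Hypothetical stand-in: would return the Spearman correlation on
    WordSim353 for embeddings built with these hyperparameters."""
    return -(sparsity - 0.5) ** 2 - (gamma - 10) ** 2 / 100  # toy surface

sparsity_grid = [0.1, 0.3, 0.5, 0.7]
gamma_grid = [3, 5, 10, 15]          # overcomplete length K = gamma * L

# Exhaustively evaluate every (sparsity, gamma) pair and keep the best.
best = max(itertools.product(sparsity_grid, gamma_grid),
           key=lambda pair: wordsim353_score(*pair))
print(best)  # (0.5, 10) on this toy surface
```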
In this section, we quantify the effects of the word denoising embeddings on three kinds of tasks: similarity and relatedness tasks, synonym detection, and a bracketed noun phrase classification task. For comparison with the performance of the word denoising embeddings, we take state-of-the-art word embeddings (Skip-gram and GloVe) as baselines. In addition, we use the publicly available source code (https://github.com/mfaruqui/sparse-coding) to re-implement the two methods suggested by Faruqui et al. (2015): the vectors A (sparse overcomplete vectors) and B (sparse binary overcomplete vectors).
The effects of the word denoising embeddings on the tasks are shown in Table 1. The results show that the denoising vectors X_c and X_o outperform the original vectors X and the vectors A and B on most tasks, except for the NP task, in which the sparse binary overcomplete vectors B based on the 300-dimensional GloVe vectors perform best. The effect of the X_c vectors is slightly less impressive when compared to the overcomplete vectors X_o. The overcomplete word embeddings X^{oc} strongly differ from the word embeddings X; hence, the denoising is affected. However, the performance of the X_o vectors still outperforms the original vectors X and the vectors A and B after the denoising process.
Table 1: Performance of the complete (X_c) and overcomplete (X_o) word denoising embeddings compared to the original embeddings (SG-100, SG-300, GloVe-100, GloVe-300) and to the sparse overcomplete vectors A and sparse binary overcomplete vectors B.

| Vectors | SimLex-999 | MEN | WS353 | WS353-sim | WS353-rel | ESL | TOEFL | NP |
|---|---|---|---|---|---|---|---|---|
| X_c (SG-100) | 33.7 | 72.9 | 69.7 | 74.5 | 65.5 | 48.9 | 62.0 | 72.8 |
| X_o (SG-100) | 33.2 | 72.8 | 70.6 | 74.8 | 66.0 | 53.0 | 64.5 | 78.5 |
| SG-100 | 35.9 | 74.4 | 71.2 | 75.2 | 68.1 | 53.0 | 62.0 | 79.1 |
| A (SG-100) | 32.5 | 69.8 | 65.5 | 69.5 | 60.2 | 55.1 | 51.8 | 78.8 |
| B (SG-100) | 31.9 | 70.4 | 65.8 | 72.6 | 62.2 | 53.0 | 58.2 | 74.1 |
| X_c (SG-300) | 36.1 | 74.7 | 71.0 | 75.9 | 66.1 | 59.1 | 72.1 | 77.9 |
| X_o (SG-300) | 37.1 | 75.8 | 71.8 | 76.4 | 66.9 | 59.1 | 74.6 | 79.3 |
| SG-300 | 36.5 | 75.0 | 70.6 | 76.4 | 64.4 | 57.1 | 77.2 | 78.6 |
| A (SG-300) | 32.9 | 72.4 | 67.5 | 71.9 | 63.4 | 53.0 | 65.8 | 78.3 |
| B (SG-300) | 32.7 | 71.2 | 63.3 | 68.7 | 56.2 | 51.0 | 70.8 | 78.6 |
| X_c (GloVe-100) | 29.7 | 69.3 | 52.9 | 60.3 | 49.5 | 46.9 | 82.2 | 76.4 |
| X_o (GloVe-100) | 31.7 | 70.9 | 58.0 | 63.8 | 57.3 | 53.0 | 88.6 | 77.4 |
| GloVe-100 | 30.0 | 70.9 | 56.0 | 62.8 | 53.8 | 57.0 | 81.0 | 77.3 |
| A (GloVe-100) | 30.7 | 70.7 | 54.9 | 62.2 | 51.2 | 55.1 | 78.4 | 77.1 |
| B (GloVe-100) | 31.0 | 69.2 | 57.3 | 62.3 | 53.7 | 46.9 | 73.4 | 76.4 |
| X_c (GloVe-300) | 37.0 | 74.8 | 60.5 | 66.3 | 57.2 | 61.2 | 89.8 | 74.3 |
| X_o (GloVe-300) | 40.2 | 76.8 | 64.9 | 69.8 | 62.0 | 61.2 | 92.4 | 76.3 |
| GloVe-300 | 39.0 | 75.2 | 63.0 | 67.9 | 59.7 | 57.1 | 86.0 | 75.7 |
| A (GloVe-300) | 36.7 | 74.1 | 61.5 | 67.7 | 57.8 | 55.1 | 87.3 | 79.9 |
| B (GloVe-300) | 33.1 | 70.2 | 57.0 | 62.2 | 53.0 | 51.0 | 91.4 | 80.0 |
For the relatedness task, we use two datasets: MEN [Bruni et al.2014] consists of 3,000 word pairs comprising 656 nouns, 57 adjectives, and 38 verbs. The WordSim353 relatedness dataset [Finkelstein et al.2001] contains 252 word pairs. Concerning the similarity tasks, we evaluate the denoising vectors on two further datasets: SimLex-999 [Hill et al.2015] contains 999 word pairs, including 666 noun, 222 verb, and 111 adjective pairs. The WordSim353 similarity dataset consists of 203 word pairs. In addition, we evaluate our denoising vectors on the full WordSim353 dataset, which contains 353 pairs covering both similarity and relatedness relations. We calculate the cosine similarity between the vectors of the two words forming a test pair, and report the Spearman rank-order correlation coefficient [Siegel and Castellan1988] against the respective gold standards of human ratings.
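This evaluation protocol can be sketched in a few lines with SciPy; the embeddings and gold ratings below are toy stand-ins, not data from the actual benchmarks:

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy embeddings and three gold-rated pairs (hypothetical data).
emb = {
    "cup":    np.array([1.0, 0.2, 0.1]),
    "mug":    np.array([0.9, 0.3, 0.1]),
    "tiger":  np.array([0.0, 1.0, 0.8]),
    "jaguar": np.array([0.1, 0.9, 0.9]),
}
pairs = [("cup", "mug", 9.0), ("cup", "tiger", 1.5), ("tiger", "jaguar", 8.0)]

preds = [cosine(emb[a], emb[b]) for a, b, _ in pairs]   # model scores
gold = [g for _, _, g in pairs]                          # human ratings
rho, _ = spearmanr(preds, gold)                          # rank correlation
print(round(rho, 2))  # 1.0
```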
We evaluate on 80 TOEFL (Test of English as a Foreign Language) synonym questions [Landauer and Dumais1997] and 50 ESL (English as a Second Language) questions [Turney2001]. The first dataset represents a subset of 80 multiple-choice synonym questions from the TOEFL test: a word is paired with four options, one of which is a valid synonym. The second dataset contains 50 multiple-choice synonym questions, and the goal is to choose a valid synonym from four options. For each question, we compute the cosine similarity between the target word and the four candidates. The suggested answer is the candidate with the highest cosine score. We use accuracy to evaluate the performance.
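The answer-selection rule can be sketched as follows; the candidate words mirror a typical TOEFL-style item, but the embedding values are invented for illustration:

```python
import numpy as np

def answer(target, candidates, emb):
    """Pick the candidate with the highest cosine similarity to the target."""
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return max(candidates, key=lambda c: cos(emb[target], emb[c]))

# Hypothetical embeddings for one multiple-choice question.
emb = {
    "enormously":    np.array([0.90, 0.10, 0.00]),
    "appropriately": np.array([0.00, 0.80, 0.30]),
    "uniquely":      np.array([0.20, 0.10, 0.90]),
    "tremendously":  np.array([0.85, 0.15, 0.05]),
    "decidedly":     np.array([0.30, 0.50, 0.40]),
}
choice = answer("enormously",
                ["appropriately", "uniquely", "tremendously", "decidedly"], emb)
print(choice)  # tremendously
```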
Lazaridou et al. (2013) introduced a dataset of noun phrases (NPs) in which each NP consists of three elements: the first element is either an adjective or a noun, and the other elements are all nouns. For a given NP (such as blood pressure medicine), the task is to predict whether it is a left-bracketed NP, e.g., (blood pressure) medicine, or a right-bracketed NP, e.g., blood (pressure medicine).
The dataset contains 2,227 noun phrases split into 10 folds. For each NP, we use a weighted average of the word vectors as features for the classifier, tuning the weight hyperparameters (α, β, and γ) for each element (v_1, v_2, and v_3) within the NP: v = αv_1 + βv_2 + γv_3. We then perform the classification of the NPs by using a Support Vector Machine (SVM) with a Radial Basis Function kernel. The classifier is tuned on the first fold, and cross-validation accuracy is reported on the nine remaining folds.
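A minimal sketch of this classification setup with scikit-learn (toy random embeddings, equal combination weights, and invented labels; the real experiment tunes the weights and evaluates by cross-validation over the ten folds):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
dim = 10

def np_features(triple, emb, a=1.0, b=1.0, c=1.0):
    """Weighted combination of the three word vectors of a noun phrase."""
    v1, v2, v3 = (emb[w] for w in triple)
    return (a * v1 + b * v2 + c * v3) / 3.0

# Toy embeddings and two NPs; labels: 0 = left-bracketed, 1 = right-bracketed.
emb = {w: rng.normal(size=dim)
       for w in ["blood", "pressure", "medicine", "state", "art", "model"]}
X_train = np.stack([np_features(("blood", "pressure", "medicine"), emb),
                    np_features(("state", "art", "model"), emb)])
y_train = np.array([0, 1])

clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
pred = clf.predict(X_train)
print(pred)
```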
As mentioned above, the architecture of the filter F is a feed-forward network with a fixed depth n. At each stage k, the filter attempts to reduce noise in the input vectors by approximating these vectors based on the vectors of the previous stage k − 1. In order to investigate the effect of each stage, we use the pre-trained GloVe vectors with 100 dimensions and evaluate the denoising performance of the resulting X_o vectors on detecting synonymy in the TOEFL dataset across several settings of the depth n.
The results are presented in Figure 4. The accuracy of synonym detection increases sharply from 63.2% to 88.6% as the number of stages grows from 0 to 3. However, the denoising performance of the vectors falls when the number of stages increases further. This evaluation shows that a filter with a suitably fixed depth can be trained to efficiently filter noise from word embeddings; in other words, when the number of stages exceeds a certain value (n = 3 in our case), salient information in the vectors is lost.
To the best of our knowledge, we are the first to address filtering noise from word embeddings. In this paper, we have presented two novel models to improve word embeddings by reducing the noise in state-of-the-art word embedding models. The underlying idea of our models is to make use of a deep feed-forward neural network filter to reduce noise. The first model generates complete word denoising embeddings; the second model yields overcomplete word denoising embeddings. We demonstrated that the word denoising embeddings outperform the original state-of-the-art word embeddings on several benchmark tasks.
The research was supported by the Ministry of Education and Training of the Socialist Republic of Vietnam (Scholarship 977/QD-BGDDT; Nguyen Kim Anh) and the DFG Heisenberg Fellowship SCHU 2580/1 (Sabine Schulte im Walde). It is also a collaboration between project D12 and project A8 of the DFG Collaborative Research Centre SFB 732.