Recently, machine learning models based on deep learning have achieved remarkable success on various tasks. However, state-of-the-art models are often extremely complex, with such a huge number of parameters that transparency and interpretability are compromised. Researchers cannot tell why or how a model makes a specific decision, which is particularly problematic when predictions feed into decision-critical applications such as medicine. Because understanding the underlying behavior of a model is critical, interpretability [Lipton2016] has therefore arisen as a key desideratum of machine learning models.
In natural language processing (NLP), word embeddings have produced significant improvements on different tasks. However, the embeddings are dense representations that humans find difficult to interpret, for three main reasons:
Polysemy: Word embeddings mix different meanings into a single vector, which is also known as the polysemy issue [Reisinger and Mooney2010].
Dimension understanding: The high and low values in the dimensions of an embedding vector are difficult for humans to interpret and analyze [Subramanian et al.2017].
Semantic analysis: We can only indirectly check the nearest neighbors to inspect the semantic meaning of a word embedding [Noraset et al.2017].
To address the polysemy issue, Arora et al. (2018) recently showed that a word embedding is a linear combination of its distinct sense embeddings weighted by their frequencies in the training corpus [Arora et al.2018]. They proposed to represent a word as the weighted sum of multiple atoms of discourse, where an atom indicates a concept. Although this decomposes the vector representation into several atoms with their semantic meanings, the discourse atom itself is still not directly explainable (the semantic analysis issue), and it still suffers from the dimension understanding issue, since the meaning of individual dimensions cannot be explained.
Regarding the dimension understanding issue, several prior works attempted to project the dense embeddings into a sparse space and found that words with large values in a certain dimension of a sparse vector form a semantic cluster [Faruqui et al.2015, Subramanian et al.2017]. They could thereby isolate different senses into different dimensions, addressing the first and second issues together. Nevertheless, inspecting nearest neighbors remains the only way to discover the meaning of a word embedding, so the semantic analysis issue is still unsolved.
Finally, Noraset et al. (2017) tackled the semantic analysis issue by directly generating the textual definition of a word embedding based on a dictionary resource [Noraset et al.2017]. The main concern with this work is that it treats all words as monosemous and thus suffers from the polysemy issue. Gadetsky, Yakubovskiy, and Vetrov (2018) tried to address this issue by training an encoder-decoder architecture along with a mask to generate context-dependent definitions [Gadetsky, Yakubovskiy, and Vetrov2018]. However, neither of these methods can explain the semantic meaning of individual dimensions (dimension understanding).
| Word | Definition | Example Sentences |
|---|---|---|
| bass | The lowest adult male singing voice. | His bass voice rings out attractively. |
| | | These are the opening words of the play, sung as a bass solo. |
| | The common European freshwater perch. | Only leisure anglers are allowed to fish bass in Irish waters. |
| | | I did manage a couple of hours fishing a bass pool the next morning. |
Based on the above discussions, this paper proposes a novel explainable model, xSense, which combines the benefits of the prior approaches while avoiding their drawbacks; that is, the proposed model solves all three issues together. The contributions of this paper are 4-fold:
Given a (context, word) pair, the proposed model can explicitly pin down the dimension in the sparse word representation that represents the sense of the word under the given context.
The proposed model is able to interpret the value of a specific dimension in the transformed sparse representation.
The proposed model provides a human-understandable textual definition for a particular sense of a word embedding given its context.
We release a large and high-quality context-definition dataset that consists of abundant example sentences and the corresponding definitions for a variety of words.
Dictionary corpora are usually available in an online electronic format. However, they often lack example sentences. To the best of our knowledge, the Oxford online dictionary is the only one that contains an abundant number of example sentences (https://en.oxforddictionaries.com/). A recent prior work released a dataset based on this resource [Gadetsky, Yakubovskiy, and Vetrov2018]. However, their dataset does not contain the complete information available online, which hinders its usage for diverse tasks. Specifically: 1) their dataset provides only a single example sentence per definition, while there are usually multiple ones online; 2) some provided example sentences do not contain the target word, making usage difficult; 3) some provided example sentences do not align with their target word and the associated definitions. Considering the quality of the released dataset, this paper addresses these problems by releasing a newly-collected dataset together with the toolkit for crawling the content. An example word along with its multiple definitions and associated example sentences is shown in Table 1.
| | Gadetsky et al. (2018) | Ours |
|---|---|---|
| Avg. #sentences per def. | 1 | 27 |
To be more specific, our dataset provides the following guarantees:
Each example sentence contains the target word it defines.
We include all example sentences of a specific definition available in the online dictionary.
We also include the corresponding POS tag of each word sense for further research usage.
The statistics of the proposed dataset are summarized in Table 2; our dataset contains many more example sentences, and its size is about 5 times larger than the one provided by Gadetsky, Yakubovskiy, and Vetrov (2018). This high-quality and rich dataset can be leveraged in different NLP tasks, and this paper utilizes it for learning explainable word sense networks, xSense.
xSense: Explainable Word Sense Networks
The proposed model, xSense, consists of four main modules as illustrated in Figure 1. Given a target word and its context, the model encodes the contexts (context encoder) and extracts its sparse representation (sparse vector extractor). A mask is generated (mask generator) based on the contexts and the sparse vector in order to find the dimensions that encode the corresponding sense information, and then a definition sentence is generated (definition decoder). Each component is detailed below.
We propose dual vocabularies for our model. The first is the vocabulary of the pretrained word2vec embeddings (https://code.google.com/archive/p/word2vec/), used by the encoder and the sparse vector extractor. The second is randomly initialized and used only by the decoder. The goal of using two vocabularies is to lower the out-of-vocabulary (OOV) rate. To be more specific, while the pretrained vocabulary contains a large number of tokens, it misses some common function words such as 'a' and 'of'. In order to generate such common words in definition sentences, the dedicated decoder vocabulary is adopted.
Given a context, the encoder module generates a distinguishable and meaningful sentence embedding. Because we do not assume additional resources for training the sentence embedding, the sentence is encoded in an unsupervised manner, using either sophisticated neural-based [Kiros et al.2015] or weighted-sum-based [Arora, Liang, and Ma2016] methods. The latter is chosen in this paper for two reasons. First, neural-based methods require additional training data and much longer training time. Second, considering that the goal of this paper is interpretability, the weighted-sum method is more transparent for humans to interpret and to investigate errors. (We also tried training a bidirectional GRU encoder; the performance was roughly the same.)
In our weighted-sum approach, we apply the smooth inverse frequency (SIF) embeddings [Arora, Liang, and Ma2016], inspired by the discourse random walk model [Arora et al.2016]. Formally, given word embeddings $\{v_w\}$, a sentence $s \in S$, where $S$ is the set of all training sentences, a smoothing parameter $a$, and the occurrence probabilities $p(w)$ of the words derived from the training corpus, SIF computes:

$$v_s = \frac{1}{|s|} \sum_{w \in s} \frac{a}{a + p(w)} v_w, \quad (1)$$

where $|s|$ is the length of sentence $s$; $v_s$ will be used in the mask generator to generate the attention mask.
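The SIF weighted average above can be sketched in a few lines of NumPy. The map names (`vectors`, `word_prob`) are illustrative, and the common-component removal step of the full SIF method is omitted for brevity:

```python
import numpy as np

def sif_embedding(sentence, vectors, word_prob, a=1e-3):
    """SIF-style sentence embedding: average of word vectors, each
    down-weighted by a / (a + p(w)) so frequent words contribute less.
    `vectors` maps word -> np.ndarray, `word_prob` maps word -> corpus
    probability. Words missing from either map are skipped."""
    words = [w for w in sentence if w in vectors and w in word_prob]
    if not words:
        return np.zeros(next(iter(vectors.values())).shape)
    weighted = [a / (a + word_prob[w]) * vectors[w] for w in words]
    return np.mean(weighted, axis=0)
```

Rare words thus dominate the sentence embedding, which is the behavior the discourse model predicts.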
Sparse Vector Extractor
Words that have large values in a specific dimension of their sparse representations often form a semantic cluster [Faruqui et al.2015, Subramanian et al.2017]. This characteristic helps interpret the semantics of different dimensions. Inspired by the idea of sparse coding in Subramanian et al. (2017), we incorporate a sparse vector extractor to learn the sparse representation $z$ of the target word embedding $x$ [Subramanian et al.2017]:

$$z = \mathrm{cap}(Wx + b), \quad \mathrm{cap}(v) = \max(0, \min(1, v)), \quad (2)$$
$$\hat{x} = W^\top z, \quad (3)$$

where $W \in \mathbb{R}^{K \times d}$ and $b \in \mathbb{R}^{K}$ are the learned parameters, $K$ is the overcomplete sparse dimension, and $d$ is the dimension of the word embedding.
This formulation follows a regular k-sparse autoencoder aiming at minimizing a reconstruction loss and a partial sparsity loss [Makhzani and Frey2013, Subramanian et al.2017]. Makhzani and Frey (2013) pointed out that the k-sparse autoencoder can be viewed as a variant of iterative thresholding with inversion [Maleki2009], which aims to train an overcomplete matrix that is as orthogonal as possible. After training, the learned matrix can be used as the dictionary in the sparse recovery stage. In the context of word embeddings, this matrix contains an approximately orthogonal basis of the embedding space, whose elements are likely to be basic semantic components.
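A minimal NumPy sketch of the forward pass of such a capped sparse autoencoder, assuming a SPINE-style cap activation and a tied-weight reconstruction (shapes and names are illustrative):

```python
import numpy as np

def cap(v):
    # clip activations to [0, 1], as in the capped activation of
    # Subramanian et al. (2017)
    return np.clip(v, 0.0, 1.0)

def sparse_code(x, W, b):
    """Sparse representation z = cap(W x + b) of a word embedding x (d,),
    with an overcomplete W (K x d, K > d) and bias b (K,). The
    reconstruction uses the transpose, so the rows of W act as the
    basis vectors of the embedding space."""
    z = cap(W @ x + b)
    x_hat = W.T @ z  # tied-weight decoder
    return z, x_hat
```

Each dimension of `z` stays in [0, 1], which is what the partial sparsity loss later pushes toward binary values.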
We link this observation to the discourse atom, the basic sense component [Arora et al.2018]. Arora et al. (2018) showed that a set of word embeddings can be disentangled into multiple discourse vectors via sparse coding. Formally, given word embeddings $\{v_w\}$ in $\mathbb{R}^d$ and an integer $m$, the goal is to solve:

$$\min_{A, \alpha} \sum_w \Big\| v_w - \sum_{j=1}^{m} \alpha_{w,j} A_j \Big\|_2^2, \quad \text{with each } \alpha_w \text{ sparse}, \quad (4)$$

where $\alpha_{w,j}$ represents how much the discourse vector $A_j$ weighs in constituting $v_w$. Both the rows of the learned matrix and the discourse atoms are basic semantic components of the embedding space. Moreover, from the viewpoint of matrix operations, (4) is equivalent to (3), with the sparse representation playing the role of the weights $\alpha_w$ and the rows of the learned matrix playing the role of the discourse vectors $A_j$. In practice, since the sparse representation is directly generated by the learned matrix, we use its corresponding row vectors in the mask generator. As illustrated in Figure 1, the sparse vector extractor focuses on decomposing different senses into different dimensions via sparse coding, and the trained sparse encoder is then used by the mask generator.
The mask generator module is the key to interpretability; it connects the encoder and the sparse extractor and automatically finds the sense-specific dimensions. Given the SIF embedding $v_s$ and a target word embedding $v_w$, we focus on extracting the sense information from $v_w$ according to its contexts. $v_w$ is first fed into the sparse vector extractor to produce its sparse representation $z$. We then look up the $k$ highest values in the sparse vector and retrieve the corresponding row vectors of the matrix $W$ learned by the sparse vector extractor. Formally, we compute the sparse representation $z$ of the target word by (2) and obtain the indices of its $k$ largest values:

$$I = \text{top-}k(z), \quad (5)$$

We retrieve the rows of $W$ according to the indices obtained in (5):

$$B = \{ W_i : i \in I \}, \quad (6)$$

where $W_i$ is the $i$-th row vector of $W$. We calculate the inner product between the sentence embedding and these basis vectors to generate a weighted mask. However, the direct calculation is unreasonable since they do not align well in the vector space. Because both $v_s$ and $W$ are derived from the same pretrained embeddings by almost-linear operations, we assume that learning an additional linear transformation $M$ can effectively align the spaces [Conneau et al.2017]. The inner products are thus calculated after the transformation:

$$e_i = \langle M v_s, W_i \rangle, \quad i \in I. \quad (7)$$

The mask is calculated by a softmax layer:

$$g = \mathrm{softmax}(e). \quad (8)$$

Finally, the retrieved basis vectors are weighted by the mask to form the sense vector:

$$m = \sum_{i \in I} g_i W_i. \quad (9)$$
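The mask generator amounts to a top-k lookup followed by attention over the retrieved basis rows. A NumPy sketch, with illustrative names and an assumed square alignment matrix:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def sense_vector(z, W, s, M, k=5):
    """Given a sparse code z (K,), the basis matrix W (K, d), a sentence
    embedding s (d,), and an alignment matrix M (d, d), pick the k
    largest dimensions of z, score their basis rows against the aligned
    sentence embedding, and return the mask-weighted sense vector."""
    idx = np.argsort(z)[-k:]   # indices of the k largest sparse values
    B = W[idx]                 # corresponding basis rows, shape (k, d)
    scores = B @ (M @ s)       # inner products after linear alignment
    mask = softmax(scores)     # attention mask over the k dimensions
    return mask @ B            # sense vector: weighted sum of bases
```

The dimension receiving the largest mask weight is the one the model "pins down" as carrying the sense of the word in this context.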
The decoder module generates a textual definition for a target word given its context. GRUs are applied as our recurrent units [Cho et al.2014]. We denote a target definition sentence as a sequence of tokens:

$$D = (w_1, w_2, \dots, w_T), \quad (10)$$

where $T$ is the number of words in the definition. We assign the aligned SIF embedding $M v_s$ to the initial hidden state of the first-layer GRU and the target word embedding $v_w$ to the initial state of the second-layer GRU, as illustrated in Figure 1:

$$h^{(1)}_0 = M v_s, \quad (11)$$
$$h^{(2)}_0 = v_w. \quad (12)$$

The goal of using the pretrained target word embedding $v_w$ as the initial hidden state is to provide an explicit signal that helps the model generate coherent and consistent definitions. We also conduct experiments using signals other than $v_w$ in the Experiments section to analyze their effectiveness. This initialization conditions the decoder to correctly generate definitions. For each decoding step, the input to the cell is the concatenation:

$$x_t = [\, w^{*}_{t}\, ;\, m \,], \quad (13)$$

where $w^{*}_{t}$ is the ground-truth word embedding at the $t$-th timestep and $m$ is the sense vector calculated in (9). The decoding process terminates when an end-of-sentence token is predicted. The internal structure of a GRU cell is the standard one:

$$u_t = \sigma(W_u x_t + U_u h_{t-1}), \quad r_t = \sigma(W_r x_t + U_r h_{t-1}),$$
$$\tilde{h}_t = \tanh(W_h x_t + U_h (r_t \odot h_{t-1})), \quad h_t = (1 - u_t) \odot h_{t-1} + u_t \odot \tilde{h}_t. \quad (14)$$

The output is generated by passing the hidden state through a linear layer:

$$o_t = W_o h_t + b_o, \quad (15)$$

where $W_o$ and $b_o$ are learned parameters. We use $o_t$ to generate the final distribution over the decoder vocabulary via the softmax operation. Formally,

$$P(w_t \mid w_{<t}) = \mathrm{softmax}(o_t). \quad (16)$$

Note that during the testing phase, the decoder is auto-regressive. Formally, (13) becomes:

$$x_t = [\, \hat{w}_{t-1}\, ;\, m \,],$$

where $\hat{w}_{t-1}$ is the embedding of the previously generated word.
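The decoding procedure, including the concatenated input of (13) and the switch between teacher forcing and auto-regressive decoding, can be sketched as follows. `step` stands in for the two-layer GRU plus output layer, and all names are illustrative:

```python
import numpy as np

def decode(step, init_h1, init_h2, sense_vec, embed, start_id, eos_id,
           max_len=30, targets=None):
    """Greedy decoding loop for a two-layer recurrent definition decoder.
    `step(x, h1, h2) -> (logits, h1, h2)` abstracts the GRU layers and
    output projection. At each timestep the previous word embedding is
    concatenated with the fixed sense vector. If `targets` (a list of
    ground-truth token ids) is given, teacher forcing is used;
    otherwise decoding is auto-regressive."""
    h1, h2 = init_h1, init_h2
    prev, out = start_id, []
    for t in range(max_len):
        x = np.concatenate([embed(prev), sense_vec])
        logits, h1, h2 = step(x, h1, h2)
        pred = int(np.argmax(logits))
        out.append(pred)
        if pred == eos_id:
            break
        prev = targets[t] if targets is not None else pred
    return out
```

The only difference between training-time and test-time decoding is which token feeds the next step, which is exactly the change from (13) to its auto-regressive form.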
| Model | 1-Layer Init | 2-Layer Init | Each-Time Input | Small | Large | Unseen |
|---|---|---|---|---|---|---|
| *1) Baseline w/o contexts* | | | | | | |
| Noraset et al. (2017) | - | - | - | 33.8 / 36.3 | 30.5 / 32.7 | 12.0 / 13.3 |
| *2) Baseline w/ contexts* | | | | | | |
| Seq2Seq | - | - | - | 20.1 / 21.1 | 18.3 / 18.7 | 11.3 / 10.5 |
| Gadetsky et al. (2018) | - | - | - | 26.0 / 31.6 | 25.5 / 30.4 | 9.8 / 11.3 |
| *3) Proposed: xSense* | | | | | | |
| SSS | Sense Vector | Sense Vector | Sense Vector | 14.8 / 17.0 | 14.4 / 15.9 | 12.1 / 13.3 |
| AAS | Aligned Contexts | Aligned Contexts | Sense Vector | 20.6 / 23.0 | 18.6 / 20.3 | 12.4 / 13.9 |
| TTS | Target Word | Target Word | Sense Vector | 33.6 / 35.9 | 29.4 / 31.3 | 11.9 / 14.2 |
| ATS | Aligned Contexts | Target Word | Sense Vector | 37.2 / 39.7 | 30.1 / 32.0 | 12.7 / 14.5 |
| TAS | Target Word | Aligned Contexts | Sense Vector | 40.0 / 42.6 | 31.9 / 33.9 | 12.4 / 13.2 |
There are two losses for optimizing the sparse extractor. The first is the reconstruction loss:

$$L_{rec} = \frac{1}{N} \sum_{x \in X} \| x - \hat{x} \|_2^2,$$

where $N$ is the size of the whole dataset $X$ and $\hat{x}$ is the reconstruction of the word embedding $x$. The second is the partial sparsity loss [Subramanian et al.2017]:

$$L_{psl} = \frac{1}{N} \sum_{x \in X} \sum_{i} z_i (1 - z_i),$$

which encourages every dimension of the sparse representation $z$ to be either 0 or 1. Note that the sparse extractor module is pretrained and then fixed. In order to train the whole model in an end-to-end fashion, we minimize the negative log likelihood over the maximum number of decoding steps $T$:

$$L_{nll} = - \sum_{t=1}^{T} \log P(w_t \mid w_{<t}).$$
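The two extractor losses can be sketched directly; `Z` holds capped sparse codes in [0, 1] (an illustrative sketch, not the original training code):

```python
import numpy as np

def partial_sparsity_loss(Z):
    """Partial sparsity loss in the style of Subramanian et al. (2017):
    penalizing z * (1 - z) pushes every activation z in [0, 1] toward
    either 0 or 1. Z is a (batch, K) matrix of capped sparse codes."""
    return float(np.mean(Z * (1.0 - Z)))

def reconstruction_loss(X, X_hat):
    # mean squared reconstruction error over the batch
    return float(np.mean((X - X_hat) ** 2))
```

The penalty is zero exactly when the codes are binary and maximal when they sit at 0.5, so minimizing it sparsifies the representation without a hard threshold.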
To evaluate our proposed model, we conduct various sets of experiments using our newly collected Oxford dataset.
Both sets of word embeddings have dimension 300. For the encoder, we fix the smoothing term $a$ in (1) to the value recommended by Arora, Liang, and Ma (2016). For the sparse vector extractor, a setup similar to Subramanian et al. (2017) is adopted, and a fixed $k$ is chosen for the mask generator. The definition decoder is a two-layer GRU [Cho et al.2014] with hidden size 300. SGD is used to train the sparse vector extractor and the mask generator, while the Adam optimizer [Kingma and Ba2014] with default settings is applied to the decoder.
In the experiments, we want to demonstrate the ability of the proposed model at two difficulty levels.
Easy: The easier setting tests (seen words, unseen contexts). Concretely, the Small test set is the one proposed by Gadetsky, Yakubovskiy, and Vetrov (2018) with 6,809 instances, while the Large test set is the one we collect with 42,589 instances.
Hard: The harder setting tests (unseen words, unseen contexts) on the Unseen test set with 808 instances, which consists of target words that are never seen during training.
Two objective measures are reported: BLEU [Papineni et al.2002] up to 4-grams and the F-measure of ROUGE-L [Lin2004]. Considering that the BLEU score has many smoothing strategies, we follow prior work [Noraset et al.2017, Gadetsky, Yakubovskiy, and Vetrov2018] and use the sentence-BLEU binary in the Moses library (http://www.statmt.org/moses/) for a fair comparison. Both scores are averaged across all testing instances.
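For reference, the ROUGE-L F-measure reduces to a longest-common-subsequence computation; a self-contained sketch (the beta default below is a common choice, not necessarily the one used in these experiments):

```python
def rouge_l_f(candidate, reference, beta=1.2):
    """ROUGE-L F-measure via the longest common subsequence (Lin 2004).
    Inputs are token lists; precision/recall are LCS length over the
    candidate/reference lengths respectively."""
    m, n = len(candidate), len(reference)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if candidate[i] == reference[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    lcs = dp[m][n]
    if lcs == 0:
        return 0.0
    p, r = lcs / m, lcs / n
    return (1 + beta ** 2) * p * r / (r + beta ** 2 * p)
```

Because it rewards in-order word overlap rather than exact n-grams, ROUGE-L is more forgiving than BLEU for short definition sentences.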
Two sets of baseline approaches are compared: the first does not consider the contexts while the second does. The baseline without contexts is essentially a language model conditioned on the pretrained word embeddings, sharing the same architecture as Noraset et al. (2017). We reimplement the model and train it on our proposed dataset for a fair comparison. For baselines with contexts, we train the model proposed by Gadetsky, Yakubovskiy, and Vetrov (2018) with their strongest settings on our dataset, as well as a vanilla sequence-to-sequence model whose encoder and decoder are both two-layer GRU networks.
We try different input variants of (11), (12), and (13) to examine the effectiveness of inputting the explicit signal during decoding. Specifically, for the 1-layer and 2-layer initializations of the GRU and the additional input at each time step, different combinations of the aligned contexts (A), the target word vector (T), and the sense vector (S) are attempted. Note that at least one of the inputs should be the sense vector ($m$ in (9)) in order to optimize the mask generator.
| Model | Top 1 (All) | Top 1 (Multi-Sense) | Ranking Score (All) | Ranking Score (Multi-Sense) |
|---|---|---|---|---|
| Noraset et al. (2017) | 311 (30.8%) | 17 (28.4%) | 1887 (27.6%) | 111 (27.2%) |
| Gadetsky et al. (2018) | 240 (23.8%) | 9 (15.0%) | 1701 (24.9%) | 92 (22.5%) |
| xSense w/o Alignment | 115 (11.4%) | 8 (13.3%) | 1182 (17.3%) | 80 (19.9%) |
| xSense-ATS (Aligned Contexts/Target Word/Sense Vector) | 342 (34.0%) | 26 (43.3%) | 2055 (30.2%) | 124 (30.4%) |
The results are shown in Table 3. Among all baselines, Noraset et al. (2017)'s model is the strongest even though it generates exactly the same definition regardless of the context. The probable reason is that dictionary definitions are often written in a highly structured and similar format, so generating the same definition for all contexts can still share some common words with the ground truth.
Among baselines leveraging contexts, the sequence-to-sequence model performs worse than Gadetsky, Yakubovskiy, and Vetrov (2018)'s. The probable reason is that the latter introduces a mask to differentiate contexts and generate definitions accordingly. However, its performance is the worst among all models on the Unseen test set, which explicitly evaluates generalizability. This observation suggests that its better performance on Large and Small likely comes from memorizing information in the training data (overfitting). In addition, the performance gain over Noraset et al. (2017) reported in [Gadetsky, Yakubovskiy, and Vetrov2018] is only 0.46 BLEU (out of 100), which is insignificant.
To analyze the information richness of different variants, we replace the sense vector with the aligned contexts as the initialization of the two hidden layers. Comparing SSS and AAS in Table 3, using the aligned contexts as the initial hidden state of the decoder outperforms using only the sense vector. The reason is that the aligned contexts provide the decoder with additional contextual information and help generate more sophisticated definitions, while the sense vector is the weighted sum of basis vectors as shown in (9), which may introduce errors due to the imperfection of the sparse vector extractor.
We also replace the sense vector with the pretrained target word embedding to initialize the hidden state of the decoder, and significantly better performance is observed (SSS vs. TTS). This is reasonable because pretrained embeddings are trained on a large corpus and thus contain robust and rich information; moreover, they provide a static representation that stabilizes the training of the decoder. However, we find that while this yields good BLEU/ROUGE scores, the variety of the generated definitions is lower than with the aligned contexts. In other words, despite pretrained word embeddings being informative, their semantic meaning is likely dominated by the most frequent senses in the training corpus. In fact, we observe that simply using the target word embedding as the initial decoder hidden state cannot distinguish fine-grained senses; the definitions generated by TTS correspond to the major senses in most testing instances.
Finally, to balance variety and correctness, combining the aligned contexts with the pretrained word embedding as the decoder initialization (ATS, TAS) is a natural choice given these experiments. This yields the best results on the Large and Unseen datasets, demonstrating better performance and generalizability.
All models perform worse on Unseen than on the other test sets, because these words are never encountered during training, making embedding explanation much more difficult. Moreover, we manually check the test words and find that most of them are uncommon words, making this test set even harder.
In order to assess the quality of the generated definitions, we randomly select two hundred samples from the Small dataset for human evaluation, where two settings are reported: one includes all words (All) and the other includes only the words for which multiple (≥3) senses are sampled (Multi-Sense). There are four candidate models, including both baselines, one of our best models (xSense-ATS), and xSense without the alignment in (7) that instead jointly learns the sparse vector extractor. Three human annotators are recruited to rank the generated definitions given the target word and its corresponding contexts in each sample. Table 4 shows the final statistics, where the top-1 choices and the accumulated scores are reported (4: first, 3: second, 2: third, 1: last). Note that in some samples two models may generate exactly the same definition; if an annotator picks either of them, we assign the same score to the other.
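The accumulated ranking score described above can be computed as follows; a hypothetical helper assuming ties have already been resolved by assigning identical ranks, as described:

```python
def ranking_scores(rankings, num_models=4):
    """Accumulate human-evaluation scores: rank 1 -> 4 points,
    rank 2 -> 3, rank 3 -> 2, rank 4 -> 1 (for num_models=4).
    `rankings` is a list of per-sample lists giving each model's
    1-based rank; tied models should share the same rank."""
    totals = [0] * num_models
    for sample in rankings:
        for model_idx, rank in enumerate(sample):
            totals[model_idx] += num_models + 1 - rank
    return totals
```

With four models, a sample's first-ranked model gains 4 points and the last-ranked gains 1, matching the scoring scheme in Table 4.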
Our model performs best among all candidates in both settings, i.e., for all target words and for multi-sense target words. While Noraset et al. (2017)'s model achieves the second-best performance, it cannot distinguish different senses since it does not consider the contexts, which undermines the goal of explainable embeddings. The Multi-Sense setting indeed shows that our proposed model significantly outperforms theirs. The worst model is the one without alignment, indicating that without the learned transformation the basis vectors and the sentence embedding do not align in the vector space, so the attention cannot be correctly computed.
| Target Word | Contexts, Generated Definition, Nearest Neighbors |
|---|---|
| band | Context: He looked around and saw what he was looking for a band of thin electrical wire. |
| | Gen. Definition: A circular revolving plate supporting a single wire or other object of rock |
| | Nearest Neighbors: inductor, chipset, transceiver (701-th dimension) |
| | Context: In her spare time she performs as one of three vocalists in a band. |
| | Gen. Definition: A group of musicians actors or dancers who perform together |
| | Nearest Neighbors: punk, tracklist, hiphop (215-th dimension) |
| cool | Context: I closed my eyes again and imagined myself in a cool refreshing blue pool. |
| | Gen. Definition: soothing or refreshing because of its low temperature |
| | Nearest Neighbors: humid, moist, wintry (213-th dimension) |
| | Context: There is need to cool off our tempers and stop fanning the embers of dissent. |
| | Gen. Definition: unemotional undemonstrative or impassive dancers who perform together |
| | Nearest Neighbors: levelheaded, gentlemanly, personable (161-th dimension) |
| bow | Context: It was customary when they finished to bow as a sign of respect to their master. |
| | Gen. Definition: a gesture of acknowledgement or concession to |
| | Nearest Neighbors: palanquin, casket, limousine (143-th dimension) |
| | Context: Pat was wearing a black spandex long sleeved shirt with a thin thread tied in a bow |
| | Gen. Definition: a length of cord rope wire or other material serving a particular purpose |
| | Nearest Neighbors: embroidery, ribbon, fabric (782-th dimension) |
An important capability of our model is that we can pin down the dimension in the sparse representation of a target word given its context. This capability is difficult to quantify, so we show some samples for analysis in Table 5. The nearest neighbors and the generated definitions belong to the same semantic clusters. Moreover, the model is able to disentangle multiple senses based on the given contexts.
| Target Word | Contexts, Ground Truth, Generated Definition, Nearest Neighbors |
|---|---|
| bass | Context: Don't worry if all your bass have been what we call schoolie bass which are fish under two or three pounds. |
| | Ground Truth: The common European freshwater perch |
| | Gen. Definition: A bass guitar or double bass. (X) |
| | Nearest Neighbors: yacht, vessel, surf, sail (148-th dimension) |
| tie | Context: I of course immediately asked him how many knots he could tie. |
| | Ground Truth: form a knot or bow in a ribbon lace |
| | Gen. Definition: form a knot or bow in a ribbon lace |
| | Nearest Neighbors: unbeaten, tiebreaker, victor (780-th dimension) (X) |
To better understand the limitations of our model, we show some common mistakes in Table 6. For the word bass, our model generates the wrong definition while picking the correct nearest neighbors. Note that the generated definition corresponds to another sense of bass, so this error may be due to the imbalance of sense frequencies in the training data, considering that bass as a kind of fish is a relatively rare sense. For the word tie, the generated definition is correct while the selected nearest neighbors are wrong. Because the nearest neighbors are determined by (8), this error type may be propagated from the SIF sentence embedding.
This work can be viewed as a bridge that connects sparse embeddings and sense embeddings together for better interpretability via definition modeling.
Several works have shown that introducing sparsity in word embedding dimensions improves dimension interpretability [Murphy, Talukdar, and Mitchell2012, Fyshe et al.2015] and the utility of word embeddings as features in downstream tasks [Guo et al.2014]. These works focused on investigating the internal characteristics of word embeddings, making it hard to support real-world applications such as word sense disambiguation (WSD). In addition, they cannot provide explicit textual definitions of word embeddings.
In the literature, most prior works assign a vector representation to each sense of a word, typically assuming a large training corpus to facilitate training multi-sense embeddings in an unsupervised manner [Reisinger and Mooney2010, Li and Jurafsky2015, Lee and Chen2017]. Note that the sense embeddings in our framework are disentangled internally by a sparse autoencoder, so additional training data is not required. Also, unlike prior work, our model provides human-readable definitions for better interpretability.
Dictionary definition task
Several works have utilized dictionary definitions to perform ranking tasks or to learn word embeddings. In the ranking tasks, models are evaluated by how well they rank words for given definitions [Hill et al.2015] or definitions for words [Noraset et al.2017]. Aside from ranking, Bahdanau et al. (2017) suggested using definitions to compute embeddings for out-of-vocabulary words. In contrast, this paper focuses on utilizing textual definitions to explain embeddings via human-understandable natural language.
In this paper, the interpretability of word embedding dimensions is investigated. Our proposed model is able to pin down a specific dimension on its sparse representation via an attention mechanism in an unsupervised manner and generate its corresponding textual definition at the same time. In the experiments, the proposed model outperforms others for both quantitative results and human evaluation. Finally, we release a new and high-quality dataset which is five times larger than the currently available one, providing potential directions for future research work.
- [Arora et al.2016] Arora, S.; Li, Y.; Liang, Y.; Ma, T.; and Risteski, A. 2016. A latent variable model approach to pmi-based word embeddings. Transactions of the Association for Computational Linguistics 4:385–399.
- [Arora et al.2018] Arora, S.; Li, Y.; Liang, Y.; Ma, T.; and Risteski, A. 2018. Linear algebraic structure of word senses, with applications to polysemy. Transactions of the Association of Computational Linguistics 6:483–495.
- [Arora, Liang, and Ma2016] Arora, S.; Liang, Y.; and Ma, T. 2016. A simple but tough-to-beat baseline for sentence embeddings.
- [Bahdanau et al.2017] Bahdanau, D.; Bosc, T.; Jastrzbski, S.; Grefenstette, E.; Vincent, P.; and Bengio, Y. 2017. Learning to compute word embeddings on the fly. arXiv preprint arXiv:1706.00286.
- [Cho et al.2014] Cho, K.; Van Merriënboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; and Bengio, Y. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
- [Conneau et al.2017] Conneau, A.; Lample, G.; Ranzato, M.; Denoyer, L.; and Jégou, H. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087.
- [Faruqui et al.2015] Faruqui, M.; Tsvetkov, Y.; Yogatama, D.; Dyer, C.; and Smith, N. 2015. Sparse overcomplete word vector representations. arXiv preprint arXiv:1506.02004.
- [Fyshe et al.2015] Fyshe, A.; Wehbe, L.; Talukdar, P. P.; Murphy, B.; and Mitchell, T. M. 2015. A compositional and interpretable semantic space. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 32–41.
- [Gadetsky, Yakubovskiy, and Vetrov2018] Gadetsky, A.; Yakubovskiy, I.; and Vetrov, D. 2018. Conditional generators of words definitions. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 266–271.
- [Guo et al.2014] Guo, J.; Che, W.; Wang, H.; and Liu, T. 2014. Revisiting embedding features for simple semi-supervised learning. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 110–120.
- [Hill et al.2015] Hill, F.; Cho, K.; Korhonen, A.; and Bengio, Y. 2015. Learning to understand phrases by embedding the dictionary. arXiv preprint arXiv:1504.00548.
- [Kingma and Ba2014] Kingma, D. P., and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
- [Kiros et al.2015] Kiros, R.; Zhu, Y.; Salakhutdinov, R. R.; Zemel, R.; Urtasun, R.; Torralba, A.; and Fidler, S. 2015. Skip-thought vectors. In Advances in neural information processing systems, 3294–3302.
- [Lee and Chen2017] Lee, G.-H., and Chen, Y.-N. 2017. Muse: Modularizing unsupervised sense embeddings. arXiv preprint arXiv:1704.04601.
- [Li and Jurafsky2015] Li, J., and Jurafsky, D. 2015. Do multi-sense embeddings improve natural language understanding? arXiv preprint arXiv:1506.01070.
- [Lin2004] Lin, C.-Y. 2004. Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out.
- [Lipton2016] Lipton, Z. C. 2016. The mythos of model interpretability. arXiv preprint arXiv:1606.03490.
- [Makhzani and Frey2013] Makhzani, A., and Frey, B. 2013. K-sparse autoencoders. arXiv preprint arXiv:1312.5663.
- [Maleki2009] Maleki, A. 2009. Coherence analysis of iterative thresholding algorithms. In Communication, Control, and Computing, 2009. Allerton 2009. 47th Annual Allerton Conference on, 236–243. IEEE.
- [Murphy, Talukdar, and Mitchell2012] Murphy, B.; Talukdar, P.; and Mitchell, T. 2012. Learning effective and interpretable semantic models using non-negative sparse embedding. In Proceedings of COLING 2012, 1933–1950.
- [Noraset et al.2017] Noraset, T.; Liang, C.; Birnbaum, L.; and Downey, D. 2017. Definition modeling: Learning to define word embeddings in natural language. In Proceedings of AAAI.
- [Papineni et al.2002] Papineni, K.; Roukos, S.; Ward, T.; and Zhu, W.-J. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, 311–318. Association for Computational Linguistics.
- [Reisinger and Mooney2010] Reisinger, J., and Mooney, R. J. 2010. Multi-prototype vector-space models of word meaning. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, 109–117. Association for Computational Linguistics.
- [Subramanian et al.2017] Subramanian, A.; Pruthi, D.; Jhamtani, H.; Berg-Kirkpatrick, T.; and Hovy, E. 2017. Spine: Sparse interpretable neural embeddings. arXiv preprint arXiv:1711.08792.