A Probabilistic Framework for Learning Domain Specific Hierarchical Word Embeddings

10/16/2019
by   Lahari Poddar, et al.

The meaning of a word often varies depending on its usage in different domains. The standard word embedding models struggle to represent this variation, as they learn a single global representation for a word. We propose a method to learn domain-specific word embeddings, from text organized into hierarchical domains, such as reviews in an e-commerce website, where products follow a taxonomy. Our structured probabilistic model allows vector representations for the same word to drift away from each other for distant domains in the taxonomy, to accommodate its domain-specific meanings. By learning sets of domain-specific word representations jointly, our model can leverage domain relationships, and it scales well with the number of domains. Using large real-world review datasets, we demonstrate the effectiveness of our model compared to state-of-the-art approaches, in learning domain-specific word embeddings that are both intuitive to humans and benefit downstream NLP tasks.


1 Introduction

Word embedding models learn a lower dimensional vector representation of a word, while encoding its semantic relationship to other words in a corpus Mikolov et al. (2013a). Using pre-trained embeddings to represent the semantics of input words has become a standard practice in NLP models. Apart from their usage in downstream applications, word embeddings are also powerful tools in language understanding and analysis of word behaviour Mikolov et al. (2013c); Bolukbasi et al. (2016); Garg et al. (2018).

Figure 1: Sample product reviews from different domains, using the word bright but in different contexts.

In a typical embedding space, each word is represented by a single embedding vector. However, applications often involve corpora consisting of documents from diverse domains, where usage of a word varies across domains. For example, in product reviews, the word bright usually refers to screen brightness for electronics products, but when used in reviews of clothes it refers to a lighter shade of colors, as shown in the sample reviews in Figure 1. Ignoring these nuances in word representations may affect many downstream applications, such as sentiment analysis, where thin or cheap are positive in the context of electronics, but may be perceived as negative for outdoors or apparel products.

It is thus insufficient to learn a single representation for a word, where its meanings and usage patterns pertaining to different topical domains would be lost. In order to learn separate embeddings in different domains, the current models Mikolov et al. (2013b); Pennington et al. (2014) would need to be trained independently on each domain-specific dataset. This raises multiple issues: (1) not all domains have a large amount of data for learning a reliable embedding model; (2) the relationships between domains are not captured when we treat them independently, and (3) the embedding dimensions will not be aligned making it hard to study the semantic shifts of a word between the domains. Therefore, we need a principled way to learn domain-specific word embeddings that leverages the domain relationships.

Figure 2: A sample product taxonomy where products belong to a node in the hierarchy. Global is a dummy node representing the root of the tree.

In this paper we propose a hierarchical word embedding model that learns domain-specific embeddings of words from text categorized into hierarchical domains, such as a product taxonomy of an e-commerce site, research field tags of scientific articles, or topical categories of a news portal. A sample product taxonomy is shown in Figure 2. For an e-commerce site, products are categorized into one or more nodes in such a hierarchy. Products that are closer in the taxonomy (e.g. Digital cameras and PC accessories) are more similar to each other than to products residing in distant nodes of the tree (e.g. Shoes or Cycling). We note that, while products may have some features specific to their particular node, they also have some features inherited from their parent, which are hence shared across sibling nodes (e.g., most Electronics items have a battery, and the typical product under Apparel has a size). Intuitively, this implies that most words in a category node show usage patterns similar to those in the parent node, while some words may deviate from the parent category-level meaning and adopt specialized meanings. Accurate modeling of these words will not only benefit NLP systems, but will also facilitate discovery of such terms in a systematic way.

We build on structured Exponential Family Embeddings (s-EFE) (Rudolph et al., 2017). We assume that the embedding of a word at a node in the taxonomy may deviate from its embedding at the parent node, to an extent governed by a probability distribution. We use the hierarchical structure of the domains to control the amount of deviation allowed for a word across domains. This helps the model share information among related product domains, while allowing the embedding of the same word to vary for distant domains in proportion to their distance in the taxonomy.

We conduct extensive experiments on large and diverse product reviews datasets and show that our model is able to fit unseen data better than competitive baselines. The learned semantic space can capture idiosyncrasies of different domains which are intuitive to humans, suggesting the usefulness of the representations for downstream exploratory applications. Evaluation on a downstream task of review rating prediction demonstrates that the hierarchical word embeddings outperform other word embedding approaches. Further analysis through crowd-sourced evaluation shows that our model can discover key domain terms, as a natural by-product of its construction without requiring additional processing.

To summarize, this paper makes the following contributions:

  • Proposing a hierarchical word embedding model to represent domain-specific word meanings, by leveraging the inherent hierarchical structure of domains.

  • Conducting extensive qualitative and quantitative evaluation, including downstream tasks demonstrating that our model outperforms competitive baselines.

  • Presenting the capability of our model in naturally learning domain-specific keywords and evaluating them using crowd-sourcing.

2 Related Work

Learning word embeddings has received substantial attention and has moved from co-occurrence based models (Landauer et al., 1998) to a wealth of prediction based methods leveraging deep learning (Mikolov et al., 2013a; Pennington et al., 2014; Bojanowski et al., 2017) and Bayesian modeling (Rudolph et al., 2016). All these methods learn a single representation of a word, which is insufficient for capturing usage variation across domains. Recently, Rudolph et al. (2017) proposed s-EFE to exploit the grouping of data points into a (flat) set of semantically coherent subgroups (e.g., grouping ArXiv papers by research field). Inspired by their approach, we develop a model that captures multi-level hierarchical structure in the data, and present product hierarchies on e-commerce platforms as a sizable real-world use case.

In the spirit of capturing multiple meanings of a word, our work is related to context-dependent word representations McCann et al. (2017); Peters et al. (2018); Devlin et al. (2018) and multi-sense word embeddings (Neelakantan et al., 2014; Li and Jurafsky, 2015; Nguyen et al., 2017; Athiwaratkun et al., 2018). Contextualized word vectors McCann et al. (2017) and their deep extensions Peters et al. (2018) aim to learn a dynamic representation of a word as internal states of a recurrent neural network that encodes the word depending on its context. They have proven effective for downstream tasks when used in addition to pre-trained single-vector word embeddings. This line of research is complementary to ours, and combining the two would be an interesting direction for future work. Multi-sense embeddings dynamically allocate additional vectors to represent new senses of a word Li and Jurafsky (2015). In contrast, our model is not designed to explicitly encode word senses, i.e. generally accepted distinct meanings of a word. Instead, we capture idiosyncrasies in word usage that are specific to the hierarchical domain structure exhibited by the data.

Our model can capture domain-specific semantics of product categories, and relates to unsupervised Titov and McDonald (2008); Poddar et al. (2017); Luo et al. (2018) or domain knowledge-based Mukherjee and Liu (2012); Chen et al. (2013) models of aspect discovery. Aspects are domain-specific concepts along which products are characterized and evaluated (e.g., battery life for electronics, or suspense for crime novels). We focus on learning a characterization of each word specific to every domain, instead of learning abstract aspects. However, we show in an experiment with human subjects that the learnt embeddings can be utilized to discover important domain terms without further processing, and suggest interesting applications for future research.

3 Method

In this section we first describe the Exponential Family Embeddings (EFE) model and its extension to grouped data. Thereafter, we describe the construction of our hierarchical word embedding model.

3.1 Background

Exponential Family Embeddings (EFE; Rudolph et al., 2016) is a generic model that encodes the conditional probability of observing a data point (e.g. a word in a sentence, or an item in a shopping basket) given other data points in its context (surrounding words, or other items in the shopping basket, respectively).

In the context of text, a sentence of N words is represented as a binary matrix X ∈ {0,1}^{N×V}, where V is the size of the vocabulary and x_{nv} indicates whether word v appears at the n-th position in the sentence or not. EFE models the conditional distribution of an observation x_{nv} given its context x_{c_n}, where c_n is the set of positions within a window of size w around position n:

    x_{nv} | x_{c_n} ~ ExpFam( η_{nv}(x_{c_n}), t(x_{nv}) )     (1)

where x_{c_n} are the observations in the context of x_{nv}; η_{nv} is the natural parameter, and t(x_{nv}) is the sufficient statistic of the exponential family.

The natural parameter is parameterized with two sets of vectors: (1) the embedding vector ρ_v of word v, and (2) the context vectors α_{v'} of the surrounding words v'; where ρ_v, α_v ∈ R^K and K is the embedding dimension. The definition of the natural parameter is a modeling choice, and EFE defines it as a function of a linear combination of the above two vectors, focusing on linear embeddings:

    η_{nv}(x_{c_n}) = f( ρ_v^T Σ_{(n',v') ∈ c_n} α_{v'} x_{n'v'} )     (2)

where f is the identity link function.

EFEs define a parameter-sharing structure across observations: for each word v in the vocabulary, a single embedding vector ρ_v and a single context vector α_v are shared across all positions at which the word appears in a sentence.

EFEs are a general family of probabilistic graphical models that are also applicable to data other than language, and provide the flexibility of choosing an appropriate probability distribution depending on the type of data at hand. For word embeddings, we model binary data by observing whether or not a word appears in a given context, i.e. x_{nv} is 1 or 0. The Bernoulli distribution is a suitable choice for this type of data.
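A minimal sketch of this Bernoulli conditional, following Eqs. (1)–(2): toy vocabulary size and random vectors stand in for learned parameters, and the word indices are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
V, K = 1000, 50                              # toy vocabulary size, embedding dimension
rho = rng.normal(scale=0.1, size=(V, K))     # embedding vectors (one per word)
alpha = rng.normal(scale=0.1, size=(V, K))   # context vectors (one per word)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cond_prob(v, context_ids):
    """p(x_v = 1 | context): Bernoulli whose natural parameter is
    rho_v dotted with the sum of the context vectors (identity link)."""
    eta = rho[v] @ alpha[context_ids].sum(axis=0)
    return sigmoid(eta)

p = cond_prob(3, [10, 42, 7, 99])
assert 0.0 < p < 1.0   # a valid Bernoulli probability
```

The sum over the context corresponds to the linear combination inside Eq. (2); with a Bernoulli likelihood, mapping the natural parameter to a probability uses the sigmoid.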

Since the matrix X is very sparse, in order to diminish the effect of negative observations and to keep the training time low, a negative sampling approach is used, similar to Word2Vec Mikolov et al. (2013b). The model is trained by maximizing the following objective function:

    L(ρ, α) = Σ_{n,v} log p(x_{nv} | x_{c_n}; ρ, α) + log p(ρ) + log p(α)     (3)

where the sum over negative observations (x_{nv} = 0) is approximated by sampling, and log p(ρ) and log p(α) are Gaussian regularizers on the embedding and context vectors.
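The per-observation term of this negative-sampling objective can be sketched as follows; the parameters are toy random values, and the uniformly sampled negatives stand in for whatever negative-sampling distribution an implementation would use.

```python
import numpy as np

rng = np.random.default_rng(1)
V, K = 1000, 50
rho = rng.normal(scale=0.1, size=(V, K))    # embedding vectors
alpha = rng.normal(scale=0.1, size=(V, K))  # context vectors

def log_sigmoid(x):
    # numerically stable log(sigmoid(x))
    return -np.logaddexp(0.0, -x)

def pair_objective(target, context_ids, num_negatives=5):
    """Log-likelihood of one positive observation (x = 1) plus
    a handful of sampled negative observations (x = 0)."""
    ctx = alpha[context_ids].sum(axis=0)
    pos = log_sigmoid(rho[target] @ ctx)            # observed word
    negs = rng.integers(0, V, size=num_negatives)   # sampled "absent" words
    neg = log_sigmoid(-(rho[negs] @ ctx)).sum()     # their x = 0 terms
    return pos + neg

loss = -pair_objective(3, [10, 42, 7])   # minimize negative log-likelihood
```

Summing this quantity over all positions, plus the Gaussian regularizers on ρ and α, gives the full objective of Eq. (3).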

Structured Exponential Family Embeddings (s-EFE; Rudolph et al., 2017) extends EFE to model data organized into a set of groups, learning group-specific embeddings for each word. Similar to EFE, s-EFE assumes that a word v has a context vector α_v and embedding vectors ρ_v^{(s)}. The context vector α_v is shared across groups, but an embedding vector ρ_v^{(s)} is learned specifically for each group s. By sharing the context vectors, s-EFE ensures that the embedding dimensions are aligned across groups and are directly comparable. The embedding vectors are tied together to share statistical strength through a common prior: each ρ_v^{(s)} is drawn from a normal distribution centered around the word's global embedding vector ρ_v^{(0)}, i.e. ρ_v^{(s)} ~ N(ρ_v^{(0)}, σ² I), where σ is a parameter that controls how much the group vectors can deviate from the global representation of the word.
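As a concrete reading of this prior, the sketch below (toy dimensions, illustrative only) evaluates the log-density of a group embedding under the shared Gaussian prior: a group vector close to the global vector scores higher than one that has drifted far away.

```python
import numpy as np

def group_prior_logpdf(rho_group, rho_global, sigma):
    """log N(rho_group; rho_global, sigma^2 I) for one word's group embedding."""
    K = rho_global.shape[0]
    diff = rho_group - rho_global
    return -0.5 * (K * np.log(2 * np.pi * sigma**2) + diff @ diff / sigma**2)

rho_global = np.zeros(50)                                   # toy global embedding
near = group_prior_logpdf(rho_global + 0.01, rho_global, sigma=0.1)
far = group_prior_logpdf(rho_global + 1.0, rho_global, sigma=0.1)
assert near > far   # large deviations from the global vector are penalized
```

During training, this log-density enters the objective as the tying term between group and global embeddings; smaller σ penalizes deviation more strongly.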

3.2 Hierarchical Embeddings

The s-EFE model Rudolph et al. (2017) can learn word embeddings only over a flat list of groups, where the degrees of similarity between groups are not captured. However, in a complex real-world scenario, data is often organized into a hierarchy of domains such as a product taxonomy (as shown in Figure 2), where each product, and thereby all its reviews, is categorized into one of the nodes. The hierarchy represents an inherent similarity measure among the domain nodes: nodes that are closer to each other in the hierarchy represent more similar domains than nodes that are further away. This gets more pronounced as we go deeper in the hierarchy; e.g., Electronics/Gaming/XBox has a lot more in common with Electronics/Gaming/Playstation than with Apparel/Shoes/Sneakers. We believe that these similarities among the domains are also reflected in word usage: a word is more likely to convey similar meanings in two closely related domains than in two distant domains in the hierarchy.

We build upon the s-EFE model Rudolph et al. (2017) and extend it to learn word embeddings for data organized into hierarchical domains. We assume this hierarchy or taxonomy to be given, which we believe is a natural assumption in many real-world scenarios. For a node n (e.g. Shoes) in the hierarchy, we denote its parent node as pa(n) (i.e. Apparel). For each word, we learn a separate representation at each node, which captures the word's properties in that particular domain.

In order to incorporate the domain hierarchy in the learned word embeddings and share information between domains along the tree structure, we tie together the embedding vectors of sibling nodes: we assume that, for each node n, the embedding vector ρ_v^{(n)} for word v is drawn from a normal distribution centered at the embedding vector of the word in the parent node of n, i.e.,

    ρ_v^{(n)} ~ N( ρ_v^{(pa(n))}, σ² I )     (4)

where ρ_v^{(pa(n))} is the embedding vector for word v in the parent node of n, and σ is the standard deviation of the normal distribution. The value of σ controls how much an embedding at a node can differ from its parent node. If this value is low, the embeddings have to remain very close to their parent representations and, as a result, significant variation cannot be observed across domains.

For a word v, ρ_v^{(0)} denotes its embedding at level 0 (i.e. the root) of the hierarchy. This can be considered a global embedding of the word that encodes its properties across all domains. For nodes at level 1, the word embeddings are learned conditioned on this global embedding, and similarly, for any level l the embeddings are learned conditioned on their parent node's embeddings at level l−1.

This structure implies that the amount by which the embedding of a word at a node may vary from its global embedding is proportional to the depth of the node in the hierarchy, i.e. how specific or fine-grained the domain is. This captures the intuition that fine-grained domains may have specific word usages that the embedding vectors need to accommodate. Additionally, by allowing the embeddings at a node to deviate only by a limited measure (controlled by σ) from the embedding at its parent node, our model ensures that the inherent similarities between parent-child nodes and between sibling nodes are preserved. On the other hand, for distant nodes, the representation of the same word is able to vary considerably, in proportion to the distance of the two domains in the taxonomy.
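To illustrate the generative assumption, the sketch below draws one word's embedding at every node of a toy taxonomy (the node names are made up for illustration), each vector centered at its parent's. Deeper nodes accumulate more independent Gaussian steps, so their expected deviation from the global embedding grows with depth.

```python
import numpy as np

rng = np.random.default_rng(2)
K, sigma = 50, 0.1   # toy embedding dimension and standard deviation

# toy taxonomy: node -> parent (None marks the global root)
parent = {"Global": None, "Electronics": "Global", "Apparel": "Global",
          "Gaming": "Electronics", "Shoes": "Apparel"}

def sample_embeddings(word_root):
    """Draw one word's embedding at every node, each centered at its parent's."""
    emb = {"Global": word_root}
    # visit nodes in an order where parents come before children
    for node in ["Electronics", "Apparel", "Gaming", "Shoes"]:
        emb[node] = rng.normal(loc=emb[parent[node]], scale=sigma)
    return emb

emb = sample_embeddings(np.zeros(K))
# every node carries a full K-dimensional vector for this word
assert all(v.shape == (K,) for v in emb.values())
```

In expectation, the squared distance of a level-l node's embedding from the root is l·σ²·K, which is exactly the depth-proportional drift described above.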

                      Home   Electronics  Apparel  Camera  Kitchen  Outdoors  Sports  Furniture  Books  Total
#reviews              6.2M   3.1M         5.8M     1.8M    4.8M     2.3M      4.8M    791K       10.2M  40M
#sentences            26.7M  17.9M        21.6M    11.37M  24M      11.5M     21.8M   4.1M       64.1M  203M
#words                277M   214M         201M     140M    268M     131M      233M    44.5M      892M   2.4B
Avg. sentence length  10.37  11.95        9.31     12.31   11.16    11.39     10.68   10.85      13.93  11.81
Table 1: Statistics of the review datasets.

For words that do not change in meaning or are not very frequent in a domain, the domain-specific embedding need not deviate much from the global embedding. This should be true for most words, as not all words exhibit domain-dependent behavior. However, for words that have a different meaning in a subcategory, the embeddings will deviate more strongly from the embeddings at parent nodes in order to reflect the domain meaning. We demonstrate and analyze this effect by investigating such words in Section 4.4.

We maximize the following objective to train the hierarchical model:

    L(ρ, α) = Σ_{l=1}^{L} Σ_{n ∈ level l} [ Σ_{i,v} log p(x_{iv} | x_{c_i}; ρ^{(n)}, α) + Σ_v log p(ρ_v^{(n)} | ρ_v^{(pa(n))}) ] + log p(ρ^{(0)}) + log p(α)     (5)

where L is the maximum depth of the tree, and n ranges over the domains at level l. The objective function sums the log conditional probabilities of each data point (log p(x_{iv} | x_{c_i})), the log conditional probabilities of the embeddings at each domain node (log p(ρ_v^{(n)} | ρ_v^{(pa(n))})), and regularizers for the global embeddings (log p(ρ^{(0)})) and context vectors (log p(α)).

4 Experiments

We consider a large collection of publicly available real-world customer review datasets from Amazon[1]. We select datasets from popular and diverse categories, namely Apparel, Books, Camera, Kitchen, Electronics, Furniture, Outdoors, Home and Sports. The number of reviews per category ranges from 791K (Furniture) to 10.2M (Books), written by customers over a period of two decades. Table 1 shows the overall statistics.

[1] https://s3.amazonaws.com/amazon-reviews-pds/readme.html

We consider these categories as level 1 nodes in a product hierarchy. We further map each review to a finer category corresponding to a child node in the taxonomy (e.g., reviews for the level 1 node Books would be split into review sets for romance books, cook books, etc.). We refer to these finer categories as level 2 nodes[2]. In our experiments we consider this 3-level taxonomy (a global root and two levels of product nodes; see Figure 2 for an example). In principle our model can scale to deeper hierarchies.

[2] We shall release this taxonomy with the paper.

We apply standard pre-processing techniques to the review texts. We consider the most frequent words as the vocabulary for learning embeddings and remove all words that are not part of the vocabulary. Following Mikolov et al. (2013b), we down-sample the most frequent words by removing tokens with probability 1 − √(t/f(w)), where f(w) is the frequency of word w and t is a chosen threshold.
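The sub-sampling step can be sketched as follows, using the standard threshold form from Mikolov et al. (2013b); the threshold t = 1e-5 is the common Word2Vec default, an illustrative assumption, since the value used in the paper is not stated here.

```python
import math
import random

def keep_token(word, freq, threshold=1e-5, rng=random.Random(0)):
    """Down-sample frequent words: drop a token with probability
    1 - sqrt(t / f(w)), where f(w) is the word's relative frequency."""
    f = freq[word]
    keep_prob = min(1.0, math.sqrt(threshold / f))
    return rng.random() < keep_prob

freq = {"the": 0.05, "aperture": 1e-6}
# words rarer than the threshold are always kept
assert keep_token("aperture", freq)
```

Very frequent words like "the" are dropped most of the time (keep probability ≈ 0.014 here), which shortens training and reduces the dominance of stop words in the context sums.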

4.1 Parameter Settings

The depth and width of the taxonomy tree may grow as more diverse and new products are added to the catalogue. Therefore, deciding the maximum depth of a branch at which to learn embeddings becomes an implementation and scalability choice. In our experiments, we recursively merge smaller leaf nodes into their parent until each node contains a minimum number of reviews. The resulting groups across level 1 and level 2 are used to train our hierarchical model. For products tagged with multiple categories, we consider their reviews to be associated with all of those domains.

For all competing methods, we use the same embedding dimension. While training the exponential family based models, we use a fixed context window and a fixed positive-to-negative ratio through random negative sampling. We experiment with multiple values of the standard deviation (σ) and find that a moderate range of values captures domain variations well; we report results with a single σ from this range.

4.2 Quantitative Evaluation

We first evaluate the effectiveness of the proposed hierarchical word embedding model, compared to recent alternatives, at modeling unseen data. We use log-likelihood on held-out data to measure intrinsically how well a model generalizes: the higher the log-likelihood on unseen data, the greater the generalization power of the model. We compare with the following:

Global Embedding (EFE) Rudolph et al. (2016) This model is fit on the whole dataset and learns a single global embedding for a word which does not take the domains into account.

Grouped Embedding (s-EFE) Rudolph et al. (2017) For the grouped model, we merge all reviews under a sub-tree to the level 1 node (e.g. Electronics/PC Accessories, Electronics/Digital Cameras are all merged to Electronics).

Model Val Test
EFE Rudolph et al. (2016) -2.015 -2.016
s-EFE Rudolph et al. (2017) -1.635 -1.656
Hierarchical Embedding -1.416 -1.425
Table 2: Comparison of log-likelihood (LL) results. Higher is better. All comparisons are statistically significant (paired t-test with p < 0.0001).

We divide the dataset into train, validation, and test sets. We learn the models on the train set and tune hyper-parameters on the validation set. For five such random partitions we record the log-likelihood on the validation and test sets. As the datasets are sufficiently large, we do not observe much fluctuation across different partitions. We therefore report the average over the runs across all categories in Table 2.

As we can see from the results, the Hierarchical Embedding approach is able to best fit the held-out data by a significant margin. The Grouped Embedding model is also able to outperform the Global Embeddings. This shows that word usages do vary across domains, rendering a global model insufficient to generalize to unseen data from different domains. By outperforming the Grouped model, the Hierarchical model demonstrates its efficiency in being able to incorporate the domain taxonomy well and the effectiveness of our approach for modeling such hierarchical data.

4.3 Rating Prediction

As embeddings are popularly used as word representations for NLP applications, we evaluate the quality of hierarchical word vectors learned by our proposed approach in the context of downstream tasks. We consider the traditional task of rating prediction from review content. We first pre-train all competing embedding methods on the review dataset and use the trained word vectors in the following neural model for text representation.

Rating Prediction Model: For a review, we consider the associated user id, item id and the review text in order to predict the star rating given by the user for the item. The model (shown in Figure 3) uses these three signals to predict a numeric rating.

Figure 3: Architecture of the neural net model for rating prediction. Layers whose weights are learned as part of the network are shaded in gray.

We embed the user and item ids similar to Neural Collaborative Filtering (NeuMF) He et al. (2017). For the review text, we first encode each word using the different competing pre-trained word embeddings for comparison. Thereafter, two GRU layers Chung et al. (2014) are used to encode the review text from the sequence of word embeddings. The output of the final timestep of the GRU is taken as the encoded representation of the review text. We concatenate the three vectors and feed them through a fully connected layer with ReLU activation to predict the numeric rating. The network is trained with a mean squared error loss through backpropagation.
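As a rough sketch of this architecture, not the authors' implementation: toy dimensions, a single GRU layer instead of two, and random weights standing in for learned and pre-trained parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
K, H, D = 50, 64, 16   # word dim, GRU hidden size, id-embedding dim (toy values)
V, n_users, n_items = 1000, 100, 200

word_emb = rng.normal(scale=0.1, size=(V, K))       # pre-trained vectors go here
user_emb = rng.normal(scale=0.1, size=(n_users, D))
item_emb = rng.normal(scale=0.1, size=(n_items, D))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell (Chung et al., 2014); one layer for brevity."""
    def __init__(self, d_in, d_h, rng, s=0.1):
        self.Wz = rng.normal(scale=s, size=(d_h, d_in))
        self.Uz = rng.normal(scale=s, size=(d_h, d_h))
        self.Wr = rng.normal(scale=s, size=(d_h, d_in))
        self.Ur = rng.normal(scale=s, size=(d_h, d_h))
        self.Wh = rng.normal(scale=s, size=(d_h, d_in))
        self.Uh = rng.normal(scale=s, size=(d_h, d_h))
    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h)       # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h)       # reset gate
        h_new = np.tanh(self.Wh @ x + self.Uh @ (r * h))
        return (1 - z) * h + z * h_new

gru = GRUCell(K, H, rng)
W1 = rng.normal(scale=0.1, size=(32, 2 * D + H))     # fully connected layer
b1 = np.zeros(32)
w2 = rng.normal(scale=0.1, size=32)                  # output weights

def predict_rating(user_id, item_id, token_ids):
    h = np.zeros(H)
    for t in token_ids:                  # encode review text word by word
        h = gru.step(word_emb[t], h)
    feats = np.concatenate([user_emb[user_id], item_emb[item_id], h])
    hidden = np.maximum(W1 @ feats + b1, 0)   # fully connected + ReLU
    return w2 @ hidden                        # numeric rating

r = predict_rating(5, 17, [3, 42, 7, 99])
```

Swapping in different pre-trained `word_emb` tables while keeping the rest fixed is exactly how the competing embeddings are compared in this evaluation.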

Method MAE RMSE
NeuMF (No Text) He et al. (2017) 0.746 1.09
EFE Rudolph et al. (2016) 0.573 0.874
s-EFE Rudolph et al. (2017) 0.565 0.878
GloVe Pennington et al. (2014) 0.566 0.863
Word2Vec - Skip-gram Mikolov et al. (2013a) 0.554 0.854
Word2Vec - CBOW Mikolov et al. (2013a) 0.556 0.847
Hierarchical Embedding 0.526 0.843
Table 3: Comparison of embedding approaches for predicting Star Rating of reviews. Lower is better.
Apparel     Camera     Camera/lenses  Camera/flashes  Books/cookbooks  Books/travel  Kitchen/bakeware
jegging     digital    blurriness     pandigital      pollan           pilgrimage    springfoam
bikini      zoom       focusing       speedlights     foodies          shrines       corningware
headwear    zooming    hsm            jittery         vegetarian       crossings     pizzeria
backless    nikor      focal          nikor           lentil           kauai         basting
danskin     pixel      aperture       jpegs           healthful        regional      tinfoil
sunbathing  cybershot  nightshot      closeups        crocker          geology       molds
necks       washout    af             webcams         tacos            hiker         microwaveable
coolmax     ultrahd    fringing       studio          turnip           ecosystem     muffins
jockey      fringing   monopod        lamps           pastes           guides        browned
Table 4: Top words with the most deviation in each domain reflect salient domain terms. “/” is the level delimiter.

Baselines: We evaluate our hierarchical embedding model against Global Embeddings (EFE) and Grouped Embeddings (s-EFE). We also compare with Word2Vec Mikolov et al. (2013a), both the Continuous Bag of Words (CBOW) and Skip-gram variants trained on our reviews corpus using the implementations in Python's Gensim library Řehůřek and Sojka (2010), and with GloVe Pennington et al. (2014) vectors pre-trained on Wikipedia+Gigaword; these are among the most commonly used word embedding approaches.

We split the dataset into 80-10-10 proportions for train-development-test and report the mean absolute error (MAE) and root mean squared error (RMSE) on the test set, averaged over five runs with random partitions. Table 3 shows the results of using different embeddings for the rating prediction task. All methods that include text perform better than NeuMF, which only uses the user id and item id, showing the benefit of utilizing the textual information. We also observe that both Word2Vec algorithms outperform GloVe embeddings, demonstrating the domain dependence of word meanings, which is not captured by GloVe embeddings pre-trained on Wikipedia and Gigaword. Finally, the proposed Hierarchical Embeddings provide the best text representation and achieve the lowest error. By learning domain-specific word embeddings, the hierarchical model can represent the appropriate usage of a word, leading to better downstream predictions.

4.4 Domain Term Discovery

Discovering key domain-specific terms is an important building block for many downstream applications of review content understanding. For example, while aperture, focus and sharpness are aspects of camera lenses, words such as size, full-sleeve and turtleneck are important to characterize a fashion product. Mining such domain terms can help in fine-grained sentiment analysis, facilitate the study of customer opinions at a detailed, actionable level, and help in extracting accurate feature specifications from product descriptions, improving indexing, product discovery, and many other downstream tasks. However, with a constant influx of new products from diverse domains, it is infeasible to manually curate such terms with sufficient detail and coverage.

Customer reviews can be a rich source of information for discovering domain terminology. They discuss the properties of products or businesses that customers truly care about. We explore the potential of our embedding approach for discovering such words in a data-driven fashion.

In the Hierarchical Embedding model, the embedding of a word in a domain may differ from the one in the parent domain following a distribution. For domain-specific words, we assume that their embeddings differ more from the embeddings at the parent node (or the global embeddings) than those of domain-neutral words, in order to accommodate domain usage. We study our model's ability to discover domain terms by inspecting the words whose embeddings deviate the most from their parent domain.

Table 4 shows the ranked list of top such words for a few domains. We can observe that most of the words with highest embedding deviations in a domain are indeed domain-specific. This shows that the hierarchical embedding approach is able to capture usage variation of words and can help discover salient domain-specific words without requiring any further processing steps. Additionally, synonyms of these words can be explored among their neighboring words in the embedding space to augment the domain term list.
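The deviation-based ranking described above can be sketched as follows; the vocabulary and embeddings are toy random data, and `word7` is deliberately planted as a strongly drifted word for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
V, K = 1000, 50
vocab = [f"word{i}" for i in range(V)]                    # hypothetical vocabulary
rho_parent = rng.normal(scale=0.1, size=(V, K))           # parent-node embeddings
rho_node = rho_parent + rng.normal(scale=0.01, size=(V, K))
rho_node[7] += 1.0        # pretend word 7 drifted strongly in this domain

def top_deviating(rho_node, rho_parent, vocab, k=10):
    """Rank words by how far their node embedding moved from the parent's."""
    dev = np.linalg.norm(rho_node - rho_parent, axis=1)
    order = np.argsort(-dev)[:k]
    return [vocab[i] for i in order]

terms = top_deviating(rho_node, rho_parent, vocab)
assert terms[0] == "word7"   # the planted drifted word ranks first
```

Since the embedding dimensions are aligned across domains by construction, this per-word deviation norm is directly meaningful, which is what makes the term lists in Table 4 a by-product rather than a separate pipeline.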

Domain Levels                #Pairs  Accuracy
Level 1                      81      97.5%
Level 2 (different parents)  491     95.71%
Level 2 (same parent)        491     82.45%
Table 5: Accuracy of detecting the domain, given the most deviating words, for different domain levels.

To quantitatively evaluate how well these words describe a domain, we set up an annotation task on Amazon Mechanical Turk[3]. Given a set of words, we ask MTurk workers to choose, among a pair of domains, the domain the words belong to. We perform the experiment at multiple levels of granularity and select the pair of domains from: (1) two level 1 nodes (e.g. Apparel vs. Electronics), (2) two level 2 nodes from different parents (e.g. Electronics/Homedecor vs. Home/Living room furniture), and (3) two level 2 nodes from the same parent (e.g. Books/Travel vs. Books/Science).

[3] https://www.mturk.com/

Kitchen: razor, dull, cutting, sharpen, sharpness
Outdoor: edges, edge, sharpened, cuts, flakes
Camera: stunningly, combined, compensated, distinctly, combining
Cookware: razor, dull, cutting, knife, knives
Cutlery and knife accessories: razor, effortless, strokes, dull, stab
Cycling: leaves, glass, cuts, hits, stones
Accessories: action, jacket, dust, hat, wind
Flashes: silky, crystal, clear, nuance, perfection
Lenses: yielding, useable, enlarged, pixelation, enlarge
Table 6: Top neighboring words for sharp across different domains.
Electronics: light, lights, dark, green, lit
Outdoor: practice, effort, dark, fill, darker
Camera: sun, shade, darkness, dark, lit
Computer accessories: dark, light, glow, green, eyes
Homedecor: brighter, dimly, brightest, levels, unobtrusive
Cycling: steady, brighter, visible, modes, flashlight
Outdoor clothing: white, purple, dark, actual, dress
Flashes: balanced, spotlight, reflect, artificial, glow
Lenses: darkness, lights, direct, brighter, sun
Table 7: Top neighboring words for bright across different domains.

We evaluate the pairs listed in Table 5 and ask multiple workers to judge each pair. Table 5 shows that, overall, workers could identify the correct domain with high accuracy by looking only at the top deviating words. As anticipated, it is easier to distinguish between more heterogeneous nodes (level 1 nodes, or level 2 nodes with different parents) than between closely related ones (level 2 nodes with the same parent). Analyzing the mistakes made by workers, we find that some of the finer domains are indeed harder to distinguish from word usage alone, as they are similar in nature (e.g. Kitchen/Bakeware vs. Kitchen/utilities) or overlapping (e.g. Electronics/Computer accessories vs. Electronics/Headphones).

4.5 Qualitative Evaluation

Finally, we qualitatively analyze whether the semantic space learned by our model is interpretable to humans for distinguishing word usages across domains. In the embedding space, the nearest neighbors of a word are its most semantically similar words. Tables 6 and 7 show the most similar words (by Euclidean distance between word vectors) for the sample words sharp and bright, respectively, across a few domains.
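As an illustration of how such neighbor lists can be produced, here is a small sketch that ranks neighbors by Euclidean distance within one domain's embedding space; the vectors are random and the word names hypothetical, with one neighbor planted close to the query for a deterministic result.

```python
import numpy as np

rng = np.random.default_rng(5)
V, K = 1000, 50
vocab = {f"word{i}": i for i in range(V)}                 # hypothetical vocabulary
rho = rng.normal(size=(V, K))                             # one domain's embeddings
rho[vocab["word42"]] = rho[vocab["word7"]] + 0.01         # plant a close neighbor

def nearest_neighbors(query, rho, vocab, k=5):
    """k nearest words to `query` by Euclidean distance in this domain's space."""
    inv = {i: w for w, i in vocab.items()}
    q = rho[vocab[query]]
    dists = np.linalg.norm(rho - q, axis=1)
    order = np.argsort(dists)
    return [inv[i] for i in order if inv[i] != query][:k]

assert nearest_neighbors("word7", rho, vocab)[0] == "word42"
```

Because the context vectors are shared across domains, running this same query against different domains' ρ matrices yields directly comparable neighbor lists, as in Tables 6 and 7.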

Table 6 shows that the similar words for sharp vary widely across domains. For products in the Kitchen domain, words related to sharp are knives, sharpen, dull, razor, etc., whereas for products related to Outdoor sports and gear, sharp is often used in the context of sharp wind, sharp edges, or sharp pain. In the context of Camera, sharp mostly refers to image quality. We can observe that the hierarchical model captures nuances even at the finer domain level of Camera/Flashes vs. Camera/Lenses. In reviews of Camera Flashes, people describe the quality of the flash light in producing a clear, well-illuminated image, hence clear, silky, and perfection are its most similar words. In reviews of Camera Lenses, people use sharp in the context of image resolution, describing the effects of enlargement and pixelation of the picture, hence related terms are captured.

Similarly, the usage variation of the word bright is captured (Table 7). In Electronics products, bright describes screen displays or LED lights on different devices. For Outdoor sports such as cycling it is used in the context of visibility, but for Outdoor Clothing it describes the color of a clothing item. In reviews of Camera Flashes, people refer to the artificial brightness created by the camera flash, whereas in reviews of Camera Lenses they usually discuss the capability of the lens to capture images under a bright sun or in low-light conditions. The neighborhoods of bright reflect this usage variation across domains.
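Because the embedding dimensions are aligned across domains, a word's vectors from two different domains live in the same space and can be compared directly. A minimal sketch of quantifying such a usage shift via cosine distance, using invented 3-D vectors for the word sharp in two hypothetical domains (these numbers are illustrative, not learned by the model):

```python
import numpy as np

def domain_shift(word, vocab, emb_a, emb_b):
    """Cosine distance between a word's vectors in two domains.

    Assumes the rows of emb_a and emb_b are aligned to the same
    shared vocabulary and dimension space.
    """
    i = vocab.index(word)
    a, b = emb_a[i], emb_b[i]
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - cos  # 0 = identical direction, larger = bigger shift

# Hypothetical vectors for "sharp" in a Kitchen vs. a Camera domain.
vocab = ["sharp"]
kitchen = np.array([[0.9, 0.1, 0.0]])
camera = np.array([[0.1, 0.9, 0.2]])
print(round(domain_shift("sharp", vocab, kitchen, camera), 3))
```

Ranking all words by this distance between a pair of domains would surface the vocabulary whose usage drifts most between them.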

5 Conclusion

We have studied word embeddings to address varying word usages across domains. We propose a hierarchical embedding model that uses probabilistic modeling to learn domain-specific word representations, leveraging the inherent hierarchical structure of the data, such as reviews in an e-commerce site following a product taxonomy. Our principled approach enables the learned embeddings to capture domain similarities, and since the embedding dimensions are aligned across domains, it facilitates interesting studies of semantic shifts in word usage. On large real-world product review datasets, we show that our nuanced representations (1) provide a better intrinsic fit for the data, (2) improve over state-of-the-art approaches in a downstream rating prediction task, and (3) are intuitively meaningful to humans, opening up avenues for future exploration of aspect discovery.
