Dual Attention Network for Product Compatibility and Function Satisfiability Analysis

12/06/2017 · Hu Xu, et al. · Lehigh University, University of Illinois at Chicago

Product compatibility and functionality are of utmost importance to customers when they purchase products, and to sellers and manufacturers when they sell products. Due to the huge number of products available online, it is infeasible to enumerate and test the compatibility and functionality of every product. In this paper, we address two closely related problems: product compatibility analysis and function satisfiability analysis, where the second problem is a generalization of the first (e.g., whether a product works with another product can be considered a special function). We first identify a novel question-answering corpus that is up-to-date regarding product compatibility and functionality information. To allow automatic discovery of product compatibility and functionality, we then propose a deep learning model called Dual Attention Network (DAN). Given a QA pair for a to-be-purchased product, DAN learns to 1) discover complementary products (or functions), and 2) accurately predict the actual compatibility (or satisfiability) of the discovered products (or functions). The challenges addressed by the model include the briefness of QAs, linguistic patterns indicating compatibility, and the appropriate fusion of questions and answers. We conduct experiments to quantitatively and qualitatively show that the identified products and functions have both high coverage and accuracy, compared with a wide spectrum of baselines.

Introduction

Microsoft Surface Pro 4 (128 GB, 4 GB RAM, Intel Core i5)
Q: Can the M processor handle photoshop?
A: It does run Photoshop very well on our internal test unit.
Q: Does the surface pro 4 support the Google Play app store?
A: No, it does not support Google Play.
Q: can this run fallout 4
A: It struggles graphically with Fallout 4 on low graphics settings.
Q: Does this connect to a 5G home wireless network?
A: We connect to a Comcast high speed wireless router for Internet access and it seems to work well. I can not speak to the specific network in the question.
Q: Can you use this for sketching?
A: To some extent. It depends upon the degree to which you intend to create.
Table 1: An example of QA pairs for a tablet PC: function targets (the first 4 are complementary entities) are underlined in questions; function words are bolded; keywords indicating compatibility or satisfiability are italicized.

Learning about the compatibility of a product that is functionally complementary to a to-be-purchased product is an important task in e-commerce. For customers, before they purchase a product (e.g., a mouse), it is natural for them to ask whether the to-be-purchased one can work properly with the intended complementary product (e.g., a laptop). Such a query is driven by customers’ needs on product functionality, where compatibility can be viewed as a special group of functions. In fact, a function need is the very first step of the purchase decision process. Whether a product can satisfy some function need or not (a.k.a. the satisfiability of a function need) even leads to the definition of a product. In marketing, a product is defined as “anything that can be offered to a market for attention, acquisition, use or consumption that might satisfy a want or need” [Kotler and Armstrong2010]. For sellers or manufacturers, satisfiability of function needs is equally important, as being fully aware of existing and missing functions is crucial to increasing sales and improving products. Therefore, exchanging information about functions is important to customers, sellers, and manufacturers.

Given its importance, however, such function (we omit “need” for simplicity) information is not fully available in product descriptions. Just imagine the cost of compatibility tests over the huge number of products, or sellers’ intention of hiding missing functions. Fortunately, customers occasionally exchange such knowledge online with other customers or sellers. This allows us to adopt an NLP-based approach to automatically sense and harvest this knowledge on a large scale.

In this paper, we address two closely related novel NLP tasks: product compatibility analysis and product function satisfiability analysis. They are defined as follows.
Product Compatibility Analysis: Given a corpus of texts, identify all tuples $(e_1, e_2, y)$, where $e_1$ and $e_2$ are a pair of complementary products (entities), and $y$ indicates whether the two entities are compatible (1), incompatible (2), or uncertain (3).
Note that two complementary entities can be incompatible. For example, a mouse is a product functionally complementary to “Microsoft Surface Pro 4”, as the Surface Pro 4 does not come with a mouse. However, due to different interfaces, not all mouse models can work with the Surface Pro 4 properly. By slightly extending an entity to a function expression (e.g., identifying “work with Microsoft Surface Pro 4” instead of “Microsoft Surface Pro 4”), we obtain a more general task.
Product Function Satisfiability Analysis: Given the same corpus, identify all tuples $(e, f, y)$, where $e$ is a product, $f$ is a function expression, and $y$ indicates whether $e$ satisfies the function (1), does not satisfy it (2), or is uncertain (3).
Note that functions derived from complementary entities are just one type of function, called extrinsic functions. Functions may also be derived from the product itself, called intrinsic functions. For example, “draw a picture” is an intrinsic function for “Microsoft Surface Pro 4”. A function expression may consist of a function word (e.g., “work with” or “draw”) and a function target (e.g., “Microsoft Surface Pro 4” or “picture”). So a function target is a generalization of a complementary entity.

Two challenges arise immediately once we formalize these two tasks. First, the quality of the tuples depends on the data source (namely, the corpus); corpora that are dense and accurate regarding compatibility and functionality information are preferred. Second, although general information extraction models are available, novel models that can jointly identify the entities (or function expressions) and compatibility (or satisfiability) in an end-to-end manner are preferred.

We address the first challenge by annotating a high-quality corpus. In particular, Amazon.com allows potential consumers to communicate with existing product owners or sellers via Product Community Question Answering (PCQA). As an example, in Table 1 we show 5 QA pairs addressing the functionality of the Microsoft Surface Pro 4. We can see that the first 4 questions address extrinsic functions (on complementary entities) and the last one addresses an intrinsic function. Specifically, “photoshop” is a compatible entity, “Google Play app store” and “fallout 4” are incompatible entities, “5G home wireless network” is a complementary entity with uncertain compatibility, and “use for sketching” is an uncertain function. Observe that the to-be-purchased product can be identified from the title of the product page. We focus on extracting complementary entities (or function expressions) from the question and detecting compatibility (or satisfiability) from the answers. We leave infrequently-asked open questions (e.g., “what products can this tablet work with?”) to future work and only focus on yes/no questions. The details of the annotated corpus can be found in Section Experimental Result.

Given the corpus, we address the second challenge by formulating these two tasks as a sequence labeling problem that fuses the information from both the questions and the answers. We propose a model called Dual Attention Network (DAN) to solve this sequence labeling problem. DAN addresses two technical challenges. First, the questions and answers are usually brief (the longest question has only 82 words), with rather limited context. Second, the polarity information in many answers is implicit (without an explicit “Yes” or “No” at the very beginning, e.g., the 1st and 3rd answers in Table 1). DAN resolves these challenges by taking the question and the answer together as a QA story (or document) and performing two reading comprehensions [Richardson, Burges, and Renshaw2013, Rajpurkar et al.2016] over such a story to obtain side information for sequence labeling. For example, it may not be obvious that “photoshop” is a complementary entity by reading the question only. However, the word “run” in the answer is a strong indicator that “photoshop” is a complementary entity, and the word “well” indicates a positive polarity. We conduct experiments quantitatively and qualitatively to show the performance of DAN. The proposed dual attention architecture is not limited to the proposed tasks, but may potentially be applied to other QA tasks.

Related Work

Complementary products have been studied in the context of recommender systems [McAuley, Pandey, and Leskovec2015], where topic models are used to predict substitutes and complements (without compatibility) of a product. However, that work takes the outputs of Amazon’s recommender system as the ground truth for complements, which can be inaccurate. Instead, we take an information extraction approach similar to the Complementary Entity Recognition (CER) task in [Xu et al.2016b, Xu, Shu, and Yu2017], but we work in a supervised setting on an annotated QA corpus [McAuley and Yang2016, Xu et al.2016a]. We further generalize to a more fundamental task: function satisfiability analysis. Although [Xu et al.2017] presents a preliminary study of product functions, to the best of our knowledge, this is the first work to study a fully end-to-end model with satisfiability analysis.

Deep neural networks [LeCun, Bengio, and Hinton2015, Goodfellow, Bengio, and Courville2016] have drawn attention in the past few years due to their impressive performance on NLP tasks. Long Short-Term Memory (LSTM) [Hochreiter and Schmidhuber1997] has been shown to achieve state-of-the-art results on many NLP tasks [Greff et al.2015, Lample et al.2016, Tan, Xiang, and Zhou2015, Nassif, Mohtarami, and Glass2016, Wang and Nyberg2015]. The attention mechanism [Larochelle and Hinton2010, Denil et al.2012] is effective in NLP tasks such as machine translation [Bahdanau, Cho, and Bengio2014], sentence summarization [Rush, Chopra, and Weston2015], sentiment analysis [Tang, Qin, and Liu2016], question answering [Li et al.2016], and reading comprehension [Kumar et al.2015, Xiong, Zhong, and Socher2016]. There are also studies of neural sequence labeling [Lample et al.2016, Ma and Hovy2016]. However, traditional sequence labeling takes a single sequence as the to-be-labeled input. The proposed task naturally has two inputs: the question with the to-be-labeled tokens and the answer with the polarity. Inspired by the task of reading comprehension, we take one more step to fuse the question and the answer together as a story and perform reading comprehension on such a QA story. Instead of learning question-aware representations of the story as in reading comprehension, we learn question-aware (or answer-aware) representations of the QA story as side information to enrich the representation of the question (or the answer, respectively).

Model Overview and Preliminary

Formally, we define the two-input sequence labeling problem as follows. Given a QA pair $(\mathbf{q}, \mathbf{a})$ (we use bold symbols to indicate sequences), we label each word in the question to obtain a label sequence $\mathbf{y} = (y_1, \dots, y_n)$, where $y_i \in Y$ and $n$ is the length of the question. Here $Y$ is the label space, and the two proposed tasks differ only in the label space $Y$. For product compatibility analysis, the label space is {O, C, IN, U}, indicating Other (non-entity) words and Compatible, INcompatible, and Uncertain entity words. For product function satisfiability analysis, the label space is {O, T-S, T-UN, T-U, F-S, F-UN, F-U}, indicating Other (non-function) words; Satisfiable, UNsatisfiable, and Uncertain function target words; and Satisfiable, UNsatisfiable, and Uncertain function words.
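As a concrete sketch consistent with the example in Figure 1 (the exact tag symbols are our reading of the figure caption, not necessarily the paper's), the two label spaces and one labeled question might look like:

```python
# Label spaces for the two tasks (tag names are assumptions consistent with Figure 1).
COMPAT_LABELS = ["O", "C", "IN", "U"]          # Other, Compatible, INcompatible, Uncertain
SATIS_LABELS = ["O",                           # Other (non-function) words
                "T-S", "T-UN", "T-U",          # function target: Satisfiable / UNsatisfiable / Uncertain
                "F-S", "F-UN", "F-U"]          # function word:   Satisfiable / UNsatisfiable / Uncertain

# Example labeling for the question "Works with iphone ?" under the satisfiability task:
question = ["Works", "with", "iphone", "?"]
labels = ["F-S", "F-S", "T-S", "O"]
```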

Figure 1: Dual Attention Network: given a question “Works with iphone ?” and its answer “Yes, it is”, the question can be labeled as F-S F-S T-S O for satisfiability analysis (or O O C O for compatibility analysis). “Works with iphone” is labeled as a satisfiable function expression, where “iphone” is a Satisfiable function target (T-S) (or a Compatible entity) and “Works with” is labeled as satisfiable function words (F-S). The details of the question attention (and similarly the answer attention) are shown in Figure 2.

The proposed network is illustrated in Figure 1. The question and the answer are first concatenated to form a QA story. Then the QA pair and the story are passed into a shared embedding layer (not shown in the figure), followed by three respective BLSTM (Bidirectional Long Short-Term Memory [Hochreiter and Schmidhuber1997, Schuster and Paliwal1997]) Context Layers to obtain contextual representations, so that the vector representation at each position is encoded with the information from nearby words. The contextual representation of the QA story is attended (read) by the contextual representations of the question and the answer, respectively. This is done via two separate attention modules. The attention process can be viewed as both the question and the answer reading the QA story to form their corresponding side information. The side information is then concatenated with the original contextual representation for the question and the answer, respectively; we call the results the QA-augmented question and answer. Later, we pass the QA-augmented question and answer representations to the Question Context 2 layer and the Answer Context 2 layer, respectively. These second context layers learn representations encoding both the original contextual representations and the QA story. Note that we only learn a single vector from the Answer Context 2 layer as the polarity vector, since we need a single vector to represent the polarity of the whole answer sequence. Lastly, the polarity vector is duplicated $n$ times, each copy is concatenated to the representation of each word in the question, and the label sequence is output via a dense+softmax layer shared for each word in the question. Thus, both the question and the answer help to decide the output labels in an end-to-end manner.

Note that more complicated deep architectures for sequence labeling can be leveraged (e.g., modeling label dependencies using a CRF, or learning better features from character-level embeddings as in LSTM-CRF [Lample et al.2016] or LSTM-CNNs-CRF [Ma and Hovy2016]); here we mainly focus on how to leverage a QA story to provide side information and perform sequence labeling. Next, we briefly introduce preliminary layers that are common in most NLP models.

Input Layers Let the sequences $\mathbf{q} = (q_1, \dots, q_n)$ and $\mathbf{a} = (a_1, \dots, a_m)$ denote the question and the answer, respectively, where $m$ denotes the length of the answer. A question (or an answer) may contain multiple sentences, and we simply concatenate them into a single sequence. We set $n = 82$ as the maximum number of words in any question; since an answer can be longer than 2,000 words, we simply make the answer the same length as the question ($m = n$) by removing words beyond the first 82 words. Intuitively, the beginning of an answer is more informative.

We transform $\mathbf{q}$ (and $\mathbf{a}$ and the QA story, resp.) into an embedded representation via a word embedding matrix $E$, where $d$ is the dimension (we set it to 300) of the word vectors. We pre-train the word embedding matrix via the fasttext model [Bojanowski et al.2016] and fine-tune the embeddings when optimizing the proposed model. The fasttext model allows us to obtain embeddings for out-of-vocabulary words (which are common in product QAs) from character n-gram embeddings. The pre-training is discussed in Section Experimental Result.
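Below is a minimal sketch of building the embedding matrix from a pre-trained fasttext model; the file name, toy vocabulary, and use of the `fasttext` Python package are assumptions for illustration, not the authors' code.

```python
# Sketch: initialize a 300-dim embedding matrix from pre-trained fasttext vectors.
import numpy as np
import fasttext

ft = fasttext.load_model("ft_model.bin")             # assumed path to the pre-trained model
vocab = ["<pad>", "works", "with", "iphone", "?"]    # toy vocabulary for illustration

dim = ft.get_dimension()                             # 300 in the paper's setting
E = np.zeros((len(vocab), dim), dtype=np.float32)
for idx, word in enumerate(vocab[1:], start=1):
    # fasttext composes character n-gram vectors, so OOV words still get an embedding
    E[idx] = ft.get_word_vector(word)
```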

BLSTM Context Layers The embedded word sequences of the question, the answer, and the QA story are fed into the Question Context 1 layer, the Answer Context 1 layer, and the QA Story Context layer, respectively. BLSTM [Hochreiter and Schmidhuber1997, Schuster and Paliwal1997] is an important variant of RNN due to its ability to model long-term dependencies and contexts in both the forward and backward directions of a sequence. The key component of an LSTM unit is the memory cell, which avoids overwriting the hidden state at every time step. An LSTM unit decides to update the memory cell via input, forget, and output gates. We set all the output dimensions of the BLSTM layers to 128. We omit the details of the update mechanism; interested readers may refer to [Hochreiter and Schmidhuber1997] for details. Note that other variants of RNN such as GRU [Chung et al.2014] can also be used; here we mainly focus on how the attention mechanism can help improve the performance. After passing the question and the answer embeddings through these BLSTM layers, we have the hidden representations $\mathbf{h}^q = (h^q_1, \dots, h^q_n)$, $\mathbf{h}^a = (h^a_1, \dots, h^a_n)$, and $\mathbf{h}^s = (h^s_1, \dots, h^s_{2n})$ for the question, the answer, and the QA story, respectively.
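A minimal PyTorch sketch of one such context layer follows; the per-direction hidden size of 64 (so that the concatenated output is 128) is an assumption, since the paper only states the total output dimension.

```python
# Sketch: a BLSTM context layer shared in structure by the question, answer, and QA story.
import torch
import torch.nn as nn

class ContextLayer(nn.Module):
    def __init__(self, emb_dim=300, hidden=64):
        super().__init__()
        self.blstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, x):          # x: (batch, seq_len, emb_dim)
        out, _ = self.blstm(x)     # out: (batch, seq_len, 2 * hidden) = (batch, seq_len, 128)
        return out

q_ctx, a_ctx, s_ctx = ContextLayer(), ContextLayer(), ContextLayer()
# h_q = q_ctx(q_emb); h_a = a_ctx(a_emb); h_s = s_ctx(s_emb)
```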

Dual Attention Network

Attention-based Module

Next, we leverage the attention mechanism to allow both the question and the answer to enrich their representations. The attention mechanism [Larochelle and Hinton2010, Denil et al.2012] has become popular in recent years due to its capability of modeling variable-length memories rather than a fixed-length memory. We utilize attention to synthesize side information from the QA story. We introduce two attention mechanisms: question attention and answer attention. By reading the QA story, both obtain side information from the fact that the question and the answer in a QA pair are connected. Intuitively, the words in the answer depend on the question, and the question can help infer compatibility information from the answer. From our experience annotating the dataset, we find that some entities are hard to label by reading only the question or only the answer. However, if we read the question and the answer as a whole, we more often get the idea of what the QA pair discusses. For example, a question like “straight talk?” for a cell phone can be hard to label. If we have an answer “yes, it works with straight talk well.”, we can infer that “straight talk” should be a carrier. Similarly, the “straight talk” indicated by the question also helps us to identify the polarity word “well” in the answer. This is very important in identifying implicit polarities in the answer. We mimic this procedure of human reading comprehension using the following attention mechanism.

Figure 2: Question Attention: each word in the question attends on each word in the QA story.

Let the output from the QA Story Context layer be $\mathbf{h}^s = (h^s_1, \dots, h^s_{2n})$. For the question (answer) attention, we obtain the attention weight $\alpha_{ij}$ for the $i$-th question (answer) word when reading the $j$-th word in the QA story via a dot product, further normalized by a softmax function:

$$\alpha_{ij} = \frac{\exp(h^q_i \cdot h^s_j)}{\sum_{j'=1}^{2n} \exp(h^q_i \cdot h^s_{j'})} \qquad (1)$$

Then the contextual (side) information for the $i$-th word in the question (or answer) is the weighted sum over all words in the QA story:

$$c^q_i = \sum_{j=1}^{2n} \alpha_{ij} h^s_j \qquad (2)$$

We concatenate $c^q_i$ with $h^q_i$ as the representation of the $i$-th word in the question (answer): $\tilde{q}_i = [h^q_i; c^q_i]$. Similarly, another answer attention module yields a sequence of answer word representations, where the $j$-th word in the answer is denoted as $\tilde{a}_j = [h^a_j; c^a_j]$.
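A minimal sketch of this attention step (Eqs. (1)-(2)) in PyTorch is shown below; the tensor shapes are assumptions, and the same function would be reused for the answer attention.

```python
# Sketch: each question word attends over all QA-story words via dot-product attention.
import torch
import torch.nn.functional as F

def story_attention(h_q, h_s):
    """h_q: (batch, n, 128) question context; h_s: (batch, 2n, 128) QA-story context."""
    scores = torch.bmm(h_q, h_s.transpose(1, 2))   # (batch, n, 2n) dot products
    alpha = F.softmax(scores, dim=-1)              # Eq. (1): normalize over story positions
    side = torch.bmm(alpha, h_s)                   # Eq. (2): weighted sum over the story
    return torch.cat([h_q, side], dim=-1)          # QA-augmented representation, (batch, n, 256)
```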

Stacked Structures

Next, $\tilde{\mathbf{q}} = (\tilde{q}_1, \dots, \tilde{q}_n)$ and $\tilde{\mathbf{a}} = (\tilde{a}_1, \dots, \tilde{a}_n)$ are fed into the Question Context 2 layer and the Answer Context 2 layer, respectively, which are similar to a stacked BLSTM [El Hihi and Bengio1995], to obtain better representations of the sequences. We utilize two structures of BLSTM: a many-to-many structure on the question and a many-to-one structure for learning the answer representation, since we care about the answer polarity more than word-by-word representations. The answer representation is the concatenation of the last output of the forward LSTM and the first output of the backward LSTM. Finally, we have $\mathbf{u} = (u_1, \dots, u_n)$ for the question sequence and $p$ for the polarity representation of the whole answer.
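The many-to-one answer branch could be sketched as follows; the exact wiring of the forward-last and backward-first states is our assumption about how the single polarity vector is formed.

```python
# Sketch: Answer Context 2 layer producing the polarity vector p from the augmented answer.
import torch
import torch.nn as nn

class AnswerContext2(nn.Module):
    def __init__(self, in_dim=256, hidden=64):
        super().__init__()
        self.blstm = nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, a_aug):                      # a_aug: (batch, n, 256)
        out, _ = self.blstm(a_aug)                 # out: (batch, n, 128)
        fwd_last = out[:, -1, :64]                 # last step of the forward direction
        bwd_first = out[:, 0, 64:]                 # first step of the backward direction
        return torch.cat([fwd_last, bwd_first], dim=-1)   # p: (batch, 128)
```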

Joint Model

Now we form the joint model in an end-to-end manner by merging the question branch and the answer branch into the prediction $\hat{\mathbf{y}}$, so that the label of each word in the question can drive both the question and answer branches to learn better representations. To match the output length of the answer branch to that of the question branch, we make $n$ copies of $p$ and concatenate each copy with the output of the question branch at each word position. Then we reduce the dimension of each concatenated output to $|Y|$ via a fully-connected layer with weights $W$ and bias $b$ shared among all positions of the question:

$$z_i = W [u_i; p] + b \qquad (3)$$

where $z_i$ is the representation of the $i$-th position in the question. We output the probability distribution over the label space $Y$ for the $i$-th question word via a softmax function:

$$\hat{y}_i = \mathrm{softmax}(z_i; \theta) \qquad (4)$$

where $\theta$ represents all trainable parameters.
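A minimal sketch of Eqs. (3)-(4) follows; the dimensions are assumptions carried over from the earlier sketches.

```python
# Sketch: broadcast the polarity vector to every question position, then a shared dense + softmax.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointHead(nn.Module):
    def __init__(self, q_dim=128, p_dim=128, num_labels=7):
        super().__init__()
        self.fc = nn.Linear(q_dim + p_dim, num_labels)       # W, b shared across positions

    def forward(self, u, p):                                 # u: (batch, n, q_dim), p: (batch, p_dim)
        p_rep = p.unsqueeze(1).expand(-1, u.size(1), -1)     # n copies of p
        z = self.fc(torch.cat([u, p_rep], dim=-1))           # Eq. (3): (batch, n, |Y|)
        return F.softmax(z, dim=-1)                          # Eq. (4): per-word label distribution
```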

Finally, we optimize the cross-entropy loss function:

$$L(\theta) = -\sum_{m \in D} \sum_{i=1}^{n} \sum_{l \in Y} y^{(m)}_{i,l} \log \hat{y}^{(m)}_{i,l} \qquad (5)$$

where $D$ represents all the training examples and $y^{(m)}_{i,l}$ is the ground truth for the $i$-th question word and label $l$ in the $m$-th training example, so $y^{(m)}_i$ is a one-hot vector. We leverage the Adam optimizer [Kingma and Ba2014] to optimize this loss function, set the learning rate to 0.001, and keep the other parameters the same as in the original paper. We set the dropout rate to 0.1. The batch size is set to 128.
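A sketch of this training configuration is below; `DANModel` and `train_loader` are placeholder names (not from the paper), and the head is assumed to return pre-softmax scores so that the standard cross-entropy loss applies.

```python
# Sketch: Adam with lr=0.001, batch size 128, cross-entropy over per-word labels.
import torch
import torch.nn as nn

model = DANModel()                                   # hypothetical assembled DAN (dropout rate 0.1)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
criterion = nn.CrossEntropyLoss()

for batch in train_loader:                           # batches of 128 QA pairs
    optimizer.zero_grad()
    logits = model(batch["question"], batch["answer"])   # (batch, n, |Y|) pre-softmax scores
    loss = criterion(logits.flatten(0, 1), batch["labels"].flatten())
    loss.backward()
    optimizer.step()
```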

During testing, the prediction for each position in the question is computed as:

$$\hat{l}_i = \arg\max_{l \in Y} \hat{y}_{i,l} \qquad (6)$$

Lastly, for function satisfiability analysis, we extract function words and function targets with polarities over the label space {O, T-S, T-UN, T-U, F-S, F-UN, F-U}; for compatibility analysis, we extract complementary entities with polarities over the label space {O, C, IN, U}.
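As a minimal sketch of this last step (the tag names are the assumed set above, and the grouping heuristic is ours), consecutive non-O predictions can be merged into spans whose polarity is the majority tag:

```python
# Sketch: decode a predicted label sequence into extracted spans with a majority-vote polarity.
from collections import Counter

def extract_spans(tokens, labels, target_tags=("C", "IN", "U")):
    spans, cur_toks, cur_tags = [], [], []
    for tok, lab in zip(tokens + [""], labels + ["O"]):   # sentinel flushes the last span
        if lab in target_tags:
            cur_toks.append(tok)
            cur_tags.append(lab)
        elif cur_toks:
            polarity = Counter(cur_tags).most_common(1)[0][0]
            spans.append((" ".join(cur_toks), polarity))
            cur_toks, cur_tags = [], []
    return spans

# extract_spans(["can", "it", "run", "fallout", "4"], ["O", "O", "O", "IN", "IN"])
# -> [("fallout 4", "IN")]
```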

Experimental Result

In this section, we discuss the details of the annotated corpus, and experimentally demonstrate the superior performance of DAN.

Corpus Annotation and Analysis

Product  QAs  % with Fun.  Intr. Fun.  Extr. Fun.
DSLR 319 27.9 57 32
E-Reader 270 44.07 48 71
Speaker 153 38.56 14 45
Tablet 300 41.67 31 94
Cellphone 1 168 60.12 6 95
Cellphone 2 321 43.61 25 115
Laptop 1 289 25.26 42 31
Laptop 2 404 58.17 56 179
Netbook 194 46.91 15 76
TV 291 49.48 21 123
TV Console 167 50.9 7 78
Gaming Console 204 75.49 14 140
Apple Watch 324 37.65 55 67
VR Headset 438 76.94 12 325
Stylus 260 72.69 13 176
Micro SD Card 281 81.85 1 229
Mouse 254 74.41 30 159
Tablet Stand 212 88.68 3 185
Table 2: Function statistics of 18 selected labeled products. QAs: number of QA pairs; % with Fun.: percentage of QA pairs containing function needs; Intr. Fun.: QAs about intrinsic functions; Extr. Fun.: QAs about extrinsic functions.

We crawled about 1 million QA pairs from the web pages of products in the electronics department of Amazon. These 1 million QAs combined with all electronics customer reviews [McAuley, Pandey, and Leskovec2015] are used to train word embeddings. We use customer reviews because the texts in QAs are too short to train good quality embeddings. The combined corpus is 4 GB.
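A minimal sketch of this pre-training step, assuming the `fasttext` Python package and a placeholder corpus file name (one tokenized sentence per line):

```python
# Sketch: pre-train 300-dim fasttext embeddings on the combined QA + review corpus.
import fasttext

ft = fasttext.train_unsupervised("qa_and_reviews.txt", model="skipgram", dim=300)
ft.save_model("ft_model.bin")   # later loaded to initialize the shared embedding layer
```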

We select 42 products with 7969 QA pairs in total as the to-be-annotated corpus. The corpus is labeled by 3 annotators independently. The general annotation guidelines are as follows:

  1. only yes/no QAs should be labeled;

  2. a function expression should be labeled as either intrinsic function or extrinsic function;

  3. each function expression is labeled with separate function words and function targets;

  4. function words are verbs and prepositions around the function targets;

  5. function targets are token spans containing nouns, adjectives, or model numbers;

  6. abstract entities such as “picture”, “video”, etc. are considered as function targets for intrinsic functions;

  7. specific entities are considered as complementary entities (function targets for extrinsic functions). They are not limited to products from Amazon, but also include general entities like “phone”, or service providers like “AT&T”;

  8. implicit yes/no answers should also be labeled to increase the recall rate;

  9. implicit answers without direct experience on the target product are labeled as uncertain answers (e.g., “I am not sure but it works for my android phone.”).

All annotators initially agreed on their annotations for 83% of all QA pairs. Disagreements were then resolved to reach final consensus annotations.

Product Satis. Unsatis. Uncertain #Desc.
DSLR 62 9 18 7
E-Reader 66 35 18 8
Speaker 35 16 8 7
Tablet 76 22 27 7
Cellphone 1 88 2 11 6
Cellphone 2 63 47 30 39
Laptop 1 59 3 11 4
Laptop 2 98 88 49 5
Netbook 59 12 20 11
TV 84 38 22 21
TV Console 46 27 12 21
Gaming Console 78 29 47 19
Apple Watch 70 40 12 8
VR Headset 106 200 31 8
Stylus 97 24 68 13
Micro SD Card 168 16 46 9
Mouse 106 36 47 5
Tablet Stand 95 30 63 55
Table 3: Statistics of 18 selected labeled products on satisfiability. Satis.: satisfiable function needs; Unsatis.: unsatisfiable function needs; Uncertain: uncertain function needs; #Desc.: number of compatible products mentioned in the corresponding product descriptions.

Due to limited space, the statistics of 18 selected products with annotations are shown in Tables 2 and 3. We can see that the majority of functions are extrinsic functions, indicating the importance of product compatibility analysis. This matches common sense, as complementary entities can be unlimited, whereas intrinsic functions are usually limited. We observe that accessories (the last 5 products) have a higher percentage of functionality-related questions than main products (the first 13 products). This is as expected, since accessories are poorly described in product descriptions and usually have many complementary entities. From Table 3, we can see that the polarity distribution is uneven: most products have more satisfiable functions than unsatisfiable or uncertain ones. This is because customers are more likely to ask a question to confirm functionality before purchasing, and many unsatisfiable functions are thus identified in advance without asking a question. The only exception is the relatively new VR headset, which has many unsatisfiable functions due to its short time on the market.

We further investigate product descriptions of these 18 products and count the number of compatible products mentioned, as shown in the last column of Table 3. Interestingly, no incompatible entities can be found, justifying the need for compatibility analysis on incompatible products from user-generated data.

The corpus is preprocessed using Stanford CoreNLP (http://stanfordnlp.github.io/CoreNLP/) for sentence segmentation, tokenization, POS tagging, lemmatization, and dependency parsing. The last 3 steps provide features for the Conditional Random Fields (CRF) [Lafferty, McCallum, and Pereira2001] baseline. We shuffle all QA pairs and select 70% of them for training, 10% for validation, and 20% for testing.
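A minimal sketch of the split (the random seed and in-memory representation are assumptions):

```python
# Sketch: shuffle the annotated QA pairs and split 70/10/20 into train/validation/test.
import random

random.seed(42)                       # assumed; the paper does not specify a seed
random.shuffle(qa_pairs)              # qa_pairs: list of annotated QA pairs
n_total = len(qa_pairs)
n_train, n_val = int(0.7 * n_total), int(0.1 * n_total)
train = qa_pairs[:n_train]
val = qa_pairs[n_train:n_train + n_val]
test = qa_pairs[n_train + n_val:]
```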

Baselines

We compare DAN with the following baselines.
CRF: This baseline shows that a traditional sequence labeling model performs poorly. Note that CRF [Lafferty, McCallum, and Pereira2001] can only be evaluated on extraction without polarity detection, since it cannot incorporate the answer into the model. We train CRF models using Mallet (http://mallet.cs.umass.edu/). We use the following features: the words within a 5-word window, the POS tags within a 5-word window, the number of characters, binary indicators (camel case, digits, dashes, slashes, and periods), and the dependency relations of the current word obtained via dependency parsing.
QA S-BLSTM: This baseline does not have any attention module. We use it to show that attention mechanism improves the results.
QA CoAttention: This baseline is inspired by [Xiong, Zhong, and Socher2016], where the question and the answer directly attend to each other, without forming the QA story. We use this baseline to demonstrate that attending on a QA story is better.
DAN (-) Answer Attention: This baseline does not have the answer attention module in DAN. We use this baseline to show that the answer attention also helps to improve the performance on polarity detection.

Product Compatibility Analysis

Method PCA CER Polar. Acc.
CRF - 71.0 -
QA S-BLSTM 61.3 78.0 81.6
QA CoAttention 62.9 79.8 82.4
DAN (-) Ans. Attention 63.9 80.2 81.5
DAN 64.8 80.9 82.5
Table 4: Performance on Product Compatibility Analysis (PCA): PCA is the averaged F1 score over the 3 types (compatible, incompatible, and uncertain) of complementary entities; CER only evaluates the performance of Complementary Entity Recognition without considering the polarity; Polarity Accuracy only evaluates the polarity detection given successfully identified complementary entities.

We first evaluate the performance of product compatibility analysis. Note that the label space of this task is {O, C, IN, U}, as described in Section Model Overview and Preliminary. We consider an extracted entity that has at least 50% of its words overlapping with the ground truth entity as a positive extraction. The polarity of a positive extraction is computed as the majority type voted over all words in the extraction. So a true positive example must have at least 50% overlapping words and a match on polarity. Any positive extraction with no corresponding ground truth entity is considered a false positive example. Any example with mismatched polarity, or a negative extraction, is considered a false negative example. We average the F1 computed from the above definition as the PCA column shown in Table 4. Further, by considering only a positive extraction as a true positive and a negative extraction as a false negative, we compute the CER F1 for complementary entity recognition. Given a positive extraction, we further compute the classification accuracy over the 3 polarity types to show the effectiveness of polarity detection.
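A minimal sketch of these matching rules (the exact normalization of the overlap ratio is our assumption):

```python
# Sketch: >= 50% token overlap counts as a positive extraction; F1 from TP/FP/FN counts.
def token_overlap(pred_span, gold_span):
    pred, gold = set(pred_span), set(gold_span)
    return len(pred & gold) / max(len(pred), 1)

def is_positive_extraction(pred_span, gold_spans):
    return any(token_overlap(pred_span, g) >= 0.5 for g in gold_spans)

def f1(tp, fp, fn):
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```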

Result Analysis: We can see that DAN outperforms the other baselines on PCA F1, CER F1, and polarity accuracy. The attention mechanism boosts the performance of CER considerably. With the attention on the answer, the polarity is more accurate when DAN is compared with DAN (-) Answer Attention. The baseline QA CoAttention indicates that attending on the QA story is better than attending on the question or the answer alone. Lastly, CRF performs poorly because of its limited ability to learn good word representations.

Product Function Satisfiability Analysis

Method FSA FNR Polar. Acc.
CRF - 70.2 -
QA S-BLSTM 61.1 77.0 82.2
QA CoAttention 61.7 77.4 81.3
DAN (-) Ans. Attention 62.5 78.0 83.0
DAN 63.9 78.2 84.3
Table 5: Performance on product Function Satisfiability Analysis (FSA): FSA is the averaged F1 score over the 3 types (satisfiable, unsatisfiable, and uncertain) of extracted function expressions; FNR only evaluates the performance of Function Need (function expression) Recognition without considering the polarity; Polarity Accuracy only evaluates the polarity detection given successfully identified function needs.

We then evaluate the performance of product function satisfiability analysis, which requires the label space of all models to be {O, T-S, T-UN, T-U, F-S, F-UN, F-U}. Similar to the previous task, we consider an extracted function target that has at least 50% of its words overlapping with the ground truth function target as a positive function target extraction. If at least one function word is correctly predicted, or the function word is missing from the ground truth, we consider the case a positive function word extraction. A true positive extraction is generated when both a positive function target extraction and a positive function word extraction happen. The rest of the evaluation metrics are the same as in the previous subsection, with the corresponding terms changed, as shown in Table 5.

Result Analysis: We can see that DAN outperforms the other baselines. DAN improves over DAN (-) Answer Attention slightly for function need recognition, but considerably for polarity detection, thanks to the answer attention. The baseline QA CoAttention indicates that attending on the longer QA story in DAN is better than attending on the question or the answer alone. Further, we notice that the performance of QA S-BLSTM and QA CoAttention is close, so the short question or answer alone may not carry enough information; sometimes it may even introduce noise. Lastly, CRF performs poorly because of its poor representation learning capability.

Qualitative Analysis

To get a better sense of the extracted function expressions (or needs), we sample a few predictions for 5 popular products from DAN, as shown in Table 6. We observe that many function needs are indeed customers’ high-priority needs. Most function needs are extrinsic functions and their function targets can be interpreted as complementary products. For example, it is important to know that the Tablet is not designed for high-performance games like “fallout 4”, or that google apps are not runnable on Cellphone 1. Intrinsic functions are also identified, such as “waterproof” or “support multi pairing”. Knowing whether the Apple Watch is waterproof or not is very important when deciding whether to buy such a product.

Product Satisfiable Unsatisfiable
Tablet
run itunes
read kindle books
play Xbox One games
run fallout 4
Cellphone 1
work on cricket
run Gmail
get google apps
support SD card
Laptop 1
Install Windows
install adobe flash player
use with HDMI
support Appleworks
Apple Watch
work on IPhone 6 plus
comp. with apple comp.
comp. with android phones
waterproof
Mouse
mac compatible
Android
work on glass table
support multi pairing
Table 6: Satisfiable function needs and unsatisfiable function needs from 5 popular products: the function words are bolded and the function targets are underlined.

Conclusion

In this paper, we study two closely related problems: product compatibility analysis and function satisfiability analysis, where the second problem is a generalization of the first. We address these problems by first creating an annotated corpus based on Product Community Question Answering (PCQA). Then we propose a neural Dual Attention Network (DAN) to solve the two problems in an end-to-end manner. Experiments demonstrate that DAN is superior to a wide spectrum of baselines. Applications of this model can be found in e-commerce websites and recommender systems.

Acknowledgments

This work is supported in part by NSF through grants IIS-1526499, CNS-1626432, and NSFC 61672313. We would also like to thank anonymous reviewers for their valuable feedback to improve this paper.

References