Supervised Opinion Aspect Extraction by Exploiting Past Extraction Results

12/23/2016 · Lei Shu, et al. · RTI International · University of Illinois at Chicago

One of the key tasks of sentiment analysis of product reviews is to extract product aspects or features that users have expressed opinions on. In this work, we focus on using supervised sequence labeling as the base approach to performing the task. Although several extraction methods using sequence labeling methods such as Conditional Random Fields (CRF) and Hidden Markov Models (HMM) have been proposed, we show that this supervised approach can be significantly improved by exploiting the idea of concept sharing across multiple domains. For example, "screen" is an aspect of iPhone, but iPhone is not the only product with a screen; many other electronic devices have screens too. When "screen" appears in a review of a new domain (or product), it is likely to be an aspect too. Knowing this information enables us to do much better extraction in the new domain. This paper proposes a novel extraction method exploiting this idea in the context of supervised sequence labeling. Experimental results show that it produces markedly better results than the same approach without using the past information.


1 Introduction

Aspect extraction is a fundamental task of opinion mining or sentiment analysis [Hu and Liu2004]. It aims to extract opinion targets from opinion text. For example, from “This phone has a good screen,” it aims to extract “screen.” In product reviews, aspects are product attributes or features. They are needed in many sentiment analysis applications.

Aspect extraction has been studied by many researchers. There are both supervised and unsupervised approaches. We will discuss existing methods in these two approaches and compare them with the proposed technique in the related work section. The proposed technique uses the popular supervised sequence labeling method Conditional Random Fields (CRF) [Lafferty et al.2001] as its base algorithm [Jakob and Gurevych2010, Choi and Cardie2010, Mitchell et al.2013]. We will show that the results of CRF can be significantly improved by leveraging prior knowledge mined automatically (without any user involvement) from a large amount of online reviews of many products, which we also call domains. Such reviews are readily available and can be easily crawled from the Web, e.g., from Amazon.com's review pages. The improvement is possible due to the important observation that although every product (domain) is different, there is a fair amount of aspect overlap across domains or products [Chen and Liu2014]. For example, every product review domain probably has the aspect price, reviews of most electronic products share the aspect battery life, and reviews of some products share the aspect screen. This paper exploits such sharing to help CRF produce much better extraction results.

Since the proposed method makes use of past extraction results as prior knowledge to help new or future extraction, it has to make sure that the knowledge is reliable. It is well known that no statistical learning method can guarantee perfect results (as we will see in the experiment section, CRF's extraction results are far from perfect). However, if we can find a set of shared aspects that have been extracted from multiple past domains, these aspects, which we call reliable aspects, are more likely to be correct. They can serve as the prior knowledge to help CRF extract from a new domain more effectively. For example, suppose we have product reviews from three domains. After running a CRF-based extractor, a set of aspects is extracted from each domain's reviews, as listed below. Note that only four aspects are listed for each domain for illustration purposes.

Camera: price, my wife, battery life, picture

Cellphone: picture, husband, battery life, expensive

Washer: price, water, customer, shoes

Some of these aspects are clearly incorrect, e.g., my wife, husband, and customer, as they are not features of these products. However, if we focus on those aspects that appear in at least two domains, we find the following set:

{price, battery life, picture}.

This list of words serves as the past knowledge and is given to CRF, which leverages it to perform better extraction in the new review domain. A minimal sketch of this frequency-based mining step follows.
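To make the mining step concrete, the following Python sketch (our illustration, not the paper's code) keeps an aspect as reliable if it was extracted in at least two past domains:

from collections import Counter

def mine_reliable_aspects(domain_aspects, min_domains=2):
    """domain_aspects: a list of aspect sets, one set per past domain."""
    counts = Counter(a for aspects in domain_aspects for a in set(aspects))
    return {a for a, c in counts.items() if c >= min_domains}

past = [
    {"price", "my wife", "battery life", "picture"},      # Camera
    {"picture", "husband", "battery life", "expensive"},  # Cellphone
    {"price", "water", "customer", "shoes"},              # Washer
]
print(mine_reliable_aspects(past))  # {'price', 'battery life', 'picture'}

The same helper is reused later when we sketch the lifelong prediction loop.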

The proposed approach has two phases:

  1. Model building phase: Given a labeled training review dataset $D$, it builds a CRF model $M$.

  2. Extraction phase: At any point in time, $M$ has been applied to extract from $n$ past domains of reviews $D_1, \dots, D_n$, which produced the corresponding sets of aspects $A_1, \dots, A_n$, kept in an aspect store $S$. A set of frequent aspects $K$ (the past knowledge) will be discovered from $S$. When faced with a new domain of reviews $D_{n+1}$, the algorithm first finds the set of reliable aspects $K$ from the aspect store $S$. $K$ is then used to help perform better extraction from $D_{n+1}$. The resulting set of aspects $A_{n+1}$ is added to $S$ for future use.

Because the CRF model $M$ leverages the knowledge gained from past learning results to help extraction in the new domain, we are essentially using the idea of lifelong machine learning [Chen and Liu2014, Ruvolo and Eaton2013, Silver et al.2013, Thrun1998], which gives the proposed technique its name, Lifelong-CRF. Lifelong learning means retaining the knowledge or information learned in the past and leveraging it to help future learning and problem solving. Formally, it is defined as follows [Chen et al.2015]:

Definition 1.

Lifelong machine learning (or simply lifelong learning) is a continuous learning process where the learner has performed a sequence of $n$ learning tasks, $T_1, T_2, \dots, T_n$, called the past tasks. When faced with the $(n+1)$th task $T_{n+1}$ with its data $D_{n+1}$, the learner can leverage the prior knowledge $K$ gained in the past to help learn $T_{n+1}$. After the completion of learning $T_{n+1}$, $K$ is updated with the learned results from $T_{n+1}$.

The key challenge of the proposed Lifelong-CRF method is how to leverage $K$ to help $M$ perform better extraction. This paper proposes a novel method, which does not change the trained model $M$, but uses a set of dependency patterns, generated from dependency relations and $K$, as feature values for CRF. As the algorithm extracts in more domains, $S$ grows and the dependency patterns grow too, which gives the new domain richer feature information and enables the CRF model to perform better extraction on the new domain data $D_{n+1}$.

In summary, this paper makes the following contributions:

  1. It proposes a novel idea of exploiting review collections from past domains to learn prior knowledge to guide the CRF model in its sequence labeling process. To the best of our knowledge, this is the first time that lifelong learning is added to CRF. It is also the first time that lifelong learning is applied to supervised aspect extraction.

  2. It proposes a novel method to incorporate the prior knowledge in the CRF prediction model for better extraction.

  3. Experimental results show that the proposed Lifelong-CRF outperforms baseline methods markedly.

2 Related Work

As mentioned in the introduction, there are two main approaches to aspect extraction: supervised and unsupervised. The former is mainly based on CRF [Jakob and Gurevych2010, Choi and Cardie2010, Mitchell et al.2013], while the latter is mainly based on topic modeling [Mei et al.2007, Titov and McDonald2008, Li et al.2010, Brody and Elhadad2010, Wang et al.2010, Moghaddam and Ester2011, Mukherjee and Liu2012, Lin and He2009, Zhao et al.2010, Jo and Oh2011, Fang and Huang2012], and syntactic rules [Zhuang et al.2006, Wang and Wang2008, Wu et al.2009, Zhang et al.2010, Qiu et al.2011, Poria et al.2014, Xu et al.2016b, Xu et al.2016a]. There are also frequency-based methods [Hu and Liu2004, Popescu and Etzioni2005, Zhu et al.2009], word alignment methods [Liu et al.2013], label propagation methods [Zhou et al.2013], and others [Zhao et al.2015].

The technique proposed in this paper is in the context of supervised CRF [Jakob and Gurevych2010, Choi and Cardie2010, Mitchell et al.2013], which learns a sequence model to label aspects and non-aspects. Our work aims to improve it by exploiting the idea of lifelong learning. None of the existing supervised extraction methods have made use of this new idea.

Our work is most closely related to extraction methods that have already employed lifelong learning. However, all the current methods are unsupervised. For example, lifelong topic modeling-based methods [Chen et al.2014, Wang et al.2016] have been used for aspect extraction. However, topic models can only find rough topics and are not effective at finding fine-grained aspects, as a topical term is not necessarily an aspect. Also, topic models only find aspects that are individual words, but many product aspects are multi-word phrases, e.g., battery life and picture quality. Further, lifelong learning has been used for unsupervised opinion target (aspect or entity) classification [Shu et al.2016], but not for aspect extraction. [Liu et al.2016] proposed an unsupervised lifelong learning method based on dependency rules [Qiu et al.2011] and recommendation. It differs from our method, which is based on supervised sequence labeling. We aim to find more precise aspects using supervised learning, to show that lifelong learning is also effective for supervised learning, and to propose a novel method to incorporate it into the CRF labeling process.

There are existing lifelong supervised learning methods [Chen et al.2015, Ruvolo and Eaton2013] but they are for classification rather than for sequence labeling.

Note that lifelong learning is related to transfer learning and multi-task learning [Pan and Yang2010], but they are also different. See their differences in [Chen and Liu2014].

3 Conditional Random Field

Conditional Random Field (CRF) is a popular supervised sequence labeling method. We use linear-chain CRF, i.e., first-order CRF. It can be viewed as a factor graph over an observation sequence $\boldsymbol{x}$ and a label sequence $\boldsymbol{y}$.

Let $T$ denote the length of the sequence and $t$ indicate the $t$th position in the sequence. Let $\{f_k\}$ be a set of feature functions. Each feature function $f_k$ has a corresponding weight $\lambda_k$. The conditional probability of the sequence of labels $\boldsymbol{y}$ given the sequence of observations $\boldsymbol{x}$ is

$$P(\boldsymbol{y} \mid \boldsymbol{x}) = \frac{1}{Z(\boldsymbol{x})} \exp\left\{ \sum_{t=1}^{T} \sum_{k} \lambda_k f_k(y_{t-1}, y_t, x_t) \right\} \quad (1)$$

where $Z(\boldsymbol{x})$ is the partition function:

$$Z(\boldsymbol{x}) = \sum_{\boldsymbol{y}} \exp\left\{ \sum_{t=1}^{T} \sum_{k} \lambda_k f_k(y_{t-1}, y_t, x_t) \right\} \quad (2)$$

3.1 Parameter Estimation and Prediction

During training, the weights $\{\lambda_k\}$ of the feature functions can be estimated by maximizing the log likelihood on the training data $\{(\boldsymbol{x}^{(i)}, \boldsymbol{y}^{(i)})\}_{i=1}^{N}$:

$$\hat{\lambda} = \arg\max_{\lambda} \sum_{i=1}^{N} \log P(\boldsymbol{y}^{(i)} \mid \boldsymbol{x}^{(i)}) \quad (3)$$

During testing, the prediction of labels is done by maximizing

$$\hat{\boldsymbol{y}} = \arg\max_{\boldsymbol{y}} P(\boldsymbol{y} \mid \boldsymbol{x}) \quad (4)$$
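To make Eqs. (1)-(4) concrete, here is a toy numpy sketch (our illustration; the scores are invented rather than learned, and both the partition function of Eq. (2) and the argmax of Eq. (4) are computed by brute-force enumeration, which real implementations replace with forward and Viterbi dynamic programming):

import itertools
import numpy as np

labels = [0, 1]                  # e.g., 0 = O (other), 1 = A (aspect)
T = 3                            # sequence length
emit = np.array([[1.0, 0.2],     # emit[t, y]: summed weighted Label-Word features
                 [0.1, 1.5],
                 [0.8, 0.3]])
trans = np.array([[0.5, 0.1],    # trans[y_prev, y]: weighted Label-Label features
                  [0.2, 0.7]])

def score(y):
    # Unnormalized log score: the exponent of Eq. (1) for label sequence y.
    s = emit[0, y[0]]
    for t in range(1, T):
        s += trans[y[t-1], y[t]] + emit[t, y[t]]
    return s

# Eq. (2): partition function over all label sequences (fine at toy sizes).
Z = sum(np.exp(score(y)) for y in itertools.product(labels, repeat=T))

# Eq. (1): probability of one label sequence.
print("P(y|x) =", np.exp(score((0, 1, 0))) / Z)

# Eq. (4): decoding by exhaustive search instead of Viterbi.
print("argmax:", max(itertools.product(labels, repeat=T), key=score))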

3.2 Feature Function

We use two types of feature functions. One is the Label-Word (LW) feature function:

$$f_{y', x'}(y_t, x_t) = \mathbb{1}[y_t = y'] \cdot \mathbb{1}[x_t = x'], \quad \forall y' \in \mathcal{Y}, \; x' \in \mathcal{V} \quad (5)$$

where $\mathcal{Y}$ is the set of labels, $\mathcal{V}$ is the vocabulary and $\mathbb{1}[\cdot]$ is the indicator function. The above feature function returns $1$ when the $t$th word is $x'$ and the $t$th label is $y'$; otherwise, it returns $0$. The other is the Label-Label (LL) feature function:

$$f_{y', y''}(y_{t-1}, y_t) = \mathbb{1}[y_{t-1} = y'] \cdot \mathbb{1}[y_t = y''], \quad \forall y', y'' \in \mathcal{Y} \quad (6)$$

Because the size of the label set $\mathcal{Y}$ is small, each Label-Label combination occurs often enough in the training data to learn the corresponding weights for Eq. (6) well. However, the set of observed words is much larger, so many Label-Word combinations do not occur often enough in the training data to learn the corresponding weights for Eq. (5) well. Further, the set of unobserved words is huge: during prediction (testing), it is very likely that some words never appeared in the training data, so no Label-Word feature function and weight match those newly observed words. To address this problem, we introduce additional features in the next section.

4 Features

In the Label-Word feature function Eq. (5), $x_t$ represents the current word, which takes a value from a set of words. In practice, $x_t$ is a multi-dimensional vector. We use $x_t^d$ to denote one feature (dimension) $d$ of $x_t$, where $d \in F$ and $F$ is the feature set of $x_t$. The Label-dimension (L) feature function is defined as

$$f_{y', v}(y_t, x_t^d) = \mathbb{1}[y_t = y'] \cdot \mathbb{1}[x_t^d = v], \quad \forall y' \in \mathcal{Y}, \; v \in \mathcal{V}^d \quad (7)$$

where $\mathcal{V}^d$ is the set of observed values of feature $d$, which we call feature $d$'s feature values. Eq. (7) is a feature function that returns $1$ when $x_t$'s feature $d$ equals the feature value $v$ and the variable $y_t$ (the $t$th label) equals the label value $y'$.

For NLP problems, commonly used features are word (W) and POS-tag (P). The POS-tag feature is a more general feature than the word feature since it generalizes to new observations in testing. Contextual features in a fixed-sized window are useful as well, such as previous word (-1W), previous word’s POS-tag (-1P), next word (+1W) and next word’s POS-tag (+1P).

The feature set we use for CRF is {G, W, -1W, +1W, P, -1P, +1P}, which contains 6 common features and 1 general dependency feature (G). The general dependency feature (G) takes a dependency pattern as a value, which is generated from a dependency relation obtained by dependency parsing the input sentences. This feature is useful because a dependency relation can link two words that are far apart, beyond any fixed-size window. The dependency feature also enables knowledge accumulation in lifelong learning, which will become clear shortly.
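As an illustration, here is a small Python sketch (ours, not the paper's code) of building the {W, -1W, +1W, P, -1P, +1P} features for one token; the boundary tokens <BOS>/<EOS> are our own padding convention, and the general dependency feature G is added in Section 4.2:

def token_features(words, tags, t):
    feats = {"W": words[t], "P": tags[t]}
    feats["-1W"] = words[t-1] if t > 0 else "<BOS>"
    feats["-1P"] = tags[t-1] if t > 0 else "<BOS>"
    feats["+1W"] = words[t+1] if t < len(words) - 1 else "<EOS>"
    feats["+1P"] = tags[t+1] if t < len(words) - 1 else "<EOS>"
    return feats

words = ["The", "battery", "of", "this", "camera", "is", "great"]
tags = ["DT", "NN", "IN", "DT", "NN", "VBZ", "JJ"]
print(token_features(words, tags, 1))
# {'W': 'battery', 'P': 'NN', '-1W': 'The', '-1P': 'DT', '+1W': 'of', '+1P': 'IN'}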

4.1 Dependency Relation

A dependency relation is a 7-tuple of the following format:

(type, gov, govidx, govpos, dep, depidx, deppos)

where type is the type of the dependency relation, gov is the governor word, govidx is the index (position) of the governor word in a sentence, govpos is the POS tag of the governor word, dep is the dependent word, depidx is the index of the dependent word in a sentence and deppos is the POS tag of the dependent word.

Index  Word     Dependency Relations
1      The      {(det, battery, 2, NN, The, 1, DT)}
2      battery  {(nsubj, great, 7, JJ, battery, 2, NN), (det, battery, 2, NN, The, 1, DT), (nmod, battery, 2, NN, camera, 5, NN)}
3      of       {(case, camera, 5, NN, of, 3, IN)}
4      this     {(det, camera, 5, NN, this, 4, DT)}
5      camera   {(case, camera, 5, NN, of, 3, IN), (det, camera, 5, NN, this, 4, DT), (nmod, battery, 2, NN, camera, 5, NN)}
6      is       {(cop, great, 7, JJ, is, 6, VBZ)}
7      great    {(root, ROOT, 0, VBZ, great, 7, JJ), (nsubj, great, 7, JJ, battery, 2, NN), (cop, great, 7, JJ, is, 6, VBZ)}
Table 1: Dependency relations parsed from “The battery of this camera is great”

Table 1 shows the dependency relations parsed from “The battery of this camera is great”. The Index column shows the position of each word in the sentence. The Dependency Relations column lists all the dependency relations that each word involves.
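Such 7-tuples can be produced with any dependency parser. The sketch below uses spaCy as one possible choice (an assumption on our part; the paper does not name a parser, and label sets differ: Table 1 uses Stanford/UD-style labels such as nmod and case, while spaCy's English models use a slightly different scheme):

import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this model is installed
doc = nlp("The battery of this camera is great")

for tok in doc:
    head = tok.head  # in spaCy, the root token's head is the token itself
    rel = (tok.dep_, head.text, head.i + 1, head.tag_,  # (type, gov, govidx, govpos,
           tok.text, tok.i + 1, tok.tag_)               #  dep, depidx, deppos)
    print(rel)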

The general dependency feature (G) of the variable $x_t$ takes a set of feature values $\mathcal{V}^G$. Each feature value $v \in \mathcal{V}^G$ is a dependency pattern. The Label-G (LG) feature function is defined as:

$$f_{y', v}(y_t, x_t^G) = \mathbb{1}[y_t = y'] \cdot \mathbb{1}[v \in x_t^G], \quad \forall y' \in \mathcal{Y}, \; v \in \mathcal{V}^G \quad (8)$$

Such a feature function returns $1$ when the general dependency feature of the variable $x_t$ contains the dependency pattern $v$ and the variable $y_t$ equals the label value $y'$.

4.2 Generating Dependency Patterns

As discussed above, the general dependency feature could take every dependency relation as a possible value. However, we do not use each dependency relation from the parser directly, because individual relations are too sparse to produce good results during testing. We generalize a dependency relation into a dependency pattern using the following steps:

  1. Eliminate all index information from a relation, as the same word pattern may appear at different positions in different sentences. After removing the index information, the dependency relation is in the following format:

     (type, gov, govpos, dep, deppos)

  2. Replace the specific current word with a wildcard in all its dependency relations. There are two reasons for this. First, the word itself in a dependency relation is redundant since we already have the word (W) and POS-tag (P) features. Second, we care more about the other word's influence on the current word. We still keep the information about whether the current word is the dependent or the governor, because without it, "the battery of this camera" would have the same nmod relation as "the camera of this battery".

     To illustrate this step: in Table 1, the 5th word "camera" is the dependent word in (nmod, battery, NN, camera, NN) but the governor word in (det, camera, NN, this, DT). After applying the wildcard, the relations for "camera" become:

     (nmod, battery, NN, *), (det, *, this, DT), (case, *, of, IN)

     This is still not general enough: the relations contain actual words, which makes them too specific and difficult to apply to new domains (cross-domain) other than the training domain, because those words may not appear in the new domains.

  3. Replace the related word in each dependency relation with a more general label to obtain a more general dependency feature value. Let the set of aspects annotated in the training data be $A$. If a word in the dependency relation appears in $A$, we replace it with the special label 'A' (aspect); if it does not, we replace it with the label 'O' (other).

     For example, assuming the training domain is Camera, the words "battery" and "camera" are in $A$. The above dependency relations for the word "camera" become:

     (nmod, A, NN, *), (det, *, O, DT), (case, *, O, IN)

     Likewise, the dependency relations of the word "battery" become:

     (nsubj, O, JJ, *), (det, *, O, DT), (nmod, *, A, NN)

     These final forms of dependency relations are called dependency patterns. (A code sketch of these three steps is given after this list.)

     In the sentence "The battery of this camera is great", the 5th word "camera" makes the feature function Eq. (9) return $1$ because "camera" is an aspect and it has the dependency pattern (nmod, A, NN, *):

     $$f_{y', v}(y_5, x_5^G) = \mathbb{1}[y_5 = y'] \cdot \mathbb{1}[v \in x_5^G] = 1 \quad (9)$$

     where $y' = \text{A}$ and $v = (\text{nmod, A, NN, *})$. Likewise, the 2nd word "battery" makes the feature function Eq. (10) return $1$ because it is an aspect as well and it has the dependency pattern (nmod, *, A, NN):

     $$f_{y', v}(y_2, x_2^G) = \mathbb{1}[y_2 = y'] \cdot \mathbb{1}[v \in x_2^G] = 1 \quad (10)$$

     where $y' = \text{A}$ and $v = (\text{nmod, *, A, NN})$.
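The sketch referenced in step 3 follows: a minimal Python reimplementation (ours) of the three generalization steps, taking a full 7-tuple, the current word, and the aspect set $A$ (or, later, the reliable set $K$):

def to_pattern(rel, current, reliable_aspects):
    """rel: (type, gov, govidx, govpos, dep, depidx, deppos)."""
    rtype, gov, _, govpos, dep, _, deppos = rel   # step 1: drop the indices
    if current == dep:                            # current word is the dependent
        other, otherpos = gov, govpos
        label = "A" if other in reliable_aspects else "O"
        return (rtype, label, otherpos, "*")      # steps 2-3: wildcard + A/O
    else:                                         # current word is the governor
        other, otherpos = dep, deppos
        label = "A" if other in reliable_aspects else "O"
        return (rtype, "*", label, otherpos)

A = {"battery", "camera"}
rel = ("nmod", "battery", 2, "NN", "camera", 5, "NN")
print(to_pattern(rel, "camera", A))   # ('nmod', 'A', 'NN', '*')
print(to_pattern(rel, "battery", A))  # ('nmod', '*', 'A', 'NN')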

We are now ready to present the proposed Lifelong-CRF method since dependency patterns are capable of accumulating knowledge.

5 Lifelong CRF

Because dependency patterns for the general dependency feature do not use any actual words, they are well suited to cross-domain extraction (where the test domain is not the training domain). More importantly, they make the proposed Lifelong-CRF method possible. The idea is as follows:

We first introduce a set of reliable aspects $K$, which is mined from the aspects extracted from the past domain datasets using the trained CRF model $M$. $K$ is regarded as the past knowledge in lifelong learning. Initially, $K$ is $A$, the set of all annotated aspects in the training data $D$. The more domains $M$ works on, the more aspects it extracts, and the larger the set $K$ becomes. When faced with a new domain, a larger $K$ allows the general dependency feature to generate more dependency patterns related to aspects (patterns using the label 'A'), as discussed in the previous section. More patterns generate more feature functions, which enable the CRF model to extract more and better aspects in the new domain.

The proposed Lifelong-CRF algorithm works in two phases: a training phase and a lifelong prediction phase. In the training phase, we train a CRF model $M$ using the annotated training data $D$. In the lifelong prediction phase, $M$ is applied to each new dataset for aspect extraction. In the lifelong process, $M$ does not change. Instead, as mentioned above, we maintain the set $K$ of reliable aspects extracted from the past datasets. Clearly, we cannot use all aspects extracted from the past domains as reliable aspects, due to many extraction errors. But those aspects that appear in multiple past domains are more likely to be correct. Thus $K$ contains the frequent aspects extracted in the past. Below, we discuss these two phases in greater detail.

Model Training Phase: Given the annotated training dataset $D$ and the set $A$ of all annotated aspects in $D$, it first generates all feature functions (including dependency pattern-based ones) to give the data with features, $\hat{D}$ (line 1). It then trains a CRF model $M$ by running a CRF learning algorithm on $\hat{D}$ (line 2). $A$ is assigned to $K$ as the initial set of reliable aspects (line 3), which will be used in subsequent extraction tasks in new domains.

1:  $\hat{D} \leftarrow \text{GenerateFeatures}(D, A)$
2:  $M \leftarrow \text{Train-CRF}(\hat{D})$
3:  $K \leftarrow A$
Algorithm 1 Model Training Phase
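A hedged Python sketch of Algorithm 1 using sklearn-crfsuite (our tooling choice; the paper only requires "a CRF learning algorithm"). Here, generate_features is assumed to be the feature builder of Section 4 (the window features plus the G dependency patterns built with the aspect set $A$):

import sklearn_crfsuite

def model_training_phase(train_sents, train_labels, A):
    # line 1: D-hat <- generate all features for the training data D
    X = [[generate_features(sent, t, A) for t in range(len(sent))]
         for sent in train_sents]
    # line 2: M <- train a CRF model on the featurized data
    M = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
    M.fit(X, train_labels)  # train_labels: one list of 'A'/'O' tags per sentence
    # line 3: K <- A, the initial set of reliable aspects
    K = set(A)
    return M, K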

Lifelong Prediction Phase: This is the steady-state phase of lifelong CRF prediction (or extraction). When a new domain dataset $D_{n+1}$ arrives in the system, it uses Algorithm 2 to perform extraction on $D_{n+1}$, which works iteratively.

  1. As in Algorithm 1, it first generates the features on the data $D_{n+1}$, using the current $K$ (line 3). It then applies the CRF model $M$ on the featurized data $\hat{D}_{n+1}$ to produce a set of aspects $A_{n+1}$ (line 4). It is important to note again that $K$ grows as the system works on more domains, which enables the system to generate more dependency pattern-based feature functions for the new data, and consequently better extraction results from the new domain, as we will see in the experiment section.

  2. $A_{n+1}$ is added to $S$, our past aspect store. From $S$, we mine a set of frequent aspects $K'$. The frequency threshold is $\theta$.

  3. If $A_{n+1}$ is the same as in the previous iteration, the algorithm exits the loop, as there will be no new aspects to be found. We now explain why we need an iterative process: each extraction gives new results, which may increase the size of $K$, the reliable past aspects or past knowledge. An increased $K$ may produce more dependency patterns, which may enable more extractions in the next iteration.

  4. Else, some additional reliable aspects have been found, and $M$ may be able to extract additional aspects in the next iteration. Lines 10 and 11 update the two sets $K$ and $S$ for the next iteration. Note that the aspects $A$ from the training data are always considered reliable and thus always a subset of $K$.

1:  $A^{p} \leftarrow \emptyset$
2:  loop
3:     $\hat{D}_{n+1} \leftarrow \text{GenerateFeatures}(D_{n+1}, K)$
4:     $A_{n+1} \leftarrow \text{Apply-CRF}(M, \hat{D}_{n+1})$
5:     $S' \leftarrow S \cup \{A_{n+1}\}$
6:     $K' \leftarrow \text{Mine-Frequent-Aspects}(S', \theta)$
7:     if $A_{n+1} = A^{p}$ then
8:        break
9:     else
10:       $K \leftarrow K' \cup A$
11:       $S \leftarrow S'$
12:       $A^{p} \leftarrow A_{n+1}$
13:    end if
14:  end loop
Algorithm 2 Lifelong Prediction Phase
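The following Python sketch mirrors Algorithm 2 (our reconstruction of the loop described above). mine_reliable_aspects is the helper sketched in the introduction; extract_aspects(M, D_new, K) is assumed to featurize D_new with K, run the CRF model M, and return the set of words labeled 'A'; the default threshold value is illustrative:

def lifelong_prediction_phase(M, D_new, S, K, A, theta=2):
    A_prev = set()                                     # line 1
    while True:                                        # line 2: loop
        A_new = extract_aspects(M, D_new, K)           # lines 3-4
        S_next = S + [A_new]                           # line 5: grow the store
        K_next = mine_reliable_aspects(S_next, theta)  # line 6: frequent mining
        if A_new == A_prev:                            # line 7: nothing new found
            break                                      # line 8
        K = K_next | set(A)                            # line 10: A stays reliable
        S = S_next                                     # line 11
        A_prev = A_new                                 # line 12
    return A_new, S, K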

6 Experiment

Evaluation Datasets: We use two types of data in our experiments. The first type consists of seven (7) annotated benchmark review datasets from 7 domains (products). Since they are annotated, they are used for training and testing. The first 4 datasets are from [Hu and Liu2004], which actually provides 5 datasets from 4 domains; since we are mainly interested in results at the domain level, we did not use one of the domain-repeated datasets. The last 3 datasets, from three domains (products), are from [Liu et al.2016]. All these datasets have been used previously for aspect extraction [Hu and Liu2004, Jakob and Gurevych2010, Liu et al.2016]. Details of the datasets are in Table 2.

The second type has 50 review datasets from 50 diverse domains or products [Chen and Liu2014]. These datasets are not annotated or labeled. They are used for lifelong learning and are treated as the past domain data. Note that since they are not annotated, we cannot use them for training of CRF or for testing. Each dataset has 1000 reviews. All these datasets were downloaded from the paper authors’ webpages.

Dataset  Domain      # of Sentences  # of Aspects  # of Outside
D1       Computer    536             1173          7675
D2       Camera      609             1640          9849
D3       Router      509             1239          7264
D4       Phone       497             980           7478
D5       Speaker     510             1299          7546
D6       DVD Player  506             928           7552
D7       Mp3 Player  505             1180          7607

Table 2: Annotation details of the benchmark datasets.

Cross Domain (P / R / F):
Training    Testing    CRF                 CRF+R               Lifelong-CRF
−Computer   Computer   86.6 / 51.4 / 64.5  23.2 / 90.4 / 37.0  82.2 / 62.7 / 71.1
−Camera     Camera     84.3 / 48.3 / 61.4  21.8 / 86.8 / 34.9  81.9 / 60.6 / 69.6
−Router     Router     86.3 / 48.3 / 61.9  24.8 / 92.6 / 39.2  82.8 / 60.8 / 70.1
−Phone      Phone      72.5 / 50.6 / 59.6  20.8 / 81.2 / 33.1  70.1 / 59.5 / 64.4
−Speaker    Speaker    87.3 / 60.6 / 71.6  22.4 / 91.2 / 35.9  84.5 / 71.5 / 77.4
−DVDplayer  DVDplayer  72.7 / 63.2 / 67.6  16.4 / 90.7 / 27.7  69.7 / 71.5 / 70.6
−Mp3player  Mp3player  87.5 / 49.4 / 63.2  20.6 / 91.9 / 33.7  84.1 / 60.7 / 70.5
Average                82.5 / 53.1 / 64.3  21.4 / 89.3 / 34.5  79.3 / 63.9 / 70.5

In Domain (P / R / F):
Training    Testing     CRF                 CRF+R               Lifelong-CRF
−Computer   −Computer   84.0 / 71.4 / 77.2  23.2 / 93.9 / 37.3  81.6 / 75.8 / 78.6
−Camera     −Camera     83.7 / 70.3 / 76.4  20.8 / 93.7 / 34.1  80.7 / 75.4 / 77.9
−Router     −Router     85.3 / 71.8 / 78.0  22.8 / 93.9 / 36.8  82.6 / 76.2 / 79.3
−Phone      −Phone      85.0 / 71.1 / 77.5  25.1 / 93.7 / 39.6  82.9 / 74.7 / 78.6
−Speaker    −Speaker    83.8 / 70.3 / 76.5  20.1 / 94.3 / 33.2  80.1 / 75.8 / 77.9
−DVDplayer  −DVDplayer  85.0 / 72.2 / 78.1  20.9 / 94.2 / 34.3  81.6 / 76.7 / 79.1
−Mp3player  −Mp3player  83.2 / 72.6 / 77.5  20.4 / 94.5 / 33.5  79.8 / 77.7 / 78.7
Average                 84.3 / 71.4 / 77.3  21.9 / 94.0 / 35.5  81.3 / 76.0 / 78.6

Table 3: Comparative results on aspect extraction in precision (P), recall (R) and F score, Cross Domain and In Domain (−X means all domains except domain X).

Compared Methods: Since the goal of this paper is to study whether lifelong learning can be exploited to improve supervised learning for aspect extraction, we compare our proposed method Lifelong-CRF with state-of-the-art supervised extraction methods. We do not compare with unsupervised extraction methods, which have already been shown improvable by lifelong learning [Liu et al.2016]. This paper is the first to incorporate lifelong learning into supervised sequence labeling. Our experiment compares the following three methods.

CRF: This is the linear-chain CRF. We use the system from https://github.com/huangzhengsjtu/pcrf/. CRF has been used for this task by many researchers [Jakob and Gurevych2010, Choi and Cardie2010, Mitchell et al.2013].

CRF+R: This system treats the accumulated reliable aspect set $K$ as a dictionary. It simply adds to the CRF results those reliable aspects in $K$ that are not extracted by CRF but appear in the test data. We want to see whether incorporating $K$ into the CRF model for prediction, as in Lifelong-CRF, is actually needed.

Lifelong-CRF: This is our proposed system. The frequency threshold $\theta$ in Algorithm 2, used in our experiment to judge which extracted aspects are considered reliable, is set empirically.

Experiment Setting: In order to compare the systems using the same training and test data, we split each dataset into two parts: 200 sentences for training and 200 sentences for testing.

In our experiments, we conducted both cross-domain and in-domain tests. We are particularly interested in cross-domain tests because it is labor-intensive and time-consuming to label training data for each domain; it is thus highly desirable to be able to apply a trained model in cross-domain situations. Training and testing on the same domains (in-domain) is also less interesting, because aspects that appear in the training data are very likely to appear in the test data when both consist of reviews of the same products.

Cross-domain experiments: We combine 6 datasets for training (1200 sentences), and then test on the 7th domain (not used in training). This gives 7 cross-domain results.

In-domain experiments: We train and test on the same 6 domains (1200 sentences for training and 1200 different sentences for testing). This also gives us 7 in-domain results. Note that although we call these in-domain experiments, the training and testing data each combine the same 6 domains rather than a single domain.

Evaluation Measures: Since our goal is to extract aspects, we use the popular precision ($P$), recall ($R$), and $F$ score to evaluate our results on the extracted aspects.
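As a small sketch of the evaluation (ours; exact-match comparison of gold and extracted aspect sets is assumed here):

def prf(gold, extracted):
    tp = len(gold & extracted)                       # true positives
    p = tp / len(extracted) if extracted else 0.0    # precision
    r = tp / len(gold) if gold else 0.0              # recall
    f = 2 * p * r / (p + r) if p + r else 0.0        # F score
    return p, r, f

p, r, f = prf({"battery life", "picture", "price"},
              {"battery life", "price", "husband"})
print(f"P={p:.2f} R={r:.2f} F={f:.2f}")  # P=0.67 R=0.67 F=0.67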

6.1 Results Analysis

All the experiment results are given in Table 3. We analyze the cross-domain and in-domain results in turn. Cross-domain is the main setting of interest, as manually labeling data for every domain is very undesirable in practice.

  1. Cross-domain results analysis. Each entry −X in the first column means that domain X's data is not used in training, i.e., the other 6 domains are used in training. For example, −Computer means that the data from the Computer domain is not used in training. In the second column, X means that domain X is used in testing.

    From the cross-domain results in the table, we observe the following. In $F$ score, Lifelong-CRF is much better than CRF: on average, the $F$ score improves from 64.3 to 70.5, a major improvement. The main improvement is on recall, which is markedly better (53.1 to 63.9 on average). CRF+R's results are very poor due to poor precision, which shows that treating the reliable aspect set $K$ as a dictionary is a bad idea. Incorporating $K$ into the CRF model is important because many aspects in $K$ are incorrect or not applicable to the new/test domain, and the CRF model will not extract many of them.

  2. In-domain results analysis: Training uses the same 6-domain data as in the cross-domain case. −X in the Testing column of the in-domain results means that domain X is not used in testing. For example, in the first row under In Domain, the Computer domain is used in neither training nor testing; that is, the other 6 domains are used in both training and testing (thus in-domain).

    From the in-domain results in the table, we observe the following. Lifelong-CRF still improves over CRF (77.3 to 78.6 in average $F$ score), but the improvement is considerably smaller. This is expected, as discussed above: most of the aspects that appear in training probably also appear in the test data, since both are reviews of the same 6 products. Again, CRF+R does poorly for the same reason.

7 Conclusion

This paper proposed a lifelong learning based approach that enables Conditional Random Fields (CRF) to leverage the knowledge gained from extraction results in multiple past domains to improve CRF's extraction performance. To our knowledge, this is the first time that CRF has been endowed with a lifelong learning capability, and also the first time that a lifelong supervised method has been used for aspect extraction in opinion mining. Experimental results demonstrated the superior performance of the proposed Lifelong-CRF method.

References

  • [Brody and Elhadad2010] Samuel Brody and Noemie Elhadad. 2010. An unsupervised aspect-sentiment model for online reviews. In NAACL ’10, pages 804–812.
  • [Chen and Liu2014] Zhiyuan Chen and Bing Liu. 2014. Topic modeling using topics from many domains, lifelong learning and big data. In ICML ’14, pages 703–711.
  • [Chen et al.2014] Zhiyuan Chen, Arjun Mukherjee, and Bing Liu. 2014. Aspect extraction with automated prior knowledge learning. In ACL ’14, pages 347–358.
  • [Chen et al.2015] Zhiyuan Chen, Nianzu Ma, and Bing Liu. 2015. Lifelong learning for sentiment classification. In ACL '15 (Volume 2: Short Papers), page 750.
  • [Choi and Cardie2010] Yejin Choi and Claire Cardie. 2010. Hierarchical sequential learning for extracting opinions and their attributes. In ACL ’10, pages 269–274.
  • [Fang and Huang2012] Lei Fang and Minlie Huang. 2012. Fine granular aspect analysis using latent structural models. In ACL ’12, pages 333–337.
  • [Hu and Liu2004] Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In KDD ’04, pages 168–177.
  • [Jakob and Gurevych2010] Niklas Jakob and Iryna Gurevych. 2010. Extracting opinion targets in a single- and cross-domain setting with conditional random fields. In EMNLP ’10, pages 1035–1045.
  • [Jo and Oh2011] Yohan Jo and Alice H. Oh. 2011. Aspect and sentiment unification model for online review analysis. In WSDM ’11, pages 815–824.
  • [Lafferty et al.2001] John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML ’01, pages 282–289.
  • [Li et al.2010] Fangtao Li, Minlie Huang, and Xiaoyan Zhu. 2010. Sentiment analysis with global topics and local dependency. In AAAI ’10, pages 1371–1376.
  • [Lin and He2009] Chenghua Lin and Yulan He. 2009. Joint sentiment/topic model for sentiment analysis. In CIKM ’09, pages 375–384.
  • [Liu et al.2013] Kang Liu, Liheng Xu, Yang Liu, and Jun Zhao. 2013. Opinion target extraction using partially-supervised word alignment model. In IJCAI ’13, pages 2134–2140.
  • [Liu et al.2016] Qian Liu, Bing Liu, Yuanlin Zhang, Doo Soon Kim, and Zhiqiang Gao. 2016. Improving opinion aspect extraction using semantic similarity and aspect associations. In AAAI '16.
  • [Mei et al.2007] Qiaozhu Mei, Xu Ling, Matthew Wondra, Hang Su, and ChengXiang Zhai. 2007. Topic sentiment mixture: Modeling facets and opinions in weblogs. In WWW ’07, pages 171–180.
  • [Mitchell et al.2013] Margaret Mitchell, Jacqui Aguilar, Theresa Wilson, and Benjamin Van Durme. 2013. Open domain targeted sentiment. In ACL ’13, pages 1643–1654.
  • [Moghaddam and Ester2011] Samaneh Moghaddam and Martin Ester. 2011. ILDA: Interdependent LDA model for learning latent aspects and their ratings from online product reviews. In SIGIR '11, pages 665–674.
  • [Mukherjee and Liu2012] Arjun Mukherjee and Bing Liu. 2012. Aspect extraction through semi-supervised modeling. In ACL ’12, volume 1, pages 339–348.
  • [Pan and Yang2010] Sinno Jialin Pan and Qiang Yang. 2010. A survey on transfer learning. IEEE Transactions on knowledge and data engineering, 22(10):1345–1359.
  • [Popescu and Etzioni2005] Ana-Maria Popescu and Oren Etzioni. 2005. Extracting product features and opinions from reviews. In HLT-EMNLP ’05, pages 339–346.
  • [Poria et al.2014] Soujanya Poria, Erik Cambria, Lun-Wei Ku, Chen Gui, and Alexander Gelbukh. 2014. A rule-based approach to aspect extraction from product reviews. In Proceedings of the Second Workshop on Natural Language Processing for Social Media (SocialNLP), pages 28–37.
  • [Qiu et al.2011] Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2011. Opinion word expansion and target extraction through double propagation. Computational Linguistics, 37(1):9–27.
  • [Ruvolo and Eaton2013] Paul Ruvolo and Eric Eaton. 2013. ELLA: An efficient lifelong learning algorithm. In ICML '13, pages 507–515.
  • [Shu et al.2016] Lei Shu, Bing Liu, Hu Xu, and Annice Kim. 2016. Lifelong-rl: Lifelong relaxation labeling for separating entities and aspects in opinion targets. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).
  • [Silver et al.2013] Daniel L Silver, Qiang Yang, and Lianghao Li. 2013. Lifelong machine learning systems: Beyond learning algorithms. In AAAI Spring Symposium: Lifelong Machine Learning, pages 49–55. Citeseer.
  • [Thrun1998] Sebastian Thrun. 1998. Lifelong learning algorithms. In Learning to learn, pages 181–209. Springer.
  • [Titov and McDonald2008] Ivan Titov and Ryan McDonald. 2008. A joint model of text and aspect ratings for sentiment summarization. In ACL ’08: HLT, pages 308–316.
  • [Wang and Wang2008] Bo Wang and Houfeng Wang. 2008. Bootstrapping both product features and opinion words from Chinese customer reviews with cross-inducing. In IJCNLP '08, pages 289–295.
  • [Wang et al.2010] Hongning Wang, Yue Lu, and Chengxiang Zhai. 2010. Latent aspect rating analysis on review text data: A rating regression approach. In KDD ’10, pages 783–792.
  • [Wang et al.2016] Shuai Wang, Zhiyuan Chen, and Bing Liu. 2016. Mining aspect-specific opinion using a holistic lifelong topic model. In Proceedings of the 25th International Conference on World Wide Web, pages 167–176. International World Wide Web Conferences Steering Committee.
  • [Wu et al.2009] Yuanbin Wu, Qi Zhang, Xuanjing Huang, and Lide Wu. 2009. Phrase dependency parsing for opinion mining. In EMNLP ’09, pages 1533–1541.
  • [Xu et al.2016a] Hu Xu, Lei Shu, Jingyuan Zhang, and Philip S. Yu. 2016a. Mining compatible/incompatible entities from question and answering via yes/no answer classification using distant label expansion. arXiv preprint arXiv:1612.04499.
  • [Xu et al.2016b] Hu Xu, Sihong Xie, Lei Shu, and Philip S. Yu. 2016b. Cer: Complementary entity recognition via knowledge expansion on large unlabeled product reviews. In Proceedings of IEEE International Conference on Big Data.
  • [Zhang et al.2010] Lei Zhang, Bing Liu, Suk Hwan Lim, and Eamonn O’Brien-Strain. 2010. Extracting and ranking product features in opinion documents. In COLING ’10: Posters, pages 1462–1470.
  • [Zhao et al.2010] Wayne Xin Zhao, Jing Jiang, Hongfei Yan, and Xiaoming Li. 2010. Jointly modeling aspects and opinions with a maxent-lda hybrid. In EMNLP ’10, pages 56–65.
  • [Zhao et al.2015] Yanyan Zhao, Bing Qin, and Ting Liu. 2015. Creating a fine-grained corpus for chinese sentiment analysis. IEEE Intelligent Systems, 30(1):36–43.
  • [Zhou et al.2013] Xinjie Zhou, Xiaojun Wan, and Jianguo Xiao. 2013. Collective opinion target extraction in Chinese microblogs. In EMNLP ’13, pages 1840–1850.
  • [Zhu et al.2009] Jingbo Zhu, Huizhen Wang, Benjamin K. Tsou, and Muhua Zhu. 2009. Multi-aspect opinion polling from textual reviews. In CIKM ’09, pages 1799–1802.
  • [Zhuang et al.2006] Li Zhuang, Feng Jing, and Xiao-Yan Zhu. 2006. Movie review mining and summarization. In CIKM ’06, pages 43–50.