
Beyond Opinion Mining: Summarizing Opinions of Customer Reviews

Customer reviews are vital for making purchasing decisions in the Information Age. Such reviews can be automatically summarized to provide the user with an overview of opinions. In this tutorial, we present various aspects of opinion summarization that are useful for researchers and practitioners. First, we will introduce the task and its major challenges. Then, we will present existing opinion summarization solutions, both pre-neural and neural. We will discuss how summarizers can be trained in the unsupervised, few-shot, and supervised regimes. Each regime has roots in different machine learning methods, such as auto-encoding, controllable text generation, and variational inference. Finally, we will discuss resources and evaluation methods and conclude with future directions. This three-hour tutorial will provide a comprehensive overview of major advances in opinion summarization. Attendees will be well-equipped with knowledge that is useful for both research and practical applications.





1. Introduction

People in the Information Age read reviews from online review websites when deciding to buy a product or use a service. The proliferation of such reviews has driven research on opinion mining (Hu and Liu, 2006; Pang and Lee, 2008), where the ultimate goal is to glean information from multiple reviews so that users can make decisions more effectively. Opinion mining has assumed several facets in its history: among others, sentiment analysis (Pang et al., 2002), which reduces a single review to a sentiment label; opinion extraction (Mukherjee and Liu, 2012), which produces a list of aspect-sentiment pairs representing the opinions mentioned in the reviews; and, most notably, opinion summarization (Wang and Ling, 2016), which creates a textual summary of the opinions found in multiple reviews about a certain product or service. Opinion summarization is arguably the most effective solution for opinion mining, especially when assisting the user in making decisions. Specifically, textual opinion summaries provide users with information that is both more concise and more comprehensible than the alternatives. Thus, opinion mining research in the IR community has shifted its focus towards opinion summarization in recent years (see Table 1).

The task of summarizing opinions in multiple reviews can be divided into two subtasks: opinion retrieval and summary generation. Opinion retrieval selects opinions from the reviews that are salient and thus need to be included in the summary. Summary generation produces, given the retrieved opinions, a textual summary that is concise yet informative and comprehensible, so that users can read it and make decisions effectively. The summary can be generated from scratch with possibly novel tokens (i.e., abstractive summarization; Ganesan et al., 2010; Chu and Liu, 2019) or composed of spans of text directly extracted from the input (i.e., extractive summarization; Hu and Liu, 2004; Angelidis and Lapata, 2018). Traditionally, these subtasks correspond to a pipeline of natural language generation models (McKeown, 1992; Carenini et al., 2006; Wang and Ling, 2016), where opinion retrieval and summary generation are treated as content selection and surface realization tasks, respectively. Thanks to advancements in neural networks, most recent methods use an end-to-end approach (Chu and Liu, 2019; Bražinskas et al., 2020) where both opinion retrieval and summary generation are performed by a single model optimized to produce well-formed and informative summaries.

There are two broad types of challenges in opinion summarization: annotated data scarcity and usability. Because review–summary pairs are expensive to create, annotated datasets are scarce. Yet the exceptional performance of neural networks for text summarization is mostly driven by large-scale supervised training (Rush et al., 2015; Zhang et al., 2020), which makes opinion summarization challenging. The second challenge – usability – stems from a number of practical requirements for industrial applications. First, for real-world products and services we often need to summarize many thousands of reviews. This is largely infeasible due to the high computational and memory costs of modelling that many reviews with neural architectures (Beltagy et al., 2020). Second, state-of-the-art text summarizers are prone to hallucinations (Maynez et al., 2020); that is, a summarizer might generate a summary with information not covered by the input reviews, thus misinforming the user. Third, generic summaries often cannot address specific user needs, which calls for ways to learn summarizers that produce personalized summaries.

These challenges open exciting avenues for developing new opinion summarization methods. In this light, the aim of the tutorial is to inform interested researchers and practitioners, especially in opinion mining and text summarization, about recent and ongoing efforts to improve the state of the art and make opinion summarization systems useful in real-world scenarios. The tutorial will equip the audience with the methods, ideas, and related work needed to address these challenges.

2. Tutorial Content and Outline

The tutorial will be 3 hours long and consist of the following five parts, which we describe in detail below.

2.1. Part I: Introduction [30 min]

Opinion summarization (Hu and Liu, 2006; Titov and McDonald, 2008; Kim et al., 2011) focuses on summarizing opinionated text, such as customer reviews, and has been actively studied by researchers from the natural language processing and data mining communities for decades. There are two major types of opinion summaries: non-textual summaries, such as aggregated ratings (Lu et al., 2009), aspect-sentiment tables (Titov and McDonald, 2008), and opinion clusters (Hu and Liu, 2004); and textual summaries, which often consist of a short text. Compared to non-textual summaries, which may confuse users due to their complex formats, textual summaries are considered much more user-friendly (Murray et al., 2017). Thus, in recent years, research interest in opinion summarization has shifted considerably towards textual opinion summaries. This tutorial likewise focuses on recent solutions for generating textual opinion summaries.

Like single-document summaries (Rush et al., 2015; See et al., 2017), textual opinion summaries can be either extractive or abstractive. However, unlike single-document summarization, opinion summarization can rarely rely on gold-standard summaries at training time due to the lack of large-scale training examples in the form of review–summary pairs. Meanwhile, the prohibitively large number of redundant input reviews poses additional challenges for the task.

In this part of the tutorial, we will first describe the opinion summarization task, its history, and the major challenges that come with the task. We will then provide a brief overview of existing opinion summarization solutions.

Pre-Neural Solutions
  Extractive: LexRank (Erkan and Radev, 2004), TextRank (Mihalcea and Tarau, 2004), MEAD (Carenini et al., 2006), Wang (Wang et al., 2014)
  Abstractive: Opinosis (Ganesan et al., 2010), SEA (Carenini et al., 2006), Gerani (Gerani et al., 2014)
Autoencoders
  Extractive: MATE+MT (Angelidis and Lapata, 2018), Mukherjee (Mukherjee et al., 2020), ASPMEM (Zhao and Chaturvedi, 2020), QT (Angelidis et al., 2021)
  Abstractive: MeanSum (Chu and Liu, 2019), Coavoux (Coavoux et al., 2019), OpinionDigest (Suhara et al., 2020), RecurSum (Isonuma et al., 2021), MultimodalSum (Im et al., 2021), COOP (Iso et al., 2021)
Synthetic Training
  Copycat (Bražinskas et al., 2020), DenoiseSum (Amplayo and Lapata, 2020), MMDS (Shapira and Levy, 2020), Elsahar (Elsahar et al., 2021), Jiang (Jiang et al., 2021), PlanSum (Amplayo et al., 2021), TransSum (Wang and Wan, 2021), AceSum (Amplayo et al., 2021), ConsistSum (Ke et al., 2022), LSARS (Pan et al., 2020)
Low-Resource Learning
  Wang (Wang and Ling, 2016), FewSum (Bražinskas et al., 2020), AdaSum (Bražinskas et al., 2022), PASS (Oved and Levy, 2021), SelSum (Bražinskas et al., 2021), CondaSum (Amplayo and Lapata, 2021), Wei (Wei et al., 2021)
Table 1. Opinion summarization solutions covered in this tutorial. A dagger denotes that the solution also leverages weak supervision.

2.2. Part II: Solutions To Data Scarcity [90 min]

In this part of the tutorial, we will present multiple existing opinion summarization models, as summarized in Table 1. These models attempt to solve the annotated data scarcity problem and fall into four categories: pre-neural models, autoencoder-based models, models that use synthetic data, and models that leverage low-resource annotated data.

2.2.1. Autoencoders [30/90 min]

Due to the lack of training examples, one major approach is to use autoencoders for unsupervised opinion summarization. An autoencoder consists of an encoder that transforms the input into a latent representation and a decoder that attempts to reconstruct the original input, trained with a reconstruction objective. Autoencoders have a wide range of applications in both the computer vision and NLP communities (Hinton et al., 2011; Kingma and Welling, 2014; Bowman et al., 2016). They also help models obtain better text representations, enabling easier text clustering, aggregation, and selection, and thus benefit both extractive and abstractive solutions. In this tutorial, we will first introduce the basics of autoencoders and then describe how to use them for both extractive and abstractive opinion summarization.
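To make the reconstruction objective concrete, the sketch below trains a linear autoencoder with a one-dimensional bottleneck by plain gradient descent. This is a didactic toy in pure Python, not one of the neural summarizers covered in the tutorial (those use sequence encoders and decoders); the data, learning rate, and epoch count are illustrative choices.

```python
import random

def train_autoencoder(data, dim=2, lr=0.005, epochs=500, seed=0):
    """Linear autoencoder with a 1-D bottleneck, trained by gradient
    descent on the reconstruction (squared-error) objective."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1) for _ in range(dim)]  # encoder weights
    v = [rng.uniform(-0.1, 0.1) for _ in range(dim)]  # decoder weights
    for _ in range(epochs):
        for x in data:
            h = sum(wi * xi for wi, xi in zip(w, x))       # encode to scalar
            x_hat = [vi * h for vi in v]                   # decode
            err = [xi - xhi for xi, xhi in zip(x, x_hat)]  # reconstruction error
            # gradient steps on decoder, then encoder
            v = [vi + lr * 2 * ei * h for vi, ei in zip(v, err)]
            grad_h = sum(ei * vi for ei, vi in zip(err, v))
            w = [wi + lr * 2 * grad_h * xi for wi, xi in zip(w, x)]
    return w, v

def reconstruction_loss(data, w, v):
    """Mean squared reconstruction error over the dataset."""
    total = 0.0
    for x in data:
        h = sum(wi * xi for wi, xi in zip(w, x))
        total += sum((xi - vi * h) ** 2 for xi, vi in zip(x, v))
    return total / len(data)

# Points lying on the line y = 2x can be compressed to one dimension losslessly.
data = [(1.0, 2.0), (0.5, 1.0), (1.5, 3.0), (2.0, 4.0)]
w, v = train_autoencoder(data)
loss = reconstruction_loss(data, w, v)
```

The latent scalar `h` plays the same role as the latent text representation in neural summarizers: inputs that reconstruct well from a compact code are easier to cluster and aggregate.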

2.2.2. Synthetic Dataset Creation [30/90 min]

The supervised training of high-capacity models on large datasets containing hundreds of thousands of document–summary pairs is critical to the recent success of deep learning techniques for abstractive summarization (Rush et al., 2015; See et al., 2017). The absence of large-scale human-written summaries calls for creative ways to synthesize datasets for supervised training of abstractive summarization models. Customer reviews, available in large quantities, can be used to create such synthetic datasets. These are created by sampling one review as a pseudo-summary and then selecting or generating a subset of reviews to be paired with it as input. The summarizer is then trained in a supervised manner to predict the pseudo-summary given the input reviews. This self-supervised approach, as shown in a number of works (Zhang et al., 2020, inter alia), is effective for training summarizers to generate abstractive opinion summaries. In this tutorial, we will introduce various techniques to create synthetic datasets, contrast them, and present results achieved by different works.

2.2.3. Low-Resource Learning [30/90 min]

Modern deep learning methods rely on large amounts of annotated data for training. Unlike synthetic datasets, which are automatically created from customer reviews, annotated datasets require expensive human effort. Consequently, only datasets with a handful of human-written summaries are available, which has led to a number of few-shot models. These models alleviate annotated data scarcity using specialized mechanisms, such as fine-tuning a subset of parameters and ranking summary candidates. An alternative to human-written summaries are editor-written summaries that are scraped from the web and linked to customer reviews. This setup is challenging because each summary can have hundreds of associated reviews. In this tutorial, we will present both few-shot methods and methods that scale to hundreds of input reviews.

2.3. Part III: Improving Usability [30 min]

In order to make opinion summarizers more useful in industrial settings, a number of features need to be improved. In this part of the tutorial, we will discuss the following three major features and recent solutions the community has proposed:


  • Scalability: The ability to handle a massive number of input reviews. To handle large-scale input, the ability to retrieve salient information, e.g., reviews or opinions, becomes an important yet challenging feature for opinion summarization solutions.

  • Input Faithfulness: The ability of a summarizer to generate summaries whose content is supported by the input reviews. In other words, the summarizer should not confuse entities or introduce novel content into summaries.

  • Controllability: The ability to produce constrained summaries, such as a hotel summary that only includes room cleanliness or a product summary that only covers the negative opinions.
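As a toy illustration of controllability, one simple baseline is to filter the input opinions by aspect before summarizing them. The keyword lexicon below is a hypothetical stand-in for the learned aspect classifiers used in actual controllable systems:

```python
# Illustrative keyword lexicon -- not taken from any published system.
ASPECT_KEYWORDS = {
    "cleanliness": {"clean", "dirty", "spotless", "dusty"},
    "staff": {"staff", "friendly", "helpful", "rude"},
}

def filter_by_aspect(sentences, aspect):
    """Keep only sentences mentioning the requested aspect, so a
    downstream summarizer sees aspect-constrained input."""
    keywords = ASPECT_KEYWORDS[aspect]
    return [s for s in sentences
            if keywords & {w.strip(".,!?") for w in s.lower().split()}]

sentences = [
    "The room was spotless.",
    "Staff were rude at check-in.",
    "Great view from the balcony.",
]
clean_only = filter_by_aspect(sentences, "cleanliness")
```

Sentiment-constrained summaries can be approximated the same way by filtering on a sentiment classifier's output instead of aspect keywords.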

2.4. Part IV: Evaluation and Resources [20 min]

As is common in other areas of natural language processing, researchers in opinion summarization often rely on automatic metrics. These metrics, such as ROUGE (Lin, 2004), are based on word overlap with a reference summary. However, word-overlap metrics are limited and can correlate weakly with human judgment.
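For concreteness, a simplified ROUGE-1 can be computed from clipped unigram overlap as sketched below. The official ROUGE package additionally handles stemming, multiple references, and further variants (ROUGE-2, ROUGE-L):

```python
from collections import Counter

def rouge_1(candidate, reference):
    """Unigram-overlap ROUGE-1: recall, precision, and F1.
    Simplified sketch -- case folding only, no stemming."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    recall = overlap / sum(ref.values()) if ref else 0.0
    precision = overlap / sum(cand.values()) if cand else 0.0
    f1 = (2 * recall * precision / (recall + precision)
          if recall + precision else 0.0)
    return {"recall": recall, "precision": precision, "f1": f1}

scores = rouge_1("the staff were friendly and helpful",
                 "friendly and helpful staff")
```

Here every reference unigram appears in the candidate (recall 1.0), but the candidate's extra words lower precision, illustrating why longer summaries are not automatically rewarded.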

To address these shortcomings, human evaluation is often used, where human annotators assess various aspects of generated summaries. In this tutorial, we will present different kinds of human evaluation experiments, how they are designed, and how they are performed.

2.5. Part V: Future Work [10 min]

To conclude the tutorial, we will present several notable open questions for opinion summarization, such as the need for additional annotated resources, common issues with generated summaries (e.g., repetition, hallucination, coherence, and factuality), and the ability to handle various types of input data (e.g., images and knowledge bases). Based on these open questions, we will also outline future directions for opinion summarization.

3. Objectives

In this tutorial, we will cover a wide range of opinion summarization techniques, from pre-neural approaches to the most recent advances. In addition, we will introduce the commonly used resources and evaluation metrics. Our goal is to increase the IR community's interest in the opinion summarization problem and help researchers start working on relevant problems.

4. Relevance to the IR community

Sentiment analysis has been a major research area in the IR community. Since the tutorial covers cutting-edge research in the field, it should attract a wide variety of IR researchers and practitioners. We would also like to emphasize that interest in opinion mining and summarization techniques in the IR community has increased rapidly and significantly in recent years. To the best of our knowledge, we are the first to offer a tutorial covering this series of recent opinion summarization approaches; more than 80% of the papers covered were published within the last three years.

Figure 1. The increasing number of opinion summarization papers published in IR-related venues.

5. Broader Impact

Methods presented in the tutorial also have applications beyond customer reviews. A vast number of opinions on topics such as sports, politics, and public events are expressed online, which calls for ways to summarize this information for the benefit of the user. As we will discuss, the methods presented in the tutorial can be applied to other opinion domains, such as social media and blogs.

6. Instructors

Reinald Kim Amplayo is a Research Scientist at Google. He received his PhD from the University of Edinburgh, where his thesis focused on controllable and personalizable opinion summarization. He is a recipient of a best student paper runner-up award at ACML 2018.

Arthur Bražinskas is a Research Scientist at Google working on natural language generation for Google Assistant. His PhD on low- and high-resource opinion summarization is supervised by Ivan Titov and Mirella Lapata at the University of Edinburgh.

Yoshi Suhara is an Applied Research Scientist at Grammarly. Previously, he was a Senior Research Scientist at Megagon Labs, an Adjunct Instructor at New College of Florida, a Visiting Scientist at the MIT Media Lab, and a Research Scientist at NTT Laboratories. He received his PhD from Keio University in 2014. His expertise lies in NLP, especially Opinion Mining and Information Extraction.

Xiaolan Wang is a Senior Research Scientist at Megagon Labs. She received her PhD from University of Massachusetts Amherst in 2019. Her research interests include data integration, data cleaning, and natural language processing. She co-instructed the tutorial, Data Augmentation for ML-driven Data Preparation and Integration, at VLDB 2021.

Bing Liu is a Distinguished Professor of Computer Science at the University of Illinois at Chicago (UIC). He has published extensively in top conferences and journals. He also authored four books about lifelong learning, sentiment analysis and Web mining. Three of his papers received Test-of-Time awards: two from SIGKDD and one from WSDM. He has served as the Chair of ACM SIGKDD from 2013-2017, as program chair of many leading data mining conferences, including KDD, ICDM, CIKM, WSDM, SDM, and PAKDD, and as associate editor of leading journals such as TKDE, TWEB, DMKD and TKDD. He is a recipient of ACM SIGKDD Innovation Award, and he is a Fellow of the ACM, AAAI, and IEEE.


References
  • R. K. Amplayo, S. Angelidis, and M. Lapata (2021) Unsupervised opinion summarization with content planning. In AAAI, Vol. 35, pp. 12489–12497. Cited by: Table 1.
  • R. K. Amplayo, S. Angelidis, and M. Lapata (2021) Aspect-controllable opinion summarization. In EMNLP, pp. 6578–6593. Cited by: Table 1.
  • R. K. Amplayo and M. Lapata (2020) Unsupervised opinion summarization with noising and denoising. In ACL, pp. 1934–1945. Cited by: Table 1.
  • R. K. Amplayo and M. Lapata (2021) Informative and controllable opinion summarization. In EACL, pp. 2662–2672. Cited by: Table 1.
  • S. Angelidis, R. K. Amplayo, Y. Suhara, X. Wang, and M. Lapata (2021) Extractive opinion summarization in quantized transformer spaces. TACL 9, pp. 277–293. Cited by: Table 1.
  • S. Angelidis and M. Lapata (2018) Summarizing opinions: aspect extraction meets sentiment prediction and they are both weakly supervised. In EMNLP, pp. 3675–3686. Cited by: §1, Table 1.
  • I. Beltagy, M. E. Peters, and A. Cohan (2020) Longformer: the long-document transformer. arXiv:2004.05150. Cited by: §1.
  • S. R. Bowman, L. Vilnis, O. Vinyals, A. Dai, R. Jozefowicz, and S. Bengio (2016) Generating sentences from a continuous space. In CoNLL, pp. 10–21. Cited by: §2.2.1.
  • A. Bražinskas, M. Lapata, and I. Titov (2020) Few-shot learning for opinion summarization. In EMNLP, pp. 4119–4135. Cited by: §1, Table 1.
  • A. Bražinskas, M. Lapata, and I. Titov (2020) Unsupervised opinion summarization as copycat-review generation. In ACL, pp. 5151–5169. Cited by: §1, Table 1.
  • A. Bražinskas, M. Lapata, and I. Titov (2021) Learning opinion summarizers by selecting informative reviews. In EMNLP, pp. 9424–9442. Cited by: Table 1.
  • A. Bražinskas, R. Nallapati, M. Bansal, and M. Dreyer (2022) Efficient few-shot fine-tuning for opinion summarization. In EMNLP Findings. Cited by: Table 1.
  • G. Carenini, R. Ng, and A. Pauls (2006) Multi-document summarization of evaluative text. In EACL, pp. 305–312. Cited by: §1, Table 1.
  • E. Chu and P. Liu (2019) Meansum: a neural model for unsupervised multi-document abstractive summarization. In ICML, pp. 1223–1232. Cited by: §1, Table 1.
  • M. Coavoux, H. Elsahar, and M. Gallé (2019) Unsupervised aspect-based multi-document abstractive summarization. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pp. 42–47. Cited by: Table 1.
  • H. Elsahar, M. Coavoux, J. Rozen, and M. Gallé (2021) Self-supervised and controlled multi-document opinion summarization. In EACL, pp. 1646–1662. Cited by: Table 1.
  • G. Erkan and D. R. Radev (2004) LexRank: graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research 22, pp. 457–479. Cited by: Table 1.
  • K. Ganesan, C. Zhai, and J. Han (2010) Opinosis: a graph based approach to abstractive summarization of highly redundant opinions. In COLING, pp. 340–348. Cited by: §1, Table 1.
  • S. Gerani, Y. Mehdad, G. Carenini, R. Ng, and B. Nejat (2014) Abstractive summarization of product reviews using discourse structure. In EMNLP, pp. 1602–1613. Cited by: Table 1.
  • G. E. Hinton, A. Krizhevsky, and S. D. Wang (2011) Transforming auto-encoders. In ICANN, pp. 44–51. Cited by: §2.2.1.
  • M. Hu and B. Liu (2004) Mining and summarizing customer reviews. In KDD, pp. 168–177. External Links: ISBN 1581138881 Cited by: §1, §2.1.
  • M. Hu and B. Liu (2006) Opinion extraction and summarization on the web. In AAAI, Vol. 7, pp. 1621–1624. Cited by: §1, §2.1.
  • J. Im, M. Kim, H. Lee, H. Cho, and S. Chung (2021) Self-supervised multimodal opinion summarization. In ACL, pp. 388–403. Cited by: Table 1.
  • H. Iso, X. Wang, Y. Suhara, S. Angelidis, and W. Tan (2021) Convex aggregation for opinion summarization. In EMNLP Findings, pp. 3885–3903. Cited by: Table 1.
  • M. Isonuma, J. Mori, D. Bollegala, and I. Sakata (2021) Unsupervised Abstractive Opinion Summarization by Generating Sentences with Tree-Structured Topic Guidance. TACL 9, pp. 945–961. External Links: ISSN 2307-387X Cited by: Table 1.
  • W. Jiang, J. Chen, X. Ding, J. Wu, J. He, and G. Wang (2021) Review summary generation in online systems: frameworks for supervised and unsupervised scenarios. ACM Trans. Web 15 (3). External Links: ISSN 1559-1131 Cited by: Table 1.
  • W. Ke, J. Gao, H. Shen, and X. Cheng (2022) ConsistSum: unsupervised opinion summarization with the consistency of aspect, sentiment and semantic. In WSDM, pp. 467–475. Cited by: Table 1.
  • H. D. Kim, K. Ganesan, P. Sondhi, and C. Zhai (2011) Comprehensive review of opinion summarization. Technical report University of Illinois at Urbana-Champaign. Cited by: §2.1.
  • D. P. Kingma and M. Welling (2014) Auto-encoding variational bayes. CoRR abs/1312.6114. Cited by: §2.2.1.
  • C. Lin (2004) ROUGE: a package for automatic evaluation of summaries. In Text Summarization Branches Out, pp. 74–81. Cited by: §2.4.
  • Y. Lu, C. Zhai, and N. Sundaresan (2009) Rated aspect summarization of short comments. In WWW, pp. 131–140. Cited by: §2.1.
  • J. Maynez, S. Narayan, B. Bohnet, and R. McDonald (2020) On faithfulness and factuality in abstractive summarization. In ACL, pp. 1906–1919. Cited by: §1.
  • K. McKeown (1992) Text generation. Cambridge University Press. Cited by: §1.
  • R. Mihalcea and P. Tarau (2004) Textrank: bringing order into text. In EMNLP, pp. 404–411. Cited by: Table 1.
  • A. Mukherjee and B. Liu (2012) Aspect extraction through semi-supervised modeling. In ACL, pp. 339–348. Cited by: §1.
  • R. Mukherjee, H. C. Peruri, U. Vishnu, P. Goyal, S. Bhattacharya, and N. Ganguly (2020) Read what you need: controllable aspect-based opinion summarization of tourist reviews. In SIGIR, pp. 1825–1828. Cited by: Table 1.
  • G. Murray, E. Hoque, and G. Carenini (2017) Chapter 11 - opinion summarization and visualization. In Sentiment Analysis in Social Networks, F. A. Pozzi, E. Fersini, E. Messina, and B. Liu (Eds.), pp. 171–187. External Links: ISBN 978-0-12-804412-4 Cited by: §2.1.
  • N. Oved and R. Levy (2021) PASS: perturb-and-select summarizer for product reviews. In ACL, pp. 351–365. Cited by: Table 1.
  • H. Pan, R. Yang, X. Zhou, R. Wang, D. Cai, and X. Liu (2020) Large scale abstractive multi-review summarization (lsars) via aspect alignment. In SIGIR, pp. 2337–2346. Cited by: Table 1.
  • B. Pang, L. Lee, and S. Vaithyanathan (2002) Thumbs up? sentiment classification using machine learning techniques. In EMNLP, pp. 79–86. Cited by: §1.
  • B. Pang and L. Lee (2008) Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval 2 (1-2), pp. 1–135. Cited by: §1.
  • A. M. Rush, S. Chopra, and J. Weston (2015) A neural attention model for abstractive sentence summarization. In EMNLP, pp. 379–389. Cited by: §1, §2.1, §2.2.2.
  • A. See, P. J. Liu, and C. D. Manning (2017) Get to the point: summarization with pointer-generator networks. In ACL, pp. 1073–1083. Cited by: §2.1, §2.2.2.
  • O. Shapira and R. Levy (2020) Massive multi-document summarization of product reviews with weak supervision. arXiv preprint arXiv:2007.11348. Cited by: Table 1.
  • Y. Suhara, X. Wang, S. Angelidis, and W. Tan (2020) OpinionDigest: a simple framework for opinion summarization. In ACL, pp. 5789–5798. Cited by: Table 1.
  • I. Titov and R. McDonald (2008) A joint model of text and aspect ratings for sentiment summarization. In ACL:HLT, pp. 308–316. Cited by: §2.1.
  • K. Wang and X. Wan (2021) TransSum: translating aspect and sentiment embeddings for self-supervised opinion summarization. In ACL Findings, pp. 729–742. Cited by: Table 1.
  • L. Wang and W. Ling (2016) Neural network-based abstract generation for opinions and arguments. In NAACL, pp. 47–57. Cited by: §1, §1, Table 1.
  • L. Wang, H. Raghavan, C. Cardie, and V. Castelli (2014) Query-focused opinion summarization for user-generated content. In COLING, pp. 1660–1669. Cited by: Table 1.
  • P. Wei, J. Zhao, and W. Mao (2021) A graph-to-sequence learning framework for summarizing opinionated texts. TASLP 29, pp. 1650–1660. Cited by: Table 1.
  • J. Zhang, Y. Zhao, M. Saleh, and P. Liu (2020) PEGASUS: pre-training with extracted gap-sentences for abstractive summarization. In ICML, H. D. III and A. Singh (Eds.), Proceedings of Machine Learning Research, Vol. 119, pp. 11328–11339. Cited by: §1, §2.2.2.
  • C. Zhao and S. Chaturvedi (2020) Weakly-supervised opinion summarization by leveraging external information. In AAAI, Vol. 34, pp. 9644–9651. Cited by: Table 1.