WSL-DS: Weakly Supervised Learning with Distant Supervision for Query Focused Multi-Document Abstractive Summarization

11/03/2020 · by Md Tahmid Rahman Laskar, et al.

In the Query Focused Multi-Document Summarization (QF-MDS) task, a set of documents and a query are given, and the goal is to generate a summary from these documents based on the given query. However, one major challenge for this task is the lack of labeled training data. To overcome this issue, in this paper we propose a novel weakly supervised learning approach via distant supervision. In particular, we use datasets similar to the target dataset as the training data, where we leverage pre-trained sentence similarity models to generate the weak reference summary of each individual document in a document set from the multi-document gold reference summaries. Then, we iteratively train our summarization model on each single document to alleviate the computational complexity issue that arises when training neural summarization models on multiple documents (i.e., long sequences) at once. Experimental results on the Document Understanding Conferences (DUC) datasets show that our proposed approach sets a new state-of-the-art result in terms of various evaluation metrics.

1 Introduction

With the rapid growth of textual documents on the internet, accessing information from the web has become a challenging issue [43]. Often users want the summary of a topic from various sources to fulfill their information needs [8]. The QF-MDS task deals with such problems where the goal is to summarize a set of documents to answer a given query.

In the QF-MDS task, the summaries generated by the summarizer can be either extractive or abstractive [43, 16]. An extractive summarizer extracts relevant text spans from the source document(s), whereas an abstractive summarizer generates a summary in natural language that may contain words which did not appear in the source document(s) [34, 28, 30]. With the rising popularity of virtual assistants in recent years, there is a growing interest in integrating abstractive summarization capabilities into these systems for natural response generation [31].

One major challenge for the QF-MDS task is that the datasets (e.g., DUC 2005, 2006, 2007) used for such tasks do not contain any labeled training data. Therefore, neural summarization models that leverage supervised training cannot be used on these datasets. Note that for other related tasks [1, 23, 27], how to reduce the demand for labeled data and how to leverage unlabeled data were also identified as major challenges. While using datasets similar to the target dataset as the training data for the QF-MDS task, we find that these datasets only contain multi-document gold summaries. Moreover, the state-of-the-art transformer-based [36] summarization models [24, 18] cannot be used on long documents due to computational complexity [6, 46]. To tackle these issues, we propose a novel weakly supervised approach that utilizes distant supervision to generate a weak reference summary of each single document from the multi-document gold reference summaries. We train our model on each document with weak supervision and find that our proposed approach, which generates abstractive summaries, is very effective for the QF-MDS task. More concretely, we make the following contributions:

  • First, to address the issue of unlabeled individual documents in a training document set, we utilize pre-trained sentence similarity models [25, 19] to generate the weak reference summary of each individual document from the multi-document gold reference summaries.

  • Second, to address the computational issue of training neural models on long documents [46, 6], we propose an iterative approach that adopts a pre-trained single-document generic summarization model, leverages the effectiveness of fine-tuning such models for query focused abstractive summarization [18], and extends it to the QF-MDS task.

  • Experimental results on the DUC 2005-07 datasets show that our proposed approach sets a new state-of-the-art result in terms of various ROUGE scores. As a secondary contribution, we make our source code publicly available at: https://github.com/tahmedge/WSL-DS-COLING-2020.

2 Related Work

Early work on multi-document summarization mostly focused on generic summarization [29], whereas work on QF-MDS had been very limited [43]. Due to the lack of training data for the QF-MDS task, most previous works were based on various unsupervised approaches that could only generate extractive summaries [39, 10, 37, 42, 47, 38, 26, 8].

To generate abstractive summaries for the QF-MDS task, [5] proposed a transfer learning technique to tackle the issue of no training data. They adopted the Pointer-Generator Network (PGN) [35], pre-trained for the generic abstractive summarization task on a large dataset, to predict the query focused summaries in the target dataset by modifying the attention mechanism of the PGN model. However, their model failed to outperform different extractive approaches in terms of various ROUGE scores [8, 33].

Identifying sentences that are relevant to the query is an important step for the QF-MDS task. For this purpose, various approaches have been utilized, such as counting word overlaps [5] or the Cross-Entropy Method [8]. Though neural models based on supervised training have significantly outperformed various non-neural models for the answer selection task in recent years [17, 19], such neural models have not yet been used effectively for the QF-MDS task due to the absence of labeled data for the relevant sentences in the QF-MDS datasets.

Recently, [9] showed that neural models pre-trained on a large Question Answering (QA) dataset could effectively select answers in other QA datasets. More recently, [41] used such pre-trained answer selection models for the QF-MDS task. In their work, they utilized distant supervision from various QA datasets, using the fine-tuned BERT [7] model to filter out irrelevant sentences from the documents. However, [5] showed that filtering sentences as an early step could lead to performance deterioration in the QF-MDS task. Thus, instead of applying distant supervision to filter out sentences from the document, we apply it to generate the weak reference summary of each unlabeled document in our training datasets. Our proposed weakly supervised learning approach not only allows us to leverage the advantage of fine-tuning pre-trained generic summarization models [18], but also to overcome the limitation of training neural models on long documents [6, 46].

3 Our Proposed Approach

Figure 1: An overview of our model that generates (a) weak reference summary using RoBERTa for (b) iterative fine-tuning using BERTSUM to generate query focused abstractive summaries which are then ranked by (c) RoBERTa. [CLS] and [SEP] are the special tokens used with inputs [7].

Suppose we have a query Q containing m words and a set of documents D = {D_1, D_2, ..., D_n}. For the QF-MDS task, the goal is to generate a summary S containing l words from the document set D for the given query Q.

In Figure 1, we show the overall architecture of our proposed approach. Since there is no training data available for the QF-MDS task, we provide supervised training for our target dataset by using other QF-MDS datasets as the training data. However, the available QF-MDS datasets [8] only contain the gold summaries generated by human experts from multiple documents and do not contain the gold summary of each individual document. Due to the limitations of using neural models on long documents [6, 46], we propose an iterative approach to train our model on each document of a document set. For this purpose, we generate the weak reference summary of each document from the multi-document gold summaries using distant supervision to train our model for the QF-MDS task. Finally, we rank the generated query focused summaries via an answer selection model [19]. In the following, we give a detailed description of our proposed approach.

3.1 Weakly Supervised Learning with Distant Supervision

To generate the weak reference summaries using distant supervision, we utilize the pre-trained RoBERTa model [25] in two steps (see Figure 1a). First, we generate the weak extractive reference summary of each individual document using a RoBERTa sentence similarity model fine-tuned for the answer selection task. Then, we measure the similarity score between each sentence in the human-written (abstractive) multi-document gold summaries and each sentence in the weak extractive reference summary, using a RoBERTa sentence similarity model fine-tuned for the paraphrase identification task. Based on the similarity scores, we select the most relevant sentences from the gold reference summaries as the weak abstractive reference summary for each document. Below we describe these steps in detail.

RoBERTa Answer Selection Model:

In this step, we first generate the initial weak extractive reference summary of each individual document D_i by measuring the relevance between the query and each sentence in D_i. For this purpose, we adopt the RoBERTa sentence similarity model from [19] for its impressive performance on the answer sentence selection task and fine-tune it on the QA-ALL dataset of MS-MARCO [2]. The fine-tuned RoBERTa_MS-MARCO model is then utilized on our training dataset to measure the similarity score between each sentence in the document and the query. Based on the similarity scores, we select the top k most relevant sentences as the weak extractive reference summary. We use k = 3 because extracting only 3 sentences was found effective in different extractive summarizers, such as the LEAD-3 baseline [35, 24] as well as the BERTSUM_EXT model [24].
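To make this step concrete, the sketch below scores each document sentence against the query with a RoBERTa sequence-pair classifier and keeps the top k = 3. This is a minimal illustration, not our exact implementation: "roberta-large" is a placeholder checkpoint standing in for the model fine-tuned on MS-MARCO, and its classification head here is untrained.

```python
# Minimal sketch of weak extractive reference generation. Assumption: a RoBERTa
# cross-encoder with a binary relevance head; "roberta-large" is a placeholder
# for the checkpoint fine-tuned on the QA-ALL dataset of MS-MARCO.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large", num_labels=2)
model.eval()

def weak_extractive_summary(query, sentences, k=3):
    """Return the k document sentences most relevant to the query."""
    scores = []
    with torch.no_grad():
        for sent in sentences:
            inputs = tokenizer(query, sent, return_tensors="pt",
                               truncation=True, max_length=512)
            logits = model(**inputs).logits
            # probability of the "relevant" class serves as the similarity score
            scores.append(torch.softmax(logits, dim=-1)[0, 1].item())
    ranked = sorted(zip(scores, sentences), key=lambda p: p[0], reverse=True)
    return [sent for _, sent in ranked[:k]]
```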

RoBERTa Paraphrase Identification Model:

We provide distant supervision to generate the weak abstractive reference summary by replacing each sentence in the weak extractive reference summary (generated in the previous step) with the most similar sentence found in the multi-document gold summaries. For this purpose, we fine-tune the RoBERTa model for the paraphrase identification task on the MRPC dataset [25]. Then, for each document D_i in a document set D, we utilize the fine-tuned RoBERTa_MRPC paraphrase identification model to replace each sentence in the weak extractive reference summary of D_i with the most similar sentence found in the gold summaries of D (among the sentences not already selected for the same document).
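The replacement logic can be sketched as follows, assuming a hypothetical `score_pair(a, b)` helper that wraps the RoBERTa_MRPC paraphrase model (analogous to the scorer sketched in the previous step):

```python
# Sketch of the extractive-to-abstractive replacement step. Each extractive
# sentence is swapped for its most similar, not-yet-used gold-summary sentence.
# `score_pair` is an assumed wrapper around the RoBERTa_MRPC paraphrase model.

def weak_abstractive_summary(extractive_sents, gold_sents, score_pair):
    chosen, used = [], set()
    for ext_sent in extractive_sents:
        # rank the remaining gold-summary sentences by paraphrase similarity
        candidates = [(score_pair(ext_sent, g), i, g)
                      for i, g in enumerate(gold_sents) if i not in used]
        if not candidates:
            break
        _, best_i, best_g = max(candidates)  # highest similarity score wins
        used.add(best_i)                     # never reuse a gold sentence
        chosen.append(best_g)
    return chosen
```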

3.2 Iterative Fine-Tuning for Multi-Document Summarization

For the QF-MDS task, we adopt the transformer-based [36] BERTSUM model pre-trained for generic abstractive summarization on the CNN/DailyMail dataset [24] to leverage the advantages of fine-tuning it for the query focused abstractive summarization task [18]. However, BERTSUM was trained for the single-document summarization task and considers at most 512 tokens [24, 6, 46]. To address this issue in the multi-document scenario, we take an iterative approach (see Figure 1b). First, we incorporate query relevance by concatenating the query with each document, similar to the work of [18]. Then, we fine-tune BERTSUM using the weak abstractive reference summary to generate the query focused abstractive summary of each document in a document set. The sentences in the generated query focused summaries of each document set are then ranked using the fine-tuned RoBERTa_MS-MARCO answer selection model to select the sentences that are most relevant to the query (see Figure 1c).
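A minimal sketch of how training examples for this iterative step could be prepared is shown below; the field names and whitespace-level truncation are illustrative, not the exact PreSumm input format:

```python
# Sketch of per-document training example construction: the query is prepended
# to each document (following [18]) and paired with that document's weak
# reference summary, so the model never sees more than one document at a time.

def build_training_examples(query, documents, weak_refs, max_tokens=512):
    examples = []
    for doc, weak_ref in zip(documents, weak_refs):
        source = query + " " + doc                      # incorporate query relevance
        source = " ".join(source.split()[:max_tokens])  # rough length cap (words, not subwords)
        examples.append({"src": source, "tgt": weak_ref})
    return examples
```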

4 Experimental Setup

We now describe the datasets used in this paper, followed by the details of our implementation.

Datasets:

We use the DUC 2005, 2006, and 2007 datasets for the QF-MDS task. These datasets contain 50, 50, and 45 document sets respectively, with 32, 25, and 25 documents per document set [8]. Each document set is associated with a query, and the objective is to generate a summary containing at most 250 words from the document set based on the given query. Given the absence of training data, to evaluate our model on each year's dataset we use the datasets from the other two years for training. From each year's training data, we randomly selected 20% of the document sets for validation and used the rest for training.
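This leave-one-year-out setup can be sketched as follows, under an illustrative data structure (a dict mapping each DUC year to its list of document sets):

```python
# Sketch of the cross-year split: the two non-target years form the training
# pool, from which 20% of document sets are randomly held out for validation.
import random

def split_for_target_year(doc_sets_by_year, target_year, val_frac=0.2, seed=0):
    """doc_sets_by_year: e.g. {2005: [...], 2006: [...], 2007: [...]}."""
    pool = [ds for year, sets in doc_sets_by_year.items()
            if year != target_year for ds in sets]
    random.Random(seed).shuffle(pool)
    n_val = int(len(pool) * val_frac)
    # returns (train, validation, test)
    return pool[n_val:], pool[:n_val], doc_sets_by_year[target_year]
```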

Implementation:

For the RoBERTa model, we used its Large version [25, 19] and implemented it using the HuggingFace Transformers library [40]. For fine-tuning the summarization model, we used the BERTSUM_EXT-ABS model (https://github.com/nlpyang/PreSumm) pre-trained on the CNN/DailyMail dataset [24]. While selecting the most relevant sentences as the final query focused summary, we used Trigram Blocking to reduce redundancy [32]. To fine-tune the BERTSUM model, we kept most parameters similar to the original work [24] and ran 50 training steps with a batch size of 200. Among these 50 steps, we selected the step that performed best on the validation set for evaluation. All of our models were run in a multi-GPU setting using 4 NVIDIA V100 GPUs. We report results based on both Recall and F1 scores in terms of the ROUGE-1, ROUGE-2, and ROUGE-SU4 metrics [20]. From now on, we denote ROUGE as R.
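Trigram Blocking itself is straightforward: when assembling the final summary from the ranked sentences, a candidate is skipped if it shares any word trigram with an already selected sentence. A minimal sketch, assuming whitespace tokenization and the 250-word DUC budget:

```python
# Minimal Trigram Blocking [32]: skip any candidate sentence that repeats a
# trigram already present in the summary, curbing redundancy.

def trigrams(text):
    toks = text.lower().split()
    return {tuple(toks[i:i + 3]) for i in range(len(toks) - 2)}

def select_with_trigram_blocking(ranked_sentences, max_words=250):
    summary, seen = [], set()
    for sent in ranked_sentences:   # assumed sorted by query relevance
        tg = trigrams(sent)
        if tg & seen:               # overlaps an earlier sentence: skip
            continue
        summary.append(sent)
        seen |= tg
        if sum(len(s.split()) for s in summary) >= max_words:
            break
    return summary
```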

(a) F1 Score:

Model             | DUC 2005            | DUC 2006            | DUC 2007
                  | R-1   R-2   R-SU4   | R-1   R-2   R-SU4   | R-1   R-2   R-SU4
CES-50 [8] *      | 37.78 7.45  13.02   | 40.47 9.13  14.73   | 42.86 11.34 16.53
QUERYSUM [41] *   | -     -     -       | 41.6  9.5   15.3    | 43.3  11.6  16.8
DUAL-CES [33] *   | 38.08 7.54  13.17   | 41.23 9.47  14.97   | 43.24 11.78 16.83
PQSUM_EXT *       | 37.52 7.84  13.29   | 40.68 9.29  14.66   | 42.57 11.20 15.98
PQSUM_ABS         | 38.35 7.94  13.44   | 40.87 9.43  14.83   | 42.17 10.82 15.98
PQSUM_WSL-DS      | 40.32 9.17  14.73   | 43.49 10.78 16.45   | 44.72 12.44 17.72

(b) Recall:

Model             | DUC 2005            | DUC 2006            | DUC 2007
                  | R-1   R-2   R-SU4   | R-1   R-2   R-SU4   | R-1   R-2   R-SU4
CES-50 [8] *      | 40.35 7.94  13.91   | 43.01 9.69  15.65   | 45.45 12.02 17.54
RSA [5]           | 39.82 6.98  15.73   | 42.89 8.73  17.75   | 43.92 10.13 18.54
DUAL-CES [33] *   | 40.82 8.07  14.13   | 43.94 10.09 15.96   | 46.02 12.53 17.91
PQSUM_EXT *       | 37.55 7.84  13.31   | 40.41 9.22  14.56   | 42.41 11.08 15.92
PQSUM_ABS         | 38.36 7.92  13.43   | 40.59 9.39  14.73   | 42.05 10.79 15.91
PQSUM_WSL-DS      | 40.36 9.17  14.74   | 43.22 10.70 16.35   | 44.61 12.40 17.66

Table 1: Performance comparisons in terms of (a) F1 and (b) Recall. '*' denotes an extractive model.
Model                               | Recall         | F1             | Statistically Significant
PQSUM_WSL-DS                        | 42.73          | 42.84          |
without Distant Supervision         | 41.77 (-2.25%) | 41.88 (-2.24%) | No, based on paired t-test
without Trigram Blocking            | 40.92 (-4.24%) | 41.01 (-4.27%) | No, based on paired t-test
without Weakly Supervised Learning  | 40.01 (-6.37%) | 40.12 (-6.35%) | Yes, based on paired t-test

Table 2: Ablation test results based on the average R-1 score. '-' denotes deterioration from the original model.

5 Results and Discussions

We now analyze the performance of our proposed model by comparing it with other models (see Table 1). We also perform an ablation test to further investigate its effectiveness (see Table 2). We denote our approach of using pre-trained models (RoBERTa and BERTSUM) for Query focused SUMmary generation via Weakly Supervised Learning with Distant Supervision (WSL-DS) as PQSUM_WSL-DS. For performance comparisons, we use two baselines that utilize neither weak supervision nor fine-tuning. Both baselines use the BERTSUM [24] model pre-trained on the CNN/DailyMail dataset: one for extractive summarization (PQSUM_EXT) and the other for abstractive summarization (PQSUM_ABS). These baselines generate the summaries of all documents in a document set, which are then ranked using RoBERTa_MS-MARCO. Moreover, we compare our model with four recent works: i) CES-50 [8], ii) RSA [5], iii) QUERYSUM [41], and iv) DUAL-CES [33].

Performance Comparisons:

From Table 1(a), we find that our PQSUM_WSL-DS model sets a new state-of-the-art on all datasets based on the F1 metric for all ROUGE scores. Specifically, based on R-1, it improves over DUAL-CES [33] by 5.88% on DUC 2005, and over QUERYSUM [41] by 4.54% and 3.28% on DUC 2006 and 2007 respectively. From Table 1(b), we find that our model also sets a new state-of-the-art in terms of R-2 Recall on DUC 2005 and 2006, but fails to outperform DUAL-CES [33] on DUC 2007. In comparison with the abstractive RSA model [5], our model outperforms it on all datasets in terms of both R-1 and R-2 Recall, but fails to outperform it in R-SU4. Moreover, we find based on a paired t-test that weakly supervised learning significantly outperforms the baselines in terms of both Recall and F1.
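As a check, these relative improvements follow directly from the R-1 F1 scores in Table 1(a); for DUC 2005:

$$\frac{40.32 - 38.08}{38.08} \approx 0.0588 = 5.88\%$$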

Ablation Test:

The results of our ablation test, based on the average of R-1 scores across all datasets, are shown in Table 2. We find that performance deteriorates if we exclude Distant Supervision by removing the RoBERTa_MRPC model, as well as if Trigram Blocking is not used. Moreover, performance degrades significantly if the summary is generated by only ranking the sentences in the documents using the fine-tuned RoBERTa_MS-MARCO model, without utilizing Weakly Supervised Learning.
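The percentage deteriorations in Table 2 are relative to the full model's scores; for example, for Recall without Weakly Supervised Learning:

$$\frac{42.73 - 40.01}{42.73} \approx 0.0637 = 6.37\%$$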

6 Conclusions and Future Work

In this paper, we propose a novel weakly supervised approach for the Query Focused Multi-Document Abstractive Summarization task to tackle the issue of no available labeled training data for such tasks. We also propose an iterative approach to address the computational problem that occurs while training neural models on long documents [15, 6, 46]. Experimental results on three datasets show that our proposed approach sets a new state-of-the-art result on various evaluation metrics. In the future, we will apply our models to more tasks, such as information retrieval applications [11, 12, 44, 13], sentiment analysis [22, 45], learning from imbalanced or unlabeled datasets [21, 3, 4], and automatic chart question answering [14].

Acknowledgements

We gratefully thank the area chair(s) and the reviewers for their excellent review comments. This research is supported by the Natural Sciences & Engineering Research Council (NSERC) of Canada, the York Research Chairs (YRC) program, and an ORF-RE (Ontario Research Fund-Research Excellence) award in BRAIN Alliance. We acknowledge Compute Canada for providing the computing resources.

References

  • [1] J. Allan, J. Aslam, N. Belkin, C. Buckley, J. Callan, B. Croft, S. Dumais, N. Fuhr, D. Harman, D. J. Harper, et al. (2003) Challenges in information retrieval and language modeling: report of a workshop held at the center for intelligent information retrieval, University of Massachusetts Amherst, September 2002. In ACM SIGIR Forum, Vol. 37, pp. 31–47.
  • [2] P. Bajaj, D. Campos, N. Craswell, L. Deng, J. Gao, X. Liu, R. Majumder, A. McNamara, B. Mitra, T. Nguyen, et al. (2016) MS MARCO: a human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268.
  • [3] M. S. Bari, S. Joty, and P. Jwalapuram (2019) Zero-resource cross-lingual named entity recognition. arXiv preprint arXiv:1911.09812.
  • [4] M. S. Bari, M. T. Mohiuddin, and S. Joty (2020) MultiMix: a robust data augmentation strategy for cross-lingual NLP. arXiv preprint arXiv:2004.13240.
  • [5] T. Baumel, M. Eyal, and M. Elhadad (2018) Query focused abstractive summarization: incorporating query relevance, multi-document coverage, and summary length constraints into seq2seq models. arXiv preprint arXiv:1801.07704.
  • [6] I. Beltagy, M. E. Peters, and A. Cohan (2020) Longformer: the long-document transformer. arXiv preprint arXiv:2004.05150.
  • [7] J. Devlin, M. Chang, K. Lee, and K. Toutanova (2019) BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 4171–4186.
  • [8] G. Feigenblat, H. Roitman, O. Boni, and D. Konopnicki (2017) Unsupervised query-focused multi-document summarization using the cross entropy method. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 961–964.
  • [9] S. Garg, T. Vu, and A. Moschitti (2019) TANDA: transfer and adapt pre-trained transformer models for answer sentence selection. arXiv preprint arXiv:1911.04118.
  • [10] A. Haghighi and L. Vanderwende (2009) Exploring content models for multi-document summarization. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pp. 362–370.
  • [11] X. Huang and Q. Hu (2009) A bayesian learning approach to promoting diversity in ranking for biomedical information retrieval. In Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 307–314.
  • [12] X. Huang, F. Peng, D. Schuurmans, N. Cercone, and S. E. Robertson (2003) Applying machine learning to text segmentation for information retrieval. Information Retrieval 6 (3-4), pp. 333–362.
  • [13] X. Huang, M. Zhong, and L. Si (2005) York University at TREC 2005: genomics track. In Proceedings of the Fourteenth Text REtrieval Conference, TREC.
  • [14] D. H. Kim, E. Hoque, and M. Agrawala (2020) Answering questions about charts and generating visual explanations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1–13.
  • [15] N. Kitaev, L. Kaiser, and A. Levskaya (2019) Reformer: the efficient transformer. In International Conference on Learning Representations.
  • [16] S. Kulkarni, S. Chammas, W. Zhu, F. Sha, and E. Ie (2020) AQuaMuSe: automatically generating datasets for query-based multi-document summarization. arXiv preprint arXiv:2010.12694.
  • [17] M. T. R. Laskar, E. Hoque, and J. Huang (2019) Utilizing bidirectional encoder representations from transformers for answer selection task. In Proceedings of the V AMMCS International Conference: Extended Abstract, pp. 221.
  • [18] M. T. R. Laskar, E. Hoque, and J. Huang (2020) Query focused abstractive summarization via incorporating query relevance and transfer learning with transformer models. In Canadian Conference on Artificial Intelligence, pp. 342–348.
  • [19] M. T. R. Laskar, J. X. Huang, and E. Hoque (2020) Contextualized embeddings based transformer encoder for sentence similarity modeling in answer selection task. In Proceedings of The 12th Language Resources and Evaluation Conference, pp. 5505–5514.
  • [20] C. Lin (2004) ROUGE: a package for automatic evaluation of summaries. In Text Summarization Branches Out, Barcelona, Spain, pp. 74–81.
  • [21] Y. Liu, A. An, and X. Huang (2006) Boosting prediction accuracy on imbalanced datasets with SVM ensembles. In Proceedings of the 10th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD, pp. 107–118.
  • [22] Y. Liu, X. Huang, A. An, and X. Yu (2007) ARSA: a sentiment-aware model for predicting sales performance using blogs. In Proceedings of the 30th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 607–614.
  • [23] Y. Liu, X. Huang, A. An, and X. Yu (2008) Modeling and predicting the helpfulness of online reviews. In Proceedings of the 8th IEEE International Conference on Data Mining, pp. 443–452.
  • [24] Y. Liu and M. Lapata (2019) Text summarization with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pp. 3721–3731.
  • [25] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov (2019) RoBERTa: a robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
  • [26] S. Ma, Z. Deng, and Y. Yang (2016) An unsupervised multi-document summarization framework based on neural document model. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pp. 1514–1523.
  • [27] J. Miao, J. X. Huang, and Z. Ye (2012) Proximity-based rocchio's model for pseudo relevance. In Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 535–544.
  • [28] R. Nallapati, B. Zhou, C. dos Santos, C. Gulcehre, and B. Xiang (2016) Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pp. 280–290.
  • [29] M. T. Nayeem, T. A. Fuad, and Y. Chali (2018) Abstractive unsupervised multi-document summarization using paraphrastic sentence fusion. In Proceedings of the 27th International Conference on Computational Linguistics, pp. 1191–1204.
  • [30] P. Nema, M. M. Khapra, A. Laha, and B. Ravindran (2017) Diversity driven attention model for query-based abstractive summarization. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pp. 1063–1072.
  • [31] K. Nishida, I. Saito, K. Nishida, K. Shinoda, A. Otsuka, H. Asano, and J. Tomita (2019) Multi-style generative reading comprehension. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2273–2284.
  • [32] R. Paulus, C. Xiong, and R. Socher (2018) A deep reinforced model for abstractive summarization. In International Conference on Learning Representations.
  • [33] H. Roitman, G. Feigenblat, D. Cohen, O. Boni, and D. Konopnicki (2020) Unsupervised dual-cascade learning with pseudo-feedback distillation for query-focused extractive summarization. In Proceedings of The Web Conference 2020, pp. 2577–2584.
  • [34] A. M. Rush, S. Chopra, and J. Weston (2015) A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 379–389.
  • [35] A. See, P. J. Liu, and C. D. Manning (2017) Get to the point: summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pp. 1073–1083.
  • [36] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008.
  • [37] X. Wan and J. Xiao (2009) Graph-based multi-modality learning for topic-focused multi-document summarization. In Proceedings of the 21st International Joint Conference on Artificial Intelligence, pp. 1586–1591.
  • [38] X. Wan and J. Zhang (2014) CTSUM: extracting more certain summaries for news articles. In Proceedings of the 37th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 787–796.
  • [39] D. Wang, S. Zhu, T. Li, Y. Chi, and Y. Gong (2008) Integrating clustering and multi-document summarization to improve document understanding. In Proceedings of the 17th ACM Conference on Information and Knowledge Management, pp. 1435–1436.
  • [40] T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, et al. (2019) HuggingFace's transformers: state-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.
  • [41] Y. Xu and M. Lapata (2020) Query focused multi-document summarization with distant supervision. arXiv preprint arXiv:2004.03027.
  • [42] J. Yao, X. Wan, and J. Xiao (2015) Compressive document summarization via sparse optimization. In Proceedings of the 24th International Conference on Artificial Intelligence, pp. 1376–1382.
  • [43] J. Yao, X. Wan, and J. Xiao (2017) Recent advances in document summarization. Knowledge and Information Systems 53 (2), pp. 297–336.
  • [44] X. Yin, J. X. Huang, Z. Li, and X. Zhou (2013) A survival modeling approach to biomedical search result diversification using wikipedia. IEEE Transactions on Knowledge and Data Engineering 25 (6), pp. 1201–1212.
  • [45] X. Yu, Y. Liu, X. Huang, and A. An (2012) Mining online reviews for predicting sales performance: a case study in the movie domain. IEEE Transactions on Knowledge and Data Engineering 24 (4), pp. 720–734.
  • [46] M. Zaheer, G. Guruganesh, A. Dubey, J. Ainslie, C. Alberti, S. Ontanon, P. Pham, A. Ravula, Q. Wang, L. Yang, et al. (2020) Big Bird: transformers for longer sequences. arXiv preprint arXiv:2007.14062.
  • [47] S. Zhong, Y. Liu, B. Li, and J. Long (2015) Query-oriented unsupervised multi-document summarization via deep learning model. Expert Systems with Applications 42 (21), pp. 8146–8155.