to assist users in consuming lengthy documents effectively. Generally, summarization methods can be divided into extractive summarization and abstractive summarization. The former selects sentences from the original document to compose the summary [21, 22], while the latter generates the summary text word by word based on an understanding of the document [14, 26, 28]. In recent years, with the renaissance of neural networks, deep learning based methods have achieved state-of-the-art results on standard benchmarks for both abstractive and extractive summarization [27, 8, 13, 1].
Dialogue summarization has also attracted much attention from the research community, thanks to the development of automatic speech recognition techniques. Different from summarizing plain documents, the goal of dialogue summarization is to generate summaries for dialogues. Exemplar applications can be found in meetings, customer service, and judicial trials. However, there exist two special challenges for dialogue summarization. First, multi-role dialogue is more complicated due to the interactions among the different parties; enhanced representation of the atomic components of the dialogue (e.g., utterances and roles) is a prerequisite for optimizing summary generation.
Second, beyond the fact that multi-role dialogues are noisy and tedious in many circumstances (e.g., a 1-2 hour judicial trial can contain 10-30 thousand words), there sometimes exists a logical/factual gap between the input dialogue and the expected output summary. For instance, in the court debate scenario, the judge summarizes the case narrative based not only on facts recognized from the court debate during the trial but also on the evidence or materials submitted by the litigants. Similarly, in the medical inquiry process between doctor and patient, the doctor writes the medical record based on their conversation about symptoms along with a series of medical reports (e.g., blood test, X-Ray, and CT results). In such contexts, current summarization frameworks [28, 24], which enforce alignment between the input (the dialogue discourse) and the output (which carries extra knowledge from external resources), can produce mismatches and misalignments in the critical factual aspects and logic of the generated summary; i.e., semantic/statistical alignment can hardly fill the logical/factual gaps in dialogue summarization.
Motivated by this observation, in this paper we particularly investigate the factual inconsistency problem and propose a novel framework, Dialogue Inspectional Summarization (DIS), under both non-pretraining and pretraining settings. Specifically, in the non-pretraining setting, we design a hierarchical dialogue encoder involving role information to accommodate long context and multiple turns among multiple roles. Rather than directly aligning the input dialogue and its summary within the generation framework, we additionally propose two auxiliary tasks trained jointly: Expectant Factual Aspect Regularization (EFAR), which estimates the factual aspects to be covered in the summary so that the model emphasizes factual coverage and logical reasoning; and Missing Factual Entity Discrimination (MFED), which predicts the missing aspects, thereby discovering and flagging the factual gap between the input and the output. DIS allows users (e.g., judges and doctors) to further improve the summary by consulting other materials (e.g., evidence) according to the detected missing aspect-based entities. In the pretraining setting, we further investigate the factual inconsistency problem of pre-trained summarization models by equipping DIS with the pre-trained model PEGASUS.
Figure 1, for instance, shows a judicial trial transcript and the summary generated by our model. The predicted aspects and the missing aspects are produced by the two auxiliary tasks, respectively. Such complementary results help professional users better understand the generated content in terms of factual aspect coverage and possible omissions that are not mentioned in the input dialogue but are essential for writing a comprehensive summary.
To sum up, our contributions are as follows:
To the best of our knowledge, this work is the first attempt at dialogue inspectional summarization (DIS), addressing the factual inconsistency between the input dialogue and the expected output summary. We conduct experiments and validate our hypothesis in the judicial trial scenario.
We present an end-to-end DIS framework in both non-pretraining and pretraining settings, which is supervised by two auxiliary tasks: EFAR and MFED. Comprehensive experiments demonstrate that the proposed model can generate a more readable summary with high coverage of factual aspects as well as informing users with potential missing facts detected from the input dialogue.
We benchmark the DIS dataset in the judicial domain, where the input is the real civil trial debate data. The summaries come from the factfinding paragraphs extracted from the judgment documents of the corresponding cases. The factual aspect annotation is conducted on the output summary by five legal experts. To motivate other scholars to investigate this novel and essential problem, we will make the experiment dataset publicly available (while removing the sensitive information).
II Problem Formulation
Dialogue inspectional summarization is the task of condensing the factual information mentioned in the dialogue to a shorter version summary, along with two auxiliary tasks: Expectant Factual Aspect Regularization (EFAR) and Missing Factual Entity Discrimination (MFED).
Formally, each case contains a multi-role dialogue D and an inspectional summary Y. The dialogue D = (u_1, ..., u_n) includes n utterances, where each utterance u_i is composed of a sequence of words (w_{i,1}, ..., w_{i,|u_i|}) and its speaker role r_i. The summary of the dialogue is Y = (y_1, ..., y_T), where T is the number of tokens.
During the annotation process, with the assistance of domain experts, we define a set of factual aspects A and a set of factual entities E. For each sample, we annotate an expectant value for each aspect in A and a missing value for each entity in E. Specifically, the aspect expectant value indicates whether the aspect should be considered in the case. The entity missing value indicates whether the corresponding information is related to the case but was omitted in the dialogue, thus leading to factual inconsistency between the dialogue and the inspectional summary.
In more detail, the 12 defined factual aspects are Limitation of Action, Notice of Repayment, Written Loan Agreement, Nature of Loan, Guarantee Liability, Agreed Rate/Interest, Repayment Period, Loan Period, Breach Clause, Repayment Behavior, Delivery, and Debt. The 14 defined factual entities are Loan Amount, Loan Period, Loan Start Date, Loan End Date, Repayment Date, Repayment of Principal, Repayment of Interest, Penalty/Overdue Interest, Outstanding Principal, Delivery Date, Delivery Amount, Annual Interest Rate, Monthly Interest Rate, and Overdue Interest Rate. For more details, refer to the two complete samples released from our dataset: https://github.com/anonymous-tmp/anonymous-1.
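The annotation scheme above can be sketched as binary label vectors over the aspect and entity inventories. The following snippet is illustrative only; the field layout and helper function are our own assumptions, not the released dataset format:

```python
# Hypothetical encoding of one case's annotations as binary vectors:
# 1 = aspect expected in the summary / entity missing from the dialogue.
ASPECTS = ["Limitation of Action", "Notice of Repayment", "Written Loan Agreement",
           "Nature of Loan", "Guarantee Liability", "Agreed Rate/Interest",
           "Repayment Period", "Loan Period", "Breach Clause",
           "Repayment Behavior", "Delivery", "Debt"]
ENTITIES = ["Loan Amount", "Loan Period", "Loan Start Date", "Loan End Date",
            "Repayment Date", "Repayment of Principal", "Repayment of Interest",
            "Penalty/Overdue Interest", "Outstanding Principal", "Delivery Date",
            "Delivery Amount", "Annual Interest Rate", "Monthly Interest Rate",
            "Overdue Interest Rate"]

def make_labels(expectant_aspects, missing_entities):
    """Turn sets of aspect/entity names into 12- and 14-dim binary vectors."""
    aspect_vec = [1 if a in expectant_aspects else 0 for a in ASPECTS]
    entity_vec = [1 if e in missing_entities else 0 for e in ENTITIES]
    return aspect_vec, entity_vec

aspect_vec, entity_vec = make_labels({"Debt", "Delivery"}, {"Loan Start Date"})
```

These vectors are what the two auxiliary tasks (EFAR and MFED) are trained to predict.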
III Methodology

In this section, we introduce our dialogue summary generation framework, Dialogue Inspectional Summarization (DIS). DIS exploits domain knowledge to relieve the factual inconsistency problem and improve the credibility of the generated summary. We first describe it in a non-pretraining setting, which consists of a Dialogue Encoder and an Inspectional Decoder. Then we introduce the encoder-decoder model in a pre-trained setting. Third, we introduce how to combine the non-pretraining and pretraining summarization models with two auxiliary tasks: (1) the Expectant Factual Aspect Regularizer and (2) the Missing Factual Entity Discriminator. Finally, the objective function used for parameter optimization is described.
III-A Dialogue Encoder
Considering the contextual structure of dialogues, we employ hierarchical encoders with integrated role information for dialogue representation learning. We first encode each utterance and then encode the entire dialogue. At the utterance level, we use an utterance encoder, denoted Enc_u, to represent the local semantics of each utterance:

h_{i,j} = Enc_u(e(w_{i,j}), h_{i,j-1})

where h_{i,j} is the encoder hidden state corresponding to the j-th token of utterance u_i, e(w_{i,j}) is the dense word embedding, and j ranges up to L_u, the maximum utterance length.

In general, utterances from different speakers carry different angles of factual information in a multi-role dialogue. For example, in a trial debate, the plaintiff tends to state the facts while the defendant denies them; meanwhile, the presiding judge asks questions about factual details to advance the trial. Inspired by these distinct functions, we integrate role information into the utterance representation. Specifically, we apply a role embedding layer to learn a representation r_i for the speaker role of each utterance (presiding judge, plaintiff, or defendant):

v_i = [u_i; r_i]

where v_i represents the local semantics of utterance u_i under the specific speaker role, and the utterance hidden state u_i is the last state of Enc_u, i.e., u_i = h_{i,L_u}.

We then employ a dialogue encoder, denoted Enc_d, to learn the contextual representation of each utterance at the dialogue level:

g_i = Enc_d(v_i, g_{i-1})

where i ranges up to L_d, the maximum number of utterances in a dialogue. The dialogue embedding z is the last state of Enc_d, i.e., z = g_{L_d}.

In our implementation, both Enc_u and Enc_d are single-layer bidirectional LSTMs (we also tested attentional LSTM and Transformer encoders, but they did not give better performance). We concatenate the forward and backward LSTM states to represent each encoder hidden state, e.g., h_{i,j} = [h_{i,j}^fwd; h_{i,j}^bwd].
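The hierarchical encoder described above can be sketched in PyTorch as follows. This is a minimal sketch under our own assumptions about wiring and dimensions (e.g., taking the last LSTM state as the utterance vector, and a role-embedding size matching the utterance vector), not the authors' released code:

```python
import torch
import torch.nn as nn

class HierDialogueEncoder(nn.Module):
    """Sketch of the hierarchical dialogue encoder: a word-level BiLSTM per
    utterance, a role embedding concatenated to each utterance vector, and
    an utterance-level BiLSTM over the whole dialogue."""
    def __init__(self, vocab_size, n_roles, emb_dim=300, hid=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.role_emb = nn.Embedding(n_roles, 2 * hid)
        self.utt_enc = nn.LSTM(emb_dim, hid, batch_first=True, bidirectional=True)
        self.dlg_enc = nn.LSTM(4 * hid, hid, batch_first=True, bidirectional=True)

    def forward(self, tokens, roles):
        # tokens: (n_utts, max_utt_len), roles: (n_utts,)
        word_states, _ = self.utt_enc(self.word_emb(tokens))  # (n_utts, L, 2*hid)
        utt_vecs = word_states[:, -1, :]                      # last state per utterance
        utt_vecs = torch.cat([utt_vecs, self.role_emb(roles)], dim=-1)
        dlg_states, _ = self.dlg_enc(utt_vecs.unsqueeze(0))   # (1, n_utts, 2*hid)
        return dlg_states.squeeze(0)                          # one state per utterance
```

With hid=128, the concatenated bidirectional states are 256-dimensional, matching the hidden size reported in the implementation details.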
III-B Inspectional Decoder
We propose an Inspectional Decoder for generating summaries. The Inspectional Decoder generates the summary via a pointing mechanism, while the Expectant Factual Aspect Regularizer tries to ensure factual consistency at the aspect level.

Much as humans tend to write a draft before checking its factual aspects, we treat the Inspectional Decoder as a drafter whose states are further regularized by the aspect-aware module. At each decoding timestep t, a context vector c_t is computed to represent the attended information from the dialogue:

c_t = Σ_i a_{t,i} g_i

where a_t is the attention distribution over the encoder states. The attention probability is calculated from the decoder state s_t:

a_{t,i} = softmax_i( v^T tanh(W_g g_i + W_s s_t + b_attn) )

The context vector and the decoder state are concatenated to produce the probability distribution over the fixed-size vocabulary:

P_vocab = softmax( V'(V[s_t; c_t] + b) + b' )

where V, V', b, and b' are learnable parameters. Following the pointer-generator approach, we employ the pointing mechanism in the decoder to copy tokens directly from the dialogue in addition to generating tokens from the fixed vocabulary. The generation probability p_gen is computed as:

p_gen = σ( w_c^T c_t + w_s^T s_t + w_x^T x_t + b_ptr )

where w_c, w_s, w_x, and b_ptr are learnable parameters and x_t is the decoder input taken from the reference summary Y during training, with t ranging up to the maximum decoding length. With the pointing mechanism, we calculate the probability of token w as:

P(w) = p_gen · P_vocab(w) + (1 - p_gen) · Σ_{i: w_i = w} a_{t,i}

With the pointing mechanism integrated, the decoder can directly copy tokens from the dialogue, making the generated summary more accurate and relevant in its factual details.
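The copy/generate mixture of the pointing mechanism can be sketched as a single function. Shapes and names here are our own assumptions for illustration, not the authors' implementation:

```python
import torch

def final_distribution(p_vocab, attn, p_gen, src_ids, vocab_size):
    """Mix the vocabulary distribution with the copy distribution:
    P(w) = p_gen * P_vocab(w) + (1 - p_gen) * sum of attention mass on
    source positions where w occurs."""
    # p_vocab: (vocab_size,), attn: (src_len,), src_ids: (src_len,) long ids
    copy_dist = torch.zeros(vocab_size).scatter_add_(0, src_ids, attn)
    return p_gen * p_vocab + (1 - p_gen) * copy_dist
```

Note that `scatter_add_` accumulates attention over repeated source tokens, so a token appearing several times in the dialogue receives the sum of its attention weights.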
III-C Pre-trained Encoder-Decoder Model for Dialogue Inspectional Summarization
In this section, we further investigate whether the proposed dialogue inspectional summarization can benefit the pre-trained encoder-decoder model.
Specifically, we take PEGASUS as the encoder-decoder model. PEGASUS is a state-of-the-art abstractive summarization model based on the Transformer, pre-trained on a large-scale corpus with a self-supervised task of generating selected important sentences. We directly replace the multi-role dialogue encoder with the PEGASUS Transformer-based encoder, and the inspectional decoder with the PEGASUS decoder.
Formally, given the input dialogue D, the PEGASUS encoder concatenates the utterances in the dialogue and encodes them into deep contextual hidden states:

H = Encoder_PEGASUS([u_1; u_2; ...; u_n])

We generate the decoder state at step t based on the dialogue hidden states and the previously generated summary text:

s_t = Decoder_PEGASUS(H, y_{<t})
Based on the multi-role dialogue encoder-decoder model and pre-trained encoder-decoder models, we next introduce how to relieve the factual inconsistency problem with two auxiliary tasks.
III-D Expectant Factual Aspect Regularizer
When writing formal documents such as legal verdicts, people carefully review their drafts to ensure there are no inconsistencies in the expectant aspects. Inspired by this process, we propose the Expectant Factual Aspect Regularizer to verify consistency at the aspect level.
For each aspect a_k, we use an aspect encoder to obtain its semantic embedding q_k. The encoder is a single-layer bidirectional LSTM over the aspect description text:

q_k = BiLSTM(a_k)

We then produce a weighted sum of the decoder hidden states, the aspect-aware decoder state d_k:

d_k = Σ_t α_{k,t} s_t,  k = 1, ..., K

where K is the number of factual aspects and the attention weights α_{k,t} are computed with additive attention:

α_{k,t} = softmax_t( v_a^T tanh(W_q q_k + W_d s_t) )

Finally, we feed the aspect-aware decoder states into a 3-layer classifier to predict the expectant aspects:

p_asp = σ( MLP([d_1; ...; d_K]) )

where MLP denotes the stacked linear layers and p_asp indicates the predicted probability of each aspect being expected.
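The EFAR head described above, additive attention from each aspect embedding over the decoder states followed by a 3-layer classifier, can be sketched as follows. Layer sizes and the concatenation of aspect-aware states are our assumptions:

```python
import torch
import torch.nn as nn

class AspectRegularizer(nn.Module):
    """Sketch of EFAR: each aspect embedding attends over the decoder
    states (additive attention), and the concatenated aspect-aware states
    feed a 3-layer classifier predicting expectant-aspect probabilities."""
    def __init__(self, n_aspects=12, dim=256):
        super().__init__()
        self.w_a = nn.Linear(dim, dim)
        self.w_s = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, 1)
        self.mlp = nn.Sequential(
            nn.Linear(n_aspects * dim, dim), nn.ReLU(),
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, n_aspects))

    def forward(self, aspect_embs, dec_states):
        # aspect_embs: (K, dim), dec_states: (T, dim)
        scores = self.v(torch.tanh(
            self.w_a(aspect_embs).unsqueeze(1) + self.w_s(dec_states).unsqueeze(0)))
        alpha = torch.softmax(scores.squeeze(-1), dim=-1)   # (K, T) attention
        aspect_states = alpha @ dec_states                  # (K, dim) aspect-aware states
        return torch.sigmoid(self.mlp(aspect_states.reshape(-1)))
```

Each output element is the predicted probability that the corresponding factual aspect should appear in the summary.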
III-E Missing Factual Entity Discriminator
As mentioned in Section I, there often exist factual inconsistencies between the dialogue and the reference summary. In the Seq2Seq framework, these inconsistencies mislead the decoder into generating incorrect factual details. The Missing Factual Entity Discriminator tries to detect the inconsistencies, thus mitigating the problem.
Motivated by this observation, we design the discriminator to classify whether each factual entity e_j is missing from the conversation. In real applications, human summarizers can consult these predictions to complete the generated text using additional information.

Intuitively, we view inconsistency as the factual divergence between the source and target content, so we use a bilinear layer as the classifier. Let f_s(·) and f_e(·) denote transformations by linear layers:

p_ent,j = σ( f_s(z)^T W_b f_e(e_j) )

where p_ent,j indicates the missing probability of entity e_j, z is the dialogue representation, and W_b is a trainable parameter matrix.
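The bilinear discriminator above can be sketched as follows; the dimensions and the use of a pooled dialogue vector are our assumptions for illustration:

```python
import torch
import torch.nn as nn

class EntityDiscriminator(nn.Module):
    """Sketch of MFED: score each factual-entity embedding against a pooled
    dialogue representation with a bilinear layer, yielding per-entity
    missing probabilities."""
    def __init__(self, n_entities=14, dim=256):
        super().__init__()
        self.f_src = nn.Linear(dim, dim)   # transform of the dialogue vector
        self.f_ent = nn.Linear(dim, dim)   # transform of the entity embeddings
        self.bilinear = nn.Bilinear(dim, dim, 1)

    def forward(self, dialogue_vec, entity_embs):
        # dialogue_vec: (dim,), entity_embs: (n_entities, dim)
        src = self.f_src(dialogue_vec).repeat(entity_embs.size(0), 1)
        ent = self.f_ent(entity_embs)
        return torch.sigmoid(self.bilinear(src, ent)).squeeze(-1)
```

An output element close to 1 flags the corresponding entity as likely missing from the dialogue, so the user can supply it from external materials.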
III-F Parameter Optimization
For the summary generation task, we use the negative log-likelihood loss computed over all timesteps:

L_gen = - (1/T) Σ_{t=1}^{T} log P(y_t*)

where P(y_t*) is the generation probability of the target token y_t*. Finally, we combine the generation loss with the classification losses of the two auxiliary tasks, L_EFAR and L_MFED, as the final loss to optimize the parameters:

L = L_gen + λ1 · L_EFAR + λ2 · L_MFED

where λ1 and λ2 are hyperparameters that balance the main task and the auxiliary tasks.
We name the non-pretraining model DIS and the pretraining-based DIS PEGASUS-DIS.
IV Experimental Settings
For the experiment, we collected more than 300,000 court trial records of civil Private Loan Dispute (PLD) cases, paired with their corresponding verdicts. Legal experts helped us define 12 factual aspects and 14 types of factual entities, which are discriminative in PLD cases, and labeled 45,531 cases; i.e., for each case, we annotated the relevance of each aspect and the consistency of each entity. The aspect annotation indicates whether it will be considered in the court investigation. The entity annotation indicates whether its information is consistent between the trial and the verdict. In the inter-rater reliability measurement, the annotators achieved a Kappa coefficient of 0.9 (almost perfect agreement) after training.
We constructed the dataset by extracting the factfinding part from the verdicts and filtered out cases with formulaic, fixed-pattern factfinding. The statistics of the dataset are shown in Table I. The dataset was then split into three subsets for training (90%, 27,432), validation (5%, 1,524), and testing (5%, 1,525). We also release two complete samples of our dataset (https://github.com/anonymous-tmp/anonymous-1) to show how judges hear a case and summarize the factfinding in practice.
Table I: Dataset statistics.

| avg. trial length (tokens)           | 723.5 |
| avg. factfinding length (utterances) | 38.6  |
| avg. factfinding length (tokens)     | 158.8 |
| avg. expectant aspects               | 5.4   |
| avg. missing entities                | 2.2   |
IV-B Implementation Details
We use PyTorch 1.4 and AllenNLP 1.0 to implement our models. For DIS, the vocabulary size is constrained to 20,000 for both source and target. For the trial text, we limit the number of utterances to 50 and the utterance length to 30. We train the word embeddings from scratch with an embedding dimension of 300; consistent with prior reports, we find no further improvement from pre-trained word embeddings. We set all encoder hidden states to 256 dimensions. For PEGASUS-DIS, we initialize the encoder-decoder with a PEGASUS model that has 8 layers and 6 heads.
We use AdamW to optimize our models. For DIS and PEGASUS-DIS, the learning rate is set to 0.001 and 0.0003, respectively. For training, we set the batch size to 16 and 8 for the two models, respectively, and group instances into batches according to their padding lengths. For DIS, the two hyperparameters in the final objective function are tuned on the validation set; for PEGASUS-DIS, they are both set to 0.1.
We train our models on a single NVIDIA GeForce RTX 3090 GPU and use the loss on the validation set for early stopping. The two models take about 9 and 50 epochs, respectively, to converge. For inference, we use beam search with a beam size of 5 to generate the summary. We set the maximum decoding length to 200, which exceeds 80 percent of all summary lengths in our dataset.
We are planning to release the dataset and code for further research on the problem.
| Model        | ROUGE-1 | ROUGE-2 | ROUGE-L | BERTScore P | BERTScore R | BERTScore F1 |
| DIS w/o EFAR | 47.56   | 27.28   | 38.09   | 85.19       | 81.21       | 83.04        |
| DIS w/o MFED | 48.53   | 28.09   | 39.24   | 85.25       | 81.67       | 83.32        |
To demonstrate the effectiveness of DIS, we implement the following summarization models for comprehensive comparison:
LEAD3 is a basic extractive baseline that selects the first three utterances as the summary.

TextRank is an unsupervised extractive method based on sentence importance ranking. In the experiment, we extract the top-4 most important sentences from the trial.

S2S is a basic encoder-decoder model for sequence-to-sequence learning. We concatenate all the utterances in the trial as the input sequence.

S2SAttn extends the Seq2Seq model with an attentional decoder, which utilizes context information via an attention distribution over the encoder hidden states.

PGN extends the attentional Seq2Seq model with a copy mechanism. The decoder calculates a generation probability to choose between generating tokens from the fixed vocabulary and copying tokens from the source.

FastSum jointly combines content selection and rewriting in a single summarizer. We adjust sentence-level content selection to utterance-level in the experiment.

DAH is a model driven by hierarchical attention, well suited to the discourse structure of judicial trials.

PEGASUS is a Transformer-based abstractive summarization model pre-trained on large text corpora with a self-supervised objective that removes/masks important sentences from the original documents and then generates them. For a fair comparison, we fine-tune the pre-trained PEGASUS on the PLD dataset with exactly the same hyperparameters used to train PEGASUS-DIS.
IV-D Evaluation Metrics
To validate the overall quality of the generated factfinding, we report results on three evaluation metrics covering different perspectives: (1) ROUGE, (2) BERTScore, and (3) human evaluation.
ROUGE is the standard metric for summarization tasks. We report ROUGE-1, ROUGE-2, and ROUGE-L scores, computed with the files2rouge package (https://github.com/pltrdy/files2rouge). From a linguistic perspective, these indicators reflect the informativeness and fluency of the summary.
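As a toy illustration of what ROUGE measures, a simplified ROUGE-1 F1 over unigram overlap can be computed as follows (the actual experiments use the files2rouge package, which handles stemming and other details this sketch ignores):

```python
from collections import Counter

def rouge1_f(candidate, reference):
    """Simplified ROUGE-1: F1 over unigram overlap between candidate
    and reference summaries (whitespace tokenization, no stemming)."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((cand & ref).values())   # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

For example, "the loan was repaid" against the reference "the loan was not repaid" yields precision 1.0, recall 0.8, and F1 8/9.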
BERTScore is a recently proposed evaluation metric for text generation tasks. Instead of the exact token matching in ROUGE, BERTScore computes token similarity with contextualized embeddings from pre-trained language models such as BERT. In the experiment, we use a 12-layer model to compute the score.
We perform human evaluation to verify whether performance on automatic metrics correlates with human-perceived quality. We hire five annotators well trained in reading legal documents and randomly select 100 cases from the test set for evaluation. For each case, we show the judicial trial, the factfinding in the verdict, and the results generated by S2SAttn, DAH, and DIS. The annotators are asked to score each factfinding on a 1-to-5 scale for three indicators: Logical Completeness, Factual Consistency, and Readability.
V Result Discussion
V-A Overall Performance
We evaluate the overall performance comprehensively from the following perspectives: (1) comparison against baselines on automatic metrics, (2) the effectiveness of proposed auxiliary tasks, and (3) human perceived quality of generated summaries.
Comparisons against baselines. As shown in Table II, we first report the experimental results against baselines on automatic metrics. Following the settings in Sections IV-C and IV-D, we make the following observations: (1) Our DIS model outperforms all baselines without pre-training on ROUGE-1, ROUGE-2, and BERTScore, though it is slightly worse than DAH on ROUGE-L. (2) The fine-tuned PEGASUS model outperforms all models without pre-training, which shows the advantages of large pre-trained models. (3) However, the PEGASUS-DIS model still gives better results than PEGASUS on all automatic evaluation metrics, which demonstrates that logical/factual gaps may still exist in pre-trained models and that the DIS framework remains useful in this setting. (4) The two basic extractive methods, LEAD3 and TextRank, perform rather poorly on the PLD dataset because the factual details scattered across the dialogue are difficult to represent with a few selected utterances. DAH performs better than the other abstractive baselines, indicating that hierarchical representation learning is critical for summarizing dialogues.
Human perceived quality. As shown in Table III, we report the human evaluation results. First, DIS outperforms the selected baselines without pre-training on all indicators, especially factual consistency, while still leaving a gap to the reference summary. It is noteworthy that S2SAttn performs poorly on the factuality-related indicators despite acceptable performance on the automatic metrics; this indicates that existing evaluation metrics are insufficient to capture differences in factuality. Second, the fine-tuned PEGASUS model again demonstrates its advantages on all three indicators, showing that pre-trained models not only improve fluency but also relieve logical/factual inconsistency to some extent. Third, the proposed PEGASUS-DIS still outperforms PEGASUS on the three indicators, which again shows that the DIS framework benefits pre-trained abstractive summarization models. Finally, as for readability, all results score relatively high without obvious distinction, reflecting that generating factually consistent summaries is a more significant challenge than generating fluent ones.
Effectiveness of auxiliary tasks. We conduct an ablation test to assess the contribution of each component of the DIS framework. The fourth section of Table II shows the results. We retain the aspect embedding to compute the aspect-aware decoder state when removing the EFAR task. Its removal causes a 4.8% relative increase in error (RIE) in ROUGE-2, a larger impact than removing the MFED task (2.0% RIE in ROUGE-2). This indicates that both auxiliary tasks contribute to informativeness, with EFAR playing the more significant role.
V-B Case Study
Figures 3 and 4 show two case studies comparing the summary generated by the proposed method against the best baseline and the reference summary. As shown in Figure 3, our method outperforms PEGASUS in generating high coverage of accurate factual content by correctly predicting the expectant factual aspects. For example, the proposed PEGASUS-DIS predicts Repayment Behavior correctly and generates the corresponding text in the summary. Also, one factual entity is missing in the reference summary (Loan Start Date); DIS detects the inconsistency and informs the users. Similar observations can be found in Figure 4: with the help of the EFAR auxiliary task, PEGASUS-DIS generates factual content, e.g., Repayment Behavior and Delivery, that the fine-tuned PEGASUS misses.
V-C Error Analysis
From the sampling statistics and human evaluation feedback, we summarize the major problems in DIS results as follows: (1) 25.5% of the errors come from failure to distinguish between multiple facts; lengthy and colloquial dialogue makes it difficult for the model to capture subtle differences and determine the correct facts. (2) 18.1% of the errors are attributable to adverse results on the auxiliary tasks; EFAR and MFED predictions are tied to the summary content, so false aspect predictions can mislead the model into generating false facts. (3) We also find that 14.6% of the errors happen when the factual information must be obtained through a certain degree of reasoning, e.g., when the summary should infer a specific date or calculate a total amount from fragmented information in the dialogue. Introducing numerical calculation and reasoning capabilities into text generation is thus a promising direction for future improvement.
VI Related Work
Generally, neural text summarization follows two strategies: extractive and abstractive. Extractive summarization merges selected utterances from the dialogue to form the summary, so it fails to produce consistent results. For dialogue inspectional summarization, the abstractive strategy is more suitable because it can generate novel expressions to summarize the interactions. In the past few years, neural abstractive text summarization has received much attention from the research community. The Seq2Seq framework enables summarizers to generate diverse, relevant, and readable results. Since the first successful neural attempts, many techniques have been proposed to improve accuracy and versatility. The most influential work includes the copying mechanism [12, 28], AMR parsing, reinforcement learning approaches [24, 2], coverage, and combinations of extraction and abstraction [8, 13, 1].
Our work belongs specifically to dialogue summarization, which presents new challenges for abstractive summarization: information in the interactions is more difficult to capture than in documents with sequential logic, and the lack of suitable corpora restricts research progress. Recent work has adapted practices from news summarization to this field and introduced supplementary annotations. For instance, one line of work incorporates dialogue acts to better recognize interactive patterns in meeting summarization; another introduces key point sequence generation to improve the logic and integrity of summaries in the customer service domain; a third attends to the controversy focus space of civil trial dialogues. Unlike these works, our proposed framework focuses on the factual inconsistency problem, aiming to generate a more faithful summary.
More recently, pre-trained models based on the Transformer [5, 29, 19] have shown dominant advantages across language understanding and language generation tasks, e.g., machine translation, dialogue generation, and text summarization. For generation-oriented pre-training, a core problem is how to construct the self-supervised task from an unlabeled corpus. UniLM proposes a unified model for both language understanding and generation tasks by designing different self-attention masks for different kinds of tasks. T5 converts various NLP tasks into a unified text-to-text framework and trains them with a sequence-to-sequence model. MASS is a sequence-to-sequence pre-training model that masks a consecutive text fragment in a sentence and predicts the masked tokens with an encoder-decoder model. BART constructs the self-supervised task by corrupting the document with transformations, e.g., token deletion and text infilling, and then recovering the original document. PEGASUS is the state-of-the-art abstractive summarization model, pre-trained on a large-scale corpus by generating selected important sentences as a self-supervised task. Our work further investigates whether such pre-trained abstractive summarization models suffer from the factual inconsistency problem and whether the proposed DIS framework can benefit them.
In this work, we investigate Dialogue Inspectional Summarization (DIS) to address the factual inconsistency problem. We propose DIS as a novel end-to-end dialogue summarization framework under non-pretraining and pretraining settings, supervised with two auxiliary tasks, namely Expectant Factual Aspect Regularization (EFAR) and Missing Factual Entity Discrimination (MFED). The auxiliary tasks align the generated summary with the dialogue at different factual granularities.
For experiments, we benchmark the DIS dataset on judicial trial summarization. Based on comprehensive evaluation and analysis, we demonstrate that the DIS framework can generate a more readable summary with accurate coverage of factual aspects, while informing the user of potential factual inconsistencies for further human intervention.
-  (2018-06) Deep communicating agents for abstractive summarization. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), New Orleans, Louisiana, pp. 1662–1675. External Links: Cited by: §I, §VI.
-  (2018) Fast abstractive summarization with reinforce-selected sentence rewriting. In ACL 2018: 56th Annual Meeting of the Association for Computational Linguistics, Vol. 1, pp. 675–686. Cited by: 6th item, §VI.
A discourse-aware attention model for abstractive summarization of long documents. In NAACL HLT 2018: 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol. 2, pp. 615–621. Cited by: 7th item, §IV-B.
-  (2019) BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT 2019: Annual Conference of the North American Chapter of the Association for Computational Linguistics, pp. 4171–4186. Cited by: §IV-D.
-  (2019) Unified language model pre-training for natural language understanding and generation. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, H. M. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E. B. Fox, and R. Garnett (Eds.), pp. 13042–13054. External Links: Cited by: §VI.
-  (2019) Legal summarization for multi-role debate dialogue via controversy focus mining and multi-task learning. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, CIKM 2019, Beijing, China, November 3-7, 2019, W. Zhu, D. Tao, X. Cheng, P. Cui, E. A. Rundensteiner, D. Carmel, Q. He, and J. X. Yu (Eds.), pp. 1361–1370. External Links: Cited by: §I, §VI.
-  (2021) A survey on dialogue summarization: recent advances and new frontiers. arXiv preprint arXiv:2107.03175. Cited by: §I.
Bottom-up abstractive summarization.
EMNLP 2018: 2018 Conference on Empirical Methods in Natural Language Processing, pp. 4098–4109. Cited by: §I, §VI.
-  (2014) Abstractive summarization of product reviews using discourse structure. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1602–1613. Cited by: §I.
-  (2018) Abstractive dialogue summarization with sentence-gated modeling optimized by dialogue acts. In 2018 IEEE Spoken Language Technology Workshop (SLT), pp. 735–742. Cited by: §I, §VI.
-  (2018) NEWSROOM: a dataset of 1.3 million summaries with diverse extractive strategies. In NAACL HLT 2018: 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol. 1, pp. 708–719. Cited by: §I.
-  (2016) Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Vol. 1, pp. 1631–1640. Cited by: §VI.
-  (2018) A unified model for extractive and abstractive summarization using inconsistency loss. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia, pp. 132–141. Cited by: §I, §VI.
-  (2000) Cut and paste based text summarization. In 1st Meeting of the North American Chapter of the Association for Computational Linguistics. Cited by: §I.
-  (2020) BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, pp. 7871–7880. Cited by: §VI.
-  (2003) Automatic evaluation of summaries using n-gram co-occurrence statistics. In NAACL '03 Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, pp. 71–78. Cited by: §IV-D.
-  (2019) Automatic dialogue summary generation for customer service. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1957–1965. Cited by: §I, §VI.
-  (2015) Toward abstractive summarization using semantic representations. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Denver, Colorado, pp. 1077–1086. Cited by: §VI.
-  (2019) Text summarization with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3730–3740. Cited by: §VI.
-  (2019) Decoupled weight decay regularization. In ICLR 2019 : 7th International Conference on Learning Representations, Cited by: §IV-B.
-  (2004) TextRank: bringing order into texts. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, Barcelona, Spain, July, pp. 404–411. Cited by: §I, 2nd item.
-  (2017) SummaRuNNer: a recurrent neural network based sequence model for extractive summarization of documents. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, S. P. Singh and S. Markovitch (Eds.), pp. 3075–3081. Cited by: §I.
-  (2016) Abstractive text summarization using sequence-to-sequence rnns and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pp. 280–290. Cited by: 4th item.
-  (2018) A deep reinforced model for abstractive summarization. Cited by: §I, §VI.
-  (2020) Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research 21 (140), pp. 1–67. Cited by: §VI.
-  (2015) A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, pp. 379–389. Cited by: §I.
-  (2015) A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685. Cited by: §I, §VI.
-  (2017) Get to the point: summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Vol. 1, pp. 1073–1083. Cited by: §I, §I, §III-B, 5th item, §VI.
-  (2019) MASS: masked sequence to sequence pre-training for language generation. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, K. Chaudhuri and R. Salakhutdinov (Eds.), Proceedings of Machine Learning Research, Vol. 97, pp. 5926–5936. Cited by: §VI.
-  (2014) Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27, pp. 3104–3112. Cited by: 3rd item, §VI.
-  (2016) Modeling coverage for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Berlin, Germany, pp. 76–85. Cited by: §VI.
-  (2020) PEGASUS: pre-training with extracted gap-sentences for abstractive summarization. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, Proceedings of Machine Learning Research, Vol. 119, pp. 11328–11339. Cited by: §I, §III-C, 8th item, §VI.
-  (2020) BERTScore: evaluating text generation with BERT. In ICLR 2020: Eighth International Conference on Learning Representations. Cited by: §IV-D.