Jointly Extracting and Compressing Documents with Summary State Representations

04/03/2019 · by Afonso Mendes, et al. · Priberam, Unbabel Inc.

We present a new neural model for text summarization that first extracts sentences from a document and then compresses them. The proposed model offers a balance that sidesteps the difficulties in abstractive methods while generating more concise summaries than extractive methods. In addition, our model dynamically determines the length of the output summary based on the gold summaries it observes during training and does not require length constraints typical to extractive summarization. The model achieves state-of-the-art results on the CNN/DailyMail and Newsroom datasets, improving over current extractive and abstractive methods. Human evaluations demonstrate that our model generates concise and informative summaries. We also make available a new dataset of oracle compressive summaries derived automatically from the CNN/DailyMail reference summaries.


1 Introduction

Text summarization is an important NLP problem with a wide range of applications in data-driven industries (e.g., news, health, and defense). Single document summarization—the task of generating a short summary of a document preserving its informative content Spärck Jones (2007)—has been a highly studied research topic in recent years Nallapati et al. (2016b); See et al. (2017); Fan et al. (2018); Pasunuru and Bansal (2018).

Now at Google London.

Modern approaches to single document summarization using neural network architectures have primarily focused on two strategies: extractive and abstractive. The former selects a subset of the sentences to assemble a summary Cheng and Lapata (2016); Nallapati et al. (2017); Narayan et al. (2018a, c); the latter generates sentences that do not appear in the original document See et al. (2017); Narayan et al. (2018b); Paulus et al. (2018). Both methods suffer from significant drawbacks: extractive systems are wasteful, since they cannot trim the original sentences to fit into the summary, and they lack a mechanism to ensure overall coherence. In contrast, abstractive systems require natural language generation and semantic representation, problems that are inherently harder to solve than just extracting sentences from the original document.

(ExConSumm Extractive) •  (CNN) A top al Qaeda in the Arabian Peninsula leader–who a few years ago was in a U.S. detention facility–was among five killed in an airstrike in Yemen, the terror group said, showing the organization is vulnerable even as Yemen appears close to civil war.
•  Ibrahim al-Rubaish died Monday night in what AQAP’s media wing, Al-Malahem Media, called a “crusader airstrike.”
(ExConSumm Compressive) •  (CNN) A top al Qaeda in the Arabian Peninsula leader–who a few years ago was in a U.S. detention facility–was among five killed in an airstrike in Yemen, the terror group said, showing the organization is vulnerable even as Yemen appears close to civil war.
•  Ibrahim al-Rubaish died Monday night in what AQAP’s media wing, Al-Malahem Media, called a “crusader airstrike.”
Figure 1: Summaries produced by our model. For illustration, the compressive summary shows the removed spans in strike-through.

In this paper, we present a novel architecture that attempts to mitigate the problems above via a middle ground, compressive summarization Martins and Smith (2009). Our model selects a set of sentences from the input document and compresses them by removing unnecessary words, while keeping the summaries informative, concise and grammatical. We achieve this by dynamically modeling the generated summary using a Long Short-Term Memory network (LSTM; Hochreiter and Schmidhuber, 1997) to produce summary state representations. This state provides crucial information to iteratively increment summaries based on previously extracted information. It also facilitates the generation of variable-length summaries, as opposed to the fixed-length summaries of previous extractive systems Cheng and Lapata (2016); Nallapati et al. (2017); Narayan et al. (2018c); Zhang et al. (2018). Our model can be trained in both extractive (labeling sentences for extraction) and compressive (labeling words for extraction) settings. Figure 1 shows a summary example generated by our model.

Our contributions in this paper are three-fold:

  • we present the first end-to-end neural architecture for EXtractive and COmpressive Neural SUMMarization (dubbed ExConSumm, see §3),

  • we validate this architecture on the CNN/DailyMail and the Newsroom datasets Hermann et al. (2015); Grusky et al. (2018), showing that our model generates variable-length summaries which correlate well with gold summaries in length and are concise and informative (see §5), and

  • we provide a new CNN/DailyMail dataset annotated with automatic compressions for each sentence, and a set of compressed oracle summaries (see §4).

Experimental results show that when evaluated automatically, both the extractive and compressive variants of our model provide state-of-the-art results. Human evaluation further shows that our model is better than previous state-of-the-art systems at generating informative and concise summaries.

2 Related Work

Recent work on neural summarization has mainly focused on sequence-to-sequence (seq2seq) architectures Sutskever et al. (2014), a formulation particularly suited to, and initially employed for, abstractive summarization Rush et al. (2015). However, state-of-the-art results have been achieved by RNN-based extractive methods. These select sentences with an LSTM classifier that predicts a binary label for each sentence Cheng and Lapata (2016), by ranking sentences using reinforcement learning Narayan et al. (2018c), or by training an extractive latent model Zhang et al. (2018). Other methods rely on an abstractive approach with generation strongly conditioned on the source document See et al. (2017). In fact, the best results for abstractive summarization have been achieved with models that are more extractive than abstractive in nature, since most of the words in the summary are copied from the document Gehrmann et al. (2018).

Due to the lack of training corpora, there is almost no work on neural architectures for compressive summarization. Most compressive summarization work has been applied to smaller datasets Martins and Smith (2009); Berg-Kirkpatrick et al. (2011); Almeida and Martins (2013). Other non-neural summarization systems apply this idea to select and compress the summary. Dorr et al. (2003) introduced a method that first extracts the lead sentence of a news article and then uses linguistically motivated heuristics to iteratively trim parts of it. Durrett et al. (2016) learn a system that selects textual units to include in the summary and compresses them by deleting word spans, guided by anaphoric constraints to improve coherence. Recently, Zhang et al. (2018) trained an abstractive sentence compression model using an attention-based seq2seq architecture Rush et al. (2015) to map a sentence in the document selected by the extractive model to a sentence in the summary. However, as the sentences in the document and in the summary are not aligned for compression, their compression model is significantly inferior to their extractive model.

Figure 2: Illustration of our summarization system. The model extracts the most relevant sentences from the document by taking into account the representation of the current sentence, the representation of the previous sentence, the current summary state representation, and the representation of the document. If a sentence is selected, its representation is fed to the summary state LSTM, and we move to the next sentence. If the model is also compressing, the compressive layer selects the words for the final summary (Compressive Decoder). See Figure 3 for details on the decoders.

In this paper, we propose a novel seq2seq architecture for compressive summarization and demonstrate that it avoids the over-extraction of existing extractive approaches Cheng and Lapata (2016); Dlikman and Last (2016); Nallapati et al. (2016a).

Our model builds on recent approaches to neural extractive summarization as a sequence labeling problem, where sentences in the document are labeled to specify whether or not they should be included in the summary Cheng and Lapata (2016); Narayan et al. (2018a). These models often condition their labeling decisions on the document representation only. SummaRuNNer Nallapati et al. (2017) models the summary as the average representation of the positively labeled sentences. However, as we show later, this strategy is not the most adequate for ensuring summary coherence, as it does not take the order of the selected sentences into account. Our approach addresses this problem by maintaining an LSTM cell that dynamically models the generated summary. To the best of our knowledge, our work is the first to use a model that keeps a state of the already generated summary to effectively model variable-length summaries in an extractive setting, and the first to learn a compressive summarizer with an end-to-end approach.

3 Summarization with Summary State Representation

Our model extracts sentences from a given document and further compresses these sentences by deleting words. More formally, we denote a document as a sequence of M sentences, D = (s_1, ..., s_M), and a sentence s_i as a sequence of N words, s_i = (w_i1, ..., w_iN). We write e_ij, e_i and d for the embeddings of words, sentences and the document in a continuous space. We model document summarization as a sequence labeling problem in which the labeler transitions between internal states. Each state is dynamically computed based on the context, and it combines an extractive summarizer followed by a compressive one. First, we encode a document in a multi-level approach, to extract the embeddings of words and sentences ("Document Encoder"). Second, we decode these embeddings using a hierarchical "Decision Decoder." The extractive summarizer labels each sentence s_i with a label y_i ∈ {0, 1}, where y_i = 1 indicates that the sentence should be included in the final summary and y_i = 0 otherwise. An extractive summary is then assembled by selecting all sentences with the label y_i = 1. Analogously, the compressive summarizer labels each word w_ij with a label z_ij ∈ {0, 1}, denoting whether word j of sentence i is included in the summary or not. The final summary is then assembled as the sequence of words w_ij for which y_i = 1 and z_ij = 1. See Figures 2 and 3 for an overview of our model. We next describe each of its components in more detail.
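To make the label-to-summary assembly concrete, here is a minimal Python sketch; the function names and toy data are ours, not from the paper's implementation:

```python
# Hedged sketch: assembling extractive and compressive summaries from
# the binary labels y_i (sentences) and z_ij (words) described above.

def extractive_summary(sentences, y):
    """Keep sentences whose label y_i == 1, preserving document order."""
    return [s for s, keep in zip(sentences, y) if keep == 1]

def compressive_summary(sentences, y, z):
    """For each kept sentence, keep only the words whose label z_ij == 1."""
    summary = []
    for s, keep, word_labels in zip(sentences, y, z):
        if keep == 1:
            summary.append([w for w, zk in zip(s, word_labels) if zk == 1])
    return summary

# Toy document: three sentences, the last is not selected; the second
# is compressed by dropping its last two words.
doc = [["a", "top", "leader", "was", "killed"],
       ["he", "died", "monday", "night"],
       ["unrelated", "detail"]]
y = [1, 1, 0]
z = [[1, 1, 1, 1, 1], [1, 1, 0, 0], [1, 1]]
```

Note that the compressive summary is always a subsequence of the extractive one, which is exactly the middle ground between extraction and abstraction described in the introduction.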

3.1 Document Encoder

The document encoder is a two-layer biLSTM, with one layer encoding each sentence and the second layer encoding the document. The first layer takes as input the word embeddings e_ij for each word w_ij in sentence s_i, and outputs the hidden representation h^w_ij of each word, consisting of the concatenation of a forward and a backward LSTM state (WordEncoder in Figure 2). This layer also outputs a representation e_i for each sentence, corresponding to the concatenation of the last forward and first backward LSTM states. The second layer encodes information about the document and is also a biLSTM, running at the sentence level. It takes as input the sentence representations e_i from the previous layer and outputs a hidden representation h_i for each sentence in the document (SentEncoder in Figure 2). We take the concatenation of the last forward LSTM state over the M sentences and the first backward LSTM state to be the final representation d of the document.

In short, the encoder returns two families of output vectors: h_i, associated with each sentence s_i, and h^w_ij, associated with each word w_ij.
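The wiring of this two-level encoder can be sketched as follows. This is an illustrative toy, with a plain tanh RNN standing in for the LSTM cells and random weights standing in for learned parameters; only the concatenation scheme mirrors the description above:

```python
import numpy as np

def rnn(seq, W, U, h0):
    """Simple tanh RNN standing in for an LSTM; returns all hidden states."""
    h, states = h0, []
    for x in seq:
        h = np.tanh(W @ x + U @ h)
        states.append(h)
    return states

def bi_encode(seq, W, U, dim):
    """Bidirectional pass. Per-step outputs concatenate forward and
    backward states; the sequence representation concatenates the last
    forward and the first backward state, as in the encoder above."""
    fwd = rnn(seq, W, U, np.zeros(dim))
    bwd = rnn(seq[::-1], W, U, np.zeros(dim))[::-1]
    steps = [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]
    rep = np.concatenate([fwd[-1], bwd[0]])
    return steps, rep

rng = np.random.default_rng(0)
d_w, d_h = 4, 3                     # toy embedding / hidden sizes
W1, U1 = rng.normal(size=(d_h, d_w)), rng.normal(size=(d_h, d_h))
W2, U2 = rng.normal(size=(d_h, 2 * d_h)), rng.normal(size=(d_h, d_h))

doc = [[rng.normal(size=d_w) for _ in range(5)] for _ in range(2)]
# First level: words -> word states h^w_ij and sentence representations e_i.
word_states, sent_reps = [], []
for sent in doc:
    steps, e_i = bi_encode(sent, W1, U1, d_h)
    word_states.append(steps)
    sent_reps.append(e_i)
# Second level: sentence representations -> sentence states h_i and document d.
sent_states, doc_rep = bi_encode(sent_reps, W2, U2, d_h)
```

The key point is dimensional: every bidirectional output is twice the hidden size, so the second-level input weights must accept 2 * d_h, and the document representation d is again a 2 * d_h concatenation.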

Figure 3: Decision decoder architecture. The decoder contains an extractive level for sentences (orange) and a compressive level for words (green), each using an LSTM to model the summary state. Red diamond shapes represent the decision variables: y_i = 1 selects sentence s_i and y_i = 0 skips it; likewise, z_ij decides over the words to keep in the summary.

3.2 Decision Decoder

Given that our model operates both at the sentence level and at the word level, the decision decoder maintains two state LSTMs, one per level, as in Figure 3. In the sentence-level decoder, sentences are selected and the state of the summary is updated with the representation of each selected sentence. At the word level, all compressed word representations in a sentence are pushed to the word-level layer. In the compressive decoder, the words that get selected are pushed onto the word-level state, and once the decoder reaches the end of the sentence, it pushes the output representation of the last state onto the sentence-level layer for the next sentence.

Extractive Decoder

The extractive decoder selects the sentences that should go into the summary. For each sentence s_i at time step i, the decoder takes a decision based on the encoder representation h_i and the state of the summary, state_i, computed as follows:

  state_i = LSTM(state_{i-1}, g_{i-1}),   (1)

where the summary state is modeled by an LSTM taking as input the representations g of the already selected and compressed sentences comprising the summary so far. This way, at each point in time, we have a representation of the summary, given by the LSTM, that encodes the state of the summary generated so far, based on the past sentences already processed by the compressive decoder. (When using only the extractive model, the summary state is generated from an LSTM whose inputs correspond to the sentence encoder embeddings h_i instead of the previously generated compressed representations g_i.) The summary representation at step i is then used to determine whether to keep the current sentence in the summary (y_i = 1) or not (y_i = 0). The summarizer state u_i subsumes information about the document, sentence and summary as:

  u_i = tanh(W_u [h_i; d; state_i] + b_u),

where W_u is a model parameter, state_i is the dynamic LSTM state, and b_u is a bias term.

This modeling decision is crucial in order to generate variable length summaries. It captures information about the sentences or words already present in the summary, helping in better understanding the “true” length of the summary given the document.

Finally, the summarizer state u_i is used to compute the probability of the action y_i at time i as:

  p(y_i | s_i, d, state_i) = softmax(W_o u_i + b_o),

where W_o is a model parameter and b_o is a bias term for the summarizer action y_i.

We minimize the negative log-likelihood of the observed labels at training time Dimitroff et al. (2013):

  L_ext(θ_s) = − Σ_{i=1..M} Σ_{c∈{0,1}} μ_c 1[y_i = c] log p(y_i = c | s_i, d, state_i),   (2)

where μ_0 and μ_1 represent the distribution of each class for the given sentences (if μ_0 = 0 or μ_1 = 0, we simply consider the whole term to be zero), 1[·] is the indicator function of the class, M is the number of sentences in the document, and θ_s represents all the training parameters of the sentence encoder/decoder. At test time, the model emits the probability p(y_i = 1 | s_i, d, state_i), which is used as a soft prediction for sequentially extracting sentence s_i. We admit a sentence when this probability exceeds a threshold.
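The extractive decoding loop can be sketched as follows. This is a simplified illustration, not the paper's implementation: a tanh update stands in for the LSTM state, the weights are random stand-ins for learned parameters, and 0.5 is an illustrative admission threshold:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def extract(sent_encs, doc_rep, params, d_h, threshold=0.5):
    """Sequentially label sentences; only selected sentences update the
    recurrent summary state, so the summary length is not fixed."""
    Wu, bu, wo, bo, Ws, Us = params
    state = np.zeros(d_h)
    labels = []
    for h in sent_encs:
        # Summarizer state from sentence, document and summary state.
        u = np.tanh(Wu @ np.concatenate([h, doc_rep, state]) + bu)
        p = sigmoid(wo @ u + bo)          # soft prediction for y_i = 1
        y = int(p > threshold)
        labels.append(y)
        if y:                             # summary state update
            state = np.tanh(Ws @ h + Us @ state)
    return labels

rng = np.random.default_rng(1)
d_h = 4
params = (rng.normal(size=(d_h, 3 * d_h)), rng.normal(size=d_h),
          rng.normal(size=d_h), 0.0,
          rng.normal(size=(d_h, d_h)), rng.normal(size=(d_h, d_h)))
sent_encs = [rng.normal(size=d_h) for _ in range(6)]
doc_rep = rng.normal(size=d_h)
labels = extract(sent_encs, doc_rep, params, d_h)
```

Because the state is only updated by admitted sentences, the loop naturally emits anywhere between zero and M positive labels, which is what allows variable-length summaries.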

Compressive Decoder

Our compressive decoder shares its architecture with the extractive decoder. The compressive layer is triggered every time a sentence is selected for the summary, and is responsible for selecting the words within each selected sentence. In practice, the word-level LSTM (see Figure 3) is applied hierarchically after the sentence-level decoder, using as input the word embeddings collected so far:

  wstate_ij = LSTM(wstate_{i,j-1}, z_{i,j-1} · h^w_{i,j-1}).   (3)

After making the selection decision for all words of a sentence, the final state of the word-level LSTM is fed back to the extractive-level decoder for the next sentence, as depicted in Figure 3.

The word-level summarizer state representation u_ij depends on the encodings of the word, sentence and document (h^w_ij, h_i and d), and on the dynamic LSTM encodings of the summary based on the selected words (wstate_ij) and sentences (state_i):

  u_ij = tanh(W_v [h^w_ij; h_i; d; wstate_ij; state_i] + b_v),   (4)

where W_v is a model parameter and b_v is a bias term. Each action z_ij at time step j is computed by

  p(z_ij | w_ij, s_i, d) = softmax(W_c u_ij + b_c),

with parameter W_c and bias b_c. The final loss for the compressive layer is

  L_comp(θ_w) = Σ_{i=1..M} 1[y_i = 1] L_i(θ_w),   (5)

where θ_w represents the set of all training parameters of the word-level encoder/decoder and L_i is the compressive-layer loss over the N words of sentence s_i:

  L_i(θ_w) = − Σ_{j=1..N} Σ_{c∈{0,1}} μ_c 1[z_ij = c] log p(z_ij = c | w_ij, s_i, d).   (6)

The total final loss is then given by the sum of the extractive and compressive counterparts, L = L_ext + L_comp.
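The word-level loop mirrors the sentence-level one. The sketch below is again illustrative (tanh update in place of the LSTM, random stand-in weights, 0.5 as a toy threshold); its one structural point from the text is that the final word-level state is returned so it can be handed back to the sentence level:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def compress_sentence(word_encs, sent_state, doc_rep, wstate, params,
                      threshold=0.5):
    """Label the words of one selected sentence; kept words update the
    word-level state, whose final value is handed back to the sentence
    level for the next sentence."""
    Wv, bv, wc, bc, Ww, Uw = params
    kept = []
    for hw in word_encs:
        # Word-level summarizer state from word, sentence, document
        # and the running word-level summary state.
        v = np.tanh(Wv @ np.concatenate([hw, sent_state, doc_rep, wstate]) + bv)
        z = int(sigmoid(wc @ v + bc) > threshold)
        kept.append(z)
        if z:                              # kept words update the state
            wstate = np.tanh(Ww @ hw + Uw @ wstate)
    return kept, wstate

rng = np.random.default_rng(2)
d = 4
params = (rng.normal(size=(d, 4 * d)), rng.normal(size=d),
          rng.normal(size=d), 0.0,
          rng.normal(size=(d, d)), rng.normal(size=(d, d)))
word_encs = [rng.normal(size=d) for _ in range(7)]
kept, final_state = compress_sentence(word_encs, rng.normal(size=d),
                                      rng.normal(size=d), np.zeros(d), params)
```

Feeding final_state back into the sentence-level decoder is what lets the next sentence decision condition on the summary as actually compressed, rather than on the full extracted sentences.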

4 Experimental Setup

We mainly used the CNN/DailyMail corpus Hermann et al. (2015) to evaluate our models. We used the standard splits of Hermann et al. (2015) for training, validation, and testing (90,266/1,220/1,093 documents for CNN and 196,961/12,148/10,397 for DailyMail). To evaluate the flexibility of our model, we also evaluated our models on the Newsroom dataset Grusky et al. (2018), which includes articles from a diverse collection of sources (38 publishers) with subsets of different summary styles: extractive (Ext.), mixed (Mixed) and abstractive (Abs.). We used the standard splits of Grusky et al. (2018) for training, validation, and testing (331,778/36,332/36,122 documents for Ext., 328,634/35,879/36,006 for Mixed and 332,554/36,380/36,522 for Abs.). We did not anonymize entities or lowercase tokens.

4.1 Estimating Oracles

Datasets for training extractive summarization systems do not naturally contain sentence/word-level labels. Instead, they are typically accompanied by abstractive summaries from which extraction labels are extrapolated. We create extractive and compressive summaries prior to training using two types of oracles.

We used an extractive oracle to identify the set of sentences which collectively give the highest ROUGE Lin and Hovy (2003) score with respect to the gold summary Narayan et al. (2018c).

To build a compressive oracle, we trained a supervised sentence labeling classifier, adapted from the Transition-Based Chunking Model Lample et al. (2016), to annotate spans in every sentence that can be dropped in the final summary. We used the publicly released set of 10,000 sentence-compression pairs from the Google sentence compression dataset Filippova and Altun (2013); Filippova et al. (2015) for training. After tagging all sentences in the CNN and DailyMail corpora using this compression model, we generated oracle compressive summaries based on the best average of ROUGE-1 (R1) and ROUGE-2 (R2) F scores from the combination of all possible sentences and all removals of the marked compression chunks.
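The oracle search over sentence combinations can be sketched as follows. This toy version scores small subsets by the mean of simplified ROUGE-1/ROUGE-2 F1, using our own minimal n-gram F1 rather than the official ROUGE toolkit, and exhaustive search rather than whatever pruning the real pipeline uses:

```python
from collections import Counter
from itertools import combinations

def ngram_f1(cand, ref, n):
    """Simplified ROUGE-N F1: clipped n-gram overlap between token lists."""
    c = Counter(zip(*[cand[i:] for i in range(n)]))
    r = Counter(zip(*[ref[i:] for i in range(n)]))
    overlap = sum((c & r).values())
    if overlap == 0:
        return 0.0
    prec, rec = overlap / sum(c.values()), overlap / sum(r.values())
    return 2 * prec * rec / (prec + rec)

def oracle_sentences(sents, ref, max_size=3):
    """Pick the sentence subset maximizing the mean of R1 and R2 F1."""
    best, best_score = [], -1.0
    for k in range(1, max_size + 1):
        for combo in combinations(range(len(sents)), k):
            cand = [w for i in combo for w in sents[i]]
            score = (ngram_f1(cand, ref, 1) + ngram_f1(cand, ref, 2)) / 2
            if score > best_score:
                best, best_score = list(combo), score
    return best

sents = [["the", "cat", "sat"], ["dogs", "bark", "loudly"], ["irrelevant"]]
ref = ["the", "cat", "sat", "dogs", "bark"]
```

The compressive oracle extends this same search by also enumerating removals of the marked compression chunks before scoring each candidate.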

To verify the adequacy of our proposed oracles, we show a comparison of their scores in Table 1. Our compressive oracle achieves much better scores than the extractive oracle, because of its capability to make summaries concise. Moreover, the linguistic quality of these oracles is preserved, since the sentence compressor trained on the sentence compression dataset tags entire spans. (We show examples of both oracles in Appendix §A.1.) We believe that our dataset with oracle compression labels will be of significant interest to the sentence compression and summarization community.

Oracle R1 R2 RL
Extractive Oracle 54.67 30.37 50.81
Compressive Oracle 57.12 32.59 53.27
Table 1: Oracle scores on the CNN and DailyMail test sets. We report ROUGE-1 (R1), ROUGE-2 (R2) and ROUGE-L (RL) F1 scores.
Models | CNN (R1/R2/RL) | DailyMail (R1/R2/RL) | Newsroom Ext. (R1/R2/RL) | Newsroom Mixed (R1/R2/RL) | Newsroom Abs. (R1/R2/RL)
LEAD | 29.1/11.1/25.9 | 40.7/18.3/37.2 | 53.1/49.0/52.4 | —/—/— | 13.7/2.4/11.2
Refresh | 30.0/11.7/26.9 | 41.0/18.8/37.7 | —/—/— | —/—/— | —/—/—
ExConSumm Extractive | 32.5/12.6/28.5 | 42.8/19.3/38.9 | 69.4/64.3/68.3 | 31.9/16.3/26.9 | 17.2/3.1/13.6
ExConSumm Compressive | 32.5/12.7/29.2 | 41.7/18.5/38.4 | 68.4/62.9/67.3 | 31.7/16.1/27.0 | 17.1/3.1/14.1
Pointer+Coverage‡ | —/—/— | —/—/— | 39.1/28.0/36.2 | 25.5/11.0/21.1 | 14.7/2.3/11.4
Tan et al. (2017)† | 30.3/9.8/20.0 | —/—/— | —/—/— | —/—/— | —/—/—
Table 2: Results on the CNN, DailyMail and Newsroom test sets. We report ROUGE R1, R2 and RL F1 scores. Extractive systems are in the first block, compressive in the second and abstractive in the third. We use — whenever results are not available. Models marked with † are not directly comparable to ours, as they are based on an anonymized version of the dataset. The model marked with ‡ shows the results for the best configuration of See et al. (2017), referred to as Pointer-N in Grusky et al. (2018), which is trained on the whole Newsroom dataset.

4.2 Training Parameters

The class-distribution parameters of the loss were set separately at the sentence level and at the word level. We used LSTMs for all hidden layers. We performed mini-batch negative log-likelihood training with a batch size of 2 documents for 5 training epochs. We observed convergence of the model between the 2nd and the 3rd epochs; training took around 12 hours on a single GTX 1080 GPU. We evaluated our model on the validation set after every 5,000 batches. We trained with Adam Kingma and Ba (2015). Our system was implemented using DyNet Neubig et al. (2017).

Models | R1 | R2 | RL
LEAD | 39.6 | 17.7 | 36.2
SummaRuNNer | 39.6 | 16.2 | 35.3
Refresh | 40.0 | 18.2 | 36.6
Latent | 41.1 | 18.8 | 37.4
ExConSumm Extractive | 41.7 | 18.6 | 37.8
Latent+Compress | 36.7 | 15.4 | 34.3
ExConSumm Compressive | 40.9 | 18.0 | 37.4
Pointer+Coverage | 39.5 | 17.3 | 36.4
ML + RL | 39.9 | 15.8 | 36.9
Tan et al. (2017) | 38.1 | 13.9 | 34.0
Li et al. (2018) | 39.0 | 17.1 | 35.7
Chen and Bansal (2018) | 40.4 | 18.0 | 37.1
Hsu et al. (2018) | 40.7 | 18.0 | 37.1
Pasunuru and Bansal (2018) | 40.9 | 17.8 | 38.5
Gehrmann et al. (2018) | 41.2 | 18.7 | 38.3
Table 3: Results on the combined CNN/DailyMail test set.

4.3 Model Evaluation

We evaluated summarization quality using F1 ROUGE Lin and Hovy (2003). We report results in terms of unigram and bigram overlap (R1 and R2) as a means of assessing informativeness, and the longest common subsequence (RL) as a means of assessing fluency. (We used pyrouge to compute the ROUGE scores, with the parameters "-a -c 95 -m -n 4 -w 1.2.") In addition to ROUGE, which can be misleading when used as the only means to assess summaries Schluter (2017), we also conducted a question-answering based human evaluation to assess the informativeness of our summaries, i.e., their ability to preserve key information from the document Narayan et al. (2018c). (We used the CNN/DailyMail QA test set of Narayan et al. (2018c) for evaluation; it includes 20 documents with a total of 71 manually written question-answer pairs.) Questions are first written using the gold summary; we then examined how many questions participants were able to answer by reading system summaries alone, without access to the article. (See Appendix §A.2 for more details.) Figure 5 shows a set of candidate summaries along with questions used for this evaluation.

Models | Bounded QA score | Bounded rank | Bounded R1/R2/RL | Unbounded QA score | Unbounded rank | Unbounded R1/R2/RL | Pearson r
LEAD | 25.50 | 4th | 30.9/11.9/29.1 | 36.33 | 5th | 31.6/13.5/29.3 | 0.40
Refresh | 20.88 | 6th | 37.4/17.3/34.8 | 66.34 | 1st | 43.8/25.8/41.6 | 0.60
Latent | 38.45 | 2nd | 38.9/19.6/36.4 | 53.38 | 4th | 40.7/22.0/38.1 | -0.02
ExConSumm Extractive | 36.34 | 3rd | 38.4/18.5/35.9 | 54.93 | 3rd | 40.8/21.0/38.2 | 0.68
ExConSumm Compressive | 39.44 | 1st | 38.8/19.0/37.0 | 57.32 | 2nd | 41.4/22.6/39.1 | 0.72
Pointer+Coverage | 24.51 | 5th | 38.4/19.7/36.7 | 28.73 | 6th | 40.2/21.4/38.0 | 0.30
Table 4: QA evaluations with limited-length (Bounded) and full-length (Unbounded) summaries. We also show ROUGE scores for the summaries being evaluated, and report the Pearson correlation coefficient between the human and predicted summary lengths.

4.4 Model and Baselines

We evaluated our model, ExConSumm, in two settings: Extractive (selects sentences to assemble the summary) and Compressive (selects sentences and compresses them by removing unnecessary spans of words). We compared our models against a baseline (LEAD) that selects the first k leading sentences from each document (following Narayan et al. (2018c) for CNN and DailyMail, and Grusky et al. (2018) for Newsroom), three neural extractive models, and various abstractive models. For the extractive models, we used SummaRuNNer Nallapati et al. (2017), since it shares some similarity with our model; Refresh Narayan et al. (2018c), trained with reinforcement learning; and Latent Zhang et al. (2018), a neural architecture that makes use of latent variables to avoid creating oracle summaries. We further compare against Latent+Compress Zhang et al. (2018), an extension of the Latent model that learns to map extracted sentences to final summaries using an attention-based seq2seq model Rush et al. (2015). All of these models, unlike ours, extract a fixed number of sentences to assemble their summaries. For abstractive models, we compare against the state-of-the-art models Pointer+Coverage See et al. (2017), ML+RL Paulus et al. (2018), and Tan et al. (2017), among others.
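The LEAD baseline is straightforward to state in code (the value of k is dataset-dependent, as noted above; k=3 below is just an example):

```python
def lead(sentences, k=3):
    """LEAD baseline: the first k sentences of the document, in order."""
    return sentences[:k]
```

Its strength on news data comes from the strong lead bias of journalistic writing, which is also why fixed-length extractive systems are hard to beat on DailyMail.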

5 Results

5.1 Automatic Evaluation

Tables 2 and 3 show results for the evaluations on the CNN/DailyMail and Newsroom test sets.

Comparison with Extractive Systems.

ExConSumm Compressive performs best on the CNN dataset and ExConSumm Extractive on the DailyMail dataset, probably due to the fact that the CNN dataset is less biased towards extractive methods than the DailyMail dataset Narayan et al. (2018b). We report similar results on the Newsroom dataset. ExConSumm Compressive tends to perform better for mixed (Mixed) and abstractive (Abs.) subsets, while ExConSumm Extractive performs better for the extractive (Ext.) subset. Our experiments demonstrate that our compressive model tends to perform better on the dataset which promotes abstractive summaries.

We find that ExConSumm Extractive consistently performs better on all metrics than any of the other extractive models, except for the single case where it is narrowly behind Latent on R2 (18.6 vs. 18.8) on the combined CNN/DailyMail test set. It even outperforms Refresh, which is trained with reinforcement learning. We hypothesize that its superior performance stems from its ability to generate variable-length summaries; Refresh and Latent, on the other hand, always produce fixed-length summaries.

Comparison with Compressive System.

ExConSumm Compressive reports superior performance compared to Latent+Compress (+4.2 R1, +2.6 R2 and +3.1 RL). These results demonstrate that our compressive system is more suitable for document summarization: it first selects sentences and then compresses them by removing irrelevant spans of words, and it makes use of an advanced oracle sentence compressor trained on a dedicated sentence compression dataset (§4.1). In contrast, Latent+Compress naively trains a sequence-to-sequence compressor to map a sentence in the document to a sentence in the summary.

Figure 4: Word distribution in comparison with the human summaries for CNN dataset. Density curves show the length distributions of human authored and system produced summaries.

Comparison with Abstractive Systems.

Both ExConSumm Extractive and Compressive outperform most of the abstractive systems, including Pointer+Coverage See et al. (2017). When comparing with more recent methods Pasunuru and Bansal (2018); Gehrmann et al. (2018), our model has comparable performance, and performed almost as well as the best abstractive model.

LEAD
•  (CNN) A top al Qaeda in the Arabian Peninsula leader–who a few years ago was in a U.S. detention facility–was among five killed in an airstrike in Yemen, the terror group said, showing the organization is vulnerable even as Yemen appears close to civil war.
•  Ibrahim al-Rubaish died Monday night in what AQAP’s media wing, Al-Malahem Media, called a “crusader airstrike.”
•  The Al-Malahem Media obituary characterized al-Rubaish as a religious scholar and combat commander.
Refresh
•  (CNN) A top al Qaeda in the Arabian Peninsula leader–who a few years ago was in a U.S. detention facility–was among five killed in an airstrike in Yemen, the terror group said, showing the organization is vulnerable even as Yemen appears close to civil war.
•  Ibrahim al-Rubaish died Monday night in what AQAP’s media wing, Al-Malahem Media, called a “crusader airstrike.”
•  Al-Rubaish was once held by the U.S. government at its detention facility in Guantanamo Bay, Cuba.
Latent
•  (CNN) A top al Qaeda in the Arabian Peninsula leader–who a few years ago was in a U.S. detention facility–was among five killed in an airstrike in Yemen, the terror group said, showing the organization is vulnerable even as Yemen appears close to civil war.
•  Ibrahim al-Rubaish died Monday night in what AQAP’s media wing, Al-Malahem Media, called a “crusader airstrike.” The Al-Malahem Media obituary characterized al-Rubaish as a religious scholar and combat commander.
•  A Yemeni Defense Ministry official and two Yemeni national security officials not authorized to speak on record confirmed that al-Rubaish had been killed, but could not specify how he died.
ExConSumm Extractive
•  (CNN) A top al Qaeda in the Arabian Peninsula leader–who a few years ago was in a U.S. detention facility–was among five killed in an airstrike in Yemen, the terror group said, showing the organization is vulnerable even as Yemen appears close to civil war.
•  Ibrahim al-Rubaish died Monday night in what AQAP’s media wing, Al-Malahem Media, called a “crusader airstrike.”
ExConSumm Compressive
•  A top al Qaeda in the Arabian Peninsula leader–who a few years ago was in a U.S. detention facility–was among five killed in an airstrike in Yemen. •  Ibrahim al-Rubaish died in what AQAP’s media wing, Al-Malahem Media, called a “crusader airstrike.”
Pointer+Coverage
•  Ibrahim al-Rubaish was among a number of detainees who sued the administration of then-president George W. Bush to challenge the legality of their confinement in Gitmo. •  al-Rubaish was once held by the U.S. government at its detention facility in Guantanamo bay, Cuba.
GOLD
•  AQAP says a “crusader airstrike” killed Ibrahim al-Rubaish •  Al-Rubaish was once detained by the United States in Guantanamo
Question-Answer Pairs
•  Who said that an airstrike killed Ibrahim al-Rubaish? (AQAP) •  What was the airstrike called? (crusader airstrike) •  Where was Ibrahim al-Rubaish once detained? (Guantanamo)
Figure 5: Example output summaries on the CNN/DailyMail dataset, gold standard summary, and corresponding questions. The questions are manually written using the gold summary. The same ExConSumm summaries are shown in Figure 1, but the strike-through spans are now removed.

Summary Versatility.

We evaluate the ability of our model to generate variable-length summaries. Table 4 shows the Pearson correlation coefficient between the lengths of the human generated summaries and those of each unbounded model. Our compressive approach obtains the best result, with a Pearson correlation coefficient of 0.72.

Figure 4 also shows the distribution of words per summary for the models for which predictions were available. Interestingly, both ExConSumm Extractive and Compressive follow the human distribution much more closely than the other extractive systems (LEAD, Refresh and Latent), since they are able to generate variable-length summaries depending on the input text. Our compressive model generates a word distribution close to that of the abstractive Pointer+Coverage model, but achieves a better compression ratio: the summaries generated by Pointer+Coverage contain 59.8 words on average, while those generated by ExConSumm Compressive have 54.3.
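The length-correlation statistic reported above can be computed directly. The summary lengths below are made-up toy values, not data from the paper; only the computation itself is the point:

```python
import numpy as np

# Hypothetical word counts of gold vs. system summaries for five documents.
gold_lens = np.array([48, 60, 35, 72, 55])
system_lens = np.array([50, 63, 33, 70, 58])

# Pearson correlation coefficient between the two length vectors.
r = np.corrcoef(gold_lens, system_lens)[0, 1]
```

A coefficient near 1 indicates that a system's summary lengths track document-by-document variation in gold summary length, which is exactly what a fixed-length extractor cannot do.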

5.2 QA Evaluation

Table 4 shows results from our question answering based human evaluation. We elicited human judgements in two settings: the “Unbounded”, where participants were shown the full system produced summaries; and the “Bounded”, where participants were shown summaries that were limited to the same size as the gold summaries.

For the “Unbounded” setting, the output summaries produced by Refresh were able to answer most of the questions correctly; our Compressive and Extractive systems were placed 2nd and 3rd, respectively. (We carried out pairwise comparisons between all models to assess whether system differences are statistically significant, using a one-way ANOVA with post-hoc Tukey HSD tests. We found no statistically significant difference between Refresh and ExConSumm Compressive. The differences among Latent and both variants of ExConSumm, and between LEAD and Pointer+Coverage, are also statistically insignificant. All other differences are statistically significant.)

We observed that our systems produced more concise summaries than Refresh (average length in words: 76.0 for Refresh, 56.2 for ExConSumm Extractive, and 54.3 for ExConSumm Compressive; see Figure 4). Refresh is prone to generating verbose summaries, and consequently has the advantage of accumulating more information. The “Bounded” setting aims to reduce this unfair advantage: scores are lower overall since the summaries are truncated to the gold size. Here, the ExConSumm Compressive summaries rank first, answering 39.44% of questions correctly, while ExConSumm Extractive retains its 3rd place, answering 36.34% correctly. (Under the same one-way ANOVA with post-hoc Tukey HSD tests, the differences among both variants of ExConSumm and Latent, and among Lead, Refresh and Pointer+Coverage, are statistically insignificant; all other differences are statistically significant.) These results demonstrate that our models generate concise and informative summaries whose lengths correlate well with human summary lengths. Appendix A.2 shows more examples of our summaries.

5.3 Summary State Representation

Next, we performed an ablation study to investigate the importance of the summary state representation for the quality of the overall summary. We tested against a State averaging variant, in which the LSTM summary state is replaced by a weighted average of the extracted sentence representations, analogous to Nallapati et al. (2017), where each weight has the same form as the extraction probability but depends recursively on the previous summary state. Table 5 shows that using an LSTM state to model the sentences currently in the summary is very important. A second ablation examines how learning to extract and compress disjointly (ExConSumm Ext+Comp oracle) performs against joint learning (ExConSumm Compressive): we took summaries generated by our best extractive model and compressed them with a compressive oracle. Our joint model achieves the best performance on all metrics compared with the other ablations, suggesting that joint learning combined with a summary state representation is beneficial for summarization.
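To make the ablation concrete, the following toy sketch contrasts the two ways of maintaining a summary state. All names are hypothetical, and a plain exponential blend stands in for the actual LSTM cell:

```python
def average_state(vecs, weights):
    # "State averaging" ablation: summary state is a weighted average of the
    # sentence vectors extracted so far.
    total = sum(weights) or 1.0
    return [sum(w * v[i] for w, v in zip(weights, vecs)) / total
            for i in range(len(vecs[0]))]

def recurrent_state(vecs, blend=0.5):
    # LSTM-like alternative: the state is updated recursively, sentence by
    # sentence (a simple exponential blend stands in for the real LSTM cell).
    state = [0.0] * len(vecs[0])
    for v in vecs:
        state = [blend * s + (1 - blend) * x for s, x in zip(state, v)]
    return state
```

The recurrent variant can weigh a new sentence against everything selected so far, which is what the averaging ablation loses.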

Model                       R1    R2    RL
ExConSumm Extractive        32.5  12.6  28.5
State averaging             30.0  12.3  26.9
ExConSumm Compressive       32.5  12.7  29.2
ExConSumm Ext+Comp oracle   25.5   9.3  23.7

Table 5: Summary state ablation (ROUGE) for the CNN dataset.

6 Conclusions

We developed ExConSumm, a novel summarization model that generates variable-length extractive and compressive summaries. Experimental results show that our model's ability to learn a dynamic representation of the summary yields summaries that are informative, concise, and correlate well with human-generated summary lengths. Our model outperforms state-of-the-art extractive systems and most abstractive systems on the CNN and DailyMail datasets, both under automatic evaluation and under human evaluation in the bounded scenario. We further obtain state-of-the-art results on Newsroom, a more abstractive summary dataset.

Acknowledgments

This work is supported by the EU H2020 SUMMA project (grant agreement No 688139), by Lisbon Regional Operational Programme (Lisboa 2020), under the Portugal 2020 Partnership Agreement, through the European Regional Development Fund (ERDF), within project INSIGHT (No 033869), by the European Research Council (ERC StG DeepSPIN 758969), and by the Fundação para a Ciência e Tecnologia through contracts UID/EEA/50008/2019 and CMUPERI/TIC/0046/2014 (GoLocal).

References

  • Almeida and Martins (2013) Miguel Almeida and Andre Martins. 2013. Fast and robust compressive summarization with dual decomposition and multi-task learning. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 196–206.
  • Berg-Kirkpatrick et al. (2011) Taylor Berg-Kirkpatrick, Dan Gillick, and Dan Klein. 2011. Jointly learning to extract and compress. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 481–490, Portland, Oregon, USA. Association for Computational Linguistics.
  • Chelba et al. (2013) Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, and Phillipp Koehn. 2013. One billion word benchmark for measuring progress in statistical language modeling. CoRR, abs/1312.3005.
  • Chen and Bansal (2018) Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 675–686. Association for Computational Linguistics.
  • Cheng and Lapata (2016) Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 484–494, Berlin, Germany. Association for Computational Linguistics.
  • Clarke and Lapata (2008) James Clarke and Mirella Lapata. 2008. Global inference for sentence compression: An integer linear programming approach. Journal of Artificial Intelligence Research (JAIR), 31:399–429.
  • Dimitroff et al. (2013) Georgi Dimitroff, Laura Tolosi, Borislav Popov, and Georgi Georgiev. 2013. Weighted maximum likelihood loss as a convenient shortcut to optimizing the f-measure of maximum entropy classifiers. In Proceedings of the International Conference Recent Advances in Natural Language Processing RANLP 2013, pages 207–214, Hissar, Bulgaria. INCOMA Ltd. Shoumen, Bulgaria.
  • Dlikman and Last (2016) Alexander Dlikman and Mark Last. 2016. Using machine learning methods and linguistic features in single-document extractive summarization. In DMNLP@PKDD/ECML.
  • Dorr et al. (2003) Bonnie Dorr, David Zajic, and Richard Schwartz. 2003. Hedge trimmer: A parse-and-trim approach to headline generation. In Proceedings of the HLT-NAACL 03 on Text Summarization Workshop - Volume 5, HLT-NAACL-DUC 2003, pages 1–8, Stroudsburg, PA, USA. Association for Computational Linguistics.
  • Durrett et al. (2016) Greg Durrett, Taylor Berg-Kirkpatrick, and Dan Klein. 2016. Learning-based single-document summarization with compression and anaphoricity constraints. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1998–2008, Berlin, Germany.
  • Fan et al. (2018) Angela Fan, David Grangier, and Michael Auli. 2018. Controllable abstractive summarization. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 45–54, Melbourne, Australia.
  • Filippova et al. (2015) Katja Filippova, Enrique Alfonseca, Carlos A. Colmenares, Lukasz Kaiser, and Oriol Vinyals. 2015. Sentence compression by deletion with lstms. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 360–368. Association for Computational Linguistics.
  • Filippova and Altun (2013) Katja Filippova and Yasemin Altun. 2013. Overcoming the lack of parallel data in sentence compression. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1481–1491, Seattle, Washington, USA. Association for Computational Linguistics.
  • Gehrmann et al. (2018) Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4098–4109, Brussels, Belgium.
  • Grusky et al. (2018) Max Grusky, Mor Naaman, and Yoav Artzi. 2018. Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 708–719. Association for Computational Linguistics.
  • Hermann et al. (2015) Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 1693–1701. Curran Associates, Inc.
  • Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780.
  • Hsu et al. (2018) Wan Ting Hsu, Chieh-Kai Lin, Ming-Ying Lee, Kerui Min, Jing Tang, and Min Sun. 2018. A unified model for extractive and abstractive summarization using inconsistency loss. CoRR, abs/1805.06266.
  • Kingma and Ba (2015) Diederick P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR).
  • Lample et al. (2016) Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270. Association for Computational Linguistics.
  • Li et al. (2018) Chenliang Li, Weiran Xu, Si Li, and Sheng Gao. 2018. Guiding generation for abstractive text summarization based on key information guide network. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 55–60.
  • Lin and Hovy (2003) Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram co-occurrence statistics. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics.
  • Martins and Smith (2009) Andre F. T. Martins and Noah A. Smith. 2009. Summarization with a joint model for sentence extraction and compression. In North American Chapter of the Association for Computational Linguistics: Workshop on Integer Linear Programming for NLP.
  • McDonald (2006) Ryan McDonald. 2006. Discriminative sentence compression with soft syntactic evidence. In 11th Conference of the European Chapter of the Association for Computational Linguistics, pages 297–304.
  • Mikolov et al. (2013) Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 3111–3119. Curran Associates, Inc.
  • Nallapati et al. (2017) Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. SummaRuNNer: A recurrent neural network based sequence model for extractive summarization of documents. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), pages 3075–3081.
  • Nallapati et al. (2016a) Ramesh Nallapati, Bowen Zhou, and Mingbo Ma. 2016a. Classify or select: Neural architectures for extractive document summarization. CoRR, abs/1611.04244.
  • Nallapati et al. (2016b) Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016b. Abstractive text summarization using sequence-to-sequence rnns and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 280–290, Berlin, Germany. Association for Computational Linguistics.
  • Narayan et al. (2018a) Shashi Narayan, Ronald Cardenas, Nikos Papasarantopoulos, Shay B. Cohen, Mirella Lapata, Jiangsheng Yu, and Yi Chang. 2018a. Document modeling with external attention for sentence extraction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 2020–2030, Melbourne, Australia.
  • Narayan et al. (2018b) Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018b. Don’t give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797–1807, Brussels, Belgium.
  • Narayan et al. (2018c) Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018c. Ranking sentences for extractive summarization with reinforcement learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1747–1759, New Orleans, Louisiana.
  • Neubig et al. (2017) Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017. Dynet: The dynamic neural network toolkit. arXiv preprint arXiv:1701.03980.
  • Pasunuru and Bansal (2018) Ramakanth Pasunuru and Mohit Bansal. 2018. Multi-reward reinforced summarization with saliency and entailment. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 646–653. Association for Computational Linguistics.
  • Paulus et al. (2018) Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In International Conference on Learning Representations.
  • Rush et al. (2015) Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379–389, Lisbon, Portugal. Association for Computational Linguistics.
  • Schluter (2017) Natalie Schluter. 2017. The limits of automatic summarisation according to rouge. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Short Papers, pages 41–45, Valencia, Spain.
  • See et al. (2017) Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073–1083, Vancouver, Canada. Association for Computational Linguistics.
  • Spärck Jones (2007) Karen Spärck Jones. 2007. Automatic summarising: The state of the art. Information Processing & Management, 43(6):1449–1481.
  • Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In NIPS, page 9.
  • Tan et al. (2017) Jiwei Tan, Xiaojun Wan, and Jianguo Xiao. 2017. Abstractive document summarization with a graph-based attentional neural model. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1171–1181, Vancouver, Canada. Association for Computational Linguistics.
  • Zhang et al. (2018) Xingxing Zhang, Mirella Lapata, Furu Wei, and Ming Zhou. 2018. Neural latent extractive document summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 779–784, Brussels, Belgium.

Appendix A Appendices

A.1 Estimating Summary Oracles

We describe our method for estimating extractive and compressive oracle summaries prior to training, using two types of oracles. We build these oracles in order to train our model with a supervised objective, minimizing a negative log-likelihood function. We create documents annotated with sentence-level and word-level extraction labels, which correspond to the gold values of the respective extraction variables.

Extractive Oracle.

We followed Narayan et al. (2018c) and identified the set of sentences which collectively give the highest ROUGE (Lin and Hovy, 2003) with respect to the gold summary. More concretely, we assembled candidate summaries efficiently by first selecting sentences from the document which on their own have high ROUGE scores. We then generated all possible combinations of these sentences, subject to a maximum summary length for each dataset, and evaluated them against the gold summary, selecting the summary with the highest mean of ROUGE-1, ROUGE-2, and ROUGE-L F1 scores.
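The search described above can be sketched as follows. A simple unigram-overlap F1 stands in for the full ROUGE metric, and the pool size and length limit are chosen for illustration only:

```python
from itertools import combinations

def unigram_f1(candidate, reference):
    # Unigram-overlap F1, a crude stand-in for the ROUGE family of metrics.
    c, r = set(candidate.lower().split()), set(reference.lower().split())
    overlap = len(c & r)
    if overlap == 0:
        return 0.0
    p, rec = overlap / len(c), overlap / len(r)
    return 2 * p * rec / (p + rec)

def extractive_oracle(sentences, gold, max_sents=3, pool=5):
    # 1) keep the sentences that score best on their own,
    # 2) try all combinations up to max_sents sentences,
    # 3) return the best-scoring set against the gold summary.
    ranked = sorted(sentences, key=lambda s: unigram_f1(s, gold),
                    reverse=True)[:pool]
    best, best_score = [], -1.0
    for k in range(1, max_sents + 1):
        for combo in combinations(ranked, k):
            score = unigram_f1(" ".join(combo), gold)
            if score > best_score:
                best, best_score = list(combo), score
    return best
```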

Extractive Oracle
•  Wijnaldum - who has netted 16 goals in all competitions for the Dutch giants this season - has been linked with a move to Manchester United, Arsenal and Newcastle. •  The PSV captain could help his club help end their seven-year wait for the Eredivisie title if they beat Heerenveen at home on Saturday.
Compressive Oracle
•  PSV Eindhoven midfielder Georginio Wijnaldum has fueled speculation that he is eyeing a move to the Premier League in the summer. •  Wijnaldum - who has netted 16 goals in all competitions for the Dutch giants this season - has been linked with a move to Manchester United, Arsenal and Newcastle.
BOW Oracle (Baseline)
•  Wijnaldum - who has netted 16 goals in all competitions for the Dutch giants this season - has been linked with a move to Manchester United , Arsenal and Newcastle •  The PSV captain could help his club help end their seven-year wait for the Eredivisie title if they beat Heerenveen at home on Saturday
GOLD
•  Georginio Wijnaldum is set to guide PSV to their first title in seven years •  Dutch giants can win Eredivisie with a win over Heerenveen on Saturday •  Manchester United, Arsenal and Newcastle have been linked with him •  Midfielder has scored 16 goals in all competitions for PSV this term
Extractive Oracle
•  Wrap a filter around the centre of an ice-cream cone to keep little hands clean from ice-cream mess. •  Apply a dab of your favourite shoe polish on the filter and use it as an applicator. •  As they are lint-free coffee filters can be used to polish glass, and clean mirrors.
Compressive Oracle
•  Keep celery crispy by storing them with a coffee filter, which are far more absorbent than kitchen roll, and will absorb moisture from the vegetables . •  Wrap a filter around the centre of an ice-cream cone to keep little hands clean from ice-cream mess. •  Use it as a bouquet garni holder the next time you’re making soup or stews . •  Keep clothes smelling fresh
•  Coffee filters are perfect for polishing leather shoes as they are lint-free and so won’t leave unsightly streak marks on your shoes .
BOW Oracle (Baseline)
•  Stop messy ice-cream spillage Wrap a filter around the centre of an ice-cream cone to keep little hands clean from ice-cream mess •  Apply a dab of your favourite shoe polish on the filter and use it as an applicator • As they are lint-free coffee filters can be used to polish glass, and clean mirrors
GOLD
•  Lint-free and tear resistant filters are good for a range of household tasks •  Wrap one sheet around celery stalks when storing in fridge to keep crisp •  Use it to polish shoes, keep laundry smelling fresh and even as a plate
Figure 6: Examples of our estimated oracle summaries along with the reference summary for the CNN and DailyMail datasets. For illustration, the compressive oracle shows the removed spans struck through.
Models                  CNN               DailyMail         Newsroom Ext.     Newsroom Mixed    Newsroom Abs.
                        R1   R2   RL      R1   R2   RL      R1   R2   RL      R1   R2   RL      R1   R2   RL
ExConSumm Extractive    32.5 12.6 28.5    42.8 19.3 38.9    69.4 64.3 68.3    31.9 16.3 26.9    17.2  3.1 13.6
ExConSumm BOW           33.5 12.4 30.0    42.5 17.8 38.8    68.7 59.8 67.6    32.0 15.2 27.3    19.1  2.8 16.5
ExConSumm Compressive   32.5 12.7 29.2    41.7 18.5 38.4    68.4 62.9 67.3    31.7 16.1 27.0    17.1  3.1 14.1

Table 6: Comparison of the BOW model against the Extractive and Compressive models. Most of the numbers are repeated from Table 2.

Compressive Oracle.

The primary challenge in building a compressive oracle lies in preserving the grammaticality of the compressed sentences. Following the sentence compression literature (McDonald, 2006; Clarke and Lapata, 2008; Berg-Kirkpatrick et al., 2011; Filippova and Altun, 2013; Filippova et al., 2015), we train a supervised neural model to annotate the spans in every sentence that can be dropped. In particular, we trained a supervised sequence labeling classifier adapted from Lample et al. (2016). To train this classifier, we used the publicly released set of 10,000 sentence-compression pairs from the Google sentence compression dataset (Filippova et al., 2015; Filippova and Altun, 2013), holding out the first 1,000 sentences as a development set and using the remaining ones for training.

After training our classifier for 30 epochs, it achieved a per-sentence accuracy of 21%, a word-based F1 score of 78%, and a compression ratio of 0.38. The model's hyperparameters were: 2 layers, dropout of 0.1, hidden dimension of 400, action dimension of 20, and relation dimension of 20. We used the One Billion Word Benchmark corpus (Chelba et al., 2013) to train word embeddings with the skip-gram model (Mikolov et al., 2013), using a context window of size 6, a negative sampling size of 10, and hierarchical softmax enabled. The same embeddings were also used to train our summarization model. For details of the evaluation metrics, see Filippova et al. (2015).
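The three reported figures can be read as follows. This sketch assumes binary keep/drop labels per word; the paper's exact metric definitions follow Filippova et al. (2015), so treat this as an approximation:

```python
def compression_metrics(pred_labels, gold_labels):
    """Keep/drop tag evaluation: per-sentence exact-match accuracy, word-level
    F1 over kept words, and compression ratio (kept words / total words)."""
    exact = kept_pred = kept_gold = kept_both = total = 0
    for pred, gold in zip(pred_labels, gold_labels):
        exact += int(pred == gold)          # whole sentence labelled perfectly
        total += len(gold)
        kept_pred += sum(pred)
        kept_gold += sum(gold)
        kept_both += sum(p and g for p, g in zip(pred, gold))
    acc = exact / len(gold_labels)
    p = kept_both / kept_pred if kept_pred else 0.0
    r = kept_both / kept_gold if kept_gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    ratio = kept_pred / total
    return acc, f1, ratio
```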

After tagging all sentences in the CNN and DailyMail corpora with this compression model, we generated oracle compressive summaries based on the best average of ROUGE-1 and ROUGE-2 F1 scores over all combinations of sentences with all combinations of removals of the marked compression chunks. To solve this combinatorial problem, our algorithm recursively selects the candidate sentences with the best accumulated score. For efficiency, we used a simplified scoring function based on unigram and bigram overlap, computed incrementally at each recursion level, instead of the official ROUGE metric (Lin and Hovy, 2003), and we allowed a maximum of seven compressed sentences per oracle. Our compressive oracle achieves much better scores than the extractive oracle because of its capability to make summaries concise (see Table 1 in the paper). Moreover, the linguistic quality of these oracles is preserved because the sentence compressor tags entire spans. See Figure 6 for examples of our oracle summaries.
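A much-simplified sketch of this search follows. It exhaustively picks one compression variant per sentence (whereas the paper also selects which sentences to include and prunes recursively); the example sentences and variants are hypothetical:

```python
from itertools import product

def ngrams(tokens, n):
    return set(zip(*[tokens[i:] for i in range(n)]))

def overlap_score(tokens, gold_uni, gold_bi):
    # Unigram + bigram overlap counts, standing in for ROUGE-1/ROUGE-2.
    return len(ngrams(tokens, 1) & gold_uni) + len(ngrams(tokens, 2) & gold_bi)

def best_compressions(variants_per_sentence, gold):
    gold_toks = gold.lower().split()
    gold_uni, gold_bi = ngrams(gold_toks, 1), ngrams(gold_toks, 2)
    best, best_score = None, -1
    # Exhaustive over one variant per sentence; the paper's algorithm instead
    # recurses and keeps only the best accumulated scores at each level.
    for choice in product(*variants_per_sentence):
        toks = " ".join(choice).lower().split()
        s = overlap_score(toks, gold_uni, gold_bi)
        if s > best_score:
            best, best_score = list(choice), s
    return best
```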

ExConSumm Extractive
•  Beverly Hills Police investigated an incident in January 2014 in which West was accused of assaulting a man at a Beverly Hills chiropractor’s office.
•  The photographer, Daniel Ramos, had filed the civil suit against West after the hip-hop star attacked him and tried to wrestle his camera from him in July 2013 at Los Angeles International Airport.
ExConSumm BOW (Baseline)
•  (CNN) Kanye West has settled a lawsuit with a paparazzi photographer he assaulted and the two have shaken on it .
•  The photographer, Daniel Ramos, had filed the civil suit against West after the hip-hop star attacked him and tried to wrestle his camera from him in July 2013 at Los Angeles International Airport .
•  West pleaded no contest last year to a misdemeanor count of battery over the scuffle .
ExConSumm Compressive
•  (CNN) Kanye West has settled a lawsuit with a paparazzi photographer he assaulted–and the two have shaken on it.
•  The photographer, Daniel Ramos, had filed the civil suit against West after the hip-hop star attacked him and tried to wrestle his camera from him in July 2013 at Los Angeles International Airport.
•  West pleaded no contest last year to a misdemeanor count of battery over the scuffle.
Figure 7: Summaries produced by the ExConSumm Extractive, BOW compressive and compressive methods. For illustration, the BOW and compressive summaries show the removed spans struck through.

Baseline BOW Oracle.

Additionally, we also experimented with a bag-of-words oracle (BOW oracle), that serves as a baseline for creating a compressive oracle, in which labels are generated by simply dropping words if they do not appear in the gold summary. Unsurprisingly, oracle sentences compressed with this method are often ungrammatical (see Figure 6).
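The BOW labeling rule is simple enough to state directly. A minimal sketch, with hypothetical example strings:

```python
def bow_oracle_labels(sentence, gold_summary):
    # Keep a word only if it appears in the gold summary (BOW baseline).
    # Grammar is ignored entirely, which is why outputs are often disfluent.
    gold_vocab = set(gold_summary.lower().split())
    return [int(tok.lower() in gold_vocab) for tok in sentence.split()]

def apply_labels(sentence, labels):
    # Render the compressed sentence from its keep/drop labels.
    return " ".join(t for t, k in zip(sentence.split(), labels) if k)
```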

Our model trained with the BOW oracle (ExConSumm BOW) often scores higher than the compressive model, as shown in Table 6. However, looking at the example summaries in Figure 7, we find that the BOW compressive model is incapable of generating fluent or grammatical summaries, whereas the ExConSumm Compressive summary is both fluent and grammatical. Our summary LSTMs can preserve the fluency of the summaries when trained with the fluent compressive oracle; this is not guaranteed when using the BOW oracle.

A.2 Human Evaluation Experiment Design

The main assumption behind this evaluation is that the gold summary highlights the most important content of the document. Based on this assumption, the questions were written using the GOLD summary. For this study, we used the same 20 documents (10 from the CNN and 10 from the DailyMail test sets), with an accompanying set of questions based on the gold summaries, from Narayan et al. (2018c). (The test set for the QA evaluation is publicly available at https://github.com/EdinburghNLP/Refresh.) We examined how many questions participants were able to answer by reading the system summaries alone, without access to the article: the more questions a system's summaries help answer, the better it is at summarizing the document as a whole. We collected answers from five different participants for each summary-system pair. A correct answer was marked with a score of one, a partially correct answer with a score of 0.5, and an incorrect answer with zero; the final score is the average of all these scores. Answers were elicited using Amazon's Mechanical Turk crowd-sourcing platform. Examples of system summaries used for this evaluation are shown in Figures 5, 8, 9, 10, 11, 12 and 13.
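The scoring scheme reduces to averaging per-participant marks. A minimal sketch, with hypothetical judgements:

```python
def qa_score(judgements):
    """judgements: one list per question, each holding per-participant marks
    of 1.0 (correct), 0.5 (partially correct) or 0.0 (wrong)."""
    marks = [m for question in judgements for m in question]
    return 100.0 * sum(marks) / len(marks)  # reported as a percentage

# Two questions, two participants each (hypothetical marks).
score = qa_score([[1.0, 0.5], [0.0, 1.0]])
```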

LEAD
•  (CNN) Seven people–including Illinois State University associate men’s basketball coach Torrey Ward and deputy athletic director Aaron Leetch–died when their small plane crashed while heading back from the NCAA tournament final.
•  The aircraft went down overnight Monday about 2 miles east of the Central Illinois Regional Airport in Bloomington, McLean County Sheriff’s Office Sgt. Bill Tate said.
•  That’s about 5 miles from the campus of Illinois State, where Ward and Leetch both worked.
Refresh
•  (CNN) Seven people–including Illinois State University associate men’s basketball coach Torrey Ward and deputy athletic director Aaron Leetch–died when their small plane crashed while heading back from the NCAA tournament final.
•  The aircraft went down overnight Monday about 2 miles east of the Central Illinois Regional Airport in Bloomington, McLean County Sheriff’s Office Sgt. Bill Tate said.
•  The plane was coming back from the NCAA Final Four championship game in Indianapolis, according to Illinois State athletics spokesman John Twork.
Latent
•  (CNN) Seven people–including Illinois State University associate men’s basketball coach Torrey Ward and deputy athletic director Aaron Leetch–died when their small plane crashed while heading back from the NCAA tournament final.
•  The plane was coming back from the NCAA Final Four championship game in Indianapolis, according to Illinois State athletics spokesman John Twork.
•  The aircraft went down overnight Monday about 2 miles east of the Central Illinois Regional Airport in Bloomington, McLean County Sheriff’s Office Sgt. Bill Tate said .
ExConSumm Extractive
•  (CNN) Seven people–including Illinois State University associate men’s basketball coach Torrey Ward and deputy athletic director Aaron Leetch–died when their small plane crashed while heading back from the NCAA tournament final.
•  The plane was coming back from the NCAA Final Four championship game in Indianapolis, according to Illinois State athletics spokesman John Twork.
ExConSumm Compressive
•  Seven people died their small plane crashed while heading back from the NCAA tournament final.
•  The aircraft went down overnight Monday about 2 miles east of the Central Illinois Regional Airport in Bloomington.
•  The plane was coming back from the NCAA Final Four championship game.
Pointer+Coverage
•  Illinois State University associate men’s basketball coach Torrey ward and deputy athletic director Aaron Leetch died when their small plane crashed while heading back from the NCAA tournament final.
•  The aircraft went down overnight Monday about 2 miles east of the Central Illinois Regional Airport in Bloomington, iIlinois.
•  It was not immediately known who else was on the aircraft, which the National Transportation Safety Board tweeted was a Cessna 414.
•  There’s also a picture of a small plane with the words, “my ride to the game was n’t bad #indy2015f4”.
GOLD
•  The crashed plane was a Cessna 414, National Transportation Safety Board reports
•  Coach Torrey Ward, administrator Aaron Leetch among the 7 killed in the crash
•  The plane crashed while coming back from the NCAA title game in Indianapolis
Question-Answer Pairs
•  What type of plane crashed? (Cessna 414)
•  Who are confirmed dead in the crash? (Coach Torrey Ward and administrator Aaron Leetch)
•  How many people in total died in the crash? (7 people)
•  The plane crashed while coming back from where? (The NCAA title game in Indianapolis)
Figure 8: Example output summaries on the CNN/DailyMail dataset, gold standard summary, and corresponding questions.
LEAD
•  A hedgehog sniffing around in the dusk was once a common sight - but experts warn it may soon become a thing of the past.
•  One in five people have never seen a hedgehog in their gardens, according to a wildlife survey.
•  And of those who do spot the tiny animals, only a quarter see them frequently, the RSPB found.
•  The startling figures confirm fears that the small British mammal is suffering a huge decline.
Refresh
•  One in five people have never seen a hedgehog in their gardens, according to a wildlife survey.
•  And of those who do spot the tiny animals, only a quarter see them frequently, the RSPB found.
•  The startling figures confirm fears that the small British mammal is suffering a huge decline.
•  There are thought to be less than 1 million hedgehogs living in this country today, an estimated 30 per cent drop since 2013.
Latent
•  There are thought to be less than 1 million hedgehogs living in this country today, an estimated 30 per cent drop since 2013.
•  One in five people have never seen a hedgehog in their gardens, according to a new wildlife survey.
•  The startling figures confirm fears that the small British mammal is suffering a huge decline.
ExConSumm Extractive
•  One in five people have never seen a hedgehog in their gardens, according to a wildlife survey.
•  And of those who do spot the tiny animals , only a quarter see them frequently, the RSPB found.
•  The startling figures confirm fears that the small British mammal is suffering a huge decline.
•  There are thought to be less than 1 million hedgehogs living in this country today, an estimated 30 per cent drop since 2013.
ExConSumm Compressive
•  One in five people have never seen a hedgehog in their gardens.
•  And of those who do spot the tiny animals, only a quarter see them frequently, the RSPB found.
•  The small British mammal is suffering a huge decline.
•  There are thought to be less than 1 million hedgehogs living in this country today, an estimated 30 per cent drop since 2013.
Pointer+Coverage
•  One in five people have never seen a hedgehog in their gardens.
•  Of those who do spot the tiny animals, only a quarter see them frequently.
•  The startling figures confirm fears that the small British mammal is suffering a huge decline.
GOLD
•  One in five people have never seen a hedgehog in their back gardens
•  Only a quarter of those who do admitted seeing the animals frequently
•  Wildlife survey suggested the small British mammal is in huge decline
•  There are thought to be less than 1 million hedgehogs in the country
Question-Answer Pairs
•  How many people have never seen a hedgehog in their back gardens? (One in five people)
•  Who conducted this survey? (Wildlife survey)
•  How many hedgehogs are thought to be left in the country? (less than 1 million)
Figure 9: Example output summaries on the CNN/DailyMail dataset, gold standard summary, and corresponding questions.
LEAD
•  (CNN) Blinky and Pinky on the Champs Elysees?
•  Inky and Clyde running down Broadway?
•  Power pellets on the Embarcadero?
Refresh
•  Leave it to Google to make April Fools’ Day into throwback fun by combining Google Maps with Pac-Man.
•  The massive tech company is known for its impish April Fools’ Day pranks, and Google Maps has been at the center of a few, including a Pokemon Challenge and a treasure map.
•  This year the company was a day early to the party, rolling out the Pac-Man game Tuesday.
Latent
•  Leave it to Google to make April Fools’ Day into throwback fun by combining Google Maps with Pac-Man.
•  (CNN) Blinky and Pinky on the Champs Elysees? Inky and Clyde running down Broadway? Power pellets on the Embarcadero?
•  Twitterers have been tickled by the possibilities, playing Pac-Man in Manhattan, on the University of Illinois quad, in central London and down crooked Lombard Street in San Francisco, among many locations.
ExConSumm Extractive
•  Leave it to Google to make April Fools’ Day into throwback fun by combining Google Maps with Pac-Man.
•  It’s easy to play: Simply pull up Google Maps on your desktop browser, click on the Pac-Man icon on the lower left, and your map suddenly becomes a Pac-Man course.
ExConSumm Compressive
•  Leave it to Google to make April Fools Day into throwback fun by combining Google Maps with Pac-Man.
•  The tech company is known for its April Fools Day pranks.
Pointer+Coverage
•  The massive tech company is known for its impish April fools’ day pranks, and Google Maps has been at the center of a few, including a Pokemon challenge and a treasure map.
•  It’s easy to play: simply pull up Google Maps on your desktop browser.
GOLD
•  Google Maps has a temporary Pac-Man function
•  Google has long been fond of April Fools’ Day pranks and games
•  Many people are turning their cities into Pac-Man courses
Question-Answer Pairs
•  What function does Google Maps have? (Pac-Man)
•  What has Google been long fond of? (April Fools’ Day pranks and games)
•  What are many people turning their cities into? (Pac-Man courses)
Figure 10: Example output summaries on the CNN/DailyMail dataset, gold standard summary, and corresponding questions.
LEAD
•  (CNN) Somewhere over the rainbow, people on the Internet are losing their minds.
•  Is it real?
•  After the New York area received a large amount of rain, four rainbows stretched across the early morning sky on Tuesday.
Refresh
•  (CNN) Somewhere over the rainbow, people on the Internet are losing their minds.
•  After the New York area received a large amount of rain, four rainbows stretched across the early morning sky on Tuesday.
•  Amanda Curtis, CEO of a fashion company in New York, snapped the lucky shot.
Latent
•  Amanda Curtis, CEO of a fashion company in New York, snapped the lucky shot.
•  After the New York area received a large amount of rain, four rainbows stretched across the early morning sky on Tuesday.
•  CNN iReporter Yosemitebear Vasquez posted a video to YouTube in 2010 reacting to a double rainbow he spotted in Yosemite National Park. The video has since garnered over 40 million views.
ExConSumm Extractive
•  After the New York area received a large amount of rain, four rainbows stretched across the early morning sky on Tuesday.
•  Amanda Curtis, CEO of a fashion company in New York, snapped the lucky shot.
•  The video has since garnered over 40 million views.
ExConSumm Compressive
•  Four rainbows stretched across the early morning sky on Tuesday.
•  Amanda Curtis, CEO of a fashion company in New York, snapped the lucky shot.
•  The video has since garnered over 40 million views.
Pointer+Coverage
•  Amanda Curtis, CEO of a fashion company in New York, snapped the lucky shot.
•  She posted the picture to Twitter, and within a few hours, it had already received hundreds of retweets.
GOLD
•  Amanda Curtis, CEO of a fashion company in New York, posted a picture of four rainbows to Twitter
•  “I had a small moment of awe,” she said
Question-Answer Pairs
•  Who posted a picture to Twitter? (Amanda Curtis)
•  What did the picture show? (four rainbows)
•  What is the profession of the person who posted this picture? (CEO of a fashion company in New York)
Figure 11: Example output summaries on the CNN/DailyMail dataset, gold standard summary, and corresponding questions.
LEAD
•  The Fulham fans in the Jimmy Steed Stand applauded their team at the final whistle.
•  It was not the victory manager Kit Symons had called for but a point away to Charlton probably secures their future in the Championship next season.
•  They are eight points clear of Millwall and bar a miraculous resurgence from one of the bottom three sides will stay up but the fact relegation is still mathematically feasible for a club that were in the Premier League last season is alarming.
•  A vertiginous decline, just one victory in their last seven games had seen them dragged back into a relegation battle and after a painful 4-1 trouncing by bitter rivals Brentford last week, Symons was looking for a quick response from his players.
Refresh
•  The Fulham fans in the Jimmy Steed Stand applauded their team at the final whistle.
•  It was not the victory manager Kit Symons had called for but a point away to Charlton probably secures their future in the Championship next season.
•  They are eight points clear of Millwall and bar a miraculous resurgence from one of the bottom three sides will stay up but the fact relegation is still mathematically feasible for a club that were in the Premier League last season is alarming.
•  He got it with Ross McCormack giving them the lead after eight minutes.
Latent
•  Johann Gudmundsson celebrates his first-half effort as Charlton come from behind to earn a point.
•  Ross McCormack headed over a stranded Stephen Henderson with just eight minutes played in London.
•  Fulham now sit 20th in the table eight points clear of fellow London rivals Millwall, but Symons is refusing to relax just yet.
ExConSumm Extractive
•  The Fulham fans in the Jimmy Steed Stand applauded their team at the final whistle.
•  It was not the victory manager Kit Symons had called for but a point away to Charlton probably secures their future in the Championship next season.
ExConSumm Compressive
•  The Fulham fans in the Jimmy Steed Stand applauded their team at the final whistle.
•  It was not the victory manager Kit Symons had called for but a point away.
•  They are eight points clear of Millwall and bar a miraculous resurgence.
•  Fulham sit 20th in the table eight points clear of fellow London rivals, but Symons is refusing to relax just yet.
Pointer+Coverage
•  Fulham fans in the Jimmy Steed Stand applauded their team at the final whistle.
•  It was not the victory manager Kit Symons had called for but a point away to Charlton probably secures their future in the Championship next season.
•  They are eight points clear of Millwall and bar a miraculous resurgence.
GOLD
•  Ross McCormack gave Fulham the lead after eight minutes at The Valley
•  But Johann Gudmundsson leveled the scores less than ten minutes later
•  Scott Parker was booed on his return to club, 11 years after he left
•  Share of the points in London leaves Charlton in 11th and Fulham in 20th
Question-Answer Pairs
•  Who gave Fulham the lead after eight minutes at The Valley? (Ross McCormack)
•  Who leveled the scores less than ten minutes later? (Johann Gudmundsson)
•  Who was booed on his return to club, 11 years after he left? (Scott Parker)
•  What is Charlton’s place after the share of the points in London? (11th)
•  What is Fulham’s place after the share of the points in London? (20th)
Figure 12: Example output summaries on the CNN/DailyMail dataset, gold standard summary, and corresponding questions.
LEAD
•  (CNN) You probably never knew her name, but you were familiar with her work.
•  Betty Whitehead Willis, the designer of the iconic “Welcome to Fabulous Las Vegas” sign, died over the weekend.
•  She was 91.
Refresh
•  Betty Whitehead Willis, the designer of the iconic “Welcome to Fabulous Las Vegas” sign, died over the weekend.
•  Willis played a major role in creating some of the most memorable neon work in the city.
•  Willis visited the Neon Museum in 2013 to celebrate her 90th birthday.
Latent
•  Betty Whitehead Willis, the designer of the iconic “Welcome to Fabulous Las Vegas” sign, died over the weekend. She was 91.
•  The Neon Museum also credits her with designing the signs for Moulin Rouge Hotel and Blue Angel Motel.
•  Willis visited the Neon Museum in 2013 to celebrate her 90th birthday.
ExConSumm Extractive
•  Betty Whitehead Willis, the designer of the iconic “Welcome to Fabulous Las Vegas” sign, died over the weekend.
•  Willis visited the Neon Museum in 2013 to celebrate her 90th birthday.
ExConSumm Compressive
•  Betty Whitehead Willis died over the weekend.
•  Willis played a major role in creating some of the most memorable neon work in the city.
•  Willis visited the Neon Museum in 2013 to celebrate her 90th birthday.
Pointer+Coverage
•  Betty Whitehead Willis, the designer of the iconic “Welcome to Fabulous Las Vegas, died over the weekend.
•  She was 91.
•  Willis never trademarked her most-famous work, calling it “my gift to the city”.
GOLD
•  Willis never trademarked her most-famous work, calling it “my gift to the city”
•  She created some of the city’s most famous neon work
Question-Answer Pairs
•  What was Willis’ most-famous work called? (my gift to the city)
•  What did Willis create in the city? (City’s most famous neon work)
Figure 13: Example output summaries on the CNN/DailyMail dataset, gold standard summary, and corresponding questions.