Dial2Desc: End-to-end Dialogue Description Generation

11/01/2018 ∙ by Haojie Pan, et al. ∙ University of Southern California

We first propose a new task named Dialogue Description (Dial2Desc). Unlike existing dialogue summarization tasks such as meeting summarization, we do not maintain the natural flow of a conversation but instead describe an object or an action that the people are talking about. A Dial2Desc system takes a dialogue text as input and outputs a concise description of the object or the action involved in the conversation. After reading this short description, one can quickly grasp the main topic of a conversation and build a clear picture in mind, without reading or listening to the whole conversation. Based on an existing dialogue dataset, we build a new dataset with more than one hundred thousand dialogue-description pairs. As a step forward, we demonstrate that one can obtain more accurate and descriptive results with a new neural attentive model that exploits the interaction between utterances from different speakers, compared with other baselines.

Introduction

Recently, many novel techniques have been proposed to help people consume the large amount of text and audio data generated on the Internet and in daily life. Researchers and companies have successfully applied opinion mining and summarization to product reviews, news articles, scientific articles, and more. However, very little attention has been paid so far to helping people consume the dialogue records generated every day. Automatic summarization of dialogues can clearly help in dealing with this overwhelming amount of interactional information [Murray and Carenini2008].

Dialogue:
A: Is this in color.
B: No , it ’s black and white.
A: Does it look like an old picture.
B: Yes , i think so.
A: How old do you think the man is.
B: It looks like a young boy and he is what makes me think
the picture is older , but the picture is not really old.
A: Do you see more than 1 cow.
B: 2 cows.
A: Is the boy wearing overalls.
B: No , he ’s wearing a plaid short sleeved shirt and
long pants and regular shoes.
A: How about a hat.
B: No.
A: Has he already started to milk.
B: He is attaching the milking apparatus to the cow.
A: Is he sitting down.
B: He is squatting.
A: What color is the cow.
B: Looks like it would be brown and white.
A: Are they inside a barn.
B: Inside a milking facility.
Description: a man prepares to milk a dairy cow.
Table 1: An example of Dial2Desc

Previous work on conversation summarization includes extractive approaches [Xie, Liu, and Lin2008, Riedhammer, Favre, and Hakkani-Tür2010] and abstractive approaches [Oya et al.2014, Banerjee, Mitra, and Sugiyama2015, Shang et al.2018] for meeting summarization, which generate summaries that allow people to prepare for an upcoming meeting or review the decisions of a previous one. There is also increasing research interest in summarizing other kinds of conversation, such as conversational telephone speech, broadcast news, lectures, and e-mails. These tasks mostly focus on stating events in the natural flow of the raw conversation and can hardly string events together to form a thematic abstract of the whole transcript. However, most human conversations involve specific objects or actions, and speakers may have a clear picture of what they are talking about, which cannot be described without stringing the speakers' statements together.

In this work, we propose a novel task named Dial2Desc, a variant of conversation summarization. Unlike the previous work mentioned above, we pay more attention to a higher level of abstraction over the dialogue instead of maintaining the natural flow of the given conversation. The target of our task is to describe an object or an action that the speakers are talking about in a dialogue transcript.

In the last several years, deep learning has boosted the development of summarization for written text such as news articles. Researchers have applied modern neural networks with attention mechanisms to abstractive summarization [Rush, Chopra, and Weston2015, Chopra, Auli, and Rush2016, Nallapati et al.2016, See, Liu, and Manning2017, Paulus, Xiong, and Socher2018] and reached state-of-the-art results on several datasets. The availability of large-scale parallel summarization datasets and the representational power of deep learning have pushed this task into a new stage, where one can achieve good results without any complex preprocessing. In conversation summarization, however, researchers tend to build unsupervised models and rely on manual rules because of the lack of such high-quality datasets.

Inspired by recent Image2Text work in computer vision, we find that there are valuable image-text datasets from the Image Caption task, which turns salient visual information into descriptive language [Xu et al.], and from VisualDialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content [Das et al.2017]. We can therefore use an image as a bridge to align a dialogue with a description (which is where the name Dial2Desc comes from). Intuitively, this image is exactly what the speakers are talking about, or the picture in their minds during the conversation, and the captions describe the dialogue at a higher level of abstraction.

We collect 122,621 dialogues from the VisualDialog dataset [Das et al.2017] and the corresponding captions from the COCO dataset [Lin et al.2014] to build our final dataset, which enables us to develop more advanced neural models for this task. An example is shown in Table 1.

Directly applying neural models proposed for written-text summarization is not ideal, since spoken conversational language raises additional issues, e.g., maintaining cross-speaker coherence [Zechner2001]. Most neural abstractive summarization models are based on the sequence-to-sequence framework [Sutskever, Vinyals, and Le2014], which treats the whole input article as a source sequence and encodes it in a recurrent or hierarchical way. However, the interactions between speakers and the flow of the dialogue play a more important role in dialogue modeling, and they are overlooked by these models.

To address this problem, we propose a novel neural encoder that uses a co-attention mechanism and dense connections to enable interactions between speakers and uses the transformer framework to maintain message passing across dialogue turns. We also apply the transformer as our decoder. The experimental results show that our encoder combined with a transformer decoder achieves the highest performance among the summarization baselines.

In summary, we make the following contributions:

  1. A novel task named Dial2Desc, which addresses the generation of a higher-abstraction-level description of the objects or actions that people are talking about.

  2. A large-scale dataset built from existing public dialogue datasets for our task.

  3. A novel neural attentive model that exploits the interaction between utterances from different speakers.

Related Work

Image caption and visual dialogue

Image captioning has been widely studied [Vinyals et al.2015, You et al.2016, Anderson et al.2017]. In general, researchers use a Convolutional Neural Network to encode a given picture and then use a Recurrent Neural Network, especially its variant Long Short-Term Memory [Hochreiter and Schmidhuber1997], to decode this semantic representation. Visual Dialog [Das et al.2017] is a task in which, given an image, a dialog history, and a question about the image, the agent has to ground the question in the image, infer context from the history, and answer the question accurately [Das et al.2017]. In this work, we build a bridge between these two tasks to construct our dataset, since one of the image caption datasets and the VisualDialog dataset (https://visualdialog.org) both evolved from the MSCOCO dataset (http://cocodataset.org/#home).

Figure 1: Enhanced Interaction Dialogue Encoder

Conversation summarization

Recently, there has been increasing research interest in speech summarization. [Zechner2001] explored aspects of speech transcripts, e.g., disfluencies, to generate summaries for conversational telephone speech. [Maskey and Hirschberg2005] explored supervised and unsupervised approaches with different kinds of features on broadcast news. [Zhang, Chan, and Fung2007] used extractive methods on lecture speech transcripts. On e-mail thread summarization, [Nenkova and Bagga2003] proposed a method to generate a summary for the first two levels of the thread discussion, and [Rambow et al.2004] used a machine learning technique with features related to the thread as well as features of the e-mail structure, such as the position of a sentence in the thread and the number of recipients. On meeting summarization, [Xie, Liu, and Lin2008] treated the task as a binary classification problem, and [Riedhammer, Favre, and Hakkani-Tür2010] analyzed and compared two different methods for unsupervised extractive meeting summarization. [Oya et al.2014] leveraged the relationship between human-authored summaries and their source meeting transcriptions to select templates for generating abstractive meeting summaries. [Banerjee, Mitra, and Sugiyama2015] generated abstractive summaries by fusing important content from several utterances with a dependency graph. [Shang et al.2018] combined the strengths of multiple recent approaches to introduce a novel graph-based framework for unsupervised abstractive meeting summarization. Our task differs from these tasks: it addresses a higher level of abstraction and aims to describe what people are talking about rather than to restate the events of the conversation.

Neural attentive models and summarization

Neural attentive models play important roles in many tasks, such as machine translation [Sutskever, Vinyals, and Le2014], text matching [Chen et al.2017], and question answering [Hermann et al.2015]. Attention mechanisms [Bahdanau, Cho, and Bengio2014] make these models more performant and scalable, allowing them to look back at parts of the encoded input sequence while the output is generated.

Researchers have introduced these models to text summarization [Rush, Chopra, and Weston2015, Chopra, Auli, and Rush2016, Nallapati et al.2016]. [Nallapati et al.2016] used different attention and pointer functions on the combined CNN and Daily Mail datasets. [See, Liu, and Manning2017] developed an abstractive summarization model on this dataset with an extra loss term to increase temporal coverage of the encoder attention function. [Paulus, Xiong, and Socher2018] used intra-attention and reinforcement learning to boost summarization results. In this work, we adapt neural attentive models to the conversation summarization scenario and exploit the interaction between utterances from different speakers.

Our Approach

In this section, we describe the architecture of our neural model, including an enhanced interaction dialogue encoder and a transformer-pointer generator.

Position Encoding and Multi-head Attention

First, we introduce some background on position encoding and multi-head attention [Vaswani et al.2017], which are the building blocks of our model.

Position encoding is designed for models that contain no recurrence or convolution, so that they can still make use of the order of a given sequence. We use sine and cosine functions of different frequencies to build our Position Encoder (PE):

PE_{(pos, 2i)} = \sin(pos / 10000^{2i/d_{model}})   (1)
PE_{(pos, 2i+1)} = \cos(pos / 10000^{2i/d_{model}})   (2)

where pos is the position and i is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from 2π to 10000 · 2π. This function allows the model to easily learn to attend by relative positions [Vaswani et al.2017].
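As a concrete reference, here is a minimal PyTorch sketch of such a position encoder; the tensor shapes and the assumption of an even model dimension are ours, not from the paper.

```python
import torch

def positional_encoding(max_len: int, d_model: int) -> torch.Tensor:
    """Sinusoidal position encoding of Eqs. (1)-(2) (Vaswani et al., 2017).

    Returns a (max_len, d_model) tensor; even dimensions use sine and odd
    dimensions use cosine. Assumes d_model is even.
    """
    pos = torch.arange(max_len, dtype=torch.float32).unsqueeze(1)   # (max_len, 1)
    two_i = torch.arange(0, d_model, 2, dtype=torch.float32)        # 2i for each sin/cos pair
    angles = pos / (10000.0 ** (two_i / d_model))                   # (max_len, d_model/2)
    pe = torch.zeros(max_len, d_model)
    pe[:, 0::2] = torch.sin(angles)
    pe[:, 1::2] = torch.cos(angles)
    return pe
```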

To introduce multi-head attention, we start from scaled dot-product attention. Given a query q from the set of queries, and a set of keys K = (k_1, ..., k_n) and values V = (v_1, ..., v_n) with k_i, v_i ∈ R^{d_k}, scaled dot-product attention outputs a weighted sum of the values, where the weights are determined by the dot products of the query and the keys. In practice, we pack the queries, keys, and values into matrices Q, K, and V, respectively. The attention output is:

Attention(Q, K, V) = softmax(QK^T / \sqrt{d_k}) V   (3)

Multi-head attention consists of h parallel scaled dot-product attention layers called "heads", where each head is an independent dot-product attention. The output of multi-head attention is:

MultiHead(Q, K, V) = Concat(head_1, ..., head_h) W^O   (4)
head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)   (5)

where the projections are parameter matrices W_i^Q, W_i^K, W_i^V, and W^O. Both formulations are quite general and represent common cross-module attention. If the queries, keys, and values are all the same, it is called self-attention [Vaswani et al.2017].
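The two attention operations can be summarized in a short PyTorch sketch (a simplified version without masking or dropout; the names and shapes are our own):

```python
import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    """Eq. (3): softmax(QK^T / sqrt(d_k)) V.

    Q: (..., n_q, d_k), K: (..., n_k, d_k), V: (..., n_k, d_v).
    """
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)   # (..., n_q, n_k)
    weights = F.softmax(scores, dim=-1)
    return weights @ V, weights

class MultiHeadAttention(torch.nn.Module):
    """Eqs. (4)-(5): h parallel heads, each an independent dot-product attention."""

    def __init__(self, d_model: int, num_heads: int):
        super().__init__()
        assert d_model % num_heads == 0
        self.h, self.d_head = num_heads, d_model // num_heads
        self.w_q = torch.nn.Linear(d_model, d_model)
        self.w_k = torch.nn.Linear(d_model, d_model)
        self.w_v = torch.nn.Linear(d_model, d_model)
        self.w_o = torch.nn.Linear(d_model, d_model)

    def forward(self, query, key, value):
        B = query.size(0)
        def split(x):  # (B, n, d_model) -> (B, h, n, d_head)
            return x.view(B, -1, self.h, self.d_head).transpose(1, 2)
        q, k, v = split(self.w_q(query)), split(self.w_k(key)), split(self.w_v(value))
        out, _ = scaled_dot_product_attention(q, k, v)
        out = out.transpose(1, 2).contiguous().view(B, -1, self.h * self.d_head)
        return self.w_o(out)   # concatenate heads, then project with W^O
```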

Enhanced Interaction Dialogue Encoder

In this section, we describe our encoder, which is composed of the following four components: (1) an utterance encoding layer, (2) an utterance interaction layer, (3) a densely connected recurrent layer, and (4) a memory output layer, which together encode the dialogue into a memory. We denote a dialogue as a sequence of A/B speaker utterance pairs (u^A_t, u^B_t), t = 1, ..., T, where T is the number of turns in the dialogue. For each pair, u^A = (a_1, ..., a_{l_a}) and u^B = (b_1, ..., b_{l_b}), where l_a and l_b are the numbers of tokens in u^A and u^B (we omit the turn subscript t for convenience).

utterance encoding layer

encodes each token list into a list of context vectors. First, we represent each pair as a list of d-dimensional word embeddings:

E(u^A) = (e(a_1), ..., e(a_{l_a})),  E(u^B) = (e(b_1), ..., e(b_{l_b}))   (6)

We then employ a bidirectional LSTM [Graves, Fernández, and Schmidhuber2005] to preserve the sequential information of u^A and u^B (we skip the description of the basic chain LSTM due to space limits):

\bar{a}_i = BiLSTM(E(u^A), i),  i = 1, ..., l_a   (7)
\bar{b}_j = BiLSTM(E(u^B), j),  j = 1, ..., l_b   (8)

and obtain the utterance representations:

\bar{A} = (\bar{a}_1, ..., \bar{a}_{l_a}),  \bar{B} = (\bar{b}_1, ..., \bar{b}_{l_b})   (9)

utterance interaction layer

uses the attention mechanism of Equation (3) to re-encode the contexts from an interaction perspective. The local interaction information \tilde{a}_i of word a_i, and \tilde{b}_j of word b_j, is computed with scaled dot-product attention as follows:

\tilde{a}_i = Attention(\bar{a}_i, \bar{B}, \bar{B}),  \tilde{b}_j = Attention(\bar{b}_j, \bar{A}, \bar{A})   (10)

We further enhance the local interaction information by computing the difference and the element-wise product for the tuple (\bar{a}_i, \tilde{a}_i) as well as for (\bar{b}_j, \tilde{b}_j) [Chen et al.2017]. The difference and element-wise product are then concatenated with the original vectors \bar{a}_i and \tilde{a}_i, or \bar{b}_j and \tilde{b}_j, respectively, giving:

m^A_i = [\bar{a}_i; \tilde{a}_i; \bar{a}_i - \tilde{a}_i; \bar{a}_i \odot \tilde{a}_i]   (11)
m^B_j = [\bar{b}_j; \tilde{b}_j; \bar{b}_j - \tilde{b}_j; \bar{b}_j \odot \tilde{b}_j]   (12)
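A minimal sketch of this interaction-and-enhancement step for a single utterance pair (batching and masking are omitted, and the scaling by the square root of the dimension is our assumption, following Eq. (3)):

```python
import torch
import torch.nn.functional as F

def interact(a_bar: torch.Tensor, b_bar: torch.Tensor):
    """Utterance interaction layer, a sketch of Eqs. (10)-(12).

    a_bar: (l_a, d) BiLSTM states of speaker A's utterance,
    b_bar: (l_b, d) BiLSTM states of speaker B's utterance.
    """
    d = a_bar.size(-1)
    scores = a_bar @ b_bar.t() / d ** 0.5                 # scaled dot products
    a_tilde = F.softmax(scores, dim=-1) @ b_bar           # A attends to B
    b_tilde = F.softmax(scores.t(), dim=-1) @ a_bar       # B attends to A
    # ESIM-style enhancement: concatenate the originals, the attended vectors,
    # their difference, and their element-wise product (Chen et al., 2017).
    m_a = torch.cat([a_bar, a_tilde, a_bar - a_tilde, a_bar * a_tilde], dim=-1)
    m_b = torch.cat([b_bar, b_tilde, b_bar - b_tilde, b_bar * b_tilde], dim=-1)
    return m_a, m_b
```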

densely connected recurrent layer

uses another BiLSTM to build up a higher-level representation. Instead of directly feeding in the encoded hidden features from the previous layer, we concatenate them with the word embeddings, so that the encoded hidden features are preserved until they reach the uppermost layer and all previous features contribute to the prediction as collective knowledge [Huang et al.2017]:

v^A_i = BiLSTM([m^A_i; e(a_i)], i)   (13)
v^B_j = BiLSTM([m^B_j; e(b_j)], j)   (14)
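A sketch of this densely connected recurrent layer, assuming the enhanced features and the original embeddings come batched and length-padded:

```python
import torch

class DenselyConnectedBiLSTM(torch.nn.Module):
    """Second BiLSTM over [enhanced features ; original word embeddings],
    so lower-level features are carried up to this layer (a sketch)."""

    def __init__(self, feat_dim: int, emb_dim: int, hidden: int):
        super().__init__()
        self.rnn = torch.nn.LSTM(feat_dim + emb_dim, hidden,
                                 batch_first=True, bidirectional=True)

    def forward(self, m, emb):
        # m:   (B, L, feat_dim)  enhanced interaction features, Eqs. (11)-(12)
        # emb: (B, L, emb_dim)   word embeddings from the input layer
        out, _ = self.rnn(torch.cat([m, emb], dim=-1))
        return out              # (B, L, 2 * hidden)
```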
Figure 2: Transformer Decoder

memory output layer

We concatenate all the v^A_i and v^B_j of each turn into a sequence of hidden memories H. We then follow [Vaswani et al.2017] and apply position encoding to H and one transformer layer to output a turns-aware encoder memory bank M. The transformer layer contains two sub-layers: (1) a self-attention layer, where we take the output of the previous layer as queries, keys, and values and employ the multi-head attention mechanism; and (2) a simple, position-wise fully connected feed-forward network, which is applied to each position separately and identically:

FFN(x) = max(0, xW_1 + b_1) W_2 + b_2   (15)

In addition, we employ a residual connection [He et al.2016] around each of the two sub-layers, followed by layer normalization [Ba, Kiros, and Hinton2016]. That is, the output of each sub-layer is LayerNorm(x + Sublayer(x)), where Sublayer(x) is the function implemented by the sub-layer itself. The overall encoder is shown in Figure 1.
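A sketch of one such transformer layer for the memory output layer, using PyTorch's built-in multi-head attention; the feed-forward size and head count are our assumptions, not values from the paper:

```python
import torch

class EncoderTransformerLayer(torch.nn.Module):
    """Self-attention plus a position-wise feed-forward network, each wrapped
    in a residual connection and layer normalization (a sketch)."""

    def __init__(self, d_model: int = 256, num_heads: int = 8, d_ff: int = 1024):
        super().__init__()
        self.self_attn = torch.nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.ffn = torch.nn.Sequential(
            torch.nn.Linear(d_model, d_ff), torch.nn.ReLU(),
            torch.nn.Linear(d_ff, d_model))
        self.norm1 = torch.nn.LayerNorm(d_model)
        self.norm2 = torch.nn.LayerNorm(d_model)

    def forward(self, h):                           # h: (B, L, d_model) hidden memories
        attn_out, _ = self.self_attn(h, h, h)       # queries = keys = values = h
        h = self.norm1(h + attn_out)                # LayerNorm(x + Sublayer(x))
        return self.norm2(h + self.ffn(h))          # position-wise FFN, Eq. (15)
```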

Transformer-pointer Generator

We also use the transformer as the basic block of our description decoder, as shown in Figure 2. We then use a pointer network [Vinyals, Fortunato, and Jaitly2015] to generate the final outputs, as it allows both copying words via pointing and generating words from a fixed vocabulary.

We first feed the decoder inputs into an embedding layer, similar to the encoder embedding layer, and apply the position encoding of Equations (1) and (2) to make use of sequential information. Compared with an encoder transformer layer, each decoder transformer layer has one extra sub-layer: a context multi-head attention layer, where the queries come from the previous decoder layer and the memory keys and values come from the output of the encoder, M. After multiple transformer layers, we obtain a decoder state s_t for each decoder timestep t. We then feed s_t into a linear layer to produce the vocabulary distribution:

P_vocab = softmax(W_v s_t + b_v)   (16)

where W_v and b_v are learnable parameters.

For each decoding step t, we take the attention weights of the second sub-layer of the last transformer layer as the encoder memory attention distribution a^t. The generation probability p_gen for timestep t is calculated from the decoder state s_t:

p_gen = σ(w_s^T s_t + b_gen)   (17)

where the vector w_s and scalar b_gen are learnable parameters and σ is the sigmoid function. We then use p_gen to choose between generating a word from the vocabulary by sampling from P_vocab, or copying a word from the input sequence by sampling from the attention distribution a^t. For each dialogue, we build an extended vocabulary as the union of the vocabulary and all words appearing in the source dialogue, and obtain the following probability distribution over the extended vocabulary:

P(w) = p_gen P_vocab(w) + (1 - p_gen) \sum_{i: w_i = w} a^t_i   (18)

This pointer-generator model has the advantage of being able to produce OOV words, unlike other seq2seq models that are restricted to their pre-set vocabulary.
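The mixture in Eq. (18) can be implemented with a scatter-add over the extended vocabulary; below is a minimal sketch with our own variable names:

```python
import torch

def extended_distribution(p_vocab, attn, p_gen, src_ext_ids, n_src_oov):
    """P(w) = p_gen * P_vocab(w) + (1 - p_gen) * sum_{i: w_i = w} a_i  (Eq. 18).

    p_vocab:     (B, V)      vocabulary distribution, Eq. (16)
    attn:        (B, L_src)  encoder-memory attention distribution a^t
    p_gen:       (B, 1)      generation probability, Eq. (17)
    src_ext_ids: (B, L_src)  long tensor of source-token ids in the extended vocab
    n_src_oov:   number of source-only (OOV) words in this batch
    """
    batch_size = p_vocab.size(0)
    dist = torch.cat([p_gen * p_vocab,
                      p_vocab.new_zeros(batch_size, n_src_oov)], dim=-1)
    # add the copy probability mass onto each source token's extended-vocab id
    dist.scatter_add_(1, src_ext_ids, (1.0 - p_gen) * attn)
    return dist                                   # (B, V + n_src_oov)
```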

During training, the loss for timestep t is the negative log-likelihood of the target word w*_t for that timestep:

loss_t = -\log P(w^*_t)   (19)

and the overall sequence loss is:

loss = \frac{1}{T} \sum_{t=1}^{T} loss_t   (20)
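A corresponding loss sketch (padding positions are ignored here for brevity):

```python
import torch

def description_nll(extended_dists: torch.Tensor, target_ids: torch.Tensor) -> torch.Tensor:
    """Average negative log-likelihood of the gold description, Eqs. (19)-(20).

    extended_dists: (B, T, V_ext) P(w) at each decoder timestep
    target_ids:     (B, T)        gold word ids in the extended vocabulary
    """
    probs = extended_dists.gather(2, target_ids.unsqueeze(-1)).squeeze(-1)   # (B, T)
    return -(probs + 1e-12).log().mean()
```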

Experiments

In this section, we describe the dataset, experimental setup, evaluation metrics and the results of our experiments.

Dataset

The Dial2Desc dataset is based on two public data resources:

VisDial [Das et al.2017]: Visual Dialog is a task that requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Using images from the MSCOCO dataset [Lin et al.2014], the authors paired two workers on AMT to chat with each other in real time and build dialogues that (1) have temporal continuity, (2) are grounded in the image, and (3) mimic natural 'conversational' exchanges. VisDial v0.9 has been released; it contains one dialog with 10 question-answer pairs for each of ~120k COCO images, for a total of ~1.2M dialog question-answer pairs, and every dialogue in VisDial has 10 turns.

MSCOCO [Lin et al.2014]: this dataset contains human-annotated captions for over 120K images. Each image has five captions from five different annotators. It is a standard benchmark for image caption generation. In the majority of cases, annotators describe the most prominent object or action in an image, which makes this dataset suitable for our setting.

The dialogues from VisDial and the captions from MSCOCO are thus grounded in the same image set. The intuition is that when two speakers are talking about an object or a scene, they have a clear picture in mind, which can be described with a higher-abstraction-level caption.

To build our Dial2Desc dataset, we collected all the dialogues from VisDial and retrieved the corresponding image captions from the MSCOCO dataset. We then selected distinct captions with their attached dialogues to create dialogue-to-description pairs. In total, we obtained 122,621 pairs, which we split into train/dev/test sets. Some statistics are shown in Table 2.
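The pairing step amounts to joining the two resources on the COCO image id. The sketch below is only illustrative: the file names and JSON layouts are our assumptions about the public VisDial v0.9 and MSCOCO caption releases, and it keeps every caption per image rather than reproducing the exact de-duplication used for the dataset.

```python
import json
from collections import defaultdict

# Assumed file names and layouts for the public releases (not specified in the paper).
with open("visdial_0.9_train.json") as f:
    visdial = json.load(f)["data"]
with open("annotations/captions_train2014.json") as f:
    coco = json.load(f)

# image_id -> list of captions
captions = defaultdict(list)
for ann in coco["annotations"]:
    captions[ann["image_id"]].append(ann["caption"])

questions, answers = visdial["questions"], visdial["answers"]
pairs = []
for dlg in visdial["dialogs"]:
    # each turn stores indices into the shared question/answer lists (assumed layout)
    turns = [(questions[t["question"]], answers[t["answer"]]) for t in dlg["dialog"]]
    for caption in captions.get(dlg["image_id"], []):
        pairs.append({"dialogue": turns, "description": caption})

print(len(pairs), "dialogue-description pairs")
```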

Furthermore, for the test data, we collected five descriptions in total for each dialogue to ensure the stability of the evaluation.

Split    #samples    mean #dialog tokens    mean #desc tokens
train    98,256      122.8                  10.59
dev      12,282      122.7                  10.6
test     12,083      122.6                  10.6
Table 2: An overview of the Dial2Desc dataset

Baselines

We empirically find that unsupervised summarization methods such as Maximal Marginal Relevance (MMR) cannot reach good results because our ground-truth descriptions are highly abstractive. We therefore compare our method with several neural generative approaches:

Model BLEU-1 BLEU-2 BLEU-3 BLEU-4 ROUGE-L METEOR CIDEr
Attn-Seq2seq(20k vocab) 66.2 48.4 34.4 24.4 47.8 22.6 82.2
Attn-Seq2seq(28k vocab) 65.0 47.7 33.8 23.9 46.7 22.3 80.5
PGN 67.5 49.9 35.4 25.1 48.4 22.9 85.6
Onmt-brnn 68.0 50.6 36.4 26.0 49.0 23.2 86.5
Onmt-transformer 67.2 50.2 35.9 25.5 49.2 22.9 84.9
Our model 69.6 53.1 39.0 28.4 50.6 24.2 94.0
Table 3: Performance comparison of the proposed method with other methods on Dial2Desc dataset
  • Attn-Seq2seq is the base model described in [See, Liu, and Manning2017], where the encoder is a single-layer bidirectional LSTM producing a sequence of encoder hidden states and the decoder is a single-layer unidirectional LSTM that exploits the information from the encoder hidden states via an attention mechanism.

  • PGN is a hybrid pointer-generator network that can copy words from the source text via pointing [See, Liu, and Manning2017]. Implementations of Attn-Seq2seq and PGN can be found at https://github.com/abisee/pointer-generator.

  • Onmt-brnn is similar to PGN: it uses a bidirectional LSTM to encode dialogues and a unidirectional LSTM with a copy mechanism to generate descriptions. However, the implementation details differ slightly from PGN.

  • Onmt-transformer uses the transformer framework [Vaswani et al.2017] (the state-of-the-art model in machine translation) combined with a copy generator. Implementations of Onmt-brnn and Onmt-transformer can be found at https://github.com/OpenNMT/OpenNMT-py.

Experimental setup

We use a vocabulary of 20k words, shared by both the source (dialogues) and the target (descriptions) for all models, to make use of the copy mechanism. For Attn-Seq2seq, we also try a larger vocabulary of 28k words (almost the full vocabulary of the training data). For all RNN-based models, 256-dimensional RNN hidden states and 128-dimensional word embeddings are used, and we train with Adagrad [Duchi, Hazan, and Singer2011] with a learning rate of 0.15 and an initial accumulator value of 0.1. For Onmt-transformer and our model, the dimensions of the hidden states and word embeddings are both set to 256. Following [Vaswani et al.2017], we use the Adam optimizer [Kingma and Ba2014] with the β_1 and β_2 values used in that work, and the number of warmup steps is set to 8000. Word embeddings are learned from scratch during training rather than initialized from pre-trained vectors.

During training and at test time, we truncate the utterance of each turn to 20 tokens and limit the length of the description to 5-15 tokens. We use PyTorch for all experiments, and all models are trained on a GTX 1080Ti GPU with a batch size of 16 for RNN-based models and 4096 for transformer-based models.

Evaluation Metrics

As in the image caption task, we use several unsupervised automated NLG (Natural Language Generation) metrics to evaluate the baselines and our model: BLEU [Papineni et al.2002], ROUGE-L [Lin2004], METEOR [Lavie and Agarwal2007], and CIDEr [Vedantam, Zitnick, and Parikh2015]. When computing CIDEr, we compute IDF values using the provided reference sentences to adapt the metric to our setting, which differs from the image caption task. We use nlg-eval (https://github.com/Maluuba/nlg-eval) to conduct the evaluation.
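A hedged usage sketch of this evaluation step, assuming the file-based interface described in the nlg-eval repository; the file paths below are placeholders, with one line per test dialogue in the hypothesis file and one line-aligned reference file per annotator.

```python
# Assumed interface of nlg-eval (https://github.com/Maluuba/nlg-eval); paths are placeholders.
from nlgeval import compute_metrics

metrics = compute_metrics(
    hypothesis="outputs/test_hypotheses.txt",
    references=[f"data/test_references_{i}.txt" for i in range(5)])
print(metrics)  # BLEU-1..4, METEOR, ROUGE_L, CIDEr, ...
```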

Figure 3: The influence of beam size K on Onmt-brnn and our model
Figure 4: Co-attention between utterances from two speakers. The summation of each row is equal to 1
Figure 5: A case study

Results

We perform experiments on the Dial2Desc dataset and report both qualitative and quantitative results of our approach.

Quantitative Results

The results of the baselines and our model are listed in Table 3. As we can see, we obtain a significant improvement over the baselines. We find that the vocabulary is very important for these neural generative models. Attn-Seq2seq, which has no other way to generate OOV words, performs worst, and in fact the larger vocabulary does not seem to help. When the copy mechanism is applied, models such as PGN and Onmt-brnn gain a very large improvement over Attn-Seq2seq. Onmt-transformer with a copy generator performs about as well as the RNN-based approaches: it reaches better scores on some metrics such as ROUGE-L, but lower scores than PGN and Onmt-brnn on others. Our model, however, takes advantage of the interaction information between utterances and performs best on all metrics (it achieves improvements of 9.8% and 10.8% in CIDEr over PGN and Onmt-transformer, respectively), even though most of its components are similar to the transformer. We also tried applying a coverage mechanism to all approaches, but found that it hurt performance considerably.

We now analyze the influence of the beam size at test time. We contrast the Onmt-brnn model with our model over a range of beam sizes; the results are depicted in Figure 3. Our model needs a larger beam size than Onmt-brnn to reach its optimal results, and it is more stable. We suppose our model needs a larger search space because it uses a transformer decoder instead of an RNN.

Qualitative Analysis

Given two utterances from two different speakers:

Utterance A: how many are there

Utterance B: several, it is in a parking garage at least 10

Figure 4 visualizes the corresponding co-attention in the utterance interaction layer, consisting of two parts: (1) words of utterance A attending to utterance B; (2) words of utterance B attending to utterance A. In the left part, several, at, least, and 10 receive more attention when many is the query. This is reasonable because those number-related words are exactly the answers to how many. On the other hand, when several or other words are treated as queries, many receives more attention because many is the most informative word in utterance A.

Here we also provide a qualitative example of our experiments, shown in Figure 5. The dialogue consists of 10 turns, most of which are question-answer pairs. Next to the dialogue is the attached image from the MSCOCO dataset (the image is shown only to help the reader understand the case; we do not use any images in our setting). Several descriptions, including the ground truth and some system outputs, are placed below. The speakers are talking about a man sitting behind a table on which a group of different wines and a plate of grapes are placed. Speaker A keeps asking about details of the picture while speaker B keeps answering those questions. However, unlike in question answering, the questions pop up sequentially and a question may contain a word like they (Turn 2, A) that connects it to previous questions. From the results, we find that all the neural generative models perform reasonably and generate a decent description of the dialogue. The dialogue contains some key information: a man, a table, wine glasses with different wines, and a plate of grapes. All of the generated descriptions miss the grapes. Attn-Seq2seq and PGN generate some redundant or wrong information, such as woman. Onmt-brnn does not mention wine glasses and misuses the word plate. Attn-Seq2seq, PGN, and Onmt-transformer miss the wines in the wine glasses. The description generated by our model, however, is more accurate and comprehensive.

Conclusion

In this work, we propose a new task named Dial2Desc, which encodes an input dialogue and decodes a high-abstraction-level description. Unlike previous conversation summarization tasks, we focus on the object or the action that the speakers are talking about instead of maintaining the natural flow of the given conversation. We link two open-source datasets to create a well-aligned dialogue-to-description dataset, Dial2Desc. Furthermore, we propose a novel neural attentive model consisting of an enhanced interaction dialogue encoder and a transformer-pointer generator. Results on our Dial2Desc dataset demonstrate the effectiveness of the proposed method.

References

  • [Anderson et al.2017] Anderson, P.; Fernando, B.; Johnson, M.; and Gould, S. 2017. Guided open vocabulary image captioning with constrained beam search. In EMNLP, 936–945.
  • [Ba, Kiros, and Hinton2016] Ba, L. J.; Kiros, R.; and Hinton, G. E. 2016. Layer normalization. CoRR abs/1607.06450.
  • [Bahdanau, Cho, and Bengio2014] Bahdanau, D.; Cho, K.; and Bengio, Y. 2014. Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473.
  • [Banerjee, Mitra, and Sugiyama2015] Banerjee, S.; Mitra, P.; and Sugiyama, K. 2015. Multi-document abstractive summarization using ilp based multi-sentence compression. In IJCAI, 1208–1214.
  • [Chen et al.2017] Chen, Q.; Zhu, X.; Ling, Z.; Wei, S.; Jiang, H.; and Inkpen, D. 2017. Enhanced LSTM for natural language inference. In ACL, 1657–1668.
  • [Chopra, Auli, and Rush2016] Chopra, S.; Auli, M.; and Rush, A. M. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In NAACL HLT, 93–98.
  • [Das et al.2017] Das, A.; Kottur, S.; Gupta, K.; Singh, A.; Yadav, D.; Moura, J. M.; Parikh, D.; and Batra, D. 2017. Visual Dialog. In CVPR.
  • [Duchi, Hazan, and Singer2011] Duchi, J. C.; Hazan, E.; and Singer, Y. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12.
  • [Graves, Fernández, and Schmidhuber2005] Graves, A.; Fernández, S.; and Schmidhuber, J. 2005. Bidirectional LSTM networks for improved phoneme classification and recognition. In ICANN, 799–804.
  • [He et al.2016] He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In CVPR, 770–778.
  • [Hermann et al.2015] Hermann, K. M.; Kociský, T.; Grefenstette, E.; Espeholt, L.; Kay, W.; Suleyman, M.; and Blunsom, P. 2015. Teaching machines to read and comprehend. In NIPS, 1693–1701.
  • [Hochreiter and Schmidhuber1997] Hochreiter, S., and Schmidhuber, J. 1997. Long short-term memory. Neural Computation 9(8):1735–1780.
  • [Huang et al.2017] Huang, G.; Liu, Z.; van der Maaten, L.; and Weinberger, K. Q. 2017. Densely connected convolutional networks. In CVPR, 2261–2269.
  • [Kingma and Ba2014] Kingma, D. P., and Ba, J. 2014. Adam: A method for stochastic optimization. CoRR abs/1412.6980.
  • [Lavie and Agarwal2007] Lavie, A., and Agarwal, A. 2007. METEOR: an automatic metric for MT evaluation with high levels of correlation with human judgments. In WMT@ACL, 228–231.
  • [Lin et al.2014] Lin, T.; Maire, M.; Belongie, S. J.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft COCO: common objects in context. In ECCV, 740–755.
  • [Lin2004] Lin, C.-Y. 2004. Rouge: A package for automatic evaluation of summaries.
  • [Maskey and Hirschberg2005] Maskey, S., and Hirschberg, J. 2005. Comparing lexical, acoustic/prosodic, structural and discourse features for speech summarization. In INTERSPEECH, 621–624.
  • [Murray and Carenini2008] Murray, G., and Carenini, G. 2008. Summarizing spoken and written conversations. In EMNLP, 773–782.
  • [Nallapati et al.2016] Nallapati, R.; Zhou, B.; dos Santos, C. N.; Gülçehre, Ç.; and Xiang, B. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. In SIGNLL.
  • [Nenkova and Bagga2003] Nenkova, A., and Bagga, A. 2003. Facilitating email thread access by extractive summary generation. In RANLP.
  • [Oya et al.2014] Oya, T.; Mehdad, Y.; Carenini, G.; and Ng, R. 2014. A template-based abstractive meeting summarization: Leveraging summary and source text relationships. In INLG.
  • [Papineni et al.2002] Papineni, K.; Roukos, S.; Ward, T.; and Zhu, W. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL, 311–318.
  • [Paulus, Xiong, and Socher2018] Paulus, R.; Xiong, C.; and Socher, R. 2018. A deep reinforced model for abstractive summarization. In International Conference on Learning Representations.
  • [Rambow et al.2004] Rambow, O.; Shrestha, L.; Chen, J.; and Laurdisen, C. 2004. Summarizing email threads. In HLT-NAACL.
  • [Riedhammer, Favre, and Hakkani-Tür2010] Riedhammer, K.; Favre, B.; and Hakkani-Tür, D. 2010. Long story short–global unsupervised models for keyphrase based meeting summarization. Speech Communication 52(10).
  • [Rush, Chopra, and Weston2015] Rush, A. M.; Chopra, S.; and Weston, J. 2015. A neural attention model for abstractive sentence summarization. In EMNLP, 379–389.
  • [See, Liu, and Manning2017] See, A.; Liu, P. J.; and Manning, C. D. 2017. Get to the point: Summarization with pointer-generator networks. In ACL, 1073–1083.
  • [Shang et al.2018] Shang, G.; Ding, W.; Zhang, Z.; Tixier, A.; Meladianos, P.; Vazirgiannis, M.; and Lorré, J.-P. 2018. Unsupervised abstractive meeting summarization with multi-sentence compression and budgeted submodular maximization. In ACL.
  • [Sutskever, Vinyals, and Le2014] Sutskever, I.; Vinyals, O.; and Le, Q. V. 2014. Sequence to sequence learning with neural networks. In Ghahramani, Z.; Welling, M.; Cortes, C.; Lawrence, N. D.; and Weinberger, K. Q., eds., NIPS. Curran Associates, Inc. 3104–3112.
  • [Vaswani et al.2017] Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. Attention is all you need. In NIPS, 6000–6010.
  • [Vedantam, Zitnick, and Parikh2015] Vedantam, R.; Zitnick, C. L.; and Parikh, D. 2015. Cider: Consensus-based image description evaluation. In CVPR.
  • [Vinyals et al.2015] Vinyals, O.; Toshev, A.; Bengio, S.; and Erhan, D. 2015. Show and tell: A neural image caption generator. In CVPR.
  • [Vinyals, Fortunato, and Jaitly2015] Vinyals, O.; Fortunato, M.; and Jaitly, N. 2015. Pointer networks. In NIPS, 2692–2700.
  • [Xie, Liu, and Lin2008] Xie, S.; Liu, Y.; and Lin, H. 2008. Evaluating the effectiveness of features and sampling in extractive meeting summarization. In SLT, 157–160. IEEE.
  • [Xu et al.] Xu, K.; Ba, J.; Kiros, R.; Cho, K.; Courville, A. C.; Salakhutdinov, R.; Zemel, R. S.; and Bengio, Y. Show, attend and tell: Neural image caption generation with visual attention. In ICML.
  • [You et al.2016] You, Q.; Jin, H.; Wang, Z.; Fang, C.; and Luo, J. 2016. Image captioning with semantic attention. In CVPR.
  • [Zechner2001] Zechner, K. 2001. Automatic generation of concise summaries of spoken dialogues in unrestricted domains. In SIGIR.
  • [Zhang, Chan, and Fung2007] Zhang, J. J.; Chan, R. H. Y.; and Fung, P. 2007. Improving lecture speech summarization using rhetorical information. In ASRU, 195–200.