Neural Latent Extractive Document Summarization

08/22/2018
by Xingxing Zhang, et al.

Extractive summarization models need sentence-level labels, which are usually created with rule-based methods because most summarization datasets contain only document-summary pairs. These labels may be suboptimal. We propose a latent-variable extractive model in which sentences are viewed as latent variables, and the sentences with activated variables are used to infer the gold summary. During training, the loss therefore comes directly from the gold summary. Experiments on the CNN/Daily Mail dataset show that our latent extractive model outperforms a strong extractive baseline trained on rule-based labels and performs competitively with several recent models.
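The training idea described above can be illustrated with a small sketch: each sentence gets an inclusion probability, binary latent variables are sampled from those probabilities, the sampled extract is scored against the gold summary, and the reward weights the sample's log-likelihood (a REINFORCE-style estimator). This is a minimal illustration, not the paper's implementation; the unigram-recall reward is a stand-in for ROUGE, and the function names are hypothetical.

```python
import math
import random


def unigram_recall(selected_sents, gold_summary):
    # Stand-in reward: fraction of gold-summary unigrams covered by the
    # selected sentences (a rough proxy for ROUGE-1 recall).
    gold = gold_summary.lower().split()
    pred = " ".join(selected_sents).lower().split()
    if not gold:
        return 0.0
    counts = {}
    for w in gold:
        counts[w] = counts.get(w, 0) + 1
    hits = 0
    for w in pred:
        if counts.get(w, 0) > 0:
            counts[w] -= 1
            hits += 1
    return hits / len(gold)


def reinforce_loss(sent_probs, sentences, gold_summary, rng):
    # Sample binary latent variables z_i ~ Bernoulli(p_i); sentences with
    # z_i = 1 form the predicted extract. The reward computed against the
    # gold summary weights the sample's log-likelihood, so no rule-based
    # sentence labels are needed.
    z = [1 if rng.random() < p else 0 for p in sent_probs]
    selected = [s for s, zi in zip(sentences, z) if zi]
    reward = unigram_recall(selected, gold_summary)
    log_prob = sum(math.log(p if zi else 1.0 - p)
                   for p, zi in zip(sent_probs, z))
    # Minimizing -reward * log_prob raises the probability of samples
    # that achieve high reward.
    return -reward * log_prob, z, reward
```

In a full model, `sent_probs` would come from a neural sentence encoder and the gradient of this loss would update its parameters; here the probabilities are just inputs to keep the sketch self-contained.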


