Newsroom: A Dataset of 1.3 Million Summaries with Diverse Extractive Strategies

by Max Grusky, et al.

We present NEWSROOM, a summarization dataset of 1.3 million articles and summaries written by authors and editors in the newsrooms of 38 major news publications. Extracted from search and social media metadata published between 1998 and 2017, these high-quality summaries exhibit a high diversity of summarization styles. In particular, the summaries combine abstractive and extractive strategies, borrowing words and phrases from the articles at varying rates. We analyze the extraction strategies used in NEWSROOM summaries and compare them with other datasets to quantify the diversity and difficulty of our new data, and we train existing methods on the data to evaluate its utility and challenges. The dataset is available online at
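The "varying rates" of borrowing can be quantified by matching shared token spans between an article and its summary. As a rough illustration (a simplified greedy sketch, not the exact fragment-matching algorithm from the paper), one can compute a coverage score (fraction of summary tokens copied from the article) and a density score (average squared length of copied spans, which rewards long verbatim extractions):

```python
def extractive_fragments(article_tokens, summary_tokens):
    """Greedily find maximal token spans of the summary that also
    appear verbatim in the article (a simplified sketch)."""
    fragments = []
    i = 0
    while i < len(summary_tokens):
        best_len = 0
        j = 0
        while j < len(article_tokens):
            # Length of the match starting at summary[i] / article[j].
            k = 0
            while (i + k < len(summary_tokens)
                   and j + k < len(article_tokens)
                   and summary_tokens[i + k] == article_tokens[j + k]):
                k += 1
            best_len = max(best_len, k)
            j += max(k, 1)
        if best_len > 0:
            fragments.append(summary_tokens[i:i + best_len])
            i += best_len
        else:
            i += 1  # token not found in article; skip it
    return fragments

def coverage_and_density(article, summary):
    a, s = article.split(), summary.split()
    frags = extractive_fragments(a, s)
    coverage = sum(len(f) for f in frags) / len(s)
    density = sum(len(f) ** 2 for f in frags) / len(s)
    return coverage, density
```

For example, `coverage_and_density("the quick brown fox jumps", "the quick fox")` yields coverage 1.0 (every summary token is copied) but a density well below the summary length, since the copied spans are short; a fully extractive summary lifted as one contiguous span would maximize both.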


