Text Summarization with Pretrained Encoders

08/22/2019
by Yang Liu, et al.

Bidirectional Encoder Representations from Transformers (BERT) represents the latest incarnation of pretrained language models which have recently advanced a wide range of natural language processing tasks. In this paper, we showcase how BERT can be usefully applied in text summarization and propose a general framework for both extractive and abstractive models. We introduce a novel document-level encoder based on BERT which is able to express the semantics of a document and obtain representations for its sentences. Our extractive model is built on top of this encoder by stacking several inter-sentence Transformer layers. For abstractive summarization, we propose a new fine-tuning schedule which adopts different optimizers for the encoder and the decoder as a means of alleviating the mismatch between the two (the former is pretrained while the latter is not). We also demonstrate that a two-staged fine-tuning approach can further boost the quality of the generated summaries. Experiments on three datasets show that our model achieves state-of-the-art results across the board in both extractive and abstractive settings. Our code is available at https://github.com/nlpyang/PreSumm
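The abstract names two concrete mechanisms: an extractive model that stacks inter-sentence Transformer layers on top of per-sentence BERT representations, and a fine-tuning schedule that gives the pretrained encoder and the randomly initialized decoder separate optimizers. The PyTorch sketches below illustrate both ideas; they are minimal reconstructions, not the authors' released code, and the class names (ExtractiveHead, TwoOptimizerTrainer) and every hyperparameter (hidden size, layer count, learning rates, warmup steps) are illustrative assumptions.

```python
# Minimal sketch (assumed, not the released PreSumm code): an extractive
# head that runs a small Transformer over per-sentence vectors -- e.g.
# the BERT outputs at each sentence's [CLS] position -- and scores each
# sentence for inclusion in the summary.
import torch
import torch.nn as nn

class ExtractiveHead(nn.Module):
    def __init__(self, hidden=768, num_layers=2, heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=heads, batch_first=True)
        self.inter_sentence = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.scorer = nn.Linear(hidden, 1)

    def forward(self, sent_vecs, pad_mask=None):
        # sent_vecs: (batch, num_sentences, hidden); pad_mask is True
        # at padded sentence slots.
        ctx = self.inter_sentence(sent_vecs, src_key_padding_mask=pad_mask)
        return torch.sigmoid(self.scorer(ctx)).squeeze(-1)  # (batch, num_sentences)
```

For abstractive fine-tuning, the mismatch the abstract mentions is handled by giving the two halves different learning-rate schedules: a gentle one for the encoder, so fine-tuning does not wash out the pretraining, and a more aggressive one for the freshly initialized decoder, so it can catch up. A sketch with two Adam optimizers and Noam-style warmup; the specific peak rates and warmup steps here are illustrative placeholders:

```python
from torch.optim import Adam

def noam_lr(step, warmup, base_lr):
    # Inverse-square-root decay with linear warmup.
    step = max(step, 1)
    return base_lr * min(step ** -0.5, step * warmup ** -1.5)

class TwoOptimizerTrainer:
    def __init__(self, encoder, decoder):
        self.enc_opt = Adam(encoder.parameters(), lr=0.0)  # pretrained half
        self.dec_opt = Adam(decoder.parameters(), lr=0.0)  # new half
        self.step_num = 0

    def step(self):
        self.step_num += 1
        # Smaller peak LR and longer warmup for the pretrained encoder;
        # larger peak LR and shorter warmup for the fresh decoder.
        for g in self.enc_opt.param_groups:
            g["lr"] = noam_lr(self.step_num, warmup=20000, base_lr=2e-3)
        for g in self.dec_opt.param_groups:
            g["lr"] = noam_lr(self.step_num, warmup=10000, base_lr=0.1)
        self.enc_opt.step()
        self.dec_opt.step()
        self.enc_opt.zero_grad()
        self.dec_opt.zero_grad()
```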


Related research

10/26/2021  s2s-ft: Fine-Tuning Pretrained Transformer Encoders for Sequence-to-Sequence Learning
Pretrained bidirectional Transformers, such as BERT, have achieved signi...

09/11/2020  UPB at SemEval-2020 Task 6: Pretrained Language Models for Definition Extraction
This work presents our contribution in the context of the 6th task of Se...

04/04/2023  San-BERT: Extractive Summarization for Sanskrit Documents using BERT and its variants
In this work, we develop language models for the Sanskrit language, name...

06/17/2021  DocNLI: A Large-scale Dataset for Document-level Natural Language Inference
Natural language inference (NLI) is formulated as a unified framework fo...

07/08/2022  Hidden Schema Networks
Most modern language models infer representations that, albeit powerful,...

09/13/2022  CNN-Trans-Enc: A CNN-Enhanced Transformer-Encoder On Top Of Static BERT representations for Document Classification
BERT achieves remarkable results in text classification tasks, it is yet...

10/03/2022  Probing of Quantitative Values in Abstractive Summarization Models
Abstractive text summarization has recently become a popular approach, b...
