On Automatic Text Extractive Summarization Based on Graph and Pre-trained Language Model Attention

10/10/2021
by   Yuan-Ching Lin, et al.

Representing text as a graph to solve the summarization task has been studied for more than ten years. However, even with the rise of attention mechanisms and the Transformer, the connection between attention and graphs remains poorly understood. We demonstrate that text structure can be analyzed through the attention matrix, whose weights capture the relations between sentences. In this work, we show that the attention matrix produced by a pre-trained language model can serve as the adjacency matrix of a graph convolutional network. Our model achieves competitive results on two datasets under the ROUGE metric, and, with fewer parameters, it reduces the computational resources required for training and inference.
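To make the core idea concrete, the sketch below shows one plausible way to turn a pre-trained language model's attention into a sentence-level adjacency matrix for a GCN. The abstract does not specify the model, the layer/head aggregation, or the pooling scheme, so the choices here (bert-base-uncased, averaging attention over all layers and heads, [SEP]-delimited sentence blocks, and a single GCN layer with a linear scorer) are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch: use a pre-trained LM's attention matrix as the adjacency
# matrix of a GCN that scores sentences for extraction. All architectural
# choices below are assumptions for illustration.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

sentences = [
    "Graphs have long been used for extractive summarization.",
    "Transformers compute attention weights between tokens.",
    "Those weights can serve as edges between sentences.",
]
# Encode the document as one sequence; [SEP] marks sentence boundaries.
enc = tokenizer(" [SEP] ".join(sentences), return_tensors="pt", truncation=True)
with torch.no_grad():
    out = model(**enc)

# Average attention over all layers and heads -> (seq_len, seq_len).
att = torch.stack(out.attentions).mean(dim=(0, 2)).squeeze(0)

# Pool token-token attention into a sentence-sentence adjacency matrix by
# averaging the attention block belonging to each sentence pair (assumption).
sep_pos = (enc["input_ids"][0] == tokenizer.sep_token_id).nonzero().flatten().tolist()
bounds, start = [], 1  # skip [CLS]
for p in sep_pos:
    bounds.append((start, p))
    start = p + 1
n = len(bounds)
adj = torch.zeros(n, n)
for i, (a, b) in enumerate(bounds):
    for j, (c, d) in enumerate(bounds):
        adj[i, j] = att[a:b, c:d].mean()
adj = adj / adj.sum(dim=1, keepdim=True)  # row-normalize the graph edges

# Sentence features: mean of token hidden states per sentence.
hid = out.last_hidden_state.squeeze(0)
feats = torch.stack([hid[a:b].mean(dim=0) for a, b in bounds])

# One GCN layer (A X W) followed by a linear scorer per sentence.
gcn = torch.nn.Linear(feats.size(-1), 128)
scorer = torch.nn.Linear(128, 1)
scores = torch.sigmoid(scorer(torch.relu(gcn(adj @ feats)))).squeeze(-1)
print(scores)  # higher score = sentence more likely selected for the summary
```

Because the adjacency matrix comes from frozen attention weights rather than learned edge parameters, only the small GCN and scorer are trained, which is consistent with the abstract's claim of fewer parameters and lower training cost.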


Related research

05/21/2019
Sample Efficient Text Summarization Using a Single Pre-Trained Transformer
Language model (LM) pre-training has resulted in impressive performance ...

09/02/2019
Enriching Medical Terminology Knowledge Bases via Pre-trained Language Model and Graph Convolutional Network
Enriching existing medical terminology knowledge bases (KBs) is an impor...

10/12/2021
HETFORMER: Heterogeneous Transformer with Sparse Attention for Long-Text Extractive Summarization
To capture the semantic graph structure from raw text, most existing sum...

03/29/2020
Abstractive Text Summarization based on Language Model Conditioning and Locality Modeling
We explore to what extent knowledge about the pre-trained language model...

11/16/2021
Meeting Summarization with Pre-training and Clustering Methods
Automatic meeting summarization is becoming increasingly popular these d...

05/18/2023
MolXPT: Wrapping Molecules with Text for Generative Pre-training
Generative pre-trained Transformer (GPT) has demonstrated its great succ...

03/17/2022
HiStruct+: Improving Extractive Text Summarization with Hierarchical Structure Information
Transformer-based language models usually treat texts as linear sequence...
