Parallel Hierarchical Transformer with Attention Alignment for Abstractive Multi-Document Summarization

08/16/2022
by Ye Ma, et al.

Compared with single-document summarization, abstractive Multi-Document Summarization (MDS) poses challenges in representing and covering its lengthy, interlinked sources. This study develops a Parallel Hierarchical Transformer (PHT) with attention alignment for MDS. By incorporating word- and paragraph-level multi-head attention, the hierarchical architecture of PHT captures dependencies at both the token and the document level. To guide decoding toward better coverage of the source documents, an attention-alignment mechanism is then introduced to calibrate beam search with predicted optimal attention distributions. A comprehensive evaluation on the WikiSum data tests the improvements the proposed architecture brings to MDS. By better handling intra- and cross-document information, our hierarchical model generates summaries of higher quality than other Transformer-based baselines, as judged by both ROUGE and human evaluation, at relatively low computational cost.
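The abstract names two mechanisms: hierarchical word- and paragraph-level attention, and attention-aligned beam search. The following is a minimal Python (PyTorch) sketch of both ideas, not the authors' implementation; the module and function names (ParallelHierarchicalEncoder, attention_aligned_score), the mean-pooling of paragraph representations, the KL divergence as the alignment metric, and the weight lam are all illustrative assumptions rather than the paper's specification.

    # Minimal sketch of the two mechanisms described above; pooling choice
    # and the KL-based alignment penalty are assumptions for illustration,
    # not the paper's exact formulation.
    import torch
    import torch.nn as nn


    class ParallelHierarchicalEncoder(nn.Module):
        """Word-level self-attention within each paragraph, then
        paragraph-level self-attention across paragraph representations."""

        def __init__(self, d_model=512, n_heads=8, n_layers=2):
            super().__init__()
            self.word_encoder = nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
                n_layers)
            self.para_encoder = nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
                n_layers)

        def forward(self, tok_emb):
            # tok_emb: (n_paragraphs, n_tokens, d_model) for one source cluster.
            word_states = self.word_encoder(tok_emb)   # token-level dependencies
            para_repr = word_states.mean(dim=1)        # pool each paragraph
            para_states = self.para_encoder(para_repr.unsqueeze(0)).squeeze(0)
            return word_states, para_states            # both granularities


    def attention_aligned_score(log_prob, attn_so_far, predicted_attn, lam=0.5):
        """Calibrate a beam-hypothesis score: penalize hypotheses whose
        accumulated paragraph-level attention diverges (here, in KL) from
        the predicted optimal attention distribution."""
        p = attn_so_far / attn_so_far.sum()
        q = predicted_attn / predicted_attn.sum()
        kl = torch.sum(p * torch.log((p + 1e-9) / (q + 1e-9)))
        return log_prob - lam * kl


    if __name__ == "__main__":
        enc = ParallelHierarchicalEncoder()
        word_h, para_h = enc(torch.randn(4, 50, 512))  # 4 paragraphs, 50 tokens
        print(word_h.shape, para_h.shape)  # (4, 50, 512) and (4, 512)

In this sketch the word encoder treats each paragraph as an independent sequence, while the paragraph encoder attends across the pooled paragraph vectors, giving the decoder access to both granularities; the alignment score would be applied per hypothesis at each beam-search step.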
