Analysis of GraphSum's Attention Weights to Improve the Explainability of Multi-Document Summarization

05/19/2021
by M. Lautaro Hickmann, et al.

Modern multi-document summarization (MDS) methods are based on transformer architectures. They generate state-of-the-art summaries but lack explainability. We focus on graph-based transformer models for MDS, as they have recently gained popularity, and aim to improve their explainability by analyzing their attention weights. In a graph-based MDS model such as GraphSum, vertices represent the textual units, while the edges form a similarity graph over those units. We compare GraphSum's performance using different textual units, i.e., sentences versus paragraphs, on two news benchmark datasets, namely WikiSum and MultiNews. Our experiments show that paragraph-level representations provide the best summarization performance. Thus, we subsequently focus on analyzing the paragraph-level attention weights of GraphSum's multi-head attention and decoding layers in order to improve the explainability of a transformer-based MDS model. As a reference metric, we calculate the ROUGE scores between the input paragraphs and each sentence in the generated summary, which indicate source origin information via text similarity. We observe a high correlation between the attention weights and this reference metric, especially in the later decoding layers of the transformer architecture. Finally, we investigate whether the generated summaries follow a pattern of positional bias by extracting which paragraph provided the most information for each sentence of the generated summary. Our results show a high correlation between the position in the summary and the source origin.
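The reference metric and its comparison with the attention weights can be sketched as follows. This is a minimal illustration, not the authors' exact pipeline: it assumes the rouge-score, scipy, and numpy Python packages, and it assumes the decoder's paragraph-level attention has already been extracted and aggregated (e.g., averaged over heads for one decoding layer) into a matrix of shape (summary sentences x input paragraphs).

    # Hedged sketch: ROUGE-based source attribution vs. attention weights.
    # Requires: pip install rouge-score scipy numpy
    import numpy as np
    from rouge_score import rouge_scorer
    from scipy.stats import spearmanr

    def rouge_reference_matrix(paragraphs, summary_sentences):
        """ROUGE-L F1 for every (summary sentence, input paragraph) pair."""
        scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
        matrix = np.zeros((len(summary_sentences), len(paragraphs)))
        for i, sent in enumerate(summary_sentences):
            for j, para in enumerate(paragraphs):
                matrix[i, j] = scorer.score(para, sent)["rougeL"].fmeasure
        return matrix

    def attention_vs_rouge(attention, rouge_matrix):
        """Spearman correlation between attention weights and ROUGE scores,
        computed per summary sentence and averaged. `attention` is a
        hypothetical matrix of shape (num_summary_sentences, num_paragraphs)."""
        corrs = [spearmanr(attention[i], rouge_matrix[i]).correlation
                 for i in range(rouge_matrix.shape[0])]
        return float(np.nanmean(corrs))

    # Toy usage with made-up data
    paragraphs = ["The storm hit the coast on Monday.",
                  "Officials ordered evacuations of low-lying areas.",
                  "Local sports results were announced on Sunday."]
    summary_sents = ["A storm struck the coast and evacuations were ordered."]
    rouge_m = rouge_reference_matrix(paragraphs, summary_sents)
    attn = np.array([[0.5, 0.4, 0.1]])  # hypothetical attention weights
    print(rouge_m)
    print(attention_vs_rouge(attn, rouge_m))

A positional-bias analysis in the spirit of the abstract would then take, for each summary sentence, the argmax over the ROUGE row to identify the most informative source paragraph and compare its index with the sentence's position in the summary.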

