Incorporating Linguistic Knowledge for Abstractive Multi-document Summarization

09/23/2021
by   Congbo Ma, et al.

In natural language processing tasks, linguistic knowledge plays an important role in helping models learn better representations and in guiding natural language generation. In this work, we develop a neural abstractive multi-document summarization (MDS) model that leverages dependency parsing to capture cross-positional dependencies and grammatical structures. More concretely, we encode the dependency information into a linguistic-guided attention mechanism and fuse it with multi-head attention for better feature representation. With the help of these linguistic signals, sentence-level relations can be correctly captured, thus improving MDS performance. Our model has two versions, based on Flat-Transformer and Hierarchical Transformer respectively. Empirical studies on both versions demonstrate that this simple but effective method outperforms existing works on the benchmark dataset. Extensive analyses examine different settings and configurations of the proposed model, providing a useful reference for the community.
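As a minimal sketch of the general idea (the function name, the additive fusion, and the `alpha` weight are assumptions for illustration, not the authors' exact formulation), dependency information can guide attention by biasing standard scaled dot-product attention logits toward token pairs that are linked in the dependency parse:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def linguistic_guided_attention(Q, K, V, dep_adj, alpha=1.0):
    """Scaled dot-product attention whose logits are biased toward
    token pairs connected in the dependency parse.

    dep_adj[i, j] = 1 when token j is the head or a child of token i.
    alpha controls how strongly the parse influences attention
    (a hypothetical fusion-by-addition choice, for illustration).
    """
    d_k = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d_k)      # standard attention scores
    logits = logits + alpha * dep_adj    # inject the linguistic signal
    weights = softmax(logits, axis=-1)
    return weights @ V, weights

# Toy example: 4 tokens with 8-dim features; dependency arcs 0-1, 1-2, 2-3.
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
dep_adj = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    dep_adj[i, j] = dep_adj[j, i] = 1.0

out, w = linguistic_guided_attention(Q, K, V, dep_adj, alpha=2.0)
print(out.shape)  # each token's output is still an 8-dim vector
```

In a multi-head setting, one such guided head (or a per-head bias) would be fused with the remaining standard heads, so the model keeps both free-form and parse-constrained views of the input.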

Related research

- 09/13/2022: Document-aware Positional Encoding and Linguistic-guided Encoding for Abstractive Multi-document Summarization. "One key challenge in multi-document summarization is to capture the rela..."
- 08/16/2022: Parallel Hierarchical Transformer with Attention Alignment for Abstractive Multi-Document Summarization. "In comparison to single-document summarization, abstractive Multi-Docume..."
- 05/20/2018: A Hierarchical Structured Self-Attentive Model for Extractive Document Summarization (HSSAS). "The recent advance in neural network architecture and training algorithm..."
- 05/30/2019: Hierarchical Transformers for Multi-Document Summarization. "In this paper, we develop a neural summarization model which can effecti..."
- 10/09/2022: HEGEL: Hypergraph Transformer for Long Document Summarization. "Extractive summarization for long documents is challenging due to the ex..."
- 09/11/2021: HYDRA – Hyper Dependency Representation Attentions. "Attention is all we need as long as we have enough data. Even so, it is ..."
- 09/12/2016: Knowledge as a Teacher: Knowledge-Guided Structural Attention Networks. "Natural language understanding (NLU) is a core component of a spoken dia..."
