Structured Neural Summarization

11/05/2018
by Patrick Fernandes, et al.

Summarization of long sequences into a concise statement is a core problem in natural language processing, requiring non-trivial understanding of the input. Based on the promising results of graph neural networks on highly structured data, we develop a framework to extend existing sequence encoders with a graph component that can reason about long-distance relationships in weakly structured data such as text. In an extensive evaluation, we show that the resulting hybrid sequence-graph models outperform both pure sequence models and pure graph models on a range of summarization tasks.
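The core idea of the abstract, augmenting a sequence encoder with a graph component that passes messages along long-distance edges, can be sketched in a few lines. This is a minimal illustrative toy, not the authors' implementation: the function names, the simplified recurrent update standing in for a real sequence encoder, and the single averaged message-passing round standing in for a gated graph network are all assumptions made for exposition.

```python
import numpy as np

def hybrid_encode(token_emb, adj, W_seq, W_msg):
    """Illustrative hybrid sequence-graph encoding (not the paper's exact model).

    token_emb : (n, d) array of token embeddings
    adj       : (n, n) 0/1 adjacency matrix of long-distance edges
    W_seq     : (d, d) weights for a toy left-to-right recurrent pass
    W_msg     : (d, d) weights for one round of graph message passing
    """
    n, d = token_emb.shape

    # Sequence pass: a simple recurrent update, standing in for an RNN encoder.
    h = np.zeros((n, d))
    prev = np.zeros(d)
    for i in range(n):
        prev = np.tanh(token_emb[i] + prev @ W_seq)
        h[i] = prev

    # Graph pass: each node averages transformed messages from its neighbors,
    # letting distant-but-connected tokens exchange information directly.
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    messages = (adj @ (h @ W_msg)) / deg

    # Combine sequential state with graph messages.
    return np.tanh(h + messages)
```

In the paper's framework the graph edges would encode linguistic or structural relationships (e.g. coreference or syntax), and the sequence and graph components are trained jointly; the sketch above only shows how the two signals can be composed at the representation level.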


