Enriching Transformers with Structured Tensor-Product Representations for Abstractive Summarization

06/02/2021
by Yichen Jiang, et al.

Abstractive summarization, the task of generating a concise summary of input documents, requires: (1) reasoning over the source document to determine the salient pieces of information scattered across the long document, and (2) composing a cohesive text by reconstructing these salient facts into a shorter summary that faithfully reflects the complex relations connecting them. In this paper, we adapt the TP-TRANSFORMER (Schlag et al., 2019), an architecture that enriches the original Transformer (Vaswani et al., 2017) with the explicitly compositional Tensor Product Representation (TPR), to the task of abstractive summarization. The key feature of our model is a structural bias introduced by encoding two separate representations for each token: a role vector representing the syntactic structure and a filler vector representing the semantic content. The model then binds the role and filler vectors into the TPR as the layer output. We argue that these structured intermediate representations enable the model to take better control of the content (the salient facts) and the structure (the syntax that connects those facts) when generating the summary. Empirically, we show that our TP-TRANSFORMER significantly outperforms the Transformer and the original TP-TRANSFORMER on several abstractive summarization datasets in both automatic and human evaluations. On several syntactic and semantic probing tasks, we demonstrate the emergent structural information in the role vectors and the improved syntactic interpretability of the TPR layer outputs. Code and models are available at https://github.com/jiangycTarheel/TPT-Summ.
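The role/filler binding described above can be sketched in a few lines of PyTorch. The following is a minimal sketch, assuming the per-head Hadamard-product (elementwise) binding used by Schlag et al. (2019); the class, method, and parameter names are illustrative and not taken from the authors' released code at the repository above.

```python
import torch
import torch.nn as nn

class TPRMultiHeadAttention(nn.Module):
    """Multi-head attention whose per-head output (the 'filler') is bound
    to a learned per-token 'role' vector via a Hadamard product, in the
    spirit of the TP-Transformer binding of Schlag et al. (2019).
    Names and structure here are illustrative, not the authors' code."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        # Extra projection producing one role vector per token per head.
        self.r_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        B, T, _ = x.shape

        def split_heads(t: torch.Tensor) -> torch.Tensor:
            # (B, T, d_model) -> (B, n_heads, T, d_head)
            return t.view(B, T, self.n_heads, self.d_head).transpose(1, 2)

        q = split_heads(self.q_proj(x))
        k = split_heads(self.k_proj(x))
        v = split_heads(self.v_proj(x))
        roles = split_heads(self.r_proj(x))

        # Standard scaled dot-product attention yields the filler vectors.
        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5
        fillers = torch.softmax(scores, dim=-1) @ v

        # Binding step: elementwise (Hadamard) product of filler and role.
        bound = fillers * roles

        # Merge heads back: (B, n_heads, T, d_head) -> (B, T, d_model)
        bound = bound.transpose(1, 2).reshape(B, T, -1)
        return self.out_proj(bound)
```

Binding with a Hadamard product keeps the layer output the same width as a standard Transformer, so such a module can drop into an existing encoder-decoder stack; a full tensor (outer) product binding would instead square the per-head dimensionality.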


Related research

- MiRANews: Dataset and Benchmarks for Multi-Resource-Assisted News Summarization (09/22/2021): One of the most challenging aspects of current single-document news summ...
- SciSummPip: An Unsupervised Scientific Paper Summarization Pipeline (10/19/2020): The Scholarly Document Processing (SDP) workshop is to encourage more ef...
- Long Document Summarization in a Low Resource Setting using Pretrained Language Models (03/01/2021): Abstractive summarization is the task of compressing a long document int...
- Selective Attention Encoders by Syntactic Graph Convolutional Networks for Document Summarization (03/18/2020): Abstractive text summarization is a challenging task, and one need to de...
- Explaining Away Syntactic Structure in Semantic Document Representations (06/05/2018): Most generative document models act on bag-of-words input in an attempt ...
- Sparsifying Transformer Models with Differentiable Representation Pooling (09/10/2020): We propose a novel method to sparsify attention in the Transformer model...
- Logic and the 2-Simplicial Transformer (09/02/2019): We introduce the 2-simplicial Transformer, an extension of the Transform...
