Do We Really Need That Many Parameters In Transformer For Extractive Summarization? Discourse Can Help!

12/03/2020
by Wen Xiao, et al.

The multi-head self-attention of popular transformer models is widely used within Natural Language Processing (NLP), including for the task of extractive summarization. With the goal of analyzing and pruning the parameter-heavy self-attention mechanism, multiple approaches have proposed more parameter-light self-attention alternatives. In this paper, we present a novel parameter-lean self-attention mechanism using discourse priors. Our new tree self-attention is based on document-level discourse information, extending the recently proposed "Synthesizer" framework with another lightweight alternative. Empirically, our tree self-attention approach achieves competitive ROUGE scores on the task of extractive summarization. Compared to the original single-head transformer model, the tree attention approach reaches similar performance at both the EDU and the sentence level, despite the significant reduction of parameters in the attention component. With a more balanced hyper-parameter setting, we further significantly outperform the 8-head transformer model at the sentence level while requiring an order of magnitude fewer parameters.
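To make the idea concrete, here is a minimal, hypothetical sketch (in PyTorch) of what a discourse-driven, parameter-lean attention layer could look like: the learned query/key projections of standard self-attention are replaced by attention scores derived from a precomputed discourse-tree distance matrix, leaving only a value projection and a single scalar temperature as learnable parameters. The module name, the use of raw tree distances, and all hyper-parameters are assumptions for illustration, not the authors' exact formulation.

```python
import torch
import torch.nn as nn


class TreeSelfAttention(nn.Module):
    """Illustrative, parameter-lean self-attention sketch (not the paper's exact layer).

    Standard self-attention learns query and key projections (two d_model x d_model
    matrices) to compute attention scores. Here the attention distribution is instead
    derived from a fixed, precomputed matrix of discourse-tree distances between
    units (EDUs or sentences), so only a value projection and a scalar temperature
    remain learnable.
    """

    def __init__(self, d_model: int):
        super().__init__()
        self.value = nn.Linear(d_model, d_model)          # value projection
        self.temperature = nn.Parameter(torch.ones(1))    # single learnable scalar

    def forward(self, x: torch.Tensor, tree_dist: torch.Tensor) -> torch.Tensor:
        # x:         (batch, n_units, d_model) unit representations
        # tree_dist: (batch, n_units, n_units) pairwise distances in the
        #            document-level discourse tree (precomputed, fixed)
        scores = -tree_dist / self.temperature.clamp(min=1e-4)  # closer units attend more
        attn = torch.softmax(scores, dim=-1)
        return attn @ self.value(x)


# Toy usage: 4 units with a symmetric chain-like tree-distance matrix
if __name__ == "__main__":
    x = torch.randn(1, 4, 16)
    dist = torch.tensor([[[0., 1., 2., 3.],
                          [1., 0., 1., 2.],
                          [2., 1., 0., 1.],
                          [3., 2., 1., 0.]]])
    out = TreeSelfAttention(16)(x, dist)
    print(out.shape)  # torch.Size([1, 4, 16])
```

Under these assumptions, the parameter saving comes from dropping the learned query/key matrices entirely; the discourse structure supplies the attention pattern instead.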


