An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification

10/11/2022
by Ilias Chalkidis, et al.

Non-hierarchical sparse attention Transformer-based models, such as Longformer and Big Bird, are popular approaches to working with long documents. There are clear benefits to these approaches compared to the original Transformer in terms of efficiency, but Hierarchical Attention Transformer (HAT) models are a vastly understudied alternative. We develop and release fully pre-trained HAT models that use segment-wise followed by cross-segment encoders and compare them with Longformer models and partially pre-trained HATs. In several long document downstream classification tasks, our best HAT model outperforms equally-sized Longformer models while using 10-20% less GPU memory and processing documents 40-45% faster. In a series of ablation studies, we also find that HATs perform best with cross-segment contextualization throughout the model rather than with alternative configurations that implement either early or late cross-segment contextualization. Our code is on GitHub: https://github.com/coastalcph/hierarchical-transformers.
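
For illustration, the sketch below shows the basic two-stage structure the abstract describes: a segment-wise Transformer encoder applied to each fixed-length segment independently, followed by a cross-segment encoder over pooled segment representations. This is a minimal PyTorch sketch with placeholder hyperparameters, not the authors' released implementation (see the linked repository); note also that the paper finds an interleaved design, with cross-segment contextualization throughout the model, to work best rather than applying it only at the end as done here.

```python
import torch
import torch.nn as nn

class HierarchicalAttentionSketch(nn.Module):
    """Illustrative HAT-style encoder: a segment-wise Transformer encodes each
    segment independently, then a cross-segment Transformer contextualizes the
    pooled segment representations. Hyperparameters are placeholders."""

    def __init__(self, vocab_size=30522, d_model=256, n_heads=4,
                 n_seg_layers=2, n_cross_layers=2, seg_len=128, max_segments=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.tok_pos = nn.Embedding(seg_len, d_model)      # token positions within a segment
        self.seg_pos = nn.Embedding(max_segments, d_model)  # segment positions within a document
        make_layer = lambda: nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.segment_encoder = nn.TransformerEncoder(make_layer(), n_seg_layers)
        self.cross_encoder = nn.TransformerEncoder(make_layer(), n_cross_layers)

    def forward(self, input_ids):
        # input_ids: (batch, n_segments, seg_len)
        b, s, l = input_ids.shape
        device = input_ids.device
        x = self.embed(input_ids) + self.tok_pos(torch.arange(l, device=device))
        # 1) Segment-wise encoding: fold segments into the batch dimension,
        #    so attention is restricted to tokens within the same segment.
        x = x.view(b * s, l, -1)
        x = self.segment_encoder(x)
        # 2) Pool each segment (here: its first token) into one vector.
        seg_repr = x[:, 0, :].view(b, s, -1)
        seg_repr = seg_repr + self.seg_pos(torch.arange(s, device=device))
        # 3) Cross-segment encoding: contextualize segment vectors against each other.
        doc_repr = self.cross_encoder(seg_repr)
        return doc_repr  # (batch, n_segments, d_model)
```

For classification, the returned segment representations could then be mean-pooled (or the first segment's vector taken) and passed to a linear head.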
