
Efficient Classification of Long Documents Using Transformers

03/21/2022
by   Hyunji Hayley Park, et al.
Amazon
Microsoft
University of Illinois at Urbana-Champaign

Several methods have been proposed for classifying long textual documents using Transformers. However, there is a lack of consensus on a benchmark that would enable a fair comparison among the different approaches. In this paper, we provide a comprehensive evaluation of their relative efficacy against various baselines and on diverse datasets, both in terms of accuracy and in terms of time and space overheads. Our datasets cover binary, multi-class, and multi-label classification tasks and represent the various ways information can be organized in a long text (e.g., information critical to the classification decision may appear at the beginning or toward the end of the document). Our results show that more complex models often fail to outperform simple baselines and yield inconsistent performance across datasets. These findings emphasize the need for future studies to consider comprehensive baselines and datasets that better represent the task of long document classification in order to develop robust models.
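To illustrate the kind of simple baseline the abstract refers to, the sketch below shows common input-reduction strategies for fitting a long document into a Transformer's fixed input budget: keeping only the head of the token sequence, or keeping the head plus the tail. This is a hedged illustration of the general idea, not the paper's exact implementation; the function name, the 512-token budget, and the strategy names are assumptions for the example.

```python
def truncate_tokens(tokens, max_len=512, strategy="head"):
    """Reduce a long token sequence to at most max_len tokens.

    'head'      keeps the first max_len tokens (a common BERT-style baseline).
    'head_tail' keeps the first max_len // 2 and the last
                max_len - max_len // 2 tokens, which helps when the
                decision-critical information sits at the end of the document.
    """
    if len(tokens) <= max_len:
        return tokens
    if strategy == "head":
        return tokens[:max_len]
    if strategy == "head_tail":
        half = max_len // 2
        return tokens[:half] + tokens[-(max_len - half):]
    raise ValueError(f"unknown strategy: {strategy}")
```

Whether 'head' or 'head_tail' works better depends on where the relevant information lives in the document, which is exactly the dataset property the abstract says the evaluation varies.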

