Hierarchical Neural Network Approaches for Long Document Classification

01/18/2022
by Snehal Khandve et al.

Text classification algorithms investigate the intricate relationships between words or phrases and attempt to infer the document's meaning. In recent years, these algorithms have progressed tremendously. Transformer architectures and sentence encoders have proven to give superior results on natural language processing tasks. However, a major limitation of these architectures is that they are applicable only to texts no longer than a few hundred words. In this paper, we explore hierarchical transfer-learning approaches for long document classification. We employ the pre-trained Universal Sentence Encoder (USE) and Bidirectional Encoder Representations from Transformers (BERT) in a hierarchical setup to capture better representations efficiently. Our proposed models are conceptually simple: we divide the input text into chunks and pass these through the base BERT and USE models. The output representation of each chunk is then propagated through a shallow neural network comprising LSTMs or CNNs to classify the document. These extensions are evaluated on 6 benchmark datasets. We show that USE + CNN/LSTM performs better than its stand-alone baseline, whereas BERT + CNN/LSTM performs on par with its stand-alone counterpart. However, the hierarchical BERT models are still desirable, as they avoid the quadratic complexity of the attention mechanism in BERT. Along with the hierarchical approaches, this work also provides a comparison of different deep learning algorithms, namely USE, BERT, HAN, Longformer, and BigBird, for long document classification. The Longformer approach consistently performs well on most of the datasets.
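To make the chunk-then-encode pipeline concrete, the sketch below shows one way such a hierarchical model could be built with PyTorch and the Hugging Face transformers library. It is a minimal illustration, not the paper's exact configuration: the 200-word chunk size, the frozen bert-base-uncased encoder, the use of each chunk's [CLS] vector, and the single-layer LSTM head are all assumptions made for the example.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class HierarchicalBertLSTM(nn.Module):
    """Chunk a long document, encode each chunk with a frozen BERT,
    and classify the resulting chunk sequence with a small LSTM head."""

    def __init__(self, num_classes: int, chunk_len: int = 200,
                 lstm_hidden: int = 128,
                 encoder_name: str = "bert-base-uncased"):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(encoder_name)
        self.encoder = AutoModel.from_pretrained(encoder_name)
        for p in self.encoder.parameters():  # keep the base encoder frozen
            p.requires_grad = False
        self.chunk_len = chunk_len
        self.lstm = nn.LSTM(self.encoder.config.hidden_size,
                            lstm_hidden, batch_first=True)
        self.classifier = nn.Linear(lstm_hidden, num_classes)

    def forward(self, document: str) -> torch.Tensor:
        # 1) Split the raw text into fixed-size word chunks.
        words = document.split()
        chunks = [" ".join(words[i:i + self.chunk_len])
                  for i in range(0, len(words), self.chunk_len)]
        # 2) Encode every chunk; take its [CLS] vector as the representation.
        enc = self.tokenizer(chunks, padding=True, truncation=True,
                             max_length=512, return_tensors="pt")
        with torch.no_grad():
            cls = self.encoder(**enc).last_hidden_state[:, 0, :]
        # 3) Run the chunk sequence through the LSTM head and classify
        #    from its final hidden state (a batch of one document here).
        _, (h_n, _) = self.lstm(cls.unsqueeze(0))
        return self.classifier(h_n[-1])


# Example: classify one long document into four classes.
model = HierarchicalBertLSTM(num_classes=4)
logits = model("A long document made of many sentences. " * 300)
```

A CNN variant of the head follows the same pattern: replace the LSTM with a 1-D convolution plus pooling over the chunk dimension. Because each chunk stays within BERT's 512-token limit, the quadratic attention cost applies only per chunk rather than over the whole document.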


