HiPool: Modeling Long Documents Using Graph Neural Networks

05/05/2023
by Irene Li, et al.

Encoding long sequences in Natural Language Processing (NLP) is a challenging problem. Although recent pretrained language models achieve satisfying performance on many NLP tasks, they are still restricted by a predefined maximum length, which makes it difficult to extend them to longer sequences. Some recent works therefore use hierarchies to model long sequences. However, most of them apply sequential models at the upper hierarchy levels and thus suffer from long-range dependency issues. In this paper, we alleviate these issues with a graph-based method. We first chunk the sequence into fixed-length segments to model sentence-level information, and then leverage graphs with a new attention mechanism to model intra- and cross-sentence correlations. Additionally, because standard benchmarks for long document classification (LDC) are limited, we propose a new challenging benchmark of six datasets with up to 53k samples and an average length of 4,034 tokens. Evaluation shows that our model surpasses competitive baselines by 2.6% in F1 score, and by 4.8% on the dataset with the longest sequences. Our method outperforms hierarchical sequential models, with better performance and scalability, especially for longer sequences.

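To make the hierarchy concrete, the sketch below follows the idea described in the abstract rather than the authors' released code: a long token sequence is split into fixed-length chunks, each chunk is encoded into a node vector, and a small attention layer propagates information between neighboring chunk nodes before document-level pooling and classification. The module names, the GRU chunk encoder, the chain-shaped adjacency, and all sizes are illustrative assumptions, not details taken from the paper, which builds on a pretrained Transformer chunk encoder and its own HiPool graph layer.

```python
# Minimal sketch (not the authors' implementation) of chunk-then-graph modeling:
# fixed-length chunking -> chunk encoder -> graph attention over chunk nodes ->
# pooled document vector -> classifier. All components are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ChunkGraphClassifier(nn.Module):
    def __init__(self, vocab_size, hidden_dim, num_classes, chunk_len=128):
        super().__init__()
        self.chunk_len = chunk_len
        self.embed = nn.Embedding(vocab_size, hidden_dim, padding_idx=0)
        # Stand-in chunk encoder; the paper uses a pretrained Transformer here.
        self.chunk_encoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        # Single-head attention over pairs of connected chunk nodes.
        self.attn_score = nn.Linear(2 * hidden_dim, 1)
        self.node_update = nn.Linear(hidden_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len); pad so seq_len is a multiple of chunk_len.
        b, n = token_ids.shape
        token_ids = F.pad(token_ids, (0, (-n) % self.chunk_len))
        chunks = token_ids.view(b, -1, self.chunk_len)          # (b, c, chunk_len)
        c = chunks.size(1)

        # Encode each chunk independently into one node vector.
        x = self.embed(chunks.view(b * c, self.chunk_len))
        _, h = self.chunk_encoder(x)                            # (1, b*c, d)
        nodes = h.squeeze(0).view(b, c, -1)                     # (b, c, d)

        # Chain-shaped adjacency: each chunk is linked to itself and its neighbors.
        adj = (torch.eye(c)
               + torch.diag(torch.ones(c - 1), 1)
               + torch.diag(torch.ones(c - 1), -1)).to(nodes.device).unsqueeze(0)

        # Attention scores for connected node pairs, masked by the adjacency.
        q = nodes.unsqueeze(2).expand(b, c, c, -1)
        k = nodes.unsqueeze(1).expand(b, c, c, -1)
        scores = self.attn_score(torch.cat([q, k], dim=-1)).squeeze(-1)  # (b, c, c)
        scores = scores.masked_fill(adj == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1)

        # One round of message passing, then mean-pool chunk nodes and classify.
        nodes = torch.relu(self.node_update(weights @ nodes))
        return self.classifier(nodes.mean(dim=1))


if __name__ == "__main__":
    model = ChunkGraphClassifier(vocab_size=30000, hidden_dim=64, num_classes=4)
    dummy = torch.randint(1, 30000, (2, 700))   # two documents of 700 tokens each
    print(model(dummy).shape)                   # torch.Size([2, 4])
```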