MuLD: The Multitask Long Document Benchmark

02/15/2022
by G Thomas Hudson, et al.

The impressive progress in NLP techniques has been driven by the development of multi-task benchmarks such as GLUE and SuperGLUE. While these benchmarks focus on tasks for one or two input sentences, there has been exciting work in designing efficient techniques for processing much longer inputs. In this paper, we present MuLD: a new long document benchmark consisting only of documents over 10,000 tokens. By modifying existing NLP tasks, we create a diverse benchmark which requires models to successfully model long-term dependencies in the text. We evaluate how existing models perform, and find that our benchmark is much more challenging than their "short document" equivalents. Furthermore, by evaluating both regular and efficient transformers, we show that models with increased context length are better able to solve the tasks presented, suggesting that future improvements in these models are vital for solving similar long document problems. We release the data and code for baselines to encourage further research on efficient NLP models.
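As a rough illustration of the "over 10,000 tokens" criterion described above, the sketch below loads a long-document dataset with the Hugging Face datasets library, counts subword tokens per example, and keeps only examples that exceed 10,000 tokens. The dataset identifier, configuration name, and the "input" field are assumptions made for illustration, not the paper's official loader.

    # Minimal sketch (assumed identifiers, not the official MuLD loader):
    # load a long-document dataset, measure tokenized length, and keep
    # only examples over 10,000 tokens.
    from datasets import load_dataset
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

    # Hypothetical dataset identifier and config; substitute the released MuLD data.
    data = load_dataset("ghomasHudson/muld", "NarrativeQA", split="validation")

    def token_length(example):
        # Count subword tokens without truncating to the model's maximum length.
        ids = tokenizer(example["input"], truncation=False)["input_ids"]
        return {"n_tokens": len(ids)}

    data = data.map(token_length)
    long_docs = data.filter(lambda ex: ex["n_tokens"] > 10_000)
    print(f"{len(long_docs)}/{len(data)} examples exceed 10,000 tokens")

Any standard subword tokenizer will do for this length check; the point is only that every retained example is far beyond the context window of typical short-document models.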

Related research

05/05/2023
HiPool: Modeling Long Documents Using Graph Neural Networks
Encoding long sequences in Natural Language Processing (NLP) is a challe...

07/18/2023
Can Model Fusing Help Transformers in Long Document Classification? An Empirical Study
Text classification is an area of research which has been studied over t...

03/21/2022
Efficient Classification of Long Documents Using Transformers
Several methods have been proposed for classifying long textual document...

11/08/2019
ERASER: A Benchmark to Evaluate Rationalized NLP Models
State-of-the-art models in NLP are now predominantly based on deep neura...

06/15/2023
SCALE: Scaling up the Complexity for Advanced Language Model Evaluation
Recent strides in Large Language Models (LLMs) have saturated many NLP b...

11/23/2022
SciRepEval: A Multi-Format Benchmark for Scientific Document Representations
Learned representations of scientific documents can serve as valuable in...

05/15/2023
It Takes Two to Tango: Navigating Conceptualizations of NLP Tasks and Measurements of Performance
Progress in NLP is increasingly measured through benchmarks; hence, cont...
