NT5?! Training T5 to Perform Numerical Reasoning

04/15/2021
by Peng-Jian Yang, et al.

Numerical reasoning over text (NRoT) presents unique challenges that are not well addressed by existing pre-training objectives. We explore five sequential training schedules that adapt a pre-trained T5 model for NRoT. Our final model is adapted from T5 but further pre-trained on three datasets designed to strengthen skills necessary for NRoT and general reading comprehension, before being fine-tuned on the Discrete Reasoning over Text (DROP) dataset. This training improves DROP's adjusted F1 performance (a numeracy-focused score) from 45.90 to 70.83. Our model closes in on GenBERT (72.4), a custom BERT-Base model trained on the same datasets but with significantly more parameters. We show that by training within the T5 multitasking framework on multiple numerical reasoning datasets of increasing difficulty, good performance on DROP can be achieved without manually engineering partitioned functionality between distributed and symbolic modules.
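
The multitask, text-to-text recipe described above can be sketched in a few lines. The snippet below is a minimal illustration, assuming the Hugging Face Transformers and PyTorch libraries rather than the authors' actual code: it casts a synthetic arithmetic problem and a synthetic reading-comprehension problem as seq2seq pairs and fine-tunes T5 to generate the numeric answer as a string. The checkpoint, task prefix, examples, and hyperparameters are all placeholder assumptions.

```python
# A minimal sketch of multitask text-to-text training for numerical
# reasoning, assuming Hugging Face Transformers + PyTorch. The examples
# and hyperparameters are illustrative, not the paper's configuration.
import torch
from torch.optim import AdamW
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Toy stand-ins for numerical-reasoning tasks of increasing difficulty:
# each example is cast as text-to-text, with the answer emitted as digits.
examples = [
    ("answer: What is 247 + 518?", "765"),
    ("answer: Passage: The team scored 24 points in the first half and 13 "
     "in the second. Question: How many points were scored in total?", "37"),
]

optimizer = AdamW(model.parameters(), lr=1e-4)
model.train()
for question, answer in examples:
    inputs = tokenizer(question, return_tensors="pt", truncation=True)
    labels = tokenizer(answer, return_tensors="pt").input_ids
    # Mask any padding so it does not contribute to the loss.
    labels[labels == tokenizer.pad_token_id] = -100
    loss = model(**inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

At inference time the same model answers by generation, e.g. tokenizer.decode(model.generate(**inputs)[0], skip_special_tokens=True), so numeric answers come out as ordinary text rather than from a dedicated symbolic module.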

Related research

Injecting Numerical Reasoning Skills into Language Models (04/09/2020)
Large pre-trained language models (LMs) are known to encode substantial ...

Giving BERT a Calculator: Finding Operations and Arguments with Reading Comprehension (08/31/2019)
Reading comprehension models have been successfully applied to extractive ...

DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs (03/01/2019)
Reading comprehension has recently seen rapid progress, with systems matching ...

What does BERT Learn from Multiple-Choice Reading Comprehension Datasets? (10/28/2019)
Multiple-Choice Reading Comprehension (MCRC) requires the model to read ...

Turning Tables: Generating Examples from Semi-structured Tables for Endowing Language Models with Reasoning Skills (07/15/2021)
Models pre-trained with a language modeling objective possess ample world ...

A Pairwise Probe for Understanding BERT Fine-Tuning on Machine Reading Comprehension (06/02/2020)
Pre-trained models have brought significant improvements to many NLP tasks ...

Improving the Numerical Reasoning Skills of Pretrained Language Models (05/13/2022)
State-of-the-art pretrained language models tend to perform below their ...
