
Masked Measurement Prediction: Learning to Jointly Predict Quantities and Units from Textual Context

12/16/2021
by Daniel Spokoyny, et al.

Physical measurements constitute a large portion of numbers in academic papers, engineering reports, and web tables. Current benchmarks fall short of properly evaluating numeracy of pretrained language models on measurements, hindering research on developing new methods and applying them to numerical tasks. To that end, we introduce a novel task, Masked Measurement Prediction (MMP), where a model learns to reconstruct a number together with its associated unit given masked text. MMP is useful for both training new numerically informed models as well as evaluating numeracy of existing systems. In order to address this task, we introduce a new Generative Masked Measurement (GeMM) model that jointly learns to predict numbers along with their units. We perform fine-grained analyses comparing our model with various ablations and baselines. We use linear probing of traditional pretrained transformer models (RoBERTa) to show that they significantly underperform jointly trained number-unit models, highlighting the difficulty of this new task and the benefits of our proposed pretraining approach. We hope this framework accelerates the progress towards building more robust numerical reasoning systems in the future.
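To make the task concrete, below is a minimal sketch of what a Masked Measurement Prediction training example could look like: a number-plus-unit span in a sentence is replaced by a mask token, and the model's target is the (number, unit) pair. The sentence, mask token, regular expression, and unit list are illustrative assumptions; the paper's actual data pipeline and the GeMM architecture are not described in this abstract.

```python
# Hypothetical sketch of constructing an MMP example (not the authors' code).
import re

MASK = "[MASK]"

def mask_measurement(text: str):
    """Replace the first number+unit span with a mask and return the target."""
    # Rough illustrative pattern for "<number> <unit>" spans, e.g. "3.2 km" or "150 mg".
    match = re.search(r"(\d+(?:\.\d+)?)\s*(km|kg|mg|m|s|Hz)", text)
    if match is None:
        return text, None
    number, unit = float(match.group(1)), match.group(2)
    masked = text[:match.start()] + MASK + text[match.end():]
    return masked, (number, unit)

masked, target = mask_measurement("The probe descended 3.2 km below the surface.")
print(masked)   # The probe descended [MASK] below the surface.
print(target)   # (3.2, 'km')
```

Under this framing, a jointly trained model scores candidate units and predicts a numeric value for the masked span, whereas a linear probe on top of a frozen encoder such as RoBERTa must recover both from its contextual representation alone.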

