Do Language Models Understand Measurements?

10/23/2022
by Sungjin Park et al.

The recent success of pre-trained language models (PLMs) has stimulated interest in their ability to understand and work with numbers. Yet numerical reasoning over measurements has not been formally studied, despite its importance. In this study, we show that PLMs lack the capability required for reasoning over measurements. We further find that a language model trained on a measurement-rich corpus performs better at understanding measurements. Finally, we propose a simple embedding strategy that better distinguishes numbers from units, which leads to a significant improvement on the probing tasks.
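The abstract mentions an embedding strategy that separates numbers from units. The paper's actual method is not detailed here, so the following is only a minimal illustrative sketch of the general idea: encode the numeric value and the unit through separate channels before combining them. The dimensions, the log-scaling of the value, and the random unit table are all assumptions made for illustration.

```python
import numpy as np

DIM = 8  # illustrative embedding width, not from the paper
rng = np.random.default_rng(0)

# Hypothetical learned unit-embedding table (random here for the sketch).
UNIT_TABLE = {u: rng.normal(size=DIM) for u in ["kg", "m", "s"]}

def embed_measurement(value: float, unit: str) -> np.ndarray:
    """Embed a measurement like (5.0, "kg") with distinct number/unit channels."""
    # Number channel: a log-scaled scalar broadcast into its own half
    # of the vector, so magnitude is represented smoothly.
    num_part = np.full(DIM, np.log10(abs(value) + 1e-9))
    # Unit channel: a separate lookup, so "5 kg" and "5 m" differ
    # even though the numeric token is identical.
    unit_part = UNIT_TABLE[unit]
    return np.concatenate([num_part, unit_part])  # shape (2 * DIM,)

v = embed_measurement(5.0, "kg")
```

Keeping the two channels separate means two measurements with the same number but different units map to distinct vectors, which is the distinction the abstract says plain PLM tokenization blurs.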


Related research

- Enhance Reasoning Ability of Visual-Language Models via Large Language Models (05/22/2023): Pre-trained visual language models (VLM) have shown excellent performanc...
- Numeracy for Language Models: Evaluating and Improving their Ability to Predict Numbers (05/21/2018): Numeracy is the ability to understand and work with numbers. It is a nec...
- Measurement Tampering Detection Benchmark (08/29/2023): When training powerful AI systems to perform complex tasks, it may be ch...
- Large Language Model Programs (05/09/2023): In recent years, large pre-trained language models (LLMs) have demonstra...
- Masked Measurement Prediction: Learning to Jointly Predict Quantities and Units from Textual Context (12/16/2021): Physical measurements constitute a large portion of numbers in academic ...
- Do NLP Models Know Numbers? Probing Numeracy in Embeddings (09/17/2019): The ability to understand and work with numbers (numeracy) is critical f...
- Investigating the effect of sub-word segmentation on the performance of transformer language models (05/09/2023): We would like to explore how morphemes can affect the performance of a l...
