Do Language Embeddings Capture Scales?

10/11/2020
by   Xikun Zhang, et al.

Pretrained Language Models (LMs) have been shown to possess significant linguistic, common-sense, and factual knowledge. One form of knowledge that has not yet been studied in this context is information about the scalar magnitudes of objects. We show that pretrained language models capture a significant amount of this information but fall short of the capability required for general common-sense reasoning. We identify contextual information in pretraining and numeracy as two key factors affecting their performance, and show that a simple method of canonicalizing numbers can have a significant effect on the results.
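To make the idea of "canonicalizing numbers" concrete, below is a minimal sketch of one plausible preprocessing step: rewriting numeric literals in text into a uniform scientific-notation form before feeding the text to a language model. The function name, the regular expression, and the exact output format are illustrative assumptions, not the paper's actual implementation.

import re

def canonicalize_numbers(text: str) -> str:
    """Rewrite numeric literals into scientific notation (e.g. '6000' -> '6.00e+03').

    Illustrative sketch only: the paper's canonicalization scheme may differ
    in format and tokenization details.
    """
    def to_scientific(match: re.Match) -> str:
        value = float(match.group(0))
        return f"{value:.2e}"

    # Match optionally signed integers and decimals.
    return re.sub(r"[-+]?\d+(?:\.\d+)?", to_scientific, text)

# Example usage:
print(canonicalize_numbers("An elephant weighs about 6000 kg."))
# -> "An elephant weighs about 6.00e+03 kg."

The intuition behind such a normalization is that a uniform exponent-mantissa surface form makes the order of magnitude of a number directly visible to the model, rather than being spread across a variable number of digit tokens.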

