- Why Do Masked Neural Language Models Still Need Common Sense Knowledge?
- Improving Neural Story Generation by Targeted Common Sense Grounding
- Multi-Sense Language Modelling
- Linguistically-Informed Transformations (LIT): A Method for Automatically Generating Contrast Sets
- The Sensitivity of Language Models and Humans to Winograd Schema Perturbations
- Modeling Semantic Expectation: Using Script Knowledge for Referent Prediction
- What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models
Do Language Embeddings Capture Scales?
Pretrained Language Models (LMs) have been shown to possess significant linguistic, common sense, and factual knowledge. One form of knowledge that has not been studied yet in this context is information about the scalar magnitudes of objects. We show that pretrained language models capture a significant amount of this information but are short of the capability required for general common-sense reasoning. We identify contextual information in pre-training and numeracy as two key factors affecting their performance and show that a simple method of canonicalizing numbers can have a significant effect on the results.
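The abstract notes that a simple canonicalization of numbers has a significant effect on the probing results, but it does not spell out the procedure. Below is a minimal sketch, assuming canonicalization means rewriting each numeral into a normalized scientific-notation form so that its order of magnitude is explicit in the text the model sees; the function name and regex are illustrative, not taken from the paper.

```python
import re

# Assumption: "canonicalizing numbers" is approximated here as converting every
# numeral to scientific notation (mantissa + exponent). This is a sketch of the
# general idea, not the paper's exact preprocessing.
_NUMBER_RE = re.compile(r"\d+(?:\.\d+)?")

def canonicalize_numbers(text: str) -> str:
    """Replace every numeral in `text` with a normalized scientific-notation form."""
    def _to_scientific(match: re.Match) -> str:
        return f"{float(match.group(0)):.1e}"  # e.g. "150000" -> "1.5e+05"
    return _NUMBER_RE.sub(_to_scientific, text)

if __name__ == "__main__":
    print(canonicalize_numbers("A blue whale can weigh 150000 kilograms."))
    # -> "A blue whale can weigh 1.5e+05 kilograms."
```

Writing magnitudes this way keeps the exponent as an explicit token-level signal, which is one plausible reason such a canonicalization would help a model reason about scales.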