Can a Gorilla Ride a Camel? Learning Semantic Plausibility from Text

11/13/2019
by Ian Porada, et al.

Modeling semantic plausibility requires commonsense knowledge about the world and has been used as a testbed for exploring various knowledge representations. Previous work has focused specifically on modeling physical plausibility and shown that distributional methods fail when tested in a supervised setting. At the same time, distributional models, namely large pretrained language models, have led to improved results for many natural language understanding tasks. In this work, we show that these pretrained language models are in fact effective at modeling physical plausibility in the supervised setting. We therefore present the more difficult problem of learning to model physical plausibility directly from text. We create a training set by extracting attested events from a large corpus, and we provide a baseline for training on these attested events in a self-supervised manner and testing on a physical plausibility task. We believe results could be further improved by injecting explicit commonsense knowledge into a distributional model.
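To make the modeling setup concrete, below is a minimal sketch of how a pretrained language model could be fine-tuned as a binary classifier over simple subject-verb-object events and used to score their physical plausibility. The checkpoint name, the plain "subject verb object" input format, and the helper function are illustrative assumptions, not details taken from the paper; in the self-supervised setting described in the abstract, attested events extracted from a corpus would supply the training signal rather than manual plausibility labels.

```python
# Minimal sketch: scoring the physical plausibility of an s-v-o event with a
# pretrained language model carrying a binary classification head.
# Assumes the Hugging Face `transformers` and `torch` packages; the checkpoint
# and input format below are hypothetical choices for illustration only.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # hypothetical choice of pretrained encoder

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def plausibility_score(subject: str, verb: str, obj: str) -> float:
    """Return P(plausible) for an event expressed as a simple s-v-o string."""
    text = f"{subject} {verb} {obj}"          # e.g. "gorilla rides camel"
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits       # shape: (1, 2)
    return torch.softmax(logits, dim=-1)[0, 1].item()

# Example usage (the classification head is randomly initialized here, so the
# scores are meaningless until the model is fine-tuned on plausibility data
# or on attested events extracted from a corpus):
print(plausibility_score("gorilla", "rides", "camel"))
print(plausibility_score("camel", "rides", "gorilla"))
```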
