Revisiting Generative Commonsense Reasoning: A Pre-Ordering Approach

05/26/2022
by Chao Zhao, et al.

Pre-trained models (PTMs) have led to great improvements in natural language generation (NLG). However, it is still unclear how much commonsense knowledge they possess. To evaluate the commonsense knowledge of NLG models, recent work has proposed the task of generative commonsense reasoning, e.g., composing a logical sentence from a set of unordered concepts. Existing approaches to this task hypothesize that PTMs lack sufficient parametric knowledge, and compensate by introducing external knowledge or task-specific pre-training objectives. In contrast to this trend, we argue that a PTM's inherent ability for generative commonsense reasoning is underestimated because its input is treated as order-agnostic. In particular, we hypothesize that the order of the input concepts affects how well the PTM can exploit its commonsense knowledge. To this end, we propose a pre-ordering approach that carefully manipulates the order of the given concepts before generation. Experiments show that our approach outperforms more sophisticated models that have access to substantial external data and resources.
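To make the pre-ordering idea concrete, below is a minimal sketch of one way to reorder concepts before generation. It is not the paper's method: the perplexity-based ranking heuristic, the "gpt2" checkpoint, and the brute-force permutation search are illustrative assumptions, workable only because CommonGen-style concept sets are small (typically 3-5 concepts).

```python
# Illustrative sketch: rank candidate concept orders by language-model loss
# (a proxy for fluency) and return the most natural-sounding permutation.
# The paper's actual ordering strategy may differ; everything below is a
# hedged example, not the authors' implementation.
from itertools import permutations

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()


def order_score(concepts) -> float:
    """LM loss of the concepts read left to right (lower = more natural)."""
    ids = tokenizer(" ".join(concepts), return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return loss.item()


def pre_order(concepts):
    """Brute-force search over permutations; fine for small concept sets."""
    best = min(permutations(concepts), key=order_score)
    return list(best)


# Example: an unordered CommonGen-style concept set.
print(pre_order(["ski", "mountain", "skier"]))
```

The resulting ordered sequence would then be fed, in that order, to a sequence-to-sequence PTM (e.g., BART or T5) fine-tuned to compose the final sentence; the point of the sketch is only that ordering happens as a separate step before generation.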


Related research:

06/26/2023
Knowledge Graph-Augmented Korean Generative Commonsense Reasoning
Generative commonsense reasoning refers to the task of generating accept...

01/28/2022
Commonsense Knowledge Reasoning and Generation with Pre-trained Language Models: A Survey
While commonsense knowledge acquisition and reasoning has traditionally ...

09/12/2023
Learning to Predict Concept Ordering for Common Sense Generation
Prior work has shown that the ordering in which concepts are shown to a ...

08/06/2018
Logical Semantics and Commonsense Knowledge: Where Did we Go Wrong, and How to Go Forward, Again
We argue that logical semantics might have faltered due to its failure i...

05/24/2022
GeoMLAMA: Geo-Diverse Commonsense Probing on Multilingual Pre-Trained Language Models
Recent work has shown that Pre-trained Language Models (PLMs) have the a...

04/14/2019
No Adjective Ordering Mystery, and No Raven Paradox, Just an Ontological Mishap
In the concluding remarks of Ontological Promiscuity Hobbs (1985) made w...

07/01/2016
Situated Structure Learning of a Bayesian Logic Network for Commonsense Reasoning
This paper details the implementation of an algorithm for automatically ...
