Commonsense Knowledge Reasoning and Generation with Pre-trained Language Models: A Survey

01/28/2022
by Prajjwal Bhargava, et al.

While commonsense knowledge acquisition and reasoning have traditionally been core research topics in the knowledge representation and reasoning community, recent years have seen a surge of interest in the natural language processing community in developing pre-trained language models and testing their ability to address a variety of newly designed commonsense knowledge reasoning and generation tasks. This paper presents a survey of these tasks, discusses the strengths and weaknesses of state-of-the-art pre-trained models for commonsense reasoning and generation as revealed by these tasks, and reflects on future research directions.

04/02/2019

Commonsense Reasoning for Natural Language Understanding: A Survey of Benchmarks, Resources, and Approaches

Commonsense knowledge and commonsense reasoning are some of the main bot...

05/26/2022

Revisiting Generative Commonsense Reasoning: A Pre-Ordering Approach

Pre-trained models (PTMs) have led to great improvements in natural lan...

06/14/2021

Probing Pre-Trained Language Models for Disease Knowledge

Pre-trained language models such as ClinicalBERT have achieved impressiv...

10/06/2020

Does the Objective Matter? Comparing Training Objectives for Pronoun Resolution

Hard cases of pronoun resolution have been used as a long-standing bench...

07/27/2019

A Hybrid Neural Network Model for Commonsense Reasoning

This paper proposes a hybrid neural network (HNN) model for commonsense ...

07/07/2022

Can Language Models perform Abductive Commonsense Reasoning?

Abductive reasoning is the task of inferring the most plausible hypothesis...