Evaluate Confidence Instead of Perplexity for Zero-shot Commonsense Reasoning

08/23/2022
by Letian Peng, et al.

Commonsense reasoning is an appealing topic in natural language processing (NLP) because it plays a fundamental role in supporting human-like behavior in NLP systems. With large-scale language models as the backbone, unsupervised pre-training on massive corpora shows promise for capturing commonsense knowledge. Current reasoning with pre-trained language models (PLMs) follows the traditional practice of scoring candidates by perplexity. However, commonsense reasoning is more than probability evaluation, which is biased by word frequency. This paper reconsiders the nature of commonsense reasoning and proposes a novel commonsense reasoning metric, Non-Replacement Confidence (NRC). In detail, NRC works on PLMs trained with the Replaced Token Detection (RTD) pre-training objective of ELECTRA, whose corruption-detection objective reflects confidence in contextual integrity, a signal more relevant to commonsense reasoning than raw token probability. Our proposed method boosts zero-shot performance on two commonsense reasoning benchmark datasets and on seven further commonsense question-answering datasets. Our analysis shows that pre-endowed commonsense knowledge, especially for RTD-based PLMs, is essential in downstream reasoning.
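The contrast between perplexity scoring and an RTD-style confidence score can be sketched in a few lines. The aggregation below (a geometric mean of per-token non-replacement probabilities, set against perplexity over per-token language-model probabilities) is an illustrative assumption, not the paper's exact NRC formula, and the probabilities are toy numbers rather than real ELECTRA discriminator outputs.

```python
import math

def perplexity(token_probs):
    """Standard perplexity: exp of the average negative log-probability.

    token_probs: per-token probabilities assigned by a language model.
    Lower is "better" under the conventional PLM scoring practice.
    """
    return math.exp(-sum(math.log(p) for p in token_probs) / len(token_probs))

def nrc_score(non_replacement_probs):
    """Illustrative Non-Replacement Confidence aggregate (assumption:
    geometric mean of the RTD head's per-token probabilities that each
    token is original, i.e. NOT a replacement). Higher is better.
    """
    return math.exp(sum(math.log(p) for p in non_replacement_probs)
                    / len(non_replacement_probs))

# Toy example: two candidate completions for a commonsense question.
# The frequent-word candidate gets high LM probabilities (low perplexity)
# even though the RTD discriminator finds it contextually odd.
frequent_but_odd  = {"lm": [0.30, 0.25, 0.28], "rtd": [0.60, 0.55, 0.58]}
rare_but_sensible = {"lm": [0.10, 0.12, 0.11], "rtd": [0.95, 0.93, 0.94]}

# Perplexity prefers the frequency-biased candidate...
assert perplexity(frequent_but_odd["lm"]) < perplexity(rare_but_sensible["lm"])
# ...while the confidence aggregate prefers the contextually coherent one.
assert nrc_score(rare_but_sensible["rtd"]) > nrc_score(frequent_but_odd["rtd"])
```

In this toy setup, perplexity rewards the frequency-biased candidate while the confidence aggregate rewards contextual integrity, which is exactly the gap the paper's NRC metric targets.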


research 10/31/2021
A Systematic Investigation of Commonsense Understanding in Large Language Models
Large language models have shown impressive performance on many natural ...

research 10/10/2020
Beyond Language: Learning Commonsense from Images for Reasoning
This paper proposes a novel approach to learn commonsense from images, i...

research 05/25/2022
ToKen: Task Decomposition and Knowledge Infusion for Few-Shot Hate Speech Detection
Hate speech detection is complex; it relies on commonsense reasoning, kn...

research 05/24/2023
Abductive Commonsense Reasoning Exploiting Mutually Exclusive Explanations
Abductive reasoning aims to find plausible explanations for an event. Th...

research 04/16/2021
Back to Square One: Bias Detection, Training and Commonsense Disentanglement in the Winograd Schema
The Winograd Schema (WS) has been proposed as a test for measuring commo...

research 10/12/2022
Zero-Shot Prompting for Implicit Intent Prediction and Recommendation with Commonsense Reasoning
Intelligent virtual assistants are currently designed to perform tasks o...

research 07/28/2023
An Overview Of Temporal Commonsense Reasoning and Acquisition
Temporal commonsense reasoning refers to the ability to understand the t...
