Prompting Contrastive Explanations for Commonsense Reasoning Tasks

06/12/2021
by   Bhargavi Paranjape, et al.

Many commonsense reasoning NLP tasks involve choosing among possible answers to a question or prompt based on knowledge that is often implicit. Large pretrained language models (PLMs) can achieve near-human performance on such tasks, while providing little human-interpretable evidence of the underlying reasoning they use. In this work, we show how to use these same models to generate such evidence: inspired by the contrastive nature of human explanations, we use PLMs to complete explanation prompts which contrast alternatives according to the key attribute(s) required to justify the correct answer (for example, peanuts are usually salty while raisins are sweet). Conditioning model decisions on these explanations improves performance on two commonsense reasoning benchmarks, as compared to previous non-contrastive alternatives. These explanations are also judged by humans to be more relevant for solving the task, and they facilitate a novel method to evaluate explanation faithfulness.
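The core idea of the abstract can be sketched as prompt construction: for each pair of candidate answers, build a fill-in-the-blank template that contrasts the two along some attribute, which a PLM would then complete. The sketch below is illustrative only; the template wording, function names, and question are assumptions, not the paper's exact setup, and the PLM completion step is left out.

```python
# Minimal sketch of contrastive explanation prompting.
# Assumptions: template wording and helper names are hypothetical;
# a real system would pass each prompt to a PLM to fill in the blanks.

from itertools import combinations


def contrastive_prompt(option_a, option_b, blank="___"):
    # Contrast two candidate answers along an attribute dimension;
    # the blanks are what the PLM is asked to complete
    # (e.g. "peanuts are usually salty while raisins are sweet").
    return f"{option_a} are usually {blank} while {option_b} are {blank}."


def build_explanation_prompts(question, options):
    # One contrastive prompt per pair of candidate answers.
    return [
        f"Q: {question}\nExplanation: {contrastive_prompt(a, b)}"
        for a, b in combinations(options, 2)
    ]


prompts = build_explanation_prompts(
    "What snack is salty?", ["peanuts", "raisins"]
)
print(prompts[0])
```

In the paper's framing, the model's final answer is then conditioned on the completed explanation, which is what allows both the accuracy gains and the faithfulness evaluation described above.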


Related research

06/06/2019 · Explain Yourself! Leveraging Language Models for Commonsense Reasoning
Deep learning models perform poorly on tasks that require commonsense re...

05/24/2020 · KaLM at SemEval-2020 Task 4: Knowledge-aware Language Models for Comprehension And Generation
This paper presents our strategies in SemEval 2020 Task 4: Commonsense V...

05/24/2023 · Abductive Commonsense Reasoning Exploiting Mutually Exclusive Explanations
Abductive reasoning aims to find plausible explanations for an event. Th...

04/15/2021 · ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning
Recent commonsense-reasoning tasks are typically discriminative in natur...

05/02/2020 · Contrastive Self-Supervised Learning for Commonsense Reasoning
We propose a self-supervised method to solve Pronoun Disambiguation and ...

09/24/2020 · Generating Commonsense Explanation by Extracting Bridge Concepts from Reasoning Paths
Commonsense explanation generation aims to empower the machine's sense-m...

10/08/2020 · Precise Task Formalization Matters in Winograd Schema Evaluations
Performance on the Winograd Schema Challenge (WSC), a respected English ...
