Can Pretrained Language Models (Yet) Reason Deductively?

10/12/2022
by Zhangdie Yuan et al.

Acquiring factual knowledge with Pretrained Language Models (PLMs) has attracted increasing attention, with PLMs showing promising performance on many knowledge-intensive tasks. This strong performance has led the community to believe that the models possess a modicum of reasoning competence rather than merely memorising knowledge. In this paper, we conduct a comprehensive evaluation of the learnable deductive (also known as explicit) reasoning capability of PLMs. Through a series of controlled experiments, we report two main findings: (i) PLMs generalise learned logic rules inadequately and behave inconsistently under simple adversarial surface-form edits; (ii) while fine-tuning PLMs for deductive reasoning improves their performance on reasoning over unseen knowledge facts, it leads to catastrophic forgetting of previously learnt knowledge. Our results suggest that PLMs cannot yet perform reliable deductive reasoning, demonstrating the importance of controlled examination and probing of their reasoning abilities; by reaching beyond (potentially misleading) task performance, we show that PLMs remain far from human-level reasoning capability, even on simple deductive tasks.
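The kind of probe the abstract alludes to can be sketched in a few lines: build a two-premise syllogism over a transitive "is-a" rule, then apply a meaning-preserving surface-form edit under which a consistent reasoner should not change its answer. The templates, example facts, and edit rule below are illustrative assumptions for exposition, not the authors' actual benchmark or edit set:

```python
# Minimal sketch of a deductive probe with an adversarial surface-form edit.
# A PLM would be queried on both prompts (e.g. via masked-token prediction);
# disagreement between the two signals inconsistency, not a lack of knowledge.

def make_probe(x: str, y: str, z: str):
    """Canonical two-premise probe: X is a Y, Y is a Z => X is a Z.

    Returns the prompt (with a [MASK] slot for the conclusion) and the
    expected answer. The template is an illustrative assumption.
    """
    premises = f"A {x} is a {y}. A {y} is an {z}."
    query = f"Therefore, a {x} is an [MASK]."
    return premises + " " + query, z


def surface_edit(prompt: str) -> str:
    """Meaning-preserving paraphrase: rewrite ' is a ' as ' is a kind of '.

    The underlying inference is unchanged, so a reliable deductive
    reasoner should give the same answer for both surface forms.
    """
    return prompt.replace(" is a ", " is a kind of ")


prompt, answer = make_probe("robin", "bird", "animal")
edited = surface_edit(prompt)
# prompt: "A robin is a bird. A bird is an animal. Therefore, a robin is an [MASK]."
# edited: "A robin is a kind of bird. A bird is an animal. ..."
```

Scoring a model would then amount to checking whether it ranks `answer` highest in the `[MASK]` slot for both `prompt` and `edited`; the paper's first finding corresponds to the two predictions frequently diverging.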


