Can Transformers Reason in Fragments of Natural Language?

11/10/2022
by   Viktor Schlegel, et al.

State-of-the-art deep-learning-based approaches to Natural Language Processing (NLP) are credited with various capabilities that involve reasoning with natural language texts. In this paper, we carry out a large-scale empirical study investigating the detection of formally valid inferences in controlled fragments of natural language for which the satisfiability problem becomes increasingly complex. We find that, while transformer-based language models perform surprisingly well in these scenarios, a deeper analysis reveals that they appear to overfit to superficial patterns in the data rather than acquiring the logical principles governing the reasoning in these fragments.
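To make the task concrete, the sketch below shows how one might probe a transformer-based model on a single formally valid inference from a simple syllogistic fragment. This is only an illustration, not the authors' experimental setup: it assumes the HuggingFace transformers library and the publicly available roberta-large-mnli checkpoint, and the premise/hypothesis pair is an invented example.

```python
# A minimal sketch (not the paper's pipeline): ask an off-the-shelf NLI model
# whether a hypothesis follows from premises written in a controlled,
# syllogism-like fragment of English. Assumes `torch` and `transformers`
# are installed and that the `roberta-large-mnli` checkpoint is available.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "roberta-large-mnli"  # assumption: a generic NLI checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Invented example: two premises in a syllogistic fragment and a hypothesis
# that is formally valid given those premises.
premise = "Every artist is a beekeeper. Every beekeeper is a carpenter."
hypothesis = "Every artist is a carpenter."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class back to its label
# (entailment / neutral / contradiction) using the model's own config.
predicted = model.config.id2label[logits.argmax(dim=-1).item()]
print(f"Predicted relation: {predicted}")
```

Whether such a prediction reflects genuine logical competence or surface-pattern matching is exactly the question the paper's controlled fragments are designed to separate.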



Related research

03/26/2023 · Nature Language Reasoning, A Survey
This survey paper proposes a clearer view of natural language reasoning ...

12/16/2021 · Pushing the Limits of Rule Reasoning in Transformers through Natural Language Satisfiability
Investigating the reasoning abilities of transformer models, and discove...

05/06/2020 · Probing the Natural Language Inference Task with Automated Reasoning Tools
The Natural Language Inference (NLI) task is an important task in modern...

11/10/2020 · Natural Language Inference in Context – Investigating Contextual Reasoning over Long Texts
Natural language inference (NLI) is a fundamental NLP task, investigatin...

07/26/2023 · Three Bricks to Consolidate Watermarks for Large Language Models
The task of discerning between generated and natural texts is increasing...

12/04/2021 · LoNLI: An Extensible Framework for Testing Diverse Logical Reasoning Capabilities for NLI
Natural Language Inference (NLI) is considered a representative task to ...
