On the Paradox of Learning to Reason from Data

05/23/2022
by Honghua Zhang et al.

Logical reasoning is needed in a wide range of NLP tasks. Can a BERT model be trained end-to-end to solve logical reasoning problems presented in natural language? We attempt to answer this question in a confined problem space where there exists a set of parameters that perfectly simulates logical reasoning. We make observations that seem to contradict each other: BERT attains near-perfect accuracy on in-distribution test examples while failing to generalize to other data distributions over the exact same problem space. Our study provides an explanation for this paradox: instead of learning to emulate the correct reasoning function, BERT has in fact learned statistical features that inherently exist in logical reasoning problems. We also show that it is infeasible to jointly remove statistical features from data, illustrating the difficulty of learning to reason in general. Our result naturally extends to other neural models and unveils the fundamental difference between learning to reason and learning to achieve high performance on NLP benchmarks using statistical features.
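To make the notion of a "statistical feature" concrete, here is a minimal, hypothetical sketch (not the authors' code or dataset): it samples random propositional deduction problems, facts plus Horn rules with derivability of a query as the label, and shows that a trivial one-feature classifier that merely thresholds the number of rules already beats chance on in-distribution data without performing any reasoning. The problem generator, its parameters (num_props, max_rules), and the threshold classifier are all illustrative assumptions.

```python
# Hypothetical sketch: a statistical feature (rule count) that correlates
# with the label in randomly sampled logical reasoning problems.
import random

def forward_chain(facts, rules):
    """Compute all derivable propositions by forward chaining to a fixpoint."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

def sample_problem(num_props=20, max_rules=30):
    """Sample a random problem; return (rule count, ground-truth label)."""
    props = list(range(num_props))
    facts = set(random.sample(props, random.randint(1, 4)))
    rules = []
    for _ in range(random.randint(0, max_rules)):
        body = tuple(random.sample(props, random.randint(1, 3)))
        head = random.choice(props)
        rules.append((body, head))
    query = random.choice(props)
    label = query in forward_chain(facts, rules)
    return len(rules), label

random.seed(0)
data = [sample_problem() for _ in range(20000)]

# A one-feature classifier: predict "derivable" iff rule count >= threshold.
# Under this sampling scheme it scores well above 50% despite never reasoning.
best_acc = max(
    sum((n >= t) == y for n, y in data) / len(data)
    for t in range(31)
)
print(f"best threshold-on-rule-count accuracy: {best_acc:.2f}")
```

A model that latches onto such correlations can score well in distribution yet fail on any distribution where the correlation is broken, which is precisely the paradox the abstract describes.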

Related research

03/21/2023
Logical Reasoning over Natural Language as Knowledge Representation: A Survey
Logical reasoning is central to human cognition and intelligence. Past r...

05/01/2019
Declarative Question Answering over Knowledge Bases containing Natural Language Text with Answer Set Programming
While in recent years machine learning (ML) based approaches have been t...

04/07/2023
Evaluating the Logical Reasoning Ability of ChatGPT and GPT-4
Harnessing logical reasoning ability is a comprehensive natural language...

02/11/2022
End-to-end Algorithm Synthesis with Recurrent Networks: Logical Extrapolation Without Overthinking
Machine learning systems perform well on pattern matching tasks, but the...

05/22/2023
Logical Reasoning for Natural Language Inference Using Generated Facts as Atoms
State-of-the-art neural models can now reach human performance levels ac...

05/10/2019
Using syntactical and logical forms to evaluate textual inference competence
In the light of recent breakthroughs in transfer learning for Natural La...

10/28/2012
Illustrating a neural model of logic computations: The case of Sherlock Holmes' old maxim
Natural languages can express some logical propositions that humans are ...
