Teaching Pre-Trained Models to Systematically Reason Over Implicit Knowledge

06/11/2020
by Alon Talmor, et al.

To what extent can a neural network systematically reason over symbolic facts? Evidence suggests that large pre-trained language models (LMs) acquire some reasoning capacity, but this ability is difficult to control. Recently, it has been shown that Transformer-based models succeed in consistent reasoning over explicit symbolic facts, under a "closed-world" assumption. However, in an open-domain setup, it is desirable to tap into the vast reservoir of implicit knowledge already encoded in the parameters of pre-trained LMs. In this work, we provide a first demonstration that LMs can be trained to reliably perform systematic reasoning combining both implicit, pre-trained knowledge and explicit natural language statements. To do this, we describe a procedure for automatically generating datasets that teach a model new reasoning skills, and demonstrate that models learn to effectively perform inference which involves implicit taxonomic and world knowledge, chaining and counting. Finally, we show that "teaching" models to reason generalizes beyond the training distribution: they successfully compose the usage of multiple reasoning skills in single examples. Our work paves a path towards open-domain systems that constantly improve by interacting with users who can instantly correct a model by adding simple natural language statements.


Related research

06/18/2020  Pre-trained Language Models as Symbolic Reasoners over Knowledge?
How can pre-trained language models (PLMs) learn factual knowledge from ...

05/23/2023  Deduction under Perturbed Evidence: Probing Student Simulation Capabilities of Large Language Models
We explore whether Large Language Models (LLMs) are capable of logical r...

07/15/2021  Turning Tables: Generating Examples from Semi-structured Tables for Endowing Language Models with Reasoning Skills
Models pre-trained with a language modeling objective possess ample worl...

04/16/2023  Automated Program Repair Based on Code Review: How do Pre-trained Transformer Models Perform?
Sequence-to-sequence models have been used to transform erroneous progra...

04/09/2020  Injecting Numerical Reasoning Skills into Language Models
Large pre-trained language models (LMs) are known to encode substantial ...

12/14/2020  Learning to Rationalize for Nonmonotonic Reasoning with Distant Supervision
The black-box nature of neural models has motivated a line of research t...

04/22/2020  Logical Natural Language Generation from Open-Domain Tables
Neural natural language generation (NLG) models have recently shown rema...
