Learning to Rationalize for Nonmonotonic Reasoning with Distant Supervision

12/14/2020
by Faeze Brahman, et al.

The black-box nature of neural models has motivated a line of research that aims to generate natural language rationales to explain why a model made certain predictions. To date, such rationale generation models have been trained on dataset-specific, crowdsourced rationales, an approach that is costly and does not generalize to new tasks and domains. In this paper, we investigate the extent to which neural models can reason about natural language rationales that explain model predictions, relying only on distant supervision and incurring no additional annotation cost for human-written rationales. We investigate multiple ways to automatically generate rationales using pre-trained language models, neural knowledge models, and distant supervision from related tasks, and we train generative models capable of composing explanatory rationales for unseen instances. We demonstrate our approach on the defeasible inference task, a nonmonotonic reasoning task in which an inference may be strengthened or weakened when new information (an update) is introduced. Our model shows promise at generating post-hoc rationales explaining why an inference is more or less likely given the additional information; however, it mostly generates trivial rationales, reflecting fundamental limitations of neural language models. Conversely, the more realistic setup of jointly predicting the update or its type and generating a rationale is more challenging, suggesting an important future direction.
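To make the task concrete, below is a minimal sketch of post-hoc rationale generation for a single defeasible inference instance using an off-the-shelf pre-trained sequence-to-sequence model through Hugging Face Transformers. The model choice (t5-base), the example instance, the prompt format, and the decoding settings are all illustrative assumptions; the paper's models are trained on distantly supervised rationales rather than prompted zero-shot like this.

    # Minimal sketch (illustrative assumptions, not the paper's exact setup):
    # generate a post-hoc rationale for one defeasible inference instance
    # with an off-the-shelf pre-trained seq2seq language model.
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    model_name = "t5-base"  # assumed model; any pre-trained seq2seq LM could stand in
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

    # One defeasible inference instance: the update weakens the hypothesis.
    premise = "Two men and a dog are standing among rolling green hills."
    hypothesis = "The men are farmers."
    update = "The men are wearing business suits."  # a "weakener" update

    # Hypothetical prompt format; the paper instead trains on automatically
    # generated, distantly supervised rationales.
    prompt = (
        f"premise: {premise} hypothesis: {hypothesis} update: {update} "
        "explain why the update makes the hypothesis less likely:"
    )

    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=48, num_beams=4)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

In the harder joint setup described in the abstract, the model would additionally have to predict the update, or its type (strengthener vs. weakener), rather than receiving it as input.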

Related research

Explaining Deep Neural Networks (10/04/2020)
Deep neural networks are becoming more and more popular due to their rev...

Explaining Prediction Uncertainty of Pre-trained Language Models by Detecting Uncertain Words in Inputs (01/11/2022)
Estimating the predictive uncertainty of pre-trained language models is ...

SPARSEFIT: Few-shot Prompting with Sparse Fine-tuning for Jointly Generating Predictions and Natural Language Explanations (05/22/2023)
Explaining the decisions of neural models is crucial for ensuring their ...

Do Neural Language Representations Learn Physical Commonsense? (08/08/2019)
Humans understand language based on the rich background knowledge about ...

Teaching Pre-Trained Models to Systematically Reason Over Implicit Knowledge (06/11/2020)
To what extent can a neural network systematically reason over symbolic ...

Temporal Reasoning on Implicit Events from Distant Supervision (10/24/2020)
Existing works on temporal reasoning among events described in text focu...

PassGPT: Password Modeling and (Guided) Generation with Large Language Models (06/02/2023)
Large language models (LLMs) successfully model natural language from va...