PINTO: Faithful Language Reasoning Using Prompt-Generated Rationales

11/03/2022
by Peifeng Wang, et al.

Neural language models (LMs) have achieved impressive results on various language-based reasoning tasks by utilizing latent knowledge encoded in their own pretrained parameters. To make this reasoning process more explicit, recent works retrieve a rationalizing LM's internal knowledge by training or prompting it to generate free-text rationales, which can be used to guide task predictions made by either the same LM or a separate reasoning LM. However, rationalizing LMs require expensive rationale annotation and/or computation, without any assurance that their generated rationales improve LM task performance or faithfully reflect LM decision-making. In this paper, we propose PINTO, an LM pipeline that rationalizes via prompt-based learning, and learns to faithfully reason over rationales via counterfactual regularization. First, PINTO maps out a suitable reasoning process for the task input by prompting a frozen rationalizing LM to generate a free-text rationale. Second, PINTO's reasoning LM is fine-tuned to solve the task using the generated rationale as context, while regularized to output less confident predictions when the rationale is perturbed. Across four datasets, we show that PINTO significantly improves the generalization ability of the reasoning LM, yielding higher performance on both in-distribution and out-of-distribution test sets. Also, we find that PINTO's rationales are more faithful to its task predictions than those generated by competitive baselines.
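To make the two-stage pipeline concrete, below is a minimal sketch (not the authors' code) of PINTO's second stage: fine-tuning the reasoning LM on the question plus the prompt-generated rationale, while a counterfactual term discourages confident predictions when the rationale is corrupted. The function names, the token-masking perturbation, the KL-to-uniform confidence penalty, and the weighting `lam` are all illustrative assumptions; the paper's exact regularization and perturbation schemes may differ.

```python
# Sketch of a PINTO-style training loss: supervised answer prediction on the
# intact rationale, plus a penalty on confident predictions over a perturbed
# rationale. Assumes `reasoning_lm` maps a token-id sequence to answer logits.

import torch
import torch.nn.functional as F


def perturb_rationale(rationale_ids: torch.Tensor, mask_id: int, p: float = 0.3) -> torch.Tensor:
    """Corrupt the rationale by randomly masking a fraction of its tokens
    (one simple counterfactual perturbation; others are possible)."""
    noise = torch.rand_like(rationale_ids, dtype=torch.float)
    return torch.where(noise < p, torch.full_like(rationale_ids, mask_id), rationale_ids)


def pinto_style_loss(reasoning_lm, question_ids, rationale_ids, labels,
                     mask_id: int, lam: float = 1.0) -> torch.Tensor:
    """Task loss with the intact rationale + confidence penalty with a perturbed one."""
    # Standard supervised loss: predict the answer given question + generated rationale.
    logits = reasoning_lm(torch.cat([question_ids, rationale_ids], dim=-1))
    task_loss = F.cross_entropy(logits, labels)

    # Counterfactual branch: same question, corrupted rationale.
    corrupted = perturb_rationale(rationale_ids, mask_id)
    cf_logits = reasoning_lm(torch.cat([question_ids, corrupted], dim=-1))

    # Push the perturbed-rationale prediction toward the uniform distribution
    # (i.e., low confidence), so the model cannot ignore the rationale and
    # still answer confidently from shortcuts in the question alone.
    log_probs = F.log_softmax(cf_logits, dim=-1)
    uniform = torch.full_like(log_probs, 1.0 / log_probs.size(-1))
    confidence_penalty = F.kl_div(log_probs, uniform, reduction="batchmean")

    return task_loss + lam * confidence_penalty
```

The design intent this sketch tries to capture is that the reasoning LM should depend on the rationale: if the rationale is corrupted and the model still answers with high confidence, the regularizer adds loss, which is what ties the generated rationales to the task predictions.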


