Reasoning Like Program Executors

01/27/2022
by Xinyu Pi, et al.

Reasoning over natural language is a long-standing goal of the research community, yet studies have shown that existing language models reason poorly. To address this, we present POET, a novel reasoning pre-training paradigm. By pre-training language models on programs paired with their execution results, POET enables them to absorb, in a data-driven way, the reasoning knowledge embodied in program executors. POET is conceptually simple and can be instantiated with different kinds of program executors. In this paper, we showcase two simple instances, POET-Math and POET-Logic, alongside a more complex instance, POET-SQL. Experimental results on six benchmarks demonstrate that POET significantly boosts model performance on natural language reasoning tasks such as numerical, logical, and multi-hop reasoning. POET opens a new direction for reasoning-enhanced pre-training, and we hope our analysis sheds light on future research on reasoning like program executors.
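The core idea of pre-training on programs paired with their execution results can be sketched as follows. This is a hypothetical illustration of POET-Math-style data generation, not the paper's actual pipeline: the expression grammar, serialization format, and field names (`source`, `target`) are all assumptions here, with plain Python arithmetic standing in as the program executor.

```python
import random

def make_poet_math_pair(rng, n_operands=3):
    """Build one synthetic (program, execution result) pre-training pair.

    Hypothetical sketch: the model would later be trained to map the
    program string (source) to its execution result (target).
    """
    operands = [rng.randint(1, 100) for _ in range(n_operands)]
    ops = [rng.choice(["+", "-"]) for _ in range(n_operands - 1)]
    expr = str(operands[0])
    for op, val in zip(ops, operands[1:]):
        expr += f" {op} {val}"
    # The "program executor": Python evaluates the arithmetic expression.
    result = eval(expr)
    return {"source": expr, "target": str(result)}

rng = random.Random(0)
pair = make_poet_math_pair(rng)
```

Corpora of such pairs could then be fed to a standard sequence-to-sequence pre-training objective; POET-SQL would analogously pair SQL queries with the tables they execute over and the answers a SQL engine returns.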

Related research

07/16/2021 · TAPEX: Table Pre-training via Learning a Neural SQL Executor
Recent years pre-trained language models hit a success on modeling natur...

04/22/2023 · An Empirical Study on Using Large Language Models for Multi-Intent Comment Generation
Code comment generation aims at generating natural language descriptions...

07/28/2022 · Multi-Step Deductive Reasoning Over Natural Language: An Empirical Study on Out-of-Distribution Generalisation
Combining deep learning with symbolic logic reasoning aims to capitalize...

05/18/2022 · LogiGAN: Learning Logical Reasoning via Adversarial Pre-training
We present LogiGAN, an unsupervised adversarial pre-training framework f...

01/27/2023 · Case-Based Reasoning with Language Models for Classification of Logical Fallacies
The ease and the speed of spreading misinformation and propaganda on the...

03/01/2022 · MERIt: Meta-Path Guided Contrastive Learning for Logical Reasoning
Logical reasoning is of vital importance to natural language understandi...

08/07/2023 · Symmetry-Preserving Program Representations for Learning Code Semantics
Large Language Models (LLMs) have shown promise in automated program rea...
