Re-Reading Improves Reasoning in Language Models

09/12/2023
by Xiaohan Xu et al.

Reasoning presents a significant and challenging issue for Large Language Models (LLMs). The predominant focus of research has revolved around developing diverse prompting strategies to guide and structure the reasoning processes of LLMs. However, these approaches, built on decoder-only causal language models, often process the input question in a single forward pass, potentially missing the rich, back-and-forth interactions inherent in human reasoning. Scant attention has been paid to a critical dimension: the input question itself embedded within the prompts. In response, we introduce a deceptively simple yet highly effective prompting strategy, termed question "re-reading". Drawing inspiration from human learning and problem-solving, re-reading entails revisiting the question information embedded within input prompts. This approach aligns seamlessly with the cognitive principle of reinforcement, enabling LLMs to extract deeper insights, identify intricate patterns, establish more nuanced connections, and ultimately enhance their reasoning capabilities across various tasks. Experiments conducted on a series of reasoning benchmarks underscore the effectiveness and generality of our method. Moreover, our findings demonstrate that our approach integrates seamlessly with various language models, thought-eliciting prompting methods, and ensemble techniques, further underscoring its versatility and compatibility in the realm of LLMs.
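The abstract describes re-reading as repeating the question within the prompt before the model answers. A minimal sketch of how such a prompt might be assembled is shown below; the exact template wording (the "Read the question again:" phrasing and the chain-of-thought trigger) is an assumption for illustration, not necessarily the authors' verbatim template.

```python
def re_reading_prompt(question: str) -> str:
    """Build a re-reading style prompt: the question is stated,
    then repeated, before the answer cue. The template text here
    is illustrative, not the paper's exact wording."""
    return (
        f"Q: {question}\n"
        f"Read the question again: {question}\n"
        "A: Let's think step by step."
    )

# Example usage: the resulting string would be sent to an LLM as-is.
prompt = re_reading_prompt(
    "If a train travels 60 miles in 1.5 hours, what is its average speed?"
)
print(prompt)
```

Because the strategy only rewrites the input prompt, it composes naturally with other prompting methods (e.g. chain-of-thought) and with any underlying model, which is consistent with the compatibility claims in the abstract.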


