Self-Polish: Enhance Reasoning in Large Language Models via Problem Refinement

05/23/2023
by Zhiheng Xi, et al.

Prompting methods such as Chain-of-Thought (CoT) have shed new light on enhancing the reasoning capabilities of large language models, and researchers have extensively explored how rationales and answers are generated. However, they have overlooked the challenges posed by poorly formulated reasoning problems, which can significantly affect reasoning performance. In this work, we propose Self-Polish (SP), a novel method that facilitates models' problem-solving by prompting them to progressively refine the given problems to be more comprehensible and solvable. Specifically, the method teaches models to eliminate irrelevant information, rearrange the logical structure, and organize local conditions into new ones in parallel. SP is orthogonal to other prompting methods, making it convenient to integrate with state-of-the-art techniques for further improvement. We conduct thorough experiments on five benchmarks to illustrate the effectiveness of the proposed method. For example, with Text-davinci-003, our method boosts the performance of standard few-shot prompting by 8.0% on GSM8K and 17.8% on MultiArith; it also improves the performance of CoT by 6.0% on GSM8K and 6.0% on MathQA. Furthermore, our method also shows strong results in robustness evaluations.
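To make the idea concrete, the sketch below shows one way such a progressive problem-refinement loop could be wired up. This is our own illustration, not the authors' released code: the prompt templates, the placeholder call_llm function, and the simple convergence check are all assumptions for demonstration purposes.

```python
# Minimal sketch of a Self-Polish-style refinement loop (illustrative only,
# not the authors' implementation). `call_llm` is a placeholder for any
# text-completion API; swap in your own model client.

REFINE_PROMPT = (
    "Rewrite the following math word problem so that it is easier to solve: "
    "remove irrelevant information, order the conditions logically, and "
    "merge related conditions where possible. Keep the question unchanged.\n\n"
    "Problem: {problem}\n\nRewritten problem:"
)

SOLVE_PROMPT = "Q: {problem}\nA: Let's think step by step."


def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., a chat/completions API)."""
    raise NotImplementedError("plug in your model client here")


def self_polish_solve(problem: str, max_rounds: int = 3) -> str:
    """Progressively refine the problem, then answer the refined version.

    Stops early if a refinement round no longer changes the problem text;
    this is a simple convergence heuristic, not necessarily the paper's
    exact stopping criterion.
    """
    current = problem
    for _ in range(max_rounds):
        refined = call_llm(REFINE_PROMPT.format(problem=current)).strip()
        if refined == current:  # converged: no further polishing needed
            break
        current = refined
    # Answer the final, polished problem (could be combined with CoT,
    # few-shot demonstrations, or other prompting methods).
    return call_llm(SOLVE_PROMPT.format(problem=current))
```

Because the refinement step only rewrites the problem statement, the loop can sit in front of any downstream prompting strategy, which is consistent with the abstract's claim that SP is orthogonal to other methods.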

Related research:
- Progressive-Hint Prompting Improves Reasoning in Large Language Models (04/19/2023)
- Enhancing Reasoning Capabilities of Large Language Models: A Graph-Based Verification Approach (08/18/2023)
- Synthetic Prompting: Generating Chain-of-Thought Demonstrations for Large Language Models (02/01/2023)
- Invalid Logic, Equivalent Gains: The Bizarreness of Reasoning in Language Model Prompting (07/20/2023)
- Large Language Model for Science: A Study on P vs. NP (09/11/2023)
- CodeCoT and Beyond: Learning to Program and Test like a Developer (08/17/2023)
- Graph of Thoughts: Solving Elaborate Problems with Large Language Models (08/18/2023)