Progressive-Hint Prompting Improves Reasoning in Large Language Models

04/19/2023
by Chuanyang Zheng, et al.

The performance of Large Language Models (LLMs) in reasoning tasks depends heavily on prompt design, with Chain-of-Thought (CoT) and self-consistency being critical methods that enhance this ability. However, these methods do not fully exploit the answers generated by the LLM to guide subsequent responses. This paper proposes a new prompting method, named Progressive-Hint Prompting (PHP), that enables automatic multiple interactions between users and LLMs by using previously generated answers as hints to progressively guide toward the correct answers. PHP is orthogonal to CoT and self-consistency, making it easy to combine with state-of-the-art techniques to further improve performance. We conducted an extensive and comprehensive evaluation to demonstrate the effectiveness of the proposed method. Our experimental results on six benchmarks show that combining CoT and self-consistency with PHP significantly improves accuracy while remaining highly efficient. For instance, with text-davinci-003, we observed a 4.2% improvement with greedy decoding compared to Complex CoT, and a 46.17% reduction in sample paths with self-consistency. With GPT-4 and PHP, we achieve state-of-the-art performance on SVAMP (91.9%), GSM8K (95.5%), AQuA (79.9%), and MATH (53.9%).
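The hint-based interaction loop described in the abstract can be sketched in a few lines. The following is a minimal illustration of the idea, assuming a hypothetical query_llm(prompt) helper that returns the model's text and a hypothetical extract_answer(text) parser; the hint wording and stopping rule are assumptions for illustration, not the paper's verbatim implementation.

```python
def progressive_hint_prompting(question, query_llm, extract_answer, max_rounds=8):
    """Sketch of a PHP-style loop: re-ask the question, feeding previously
    generated answers back as hints, until two consecutive answers agree
    (or a round limit is reached)."""
    hints = []
    previous_answer = None
    for _ in range(max_rounds):
        if hints:
            # Append earlier answers as a hint to the base prompt.
            prompt = f"{question} (Hint: the answer is near to {', '.join(hints)})."
        else:
            prompt = question
        response = query_llm(prompt)      # the base prompt can itself be CoT / Complex CoT
        answer = extract_answer(response)
        if answer == previous_answer:     # answer has stabilized; stop interacting
            return answer
        hints.append(answer)
        previous_answer = answer
    return previous_answer
```

Because the loop only rewrites the prompt, it composes directly with CoT or self-consistency as the underlying query strategy, which is the orthogonality the abstract highlights.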


research
03/21/2022

Self-Consistency Improves Chain of Thought Reasoning in Language Models

We explore a simple ensemble strategy, self-consistency, that significan...
research
05/23/2023

Self-Polish: Enhance Reasoning in Large Language Models via Problem Refinement

Prompting methods such as Chain-of-Thought (CoT) have shed new light on ...
research
06/06/2023

Certified Reasoning with Language Models

Language models often achieve higher accuracy when reasoning step-by-ste...
research
04/27/2023

Federated Prompting and Chain-of-Thought Reasoning for Improving LLMs Answering

We investigate how to enhance answer precision in frequently asked quest...
research
03/28/2022

STaR: Bootstrapping Reasoning With Reasoning

Generating step-by-step "chain-of-thought" rationales improves language ...
research
08/15/2023

Forward-Backward Reasoning in Large Language Models for Verification

Chain-of-Thought (CoT) prompting has shown promising performance in vario...
research
07/11/2023

Self-consistency for open-ended generations

In this paper, we present a novel approach for improving the quality and...
