Evaluating and Improving Tool-Augmented Computation-Intensive Math Reasoning

06/04/2023
by Beichen Zhang, et al.

Chain-of-thought (CoT) prompting and tool augmentation have been validated in recent work as effective practices for improving the step-by-step reasoning of large language models (LLMs) on complex math-related tasks. However, most existing math reasoning datasets cannot fully evaluate and analyze the ability of LLMs to manipulate tools and perform reasoning, as they may require only very few tool invocations or lack annotations for evaluating intermediate reasoning steps. To address this issue, we construct CARP, a new Chinese dataset consisting of 4,886 computation-intensive algebra problems with formulated annotations on intermediate steps. On CARP, we test four LLMs with CoT prompting and find that they are all prone to making mistakes in the early steps of a solution, leading to wrong answers. Based on this finding, we propose DELI, a new approach that deliberates over the reasoning steps with tool interfaces. In DELI, we first initialize a step-by-step solution based on retrieved exemplars, then iterate two deliberation procedures that check and refine the intermediate steps of the generated solution, from the perspectives of tool manipulation and natural language reasoning, until the solution converges or a maximum number of turns is reached. Experimental results on CARP and six other datasets show that DELI mostly outperforms competitive baselines and can further boost the performance of existing CoT methods. Our data and code are available at <https://github.com/RUCAIBox/CARP>.
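As a rough illustration of the iterative procedure described above, here is a minimal Python sketch of the DELI loop. Every helper below (retrieve_exemplars, generate_solution, deliberate_with_tools, deliberate_in_nl) and the turn cap MAX_TURNS are placeholder assumptions for readability, not the authors' actual interfaces; see the linked repository for the real implementation.

```python
# Hypothetical sketch of the DELI deliberation loop from the abstract.
# All helpers are placeholder stubs, not the authors' API.

MAX_TURNS = 5  # assumed cap on deliberation turns


def retrieve_exemplars(problem: str, corpus: list[str]) -> list[str]:
    """Placeholder retriever: return a few exemplar solutions."""
    return corpus[:3]


def generate_solution(problem: str, exemplars: list[str]) -> list[str]:
    """Placeholder LLM call: produce an initial list of reasoning steps."""
    return [f"initial step for: {problem} ({len(exemplars)} exemplars)"]


def deliberate_with_tools(problem: str, steps: list[str]) -> list[str]:
    """Placeholder tool deliberation: re-check each intermediate step via
    tool interfaces (e.g., a symbolic calculator) and repair mismatches."""
    return steps


def deliberate_in_nl(problem: str, steps: list[str]) -> list[str]:
    """Placeholder deliberation: check and refine the same steps with
    natural language reasoning."""
    return steps


def deli(problem: str, corpus: list[str]) -> list[str]:
    # 1. Initialize a step-by-step solution from retrieved exemplars.
    steps = generate_solution(problem, retrieve_exemplars(problem, corpus))
    # 2. Alternate the two deliberation procedures until the solution
    #    stops changing (convergence) or the turn budget is exhausted.
    for _ in range(MAX_TURNS):
        refined = deliberate_in_nl(problem, deliberate_with_tools(problem, steps))
        if refined == steps:  # no step changed this turn: converged
            break
        steps = refined
    return steps
```

The key design point, per the abstract, is that checking happens from two complementary perspectives in each turn: tool execution catches computational errors that pure language reasoning misses, while natural language deliberation catches logical errors in the early steps where the paper finds LLMs most often go wrong.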


