Cumulative Reasoning with Large Language Models

08/08/2023
by Yifan Zhang, et al.

While language models are powerful and versatile, they often fail to address highly complex problems. This is because solving complex problems requires deliberate thinking, which has been only minimally guided during training. In this paper, we propose a new method called Cumulative Reasoning (CR), which employs language models in a cumulative and iterative manner to emulate human thought processes. By decomposing tasks into smaller components, CR streamlines the problem-solving process, rendering it both more manageable and effective. For logical inference tasks, CR consistently outperforms existing methods with an improvement of up to 9.3%, reaching an accuracy of 98.04% on the curated FOLIO wiki dataset. In the context of the Game of 24, CR achieves an accuracy of 98%, a 24% improvement over the previous state-of-the-art method (code is available at https://github.com/iiis-ai/cumulative-reasoning).
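The abstract describes CR only at a high level: propose intermediate propositions, verify them, and accumulate the verified ones until the question can be answered. The sketch below is a minimal illustration of that cumulative-and-iterative loop, not the authors' implementation; the `propose`, `verify`, and `report` callables are hypothetical stand-ins for LLM-backed steps (the actual roles and prompts are in the linked repository).

```python
from typing import Callable, List, Optional

def cumulative_reasoning(
    premises: List[str],
    question: str,
    propose: Callable[[List[str], str], str],       # derive a candidate proposition
    verify: Callable[[List[str], str], bool],       # accept or reject the candidate
    report: Callable[[List[str], str], Optional[str]],  # answer if context suffices
    max_steps: int = 16,
) -> Optional[str]:
    """Accumulate verified intermediate propositions until the question is answerable."""
    context = list(premises)  # the growing pool of established facts
    for _ in range(max_steps):
        candidate = propose(context, question)
        if verify(context, candidate):   # keep only verified steps in the context
            context.append(candidate)
        answer = report(context, question)
        if answer is not None:           # stop as soon as the context suffices
            return answer
    return report(context, question)     # best effort once the step budget is spent
```

For the Game of 24, for example, each verified step would be an intermediate equation: from {4, 9, 10, 13}, the context might accumulate 13 − 9 = 4 and 10 − 4 = 6 before the reporter closes with 4 × 6 = 24.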
