Self-Consistency Improves Chain of Thought Reasoning in Language Models

03/21/2022
by Xuezhi Wang, et al.

We explore a simple ensemble strategy, self-consistency, that significantly improves the reasoning accuracy of large language models. The idea is to sample a diverse set of outputs from a language model and return the most consistent answer in the set. Such an ensembling method improves reasoning accuracy when combined with chain-of-thought prompting. On arithmetic and commonsense reasoning benchmarks, we find that self-consistency yields significant accuracy improvements across a variety of datasets, such as GSM8K (+10%) and MultiArith (+24%).
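The aggregation step described above reduces to a majority vote over sampled answers. The sketch below is a minimal illustration of that idea under stated assumptions, not the authors' implementation: `sample_fn` stands in for any stochastic decoder (e.g., temperature sampling from a language model) and `extract_answer` for a parser that pulls the final answer out of a generated chain of thought; both names are hypothetical.

```python
from collections import Counter
from typing import Callable, List


def self_consistency(sample_fn: Callable[[str], str],
                     extract_answer: Callable[[str], str],
                     prompt: str,
                     num_samples: int = 40) -> str:
    """Sample several reasoning paths and return the majority-vote answer."""
    # Draw a diverse set of chain-of-thought completions, then keep only
    # each path's final answer.
    answers: List[str] = [extract_answer(sample_fn(prompt))
                          for _ in range(num_samples)]
    # The most consistent answer across the sampled paths wins.
    return Counter(answers).most_common(1)[0][0]
```

Note that diversity matters: with greedy decoding every sample would be identical, so the vote only helps when the decoder is stochastic enough to produce distinct reasoning paths.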
