True Detective: A Challenging Benchmark for Deep Abductive Reasoning in Foundation Models

12/20/2022
by   Maksym Del, et al.

Large language models (LLMs) have demonstrated strong performance on zero-shot reasoning tasks, including abductive reasoning, and do well on current benchmarks in this area. To truly test the limits of LLMs' abductive reasoning, however, a more challenging benchmark is needed. In this paper, we present such a benchmark: 191 long-form mystery stories, each approximately 1,200 words long, presented as detective puzzles sourced from the "5 Minute Mystery" platform. Each puzzle includes a multiple-choice question for evaluation. Our results show that state-of-the-art GPT models perform significantly worse than human solvers on this benchmark, achieving 28% accuracy compared to 47% for humans. This gap indicates that the abductive reasoning abilities of LLMs still fall well short of human performance and highlights the need for further research in this area. Our work provides a challenging benchmark for future studies of reasoning in language models and contributes to a better understanding of the limits of LLMs' abilities.
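The reported comparison (28% model accuracy vs. 47% human accuracy) amounts to simple multiple-choice scoring over the 191 puzzles. A minimal sketch of that scoring is below; the toy answer data and option labels are hypothetical placeholders, not the actual dataset or evaluation code from the paper.

```python
def accuracy(predictions, gold):
    """Fraction of puzzles where the chosen option matches the gold answer."""
    assert len(predictions) == len(gold), "one prediction per puzzle expected"
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# Toy example: four puzzles with options labeled A-D (illustrative only).
gold_answers = ["B", "D", "A", "C"]
model_choices = ["B", "A", "A", "D"]  # a model picks one option per puzzle

print(f"accuracy = {accuracy(model_choices, gold_answers):.0%}")  # prints: accuracy = 50%
```

On the real benchmark the same computation would be run once over the model's 191 multiple-choice answers and once over aggregated human answers, yielding the two accuracy figures quoted above.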

