LatEval: An Interactive LLMs Evaluation Benchmark with Incomplete Information from Lateral Thinking Puzzles

08/21/2023
by Shulin Huang, et al.

With their continuous evolution and refinement, LLMs have been endowed with impressive logical reasoning, or vertical thinking, capabilities. But can they think outside the box? Do they possess proficient lateral thinking abilities? Following the setup of Lateral Thinking Puzzles, we propose a novel evaluation benchmark, LatEval, which assesses a model's lateral thinking within an interactive framework. In our benchmark, we challenge LLMs on two aspects: the quality of the questions posed by the model and the model's capability to integrate information for problem solving. We find that nearly all LLMs struggle to employ lateral thinking during interactions. For example, even the most advanced model, GPT-4, exhibits an advantage to some extent, yet still shows a noticeable gap compared with humans. This evaluation benchmark provides LLMs with a highly challenging and distinctive task that is crucial for an effective AI assistant.
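
As a rough illustration of the interactive setup described above, the sketch below shows one way such a session could be orchestrated: the evaluated model repeatedly poses yes/no questions about an incomplete story, a host who knows the hidden truth answers them, and the model then integrates what it has learned into a final explanation. This is a minimal sketch under our own assumptions, not the paper's released code; the callables ask_question, answer_as_host, propose_solution, and judge_solution are hypothetical placeholders for LLM API calls and an answer-matching routine.

    # A minimal sketch (assumed structure, not the authors' released code) of one
    # interactive lateral-thinking-puzzle session. All callables passed in are
    # hypothetical placeholders for LLM API calls and an answer-matching routine.

    def run_session(story, hidden_truth, ask_question, answer_as_host,
                    propose_solution, judge_solution, max_turns=15):
        """Run one puzzle session and report whether the model recovered the hidden truth."""
        transcript = []
        for _ in range(max_turns):
            # Aspect 1: the evaluated model poses a yes/no question about the story.
            question = ask_question(story, transcript)
            # The host, who knows the hidden truth, replies "yes", "no", or "irrelevant".
            reply = answer_as_host(hidden_truth, question)
            transcript.append((question, reply))

        # Aspect 2: the model integrates the gathered information into a final explanation.
        solution = propose_solution(story, transcript)
        solved = judge_solution(hidden_truth, solution)
        return {"solved": solved, "turns": len(transcript), "transcript": transcript}

The two scoring aspects named in the abstract map onto the two commented steps: the quality of the questions gathered in the transcript, and the quality of the final integrated solution.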
