Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models

09/08/2023
by Yangyi Chen, et al.

Vision-language models (VLMs) have recently demonstrated strong efficacy as visual assistants that can parse natural-language queries about visual content and generate human-like outputs. In this work, we explore the ability of these models to demonstrate human-like reasoning based on the perceived information. To address a crucial concern regarding the extent to which their reasoning capabilities are fully consistent and grounded, we also measure the reasoning consistency of these models, using a proposed chain-of-thought (CoT) based consistency measure. However, such an evaluation requires a benchmark that encompasses both high-level inference and detailed reasoning chains, which is costly to annotate. We tackle this challenge by proposing an LLM-Human-in-the-Loop pipeline, which notably reduces cost while ensuring the generation of a high-quality dataset. Based on this pipeline and an existing coarse-grained annotated dataset, we build the CURE benchmark to measure both the zero-shot reasoning performance and the consistency of VLMs. We evaluate existing state-of-the-art VLMs and find that even the best-performing model fails to demonstrate strong visual reasoning capabilities and consistency, indicating that substantial effort is required before VLMs can perform visual reasoning as systematically and consistently as humans. As an early step, we propose a two-stage training framework aimed at improving both the reasoning performance and the consistency of VLMs. In the first stage, we apply supervised fine-tuning to VLMs on step-by-step reasoning samples automatically generated by LLMs. In the second stage, we further augment training with LLM-provided feedback to produce reasoning chains that are highly consistent and well grounded. We empirically demonstrate the effectiveness of our framework in improving both reasoning performance and consistency.
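To make the consistency measure concrete, below is a minimal Python sketch of one plausible way to score CoT reasoning consistency: count how often a correct high-level answer is also backed by a fully correct chain of intermediate reasoning steps. The `Example` record, the `reasoning_consistency` function, and the correctness labels are illustrative assumptions for this sketch, not the exact metric or API from the paper.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Example:
    # One benchmark item: the model's correctness on the high-level
    # question and on each intermediate reasoning-step question.
    answer_correct: bool
    chain_correct: List[bool]

def reasoning_consistency(examples: List[Example]) -> float:
    """Fraction of correctly answered items whose reasoning chain is
    also entirely correct (a hypothetical operationalization)."""
    answered = [ex for ex in examples if ex.answer_correct]
    if not answered:
        return 0.0
    grounded = sum(all(ex.chain_correct) for ex in answered)
    return grounded / len(answered)

# Toy data: two of the three correct answers rest on fully correct chains.
data = [
    Example(True, [True, True]),
    Example(True, [True, False]),   # right answer, flawed reasoning
    Example(True, [True, True]),
    Example(False, [True, True]),   # wrong answer: excluded from the ratio
]
print(f"consistency = {reasoning_consistency(data):.2f}")  # 0.67
```

Under a formulation like this, a model can score high on raw accuracy yet low on consistency whenever its correct answers are not supported by correct intermediate reasoning, which is exactly the gap such a benchmark is designed to expose.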

Related research

04/16/2023 · Chain of Thought Prompt Tuning in Vision Language Models
Language-Image Pre-training has demonstrated promising results on zero-s...

02/02/2023 · Multimodal Chain-of-Thought Reasoning in Language Models
Large language models (LLMs) have shown impressive performance on comple...

06/30/2023 · Look, Remember and Reason: Visual Reasoning with Grounded Rationales
Large language models have recently shown human level performance on a v...

05/19/2023 · RCOT: Detecting and Rectifying Factual Inconsistency in Reasoning by Reversing Chain-of-Thought
Large language models (LLMs) have achieved promising performance on arit...

06/10/2023 · Human-in-the-Loop through Chain-of-Thought
While the emergence of powerful language models along with Chain-of-thou...

05/24/2023 · ECHo: Event Causality Inference via Human-centric Reasoning
We introduce ECHo, a diagnostic dataset of event causality inference gro...

03/30/2023 · Humans in Humans Out: On GPT Converging Toward Common Sense in both Success and Failure
Increase in computational scale and fine-tuning has seen a dramatic impr...
