The Art of SOCRATIC QUESTIONING: Zero-shot Multimodal Reasoning with Recursive Thinking and Self-Questioning

05/24/2023
by Jingyuan Qi, et al.

Chain-of-Thought prompting (CoT) enables large-scale language models to solve complex reasoning problems by decomposing the problem and tackling it step by step. However, Chain-of-Thought is a greedy thinking process that requires the language model to come up with a starting point and generate each next step based solely on the previous steps. This differs from how humans approach a complex problem: we proactively raise sub-problems related to the original problem and recursively answer them. In this work, we propose Socratic Questioning, a divide-and-conquer algorithm that simulates this self-questioning and recursive thinking process. Socratic Questioning is driven by a Self-Questioning module that employs a large-scale language model to propose sub-problems related to the original problem as intermediate steps, and it recursively backtracks and answers the sub-problems until it reaches the original problem. As a case study, we apply the proposed algorithm to the visual question-answering task and evaluate it on three public benchmark datasets, observing a significant performance improvement over all baselines on (almost) all datasets. In addition, qualitative analysis clearly demonstrates that the intermediate thinking steps elicited by Socratic Questioning resemble a human's recursive thinking process when solving a complex reasoning problem.
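To make the divide-and-conquer loop concrete, below is a minimal sketch of the recursive self-questioning idea described in the abstract. This is an illustrative reconstruction, not the authors' implementation: the `llm` callable stands in for any language-model completion API, and the prompts, the `max_depth` cutoff, and helper names such as `propose_subquestions` are assumptions made for the example.

```python
# A minimal sketch of recursive self-questioning in the spirit of
# Socratic Questioning. Everything here is illustrative: `llm` stands in
# for any text-completion call, and the prompts, helper names, and
# `max_depth` cutoff are assumptions, not the paper's interface.

from typing import Callable, List

def propose_subquestions(llm: Callable[[str], str], question: str) -> List[str]:
    """Self-Questioning step: ask the model for simpler sub-questions
    whose answers would help resolve `question`."""
    prompt = (
        f"Question: {question}\n"
        "List up to 3 simpler sub-questions that would help answer it, "
        "one per line. If it can be answered directly, reply NONE."
    )
    reply = llm(prompt).strip()
    if reply.upper() == "NONE":
        return []
    return [line.strip("- ").strip() for line in reply.splitlines() if line.strip()]

def socratic_answer(llm: Callable[[str], str], question: str,
                    depth: int = 0, max_depth: int = 2) -> str:
    """Divide and conquer: recursively answer sub-questions, then
    backtrack and answer `question` conditioned on their answers."""
    subs = [] if depth >= max_depth else propose_subquestions(llm, question)

    # "Divide": recursively resolve each sub-question first.
    facts = [
        f"Q: {sq}\nA: {socratic_answer(llm, sq, depth + 1, max_depth)}"
        for sq in subs
    ]

    # "Conquer"/backtrack: answer the original question given the
    # accumulated sub-answers as context.
    context = ("Known facts:\n" + "\n".join(facts) + "\n\n") if facts else ""
    return llm(context + f"Answer concisely: {question}").strip()
```

For the visual question-answering setting studied in the paper, one would presumably also pass visual evidence (e.g., captions or detected objects) into the prompts; that wiring is omitted here to keep the recursive structure in focus.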
