Successive Prompting for Decomposing Complex Questions

12/08/2022
by   Dheeru Dua, et al.

Answering complex questions that require making latent decisions is a challenging task, especially when limited supervision is available. Recent works leverage the capabilities of large language models (LMs) to perform complex question answering in a few-shot setting by demonstrating how to output intermediate rationalizations while solving the complex question in a single pass. We introduce "Successive Prompting", in which we iteratively break down a complex task into a simpler task, solve it, and repeat the process until we reach the final solution. Successive prompting decouples the supervision for decomposing complex questions from the supervision for answering simple questions, allowing us to (1) have multiple opportunities to query in-context examples at each reasoning step, (2) learn question decomposition separately from question answering, including with synthetic data, and (3) use bespoke (fine-tuned) components for reasoning steps where a large LM does not perform well. The intermediate supervision is typically written manually, which can be expensive to collect. We introduce a way to generate a synthetic dataset that can be used to bootstrap a model's ability to decompose and answer intermediate questions. Our best model (with successive prompting) achieves an improvement of ~5% over a state-of-the-art model with the same supervision.
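The iterative decompose-then-answer loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `decompose` and `answer` functions below are hypothetical stand-ins for the LM (or fine-tuned) components, hard-coded here on a toy example so the control flow is runnable.

```python
# Sketch of the Successive Prompting loop. In the paper, `decompose` and
# `answer` would be calls to an LM (or bespoke fine-tuned modules); here
# they are hard-coded stand-ins for a single toy example.

def decompose(question, history):
    # Hypothetical question-decomposition step: given the complex question
    # and the (sub-question, answer) pairs so far, emit the next simple
    # sub-question, or a final-answer marker once enough steps are done.
    if len(history) == 0:
        return "How many field goals were scored?"
    if len(history) == 1:
        return "How many touchdowns were scored?"
    return "[FINAL] difference of field goals and touchdowns"

def answer(sub_question, history):
    # Hypothetical simple-QA step: a fact lookup plus a final arithmetic
    # step standing in for a reasoning component.
    facts = {
        "How many field goals were scored?": 3,
        "How many touchdowns were scored?": 2,
    }
    if sub_question in facts:
        return facts[sub_question]
    return history[0][1] - history[1][1]  # final step: subtract the counts

def successive_prompting(question):
    history = []  # accumulated (sub-question, answer) pairs across steps
    while True:
        sub_q = decompose(question, history)
        if sub_q.startswith("[FINAL]"):
            return answer(sub_q, history)
        history.append((sub_q, answer(sub_q, history)))

print(successive_prompting(
    "How many more field goals than touchdowns were scored?"))  # → 1
```

Because decomposition and answering are separate calls, each step can retrieve its own in-context examples, and either component can be swapped for a fine-tuned model, which is the decoupling the abstract describes.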

Related research:

- Question Decomposition Tree for Answering Complex Questions over Knowledge Bases (06/13/2023): Knowledge base question answering (KBQA) has attracted a lot of interest...
- Latent Compositional Representations Improve Systematic Generalization in Grounded Question Answering (07/01/2020): Answering questions that involve multi-step reasoning requires decomposi...
- Paired Examples as Indirect Supervision in Latent Decision Models (04/05/2021): Compositional, structured models are appealing because they explicitly d...
- Let's Verify Step by Step (05/31/2023): In recent years, large language models have greatly improved in their ab...
- How Much Coffee Was Consumed During EMNLP 2019? Fermi Problems: A New Reasoning Challenge for AI (10/27/2021): Many real-world problems require the combined application of multiple re...
- STaR: Bootstrapping Reasoning With Reasoning (03/28/2022): Generating step-by-step "chain-of-thought" rationales improves language ...
- What's in a Question: Using Visual Questions as a Form of Supervision (04/12/2017): Collecting fully annotated image datasets is challenging and expensive. ...
