Decomposed Prompting: A Modular Approach for Solving Complex Tasks

10/05/2022
by Tushar Khot, et al.

Few-shot prompting is a surprisingly powerful way to use Large Language Models (LLMs) to solve various tasks. However, this approach struggles as task complexity increases or when the individual reasoning steps of the task are themselves hard to learn, especially when embedded in more complex tasks. To address this, we propose Decomposed Prompting, a new approach that solves complex tasks by decomposing them (via prompting) into simpler sub-tasks, which are then delegated to a library of prompting-based LLMs dedicated to those sub-tasks. This modular structure allows each prompt to be optimized for its specific sub-task, further decomposed if necessary, and even easily replaced with more effective prompts, trained models, or symbolic functions if desired. We show that the flexibility and modularity of Decomposed Prompting allow it to outperform prior work on few-shot prompting using GPT3. On symbolic reasoning tasks, we can further decompose sub-tasks that are hard for LLMs into even simpler, solvable sub-tasks. When the complexity comes from the input length, we can recursively decompose the task into the same task applied to smaller inputs. We also evaluate our approach on textual multi-step reasoning tasks: on a long-context multi-hop QA task, we can teach the sub-tasks more effectively via separate sub-task prompts; and on open-domain multi-hop QA, we can incorporate symbolic information retrieval within our decomposition framework, leading to improved performance on both tasks.
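The modular structure described above can be sketched as a controller that dispatches each sub-task to a dedicated handler. In this minimal illustration the handlers are plain Python functions standing in for sub-task modules; in the paper's setting each handler could instead be a prompting-based LLM, a trained model, or a symbolic function. The task, handler names, and decomposition are illustrative, not the authors' implementation:

```python
from typing import Callable, Dict, List

# Library of sub-task handlers (names are illustrative). Each entry
# could be swapped for an LLM prompt, a trained model, or a symbolic
# function without changing the controller below.
HANDLERS: Dict[str, Callable] = {
    "split": lambda text: text.split(),          # symbolic function
    "first_letter": lambda word: word[0],        # could be a sub-task prompt
    "concat": lambda letters: "".join(letters),  # symbolic function
}

def solve_first_letters(task_input: str) -> str:
    """Solve 'concatenate the first letter of each word' by
    delegating each step to a dedicated sub-task handler."""
    words: List[str] = HANDLERS["split"](task_input)
    letters = [HANDLERS["first_letter"](w) for w in words]
    return HANDLERS["concat"](letters)

print(solve_first_letters("Decomposed Prompting Works"))  # → "DPW"
```

Because each handler is addressed only by name, optimizing, further decomposing, or replacing one sub-task module leaves the rest of the pipeline untouched, which is the modularity the abstract emphasizes.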


