Text Modular Networks: Learning to Decompose Tasks in the Language of Existing Models

09/01/2020
by Tushar Khot, et al.

A common approach to solving complex tasks is to break them down into simple sub-problems that can then be solved by simpler modules. However, these approaches often need to be designed and trained specifically for each complex task. We propose a general approach, Text Modular Networks (TMNs), where the system learns to decompose any complex task into the language of existing models. Specifically, we focus on Question Answering (QA) and learn to decompose complex questions into sub-questions answerable by existing QA models. TMNs treat these models as black boxes and learn their textual input-output behavior (i.e., their language) through their task datasets. Our next-question generator then learns to sequentially produce sub-questions that help answer a given complex question. These sub-questions are posed to different existing QA models and, together with their answers, provide a natural language explanation of the exact reasoning used by the model. We present the first system, incorporating a neural factoid QA model and a symbolic calculator, that uses decomposition for the DROP dataset, while also generalizing to the multi-hop HotpotQA dataset. Our system, ModularQA, outperforms a cross-task baseline by 10-60 F1 points and performs comparably to task-specific systems, while also providing an easy-to-read explanation of its reasoning.
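The inference loop described in the abstract (a next-question generator sequentially producing sub-questions, each routed to a blackbox module such as a factoid QA model or a symbolic calculator, with the resulting trace serving as the explanation) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the `next_question` generator, the `simple_qa` lookup stub, and the `[EOQ]` end-of-questions token are hypothetical stand-ins for the learned components.

```python
# Hypothetical sketch of a TMN-style inference loop.
# All components here are illustrative stubs, not the paper's models.

def calculator(expr: str) -> str:
    """Symbolic module: evaluate a simple arithmetic expression."""
    return str(eval(expr, {"__builtins__": {}}))

def simple_qa(question: str, context: dict) -> str:
    """Blackbox factoid QA model, stubbed as a dictionary lookup."""
    return context.get(question, "unknown")

def answer_complex(question: str, context: dict, next_question) -> list:
    """Sequentially generate sub-questions, route each to a module,
    and collect (sub-question, answer) pairs as the explanation trace."""
    history = []
    while True:
        module, sub_q = next_question(question, history)
        if sub_q == "[EOQ]":  # generator signals the question is fully decomposed
            return history
        answer = calculator(sub_q) if module == "calc" else simple_qa(sub_q, context)
        history.append((sub_q, answer))
```

For a DROP-style numeric comparison question, the generator would first emit two factoid sub-questions, then a calculator expression over their answers; the final entry of the returned trace is the overall answer, and the full trace is a human-readable explanation.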


Related research

02/22/2020 · Unsupervised Question Decomposition for Question Answering
We aim to improve question answering (QA) by decomposing hard questions ...

10/05/2022 · Decomposed Prompting: A Modular Approach for Solving Complex Tasks
Few-shot prompting is a surprisingly powerful way to use Large Language ...

02/12/2023 · Analyzing the Effectiveness of the Underlying Reasoning Tasks in Multi-hop Question Answering
To explain the predicted answers and evaluate the reasoning abilities of...

05/23/2022 · QASem Parsing: Text-to-text Modeling of QA-based Semantics
Several recent works have suggested to represent semantic relations with...

05/24/2023 · Chain-of-Questions Training with Latent Answers for Robust Multistep Question Answering
We train a language model (LM) to robustly answer multistep questions by...

01/15/2014 · Enhancing QA Systems with Complex Temporal Question Processing Capabilities
This paper presents a multilayered architecture that enhances the capabi...

12/05/2019 · Easy-to-Hard: Leveraging Simple Questions for Complex Question Generation
This paper makes one of the first efforts toward automatically generatin...
