ManyModalQA: Modality Disambiguation and QA over Diverse Inputs

01/22/2020
by Darryl Hannan, et al.

We present a new multimodal question answering challenge, ManyModalQA, in which an agent must answer a question by considering three distinct modalities: text, images, and tables. We collect our data by scraping Wikipedia and then utilize crowdsourcing to collect question-answer pairs. Our questions are ambiguous, in that the modality that contains the answer is not easily determined based solely on the question. To demonstrate this ambiguity, we construct a modality selector (or disambiguator) network; this model achieves substantially lower accuracy on our challenge set than on existing datasets, indicating that our questions are more ambiguous. By analyzing this model, we investigate which words in the question are indicative of the modality. Next, we construct a simple baseline ManyModalQA model, which, based on the prediction from the modality selector, fires a corresponding pre-trained state-of-the-art unimodal QA model. We focus on providing the community with a new manymodal evaluation set and only provide a fine-tuning set, with the expectation that existing datasets and approaches will be transferred for most of the training, to encourage low-resource generalization without large, monolithic training sets for each new task. There is a significant gap between our baseline models and human performance; therefore, we hope that this challenge encourages research in end-to-end modality disambiguation and multimodal QA models, as well as transfer learning. Code and data are available at: https://github.com/hannandarryl/ManyModalQA
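
For concreteness, the baseline's dispatch pattern can be sketched as follows: a selector predicts which modality contains the answer, and the question is then routed to the corresponding unimodal QA model. This is a minimal illustrative sketch only; the names `select_modality` and `qa_models` are hypothetical, the keyword heuristic stands in for the paper's learned selector network, and the stub callables stand in for the pre-trained state-of-the-art unimodal QA systems.

```python
# Illustrative sketch of modality dispatch (not the paper's implementation).
from typing import Callable, Dict

MODALITIES = ("text", "image", "table")

def select_modality(question: str) -> str:
    """Stand-in for the learned modality selector: a trivial keyword
    heuristic used here purely for illustration."""
    q = question.lower()
    if any(w in q for w in ("pictured", "shown", "color", "wearing")):
        return "image"
    if any(w in q for w in ("how many", "rank", "total", "year")):
        return "table"
    return "text"

def answer(question: str,
           context: Dict[str, object],
           qa_models: Dict[str, Callable[[str, object], str]]) -> str:
    """Route the question to the unimodal QA model for the predicted
    modality, mirroring the baseline's selector-then-dispatch design."""
    modality = select_modality(question)
    return qa_models[modality](question, context[modality])

# Usage with stub models standing in for the real unimodal QA systems:
qa_models = {m: (lambda q, ctx, m=m: f"[{m} answer]") for m in MODALITIES}
context = {"text": "…", "image": None, "table": []}
print(answer("What color is the team's home jersey?", context, qa_models))
```

One consequence of this design, noted in the abstract, is that overall accuracy is bounded by the selector: if the modality prediction is wrong, the downstream unimodal model cannot recover, which is why question ambiguity makes the challenge hard.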
