CLEVR_HYP: A Challenge Dataset and Baselines for Visual Question Answering with Hypothetical Actions over Images

04/13/2021
by Shailaja Keyur Sampat, et al.

Most existing research on visual question answering (VQA) is limited to information explicitly present in an image or a video. In this paper, we take visual understanding to a higher level where systems are challenged to answer questions that involve mentally simulating the hypothetical consequences of performing specific actions in a given scenario. Towards that end, we formulate a vision-language question answering task based on the CLEVR (Johnson et al., 2017) dataset. We then modify the best existing VQA methods and propose baseline solvers for this task. Finally, we motivate the development of better vision-language models by providing insights about the capability of diverse architectures to perform joint reasoning over image-text modality. Our dataset setup scripts and code will be made publicly available at https://github.com/shailaja183/clevr_hyp.
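To make the task format concrete, the Python sketch below shows what a CLEVR_HYP-style instance and a toy symbolic solver might look like. It is illustrative only: the class names, field names, action phrasing, and scene schema are assumptions made for exposition, not the dataset's actual format (see the linked repository for that).

    # Illustrative sketch only: the schema and the "paint" action below are
    # assumptions for exposition, not the actual CLEVR_HYP data format.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class SceneObject:
        shape: str
        color: str
        size: str
        material: str

    @dataclass
    class HypotheticalExample:
        scene: List[SceneObject]   # symbolic stand-in for the rendered CLEVR image
        action_text: str           # hypothetical action the model must mentally simulate
        question: str              # question about the scene after the action
        answer: str

    def apply_paint_action(scene: List[SceneObject], old: str, new: str) -> List[SceneObject]:
        """Toy simulator: repaint every object whose color is `old` to `new`."""
        return [SceneObject(o.shape, new if o.color == old else o.color, o.size, o.material)
                for o in scene]

    def count_color(scene: List[SceneObject], color: str) -> int:
        return sum(1 for o in scene if o.color == color)

    if __name__ == "__main__":
        example = HypotheticalExample(
            scene=[SceneObject("cube", "red", "large", "metal"),
                   SceneObject("sphere", "blue", "small", "rubber"),
                   SceneObject("cylinder", "red", "small", "rubber")],
            action_text="Paint all the red objects blue.",
            question="How many blue objects are there now?",
            answer="3",
        )
        # Simulate the hypothetical action on the symbolic scene, then answer.
        updated = apply_paint_action(example.scene, "red", "blue")
        prediction = str(count_color(updated, "blue"))
        print(prediction, prediction == example.answer)  # -> 3 True

Note that the baselines discussed in the paper are not given a symbolic scene; they must perform this kind of simulation implicitly, reasoning jointly over the rendered image and the action/question text.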

Related research:

05/01/2020 · Diverse Visuo-Linguistic Question Answering (DVLQA) Challenge
Existing question answering datasets mostly contain homogeneous contexts...

11/27/2020 · Point and Ask: Incorporating Pointing into Visual Question Answering
Visual Question Answering (VQA) has become one of the key benchmarks of ...

08/19/2023 · BLIVA: A Simple Multimodal LLM for Better Handling of Text-Rich Visual Questions
Vision Language Models (VLMs), which extend Large Language Models (LLM) ...

07/09/2023 · SAS Video-QA: Self-Adaptive Sampling for Efficient Video Question-Answering
Video question-answering is a fundamental task in the field of video und...

12/02/2016 · Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering
Problems at the intersection of vision and language are of significant i...

06/27/2022 · Consistency-preserving Visual Question Answering in Medical Imaging
Visual Question Answering (VQA) models take an image and a natural-langu...

07/11/2023 · Rad-ReStruct: A Novel VQA Benchmark and Method for Structured Radiology Reporting
Radiology reporting is a crucial part of the communication between radio...
