TallyQA: Answering Complex Counting Questions

10/29/2018
by Manoj Acharya, et al.

Most counting questions in visual question answering (VQA) datasets are simple and require no more than object detection. Here, we study algorithms for complex counting questions that involve relationships between objects, attribute identification, reasoning, and more. To do this, we created TallyQA, the world's largest dataset for open-ended counting. We propose a new algorithm for counting that uses relation networks with region proposals. Our method lets relation networks be used efficiently with high-resolution imagery. It yields state-of-the-art results compared to baselines and recent systems on both TallyQA and the HowMany-QA benchmark.
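To make the idea concrete, the sketch below shows how a relation network can be run over region-proposal features instead of a dense CNN grid. It is an illustrative outline only, not the authors' exact architecture: the class name, feature dimensions, maximum count, and the question encoder are all assumptions made for the example.

```python
# Hedged sketch: a relation network over region-proposal features, conditioned
# on an encoded counting question. Names and dimensions are illustrative.
import torch
import torch.nn as nn

class ProposalRelationCounter(nn.Module):
    def __init__(self, obj_dim=2048, q_dim=1024, hidden=512, max_count=15):
        super().__init__()
        # g scores every ordered pair of proposals, conditioned on the question
        self.g = nn.Sequential(
            nn.Linear(2 * obj_dim + q_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # f maps the aggregated pairwise features to a distribution over counts
        self.f = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, max_count + 1),
        )

    def forward(self, proposals, question):
        # proposals: (B, N, obj_dim) region-proposal features (e.g. from an object detector)
        # question:  (B, q_dim) encoded question vector
        B, N, D = proposals.shape
        q  = question.unsqueeze(1).unsqueeze(1).expand(B, N, N, -1)
        oi = proposals.unsqueeze(2).expand(B, N, N, D)
        oj = proposals.unsqueeze(1).expand(B, N, N, D)
        pairs = torch.cat([oi, oj, q], dim=-1)   # every ordered proposal pair
        rel = self.g(pairs).sum(dim=(1, 2))      # aggregate the pairwise relation terms
        return self.f(rel)                       # logits over counts 0..max_count

# Toy usage: 36 proposals per image, batch of 2 (shapes are assumptions)
model = ProposalRelationCounter()
logits = model(torch.randn(2, 36, 2048), torch.randn(2, 1024))  # (2, 16)
```

Working over a few dozen proposals rather than thousands of grid cells keeps the quadratic number of pairwise terms small, which is what makes a relation-network formulation practical on high-resolution images.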

Related research

Learning to Count Objects in Natural Images for Visual Question Answering (02/15/2018)
Visual Question Answering (VQA) models have struggled with counting obje...

Counting Everyday Objects in Everyday Scenes (04/12/2016)
We are interested in counting the number of instances of object classes ...

Interpretable Counting for Visual Question Answering (12/23/2017)
Questions that require counting a variety of objects in images remain a ...

Revisiting Modulated Convolutions for Visual Counting and Beyond (04/24/2020)
This paper targets at visual counting, where the setup is to estimate th...

Graph Reasoning Networks for Visual Question Answering (07/23/2019)
The interaction between language and visual information has been emphasi...

Object-based reasoning in VQA (01/29/2018)
Visual Question Answering (VQA) is a novel problem domain where multi-mo...

The Visual QA Devil in the Details: The Impact of Early Fusion and Batch Norm on CLEVR (09/11/2018)
Visual QA is a pivotal challenge for higher-level reasoning, requiring u...
