A Free Lunch in Generating Datasets: Building a VQG and VQA System with Attention and Humans in the Loop

11/30/2019
by   Jihyeon Janel Lee, et al.

Despite their importance in training artificial intelligence systems, large datasets remain challenging to acquire. For example, the ImageNet dataset required fourteen million labels of basic human knowledge, such as whether an image contains a chair. This knowledge is simple enough to make annotation tedious, yet tacit enough that human annotators remain necessary. Moreover, large-scale collaborative labeling efforts are costly, inconsistent, and prone to failure, and they do not resolve the issue of the resulting dataset being static. What if we instead asked people questions they want to answer and collected their responses as data? Data could then be gathered at much lower cost, and expanding a dataset would simply become a matter of asking more questions. We focus on the task of Visual Question Answering (VQA) and propose a system that uses Visual Question Generation (VQG) to produce questions, poses them to social media users, and collects their responses. We present two models that parse clean answers from the noisy human responses significantly better than our baselines, with the goal of eventually incorporating the answers into a VQA dataset. By demonstrating how our system can collect large amounts of data at little to no cost, we envision similar systems being used to improve performance on other tasks in the future.

