Sunny and Dark Outside?! Improving Answer Consistency in VQA through Entailed Question Generation

09/10/2019
by Arijit Ray, et al.

While models for Visual Question Answering (VQA) have steadily improved over the years, interacting with one quickly reveals that these models lack consistency. For instance, if a model answers "red" to "What color is the balloon?", it might answer "no" if asked, "Is the balloon red?". These responses violate simple notions of entailment and raise questions about how effectively VQA models ground language. In this work, we introduce a dataset, ConVQA, and metrics that enable quantitative evaluation of consistency in VQA. For a given observable fact in an image (e.g., the balloon's color), we generate a set of logically consistent question-answer (QA) pairs (e.g., "Is the balloon red?") and also collect a human-annotated set of common-sense-based consistent QA pairs (e.g., "Is the balloon the same color as tomato sauce?"). Further, we propose a consistency-improving data augmentation module, the Consistency Teacher Module (CTM). CTM automatically generates entailed (or similar-intent) questions for a source QA pair and fine-tunes the VQA model if the model's answer to the entailed question is consistent with the source QA pair. We demonstrate that CTM-based training improves the consistency of VQA models on the ConVQA datasets and serves as a strong baseline for further research.
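To make the evaluation idea concrete, the sketch below shows one simple way a group-level consistency score could be computed over sets of entailed QA pairs. This is a minimal illustration, not the paper's exact metric or code: the names `consistency_score`, `vqa_model`, and `qa_groups` are hypothetical, and `vqa_model` is assumed to be any callable mapping an (image, question) pair to an answer string.

```python
# Minimal sketch of a group-level consistency evaluation over entailed QA pairs,
# in the spirit of the ConVQA metrics described above (not the paper's code).
# Assumption: `vqa_model` is any callable (image, question) -> answer string,
# and each group holds QA pairs about one fact that should all be answered
# consistently with their ground-truth answers.

from typing import Callable, Dict, List


def consistency_score(
    vqa_model: Callable[[str, str], str],
    qa_groups: List[Dict],
) -> float:
    """Mean per-group fraction of QA pairs answered in agreement with ground truth.

    Each group is a dict like:
        {"image": "balloon.jpg",
         "qa_pairs": [("What color is the balloon?", "red"),
                      ("Is the balloon red?", "yes")]}
    """
    group_scores = []
    for group in qa_groups:
        image = group["image"]
        qa_pairs = group["qa_pairs"]
        correct = sum(
            vqa_model(image, question).strip().lower() == answer.strip().lower()
            for question, answer in qa_pairs
        )
        group_scores.append(correct / len(qa_pairs))
    # 1.0 means every entailed question in every group was answered consistently.
    return sum(group_scores) / len(group_scores) if group_scores else 0.0


# Toy usage with a stub model that always answers "red": the color question is
# answered correctly, but the entailed yes/no question is not, so the score is 0.5.
score = consistency_score(lambda image, question: "red", [
    {"image": "balloon.jpg",
     "qa_pairs": [("What color is the balloon?", "red"),
                  ("Is the balloon red?", "yes")]},
])
print(score)  # 0.5
```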


Related research

Cycle-Consistency for Robust Visual Question Answering (02/15/2019)
Despite significant progress in Visual Question Answering over the years...

Logically Consistent Loss for Visual Question Answering (11/19/2020)
Given an image, a background knowledge, and a set of questions about an...

CARETS: A Consistency And Robustness Evaluative Test Suite for VQA (03/15/2022)
We introduce CARETS, a systematic test suite to measure consistency and ...

Robustness Analysis of Visual QA Models by Basic Questions (09/14/2017)
Visual Question Answering (VQA) models should have both high robustness ...

TAG: Boosting Text-VQA via Text-aware Visual Question-answer Generation (08/03/2022)
Text-VQA aims at answering questions that require understanding the text...

Toward Unsupervised Realistic Visual Question Answering (03/09/2023)
The problem of realistic VQA (RVQA), where a model has to reject unanswe...

Logical Implications for Visual Question Answering Consistency (03/16/2023)
Despite considerable recent progress in Visual Question Answering (VQA) ...
