Adversarial VQA: A New Benchmark for Evaluating the Robustness of VQA Models

06/01/2021
by Linjie Li, et al.

With large-scale pre-training, the past two years have witnessed a significant performance boost on the Visual Question Answering (VQA) task. Though rapid progress has been made, it remains unclear whether these state-of-the-art (SOTA) VQA models are robust when encountering test examples in the wild. To study this, we introduce Adversarial VQA, a new large-scale VQA benchmark, collected iteratively via an adversarial human-and-model-in-the-loop procedure. Through this new benchmark, we present several interesting findings. (i) Surprisingly, during dataset collection, we find that non-expert annotators can successfully attack SOTA VQA models with relative ease. (ii) We test a variety of SOTA VQA models on our new dataset to highlight their fragility, and find that both large-scale pre-trained models and adversarial training methods achieve far lower performance than they do on the standard VQA v2 dataset. (iii) When used for data augmentation, our dataset can improve performance on other robust VQA benchmarks. (iv) We present a detailed analysis of the dataset, providing valuable insights into the challenges it presents to the community. We hope Adversarial VQA can serve as a valuable benchmark that future work can use to test the robustness of their VQA models. Our dataset is publicly available at https://adversarialvqa.github.io/.
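To make the human-and-model-in-the-loop idea concrete, below is a minimal Python sketch of a single adversarial collection round: an annotator writes questions for an image until the current model answers one incorrectly, and only those model-fooling examples are kept. All names here (VQA model interface, annotator callbacks, the per-image retry budget) are hypothetical placeholders for illustration, not the paper's actual collection pipeline.

```python
# Minimal sketch of an adversarial human-and-model-in-the-loop collection round.
# The callable interfaces (model_predict, ask_question, ask_answer) are assumed
# placeholders, not the interfaces used by the Adversarial VQA authors.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class AdversarialExample:
    image_id: str
    question: str
    model_answer: str
    human_answer: str


def collect_round(
    image_ids: List[str],
    model_predict: Callable[[str, str], str],  # (image_id, question) -> model answer
    ask_question: Callable[[str], str],        # annotator writes a question for the image
    ask_answer: Callable[[str, str], str],     # annotator provides the ground-truth answer
    max_tries_per_image: int = 5,              # assumed retry budget per image
) -> List[AdversarialExample]:
    """Keep only the questions that the current model answers incorrectly."""
    collected: List[AdversarialExample] = []
    for image_id in image_ids:
        for _ in range(max_tries_per_image):
            question = ask_question(image_id)
            model_answer = model_predict(image_id, question)
            human_answer = ask_answer(image_id, question)
            if model_answer.strip().lower() != human_answer.strip().lower():
                # The model was fooled: record this as an adversarial example
                # and move on to the next image.
                collected.append(
                    AdversarialExample(image_id, question, model_answer, human_answer)
                )
                break
    return collected
```

In an iterative setup of this kind, the model under attack would then be retrained on the newly collected examples before the next collection round, which is the general pattern the abstract describes.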


