Robustness Analysis of Visual QA Models by Basic Questions

Visual Question Answering (VQA) models should have both high robustness and high accuracy. Unfortunately, most current VQA research focuses only on accuracy because there is a lack of proper methods for measuring the robustness of VQA models. There are two main modules in our algorithm. Given a natural language question about an image, the first module takes the question as input and outputs a ranking of basic questions, with similarity scores, for the given main question. The second module takes the main question, the image, and these basic questions as input and outputs the text-based answer to the main question about the given image. We claim that a robust VQA model is one whose performance does not change much when related basic questions are also made available to it as input. We formulate the basic question generation problem as a LASSO optimization, and we also propose a large-scale Basic Question Dataset (BQD) and Rscore, a novel robustness measure, for analyzing the robustness of VQA models. We hope our BQD will be used as a benchmark to evaluate the robustness of VQA models, so as to help the community build more robust and accurate VQA models.

1 Introduction

Motivations. 

Visual Question Answering (VQA) is one of the most challenging computer vision tasks: an algorithm is given a natural language question about an image and is tasked with producing a natural language answer for that question-image pair. Recently, various VQA models [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] have been proposed to tackle this problem, and their main performance measure is accuracy. However, the community has started to realize that accuracy is not the only metric for evaluating model performance. More specifically, these models should also be robust, i.e., their output should not be affected much by small noise or perturbations to the input. Analyzing model robustness, as well as training robust models, is already a rapidly growing research topic for deep learning models applied to images [12, 13, 14]. However, to the best of our knowledge, there is no accepted, standardized method for measuring the robustness of VQA models. As such, this paper is the first work to analyze VQA models from this point of view by proposing a robustness measure and a standardized large-scale dataset.

Figure 1: Our proposed framework for measuring the robustness of VQA models. The Rscore – our proposed robustness measure – is generated by the two white boxes. The upper white box has two main components, the VQA Module and the Noise Generator; details of the Noise Generator are shown in Figure 2. The basic questions are combined by direct concatenation.
Figure 2: Details of the Noise Generator. There are two choices of Basic Question Dataset, GBQD and YNBQD, and eight choices of question ranking method: BLEU-1, BLEU-2, BLEU-3, BLEU-4, ROUGE, CIDEr, METEOR, and LASSO. If a new Basic Question Dataset or ranking method is proposed in the future, it can also be added to our framework. The output of the Noise Generator is the direct concatenation of the three top-ranked basic questions.

Assumptions. 

The ultimate goal is for VQA models to perform as humans do on the same task. If a human is presented with a question, either alone or accompanied by some highly similar questions, he/she tends to give the same or a very similar answer in both cases. Evidence of this has been reported in the psychology literature. Therefore, when we augment the query question, called the main question, with similar words, phrases, or questions, the VQA model should output the same or a very similar answer. In some sense, we consider similar words or phrases as small perturbations or noise added to the input, so we say that the model is robust if it still produces the same answer. Note that we define a basic question as a question that is semantically similar to the given main question. Based on evidence from deductive reasoning in human thinking [15], we consider basic questions as noise. In Figure 3, cases (a) and (b) explain the general idea. In case (a), the person may have the answer “Mercedes Benz” in mind. However, in case (b), he/she would start to think about the relations between the two given questions and the candidate answers to form a final answer, which may differ from the final answer in case (a). If the person is given more basic questions, he/she would start to think about all the possible relations among all the provided questions and possible answer candidates. These relations will clearly be more complicated, especially when the additional basic questions have low similarity scores to the main question; in such cases, they will mislead the person. That is to say, those extra basic questions act, in some sense, as large disturbances. Because robustness analysis requires studying the accuracy of VQA models under different noise levels, we need a way to quantify the level of noise injected by a given question. We hypothesize that a basic question with a higher similarity score to the main question injects a smaller amount of noise when appended to it, and vice versa. Our proposed basic question ranking method is one way to quantify and control the strength of this injected noise.

Figure 3: Inspired by deductive reasoning in human thinking [15], this figure showcases the behavior of humans when presented with multiple questions about a certain subject. Note that the relations and the final answer in cases (a) and (b) can be different.

2 Robustness Framework

Inspired by the above reasoning, we propose a novel framework for measuring the robustness of VQA models. Figure 1 depicts the structure of our framework. It contains two modules, a VQA model and a Noise Generator. The Noise Generator, illustrated in Figure 2, takes a plain-text main question (MQ) and a plain-text basic question dataset (BQD) as input. It starts by ranking the basic questions in the BQD by their similarity to MQ using a text similarity ranking method. Then, depending on the required level of noise, it takes the top-ranked BQs (e.g., the top three) and directly concatenates them. The concatenation of these BQs with MQ is the generated noisy question. Instead of feeding MQ to the VQA model, we replace it with the generated noisy question and measure the accuracy of the output. To measure the robustness of the VQA model, the accuracy with and without the generated noise is compared. To this end, we propose a robustness measure, Rscore, to quantify this comparison, and we sketch the overall pipeline below.
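The following Python sketch makes the data flow concrete. It assumes hypothetical placeholder interfaces vqa_model(image, question), rank_fn(mq, bqd), and samples that are not part of our released code; since the exact Rscore formula is introduced later, the sketch only computes the clean and noisy accuracies that the measure is built from.

```python
# Minimal sketch of the robustness framework (placeholder interfaces assumed).

def noise_generator(mq, bqd, rank_fn, k=3):
    """Concatenate the k basic questions ranked most similar to the main question."""
    top_indices = rank_fn(mq, bqd)[:k]
    return " ".join([mq] + [bqd[i] for i in top_indices])

def clean_vs_noisy_accuracy(vqa_model, samples, bqd, rank_fn, k=3):
    """Compare VQA accuracy with and without injected basic-question noise."""
    clean_hits, noisy_hits = 0, 0
    for image, mq, answer in samples:  # samples: (image, main question, ground-truth answer)
        clean_hits += int(vqa_model(image, mq) == answer)
        noisy_q = noise_generator(mq, bqd, rank_fn, k)
        noisy_hits += int(vqa_model(image, noisy_q) == answer)
    n = len(samples)
    return clean_hits / n, noisy_hits / n  # the two accuracies compared by Rscore
```

In our framework, lower-ranked BQs correspond to stronger injected noise, so varying which ranked BQs are appended traces out the model's accuracy under different noise levels.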

For the question ranking method [16, 17], given any two questions, we can use different measures that quantify their similarity and produce a similarity score. Using such measures, we can obtain different rankings of the questions in the BQD by their similarity to MQ, where BQs with a higher similarity score to MQ rank above those with a lower score. Along those lines, we propose a new question ranking method formulated as a LASSO optimization (sketched below) and compare it against the rankings produced by seven different yet popular textual similarity measures. We perform this comparison to rank our proposed BQDs, the General Basic Question Dataset (GBQD) and the Yes/No Basic Question Dataset (YNBQD). Furthermore, we evaluate the robustness of six pretrained state-of-the-art VQA models [1, 6, 7, 9]. Finally, extensive experiments show that our LASSO-based method is the best BQD ranking method among those considered.
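As one plausible instantiation of the LASSO-based ranking (the full formulation and the question encoder are given later in the paper), the sketch below reconstructs the main-question embedding as a sparse combination of basic-question embeddings and uses the resulting coefficients as similarity scores; the embedding function, the non-negativity constraint, and the value of alpha are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_rank(mq_vec, bq_vecs, alpha=0.01, top_k=3):
    """Rank basic questions by their weight in a sparse (LASSO)
    reconstruction of the main-question embedding.

    mq_vec  : (d,)   embedding of the main question MQ
    bq_vecs : (n, d) embeddings of the n candidate basic questions
    """
    # Solve min_x ||A x - b||_2^2 + alpha * ||x||_1, where the columns of A
    # are the basic-question embeddings and b is the main-question embedding.
    A = bq_vecs.T                                               # shape (d, n)
    model = Lasso(alpha=alpha, positive=True, max_iter=10000)   # non-negativity is an assumption
    model.fit(A, mq_vec)
    scores = model.coef_                                        # sparse per-BQ similarity scores
    ranked = np.argsort(-scores)[:top_k]
    return ranked, scores[ranked]
```

One advantage of this formulation is that the sparse coefficients provide both a ranking and a natural similarity score for each selected BQ, which is exactly what the Noise Generator needs to control the injected noise level.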

Contributions. 

(i) We propose a novel framework to measure the robustness of VQA models and test it on six different models. (ii) We propose a new text-based similarity ranking method and compare it against seven popular similarity metrics: BLEU-1, BLEU-2, BLEU-3, BLEU-4 [18], ROUGE [19], CIDEr [20], and METEOR [21]. We then show that our ranking method is the best among them; a baseline ranking with one of these metrics is sketched below. (iii) We introduce two large-scale basic question datasets: the General Basic Question Dataset (GBQD) and the Yes/No Basic Question Dataset (YNBQD).
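For illustration, a baseline ranking using one of the off-the-shelf metrics above (BLEU-1 here) might look as follows; the tokenization and smoothing choices are assumptions, and the other metrics would be plugged in analogously as the rank_fn of the framework sketch.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu1_rank(mq, bqd, top_k=3):
    """Baseline: rank basic questions by their BLEU-1 score against MQ."""
    smooth = SmoothingFunction().method1               # avoids zero scores on short questions
    reference = [mq.lower().split()]
    scores = [
        sentence_bleu(reference, bq.lower().split(),
                      weights=(1.0, 0.0, 0.0, 0.0),    # unigram precision only (BLEU-1)
                      smoothing_function=smooth)
        for bq in bqd
    ]
    return sorted(range(len(bqd)), key=lambda i: -scores[i])[:top_k]
```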

Acknowledgement

This work is supported by competitive research funding from King Abdullah University of Science and Technology (KAUST) and the High Performance Computing Center in KAUST.

References

  • [1] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. Vqa: Visual question answering. In Proceedings of the IEEE International Conference on Computer Vision, pages 2425–2433, 2015.
  • [2] Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Neural module networks. In Proceedings of the IEEE Conference on CVPR, pages 39–48, 2016.
  • [3] Mateusz Malinowski, Marcus Rohrbach, and Mario Fritz. Ask your neurons: A neural-based approach to answering questions about images. In Proceedings of the IEEE International Conference on Computer Vision, pages 1–9, 2015.
  • [4] Hyeonwoo Noh, Paul Hongsuck Seo, and Bohyung Han. Image question answering using convolutional neural network with dynamic parameter prediction. In Proceedings of the IEEE Conference on CVPR, pages 30–38, 2016.
  • [5] Qi Wu, Peng Wang, Chunhua Shen, Anthony Dick, and Anton van den Hengel. Ask me anything: Free-form visual question answering based on knowledge from external sources. In CVPR, pages 4622–4630, 2016.
  • [6] Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. Hierarchical question-image co-attention for visual question answering. In NIPS, pages 289–297, 2016.
  • [7] Hedi Ben-younes, Remi Cadene, Matthieu Cord, and Nicolas Thome. Mutan: Multimodal tucker fusion for visual question answering. In The IEEE International Conference on Computer Vision (ICCV), Oct 2017.
  • [8] Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. Multimodal compact bilinear pooling for visual question answering and visual grounding. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2016.
  • [9] Jin-Hwa Kim, Kyoung-Woon On, Woosang Lim, Jeonghee Kim, Jung-Woo Ha, and Byoung-Tak Zhang. Hadamard product for low-rank bilinear pooling. In 5th International Conference on Learning Representations, 2017.
  • [10] Mateusz Malinowski and Mario Fritz. Towards a visual turing challenge. arXiv preprint arXiv:1410.8027, 2014.
  • [11] Donald Geman, Stuart Geman, Neil Hallonquist, and Laurent Younes. Visual turing test for computer vision systems. Proceedings of the National Academy of Sciences, 112(12):3618–3623, 2015.
  • [12] Alhussein Fawzi, Seyed Mohsen Moosavi Dezfooli, and Pascal Frossard. A geometric perspective on the robustness of deep networks. Technical report, Institute of Electrical and Electronics Engineers, 2017.
  • [13] Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In Security and Privacy (SP), 2017 IEEE Symposium on, pages 39–57. IEEE, 2017.
  • [14] Huan Xu, Constantine Caramanis, and Shie Mannor. Robustness and regularization of support vector machines. Journal of Machine Learning Research, 10(Jul):1485–1510, 2009.
  • [15] Lance J. Rips. The psychology of proof: Deductive reasoning in human thinking. MIT Press, 1994.
  • [16] Jia-Hong Huang, Modar Alfadly, and Bernard Ghanem. Robustness analysis of visual qa models by basic questions. arXiv preprint arXiv:1709.04625, 2017.
  • [17] Jia-Hong Huang, Modar Alfadly, and Bernard Ghanem. Vqabq: Visual question answering by basic questions. Computer Vision and Pattern Recognition Workshop (CVPRW), 2017.
  • [18] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics, 2002.
  • [19] Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out: Proceedings of the ACL-04 workshop, volume 8. Barcelona, Spain, 2004.
  • [20] Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. Cider: Consensus-based image description evaluation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
  • [21] Satanjeev Banerjee and Alon Lavie. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, volume 29, pages 65–72, 2005.