Visual Question Answering (VQA) has emerged as a challenging task that requires artificial intelligence systems to predict answers by jointly analyzing both natural language questions and visual content. Most state-of-the-art VQA systems anderson2017bottom ; kim2018bilinear ; ben2017mutan ; jiang2018pythia ; cadene2019murel ; lu2019vilbert ; liu2019learning ; tan2019lxmert are trained to simply fit the answer distribution using question and visual features, and achieve high performance on simple visual questions. However, these systems often exhibit poor explanatory capability and take shortcuts, focusing only on simple visual concepts or question priors instead of finding the right answer for the right reasons ross2017right ; selvaraju2019taking . This problem becomes increasingly severe when the questions require more complex reasoning and commonsense knowledge.
For more complex questions, VQA systems need to be right for the right reasons in order to generalize well to test problems. Two ways to provide these reasons are to crowdsource human visual explanations das2017human or textual explanations park2018multimodal . While visual explanations only annotate which parts of an image contribute most to the answer, textual explanations encode richer information such as detailed attributes, relationships, or commonsense knowledge that is not necessarily directly found in the image. Therefore, we adopt textual explanations to guide VQA systems.
Recent research utilizing textual explanations adopts a multi-task learning strategy that jointly trains an answer predictor and an explanation generator li2018vqa ; park2018multimodal . However, this approach only considers explanations for the one chosen answer. Our approach considers explanations for multiple competing answers, comparing these explanations when choosing a final answer, as shown in Figure 1.
Our framework is end-to-end trainable and therefore can be applied to any differentiable VQA system. Our experiments show improvements for our method combined with Up-Down anderson2017bottom and LXMERT tan2019lxmert on the VQA-X dataset park2018multimodal , which consists of more complex questions estimated to require the abilities of at least a 9-year-old and comes with human textual explanations. We also show that our approach learns better representations for the questions and visual content by training to retrieve explanations, and achieves state-of-the-art results by further jointly considering competing explanations.
We also developed a new VQA explanation generation approach utilizing competing explanations. This approach uses the retrieved explanations for each competing answer to help generate improved explanations during testing. Using both automated metrics that compare to human explanations and human evaluation, we show that these explanations are also improved, beating the current state-of-the-art method of wu2018faithful .
2 Related Work
2.1 VQA with Human Visual Explanations
To train a VQA system to be right for the right reason, recent research has collected human visual attention das2017human ; Gan_2017_ICCV highlighting the image regions that most contribute to the answer. Two popular approaches are to have crowdsourced workers deblur the image das2017human or select segmented objects from the image Gan_2017_ICCV . The VQA systems then try to align either the system's attention qiao2018exploring ; zhang2019interpretable or a gradient-based visual explanation selvaraju2019taking ; wu2019self to the human attention. These approaches help the systems focus on the right regions, and improve VQA performance when the training and test distributions are very different, such as in the VQA-CP dataset vqa-cp .
2.2 Human Textual Explanations
While human visual explanations can help VQA systems know where to attend, human textual explanations can also provide information on how the attended image regions contribute to the answer. There are two textual explanation datasets, VQA-E li2018vqa and VQA-X park2018multimodal . Explanations in VQA-E are automatically refined versions of the most relevant captions from the COCO dataset chen2015microsoft , which are larger in scale but of lower quality. Therefore, we adopt VQA-X, where crowdsourced human workers were directly asked to provide textual explanations for questions judged to require someone older than 9 years to answer.
2.3 Generating Textual Explanations
In order to automatically generate textual explanations, park2018multimodal present a single-layer LSTM network trained on image, question, and answer features to mimic crowdsourced human explanations. wu2018faithful use question-attended segmentation features from the original VQA system as input, and try to generate explanations that are more faithful to the actual VQA process at the object level.
2.4 VQA with Human Textual Explanations
For more complex questions requiring reasoning and general knowledge, visual attention, which only shows important regions, is less helpful. We know of only two papers that use textual explanations to aid VQA. li2018vqa train to jointly predict the answer and generate an explanation. However, the explanations may not faithfully reflect the actual VQA process, can hallucinate visual content rohrbach2018object , and therefore may not properly supervise the underlying VQA system. wu2019self only use textual explanations to extract a set of important visual objects, but ignore other critical, richer content: attributes, relationships, commonsense knowledge, etc. In contrast, our approach trains a system to distinguish correct human explanations from competing explanations supporting incorrect answers.
3 Baseline VQA Models
Many recent VQA systems fukui2016multimodal ; ben2017mutan ; ramakrishnan2018overcoming utilize a trainable top-down attention mechanism over convolutional features to recognize relevant image regions. Up-Down (UpDn) anderson2017bottom introduced complementary bottom-up attention that first detects common objects and attributes so that the top-down attention can directly model the contribution of higher-level concepts. Several recent systems have used this approach selvaraju2019taking ; jiang2018pythia ; lu2019vilbert ; tan2019lxmert , significantly improving VQA performance. These systems first extract a visual feature set V = {v_1, ..., v_K} for each image, where v_i is the feature vector for the i-th detected object. On the language side, UpDn systems sequentially encode each question to produce a question vector q. Let f denote the answer prediction operator that takes both visual features and question features as input and predicts the confidence P(a | V, q) = f(V, q) for each answer a in the answer candidate set A. The VQA task is framed as a multi-label regression problem with the gold-standard soft scores as targets in order to be consistent with the evaluation metric. Finally, binary cross-entropy loss with the soft scores is used to supervise the sigmoid-normalized outputs.
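The soft-score supervision above can be sketched as follows; this is a minimal numpy illustration of binary cross-entropy against soft targets, with illustrative function and variable names (not the authors' code):

```python
import numpy as np

def vqa_soft_bce(logits, soft_scores):
    """Binary cross-entropy between sigmoid-normalized answer logits and
    gold-standard soft scores, averaged over the answer vocabulary."""
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))  # sigmoid
    t = np.asarray(soft_scores, dtype=float)
    eps = 1e-9  # numerical safety for log(0)
    return float(-np.mean(t * np.log(probs + eps)
                          + (1 - t) * np.log(1 - probs + eps)))
```

Unlike a softmax cross-entropy over a single gold answer, this formulation lets partially correct answers (soft score between 0 and 1) contribute proportionally to the loss.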
We briefly introduce two variants of this approach adopted in our experiments:
UpDn. This is the original UpDn system, which uses a single-layer GRU to encode questions. The question vector is then used to compute a single-stage attention over the detected objects to produce attended visual features. Finally, a two-layer feed-forward network computes answer probabilities given the joint features of the question and visual content.
LXMERT. In order to learn richer representations for both questions and visual content, LXMERT tan2019lxmert uses transformers vaswani2017attention ; devlin2018bert that learn multiple layers of attention over the input. In particular, it first learns 9 layers over the input question and 5 layers over detected objects, then finally learns another 5 layers of attention across the two modalities to produce the final joint representation.
4 Approach

This section presents our approach to utilizing competing explanations to aid VQA. As shown in Figure 2, after the base VQA system computes the top answer candidates, our approach retrieves the most supportive explanations for each answer from the training set to construct the set of competing explanations. These explanations are then used to help generate explanations for the current question. Next, we learn to predict verification scores that indicate how well the retrieved or generated explanations support the predictions given the input question and visual content. The final answer is determined by jointly considering the original answer probabilities and these verification scores.
4.1 Retrieving Explanations
This section presents our approach to retrieving the most supportive human textual explanation from the training set for each answer candidate. Ideally, we would dynamically retrieve explanations for each answer at each training iteration. However, this would be very computationally costly because the question and visual features would have to be recomputed for every image in the training set. Therefore, we adopt the following relaxation, which only needs to compute the features once.
In particular, we first pretrain the VQA system, and extract the question and visual embeddings, q and v, for each question-image pair in the training set. For UpDn, we use the attended visual features and the question GRU's last hidden state as the visual and question embeddings. For LXMERT, we use the last cross-modal attention layer's visual and question outputs as the embeddings.
Then, for each question-image pair, we only consider the top-10 answer candidates, since the top-10 answers already achieve high recall. After that, for each answer candidate, we extract explanations from the training set whose examples have the same ground-truth answer as the current candidate (more specifically, whose soft score for the answer candidate is over 0.6). We then sort these explanations by the L2 distance between the explanations' embeddings and the example's, and pick the closest 8 explanations as the competing explanation set.
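The retrieval step above can be sketched as a filtered nearest-neighbor search. The data layout below (a list of per-example {answer: soft score} dicts alongside precomputed embeddings) is an assumption for illustration, not the paper's actual storage format:

```python
import numpy as np

def retrieve_competing(query_emb, expl_embs, expl_soft_scores, answer,
                       k=8, thresh=0.6):
    """Return the indices of the k training explanations closest (L2) to
    the query embedding, among those whose example gives `answer` a soft
    score above `thresh`."""
    # Keep only explanations whose example supports the answer candidate.
    keep = [i for i, scores in enumerate(expl_soft_scores)
            if scores.get(answer, 0.0) > thresh]
    # Sort the surviving indices by L2 distance to the query embedding.
    ranked = sorted(keep, key=lambda i: np.linalg.norm(
        np.asarray(expl_embs[i], dtype=float) -
        np.asarray(query_emb, dtype=float)))
    return ranked[:k]
```

Because the embeddings are extracted once from the pretrained VQA system, this search only costs a distance computation per candidate explanation, matching the relaxation's goal of avoiding repeated feature extraction.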
4.2 Generating Explanations
Next, the retrieved explanations for similar VQA examples from the training set are used to help generate even better explanations.
We adopt the explainer from wu2018faithful , a two-layer LSTM network similar to the UpDn captioner anderson2017bottom , as our baseline. Since the current VQA systems are built upon detected objects, we use detected objects as the visual inputs instead of segmentations.
The baseline explainer first computes a set of question-attended visual features and an average-pooled version of them. The explainer then uses these together with question and answer embeddings as inputs to produce explanations. Our approach simply replaces the average-pooled question-attended visual features with the retrieved explanations' features x. We use a single-layer GRU to encode each of the retrieved explanations for the correct answer, and then max-pool the last hidden states among these explanations to compute x. We sample 8 explanations for each answer candidate to construct the generated explanation set.
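The encode-then-max-pool step can be sketched as below. This is a minimal single-layer GRU (biases omitted for brevity) with assumed weight shapes; the actual generator uses learned parameters inside the full network:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_last_state(seq, Wz, Uz, Wr, Ur, Wh, Uh):
    """Run a single-layer GRU over a list of word vectors and return the
    last hidden state."""
    h = np.zeros(Uz.shape[0])
    for x in seq:
        z = sigmoid(Wz @ x + Uz @ h)            # update gate
        r = sigmoid(Wr @ x + Ur @ h)            # reset gate
        h_cand = np.tanh(Wh @ x + Uh @ (r * h)) # candidate state
        h = (1 - z) * h + z * h_cand
    return h

def explanation_feature(expl_seqs, weights):
    """Encode each retrieved explanation with the GRU and max-pool the
    final hidden states into the single feature x that replaces the
    average-pooled visual features."""
    states = [gru_last_state(seq, *weights) for seq in expl_seqs]
    return np.max(np.stack(states), axis=0)
```

Max-pooling (rather than averaging) keeps the strongest evidence from any one retrieved explanation, so a single highly relevant explanation is not diluted by weaker neighbors.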
4.3 Learning Verification Scores
Next, a verification system is trained to score how well a generated or retrieved explanation supports a corresponding answer candidate given the question and visual content. The verification system takes four inputs, the visual, question, answer, and explanation features, and outputs a verification score.
Here a is the one-hot embedding of the answer, and e is the feature vector for the explanation, encoded using a GRU cho2014learning . We use f^k to denote k consecutive feed-forward layers (for simplicity, k is omitted when k = 1), and σ to denote the sigmoid function. The verification system is similar to the answer predictor in architecture except for the number of outputs: 1 for the verification system versus the size of the answer candidate set for the answer predictor.
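One possible sketch of the scorer is below. The fusion by concatenation and the single ReLU feed-forward layer are assumptions for illustration; the paper's exact fusion and layer sizes may differ:

```python
import numpy as np

def verification_score(v, q, a_onehot, e, W_hidden, w_out):
    """Score how well explanation features e support answer a given the
    visual features v and question features q; output lies in (0, 1)."""
    joint = np.concatenate([v, q, a_onehot, e])   # fuse the four inputs
    hidden = np.maximum(0.0, W_hidden @ joint)    # ReLU feed-forward layer
    logit = float(w_out @ hidden)                 # single output unit
    return 1.0 / (1.0 + np.exp(-logit))           # sigmoid → score
```

The single sigmoid output is what distinguishes this head from the answer predictor, which instead emits one logit per answer candidate.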
Given the VQA examples with their explanations in the VQA-X dataset, we use binary cross-entropy loss to maximize the verification score for the matching human explanations.
Intuitively, we want the verification score to be high only when the explanation matches the VQA example; replacing any of the four input sources should lower the score. Therefore, we designed the five kinds of replacements below for constructing negative examples.
Replacement of Visual and Question Features: Ideally, we would replace the visual and question features with complementary features antol2015vqa that lead to the opposite answer. For example, for the question “Is this a vegetarian pizza?” with an image of a vegetarian pizza, we would replace the image with one of a meat pizza, i.e., a counterfactual image. However, such replacement requires retrieving and computing the visual features for the meat pizza, which is computationally inefficient. Therefore, we simply randomly choose a visual or question replacement from the current batch and minimize the binary cross-entropy loss for the resulting verification scores.
Replacement of Answer Features: We sample the replacement answer according to the current VQA system's predicted probabilities for incorrect answers. At each step, we minimize the expected binary cross-entropy loss over the incorrect predictions, weighted by the human VQA soft scores. In practice, we only sample one incorrect answer during training.
Replacement of Explanation Features: We replace the matched human explanation with the most supportive explanation for the sampled incorrect answer and train our verification system to disprefer that explanation. In particular, given the incorrect answer sampled as described above, we compute the verification score for each retrieved or generated explanation in the set for that wrong answer and regard the one with the maximum verification score as the most supportive one.
Replacement of Answer and Explanation Features: To further prevent the system from being falsely confident in the sampled incorrect answer, we also minimize the verification score of that incorrect answer paired with its most supportive explanation.
To sum up, the verification loss is the sum of the six aforementioned losses: the positive-example loss for the matching human explanation plus the five replacement losses. Since we have several negative examples and only one positive example, we assign a larger loss weight to the single positive example.
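The combined objective can be sketched as below. The value of pos_weight is an assumption, since the paper's weight is not given in this excerpt:

```python
import numpy as np

def bce(score, target):
    """Binary cross-entropy for a single sigmoid score."""
    eps = 1e-9
    return -(target * np.log(score + eps)
             + (1 - target) * np.log(1 - score + eps))

def verification_loss(pos_score, neg_scores, pos_weight=5.0):
    """Total verification loss: a weighted positive term for the matching
    human explanation plus one BCE term per replacement-based negative
    (visual, question, answer, explanation, answer+explanation)."""
    total = pos_weight * bce(pos_score, 1.0)       # push matched score → 1
    total += sum(bce(s, 0.0) for s in neg_scores)  # push replaced scores → 0
    return float(total)
```

With five negatives per positive, the weight keeps the gradient signal for the single matching explanation from being swamped by the replacement terms.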
4.4 Using Verification Scores
The original VQA system provides the answer probabilities conditioned on the question and visual content. The verification scores are further used to reweight the original VQA predictions so that the final predictions take the explanations into account: each answer candidate's probability is multiplied by the verification score of the most supportive explanation in its generated or retrieved explanation set.
Since we try to select the correct answer together with its explanation, the reweighted prediction should only be high when the answer is correct and its human explanation supports it, which is enforced using a binary cross-entropy loss on the reweighted prediction for each answer paired with its human explanation.
During testing, we first extract the top-10 answer candidates, and then select, for each candidate, the explanation with the highest verification score. Then, we compute the explanation-reweighted score for each answer candidate to determine the final answer.
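The test-time procedure can be sketched as follows; the dict-based layout for candidate probabilities and per-answer explanation scores is an assumption for illustration:

```python
def reweighted_answer(answer_probs, expl_scores):
    """Pick the final answer by multiplying each candidate's VQA
    probability by the best verification score among its explanations.

    answer_probs: {answer: VQA probability} for the top candidates.
    expl_scores:  {answer: [verification scores of its explanations]}.
    """
    best_ans, best = None, float('-inf')
    for ans, p in answer_probs.items():
        s = p * max(expl_scores[ans])  # most supportive explanation's score
        if s > best:
            best_ans, best = ans, s
    return best_ans, best
```

Note that the reweighting can overturn the VQA system's top prediction when a lower-ranked answer has a much better-supported explanation.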
4.5 Training and Implementation Details
We first pre-train our base VQA system (UpDn or LXMERT) on either the entire VQA v2 training set for 20 epochs or only the VQA-X training set for 30 epochs, using the standard VQA loss (binary cross-entropy with soft scores as supervision) and the Adam optimizer kingma2014adam . Since the VQA-X validation and test sets are both drawn from the VQA v2 validation set, which is covered by LXMERT pretraining, we do not use the officially released LXMERT parameters. The learning rate is fixed to 5e-4 for UpDn and 5e-5 for LXMERT, with a batch size of 384 during pre-training. For the answer prediction part, we use a fixed number of hidden units in each of UpDn and LXMERT, and for the verification part, we use the same number of hidden units in both systems.
We fine-tune our system using the verification loss and the VQA loss on the VQA-X training set for another 40 epochs. The initial learning rate for the VQA system is set to the same value as in pretraining if the system is pretrained on the VQA-X training set, and to 0.1 if the system is pretrained on the VQA v2 training set. For the verification system, the initial learning rate is set to 0.0005. The learning rate for every parameter is decayed by 0.8 every 5 epochs. During testing, we consider the top-10 answer candidates for the VQA systems and use the explanation-reweighted prediction as the final answer.
Implementation. We implemented our approach on top of the original UpDn and LXMERT. Both base systems utilize a Faster R-CNN head girshick2015fast in conjunction with a ResNet-101 base network he2016deep as the object detection module. The detection head is pre-trained on the Visual Genome dataset krishna2017visual and is capable of detecting a large set of object categories and attributes. Both base systems take the final detection outputs and perform non-maximum suppression (NMS) for each object category with an IoU threshold. Convolutional features for the top detected objects are then extracted for each image as the visual features, one fixed-dimensional vector per object. For question embedding, following anderson2017bottom , we perform standard text pre-processing and tokenization for UpDn. In particular, questions are first converted to lower case, trimmed to a maximum length, and tokenized on white space. A single-layer GRU cho2014learning is used to sequentially process the word vectors and produce a sentential representation for the pre-processed question. We also use GloVe vectors pennington2014glove to initialize the word embedding matrix when embedding the questions. For LXMERT, we follow the original BERT word-level sentence embedding strategy, which first splits the sentence into words with the same WordPiece tokenizer wu2016google used in devlin2018bert . Next, the word and its index (i.e., absolute position in the sentence) are projected to vectors by embedding sub-layers and then added to produce the index-aware word embeddings. In the verification system, we use a single-layer GRU to encode the generated or retrieved explanation when UpDn is the base system, and a three-layer GRU when LXMERT is the base system.
Table 2: Automatic metrics (BLEU-4, METEOR, ROUGE-L, CIDEr, SPICE) and normalized human rating for generated explanations.

|Model|B-4|M|R-L|C|S|Human|
|Faith. Expl. wu2018faithful |25.0|20.0|47.1|91.1|18.6|49.5|
|Faith. Expl. + E (ours)|26.4|20.4|48.5|95.3|18.7|55.6|
5 Experimental Results
This section presents experimental results on the VQA-X park2018multimodal dataset where the questions require more cognitive maturity than the original VQA-v2 dataset. We combine the validation set (1,459 examples) and test set (1,968 examples) of the VQA-X dataset as our larger test set (3,427 examples) for more stable results since both are relatively small. We compare our system’s VQA performance against two corresponding base systems using the standard protocol. In addition, we examine the quality of explanations by comparing our system against a baseline model as well as human explanations. Finally, we perform ablation studies to show that both improved feature representation and explanation reweighting are key aspects of the improvements.
We also report ablation results for each way of constructing negative examples, as well as qualitative examples, in the supplementary materials.
5.1 Results on VQA performance
Table 1 reports the results of our competing explanation approach. Our approach combined with UpDn pretrained on the entire VQA v2 dataset achieves the best results. When training only on the VQA-X training set, we improve the original UpDn and LXMERT by 4.5% and 1.2%, respectively. UpDn benefits more from using competing explanations than LXMERT, but both improve. By using transformers, LXMERT already creates better, but less flexible, representations which are harder to improve upon using explanations.
5.2 Results on Explanation Generation
We evaluated the generated explanations using both automatic metrics that compare them to human explanations (BLEU-4 Papineni:2002:BMA:1073083.1073135 , METEOR banerjee2005meteor , ROUGE-L lin2004rouge , CIDEr vedantam2015cider , and SPICE spice2016 ) and human evaluation on the Amazon Mechanical Turk (AMT) platform. The explanation generator uses the features from the UpDn VQA system pretrained on VQA v2 and employs beam search with a beam size of two. We compare to a recent state-of-the-art VQA explanation system wu2018faithful as a baseline.
In the AMT evaluation, we ask human judges to compare generated explanations to human ones. We randomly sampled 500 such pairs of explanations for both the baseline and our system. In order to measure the difference in explanation quality relative to human oracles, we perform two groups of comparisons, our system vs. oracles and the baseline vs. oracles, where each group contains randomly sampled comparisons. Each comparison consists of a (question, image, answer) triplet and two randomly ordered explanations, one human and one generated. Three Turkers were asked which explanation better justified the answer, or could declare a tie. We aggregated results by assigning 2 points to a winning explanation, 0 to a losing one, and 1 to each in the case of a tie, so that each comparison is zero-sum and somewhat robust to noise. We normalized the total score for an automated explainer by the score for the human explanations, so the final score ranges from 0 to 1, where 1 represents parity with human performance.
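The aggregation above can be sketched as follows; treating the normalization as the ratio of the generated explainer's points to the human explainer's points is our reading of the protocol, stated here as an assumption:

```python
def normalized_human_score(outcomes):
    """Aggregate pairwise AMT judgments from the generated explanation's
    perspective: a win earns 2 points, a tie 1, a loss 0, with the human
    explanation earning the remainder of each zero-sum comparison. The
    total is normalized by the human total, so 1.0 means parity.

    outcomes: list of 'win' / 'tie' / 'loss' strings.
    """
    points = {'win': 2, 'tie': 1, 'loss': 0}
    gen = sum(points[o] for o in outcomes)
    human = sum(2 - points[o] for o in outcomes)  # zero-sum complement
    return gen / human if human else 1.0
```

For example, an all-ties outcome yields a score of exactly 1.0, while losing every comparison yields 0.0.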
As reported in Table 2, by additionally conditioning on human explanations for similar visual questions, the generated explanations achieve both better automatic scores, and more importantly, higher human ratings.
5.3 Effect of Using Different Explanations
This section compares variations of our approach using different sources of explanations: generated, retrieved, and human ones. Table 4 reports overall VQA scores using UpDn pretrained on the VQA-X training set. We include two baseline settings, “UpDn” and “UpDn + VQA-E”; in the latter, the model is trained to jointly predict the answer and generate the explanation, using a two-layer attentional LSTM on top of the shared VQA features. This version models the approach used in li2018vqa .
In the first human explanation setting, we only replace the retrieved explanations for the right answer with the corresponding human ones, and still use retrieved explanations for the incorrect answers. This setting shows how much the retrieved explanations for correct answers impact the results. The second human explanation setting assumes that human explanations replace the retrieved explanations for all the potential answer candidates. This setting provides an upper bound for our approach using textual explanations.
The results indicate that using explanations even in a simple joint model li2018vqa is helpful, providing 2% improvement on the overall score. However, our competing explanation approach with either generated or retrieved explanations significantly outperforms both baseline models.
Our system with retrieved explanations performs slightly better than the one with generated explanations. This is probably because there is no guarantee that a generated explanation will support the answer upon which it is conditioned. Since all the explanation training examples are for correct answers, the explanation generator tends to support the ground-truth answer regardless of the answer candidate it is generated to support. The generated explanations also sometimes ignore or hallucinate rohrbach2018object visual content when explaining the answer. Therefore, although generated explanations could ideally work better than retrieved ones, they are currently less helpful to VQA performance due to these imperfections.
Not surprisingly, the human oracle explanations help the VQA system more than the retrieved ones. This indicates that our approach could achieve even better performance with more informative explanations, obtained either by developing a better explanation generator or by enlarging the explanation training set from which human explanations are retrieved. Our system's results using retrieved explanations are only 0.6% lower than with human oracle explanations for the correct answers, indicating that the retrieved explanations (for related questions) are a reasonable approximation to human explanations for the specific question.
5.4 Evaluating Representation Improvement
This section presents an ablation investigating how our approach improves the learned representations. The “no reweighting” ablation still uses the fine-tuned representation trained using explanations, but does not reweight the final predictions; it therefore tests the improvement due solely to better joint representations of the question and visual content. The “fixed VQA” ablation uses reweighting, but does not fine-tune the VQA parameters during verification-score training (only the verification parameters are trained).
Table 4 reports the results of the UpDn system pretrained on the VQA-X dataset. Using explanations as additional supervision helps the VQA system build better representations for the question and visual content, improving performance by 3.6%. This is because minimizing the verification loss prevents the VQA system from taking shortcuts. First, the positive-example term forces the VQA system to produce visual and question features whose mapping can match the explanation features. Second, minimizing the answer- and explanation-replacement losses forces the system not to focus solely on question and/or visual priors. Finally, our full system gains a further 1.1% improvement from reweighting, achieving our best results.
6 Conclusion and Future Work
In this work, we have explored how to improve VQA performance by comparing competing explanations for each answer candidate. We present two sets of competing explanations, generated and retrieved explanations. Our approach first helps the system learn better visual and question representations, and also reweight the original answer predictions based on the competing explanations. As a result, our VQA system avoids taking shortcuts and is able to handle difficult visual questions better, improving results on the challenging VQA-X dataset. We also show that our approach generates better textual explanations by additionally conditioning on the retrieved explanations for similar questions. In the future, we would like to combine different sorts of explanations (e.g. both generated and retrieved ones) together to better train VQA systems.
7 Broader Impact
The focus of this work is to learn a more sophisticated VQA system to better answer complex visual questions that require more reasoning and commonsense knowledge. By using the competing explanations for each potential answer candidates, we encourage the system to use the same rationales as humans, preventing the system from taking short-cuts during inference by only focusing on answer priors and simple visual perceptual concepts.
Meanwhile, our VQA system also associates each question with either a retrieved or a generated explanation to illustrate the rationale behind the answer, providing users with more insight into the system's reasoning, which increases transparency and helps engender trust.
Hopefully, both increased accuracy and better explanation will help improve VQA systems for important socially-beneficial applications, including among others, aids for the visually impaired and improved analysis and interpretation of medical imagery.
Though the proposed system attempts to generate “faithful” explanations that are biased to focus on aspects of the image explicitly attended to by the underlying VQA network, it also tries to mimic human explanations and may generate rationales that do not completely faithfully reflect all of the details of the system's operation. Therefore, the system might produce incorrect answers with seemingly reasonable explanations, which could convince users to accept erroneous conclusions and have significant negative impacts.
In addition, it is possible that the retrieved or generated explanations could introduce new undesirable biases (e.g., gender bias) when reweighting the original predictions, particularly if such biases are present in the human explanations used to train the system. Careful vetting of the training explanations to prevent such biases is desirable but would be difficult and costly.
- (1) A. Agrawal, D. Batra, D. Parikh, and A. Kembhavi. Don’t Just Assume; Look and Answer: Overcoming Priors for Visual Question Answering. In CVPR, 2018.
- (2) P. Anderson, B. Fernando, M. Johnson, and S. Gould. SPICE: Semantic Propositional Image Caption Evaluation. In ECCV, pages 382–398, 2016.
- (3) P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang. Bottom-Up and Top-Down Attention for Image Captioning and VQA. In CVPR, 2018.
- (4) S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. Lawrence Zitnick, and D. Parikh. VQA: Visual Question Answering. In ICCV, 2015.
- (5) S. Banerjee and A. Lavie. Meteor: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments. In Proceedings of the ACL workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pages 65–72, 2005.
- (6) H. Ben-Younes, R. Cadene, M. Cord, and N. Thome. MUTAN: Multimodal Tucker Fusion for Visual Question Answering. In ICCV, 2017.
- (7) R. Cadene, H. Ben-Younes, M. Cord, and N. Thome. MUREL: Multimodal Relational Reasoning for Visual Question Answering. In CVPR, pages 1989–1998, 2019.
- (8) X. Chen, H. Fang, T.-Y. Lin, R. Vedantam, S. Gupta, P. Dollár, and C. L. Zitnick. Microsoft COCO Captions: Data Collection and Evaluation Server. arXiv preprint arXiv:1504.00325, 2015.
- (9) K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning Phrase Representations Using RNN Encoder-Decoder for Statistical Machine Translation. In EMNLP, 2014.
- (10) A. Das, H. Agrawal, L. Zitnick, D. Parikh, and D. Batra. Human Attention in Visual Question Answering: Do Humans and Deep Networks Look at the Same Regions? In Computer Vision and Image Understanding, 2017.
- (11) J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
- (12) A. Fukui, D. H. Park, D. Yang, A. Rohrbach, T. Darrell, and M. Rohrbach. Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding. EMNLP, 2016.
- (13) C. Gan, Y. Li, H. Li, C. Sun, and B. Gong. Vqs: Linking segmentations to questions and answers for supervised attention in vqa and question-focused semantic segmentation. In The IEEE International Conference on Computer Vision (ICCV), Oct 2017.
- (14) K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. In CVPR, 2016.
- (15) D. A. Hudson and C. D. Manning. Gqa: a new dataset for compositional question answering over real-world images. arXiv preprint arXiv:1902.09506, 2019.
- (16) Y. Jiang, V. Natarajan, X. Chen, M. Rohrbach, D. Batra, and D. Parikh. Pythia v0. 1: the Winning Entry to the VQA Challenge 2018. arXiv preprint arXiv:1807.09956, 2018.
- (17) J.-H. Kim, J. Jun, and B.-T. Zhang. Bilinear Attention Networks. In NeurIPS, 2018.
- (18) D. P. Kingma and J. Ba. Adam: A Method for Stochastic Optimization. In ICLR, 2015.
- (19) R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L.-J. Li, D. A. Shamma, et al. Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations. IJCV, 2017.
- (20) Q. Li, Q. Tao, S. Joty, J. Cai, and J. Luo. VQA-E: Explaining, Elaborating, and Enhancing Your Answers for Visual Questions. ECCV, 2018.
- (21) C.-Y. Lin. Rouge: A Package for Automatic Evaluation of Summaries. Text Summarization Branches Out, 2004.
- (22) B. Liu, Z. Huang, Z. Zeng, Z. Chen, and J. Fu. Learning rich image region representation for visual question answering. arXiv preprint arXiv:1910.13077, 2019.
- (23) J. Lu, D. Batra, D. Parikh, and S. Lee. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Advances in Neural Information Processing Systems, pages 13–23, 2019.
- (24) K. Marino, M. Rastegari, A. Farhadi, and R. Mottaghi. Ok-vqa: A visual question answering benchmark requiring external knowledge. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3195–3204, 2019.
- (25) K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. Bleu: A Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 311–318, Stroudsburg, PA, USA, 2002. Association for Computational Linguistics.
- (26) D. H. Park, L. A. Hendricks, Z. Akata, A. Rohrbach, B. Schiele, T. Darrell, and M. Rohrbach. Multimodal Explanations: Justifying Decisions and Pointing to the Evidence. In CVPR, 2018.
- (27) J. Pennington, R. Socher, and C. Manning. Glove: Global Vectors for Word Representation. In EMNLP, 2014.
- (28) T. Qiao, J. Dong, and D. Xu. Exploring Human-Like Attention Supervision in Visual Question Answering. In AAAI, 2018.
- (29) S. Ramakrishnan, A. Agrawal, and S. Lee. Overcoming Language Priors in Visual Question Answering with Adversarial Regularization. In NeurIPS, 2018.
- (30) S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards Real-time Object Detection with Region Proposal Networks. In NIPS, 2015.
- (31) A. Rohrbach, L. A. Hendricks, K. Burns, T. Darrell, and K. Saenko. Object Hallucination in Image Captioning. In EMNLP, pages 4035–4045, 2018.
- (32) A. S. Ross, M. C. Hughes, and F. Doshi-Velez. Right for the Right Reasons: Training Differentiable Models by Constraining Their Explanations. In IJCAI, 2017.
- (33) R. R. Selvaraju, S. Lee, Y. Shen, H. Jin, D. Batra, and D. Parikh. Taking a HINT: Leveraging Explanations to Make Vision and Language Models More Grounded. In ICCV, 2019.
- (34) A. Singh, V. Natarajan, M. Shah, Y. Jiang, X. Chen, D. Batra, D. Parikh, and M. Rohrbach. Towards vqa models that can read. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8317–8326, 2019.
- (35) H. Tan and M. Bansal. Lxmert: Learning cross-modality encoder representations from transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, 2019.
- (36) A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008, 2017.
- (37) R. Vedantam, C. Lawrence Zitnick, and D. Parikh. Cider: Consensus-Based Image Description Evaluation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4566–4575, 2015.
- (38) J. Wu and R. J. Mooney. Faithful Multimodal Explanation for Visual Question Answering. In ACL BlackboxNLP Workshop, 2019.
- (39) J. Wu and R. J. Mooney. Self-Critical Reasoning for Robust Visual Question Answering. arXiv preprint arXiv:1905.09998, 2019.
- (40) Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, et al. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
- (41) Y. Zhang, J. C. Niebles, and A. Soto. Interpretable Visual Question Answering by Visual Grounding from Attention Supervision Mining. In WACV, 2019.