Component Analysis for Visual Question Answering Architectures

02/12/2020 ∙ by Camila Kolling, et al. ∙ PUCRS

Recent research advances in Computer Vision and Natural Language Processing have introduced novel tasks that are paving the way for solving AI-complete problems. One of those tasks is called Visual Question Answering (VQA). A VQA system must take an image and a free-form, open-ended natural language question about the image, and produce a natural language answer as the output. Such a task has drawn great attention from the scientific community, which generated a plethora of approaches that aim to improve the VQA predictive accuracy. Most of them comprise three major components: (i) independent representation learning of images and questions; (ii) feature fusion so the model can use information from both sources to answer visual questions; and (iii) the generation of the correct answer in natural language. With so many approaches being recently introduced, it became unclear the real contribution of each component for the ultimate performance of the model. The main goal of this paper is to provide a comprehensive analysis regarding the impact of each component in VQA models. Our extensive set of experiments cover both visual and textual elements, as well as the combination of these representations in form of fusion and attention mechanisms. Our major contribution is to identify core components for training VQA models so as to maximize their predictive performance.




I Introduction

Recent research advances in Computer Vision (CV) and Natural Language Processing (NLP) introduced several tasks that are quite challenging to solve, the so-called AI-complete problems. Most of those tasks require systems that understand information from multiple sources, i.e., semantics from visual and textual data, in order to provide some kind of reasoning. For instance, image captioning [14, 16, 10] presents itself as a hard task to solve, though it is actually challenging to quantitatively evaluate models on that task, and recent studies [3] have raised questions on its AI-completeness.

The Visual Question Answering (VQA) [3] task was introduced as an attempt to solve that issue: to be an actual AI-complete problem whose performance is easy to evaluate. It requires a system that receives as input an image and a free-form, open-ended, natural-language question, and produces a natural-language answer as the output [3]. It is a multidisciplinary topic that is gaining popularity by encompassing CV and NLP into a single architecture, in what is usually regarded as a multimodal model [40, 47, 48]. There are many real-world applications for models trained for Visual Question Answering, such as automatic surveillance video queries [42] and aiding the visually impaired [7, 27].

Models trained for VQA are required to understand the semantics from images while finding relationships with the asked question. Therefore, those models must present a deep understanding of the image to properly perform inference and produce a reasonable answer to the visual question [50]. In addition, it is much easier to evaluate this task since there is a finite set of possible answers for each image-question pair.

Traditionally, VQA approaches comprise three major steps: (i) representation learning of the image and the question; (ii) projection of a single multimodal representation through fusion and attention modules that are capable of leveraging both visual and textual information; and (iii) the generation of the natural language answer to the question at hand. This task often requires sophisticated models that are able to understand a question expressed in text, identify relevant elements of the image, and evaluate how these two inputs correlate.

Given the current interest of the scientific community in VQA, many recent advances try to improve individual components such as the image encoder, the question representation, or the fusion and attention strategies to better leverage both information sources. With so many approaches being introduced at the same time, the real contribution and importance of each component within the proposed models becomes unclear. Thus, the main goal of this work is to understand the impact of each component on a proposed baseline architecture, which draws inspiration from the pioneering VQA model [3] (Fig. 1). Each component within that architecture is then systematically tested, allowing us to understand its impact on the system's final performance through a thorough set of experiments and ablation analyses.

Fig. 1: Baseline architecture proposed for the experimental setup.

More specifically, we observe the impact of: (i) pre-trained word embeddings [35, 34], recurrent [25] and transformer-based sentence encoders [13] as question representation strategies; (ii) distinct convolutional neural networks used for visual feature extraction [36, 39, 20]; and (iii) standard fusion strategies, as well as the importance of two main attention mechanisms [29, 2]. We notice that even with a relatively simple baseline architecture, our best models are competitive with the (maybe overly-complex) state-of-the-art models [6, 9]. Given the experimental nature of this work, we have trained over 130 neural network models, accounting for more than 600 GPU processing hours. We expect our findings to be useful as guidelines for training novel VQA models, and to serve as a basis for the development of future architectures that seek to maximize predictive performance.

II Related Work

The task of VQA has gained attention since Antol et al. [3] presented a large-scale dataset with open-ended questions. Many of the developed VQA models employ a very similar architecture [3, 17, 18, 31, 32, 33, 49]: they represent images with features from pre-trained convolutional neural networks; they use word embeddings or recurrent neural networks to represent questions and/or answers; and they combine those features in a classification model over possible answers.

Despite their wide adoption, RNN-based models suffer from limited representation power [12, 45, 46, 44]. Some recent approaches have investigated the application of the Transformer model [43] to tasks that incorporate visual and textual knowledge, such as image captioning [12].

Attention-based methods are also being continuously investigated, since they enable reasoning by focusing on relevant objects or regions in the original input features. They allow models to pay attention to important parts of the visual or textual inputs at each step of a task. Visual attention models focus on small regions within an image to extract important features. A number of methods have adopted visual attention to benefit visual question answering [49, 53, 38].

Recently, dynamic memory networks [49] have integrated an attention mechanism with a memory module, and multimodal bilinear pooling [17, 6, 54] has been exploited to expressively combine multimodal features and predict attention over the image. These methods commonly employ visual attention to find critical regions, but textual attention has rarely been incorporated into VQA systems.

While all the aforementioned approaches have exploited those kinds of mechanisms, in this paper we study the impact of such choices specifically for the task of VQA, and create a simple yet effective model. Burns et al. [8] conducted experiments comparing different word embeddings, language models, and embedding augmentation steps on five multimodal tasks: image-sentence retrieval, image captioning, visual question answering, phrase grounding, and text-to-clip retrieval. While their work focuses on textual experiments, our experiments cover both visual and textual elements, as well as the combination of these representations in the form of fusion and attention mechanisms. To the best of our knowledge, this is the first paper that provides a comprehensive analysis of the impact of each major component within a VQA architecture.

III Impact of VQA Components

In this section we first introduce the baseline approach, with default image and text encoders alongside a pre-defined fusion strategy. That base approach is inspired by the pioneering VQA work of Antol et al. [3]. To understand the importance of each component, we update the base architecture according to the component under investigation.

In our baseline model we replace the VGG network from [2] with a Faster-RCNN pre-trained on the Visual Genome dataset [26]. The default text encoding is given by the last hidden-state of a Bidirectional LSTM network, instead of the concatenation of the last hidden-state and memory cell used in the original work. Fig. 1 illustrates the proposed baseline architecture, which is subdivided into three major segments: independent feature extraction from (1) images and (2) questions, as well as (3) the fusion mechanism responsible for learning cross-modal features.

The default text encoder (denoted by the pink rectangle in Fig. 1) employed in this work comprises a randomly initialized word-embedding module that takes a tokenized question and returns a continuous vector for each token. Those vectors are used to feed an LSTM network. The last hidden-state is used as the question encoding, which is projected with a linear layer into the fusion space so it can be combined with the visual features. As the default option for the LSTM network, we use a single layer. Given that this text encoding approach is fully trainable, we hereby name it Learnable Word Embedding (LWE).
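As an illustration, the LWE encoder can be sketched with a minimal NumPy LSTM; all dimensions and the single-cell implementation below are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, EMB, HID = 1000, 32, 64          # illustrative sizes

E = rng.normal(0, 0.1, (VOCAB, EMB))    # trainable embedding table
# LSTM parameters for the four gates (input, forget, cell, output), stacked.
W = rng.normal(0, 0.1, (EMB + HID, 4 * HID))
b = np.zeros(4 * HID)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def encode(question_token_ids):
    """Return the LSTM's last hidden state for a tokenized question."""
    h = np.zeros(HID)
    c = np.zeros(HID)
    for t in question_token_ids:
        x = E[t]                         # embedding lookup
        z = np.concatenate([x, h]) @ W + b
        i, f, g, o = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h                             # the question encoding

q_enc = encode([4, 17, 256, 9])          # a hypothetical tokenized question
```

In the full model this last hidden state would then be linearly projected before fusion with the visual features.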

For the question encoding, we explore pre-trained and randomly initialized word-embeddings in various settings, including Word2Vec (W2V) [34] and GloVe [35]. We also explore the use of hidden-states of Skip-Thoughts Vector [25] and BERT [13] as replacements for word-embeddings and sentence encoding approaches.

Regarding the visual feature extraction (depicted as the green rectangle in Fig. 1), we use the pre-computed features proposed in [2]. Such an architecture employs a ResNet-101 with a Faster-RCNN [36] fine-tuned on the Visual Genome dataset. We chose this option because using pre-computed features is far more computationally efficient, allowing us to train several models with distinct configurations. Moreover, several recent approaches [6, 9, 4] employ that same strategy, making it easier to provide fair comparisons with the state-of-the-art. In this study we also analyze the use of other convolutional neural networks, so we can demonstrate the impact of the visual representation on VQA. We perform experiments with two additional networks widely used for the task at hand, namely VGG-16 [39] and ResNet-101 [20].

Given the multimodal nature of the problem we are dealing with, it is quite challenging to train proper image and question encoders so as to capture relevant semantic information from both. Nevertheless, another essential aspect of the architecture is the component that merges them, allowing the model to generate answers based on both information sources [15]. Multimodal fusion is a research area in itself, with many approaches being proposed recently [6, 5, 17, 23]. The fusion module receives the extracted image and question features and produces multimodal features that, in theory, carry the information needed to answer the visual question. Fusion strategies can assume quite simple forms, such as vector multiplication or concatenation, or be quite complex, involving multilayered neural networks, tensor decomposition, and bilinear pooling, to name a few.

Following [3], we adopt the element-wise vector multiplication (also referred to as the Hadamard product) as the default fusion strategy. This approach requires the feature representations being fused to have the same dimensionality. Therefore, we project them with fully-connected layers into a common dimension. After being fused together, the multimodal features are finally passed through a fully-connected layer that provides scores (logits), further converted into probabilities via a softmax function. We want to maximize the probability of the correct answer given the image and the provided question. Our models are trained to choose within a set comprising the most frequent answers extracted from both training and validation sets of the VQA v2.0 dataset [19].
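A minimal sketch of this fusion-and-classification head, with made-up dimensions and randomly initialized projections standing in for the trained layers:

```python
import numpy as np

rng = np.random.default_rng(1)
D_IMG, D_TXT, D, N_ANSWERS = 2048, 64, 256, 3000   # illustrative sizes

Wi = rng.normal(0, 0.02, (D_IMG, D))    # projects image features
Wq = rng.normal(0, 0.02, (D_TXT, D))    # projects question features
Wo = rng.normal(0, 0.02, (D, N_ANSWERS))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def answer_distribution(img_feat, q_feat):
    # Project both modalities to a common dimension, fuse via Hadamard
    # product, then map the fused vector to answer probabilities.
    fused = (img_feat @ Wi) * (q_feat @ Wq)
    return softmax(fused @ Wo)

p = answer_distribution(rng.normal(size=D_IMG), rng.normal(size=D_TXT))
```

Training would then maximize the probability `p[correct_answer_index]` via cross-entropy.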

IV Experimental Setup

IV-A Dataset

For conducting this study we use the VQA v2.0 dataset [19]. It is one of the largest and most frequently used datasets for training and evaluating models on this task, being the official dataset used in the yearly challenges hosted by mainstream computer vision venues. This dataset enhances the original one [3] by alleviating bias problems within the data and increasing the original number of instances.

VQA v2.0 contains images from MSCOCO [30], over 1 million questions, and millions of answers. In addition, it has at least two questions per image, which prevents the model from answering the question without considering the input image.

We follow VQA v2.0 standards and adopt the officially provided splits, allowing for fair comparison with other approaches. The official splits are Validation, Test-Dev, Test-Standard, Test-Reserve, and Test-Challenge. Validation is used for model ablation and hyper-parameter optimization purposes; Test-Dev is used for debugging and validation of experiments, allowing unlimited submissions to the evaluation server [52]; Test-Standard is the default set for assessing state-of-the-art results and has a limited number of monthly submissions to the evaluation server [52]; Test-Reserve is used to protect against possible excessive network adjustments to the available test sets, i.e., overfitting; Test-Challenge determines the winners of the VQA Challenge.

In this work, results of the ablation experiments are reported on the Validation set, which is the default option for this kind of experiment. In some experiments we also report the training set accuracy to verify evidence of overfitting due to excessive model complexity. The training set comprises hundreds of thousands of questions labeled with millions of answers. Note that the validation set is about 4-fold larger than ImageNet's, which contains about 50,000 samples. Therefore, one must keep in mind that even small performance gaps might indicate quite significant improvements: a 1% accuracy gain corresponds to thousands of additional instances being correctly classified. We submit the predictions of our best models to the online evaluation servers [52] so as to obtain results for the Test-Standard split, allowing for a fair comparison to state-of-the-art approaches.

Iv-B Evaluation Metric

Free and open-ended questions result in a diverse set of possible answers [3]. For some questions, a simple yes or no answer may be sufficient. Other questions, however, may require more complex answers. In addition, it is worth noticing that multiple answers may be considered correct, such as gray and light gray. Therefore, VQA v2.0 provides ten ground-truth answers for each question. These answers were collected from ten different randomly-chosen humans.

The evaluation metric used to measure model performance in the open-ended Visual Question Answering task is a particular kind of accuracy. For each question, the model's most likely answer is compared to the ten human-provided answers associated with that question [3], and evaluated according to Equation 1, Acc(ans) = min(#humans that provided ans / 3, 1). In this approach, the prediction is considered totally correct only if at least 3 out of the 10 annotators provided that same answer.
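This consensus-based metric from Antol et al. [3] takes only a few lines to implement; `vqa_accuracy` is a hypothetical helper name:

```python
def vqa_accuracy(prediction, human_answers):
    """VQA accuracy: an answer is fully correct when at least 3 of the
    10 annotators gave it; fewer matches earn proportional credit."""
    matches = sum(a == prediction for a in human_answers)
    return min(matches / 3.0, 1.0)

# Four annotators said "gray", so the prediction earns full credit.
acc = vqa_accuracy("gray", ["gray"] * 4 + ["light gray"] * 6)  # -> 1.0
```

In practice the official evaluation also normalizes answers (lowercasing, punctuation and article removal) before comparison.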

Fig. 2: Overall validation accuracy improvement over the baseline architecture. Models denoted with * use fixed word-embedding representations, i.e., they are not updated via back-propagation.

IV-C Hyper-parameters

As in [6], we train our models in a classification-based manner, minimizing the cross-entropy loss calculated over image-question-answer triplets sampled from the training set. We optimize the parameters of all VQA models using the Adamax [24] optimizer, with the exception of BERT [13], for which we apply a 10-fold learning rate reduction as suggested in the original paper. We use a learning rate warm-up schedule in which we halve the base learning rate and then linearly increase it until the fourth epoch, where it reaches twice its base value. It remains constant until the tenth epoch, when we start applying a 25% decay every two epochs. Gradients are calculated over mini-batches, and we train all models for 20 epochs.
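The schedule above can be sketched as a function of the epoch; the exact breakpoints at the ramp and decay boundaries are assumptions where the text leaves them implicit:

```python
def learning_rate(epoch, base_lr):
    """Warm-up/decay schedule sketch (epochs are 1-indexed).
    Assumed breakpoints: epochs 1-4 ramp linearly from base_lr/2 to
    2*base_lr, epochs 5-10 hold 2*base_lr, then a 25% multiplicative
    decay is applied every two epochs."""
    if epoch <= 4:
        # linear ramp from 0.5x to 2x of the base learning rate
        return 0.5 * base_lr + 1.5 * base_lr * (epoch - 1) / 3.0
    if epoch <= 10:
        return 2.0 * base_lr
    # one 25% decay for every two epochs past epoch 10
    return 2.0 * base_lr * (0.75 ** ((epoch - 9) // 2))

schedule = [learning_rate(e, 1e-3) for e in range(1, 21)]
```

Plotting `schedule` reproduces the ramp-hold-decay shape described in the text.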

V Experimental Analysis

In this section we show the experimental analysis for each component in the baseline VQA model. We also provide a summary of our findings regarding the impact of each part. Finally, we train a model with all the components that provide top results and compare it against state-of-the-art approaches.

V-A Text Encoder

In our first experiment, we analyze the impact of different embeddings for the textual representation of the questions. To this end, we evaluate: (i) the impact of word-embeddings (pre-trained, or trained from scratch); and (ii) the role of the temporal encoding function, i.e., distinct RNN types, as well as pre-trained sentence encoders (e.g., Skip-Thoughts, BERT).

The word-embedding strategies we evaluate are Learnable Word Embedding (randomly initialized and trained from scratch), Word2Vec [34], and GloVe [35]. We also use word-level representations from widely used sentence embedding strategies, namely Skip-Thoughts [25] and BERT [13]. To do so, we use the hidden-states of the Skip-Thoughts GRU network, while for BERT we use the activations of its last layer as word-level information. Those vectors feed an RNN that encodes the temporal sequence into a single global vector. Different types of RNNs are also investigated for encoding the textual representation, including LSTM [21], Bidirectional LSTM [37], GRU [11], and Bidirectional GRU. For bidirectional architectures we concatenate the forward and backward hidden-states so as to aggregate information from both directions. Those approaches are also compared to a linear strategy, in which we use a fully-connected layer followed by global average pooling over the temporal dimension. The linear strategy discards any order information, allowing us to demonstrate the role of the recurrent network as a temporal encoder in improving model performance.

Figure 2 shows the performance variation across different types of word-embeddings, recurrent networks, initialization strategies, and the effect of fine-tuning the textual encoder. Clearly, the linear layer is outperformed by any type of recurrent layer; when using Skip-Thoughts, the gap accounts for thousands of instances that the linear model mistakenly labeled. The only case in which the linear approach performs well is when trained with BERT. That is expected, since Transformer-based architectures employ several attention layers that reach the full receptive field at every layer. In addition, BERT encodes temporal information with special positional vectors that allow for learning temporal relations. Hence, it is easier for the model to encode order information within word-level vectors without using recurrent layers.

For the Skip-Thoughts vector model, considering that its original architecture is based on GRUs, we evaluate both a randomly initialized GRU and the pre-trained GRU of the original model, denoted [GRU] and [GRU (skip)], respectively. Both options present virtually the same performance; in fact, the GRU trained from scratch performed slightly better than its pre-trained version.

Analyzing the results obtained with pre-trained word embeddings, it is clear that GloVe obtained consistently better results than the Word2Vec counterpart. We believe that GloVe vectors perform better given that they capture not only local context statistics as in Word2Vec, but they also incorporate global statistics such as co-occurrence of words.

One can also observe that the choice of RNN model has only a minor effect on the results. It might be more advisable to use GRU networks, since they roughly halve the number of trainable parameters compared to LSTMs while being faster and consistently presenting top results. Note also that the best results for Skip-Thoughts, Word2Vec, and GloVe were all quite similar, without any major variation in accuracy.

The best overall result is achieved when using BERT to extract the textual features. BERT versions using either the linear layer or the RNNs outperformed all other pre-trained embeddings and sentence encoders. In addition, the overall training accuracy for BERT models is not so high compared to all other approaches. That might be an indication that BERT models are less prone to overfit training data, and therefore present better generalization ability.

Fig. 3: Overall accuracy vs. number of parameters trade-off analysis. Circled markers denote two-layered RNNs. The number of parameters increases with the number of hidden units within the RNN, which is the quantity varied in this experiment.

Results make it clear that when using BERT one must fine-tune it to achieve top performance. Figure 2 shows a considerable accuracy improvement when updating BERT weights with a reduced fraction of the base learning rate. Moreover, Figure 3 shows that pre-training is helpful, since Skip-Thoughts and BERT outperform trainable word-embeddings in most of the evaluated settings. It also makes clear that single-layered RNNs provide the best results while being far more parameter-efficient.

V-B Image Encoder

Experiments in this section analyze the visual feature extraction layers. The baseline uses the Faster-RCNN [36] network, and we also experiment with other pre-trained neural networks to encode image information so we can observe their impact on predictive performance. In addition to Faster-RCNN, we experiment with two networks widely used for VQA, namely ResNet-101 [20] and VGG-16 [39].

Embedding   RNN   Network        Training   Validation
BERT        GRU   Faster-RCNN    79.34      58.88
BERT        GRU   ResNet-101     76.14      56.09
BERT        GRU   VGG-16         65.59      53.49
TABLE I: Impact of the network used for visual feature extraction (accuracy, %).

Table I shows the results of this experiment. Intuitively, visual features have a large impact on model performance: the accuracy difference between the best and the worst performing approaches is 5.39%, which accounts for thousands of validation set instances. VGG-16 visual features presented the worst accuracy, but that was expected since it is the oldest network used in this study. In addition, it is only sixteen layers deep, and it has been shown that network depth is quite important to hierarchically encode complex structures. Moreover, the VGG-16 architecture encodes all the information in a 4096-dimensional vector extracted after the second fully-connected layer at the end. That vector encodes little to no spatial information, which makes it almost impossible for the network to answer questions about the spatial positioning of objects.

ResNet-101 obtained intermediate results, placing second as a visual feature extractor. It is a much deeper network than VGG-16 and achieves much better results on ImageNet, which shows the difference in learning capacity between the two networks. ResNet-101 provides information encoded in 2048-dimensional vectors, extracted from the global average pooling layer, which also summarizes spatial information into a fixed-size representation.

The best result as a visual feature extractor was achieved by the Faster-RCNN fine-tuned on the Visual Genome dataset. Such a network employs a ResNet-152 as the backbone for training an RPN-based object detector. In addition, given that it was fine-tuned on the Visual Genome dataset, it allows for training robust models suited for general feature extraction. Hence, differently from the previous ResNet and VGG approaches, the Faster-RCNN is trained to detect objects, and therefore one can use it to extract features from the most relevant image regions, each encoded as a 2048-dimensional vector. These features contain rich information regarding regions and objects, since object detectors often operate over high-resolution images instead of resized ones (e.g., 224×224) as in typical classification networks. Hence, even after applying global pooling over regions, the network still has access to spatial information because of the pre-extracted regions of interest from each image.

Embedding   RNN   Fusion    Training   Validation
BERT        GRU   Mult      78.28      58.75
BERT        GRU   Concat    67.85      55.07
BERT        GRU   Sum       68.21      54.93
TABLE II: Experiment using different fusion strategies (accuracy, %).

V-C Fusion Strategy

In order to analyze the impact that the different fusion methods have on the network performance, three simple fusion mechanisms were analyzed: element-wise multiplication, concatenation, and summation of the textual and visual features.

The choice of the fusion component is essential in VQA architectures, since its output generates the multimodal features used for answering the given visual question. The resulting multimodal vector is projected into the label space, which provides a probability distribution over the possible answers to the question at hand.
Table II presents the experimental results with the fusion strategies. The best result is obtained using the element-wise multiplication. Such an approach functions as a filtering strategy that is able to scale down the importance of irrelevant dimensions of the visual-question feature vectors: dimensions with high cross-modal affinity have their magnitudes increased, while uncorrelated ones have their values reduced. Summation provides the worst results overall, closely followed by the concatenation operator. Moreover, among all the fusion strategies in this study, multiplication seems to ease the training process, as it also presents a roughly 10% higher training set accuracy.
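The three fusion operators compared here are one-liners over the projected feature vectors; the dimensionality below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
v = rng.normal(size=256)   # projected visual features
q = rng.normal(size=256)   # projected question features

mult   = v * q                      # Hadamard product (best in Table II)
summed = v + q                      # element-wise sum
concat = np.concatenate([v, q])     # doubles the fused dimensionality

# Multiplication acts as a soft gate: dimensions where both modalities
# carry aligned signal are amplified, while uncorrelated ones shrink.
```

Note that concatenation changes the fused dimensionality, so the subsequent classification layer must be sized accordingly.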

V-D Attention Mechanism

Finally, we analyze the impact of different attention mechanisms, such as Top-Down Attention [2] and Co-Attention [29]. These mechanisms are used to provide distinct image representations according to the asked questions. Attention allows the model to focus on the most relevant visual information required to generate proper answers to the given questions. Hence, it is possible to generate several distinct representations of the same image, which also has a data augmentation effect.

V-D1 Top-Down Attention

Top-down attention, as the name suggests, uses global features from the question to weight local visual information. The global textual features are taken from the last internal state of the RNN, and the image features are extracted from the Faster-RCNN as one feature vector per image region. The question features are linearly projected to the dimensionality used in the original paper [2], replicated for every region, and concatenated with the image features. The concatenated features are then non-linearly projected with a trainable weight matrix, generating a novel multimodal representation for each image region that is transformed by an activation function f. Often, f is ReLU [1], Tanh [28], or Gated Tanh [51]; the latter employs both the logistic Sigmoid and the Tanh in a gating scheme of the form y = tanh(Wx) ⊙ σ(W′x). A second fully-connected layer summarizes those hidden vectors into G values per region, where G is the number of glimpses, usually a small value. Multiple glimpses allow the model to produce distinct attention maps, which is useful for understanding complex sentences that require distinct viewpoints. Values produced by this layer are normalized with a softmax function applied over the regions, one normalization per glimpse.


This produces an attention mask used to weight the image regions: the question-aware image vector is the attention-weighted sum of the region features (Equation 4). Note that when G > 1, the dimensionality of the visual features increases G-fold; the resulting matrix is reshaped into a single vector, which constitutes the final question-aware image representation.
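A sketch of the top-down attention computation described above, in NumPy with illustrative sizes (the region count, feature dimensions, and a single glimpse are assumptions for this sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
K, DV, DQ, DH, G = 36, 2048, 64, 512, 1   # illustrative sizes

V = rng.normal(size=(K, DV))     # one feature vector per detected region
q = rng.normal(size=DQ)          # global (projected) question representation

W1 = rng.normal(0, 0.02, (DV + DQ, DH))   # joint non-linear projection
W2 = rng.normal(0, 0.02, (DH, G))         # per-glimpse region scores

def softmax_over_regions(z):
    z = z - z.max(axis=0, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=0, keepdims=True)

# Concatenate the question to every region, project non-linearly (ReLU
# here, the best-performing choice in Table III), score each region per
# glimpse, and normalize the scores over the regions.
X = np.concatenate([V, np.tile(q, (K, 1))], axis=1)   # (K, DV + DQ)
H = np.maximum(X @ W1, 0.0)                           # (K, DH)
A = softmax_over_regions(H @ W2)                      # (K, G) attention mask
v_att = (A.T @ V).reshape(-1)                         # question-aware image vector
```

With G glimpses the final vector has G times the region feature dimensionality, matching the reshape described in the text.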

V-D2 Co-Attention

Unlike the top-down attention mechanism, co-attention is based on the computation of local similarities between all question words and image regions. It expects two inputs: an image feature matrix V, in which each row encodes one image region, and a matrix Q of word-level question features. Both V and Q are normalized to have unit L2 norm, so their multiplication VQ^T results in the cosine similarity matrix used as guidance for generating the filtered image features, from which a context feature matrix is computed. Finally, the similarity scores are normalized with a softmax function, and the regions are summed so as to generate a single vector representing the visual features that are relevant given question Q.
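A sketch of co-attention's similarity-based weighting; the step that aggregates word-level similarities into per-region scores (a mean over words) is an assumption for this sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
K, T, D = 36, 8, 64               # regions, question length, feature size

V = rng.normal(size=(K, D))       # image region features
Q = rng.normal(size=(T, D))       # word-level question features

# L2-normalize rows so the dot products below are cosine similarities
# (the normalization that Table III shows to be important).
Vn = V / np.linalg.norm(V, axis=1, keepdims=True)
Qn = Q / np.linalg.norm(Q, axis=1, keepdims=True)

S = Vn @ Qn.T                     # (K, T) region-word similarity matrix

# Aggregate word-level affinities into one relevance score per region,
# normalize over the regions, then pool the regions into one vector.
scores = S.mean(axis=1)
alpha = np.exp(scores - scores.max())
alpha = alpha / alpha.sum()       # softmax over regions
v_att = alpha @ V                 # (D,) question-aware visual features
```

Unlike top-down attention, no trainable projection is strictly required here; the similarity matrix itself guides the weighting.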

Embedding   RNN   Attention                Training   Validation
BERT        GRU   -                        78.20      58.75
BERT        GRU   Co-Attention             71.10      58.54
BERT        GRU   Co-Attention (L2 norm)   86.03      64.03
BERT        GRU   Top-Down                 82.64      62.37
BERT        GRU   Top-Down (ReLU)          87.02      64.12
TABLE III: Experiment using different attention mechanisms (accuracy, %).

Table III depicts the results obtained by adding the attention mechanisms to the baseline model. For these experiments we used only element-wise multiplication as the fusion strategy, given that it presented the best performance in our previous experiments. We observe that attention is a crucial mechanism for VQA, leading to an accuracy improvement of more than 5%.

The best performing attention approach was top-down attention with the ReLU activation, followed closely by co-attention. We noticed that when using Gated Tanh within top-down attention, results degraded by 2%. In addition, the experiments show that L2 normalization is quite important for co-attention, providing an improvement of almost 5.5%.

VI Findings Summary

Model                         | VQA 2.0 Test-Dev              | VQA 2.0 Test-Std
                              | All    Yes/No  Num.   Other   | All    Yes/No  Num.   Other
MCB* [17]                     | -      -       -      -       | 62.27  78.82   38.28  53.36
ReasonNet* [22]               | -      -       -      -       | 64.61  78.86   41.98  57.39
Tips&Tricks* [41]             | 65.32  81.82   44.21  56.05   | 65.67  82.20   43.90  56.26
block [6]                     | 67.58  83.60   47.33  58.51   | 67.92  83.98   46.77  58.79
BERT-GRU-Faster-TopDown       | 67.16  84.76   44.82  57.23   | 67.28  84.75   44.90  57.20
BERT-GRU-Faster-CoAttention   | 67.18  84.85   45.92  56.84   | 67.39  85.00   46.20  56.91
TABLE IV: Comparison of the models on the VQA 2.0 Test-Standard set. The models were trained on the union of the VQA 2.0 trainval split and the VisualGenome [26] train split. All is the overall OpenEnded accuracy (higher is better). Yes/No, Num., and Other are subsets corresponding to answer types. * scores reported from [6].

The experiments presented in Section V-A have shown that the best text encoder approach is fine-tuning a pre-trained BERT model with a GRU network trained from scratch.

In Section V-B we performed experiments analyzing the impact of pre-trained networks used to extract visual features, namely Faster-RCNN, ResNet-101, and VGG-16. The best result was obtained with the Faster-RCNN, which yielded the largest improvement in overall accuracy.

We analyzed different ways to perform multimodal feature fusion in Section V-C. The fusion mechanism that obtained the best result was the element-wise product, providing almost 4% higher overall accuracy than the other fusion approaches.

Finally, in Section V-D we studied two main attention mechanisms and their variations. They aim to provide question-aware image representations by attending to the most important spatial features. The top performing mechanism is top-down attention with the ReLU activation function, which provided an overall accuracy improvement of more than 5% when compared to the base architecture.

VII Comparison to state-of-the-art methods

After evaluating each component of a typical VQA architecture individually, our goal in this section is to compare the model that combines the best-performing components against the current state of the art in VQA. Our comparison involves the following VQA models: Deeper-lstm-q [3], MCB [17], ReasonNet [22], Tips&Tricks [41], and the recent block [6].

Tables IV and V show that our best architecture outperforms all competitors but block, on both the Test-Standard (Table IV) and Test-Dev (Table V) sets. Despite block presenting a marginal accuracy advantage, we have shown in this paper that by carefully analyzing each individual component we are capable of generating a method, without any bells and whistles, that is on par with much more complex methods. For instance, block and MCB require 18M and 32M parameters, respectively, for the fusion scheme alone, while our fusion approach is parameter-free. Moreover, our model performs far better than [17], [22], and [41], which are also arguably much more complex methods.

Model                       | All    Yes/No  Num.   Other
Deeper-lstm-q [3]           | 51.95  70.42   32.28  40.64
MCB* [17]                   | 61.23  79.73   39.13  50.45
block [6]                   | 66.41  82.86   44.76  57.30
BERT-GRU-Faster-CoAttention | 65.84  83.66   44.36  55.50
BERT-GRU-Faster-TopDown     | 66.02  83.72   44.88  55.77
TABLE V: Comparison of the models on the VQA 2.0 Test-Dev set. All is the overall OpenEnded accuracy (higher is better). Yes/No, Num., and Other are subsets corresponding to answer types. * scores reported from [6].

VIII Conclusion

In this study we assessed the actual impact of several components within VQA models. We have shown that transformer-based encoders together with GRU models provide the best performance for question representation. Notably, we demonstrated that using pre-trained text representations provides consistent performance improvements across several hyper-parameter configurations. We have also shown that using an object detector fine-tuned with external data provides large improvements in accuracy. Our experiments have demonstrated that even simple fusion strategies can achieve performance on par with the state of the art. Moreover, we have shown that attention mechanisms are paramount for learning top-performing networks, since they allow producing question-aware image representations that are capable of encoding spatial relations. It became clear that top-down is the preferred attention method, given its results with the ReLU activation. It is also now clear that some configurations used in some architectures (e.g., additional RNN layers) are actually irrelevant and can be removed altogether without harming accuracy. For future work, we expect to expand this study in two main ways: (i) covering additional datasets, such as Visual Genome [26]; and (ii) studying in an exhaustive fashion how distinct components interact with each other, instead of observing their impact in isolation on classification performance.


This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior – Brasil (CAPES) – Finance Code 001. We would also like to thank FAPERGS for funding this research. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the graphics cards used for this research.


  • [1] A. F. Agarap (2018) Deep learning using rectified linear units (ReLU). arXiv preprint arXiv:1803.08375.
  • [2] P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang (2018) Bottom-up and top-down attention for image captioning and visual question answering. In CVPR, pp. 6077–6086.
  • [3] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. Lawrence Zitnick, and D. Parikh (2015) VQA: visual question answering. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2425–2433.
  • [4] Y. Bai, J. Fu, T. Zhao, and T. Mei (2018) Deep attention neural tensor network for visual question answering. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 20–35.
  • [5] H. Ben-Younes, R. Cadene, M. Cord, and N. Thome (2017) MUTAN: multimodal Tucker fusion for visual question answering. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2612–2620.
  • [6] H. Ben-Younes, R. Cadene, N. Thome, and M. Cord (2019) BLOCK: bilinear superdiagonal fusion for visual question answering and visual relationship detection. arXiv preprint arXiv:1902.00038.
  • [7] J. P. Bigham, C. Jayant, H. Ji, G. Little, A. Miller, R. Miller, R. Miller, et al. (2010) VizWiz: nearly real-time answers to visual questions. In Proceedings of the 23rd ACM Symposium on User Interface Software and Technology, pp. 333–342.
  • [8] A. Burns, R. Tan, K. Saenko, S. Sclaroff, and B. A. Plummer (2019) Language features matter: effective language representations for vision-language tasks. arXiv preprint arXiv:1908.06327.
  • [9] R. Cadene, H. Ben-Younes, M. Cord, and N. Thome (2019) MUREL: multimodal relational reasoning for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1989–1998.
  • [10] X. Chen and C. Lawrence Zitnick (2015) Mind's eye: a recurrent visual representation for image caption generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2422–2431.
  • [11] K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio (2014) Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
  • [12] M. Cornia, M. Stefanini, L. Baraldi, and R. Cucchiara (2019) M2: meshed-memory transformer for image captioning. arXiv preprint arXiv:1912.08226.
  • [13] J. Devlin, M. Chang, K. Lee, and K. Toutanova (2018) BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
  • [14] J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell (2015) Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2625–2634.
  • [15] B. Duke and G. W. Taylor (2018) Generalized Hadamard-product fusion operators for visual question answering. In 2018 15th Conference on Computer and Robot Vision (CRV), pp. 39–46.
  • [16] H. Fang, S. Gupta, F. Iandola, R. K. Srivastava, L. Deng, P. Dollár, J. Gao, X. He, M. Mitchell, J. C. Platt, et al. (2015) From captions to visual concepts and back. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1473–1482.
  • [17] A. Fukui, D. H. Park, D. Yang, A. Rohrbach, T. Darrell, and M. Rohrbach (2016) Multimodal compact bilinear pooling for visual question answering and visual grounding. arXiv preprint arXiv:1606.01847.
  • [18] Y. Gao, O. Beijbom, N. Zhang, and T. Darrell (2016) Compact bilinear pooling. In CVPR, pp. 317–326.
  • [19] Y. Goyal, T. Khot, D. Summers-Stay, D. Batra, and D. Parikh (2017) Making the V in VQA matter: elevating the role of image understanding in visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6904–6913.
  • [20] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
  • [21] S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural Computation 9 (8), pp. 1735–1780.
  • [22] I. Ilievski and J. Feng (2017) Multimodal learning and reasoning for visual question answering. In NIPS 2017, pp. 551–562.
  • [23] J. Kim, K. On, W. Lim, J. Kim, J. Ha, and B. Zhang (2016) Hadamard product for low-rank bilinear pooling. arXiv preprint arXiv:1610.04325.
  • [24] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  • [25] R. Kiros, Y. Zhu, R. R. Salakhutdinov, R. Zemel, R. Urtasun, A. Torralba, and S. Fidler (2015) Skip-thought vectors. In Advances in Neural Information Processing Systems, pp. 3294–3302.
  • [26] R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L. Li, D. A. Shamma, et al. (2017) Visual Genome: connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision 123 (1), pp. 32–73.
  • [27] W. S. Lasecki, Y. Zhong, and J. P. Bigham (2014) Increasing the bandwidth of crowdsourced visual question answering to better support blind users. In Proceedings of the 16th International ACM SIGACCESS Conference on Computers & Accessibility, pp. 263–264.
  • [28] Y. A. LeCun, L. Bottou, G. B. Orr, and K. Müller (2012) Efficient backprop. In Neural Networks: Tricks of the Trade, pp. 9–48.
  • [29] K. Lee, X. Chen, G. Hua, H. Hu, and X. He (2018) Stacked cross attention for image-text matching. In ECCV, pp. 201–216.
  • [30] T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick (2014) Microsoft COCO: common objects in context. In European Conference on Computer Vision, pp. 740–755.
  • [31] J. Lu, J. Yang, D. Batra, and D. Parikh (2016) Hierarchical question-image co-attention for visual question answering. In NIPS, pp. 289–297.
  • [32] M. Malinowski, M. Rohrbach, and M. Fritz (2015) Ask your neurons: a neural-based approach to answering questions about images. In ICCV, pp. 1–9.
  • [33] A. Mallya and S. Lazebnik (2016) Learning models for actions and person-object interactions with transfer to question answering. In ECCV, pp. 414–428.
  • [34] T. Mikolov, K. Chen, G. Corrado, and J. Dean (2013) Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
  • [35] J. Pennington, R. Socher, and C. Manning (2014) GloVe: global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543.
  • [36] S. Ren, K. He, R. Girshick, and J. Sun (2015) Faster R-CNN: towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, pp. 91–99.
  • [37] M. Schuster and K. K. Paliwal (1997) Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing 45 (11), pp. 2673–2681.
  • [38] K. J. Shih, S. Singh, and D. Hoiem (2016) Where to look: focus regions for visual question answering. In CVPR, pp. 4613–4621.
  • [39] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
  • [40] J. Singh, V. Ying, and A. Nutkiewicz (2018) Attention on attention: architectures for visual question answering (VQA). arXiv preprint arXiv:1803.07724.
  • [41] D. Teney, P. Anderson, X. He, and A. van den Hengel (2018) Tips and tricks for visual question answering: learnings from the 2017 challenge. In CVPR 2018, pp. 4223–4232.
  • [42] K. Tu, M. Meng, M. W. Lee, T. E. Choe, and S. Zhu (2014) Joint video and text parsing for understanding events and answering queries. IEEE MultiMedia 21 (2), pp. 42–70.
  • [43] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In NIPS, pp. 5998–6008.
  • [44] J. Wehrmann and R. C. Barros (2017) Convolutions through time for multi-label movie genre classification. In SAC 2017, pp. 114–119.
  • [45] J. Wehrmann and R. C. Barros (2018) Bidirectional retrieval made simple. In CVPR 2018, pp. 7718–7726.
  • [46] J. Wehrmann, C. Kolling, and R. C. Barros (2019) Fast and efficient text classification with class-based embeddings. In IJCNN 2019, pp. 2384–2391.
  • [47] J. Wehrmann, C. Kolling, and R. C. Barros (2020) Adaptive cross-modal embeddings for image-text alignment. In AAAI 2020, pp. 7718–7726.
  • [48] J. Wehrmann, D. M. Souza, M. A. Lopes, and R. C. Barros (2019) Language-agnostic visual-semantic embeddings. In ICCV 2019.
  • [49] C. Xiong, S. Merity, and R. Socher (2016) Dynamic memory networks for visual and textual question answering. In ICML, pp. 2397–2406.
  • [50] H. Xu and K. Saenko (2016) Ask, attend and answer: exploring question-guided spatial attention for visual question answering. In European Conference on Computer Vision, pp. 451–466.
  • [51] W. Xue and T. Li (2018) Aspect based sentiment analysis with gated convolutional networks. arXiv preprint arXiv:1805.07043.
  • [52] D. Yadav, R. Jain, H. Agrawal, P. Chattopadhyay, T. Singh, A. Jain, S. B. Singh, S. Lee, and D. Batra (2019) EvalAI: towards better evaluation systems for AI agents. arXiv preprint arXiv:1902.03570.
  • [53] Z. Yang, X. He, J. Gao, L. Deng, and A. Smola (2016) Stacked attention networks for image question answering. In CVPR, pp. 21–29.
  • [54] Z. Yu, J. Yu, C. Xiang, J. Fan, and D. Tao (2018) Beyond bilinear: generalized multimodal factorized high-order pooling for visual question answering. IEEE Transactions on Neural Networks and Learning Systems 29 (12), pp. 5947–5959.