Zero-Shot Visual Question Answering

11/17/2016 ∙ by Damien Teney, et al. ∙ The University of Adelaide

Part of the appeal of Visual Question Answering (VQA) is its promise to answer new questions about previously unseen images. Most current methods demand training questions that illustrate every possible concept, and will therefore never achieve this capability, since the volume of required training data would be prohibitive. Answering general questions about images requires methods capable of Zero-Shot VQA, that is, methods able to answer questions beyond the scope of the training questions. We propose a new evaluation protocol for VQA methods which measures their ability to perform Zero-Shot VQA, and in doing so highlights significant practical deficiencies of current approaches, some of which are masked by the biases in current datasets. We propose and evaluate several strategies for achieving Zero-Shot VQA, including methods based on pretrained word embeddings, object classifiers with semantic embeddings, and test-time retrieval of example images. Our extensive experiments are intended to serve as baselines for Zero-Shot VQA, and they also achieve state-of-the-art performance in the standard VQA evaluation setting.


1 Introduction

The task of Visual Question Answering (VQA) spans the fields of computer vision and natural language processing by requiring an algorithm to answer a previously unseen text question about an image. The recent interest [13, 3, 30], in part, reflects the enthusiasm for VQA as an indicator of progress towards deep scene understanding, which is the overarching goal of computer vision [7, 14]. The ability to answer truly general questions about images would also constitute a concrete step towards real Artificial Intelligence.

Figure 1: All test questions in our evaluation setting include words that do not appear in any training example; these unseen words occur in the test question itself and/or in the multiple-choice answers. This setting evaluates the capability of a VQA algorithm to generalize beyond its training examples. We demonstrate the benefit of additional sources of information, via pretrained intermediate representations (e.g. word embeddings and object detections) and at test time (exemplar retrieval).

A number of VQA datasets have been introduced, and a variety of methods have demonstrated impressive (yet possibly converging) results (see [30] for a survey). The training of most current VQA methods relies on a dataset of {question, image, answer} tuples illustrating all question types applied to all items of interest, in all situations of interest. No finite set of exemplars, however, can cover the diversity of the world that an ideal VQA system should be prepared to consider. A secondary problem is that the incentive to perform well on benchmark datasets does not encourage addressing rare, or novel, words and concepts. Most current methods are therefore designed to best learn – and often overfit – dataset biases. For example, it is common to consider a vocabulary limited to only the most frequent words and answers in the dataset. That practice completely discards rare concepts, let alone those that do not appear in the training set at all.

As an example, the training question How many giraffes are in the image ? is currently taken as an opportunity to learn to count giraffes specifically. We propose here that the opportunity is instead to learn to count. An ideal VQA system should therefore be able to generalize and answer questions about objects and situations that are not present in the VQA training set. We label this capability Zero-Shot VQA, inspired by the task of zero-shot classification. Our first contribution is an evaluation setting for VQA in which all test instances (questions and answers) include words not present in the training set. Our experiments in this setting expose common weaknesses of current VQA systems, namely poor generalization and over-reliance on dataset biases.

We show that most VQA datasets contain strong biases that render the interpretation and comparison of performances difficult. A small number of frequent words constitute a large fraction of the correct answers, and exploiting these statistical regularities achieves deceptively strong performance [3, 11, 34]. For example, most questions starting with How many… have two or three as their correct answer, and rarely zero or seventeen. Although such biases are actually present in the questions that humans ask, VQA methods that overfit to them may improve on the benchmarks without making any significant progress towards visual scene understanding. For example, the state-of-the-art method in [11] achieves an impressive accuracy of almost 65% correct answers on the Visual7W dataset. However, the authors also train a similar but blind model, which answers the question without analyzing the image, and which achieves 56% accuracy. This second figure is really more illuminating than the first, and one of its primary implications is that current methods for evaluating VQA performance are not a particularly good measure of a method's ability to understand visual scenes.

The contributions of this paper are summarized as follows.

  1. We define the Zero-Shot Visual Question Answering (ZS-VQA) problem, and propose a corresponding evaluation setting where each test instance contains one or several unseen words, i.e. words not present in any training instance.

  2. We propose a dataset that focuses exclusively on this setting, based on the Visual7W dataset [36], for which we define new training and test splits.

  3. We show that the paradigm followed by current VQA methods performs poorly in this setting, as correct answers cannot be as easily guessed by learning dataset biases.

  4. We describe and evaluate extensively a set of strategies for ZS-VQA, including incorporating auxiliary data at training and test time. They result in large improvements for ZS-VQA, and also in state-of-the-art performance in the standard VQA setting (on the original splits of the Visual7W dataset).

2 Related work

The task of visual question answering has received increasing interest since the seminal paper of Antol et al. [3]. Most recent methods are variations on the idea of a joint embedding of the image and the question using a deep neural network. The image and the question are passed through a convolutional neural network (CNN) and a recurrent neural network (e.g. an LSTM), respectively. They produce representations of the image and the question in a joint space, which can then be fed together into a classifier over an output vocabulary of possible answers. Consult [30] for a recent survey of the literature.

Most VQA systems are trained end-to-end, i.e. with supervision solely for their final output, on closed datasets of images, questions, and their correct answers. Several such datasets are available and they have increased in size and quality [3, 12, 13, 22, 36]. They remain however expensive to produce and are thus necessarily of limited size. This drives an increased interest in using additional sources of data.

Figure 2: Test questions from the proposed zero-shot test split of the Visual7W dataset, each presented with four multiple-choice answers (examples include What color are the barricades ?, Where is her caretaker ?, What are they using to draw ?, What are they eating ?, What animal is in the picture ?, What kind of birds are they ?, What does the semi truck say on its side ?, What are the structures in the background of the photo ?, What fruit is visible ?, and Where is this scene taking place ?). Each instance contains one or more unseen words, i.e. words not used in any training question or answer; in the original figure, these words are highlighted and tick marks indicate the correct answers among the given multiple choices.

On the language side, the word embeddings, i.e. the vectors used to represent the words, can be pretrained on a language modeling task [19, 17]. They capture semantics by mapping words of similar meaning to similar representations. Such pretrained embeddings have shown benefits for VQA, for example in [23, 6, 11, 24]. They can be pretrained without supervision on large corpora and incorporate words not necessarily present in the training questions/answers. This simple strategy can enable VQA systems to generalize to words not present in training questions. The syntactic structure of language for VQA has received much less attention, but recent work suggests that explicit parsing can bring useful information [24].

On the image side, the common representation uses features produced within a convolutional neural network (CNN) pretrained for image classification. As is the case for pretrained word embeddings, this leverages the larger amounts of data available for the pretraining task. Most VQA methods do not use the actual output of the classifier but its hidden features. An exception is [28], where the authors use language as an explicit intermediate representation for VQA. They represent an image as a set of recognized attributes, actions, and objects. In this paper, we evaluate both traditional CNN features and explicit object detections.

Finally, a few methods consider the test-time retrieval of additional data from knowledge bases [31, 27, 29]. Importantly, this data is not incorporated within the learned weights of the network. Only the behaviour for retrieving and incorporating external information is learned, and can then be applied at test time to concepts unseen during training. We experiment with a similar principle by retrieving additional visual exemplars at test time. In comparison, the information from knowledge bases in [31, 27, 29] is purely textual in nature.

The presence of strong biases and the long-tailed distribution of words are known issues in VQA datasets [3, 12, 36, 34] and more generally in natural language [1, 9]. Zhang et al. address the related evaluation issues with a balanced dataset of binary (yes/no) questions. It contains two versions of each question, with slightly different images that elicit opposite answers. Their evaluation setting better captures a system's ability to focus on the fine, meaningful details of the scene. However, it is only feasible for binary questions and synthetic (clip art) images. A similar motivation led to the zero-shot setting proposed in this paper, which has the advantage of being applicable to real images.

Let us finally note a recent surge of interest in better handling of rare and unknown words in various natural language applications [9, 32, 5].

                                      Standard splits                                Zero-shot splits
                                 Training   Validation      Test             Training   Validation      Test
Number of questions               69,817     28,020         42,031            63,128     10,651         10,559
Number of images                  14,366      5,678          8,609            15,616      7,937          7,920
Question types
  what / where / when            48/16/5%   48/16/5%       48/16/5%          46/17/5%   52/11/3%       52/12/3%
  who / why / how                10/6/15%   10/6/15%       10/6/15%          10/6/16%   10/13/10%      10/14/10%
Words unseen in training             –        2,640          3,962               –        6,880          6,847
                                            (non-disjoint sets)                         (disjoint sets)
Instances with ≥1 unseen word        –      2,360 (8%)     3,606 (9%)            –      10,651 (100%)  10,559 (100%)
  – in the question                  –        352 (8%×15%)   528 (9%×15%)        –       2,053 (19%)    2,092 (20%)
  – in the correct answer            –        448 (8%×19%)   740 (9%×21%)        –       2,362 (22%)    2,295 (22%)
  – in other (incorrect) choices     –      2,064 (8%×87%) 3,174 (9%×88%)        –       9,306 (87%)    9,181 (87%)

Table 1: Comparison of the original splits of the Visual7W dataset [36] with the proposed zero-shot splits. The proposed version removes a large number of words from the training set, reserving them as unseen words at test time. At least one of them appears in every test and validation instance (the 100% entries), either in the question itself, in the correct answer, or in other (incorrect) multiple choices.

3 Dataset for Zero-Shot VQA

We propose a dataset for VQA with a “zero-shot” aspect with respect to the questions and answers, but not to the visual concepts in the associated images. For example, we consider the question How many zebras are in this image ? to be zero-shot if no training question involves zebras. Images containing zebras may, however, appear in the training set (with questions involving other elements of those images) or be used to train auxiliary components, e.g. an image classifier that recognizes zebras. This distinction reflects the fact that CNNs pretrained on ImageNet [4] are commonly used in existing VQA methods, and the fact that VQA is the task that we are actually interested in.

3.1 Repurposing the Visual7W dataset

In multiple-choice VQA, each training or test instance is a tuple of an image, a question, and multiple answer choices (four in the dataset considered here). The question and answers are given as text in natural language. Exactly one of the answers is marked as correct, and is used for supervised training and for evaluation. A dataset is partitioned into training, validation, and test splits.

The words used in the questions and answers of VQA datasets follow a long-tail distribution typical of natural language [12, 36]. In other words, most questions and answers are made of words from a small vocabulary, but a large number of other words appear very infrequently. The typical strategy in VQA methods is to focus on a limited vocabulary and a limited set of possible answers. This makes the training practically easier, and the performance penalty remains reasonable since the rare words arise in only a small fraction of the test instances. It does, however, fundamentally limit the system to that restricted set of words and answers.

The Zero-Shot VQA dataset we propose is formed by defining new training, validation, and test splits for the “telling” task of the Visual7W dataset [36]. Visual7W is itself a subset of the Visual Genome dataset [12], and is the highest-quality dataset for VQA currently available in terms of size, answer distribution, human performance, and the quality of the multiple-choice answers.

We define the new splits such that every validation and test instance is a zero-shot question, which we define as one using at least one word that is not present in any training instance. The zero-shot instances can be further broken down according to whether unseen words appear in the question itself, in the correct answer, or in the other (incorrect) answers. These three sets are not mutually exclusive, as multiple unseen words can appear in the question and its answers. An analysis of the original splits of Visual7W shows that only 9% of test questions qualify as zero-shot (see Table 1).

3.2 Building zero-shot splits

To build our new splits, we hold out two distinct subsets of the words used throughout the whole dataset and reserve them for the validation and test splits, respectively. The words in the held-out subsets are randomly selected from those which appear fewer than 20 times over the whole dataset. This ensures that these unseen words are semantically rich, as opposed to common verbs or stop words. They typically describe fine-grained categories and very specific concepts (see examples in Fig. 2 and in the supplementary material). The validation and test splits are formed from all instances containing at least one word from their reserved set, ensuring no overlap between the sets. The training set is composed of all remaining instances, making sure, as in the original splits, to keep the images disjoint between the training and validation/test sets so as not to encourage overfitting. An analysis of the resulting splits is given in Table 1. Note that we preserve the other qualities of the original dataset, e.g. the approximate distribution of question types.
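As an illustration, the following Python sketch shows one way such splits could be constructed. It is not the released implementation: the instance format, the tokenizer, and the size of the held-out vocabularies are assumptions.

```python
# Illustrative sketch of the zero-shot split construction (not the released implementation).
# Assumes `instances` is a list of dicts with keys 'image_id', 'question', and 'answers'
# (the four multiple-choice strings).
import random
from collections import Counter

def tokenize(text):
    return text.lower().replace("?", " ").split()

def words_of(instance):
    return set(tokenize(instance["question"] + " " + " ".join(instance["answers"])))

def build_zero_shot_splits(instances, rare_threshold=20, n_heldout=4000, seed=0):
    # Count raw word frequencies over all questions and candidate answers.
    counts = Counter(w for ins in instances
                       for w in tokenize(ins["question"] + " " + " ".join(ins["answers"])))
    rare = sorted(w for w, c in counts.items() if c < rare_threshold)
    random.Random(seed).shuffle(rare)
    # Two disjoint vocabularies of rare words, reserved for validation and test.
    val_words = set(rare[:n_heldout])
    test_words = set(rare[n_heldout:2 * n_heldout])

    val = [ins for ins in instances if words_of(ins) & val_words]
    test = [ins for ins in instances
            if (words_of(ins) & test_words) and not (words_of(ins) & val_words)]
    held_images = {ins["image_id"] for ins in val + test}
    # Training instances contain no reserved word and no image used in val/test.
    train = [ins for ins in instances
             if not (words_of(ins) & (val_words | test_words))
             and ins["image_id"] not in held_images]
    return train, val, test
```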

We also annotate the test instances as to whether they contain unseen words in the question itself, in the correct answer, or in the other (incorrect) answers. We recommend reporting accuracy over the whole test set and on those non-disjoint subsets. We provide those same annotations for the standard splits, making it possible to report performance on zero-shot questions (albeit on a small number of them) of a method trained on the standard splits. Splits and annotations are available from the authors’ website.

Figure 3: Our neural network for VQA follows a straightforward architecture to evaluate the impact of various features representing the input question, image, and multiple-choice answers. We obtain their respective fixed-size vector representations q, v, and a_i by concatenating bags-of-words of the different features. The three representations are passed through non-linear mappings (learned weights followed by non-linearities, not shown) and combined with multiplicative or order interactions. A final logistic regression over the combined features produces a score for each candidate answer.

4 Methods for zero-shot VQA

We consider a neural network for VQA with a straightforward architecture. Our main objective is to evaluate additional features and pretrained representations of the inputs (image, question, and candidate answers). A simple architecture lets us evaluate these in relative isolation. In particular, our method does not include an attention mechanism, in contrast to many current approaches. The application of attention to the proposed features has no single obvious implementation and may warrant a research study of its own. Note also that each of the proposed improvements is evaluated on the basis of a relatively simple implementation. The goal is not to obtain the single best-performing model, but to guide future research towards the areas with the most promise and to provide reference performances of basic implementations.

4.1 Baseline method

Our network architecture is similar to baselines evaluated in other studies of VQA [3, 11, 35]. The overall principle (see Fig. 1) is to map the inputs, i.e. a question, an image, and candidate answers, to vector representations in a common space. The mappings to produce these representations are learned such that interactions (e.g. distances, products, order comparisons, …) between elements in this space capture semantic compatibility.

Our baseline represents the question with a bag of words (BoW). Each word is represented by a fixed-length vector with a look-up table that associates every possible word with a learned vector (unknown words at test time receive an empty vector of zeros). The BoW representation is the average of all non-empty vectors of the question words. We refer to this representation as the learned word embedding. Additional features, described below, are concatenated where required (Fig. 3), giving the final question features q. Each candidate answer is treated similarly, using a BoW optionally concatenated with additional features, giving the answer features a_i for each multiple choice i. The image is represented with global (image-wide) features of dimension 2048 extracted from the last pooling layer of a ResNet-152 [10] pretrained for image recognition on ImageNet. Note that this common practice is already a form of transfer learning, as opposed to the baseline language representation which learns the word embeddings from scratch. The CNN features are optionally concatenated with additional features described below, giving the image features v.

The features q, v, and a_i are combined in two stages with multiplicative interactions, first between the question and image representations, then with the candidate answers:

  h   = ReLU(W_q q + b_q) ∘ ReLU(W_v v + b_v)    (1)
  z_i = h ∘ ReLU(W_a a_i + b_a)                  (2)

with W_q, W_v, W_a and b_q, b_v, b_a learned weights and biases, ReLU a rectified linear unit, and ∘ the Hadamard (element-wise) product. Each candidate answer then receives a score obtained with a logistic regression using the combined features:

  s_i = σ(w^T z_i + b)    (3)

with learned weights and bias w and b, and σ the logistic function. The score s_i represents the compatibility between the input question, image, and a candidate answer. All weights, biases, and embeddings are trained end-to-end to minimize a cross-entropy loss, using labels of 1 for the correct candidate answers and 0 for the incorrect multiple choices.
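A minimal PyTorch sketch of this scoring function is given below. It follows the equations as reconstructed above; the dimensions and exact parameterization are illustrative assumptions rather than the released implementation.

```python
# Minimal sketch of the baseline scoring function (Eqs. 1-3); dimensions are illustrative.
import torch
import torch.nn as nn

class BaselineVQA(nn.Module):
    def __init__(self, d_q=300, d_v=2048, d_a=300, d_h=2048):
        super().__init__()
        self.f_q = nn.Linear(d_q, d_h)  # non-linear mapping of question features
        self.f_v = nn.Linear(d_v, d_h)  # non-linear mapping of image features
        self.f_a = nn.Linear(d_a, d_h)  # non-linear mapping of answer features
        self.score = nn.Linear(d_h, 1)  # final logistic regression

    def forward(self, q, v, a):
        # q: (B, d_q), v: (B, d_v), a: (B, 4, d_a) for the four multiple choices.
        h = torch.relu(self.f_q(q)) * torch.relu(self.f_v(v))   # Eq. (1)
        z = h.unsqueeze(1) * torch.relu(self.f_a(a))            # Eq. (2)
        return torch.sigmoid(self.score(z)).squeeze(-1)         # Eq. (3): (B, 4) scores

# Training minimizes a binary cross-entropy loss with label 1 for the correct choice:
# loss = nn.functional.binary_cross_entropy(model(q, v, a), targets)
```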

4.2 Improved language representations

Pretrained word embeddings

We compare the learned word embeddings (trained end-to-end within the VQA system) to embeddings pretrained on a language modeling task. This common practice [19, 17] has two advantages. First, the pretrained embeddings reflect word co-occurrences, and have been shown empirically to capture complex semantic relationships in their vector space. Second, the pretraining task is unsupervised, and embeddings can be learned from very large amounts of data, covering a much richer vocabulary than a VQA training set. Concretely, we evaluate GloVe embeddings of dimensions {50, 100, 200, 300} pretrained on Wikipedia and the Gigaword newswire corpus.
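For reference, a simple way to initialize an embedding matrix from publicly released GloVe vectors is sketched below; the file name and vocabulary handling are assumptions, and unknown words keep a zero vector, mirroring the baseline's treatment of unknown words.

```python
# Sketch: build an embedding matrix from a GloVe text file (one word followed by its vector per line).
import numpy as np

def load_glove_embeddings(path, vocab, dim=300):
    # vocab: dict mapping word -> row index in the embedding matrix.
    emb = np.zeros((len(vocab), dim), dtype=np.float32)
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            word, values = parts[0], parts[1:]
            if word in vocab and len(values) == dim:
                emb[vocab[word]] = np.asarray(values, dtype=np.float32)
    return emb

# emb = load_glove_embeddings("glove.6B.300d.txt", vocab)  # hypothetical path
```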

Sharing embeddings across stems

We propose sharing embeddings across words with the same stem (e.g. flower, flowers, and flowering). We hypothesize that the semantic meaning of words is often more important in the context of VQA than verb conjugation or the plural forms of nouns. The procedure reduces the number of unique embeddings to be learned (e.g. from 22k to 17k words in the original V7W training set). Moreover, novel words at test time that have appeared in another form or declension during training can now be associated with a relevant embedding. An obvious drawback of the approach is that it potentially associates the same representation with multiple words of different meanings, e.g. runner, ran, runs, and runnable. This exacerbates the issue of polysemy already present with standard word embeddings: a set of homonyms is mapped to a single – thus necessarily ambiguous – representation. Concretely, we replace every word in the input question and/or answer by its stem, obtained either with the classical Porter algorithm [20] or with the dictionary-based algorithm of the Stanford CoreNLP library [16].
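A sketch of this sharing scheme, here using NLTK's implementation of the Porter stemmer as a stand-in for the stemmers mentioned above:

```python
# Sketch: map every word to the embedding index of its stem, so that inflections share one row.
from nltk.stem import PorterStemmer

def build_stem_vocab(words):
    stemmer = PorterStemmer()
    stem_to_id, word_to_id = {}, {}
    for w in words:
        s = stemmer.stem(w)
        if s not in stem_to_id:
            stem_to_id[s] = len(stem_to_id)  # one embedding row per stem
        word_to_id[w] = stem_to_id[s]        # all inflections share that row
    return word_to_id, len(stem_to_id)

word_to_id, n_rows = build_stem_vocab(["flower", "flowers", "flowering"])
# All three words map to the same embedding index.
```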

Sharing embeddings between questions and answers

Our baseline implementation learns independent embeddings for words in questions and answers. One may hypothesize that both inputs could benefit from similar representations, and we compare independent embeddings versus a common shared one. The latter reduces the number of parameters to learn, but it forces the semantics of questions and answers to be represented identically.

Test-time exemplar retrieval

A hallmark of zero-shot-capable methods is their extensibility to novel concepts without retraining. We implement such a capability by retrieving, at test time, exemplar images from the web for all words (known or unknown) in the test questions and/or answers. Concretely, we build an additional representation of the question and/or of the candidate answers by retrieving the top-k images of each of its words from Google Images (k = 1 to 4). We extract global CNN features from the images (as described above for the input image) and average them over words and exemplars. The resulting vector of dimension 2048 is referred to as a visual embedding.
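The sketch below illustrates how such a visual embedding could be computed, with torchvision's ResNet-152 standing in for the feature extractor; `retrieve_images` is a hypothetical placeholder for the image-search step, not a real API.

```python
# Sketch: average ResNet features of a few exemplar images retrieved for each word.
import torch
import torchvision.models as models
import torchvision.transforms as T

resnet = models.resnet152(weights="IMAGENET1K_V1")
# Keep everything up to (and including) the last pooling layer, dropping the classifier.
extractor = torch.nn.Sequential(*list(resnet.children())[:-1]).eval()
preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

def visual_embedding(words, retrieve_images, k=4):
    # retrieve_images(word, k) is a placeholder returning up to k PIL images per word.
    feats = []
    with torch.no_grad():
        for w in words:
            for img in retrieve_images(w, k):
                x = preprocess(img).unsqueeze(0)
                feats.append(extractor(x).flatten(1))   # (1, 2048)
    if not feats:
        return torch.zeros(2048)
    return torch.cat(feats).mean(dim=0)                 # 2048-d visual embedding
```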

4.3 Improved image representations

Explicit object detection

In addition to the global CNN features representing the input image, we consider using explicit detections of objects in the scene. We obtain candidate detections from the YOLO detector pretrained on Pascal VOC [21] as a set of detection scores, each with the class of the detected object. We keep all detections above a certain threshold (varied across experiments so as to vary recall). We turn the set of detections into a fixed-size vector with a bag of words similar to the one used for our questions (see above): we associate the possible classes with a learned embedding (i.e. a look-up table) and sum these embeddings over all detections.
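A sketch of this bag-of-detections encoding; the number of classes matches Pascal VOC, and the threshold value is illustrative.

```python
# Sketch: encode a variable-size set of detections as a sum of learned class embeddings.
import torch
import torch.nn as nn

class DetectionBag(nn.Module):
    def __init__(self, num_classes=20, dim=300):  # 20 Pascal VOC classes
        super().__init__()
        self.class_emb = nn.Embedding(num_classes, dim)  # learned look-up table

    def forward(self, class_ids, scores, threshold=0.3):
        # class_ids: (N,) class index of each detection; scores: (N,) detection scores.
        keep = scores > threshold
        if keep.sum() == 0:
            return torch.zeros(self.class_emb.embedding_dim)
        return self.class_emb(class_ids[keep]).sum(dim=0)  # fixed-size vector
```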

Semantic object class embeddings

The detections presented above do not associate any semantic prior with the classes considered by the detector. Those classes are, however, known by their name, and we experiment with associating the detections with the pretrained GloVe word embedding (as used for the language model, see above) of their recognized class. We simply replace the look-up table with the pretrained embeddings of the words corresponding to the class names. We refer to this representation as semantic class embeddings.
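In the sketch below, the learned look-up table above is simply initialized with pretrained GloVe vectors of the class names; the fallback for multi-word or out-of-vocabulary names is an assumption.

```python
# Sketch: initialize the class look-up table with GloVe vectors of the VOC class names.
import torch
import torch.nn as nn

VOC_CLASSES = ["aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat",
               "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person",
               "pottedplant", "sheep", "sofa", "train", "tvmonitor"]

def semantic_class_embeddings(glove, dim=300):
    # glove: dict mapping a word to its vector; names missing from GloVe fall back to zeros here.
    rows = [torch.as_tensor(glove.get(name, [0.0] * dim), dtype=torch.float32)
            for name in VOC_CLASSES]
    return nn.Embedding.from_pretrained(torch.stack(rows), freeze=False)
```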

Order embeddings

We experiment with the idea of imposing an order between the representation of the question/image and that of the candidate answers, as proposed by Vendrov et al. for other vision-and-language tasks [25]. Whereas our baseline uses a symmetric product to relate the combined question/image features h and the mapped answer features a'_i = ReLU(W_a a_i + b_a), the idea of an order embedding is to place a hierarchy between the two modalities by measuring their compatibility with an antisymmetric operation (consult [25] for details). Practically, we replace Eq. 2 with

  e_i = max(0, a'_i − h)    (4)
  z_i = −(e_i ∘ e_i)        (5)

This imposes a partial order over the spaces of h and a'_i, the candidate answers being placed higher in the hierarchy (with smaller absolute coordinates) and thus deemed more general than a particular question/image pair. Crucially, we experiment with a reversed ordering by swapping h and a'_i in the above, which results in dramatically lower performance (see Section 5.2).
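A sketch of this interaction, consistent with the reconstruction of Eqs. (4)-(5) above (the exact form used in the paper may differ):

```python
# Sketch: antisymmetric order penalty replacing the symmetric product of Eq. (2).
import torch

def order_interaction(h, a_mapped):
    # h: (B, d) combined question/image features; a_mapped: (B, 4, d) mapped answers a'_i.
    # Answers sit higher in the hierarchy (smaller coordinates), so we penalize the
    # coordinates where a'_i exceeds h.
    e = torch.clamp(a_mapped - h.unsqueeze(1), min=0)   # order violations, Eq. (4)
    return -(e * e)                                      # Eq. (5), fed to the logistic regression
```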

5 Experiments

We conduct extensive experiments on both the original and the zero-shot splits of the Visual7W dataset. As hypothesized in our premise, we observe different behaviors in the two cases, and the proposed improvements have different impacts on the overall average performance in the two settings. Each experiment below considers one variation at a time of the baseline model with pretrained word embeddings of dimension 300, unless otherwise noted. Pretrained word embeddings are common practice and have a large positive impact, and they thus constitute our de facto reference for fair comparisons of additional improvements. Implementation details are provided in the supplementary material.

                                        Standard              Zero-shot splits
                                        splits      All    Z.S. Quest.  Z.S. Ans.  Z.S. Choices
Chance (lower bound)                     25.0      25.0      25.0        25.0        25.0
Human (upper bound)                      96.0      95.7      94.2        92.6        95.4
LSTM Q+I [15]                            52.1        –         –           –           –
LSTM-Att. [36]                           55.6        –         –           –           –
MCB [6]                                  62.2        –         –           –           –
Jabri et al. [11]                        64.8        –         –           –           –
Baseline model
(1) Learned word emb. d.300              59.5      47.3      43.0        36.7        47.5
(2) Pretr. word emb. d.300, l.r. 0.4     64.7      54.1      48.3        40.3        54.8
     Masking the question                62.7      52.6      47.7        37.7        53.0
     Masking the image                   56.6      48.3      43.5        36.1        48.7
     Masking question and image          52.9      46.5      43.8        35.3        46.7
Improvements over (2)
(3) Word stemming, CoreNLP               64.6      54.9      49.2        41.3        55.4
(4) Order embeddings                     65.4      55.3      48.6        32.5        56.1
(5) Data augmentation, ratio 0.5         64.9      54.7      48.1        39.6        55.4
(6) Visual emb., 4 exemplars             63.8      54.8      48.6        38.1        55.4
(7) Object det., thresholded             64.8      54.1      48.2        39.6        54.7
(8) Obj. det. with class emb.            64.8      54.6      48.5        39.2        55.3
(3) + (4)                                65.5      54.8      48.1        35.8        55.5
(3) + (4) + augm. 2.0                    65.7      55.2      47.4        33.6        56.1
(3) + (4) + (6)                          64.4      55.3      50.3        40.1        55.6
(3) + (4) + (6) + (8)                    64.6      55.8      49.9        40.0        56.5
(3) + (4) + (6) + (8) + augm. 0.5        63.5      56.0      49.5        36.7        56.8
Table 2: Quantitative results on the standard and zero-shot (Z.S.) splits of the Visual7W dataset (average accuracy in %). The proposed improvements generally have a larger impact in the Z.S. setting. In combination, all the proposed improvements significantly outperform the state-of-the-art.

5.1 Masking inputs

We first obtain an indication of the difficulty of the datasets by training a model with limited input, masking the question and/or the image. This forces the model to rely on dataset biases. Indeed, when masking both the question and the image, the only input is the set of multiple-choice answers, and the model can only learn to pick the common ones seen during training. As observed before [11], this strategy is sufficient to achieve a high performance with the standard splits. It is far less effective in the Z.S. setting, giving for example, when masking the question, 62.7% vs 52.6% in the standard and Z.S. settings, respectively (see Table 2 and Fig. 4, bottom right). In other words, answers in the Z.S. setting cannot be as easily guessed.
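A sketch of this masking protocol, following the procedure described in the supplementary material: rather than zeroing an input, it is shuffled within the mini-batch so that it carries no information about the instance it is paired with.

```python
# Sketch: "mask" an input by randomly permuting it within the mini-batch.
import torch

def mask_by_shuffling(x):
    # x: (B, d) batch of question or image features.
    return x[torch.randperm(x.size(0))]

# Example: a "blind" configuration masks the image features of every mini-batch.
# v_masked = mask_by_shuffling(v)
```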

5.2 Improved representations

Figure 4: Individual impact of each proposed improvement on the standard (std.) and zero-shot (Z.S.) splits of the Visual7W dataset (average accuracy in %; note the different vertical scale on the bottom row). See discussion in Section 5.2.

Pretrained word embeddings

We compare pretrained GloVe word embeddings of dimension 50 to 300 with an embedding learned from scratch of dimension 300 (noted “0” in the plot). Replacing learned word embeddings with pretrained ones has the largest impact of all tested improvements, in both the standard and Z.S. settings (Fig. 4, bottom left). There is an appreciable correlation between accuracy and the dimension of the embedding. Pretrained embeddings are more beneficial for representing the candidate answers than the questions, which can be explained by the larger amount of data (i.e. question words) available to learn the latter. We evaluate fine-tuning the pretrained embeddings with a learning rate between 0 and 1 relative to the other network parameters. Some fine-tuning always proves beneficial, but the Z.S. setting favors a smaller learning rate (Fig. 4, top left). We suppose that a larger fine-tuning rate may otherwise significantly alter the embeddings of frequent training words. The rest of the model then co-adapts, but the embeddings of rare and zero-shot words are not updated as much, leading to a negative performance impact in the Z.S. setting. Finally, we observe that fine-tuning a common embedding shared between questions and answers performs worse than independent ones. We conclude that the information captured by pretrained embeddings is relevant but not perfectly adequate for VQA, and that a single representation cannot capture the ideal semantics of both questions and answers.
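One way to implement such a relative learning rate is with optimizer parameter groups, sketched below; the module name `word_emb` is a hypothetical placeholder, and the paper trains with Adadelta.

```python
# Sketch: fine-tune the pretrained embeddings at a fraction of the base learning rate.
import torch

def make_optimizer(model, base_lr=1.0, relative_lr=0.4):
    emb_params = list(model.word_emb.parameters())   # hypothetical embedding module
    emb_ids = {id(p) for p in emb_params}
    other = [p for p in model.parameters() if id(p) not in emb_ids]
    return torch.optim.Adadelta([
        {"params": other, "lr": base_lr},
        {"params": emb_params, "lr": base_lr * relative_lr},  # e.g. 0.4x for embeddings
    ])
```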

Sharing embeddings across word stems

We obtain a clear benefit from sharing embeddings across words with a common stem (Fig. 4, top center). The procedure reduces the number of unique embeddings by about a quarter (from 22k to 17k in the original training set), which regularizes their learning, and it addresses some of the novel words at test time by mapping them to known stems. The observed impact is indeed larger in the Z.S. setting than in the standard one. The quality of the stemming algorithm does matter: the classical rule-based Porter algorithm performs worse than our baseline, and the improvements are obtained with a modern algorithm [16].

Test-time exemplar retrieval

We evaluate the proposed visual embeddings for representing the question, the candidate answers, or both. We obtain a net advantage in the Z.S. setting (Fig. 4, top right), correlated with the number of retrieved examples. The benefit is only appreciable when including visual embeddings for both the question and the answers. This indicates that the network may not succeed in learning to correlate visual features of the input image and of the visual embeddings, but only between the visual embeddings of the question and answers. A possible culprit is the different nature of top retrievals from Google and of the images in the Visual7W dataset. Surprisingly, visual embeddings impact performance negatively in the standard setting. We suspect that the language cues and dataset biases that can be exploited in the standard setting are more reliable than the visual embeddings. The inclusion of the latter during inference results in a negative impact frequent enough to hurt overall performance. In the Z.S. setting, the balance is shifted to more test cases that benefit from visual embeddings, resulting in a net benefit on overall performance.

Our straightforward implementation for using exemplars only hints at the potential benefits. Obvious extensions include applying it only to prominent words (images for does or will are likely uninformative) and retrieving exemplars of complete expressions (images of blue jay are more informative than images of blue and jay). The test-time use of novel exemplars is akin to the setting of one-shot and few-shot recognition methods (e.g. [26]), which could be adapted here.

Explicit object detections

We use detections at different levels of recall from YOLO [21] by varying a threshold on the minimum detection score (a specific model is trained for each threshold). The optimal threshold lies in a tight range (Fig. 4, middle left). A low recall misses important objects, while too high a recall can overwhelm the VQA system with irrelevant detections (false positives). An open research question is how to better integrate those detections, possibly with attention mechanisms. At the optimal threshold, we obtain a minor improvement with the proposed semantic class embeddings.

Order embeddings

The proposed order embeddings significantly improve over a symmetric interaction between features. Crucially, we verify that the improvement is caused by the actual order imposed on the embeddings and not merely by the different interactions. To do so, we replace the proposed order (a'_i above h) by its reverse, which results in performance well below the baseline (Fig. 4, middle center).

Data augmentation

We propose a simple form of data augmentation with additional training examples of incorrect answers. Our model ultimately measures the compatibility between a question/image pair and a candidate answer, and the intuition is to expand training to more combinations, drawn randomly within mini-batches to form additional incorrect candidate answers. The procedure proves beneficial (Fig. 4, middle right), with a larger relative improvement in the Z.S. setting. The augmentation ratio corresponds to the fraction of additional question / candidate-answer pairs (originally four per question).
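The sketch below illustrates one way to form such additional negatives by borrowing candidate answers from other questions in the same mini-batch; the exact sampling strategy used in the paper is not detailed, so this is an assumption.

```python
# Sketch: add extra incorrect candidate answers drawn from other questions in the batch.
import torch

def augment_candidates(a, labels, ratio=0.5):
    # a: (B, 4, d) candidate-answer features; labels: (B, 4), 1 for the correct choice.
    B, C, d = a.shape
    n_extra = int(round(ratio * C))          # ratio 0.5 -> 2 extra negatives per question
    perm = torch.randperm(B)                 # pair each question with another one in the batch
    idx = torch.randint(0, C, (B, n_extra))  # pick random answers of that other question
    extra = a[perm].gather(1, idx.unsqueeze(-1).expand(-1, -1, d))
    a_aug = torch.cat([a, extra], dim=1)     # (B, 4 + n_extra, d)
    labels_aug = torch.cat([labels, torch.zeros(B, n_extra, dtype=labels.dtype)], dim=1)
    return a_aug, labels_aug
```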

5.3 Comparison with the state-of-the-art

We finally evaluate a model incorporating all proposed improvements (see Table 2). It achieves the best overall performance on both the standard and Z.S. splits. The relative gains from combined improvements are not strictly cumulative, which indicates some overlap in the capabilities brought in by each. Part of the individual gains is likely attributable to increased model capacity, the benefit of which saturates at some point. On the standard splits, our best model clearly surpasses the existing state-of-the-art on this dataset [11]. We also trained our baseline and best models on reduced training data (random subsets). We observe a smooth drop-off in performance, especially in the Z.S. setting with our best method (Fig. 4, bottom center). This indicates good generalization, which, as argued in the introduction, should be a chief objective of VQA systems.

6 Conclusions

This paper defined a setting of visual question answering in which questions and answers contain words that were not seen during training. We rearranged the Visual7W dataset to allow an evaluation that focuses exclusively on such test cases. This setting requires greater generalization capabilities and leads to a more honest evaluation of deep image understanding. It also motivates alternative strategies. We showed that additional, auxiliary data, used for pretraining language and visual representations as well as at test time, is beneficial not only for ZS-VQA but in the traditional setting as well. Extensions of those strategies constitute promising directions for future research.

References

Supplementary material

Appendix A Implementation details

We provide below practical details of our implementation of the proposed method.

  1. All weights and biases in our neural networks are optimized with Adadelta [33], using mini-batches of 512 questions. We do not use dropout and avoid overfitting with early stopping by monitoring accuracy on the validation set. The accuracy reported in the experiments is thus measured on the test set at the epoch of highest performance on the validation set.

  2. All weights except the pretrained embeddings are initialized randomly as proposed by Glorot et al. [8].

  3. The pretrained word embeddings are GloVe vectors [19] trained on a corpus of 6 billion tokens from Wikipedia and the Gigaword newswire corpus. They are publicly available from the GloVe authors [18].

  4. The CNN features extracted from the input images and from the exemplars retrieved from Google are obtained from the last pooling layer of a residual network (ResNet). The network is a 152-layer model trained on ImageNet and publicly available from the ResNet authors [10].

  5. The dimension of the pretrained word embeddings is varied from 50 to 300 as noted in the experiments. The dimension of the learned word embeddings is set to 300. The dimension of the CNN features of the input image, and of the visual embeddings from retrieved exemplars (produced by ResNet), is 2048.

  6. The dimension of the combined representations h and z_i is 4096 and 2048 for the standard and zero-shot splits, respectively. Those values were chosen by cross-validation of the baseline model.

  7. Experiments on exemplar retrieval use the top-1 to top-4 images returned by Google for single words, limiting the search to color photographs. This avoids results such as logos or clip art. Note that the procedure is applied identically during training, i.e. we retrieve images for all words in the training questions and answers. Once the images are downloaded and their CNN features extracted, they are cached locally for efficiency.

  8. Experiments on masking inputs (questions and/or images) are performed by randomly swapping those inputs within mini-batches during training. This solution was chosen to avoid side effects, and was preferred over removing the inputs entirely (and reducing the network capacity) or setting them to zero or other arbitrary values.

  9. Plots reporting performance of the “best model” use the two models highlighted in bold in Table 2 for the standard and Z.S. settings.

Appendix B Examples of test questions

We provide below additional examples from the proposed zero-shot test split of the Visual7W dataset, in the same format as in Fig. 2.

The additional examples follow the same layout as Fig. 2: each question is shown with four multiple-choice answers, with the unseen words highlighted and the correct answer marked. The questions span all question types, for example: How would you hear this cat coming ?; What animals are these ?; What kind of picture ?; What has happened to the truck over time ?; Where is the cow ?; What has many archways ?; Where is the number 06 ?; What gender dominates this picture ?; Why is she on the side of the road ?; What brown object is around the cow 's neck ?; What colors are the zebra ?; What sport are they displaying ?; Why is the truck in a ditch ?; What does the white sign say ?; Who captured this photo ?; What kind of tree is shown ?; How are the scissors arranged ?; Why is it a rounded picture ?; Who has longer wool ?; What drink is advertised on the truck ?; What are the yaks eating ?; What is the number of busts in the room ?; What type of boat is in the picture ?; What color are the double-deck buses ?; Who is riding on the elephant ?; Why is there a fire ?; Who is standing on the tennis court ?; What is the table made of ?; What kind of video game is it ?; What has tassels ?; and Why is this photo illuminated ?.