CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning
When building artificial intelligence systems that can reason and answer questions about visual data, we need diagnostic tests to analyze our progress and discover shortcomings. Existing benchmarks for visual question answering can help, but have strong biases that models can exploit to correctly answer questions without reasoning. They also conflate multiple sources of error, making it hard to pinpoint model weaknesses. We present a diagnostic dataset that tests a range of visual reasoning abilities. It contains minimal biases and has detailed annotations describing the kind of reasoning each question requires. We use this dataset to analyze a variety of modern visual reasoning systems, providing novel insights into their abilities and limitations.
A long-standing goal of artificial intelligence research is to develop systems that can reason and answer questions about visual information. Recently, several datasets have been introduced to study this problem [antol15, gao15, krishna16, malinowski14, ren15, yu15, zhu15]. Each of these Visual Question Answering (VQA) datasets contains challenging natural language questions about images. Correctly answering these questions requires perceptual abilities such as recognizing objects, attributes, and spatial relationships as well as higher-level skills such as counting, performing logical inference, making comparisons, or leveraging commonsense world knowledge [ray16]. Numerous methods have attacked these problems [andreas16b, andreas16, fukui16, hiatt16, yang16], but many show only marginal improvements over strong baselines [antol15, jabri16, zhou15]. Unfortunately, our ability to understand the limitations of these methods is impeded by the inherent complexity of the VQA task. Are methods hampered by failures in recognition, poor reasoning, lack of commonsense knowledge, or something else?
The difficulty of understanding a system’s competences is exemplified by Clever Hans, a 1900s era horse who appeared to be able to answer arithmetic questions. Careful observation revealed that Hans was correctly “answering” questions by reacting to cues read off his human observers [pfungst11]. Statistical learning systems, like those used for VQA, may develop similar “cheating” approaches to superficially “solve” tasks without learning the underlying reasoning processes [sturm14, sturm16]. For instance, a statistical learner may correctly answer the question “What covers the ground?” not because it understands the scene but because biased datasets often ask questions about the ground when it is snow-covered [agrawal16, zhang16]. How can we determine whether a system is capable of sophisticated reasoning and not just exploiting biases of the world, similar to Clever Hans?
In this paper we propose a diagnostic dataset for studying the ability of VQA systems to perform visual reasoning. We refer to this dataset as the Compositional Language and Elementary Visual Reasoning diagnostics dataset (CLEVR; pronounced as clever in homage to Hans). CLEVR contains 100k rendered images and about one million automatically-generated questions, of which 853k are unique. It has challenging images and questions that test visual reasoning abilities such as counting, comparing, logical reasoning, and storing information in memory, as illustrated in Figure 1.
We designed CLEVR with the explicit goal of enabling detailed analysis of visual reasoning. Our images depict simple 3D shapes; this simplifies recognition and allows us to focus on reasoning skills. We ensure that the information in each image is complete and exclusive so that external information sources, such as commonsense knowledge, cannot increase the chance of correctly answering questions. We minimize question-conditional bias via rejection sampling within families of related questions, and avoid degenerate questions that are seemingly complex but contain simple shortcuts to the correct answer. Finally, we use structured ground-truth representations for both images and questions: images are annotated with ground-truth object positions and attributes, and questions are represented as functional programs that can be executed to answer the question (see Section 3). These representations facilitate in-depth analyses not possible with traditional VQA datasets.
These design choices also mean that while images in CLEVR may be visually simple, its questions are complex and require a range of reasoning skills. For instance, factorized representations may be required to generalize to unseen combinations of objects and attributes. Tasks such as counting or comparing may require short-term memory [hochreiter97] or attending to specific objects [hiatt16, yang16]. Questions that combine multiple subtasks in diverse ways may require compositional systems [andreas16b, andreas16] to answer.
We use CLEVR to analyze a suite of VQA models and discover weaknesses that are not widely known. For example, we find that current state-of-the-art VQA models struggle on tasks requiring short term memory, such as comparing the attributes of objects, or compositional reasoning, such as recognizing novel attribute combinations. These observations point to novel avenues for further research.
Finally, we stress that accuracy on CLEVR is not an end goal in itself: a hand-crafted system with explicit knowledge of the CLEVR universe might work well, but will not generalize to real-world settings. Therefore CLEVR should be used in conjunction with other VQA datasets in order to study the reasoning abilities of general VQA systems.
The CLEVR dataset, as well as code for generating new images and questions, will be made publicly available.
In recent years, a range of benchmarks for visual understanding have been proposed, including datasets for image captioning [chen15, farhadi10, lin14, young14], referring to objects [kazemzadeh14], relational graph prediction [krishna16], and visual Turing tests [geman15, malinowski14b]. CLEVR, our diagnostic dataset, is most closely related to benchmarks for visual question answering [antol15, gao15, krishna16, malinowski14, ren15, TapaswiCVPR16, yu15, zhu15], as it involves answering natural-language questions about images. The two main differences between CLEVR and other VQA datasets are that: (1) CLEVR minimizes biases of prior VQA datasets that can be used by learning systems to answer questions correctly without visual reasoning and (2) CLEVR’s synthetic nature and detailed annotations facilitate in-depth analyses of reasoning abilities that are impossible with existing datasets.
Prior work has attempted to mitigate biases in VQA datasets in simple cases such as yes/no questions [geman15, zhang16], but it is difficult to apply such bias-reduction approaches to more complex questions without a high-quality semantic representation of both questions and answers. In CLEVR, this semantic representation is provided by the functional program underlying each image-question pair, and biases are largely eliminated via sampling. Winograd schemas [levesque11] are another approach for controlling bias in question answering: these questions are carefully designed to be ambiguous based on syntax alone and require commonsense knowledge. Unfortunately this approach does not scale gracefully: the first phase of the 2016 Winograd Schema Challenge consists of just 60 hand-designed questions. CLEVR is also related to the bAbI question answering tasks [weston16] in that it aims to diagnose a set of clearly defined competences of a system, but CLEVR focuses on visual reasoning whereas bAbI is purely textual.
We are also not the first to consider synthetic data for studying (visual) reasoning. SHRDLU performed simple, interactive visual reasoning with the goal of moving specific objects in the visual scene [winograd72]; this study was one of the first to demonstrate the brittleness of manually programmed semantic understanding. The pioneering DAQUAR dataset [malinowski15] contains both synthetic and human-written questions, but its synthetic portion comprises only 420 questions generated from eight text templates. VQA [antol15] contains 150,000 natural-language questions about abstract scenes [zitnick13], but these questions do not control for question-conditional bias and are not equipped with functional program representations. CLEVR is similar in spirit to the SHAPES dataset [andreas16], but it is more complex and varied in both visual content and question variety and complexity: SHAPES contains 15,616 total questions but only 244 unique questions, while CLEVR contains nearly a million questions of which 853,554 are unique.
CLEVR provides a dataset that requires complex reasoning to solve and that can be used to conduct rich diagnostics to better understand the visual reasoning capabilities of VQA systems. This requires tight control over the dataset, which we achieve by using synthetic images and automatically generated questions. The images have associated ground-truth object locations and attributes, and the questions have an associated machine-readable form. These ground-truth structures allow us to analyze models based on, for example: question type, question topology (chain vs. tree), question length, and various forms of relationships between objects. Figure 2 gives a brief overview of the main components of CLEVR, which we describe in detail below.
The CLEVR universe contains three object shapes (cube, sphere, and cylinder) that come in two absolute sizes (small and large), two materials (shiny “metal” and matte “rubber”), and eight colors. Objects are spatially related via four relationships: “left”, “right”, “behind”, and “in front”. The semantics of these prepositions are complex and depend not only on relative object positions but also on camera viewpoint and context. We found it difficult to generate questions whose use of spatial relationships accorded with these subtle semantics. Instead we rely on a simple and unambiguous definition: projecting the camera viewpoint vector onto the ground plane defines the “behind” vector, and one object is behind another if its ground-plane position is further along the “behind” vector. The other relationships are defined similarly. Figure 2 (left) illustrates the objects, attributes, and spatial relationships in CLEVR. The CLEVR universe also includes one non-spatial relationship type that we refer to as the same-attribute relation. Two objects are in this relationship if they have equal attribute values for a specified attribute.
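As a concrete illustration of this definition, the following is a minimal sketch in Python (with NumPy) of how the “behind” relation could be computed; the "3d_coords" field and the camera convention are our assumptions, not the released data format.

import numpy as np

def is_behind(obj_a, obj_b, camera_pos):
    """True if obj_a is behind obj_b under the definition above."""
    # Viewing direction from the camera toward the scene origin, projected
    # onto the ground plane: this defines the "behind" vector.
    view = -np.asarray(camera_pos, dtype=float)
    behind_vec = view[:2] / np.linalg.norm(view[:2])
    # obj_a is behind obj_b if its ground-plane position lies further
    # along the "behind" vector than obj_b's position does.
    a_xy = np.asarray(obj_a["3d_coords"][:2], dtype=float)
    b_xy = np.asarray(obj_b["3d_coords"][:2], dtype=float)
    return float(a_xy @ behind_vec) > float(b_xy @ behind_vec)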
Scenes are represented as collections of objects annotated with shape, size, color, material, and position on the ground-plane. A scene can also be represented by a scene graph [johnson15, krishna16], where nodes are objects annotated with attributes and edges connect spatially related objects. A scene graph contains all ground-truth information for an image and could be used to replace the vision component of a VQA system with perfect sight.
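For concreteness, a hypothetical scene-graph annotation for a two-object scene might look as follows; all field names are illustrative rather than the released format.

scene_graph = {
    "objects": [
        {"shape": "cube", "size": "large", "color": "brown",
         "material": "rubber", "3d_coords": [-1.5, 2.0, 0.7]},
        {"shape": "cylinder", "size": "small", "color": "gray",
         "material": "metal", "3d_coords": [1.0, -0.5, 0.35]},
    ],
    # Directed edges: relationships[r][i] lists the indices of objects that
    # stand in relation r to object i (here, object 1 is left of object 0,
    # and object 0 is behind object 1).
    "relationships": {
        "left":   [[1], []],
        "right":  [[], [0]],
        "behind": [[], [0]],
        "front":  [[1], []],
    },
}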
CLEVR images are generated by randomly sampling a scene graph and rendering it using Blender [Blender]. Every scene contains between three and ten objects with random shapes, sizes, materials, colors, and positions. When placing objects we ensure that no objects intersect, that all objects are at least partially visible, and that there are small horizontal and vertical margins between the image-plane centers of each pair of objects; this helps reduce ambiguity in spatial relationships. In each image the positions of the lights and camera are randomly jittered.
Each question in CLEVR is associated with a functional program that can be executed on an image’s scene graph, yielding the answer to the question. Functional programs are built from simple basic functions that correspond to elementary operations of visual reasoning such as querying object attributes, counting sets of objects, or comparing values. As shown in Figure 2, complex questions can be represented by compositions of these simple building blocks. Full details about each basic function can be found in the supplementary material.
As we will see in Section 4, representing questions as functional programs enables rich analysis that would be impossible with natural-language questions. A question’s functional program tells us exactly which reasoning abilities are required to solve it, allowing us to compare performance on questions requiring different types of reasoning.
We must overcome several key challenges to generate a VQA dataset using functional programs. Functional building blocks can be used to construct an infinite number of possible functional programs, and we must decide which program structures to consider. We also need a method for converting functional programs to natural language in a way that minimizes question-conditional bias. We solve these problems using question families.
A question family contains a template for constructing functional programs and several text templates providing multiple ways of expressing these programs in natural language. For example, the question “How many red things are there?” can be formed by instantiating the text template “How many C M things are there?”, binding the parameters C and M (with types “color” and “material”) to the values red and nil. The functional program count(filter_color(red, scene())) for this question can be formed by instantiating the associated program template count(filter_color(C, filter_material(M, scene()))) with the same values, using the convention that functions taking a nil input are removed after instantiation.
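The following sketch illustrates this instantiation process; the angle-bracket template syntax and the flat program encoding are hypothetical simplifications of the scheme described above.

# One question family: a text template with typed parameters and a
# matching program template (encoded as a chain, applied right-to-left).
text_template = "How many <C> <M> things are there?"
program_template = ["count", "filter_color:<C>", "filter_material:<M>", "scene"]

def instantiate(bindings):
    """Bind parameter values; functions bound to nil (None) are removed."""
    text = text_template
    for param, value in bindings.items():
        text = text.replace(f"<{param}>", value or "")
    text = " ".join(text.split())  # collapse doubled spaces left by nil values
    program = []
    for f in program_template:
        name, _, param = f.partition(":")
        value = bindings.get(param.strip("<>")) if param else None
        if param and value is None:
            continue  # convention: functions taking a nil input are removed
        program.append((name, value))
    return text, program

text, program = instantiate({"C": "red", "M": None})
# text    -> "How many red things are there?"
# program -> [("count", None), ("filter_color", "red"), ("scene", None)]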
CLEVR contains a total of 90 question families, each with a single program template and an average of four text templates. Text templates were generated by manually writing one or two templates per family and then crowdsourcing question rewrites. To further increase language diversity we use a set of synonyms for each shape, color, and material. With up to 19 parameters per template, a small number of families can generate a huge number of unique questions; Figure 3 shows that of the nearly one million questions in CLEVR, more than 853k are unique. CLEVR can easily be extended by adding new question families.
Generating a question for an image is conceptually simple: we choose a question family, select values for each of its template parameters, execute the resulting program on the image’s scene graph to find the answer, and use one of the text templates from the question family to generate the final natural-language question.
However, many combinations of values give rise to questions which are either ill-posed or degenerate. The question “What color is the cube to the right of the sphere?” would be ill-posed if there were many cubes right of the sphere, or degenerate if there were only one cube in the scene since the reference to the sphere would then be unnecessary. Avoiding such ill-posed and degenerate questions is critical to ensure the correctness and complexity of our questions.
A naïve solution is to randomly sample combinations of values and reject those which lead to ill-posed or degenerate questions. However, the number of possible configurations for a question family is exponential in its number of parameters, and most of them are undesirable. This makes brute-force search intractable for our complex question families.
Instead, we employ a depth-first search to find valid values for instantiating question families. At each step of the search, we use ground-truth scene information to prune large swaths of the search space which are guaranteed to produce undesirable questions; for example we need not entertain questions of the form “What color is the S to the R of the sphere” for scenes that do not contain spheres.
Finally, we use rejection sampling to produce an approximately uniform answer distribution for each question family; this helps minimize question-conditional bias since all questions from the same family share linguistic structure.
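A minimal sketch of this sampling loop is shown below; search_valid_bindings (standing in for the depth-first search with pruning) and execute (the program executor) are hypothetical interfaces to the components described above.

import random
from collections import Counter

def sample_question(family, scene_graph, answer_counts, max_tries=100):
    """Rejection-sample an instantiation of a question family so that its
    answer distribution stays approximately uniform within the family."""
    for _ in range(max_tries):
        bindings = search_valid_bindings(family, scene_graph)  # hypothetical DFS
        if bindings is None:
            return None  # no well-posed, non-degenerate instantiation exists
        text, program = family.instantiate(bindings)
        answer = execute(program, scene_graph)  # hypothetical executor
        # Accept with probability inversely related to how often this answer
        # has already been produced for this family.
        if random.random() < 1.0 / (1.0 + answer_counts[answer]):
            answer_counts[answer] += 1
            return text, program, answer
    return None

answer_counts = Counter()  # one counter per question family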
VQA models typically represent images with features from pretrained CNNs and use word embeddings or recurrent networks to represent questions and/or answers. Models may train recurrent networks for answer generation [gao15, malinowski15, wu16], multiclass classifiers over common answers [antol15, hiatt16, ma15, ren15, zhou15, zhu15], or binary classifiers on image-question-answer triples [fukui16, jabri16, shih16]. Many methods incorporate attention over the image [fukui16, shih16, yang16, zhu15, xu16] or question [hiatt16]. Some methods incorporate memory [xiong2016dynamic] or dynamic network architectures [andreas16b, andreas16].
Experimenting with all methods is logistically challenging, so we reproduced a representative subset of methods: baselines that do not look at the image (Q-type mode, LSTM), a simple baseline (CNN+BoW) that performs near state-of-the-art [jabri16, zhou15], and more sophisticated methods using recurrent networks (CNN+LSTM), sophisticated feature pooling (CNN+LSTM+MCB), and spatial attention (CNN+LSTM+SA). (We performed initial experiments with dynamic module networks [andreas16b], but its parsing heuristics did not generalize to the complex questions in CLEVR, so it did not work out-of-the-box; see the supplementary material.) These methods are described in detail below.
Q-type mode: Similar to the “per Q-type prior” method in [antol15], this baseline predicts the most frequent training-set answer for each question’s type.
LSTM: Similar to “LSTM Q” in [antol15], the question is processed with learned word embeddings followed by a word-level LSTM [hochreiter97]. The final LSTM hidden state is passed to a multi-layer perceptron (MLP) that predicts a distribution over answers. This method uses no image information, so it can only model question-conditional bias.
CNN+BoW: Following [zhou15], the question is encoded by averaging word vectors for each word in the question, and the image is encoded using features from a convolutional network (CNN). The question and image features are concatenated and passed to an MLP which predicts a distribution over answers. We use word vectors trained on the GoogleNews corpus [mikolov13]; these are not fine-tuned during training.
CNN+LSTM: As above, images and questions are encoded using CNN features and final LSTM hidden states, respectively. These features are concatenated and passed to an MLP that predicts an answer distribution (a schematic sketch of this model appears after the method descriptions below).
CNN+LSTM+MCB: Images and questions are encoded as above, but instead of concatenation, their features are pooled using compact multimodal pooling (MCB) [fukui16, gao2016compact].
CNN+LSTM+SA: Again, the question and image are encoded using a CNN and LSTM, respectively. Following [yang16], these representations are combined using one or more rounds of soft spatial attention and the final answer distribution is predicted with an MLP.
Human: We used Mechanical Turk to collect human responses for 5500 random questions from the test set, taking a majority vote among three workers for each question.
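As a reference point for the joint models above, here is a minimal PyTorch-style sketch of the CNN+LSTM baseline; the layer sizes are illustrative defaults rather than the tuned configurations used in our experiments.

import torch
import torch.nn as nn

class CnnLstmBaseline(nn.Module):
    """Concatenate pooled CNN image features with the final LSTM state of
    the question and classify over the answer vocabulary."""
    def __init__(self, vocab_size, num_answers, embed_dim=300,
                 lstm_dim=512, img_dim=2048, hidden_dim=1024):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, lstm_dim, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(img_dim + lstm_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(p=0.5),
            nn.Linear(hidden_dim, num_answers),
        )

    def forward(self, img_feats, question_tokens):
        # img_feats: (B, img_dim) pooled CNN features;
        # question_tokens: (B, T) integer token ids.
        _, (h_n, _) = self.lstm(self.embed(question_tokens))
        q_feats = h_n[-1]  # final hidden state of the last layer, (B, lstm_dim)
        return self.mlp(torch.cat([img_feats, q_feats], dim=1))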
Our CNNs are ResNet-101 models pretrained on ImageNet [he16] and are not finetuned; images are resized to 448×448 prior to feature extraction. CNN+LSTM+SA extracts features from the last layer of the res4b22 block, giving 14×14×1024-dimensional features. All other methods extract features from the final average pooling layer, giving 2048-dimensional features. LSTMs use one or two layers with 512 or 1024 units per layer. MLPs use ReLU functions and dropout [srivastava14]; they have one or two hidden layers with between 1024 and 8192 units per layer. All models are trained using Adam [kingma14].
CLEVR is split into train, validation, and test sets (see Figure 3). We tuned hyperparameters (learning rate, dropout, word vector size, number and size of LSTM and MLP layers) independently per model based on the validation error. All experiments were designed on the validation set; after finalizing the design we ran each model once on the test set. All experimental findings generalized from the validation set to the test set.
We can use the program representation of questions to analyze model performance on different forms of reasoning. We first evaluate performance on each question type, defined as the outermost function in the program. Figure 4 shows results and detailed findings are discussed below.
Querying attributes: Query questions ask about an attribute of a particular object (e.g. “What color is the thing right of the red sphere?”). The CLEVR world has two sizes, eight colors, two materials, and three shapes. On questions asking about these different attributes, Q-type mode and LSTM obtain accuracies close to 50%, 12.5%, 50%, and 33.3% respectively, showing that the dataset has minimal question-conditional bias for these questions. CNN+LSTM+SA substantially outperforms all other models on these questions; its attention mechanism may help it focus on the target object and identify its attributes.
Comparing attributes: Attribute comparison questions ask whether two objects have the same value for some attribute (e.g. “Is the cube the same size as the sphere?”). The only valid answers are “yes” and “no”. Q-type mode and LSTM achieve accuracies close to 50%, confirming there is no dataset bias for these questions. Unlike attribute-query questions, attribute-comparison questions require a limited form of memory: models must identify the attributes of two objects and keep them in memory to compare them. Interestingly, none of the models are able to do so: all models have an accuracy of approximately 50%. This is also true for the CNN+LSTM+SA model, suggesting that its attention mechanism is not capable of attending to two objects at once to compare them. This illustrates how CLEVR can reveal limitations of models and motivate follow-up research, e.g., augmenting attention models with explicit memory.
Existence: Existence questions ask whether a certain type of object is present (e.g., “Are there any cubes to the right of the red thing?”). The 50% accuracy of Q-type mode shows that both answers are a priori equally likely, but the LSTM result of 60% does suggest a question-conditional bias. There may be correlations between question length and answer: questions with more filtering operations (e.g., “large red cube” vs. “red cube”) may be more likely to have “no” as the answer. Such biases may be present even with uniform answer distributions per question family, since questions from the same family may have different numbers of filtering functions. CNN+LSTM(+SA) outperforms LSTM, but its performance is still quite low.
Counting: Counting questions ask for the number of objects fulfilling some conditions (e.g. “How many red cubes are there?”); valid answers range from zero to ten. Images contain between three and ten objects and counting questions refer to subsets of objects, so ensuring a uniform answer distribution is very challenging; our rejection sampler therefore pushes towards a uniform distribution for these questions rather than enforcing it as a hard constraint. This results in a question-conditional bias, reflected in the 35% and 42% accuracies achieved by Q-type mode and LSTM. CNN+LSTM(+MCB) performs on par with LSTM, suggesting that CNN features contain little information relevant to counting. CNN+LSTM+SA performs slightly better, but at 52% its absolute performance is low.
Integer comparison: Integer comparison questions ask which of two object sets is larger (e.g. “Are there fewer cubes than red things?”); this requires counting, memory, and comparing integer quantities. The answer distribution is unbiased (see Q-type mode), but a set’s size may correlate with the length of its description, explaining the gap between LSTM and Q-type mode. CNN+BoW performs no better than chance: BoW mixes the words describing each set, making it impossible for the learner to discriminate between them. CNN+LSTM+SA outperforms LSTM on “less” and “more” questions, but no model outperforms LSTM on “equal” questions. Most models perform better on “less” than “more” questions due to asymmetric question families.
CLEVR questions contain two types of relationships: spatial and same-attribute (see Section 3). We can compare the relative difficulty of these two types by comparing model performance on questions with a single spatial relationship and questions with a single same-attribute relationship; results are shown in Figure 5. On query-attribute and counting questions we see that same-attribute questions are generally more difficult; the gap between CNN+LSTM+SA’s performance on spatial and same-attribute query questions is particularly large (93% vs. 78%). Same-attribute relationships may require a model to keep the attributes of one object “in memory” for comparison, suggesting again that models augmented with explicit memory may perform better on these questions.
We next evaluate model performance on different question topologies: chain-structured questions vs. tree-structured questions with two branches joined by a logical AND (see Figure 2). In Figure 6, we compare performance on chain-structured questions with two spatial relationships vs. tree-structured questions with one relationship along each branch. On query questions, CNN+LSTM+SA shows a large gap between chain and tree questions (92% vs. 74%); on count questions, CNN+LSTM+SA slightly outperforms LSTM on chain questions (55% vs. 49%) but no method outperforms LSTM on tree questions. Tree questions may be more difficult since they require models to perform two subtasks in parallel before fusing their results.
Intuitively, longer questions should be harder since they involve more reasoning steps. We define a question’s size to be the number of functions in its program, and in Figure 7 (bottom left) we show accuracy on query-attribute questions as a function of question size. (We exclude questions with same-attribute relations since their maximum size is 10, introducing unwanted correlations between size and difficulty; the excluded questions show the same trends, see the supplementary material.) Surprisingly, accuracy appears unrelated to question size.
However, many questions can be correctly answered even when some subtasks are not solved correctly. For example, the question in Figure 7 (top) can be answered correctly without identifying the correct large blue cylinder, because all large objects left of a cylinder are cylinders.
To quantify this effect, we define the effective question of an image-question pair: we prune functions from the question’s program to find the smallest program that, when executed on the scene graph for the question’s image, gives the same answer as the original question. (Pruned questions may be ill-posed (Section 3), so they are executed with modified semantics; see the supplementary material for details.) A question’s effective size is the size of its effective question. Questions whose effective size is smaller than their actual size need not be degenerate. The question in Figure 7 is not degenerate because the entire question is needed to resolve its object references (there are two blue cylinders and two rubber cylinders), but it has a small effective size since it can be correctly answered without resolving those references.
In Figure 7 (bottom), we show accuracy on query questions as a function of effective question size. The error rate of all models increases with effective question size, suggesting that models struggle with long reasoning chains.
We expect that questions with more spatial relationships should be more challenging since they require longer chains of reasoning. The top set of plots in Figure 8 shows accuracy on chain-structured questions with different numbers of relationships. (We restrict to chain-structured questions to avoid unwanted correlations between question topology and number of relationships.) Across all three question types, CNN+LSTM+SA shows a significant drop in accuracy for questions with one or more spatial relationships; other models are largely unaffected by spatial relationships.
Spatial relationships force models to reason about objects’ relative positions. However, as shown in Figure 8, some questions can be answered using absolute spatial reasoning. In this question the purple cube can be found by simply looking in the bottom half of the image; reasoning about its position relative to the metal sphere is unnecessary.
Questions only requiring absolute spatial reasoning can be identified by modifying the semantics of spatial relationship functions in their programs: instead of returning sets of objects related to the input object, they ignore their input object and return the set of objects in the half of the image corresponding to the relationship. A question only requires absolute spatial reasoning if executing its program with these modified semantics does not change its answer.
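A sketch of this modified execution for spatial relations is given below; the "pixel_coords" image-plane annotation and the mapping of “behind”/“front” to the upper/lower image halves are our assumptions.

def relate_absolute(relation, _input_obj, scene_objects, img_w, img_h):
    """Modified semantics: ignore the input object and return all objects
    in the half of the image named by the relation."""
    def in_half(obj):
        x, y = obj["pixel_coords"]  # hypothetical (x, y) image-plane position
        if relation == "left":
            return x < img_w / 2
        if relation == "right":
            return x > img_w / 2
        if relation == "behind":  # objects behind appear higher in the image
            return y < img_h / 2
        if relation == "front":
            return y > img_h / 2
        raise ValueError(relation)
    return [obj for obj in scene_objects if in_half(obj)]

# A question requires only absolute spatial reasoning if executing its
# program with relate_absolute in place of relate leaves its answer unchanged.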
The bottommost plots of Figure 8 show accuracy on chain-structured questions with different numbers of relationships, excluding questions that can be answered with absolute spatial reasoning. On query questions, CNN+LSTM+SA performs significantly worse when absolute spatial reasoning is excluded; on count questions no model outperforms LSTM, and on exist questions no model outperforms Q-type mode. These results suggest that models have not learned the semantics of spatial relationships.
Practical VQA systems should perform well on images and questions that contain novel combinations of attributes not seen during training. To do so models might need to learn disentangled representations for attributes, for example learning separate representations for color and shape instead of memorizing all possible color/shape combinations.
We can use CLEVR to test the ability of VQA models to perform such compositional generalization. We synthesize two new versions of CLEVR: in Condition A all cubes are gray, blue, brown, or yellow and all cylinders are red, green, purple, or cyan; in Condition B these shapes swap color palettes. Both conditions contain spheres of all eight colors.
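In code form, the two conditions amount to the following shape-to-palette constraint (a direct transcription of the description above):

# Condition A: restricted color palettes for cubes and cylinders.
CONDITION_A = {
    "cube":     ["gray", "blue", "brown", "yellow"],
    "cylinder": ["red", "green", "purple", "cyan"],
    "sphere":   ["gray", "blue", "brown", "yellow",
                 "red", "green", "purple", "cyan"],  # all eight colors
}
# Condition B swaps the cube and cylinder palettes; spheres are unchanged.
CONDITION_B = dict(CONDITION_A,
                   cube=CONDITION_A["cylinder"],
                   cylinder=CONDITION_A["cube"])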
We retrain models on Condition A and compare their performance when testing on Condition A (A→A) and testing on Condition B (A→B). In Figure 9 we show accuracy on query-color and query-material questions, separating questions asking about spheres (which are the same in A and B) from questions about cubes and cylinders (which change from A to B).
Between A→A and A→B, all models perform about the same when asked about the color of spheres, but perform much worse when asked about the color of cubes or cylinders; CNN+LSTM+SA drops from 85% to 51%. Models seem to learn strong biases about the colors of objects and cannot overcome these biases when conditions change.
When asked about the material of cubes and cylinders, CNN+LSTM+SA shows a smaller gap between A→A and A→B (90% vs. 81%); other models show no gap. Having seen metal cubes and red metal objects during training, models can understand the material of red metal cubes.
This paper has introduced CLEVR, a dataset designed to aid in diagnostic evaluation of visual question answering (VQA) systems by minimizing dataset bias and providing rich ground-truth representations for both images and questions. Our experiments demonstrate that CLEVR facilitates in-depth analysis not possible with other VQA datasets: our question representations allow us to slice the dataset along different axes (question type, relationship type, question topology, etc.), and comparing performance along these different axes allows us to better understand the reasoning capabilities of VQA systems. Our analysis has revealed several key shortcomings of current VQA systems:
Short-term memory: All systems we tested performed poorly in situations requiring short-term memory, including attribute comparison and integer equality questions (Section 4.2), same-attribute relationships (Section 4.3), and tree-structured questions (Section 4.4). Attribute comparison questions are of particular interest, since models can successfully identify attributes of objects but struggle to compare attributes.
Spatial Relationships: Models fail to learn the true semantics of spatial relationships, instead relying on absolute image position (Section 4.6).
Disentangled Representations: By training and testing models on different data distributions (Section 4.7) we argue that models do not learn representations that properly disentangle object attributes; they seem to learn strong biases from the training data and cannot overcome these biases when conditions change.
Our study also shows cases where current VQA systems are successful. In particular, spatial attention [yang16] allows models to focus on objects and identify their attributes even on questions requiring multiple steps of reasoning.
These observations present clear avenues for future work on VQA. We plan to use CLEVR to study models with explicit short-term memory, facilitating comparisons between values [graves16, joulin15b, weston15, xiong2016dynamic]; explore approaches that encourage learning disentangled representations [bengio14]; and investigate methods that compile custom network architectures for different patterns of reasoning [andreas16b, andreas16]. We hope that diagnostic datasets like CLEVR will help guide future research in VQA and enable rapid progress on this important task.
As described in Section 3 and shown in Figure 2, each question in CLEVR is associated with a functional program built from a set of basic functions. In this section we detail the semantics of these basic functional building blocks.
Our basic functional building blocks operate on values of the following types:
Object: A single object in the scene.
ObjectSet: A set of zero or more objects in the scene.
Integer: An integer between 0 and 10 (inclusive).
Boolean: Either yes or no.
Size: One of large or small.
Color: One of gray, red, blue, green, brown, purple, cyan, or yellow.
Shape: One of cube, sphere, or cylinder.
Material: One of rubber or metal.
Relation: One of left, right, in front, or behind.
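These types can be transcribed directly into code; the following Python definitions are a sketch whose value sets follow the list above.

from enum import Enum
from typing import Dict, List

Size = Enum("Size", ["SMALL", "LARGE"])
Color = Enum("Color", ["GRAY", "RED", "BLUE", "GREEN",
                       "BROWN", "PURPLE", "CYAN", "YELLOW"])
Shape = Enum("Shape", ["CUBE", "SPHERE", "CYLINDER"])
Material = Enum("Material", ["RUBBER", "METAL"])
Relation = Enum("Relation", ["LEFT", "RIGHT", "FRONT", "BEHIND"])

# An Object is represented here as a dict of attribute values and an
# ObjectSet as a list of such dicts; Integer answers are Python ints in
# [0, 10], and Boolean answers are the strings "yes" / "no".
Object = Dict[str, object]
ObjectSet = List[Object]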
The functional program representations of questions are built from the following set of basic building blocks. Each of these functions takes the image’s scene graph as an additional implicit input.
scene: Returns the set of all objects in the scene.
unique: If the input is a singleton set, then return it as a standalone Object; otherwise raise an exception and flag the question as ill-posed (see Section 3).
relate: Return all objects in the scene that have the specified spatial relation to the input object. For example, if the input object is a red cube and the input relation is left, then return the set of all objects in the scene that are left of the red cube.
count: Returns the size of the input set.
exist: Returns yes if the input set is nonempty and no if it is empty.
Filtering functions: These functions filter the input objects by some attribute, returning the subset of input objects that match the input attribute. For example calling filter_size with the first input small will return the set of all small objects in the second input.
Query functions: These functions return the specified attribute of the input object; for example calling query_color on a red object returns red.
intersect: Returns the intersection of the two input sets.
union: Returns the union of the two input sets.
Same-attribute relations: These functions return the set of objects that have the same attribute value as the input object, not including the input object. For example calling same_shape on a cube returns the set of all cubes in the scene, excluding the query cube.
Integer comparison: Checks whether the two integer inputs are equal, or whether the first is less than or greater than the second, returning either yes or no.
Attribute comparison: These functions return yes if their inputs are equal and no if they are not equal.
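A minimal reference implementation of these semantics, operating on the dict-based scene-graph sketch from Section 3 (the helper names are ours, not those of the released code):

def filter_attr(attr, value, objs):
    """Filtering functions: keep the objects whose attribute matches."""
    return [o for o in objs if o[attr] == value]

def unique(objs):
    """Raise if the object reference is ambiguous or empty (ill-posed)."""
    if len(objs) != 1:
        raise ValueError("ill-posed question")
    return objs[0]

def relate(relation, obj, scene):
    """Objects standing in the given spatial relation to obj, using the
    precomputed relationship edges from the scene graph."""
    i = scene["objects"].index(obj)
    return [scene["objects"][j] for j in scene["relationships"][relation][i]]

def count(objs):
    return len(objs)

def exist(objs):
    return "yes" if objs else "no"

def query_attr(attr, obj):
    return obj[attr]

def same_attr(attr, obj, scene):
    """Objects sharing obj's value for attr, excluding obj itself."""
    return [o for o in scene["objects"] if o[attr] == obj[attr] and o is not obj]

def equal_integer(a, b):
    return "yes" if a == b else "no"

def equal_attr(a, b):
    return "yes" if a == b else "no"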
In Section 4.5 we note that some questions can be correctly answered without correctly resolving all intermediate object references, and define a question’s effective question to quantitatively measure this effect.
For any question we can compute its effective question by pruning functions from the question’s program; the effective question is the smallest such pruned program that, when executed on the scene graph for the question’s image, gives the same answer as the original question.
Some pruned questions may be ill-posed, meaning that some object references do not refer to a unique object. For example, consider the question “What color is the cube behind the cylinder?”; its associated program is:

query_color(unique(filter_shape(cube, relate(behind, unique(filter_shape(cylinder, scene()))))))
Imagine executing this program on the scene shown in Figure 10. The innermost filter_shape gives a set containing the cylinder, the relate returns a set containing just the large cube in the back, the outer filter_shape does nothing, and the query_color returns brown.
This question is not degenerate (Section 3) because the reference to “the cube” cannot be resolved without the rest of the question; however, this question’s effective size is less than its actual size because the question can be correctly answered without resolving this object reference correctly.
To compute the effective question, we attempt to prune functions from this program. Starting from the innermost function and working out, whenever we find a function whose input type is Object or ObjectSet, we construct a pruned question by replacing that function’s input with a scene function and executing it. The smallest such pruned program that gives the same answer as the original program is the effective question.
Pruned questions may be ill-posed, so we execute them with modified semantics. The output type of the unique function is changed from Object to ObjectSet, and it simply returns its input set. All functions taking an Object input are modified to take an ObjectSet input instead by mapping the original function over its input set and flattening the resulting set; thus the relate functions return the set of objects in the scene that have the specified relationship with any of the input objects, and the query functions return sets of values rather than single values.
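A sketch of the resulting pruning loop is shown below; functions_inward_out, input_type, replace_input_with_scene, and execute_modified are hypothetical helpers for program traversal, rewriting, and set-valued execution.

def effective_question(program, scene_graph):
    """Smallest pruned program whose set of answers under the modified
    semantics still matches the original answer."""
    target = execute(program, scene_graph)  # answer of the full program
    best = program
    for i in functions_inward_out(program):  # innermost function first
        if input_type(program, i) not in ("Object", "ObjectSet"):
            continue
        pruned = replace_input_with_scene(best, i)
        # execute_modified returns the set of answers obtainable under the
        # set-valued semantics described above.
        if execute_modified(pruned, scene_graph) != {target}:
            break  # pruning any further changes the answer
        best = pruned
    return best  # its size is the question's effective size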
Therefore for this example question we consider the following sequence of pruned programs. First we prune the inner filter_shape function:

query_color(unique(filter_shape(cube, relate(behind, unique(scene())))))
The relate function now returns the set of objects which are behind some object, so it returns the large cube and the cylinder (since it is behind the small cube). The filter_shape function removes the cylinder, and the query_color returns a singleton set containing brown.
Next we prune the inner unique function:

query_color(unique(filter_shape(cube, relate(behind, scene()))))
Since unique computes the identity for pruned questions, execution is the same as above.
Next we prune the relate function:

query_color(unique(filter_shape(cube, scene())))
Now the filter_shape returns the set of both cubes, but since both are brown the query_color still returns a singleton set containing brown.
Next we prune the filter_shape function:

query_color(unique(scene()))
Now the query_color receives the set of all three input objects, so it returns a set containing brown and gray, which is different from the original question.
The effective question is therefore:

query_color(unique(filter_shape(cube, scene())))
Figure 7 of the main paper shows model accuracy on query-attribute questions as a function of actual and effective question size, excluding questions with same-attribute relationships. Questions with same-attribute relationships have a maximum question size of 10 but questions without same-attribute relationships have a maximum size of 20; combining these questions thus leads to unwanted correlations between question size and difficulty.
In Figure 11 we show model accuracy vs. actual and effective question size for questions with same-attribute relationships. Similar to Figure 7, we see that model accuracy either remains constant or increases as actual question size increases, but all models show a clear decrease in accuracy as effective question size increases.
Module networks [andreas16b, andreas16] are a novel approach to visual question answering where a set of differentiable modules are used to assemble a custom network architecture to answer each question. Each module is responsible for performing a specific function such as finding a particular type of object, describing the current object of attention, or performing a logical and operation to merge attention masks. This approach seems like a natural fit for the rich, compositional questions in CLEVR; unfortunately we found that parsing heuristics tuned for the VQA dataset did not generalize to the longer, more complex questions in CLEVR.
Dynamic module networks [andreas16b] generate network architectures by performing a dependency parse of the question, using a set of heuristics to compute a set of layout fragments, combining these fragments to create candidate layouts, and ranking the candidate layouts using an MLP.
For some questions, the heuristics are unable to produce any layout fragments; in this case, the system uses a simple default network architecture as a fallback for answering that question. On a random sample of 10,000 questions from the VQA dataset [antol15], we found that dynamic module networks resorted to the default architecture for 7.8% of questions; on a random sample of 10,000 questions from CLEVR, the default architecture was used for 28.9% of questions. This suggests that the parsing heuristics used for VQA do not transfer to the questions in CLEVR; therefore the method of [andreas16b] did not work out-of-the-box on CLEVR.
The remaining pages show randomly selected images and questions from CLEVR. Each question is annotated with its answer, question type, and size. Recall from Section 3 that a question’s type is the outermost function in the question’s functional program, and a question’s size is the number of functions in its program.
Acknowledgments We thank Deepak Pathak, Piotr Dollár, Ranjay Krishna, Animesh Garg, and Danfei Xu for helpful comments and discussion.