Understand, Compose and Respond - Answering Visual Questions by a Composition of Abstract Procedures

10/25/2018 · by Ben Zion Vatashsky, et al. · Weizmann Institute of Science

An image-related question defines a specific visual task that is required in order to produce an appropriate answer. The answer may depend on a minor detail in the image and require complex reasoning and use of prior knowledge. When humans perform this task, they are able to do it in a flexible and robust manner, integrating modularly any novel visual capability with diverse options for various elaborations of the task. In contrast, current approaches to solve this problem by a machine are based on casting the problem as an end-to-end learning problem, which lacks such abilities. We present a different approach, inspired by the aforementioned human capabilities. The approach is based on the compositional structure of the question. The underlying idea is that a question has an abstract representation based on its structure, which is compositional in nature. The question can consequently be answered by a composition of procedures corresponding to its substructures. The basic elements of the representation are logical patterns, which are put together to represent the question. These patterns include a parametric representation for object classes, properties and relations. Each basic pattern is mapped into a basic procedure that includes meaningful visual tasks, and the patterns are composed to produce the overall answering procedure. The UnCoRd (Understand Compose and Respond) system, based on this approach, integrates existing detection and classification schemes for a set of object classes, properties and relations. These schemes are incorporated in a modular manner, providing elaborated answers and corrections for negative answers. In addition, an external knowledge base is queried for required common knowledge. We performed a qualitative analysis of the system, which demonstrates its representation capabilities and provides suggestions for future developments.


1 Introduction

Human ability to answer a question related to an image is remarkable in several ways. Given a single image, a large number of different questions can be answered about it. Answering these questions may require the detection and analysis of subtle, non-salient cues. Prior information and data obtained through experience are also incorporated into the process, enabling answers to questions that may be highly complex. The answering process itself is open to reasoning, allowing for example elaborations on the answer, or explanations of how it was reached. In the last few years, the problem of image question answering by a machine was addressed by many studies [Teney, Anderson, He,  HengelTeney et al.2017a, Pandhre  SodhaniPandhre  Sodhani2017, Wu, Teney, Wang, Shen, Dick,  HengelWu et al.2016a, Kafle  KananKafle  Kanan2016], mostly by treating the problem as an end-to-end multi-class training problem. In these methods, the image representation is based on the last convolutional layer of a pre-trained Convolutional Neural Network (CNN) [LeCun, Bottou, Bengio,  HaffnerLeCun et al.1998]. It is fused with the question features (mostly represented using a Recurrent Neural Network (RNN), e.g. an LSTM [Hochreiter  SchmidhuberHochreiter  Schmidhuber1997]) to generate embedded features that are used to predict the answer from the common answers of a training set.
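To make the contrast concrete, the following is a minimal, illustrative sketch (in PyTorch) of the end-to-end CNN+LSTM fusion classifiers described above; the architecture, dimensions and fusion choice are generic placeholders rather than any specific published model.

```python
# Illustrative sketch of an end-to-end VQA classifier: frozen CNN image
# features fused with an LSTM question encoding, mapped to a fixed answer set.
import torch
import torch.nn as nn
import torchvision.models as models

class EndToEndVQA(nn.Module):
    def __init__(self, vocab_size, num_answers, embed_dim=300, hidden_dim=1024):
        super().__init__()
        cnn = models.resnet152(weights="IMAGENET1K_V1")
        self.cnn = nn.Sequential(*list(cnn.children())[:-1])   # up to the last conv/pool layer
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.img_proj = nn.Linear(2048, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, num_answers)

    def forward(self, image, question_tokens):
        with torch.no_grad():                       # pre-trained, frozen visual features
            v = self.cnn(image).flatten(1)          # (B, 2048)
        q_seq, _ = self.lstm(self.embed(question_tokens))
        q = q_seq[:, -1]                            # last hidden state as question encoding
        fused = torch.tanh(self.img_proj(v)) * torch.tanh(q)   # simple element-wise fusion
        return self.classifier(fused)               # scores over common training-set answers
```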

Though current existing methods show statistical success on the trained datasets (e.g. VQA [Antol, Agrawal, Lu, Mitchell, Batra, Zitnick,  ParikhAntol et al.2015], VQA v2 [Goyal, Khot, Summers-Stay, Batra,  ParikhGoyal et al.2017], CLEVR [Johnson, Hariharan, van der Maaten, Fei-Fei, Zitnick,  GirshickJohnson et al.2017a]), they do so by exploiting biases of the questions and the specific datasets [Xu, Chen, Liu, Rohrbach, Darell,  SongXu et al.2017, Agrawal, Batra,  ParikhAgrawal et al.2016]. The human abilities and understanding mentioned above are missing from these methods. Casting the problem into an end-to-end multi-class problem makes it practically impossible to obtain "human like" understanding of the question and of the answering process itself, a process that for humans can be broken into meaningful pieces, which are used to provide elaborations and analysis of the answer. An additional characteristic of the human answering process is the use of modular independent structures, where novel detection abilities may be integrated into the process. For example, learning to identify a new object class allows integrating this object into a variety of questions without requiring an additional training procedure. Finally, the question may guide the answering procedure to focus on specific and subtle details that may be lost in a general feature extraction. Such abilities are missing from current machine answering algorithms.

In the approach described below, we develop a framework that proceeds along the following steps. It generates a meaningful representation of the question, maps the question representation into a corresponding answering procedure, and applies the answering procedure to the image. The answering process is determined by the question itself and the details of its composition. Our scheme includes a representation of the query's meaning, in which the query is broken into its components. The individual components are handled by procedures that correspond to the type of the component, using existing visual estimators. These procedures are then combined together to provide the final answer. The entire process and its components, including the required visual estimations (such as classification, detection, segmentation and others), their order and combination, depend on the question and are structured to produce an appropriate response. This process does not require any question-answering training and is not biased towards the statistics of a specific visual question answering dataset, as current end-to-end approaches are.

Our scheme is focused on visual aspects of the image and not on specific domain knowledge. Although we utilize external knowledge sources, it is mainly to assist in understanding the question, and not as a fundamental information source for the answer. A relevant question for our system is a question that can be answered by any human (who understands the question) with an intact visual system, but without depending on specific domain knowledge. For example, the question "What famous book did the man in the picture write?" requires domain-specific knowledge (who is the man? what books did he write? which book is famous?). Such questions are not in the scope of this work, but they could be answered using a richer knowledge base.

The system we propose and describe in this work handles a wide range of questions about images, without training on any questions (zero-shot learning). We concentrate on designing a general process for this task and not on fitting results to the statistics of a specific dataset, as current end-to-end approaches do. Our system uses many existing methods for different visual tasks, such as detection, classification, segmentation, and extracting objects' properties and relations. In some cases novel detection methods were developed; however, this is not a main focus of the work, as our system is modular, enabling 'plugging in' new detectors to enhance its capabilities.

1.1 The structure of questions

A central aspect of our scheme is that different questions share a similar structure or sub-components with similar structure. For instance, the following questions have components with a common structure:

What kind of pants is the person on the bed wearing? → person on bed
Is the giraffe behind a fence? → giraffe behind fence

The part with common structure can be represented as:

There exist X of class c1 and Y of class c2, such that r(X, Y)

Such structures may serve as building blocks for a compositional question representation. All components with similar structures can be handled by the same procedure, performing part of the answering task. In our analysis, questions could be represented by a combination of a few types of structures, which we refer to as "basic patterns". These patterns are short parametric logical phrases that represent an atomic segment of the question structure. Each basic pattern dictates a particular implementation scheme utilizing a pool of implemented building blocks. The combination of basic patterns determines the entire procedure of answering the question. One advantage of such a scheme is that it is modular, allowing the addition of building blocks to increase the scope of the scheme, with no dependency on the statistics of a specific visual questions dataset. A second advantage is that the coverage of queries grows exponentially with the number of building blocks, without the need to encounter such queries as training examples. An additional advantage is "understanding" capability: the basic meaningful components break the process into parts and allow a separate analysis of each component, including reasons for failure and explanations.

The aspect of question coverage is also addressed in other directions, such as increasing the recognizable vocabulary of the question using commonsense knowledge.

1.2 Utilizing commonsense knowledge

In many cases answering a question requires the integration of prior commonsense knowledge, especially about semantic relations between concepts. For example, when answering the question 'What animal is this?', detection capabilities for specific animals (e.g. horse, dog, cat) will not suffice, since the answer requires the general notion of 'animal' and which particular instances belong to it. However, a query to an external knowledge database (e.g. ConceptNet [Speer  HavasiSpeer  Havasi2013]) may provide subcategories of 'animal'. Consequently, specific detectors can be activated to seek these specific recognizable animal types. These knowledge databases are mostly based on information extracted from the internet and include commonsense information about the world. Querying such a database allows the completion of missing information such as semantic connections between object classes (e.g. synonym, superordinate, subordinate) as in the example above, the typical usage of different objects, and more. Integrating this type of information is important when answering questions asked by humans, as it is common knowledge and treated as universally available.

2 Related Work

Visual question answering has developed dramatically in the last few years [Pandhre  SodhaniPandhre  Sodhani2017, Wu, Teney, Wang, Shen, Dick,  HengelWu et al.2016a, Kafle  KananKafle  Kanan2016]. Practically all current works are based on casting the problem into a multi-class classification problem, where image features, retrieved by a Convolutional Neural Network, are fused with question features (mostly extracted by a Recurrent Neural Network) and used to predict one of the common training-set answers, mostly short and succinct answers. These methods have the advantage of not requiring a complicated parsing and understanding process, and may present decent results when trained and tested on current existing datasets, yet they lack some important human characteristics, like using a compositional process that utilizes existing and meaningful sub-processes. Using meaningful sub-processes allows humans to focus on different aspects and scopes according to the specific task, utilize existing abilities and modularly integrate novel ones, understand limitations, and provide elaborations, including suggestions of alternatives.

Incorporating the question information is largely addressed by seeking mechanisms for image-language feature fusion. A large focus in this line of work was on simplifying bilinear pooling (which is based on the outer product of the two feature vectors), either by reducing the dimensionality of the features [Fukui, Park, Yang, Rohrbach, Darrell,  RohrbachFukui et al.2016] or by a low-rank factorization [Ben-younes, Cadene, Cord,  ThomeBen-younes et al.2017, Yu, Yu, Fan,  TaoYu et al.2017].

In order to extract image information that is more informative for the question and avoid the noise of irrelevant image areas, many works incorporated attention mechanisms. During the attention stage, image areas that are considered more relevant are multiplied by higher weights and contribute more to answering the question. Attention may be stacked over multiple stages [Yang, He, Gao, Deng,  SmolaYang et al.2016], with the motivation of refining it for complicated questions. Extracting relevant areas was also performed by integrating regions of detected objects related to question words [Ilievski, Yan,  FengIlievski et al.2016]. The attention concept was also extended to include both image features and the question representation [Lu, Yang, Batra,  ParikhLu et al.2016b], where both attention types affect each other. Additional attention mechanisms utilize CRFs [Zhu, Zhao, Huang, Tu,  MaZhu et al.2017], consider all word-region interactions [Nguyen  OkataniNguyen  Okatani2018], incorporate correlations between the image, question and candidate answer [Schwartz, Schwing,  HazanSchwartz et al.2017], and combine grid-based and object-detection-based regions [Lu, Li, Zhang, Wang,  WangLu et al.2018, Anderson, He, Buehler, Teney, Johnson, Gould,  ZhangAnderson et al.2017].

Combining results of meaningful tasks (other than using pre-trained networks as visual features), such as object detection, was the focus of several additional works. One such work uses object and attribute recognition tasks for proposed regions and combines them with corresponding representations from the question and candidate answer [Gupta, Shih, Singh,  HoiemGupta et al.2017]. The use of visual concepts (object class and attributes) of attended regions and comparing them to concepts extracted from the question was proposed as well [Agrawal, Batra, Parikh,  KembhaviAgrawal et al.2018]. In another work, concatenating pairs of vectors representing two detected objects and their properties with the encoded question was used to allow relation reasoning [Desta, Chen,  KornutaDesta et al.2018]. Objects and the relations between them were utilized in a work that used graph representations for both the image (synthetic images) and the question [Teney, Liu,  van den HengelTeney et al.2017b]. For the image graph, objects were the nodes and the spatial relations between them were the edges; for the question graph, words were the nodes and their dependencies were the edges. Representations were merged in an attention-like mechanism to fuse the features and predict the answer.

One work used "facts" extracted from the image, including scene type, detected objects, properties and relations [Wang, Wu, Shen,  van den HengelWang et al.2017], combined them with a co-attention mechanism into a fused feature vector and used attention weights to provide the contributing facts. Fact extraction is not guided by the question, and may therefore contribute little to questions on non-salient details. In addition, any modification of the "facts" detectors would force a full retraining of the answering module. Providing reasoning was also addressed by merging the answer with the most relevant image caption [Li, Tao, Joty, Cai,  LuoLi et al.2018b]. Image-caption-based reasoning (comparing relations extracted from the parsed caption and the question) was also used to allow answer modification based on Probabilistic Soft Logic (providing contributing relations as evidence) [Aditya, Yang,  BaralAditya et al.2018]. In some cases the representation was based on image captions, where relevant words (based on image caption data), a sentence describing the image and the question were fused to feed the answer classifier [Li, Fu, Yu, Mei,  LuoLi et al.2018a].

The compositionality concept in visual question answering was addressed by the Neural Module Network (NMN) works, which compose a dynamic network out of trained modules. The original layout of these modules is based on the dependency parsing of the question [Andreas, Rohrbach, Darrell,  KleinAndreas et al.2016b] and was also enhanced to include learning of the layout selection [Andreas, Rohrbach, Darrell,  KleinAndreas et al.2016a]. Following the release of the CLEVR dataset [Johnson, Hariharan, van der Maaten, Fei-Fei, Zitnick,  GirshickJohnson et al.2017a], which includes annotations for the answering programs, the layout was also learned in a supervised manner according to these programs [Johnson, Hariharan, van der Maaten, Hoffman, Fei-Fei, Zitnick,  GirshickJohnson et al.2017b, Hu, Andreas, Rohrbach, Darrell,  SaenkoHu et al.2017]. It is important to note that even though meaningful programs may be learned and corresponding modules are assigned, the modules are not trained to perform any independent meaningful task; their learned function is only to serve as components of the question answering network trained for a specific dataset. This means that there is no flexibility to incorporate existing methods, as in our approach, or to modularly modify and improve the modules. NMNs require a large number of question-answer examples, unlike our approach, which requires none. As NMNs provide answers by classification, no elaborations or limitation-aware answers (e.g. 'Unknown class: scissors') are possible. In addition, there is no utilization of commonsense information.

Another aspect of question answering related to our work is integrating external prior knowledge. One approach focused on questions that require external knowledge in addition to the image. This was addressed by querying knowledge databases according to visual concepts (objects, image scenes and image attributes) detected in the image. The query was generated either by mapping the question to a template [Wang, Wu, Shen, Hengel,  DickWang et al.2015] or, in a modified version, by a learned mapping [Wang, Wu, Shen, Hengel,  DickWang et al.2016]. Another approach merged external knowledge, extracted using detected image attributes, with the image representation (detected attributes and generated captions) and the question [Wu, Wang, Shen, Dick,  van den HengelWu et al.2016b]. Integrating external knowledge using a Dynamic Memory Network [Xiong, Merity,  SocherXiong et al.2016] was also proposed, where knowledge base queries are based on detected objects and question keywords [Li, Su,  ZhuLi et al.2017].

Common to all the above approaches is that they cast the problem as a single learning problem (mostly end-to-end, multi-class classification), tailored for a specific dataset. Incorporation of compositionality, reasoning, attention mechanisms, external knowledge and visual detection tasks can all be described as parts of an "improved" feature extraction for the final classification task. No meaningful, independent tasks are used in the answering process, as naturally done by humans. When humans learn to identify a new object, property or relation, they can immediately incorporate it into their answering mechanism. Such modularity does not exist in current visual question answering systems, where each change requires a full retraining of the answering system. It is also evident that while existing methods may provide reasonable statistical results on existing datasets, they do so by exploiting inherent biases, which leads to insensitivity to the full details of questions and images, with a tendency to fail on novel characteristics [Agrawal, Batra,  ParikhAgrawal et al.2016]. A system that builds and runs a meaningful process, tailored to the question, without "seeing" any question-answer example has not, as far as we know, been proposed before. Such a process, utilizing existing visual analyzers and external knowledge, is completely modular, aware of its detection limitations, and can elaborate on and correct negative or ungrounded answers. Desired and important capabilities may be addressed, even if they are not statistically prominent.

3 UnCoRd Answering System

3.1 Approach Overview

Our Understand, Compose and Respond (UnCoRd) approach is based on the following observations:

  • There is a representation of the question in terms of objects, their classes, properties and relations, including quantifiers and logical connectives as well as non-logical symbols: predicates and functions. The representation has an 'abstract' structure, i.e. it is independent of the particular objects, classes, properties and relations, which are represented as parameters. A single abstract representation can represent many different concrete questions.

    Our main thesis is that the procedure to be applied for obtaining the answer depends on the abstract structure of the question and not the particular elements. Hence, it is important to use the right kind of abstract representation, which will allow this mapping to procedures (where all questions with the same abstract structure require the same procedure). A proper parsing and mapping of the language question to its abstract representation should be obtained to use this method.

  • The question has a compositional structure: there are basic components put together in particular ways. The abstract representations are composed from ’basic patterns’ and methods for putting them together into more complex compound structures. This compound structure determines how the procedures are constructed. There are basic procedures for the basic patterns, and methods of composing from them a more complex procedure to deal with the compound abstract structures. In other words, we get a procedure for the entire question by having procedures for the basic components and a procedure to put them together.

We would like our system to meet the following criteria:

  • Answer correctly and efficiently.

  • “Understanding” the question, in the sense of:

    • Breaking the answering procedure into a set of simple visual tasks.

    • Identifying which tasks it can perform and what its limitations are; indicating if something is missing or unknown.

    • Ability to explain and reason - elaboration of the answering process using the image and intermediate results, including error correction and alternative suggestion.

  • Modularity and robustness: handling questions and image categories of various types, not limited by a training set.

  • Though not using a human psychology model, the ability to handle questions that people answer easily (and may be ”hard” for computers) is desired, e.g. ’odd man out’.

A question can be seen as a statement about the image that the answering system tries to make true or refute. Making the statement true requires an assignment of the particular classes, properties and relations to the image. Their identification in the image is based on pre-trained classifiers and detectors. The recognizable set is modular and can be increased by adding new detectors or switching to stronger ones. Logical operations are used to generate logic sentences with a formulation that fits first-order logic (including functions) with some extensions.

The answering procedure is generated according to the input question in the following manner:

Question → Question representation → Answering procedure

A proper representation is fundamental to allow a successful mapping of the question into the answering routine. This representation should be concise and support generating the same procedure when applied to similarly structured questions with different choices of classes, properties and relations. To obtain that, the visual elements (object classes, object properties and object relations) are parameters, integrated using logic operations (e.g. ∧, ∨, ¬) and quantifiers (e.g. ∃, ∀) into basic logic patterns corresponding to specific structures. These patterns are combined and merged to compose more complicated structures that create the representation of the question and can be mapped to the answering procedure.

We use a directed graph to describe the question, which is a natural choice in our case and allows diverse compositions of substructures. In this graph each node represents an object entity and its description (e.g. a list of required properties). These nodes are linked by the graph edges, which represent relations between objects. The graph is divided into small segments that relate either to one node and correspond to part of its information (e.g. object class and one property) or to an edge and the two classes of the nodes it connects. Each of these graph segments matches a basic pattern that is handled by a corresponding procedure, using the specific visual elements of this substructure. The graph representation allows decomposing the answering procedure into a set of elementary procedures and putting them together to generate a modular answering procedure. The elementary procedures invoke visual analyzers, which are the basic modules of the process. Each class, property and relation has a visual analyzer to establish it. More general visual operations that serve more than one particular visual element (e.g. depth estimation) are activated according to need, and their results are available to all basic procedures. The overall routine is obtained by applying these procedures and operations in an appropriate order, to appropriate objects, where the number of required assignments per object is set by the quantifier of the corresponding node. The visual elements may have 'types', such as classes that can be basic or subordinate (i.e. basic with additional properties), properties that may be comparative (e.g. 'older than') and relations that can be symmetric (e.g. 'beside') or not.
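As an illustration only, the following minimal sketch shows one possible data structure for such a question graph; the class and field names are hypothetical and not the system's actual implementation.

```python
# A minimal, illustrative question-graph structure: nodes hold an object's
# class, required properties, an optionally queried property and a quantifier;
# directed edges hold relations between nodes.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class QuestionNode:
    obj_class: Optional[str]                 # e.g. 'car'; None if the class itself is queried
    properties: List[str] = field(default_factory=list)   # required properties, e.g. ['red']
    queried_property: Optional[str] = None   # function property to report, e.g. 'color'
    quantifier: str = "exists"               # default existence; may be 'all' or a number

@dataclass
class QuestionEdge:
    relation: str                            # e.g. 'to the right of'
    source: int                              # index of the source node
    target: int                              # index of the target node

@dataclass
class QuestionGraph:
    nodes: List[QuestionNode]
    edges: List[QuestionEdge]

# "Is there a red car to the right of the yellow bus?"
graph = QuestionGraph(
    nodes=[QuestionNode("car", ["red"]), QuestionNode("bus", ["yellow"])],
    edges=[QuestionEdge("to the right of", source=0, target=1)],
)
```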

The entire process of answering a visual question is described in Figure 1. It starts by receiving the input language question and mapping it to a graph representation. The next stage is running a recursive procedure that follows the graph and invokes the procedures associated with the basic structures, using the specific visual elements as inputs. After the results are obtained, the answer is returned.

[Figure 1 diagram: question → map into a graph representation → run a recursive procedure following the graph (using the image) → answer]
Figure 1: A diagram for the process of answering a visual question

Questions with a simple structure (e.g. "Is there a red car?") can be represented by matching one specific pattern to the question. This covers a wide range of questions; however, by allowing a composition of simple patterns into more complicated structures, the quantity of supported questions rises substantially (from 60% to 90%, according to an analysis of 542 questions on images asked freely by people, using a set of 12 patterns). This composition is done using a graph. For example, in the question "Is there a red car to the right of the yellow bus?" there are two parts with a simple structure, "Is there an object of class c with a property p?", connected by the relation "to the right of", which corresponds to another simple structure: "Is there an object of class c1 and an object of class c2 that have the relation r between them?". The graph representing the question is:

Node 1: class: car, properties: {red}
Node 2: class: bus, properties: {yellow}
Edge (Node 1 → Node 2): 'to the right of'

When a specific question is given, the question is parsed and mapped to a directed graph, where the visual elements are its parameters. This graph corresponds to a logic expression that is composed of simple expressions, which may share the object variables. Some of the parametric visual elements are variables that require estimation based on the image. Once the variables are estimated, the logic expression is evaluated (as true or false) and the query is answered accordingly. The formulation of the logic expression fits first-order logic (including functions) with some extensions (e.g. a variable-sized set of arguments or outputs for some functions).

Each simple logic expression is related to a basic pattern, which corresponds to a basic procedure. The basic procedure obtains an answer to the expression by activating visual analyzers according to the types of object classes, properties and relations (which are inputs to the basic procedure). Such a system will have the ability of constant improvement by adding detectors for new classes, properties and relations according to requirements. Similar characteristics are also evident in human learning, where new learned details are integrated into the existing mechanism of world perception.

The UnCoRd system is implemented following the approach described above. It answers visual questions using a composed process that follows the graph representation of the question, activating real world visual analyzers. This system is described in the following section.

3.2 System Description

3.2.1 Mapping to a Directed Graph

One of the system’s main tasks is to translate the query, given in natural language, into an abstract representation which will then be mapped into a procedure (the first step, described in Figure 1). We first use the START parser [KatzKatz1988, KatzKatz1997] for transforming the question into a set of ternary expressions of the form [subject relation object]. For example the ternary expressions representing the question ”Is there a red car?” are [car be null] and [car has_property red].

The generated set of ternary expressions is used for the generation of a graph representation, where nodes represent objects and edges represent relations between objects. Each node includes all of the object's requirements according to the question, mainly its class, properties that may be required (e.g. 'red') or queried (e.g. 'what color'), and quantifiers that differ from the default existence quantifier (e.g. 'all', 'two'). The directed edges correspond to relations between objects, where the edge direction indicates the direction of the relation. Each edge is also assigned a direction of progress for the answering procedure. It is instantiated as the relation direction, but may be modified according to the initial object detection to enhance detection abilities (see Section 3.2.2 for details). An example of a mapping of a question to a directed graph can be seen in Figure 2.
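A simplified, hypothetical sketch of this step is given below: ternary expressions of the form [subject relation object] are folded into node dictionaries and relation edges. The relation names handled here ('be', 'has_property') follow the example above; everything else is illustrative.

```python
# Illustrative conversion of START-style ternary expressions into graph
# nodes and edges (a simplification of the mapping described in the text).
def ternaries_to_graph(ternaries):
    nodes, edges = {}, []
    def ensure_node(name):
        nodes.setdefault(name, {"class": name, "properties": [], "quantifier": "exists"})
    for subj, rel, obj in ternaries:
        if rel == "be" and obj in (None, "null"):      # bare existence of the subject
            ensure_node(subj)
        elif rel == "has_property":                     # attach a required property
            ensure_node(subj)
            nodes[subj]["properties"].append(obj)
        else:                                           # any other relation becomes an edge
            ensure_node(subj)
            ensure_node(obj)
            edges.append((subj, rel, obj))
    return nodes, edges

# "Is there a red car?" -> [car be null], [car has_property red]
nodes, edges = ternaries_to_graph([("car", "be", None), ("car", "has_property", "red")])
```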

Node 1: class: child, properties: {tall}, quantifier: 2
Node 2: class: cat, properties: {small, red}, quantifier: 'all'
Node 3: class: grass, properties: {green}
Node 4: class: car
Edge (Node 1 → Node 2): 'look_at'
Edge (Node 2 → Node 3): 'on'
Edge (Node 2 → Node 4): 'behind'
Figure 2: An example of the directed graph representing the question: 'Are the two tall children looking at all the red small cats that are on the green grass and behind the car?', where each node specifies the object's class, a list of its required properties and the required quantifier.

The graph representation is used to fit an answering procedure to each particular question. Fragments of information are extracted from subgraphs that include up to two connected nodes. A graph fragment includes a subset of elements (classes, properties, property functions and relations) that has a mapping to one of a few basic logic patterns. This mapping, combined with the particular accompanying visual elements, defines a logic expression that selects and guides a component of the answering procedure. For example, a fragment consisting of a node's class and a required property is mapped to the pattern ∃X: c(X) ∧ p(X). The specific class and property define the particular logic expression that should be checked. Such mappings are done for the entire graph, where each fragment of it is mapped into a basic logic pattern and specific visual elements. These simple logic expressions, joined using logic operations, constitute one logic expression that represents the entire question.

Each basic logic pattern has a dedicated procedure that performs the evaluation required to confirm or refute it, using visual analysis of the image. The procedure provides an answer according to an accompanying query.

We use the following notations for describing the basic logic patterns:

X, Y: Objects.
c(X): A class, evaluated for object X (as True/False), e.g. 'person', 'boy', 'bird', 'train'.
p(X): A predicate property (predicate of arity 1), evaluated for object X (as True/False), e.g. 'blue', 'male', 'big'.
f(X): A property function. Returns properties of a specific type, e.g. 'color', 'age', 'size'.
F({X}): A global property function for a subset of objects of the same class ({X : c(X)}). Returns properties of a specific type, e.g. 'quantity', 'difference', 'similarity'.
p_f(X): A predicate property, constrained to the possible return values of f (e.g. 'blue' for f = 'color').
v_f: One of the possible values returned by f (e.g. v_color = 'blue').
r(X, Y): A relation between objects X and Y (predicate of arity 2), e.g. below(X, Y), and in the same manner R(X, Y) for an unknown relation.
?: A query, the requested answer.

Objects (or other elements) starting with a capital letter (e.g. X, C, R) are unknown elements (variables) that should be estimated according to the image.

The particular patterns used were selected since they provide a small, simple and basic set that can naturally compose the logic representation of the question. This small set provides high flexibility in composing a wide variety of logic expressions using the different visual elements. A conducted survey and other checks showed that this set is empirically sufficient to represent the set of analyzed queries.

Following are the basic logic patterns that are mapped to basic procedures in the question answering process, each followed by a description of its corresponding graph fragment. The existence quantifier (∃) may be replaced by other quantifiers (e.g. ∀ or a counting quantifier).

  • Property Existence: ∃X: c(X) ∧ p(X)

    (graph fragment: a single node with class c and a required property p)

    Examples: 'Is there a brown bear?' (query for validity with a specific object class)
    'What is the purple object?' (unknown and queried object class)

    An example of a modification due to a quantifier parameter: ∀X: c(X) → p(X), e.g. 'Are all bears brown?'

  • Function Property: ∃X: c(X), f(X) = ?

    (graph fragment: a single node with class c and a queried function property f)

    Example: 'What color is the chair?'

  • Property of a Set: F({X : c(X)}) = ?

    (graph fragment: a single node with class c and a queried set property F)

    Example: 'How many planes are in the photo?'

  • Object Existence: ∃X: c(X)

    (graph fragment: a single node with class c)

    Examples: 'Is this a dog?'
    'What is it?' (unknown and queried object class)

  • Relation Existence: ∃X, Y: c1(X) ∧ c2(Y) ∧ r(X, Y)

    ∃X, Y: C1(X) ∧ c2(Y) ∧ r(X, Y) (one of the classes unknown and queried)

    (graph fragment: two nodes with classes c1 and c2, connected by an edge with relation r)

    Examples: 'Is the man looking at the children?' (validity query)
    'What is on top of the television?' (query for one of the classes)
The combination and composition of these patterns has powerful representation capabilities and provides a mapping to a set of basic procedures that constitute the full answering procedure. Composing the procedure out of "real-world" visual tasks allows both the use of existing detectors, including separate improvement of each task, and the explanation, elaboration and correction of answers.
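To make the pattern-to-procedure mapping concrete, the following is an illustrative sketch; the analyzer stubs (is_class, has_property, estimate_property, holds) stand in for the system's actual visual analyzers and are purely hypothetical.

```python
# Illustrative mapping of basic logic patterns to basic procedures.
# Detected objects are represented as dicts; the helpers below are stubs
# standing in for real visual analyzers.
def is_class(obj, obj_class):
    return obj.get("class") == obj_class

def has_property(obj, prop):
    return prop in obj.get("properties", [])

def estimate_property(obj, prop_function):
    return obj.get(prop_function)               # e.g. obj['color']

def holds(relation, x, y):
    return (relation, y.get("class")) in x.get("relations", [])

def property_existence(objects, obj_class, prop):
    """exists X: c(X) and p(X) -- e.g. 'Is there a brown bear?'"""
    return any(is_class(o, obj_class) and has_property(o, prop) for o in objects)

def function_property(objects, obj_class, prop_function):
    """exists X: c(X), report f(X) -- e.g. 'What color is the chair?'"""
    for o in objects:
        if is_class(o, obj_class):
            return estimate_property(o, prop_function)
    return None

def object_existence(objects, obj_class):
    """exists X: c(X) -- e.g. 'Is this a dog?'"""
    return any(is_class(o, obj_class) for o in objects)

def relation_existence(objects, class1, class2, relation):
    """exists X, Y: c1(X) and c2(Y) and r(X, Y) -- e.g. 'Is the man looking at the children?'"""
    return any(is_class(x, class1) and is_class(y, class2) and holds(relation, x, y)
               for x in objects for y in objects if x is not y)
```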

As mentioned above, modified quantifiers may be added to nodes according to the number of objects required in the question (see Figure 2). These quantifiers may be either numbers (e.g. 'Are there three guys?') or 'all' for the entire group of objects. The group setting may depend on subtle phrasing differences, which affect the answering procedure's flow and results, as can be seen in Figure 3.

Figure 3: An example for ’all’ quantifier: The question in (a) requires all ’dog’ objects to be both black and small, hence the first dog that is not black renders the logic phrase false and the answer is “no” (failed object and reason are marked in the image). The question in (b) requires only that the black dogs would be small, hence all dogs are checked for color, and the size of the black ones is verified to be small. Since it is true, the answer is “yes”.
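A schematic sketch of how a node's quantifier could change the check over the detected candidates is given below; the predicate 'matches' stands in for the node's property checks and the exact counting rule is a simplification.

```python
# Illustrative quantifier handling over a node's candidate objects
# (cf. Figure 3); 'matches' is a placeholder for the node's property checks.
def evaluate_quantifier(quantifier, candidates, matches):
    satisfied = [obj for obj in candidates if matches(obj)]
    if quantifier == "all":                      # every detected object must satisfy the checks
        return len(candidates) > 0 and len(satisfied) == len(candidates)
    if str(quantifier).isdigit():                # e.g. 'Are there three guys?' (simplified)
        return len(satisfied) >= int(quantifier)
    return len(satisfied) >= 1                   # default existence quantifier
```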

The graph naturally represents objects, their properties and binary connections between them. Though this covers a wide variety of questions, using global image information and some extensions to the basic graph increases the support for additional attributes. A property of a group is an example of such an extension. Properties that use global information are 'closest' and 'size' (which is relative to other objects).

Specific implementations for complicated attributes may be added as dedicated tasks or by preprocessing that breaks them into graph-plausible segments. An example of such an implementation in our system is 'odd man out' (e.g. "How is one cow not like the others?"), where the relations 'diff_<property>' and 'sim_<property>' (for different and similar values of a property, correspondingly) are used to check and compare the properties of objects. An example is given in Figure 4. The 'similarity' attribute (which queries for a property that is similar for all objects in the group) is handled in the same manner.
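A possible sketch of the 'odd man out' treatment is shown below: compare one property across the group and report the value that differs for exactly one object. The function and field names are illustrative, not the system's implementation.

```python
# Illustrative 'odd man out' check: find a property whose value differs for
# exactly one object in a group of the same class.
from collections import Counter

def odd_man_out(objects, prop_function, get_property):
    """objects: detected objects of one class; get_property(obj, f) returns obj's value of f."""
    values = [get_property(obj, prop_function) for obj in objects]
    counts = Counter(values).most_common()
    if len(counts) != 2:
        return None                                # no single differing value for this property
    (common_value, n_common), (odd_value, n_odd) = counts
    if n_odd != 1 or n_common < 2:
        return None
    return prop_function, odd_value, objects[values.index(odd_value)]

# e.g. bird colors ['red', 'red', 'yellow', 'red'] -> ('color', 'yellow', <the yellow bird>)
```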


Q: What difference does one bird have?
A: color (yellow), object center: (95, 325)
Figure 4: An 'odd man out' question for objects of class 'bird'. This is a complicated attribute that requires special treatment and mapping to the graph representation. Bounding boxes are marked for the birds with the common property (in red) and for the 'odd man out' bird (in yellow). [Object detection is based on faster R-CNN + DeepLab].

The main building blocks of the question representation are the visual elements: object classes, object properties and object relations.

  • Object Classes: The object class is the category of object required by the question. It does not necessarily match the used object detector. To enlarge the coverage of supported object classes, we define a few categories of object classes and handle them accordingly.

    • Basic Classes

      These are the classes specifically covered by the main multi-class object detector. We currently use instance segmentation by mask R-CNN [He, Gkioxari, Dollár,  GirshickHe et al.2017] for the 80 classes of COCO dataset [Lin, Maire, Belongie, Hays, Perona, Ramanan, Dollár,  ZitnickLin et al.2014]. Having the segmented object is very useful as this accuracy is required in many cases (e.g. for the relation ’touch’). Other detection methods are also integrated and may be used instead. In many of the Figures, object detection is based on faster R-CNN [Ren, He, Girshick,  SunRen et al.2015] complemented by DeepLab’s semantic segmentation [Chen, Papandreou, Kokkinos, Murphy,  YuilleChen et al.2015, Papandreou, Chen, Murphy,  YuillePapandreou et al.2015, Krähenbühl  KoltunKrähenbühl  Koltun2011] (for the 20 classes of PASCAL VOC dataset [Everingham, Eslami, Van Gool, Williams, Winn,  ZissermanEveringham et al.2014]).

    • Subordinate Classes

      When the requested class is a sub-group of a basic class, an object of this basic class should be detected and then additional properties are checked. This is used for the 'person' subordinate classes (e.g. 'woman'), where face detection [Mathias, Benenson, Pedersoli,  Van GoolMathias et al.2014] is activated for the detected 'person' objects, followed by an age and gender classifier [Levi  HassnerLevi  Hassner2015] on the results (an example is demonstrated in Figure 5).

    • Superordinate Classes

      Each category of a superordinate class includes a few basic classes (for example furniture, animal). To find the basic classes that belong to a superordinate class, we use ConceptNet [Speer  HavasiSpeer  Havasi2013], a commonsense knowledge database based on data extracted from the internet (see also Section 3.2.2). It includes concepts and predefined relations between them. We use the relations 'InstanceOf', 'IsA', 'MadeOf' and 'PartOf' with the requested class, and keep the results that fit our basic classes list. The detected objects of these classes are retrieved and used for the rest of the procedure. Also, if the query is for the type of the requested superordinate class, the name of the detected basic class is given as the answer (see Figure 5 for an example).

    • Similar Classes: A class that has a synonym or a very similar class in the basic classes set may also be searched for as this corresponding class. These correspondences are extracted using the 'Synonym' and 'SimilarTo' relations in ConceptNet.

    • A Group of Objects

      To identify a class that represents a group of objects (possibly of different optional basic classes), the ConceptNet relation 'MemberOf' is used (e.g. flock → bird, sheep; fleet → bus, ship…). A quantity requirement of at least two objects is added (demonstrated in Figure 5).

    • Sub Objects

      Some objects are parts of 'known' objects and can be extracted according to the detection of the host object and additional processing. We apply human pose estimation [Chen  YuilleChen  Yuille2014] to obtain the different body parts when requested (e.g. 'left/right hand', 'left/right foot'). Relative areas of objects (e.g. 'the middle of the bus') are also treated as sub objects. In these cases left and right differ from other uses of left/right as a location property (e.g. 'the left box'). A 'shirt' is also treated as a sub object, corresponding to the torso area provided by the human pose estimation results (an example is given in Figure 5).

      Figure 5: Examples of detection of different object class types. From left to right: subordinate class (person → man), superordinate class (animal → dog), group (multiple birds → flock), sub object (person → shirt). [Object detection is based on faster R-CNN + DeepLab].
  • Object Properties

    Objects have various visual properties. We differentiate between binary properties (e.g. 'red') and function properties that return the property of the object from a specific category (e.g. 'color'). Table 1 describes the used set of properties, divided (most of them) into groups of function properties.

    Properties' group: predicate properties
      color/colors: 11 colors (e.g. 'black', 'blue', …)
      age (requires face detection): ages and age inequalities (based on 8 age groups)
      gender: female/male
      location (e.g. 'where'): spatial image location (e.g. 'bottom (of the image)'); binary spatial properties are treated either as relative to other objects (e.g. 'the right') or as global (e.g. 'top')
      relative location (no function property): location relative to other objects (e.g. 'the left dog')
      type: subclass (when available)
      size: 'small', 'big', 'average'
      quantity (a property of an objects' set): number of objects
      difference (odd man out): no direct binary property
      similarity: no direct binary property
    Table 1: Table of supported properties for single objects and objects’ sets.
  • Object Relations

    Relations between two objects are represented by the directed graph edges. Detection of relations varies: it requires "simple" information for some (e.g. 'to the right of') and complicated visual features for others (e.g. 'wearing'). We combine specific rule-based detection for some relations and a deep neural network for others.

Since relations are also used as attention for object detection (Section 3.2.2), an inverse relation is matched to each relation, when possible. This way, attention can be used for both directions of the relation.
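As an illustration of a rule-based relation and its inverse, the following sketch checks 'to the right of' directly from bounding boxes; the thresholds and the exact rule are assumptions rather than the system's actual implementation.

```python
# Illustrative rule-based spatial relation from bounding boxes (x1, y1, x2, y2),
# plus the matching inverse relation used for attention in both directions.
def to_the_right_of(box_a, box_b):
    """True if box_a lies to the right of box_b (image x grows to the right)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    center_right = (ax1 + ax2) / 2.0 > bx2              # a's center beyond b's right edge
    y_overlap = min(ay2, by2) > max(ay1, by1)            # some vertical overlap
    return center_right and y_overlap

def to_the_left_of(box_a, box_b):
    """Inverse relation, obtained by swapping the arguments."""
    return to_the_right_of(box_b, box_a)
```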

3.2.2 Recursive Procedure

The final stage of answering the question is activating a recursive procedure to follow the graph nodes and edges, invoke the relevant basic procedures and integrate all the information to provide the answer. A basic scheme of the procedure is given in Figure 6 and in Algorithm 1.

[Figure 6 diagram: object-wise analysis that follows the question graph, checking classes, properties, function properties, relations and set properties per object, and detecting daughter objects when needed; inputs include the extended image (R, G, B, depth and color maps), external knowledge and the working memory.]

Figure 6: A scheme of the recursive answering procedure. At each step the current node (cur_node) is set and the objects are examined according to the node's requirements. If this succeeds, a new cur_node is set (according to a relation or the next global parent node) and the function is called again to handle the subgraph starting from it. The required visual elements are c: object class, p: an object property, f: a function property, F: a property of a set, r: a relation. The daughter object detection is activated only when none was detected in previous stages. Note that the estimated maps of depth and color names are calculated by the procedure according to need.

The first step is a preliminary object detection, carried out by applying instance segmentation [He, Gkioxari, Dollár,  GirshickHe et al.2017] on the image. Then, a recursive function (getGraphAnswer) is invoked for node handling (starting at a global parent node). It runs specific procedures that activate visual analyzers to check the requirements (properties, relations) and fetch the required information (function properties). The retrieved objects that fulfill the requirements are coupled to the corresponding question objects, so that subsequent checks are held on the same objects. The number of required objects is set mainly according to the quantifiers. Once a node's checks are completed, the same function is invoked for the next node, which is determined according to a relation (graph edge) or the next global parent node. Once all nodes are queried, the checks for an entire set are activated (if needed). Answers are provided by all basic procedures and the final answer is set according to precedence (e.g. a queried property type has priority over binary answers).

Input: question_graph, image
Result: Answer to the question
initialization: run object detection; cur_node ← global parent node;
begin
       Node parameters: P: properties, R: relations, f: function property, F: property of a set,
           objs: candidate objects (according to object detection and previous checks);
       for obj in objs do
             if ¬empty(P) then
                   for p in P do
                         [success, ans] = check(p, obj);
                         if ¬success then break end
                   end for
                   if ¬success then
                         if #possible_objs < #required_objs (according to quantifiers and other requirements) then
                               return
                         else
                               continue
                         end if
                   end if
             end if
             if ¬empty(f) then get f(obj) end
             if empty(R) then
                   if exist(next_parent_node) then
                         cur_node ← next_parent_node;
                         Run [success, ans] = getGraphAnswer;
                   end if
             else
                   for r in R do
                         if empty(objs_d) then Run detectObjsUsingRel(r) end
                         for obj_d in objs_d (candidate objects for daughter nodes) do
                               [success, ans] = check(r, obj, obj_d);
                               if success then
                                     cur_node ← next node (either the daughter node or the next global parent node);
                                     Run [success, ans] = getGraphAnswer;
                                     if success ∧ (#success_objs_d == #required_objs_d) then break end
                               end if
                               if ¬success ∧ (#possible_objs_d < #required_objs_d) then break end
                         end for
                         if ¬success then break end
                   end for
             end if
             if success then break end
       end for
       if success ∧ ¬empty(F) then get F({obj}) end
       Return ans
end
Algorithm 1 Answering procedure according to the graph
Working Memory

The global information gathered through the answering process is stored in a ”Working Memory” component. It stores the calculations that may be required at several stages of the process. This information is calculated only if needed and includes objects and their retrieved data, depth map, current node, currently used objects and more.

Common Knowledge

When a person answers a visual question, prior common knowledge plays an important role. This includes connections between classes, famous brands and logos, knowing the role and characteristics of objects and actions, anticipating the future, knowing which details to ignore, and more.

Some of the issues related to prior commonsense knowledge are addressed by our system. The main uses of prior knowledge are common relations in images (using the Visual Genome dataset [Krishna, Zhu, Groth, Johnson, Hata, Kravitz, Chen, Kalantidis, Li, Shamma, et al.Krishna et al.2017]) and commonsense knowledge on categories of objects, as well as connections between them (using ConceptNet [Speer  HavasiSpeer  Havasi2013]).

  • Visual Genome Dataset

    The Visual Genome dataset [Krishna, Zhu, Groth, Johnson, Hata, Kravitz, Chen, Kalantidis, Li, Shamma, et al.Krishna et al.2017] contains (among many others) annotations for objects and binary relations between them for a set of 108,077 images. Common relations involving specific objects are extracted from this dataset (on demand) and used as prior knowledge to assist detection. This allows refining the search area when an object is not detected in the initial detection, as described below and demonstrated in Figure 7.

  • ConceptNet: To obtain general commonsense knowledge we use the ConceptNet database (version 5) [Speer  HavasiSpeer  Havasi2013]. The source of information for this database is the internet (results from additional databases are also incorporated). It allows querying for concepts and relations between them of the form:

    concept1 - relation - concept2   (e.g. horse - IsA - animal)

    The query is performed by providing two of the triplet [relation, concept1, concept2] and querying for the third. These common knowledge relations provide complementary capabilities for answering 'real world' questions in which such common knowledge is assumed. We currently use ConceptNet mainly to extend the understanding of objects' classes (e.g. superordinate classes, similar classes), as described for example in Section 3.2.1. Examples of questions involving connections between classes are given in Figure 5.
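As an illustration, the sketch below queries ConceptNet for subordinates of a superordinate class and keeps only the recognizable basic classes; it assumes the public ConceptNet 5 REST API (api.conceptnet.io) and its /query endpoint, and the helper name is hypothetical.

```python
# Illustrative ConceptNet query: which basic classes are kinds of a
# superordinate class (e.g. 'animal')? Assumes the public ConceptNet 5 API.
import requests

def subordinate_classes(superordinate, basic_classes, limit=100):
    url = "http://api.conceptnet.io/query"
    params = {"end": f"/c/en/{superordinate}", "rel": "/r/IsA", "limit": limit}
    edges = requests.get(url, params=params).json().get("edges", [])
    candidates = {edge["start"]["label"].lower() for edge in edges}
    return sorted(candidates & set(basic_classes))     # keep only recognizable classes

# e.g. subordinate_classes("animal", {"dog", "cat", "horse", "bus"})
# might return ['cat', 'dog', 'horse'], which selects the detectors to activate.
```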

Guided Object Detection

A question may refer to specific objects in the image that may be hard to detect (e.g. due to size, occlusion or clutter). When a requested object is not detected on the first attempt (searching the entire image), additional attempts are made. These attempts focus on regions where the object has a higher probability of being found. We use relations with detected objects as an attention source. Two sources of such attention are used.

  • Attention by common relations: The source for this attention is the Visual Genome dataset [Krishna, Zhu, Groth, Johnson, Hata, Kravitz, Chen, Kalantidis, Li, Shamma, et al.Krishna et al.2017], where objects and relations between them are annotated in images (see also Section 3.2.2). We seek the most common relation of the requested object (with an object from our known classes set) and a corresponding relative location. Then, if the other object is found, we activate the object detector on the relevant area. An additional search area is obtained from the relation's spatial constraints. An example of using common relations as attention is given in Figure 7.

    Figure 7: Attention by common relations in answering the question “Is there a bottle?” (a) Initial object detection on entire image did not detect the bottle. (b) An additional detection attempt is performed on search areas extracted by the common relation ’bottle-on-diningtable (table)’. [Object detection is based on faster R-CNN + DeepLab].
  • Attention by question relations: The question itself may include relations that can assist detection by focusing on relevant areas. Since the processing follows the question graph representation, relation edge directions are modified to point from detected to undetected objects. This allows using relations with a verified detected object as detection guidance for undetected objects, in the same manner described above. The usage of this type of attention is demonstrated in Figure 8, and a schematic sketch of deriving such a search region is given after the figure.

    Figure 8: Attention by question relations in answering the question ”Is there a clock above the refrigerator?” and ”Is there a refrigerator below the clock?” (a) Initial object detection on entire image did not detect the clock. (b) An additional detection attempt is performed on search areas extracted by the question relation ’clock-above-refrigerator’.
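The following is a schematic, hypothetical sketch of turning a spatial relation with a detected anchor object into a search region for re-running the detector; the margins and region rules are illustrative assumptions.

```python
# Illustrative derivation of a search region for a missing object from a
# spatial relation with a detected anchor object (e.g. 'clock above refrigerator').
def search_region_from_relation(relation, anchor_box, image_size, margin=0.25):
    """anchor_box: (x1, y1, x2, y2) of the detected object; returns a region to re-detect in."""
    x1, y1, x2, y2 = anchor_box
    width, height = image_size
    pad = margin * (x2 - x1)
    if relation == "above":      # the missing object is above the anchor
        return (max(0, x1 - pad), 0, min(width, x2 + pad), y1)
    if relation == "below":      # the missing object is below the anchor
        return (max(0, x1 - pad), y2, min(width, x2 + pad), height)
    if relation == "on":         # roughly overlapping the anchor's upper part
        return (max(0, x1 - pad), max(0, y1 - (y2 - y1)), min(width, x2 + pad), y2)
    return (0, 0, width, height) # unknown relation: fall back to the whole image
```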

3.3 ”Understanding” Capabilities

Breaking the visual answering task into real-world subtasks has many advantages. Besides enabling modular modifications and improvements, the meaningful, compositional process is leveraged to provide information derived from internal processing. Failure reasons and verified alternatives are provided, as well as elaborations on detected objects.

3.3.1 Provide Alternatives/Corrections

When the logic expression representing the question is not valid for the given image, alternatives for the failed part are searched, such that a close expression may be validated and provided as a supplement to the answer. The checks include alternative objects, relations and also properties according to the following:

  • For failed object classes alternative classes are checked.

  • Real properties are specified for objects with failed properties.

  • For failed relations alternative relations are checked.

  • Additional attempts are made with close subordinate classes of 'person' (e.g. when classifying a person as a woman fails, other sub-person classes are checked).

Examples are given in Figure 9 (note that some include multiple rounds of attempts).

Figure 9: Answer alternative examples [Object detection is based on faster R-CNN + DeepLab].

3.3.2 Answer Elaboration

During the answering process, related information may be accumulated while verifying the logical expression representing the question. This information is provided as part of the answer, explaining and elaborating on it. The following supplements are included:

  • If object detection was by a related class (e.g. a synonym, parts of a group, subordinate classes), it is specified in the answer (including the number of objects of each subclass).

  • The hint relation used as attention for object detection is indicated (if used).

  • If a queried function property (e.g. color) differs between the relevant objects, the property of each object is specified.

Some examples can be seen in Figure 10.

Figure 10: Answer elaboration examples [Object detection is based on faster R-CNN + DeepLab].

3.3.3 Integration in Related Applications

As the answering process accumulates real "knowledge" related to the image, this knowledge may be saved and used for extended applications. One of them may be a discourse on the image, where follow-up questions may be answered. An additional application may be the correction of image captions [Bernardi, Cakici, Elliott, Erdem, Erdem, Ikizler-Cinbis, Keller, Muscat, Plank, et al.Bernardi et al.2016], where a caption may be transformed into a question and the answer may verify or correct it (as described in Section 3.3.1). An example of image caption correction is given in Figure 11.

Figure 11: Example for image caption correction. Image caption is the result of the NeuralTalk model [Karpathy  Fei-FeiKarpathy  Fei-Fei2015]

4 Results Analysis

Our system is currently limited by the visual elements it is able to recognize. It is not trained or optimized for any visual question answering dataset. Since our goals include question "understanding" and modularity, we first focus on basic capabilities that will be developed over time to become more comprehensive. We checked our system on various aspects and specific examples and provide an analysis. We examined the graph representation for a random set of questions to assess the current status as well as the potential. The performance of our full system was checked on a wide set of examples. We analyze sources of failure and present examples of correct and incorrect answers.

4.1 Question Representation

First we check the representation capabilities of our system. To do that we randomly sampled 100 questions from the VQA dataset [Antol, Agrawal, Lu, Mitchell, Batra, Zitnick,  ParikhAntol et al.2015] and checked their graph representations. Results are given in Table 2.

                        Current   Potential
  Fit                       72        100
  No fit (vocabulary)       12
  No fit (other)            14
  Unparsed                   2
Table 2: Representation results on a random set of 100 questions from the VQA dataset [Antol, Agrawal, Lu, Mitchell, Batra, Zitnick,  ParikhAntol et al.2015]. The 'vocabulary' no-fit cases are misrepresentations due to failures in phrase recognition. 'Unparsed' are questions that START could not parse. The 'Potential' column represents questions that may be represented by the graph.

It is not always clear whether a representation is accurate, as in some cases a representation may fit the language structure but be less accurate with respect to the actual meaning. For example, a simple representation of the question “Is this picture in focus?” may be:

: picture --- ’in’ ---> : focus

However, ’in focus’ represents a single element and should be recognized as such. This demonstrates the importance of vocabulary knowledge. In another example, the following questions have a similar structure:

Are they all wearing the same color?
Are they all wearing the same pants?

However, ’color’ and ’pants’ belong to two different types of visual elements, and hence the questions should have different representations.
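To make the distinction concrete, an illustrative encoding of the two questions might look as follows. This is not the system’s actual representation (the node and field names are assumptions); it only shows that ’pants’ appears as an object node linked by a ’wear’ relation, whereas ’color’ appears as a property whose value must be shared by the worn objects:

    # Illustrative encodings (not the actual parser output) of the two
    # questions above.

    same_pants = {
        "nodes": {"p": {"class": "person", "quantifier": "all"},
                  "q": {"class": "pants", "same_instance": True}},
        "edges": [("p", "q", "wear")],
    }

    same_color = {
        "nodes": {"p": {"class": "person", "quantifier": "all"},
                  "q": {"class": "clothing", "same_property": "color"}},
        "edges": [("p", "q", "wear")],
    }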

Sometimes minor phrasing changes have a substantial effect on parsing and representation. The variation in phrasing may also include grammar inaccuracies and typos. This sensitivity reduces the consistency of the representation and adds noise and inaccuracies to the system.

For the two ’Unparsed’ questions in our representation test, simple corrections led to successful parses. The corrections are (original → corrected):

What season do these toy’s represent? → What season do these toys represent?
Where are these items at? → Where are these items?

There are other cases where a minor phrasing change corrects the representation, as can be seen in Figure 12.

Figure 12: Minor phrasing changes may affect START parser results and hence the graph representation. (a) “What room is this a picture of?” (b) “What room is it a picture of?” In this example, replacing the word ’this’ with ’it’ (keeping the question’s meaning) changes the representation into a correct one.

An additional parsing limitation is the lack of distinction between coordinating conjunctions (’or’, ’and’) joining phrases; hence both are treated as ’and’.

As mentioned before, since the questions are free-form, they may involve slang, typos or wrong grammar. The question’s meaning may even be unclear. For example, the question “How is the table design?” may be the question as intended, but the intended question may also have been “How is the table designed?”.

All the questions sampled in this analysis can potentially be represented using the suggested graph representation, demonstrating that in general our scheme has very high representation capabilities. However, some require identification of complicated properties and related terms, e.g. “Is the refrigerator capacity greater than 22 cubic feet?” (similar comparisons of a property’s quantity already exist for age). The issue of adding description levels arises for complicated properties that may have a natural representation using properties of properties, e.g.

Is this the normal use for the object holding the flowers?
How is the table designed?
Where do these animals originate?

In some cases, it may be reasonable to alter the exact meaning into one that is more feasible to handle, e.g.

Does this truck have all of its original parts? → Are all the parts of this truck original?

In other checks performed, there were (very few) cases where relations between multiple objects of different types were required (e.g. ’Does this image contain more mountain, sky or grass?’). Support for such cases may be added in the future.

4.2 Question Answering

Our current implementation is obviously limited by the number of recognizable visual elements, queried both explicitly and implicitly. It does not include any training or adaptation to any visual question answering dataset. In addition, some implementations may be incomplete or arbitrary, e.g. ’location’, whose implementation is relative to the image. Answers are, however, mostly self-aware: when running on the VQA [Antol, Agrawal, Lu, Mitchell, Batra, Zitnick,  ParikhAntol et al.2015] dataset, most answers indicate the unfamiliar visual element that prevents answering (e.g. “Unknown class: linoleum”).

Examples with proper answers are shown in Figure 13. They include the use of ConceptNet [Speer  HavasiSpeer  Havasi2013] in some cases to obtain prior knowledge regarding related classes (e.g. subclasses) and other commonsense knowledge, for example the subclasses of ’ride’: {’bicycle’, ’bus’, ’boat’, ’motorbike’, ’train’}, of ’transportation’: {’train’, ’boat’, ’bicycle’}, and of ’animal’: {’dog’, ’horse’, ’cat’, ’bird’, ’sheep’, ’cow’}.

Figure 13: Examples for correct answers from the VQA dataset [Antol, Agrawal, Lu, Mitchell, Batra, Zitnick,  ParikhAntol et al.2015] (short answers). [Object detection is based on faster R-CNN + DeepLab].
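A small sketch of how such subclass lists can be retrieved is given below. It uses the current public ConceptNet web API (api.conceptnet.io), which may differ from the ConceptNet 5 interface actually used by the system, and assumes the requests package is available:

    # Sketch of retrieving IsA "subclasses" of a concept from ConceptNet.
    # Uses the public web API, which may differ from the interface the
    # system actually queries.
    import requests

    def conceptnet_subclasses(concept, limit=50):
        url = "http://api.conceptnet.io/query"
        params = {"end": "/c/en/" + concept, "rel": "/r/IsA", "limit": limit}
        edges = requests.get(url, params=params).json().get("edges", [])
        # Keep the English labels of the 'start' nodes, e.g. 'bicycle IsA ride'.
        return sorted({e["start"]["label"] for e in edges
                       if e["start"].get("language") == "en"})

    print(conceptnet_subclasses("ride"))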

Examples with wrong answers are shown in Figure 14. The reasons for failures include detection failures, unknown visual elements, missing prior knowledge and other assumptions.

Figure 14: Examples for incorrect answers from the VQA dataset [Antol, Agrawal, Lu, Mitchell, Batra, Zitnick,  ParikhAntol et al.2015] (short answers). [Object detection is based on faster R-CNN + DeepLab].

Further examination of the results provides some insights regarding additional sources of failure.

One element that adds “noise” to the system is the use of an internet-based external knowledge base. While providing essential information, the retrieved data is also prone to errors and yields detection attempts of wrong objects. This is demonstrated by the results of querying ’carpet’ with the relation IsA, which imply that the following may be a carpet: ’Barack Obama’, ’book’, ’monitor’, ’a plastic bag’, ’a glass of water’, etc. Another example of such an error is the retrieved relation ’chair IsA door’. A partial solution is to use the associated weights, which indicate the strength of each result. Some results may be misleading as they refer to different meanings of the queried words. The following are examples of such results:

’train IsA control’
’monitor IsA track’
’screen_door IsA door’

In some cases the intersection of the retrieved classes with the recognizable objects is so small that it may lead to a wrong conclusion based on a very superficial check. An example is the question “Are these toys?”, where the recognizable retrieved classes are ’bicycle’, ’skateboard’, ’frisbee’, ’kite’ and ’motorcycle’; hence the answer is ’no’ if none of them is detected.
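The weight-based filtering and the intersection with recognizable classes mentioned above can be sketched as follows; the weight threshold and the example labels and weights are illustrative assumptions, not actual retrieved values:

    # Sketch of filtering retrieved IsA results by weight and intersecting
    # them with the detectable classes. Threshold and weights are assumptions.

    RECOGNIZABLE = {"bicycle", "skateboard", "frisbee", "kite", "motorcycle"}

    def usable_subclasses(retrieved, min_weight=2.0):
        # retrieved: (label, weight) pairs from the knowledge-base query.
        strong = {label for label, weight in retrieved if weight >= min_weight}
        return strong & RECOGNIZABLE

    toy_edges = [("bicycle", 4.5), ("doll", 3.7), ("kite", 2.8), ("book", 1.2)]
    print(usable_subclasses(toy_edges))   # {'bicycle', 'kite'}

Note that ’doll’ is dropped simply because it is not a detectable class, illustrating how the small intersection leads to a superficial check.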

An interesting observation regarding the estimation of some visual elements concerns the generation of color name maps [Van De Weijer, Schmid,  VerbeekVan De Weijer et al.2007], which is based on supervised learning (11 possible color names per pixel). When object colors are required, the map is generated for the object’s area in the image, and the answer is based on the dominant colors. Retrieving an object’s color may appear to be a trivial task, as the intensity of the original RGB image channels should provide the exact color of each pixel. However, such methods fail to obtain the perceived color, which is only loosely related to the levels of the actual RGB channels. Hence, learning methods are used to address this problem, and many inaccuracies still remain. In addition to these inaccuracies, the process required for obtaining the perceived color of an object is not consistent. This can be seen in the examples of Figure 15, where inquiring for the color of a person requires different color naming and focus on specific regions. The bus example also requires specific behavior, where the window and wheel areas of the bus should be ignored.

Figure 15: Demonstration of perceived-color challenges. Each column corresponds to one example. For each example, the top image is the input image with markings of relevant results; the bottom image is a map of color names corresponding to the required object. Below the images, the question and corresponding answer are given. The first column demonstrates classification errors in the generated map of color names due to shading. The second column requires ignoring the window and wheel areas for an accurate answer. In the example of the third column, only a specific area should be checked and the colors should correspond to different names. [Object detection is based on faster R-CNN + DeepLab].
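The dominant-color step itself can be sketched as follows. The per-pixel color-name classifier [Van De Weijer, Schmid,  VerbeekVan De Weijer et al.2007] is assumed and replaced here by a precomputed index map; the list of 11 names is the standard set of basic color terms, and numpy is assumed to be available:

    # Sketch of answering a color query from a per-pixel color-name map
    # restricted to the object's region. Only dominant-color selection is shown.
    from collections import Counter
    import numpy as np

    COLOR_NAMES = ["black", "blue", "brown", "grey", "green", "orange",
                   "pink", "purple", "red", "white", "yellow"]

    def dominant_color(color_name_map, object_mask):
        # color_name_map: HxW array of indices into COLOR_NAMES.
        # object_mask:    HxW boolean array marking the object's pixels.
        counts = Counter(color_name_map[object_mask].ravel().tolist())
        index, _ = counts.most_common(1)[0]
        return COLOR_NAMES[index]

    cmap = np.array([[8, 8], [8, 10]])     # mostly 'red', one 'yellow' pixel
    mask = np.ones((2, 2), dtype=bool)
    print(dominant_color(cmap, mask))      # -> red

The inconsistencies shown in Figure 15 arise before this step, in deciding which region the mask should cover and which color vocabulary is appropriate.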

As previously mentioned, the parser’s sensitivity to phrasing, together with other issues such as its indifference to the type of phrase coordinator (’and’, ’or’), causes representation failures or misrepresentations, which result in an inability to provide a correct answer. For example, when ’or’ is used (e.g. “Are the flowers yellow or white?”) the answer will always be ’no’, as both options are required to be true. Hence, we get an answer that is irrelevant to the question.

Questions may be misinterpreted due to multiple meanings of words and phrases or subtle differences. As previously discussed, this mainly affects the use of the external knowledge base, where a wide range of concepts may be used, which can lead to an unclear meaning of a concept (e.g. ’train’: vehicle vs. learn; ’monitor’: screen vs. supervise). Such confusions also happen for the question itself. An example of a misinterpreted question is “What is the table/bus number?”, which is interpreted as “What is the number of tables/buses?”

Currently, other than enhancing object detection by attention derived from question relations, details from the question are not used as hints for the correctness of expressions. A case where such information could be further utilized is when the query is for a property of an object; in this case there may be a prior assumption, or an increased probability, that such an object exists. Of course, an automatic assumption of existence is not desirable. However, reducing classification thresholds, making additional attempts using hints, and other measures may be used to reflect the higher probability of the existence of such an object. For example, given the question “What is the age of the man?”, the probability that a man indeed exists in the image should rise, and this assumption should be refuted only when the evidence is substantial.
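A minimal sketch of such a threshold adjustment is given below; the function names and the default and reduced threshold values are illustrative assumptions rather than the system’s implementation:

    # Sketch of lowering the detection threshold for an object class that
    # the question presupposes (e.g. 'man' in "What is the age of the man?").

    def detection_threshold(obj_class, presupposed, default=0.8, reduced=0.5):
        return reduced if obj_class in presupposed else default

    def keep_detections(detections, obj_class, presupposed):
        # detections: (box, score) pairs returned by a detector for obj_class.
        thr = detection_threshold(obj_class, presupposed)
        return [(box, s) for box, s in detections if s >= thr]

    dets = [("box_0", 0.62), ("box_1", 0.35)]
    print(keep_detections(dets, "man", {"man"}))   # keeps box_0 only
    print(keep_detections(dets, "man", set()))     # keeps nothing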

5 Discussion and Conclusions

We have presented an approach to visual question answering that seeks to compose an answering procedure based on the ’abstract’ structure of the query. We exploit the compositional nature of the question and represent it as a directed graph, with objects represented as nodes and relations as edges. Each basic component of this graph representation is mapped to a dedicated basic procedure. These basic procedures are put together, along with additional required processes, into a complex procedure for the entire query.

This procedure incorporates query details and intermediate results and stores them in the graph nodes and a working memory module. The stored information completes the guidance of the procedure and allows handling different types of visual elements. Question relations are used as an attention source to enhance object detection. Querying for external common knowledge is also handled by the procedure, in order to complete the prior knowledge needed to answer the question.

Breaking the answering process into basic meaningful components, corresponding to basic logic patterns, enables awareness at each step of the accomplished and unaccomplished parts of the task. This includes recognizing and reporting failures and limitations, which in many cases are corrected and provided with valid alternatives. Elaborations of the answers are provided according to the stored information. Since the building blocks include simple real-world detectors, the system is modular and its improvement is not limited.

Human abilities motivate us to examine and handle some complicated attributes that are addressed naturally by humans, even though they may hardly appear in real queries. These attributes, such as ’odd man out’, demonstrate representation challenges that require extending the natural graph representation. Currently, a specific configuration is created to represent these attributes; future upgrades may allow handling them more smoothly.

Evaluation of the representation capabilities demonstrated that, even though our scheme can potentially represent practically all queries, the current state of the system is limited. The observed problems include limitations in vocabulary identification, sensitivity to phrasing, and cases of grammatical similarity between different elements (e.g. ’wearing the same color’ vs. ’wearing the same pants’). Additionally, some rare representation limitations exist, such as relations between more than two objects of different classes.

Even though the recognition abilities are currently limited by the scope of the existing detectors, the system is self-aware and mostly replies by specifying its limitation (which may trigger the addition of the desired detectors to the system). The representation limitations discussed in Section 4.1 are a fundamental source of failures, in addition to the accumulating chance of errors from the detectors used. Our system does not exploit any language bias of the question: the answer is provided exclusively by the procedure evaluating the logic representation of the question. However, improvement is ongoing, as detectors keep improving and their scope keeps growing.

Current approaches to visual question answering mostly use end-to-end schemes that are very different from our approach. Although some methods include adaptive aspects, the optimization process is more likely to exploit language bias than to develop the complex mechanisms required for proper answering. These methods maximize statistical results, but are likely to fail on subtle yet meaningful cases. This fits the analysis of current models, which demonstrates their tendency to utilize only part of the question, provide the same answers for different images, and fail on novel forms. A combination of the UnCoRd system and an end-to-end model may be beneficial, for example enhancing UnCoRd elaborations with an “intuitive” answer in some cases (such as unknown visual elements).

We have integrated and examined various aspects of answering questions on images using our answering system. Much more research and investigation is required for all these aspects, as well as others. Future research will include learning the representation mapping and making it more robust, further investigating and improving the visual element analyzers (e.g. using the object type, when available, for property detection), and more.


References

  • [Aditya, Yang,  BaralAditya et al.2018] Aditya, S., Yang, Y.,  Baral, C. 2018. Explicit reasoning over end-to-end neural architectures for visual question answering  In AAAI.
  • [Agrawal, Batra,  ParikhAgrawal et al.2016] Agrawal, A., Batra, D.,  Parikh, D. 2016. Analyzing the behavior of visual question answering models  In Conference on Empirical Methods in Natural Language Processing (EMNLP), Austin, Texas, USA.
  • [Agrawal, Batra, Parikh,  KembhaviAgrawal et al.2018] Agrawal, A., Batra, D., Parikh, D.,  Kembhavi, A. 2018. Don’t just assume; look and answer: Overcoming priors for visual question answering  In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [Anderson, He, Buehler, Teney, Johnson, Gould,  ZhangAnderson et al.2017] Anderson, P., He, X., Buehler, C., Teney, D., Johnson, M., Gould, S.,  Zhang, L. 2017. Bottom-up and top-down attention for image captioning and vqa.
  • [Andreas, Rohrbach, Darrell,  KleinAndreas et al.2016a] Andreas, J., Rohrbach, M., Darrell, T.,  Klein, D. 2016a. Learning to compose neural networks for question answering  In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL).
  • [Andreas, Rohrbach, Darrell,  KleinAndreas et al.2016b] Andreas, J., Rohrbach, M., Darrell, T.,  Klein, D. 2016b. Neural module networks  In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,  39–48.
  • [Antol, Agrawal, Lu, Mitchell, Batra, Zitnick,  ParikhAntol et al.2015] Antol, S., Agrawal, A., Lu, J., Mitchell, M., Batra, D., Zitnick, C. L.,  Parikh, D. 2015. Vqa: Visual question answering  In International Conference on Computer Vision (ICCV).
  • [Ben-younes, Cadene, Cord,  ThomeBen-younes et al.2017] Ben-younes, H., Cadene, R., Cord, M.,  Thome, N. 2017. Mutan: Multimodal tucker fusion for visual question answering  In The IEEE International Conference on Computer Vision (ICCV).
  • [Bernardi, Cakici, Elliott, Erdem, Erdem, Ikizler-Cinbis, Keller, Muscat, Plank, et al.Bernardi et al.2016] Bernardi, R., Cakici, R., Elliott, D., Erdem, A., Erdem, E., Ikizler-Cinbis, N., Keller, F., Muscat, A., Plank, B., et al. 2016. Automatic description generation from images: A survey of models, datasets, and evaluation measures.  J. Artif. Intell. Res.(JAIR), 55, 409–442.
  • [Chen, Papandreou, Kokkinos, Murphy,  YuilleChen et al.2015] Chen, L.-C., Papandreou, G., Kokkinos, I., Murphy, K.,  Yuille, A. L. 2015. Semantic image segmentation with deep convolutional nets and fully connected crfs  In ICLR.
  • [Chen  YuilleChen  Yuille2014] Chen, X.  Yuille, A. 2014. Articulated pose estimation by a graphical model with image dependent pairwise relations  In Advances in Neural Information Processing Systems (NIPS).
  • [Dai, Zhang,  LinDai et al.2017] Dai, B., Zhang, Y.,  Lin, D. 2017. Detecting visual relationships with deep relational networks  In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
  • [Desta, Chen,  KornutaDesta et al.2018] Desta, M. T., Chen, L.,  Kornuta, T. 2018. Object-based reasoning in vqa  In Winter Conference on Applications of Computer Vision, WACV. IEEE.
  • [Everingham, Eslami, Van Gool, Williams, Winn,  ZissermanEveringham et al.2014] Everingham, M., Eslami, S. A., Van Gool, L., Williams, C. K., Winn, J.,  Zisserman, A. 2014. The pascal visual object classes challenge: A retrospective  International Journal of Computer Vision, 111(1), 98–136.
  • [Fukui, Park, Yang, Rohrbach, Darrell,  RohrbachFukui et al.2016] Fukui, A., Park, D. H., Yang, D., Rohrbach, A., Darrell, T.,  Rohrbach, M. 2016. Multimodal compact bilinear pooling for visual question answering and visual grounding  In Conference on Empirical Methods in Natural Language Processing (EMNLP), Austin, Texas, USA.
  • [Goyal, Khot, Summers-Stay, Batra,  ParikhGoyal et al.2017] Goyal, Y., Khot, T., Summers-Stay, D., Batra, D.,  Parikh, D. 2017. Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering  In Conference on Computer Vision and Pattern Recognition (CVPR).
  • [Gupta, Shih, Singh,  HoiemGupta et al.2017] Gupta, T., Shih, K., Singh, S.,  Hoiem, D. 2017. Aligned image-word representations improve inductive transfer across vision-language tasks  In The IEEE International Conference on Computer Vision (ICCV).
  • [He, Gkioxari, Dollár,  GirshickHe et al.2017] He, K., Gkioxari, G., Dollár, P.,  Girshick, R. 2017. Mask R-CNN  In Proceedings of the International Conference on Computer Vision (ICCV).
  • [Hochreiter  SchmidhuberHochreiter  Schmidhuber1997] Hochreiter, S.  Schmidhuber, J. 1997. Long short-term memory  Neural computation, 9(8), 1735–1780.
  • [Hu, Andreas, Rohrbach, Darrell,  SaenkoHu et al.2017] Hu, R., Andreas, J., Rohrbach, M., Darrell, T.,  Saenko, K. 2017. Learning to reason: End-to-end module networks for visual question answering  In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,  804–813.
  • [Ilievski, Yan,  FengIlievski et al.2016] Ilievski, I., Yan, S.,  Feng, J. 2016. A focused dynamic attention model for visual question answering.
  • [Johnson, Hariharan, van der Maaten, Fei-Fei, Zitnick,  GirshickJohnson et al.2017a] Johnson, J., Hariharan, B., van der Maaten, L., Fei-Fei, L., Zitnick, C. L.,  Girshick, R. 2017a. CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning  In CVPR.
  • [Johnson, Hariharan, van der Maaten, Hoffman, Fei-Fei, Zitnick,  GirshickJohnson et al.2017b] Johnson, J., Hariharan, B., van der Maaten, L., Hoffman, J., Fei-Fei, L., Zitnick, C. L.,  Girshick, R. 2017b. Inferring and executing programs for visual reasoning  In ICCV.
  • [Kafle  KananKafle  Kanan2016] Kafle, K.  Kanan, C. 2016. Visual question answering: Datasets, algorithms, and future challenges.
  • [Karpathy  Fei-FeiKarpathy  Fei-Fei2015] Karpathy, A.  Fei-Fei, L. 2015. Deep visual-semantic alignments for generating image descriptions  In Proceedings of the IEEE conference on computer vision and pattern recognition,  3128–3137.
  • [KatzKatz1988] Katz, B. 1988. Using english for indexing and retrieving  In In Proceedings of the 1st RIAO Conference on User-Oriented Content-Based Text and Image Handling (RIAO ’88).
  • [KatzKatz1997] Katz, B. 1997. Annotating the world wide web using natural language.  In RIAO,  136–159.
  • [Krähenbühl  KoltunKrähenbühl  Koltun2011] Krähenbühl, P.  Koltun, V. 2011. Efficient inference in fully connected crfs with gaussian edge potentials  In NIPS.
  • [Krishna, Zhu, Groth, Johnson, Hata, Kravitz, Chen, Kalantidis, Li, Shamma, et al.Krishna et al.2017] Krishna, R., Zhu, Y., Groth, O., Johnson, J., Hata, K., Kravitz, J., Chen, S., Kalantidis, Y., Li, L.-J., Shamma, D. A., et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations  International Journal of Computer Vision, 123(1), 32–73.
  • [LeCun, Bottou, Bengio,  HaffnerLeCun et al.1998] LeCun, Y., Bottou, L., Bengio, Y.,  Haffner, P. 1998. Gradient-based learning applied to document recognition  Proceedings of the IEEE, 86(11), 2278–2324.
  • [Levi  HassnerLevi  Hassner2015] Levi, G.  Hassner, T. 2015. Age and gender classification using convolutional neural networks  In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) workshops.
  • [Li, Su,  ZhuLi et al.2017] Li, G., Su, H.,  Zhu, W. 2017. Incorporating external knowledge to answer open-domain visual questions with dynamic memory networks.
  • [Li, Fu, Yu, Mei,  LuoLi et al.2018a] Li, Q., Fu, J., Yu, D., Mei, T.,  Luo, J. 2018a. Tell-and-answer: Towards explainable visual question answering using attributes and captions.
  • [Li, Tao, Joty, Cai,  LuoLi et al.2018b] Li, Q., Tao, Q., Joty, S., Cai, J.,  Luo, J. 2018b. Vqa-e: Explaining, elaborating, and enhancing your answers for visual questions.
  • [Lin, Maire, Belongie, Hays, Perona, Ramanan, Dollár,  ZitnickLin et al.2014] Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P.,  Zitnick, C. L. 2014. Microsoft coco: Common objects in context  In Computer Vision–ECCV 2014,  740–755. Springer.
  • [Liu, Shen, Lin,  ReidLiu et al.2016] Liu, F., Shen, C., Lin, G.,  Reid, I. 2016. Learning depth from single monocular images using deep convolutional neural fields  IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(10), 2024–2039.
  • [Lu, Krishna, Bernstein,  Fei-FeiLu et al.2016a] Lu, C., Krishna, R., Bernstein, M.,  Fei-Fei, L. 2016a. Visual relationship detection with language priors  In European Conference on Computer Vision.
  • [Lu, Yang, Batra,  ParikhLu et al.2016b] Lu, J., Yang, J., Batra, D.,  Parikh, D. 2016b. Hierarchical question-image co-attention for visual question answering  In Advances In Neural Information Processing Systems,  289–297.
  • [Lu, Li, Zhang, Wang,  WangLu et al.2018] Lu, P., Li, H., Zhang, W., Wang, J.,  Wang, X. 2018. Co-attending free-form regions and detections with multi-modal multiplicative feature embedding for visual question answering.  In AAAI.
  • [Mathias, Benenson, Pedersoli,  Van GoolMathias et al.2014] Mathias, M., Benenson, R., Pedersoli, M.,  Van Gool, L. 2014. Face detection without bells and whistles  In ECCV.
  • [Nguyen  OkataniNguyen  Okatani2018] Nguyen, D.-K.  Okatani, T. 2018. Improved fusion of visual and language representations by dense symmetric co-attention for visual question answering.
  • [Pandhre  SodhaniPandhre  Sodhani2017] Pandhre, S.  Sodhani, S. 2017. Survey of recent advances in visual question answering.
  • [Papandreou, Chen, Murphy,  YuillePapandreou et al.2015] Papandreou, G., Chen, L.-C., Murphy, K.,  Yuille, A. L. 2015. Weakly- and semi-supervised learning of a dcnn for semantic image segmentation.
  • [Pennington, Socher,  ManningPennington et al.2014] Pennington, J., Socher, R.,  Manning, C. D. 2014. Glove: Global vectors for word representation  In In EMNLP.
  • [Recasens, Khosla, Vondrick,  TorralbaRecasens et al.2015] Recasens, A., Khosla, A., Vondrick, C.,  Torralba, A. 2015. Where are they looking?  In Advances in Neural Information Processing Systems (NIPS). indicates equal contribution.
  • [Ren, He, Girshick,  SunRen et al.2015] Ren, S., He, K., Girshick, R.,  Sun, J. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks  In Advances in neural information processing systems,  91–99.
  • [Schwartz, Schwing,  HazanSchwartz et al.2017] Schwartz, I., Schwing, A.,  Hazan, T. 2017. High-order attention models for visual question answering  In Advances in Neural Information Processing Systems,  3667–3677.
  • [Speer  HavasiSpeer  Havasi2013] Speer, R.  Havasi, C. 2013. Conceptnet 5: A large semantic network for relational knowledge  In The People’s Web Meets NLP: Collaboratively Constructed Language Resources,  161–176. Springer Berlin Heidelberg.
  • [Teney, Anderson, He,  HengelTeney et al.2017a] Teney, D., Anderson, P., He, X.,  Hengel, A. v. d. 2017a. Tips and tricks for visual question answering: Learnings from the 2017 challenge.
  • [Teney, Liu,  van den HengelTeney et al.2017b] Teney, D., Liu, L.,  van den Hengel, A. 2017b. Graph-structured representations for visual question answering  In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [Van De Weijer, Schmid,  VerbeekVan De Weijer et al.2007] Van De Weijer, J., Schmid, C.,  Verbeek, J. 2007. Learning color names from real-world images  In Computer Vision and Pattern Recognition, 2007. CVPR’07. IEEE Conference on,  1–8. IEEE.
  • [Wang, Wu, Shen, Hengel,  DickWang et al.2015] Wang, P., Wu, Q., Shen, C., Hengel, A. v. d.,  Dick, A. 2015. Explicit knowledge-based reasoning for visual question answering.
  • [Wang, Wu, Shen, Hengel,  DickWang et al.2016] Wang, P., Wu, Q., Shen, C., Hengel, A. v. d.,  Dick, A. 2016. FVQA: Fact-based visual question answering.
  • [Wang, Wu, Shen,  van den HengelWang et al.2017] Wang, P., Wu, Q., Shen, C.,  van den Hengel, A. 2017. The vqa-machine: Learning how to use existing vision algorithms to answer new questions  In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,  1173–1182.
  • [Wu, Teney, Wang, Shen, Dick,  HengelWu et al.2016a] Wu, Q., Teney, D., Wang, P., Shen, C., Dick, A.,  Hengel, A. v. d. 2016a. Visual question answering: A survey of methods and datasets.
  • [Wu, Wang, Shen, Dick,  van den HengelWu et al.2016b] Wu, Q., Wang, P., Shen, C., Dick, A.,  van den Hengel, A. 2016b. Ask me anything: Free-form visual question answering based on knowledge from external sources  In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [Xiong, Merity,  SocherXiong et al.2016] Xiong, C., Merity, S.,  Socher, R. 2016. Dynamic memory networks for visual and textual question answering  In International Conference on Machine Learning,  2397–2406.
  • [Xu, Chen, Liu, Rohrbach, Darell,  SongXu et al.2017] Xu, X., Chen, X., Liu, C., Rohrbach, A., Darell, T.,  Song, D. 2017. Can you fool ai with adversarial examples on a visual turing test?.
  • [Yang, He, Gao, Deng,  SmolaYang et al.2016] Yang, Z., He, X., Gao, J., Deng, L.,  Smola, A. 2016. Stacked attention networks for image question answering  In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [Yu, Yu, Fan,  TaoYu et al.2017] Yu, Z., Yu, J., Fan, J.,  Tao, D. 2017. Multi-modal factorized bilinear pooling with co-attention learning for visual question answering  In Proc. IEEE Int. Conf. Comp. Vis,  3.
  • [Zhu, Zhao, Huang, Tu,  MaZhu et al.2017] Zhu, C., Zhao, Y., Huang, S., Tu, K.,  Ma, Y. 2017. Structured attentions for visual question answering  In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,  1291–1300.
  • [Zhu  RamananZhu  Ramanan2012] Zhu, X.  Ramanan, D. 2012. Face detection, pose estimation, and landmark localization in the wild  In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on,  2879–2886. IEEE.