Learning to Compose Dynamic Tree Structures for Visual Contexts

12/05/2018 ∙ by Kaihua Tang, et al. ∙ Nanyang Technological University ∙ Columbia University

We propose to compose dynamic tree structures that place the objects in an image into a visual context, helping visual reasoning tasks such as scene graph generation and visual Q&A. Our visual context tree model, dubbed VCTree, has two key advantages over existing structured object representations including chains and fully-connected graphs: 1) the efficient and expressive binary tree encodes the inherent parallel/hierarchical relationships among objects, e.g., "clothes" and "pants" usually co-occur and belong to "person"; 2) the dynamic structure varies from image to image and task to task, allowing more content-/task-specific message passing among objects. To construct a VCTree, we design a score function that calculates the task-dependent validity between each object pair, and the tree is the binary version of the maximum spanning tree derived from the score matrix. Visual contexts are then encoded by a bidirectional TreeLSTM and decoded by task-specific models. We develop a hybrid learning procedure that integrates end-task supervised learning and tree-structure reinforcement learning, where the former's evaluation result serves as a self-critic for the latter's structure exploration. Experimental results on two benchmarks that require reasoning over contexts, Visual Genome for scene graph generation and VQA2.0 for visual Q&A, show that VCTree outperforms state-of-the-art methods while discovering interpretable visual context structures.


1 Introduction

Figure 1: Illustrations of different object-level visual context structures: chains [51], fully-connected graphs [46], and dynamic tree structures constructed by the proposed VCTree. For efficient context encoding with TreeLSTM [41], we transform the multi-branch trees (left) into the equivalent left-child right-sibling binary trees [13], where the left branches (red) indicate hierarchical relations and the right branches (blue) indicate parallel relations. The key advantages of VCTree over chains and graphs are that it is hierarchical, dynamic, and efficient.

Objects are not alone. They are placed in a visual context: a coherent object configuration attributed to the fact that objects co-vary with each other. Extensive studies in cognitive science show that our brains inherently exploit visual contexts to understand cluttered visual scenes comprehensively [4, 6, 34]. For example, even though the girl's legs and the horse in Figure 1 are not fully observed, we can still infer "girl riding horse". Inspired by this, modeling visual contexts is also indispensable in many modern computer vision systems. For example, state-of-the-art CNN architectures capture context through convolutions of various receptive fields and encode it into multi-scale feature map pyramids [7, 26, 54]. Such pixel-level visual context (or local context [15]) arguably plays one of the key roles in closing the performance gap between humans and machines on "mid-level" vision tasks, such as R-CNN based object detection [26, 27, 37], instance segmentation [17, 35], and FCN based semantic segmentation [7, 8, 50].

Modeling visual contexts explicitly on the object-level has also been shown effective in “high-level” vision tasks such as image captioning [48] and visual Q&A [43]. In fact, the visual context serves as a powerful inductive bias that connects objects in a particular layout for high-level reasoning [25, 28, 43, 48]. For example, the spatial layout of “person” on “horse” is useful for determining the relationship “ride”, which is in turn informative to localize the “person” if we want to answer “who is riding on the horse?”. However, those works assume that the context is a scene graph, whose detection per se is a high-level task and not yet reliable. Without high-quality scene graphs, we have to use a prior layout structure. As shown in Figure 1, two popular structures are chains [51] and fully-connected graphs [9, 14, 24, 46, 49], where the context is encoded by sequential models such as bidirectional LSTM [18] for chains and CRF-RNN [55] for graphs.

However, these two prior structures are sub-optimal. First, chains are oversimplified and may only capture simple spatial information or co-occurrence bias; though fully-connected graphs are complete, they lack the discrimination between hierarchical relations, e.g., “helmet affiliated to head”, and parallel relations, e.g., “girl on horse”; in addition, dense connections could also lead to message passing saturation in the subsequent context encoding [46]. Second, visual contexts are inherently content-/task-driven, e.g., the object layouts should vary from content to content, question to question. Therefore, fixed chains and graphs are incompatible with the dynamic nature of visual contexts [44].

In this paper, we propose a model dubbed VCTree, a pioneering attempt to compose dynamic tree structures that encode object-level visual context for high-level visual reasoning tasks such as scene graph generation (SGG) and visual Q&A (VQA). Given a set of object proposals in an image (e.g., obtained from Faster-RCNN [37]), we maintain a trainable, task-specific score matrix of the objects, where each entry indicates the contextual validity of a pair of objects. A maximum spanning tree can then be trimmed from the score matrix, e.g., the multi-branch trees shown in Figure 1. This dynamic structure represents a "hard" hierarchical layout bias of which objects should gain more contextual information from others, e.g., objects on the person's head are most informative given the question "what is on the little girl's head?", while the whole body is more important given the question "Is the girl sitting on the horse correctly?". To avoid the saturation issue caused by parent nodes with densely connected, arbitrary numbers of children, we further morph the multi-branch trees into the equivalent left-child right-sibling binary trees [13], where the left branches (red) indicate hierarchical relations and the right branches (blue) indicate parallel relations, and then use TreeLSTM [41] to encode the context.

As the above VCTree construction is discrete and non-differentiable in nature, we develop a hybrid learning strategy using REINFORCE [19, 38, 45] for tree structure exploration and supervised learning for context encoding and its subsequent tasks. In particular, the evaluation result from the supervised task (Recall for SGG and Accuracy for VQA) is exploited as a "critic" that guides the "action" of tree construction. We evaluate VCTree on two benchmarks: Visual Genome [23] for SGG and VQA2.0 [16] for VQA. For SGG, we achieve a new state-of-the-art on all three standard tasks, i.e., Scene Graph Generation, Scene Graph Classification, and Predicate Classification; for VQA, we achieve competitive single-model performance. In particular, VCTree helps high-level vision models fight against dataset bias. For example, we achieve a 4.1% absolute gain over MOTIFS [51] on the proposed Mean Recall@100 metric for Predicate Classification, and observe a larger improvement on the VQA2.0 balanced pair subset [42] than on the standard validation set. Qualitative results also show that VCTree composes interpretable structures.

2 Related Work

Visual Context Structures. Despite the consensus on the value of visual contexts, existing context models are diversified into a variety of implicit or explicit approaches. Implicit models directly encode surrounding pixels into multi-scale feature maps, e.g., dilated convolution [50] presents an efficient way to enlarge the receptive field and is applicable to various dense prediction tasks [7, 8]; feature pyramid structures [26] combine low-resolution contextual features with high-resolution detailed features, facilitating object detection with rich semantics. Explicit models incorporate contextual cues through object connections. However, such methods [24, 46, 51] group objects into fixed layouts, i.e., chains or graphs.

Learning to Compose Structures. Learning to compose structures is becoming popular in NLP for sentence representation, e.g., Cho et al. [10] applied a gated recursive convolutional neural network (grConv) to control the bottom-up feature flow for a dynamic structure; Choi et al. [11] combine TreeLSTM with Gumbel-Softmax, allowing task-specific tree structures to be learned automatically from plain text. Yet, only a few works compose visual structures for images. Conventional approaches construct a statistical dependency graph/tree for the entire dataset based on object categories [12] or exemplars [30]. Such statistical methods cannot place the objects of each image in a context as a whole and reason in a content-/task-specific fashion. Socher et al. [40] constructed a bottom-up tree structure to parse images; however, their tree structure learning is supervised, while ours is reinforced and does not require tree ground-truth.

Figure 2: The framework of the proposed VCTree model. We extract visual features from proposals and construct a dynamic VCTree using the learnable score matrix. The tree structure is used to encode the object-level visual context, which is then decoded for each specific end-task. Parameters in stages (c) & (d) are trained by supervised learning, while those in stage (b) are trained by REINFORCE with a self-critic baseline.

Visual Reasoning Tasks. The Scene Graph Generation (SGG) task is derived from Visual Relationship Detection (VRD). Early work on VRD [29] treats objects as isolated individuals, while SGG considers each image as a whole. Along with the widely used message passing mechanism [46], a variety of context models [24, 25, 32, 47] have been exploited in SGG to refine local predictions through rich global contexts, making it an ideal competition field for different contextual models. Visual Question Answering (VQA), as a high-level task, bridges the gap between computer vision and natural language processing. State-of-the-art VQA models [1, 3, 42] rely on bag-of-object visual attention, which can be considered a trivial context structure. In contrast, we propose to learn a tree context structure that is dynamic to the visual content and questions.

3 Approach

As illustrated in Figure 2, our VCTree model can be summarized in four steps. (a) We adopt Faster-RCNN [37] to detect object proposals. The visual feature of each proposal is the concatenation of a RoIAlign feature [17] and an 8-dimensional spatial feature, whose elements encode the bounding box coordinates, center, and size, respectively. Note that the visual feature is not limited to bounding boxes; segment features from instance segmentation [17] or panoptic segmentation [22] could also serve as alternatives. (b) In Section 3.1, a learnable score matrix is introduced to construct the VCTree. Since the VCTree construction is discrete in nature and the score matrix is not differentiable with respect to the end-task loss, we develop a hybrid learning strategy in Section 3.5. (c) In Section 3.2, we employ a Bidirectional Tree LSTM (BiTreeLSTM) to encode the contextual cues over the constructed VCTree. (d) The encoded contexts are decoded for each specific end-task, as detailed in Section 3.3 and Section 3.4.
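For illustration, a minimal sketch of step (a)'s proposal feature is given below; the exact normalization of the 8 spatial elements is an assumption for this sketch, not the paper's specification.

```python
import numpy as np

def spatial_feature(box, img_w, img_h):
    """Hypothetical 8-dim spatial feature: corners, center, and size,
    normalized by image width/height (an assumption for illustration)."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    w, h = x2 - x1, y2 - y1
    return np.array([x1 / img_w, y1 / img_h, x2 / img_w, y2 / img_h,
                     cx / img_w, cy / img_h, w / img_w, h / img_h])

def proposal_feature(roi_align_feat, box, img_w, img_h):
    # Concatenate appearance (RoIAlign) and spatial cues into one vector.
    return np.concatenate([roi_align_feat, spatial_feature(box, img_w, img_h)])
```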

3.1 VCTree Construction

VCTree construction aims to learn a score matrix that approximates the task-dependent validity between each object pair. Two principles guide its formulation: 1) inherent object correlations should be maintained, e.g., "man wears helmet" in Figure 2; 2) task-related object pairs should score higher than irrelevant ones, e.g., given the question "what is on the man's head?", the "man-helmet" pair should be more important than the "man-motorcycle" and "helmet-motorcycle" pairs. Therefore, we define each element of the matrix as the product of an object correlation term and a pairwise task-dependency term:

$$S_{ij} = f(\mathbf{x}_i, \mathbf{x}_j)\cdot g(\mathbf{x}_i, \mathbf{x}_j, \mathbf{q}), \tag{1}$$

where σ is the sigmoid function; q is the task feature, e.g., the question feature encoded by a GRU in VQA; the object correlation f is produced by an MLP (multi-layer perceptron) over the pair of object features, followed by σ; and the task-dependency g is built from the per-object object-task correlations used in VQA, which will be introduced in Section 3.4. In SGG, g is set to 1 for every pair, as we assume that each object pair contributes equally without a question prior. We pretrain f on Visual Genome [23] to obtain a reasonable binary prior of whether two objects are related. Yet, such a pretrained model is not perfect due to the lack of a coherent graph-level constraint or question prior, so it will be further fine-tuned in Section 3.5.
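For illustration, a minimal sketch of how such a score matrix could be computed is given below; the MLP architectures and the factorization of the task-dependency term into per-object relevance scores are assumptions for this sketch, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class ScoreMatrix(nn.Module):
    """Illustrative score matrix S_ij = f(x_i, x_j) * g(x_i, x_j, q)."""
    def __init__(self, obj_dim, q_dim, hidden=512):
        super().__init__()
        self.pair_mlp = nn.Sequential(             # correlation term f
            nn.Linear(2 * obj_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.task_mlp = nn.Sequential(             # per-object task relevance
            nn.Linear(obj_dim + q_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x, q=None):                  # x: (n, obj_dim), q: (q_dim,)
        n = x.size(0)
        pairs = torch.cat([x.unsqueeze(1).expand(n, n, -1),
                           x.unsqueeze(0).expand(n, n, -1)], dim=-1)
        f = torch.sigmoid(self.pair_mlp(pairs)).squeeze(-1)       # (n, n)
        if q is None:                              # SGG: no question prior, g = 1
            return f
        g_obj = torch.sigmoid(
            self.task_mlp(torch.cat([x, q.expand(n, -1)], dim=-1))).squeeze(-1)
        g = g_obj.unsqueeze(1) * g_obj.unsqueeze(0)                # pairwise g (assumed form)
        return f * g
```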

Considering the score matrix as a symmetric adjacency matrix, we can obtain a maximum spanning tree using Prim's algorithm [36], with the root (source node) chosen as the object with the highest overall validity. In a nutshell, as illustrated in Figure 3, we construct the tree recursively by connecting the node in the remaining pool that has the highest validity with respect to the current tree. Note that during the tree structure exploration in Section 3.5, each step of the above tree construction is sampled from all possible choices of a multinomial distribution with probabilities proportional to the validity scores. The resultant tree is multi-branch and is merely a sparse graph with only one kind of connection, which still cannot discriminate the hierarchical and parallel relations in the subsequent context encoding. To this end, we convert the multi-branch tree into an equivalent binary tree, i.e., the VCTree, by changing non-leftmost edges into right branches as in Figure 1. In this fashion, the right branches (blue) indicate parallel contexts, and the left ones (red) indicate hierarchical contexts. Such a binary tree structure achieves significant improvements in our SGG and VQA experiments compared to its multi-branch alternative.
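The construction can be sketched as follows; this simplified version uses a greedy root choice and deterministic selection, omitting the sampled exploration of Section 3.5, so names and details are illustrative.

```python
import numpy as np

def max_spanning_tree(S):
    """Greedy (Prim-style) maximum spanning tree over a symmetric score matrix S.
    Returns (root, children), where children[i] is the ordered child list of i.
    Choosing the root as the node with the largest total score is an assumption."""
    n = S.shape[0]
    root = int(np.argmax(S.sum(axis=1)))
    in_tree, pool = [root], set(range(n)) - {root}
    children = {i: [] for i in range(n)}
    while pool:
        # pick the (tree node, pool node) pair with the highest validity
        t, p = max(((t, p) for t in in_tree for p in pool), key=lambda e: S[e[0], e[1]])
        children[t].append(p)
        in_tree.append(p)
        pool.remove(p)
    return root, children

def to_left_child_right_sibling(node, children):
    """Convert the multi-branch tree into the equivalent binary VCTree:
    first child -> left branch (hierarchical), next sibling -> right branch (parallel)."""
    kids = children[node]
    if not kids:
        return {"id": node, "left": None, "right": None}
    binary_kids = [to_left_child_right_sibling(k, children) for k in kids]
    for a, b in zip(binary_kids, binary_kids[1:]):   # chain siblings along right branches
        a["right"] = b
    return {"id": node, "left": binary_kids[0], "right": None}
```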

Figure 3: The maximum spanning tree obtained from the score matrix. In each step, the node in the remaining pool with the highest validity score is connected to the current tree.

3.2 TreeLSTM Context Encoding

Given the above constructed VCTree, we adopt BiTreeLSTM as our context encoder:

$$\mathcal{D} = \mathrm{BiTreeLSTM}\big(\{\mathbf{z}_i\}_{i=1,\dots,n}\big), \tag{2}$$

where $\mathbf{z}_i$ is the input node feature, which will be specified in each task, and $\mathcal{D} = \{\mathbf{d}_i\}$ is the encoded object-level visual context. Each $\mathbf{d}_i$ is the concatenation of the hidden states from the two TreeLSTM [41] directions:

$$\overrightarrow{\mathbf{h}}_i = \mathrm{TreeLSTM}\big(\mathbf{z}_i, \overrightarrow{\mathbf{h}}_{p}\big), \tag{3}$$
$$\overleftarrow{\mathbf{h}}_i = \mathrm{TreeLSTM}\big(\mathbf{z}_i, [\overleftarrow{\mathbf{h}}_{l}; \overleftarrow{\mathbf{h}}_{r}]\big), \tag{4}$$

where the right and left arrows denote the top-down and bottom-up directions, respectively, and each $\mathbf{d}_i = [\overrightarrow{\mathbf{h}}_i; \overleftarrow{\mathbf{h}}_i]$; we slightly abuse the subscripts $p$, $l$, and $r$ to denote the parent, left child, and right child of node $i$. The order of the concatenation in Eq. (4) explicitly discriminates the left and right branches in context encoding. We use zero vectors to pad all the missing branches.
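Conceptually, the bidirectional encoding amounts to one bottom-up and one top-down traversal of the binary tree; the sketch below is illustrative, with `cell_up` and `cell_down` standing in for the actual TreeLSTM cells (the appendix describes the cells themselves).

```python
import numpy as np

HID = 512  # hidden size (illustrative)

def bottom_up(node, z, cell_up, h_up):
    """Post-order pass: a node sees the ordered hidden states of its
    left and right children; missing branches are zero-padded."""
    if node is None:
        return np.zeros(HID)
    h_l = bottom_up(node["left"], z, cell_up, h_up)
    h_r = bottom_up(node["right"], z, cell_up, h_up)
    h_up[node["id"]] = cell_up(z[node["id"]], np.concatenate([h_l, h_r]))
    return h_up[node["id"]]

def top_down(node, z, cell_down, h_down, h_parent=None):
    """Pre-order pass: a node sees its parent's hidden state (zeros at the root)."""
    if node is None:
        return
    if h_parent is None:
        h_parent = np.zeros(HID)
    h_down[node["id"]] = cell_down(z[node["id"]], h_parent)
    top_down(node["left"], z, cell_down, h_down, h_down[node["id"]])
    top_down(node["right"], z, cell_down, h_down, h_down[node["id"]])

def bitreelstm_encode(root, z, cell_up, cell_down):
    h_up, h_down = {}, {}
    bottom_up(root, z, cell_up, h_up)
    top_down(root, z, cell_down, h_down)
    # d_i concatenates the hidden states from both directions
    return {i: np.concatenate([h_down[i], h_up[i]]) for i in h_up}
```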

3.3 Scene Graph Generation Model

Figure 4: The overview of our SGG Model. The object context feature will be used to decode object categories, and the pairwise relationship decoding jointly fuses the relation context feature, RoIAlign feature of union box, and bounding box feature, before prediction.

Now we detail the implementation of Eq. (2) and how its outputs are decoded for the SGG task, as illustrated in Figure 4.

Object Context Encoding. We employ the BiTreeLSTM of Eq. (2) to encode the object context representations. The input of Eq. (2) is set to the concatenation of the object visual feature and an embedding of the N-way Faster-RCNN class probabilities, where a learnable embedding matrix maps each original label distribution into a dense vector.

Relation Context Encoding. We apply an additional BiTreeLSTM, taking the above object context as input, to further encode the relation context.

Context Decoding. The goal of SGG is to detect objects and then predict their relationships. Similar to [51], we adopt dynamic object prediction, which can be viewed as a top-down decoding process using Eq. (3): the object class of a child depends on its parent. Specifically, we set the input of Eq. (3) to the concatenation of the object context feature and an embedding of the predicted label distribution of the node's parent; the output hidden state is then passed to a softmax classifier to obtain the object label distribution.

The relationship prediction is performed in a pairwise fashion. First, we collect three pairwise features for each object pair: (1) the context feature, built from the relation contexts of the two objects; (2) the bounding box pair feature, computed from the union box and the intersection box; (3) the RoIAlign feature [17] of the union bounding box of the object pair. All three are projected to the same dimension. Then, we fuse them, using element-wise products, into a final pairwise feature before feeding it into the softmax predicate classifier.
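One plausible realization of this pairwise fusion is sketched below; the specific projections and the use of a single element-wise product are assumptions for illustration rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn

class PairwisePredicateHead(nn.Module):
    """Illustrative pairwise predicate classifier: project the three pairwise
    features to a common dimension and fuse them with element-wise products."""
    def __init__(self, ctx_dim, box_dim, roi_dim, dim, num_predicates):
        super().__init__()
        self.proj_ctx = nn.Linear(ctx_dim, dim)
        self.proj_box = nn.Linear(box_dim, dim)
        self.proj_roi = nn.Linear(roi_dim, dim)
        self.classifier = nn.Linear(dim, num_predicates)

    def forward(self, ctx_pair, box_pair, union_roi):
        fused = self.proj_ctx(ctx_pair) * self.proj_box(box_pair) * self.proj_roi(union_roi)
        return self.classifier(fused)          # predicate logits (pre-softmax)
```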

3.4 Visual Question Answering Model

Now we detail the implementation of Eq. (2) for VQA, and illustrate our VQA model in Figure 5.

Context Encoding. The context feature in VQA is directly encoded from the bounding-box visual features by Eq. (2).

Figure 5: The overview of our VQA framework. It contains two multimodal attention models, one for the visual features and one for the context features. The outputs of both models are concatenated and passed through a question-guided gate before answer prediction.

Multimodal Attention Feature. We adopt a popular attention model from previous work [1, 42] to calculate the multimodal joint feature for each question and image pair:

$$\mathbf{m} = f\big(\hat{\mathbf{x}}, \mathbf{q}\big), \qquad \hat{\mathbf{x}} = \sum\nolimits_i \alpha_i\, \mathbf{x}_i, \tag{5}$$

where $\mathbf{q}$ is the question feature from a one-layer GRU encoding the sentence; $\hat{\mathbf{x}}$ is the attentive image feature calculated from the input feature set, with attention weights $\alpha_i$ derived from the object-task correlations, i.e., the scalar outputs of an MLP over each object feature and the question; $f(\cdot,\cdot)$ can be any multi-modal feature fusion function, and in particular we adopt the fusion of [53], which projects both inputs into the same dimension. Therefore, we can use Eq. (5) to obtain both the multimodal visual attention feature, by setting the input features to the visual features, and the multimodal contextual attention feature, by setting them to the context features from Eq. (2).
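A rough sketch of Eq. (5) is given below; the scoring MLP and the projected element-wise-product fusion are illustrative stand-ins, not the exact attention or the fusion of [53].

```python
import torch
import torch.nn as nn

class MultimodalAttention(nn.Module):
    """Illustrative attention of Eq. (5): score each object against the question,
    pool the features with softmax weights, then fuse with the question."""
    def __init__(self, feat_dim, q_dim, joint_dim):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(feat_dim + q_dim, joint_dim),
                                   nn.ReLU(), nn.Linear(joint_dim, 1))
        self.proj_x = nn.Linear(feat_dim, joint_dim)
        self.proj_q = nn.Linear(q_dim, joint_dim)

    def forward(self, feats, q):               # feats: (n, feat_dim), q: (q_dim,)
        n = feats.size(0)
        alpha = torch.softmax(
            self.score(torch.cat([feats, q.expand(n, -1)], dim=-1)).squeeze(-1), dim=0)
        pooled = (alpha.unsqueeze(-1) * feats).sum(dim=0)          # attentive feature
        return torch.relu(self.proj_x(pooled)) * torch.relu(self.proj_q(q))  # fused m
```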

Question Guided Gate Decoding. However, the importance of the visual and contextual attention features varies from question to question, e.g., "is there a dog?" only requires visual features for detection, while "is the man dressed formally?" is highly context dependent. Inspired by [39], we adopt a question-guided gate to select the most relevant channels of the concatenated feature. The gate vector is defined as:

(6)

where the one-hot question type vector, determined by the prefixed words of the question, is embedded by a learnable matrix, and σ denotes the sigmoid function.

Finally, we apply the gate to the concatenated attention features by element-wise multiplication to obtain the final VQA feature, and feed it into the softmax answer classifier.
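A minimal sketch of the gate is given below; conditioning the gate on both the question feature and the question-type embedding is an assumption for this sketch, and all names are illustrative.

```python
import torch
import torch.nn as nn

class QuestionGuidedGate(nn.Module):
    """Illustrative question-guided gate: a sigmoid vector, conditioned on the
    question, reweights the concatenated visual/contextual attention features."""
    def __init__(self, q_dim, num_qtypes, qtype_dim, feat_dim):
        # feat_dim must equal the size of the concatenated [m_visual; m_context]
        super().__init__()
        self.qtype_embed = nn.Embedding(num_qtypes, qtype_dim)
        self.gate = nn.Linear(q_dim + qtype_dim, feat_dim)

    def forward(self, m_visual, m_context, q, qtype_idx):
        e = self.qtype_embed(qtype_idx)                        # question-type embedding
        g = torch.sigmoid(self.gate(torch.cat([q, e], dim=-1)))
        return g * torch.cat([m_visual, m_context], dim=-1)   # gated final feature
```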

Scene Graph Generation Scene Graph Classification Predicate Classification
Model R@20 R@50 R@100 R@20 R@50 R@100 R@20 R@50 R@100
VRD [29] - 0.3 0.5 - 11.8 14.1 - 27.9 35.0
AsscEmbed [32] 6.5 8.1 8.2 18.2 21.8 22.6 47.9 54.1 55.4
IMP [46] 14.6 20.7 24.5 31.7 34.6 35.4 52.7 59.3 61.3
TFR [20] 3.4 4.8 6.0 19.6 24.3 26.6 40.1 51.9 58.3
FREQ [51] 20.1 26.2 30.1 29.3 32.3 32.9 53.6 60.6 62.2
MOTIFS [51] 21.4 27.2 30.3 32.9 35.8 36.5 58.5 65.2 67.1
Graph-RCNN [47] - 11.4 13.7 - 29.6 31.6 - 54.2 59.1
Chain 21.2 27.1 30.3 33.3 36.1 36.8 59.4 66.0 67.7
Overlap 21.4 27.3 30.4 33.7 36.5 37.1 59.5 66.0 67.8
Multi-Branch 21.5 27.3 30.6 34.3 37.1 37.8 59.5 66.1 67.8
VCTree-SL 21.7 27.7 31.1 35.0 37.9 38.6 59.8 66.2 67.9
VCTree-HL 22.0 27.9 31.3 35.2 38.1 38.8 60.1 66.4 68.1
Table 1: SGG performances (%) of various methods; several baselines use the same Faster-RCNN detector as ours. IMP is reported from the re-implemented version in [51].

3.5 Hybrid Learning

Due to the discrete nature of VCTree construction, the score matrix cannot be trained by gradients back-propagated from the end-task loss. Inspired by [19], we use a hybrid learning strategy that combines reinforcement learning, i.e., policy gradient [45], for the parameters $\theta$ of the score matrix used in tree construction, and supervised learning for the remaining parameters. Suppose a layout $l$, i.e., a constructed VCTree, is sampled from the tree-construction policy $\pi(l;\theta)$ of Section 3.1, conditioned on the given image and the task, e.g., the question in VQA; to avoid clutter, we omit the image and task from the notation. Then, we define the reinforcement learning loss as:

$$\mathcal{L}_r(\theta) = -\,\mathbb{E}_{l\sim\pi(l;\,\theta)}\big[r(l)\big], \tag{7}$$

where Eq. (7) aims to minimize the negative expected reward $r(l)$, which can be an end-task evaluation metric such as Recall@100 for SGG or Accuracy for VQA. The gradient of Eq. (7) is $\nabla_\theta \mathcal{L}_r(\theta) = -\,\mathbb{E}_{l\sim\pi(l;\,\theta)}\big[r(l)\,\nabla_\theta \log \pi(l;\theta)\big]$. Since it is impractical to enumerate all possible layouts, we use Monte-Carlo sampling to estimate the gradient:

$$\nabla_\theta \mathcal{L}_r(\theta) \approx -\,\frac{1}{M}\sum_{m=1}^{M} r(l_m)\,\nabla_\theta \log \pi(l_m;\theta), \tag{8}$$

where we set M to 1 in our implementation.

To reduce the gradient variance, we apply a self-critic baseline [38]: the reward $r(l_m)$ in Eq. (8) is replaced by $r(l_m) - r(\hat{l})$, where $\hat{l}$ is the greedily constructed tree without sampling. We observe faster convergence than with a traditional moving-average baseline [31].

The overall hybrid learning alternates between supervised learning and reinforcement learning: we first train the supervised end-task on the pretrained score matrix, then fix the end-task as the reward function to learn the tree-construction policy, and finally update the supervised end-task with the newly learned policy. The latter two stages are repeated alternately twice in our model.
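As a rough sketch of the resulting update (with M = 1), the sampled tree's log-probability is weighted by the self-critical advantage; tensor names are illustrative, not the paper's code.

```python
import torch

def self_critic_policy_loss(log_prob_sampled, reward_sampled, reward_greedy):
    """Illustrative self-critical REINFORCE loss for one sampled tree (M = 1):
    the greedy tree's reward serves as the baseline, so only trees that beat
    the current greedy construction receive a positive learning signal."""
    advantage = reward_sampled - reward_greedy        # r(l) - r(l_greedy)
    return -advantage * log_prob_sampled              # minimize negative expected reward

# usage sketch: log_prob_sampled is the sum of log-probabilities of the
# multinomial choices made while sampling the tree; rewards come from the
# frozen end-task (e.g., Recall@100 for SGG, Accuracy for VQA):
# loss = self_critic_policy_loss(log_p, r_sample, r_greedy); loss.backward()
```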

4 Experiments on Scene Graph Generation

4.1 Settings

Dataset. Visual Genome (VG) [23] is a popular benchmark for SGG. It contains 108,077 images with tens of thousands of unique object and predicate categories, yet most categories have very few instances. Therefore, previous works [25, 46, 52] proposed various VG splits that remove rare categories. We adopted the most popular split from [46], which selects the top-150 object categories and top-50 predicate categories by frequency. The dataset is divided into a training set and a test set with a 70%/30% split. We further sampled 5,000 images from the training set as a validation set for hyper-parameter tuning.

Protocols. We followed three conventional protocols to evaluate our SGG model: (1) Scene Graph Generation (SGGen): given an image, detect object bounding boxes and their categories, and predict their relationships; (2) Scene Graph Classification (SGCls): given ground-truth object bounding boxes in an image, predict the object categories and their relationships; (3) Predicate Classification (PredCls): given the object categories and their bounding boxes in the image, predict their relationships.

Metrics. Since the annotation in VG is incomplete and biased, we followed the conventional Recall@K (R@K, K = 20, 50, 100) as the evaluation metric [29, 46, 51]. However, it is well known that SGG models trained on biased datasets such as VG perform poorly on less frequent categories. To this end, we introduce a balanced metric, Mean Recall (mR@K): it calculates the recall for each predicate category independently and then averages the results, so each category contributes equally. Such a metric reduces the influence of common yet uninformative predicates, e.g., "on" and "of", and gives equal attention to infrequent predicates, e.g., "riding" and "carrying", which are more valuable for high-level reasoning.
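A minimal sketch of the mR@K computation is given below, assuming per-predicate hit and ground-truth counts have already been collected at the chosen K; names are illustrative.

```python
def mean_recall_at_k(matches_per_predicate, gt_per_predicate):
    """Illustrative mR@K: compute Recall@K separately for every predicate
    category, then average, so frequent predicates such as "on" no longer
    dominate the metric. matches_per_predicate[p] counts ground-truth triplets
    of predicate p hit within the top-K predictions; gt_per_predicate[p] counts
    all ground-truth triplets of predicate p."""
    recalls = []
    for p, num_gt in gt_per_predicate.items():
        if num_gt > 0:
            recalls.append(matches_per_predicate.get(p, 0) / num_gt)
    return sum(recalls) / len(recalls) if recalls else 0.0
```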

SGGen SGCls PredCls
Model mR@100 mR@100 mR@100
MOTIFS [51] 6.6 8.2 15.3
FREQ [51] 7.1 8.5 16.0
VCTree-HL 8.0 10.8 19.4
Table 2: Mean recall (%) of various methods across all the 50 predicate categories.

4.2 Implementation Details

We adopted Faster-RCNN [37] with a VGG backbone to detect object bounding boxes and extract RoI features. Since SGG performance highly depends on the underlying detector, we used the same detector parameters as [51] for fair comparison. The object correlation term in Eq. (1) is pretrained on ground-truth bounding boxes with class-agnostic relationships (i.e., foreground/background relationships), using all possible symmetric pairs without sampling. In SGGen, the top-64 object proposals were selected after non-maximal suppression (NMS) with 0.3 IoU. We set the background/foreground ratio for predicate classification to 3 and capped the number of training samples at 64 (retaining all foreground pairs when possible). Our model is optimized by SGD with momentum for both supervised learning and reinforcement learning.

Figure 6: The statistics of left-branch (hierarchical) nodes and right-branch (parallel) nodes of the “street” category.

4.3 Ablation Studies

We investigated the influence of different structure-construction policies, reported in the bottom half of Table 1. The ablative methods are: (1) Chain: sorting all the objects by score and linking them into a chain, which differs from the left-to-right ordered chain in MOTIFS [51]; (2) Overlap: iteratively constructing a binary tree by selecting the node that overlaps the largest number of other objects as the parent, and dividing the remaining nodes into left/right sub-trees by the relative positions of their bounding boxes; (3) Multi-Branch: the maximum spanning tree generated from the score matrix, using a Child-Sum TreeLSTM [41] to incorporate context; (4) VCTree-SL: the proposed VCTree trained by supervised learning; (5) VCTree-HL: the complete version of VCTree, trained by the hybrid learning for structure exploration in Section 3.5. Since Multi-Branch turns out to be significantly worse than VCTree, there is no need to conduct the hybrid learning experiment on Multi-Branch. We observe that VCTree performs better than the other structures, and that it is further improved by hybrid learning for structure exploration.

VQA2.0 Validation Accuracy
Model Yes/No Number Other All Balanced Pairs
Graph 81.8 44.9 56.6 64.5 36.3
Chain 81.8 44.5 56.9 64.6 36.3
Overlap 81.8 44.8 57.0 64.7 36.4
Multi-Branch 82.1 44.3 56.9 64.7 36.6
VCTree-SL 82.3 45.0 57.0 64.9 36.9
VCTree-HL 82.6 45.1 57.1 65.1 37.2
Table 3: Accuracies (%) of various context structures on the VQA2.0 validation set.

4.4 Comparisons with State-of-the-Arts

Comparing Methods. We compared VCTree with state-of-the-art methods in Table 1: (1) VRD [29] and FREQ [51] are methods that do not use visual contexts; (2) AssocEmbed [32] assembles implicit contextual features via a stacked hourglass backbone [33]; (3) IMP [46], TFR [20], MOTIFS [51], and Graph-RCNN [47] are explicit context models with a variety of structures.

Quantitative Analysis. From Table 1, the proposed VCTree achieves the best performance, surpassing the previous state-of-the-art MOTIFS [51]. Interestingly, the Overlap tree and Multi-Branch tree are already better than the non-tree context models. From Table 2, the proposed VCTree-HL shows larger absolute gains on PredCls under mR@100, which indicates that our model learns non-trivial visual context, i.e., not merely the class distribution bias as in FREQ and, partially, in MOTIFS. Note that MOTIFS [51] is even worse than its FREQ [51] baseline under mR@100.

Qualitative Analysis. To better understand what context is learned by VCTree, we visualize statistics of left-/right-branch nodes for nodes classified as "street" in Figure 6. The left pie (hierarchical relations) shows that the node categories are long-tailed, i.e., the top-10 categories cover 73% of the instances, while the right pie (parallel relations) is more uniformly distributed. This demonstrates that VCTree successfully captures the two types of context. More qualitative examples of VCTrees and their generated scene graphs are shown in Figure 7. The common errors are generally synonymous labels, e.g., "jeans" vs. "pants" and "man" vs. "person", and over-interpretation, e.g., the "tail" of the bottom-left "dog" is labeled as "leg" because it appears where a "leg" would be.

VQA2.0 test-dev
Model Yes/No Number Other All
Teney [42] 81.82 44.21 56.05 65.32
MUTAN [5] 82.88 44.54 56.50 66.01
MLB [21] 83.58 44.92 56.34 66.27
DA-NTN [3] 84.29 47.14 57.92 67.56
Count [53] 83.14 51.62 58.97 68.09
Chain 82.74 47.31 58.93 67.42
Graph 83.53 47.09 58.6 67.56
VCTree-HL 84.28 47.78 59.11 68.19
Table 4: Single-model accuracies (%) on VQA2.0 test-dev, where MUTAN and MLB are re-implemented versions from [3].
VQA2.0 test-standard
Model Yes/No Number Other All
Teney [42] 82.20 43.90 56.26 65.67
MUTAN [5] 83.06 44.28 56.91 66.38
MLB [21] 83.96 44.77 56.52 66.62
DA-NTN [3] 84.60 47.13 58.20 67.94
Count [53] 83.56 51.39 59.11 68.41
Chain 83.06 47.38 58.95 67.68
Graph 84.03 47.08 58.82 68.0
VCTree-HL 84.55 47.36 59.34 68.49
Table 5: Single-model accuracies (%) on VQA2.0 test-standard, where MUTAN and MLB are re-implemented versions from [3].
Figure 7: Left: the learned tree structures and generated scene graphs in VG. Black indicates correctly detected objects or predicates; red indicates misclassified ones; blue indicates correct predictions that are not labeled in the ground-truth. Right: interpretable and dynamic trees subject to different questions in VQA2.0.

5 Experiments on Visual Q&A

5.1 Settings

Datasets. We evaluated the proposed VQA model on VQA2.0 [16]. Compared with VQA1.0 [2], VQA2.0 has more question-image pairs for training (443,757) and validation (214,354), and all the question-answer pairs are balanced by making sure the same question can have different answers. In VQA2.0, the ground-truth score of a candidate answer is the average, over all 10-choose-9 subsets of the 10 human answers, of min(#matching answers / 3, 1). Question-answer pairs are organized into three answer types: "Yes/No", "Number", and "Other". There are also 65 question types determined by the prefixed words of the questions, which we use to generate the question-guided gates. We also tested our models on a balanced subset of the validation set, called Balanced Pairs [42], which requires the same question to be asked on two different images with two different yet perfect (ground-truth score 1.0) answers. Since Balanced Pairs strictly removes question-related bias, it reflects the ability of a context model to distinguish subtle differences between images.
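For reference, the VQA2.0 soft-accuracy computation can be sketched as follows.

```python
from itertools import combinations

def vqa_accuracy(candidate, human_answers):
    """VQA2.0 soft accuracy: average over all 9-answer subsets of the 10 human
    answers of min(#matches / 3, 1). (Sketch of the official metric.)"""
    assert len(human_answers) == 10
    scores = []
    for subset in combinations(human_answers, 9):
        matches = sum(a == candidate for a in subset)
        scores.append(min(matches / 3.0, 1.0))
    return sum(scores) / len(scores)
```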

5.2 Implementation Details

We employed simple text preprocessing for questions and answers, converting all characters to lower case and removing special characters. Questions were encoded with a vocabulary of size 13,758 without trimming. Answers used a vocabulary of 3,000 entries selected by frequency. For fair comparison, we used the same bottom-up features [1] as previous methods [1, 3, 42, 53], which contain 10 to 100 object proposals per image extracted by Faster-RCNN [37]. We used the same Faster-RCNN detector to pretrain the object correlation term. Since candidate answers are represented by probabilities rather than one-hot vectors in VQA2.0, we used a cross-entropy loss over soft targets, i.e., the probabilities of the ground-truth candidate answers. We used the Adam optimizer, with the learning rate decayed by a factor of 0.5 every 20 epochs.

5.3 Ablation Studies

In addition to the five structure-construction policies introduced in Section 4.3, we also implemented a fully-connected graph structure using the message passing mechanism of [46]. From Table 3, the proposed VCTree-HL outperforms all the other context models on all three answer types.

We further evaluated the above context models on the VQA2.0 balanced pair subset [42] (the last column of Table 3), and found that the absolute gains of VCTree-HL over the other structures are even larger than those on the original validation set. Meanwhile, as reported in [42], different architectures or hyper-parameters in non-contextual VQA models normally yield smaller improvements on the balanced pair subset than on the overall validation set. This suggests that VCTree indeed uses better context structures to alleviate the question-answer bias in VQA.

5.4 Comparisons with State-of-the-Arts

Comparing Methods. Tables 4 & 5 report the single-model performance of various state-of-the-art methods [3, 5, 21, 42, 53] on both the test-dev and test-standard sets. For fair comparison, all reported methods use the same Faster-RCNN features [1] as ours.

Quantitative Analysis. The proposed VCTree-HL shows the best overall performance on both test-dev and test-standard. Note that although Count [53] is close to VCTree in overall performance, it mainly improves the "Number" questions through an elaborately designed counting module, while the proposed VCTree is a more general solution.

Qualitative Analysis. We visualized several examples of VCTree-HL on the validation set. They illustrate that the proposed VCTree is able to learn dynamic structures with interpretability, e.g., in Figure 7, given the right-middle image and the question "Is there any snow on the trees?", the generated VCTree locates the "tree" and then searches for the "snow", while for the question "What sport is the man doing?", the "man" becomes the root.

6 Conclusions

In this paper, we proposed a dynamic tree structure called VCTree to capture task-specific visual contexts, which can be encoded to support two high-level vision tasks: SGG and VQA. By exploiting VCTree, we observed consistent performance gains in SGG on Visual Genome and in VQA on VQA2.0, compared to models with or without visual contexts. Besides, to justify that VCTree learns non-trivial contexts, we conducted additional experiments against the category bias in SGG and the question-answer bias in VQA, respectively. In the future, we intend to study the potential of a dynamic forest as the underlying context structure.

References

  • [1] P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang. Bottom-up and top-down attention for image captioning and visual question answering. In CVPR, volume 3, page 6, 2018.
  • [2] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. Lawrence Zitnick, and D. Parikh. Vqa: Visual question answering. In ICCV, pages 2425–2433, 2015.
  • [3] Y. Bai, J. Fu, T. Zhao, and T. Mei. Deep attention neural tensor network for visual question answering. In ECCV, page 20. Springer, 2018.
  • [4] M. Bar. Visual objects in context. Nature Reviews Neuroscience, 5(8):617, 2004.
  • [5] H. Ben-Younes, R. Cadene, M. Cord, and N. Thome. Mutan: Multimodal tucker fusion for visual question answering. In ICCV, volume 3, 2017.
  • [6] I. Biederman, R. J. Mezzanotte, and J. C. Rabinowitz. Scene perception: Detecting and judging objects undergoing relational violations. Cognitive Psychology, 14(2):143–177, 1982.
  • [7] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. TPAMI, 40(4):834–848, 2018.
  • [8] L.-C. Chen, G. Papandreou, F. Schroff, and H. Adam. Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587, 2017.
  • [9] X. Chen, L.-J. Li, L. Fei-Fei, and A. Gupta. Iterative visual reasoning beyond convolutions. In CVPR, 2018.
  • [10] K. Cho, B. Van Merriënboer, D. Bahdanau, and Y. Bengio. On the properties of neural machine translation: Encoder-decoder approaches. In SSST-8, 2014.
  • [11] J. Choi, K. M. Yoo, and S.-g. Lee. Learning to compose task-specific tree structures. In AAAI, 2018.
  • [12] M. J. Choi, A. Torralba, and A. S. Willsky. A tree-based context model for object recognition. TPAMI, 34(2):240–252, 2012.
  • [13] T. H. Cormen, C. Stein, R. L. Rivest, and C. E. Leiserson. Introduction to Algorithms. McGraw-Hill Higher Education, 2nd edition, 2001.
  • [14] B. Dai, Y. Zhang, and D. Lin. Detecting visual relationships with deep relational networks. In CVPR. IEEE, 2017.
  • [15] S. K. Divvala, D. Hoiem, J. H. Hays, A. A. Efros, and M. Hebert. An empirical study of context in object detection. In CVPR, pages 1271–1278. IEEE, 2009.
  • [16] Y. Goyal, T. Khot, D. Summers-Stay, D. Batra, and D. Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In CVPR, volume 1, page 3, 2017.
  • [17] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask r-cnn. In ICCV. IEEE, 2017.
  • [18] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9:1735–1780, 1997.
  • [19] R. Hu, J. Andreas, M. Rohrbach, T. Darrell, and K. Saenko. Learning to reason: End-to-end module networks for visual question answering. In ICCV, pages 804–813. IEEE, 2017.
  • [20] S. Jae Hwang, S. N. Ravi, Z. Tao, H. J. Kim, M. D. Collins, and V. Singh. Tensorize, factorize and regularize: Robust visual relationship learning. In CVPR, pages 1014–1023, 2018.
  • [21] J.-H. Kim, K.-W. On, W. Lim, J. Kim, J.-W. Ha, and B.-T. Zhang. Hadamard product for low-rank bilinear pooling. In ICLR, 2016.
  • [22] A. Kirillov, K. He, R. Girshick, C. Rother, and P. Dollár. Panoptic segmentation. arXiv preprint arXiv:1801.00868, 2018.
  • [23] R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L.-J. Li, D. A. Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. IJCV, 123(1):32–73, 2017.
  • [24] Y. Li, W. Ouyang, B. Zhou, J. Shi, C. Zhang, and X. Wang. Factorizable net: An efficient subgraph-based framework for scene graph generation. In ECCV, pages 346–363. Springer, 2018.
  • [25] Y. Li, W. Ouyang, B. Zhou, K. Wang, and X. Wang. Scene graph generation from objects, phrases and caption regions. In ICCV, 2017.
  • [26] T.-Y. Lin, P. Dollár, R. B. Girshick, K. He, B. Hariharan, and S. J. Belongie. Feature pyramid networks for object detection. In CVPR, volume 1, page 4, 2017.
  • [27] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg. Ssd: Single shot multibox detector. In ECCV, pages 21–37. Springer, 2016.
  • [28] Y. Liu, R. Wang, S. Shan, and X. Chen. Structure inference net: Object detection using scene-level context and instance-level relationships. In CVPR, pages 6985–6994, 2018.
  • [29] C. Lu, R. Krishna, M. Bernstein, and L. Fei-Fei. Visual relationship detection with language priors. In ECCV, pages 852–869. Springer, 2016.
  • [30] T. Malisiewicz and A. Efros. Beyond categories: The visual memex model for reasoning about object relationships. In NIPS, 2009.
  • [31] V. Mnih, N. Heess, A. Graves, et al. Recurrent models of visual attention. In NIPS, pages 2204–2212, 2014.
  • [32] A. Newell and J. Deng. Pixels to graphs by associative embedding. In NIPS, 2017.
  • [33] A. Newell, K. Yang, and J. Deng. Stacked hourglass networks for human pose estimation. In ECCV, pages 483–499. Springer, 2016.
  • [34] A. Oliva and A. Torralba. The role of context in object recognition. Trends in Cognitive Sciences, 11(12):520–527, 2007.
  • [35] P. O. Pinheiro, T.-Y. Lin, R. Collobert, and P. Dollár. Learning to refine object segments. In ECCV. Springer, 2016.
  • [36] R. C. Prim. Shortest connection networks and some generalizations. Bell System Technical Journal, 36(6):1389–1401, 1957.
  • [37] S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In NIPS, pages 91–99, 2015.
  • [38] S. J. Rennie, E. Marcheret, Y. Mroueh, J. Ross, and V. Goel. Self-critical sequence training for image captioning. In CVPR, July 2017.
  • [39] Y. Shi, T. Furlanello, S. Zha, and A. Anandkumar. Question type guided attention in visual question answering. In ECCV, September 2018.
  • [40] R. Socher, C. C. Lin, C. Manning, and A. Y. Ng. Parsing natural scenes and natural language with recursive neural networks. In ICML, pages 129–136, 2011.
  • [41] K. S. Tai, R. Socher, and C. D. Manning. Improved semantic representations from tree-structured long short-term memory networks. In ACL, 2015.
  • [42] D. Teney, P. Anderson, X. He, and A. van den Hengel. Tips and tricks for visual question answering: Learnings from the 2017 challenge. In CVPR, June 2018.
  • [43] D. Teney, L. Liu, and A. van den Hengel. Graph-structured representations for visual question answering. In CVPR, July 2017.
  • [44] T. Watanabe, A. M. Harner, S. Miyauchi, Y. Sasaki, M. Nielsen, D. Palomo, and I. Mukai. Task-dependent influences of attention on the activation of human primary visual cortex. Proceedings of the National Academy of Sciences, 95:11489–11492, 1998.
  • [45] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256, 1992.
  • [46] D. Xu, Y. Zhu, C. B. Choy, and L. Fei-Fei. Scene graph generation by iterative message passing. In CVPR, 2017.
  • [47] J. Yang, J. Lu, S. Lee, D. Batra, and D. Parikh. Graph r-cnn for scene graph generation. In ECCV, September 2018.
  • [48] T. Yao, Y. Pan, Y. Li, and T. Mei. Exploring visual relationship for image captioning. In ECCV, September 2018.
  • [49] G. Yin, L. Sheng, B. Liu, N. Yu, X. Wang, J. Shao, and C. C. Loy. Zoom-net: Mining deep feature interactions for visual relationship recognition. In ECCV, pages 330–347. Springer, 2018.
  • [50] F. Yu and V. Koltun. Multi-scale context aggregation by dilated convolutions. In ICLR, 2016.
  • [51] R. Zellers, M. Yatskar, S. Thomson, and Y. Choi. Neural motifs: Scene graph parsing with global context. In CVPR, 2018.
  • [52] J. Zhang, M. Elhoseiny, S. Cohen, W. Chang, and A. M. Elgammal. Relationship proposal networks. In CVPR, 2017.
  • [53] Y. Zhang, J. Hare, and A. Prügel-Bennett. Learning to count objects in natural images for visual question answering. In ICLR, 2018.
  • [54] R. Zhao, W. Ouyang, H. Li, and X. Wang. Saliency detection by multi-context deep learning. In CVPR, pages 1265–1274, 2015.
  • [55] S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, and P. H. Torr. Conditional random fields as recurrent neural networks. In ICCV, pages 1529–1537, 2015.

Appendix A Bidirectional TreeLSTM

In this section, we introduce the details of the bidirectional TreeLSTM applied to encode the object-level visual contexts. For the bottom-up direction, we employ the N-ary TreeLSTM [41] for binary trees, i.e., VCTrees and Overlap Trees, and a normalized Child-Sum TreeLSTM [41] for Multi-Branch Trees. For the top-down direction, since each node has only one parent, the TreeLSTM is similar to a traditional LSTM [18].

A.1 N-ary TreeLSTM for Binary Trees

According to its definition, the N-ary TreeLSTM [41] can be applied to tree structures with at most N ordered branches per node. In our work, we adopt the binary (N = 2) TreeLSTM as the bottom-up TreeLSTM for the proposed binary tree structures, i.e., VCTrees and Overlap Trees. It can be formulated as follows:

\begin{align}
\mathbf{i}_i &= \sigma\big(\mathbf{W}_i \mathbf{z}_i + \mathbf{U}_i^{l}\,\overleftarrow{\mathbf{h}}_{il} + \mathbf{U}_i^{r}\,\overleftarrow{\mathbf{h}}_{ir} + \mathbf{b}_i\big) \tag{9}\\
\mathbf{f}_{i}^{l} &= \sigma\big(\mathbf{W}_f \mathbf{z}_i + \mathbf{U}_{fl}^{l}\,\overleftarrow{\mathbf{h}}_{il} + \mathbf{U}_{fl}^{r}\,\overleftarrow{\mathbf{h}}_{ir} + \mathbf{b}_f^{l}\big) \tag{10}\\
\mathbf{f}_{i}^{r} &= \sigma\big(\mathbf{W}_f \mathbf{z}_i + \mathbf{U}_{fr}^{l}\,\overleftarrow{\mathbf{h}}_{il} + \mathbf{U}_{fr}^{r}\,\overleftarrow{\mathbf{h}}_{ir} + \mathbf{b}_f^{r}\big) \tag{11}\\
\mathbf{o}_i &= \sigma\big(\mathbf{W}_o \mathbf{z}_i + \mathbf{U}_o^{l}\,\overleftarrow{\mathbf{h}}_{il} + \mathbf{U}_o^{r}\,\overleftarrow{\mathbf{h}}_{ir} + \mathbf{b}_o\big) \tag{12}\\
\mathbf{g}_i &= \tanh\big(\mathbf{W}_g \mathbf{z}_i + \mathbf{U}_g^{l}\,\overleftarrow{\mathbf{h}}_{il} + \mathbf{U}_g^{r}\,\overleftarrow{\mathbf{h}}_{ir} + \mathbf{b}_g\big) \tag{13}\\
\mathbf{c}_i &= \mathbf{i}_i \odot \mathbf{g}_i + \mathbf{f}_i^{l} \odot \mathbf{c}_{il} + \mathbf{f}_i^{r} \odot \mathbf{c}_{ir} \tag{14}\\
\overleftarrow{\mathbf{h}}_i &= \mathbf{o}_i \odot \tanh(\mathbf{c}_i) \tag{15}
\end{align}

where $\mathbf{z}_i$ is the input feature of node $i$; $\mathbf{h}$ are hidden states; $\mathbf{c}$ are memory cells; $\mathbf{W}$ and $\mathbf{U}$ are learnable matrices; $\mathbf{b}$ are bias vectors; $\sigma$ denotes the sigmoid function; $\tanh$ denotes the hyperbolic tangent activation; and $\odot$ denotes the element-wise product. Note that we slightly abuse the subscripts $il$ and $ir$ to denote the hidden states and memory cells of the left child and right child of node $i$. The hidden states and memory cells of missing branches are filled with zero vectors.
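For concreteness, a single bottom-up step of such a binary TreeLSTM can be sketched as follows; folding the input and the two child hidden states into one concatenated vector per gate is a simplification of the per-child matrices above, and all names are illustrative.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def binary_treelstm_cell(z, hl, cl, hr, cr, P):
    """One bottom-up step of a binary TreeLSTM.
    z: input feature of the node; (hl, cl), (hr, cr): left/right child states
    (zero vectors for missing branches); P: dict of weight matrices and biases."""
    x = np.concatenate([z, hl, hr])
    i  = sigmoid(P["W_i"] @ x + P["b_i"])          # input gate
    fl = sigmoid(P["W_fl"] @ x + P["b_fl"])        # forget gate, left child
    fr = sigmoid(P["W_fr"] @ x + P["b_fr"])        # forget gate, right child
    o  = sigmoid(P["W_o"] @ x + P["b_o"])          # output gate
    g  = np.tanh(P["W_g"] @ x + P["b_g"])          # candidate cell
    c  = i * g + fl * cl + fr * cr                 # new memory cell
    h  = o * np.tanh(c)                            # new hidden state
    return h, c
```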

A.2 Child-Sum TreeLSTM for Multi-Branch Trees

The Child-Sum TreeLSTM [41] can handle tree structures in which each node has an arbitrary number of children. Therefore, we adopt it as the bottom-up TreeLSTM of the context encoder for the Multi-Branch Trees in the ablation studies. For each node $i$ of a Multi-Branch Tree, we denote the set of its children by $C(i)$. Compared with the original formulation [41], we replace the Child-Sum with a Child-Mean in our implementation for better normalization; it is formulated as:

\begin{align}
\tilde{\mathbf{h}}_i &= \frac{1}{|C(i)|}\sum_{k\in C(i)} \overleftarrow{\mathbf{h}}_{k} \tag{17}\\
\mathbf{i}_i &= \sigma\big(\mathbf{W}_i \mathbf{z}_i + \mathbf{U}_i \tilde{\mathbf{h}}_i + \mathbf{b}_i\big) \tag{18}\\
\mathbf{f}_{ik} &= \sigma\big(\mathbf{W}_f \mathbf{z}_i + \mathbf{U}_f \overleftarrow{\mathbf{h}}_{k} + \mathbf{b}_f\big), \quad k\in C(i) \tag{19}\\
\mathbf{o}_i &= \sigma\big(\mathbf{W}_o \mathbf{z}_i + \mathbf{U}_o \tilde{\mathbf{h}}_i + \mathbf{b}_o\big) \tag{20}\\
\mathbf{g}_i &= \tanh\big(\mathbf{W}_g \mathbf{z}_i + \mathbf{U}_g \tilde{\mathbf{h}}_i + \mathbf{b}_g\big) \tag{21}\\
\mathbf{c}_i &= \mathbf{i}_i \odot \mathbf{g}_i + \sum_{k\in C(i)} \mathbf{f}_{ik} \odot \mathbf{c}_{k} \tag{22}\\
\overleftarrow{\mathbf{h}}_i &= \mathbf{o}_i \odot \tanh(\mathbf{c}_i) \tag{23}
\end{align}

where the hidden states, memory cells, learnable matrices $\mathbf{W}$, $\mathbf{U}$, and bias vectors $\mathbf{b}$ are defined as in Section A.1; $|C(i)|$ is the number of children of node $i$; and $\tilde{\mathbf{h}}_i$ denotes the mean hidden state over all the children of node $i$.

Scene Graph Generation Scene Graph Classification Predicate Classification
Model mR@20 mR@50 mR@100 mR@20 mR@50 mR@100 mR@20 mR@50 mR@100
MOTIFS [51] 4.2 5.7 6.6 6.3 7.7 8.2 10.8 14.0 15.3
FREQ [51] 4.5 6.1 7.1 5.1 7.2 8.5 8.3 13.0 16.0
Chain 4.6 6.3 7.2 6.3 7.9 8.8 11.0 14.4 16.6
Overlap 4.8 6.5 7.5 7.2 9.0 9.3 12.5 16.1 17.4
Multi-Branch 4.7 6.5 7.4 6.9 8.6 9.2 11.9 15.5 16.9
VCTree-SL 5.0 6.7 7.7 8.0 9.8 10.5 13.4 17.0 18.5
VCTree-HL 5.2 6.9 8.0 8.2 10.1 10.8 14.0 17.9 19.4
Table 6: Mean recall (%) of various methods across all 50 predicate categories. MOTIFS [51] and FREQ [51] use the same Faster-RCNN detector as ours.
Figure 8: Recall@100 of MOTIFS [51] and the proposed VCTree-HL under PredCls for each of the top-35 categories, ranked by frequency.

A.3 Top-Down TreeLSTM

We use the traditional LSTM [18] as the top-down TreeLSTM for all of the VCTrees, Overlap Trees, and Multi-Branch Trees, because each node has at most one parent. The only difference from a traditional LSTM is that our structures are trees rather than chains: the previous hidden state comes from the parent of the node rather than from the preceding element of a sequence.

For the proposed VCTree, we also tried assigning different learnable matrices to the hidden states coming from left-branch parents and right-branch parents. However, this did not yield significant improvements on the end-tasks, so we employ the traditional LSTM as our top-down TreeLSTM for efficiency.

Appendix B Quantitative Analysis

B.1 Mean Recall for Scene Graph

We also report more detailed results of the proposed Mean Recall (mR@K) in Table 6. The proposed VCTree-HL shows the best performance among all the ablative structures. Note that MOTIFS [51] has lower mR@100 than the FREQ [51] baseline in SGCls and PredCls, which means that MOTIFS is even worse at predicting infrequent predicate categories. However, its mR@20 and mR@50 are higher than FREQ's in SGCls and PredCls, which indicates that MOTIFS separates foreground relationships from background ones better than FREQ does.

B.2 Predicate Recall Analysis

To better visualize the improvement of the proposed VCTree-HL on infrequent predicate categories, we rank all the predicate categories by frequency and show the PredCls Recall@100 of MOTIFS [51] and VCTree-HL for each of the top-35 categories independently in Figure 8. We observe significant improvements on those less frequent but more semantically meaningful predicates.

Appendix C Qualitative Analysis

C.1 Scene Graph Generation

We further investigated misclassified results of the proposed VCTree-HL. The corresponding tree structures and generated scene graphs are shown in Figure 9. We observe three types of interesting misclassifications: 1) in image (a) of Figure 9, the proposed VCTree-HL predicts more appropriate predicates, "in front of" and "behind", than the original "near"; 2) in images (b) and (d), the ground-truth labels "man in snow" and "window near building" are imprecise, while our method predicts more appropriate predicates; 3) in images (c) and (d), objects isolated from the scene graph (considering only R@20 predicates) are more likely to be misclassified.

C.2 Visual Question Answering

More constructed VCTrees for VQA2.0 are visualized in Figure 10. The dynamic tree structures adapt to different questions, allowing the objects in an image to incorporate different contextual cues for each question. The proposed VCTree also helps us understand how the model predicts the answer to a question given the image, e.g., in image (a) of Figure 10, given the question "does this dog have a collar?", our model first focuses on the collar-like object rather than the dog; in image (b), given the question "what sport is being played?", our model focuses on the sportsman rather than the playground.

Figure 9: The learned tree structures and generated scene graphs in VG. We selectively report the predicates from R@20 together with all the ground-truth predicates. Black indicates correctly detected objects or predicates; red indicates misclassified ones; blue indicates correct predictions that are not labeled in the ground-truth.
Figure 10: The dynamic and interpretable tree structures adapt to different questions, allowing the objects in an image to incorporate different contextual cues according to each question.