In recent years, there has been growing interest in connecting successes in visual perception with language and reasoning [29, 45]. This requires us to design systems that can not only recognize objects, but also understand and reason about the relationships between them. This is essential for tasks such as visual question answering (VQA) [2, 16, 5] or caption generation [39, 12]. However, predicting a high-level semantic output (e.g. an answer) from a low-level visual signal (e.g. an image) is challenging due to the vast gap between the modalities. To bridge this gap, it is useful to have an intermediate representation that can be relatively easily generated by the low-level module and, at the same time, can be effectively used by the high-level reasoning module. We want this representation to semantically describe the visual scene in terms of objects and the relationships between them, which leads us to a structured image representation, the scene graph (SG) [18, 20]. A scene graph is a collection of visual relationship triplets <subject, predicate, object> (e.g. <cup, on, table>). Each node in the graph corresponds to a subject or object (with a specific image location) and each edge to a predicate (Figure 1). Besides bridging the gap, SGs can be used to verify how well a model has understood the visual world, as opposed to just exploiting one of the biases in a dataset [17, 1, 3]. Alternative directions to SGs include, for example, attention  and neural-symbolic models .
The distribution of triplets in scene graph datasets is extremely long-tailed, dominated by a few frequent compositions (e.g. <cup, on, table>), which creates a strong frequency bias. This makes it particularly challenging for models to generalize to novel (zero-shot) and rare (few-shot) compositions, even though each of the subjects, objects and predicates has been observed at training time. The problem is exacerbated by the test set and evaluation metrics, which do not penalize models that blindly rely on such bias. Indeed, Zellers et al.  have pointed out that SGG models largely exploit simple co-occurrence information. In fact, the performance of models predicting solely based on frequency (i.e. a cup is most likely to be on a table) is not far from the state of the art on common metrics (see Freq in Table 1).
In this work, we reveal that (a) the frequency bias exploited by certain models leads to poor generalization on few-shot and zero-shot compositions; and (b) the loss of existing models effectively downweights large graphs, even though these often contain many of the infrequent visual relationships, which degrades few- and zero-shot performance (Figure 2). We address these challenges and show that our suggested improvements provide benefits for two strong baseline models [36, 40]. Overall, we make the following contributions:
Improved loss: we introduce a density-normalized edge loss, which improves results on all metrics, especially for few and zero-shots (Section 3.2);
Novel weighted metric: we illustrate several issues in the evaluation of few- and zero-shot compositions and propose a novel weighted metric that better tracks performance on these critical cases (Section 3.3);
Scaling to GQA: in addition to evaluating on Visual Genome (VG) , we confirm the usefulness of our loss and metrics on GQA  – an improved version of VG. GQA has not been used to evaluate SGG models before and is interesting to study because, compared to VG, its scene graphs are cleaner, larger, denser and contain a larger variety of objects and predicates (Section 4).
2 Related Work
Zero-shot learning. In vision tasks such as image classification, zero-shot learning has been extensively studied, and the main approaches are based on attributes  and semantic embeddings [11, 35]. The first approach is related to the zero-shot problem we address in this work: it assumes that all individual attributes of objects (color, shape, etc.) are observed during training, such that novel classes can be detected at test time based on compositions of their attributes. Zero-shot learning in scene graphs is similar: all individual subjects, objects and predicates are observed during training, but most of their compositions are not. This task was first evaluated in  on the VRD dataset using a joint vision-language model. Several follow-up works attempted to improve upon it: by learning a translation operator in the embedding space , clustering in a weakly-supervised fashion , using conditional random fields  or optimizing a cycle-consistency loss to learn object-agnostic features . Augmentation using generative networks to produce more examples of rare cases is another promising approach , but it was only evaluated on the predicate classification task. In our work, we also consider subject/object classification to enable classification of whole triplets, making the “image to scene graph” pipeline complete. Most recently, Tang et al.  proposed learning causal graphs and showed strong performance on zero-shot cases.
While these works improve generalization, none of them has identified the challenges and importance of learning from large graphs for generalization. By concentrating the model’s capacity on smaller graphs and neglecting larger graphs, baseline models limit the variability of training data that is useful for stronger generalization . Our loss enables learning from such graphs, increasing the effective data variability. Moreover, previous gains typically incur a large computational cost, while our loss has negligible cost and can easily be added to other models.
Few-shot predicates. Several recent works have addressed the problem of imbalanced and few-shot predicate classes [6, 10, 30, 44, 31, 7]. However, compared to our work, these works have not considered the imbalance between foreground and background edges, which is more severe than the imbalance among predicate classes (Figure 3) and, as we show in this work, is important to address. Moreover, we argue that compositional generalization, not addressed in those works, can be more difficult than generalization to rare predicates. For example, the triplet <cup, on, surfboard> is challenging to predict correctly as a whole: even though ‘on’ may be the most frequent predicate, it has never been observed together with ‘cup’ and ‘surfboard’. Experimental results in previous work [23, 42, 38, 34, 31] highlight this difficulty. Throughout this work, by “few-shot” we mean triplets, not predicates.
“Unbiasing” methods. Our idea is similar to the Focal loss , which addresses the imbalance between foreground and background objects in the object detection task. However, directly applying the focal loss to Visual Genome is challenging, due to the large number of missing and mislabeled examples in the dataset. In this case, concentrating the model’s capacity on “hard” examples can be equivalent to putting more weight on noise, which can hurt performance. Tang et al.  compared the focal loss and other unbiasing methods, such as upsampling and upweighting, and did not report significantly better results.
In this section, we will review a standard loss used to train scene graph generation models (Section 3.1) and describe our improved loss (Section 3.2). We will then discuss issues with evaluating rarer combinations and propose a new weighted metric (Section 3.3).
3.1 Overview of Scene Graph Generation
In scene graph generation, given an image, we aim to output a scene graph consisting of a set of subjects and objects as nodes and a set of relationships (predicates) between them as edges (Figure 1). The task is thus to maximize the probability of the scene graph given the image. Except for works that directly learn from pixels , the task is commonly [36, 37, 40] reformulated by first detecting bounding boxes and then extracting the corresponding object and edge features using some functions (e.g. a ConvNet followed by ROI Align ).
The advantage of this approach is that predicting objects and predicates from the extracted features is easier than predicting the full scene graph directly from the image. At the same time, the bounding boxes can be obtained from pretrained object detectors [27, 13]. Therefore, we follow [36, 37, 40] and use this approach to scene graph generation.
In practice, we can assume that the pretrained object detector is fixed or that ground-truth bounding boxes are available, so the bounding-box term can be treated as constant. In addition, following [23, 36, 37], we can assume conditional independence of the object and predicate variables given the boxes and the image. We thus obtain the scene graph generation loss, Eq. (2).
Some models [40, 44] do not assume this conditional independence, making predicates explicitly depend on the predicted subject and object labels. However, such a model must be carefully regularized, since it can start to ignore the visual input and mainly rely on the frequency distribution as a stronger signal. For example, the model can learn that the relationship between ‘cup’ and ‘table’ is most likely to be ‘on’, regardless of the visual signal. As we show, this can hurt generalization.
Eq. (2) is commonly handled as a multitask classification problem, where each task is optimized with the cross-entropy loss. In particular, given a batch of scene graphs, the loss sums a node (object classification) term and an edge (predicate classification) term over all nodes and edges in the batch (Eq. (3)).
Node and edge features output by the detector form a complete graph without self-loops (Figure 1). So, conventionally [36, 37, 40], the loss is applied to all edges of this complete graph. These edges can be divided into foreground (FG) edges, corresponding to annotated relationships, and background (BG) edges, corresponding to unannotated pairs. The BG edge type is similar to the “negative” class in the object detection task and has a similar purpose. Without training on BG edges, at test time the model would label all pairs of nodes as “positive”, i.e. as having some relationship, when often this is not the case (at least given the vocabulary in the datasets). Therefore, not using the BG type can hurt the quality of predicted scene graphs and lower recall.
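To make the structure of this baseline loss concrete, below is a minimal PyTorch-style sketch of Eq. (3) under the assumptions of this section: object and predicate classification are both handled with cross-entropy, and BG edges are represented by a dedicated “no relationship” class. The tensor shapes and the BG class convention are illustrative assumptions, not the exact implementation of [36, 37, 40].

```python
import torch.nn.functional as F

def baseline_sgg_loss(node_logits, node_labels, edge_logits, edge_labels):
    """Minimal sketch of the baseline multitask loss (Eq. 3).

    node_logits: (N, C_obj)  object-class scores for the N nodes in the batch
    node_labels: (N,)        ground-truth object classes
    edge_logits: (E, C_rel)  predicate scores for all edges of the complete graphs
    edge_labels: (E,)        targets; BG edges use an (assumed) "no relationship" class
    """
    obj_loss = F.cross_entropy(node_logits, node_labels)   # averaged over nodes
    edge_loss = F.cross_entropy(edge_logits, edge_labels)  # averaged over ALL edges,
                                                            # FG and BG treated alike
    return obj_loss + edge_loss
```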
3.2 Hyperparameter-free Normalization of the Edge Loss
Baseline loss as a function of graph density. In scene graph datasets such as Visual Genome, the number of BG edges is much greater than the number of FG edges (Figure 3), yet the baseline loss (3) does not explicitly differentiate between BG and FG edges. Moreover, the proportion of annotated (FG) edges typically shrinks as graphs grow larger, so the graph density varies with the number of nodes (Figure 4), a fact not taken into account in Eq. (3). To avoid this, we start by decoupling the edge term of (3) into foreground (FG) and background (BG) terms:
where the first term sums over the set of FG edges and the second over BG edges. Next, we denote the FG and BG edge losses averaged per batch as L_FG and L_BG, respectively. Then, using the definition of graph density as the proportion of FG edges among all edges, d = n_FG / (n_FG + n_BG), we can express the total baseline loss, equivalently to Eq. (3), as a function of graph density: L_obj + d · L_FG + (1 − d) · L_BG.
Discrepancy between object and edge losses. Because the density d tends to be small on average, the edge loss is much smaller than the object loss, so the model might focus mainly on classifying objects (Fig. 4, right).
Both issues can be addressed by normalizing the FG and BG terms by the graph density d, which yields our loss in Eq. (6).
The loss has no tunable coefficients in our default hyperparameter-free variant; weighting coefficients are introduced only to empirically analyze the loss (Table 4). Even though the BG term still depends on the graph density, we found it to be less sensitive to variations in d, since the BG loss quickly converges to a stable value, acting as a regularizer (Figure 4, right). We examine this in detail in Section 4.
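A minimal sketch of how this density normalization could be implemented is given below. It assumes the same setup as the previous sketch (cross-entropy edge loss with a dedicated BG class) and divides both the FG and BG parts of the density-weighted edge term by the batch-level density d; the resulting (1 − d)/d factor on the BG term follows from that division and reflects our reading of the hyperparameter-free variant, not a verbatim reproduction of Eq. (6).

```python
import torch.nn.functional as F

def density_normalized_edge_loss(edge_logits, edge_labels, bg_class=0):
    """Sketch of the density-normalized edge term (hyperparameter-free variant).

    edge_logits: (E, C_rel) predicate scores for all edges in the batch
    edge_labels: (E,)       targets; BG edges use the (assumed) bg_class index
    Assumes the batch contains at least one FG and one BG edge.
    """
    is_fg = edge_labels != bg_class
    n_fg = is_fg.sum().float()
    n_bg = (~is_fg).sum().float()
    d = n_fg / (n_fg + n_bg)  # batch-level graph density

    loss_fg = F.cross_entropy(edge_logits[is_fg], edge_labels[is_fg])    # mean over FG edges
    loss_bg = F.cross_entropy(edge_logits[~is_fg], edge_labels[~is_fg])  # mean over BG edges

    # Baseline edge term as a function of density: d * loss_fg + (1 - d) * loss_bg.
    # Dividing both terms by d removes the dependence of the FG term on d and
    # stops sparse (large) graphs from being effectively ignored.
    return loss_fg + (1.0 - d) / d * loss_bg
```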
3.3 Weighted Triplet Recall
The common evaluation metric for scene graph prediction is image-level Recall@K, or R@K [36, 37, 40]. To compute it, we first extract the top-K triplets from the entire image based on the ranked predictions of a model. Given the set of ground-truth triplets, GT, the image-level R@K is computed as the fraction of GT triplets that appear among the top-K predictions, i.e. with the number of GT triplets in the denominator (see Figure 5 for a visualization).
There are four issues with this metric (we discuss additional details in Appendix):
(a) The frequency bias of triplets means that more frequent triplets will dominate the metric.
(b) The denominator in (7), the number of ground-truth triplets in an image, creates discrepancies between images with different numbers of annotated triplets, which is especially pronounced for few/zero-shots.
To address issue (a), predicate-normalized metrics, mean recall (mR@K) [6, 30] and weighted mR@K, were introduced . These metrics, however, only address the imbalance of predicate classes, not whole triplets. Early work [23, 9] used a triplet-level Recall@K (tR@K) for some tasks (e.g. predicate detection), which is based on ranking predicted triplets for each ground-truth subject-object pair independently; pairs without relationships are not evaluated. Hence, tR@K is similar to top-K accuracy. This metric avoids issues (b) and (d), but the frequency bias (a) and unseen/rare cases (c) remain. To alleviate these, we adapt this metric to better track unseen and rare cases. We call our novel metric Weighted Triplet Recall (wR@K); it computes a recall for each triplet and reweights the average result based on the frequency of the GT triplet in the training set.
Concretely, the sum runs over all N test triplets, [·] is the Iverson bracket, and n_t is the number of occurrences of triplet t in the training set; the +1 in the weight 1/(n_t + 1) handles zero-shot triplets. Since wR@K is still a triplet-level metric, we avoid issues (b) and (d). Our metric is also robust to the frequency bias (a), since frequent triplets (with high n_t) are downweighted proportionally, which we confirm by evaluating the Freq model from . Finally, a single wR@K value shows zero- and few-shot performance linearly aggregated over all n-shot cases, solving issue (c).
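A small sketch of how wR@K could be computed from per-triplet hits is given below; the weight 1/(n_t + 1) follows the description above, while normalizing by the sum of weights (so that the score lies in [0, 1]) is an assumption of this sketch.

```python
import numpy as np

def weighted_triplet_recall(hits_at_k, train_counts):
    """Sketch of Weighted Triplet Recall (wR@K).

    hits_at_k:    bool array with one entry per ground-truth test triplet; True if the
                  triplet is among the top-K predictions for its subject-object pair
    train_counts: occurrences of each triplet in the training set (0 for zero-shots)
    """
    weights = 1.0 / (np.asarray(train_counts, dtype=float) + 1.0)  # 1 / (n_t + 1)
    hits = np.asarray(hits_at_k, dtype=float)
    # Frequent triplets get proportionally smaller weight; normalizing by the
    # total weight (an assumption of this sketch) keeps the score in [0, 1].
    return float(np.sum(weights * hits) / np.sum(weights))
```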
Datasets. We evaluate our loss and metric on Visual Genome . Since it is a noisy dataset, several “clean” variants were introduced. We mainly experiment with the most common variant (VG) , which consists of the 150 most frequent object classes and 50 predicate classes. An alternative variant (VTE)  has often been used for zero-shot evaluation. Surprisingly, we found that the VG split  is better suited for this task, given a larger variability of zero-shot triplets in the test set (see Table 7 in Appendix). Recently, GQA  was introduced, where scene graphs were cleaned to automatically construct question-answer pairs. GQA has more object and predicate classes, so zero- and few-shot triplets are more likely to occur at test time. To the best of our knowledge, scene graph generation (SGG) results have not been reported on GQA before, even though some VQA models have relied on SGG .
Training and evaluation details. We experiment with two models: Message Passing (MP)  and Neural Motifs (NM) . We use publicly available implementations of MP and NM (https://github.com/rowanz/neural-motifs), with all architecture details and hyperparameters kept the same (except for the small changes outlined in Table 8 in Appendix). To be consistent with the baseline models, for Visual Genome we use Faster R-CNN  with a VGG16 backbone to extract node and edge features. For GQA we choose a more recent Mask R-CNN  with a ResNet-50-FPN backbone pretrained on COCO. We also use this detector on the VTE split. We perform more experiments with Message Passing, since our experiments revealed that it generalizes better to zero- and few-shot cases, while performing only slightly worse on other metrics. In addition, it is a relatively simple model, which makes the analysis of its performance easier. We evaluate models on three tasks, according to : 1) predicate classification (PredCls), in which the model only needs to label predicates given ground-truth object labels and bounding boxes; 2) scene graph classification (SGCls), in which the model must also label objects; 3) scene graph generation, SGGen (sometimes denoted SGDet), which additionally requires detecting bounding boxes first (reported separately in Tables 5 and 6).
Table 1 shows our main results, where for each task we report five metrics: image-level recall on all triplets (R@K), image-level recall on zero-shot triplets (zsR@K), triplet-level recall (tR@K), our weighted triplet recall (wR@K), and mean recall (mR@K). We compute recalls without the graph constraint since, as we discuss in the Appendix, this is a more accurate metric. We denote graph-constrained results as PredCls-GC, SGCls-GC and SGGen-GC and report them only in Tables 4 and 5.
| Dataset | Model | Loss | SGCls R@K | SGCls zsR@K | SGCls tR@K | SGCls wR@K | SGCls mR@K | PredCls R@K | PredCls zsR@K | PredCls tR@K | PredCls wR@K | PredCls mR@K |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Visual Genome | Freq  | – | 45.4 | 0.5 | 51.7 | 18.3 | 19.1 | 69.8 | 0.3 | 89.8 | 31.0 | 22.1 |
| Visual Genome | MP [36, 40] | Baseline (3) | 47.2 | 8.2 | 51.9 | 26.2 | 17.3 | 74.8 | 23.3 | 86.6 | 51.3 | 20.6 |
| Visual Genome | NM  | Baseline (3) | 48.1 | 5.7 | 51.9 | 26.5 | 20.4 | 80.5 | 11.1 | 91.0 | 51.8 | 26.9 |
| Visual Genome | NM  | Ours (6), no Freq | 48.4 | 8.9 | 51.8 | 28.0 | 26.1 | 82.5 | 26.6 | 92.4 | 60.3 | 35.8 |
| GQA | MP [36, 40] | Baseline (3) | 27.1 | 2.8 | 31.9 | 8.9 | 1.6 | 59.7 | 34.9 | 96.4 | 88.4 | 1.8 |
| GQA | MP [36, 40] | Ours (6) | 27.6 | 3.0 | 32.2 | 8.9 | 2.8 | 61.0 | 37.2 | 96.9 | 89.5 | 2.9 |
| GQA-nLR | MP [36, 40] | Baseline (3) | 24.9 | 3.0 | 30.2 | 12.4 | 2.8 | 58.1 | 21.7 | 71.6 | 47.0 | 4.6 |
We can observe that both Message Passing (MP) and Neural Motifs (NM) greatly benefit from our density-normalized loss on all reported metrics. Larger gaps are achieved on the metrics evaluating zero- and few-shot cases. For example, in PredCls on Visual Genome, MP with our loss is 22% better (in relative terms) on zero-shots, while NM with our loss is 50% better. The gains on the other zero-shot and weighted metrics are also significant.
On GQA, our loss also consistently improves results, especially in PredCls. However, the gap is smaller than on VG. There are two reasons for this: 1) scene graphs in GQA are much denser (see Appendix), i.e. the imbalance between FG and BG edges is less pronounced, which means that in the baseline loss the edge term is not diminished to the extent it is in VG; and 2) the training set of GQA is more diverse than VG (with 15 times more labeled triplets), which makes the baseline model generalize well on zero- and few-shots. We confirm these arguments by training and evaluating on our own version of GQA, GQA-nLR, in which the ‘left’ and ‘right’ predicate classes are excluded, making scene graph properties, in particular sparsity, more similar to those of VG.
Effect of the Frequency Bias (Freq) on Zero and Few-Shot Performance.
The Freq model  simply predicts the most frequent predicate between a given subject and object class pair. Its effect on few-shot generalization has not been empirically studied before. We study this by adding/ablating Freq from the baseline MP and NM on Visual Genome (Figure 6). Our results indicate that Freq only marginally improves results on unweighted metrics. At the same time, perhaps unsurprisingly, it leads to severe drops in zero-shot and weighted metrics, especially for NM. For example, by ablating Freq from NM, we improve zero-shot PredCls-R@50 from 11% to 25%. This also highlights that existing recall metrics are a poor choice for understanding the effectiveness of a model.
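For illustration, the Freq baseline can be built with a simple co-occurrence count over training triplets; a minimal sketch, assuming a (subject, predicate, object) triplet format, is shown below.

```python
from collections import Counter, defaultdict

def build_freq_baseline(train_triplets):
    """Build the Freq baseline: for every (subject, object) class pair, remember the
    predicate that co-occurs with it most often in the training set.
    `train_triplets` is assumed to be an iterable of (subject, predicate, object) labels."""
    counts = defaultdict(Counter)
    for subj, pred, obj in train_triplets:
        counts[(subj, obj)][pred] += 1
    # e.g. on Visual Genome, freq[('cup', 'table')] would typically be 'on'
    return {pair: c.most_common(1)[0][0] for pair, c in counts.items()}
```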
Why does loss normalization help more on few and zero-shots?
The baseline loss effectively ignores the edge labels of large graphs, because the edge term is scaled by a small density d in those cases (Figure 4). To validate this, we split the training set of Visual Genome into two subsets with a comparable number of images in each, containing relatively small and relatively large graphs respectively, and evaluate on the original test set in both cases. We observe that the baseline model does not learn well from large graphs, while our loss enables this learning (Figure 7). Moreover, when trained on small graphs only, the baseline is even better in PredCls than when trained on all graphs. This is because in the latter case, large graphs, when present in a batch, make the whole batch more sparse, downweighting the edge loss of small graphs as well. At the same time, larger graphs predictably contain more labels, including many few-shot labels (Figure 2). Together, these two factors make the baseline ignore many few-shot triplets belonging to larger graphs at training time, so the model cannot generalize to them at test time. Since the baseline essentially observes less variability during training, it also generalizes poorly to zero-shots. This argument aligns well with work from other domains , showing that generalization strongly depends on the diversity of samples during training. Our loss fixes the issue of learning from larger graphs, which, for the reasons above, directly affects the ability to generalize.
| Loss variant | Eq. | VG SGCls (R@K / zsR@K) | VG PredCls (R@K / zsR@K) | GQA SGCls (R@K / zsR@K) | GQA PredCls (R@K / zsR@K) |
|---|---|---|---|---|---|
| No tune (baseline) | (3) | 47.2 / 8.2 | 74.8 / 23.3 | 27.1 / 2.8 | 59.7 / 34.9 |
| No tune (ours, independ. norm) | (9) | 47.5 / 8.4 | 74.3 / 25.3 | 27.4 / 2.9 | 59.5 / 35.4 |
| No tune (ours, no upweight; coefficient per VG/GQA) | (6) | 48.7 / 9.6 | 78.3 / 28.2 | 27.4 / 2.9 | 61.1 / 36.8 |
| No tune (ours) | (6) | 48.6 / 9.1 | 78.2 / 28.4 | 27.6 / 3.0 | 61.0 / 37.2 |
We compare our loss to variants with tuned hyperparameters (Table 4).
Our main finding is that, while these losses can give similar or better results in some cases, the tuned parameters do not generally transfer across datasets and must be re-tuned every time, which can be problematic at larger scale . In contrast, our loss does not require tuning and achieves comparable performance.
To study the effect of density normalization separately from upweighting the edge loss (which is a side effect of our normalization), we also consider downweighting our edge term (6) by a coefficient that cancels out this upweighting effect. This ensures a similar range for the losses in our comparison. We found (Table 4) that the results are still significantly better than the baseline and, in some cases, even better than our hyperparameter-free loss. This further confirms that normalizing by graph density is important on its own. When carefully fine-tuned, the effects of normalization and upweighting are complementary (e.g. when the coefficients are fine-tuned, the results tend to be better).
Comparison to other zero-shot works.
We also compare to previous works studying zero-shot generalization (Tables 4 and 4). For comprehensive evaluation, we test on both VTE and VG splits. We achieve superior results on VTE, even by just using the baseline MP, because, as shown in our main results, it generalizes well. On the VG split, we obtain results that compete with a more recent Total Direct Effect (TDE) method , even though the latter uses a more advanced detector and feature extractor. In all cases, our loss improves baseline results and, except for R@100 in SGCls, leads to state-of-the-art generalization.
| Model | Backbone | Loss | SGGen-GC R@100 | SGGen-GC zsR@100 | SGGen-GC mR@100 | SGGen R@100 | SGGen zsR@100 | SGGen mR@100 |
|---|---|---|---|---|---|---|---|---|
| MP [36, 40] | VGG16 | Baseline (3) | 24.3 | 0.8 | 4.5 | 27.2 | 0.9 | 7.1 |
| NM  | VGG16 | Baseline (3) | 29.8 | 0.3 | 5.9 | 35.0 | 0.8 | 12.4 |
| NM  | VGG16 | Ours (6), no Freq | 30.4 | 1.7 | 7.8 | 35.9 | 2.4 | 15.3 |
| KERN  | VGG16 | Baseline (3) | 29.8 | 0.04 | 7.3 | 35.8 | 0.02 | 16.0 |
| NM  | ResNeXt-101 | Baseline (3) | 36.9 | 0.2 | 6.8 | – | – | – |
| NM+TDE  | ResNeXt-101 | Baseline (3) | 20.3 | 2.9 | 9.8 | – | – | – |
| VCTree+TDE  | ResNeXt-101 | Baseline (3) | 23.2 | 3.2 | 11.1 | – | – | – |
Comparison on SGGen.
In SGCls and PredCls, we relied on ground-truth bounding boxes, while in SGGen the bounding boxes predicted by a detector must be used to enable a complete image-to-scene-graph pipeline. Here, even small differences between ground-truth and predicted boxes can create large distribution shifts between the corresponding extracted features (see Section 3.1), on which SGCls models are trained. Therefore, it is important to refine the SGCls model on features extracted from predicted boxes, following previous work [40, 6]. In our experience, this refinement can boost the R@100 result by around 3% for Message Passing and up to 8% for Neural Motifs (in absolute terms). In Table 5, we report results after the refinement, completed in the same way both for the baseline loss and our loss. Similarly to the SGCls and PredCls results, our loss consistently improves baseline results in SGGen. It also allows Neural Motifs (NM) to significantly outperform KERN  on zero-shots (R@100), while being only slightly worse in one of the mR@100 results. The main drawback of KERN is its slow training, which prevented us from exploring this model together with our loss. Following our experiments in Table 1 and Figure 6, we also confirm the positive effect of removing Freq from NM. A more recent work by Tang et al.  shows better results on zero-shots and mean recall; however, it uses a more advanced feature extractor, so a fair comparison with our results is difficult. But since they also use the baseline loss (3), our loss (6) can potentially improve their model, which we leave for future work.
Finally, we evaluate SGGen on GQA using Message Passing (Table 6), where we also obtain improvements with our loss. GQA has 1703 object classes compared to 150 in VG, making object detection harder. When evaluating SGGen, a predicted triplet is matched to a ground-truth (GT) triplet only if the predicted and GT bounding boxes have an intersection over union (IoU) of at least 50%, so more misdetections lead to a larger gap between SGCls and SGGen results.
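This matching criterion can be sketched as follows, assuming triplets are represented by their three class labels and the subject/object boxes; the dictionary format is an illustrative assumption.

```python
def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-12)

def triplet_matches(pred, gt, iou_thr=0.5):
    """A predicted triplet counts as correct in SGGen if all three labels agree and
    both its subject and object boxes overlap the GT boxes with IoU >= iou_thr.
    Triplets are assumed to be dicts with 'labels' = (subj, pred, obj) and
    'boxes' = (subject_box, object_box)."""
    return (pred['labels'] == gt['labels']
            and box_iou(pred['boxes'][0], gt['boxes'][0]) >= iou_thr
            and box_iou(pred['boxes'][1], gt['boxes'][1]) >= iou_thr)
```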
[Figure: qualitative zero-shot examples, showing detections and predicted scene graphs for the baseline and our model. Baseline correct, ours incorrect: zero-shot triplet “boat on snow” (our closest match: “boat in snow”); “leaf on bike” (mislabeled; our closest match: “leaf on sidewalk”); “wire on bed” (our closest match: “wire near bed”); “plant in bottle” (our closest match: “flower in bottle”). Baseline incorrect, ours correct: “banana on tile” (baseline: no triplet involving a banana in the top-20); “horse walking on sidewalk” (baseline: no triplet involving a sidewalk in the top-20); “bear in wave” (baseline closest match: “bear on wave”); “woman sitting on rock” (baseline: no triplet involving a rock in the top-20).]
Scene graphs are a useful semantic representation of images, accelerating research in many applications including visual question answering. It is vital for an SGG model to perform well on unseen or rare compositions of objects and predicates, which are inevitable due to the extremely long tail of the distribution over triplets. We show that strong baseline models do not effectively learn from all labels, leading to poor generalization on few/zero-shots. Moreover, current evaluation metrics do not reflect this problem and instead exacerbate it. We also show that learning well from larger graphs is essential for stronger generalization. To this end, we modify the loss commonly used in SGG and achieve significant improvements, and in certain cases state-of-the-art results, on both existing metrics and our novel weighted metric.
BK is funded by the Mila internship, the Vector Institute and the University of Guelph. CC is funded by DREAM CDT. EB is funded by IVADO. This research was developed with funding from DARPA. The views, opinions and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government. The authors also acknowledge support from the Canadian Institute for Advanced Research and the Canada Foundation for Innovation. We are also thankful to Brendan Duke for help with setting up the compute environment. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute: http://www.vectorinstitute.ai/#partners.
- Anand et al.  A. Anand, E. Belilovsky, K. Kastner, H. Larochelle, and A. Courville. Blindfold baselines for embodied QA. arXiv preprint arXiv:1811.05013, 2018.
- Antol et al.  S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. Lawrence Zitnick, and D. Parikh. VQA: Visual question answering. In Proceedings of the IEEE International Conference on Computer Vision, pages 2425–2433, 2015.
- Bahdanau et al.  D. Bahdanau, S. Murty, M. Noukhovitch, T. H. Nguyen, H. de Vries, and A. Courville. Systematic generalization: what is required and can it be learned? arXiv preprint arXiv:1811.12889, 2018.
- Belilovsky et al.  E. Belilovsky, M. Blaschko, J. Kiros, R. Urtasun, and R. Zemel. Joint embeddings of scene graphs and images. 2017.
- Cangea et al.  C. Cangea, E. Belilovsky, P. Liò, and A. Courville. Videonavqa: Bridging the gap between visual and embodied question answering, 2019.
- Chen et al. [2019a] T. Chen, W. Yu, R. Chen, and L. Lin. Knowledge-embedded routing network for scene graph generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6163–6171, 2019a.
- Chen et al. [2019b] V. S. Chen, P. Varma, R. Krishna, M. Bernstein, C. Re, and L. Fei-Fei. Scene graph prediction with limited labels. In Proceedings of the IEEE International Conference on Computer Vision, pages 2580–2590, 2019b.
- Cong et al.  W. Cong, W. Wang, and W.-C. Lee. Scene graph generation via conditional random fields. arXiv preprint arXiv:1811.08075, 2018.
- Dai et al.  B. Dai, Y. Zhang, and D. Lin. Detecting visual relationships with deep relational networks. In Proceedings of the IEEE conference on computer vision and Pattern recognition, pages 3076–3086, 2017.
- Dornadula et al.  A. Dornadula, A. Narcomey, R. Krishna, M. Bernstein, and L. Fei-Fei. Visual relationships as functions: Enabling few-shot scene graph prediction. In ArXiv, 2019.
- Frome et al.  A. Frome, G. S. Corrado, J. Shlens, S. Bengio, J. Dean, M. Ranzato, and T. Mikolov. Devise: A deep visual-semantic embedding model. In Advances in neural information processing systems, pages 2121–2129, 2013.
- Gu et al.  J. Gu, S. Joty, J. Cai, H. Zhao, X. Yang, and G. Wang. Unpaired image captioning via scene graph alignments. In Proceedings of the IEEE International Conference on Computer Vision, pages 10323–10332, 2019.
- He et al.  K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 2961–2969, 2017.
- Hill et al.  F. Hill, A. Lampinen, R. Schneider, S. Clark, M. Botvinick, J. L. McClelland, and A. Santoro. Environmental drivers of systematicity and generalization in a situated agent, 2019.
- Hudson and Manning [2019a] D. Hudson and C. D. Manning. Learning by abstraction: The neural state machine. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 5903–5916. Curran Associates, Inc., 2019a. URL http://papers.nips.cc/paper/8825-learning-by-abstraction-the-neural-state-machine.pdf.
- Hudson and Manning [2019b] D. A. Hudson and C. D. Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6700–6709, 2019b.
- Jabri et al.  A. Jabri, A. Joulin, and L. Van Der Maaten. Revisiting visual question answering baselines. In European conference on computer vision, pages 727–739. Springer, 2016.
- Johnson et al.  J. Johnson, R. Krishna, M. Stark, L.-J. Li, D. Shamma, M. Bernstein, and L. Fei-Fei. Image retrieval using scene graphs. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3668–3678, 2015.
- Knyazev et al.  B. Knyazev, G. W. Taylor, and M. Amer. Understanding attention and generalization in graph neural networks. In Advances in Neural Information Processing Systems, pages 4204–4214, 2019.
- Krishna et al.  R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L.-J. Li, D. A. Shamma, and et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123(1):32–73, Feb 2017. ISSN 1573-1405. doi: 10.1007/s11263-016-0981-7. URL http://dx.doi.org/10.1007/s11263-016-0981-7.
- Lampert et al.  C. H. Lampert, H. Nickisch, and S. Harmeling. Attribute-based classification for zero-shot visual object categorization. IEEE transactions on pattern analysis and machine intelligence, 36(3):453–465, 2013.
- Lin et al.  T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pages 2980–2988, 2017.
- Lu et al.  C. Lu, R. Krishna, M. Bernstein, and L. Fei-Fei. Visual relationship detection with language priors. In European conference on computer vision, pages 852–869. Springer, 2016.
- Newell and Deng  A. Newell and J. Deng. Pixels to graphs by associative embedding. In Advances in neural information processing systems, pages 2171–2180, 2017.
- Norcliffe-Brown et al.  W. Norcliffe-Brown, S. Vafeias, and S. Parisot. Learning conditioned graph structures for interpretable visual question answering. In Advances in Neural Information Processing Systems, pages 8334–8343, 2018.
- Peyre et al.  J. Peyre, J. Sivic, I. Laptev, and C. Schmid. Weakly-supervised learning of visual relations. In Proceedings of the IEEE International Conference on Computer Vision, pages 5179–5188, 2017.
- Ren et al.  S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91–99, 2015.
- Rong et al.  Y. Rong, W. Huang, T. Xu, and J. Huang. The truly deep graph convolutional networks for node classification. arXiv preprint arXiv:1907.10903, 2019.
- Su et al.  W. Su, X. Zhu, Y. Cao, B. Li, L. Lu, F. Wei, and J. Dai. Vl-bert: Pre-training of generic visual-linguistic representations. arXiv preprint arXiv:1908.08530, 2019.
- Tang et al.  K. Tang, H. Zhang, B. Wu, W. Luo, and W. Liu. Learning to compose dynamic tree structures for visual contexts. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6619–6628, 2019.
- Tang et al.  K. Tang, Y. Niu, J. Huang, J. Shi, and H. Zhang. Unbiased scene graph generation from biased training. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2020.
- Vedantam et al.  R. Vedantam, K. Desai, S. Lee, M. Rohrbach, D. Batra, and D. Parikh. Probabilistic neural-symbolic models for interpretable visual question answering. arXiv preprint arXiv:1902.07864, 2019.
- Veličković et al.  P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017.
- Wang et al.  X. Wang, Q. Sun, M. Ang, and T.-S. Chua. Generating expensive relationship features from cheap objects. 2019.
- Xian et al.  Y. Xian, Z. Akata, G. Sharma, Q. Nguyen, M. Hein, and B. Schiele. Latent embeddings for zero-shot classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 69–77, 2016.
- Xu et al.  D. Xu, Y. Zhu, C. B. Choy, and L. Fei-Fei. Scene graph generation by iterative message passing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5410–5419, 2017.
- Yang et al. [2018a] J. Yang, J. Lu, S. Lee, D. Batra, and D. Parikh. Graph r-cnn for scene graph generation. In Proceedings of the European conference on computer vision (ECCV), pages 670–685, 2018a.
- Yang et al. [2018b] X. Yang, H. Zhang, and J. Cai. Shuffle-then-assemble: Learning object-agnostic visual relationship features. In Proceedings of the European Conference on Computer Vision (ECCV), pages 36–52, 2018b.
- Yang et al.  X. Yang, K. Tang, H. Zhang, and J. Cai. Auto-encoding scene graphs for image captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10685–10694, 2019.
- Zellers et al.  R. Zellers, M. Yatskar, S. Thomson, and Y. Choi. Neural motifs: Scene graph parsing with global context. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5831–5840, 2018.
- Zhang et al. [2019a] C. Zhang, W.-L. Chao, and D. Xuan. An empirical study on leveraging scene graphs for visual question answering. arXiv preprint arXiv:1907.12133, 2019a.
- Zhang et al.  H. Zhang, Z. Kyaw, S.-F. Chang, and T.-S. Chua. Visual translation embedding network for visual relation detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5532–5540, 2017.
- Zhang et al. [2019b] J. Zhang, Y. Kalantidis, M. Rohrbach, M. Paluri, A. Elgammal, and M. Elhoseiny. Large-scale visual relationship understanding. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 9185–9194, 2019b.
- Zhang et al. [2019c] J. Zhang, K. J. Shih, A. Elgammal, A. Tao, and B. Catanzaro. Graphical contrastive losses for scene graph parsing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 11535–11543, 2019c.
- Zhou et al.  L. Zhou, H. Palangi, L. Zhang, H. Hu, J. J. Corso, and J. Gao. Unified vision-language pre-training for image captioning and VQA. arXiv preprint arXiv:1909.11059, 2019.
6.1 Additional Results and Analysis
| | VG  | VTE  | GQA  | GQA-nLR |
|---|---|---|---|---|
| # train images | 57,723 | 68,786 | 66,078 | 59,790 |
| # train triplets (unique) | 29,283 | 19,811 | 470,129 | 98,367 |
| # val images | 5,000 | 4,990 | 4,903 | 4,382 |
| # test images | 26,446 | 25,851 | 10,055 | 9,159 |
| # test-ZS images | 4,519 | 653 | 6,418 | 4,266 |
| # test-ZS triplets (unique/total) | 5,278/7,601 | 601/2,414 | 37,116/45,135 | 9,704/11,067 |
- Object detector: Faster R-CNN  for VG; Mask R-CNN  for VTE, GQA and GQA-nLR, chosen in lieu of Faster R-CNN since it achieves better performance due to multitask training on COCO. In SGGen, we extract up to 50 bounding boxes with a confidence threshold of 0.2 as in .
- Detector pretrained on: VG  for the VG split; COCO for the other splits (followed by fine-tuning on GQA in the case of SGGen).
- Learning rate: increased due to larger graphs in a batch.
- Batch size (# scene graphs): 6.
- Epochs: MP: 20, with learning rate decay by 0.1 after 15 epochs; NM: 12, with decay by 0.1 after 10 epochs.
Evaluation of zero/few-shot cases. To evaluate n-shot performance using image-level recall, we need to keep in the test images only those triplets that have occurred no more than n times during training and remove images without such triplets. This results in computing recall for very sparse annotations, so the image-level metric can be noisy and create discrepancies between simple images with a few triplets and complex images with hundreds of triplets. For example, for an image with only two ground-truth triplets, an R@100 of 50% is quite a bad result, while for an image with hundreds of triplets it can be an excellent result. Our Weighted Triplet Recall is computed over all test triplets joined into a single set, so it resolves this discrepancy.
Constrained vs unconstrained metrics. In the graph-constrained case , only the top-1 predicted predicate per subject-object pair is considered when triplets are ranked; follow-up works [24, 40] improved results by removing this constraint. The unconstrained metric evaluates models more reliably, since it does not require a perfect triplet match to be the top-1 prediction, which is an unreasonable expectation given the many synonyms and mislabeled annotations in scene graph datasets. For example, ‘man wearing shirt’ and ‘man in shirt’ are similar predictions, but only the unconstrained metric allows both to be included in the ranking. The SGDet+ metric  has a similar motivation to removing the graph constraint, but it does not address the other issues of image-level metrics.
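The difference between the two ranking schemes can be sketched as follows; the per-pair score format is an assumption, and in practice triplet scores typically also combine the predicate score with subject and object confidences before ranking.

```python
import numpy as np

def candidate_triplets(pair_scores, graph_constraint=True):
    """Illustration of how the graph constraint changes image-level ranking.

    pair_scores: dict mapping a (subject, object) node pair to an array of
                 per-predicate scores for that pair.
    With the constraint, only the single best predicate per pair can enter the
    image-level top-K; without it, every predicate of every pair is a candidate,
    so synonymous predictions (e.g. 'wearing' and 'in') can both be ranked.
    """
    candidates = []
    for pair, scores in pair_scores.items():
        if graph_constraint:
            p = int(np.argmax(scores))
            candidates.append((pair, p, float(scores[p])))
        else:
            candidates.extend((pair, p, float(s)) for p, s in enumerate(scores))
    # Highest-scoring candidates first; the top-K of this list are the predictions.
    return sorted(candidates, key=lambda c: c[2], reverse=True)
```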