Graph Density-Aware Losses for Novel Compositions in Scene Graph Generation

05/17/2020
by   Boris Knyazev, et al.

Scene graph generation (SGG) aims to predict graph-structured descriptions of input images, in the form of objects and relationships between them. This task is becoming increasingly useful for progress at the interface of vision and language. Here, it is important, yet challenging, to perform well on novel (zero-shot) or rare (few-shot) compositions of objects and relationships. In this paper, we identify two key issues that limit such generalization. First, we show that the standard loss used in this task is unintentionally a function of scene graph density: it neglects the individual edges of large sparse graphs during training, even though these contain diverse few-shot examples that are important for generalization. Second, the frequency of relationships can create a strong bias in this task, such that a blind model that always predicts the most frequent relationship achieves good performance; consequently, some state-of-the-art models exploit this bias to improve results. Evaluating two different models on the Visual Genome dataset and its more recent, improved version, GQA, we show that such models can suffer the most in their ability to generalize to rare compositions. To address these issues, we introduce a density-normalized edge loss, which provides more than a two-fold improvement on certain generalization metrics. Compared with other work in this direction, our enhancements require only a few lines of code and add no computational cost. We also highlight the difficulty of accurately evaluating models with existing metrics, especially in the zero- and few-shot regimes, and introduce a novel weighted metric.
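The core idea can be illustrated with a minimal sketch. The function names and the exact normalization below are assumptions for illustration, not the authors' implementation: a standard batch loss pools all edges across graphs, so a dense graph contributes many terms and drowns out the few edges of a sparse graph, while a density-aware variant averages within each graph first so every graph counts equally.

```python
def pooled_edge_loss(per_graph_losses):
    """Standard batch loss (sketch): pool every edge across the batch
    and average. A large dense graph contributes many terms, so the few
    edges of a small sparse graph barely move the objective."""
    all_edges = [loss for graph in per_graph_losses for loss in graph]
    return sum(all_edges) / len(all_edges)


def density_normalized_edge_loss(per_graph_losses):
    """Density-normalized variant (sketch): average within each graph
    first, then across graphs, so each graph's edges carry equal weight
    regardless of how many edges it has."""
    per_graph_means = [sum(graph) / len(graph) for graph in per_graph_losses]
    return sum(per_graph_means) / len(per_graph_means)


# One dense graph (100 well-fit edges) and one sparse graph whose two
# edges, e.g. a rare composition, still have high loss.
dense = [0.1] * 100
sparse = [1.0] * 2

print(pooled_edge_loss([dense, sparse]))              # ~0.118: sparse edges drowned out
print(density_normalized_edge_loss([dense, sparse]))  # ~0.55: sparse graph counts equally
```

The numbers make the density dependence concrete: under the pooled loss the sparse graph's two high-loss edges shift the objective by only a few percent, while the normalized loss weights them the same as the dense graph's hundred edges.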

Related research

07/11/2020 · Generative Graph Perturbations for Scene Graph Prediction
Inferring objects and their relationships from an image is useful in man...

01/18/2022 · Resistance Training using Prior Bias: toward Unbiased Scene Graph Generation
Scene Graph Generation (SGG) aims to build a structured representation o...

03/03/2021 · Energy-Based Learning for Scene Graph Generation
Traditional scene graph generation methods are trained using cross-entro...

07/08/2022 · GEMS: Scene Expansion using Generative Models of Graphs
Applications based on image retrieval require editing and associating in...

05/10/2023 · Incorporating Structured Representations into Pretrained Vision Language Models Using Scene Graphs
Vision and Language (VL) models have demonstrated remarkable zero-shot p...

03/23/2023 · Visually-Prompted Language Model for Fine-Grained Scene Graph Generation in an Open World
Scene Graph Generation (SGG) aims to extract <subject, predicate, object...

06/12/2019 · Visual Relationships as Functions: Enabling Few-Shot Scene Graph Prediction
Scene graph prediction --- classifying the set of objects and predicates...
