Referring Expression Grounding by Marginalizing Scene Graph Likelihood

06/09/2019
by Daqing Liu, et al.

We focus on the task of grounding referring expressions in images, e.g., localizing "the white truck in front of a yellow one". To resolve this task fundamentally, one should first find the contextual objects (e.g., the "yellow" truck) and then exploit them to disambiguate the referent from other similar objects by using the attributes and relationships (e.g., "white", "yellow", "in front of"). However, training such a model is extremely challenging because ground-truth annotations of the contextual objects and their relationships are usually missing due to the prohibitive annotation cost. Therefore, nearly all existing methods evade this joint grounding and reasoning process and instead resort to a holistic association between the sentence and region features. As a result, they suffer from the heavy parameters of fully-connected layers, poor interpretability, and limited generalization to unseen expressions. In this paper, we tackle this challenge by training and inference with the proposed Marginalized Scene Graph Likelihood (MSGL). Specifically, we use a scene graph: a graphical representation parsed from the referring expression, where the nodes are objects with attributes and the edges are relationships. Thanks to a conditional random field (CRF) built on the scene graph, we can ground every object to its corresponding region and perform reasoning with the unlabeled contexts by marginalizing them out using sum-product belief propagation. Overall, the proposed MSGL is effective and interpretable: on three benchmarks, MSGL consistently outperforms the state of the art while offering a complete grounding of all the objects in a sentence.
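To make the marginalization step concrete, below is a minimal NumPy sketch of sum-product belief propagation on a tiny tree-structured scene-graph CRF for the example expression above. It is not the authors' implementation: the potentials, node indices, region count, and ground-truth region index are illustrative placeholders standing in for a learned phrase-region and relationship scoring model.

```python
import numpy as np

# Toy scene graph parsed from "the white truck in front of a yellow one":
# node 0 = referent ("white truck"), node 1 = context ("yellow truck"),
# and the edge (0, 1) carries the relationship "in front of".
K = 4                                  # number of candidate regions (placeholder)
rng = np.random.default_rng(0)

# Placeholder potentials; a real model would score region/phrase and
# region-pair/relationship compatibility with learned networks.
unary = {
    0: rng.random(K) + 1e-6,           # phi_0(x_0): "white truck" vs. each region
    1: rng.random(K) + 1e-6,           # phi_1(x_1): "yellow truck" vs. each region
}
pairwise = {(0, 1): rng.random((K, K)) + 1e-6}  # psi_01(x_0, x_1): "in front of"
children = {0: [1], 1: []}             # scene-graph tree rooted at the referent


def referent_belief(root, children, unary, pairwise):
    """Marginal distribution over the root's regions in a tree-structured CRF,
    computed with sum-product belief propagation: every unlabeled context node
    is summed (marginalized) out on the upward pass."""
    def upward(v):
        msg = unary[v].copy()
        for c in children[v]:
            # Message c -> v: sum over the child's candidate regions.
            msg *= pairwise[(v, c)] @ upward(c)
        return msg

    belief = upward(root)
    return belief / belief.sum()


p_referent = referent_belief(0, children, unary, pairwise)
print("grounded region for the referent:", int(p_referent.argmax()))

# Training signal: only the referent's region is annotated, so the loss is the
# negative log of its marginal probability (contexts remain unlabeled).
gt_region = 2                          # hypothetical annotated region index
loss = -np.log(p_referent[gt_region])
print("negative marginalized log-likelihood:", float(loss))
```

When the parsed scene graph is a tree, this sum-product pass is exact and its cost grows only linearly with the number of nodes, which is what keeps the full marginalization tractable at both training and inference time.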
