Cross-Modal Implicit Relation Reasoning and Aligning for Text-to-Image Person Retrieval

03/22/2023
by   Ding Jiang, et al.
Text-to-image person retrieval aims to identify the target person based on a given textual description query. The primary challenge is to learn to map the visual and textual modalities into a common latent space. Prior works have attempted to address this challenge by leveraging separately pre-trained unimodal models to extract visual and textual features. However, these approaches lack the underlying alignment capabilities required to match multimodal data effectively. Moreover, these works rely on prior information to explore explicit part alignments, which may distort intra-modality information. To alleviate these issues, we present IRRA: a cross-modal Implicit Relation Reasoning and Aligning framework that learns relations between local visual-textual tokens and enhances global image-text matching without requiring additional prior supervision. Specifically, we first design an Implicit Relation Reasoning module in a masked language modeling paradigm, which achieves cross-modal interaction by integrating visual cues into the textual tokens with a cross-modal multimodal interaction encoder. Second, to globally align the visual and textual embeddings, Similarity Distribution Matching is proposed to minimize the KL divergence between image-text similarity distributions and the normalized label matching distributions. The proposed method achieves new state-of-the-art results on all three public datasets, with a notable margin of about 3% compared to prior methods.
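The Similarity Distribution Matching objective described above can be illustrated with a short sketch: within a batch, the image-to-text cosine-similarity rows are turned into a softmax distribution and pulled toward the normalized label-matching distribution via KL divergence. This is a minimal NumPy illustration under assumed conventions (a temperature of 0.02, person-ID labels, and a symmetrized image-to-text plus text-to-image loss), not the authors' implementation.

```python
import numpy as np

def _softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def sdm_loss(image_feats, text_feats, pids, tau=0.02, eps=1e-8):
    """Sketch of Similarity Distribution Matching (SDM).

    KL divergence between the batch similarity distribution (softmax over
    scaled cosine similarities) and the normalized label-matching
    distribution (rows of the person-ID match matrix, normalized to sum
    to 1). `tau` is an assumed temperature hyperparameter.
    """
    # L2-normalize so the dot product is cosine similarity.
    img = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    txt = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    sim = img @ txt.T / tau                     # (B, B) image-to-text logits

    # 1 where the image and text share a person ID, else 0.
    labels = (pids[:, None] == pids[None, :]).astype(float)

    def kl_dir(logits, lab):
        p = _softmax(logits, axis=1)            # predicted similarity dist.
        q = lab / lab.sum(axis=1, keepdims=True)  # normalized matching dist.
        return (p * (np.log(p + eps) - np.log(q + eps))).sum(axis=1).mean()

    # Symmetrize over both retrieval directions.
    return kl_dir(sim, labels) + kl_dir(sim.T, labels.T)
```

A usage note: because every positive pair in the batch contributes mass to the target distribution, SDM differs from a one-hot contrastive loss when a batch contains several images or captions of the same identity.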


