Revisiting Cross Modal Retrieval

07/19/2018
by Shah Nawaz, et al.

This paper proposes a cross-modal retrieval system built on image and text encoding. Most multimodal architectures employ a separate network for each modality to capture the semantic relationship between them. In contrast, we show that a fused image-text encoding can achieve comparable cross-modal retrieval results without a dedicated network per modality, and that text encodings can capture semantic relationships across multiple modalities. To the best of our knowledge, this work is the first to employ a single network and a fused image-text embedding for cross-modal retrieval. We evaluate our approach on two widely used multimodal datasets: MS-COCO and Flickr30K.
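
The single-network idea can be pictured with a short sketch. The following is a minimal, illustrative PyTorch example, not the paper's exact architecture: the module names, feature dimensions, and retrieval step are all assumptions made for illustration. Both modalities are mapped into a common input space and then pass through one shared encoder, so retrieval reduces to similarity search in a single embedding space.

```python
# Minimal sketch of a single shared network for cross-modal retrieval.
# All names and dimensions here are hypothetical, not the authors' design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEncoder(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=300, common_dim=512, embed_dim=256):
        super().__init__()
        # Light adapters map each modality into a common input space...
        self.img_proj = nn.Linear(img_dim, common_dim)
        self.txt_proj = nn.Linear(txt_dim, common_dim)
        # ...and one network (shared weights) produces the joint embedding,
        # instead of a separate branch per modality.
        self.shared = nn.Sequential(
            nn.Linear(common_dim, embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
        )

    def embed_image(self, img_feat):
        return F.normalize(self.shared(self.img_proj(img_feat)), dim=-1)

    def embed_text(self, txt_feat):
        return F.normalize(self.shared(self.txt_proj(txt_feat)), dim=-1)

# Retrieval: rank gallery images by similarity to a caption query.
model = SharedEncoder()
gallery = model.embed_image(torch.randn(100, 2048))     # 100 image features
query = model.embed_text(torch.randn(1, 300))           # 1 caption feature
ranking = (query @ gallery.T).argsort(descending=True)  # best match first
```

Because the embeddings are L2-normalized, the dot product above equals cosine similarity; a standard R@K evaluation would check whether the ground-truth image appears among the top K positions of `ranking`.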

Related research

08/21/2019 · Learning Joint Embedding for Cross-Modal Retrieval
A cross-modal retrieval process is to use a query in one modality to obt...

09/03/2019 · Do Cross Modal Systems Leverage Semantic Relationships?
Current cross-modal retrieval systems are evaluated using R@K measure wh...

10/19/2020 · DIME: An Online Tool for the Visual Comparison of Cross-Modal Retrieval Models
Cross-modal retrieval relies on accurate models to retrieve relevant res...

07/16/2020 · Preserving Semantic Neighborhoods for Robust Cross-modal Retrieval
The abundance of multimodal data (e.g. social media posts) has inspired ...

04/30/2020 · Crisscrossed Captions: Extended Intramodal and Intermodal Semantic Similarity Judgments for MS-COCO
Image captioning datasets have proven useful for multimodal representati...

11/19/2015 · Multimodal sparse representation learning and applications
Unsupervised methods have proven effective for discriminative tasks in a...

09/30/2019 · Diachronic Cross-modal Embeddings
Understanding the semantic shifts of multimodal information is only poss...
