Related research:
- Learning Joint Embedding for Cross-Modal Retrieval
- Do Cross Modal Systems Leverage Semantic Relationships?
- DIME: An Online Tool for the Visual Comparison of Cross-Modal Retrieval Models
- Preserving Semantic Neighborhoods for Robust Cross-modal Retrieval
- Crisscrossed Captions: Extended Intramodal and Intermodal Semantic Similarity Judgments for MS-COCO
- Multimodal sparse representation learning and applications
- Diachronic Cross-modal Embeddings
Revisiting Cross Modal Retrieval
This paper proposes a cross-modal retrieval system that leverages joint image and text encoding. Most multimodal architectures employ a separate network for each modality to capture the semantic relationships between them. In our work, however, a single image-text encoding achieves comparable cross-modal retrieval results without requiring a separate network per modality. We show that text encodings can capture semantic relationships across multiple modalities. To the best of our knowledge, this is the first work to employ a single network with a fused image-text embedding for cross-modal retrieval. We evaluate our approach on two widely used multimodal datasets: MS-COCO and Flickr30K.
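The abstract gives no implementation details, so the snippet below is only a rough, hypothetical sketch of the general single-network idea: pre-extracted image and caption features are projected through one shared MLP into a joint embedding space, trained with a symmetric contrastive loss, and evaluated with a recall-at-K retrieval metric. All module names, dimensions, and the loss choice are illustrative assumptions, not the paper's actual architecture.

```python
# Illustrative sketch only: one shared projection network for both modalities,
# a symmetric contrastive loss over matching image-caption pairs, and R@K.
# Dimensions, names, and the loss are assumptions, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedEmbeddingNet(nn.Module):
    """A single MLP applied to both image and text features (no per-modality branch)."""

    def __init__(self, in_dim=512, joint_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, joint_dim),
            nn.ReLU(),
            nn.Linear(joint_dim, joint_dim),
        )

    def forward(self, x):
        # L2-normalise so cosine similarity reduces to a dot product.
        return F.normalize(self.net(x), dim=-1)


def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss; pair i in the batch is the positive match."""
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


def recall_at_k(query_emb, gallery_emb, k=1):
    """R@K when query i matches gallery item i (1-to-1 toy setup)."""
    sims = query_emb @ gallery_emb.t()
    topk = sims.topk(k, dim=1).indices
    targets = torch.arange(query_emb.size(0), device=query_emb.device).unsqueeze(1)
    return (topk == targets).any(dim=1).float().mean().item()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = SharedEmbeddingNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Toy stand-ins for pre-extracted image and caption features.
    img_feats = torch.randn(64, 512)
    txt_feats = torch.randn(64, 512)

    for _ in range(10):
        loss = contrastive_loss(model(img_feats), model(txt_feats))
        opt.zero_grad()
        loss.backward()
        opt.step()

    with torch.no_grad():
        img_emb, txt_emb = model(img_feats), model(txt_feats)
    print("text->image R@5:", recall_at_k(txt_emb, img_emb, k=5))
```

On real data, the random toy tensors would be replaced by features from pre-trained image and text extractors, and R@1/R@5/R@10 would be reported for both text-to-image and image-to-text retrieval on MS-COCO and Flickr30K.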