Modality-dependent Cross-media Retrieval

06/22/2015
by   Yunchao Wei, et al.

In this paper, we investigate cross-media retrieval between images and text, i.e., using an image to search for text (I2T) and using text to search for images (T2I). Existing cross-media retrieval methods usually learn a single pair of projections that maps the original features of images and text into a common latent space where content similarity can be measured. However, using the same projections for the two different retrieval tasks (I2T and T2I) may lead to a trade-off between their respective performances, rather than the best performance of each. Unlike previous works, we propose a modality-dependent cross-media retrieval (MDCR) model, in which two pairs of projections are learned, one for each cross-media retrieval task, instead of a single pair. Specifically, by jointly optimizing the correlation between images and text and the linear regression from one modal space (image or text) to the semantic space, two pairs of mappings are learned that project images and text from their original feature spaces into two common latent subspaces (one for I2T and the other for T2I). Extensive experiments show the superiority of the proposed MDCR over other methods. In particular, based on a 4,096-dimensional convolutional neural network (CNN) visual feature and a 100-dimensional LDA textual feature, the proposed method achieves an mAP of 41.5%, a new state-of-the-art result on the Wikipedia dataset.
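The joint objective described above, a correlation term between projected images and text plus a task-specific linear regression from one modality to the semantic (label) space, can be sketched with plain gradient descent. This is a minimal illustration of the I2T variant under simplifying assumptions, not the authors' actual optimization procedure; the function name `mdcr_i2t`, the regularization weight `lam`, and all hyperparameters are hypothetical choices for the sketch.

```python
import numpy as np

def mdcr_i2t(V, T, Y, lam=0.1, lr=1e-3, iters=500, seed=0):
    """Sketch of an I2T-style objective: correlation + image-side regression.

    V: (n, dv) image features, T: (n, dt) text features,
    Y: (n, c) one-hot semantic labels.
    Minimizes ||V Wv - T Wt||^2 + ||V Wv - Y||^2 + lam (||Wv||^2 + ||Wt||^2)
    and returns projections Wv (dv, c), Wt (dt, c) into a shared c-dim space.
    """
    rng = np.random.default_rng(seed)
    dv, dt, c = V.shape[1], T.shape[1], Y.shape[1]
    Wv = rng.normal(scale=0.01, size=(dv, c))
    Wt = rng.normal(scale=0.01, size=(dt, c))
    for _ in range(iters):
        corr = V @ Wv - T @ Wt           # correlation term (shared space)
        reg = V @ Wv - Y                 # image-to-semantic regression term
        gWv = V.T @ corr + V.T @ reg + lam * Wv   # gradient w.r.t. Wv
        gWt = -T.T @ corr + lam * Wt              # gradient w.r.t. Wt
        Wv -= lr * gWv
        Wt -= lr * gWt
    return Wv, Wt
```

A T2I variant would swap the regression term to the text side (T Wt - Y), yielding the second, task-specific pair of projections; retrieval then ranks candidates by similarity (e.g. cosine) between rows of V @ Wv and T @ Wt in the corresponding subspace.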

