Self-Supervised Visual Representations for Cross-Modal Retrieval

01/31/2019
by   Yash Patel, et al.

Cross-modal retrieval methods have improved significantly in recent years with the use of deep neural networks and large-scale annotated datasets such as ImageNet and Places. However, collecting and annotating such datasets requires tremendous human effort and, moreover, their annotations are usually limited to discrete sets of popular visual classes that may not be representative of the richer semantics found in large-scale cross-modal retrieval datasets. In this paper, we present a self-supervised cross-modal retrieval framework that leverages as training data the correlations between images and text across the entire set of Wikipedia articles. Our method consists of training a CNN to predict: (1) the semantic context of the article in which an image is most likely to appear as an illustration (global context), and (2) the semantic context of its caption (local context). Our experiments demonstrate that the proposed method not only learns discriminative visual representations for vision tasks such as image classification and object detection, but also that the learned representations outperform supervised pre-training on ImageNet for cross-modal retrieval.
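The abstract describes supervising a CNN with the semantic context of an article (global) and of a caption (local). One common way to realize this is to represent each context as a topic-probability vector (e.g. from LDA) and train the network with a soft-target cross-entropy against those probabilities. The sketch below illustrates that idea only; the function names, the `alpha` weighting between global and local terms, and the use of a soft cross-entropy are assumptions for illustration, not details confirmed by the abstract.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def soft_cross_entropy(logits, target_probs):
    # Cross-entropy against a soft target distribution
    # (e.g. the LDA topic probabilities of an article or caption).
    log_p = logits - logits.max(axis=-1, keepdims=True)
    log_p = log_p - np.log(np.exp(log_p).sum(axis=-1, keepdims=True))
    return -(target_probs * log_p).sum(axis=-1).mean()

def joint_loss(global_logits, global_topics,
               local_logits, local_topics, alpha=0.5):
    # Hypothetical weighted sum of the global (article-level) and
    # local (caption-level) self-supervised objectives.
    return (alpha * soft_cross_entropy(global_logits, global_topics)
            + (1 - alpha) * soft_cross_entropy(local_logits, local_topics))
```

In such a setup, `global_logits` and `local_logits` would come from two prediction heads on top of the CNN, and minimizing the soft cross-entropy pulls the predicted topic distribution toward the text-derived one; when the prediction matches the target exactly, the loss reduces to the target's entropy.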


Related research

07/04/2018
TextTopicNet - Self-Supervised Learning of Visual Features Through Embedding Images on Semantic Text Spaces
The immense success of deep learning based methods in computer vision he...

08/23/2018
Webly Supervised Joint Embedding for Cross-Modal Image-Text Retrieval
Cross-modal retrieval between visual data and natural language descripti...

08/07/2022
See What You See: Self-supervised Cross-modal Retrieval of Visual Stimuli from Brain Activity
Recent studies demonstrate the use of a two-stage supervised framework t...

04/25/2023
Sample-Specific Debiasing for Better Image-Text Models
Self-supervised representation learning on image-text data facilitates c...

05/24/2017
Self-supervised learning of visual features through embedding images into text topic spaces
End-to-end training from scratch of current deep architectures for new c...

04/30/2018
Cross-Modal Retrieval in the Cooking Context: Learning Semantic Text-Image Embeddings
Designing powerful tools that support cooking activities has rapidly gai...

08/08/2022
Semi-Supervised Cross-Modal Salient Object Detection with U-Structure Networks
Salient Object Detection (SOD) is a popular and important topic aimed at...
