Probabilistic Embeddings for Cross-Modal Retrieval

01/13/2021
by Sanghyuk Chun, et al.

Cross-modal retrieval methods build a common representation space for samples from multiple modalities, typically from the vision and the language domains. For images and their captions, the multiplicity of the correspondences makes the task particularly challenging. Given an image (respectively a caption), there are multiple captions (respectively images) that equally make sense. In this paper, we argue that deterministic functions are not sufficiently powerful to capture such one-to-many correspondences. Instead, we propose to use Probabilistic Cross-Modal Embedding (PCME), where samples from the different modalities are represented as probabilistic distributions in the common embedding space. Since common benchmarks such as COCO suffer from non-exhaustive annotations for cross-modal matches, we propose to additionally evaluate retrieval on the CUB dataset, a smaller yet clean database where all possible image-caption pairs are annotated. We extensively ablate PCME and demonstrate that it not only improves the retrieval performance over its deterministic counterpart, but also provides uncertainty estimates that render the embeddings more interpretable.
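The core idea of the abstract can be illustrated with a toy sketch: each image or caption is mapped not to a point but to a diagonal Gaussian in the shared embedding space, and a soft match probability is estimated by averaging a sigmoid of (negative) distances over sampled embedding pairs. This is a minimal NumPy illustration of the general probabilistic-embedding idea, not the paper's actual PCME implementation; the function names, the scale/shift parameters `a` and `b`, and the sample count are all assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_embeddings(mu, log_sigma, n_samples=8):
    """Draw n_samples vectors from the diagonal Gaussian N(mu, sigma^2)
    that represents one image or caption (hypothetical helper)."""
    sigma = np.exp(log_sigma)
    noise = rng.standard_normal((n_samples, mu.shape[0]))
    return mu + noise * sigma

def match_probability(mu_a, log_sigma_a, mu_b, log_sigma_b,
                      a=1.0, b=0.0, n_samples=8):
    """Monte-Carlo estimate of a soft match probability:
    average sigmoid(-a * d^2 + b) over all sampled embedding pairs,
    so nearby distributions score close to 0.5 (with b = 0) and
    distant ones score near 0. Parameters a, b are illustrative."""
    za = sample_embeddings(mu_a, log_sigma_a, n_samples)
    zb = sample_embeddings(mu_b, log_sigma_b, n_samples)
    # all pairwise squared Euclidean distances between the samples
    d2 = ((za[:, None, :] - zb[None, :, :]) ** 2).sum(-1)
    p = 1.0 / (1.0 + np.exp(a * d2 - b))  # elementwise sigmoid
    return float(p.mean())

# A caption with two plausible images: both image distributions can
# overlap the caption's distribution, something a single deterministic
# point embedding cannot express.
mu_caption = np.zeros(4)
ls_tight = np.full(4, -3.0)          # small variance: confident sample
p_close = match_probability(mu_caption, ls_tight, mu_caption, ls_tight)
p_far = match_probability(mu_caption, ls_tight, mu_caption + 5.0, ls_tight)
```

Here `p_close` exceeds `p_far`, since overlapping distributions yield small sampled distances. In the same spirit, the learned variance acts as an uncertainty estimate: an ambiguous input gets a broad distribution, which is what the abstract argues makes the embeddings more interpretable.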


Related research

04/20/2022 · Uncertainty-based Cross-Modal Retrieval with Probabilistic Representations
Probabilistic embeddings have proven useful for capturing polysemous wor...

07/01/2023 · ProbVLM: Probabilistic Adapter for Frozen Vision-Language Models
Large-scale vision-language models (VLMs) like CLIP successfully find co...

03/14/2019 · Show, Translate and Tell
Humans have an incredible ability to process and understand information ...

05/29/2023 · Improved Probabilistic Image-Text Representations
Image-Text Matching (ITM) task, a fundamental vision-language (VL) task,...

01/10/2023 · Pix2Map: Cross-modal Retrieval for Inferring Street Maps from Images
Self-driving vehicles rely on urban street maps for autonomous navigatio...

12/05/2016 · Deep Multi-Modal Image Correspondence Learning
Inference of correspondences between images from different modalities is...

02/04/2017 · Simple to Complex Cross-modal Learning to Rank
The heterogeneity-gap between different modalities brings a significant ...
