Unsupervised Data Uncertainty Learning in Visual Retrieval Systems

by Ahmed Taha, et al.

We introduce an unsupervised formulation for estimating heteroscedastic uncertainty in retrieval systems. We propose an extension of triplet loss that models data uncertainty for each input. Besides improving retrieval performance, our formulation models local noise in the embedding space: it quantifies the uncertainty of each input and thus makes the system more interpretable, helping identify noisy observations in both the query and the search database. Evaluations on image and video retrieval applications highlight the utility of our approach. We demonstrate our efficiency in modeling local noise on two real-world datasets: Clothing1M and the Honda Driving dataset. Qualitative results illustrate our ability to identify confusing scenarios in various domains. Uncertainty learning also enables data cleaning by detecting noisy training labels.
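The idea of extending triplet loss with a per-input data uncertainty can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: it assumes the network predicts, alongside each embedding, a log-variance `log_var`, and attenuates the triplet term in the style of heteroscedastic-uncertainty losses (high predicted uncertainty down-weights the term, with a regularizer preventing `log_var` from growing without bound). The function names and the specific attenuation are assumptions for illustration.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss on one (anchor, positive, negative) triple,
    using squared Euclidean distances in the embedding space."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(d_pos - d_neg + margin, 0.0)

def uncertainty_triplet_loss(anchor, positive, negative, log_var, margin=0.2):
    """Heteroscedastic variant (illustrative assumption): `log_var` is a
    per-input log-variance predicted by the network. The triplet term is
    attenuated by exp(-log_var), and the additive log_var term penalizes
    predicting unbounded uncertainty."""
    base = triplet_loss(anchor, positive, negative, margin)
    return np.exp(-log_var) * base + log_var

# Usage: a triple where the negative is closer than the positive,
# evaluated with and without predicted uncertainty.
a = np.array([0.0, 0.0])
p = np.array([1.0, 0.0])
n = np.array([0.5, 0.0])
plain = triplet_loss(a, p, n)                       # 0.95
hedged = uncertainty_triplet_loss(a, p, n, 1.0)     # attenuated + penalty
```

With `log_var = 0` the two losses coincide; as `log_var` grows, the (possibly noisy) triplet term contributes less while the regularizer grows, which is what lets the model flag noisy inputs instead of overfitting to them.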





Related research

Modeling Uncertainty with Hedged Instance Embedding
Which Strategies Matter for Noisy Label Classification? Insight into Loss and Uncertainty
Exploring Uncertainty in Conditional Multi-Modal Retrieval Systems
Data Uncertainty Guided Noise-aware Preprocessing Of Fingerprints
An Uncertainty-Aware Approach for Exploratory Microblog Retrieval
On the Unreasonable Effectiveness of Centroids in Image Retrieval
Relevance-based Margin for Contrastively-trained Video Retrieval Models