Doubly-stochastic mining for heterogeneous retrieval

04/23/2020
by Ankit Singh Rawat, et al.

Modern retrieval problems are characterised by training sets with potentially billions of labels, and heterogeneous data distributions across subpopulations (e.g., users of a retrieval system may be from different countries), each of which poses a challenge. The first challenge concerns scalability: with a large number of labels, standard losses are difficult to optimise even on a single example. The second challenge concerns uniformity: one ideally wants good performance on each subpopulation. While several solutions have been proposed to address the first challenge, the second has received comparatively little attention. In this paper, we propose doubly-stochastic mining (S2M), a stochastic optimization technique that addresses both challenges. In each iteration of S2M, we compute a per-example loss based on a subset of hardest labels, and then compute the minibatch loss based on the hardest examples. We show theoretically and empirically that by focusing on the hardest examples, S2M ensures that all data subpopulations are modelled well.
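To make the two mining steps concrete, the sketch below restricts each example's loss to its hardest (highest-scoring) negative labels and then averages only the hardest examples in the minibatch. This is a minimal illustration assuming a PyTorch setup, not the authors' implementation: the function name s2m_minibatch_loss and the arguments k_labels and k_examples are hypothetical, and the label scores are assumed to already be computed over a sampled candidate set.

```python
import torch
import torch.nn.functional as F

def s2m_minibatch_loss(logits, targets, k_labels, k_examples):
    """Doubly-stochastic mining sketch (hypothetical helper, not the paper's code).

    logits:  (batch, num_candidates) scores over a sampled set of candidate labels
    targets: (batch,) index of each example's positive label within that set
    """
    batch = logits.shape[0]
    rows = torch.arange(batch, device=logits.device)

    # Step 1 (label level): for each example, keep only the k_labels highest-scoring
    # negatives, i.e. the "hardest" labels among the sampled candidates.
    neg_logits = logits.clone()
    neg_logits[rows, targets] = float('-inf')          # mask out the positive label
    hard_negs, _ = neg_logits.topk(k_labels, dim=1)    # (batch, k_labels)

    # Per-example loss: softmax cross-entropy over {positive} plus the hard negatives,
    # with the positive placed in column 0.
    pos = logits[rows, targets].unsqueeze(1)            # (batch, 1)
    restricted = torch.cat([pos, hard_negs], dim=1)     # (batch, 1 + k_labels)
    per_example = F.cross_entropy(
        restricted,
        torch.zeros(batch, dtype=torch.long, device=logits.device),
        reduction='none',
    )

    # Step 2 (example level): average only the k_examples examples with the largest
    # per-example loss, so the hardest examples drive the parameter update.
    hardest, _ = per_example.topk(k_examples)
    return hardest.mean()
```

In practice, k_labels and k_examples would be tuned; the point of the example-level selection is that subpopulations on which the model currently performs poorly contribute more to each update, which is the mechanism behind the uniformity claim above.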

