Okapi: Generalising Better by Making Statistical Matches Match

11/07/2022
by Myles Bartlett, et al.

We propose Okapi, a simple, efficient, and general method for robust semi-supervised learning based on online statistical matching. Our method uses a nearest-neighbours-based matching procedure to generate cross-domain views for a consistency loss, while eliminating statistical outliers. To perform the online matching in a runtime- and memory-efficient way, we draw on the self-supervised literature and combine a memory bank with a slow-moving momentum encoder. The consistency loss is applied within the feature space, rather than on the predictive distribution, making the method agnostic to both the modality and the task in question. We experiment on the WILDS 2.0 benchmark (Sagawa et al.), which significantly expands the range of modalities, applications, and shifts available for studying and benchmarking real-world unsupervised adaptation. Contrary to the findings of Sagawa et al., we show that, with the right method, it is in fact possible to leverage additional unlabelled data to improve upon empirical risk minimisation (ERM). Our method outperforms the baseline methods in terms of out-of-distribution (OOD) generalisation on the iWildCam (multi-class classification) and PovertyMap (regression) image datasets, as well as the CivilComments (binary classification) text dataset. Furthermore, from a qualitative perspective, we show that the matches obtained by the learned encoder are strongly semantically related. Code for our paper is publicly available at https://github.com/wearepal/okapi/.
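The core recipe described in the abstract (a slow-moving momentum encoder, a memory bank of features, k-nearest-neighbour matching, and a feature-space consistency loss) can be sketched in a few lines of PyTorch. The snippet below is an illustrative sketch only, not the authors' implementation (see the linked repository for that): the class name OnlineMatcher, the hyperparameters (bank_size, momentum, k), the mean-of-k-matches target, the median-based outlier filter, and the MSE consistency loss are all assumptions made for this example.

```python
# Illustrative sketch of online statistical matching with a momentum encoder
# and a memory bank. Names and hyperparameters are assumptions, not the
# authors' actual API (their code lives at https://github.com/wearepal/okapi/).
import copy
import torch
import torch.nn.functional as F


class OnlineMatcher(torch.nn.Module):
    def __init__(self, encoder: torch.nn.Module, feature_dim: int,
                 bank_size: int = 4096, momentum: float = 0.999, k: int = 5):
        super().__init__()
        self.encoder = encoder                          # online (trainable) encoder
        self.momentum_encoder = copy.deepcopy(encoder)  # slow-moving EMA copy
        for p in self.momentum_encoder.parameters():
            p.requires_grad_(False)
        self.momentum = momentum
        self.k = k
        # FIFO memory bank of normalised momentum features from past batches.
        self.register_buffer("bank", F.normalize(torch.randn(bank_size, feature_dim), dim=1))
        self.register_buffer("ptr", torch.zeros(1, dtype=torch.long))

    @torch.no_grad()
    def _update_momentum_encoder(self):
        # Exponential moving average of the online encoder's parameters.
        for p_o, p_m in zip(self.encoder.parameters(), self.momentum_encoder.parameters()):
            p_m.mul_(self.momentum).add_(p_o.detach(), alpha=1.0 - self.momentum)

    @torch.no_grad()
    def _enqueue(self, feats: torch.Tensor):
        # Overwrite the oldest entries in the bank with the newest features.
        n = feats.size(0)
        ptr = int(self.ptr)
        idx = torch.arange(ptr, ptr + n, device=feats.device) % self.bank.size(0)
        self.bank[idx] = feats
        self.ptr[0] = (ptr + n) % self.bank.size(0)

    def forward(self, x_unlabelled: torch.Tensor, x_labelled: torch.Tensor):
        # Online features for the unlabelled (target-domain) batch.
        z = F.normalize(self.encoder(x_unlabelled), dim=1)

        with torch.no_grad():
            self._update_momentum_encoder()
            # Momentum features of the labelled (source-domain) batch feed the bank.
            z_m = F.normalize(self.momentum_encoder(x_labelled), dim=1)
            self._enqueue(z_m)
            # k nearest neighbours in the bank serve as cross-domain "matches".
            sims = z @ self.bank.t()                     # cosine similarities
            topk_sim, topk_idx = sims.topk(self.k, dim=1)
            matches = self.bank[topk_idx].mean(dim=1)    # average of the k matches
            # Crude stand-in for the paper's statistical outlier removal:
            # drop queries whose best match is less similar than the batch median.
            keep = topk_sim[:, 0] > topk_sim[:, 0].median()

        # Feature-space consistency loss between online features and their matches.
        loss = F.mse_loss(z[keep], matches[keep]) if keep.any() else z.sum() * 0.0
        return loss
```

Because the consistency term acts on features rather than on predictions, a loss of this shape can sit alongside any task head, which is consistent with the abstract's claim that the method is agnostic to both modality and task.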


Related research:

- 08/31/2021: Semi-supervised Image Classification with Grad-CAM Consistency
  Consistency training, which exploits both supervised and unsupervised le...
- 08/22/2023: GOPro: Generate and Optimize Prompts in CLIP using Self-Supervised Learning
  Large-scale foundation models, such as CLIP, have demonstrated remarkabl...
- 02/07/2021: Self-supervised driven consistency training for annotation efficient histopathology image analysis
  Training a neural network with a large labeled dataset is still a domina...
- 04/17/2023: BenchMD: A Benchmark for Modality-Agnostic Learning on Medical Images and Sensors
  Medical data poses a daunting challenge for AI algorithms: it exists in ...
- 10/15/2020: Self-Supervised Domain Adaptation with Consistency Training
  We consider the problem of unsupervised domain adaptation for image clas...
- 07/17/2020: Self-Supervised Bernoulli Autoencoders for Semi-Supervised Hashing
  Semantic hashing is an emerging technique for large-scale similarity sea...
- 09/05/2023: Doppelgangers: Learning to Disambiguate Images of Similar Structures
  We consider the visual disambiguation task of determining whether a pair...
