Test-Time Training for Out-of-Distribution Generalization

09/29/2019
by Yu Sun, et al.

We introduce a general approach, called test-time training, for improving the performance of predictive models when test and training data come from different distributions. Test-time training turns a single unlabeled test instance into a self-supervised learning problem, on which we update the model parameters before making a prediction on the test sample. We show that this simple idea leads to surprising improvements on diverse image classification benchmarks aimed at evaluating robustness to distribution shifts. Theoretical investigations on a convex model reveal helpful intuitions for when we can expect our approach to help.


Related research

- MT3: Meta Test-Time Training for Self-Supervised Test-Time Adaption (03/30/2021)
  An unresolved problem in Deep Learning is the ability of neural networks...

- Test-Time Training with Masked Autoencoders (09/15/2022)
  Test-time training adapts to a new test distribution on the fly by optim...

- Lethean Attack: An Online Data Poisoning Technique (11/24/2020)
  Data poisoning is an adversarial scenario where an attacker feeds a spec...

- Temporal Coherent Test-Time Optimization for Robust Video Classification (02/28/2023)
  Deep neural networks are likely to fail when the test data is corrupted ...

- Treebank Embedding Vectors for Out-of-domain Dependency Parsing (05/02/2020)
  A recent advance in monolingual dependency parsing is the idea of a tree...

- AffRankNet+: Ranking Affect Using Privileged Information (08/12/2021)
  Many of the affect modelling tasks present an asymmetric distribution of...

- TTT-UCDR: Test-time Training for Universal Cross-Domain Retrieval (08/19/2022)
  Image retrieval is a niche problem in computer vision curated towards fi...
