Parameter-free Online Test-time Adaptation

01/15/2022
by   Malik Boudiaf, et al.

Training state-of-the-art vision models has become prohibitively expensive for researchers and practitioners. For the sake of accessibility and resource reuse, it is important to focus on adapting these models to a variety of downstream scenarios. An interesting and practical paradigm is online test-time adaptation, in which training data is inaccessible, no labelled data from the test distribution is available, and adaptation can only happen at test time and on a handful of samples. In this paper, we investigate how test-time adaptation methods fare for a number of pre-trained models on a variety of real-world scenarios, significantly extending the way they have been originally evaluated. We show that they perform well only in narrowly defined experimental setups and sometimes fail catastrophically when their hyperparameters are not selected for the same scenario in which they are being tested. Motivated by the inherent uncertainty around the conditions that will ultimately be encountered at test time, we propose a particularly "conservative" approach, which addresses the problem with a Laplacian Adjusted Maximum-likelihood Estimation (LAME) objective. By adapting the model's output (not its parameters), and solving our objective with an efficient concave-convex procedure, our approach exhibits a much higher average accuracy across scenarios than existing methods, while being notably faster and having a much lower memory footprint. Code available at https://github.com/fiveai/LAME.
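The abstract describes refining the model's output probabilities, rather than its weights, by optimizing a Laplacian-adjusted likelihood with a concave-convex (fixed-point) procedure. The sketch below illustrates that idea in NumPy. It is an interpretation of the abstract, not the authors' reference implementation: the kNN cosine affinity, the number of iterations, and the function name `lame_adjust` are all assumptions; consult the linked repository for the actual method.

```python
import numpy as np

def lame_adjust(probs, feats, knn=5, n_iters=10):
    """Output-level adaptation in the spirit of LAME (illustrative sketch).

    probs: (N, K) softmax outputs of a frozen classifier on a batch
    feats: (N, D) features used to build a kNN affinity matrix
    Returns refined (N, K) soft assignments; model weights are untouched.
    """
    # Build a sparse cosine-similarity kNN affinity (one plausible choice;
    # the exact kernel is a design detail not fixed by the abstract).
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)          # exclude self-similarity
    idx = np.argsort(-sim, axis=1)[:, :knn] # top-k neighbours per sample
    rows = np.arange(sim.shape[0])[:, None]
    W = np.zeros_like(sim)
    W[rows, idx] = sim[rows, idx]
    W = np.maximum(W, 0.0)
    W = (W + W.T) / 2.0                     # symmetrize

    # Concave-convex fixed-point iterations: each update maximizes a bound
    # on the Laplacian-adjusted likelihood, pulling neighbouring samples
    # toward consistent class assignments.
    Z = probs.copy()
    log_p = np.log(probs + 1e-12)
    for _ in range(n_iters):
        Z = np.exp(log_p + W @ Z)
        Z /= Z.sum(axis=1, keepdims=True)   # renormalize rows to the simplex
    return Z
```

Because only a small fixed-point iteration over the batch outputs is performed, there is no backpropagation and no optimizer state, which is consistent with the speed and memory advantages the abstract claims.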


Related research

03/27/2023 · A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts
Machine learning methods strive to acquire a robust model during trainin...

09/04/2023 · CA2: Class-Agnostic Adaptive Feature Adaptation for One-class Classification
One-class classification (OCC), i.e., identifying whether an example bel...

10/22/2021 · Learning Proposals for Practical Energy-Based Regression
Energy-based models (EBMs) have experienced a resurgence within machine ...

06/16/2023 · Neural Priming for Sample-Efficient Adaptation
We propose Neural Priming, a technique for adapting large pretrained mod...

05/16/2022 · Test-Time Adaptation with Shape Moments for Image Segmentation
Supervised learning is well-known to fail at generalization under distri...

03/24/2023 · Robust Test-Time Adaptation in Dynamic Scenarios
Test-time adaptation (TTA) intends to adapt the pretrained model to test...

10/14/2022 · Parameter Sharing in Budget-Aware Adapters for Multi-Domain Learning
Deep learning has achieved state-of-the-art performance on several compu...
