RDumb: A simple approach that questions our progress in continual test-time adaptation

06/08/2023
by Ori Press, et al.

Test-Time Adaptation (TTA) allows pretrained models to be updated to changing data distributions at deployment time. While early work tested these algorithms on individual fixed distribution shifts, recent work has proposed and applied methods for continual adaptation over long timescales. To examine the reported progress in the field, we propose the Continuously Changing Corruptions (CCC) benchmark to measure the asymptotic performance of TTA techniques. We find that all but one state-of-the-art method eventually collapse and perform worse than a non-adapting model, including methods specifically designed to be robust to performance collapse. In addition, we introduce a simple baseline, "RDumb", that periodically resets the model to its pretrained state. RDumb performs on par with or better than the previously proposed state-of-the-art on all considered benchmarks. Our results show that previous TTA approaches are neither effective at regularizing adaptation to avoid collapse nor able to outperform a simplistic resetting strategy.
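The resetting strategy described in the abstract can be sketched in a few lines. The code below is a minimal, framework-agnostic illustration under stated assumptions: the class name `RDumbWrapper`, the `reset_interval` parameter, and the `update_fn` callback are all hypothetical names chosen for this sketch, not the paper's actual API; model weights are represented as a plain dict so the sketch stays self-contained.

```python
import copy


class RDumbWrapper:
    """Sketch of an RDumb-style resetting strategy (assumed interface):
    run any test-time adaptation update, but restore the pretrained
    weights every `reset_interval` steps to avoid long-run collapse."""

    def __init__(self, pretrained_state, reset_interval=1000):
        # Keep an untouched copy of the pretrained weights for resets.
        self.pristine_state = copy.deepcopy(pretrained_state)
        self.state = copy.deepcopy(pretrained_state)
        self.reset_interval = reset_interval
        self.steps = 0

    def adapt_step(self, update_fn):
        # `update_fn` stands in for one TTA update on the current weights
        # (e.g. an entropy-minimization gradient step in a real system).
        self.state = update_fn(self.state)
        self.steps += 1
        if self.steps % self.reset_interval == 0:
            # Periodic reset: discard adapted weights, return to pretrained.
            self.state = copy.deepcopy(self.pristine_state)
        return self.state
```

The point of the design is that the reset needs no collapse-detection heuristic or regularizer: adaptation can drift arbitrarily between resets, and the model is still guaranteed to return to its pretrained behavior every `reset_interval` steps.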


Related research

08/16/2022 · Gradual Test-Time Adaptation by Self-Training and Style Transfer
Domain shifts at test-time are inevitable in practice. Test-time adaptat...

11/23/2022 · Robust Mean Teacher for Continual and Gradual Test-Time Adaptation
Since experiencing domain shifts during test-time is inevitable in pract...

08/18/2022 · Evaluating Continual Test-Time Adaptation for Contextual and Semantic Domain Shifts
In this paper, our goal is to adapt a pre-trained Convolutional Neural N...

12/08/2022 · Decorate the Newcomers: Visual Domain Prompt for Continual Test Time Adaptation
Continual Test-Time Adaptation (CTTA) aims to adapt the source model to ...

08/10/2022 · Robust Continual Test-time Adaptation: Instance-aware BN and Prediction-balanced Memory
Test-time adaptation (TTA) is an emerging paradigm that addresses distri...

06/28/2021 · Test-Time Adaptation to Distribution Shift by Confidence Maximization and Input Transformation
Deep neural networks often exhibit poor performance on data that is unli...

11/02/2022 · Continual Conscious Active Fine-Tuning to Robustify Online Machine Learning Models Against Data Distribution Shifts
Unlike their offline traditional counterpart, online machine learning mo...
