Robust Mean Teacher for Continual and Gradual Test-Time Adaptation

11/23/2022
by   Mario Döbler, et al.

Since experiencing domain shifts during test-time is inevitable in practice, test-time adaptation (TTA) continues to adapt the model during deployment. Recently, the area of continual and gradual TTA has emerged. In contrast to standard TTA, continual TTA considers not a single domain shift but a sequence of shifts, and gradual TTA further exploits the property that some shifts evolve gradually over time. Because both settings involve long test sequences, methods relying on self-training must address error accumulation. In this work, we propose and show that in the setting of TTA, the symmetric cross-entropy is better suited as a consistency loss for mean teachers than the commonly used cross-entropy, a choice justified by our analysis of the (symmetric) cross-entropy's gradient properties. To pull the test feature space closer to the source domain, where the pre-trained model is well-posed, contrastive learning is leveraged. Since applications differ in their requirements, we address different settings, namely having source data available and the more challenging source-free setting. We demonstrate the effectiveness of our proposed method, 'robust mean teacher' (RMT), on the continual and gradual corruption benchmarks CIFAR10C, CIFAR100C, and ImageNet-C. We further consider ImageNet-R and propose a new continual DomainNet-126 benchmark. RMT achieves state-of-the-art results on all benchmarks.
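To make the loss choice concrete, the following is a minimal sketch (not the authors' implementation) of the symmetric cross-entropy between the teacher's and student's softmax outputs: the sum of the cross-entropy in both directions, CE(teacher, student) + CE(student, teacher). The probability vectors below are illustrative placeholders.

```python
import math

def cross_entropy(p, q, eps=1e-8):
    # H(p, q) = -sum_i p_i * log(q_i); eps guards against log(0)
    return -sum(pi * math.log(max(qi, eps)) for pi, qi in zip(p, q))

def symmetric_cross_entropy(p_teacher, p_student):
    # SCE = CE(teacher, student) + CE(student, teacher)
    return (cross_entropy(p_teacher, p_student)
            + cross_entropy(p_student, p_teacher))

# Illustrative soft predictions over 3 classes (hypothetical values)
teacher = [0.7, 0.2, 0.1]
student = [0.6, 0.3, 0.1]
loss = symmetric_cross_entropy(teacher, student)
```

Unlike the standard (one-directional) cross-entropy, the symmetric variant also penalizes the student distribution through the reverse term, which changes the gradient behavior on low-confidence teacher predictions.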


Related research:

08/16/2022
Gradual Test-Time Adaptation by Self-Training and Style Transfer
Domain shifts at test-time are inevitable in practice. Test-time adaptat...

08/18/2022
Evaluating Continual Test-Time Adaptation for Contextual and Semantic Domain Shifts
In this paper, our goal is to adapt a pre-trained Convolutional Neural N...

03/17/2023
TeSLA: Test-Time Self-Learning With Automatic Adversarial Augmentation
Most recent test-time adaptation methods focus on only classification ta...

06/08/2023
RDumb: A simple approach that questions our progress in continual test-time adaptation
Test-Time Adaptation (TTA) allows to update pretrained models to changin...

02/22/2021
On Interaction Between Augmentations and Corruptions in Natural Corruption Robustness
Invariance to a broad array of image corruptions, such as warping, noise...

11/02/2022
Continual Conscious Active Fine-Tuning to Robustify Online Machine Learning Models Against Data Distribution Shifts
Unlike their offline traditional counterpart, online machine learning mo...

04/20/2023
SATA: Source Anchoring and Target Alignment Network for Continual Test Time Adaptation
Adapting a trained model to perform satisfactorily on continually changi...
