Towards Open-Set Test-Time Adaptation Utilizing the Wisdom of Crowds in Entropy Minimization

08/14/2023
by   Jungsoo Lee, et al.

Test-time adaptation (TTA) methods, which generally rely on the model's own predictions (e.g., entropy minimization) to adapt a source-pretrained model to an unlabeled target domain, suffer from noisy signals originating from 1) incorrect or 2) open-set predictions. Such noisy signals hamper long-term stable adaptation, so training models without accumulating these errors is crucial for practical TTA. To address these issues, including open-set TTA, we propose a simple yet effective sample selection method inspired by the following crucial empirical finding. While entropy minimization compels the model to increase the probability of its predicted label (i.e., its confidence value), we found that noisy samples instead show decreased confidence values. More specifically, entropy minimization attempts to raise the confidence value of each individual sample's prediction, but an individual confidence value may rise or fall under the influence of signals from numerous other predictions (i.e., the wisdom of crowds). Because of this, noisy signals misaligned with the 'wisdom of crowds' generally found in the correct signals fail to raise the individual confidence values of wrong samples, despite the attempt to increase them. Based on this finding, we filter out samples whose confidence values are lower in the adapted model than in the original model, as they are likely to be noisy. Our method is widely applicable to existing TTA methods and improves their long-term adaptation performance in both image classification (e.g., 49.4%) and semantic segmentation (e.g., 11.7%).
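The selection rule the abstract describes can be sketched compactly: compare each sample's confidence (its maximum softmax probability) under the frozen original model against the adapted model, keep only samples whose confidence did not drop, and apply entropy minimization to those. The following is a minimal NumPy sketch of that idea; the function names and threshold convention are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with the usual max-subtraction for stability."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def select_samples(orig_logits, adapted_logits):
    """Keep samples whose confidence (max softmax probability) did not
    decrease after adaptation; samples with dropped confidence are
    treated as likely noisy (incorrect or open-set) and filtered out."""
    conf_orig = softmax(orig_logits).max(axis=1)
    conf_adapted = softmax(adapted_logits).max(axis=1)
    return conf_adapted >= conf_orig  # boolean keep-mask

def entropy_loss(adapted_logits, keep_mask):
    """Entropy minimization objective computed only on selected samples."""
    if not keep_mask.any():
        return 0.0
    p = softmax(adapted_logits[keep_mask])
    ent = -(p * np.log(p + 1e-12)).sum(axis=1)
    return ent.mean()

# Toy batch of two binary-classification samples: after adaptation,
# sample 0's confidence rises and sample 1's falls.
orig = np.array([[2.0, 0.0], [2.0, 0.0]])
adapted = np.array([[3.0, 0.0], [1.0, 0.0]])
mask = select_samples(orig, adapted)   # → [True, False]
loss = entropy_loss(adapted, mask)     # entropy of the kept sample only
```

In a real TTA loop this mask would gate the per-sample entropy terms before backpropagation, so gradients come only from samples aligned with the "wisdom of crowds".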


Related research

- 06/18/2020 · Fully Test-time Adaptation by Entropy Minimization
  Faced with new and different data during testing, a model must adapt its...

- 03/03/2023 · EcoTTA: Memory-Efficient Continual Test-time Adaptation via Self-distilled Regularization
  This paper presents a simple yet effective approach that improves contin...

- 09/07/2023 · REALM: Robust Entropy Adaptive Loss Minimization for Improved Single-Sample Test-Time Adaptation
  Fully-test-time adaptation (F-TTA) can mitigate performance loss due to ...

- 06/28/2021 · Test-Time Adaptation to Distribution Shift by Confidence Maximization and Input Transformation
  Deep neural networks often exhibit poor performance on data that is unli...

- 04/06/2022 · Efficient Test-Time Model Adaptation without Forgetting
  Test-time adaptation (TTA) seeks to tackle potential distribution shifts...

- 05/29/2023 · Test-Time Adaptation with CLIP Reward for Zero-Shot Generalization in Vision-Language Models
  Misalignment between the outputs of a vision-language (VL) model and tas...
