Boost Test-Time Performance with Closed-Loop Inference

03/21/2022
by Shuaicheng Niu, et al.

Conventional deep models predict a test sample with a single forward pass, which, however, may not be sufficient for hard-to-classify samples. In contrast, humans may need to check such a sample carefully many times before making a final decision, refining or adjusting the prediction by referring to related samples during the recheck. Motivated by this, we propose to predict hard-to-classify test samples in a looped manner to boost model performance. This idea, however, poses a critical challenge: how to construct the inference loop so that the original erroneous predictions on these hard test samples can be corrected with little additional effort. To address this, we propose a general Closed-Loop Inference (CLI) method. Specifically, we first devise a filtering criterion to identify the hard-to-classify test samples that need additional inference loops. For each hard sample, we construct an additional auxiliary learning task based on its original top-K predictions to calibrate the model, and then use the calibrated model to obtain the final prediction. Promising results on ImageNet (in-distribution test samples) and ImageNet-C (out-of-distribution test samples) demonstrate the effectiveness of CLI in improving the performance of any pre-trained model.
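To make the looped-inference idea concrete, here is a minimal PyTorch sketch of one possible instantiation. It assumes an entropy-based filtering criterion and an entropy-minimization auxiliary objective restricted to the sample's original top-K classes; the paper's actual criterion, auxiliary task, and calibration procedure may differ, so treat the names and hyperparameters below (`entropy_thresh`, `steps`, `lr`) as illustrative assumptions rather than the authors' method.

```python
# Hypothetical closed-loop inference sketch (not the authors' implementation).
# Assumptions: "hard" = prediction entropy above a threshold; the auxiliary
# calibration task = sharpening the distribution over the original top-K classes.
import copy
import torch
import torch.nn.functional as F


def closed_loop_predict(model, x, k=5, entropy_thresh=1.0, steps=3, lr=1e-3):
    """Predict a single sample x (shape [1, ...]) with an optional extra loop."""
    model.eval()
    with torch.no_grad():
        logits = model(x)
        probs = logits.softmax(dim=-1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1).item()

    # Easy sample: a single forward pass is enough.
    if entropy <= entropy_thresh:
        return logits.argmax(dim=-1)

    # Hard sample: calibrate a copy of the model on an auxiliary task built
    # from its original top-K predictions, then re-predict.
    topk = logits.topk(k, dim=-1).indices
    calibrated = copy.deepcopy(model)
    optimizer = torch.optim.SGD(calibrated.parameters(), lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        out = calibrated(x)
        topk_logits = out.gather(-1, topk)
        topk_probs = topk_logits.softmax(dim=-1)
        # Assumed auxiliary objective: minimize entropy over the top-K classes.
        loss = -(topk_probs * topk_probs.clamp_min(1e-12).log()).sum()
        loss.backward()
        optimizer.step()

    with torch.no_grad():
        return calibrated(x).argmax(dim=-1)
```

Calibrating a per-sample copy of the model keeps the extra loop from perturbing predictions on subsequent samples; other designs (e.g., reusing a shared calibrated model) trade memory for speed.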


research
12/04/2021

SITA: Single Image Test-time Adaptation

In Test-time Adaptation (TTA), given a model trained on some source data...
research
04/06/2022

Efficient Test-Time Model Adaptation without Forgetting

Test-time adaptation (TTA) seeks to tackle potential distribution shifts...
research
09/07/2023

REALM: Robust Entropy Adaptive Loss Minimization for Improved Single-Sample Test-Time Adaptation

Fully-test-time adaptation (F-TTA) can mitigate performance loss due to ...
research
09/14/2021

YES SIR! Optimizing Semantic Space of Negatives with Self-Involvement Ranker

Pre-trained model such as BERT has been proved to be an effective tool f...
research
04/13/2023

Meta-Auxiliary Learning for Adaptive Human Pose Prediction

Predicting high-fidelity future human poses, from a historically observe...
research
02/22/2023

Energy-Based Test Sample Adaptation for Domain Generalization

In this paper, we propose energy-based sample adaptation at test time fo...
research
06/22/2021

Test-time Collective Prediction

An increasingly common setting in machine learning involves multiple par...
