Learning Loss for Test-Time Augmentation

10/22/2020
by   Ildoo Kim, et al.

Data augmentation has been actively studied for building robust neural networks. Most recent data augmentation methods focus on augmenting datasets during the training phase, while at test time simple transformations are still the norm for test-time augmentation. This paper proposes a novel instance-level test-time augmentation that efficiently selects suitable transformations for each test input. The proposed method uses an auxiliary module to predict the loss of each candidate transformation given the input; the transformations with the lowest predicted losses are then applied to the input, and the final prediction is obtained by averaging the model's outputs over the augmented inputs. Experimental results on several image classification benchmarks show that the proposed instance-aware test-time augmentation improves the model's robustness against various corruptions.
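The selection procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the loss-prediction module and the classifier are stand-in functions, and the candidate transformation set, the function names, and the value of `k` are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Candidate test-time transformations (identity, horizontal flip, small shifts).
def identity(x): return x
def hflip(x): return x[:, ::-1]
def shift_down(x): return np.roll(x, 1, axis=0)
def shift_right(x): return np.roll(x, 1, axis=1)

TRANSFORMS = [identity, hflip, shift_down, shift_right]

def predict_losses(x, n_transforms):
    """Stand-in for the auxiliary loss-prediction module: it should output
    one predicted loss per candidate transformation for this input."""
    return rng.random(n_transforms)

def model_predict(x, n_classes=10):
    """Stand-in classifier returning class probabilities."""
    logits = rng.random(n_classes)
    return np.exp(logits) / np.exp(logits).sum()

def instance_aware_tta(x, k=2):
    # 1. Predict a loss for every candidate transformation of this input.
    losses = predict_losses(x, len(TRANSFORMS))
    # 2. Keep the k transformations with the lowest predicted loss.
    chosen = np.argsort(losses)[:k]
    # 3. Average the model's predictions over the selected augmented inputs.
    probs = [model_predict(TRANSFORMS[i](x)) for i in chosen]
    return np.mean(probs, axis=0)

x = rng.random((32, 32))        # a dummy 32x32 "image"
probs = instance_aware_tta(x, k=2)
print(probs.shape)              # one averaged probability vector per input
```

In a real system, `predict_losses` would be a small learned head conditioned on the input, and `model_predict` the trained classifier; only the selection-and-averaging logic above reflects the method described in the abstract.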


Related research

- 02/21/2020 · Greedy Policy Search: A Simple Baseline for Learnable Test-Time Augmentation
  Test-time data augmentation—averaging the predictions of a machine learn...
- 12/19/2020 · Augmentation Inside the Network
  In this paper, we present augmentation inside the network, a method that...
- 03/08/2022 · Data augmentation with mixtures of max-entropy transformations for filling-level classification
  We address the problem of distribution shifts in test-time data with a p...
- 06/14/2019 · Fixing the train-test resolution discrepancy
  Data-augmentation is key to the training of neural networks for image cl...
- 07/02/2023 · CNN-BiLSTM model for English Handwriting Recognition: Comprehensive Evaluation on the IAM Dataset
  We present a CNN-BiLSTM system for the problem of offline English handwr...
- 11/01/2022 · SADT: Combining Sharpness-Aware Minimization with Self-Distillation for Improved Model Generalization
  Methods for improving deep neural network training times and model gener...
- 05/03/2020 · A Causal View on Robustness of Neural Networks
  We present a causal view on the robustness of neural networks against in...
