When and Why Test-Time Augmentation Works
Test-time augmentation (TTA)—the aggregation of predictions across transformed versions of a test input—is a common practice in image classification. In this paper, we present theoretical and experimental analyses that shed light on 1) when TTA is likely to be helpful and 2) when to use various TTA policies. A key finding is that even when TTA produces a net improvement in accuracy, it can change many correct predictions into incorrect ones. We delve into when and why TTA changes a prediction from correct to incorrect and vice versa. Our analysis suggests that the nature and amount of training data, the model architecture, and the augmentation policy all matter. Building on these insights, we present a learning-based method for aggregating test-time augmentations. Experiments across a diverse set of models, datasets, and augmentations show that our method delivers consistent improvements over existing approaches.
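The aggregation described above can be sketched in a few lines: apply a set of transforms to one input, run the model on each copy, and combine the resulting probability vectors. This is a minimal illustration, not the paper's method; `tta_predict`, `toy_model`, and the flip-based augmentation list are hypothetical stand-ins, and the optional `weights` argument merely hints at how a learned aggregation could replace a uniform average.

```python
import numpy as np

def tta_predict(model, x, augmentations, weights=None):
    """Average a model's class probabilities over augmented copies of one input.

    `model` maps an image array to a probability vector; each entry of
    `augmentations` maps an image to a transformed image. Uniform weights
    give standard TTA; a learned `weights` vector would give a weighted
    aggregation instead.
    """
    preds = np.stack([model(aug(x)) for aug in augmentations])
    if weights is None:
        weights = np.full(len(augmentations), 1.0 / len(augmentations))
    return np.average(preds, axis=0, weights=weights)

# Hypothetical toy "model": class probabilities depend only on the image mean.
def toy_model(img):
    p = 1.0 / (1.0 + np.exp(-img.mean()))
    return np.array([p, 1.0 - p])

# Identity plus horizontal/vertical flips, a common TTA policy for images.
augs = [lambda im: im, np.fliplr, np.flipud]
image = np.arange(9.0).reshape(3, 3) - 4.0  # zero-mean toy image
probs = tta_predict(toy_model, image, augs)
```

Because flips leave the toy image's mean unchanged, every augmented prediction agrees here; in practice the augmented predictions differ, which is exactly when the choice of aggregation weights matters.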