VITA: A Multi-Source Vicinal Transfer Augmentation Method for Out-of-Distribution Generalization

04/25/2022
by Minghui Chen, et al.

Invariance to diverse types of image corruption, such as noise, blurring, or colour shifts, is essential for building robust models in computer vision. Data augmentation has been the major approach to improving robustness against common corruptions. However, the samples produced by popular augmentation strategies deviate significantly from the underlying data manifold, so performance is skewed toward certain types of corruption. To address this issue, we propose a multi-source vicinal transfer augmentation (VITA) method for generating diverse on-manifold samples. VITA consists of two complementary parts: tangent transfer and the integration of multi-source vicinal samples. Tangent transfer creates initial augmented samples that improve corruption robustness. The integration step employs a generative model to characterize the underlying manifold built by the vicinal samples, facilitating the generation of on-manifold samples. VITA significantly outperforms current state-of-the-art augmentation methods, as demonstrated by extensive experiments on corruption benchmarks.
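For intuition, here is a minimal PyTorch sketch of the two-stage idea the abstract describes: generate vicinal samples around several augmented views of an image, then pass them through a generative model to pull them back toward the data manifold. All names (vicinal_samples, TinyAE) and the toy autoencoder are illustrative assumptions, not the authors' implementation of tangent transfer or their generative model.

```python
# Illustrative sketch only: the helper names and the toy autoencoder are
# assumptions for demonstration, not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F


def vicinal_samples(x, augmentations, n_per_aug=2, noise_std=0.05):
    """Build a multi-source vicinal batch: apply several corruption-style
    augmentations to x, then perturb each result to populate its vicinity."""
    samples = []
    for aug in augmentations:
        x_aug = aug(x)
        for _ in range(n_per_aug):
            samples.append(x_aug + noise_std * torch.randn_like(x_aug))
    return torch.stack(samples)


class TinyAE(nn.Module):
    """Stand-in generative model. VITA uses a learned generative model to
    characterize the vicinal manifold; a toy autoencoder is used here."""

    def __init__(self, dim=3 * 32 * 32, latent=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(dim, latent))
        self.dec = nn.Sequential(nn.Linear(latent, dim),
                                 nn.Unflatten(1, (3, 32, 32)))

    def forward(self, x):
        # Encoding and decoding pushes off-manifold samples back toward
        # the manifold the model has learned.
        return self.dec(self.enc(x))


if __name__ == "__main__":
    x = torch.rand(3, 32, 32)  # one image
    augs = [
        lambda t: t + 0.1 * torch.randn_like(t),                     # noise
        lambda t: t.roll(shifts=1, dims=0),                          # crude colour shift
        lambda t: F.avg_pool2d(t.unsqueeze(0), 3, 1, 1).squeeze(0),  # blur
    ]
    vicinal = vicinal_samples(x, augs)   # (6, 3, 32, 32) vicinal batch
    on_manifold = TinyAE()(vicinal)      # projected toward the manifold
    print(vicinal.shape, on_manifold.shape)
```

In practice the autoencoder would be replaced by a generative model trained on the vicinal samples themselves, so that decoding maps each augmented point to a nearby point on the learned data manifold.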
