Toward Learning Robust and Invariant Representations with Alignment Regularization and Data Augmentation

06/04/2022
by   Haohan Wang, et al.

Data augmentation has proven to be an effective technique for developing machine learning models that are robust to known classes of distributional shift (e.g., rotations of images), and alignment regularization is often used together with data augmentation to help the model learn representations invariant to the shifts used to augment the data. In this paper, motivated by the proliferation of alignment-regularization options, we evaluate the performance of several popular design choices along the dimensions of robustness and invariance, for which we introduce a new test procedure. Our synthetic experiments speak to the benefits of squared ℓ_2 norm regularization. Further, we formally analyze the behavior of alignment regularization, under assumptions we consider realistic, to complement our empirical study. Finally, we test the simple technique we identify (worst-case data augmentation with squared ℓ_2 norm alignment regularization) and show that its benefits exceed those of specially designed methods. We also release a software package, in both TensorFlow and PyTorch, that lets users apply the method with a couple of lines of code: https://github.com/jyanln/AlignReg.
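To make the core idea concrete, here is a minimal numpy sketch of the squared ℓ_2 alignment penalty the abstract describes: the distance between a model's representations of a clean example and its augmented counterpart, with a worst-case variant that keeps the farthest augmented copy. The function names, toy values, and the way the penalty is combined with a task loss are illustrative assumptions, not the AlignReg package's actual API.

```python
import numpy as np

def squared_l2_alignment(z_clean, z_aug):
    """Mean squared l2 distance between representations of clean
    examples (batch, dim) and their augmented counterparts."""
    diff = z_clean - z_aug
    return float(np.mean(np.sum(diff * diff, axis=-1)))

def worst_case_penalty(z_clean, z_candidates):
    """Worst-case variant: for each example, keep the augmented copy
    whose representation is farthest (in squared l2) from the clean one.
    z_candidates has shape (n_augs, batch, dim)."""
    dists = np.sum((z_candidates - z_clean) ** 2, axis=-1)  # (n_augs, batch)
    return float(np.mean(np.max(dists, axis=0)))

# Toy batch: 2 examples with 3-dimensional representations
z_clean = np.array([[1.0, 0.0, 2.0],
                    [0.5, 1.0, 0.0]])
z_aug = np.array([[1.0, 1.0, 2.0],
                  [0.5, 0.0, 0.0]])

penalty = squared_l2_alignment(z_clean, z_aug)  # 1.0 for this toy batch
task_loss = 0.3   # hypothetical task loss (e.g., cross-entropy)
lam = 1.0         # regularization strength (a hyperparameter)
total_loss = task_loss + lam * penalty
print(total_loss)
```

In training, `z_clean` and `z_aug` would be the network's intermediate representations (or logits) for the original and augmented views of the same inputs, and the regularized objective would be minimized jointly with the task loss.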


Related research

- Squared ℓ_2 Norm as Consistency Loss for Leveraging Augmented Data to Learn Robust and Invariant Representations (11/25/2020)
- Tied-Augment: Controlling Representation Similarity Improves Data Augmentation (05/22/2023)
- The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization (06/29/2020)
- Improving Out-of-Distribution Robustness via Selective Augmentation (01/02/2022)
- On the Benefits of Invariance in Neural Networks (05/01/2020)
- Illumination-Based Data Augmentation for Robust Background Subtraction (10/18/2019)
- Robustness and Adaptation to Hidden Factors of Variation (03/03/2022)
