On the Benefits of Invariance in Neural Networks

05/01/2020
by Clare Lyle, et al.

Many real-world data analysis problems exhibit invariant structure, and models that take advantage of this structure have shown impressive empirical performance, particularly in deep learning. While the literature contains a variety of methods for incorporating invariance into models, theoretical understanding is poor, and there is no principled way to assess when one method should be preferred over another. In this work, we analyze the benefits and limitations of two approaches widely used in deep learning in the presence of invariance: data augmentation and feature averaging. We prove that training with data augmentation leads to better estimates of risk and of its gradients, and we provide a PAC-Bayes generalization bound for models trained with data augmentation. We also show that, compared to data augmentation, feature averaging reduces generalization error when used with convex losses and tightens PAC-Bayes bounds. We provide empirical support for these theoretical results, including a demonstration of why generalization may not improve when training with data augmentation: the 'learned invariance' fails outside of the training distribution.
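To make the distinction between the two approaches concrete, here is a minimal PyTorch sketch; it is not code from the paper. It assumes a generic classifier `model`, an input batch `x` with labels `y`, and uses the four 90-degree rotations as an example invariance group; all of these names and choices are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the authors' code) contrasting data
# augmentation and feature averaging for a finite transformation group,
# here the four 90-degree planar rotations of an image batch (NCHW).
import torch
import torch.nn.functional as F

def orbit(x):
    # All four 90-degree rotations of the batch: the orbit of x under the group.
    return [torch.rot90(x, k, dims=(-2, -1)) for k in range(4)]

def augmented_loss(model, x, y):
    # Data augmentation: the model itself is unconstrained; the training
    # loss is averaged over transformed copies of each input.
    return torch.stack([F.cross_entropy(model(t), y) for t in orbit(x)]).mean()

def feature_averaged_logits(model, x):
    # Feature averaging: the model's outputs are averaged over the orbit,
    # so the resulting predictor is invariant by construction.
    return torch.stack([model(t) for t in orbit(x)]).mean(dim=0)
```

In practice, data augmentation is usually implemented by sampling one random transformation per example at each step, which gives an unbiased estimate of the orbit-averaged loss above, whereas feature averaging bakes the invariance into the predictor at both training and test time.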


Related research

07/25/2019  Invariance reduces Variance: Understanding Data Augmentation in Deep Learning and Beyond
03/07/2022  Regularising for invariance to data augmentation improves supervised learning
10/07/2022  In What Ways Are Deep Neural Networks Invariant and How Should We Measure This?
02/04/2022  Deep invariant networks with differentiable augmentation layers
09/19/2019  Data Augmentation Revisited: Rethinking the Distribution Gap between Clean and Augmented Data
02/18/2022  Quantifying the Effects of Data Augmentation
06/04/2022  Toward Learning Robust and Invariant Representations with Alignment Regularization and Data Augmentation
