In What Ways Are Deep Neural Networks Invariant and How Should We Measure This?

10/07/2022
by Henry Kvinge, et al.

It is often said that a deep learning model is "invariant" to some specific type of transformation. However, what is meant by this statement strongly depends on the context in which it is made. In this paper we explore the nature of invariance and equivariance of deep learning models with the goal of better understanding the ways in which they actually capture these concepts on a formal level. We introduce a family of invariance and equivariance metrics that allows us to quantify these properties in a way that disentangles them from other metrics such as loss or accuracy. We use our metrics to better understand the two most popular methods used to build invariance into networks: data augmentation and equivariant layers. We draw a range of conclusions about invariance and equivariance in deep learning models, from whether initializing a model with pretrained weights affects a trained model's invariance to the extent to which invariance learned via training can generalize to out-of-distribution data.
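The paper's metrics are defined formally, but the underlying idea of quantifying invariance separately from accuracy is easy to sketch. Below is a minimal, illustrative proxy, not the authors' actual metric: the function name, the choice of rotation as the transformation, and the cosine-similarity scoring are all assumptions made for this example. It measures how much a model's features move when the input is transformed, regardless of whether the model's predictions are correct.

```python
import torch
import torchvision.transforms.functional as TF


def invariance_score(model: torch.nn.Module,
                     images: torch.Tensor,
                     angle: float = 30.0) -> float:
    """Illustrative invariance proxy (not the paper's metric): mean cosine
    similarity between features of the original images and features of
    rotated copies. A score of 1.0 means the representation is unchanged
    by the rotation, i.e., the model is invariant to it on this batch."""
    model.eval()
    with torch.no_grad():
        feats = model(images)                        # features of originals
        feats_t = model(TF.rotate(images, angle))    # features of rotated inputs
    return torch.nn.functional.cosine_similarity(
        feats, feats_t, dim=1).mean().item()
```

Because such a score depends only on how the representation moves under the transformation, two models with identical accuracy can receive very different invariance scores, which is the sense in which a metric of this kind disentangles invariance from loss or accuracy.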


Related research

- On genuine invariance learning without weight-tying (08/07/2023): In this paper, we investigate properties and limitations of invariance l...
- On the Benefits of Invariance in Neural Networks (05/01/2020): Many real world data analysis problems exhibit invariant structure, and ...
- Deep invariant networks with differentiable augmentation layers (02/04/2022): Designing learning systems which are invariant to certain data transform...
- ML4ML: Automated Invariance Testing for Machine Learning Models (09/27/2021): In machine learning workflows, determining invariance qualities of a mod...
- Riesz networks: scale invariant neural networks in a single forward pass (05/08/2023): Scale invariance of an algorithm refers to its ability to treat objects ...
- Evaluating the Robustness of Interpretability Methods through Explanation Invariance and Equivariance (04/13/2023): Interpretability methods are valuable only if their explanations faithfu...
- Squared ℓ_2 Norm as Consistency Loss for Leveraging Augmented Data to Learn Robust and Invariant Representations (11/25/2020): Data augmentation is one of the most popular techniques for improving th...
