Provably Strict Generalisation Benefit for Equivariant Models

02/20/2021
by Bryn Elesedy, et al.

It is widely believed that engineering a model to be invariant/equivariant improves generalisation. Despite the growing popularity of this approach, a precise characterisation of the generalisation benefit is lacking. By considering the simplest case of linear models, this paper provides the first provably non-zero improvement in generalisation for invariant/equivariant models when the target distribution is invariant/equivariant with respect to a compact group. Moreover, our work reveals an interesting relationship between generalisation, the number of training examples and properties of the group action. Our results rest on an observation of the structure of function spaces under averaging operators which, along with its consequences for feature averaging, may be of independent interest.
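The core construction behind results like this is the group-averaging (symmetrisation) operator: averaging a predictor's outputs over a compact group's action yields an exactly invariant function. A minimal sketch of this idea, using an arbitrary linear model and the cyclic group acting on R^4 by coordinate rotation (both chosen here purely for illustration, not the paper's specific setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# An illustrative linear model f(x) = w . x on R^4, and the cyclic
# group C_4 acting by coordinate rotation (np.roll).
d = 4
w = rng.standard_normal(d)

def f(x):
    return w @ x

def averaged_f(x):
    # Orbit-average: (O f)(x) = (1/|G|) * sum_g f(g x).
    # For a finite group this is an exact average; for a general
    # compact group it would be an integral over the Haar measure.
    return np.mean([f(np.roll(x, g)) for g in range(d)])

x = rng.standard_normal(d)
gx = np.roll(x, 1)  # a group-transformed copy of x

# The averaged predictor is exactly invariant: (O f)(g x) = (O f)(x),
# because acting by g merely permutes the terms of the sum.
assert np.isclose(averaged_f(x), averaged_f(gx))
```

The invariance holds because composing the group action with the averaging just reorders the summands; this is the same mechanism that underlies feature averaging for invariant models.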

