Provably Strict Generalisation Benefit for Equivariant Models

02/20/2021
by Bryn Elesedy et al.

It is widely believed that engineering a model to be invariant/equivariant improves generalisation. Despite the growing popularity of this approach, a precise characterisation of the generalisation benefit is lacking. By considering the simplest case of linear models, this paper provides the first provably non-zero improvement in generalisation for invariant/equivariant models when the target distribution is invariant/equivariant with respect to a compact group. Moreover, our work reveals an interesting relationship between generalisation, the number of training examples and properties of the group action. Our results rest on an observation of the structure of function spaces under averaging operators which, along with its consequences for feature averaging, may be of independent interest.
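The averaging operator the abstract alludes to is easy to illustrate. Below is a minimal Python sketch, assuming a finite cyclic group as a stand-in for the paper's compact group; the names (average, group) and the permutation representation are illustrative choices, not taken from the paper. For a linear model, averaging the function over the group is the same as projecting its weights onto the invariant subspace.

import numpy as np

# Finite stand-in for a compact group: the cyclic group C_n acting on R^n
# by coordinate shifts, represented by permutation matrices.
n = 4
P = np.roll(np.eye(n), 1, axis=0)                      # one-step cyclic shift
group = [np.linalg.matrix_power(P, k) for k in range(n)]

def average(f, x):
    """Averaging operator: (Qf)(x) = mean over g in G of f(g @ x)."""
    return np.mean([f(g @ x) for g in group])

# An arbitrary linear model f(x) = <w, x>.
w = np.array([1.0, -2.0, 0.5, 3.0])
f = lambda x: w @ x

x = np.random.default_rng(0).normal(size=n)

# For linear models, averaging the function equals averaging the weights:
# mean_g <w, g x> = <mean_g g^T w, x>, i.e. w is projected onto the
# invariant subspace (here, the span of the all-ones vector).
w_bar = np.mean([g.T @ w for g in group], axis=0)
assert np.isclose(average(f, x), w_bar @ x)

# The averaged model is invariant: it is constant on group orbits.
assert np.isclose(average(f, group[1] @ x), average(f, x))

Loosely, the component of w orthogonal to the invariant subspace cannot help fit an invariant target, which is the kind of structural observation the paper builds its generalisation gap on.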

Related research:

06/04/2021 · Provably Strict Generalisation Benefit for Invariance in Kernel Methods
It is a commonly held belief that enforcing invariance improves generali...

09/29/2022 · Equivariant maps from invariant functions
In equivariant machine learning the idea is to restrict the learning to ...

08/26/1999 · A Differential Invariant for Zooming
This paper presents an invariant under scaling and linear brightness cha...

04/07/2020 · Model selection in the space of Gaussian models invariant by symmetry
We consider multivariate centred Gaussian models for the random variable...

04/02/2019 · Optimal designs for model averaging in non-nested models
In this paper we construct optimal designs for frequentist model averagi...

10/27/2019 · On the asymptotic distribution of model averaging based on information criterion
Smoothed AIC (S-AIC) and Smoothed BIC (S-BIC) are very widely used in mo...