
Provably Strict Generalisation Benefit for Equivariant Models

by Bryn Elesedy et al.

It is widely believed that engineering a model to be invariant/equivariant improves generalisation. Despite the growing popularity of this approach, a precise characterisation of the generalisation benefit is lacking. By considering the simplest case of linear models, this paper provides the first provably non-zero improvement in generalisation for invariant/equivariant models when the target distribution is invariant/equivariant with respect to a compact group. Moreover, our work reveals an interesting relationship between generalisation, the number of training examples and properties of the group action. Our results rest on an observation of the structure of function spaces under averaging operators which, along with its consequences for feature averaging, may be of independent interest.
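The abstract's key structural observation concerns averaging operators: averaging a function over a compact group's action projects it onto the subspace of invariant functions, and every function splits into an invariant part plus an orthogonal remainder. As a minimal sketch (not taken from the paper), the snippet below applies the averaging (Reynolds) operator for the two-element sign-flip group acting on a linear model, and checks the resulting decomposition; the function names `f` and `averaged_f` are illustrative, not the paper's notation.

```python
import numpy as np

# Sketch, assuming G = {I, -I} acting on R^n by sign flip: g.x = g * x.
# The averaging (Reynolds) operator is (Of)(x) = (1/|G|) * sum_g f(g.x).
# It projects onto G-invariant functions; the remainder f - Of is the
# orthogonal (here: odd/anti-symmetric) component.

rng = np.random.default_rng(0)
w = rng.normal(size=5)

def f(x):
    """A linear model f(x) = <w, x>."""
    return w @ x

def averaged_f(x):
    """Reynolds operator for the sign-flip group: average f over {x, -x}."""
    return 0.5 * (f(x) + f(-x))

x = rng.normal(size=5)
f_bar = averaged_f(x)    # invariant component of f at x
f_perp = f(x) - f_bar    # remainder (odd part)

# A linear map is odd, so its invariant part under sign flip vanishes
# and the whole function lies in the orthogonal complement.
assert np.isclose(f_bar, 0.0)
assert np.isclose(f_perp, f(x))
# The averaged function is genuinely invariant: equal values at x and -x.
assert np.isclose(averaged_f(x), averaged_f(-x))
```

For this group and function class the invariant subspace of linear maps is trivial, which illustrates why the size of the invariant part, and hence the generalisation gap between constrained and unconstrained models, depends on the interplay between the group action and the hypothesis class.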
