Learning with invariances in random features and kernel models

02/25/2021
by Song Mei, et al.

A number of machine learning tasks entail a high degree of invariance: the data distribution does not change if we act on the data with a certain group of transformations. For instance, labels of images are invariant under translations of the images. Certain neural network architectures – for instance, convolutional networks – are believed to owe their success to the fact that they exploit such invariance properties. With the objective of quantifying the gain achieved by invariant architectures, we introduce two classes of models: invariant random features and invariant kernel methods. The latter includes, as a special case, the neural tangent kernel for convolutional networks with global average pooling. We consider uniform covariate distributions on the sphere and the hypercube and a general invariant target function. We characterize the test error of invariant methods in a high-dimensional regime in which the sample size and the number of hidden units scale as polynomials in the dimension, for a class of groups that we call 'degeneracy α', with α ≤ 1. We show that exploiting invariance in the architecture saves a factor of d^α (where d is the dimension) in sample size and number of hidden units needed to achieve the same test error as unstructured architectures. Finally, we show that output symmetrization of an unstructured kernel estimator does not give a significant statistical improvement; on the other hand, data augmentation with an unstructured kernel estimator is equivalent to an invariant kernel estimator and enjoys the same improvement in statistical efficiency.
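
The sketch below is a minimal illustration (not the authors' code; the function names and the quadratic dot-product kernel are our own choices) of the group-averaging construction behind an invariant kernel, using cyclic translations as the group, together with the orbit-based data augmentation that the abstract describes as statistically equivalent to the invariant kernel estimator.

```python
import numpy as np

def base_kernel(x, z):
    """A simple dot-product kernel; any rotation-invariant base kernel
    (e.g. an NTK) could be substituted here. Illustrative choice only."""
    d = x.shape[0]
    return (1.0 + x @ z / d) ** 2

def invariant_kernel(x, z):
    """Group-averaged kernel K_inv(x, z) = (1/|G|) * sum_g K(x, g.z),
    with G the group of cyclic translations acting by coordinate shifts."""
    d = x.shape[0]
    return np.mean([base_kernel(x, np.roll(z, s)) for s in range(d)])

def augment(X):
    """Data augmentation: replace each row of X by its full orbit under G.
    Kernel regression on the augmented data with the base kernel matches
    regression with the invariant kernel for an invariant target."""
    d = X.shape[1]
    return np.vstack([np.roll(X, s, axis=1) for s in range(d)])

# Quick invariance check: K_inv(x, g.z) == K_inv(x, z) for any shift g.
rng = np.random.default_rng(0)
x, z = rng.standard_normal(8), rng.standard_normal(8)
assert np.isclose(invariant_kernel(x, z), invariant_kernel(x, np.roll(z, 3)))
```

Because the dot-product base kernel satisfies K(g·x, g·z) = K(x, z) for simultaneous shifts, averaging over the orbit of one argument already yields a kernel invariant to shifts of either argument, which is the property the check above verifies.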


