HyperInvariances: Amortizing Invariance Learning

07/17/2022
by Ruchika Chavhan, et al.

Providing invariances in a given learning task conveys a key inductive bias that can lead to sample-efficient learning and good generalisation, if correctly specified. However, the ideal invariances for many problems of interest are often not known, which has led both to a body of engineering lore and to frameworks for invariance learning. Unfortunately, invariance learning is expensive and data-intensive for popular neural architectures. We introduce the notion of amortizing invariance learning. In an up-front learning phase, we use a hyper-network to learn a low-dimensional manifold of feature extractors spanning invariance to different transformations. Then, for any problem of interest, both model and invariance learning become rapid and efficient: we fit only a low-dimensional invariance descriptor and an output head. Empirically, this framework can identify appropriate invariances in different downstream tasks and leads to comparable or better test performance than conventional approaches. Our HyperInvariance framework is also theoretically appealing, as it enables generalisation bounds that provide an interesting new operating point in the trade-off between model fit and complexity.
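
To make the amortization idea concrete, here is a minimal PyTorch sketch of the two-phase setup the abstract describes. All names and shapes (HyperNet, DESC_DIM, the single generated layer, the sigmoid squashing of the descriptor) are illustrative assumptions, not the authors' implementation: a hypernetwork maps a low-dimensional invariance descriptor to feature-extractor weights; downstream, the hypernetwork is frozen and only the descriptor and a linear head are optimized.

```python
# Hypothetical sketch of amortized invariance learning; layer sizes and
# descriptor semantics are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_IN, FEAT_OUT, DESC_DIM = 784, 128, 3  # descriptor: e.g. strengths of 3 invariances

class HyperNet(nn.Module):
    """Maps a low-dimensional invariance descriptor to feature-extractor weights."""
    def __init__(self):
        super().__init__()
        self.w_gen = nn.Linear(DESC_DIM, FEAT_IN * FEAT_OUT)
        self.b_gen = nn.Linear(DESC_DIM, FEAT_OUT)

    def forward(self, x, desc):
        # Generate the feature extractor's weights from the descriptor,
        # then apply it to the input batch.
        W = self.w_gen(desc).view(FEAT_OUT, FEAT_IN)
        b = self.b_gen(desc)
        return F.relu(F.linear(x, W, b))

hyper = HyperNet()  # assumed pretrained in the up-front phase, then frozen
for p in hyper.parameters():
    p.requires_grad_(False)

# Downstream: learn only the descriptor and a linear head (few parameters).
desc = nn.Parameter(torch.zeros(DESC_DIM))
head = nn.Linear(FEAT_OUT, 10)
opt = torch.optim.Adam([desc, *head.parameters()], lr=1e-3)

x, y = torch.randn(32, FEAT_IN), torch.randint(0, 10, (32,))  # dummy batch
for _ in range(100):
    opt.zero_grad()
    feats = hyper(x, torch.sigmoid(desc))  # sigmoid keeps the descriptor bounded
    loss = F.cross_entropy(head(feats), y)
    loss.backward()  # gradients flow through the frozen hypernet to desc
    opt.step()
```

Because only DESC_DIM + head parameters are trained downstream, fitting is cheap, and the learned descriptor itself indicates which invariances the task favours.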


