Self-supervised learning with rotation-invariant kernels

07/28/2022
by Léon Zheng, et al.

A major paradigm for learning image representations in a self-supervised manner is to learn a model that is invariant to some predefined image transformations (cropping, blurring, color jittering, etc.), while regularizing the embedding distribution to avoid learning a degenerate solution. Our first contribution is to propose a general kernel framework for designing a generic regularization loss that encourages the embedding distribution to be close to the uniform distribution on the hypersphere, with respect to the maximum mean discrepancy pseudometric. Our framework uses rotation-invariant kernels defined on the hypersphere, also known as dot-product kernels. Our second contribution is to show that this flexible kernel approach encompasses several existing self-supervised learning methods, including uniformity-based and information-maximization methods. Finally, by empirically exploring several kernel choices, our experiments demonstrate that a truncated rotation-invariant kernel provides results competitive with state-of-the-art methods, and we show practical situations where our method benefits from the kernel trick to reduce computational complexity.
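To make the regularization idea concrete, the following sketch estimates the squared maximum mean discrepancy between a batch of unit-norm embeddings and samples drawn uniformly from the hypersphere, using a simple dot-product (rotation-invariant) kernel. This is an illustration only: the polynomial kernel `(x·y)^degree`, the function names, and the Monte Carlo comparison against uniform samples are our assumptions, not the paper's exact kernel choice or estimator.

```python
import numpy as np

def dot_product_kernel(X, Y, degree=3):
    # Rotation-invariant kernel on the sphere: k(x, y) = (x . y)^degree.
    # Any kernel that depends only on the inner product is rotation-invariant.
    return (X @ Y.T) ** degree

def mmd_uniformity_loss(Z, n_uniform=1024, degree=3, seed=0):
    """Estimate of the squared MMD between embeddings Z (rows assumed to lie
    on the unit sphere) and the uniform distribution on that sphere.

    Illustrative sketch: uniform samples are drawn by normalizing Gaussian
    vectors, and MMD^2 = E[k(z,z')] + E[k(u,u')] - 2 E[k(z,u)].
    """
    rng = np.random.default_rng(seed)
    d = Z.shape[1]
    U = rng.standard_normal((n_uniform, d))
    U /= np.linalg.norm(U, axis=1, keepdims=True)  # uniform on the sphere
    Kzz = dot_product_kernel(Z, Z, degree)
    Kuu = dot_product_kernel(U, U, degree)
    Kzu = dot_product_kernel(Z, U, degree)
    return Kzz.mean() + Kuu.mean() - 2.0 * Kzu.mean()
```

A collapsed (degenerate) embedding, where all points coincide, yields a much larger loss than embeddings spread roughly uniformly over the sphere, which is exactly the behavior a uniformity regularizer should penalize.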


