
Learning Rich Nearest Neighbor Representations from Self-supervised Ensembles

by Bram Wallace, et al.
Cornell University

Pretraining convolutional neural networks via self-supervision, and applying them in transfer learning, is an incredibly fast-growing field that is rapidly and iteratively improving performance across practically all image domains. Meanwhile, model ensembling is one of the most universally applicable techniques in supervised learning literature and practice, offering a simple solution to reliably improve performance. However, how to optimally combine self-supervised models to maximize representation quality has largely remained unaddressed. In this work, we provide a framework for self-supervised model ensembling via a novel method of learning representations directly through gradient descent at inference time. This technique improves representation quality, as measured by k-nearest neighbors, both on the in-domain dataset and in the transfer setting, with models transferable from the former setting to the latter. Additionally, this direct learning of features through backpropagation improves representations from even a single model, echoing the improvements found in self-distillation.
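The core idea of "learning representations directly through gradient descent at inference time" can be illustrated with a toy sketch. The paper's exact objective, architectures, and optimizer are not given in the abstract, so everything below is an assumption: frozen encoders are stood in for by random linear maps, and a per-example representation `z` is fit by plain gradient descent so that a linear head per ensemble member maps `z` close to that member's features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for two frozen self-supervised encoders (random linear maps).
# In the paper these would be pretrained CNNs; that detail is assumed here.
D_IN, D_FEAT, D_REP = 32, 16, 8
encoders = [rng.normal(size=(D_FEAT, D_IN)) for _ in range(2)]
# Hypothetical per-encoder heads mapping the shared representation to
# each encoder's feature space.
heads = [rng.normal(size=(D_FEAT, D_REP)) / np.sqrt(D_REP) for _ in range(2)]

def learn_representation(x, steps=300, lr=0.02):
    """Fit a single vector z at inference time by gradient descent so that
    each head maps z close to the corresponding encoder's features."""
    feats = [E @ x for E in encoders]          # frozen ensemble features
    z = np.zeros(D_REP)
    for _ in range(steps):
        grad = np.zeros(D_REP)
        for P, f in zip(heads, feats):
            grad += 2.0 * P.T @ (P @ z - f)    # d/dz of ||P z - f||^2
        z -= lr * grad / len(heads)
    return z

x = rng.normal(size=D_IN)
z = learn_representation(x)

# Descent should shrink the total feature-matching error relative to the
# zero initialization; the resulting z would then be used for kNN retrieval.
err0 = sum(np.linalg.norm(P @ np.zeros(D_REP) - E @ x)
           for P, E in zip(heads, encoders))
err = sum(np.linalg.norm(P @ z - E @ x)
          for P, E in zip(heads, encoders))
print(err < err0)
```

In this sketch the ensemble is combined not by concatenating or averaging features but by optimizing one shared vector against all members' outputs, which also applies unchanged to a single model, mirroring the abstract's single-model result.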

