ClusterFit: Improving Generalization of Visual Representations

12/06/2019
by Xueting Yan, et al.

Pre-training convolutional neural networks with weakly-supervised and self-supervised strategies is becoming increasingly popular for several computer vision tasks. However, due to the lack of strong discriminative signals, these learned representations may overfit to the pre-training objective (e.g., hashtag prediction) and not generalize well to downstream tasks. In this work, we present a simple strategy, ClusterFit (CF), to improve the robustness of the visual representations learned during pre-training. Given a dataset, we (a) cluster its features extracted from a pre-trained network using k-means and (b) re-train a new network from scratch on this dataset using the cluster assignments as pseudo-labels. We empirically show that clustering helps remove pre-training-task-specific information from the extracted features, thereby minimizing overfitting to that task. Our approach is extensible to different pre-training frameworks (weakly- and self-supervised), modalities (images and videos), and pre-training tasks (object and action classification). Through extensive transfer-learning experiments on 11 target datasets of varied vocabularies and granularities, we show that ClusterFit significantly improves representation quality compared to state-of-the-art large-scale (millions to billions of examples) weakly-supervised image and video models and self-supervised image models.
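The two-step recipe is compact enough to sketch directly. The snippet below is a minimal illustration, not the paper's implementation: the PyTorch backbone, scikit-learn k-means, and all hyperparameters (k=1000, batch size 256, the SGD settings) are assumptions chosen for clarity, and the function and variable names are hypothetical.

```python
# Minimal sketch of the ClusterFit recipe. Assumptions (not from the paper):
# a PyTorch backbone, scikit-learn k-means, illustrative hyperparameters.
import torch
from sklearn.cluster import KMeans


def cluster_fit(pretrained_net, dataset, new_net, k=1000, epochs=90):
    # Fixed-order loader (shuffle=False) so feature rows stay aligned
    # with their pseudo-labels during the re-training pass below.
    loader = torch.utils.data.DataLoader(dataset, batch_size=256)

    # Step (a): extract features with the pre-trained network and
    # cluster them with k-means; cluster IDs become pseudo-labels.
    pretrained_net.eval()
    with torch.no_grad():
        feats = torch.cat([pretrained_net(x) for x, _ in loader])
    pseudo_labels = torch.as_tensor(
        KMeans(n_clusters=k).fit_predict(feats.numpy())
    )

    # Step (b): train a new network from scratch, treating the cluster
    # assignments as ordinary classification targets.
    opt = torch.optim.SGD(new_net.parameters(), lr=0.1, momentum=0.9)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        seen = 0
        for x, _ in loader:
            y = pseudo_labels[seen:seen + x.size(0)]
            seen += x.size(0)
            loss = loss_fn(new_net(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return new_net  # its features are what transfer to downstream tasks
```

Intuitively, quantizing the feature space into k clusters discards fine-grained, pre-training-specific directions while preserving coarse semantic structure, which is the effect the abstract credits for the improved transfer.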
