Variance-Covariance Regularization Improves Representation Learning

06/23/2023
by Jiachen Zhu, et al.

Transfer learning has emerged as a key approach in machine learning, enabling knowledge learned in one domain to improve performance on downstream tasks. Because information about these downstream tasks is often limited, a strong transfer learning approach requires the model to capture a diverse range of features during pretraining. However, recent research suggests that, without sufficient regularization, the network tends to concentrate on the features that most directly reduce the pretraining loss. This tendency can result in inadequate feature learning and impaired generalization on target tasks. To address this issue, we propose Variance-Covariance Regularization (VCR), a regularization technique that fosters diversity in the learned network features. Drawing inspiration from recent advances in self-supervised learning, our approach promotes learned representations with high variance and minimal covariance, preventing the network from focusing solely on loss-reducing features. We empirically validate the efficacy of our method through comprehensive experiments and in-depth analytical studies of the learned representations. In addition, we develop an efficient implementation strategy that keeps the computational overhead of our method minimal. Our results indicate that VCR is a powerful and efficient method for enhancing transfer learning performance in both supervised and self-supervised settings, opening new possibilities for future research in this domain.
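To make the "high variance, minimal covariance" objective concrete, the sketch below implements a VICReg-style variance-covariance penalty on a batch of representations. This is a minimal NumPy illustration, not the authors' implementation: the function name `vc_regularizer`, the hinge threshold `gamma`, and the normalization choices are illustrative assumptions, drawn from the related VICReg formulation rather than from this paper.

```python
import numpy as np

def vc_regularizer(z, gamma=1.0, eps=1e-4):
    """Variance-covariance penalty on a batch of representations.

    z: (N, D) array of N representations of dimension D.
    Returns (variance_term, covariance_term). The variance term is a
    hinge that pushes each dimension's standard deviation above gamma;
    the covariance term penalizes the squared off-diagonal entries of
    the batch covariance matrix, discouraging redundant dimensions.
    Names and scaling here are illustrative, following VICReg.
    """
    n, d = z.shape
    z = z - z.mean(axis=0)                        # center each dimension
    std = np.sqrt(z.var(axis=0) + eps)            # per-dimension std
    variance_term = np.mean(np.maximum(0.0, gamma - std))
    cov = (z.T @ z) / (n - 1)                     # (D, D) covariance
    off_diag = cov - np.diag(np.diag(cov))        # zero the diagonal
    covariance_term = np.sum(off_diag ** 2) / d
    return variance_term, covariance_term
```

On well-spread, decorrelated representations both terms are near zero, while collapsed representations (e.g. a constant vector) incur a large variance penalty, which is the failure mode the regularizer is designed to prevent.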
