Improving Transferability of Representations via Augmentation-Aware Self-Supervision

11/18/2021
by Hankook Lee, et al.

Recent unsupervised representation learning methods have been shown to be effective in a range of vision tasks by learning representations invariant to data augmentations such as random cropping and color jittering. However, such invariance can be harmful to downstream tasks that rely on the characteristics altered by those augmentations, e.g., tasks that are location- or color-sensitive. This is not an issue only for unsupervised learning; we found that it also occurs in supervised learning, because the model likewise learns to predict the same label for all augmented samples of an instance. To avoid such failures and obtain more generalizable representations, we propose optimizing an auxiliary self-supervised loss, coined AugSelf, that learns the difference between the augmentation parameters (e.g., cropping positions, color adjustment intensities) of two randomly augmented samples. Our intuition is that AugSelf encourages the model to preserve augmentation-aware information in the learned representations, which can benefit their transferability. Furthermore, AugSelf can easily be incorporated into recent state-of-the-art representation learning methods at a negligible additional training cost. Extensive experiments demonstrate that our simple idea consistently improves the transferability of representations learned by supervised and unsupervised methods in various transfer learning scenarios. The code is available at https://github.com/hankook/AugSelf.
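The idea can be sketched concretely: a small prediction head takes the features of the two augmented views and regresses the difference of their augmentation parameters, and this regression loss is added to the base self-supervised objective. Below is a minimal PyTorch sketch of that auxiliary loss; the class and function names (AugSelfHead, augself_loss) and the MSE/weighting choices are illustrative assumptions rather than the authors' exact API, so refer to https://github.com/hankook/AugSelf for the official implementation.

```python
# Minimal sketch of an AugSelf-style auxiliary loss (illustrative, not the official API).
import torch
import torch.nn as nn
import torch.nn.functional as F


class AugSelfHead(nn.Module):
    """MLP that predicts augmentation-parameter differences from a pair of features."""

    def __init__(self, feat_dim: int, aug_dim: int, hidden_dim: int = 1024):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, aug_dim),
        )

    def forward(self, f1: torch.Tensor, f2: torch.Tensor) -> torch.Tensor:
        # Concatenate the two views' features and predict w1 - w2.
        return self.mlp(torch.cat([f1, f2], dim=1))


def augself_loss(head: AugSelfHead,
                 f1: torch.Tensor, f2: torch.Tensor,
                 w1: torch.Tensor, w2: torch.Tensor) -> torch.Tensor:
    """Regress the difference of augmentation parameters (e.g., crop boxes,
    color-jitter intensities) between the two augmented views."""
    return F.mse_loss(head(f1, f2), w1 - w2)


# Usage (assumed weighting): add the auxiliary term to the base SSL objective.
# total_loss = ssl_loss + lam * augself_loss(head, f1, f2, w1, w2)
```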

Related research

- ExAgt: Expert-guided Augmentation for Representation Learning of Traffic Scenarios (07/18/2022)
- InsCLR: Improving Instance Retrieval with Self-Supervision (12/02/2021)
- EquiMod: An Equivariance Module to Improve Self-Supervised Learning (11/02/2022)
- Why Do Self-Supervised Models Transfer? Investigating the Impact of Invariance on Downstream Tasks (11/22/2021)
- Self-Supervised Learning via Maximum Entropy Coding (10/20/2022)
- Multi-View Graph Representation Learning Beyond Homophily (04/15/2023)
- Self-supervised Learning of Adversarial Example: Towards Good Generalizations for Deepfake Detection (03/23/2022)
