MixMask: Revisiting Masked Siamese Self-supervised Learning in Asymmetric Distance

10/20/2022
by   Kirill Vishniakov, et al.

Recent advances in self-supervised learning integrate Masked Modeling and Siamese Networks into a single framework to reap the advantages of both techniques. However, the erasing-based masking scheme in masked image modeling was not originally designed for siamese networks. Existing approaches simply inherit the default loss design from earlier siamese networks, ignoring the information loss and distance change that the masking operation introduces into the framework. In this paper, we propose a filling-based masking strategy called MixMask to prevent the information loss caused by the randomly erased regions of an image in the vanilla masking method. We further introduce a dynamic loss function with a soft distance to adapt the integrated architecture and avoid mismatches between the transformed input and the objective in Masked Siamese ConvNets (MSCN). The dynamic loss distance is calculated according to the proposed mix-masking scheme. Extensive experiments are conducted on CIFAR-100, Tiny-ImageNet and ImageNet-1K. The results demonstrate that the proposed framework achieves better accuracy on linear probing, semi-supervised and supervised finetuning, outperforming the state-of-the-art MSCN by a significant margin. We also show its superiority on the downstream tasks of object detection and segmentation. Our source code is available at https://github.com/LightnessOfBeing/MixMask.
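The core idea of filling-based masking can be sketched in a few lines: instead of zeroing out the masked patches of an image (which discards information), the erased regions are filled with the corresponding patches of a second image, and the resulting mix ratio can feed the soft-distance loss weighting. The sketch below is illustrative only and assumes NumPy arrays and a coarse random block mask; the function name `mix_mask` and its parameters are hypothetical, not the authors' exact implementation.

```python
import numpy as np

def mix_mask(img_a, img_b, grid=4, keep_ratio=0.5, rng=None):
    """Illustrative filling-based mix-masking (hypothetical sketch).

    Patches of img_a that an erasing-based scheme would zero out are
    instead filled with the corresponding patches of img_b. Returns the
    mixed image and the mix ratio lam (fraction kept from img_a), which
    a dynamic soft-distance loss could use to weight the objective.
    Assumes the image height/width are divisible by `grid`.
    """
    if rng is None:
        rng = np.random.default_rng()
    h, w = img_a.shape[:2]
    # Random binary mask on a coarse grid (True = keep img_a patch),
    # then upsampled to pixel resolution by repetition.
    coarse = rng.random((grid, grid)) < keep_ratio
    mask = np.repeat(np.repeat(coarse, h // grid, axis=0), w // grid, axis=1)
    if img_a.ndim == 3:
        mask = mask[..., None]  # broadcast over channels
    mixed = np.where(mask, img_a, img_b)
    lam = mask.mean()  # mix ratio for the soft-distance loss weighting
    return mixed, lam
```

In a siamese setup, `lam` would soften the target similarity between the mixed view and each source image, rather than treating the mixed input as fully matching either one.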

research
04/14/2022

Masked Siamese Networks for Label-Efficient Learning

We propose Masked Siamese Networks (MSN), a self-supervised learning fra...
research
03/13/2023

Three Guidelines You Should Know for Universally Slimmable Self-Supervised Learning

We propose universally slimmable self-supervised learning (dubbed as US3...
research
04/01/2022

On the Importance of Asymmetry for Siamese Representation Learning

Many recent self-supervised frameworks for visual representation learnin...
research
10/07/2022

An Investigation into Whitening Loss for Self-supervised Learning

A desirable objective in self-supervised learning (SSL) is to avoid feat...
research
04/04/2019

Siamese Encoding and Alignment by Multiscale Learning with Self-Supervision

We propose a method of aligning a source image to a target image, where ...
research
11/25/2022

Ladder Siamese Network: a Method and Insights for Multi-level Self-Supervised Learning

Siamese-network-based self-supervised learning (SSL) suffers from slow c...
research
05/26/2023

Modulate Your Spectrum in Self-Supervised Learning

Whitening loss provides theoretical guarantee in avoiding feature collap...
