GraVIS: Grouping Augmented Views from Independent Sources for Dermatology Analysis

by Hong-Yu Zhou, et al.

Self-supervised representation learning has been extremely successful in medical image analysis, as it requires no human annotations to provide transferable representations for downstream tasks. Recent self-supervised learning methods are dominated by noise-contrastive estimation (NCE, also known as contrastive learning), which aims to learn invariant visual representations by contrasting one homogeneous image pair against a large number of heterogeneous image pairs in each training step. Nonetheless, NCE-based approaches still suffer from one major problem: a single homogeneous pair is not enough to extract robust and invariant semantic information. Inspired by the archetypal triplet loss, we propose GraVIS, which is specifically optimized for learning self-supervised features from dermatology images, to group homogeneous dermatology images while separating heterogeneous ones. In addition, a hardness-aware attention mechanism is introduced to emphasize homogeneous image views with similar appearance over dissimilar ones. GraVIS significantly outperforms its transfer learning and self-supervised learning counterparts in both lesion segmentation and disease classification tasks, sometimes by 5 percentage points under extremely limited supervision. More importantly, when equipped with the pre-trained weights provided by GraVIS, a single model achieves better results than the winners of the well-known ISIC 2017 challenge, who relied heavily on ensemble strategies.
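To make the idea concrete, below is a minimal sketch (not the authors' implementation) of a triplet-style grouping loss over multiple augmented views, with a softmax weighting standing in for the hardness-aware attention: positives that look more similar to the anchor receive larger weights, while the hardest negative sets the margin term. The function name `gravis_loss`, the softmax form of the attention, and the margin value are all illustrative assumptions.

```python
import numpy as np

def gravis_loss(anchor, positives, negatives, margin=0.5):
    """Illustrative sketch of a GraVIS-style grouping loss (assumed form).

    anchor:    (d,)   L2-normalised embedding of one augmented view.
    positives: (P, d) embeddings of the other views of the same image.
    negatives: (N, d) embeddings of views from different images.
    """
    pos_sim = positives @ anchor  # cosine similarities (unit vectors)
    neg_sim = negatives @ anchor
    # Hardness-aware attention (assumed softmax form): positives with
    # similar appearance to the anchor dominate the grouping term.
    w = np.exp(pos_sim) / np.exp(pos_sim).sum()
    pos_dist = float((w * (1.0 - pos_sim)).sum())  # weighted positive distance
    neg_dist = float((1.0 - neg_sim).min())        # hardest-negative distance
    # Triplet-style hinge: pull the view group together, push negatives away.
    return max(0.0, pos_dist - neg_dist + margin)
```

With perfectly aligned positives and an orthogonal negative the hinge is inactive and the loss is zero; as the homogeneous views drift apart, the weighted positive distance grows and the loss turns on.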




