Joint Self-Supervised Image-Volume Representation Learning with Intra-Inter Contrastive Clustering

12/04/2022
by Duy M. H. Nguyen et al.

Collecting large-scale medical datasets with fully annotated samples for training deep networks is prohibitively expensive, especially for 3D volume data. Recent breakthroughs in self-supervised learning (SSL) offer a way to overcome the lack of labeled training samples by learning feature representations from unlabeled data. However, most current SSL techniques in the medical field are designed for either 2D images or 3D volumes. In practice, this restricts the ability to fully leverage unlabeled data from the many sources that contain both 2D and 3D data, and it constrains the use of the resulting pre-trained networks to downstream tasks with compatible data dimensions. In this paper, we propose a novel framework for unsupervised joint learning on 2D and 3D data modalities. Given a set of 2D images or 2D slices extracted from 3D volumes, we construct an SSL task based on a 2D contrastive clustering problem over distinct classes. The 3D volumes are exploited by computing a vector embedding for each slice and assembling these into a holistic feature through deformable self-attention mechanisms in a Transformer, allowing long-range dependencies between slices inside a 3D volume to be incorporated. These holistic features are further used to define a novel 3D clustering-agreement-based SSL task and a masked embedding prediction task inspired by pre-trained language models. Experiments on downstream tasks, such as 3D brain segmentation, lung nodule detection, 3D heart structure segmentation, and abnormal chest X-ray detection, demonstrate the effectiveness of our joint 2D and 3D SSL approach. We improve plain 2D DeepCluster-v2 and SwAV by a significant margin and also surpass various modern 2D and 3D SSL approaches.
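To make the pipeline in the abstract concrete, the following is a minimal NumPy sketch of its three ingredients: attending over per-slice embeddings to build a volume-level "holistic" feature, a masked-embedding regression objective in the spirit of pre-trained language models, and a soft cluster assignment of the kind used in DeepCluster/SwAV-style contrastive clustering. Everything here is an illustrative assumption, not the authors' implementation: the function names (`attend_slices`, `cluster_assign`, `masked_embedding_loss`), the zero-vector "mask token", the mean pooling, and the plain single-head attention (standing in for the paper's deformable self-attention) are all simplifications.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attend_slices(slice_emb, Wq, Wk, Wv):
    """Single-head self-attention across the slice axis of one volume.

    slice_emb: (num_slices, d) per-slice 2D embeddings.
    Returns contextualised slice features of the same shape, so every
    slice feature can draw on every other slice (long-range context).
    """
    Q, K, V = slice_emb @ Wq, slice_emb @ Wk, slice_emb @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores, axis=-1) @ V

def holistic_feature(slice_emb, Wq, Wk, Wv):
    """Mean-pool the attended slice features into one volume-level vector."""
    return attend_slices(slice_emb, Wq, Wk, Wv).mean(axis=0)

def masked_embedding_loss(slice_emb, mask_idx, Wq, Wk, Wv):
    """BERT-style objective: hide one slice embedding and regress it
    from the attended context (MSE between prediction and original)."""
    corrupted = slice_emb.copy()
    corrupted[mask_idx] = 0.0  # zero vector as a stand-in mask token
    pred = attend_slices(corrupted, Wq, Wk, Wv)[mask_idx]
    return float(((pred - slice_emb[mask_idx]) ** 2).mean())

def cluster_assign(z, prototypes, temp=0.1):
    """Soft cluster assignment: softmax over cosine similarity to a set
    of prototypes (the core step of contrastive clustering)."""
    z = z / np.linalg.norm(z, axis=-1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=-1, keepdims=True)
    return softmax(z @ p.T / temp, axis=-1)

rng = np.random.default_rng(0)
d, n_slices = 16, 8
slices = rng.normal(size=(n_slices, d))          # toy per-slice embeddings
Wq, Wk, Wv = (0.1 * rng.normal(size=(d, d)) for _ in range(3))

volume_vec = holistic_feature(slices, Wq, Wk, Wv)            # shape (16,)
loss = masked_embedding_loss(slices, 3, Wq, Wk, Wv)          # scalar > 0
prototypes = rng.normal(size=(5, d))             # 5 illustrative prototypes
probs = cluster_assign(slices, prototypes)       # (8, 5), rows sum to 1
```

In the sketch, `volume_vec` plays the role of the holistic feature on which a 3D clustering-agreement objective could be defined, while `probs` illustrates the 2D contrastive clustering side applied per slice.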


Related research

06/23/2021 · Bootstrap Representation Learning for Segmentation on Medical Volumes and Sequences
In this work, we propose a novel straightforward method for medical volu...

06/16/2022 · Volumetric Supervised Contrastive Learning for Seismic Semantic Segmentation
In seismic interpretation, pixel-level labels of various rock structures...

08/08/2022 · AWEncoder: Adversarial Watermarking Pre-trained Encoders in Contrastive Learning
As a self-supervised learning paradigm, contrastive learning has been wi...

06/06/2020 · 3D Self-Supervised Methods for Medical Imaging
Self-supervised learning methods have witnessed a recent surge of intere...

06/20/2023 · LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching
Obtaining large pre-trained models that can be fine-tuned to new tasks w...

12/17/2021 · Unified 2D and 3D Pre-training for Medical Image Classification and Segmentation
Self-supervised learning (SSL) opens up huge opportunities for better ut...

09/06/2023 · Self-Supervised Masked Digital Elevation Models Encoding for Low-Resource Downstream Tasks
The lack of quality labeled data is one of the main bottlenecks for trai...
