Hierarchical Self-Supervised Learning for Medical Image Segmentation Based on Multi-Domain Data Aggregation

07/10/2021
by   Hao Zheng, et al.

A large labeled dataset is key to the success of supervised deep learning, but for medical image segmentation, obtaining sufficient annotated images for model training is highly challenging. In many scenarios, unannotated images are abundant and easy to acquire. Self-supervised learning (SSL) has shown great potential in exploiting raw data information and representation learning. In this paper, we propose Hierarchical Self-Supervised Learning (HSSL), a new self-supervised framework that boosts medical image segmentation by making good use of unannotated data. Unlike the current literature on task-specific self-supervised pre-training followed by supervised fine-tuning, we utilize SSL to learn task-agnostic knowledge from heterogeneous data for various medical image segmentation tasks. Specifically, we first aggregate a dataset from several medical challenges, then pre-train the network in a self-supervised manner, and finally fine-tune it on labeled data. We develop a new loss function by combining a contrastive loss and a classification loss, and pre-train an encoder-decoder architecture for segmentation tasks. Our extensive experiments show that multi-domain joint pre-training benefits downstream segmentation tasks and significantly outperforms single-domain pre-training. Compared to learning from scratch, our method yields better performance on various tasks (e.g., +0.69%). With limited amounts of training data, our method can substantially bridge the performance gap w.r.t. denser annotations (e.g., with only 10% of the annotated data).
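The abstract describes a pre-training objective that combines a contrastive loss with a classification loss over the aggregated multi-domain data. The sketch below illustrates one plausible form of such a combined objective, assuming an NT-Xent-style contrastive term between two augmented views and a cross-entropy term over source-dataset (domain) labels; the function names, the weighting factor `lam`, and the temperature `tau` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """Simplified NT-Xent contrastive loss.
    z1, z2: (N, D) L2-normalized embeddings of two augmented views of the same images."""
    z = np.concatenate([z1, z2], axis=0)        # (2N, D) stacked views
    sim = z @ z.T / tau                         # cosine similarities (unit vectors), temperature-scaled
    np.fill_diagonal(sim, -np.inf)              # exclude self-similarity from the denominator
    n = z1.shape[0]
    # positive pair of sample i is its other augmented view
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * n), pos]))

def domain_ce(logits, labels):
    """Softmax cross-entropy over source-dataset (domain) labels."""
    shifted = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    logp = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return float(-np.mean(logp[np.arange(len(labels)), labels]))

def hssl_pretrain_loss(z1, z2, logits, labels, lam=1.0, tau=0.5):
    """Hypothetical combined self-supervised objective:
    contrastive term + lam * domain-classification term."""
    return nt_xent(z1, z2, tau) + lam * domain_ce(logits, labels)
```

In this sketch, minimizing the contrastive term pulls the two views of each image together while the classification term forces the encoder to retain which source dataset an image came from, which is one way multi-domain aggregation could shape the learned representation.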


