Learning to segment with limited annotations: Self-supervised pretraining with regression and contrastive loss in MRI

05/26/2022
by Lavanya Umapathy, et al.

Obtaining manual annotations for large datasets for supervised training of deep learning (DL) models is challenging. The availability of large unlabeled datasets relative to labeled ones motivates the use of self-supervised pretraining to initialize DL models for subsequent segmentation tasks. In this work, we consider two pretraining approaches that drive a DL model to learn different representations: a) a regression loss that exploits spatial dependencies within an image and b) a contrastive loss that exploits semantic similarity between pairs of images. The effect of these pretraining techniques is evaluated on two downstream segmentation applications using Magnetic Resonance (MR) images: a) liver segmentation in abdominal T2-weighted MR images and b) prostate segmentation in T2-weighted MR images of the prostate. We observed that DL models pretrained using self-supervision can be fine-tuned to comparable performance with fewer labeled datasets. We also observed that initializing the DL model with contrastive loss-based pretraining performed better than with regression loss-based pretraining.
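The abstract does not include code; as a rough illustration of the contrastive pretraining objective it describes (pulling together embeddings of semantically similar image pairs), the following is a minimal SimCLR-style NT-Xent loss sketch in PyTorch. The function name, temperature value, and batch-wise negative-pair scheme are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.1):
    """Normalized temperature-scaled cross-entropy (NT-Xent) contrastive loss.

    z1, z2: embeddings of two views of the same batch of images, shape (N, D).
    Positive pairs are (z1[i], z2[i]); every other sample in the batch serves
    as a negative. Hypothetical sketch, not the paper's exact formulation.
    """
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit-norm embeddings
    sim = torch.mm(z, z.t()) / temperature                # (2N, 2N) scaled cosine similarities
    sim.fill_diagonal_(float('-inf'))                     # a sample is never its own negative
    # The positive for row i is its other view: column i+N (or i-N)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

In a pretraining loop, z1 and z2 would come from passing two augmented views of each unlabeled MR image through the encoder; the pretrained encoder weights would then initialize the segmentation network for fine-tuning.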
