Exploiting the potential of unlabeled endoscopic video data with self-supervised learning

11/27/2017
by Tobias Ross, et al.

Purpose: Due to the breakthrough successes of deep learning-based solutions for automatic image annotation, the availability of reference annotations for algorithm training is evolving into one of the major bottlenecks in the field of surgical data science. The purpose of this paper was to investigate the concept of self-supervised learning to address this issue.

Methods: Guided by the hypothesis that unlabeled video data contains valuable information about the target domain that can be used to boost the performance of state-of-the-art deep learning algorithms, we show how to reduce the required amount of manual labeling with self-supervised learning. The core of the method is an auxiliary task, based on raw endoscopic video data of the target domain, that is used to initialize the convolutional neural network (CNN) for the target task. In this paper, we propose the re-colorization of medical images with a generative adversarial network (GAN)-based architecture as the auxiliary task. A variant of the method involves a second pretraining step based on labeled data for the target task from a related domain. We validate both variants using medical instrument segmentation as the target task.

Results: The proposed approach can be used to radically reduce the manual annotation effort involved in training CNNs. Compared to the baseline approach of generating annotated data from scratch, our method decreases the number of labeled images by up to 75% and outperforms alternative methods for CNN pretraining, such as pretraining on publicly available non-medical (COCO) or medical data (MICCAI Endoscopic Vision Challenge 2017) using the target task (here: segmentation).

Conclusion: As it makes efficient use of available public and non-public, labeled and unlabeled data, the approach has the potential to become a valuable tool for CNN (pre-)training.
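The high-level recipe described in the abstract, pretraining a CNN encoder on a GAN-based re-colorization task using unlabeled video frames and then reusing that encoder for instrument segmentation, can be sketched as below. This is a minimal illustrative sketch assuming PyTorch; the layer choices, hyperparameters, and class names (Encoder, Colorizer, PatchDiscriminator, SegmentationNet) are placeholders introduced here and are not the authors' actual architecture.

    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        # 3x3 conv + batch norm + ReLU, the basic building unit used below
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    class Encoder(nn.Module):
        # Shared feature extractor: pretrained on re-colorization, reused for segmentation.
        def __init__(self):
            super().__init__()
            self.stages = nn.ModuleList(
                [conv_block(1, 64), conv_block(64, 128), conv_block(128, 256)]
            )
            self.pool = nn.MaxPool2d(2)

        def forward(self, x):
            for stage in self.stages:
                x = self.pool(stage(x))
            return x                      # (N, 256, H/8, W/8)

    class Colorizer(nn.Module):
        # Generator for the auxiliary task: grayscale frame -> RGB frame.
        def __init__(self, encoder):
            super().__init__()
            self.encoder = encoder
            self.decoder = nn.Sequential(
                nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
                conv_block(256, 64),
                nn.Conv2d(64, 3, 1),
                nn.Tanh(),                # outputs in [-1, 1]
            )

        def forward(self, gray):
            return self.decoder(self.encoder(gray))

    class PatchDiscriminator(nn.Module):
        # Judges whether an RGB frame looks like a real endoscopic image.
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(128, 1, 4, padding=1),
            )

        def forward(self, rgb):
            return self.net(rgb)          # patch-wise real/fake logits

    # Stage 1: adversarial re-colorization on unlabeled video frames (one toy step).
    encoder = Encoder()
    gen, disc = Colorizer(encoder), PatchDiscriminator()
    g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4)
    adv, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

    rgb = torch.rand(4, 3, 64, 64) * 2 - 1     # stand-in for unlabeled frames, in [-1, 1]
    gray = rgb.mean(dim=1, keepdim=True)       # auxiliary input: de-colorized frame

    fake = gen(gray)
    d_real, d_fake = disc(rgb), disc(fake.detach())
    d_loss = adv(d_real, torch.ones_like(d_real)) + adv(d_fake, torch.zeros_like(d_fake))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    d_fake = disc(fake)
    g_loss = adv(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, rgb)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    # Stage 2: reuse the pretrained encoder inside a segmentation network and
    # fine-tune it on the (much smaller) set of manually labeled frames.
    class SegmentationNet(nn.Module):
        def __init__(self, encoder):
            super().__init__()
            self.encoder = encoder        # initialized from the auxiliary task
            self.head = nn.Sequential(
                nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
                nn.Conv2d(256, 1, 1),     # per-pixel instrument logits
            )

        def forward(self, gray):
            return self.head(self.encoder(gray))

    seg = SegmentationNet(encoder)        # fine-tune with a pixel-wise loss, e.g. BCE or Dice

In the paper's second variant, an intermediate pretraining step on labeled data from a related domain would sit between the two stages; in this sketch that would simply mean fine-tuning SegmentationNet on the related-domain labels before the final fine-tuning on the target data.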


research
07/10/2021

Hierarchical Self-Supervised Learning for Medical Image Segmentation Based on Multi-Domain Data Aggregation

A large labeled dataset is a key to the success of supervised deep learn...
research
08/12/2020

Self-Path: Self-supervision for Classification of Pathology Images with Limited Annotations

While high-resolution pathology images lend themselves well to `data hun...
research
10/05/2019

Self-supervised Feature Learning for 3D Medical Images by Playing a Rubik's Cube

With the development of deep learning, an increasing number of studies...
research
11/19/2022

Domain-Adaptive Self-Supervised Pre-Training for Face Body Detection in Drawings

Drawings are powerful means of pictorial abstraction and communication. ...
research
09/18/2023

Self-supervised TransUNet for Ultrasound regional segmentation of the distal radius in children

Supervised deep learning offers great promise to automate analysis of me...
research
07/06/2023

Self-supervised learning via inter-modal reconstruction and feature projection networks for label-efficient 3D-to-2D segmentation

Deep learning has become a valuable tool for the automation of certain m...
research
03/02/2021

Simulation-to-Real domain adaptation with teacher-student learning for endoscopic instrument segmentation

Purpose: Segmentation of surgical instruments in endoscopic videos is es...
