Exploiting the potential of unlabeled endoscopic video data with self-supervised learning

11/27/2017
by Tobias Ross, et al.

Purpose: Due to the breakthrough successes of deep learning-based solutions for automatic image annotation, the availability of reference annotations for algorithm training is evolving into one of the major bottlenecks in the field of surgical data science. The purpose of this paper was to investigate the concept of self-supervised learning to address this issue.

Methods: Guided by the hypothesis that unlabeled video data contains valuable information about the target domain that can be used to boost the performance of state-of-the-art deep learning algorithms, we show how to reduce the required amount of manual labeling with self-supervised learning. The core of the method is an auxiliary task based on raw endoscopic video data of the target domain that is used to initialize the convolutional neural network (CNN) for the target task. In this paper, we propose the re-colorization of medical images with a generative adversarial network (GAN)-based architecture as the auxiliary task. A variant of the method involves a second pretraining step based on labeled data for the target task from a related domain. We validate both variants using medical instrument segmentation as the target task.

Results: The proposed approach can be used to radically reduce the manual annotation effort involved in training CNNs. Compared to the baseline approach of generating annotated data from scratch, our method decreases the number of labeled images required by up to 75% and outperforms alternative methods for CNN pretraining, such as pretraining on publicly available non-medical data (COCO) or medical data (MICCAI Endoscopic Vision Challenge 2017) using the target task (here: segmentation).

Conclusion: As it makes efficient use of available public and non-public, labeled and unlabeled data, the approach has the potential to become a valuable tool for CNN (pre-)training.
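
The snippet below is a minimal PyTorch sketch of the two-stage idea described in the abstract, not the authors' implementation: an encoder is first pretrained on unlabeled video frames via a re-colorization auxiliary task, then reused to initialize a segmentation network that is fine-tuned on a small labeled set. All class and variable names are assumptions, and the plain L1 reconstruction loss stands in for the GAN-based re-colorization architecture used in the paper.

```python
# Hypothetical sketch of self-supervised pretraining via re-colorization,
# followed by fine-tuning on instrument segmentation. Not the paper's code.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class Encoder(nn.Module):
    """Shared encoder: trained on the auxiliary task, reused for segmentation."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(1, 32), nn.MaxPool2d(2),
            conv_block(32, 64), nn.MaxPool2d(2),
            conv_block(64, 128),
        )

    def forward(self, x):
        return self.features(x)


class Decoder(nn.Module):
    """Decoder head; out_ch=3 for re-colorization, out_ch=1 for binary masks."""
    def __init__(self, out_ch):
        super().__init__()
        self.up = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            conv_block(128, 64),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            conv_block(64, 32),
            nn.Conv2d(32, out_ch, 1),
        )

    def forward(self, x):
        return self.up(x)


# Stage 1: auxiliary task on unlabeled video frames (re-colorization).
encoder = Encoder()
colorizer = nn.Sequential(encoder, Decoder(out_ch=3))
opt = torch.optim.Adam(colorizer.parameters(), lr=1e-4)

frames = torch.rand(8, 3, 64, 64)          # stand-in for unlabeled RGB frames
gray = frames.mean(dim=1, keepdim=True)    # grayscale input to be re-colorized
aux_loss = nn.functional.l1_loss(colorizer(gray), frames)  # adversarial terms omitted
aux_loss.backward()
opt.step()

# Stage 2: target task (instrument segmentation) with few labeled images.
segmenter = nn.Sequential(encoder, Decoder(out_ch=1))  # pretrained encoder reused
masks = torch.randint(0, 2, (8, 1, 64, 64)).float()    # stand-in annotations
seg_loss = nn.functional.binary_cross_entropy_with_logits(
    segmenter(frames.mean(dim=1, keepdim=True)), masks)
seg_loss.backward()
```

In this sketch, initialization for the target task amounts to reusing the same encoder instance in both stages; the paper's variant with an additional pretraining step on labeled data from a related domain would insert a further supervised fine-tuning round between the two stages shown here.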

