SoftEnNet: Symbiotic Monocular Depth Estimation and Lumen Segmentation for Colonoscopy Endorobots

01/19/2023
by Alwyn Mathew, et al.

Colorectal cancer is the third most common cause of cancer death worldwide. Optical colonoscopy is the gold standard for detecting colorectal cancer; however, about 25 percent of polyps are missed during the procedure. A vision-based autonomous endorobot can significantly improve colonoscopy through systematic, complete screening of the colonic mucosa. Reliable robot navigation requires a three-dimensional understanding of the environment and lumen tracking to support autonomous tasks. We propose a novel multi-task model that simultaneously predicts dense depth and lumen segmentation with an ensemble of deep networks. The depth estimation sub-network is trained in a self-supervised fashion guided by view synthesis; the lumen segmentation sub-network is supervised. The two sub-networks are interconnected with pathways that enable information exchange and thereby mutual learning. Because the lumen occupies the deepest visual region of the image, lumen segmentation helps depth estimation at the farthest locations; in turn, the estimated depth guides the lumen segmentation network, since the lumen location defines the farthest part of the scene. Unlike in other environments, view synthesis often fails in the colon because of the deformable wall, textureless surfaces, specularities, and wide field-of-view image distortions, all challenges that our pipeline addresses. We conducted qualitative analysis on a synthetic dataset and quantitative analysis on a colon training model and real colonoscopy videos. The experiments show that our model predicts accurate scale-invariant depth maps and lumen segmentations from colonoscopy images in near real-time.
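The abstract describes two sub-networks, a self-supervised depth branch and a supervised lumen segmentation branch, linked by pathways that exchange information. The PyTorch sketch below is a minimal illustration of one way such cross-task coupling could be wired; the module names (SubNet, SymbioticNet), layer sizes, and the concatenation-based feature fusion are assumptions for illustration only, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): two small encoder-decoder
# sub-networks -- one for depth, one for lumen segmentation -- that exchange
# bottleneck features so each task can inform the other.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class SubNet(nn.Module):
    """Tiny encoder-decoder; returns a prediction and its bottleneck features."""

    def __init__(self, out_ch):
        super().__init__()
        self.enc = nn.Sequential(conv_block(3, 32), nn.MaxPool2d(2), conv_block(32, 64))
        self.fuse = conv_block(64 + 64, 64)  # mixes in features from the sibling branch
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            conv_block(64, 32),
            nn.Conv2d(32, out_ch, 1),
        )

    def forward(self, x, peer_feat=None):
        feat = self.enc(x)
        if peer_feat is not None:  # cross-task pathway
            feat = self.fuse(torch.cat([feat, peer_feat], dim=1))
        return self.dec(feat), feat


class SymbioticNet(nn.Module):
    """Depth and lumen-segmentation branches that condition on each other's features."""

    def __init__(self):
        super().__init__()
        self.depth_net = SubNet(out_ch=1)  # dense (scale-invariant) depth
        self.seg_net = SubNet(out_ch=1)    # lumen mask logits

    def forward(self, image):
        # First pass: each branch encodes the image independently.
        _, depth_feat = self.depth_net(image)
        _, seg_feat = self.seg_net(image)
        # Second pass: each branch refines its output using the other's features.
        depth, _ = self.depth_net(image, peer_feat=seg_feat)
        lumen_logits, _ = self.seg_net(image, peer_feat=depth_feat)
        return torch.sigmoid(depth), lumen_logits


if __name__ == "__main__":
    frame = torch.randn(1, 3, 128, 128)   # stand-in for a colonoscopy frame
    depth, lumen = SymbioticNet()(frame)
    print(depth.shape, lumen.shape)       # both (1, 1, 128, 128)
```

In the paper's setup the depth branch would additionally be trained with a view-synthesis (photometric reconstruction) loss and the segmentation branch with labeled lumen masks; those losses are omitted here to keep the sketch focused on the cross-task pathways.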

