
Uncertainty and Interpretability in Convolutional Neural Networks for Semantic Segmentation of Colorectal Polyps

by   Kristoffer Wickstrøm, et al.
University of Tromsø, The Arctic University of Norway

Convolutional Neural Networks (CNNs) are propelling advances in a range of computer vision tasks such as object detection and object segmentation. Their success has motivated research into applications of such models for medical image analysis. If CNN-based models are to be helpful in a medical context, they need to be precise and interpretable, and the uncertainty in their predictions must be well understood. In this paper, we develop and evaluate recent advances in uncertainty estimation and model interpretability in the context of semantic segmentation of polyps from colonoscopy images. We evaluate and enhance several architectures of Fully Convolutional Networks (FCNs) for semantic segmentation of colorectal polyps and provide a comparison between these models. Our highest performing model achieves a 76.06% mean intersection over union (IoU) on the EndoScene dataset, a considerable improvement over the previous state-of-the-art.
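A common technique in this line of work on uncertainty estimation for segmentation networks is Monte Carlo dropout: dropout is kept active at test time, the image is passed through the network several times, and the per-pixel mean and variance over the stochastic passes give a prediction and an uncertainty map. The sketch below illustrates the idea only; the `stochastic_forward` function is a hypothetical stand-in for a dropout-equipped FCN, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_forward(image, drop_rate=0.5):
    """Stand-in for an FCN forward pass with dropout left active at test time.
    Returns per-pixel polyp probabilities for one stochastic pass."""
    logits = image * 2.0 - 1.0                    # dummy "network" output
    mask = rng.random(image.shape) > drop_rate    # random dropout mask
    logits = logits * mask / (1.0 - drop_rate)    # inverted-dropout scaling
    return 1.0 / (1.0 + np.exp(-logits))          # sigmoid -> probabilities

def mc_dropout_segmentation(image, T=20):
    """Monte Carlo dropout: run T stochastic forward passes.
    The per-pixel mean is the segmentation probability; the per-pixel
    variance serves as an uncertainty map."""
    samples = np.stack([stochastic_forward(image) for _ in range(T)])
    return samples.mean(axis=0), samples.var(axis=0)

image = rng.random((64, 64))              # dummy grayscale colonoscopy frame
mean_prob, uncertainty = mc_dropout_segmentation(image)
seg_mask = mean_prob > 0.5                # thresholded binary polyp mask
```

In practice the uncertainty map can be overlaid on the colonoscopy frame so a clinician can see where the model's segmentation is least trustworthy, e.g. along ambiguous polyp boundaries.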



