Segment Anything Model for Medical Image Analysis: an Experimental Study

04/20/2023
by   Maciej A. Mazurowski, et al.

Training segmentation models for medical images continues to be challenging due to the limited availability and high acquisition cost of data annotations. The Segment Anything Model (SAM) is a foundation model trained on over 1 billion annotations, predominantly for natural images, that is intended to segment a user-defined object of interest in an interactive manner. Despite its impressive performance on natural images, it is unclear how the model is affected when shifted to medical image domains. Here, we perform an extensive evaluation of SAM's ability to segment medical images on a collection of 11 medical imaging datasets spanning various modalities and anatomies. In our experiments, we generated point prompts using a standard method that simulates interactive segmentation. Experimental results show that SAM's performance with a single prompt varies widely across tasks and datasets, from an IoU of 0.1135 on a spine MRI dataset to 0.8650 on a hip X-ray dataset. Performance appears to be high for tasks involving well-circumscribed objects with unambiguous prompts, and poorer in many other scenarios, such as segmentation of tumors. When multiple prompts are provided, overall performance improves only slightly, though more noticeably on datasets where the target object is not contiguous. An additional comparison to RITM showed that SAM performs much better with a single prompt, while the two methods perform similarly with a larger number of prompts. We conclude that SAM shows impressive performance on some datasets given the zero-shot learning setup, but poor to moderate performance on multiple others. While SAM as a model and as a learning paradigm may prove impactful in the medical imaging domain, extensive research is needed to identify the proper ways of adapting it to this domain.
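The abstract does not spell out the prompt-simulation procedure; a common convention in interactive-segmentation evaluation is to place each click at the pixel deepest inside the largest error region of the current prediction, and to score masks by IoU. The sketch below illustrates that convention using NumPy and SciPy; the function names and the toy masks are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def iou(pred, gt):
    """Intersection over union of two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 1.0

def next_point_prompt(pred, gt):
    """Simulate an interactive click (illustrative convention, not the
    paper's exact method): choose the larger error region -- missed
    foreground (false negatives) or spurious foreground (false
    positives) -- and click the pixel farthest from its boundary."""
    fn = np.logical_and(gt, ~pred)   # missed foreground -> positive click
    fp = np.logical_and(pred, ~gt)   # spurious foreground -> negative click
    region, positive = (fn, True) if fn.sum() >= fp.sum() else (fp, False)
    if region.sum() == 0:
        return None                  # prediction already matches ground truth
    dist = distance_transform_edt(region)          # depth inside error region
    y, x = np.unravel_index(np.argmax(dist), dist.shape)
    return (int(y), int(x)), positive

# Toy example: ground truth is a 2x2 square; the prediction misses it,
# so the first simulated click is a positive click inside the object.
gt = np.zeros((4, 4), dtype=bool)
gt[:2, :2] = True
pred = np.zeros((4, 4), dtype=bool)
point, positive = next_point_prompt(pred, gt)
```

In a full evaluation loop, the returned point and its positive/negative label would be fed to the model as the next prompt, the prediction updated, and the process repeated for the desired number of clicks.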


