Accuracy of Segment-Anything Model (SAM) in medical image segmentation tasks

04/18/2023
by   Sheng He, et al.

The Segment Anything Model (SAM) was introduced as a foundation model for image segmentation. It was trained on over 1 billion masks from 11 million natural images and can perform zero-shot segmentation using prompts such as masks, boxes, and points. In this report, we explored (1) the accuracy of SAM on 12 public medical image segmentation datasets covering various organs (brain, breast, chest, lung, skin, liver, bowel, pancreas, and prostate), imaging modalities (2D X-ray, histology, endoscopy, and 3D MRI and CT), and health conditions (normal, lesioned), and (2) whether SAM, as a computer vision foundation model for segmentation, can point to promising research directions for medical image segmentation. We found that SAM without retraining on medical images does not perform as accurately as U-Net or other deep learning models trained on medical images.
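Comparisons like the one above require a quantitative overlap metric between predicted and ground-truth masks. A common choice for medical image segmentation is the Dice coefficient; the sketch below assumes Dice as the metric for illustration (the report's exact evaluation protocol may differ).

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity between two binary masks (1.0 = perfect overlap)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy example: a predicted mask shifted relative to the ground truth
gt = np.zeros((8, 8), dtype=np.uint8)
gt[2:6, 2:6] = 1            # 16-pixel square
pred = np.zeros_like(gt)
pred[3:7, 3:7] = 1          # shifted square; 9 pixels overlap
print(round(dice_coefficient(pred, gt), 4))  # → 0.5625
```

A per-dataset mean of this score over all test images is a typical way to report the gap between a zero-shot model such as SAM and a model trained on the target modality, such as U-Net.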

