Towards Optimal Patch Size in Vision Transformers for Tumor Segmentation

08/31/2023
by   Ramtin Mojtahedi, et al.

Detection of tumors in metastatic colorectal cancer (mCRC) plays an essential role in the early diagnosis and treatment of liver cancer. Deep learning models built on fully convolutional neural networks (FCNNs) have become the dominant approach for segmenting 3D computed tomography (CT) scans. However, because their convolutional layers use limited kernel sizes, they cannot capture long-range dependencies and global context. Vision transformers were introduced to overcome the locality of FCNN receptive fields. Although transformers can capture long-range features, their segmentation performance degrades across varying tumor sizes because the models are sensitive to the input patch size. While finding an optimal patch size improves the performance of vision transformer-based models on segmentation tasks, doing so is a time-consuming and challenging procedure. This paper proposes a technique for selecting the vision transformer's optimal input multi-resolution image patch size based on the average volume of the metastatic lesions. We further validated the suggested framework with a transfer-learning technique, demonstrating that the highest Dice similarity coefficient (DSC) was obtained by pre-training on data with a larger average tumor volume using the suggested optimal patch size and then fine-tuning on data with a smaller one. We evaluate this idea experimentally by pre-training our model on a multi-resolution public dataset; the model showed consistent, improved results when applied to our private multi-resolution mCRC dataset, which has a smaller average tumor volume. This study lays the groundwork for optimizing semantic segmentation of small objects using vision transformers. The implementation source code is available at: https://github.com/Ramtin-Mojtahedi/OVTPS.
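The abstract describes deriving a patch size from the average metastasis volume; the authors' exact procedure is in the linked repository. As an illustrative sketch only (the function name, the cube approximation of a lesion, and the candidate patch sizes are assumptions, not the paper's method), one could map the average lesion volume to the nearest patch edge from a set of ViT-friendly sizes:

```python
def suggest_patch_size(avg_lesion_volume_mm3, voxel_spacing_mm,
                       candidates=(8, 16, 32, 64)):
    """Hypothetical heuristic: pick the candidate patch edge (in voxels)
    closest to the average lesion diameter, approximating each lesion
    as a cube with the given average volume."""
    # Edge length of a cube with the average lesion volume, in mm.
    edge_mm = avg_lesion_volume_mm3 ** (1.0 / 3.0)
    # Convert to voxels using an assumed isotropic voxel spacing.
    edge_vox = edge_mm / voxel_spacing_mm
    # Return the candidate patch size nearest to the lesion edge length.
    return min(candidates, key=lambda c: abs(c - edge_vox))
```

For example, an average lesion volume of 27,000 mm³ at 1 mm spacing gives a ~30-voxel edge, so the heuristic would select a 32-voxel patch, while much smaller lesions would favor smaller patches.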

