Local block-wise self attention for normal organ segmentation

09/11/2019
by Jue Jiang, et al.

We developed a new and computationally simple local block-wise self-attention approach for segmenting normal structures in head and neck computed tomography (CT) images. Our method uses the insight that normal organs exhibit regularity in their spatial location and inter-relation within images, which can be leveraged to simplify the computations required to aggregate feature information. We accomplish this by using local self-attention blocks that pass information between each other to derive the attention map. We show that adding additional attention layers increases the contextual field and captures focused attention from relevant structures. We developed our approach using U-net and compared it against multiple state-of-the-art self-attention methods. All models were trained on 48 internal head and neck CT scans and tested on 48 CT scans from the external Public Domain Database for Computational Anatomy (PDDCA) dataset. Our method achieved the highest Dice similarity coefficient segmentation accuracy: 0.85±0.04 and 0.86±0.04 for the left and right parotid glands, 0.79±0.07 and 0.77±0.05 for the left and right submandibular glands, 0.93±0.01 for the mandible, and 0.88±0.02 for the brain stem, with the lowest overhead relative to standard U-net: a 66.7% increase in computing time per image and a 0.15% increase in model parameters. The best-performing state-of-the-art method, point-wise spatial attention, achieved comparable accuracy but with a 516.7% increase in computing time and an 8.14% increase in parameters compared with standard U-net. Finally, we performed ablation tests to study the impact of attention block size, overlap of the attention blocks, additional attention layers, and attention block placement on segmentation performance.
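The block-wise mechanism described above can be illustrated with a minimal PyTorch sketch of within-block self-attention on a 2D feature map. This is not the authors' implementation: the module name LocalBlockSelfAttention, the block_size parameter, the non-overlapping block partition, and the single attention layer are all assumptions for illustration; the paper additionally uses overlapping blocks, information passing between blocks, and extra attention layers to widen the contextual field.

# Minimal sketch of local block-wise self-attention (PyTorch).
# Assumes a square feature map whose side is divisible by block_size and
# channels >= 8. Names and design choices here are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocalBlockSelfAttention(nn.Module):
    """Self-attention computed independently inside non-overlapping spatial blocks."""

    def __init__(self, channels: int, block_size: int = 8):
        super().__init__()
        self.block_size = block_size
        # 1x1 convolutions produce query/key/value maps, as in standard
        # non-local / self-attention layers.
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def _to_blocks(self, x: torch.Tensor) -> torch.Tensor:
        # (B, C, H, W) -> (B * num_blocks, C, b, b)
        b = self.block_size
        B, C, H, W = x.shape
        x = x.reshape(B, C, H // b, b, W // b, b)
        x = x.permute(0, 2, 4, 1, 3, 5)          # B, H/b, W/b, C, b, b
        return x.reshape(-1, C, b, b)

    def _from_blocks(self, x: torch.Tensor, B: int, H: int, W: int) -> torch.Tensor:
        # Inverse of _to_blocks: reassemble blocks into a (B, C, H, W) map.
        b = self.block_size
        C = x.shape[1]
        x = x.reshape(B, H // b, W // b, C, b, b)
        x = x.permute(0, 3, 1, 4, 2, 5)
        return x.reshape(B, C, H, W)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, C, H, W = x.shape
        q = self._to_blocks(self.query(x))        # (N, C/8, b, b)
        k = self._to_blocks(self.key(x))
        v = self._to_blocks(self.value(x))
        q = q.flatten(2).transpose(1, 2)          # (N, n, C/8), n = b*b
        k = k.flatten(2)                          # (N, C/8, n)
        v = v.flatten(2).transpose(1, 2)          # (N, n, C)
        attn = F.softmax(torch.bmm(q, k), dim=-1) # (N, n, n) attention within each block
        out = torch.bmm(attn, v).transpose(1, 2)  # (N, C, n)
        out = out.reshape(-1, C, self.block_size, self.block_size)
        out = self._from_blocks(out, B, H, W)
        # Residual connection; stacking a second such layer widens the context,
        # since each block then attends over features already mixed by the first.
        return x + self.gamma * out


if __name__ == "__main__":
    layer = LocalBlockSelfAttention(channels=64, block_size=8)
    feats = torch.randn(2, 64, 32, 32)            # e.g. a U-net decoder feature map
    print(layer(feats).shape)                     # torch.Size([2, 64, 32, 32])

Because attention is restricted to b×b blocks, the cost scales with the block area rather than the full image area, which is the source of the small runtime and parameter overhead reported above; stacking additional layers is how a larger contextual field is recovered.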


