Multi-head Cascaded Swin Transformers with Attention to k-space Sampling Pattern for Accelerated MRI Reconstruction

07/18/2022
by Mevan Ekanayake et al.

Global correlations are widely observed in human anatomical structures due to similarity across tissues and bones. These correlations are reflected in magnetic resonance imaging (MRI) scans as a result of similar proton density and T1/T2 relaxation parameters. Furthermore, to accelerate MRI acquisition, k-space data are undersampled, which causes global aliasing artifacts. Convolutional neural network (CNN) models are widely utilized for accelerated MRI reconstruction, but they are limited in capturing global correlations due to the intrinsic locality of the convolution operation. Self-attention-based transformer models are capable of capturing global correlations among image features; however, transformers have so far contributed little to MRI reconstruction, and the existing contributions are mostly CNN-transformer hybrids that rarely leverage the physics of MRI. In this paper, we propose a physics-based, stand-alone (convolution-free) transformer model, the Multi-head Cascaded Swin Transformers (McSTRA), for accelerated MRI reconstruction. McSTRA combines several interconnected MRI-physics concepts with transformer networks: it exploits global MR features via the shifted window self-attention mechanism; it extracts MR features belonging to different spectral components separately using a multi-head setup; it iterates between intermediate de-aliasing and k-space correction via a cascaded network with data consistency in k-space and intermediate loss computations; furthermore, we propose a novel positional embedding generation mechanism that guides self-attention using the point spread function of the undersampling mask. Our model significantly outperforms state-of-the-art MRI reconstruction methods both visually and quantitatively, with improved resolution and better removal of aliasing artifacts.
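
The shifted window self-attention that McSTRA inherits from Swin Transformers restricts attention to local windows whose boundaries shift between successive layers, so global context propagates without the quadratic cost of full attention. Below is a minimal PyTorch sketch of the (shifted) window partitioning step; the window size and shift are illustrative, not the paper's settings.

```python
import torch

def window_partition(x, win=8, shift=0):
    """Minimal sketch of (shifted) window partitioning for Swin-style
    attention; `win` and `shift` here are illustrative values.

    x: feature map of shape (B, H, W, C), with H and W divisible by `win`
    """
    if shift:
        # Shifting the map before partitioning lets the next attention
        # layer see across the previous layer's window borders
        x = torch.roll(x, shifts=(-shift, -shift), dims=(1, 2))
    B, H, W, C = x.shape
    x = x.view(B, H // win, win, W // win, win, C)
    # -> (num_windows * B, win * win, C): each row group is one window
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, win * win, C)
```

Self-attention then runs independently within each window; alternating `shift=0` and `shift=win // 2` across consecutive layers is what lets information cross window borders.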
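
The multi-head setup assigns different spectral components of the MR data to separate heads. The exact decomposition is not given in the abstract, so the sketch below is an assumption: concentric ring masks split centered k-space into low-, mid-, and high-frequency bands, each mapped back to its own image-domain stream. The function name, band radii, and three-band choice are all hypothetical.

```python
import torch

def spectral_split(kspace, radii=(0.1, 0.3, 1.0)):
    """Hypothetical decomposition of centered k-space into frequency bands,
    one per head of a multi-head setup; radii are illustrative.

    kspace: complex k-space data with DC at the center, shape (H, W)
    """
    H, W = kspace.shape
    fy = torch.fft.fftshift(torch.fft.fftfreq(H))
    fx = torch.fft.fftshift(torch.fft.fftfreq(W))
    r = torch.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    r = r / r.max()  # normalized radial frequency in [0, 1]
    bands, inner = [], 0.0
    for outer in radii:
        ring = ((r >= inner) & (r <= outer)).to(kspace.dtype)
        # Keep one annulus of k-space and map it back to the image domain
        bands.append(torch.fft.ifft2(torch.fft.ifftshift(kspace * ring)))
        inner = outer
    return bands  # one complex image per spectral band
```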
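
The cascade alternates between network de-aliasing and k-space correction with data consistency. The sketch below shows a hard data-consistency layer, a standard building block of cascaded MRI reconstruction networks; whether McSTRA uses this exact hard replacement or a soft, weighted variant is not specified in the abstract.

```python
import torch

def data_consistency(x_recon, k_measured, mask):
    """Hard data-consistency step: re-impose the acquired k-space samples
    on the network's intermediate estimate.

    x_recon:    complex image estimate from the network, shape (B, H, W)
    k_measured: undersampled, centered k-space measurements, shape (B, H, W)
    mask:       binary sampling mask, 1 where k-space was acquired
    """
    k_recon = torch.fft.fftshift(
        torch.fft.fft2(torch.fft.ifftshift(x_recon, dim=(-2, -1))),
        dim=(-2, -1))
    # Trust the scanner where data were acquired; keep the network's
    # prediction everywhere else
    k_dc = mask * k_measured + (1 - mask) * k_recon
    return torch.fft.fftshift(
        torch.fft.ifft2(torch.fft.ifftshift(k_dc, dim=(-2, -1))),
        dim=(-2, -1))
```

In a cascade, each stage's output image passes through such a layer before entering the next stage, and the intermediate losses mentioned in the abstract can be computed on each stage's data-consistent output.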
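
The positional embeddings are generated from the point spread function (PSF) of the undersampling mask, i.e., the inverse Fourier transform of the sampling pattern, whose side lobes indicate where aliasing energy folds into the image. The abstract does not describe how the PSF is mapped to embeddings, so the linear projection and `embed_dim` below are illustrative assumptions.

```python
import torch

def psf_positional_embedding(mask, embed_dim=96):
    """Illustrative sketch: derive positional embeddings from the PSF of
    the undersampling mask. The projection and `embed_dim` are assumptions.

    mask: binary k-space sampling mask, shape (H, W)
    """
    # The PSF is the inverse FFT of the sampling pattern; its side lobes
    # show where aliasing energy lands in the image domain
    psf = torch.fft.fftshift(torch.fft.ifft2(mask.to(torch.complex64)))
    psf_mag = psf.abs() / psf.abs().max()      # normalize to [0, 1]
    proj = torch.nn.Linear(1, embed_dim)       # hypothetical projection
    tokens = psf_mag.reshape(-1, 1)            # one scalar per pixel
    return proj(tokens)                        # (H * W, embed_dim)
```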
