HUMUS-Net: Hybrid unrolled multi-scale network architecture for accelerated MRI reconstruction

03/15/2022
by   Zalan Fabian, et al.

In accelerated MRI reconstruction, the anatomy of a patient is recovered from a set of under-sampled and noisy measurements. Deep learning approaches have proven successful at solving this ill-posed inverse problem and are capable of producing very high quality reconstructions. However, current architectures rely heavily on convolutions, which are content-independent and have difficulty modeling long-range dependencies in images. Recently, Transformers, the workhorse of contemporary natural language processing, have emerged as powerful building blocks for a multitude of vision tasks. These models split input images into non-overlapping patches, embed the patches into lower-dimensional tokens, and apply a self-attention mechanism that does not suffer from the aforementioned weaknesses of convolutional architectures. However, Transformers incur extremely high compute and memory costs when 1) the input image resolution is high and 2) the image needs to be split into a large number of patches to preserve fine detail; both conditions are typical in low-level vision problems such as MRI reconstruction, and their costs compound. To tackle these challenges, we propose HUMUS-Net, a hybrid architecture that combines the beneficial implicit bias and efficiency of convolutions with the power of Transformer blocks in an unrolled and multi-scale network. HUMUS-Net extracts high-resolution features via convolutional blocks and refines low-resolution features via a novel Transformer-based multi-scale feature extractor. Features from both levels are then synthesized into a high-resolution output reconstruction. Our network establishes a new state of the art on the largest publicly available MRI dataset, fastMRI. We further demonstrate the performance of HUMUS-Net on two other popular MRI datasets and perform fine-grained ablation studies to validate our design.
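The description above maps onto an unrolled cascade that alternates a learned hybrid denoiser with a data-consistency step against the measured k-space samples. The following is a minimal, single-coil PyTorch sketch of that general structure, not the authors' implementation: the module names (HybridDenoiser, UnrolledReconstructor), patch size, channel counts, and the soft data-consistency form are illustrative assumptions.

```python
# Minimal sketch of an unrolled hybrid conv/Transformer MRI reconstructor.
# All module names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn


class HybridDenoiser(nn.Module):
    """Conv branch at full resolution + Transformer branch on patch tokens."""

    def __init__(self, chans=32, patch=8, depth=2, heads=4):
        super().__init__()
        self.head = nn.Conv2d(2, chans, 3, padding=1)            # real/imag -> features
        self.conv_branch = nn.Sequential(                         # high-resolution features
            nn.Conv2d(chans, chans, 3, padding=1), nn.ReLU(),
            nn.Conv2d(chans, chans, 3, padding=1),
        )
        self.embed = nn.Conv2d(chans, chans, patch, stride=patch)  # patchify into tokens
        layer = nn.TransformerEncoderLayer(chans, heads,
                                           dim_feedforward=4 * chans,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, depth)     # low-resolution, long-range
        self.unembed = nn.ConvTranspose2d(chans, chans, patch, stride=patch)
        self.tail = nn.Conv2d(chans, 2, 3, padding=1)

    def forward(self, x):                                          # x: (B, 2, H, W)
        f = self.head(x)
        hi = self.conv_branch(f)
        tok = self.embed(f)                                        # (B, C, H/p, W/p)
        b, c, h, w = tok.shape
        tok = self.transformer(tok.flatten(2).transpose(1, 2))     # (B, hw, C)
        lo = self.unembed(tok.transpose(1, 2).reshape(b, c, h, w))
        return x + self.tail(hi + lo)                              # residual refinement


class UnrolledReconstructor(nn.Module):
    """Unrolled cascade: denoiser followed by a soft data-consistency step in k-space."""

    def __init__(self, cascades=4):
        super().__init__()
        self.blocks = nn.ModuleList(HybridDenoiser() for _ in range(cascades))
        self.dc_weight = nn.Parameter(torch.ones(cascades))

    def forward(self, masked_kspace, mask):        # complex (B, H, W), bool (B, H, W)
        x = torch.fft.ifft2(masked_kspace)         # zero-filled initial image
        for block, w in zip(self.blocks, self.dc_weight):
            img = torch.view_as_real(x).permute(0, 3, 1, 2)        # complex -> 2 channels
            img = block(img)
            x = torch.view_as_complex(img.permute(0, 2, 3, 1).contiguous())
            k = torch.fft.fft2(x)
            k = torch.where(mask, k - w * (k - masked_kspace), k)  # pull toward measurements
            x = torch.fft.ifft2(k)
        return x.abs()
```

In this sketch the convolutional branch operates on the full-resolution feature map while the Transformer branch operates on a coarser token grid, which keeps the self-attention cost manageable; the real multi-scale feature extractor and multi-coil handling in the paper are more involved.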

Related research

02/28/2022 · A Multi-scale Transformer for Medical Image Segmentation: Architectures, Model Efficiency, and Benchmarks
Transformers have emerged to be successful in a number of natural langua...

08/08/2023 · SDLFormer: A Sparse and Dense Locality-enhanced Transformer for Accelerated MR Image Reconstruction
Transformers have emerged as viable alternatives to convolutional neural...

07/05/2022 · Swin Deformable Attention U-Net Transformer (SDAUT) for Explainable Fast MRI
Fast MRI aims to reconstruct a high fidelity image from partially observ...

11/01/2021 · HRViT: Multi-Scale High-Resolution Vision Transformer
Vision transformers (ViTs) have attracted much attention for their super...

03/29/2021 · Multi-Scale Vision Longformer: A New Vision Transformer for High-Resolution Image Encoding
This paper presents a new Vision Transformer (ViT) architecture Multi-Sc...

06/14/2022 · K-Space Transformer for Fast MRI Reconstruction with Implicit Representation
This paper considers the problem of fast MRI reconstruction. We propose ...

04/06/2023 · GA-HQS: MRI reconstruction via a generically accelerated unfolding approach
Deep unfolding networks (DUNs) are the foremost methods in the realm of ...
