Reference-based Magnetic Resonance Image Reconstruction Using Texture Transformer
Deep Learning (DL) based methods for magnetic resonance (MR) image reconstruction have shown superior performance in recent years. However, these methods either leverage only under-sampled data or require a paired fully-sampled auxiliary modality to perform multi-modal reconstruction. Consequently, existing approaches do not explore attention mechanisms that can transfer textures from fully-sampled reference data to under-sampled data within a single modality, which limits them in challenging cases. In this paper, we propose a novel Texture Transformer Module (TTM) for accelerated MRI reconstruction, in which we formulate the under-sampled data and the reference data as queries and keys in a transformer. The TTM facilitates joint feature learning across under-sampled and reference data, so that feature correspondences can be discovered by attention and accurate texture features can be leveraged during reconstruction. Notably, the proposed TTM can be stacked on prior MRI reconstruction approaches to further improve their performance. Extensive experiments show that TTM can significantly improve the performance of several popular DL-based MRI reconstruction methods.
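To make the query/key formulation concrete, the following is a minimal sketch of a texture-transformer attention block, reconstructed only from the abstract's description: queries come from under-sampled features, keys and values from fully-sampled reference features, hard attention selects the best-matching reference texture per position, and a soft-attention confidence gates the transfer. All layer names, the fusion convolution, and the gating scheme are assumptions for illustration, not the authors' exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TextureTransformerModule(nn.Module):
    """Hypothetical sketch of a TTM-style attention block (not the paper's code).

    Q comes from under-sampled features; K and V come from fully-sampled
    reference features. Hard attention picks the most relevant reference
    position for each query; a soft-attention confidence weights the fusion.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Fusion layer is an assumption; the paper does not specify its form.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, under: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
        b, c, h, w = under.shape
        # Flatten spatial dims: Q from under-sampled data, K/V from reference.
        q = under.flatten(2).transpose(1, 2)   # (B, HW, C)
        k = ref.flatten(2)                     # (B, C, HW)
        v = ref.flatten(2).transpose(1, 2)     # (B, HW, C)
        # Normalized correlation between every query and key position.
        attn = torch.bmm(F.normalize(q, dim=-1), F.normalize(k, dim=1))  # (B, HW, HW)
        # Hard attention: index of the best-matching reference position.
        idx = attn.argmax(dim=-1)                                        # (B, HW)
        # Soft attention: confidence of that match.
        conf = attn.max(dim=-1).values.reshape(b, 1, h, w)
        # Gather the matched reference texture features.
        t = torch.gather(v, 1, idx.unsqueeze(-1).expand(-1, -1, c))      # (B, HW, C)
        t = t.transpose(1, 2).reshape(b, c, h, w)
        # Fuse transferred textures with under-sampled features, gated by confidence.
        return under + self.fuse(torch.cat([under, t], dim=1)) * conf


# Example usage on dummy feature maps:
ttm = TextureTransformerModule(channels=64)
x = torch.randn(1, 64, 32, 32)   # under-sampled features
r = torch.randn(1, 64, 32, 32)   # reference features
out = ttm(x, r)                  # (1, 64, 32, 32)
```

Because the block maps a feature tensor back to a tensor of the same shape, it can in principle be stacked on top of an existing reconstruction backbone's feature maps, consistent with the abstract's claim that TTM augments prior methods.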