MoViT: Memorizing Vision Transformers for Medical Image Analysis

03/27/2023
by   Yiqing Shen, et al.

The synergy of long-range dependencies from transformers and local representations of image content from convolutional neural networks (CNNs) has led to advanced architectures and increased performance for various medical image analysis tasks due to their complementary benefits. However, compared with CNNs, transformers require considerably more training data, owing to a larger number of parameters and an absence of inductive bias. The need for increasingly large datasets continues to be problematic, particularly in the context of medical imaging, where both annotation efforts and data protection result in limited data availability. In this work, inspired by the human decision-making process of correlating new “evidence” with previously memorized “experience”, we propose a Memorizing Vision Transformer (MoViT) to alleviate the need for large-scale datasets to successfully train and deploy transformer-based architectures. MoViT leverages an external memory structure to cache historical attention snapshots during the training stage. To prevent overfitting, we incorporate an innovative memory update scheme, attention temporal moving average, to update the stored external memories with the historical moving average. For inference speedup, we design a prototypical attention learning method to distill the external memory into smaller representative subsets. We evaluate our method on a public histology image dataset and an in-house MRI dataset, demonstrating that MoViT, applied to varied medical image analysis tasks, can outperform vanilla transformer models across various data regimes, especially where only a small amount of annotated data is available. More importantly, MoViT can reach a competitive performance of ViT with only 3.0% of the training data.
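As a rough illustration only (not the paper's implementation), the "attention temporal moving average" idea can be sketched as an exponential moving average that blends the cached external memory with each new attention snapshot during training. The function name, the `decay` hyperparameter, and the toy array shapes below are assumptions for demonstration, not details from MoViT:

```python
import numpy as np

def temporal_moving_average(memory, snapshot, decay=0.9):
    """Sketch of an attention temporal moving average update:
    blend the cached external memory with the newest attention
    snapshot. `decay` is a hypothetical hyperparameter."""
    if memory is None:
        # first snapshot initializes the external memory
        return snapshot.copy()
    return decay * memory + (1.0 - decay) * snapshot

# Toy usage: accumulate key/value-like snapshots over training steps.
memory = None
for step in range(3):
    snapshot = np.full((4, 8), float(step))  # stand-in for cached attention keys/values
    memory = temporal_moving_average(memory, snapshot, decay=0.9)
```

The moving-average form means older snapshots decay geometrically rather than being overwritten, which is consistent with the abstract's goal of keeping the stored memories a smoothed record of training history rather than the latest batch alone.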


