The Multiscale Surface Vision Transformer

by Simon Dahan, et al.
King's College London

Surface meshes are a favoured domain for representing structural and functional information on the human cortex, but their complex topology and geometry pose significant challenges for deep learning analysis. While Transformers have excelled as domain-agnostic architectures for sequence-to-sequence learning, notably for structures where translation of the convolution operation is non-trivial, the quadratic cost of the self-attention operation remains an obstacle for many dense prediction tasks. Inspired by recent advances in hierarchical modelling with vision transformers, we introduce the Multiscale Surface Vision Transformer (MS-SiT) as a backbone architecture for surface deep learning. The self-attention mechanism is applied within local mesh windows to allow high-resolution sampling of the underlying data, while a shifted-window strategy improves the sharing of information between windows. Neighbouring patches are successively merged, allowing the MS-SiT to learn hierarchical representations suitable for any prediction task. Results demonstrate that the MS-SiT outperforms existing surface deep learning methods for neonatal phenotyping prediction tasks on the Developing Human Connectome Project (dHCP) dataset. Furthermore, building the MS-SiT backbone into a U-shaped architecture for surface segmentation yields competitive results on cortical parcellation using the UK Biobank (UKB) and manually annotated MindBoggle datasets. Code and trained models are publicly available at .
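The two core operations described above, self-attention restricted to local mesh windows and successive merging of neighbouring patches into coarser tokens, can be sketched in NumPy. This is an illustrative sketch under simplifying assumptions, not the authors' implementation: it uses a single unprojected attention head, treats the mesh patches as an ordered sequence, and the function names and window size are hypothetical.

```python
import numpy as np

def window_attention(x, window_size):
    """Scaled dot-product self-attention applied independently per local window.

    x: (num_patches, dim) array of mesh-patch embeddings.
    A shifted-window variant would roll x by window_size // 2 before
    partitioning, so information can flow across window boundaries.
    """
    n, d = x.shape
    out = np.empty_like(x)
    for start in range(0, n, window_size):
        w = x[start:start + window_size]               # one local mesh window
        scores = w @ w.T / np.sqrt(d)                  # pairwise similarities
        scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
        attn = np.exp(scores)
        attn /= attn.sum(axis=-1, keepdims=True)       # softmax within window
        out[start:start + window_size] = attn @ w      # attention-weighted mix
    return out

def merge_patches(x, merge_factor=2):
    """Concatenate neighbouring patch embeddings into fewer, wider tokens,
    halving the sequence length at each stage of the hierarchy."""
    n, d = x.shape
    usable = n - n % merge_factor                      # drop any remainder
    return x[:usable].reshape(usable // merge_factor, merge_factor * d)
```

Because each window attends only to its own `window_size` patches, the cost per layer is linear in the number of patches rather than quadratic, which is what makes high-resolution sampling of the mesh tractable.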


