
More than Encoder: Introducing Transformer Decoder to Upsample

by Yijiang Li, et al.

General segmentation models downsample images and then upsample them to restore resolution for pixel-level prediction. In such a scheme, the upsampling technique is vital to preserving information and achieving better performance. In this paper, we present a new upsampling approach, Attention Upsample (AU), which can serve as a general upsampling method and be incorporated into any segmentation model that possesses lateral connections. AU leverages pixel-level attention to model long-range dependencies and global information for better reconstruction. It consists of an Attention Decoder (AD) and bilinear upsampling as a residual connection to complement the upsampled features. AD adopts the idea of the decoder from the transformer, upsampling features conditioned on local and detailed information from the contracting path. Moreover, considering the extensive memory and computation cost of pixel-level attention, we further propose a window attention scheme that restricts attention computation to local windows instead of the global range. Incorporating window attention, we denote our decoder as Window Attention Decoder (WAD) and our upsampling method as Window Attention Upsample (WAU). We test our method on the classic U-Net structure, with lateral connections delivering information from the contracting path, and achieve state-of-the-art performance on the Synapse (80.30 DSC and 23.12 HD) and MSD Brain (74.75 DSC) datasets.
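The abstract describes the general shape of AU: a bilinear-upsample residual branch plus a decoder that attends, within local windows, over features conditioned on the lateral contracting-path features. The following numpy sketch illustrates that idea under loose assumptions, since the abstract does not specify the exact architecture: learned Q/K/V projections are replaced by identity maps, a single attention head is used, and queries are taken from the lateral features while keys/values come from the bilinearly upsampled features. `window_attention_upsample` and `bilinear_upsample2x` are illustrative names, not the authors' code.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bilinear_upsample2x(x):
    """Bilinear 2x upsampling of a (H, W, C) feature map."""
    H, W, C = x.shape
    # Sample positions of the output grid in input coordinates, clipped to edges.
    ys = np.clip((np.arange(2 * H) + 0.5) / 2 - 0.5, 0, H - 1)
    xs = np.clip((np.arange(2 * W) + 0.5) / 2 - 0.5, 0, W - 1)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, H - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, W - 1)
    wy = (ys - y0)[:, None, None]
    wx = (xs - x0)[None, :, None]
    top = x[y0][:, x0] * (1 - wx) + x[y0][:, x1] * wx
    bot = x[y1][:, x0] * (1 - wx) + x[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def window_attention_upsample(low, lateral, win=4):
    """Hypothetical sketch of WAU: bilinearly upsample the low-resolution
    features, then refine each non-overlapping local window with
    cross-attention, where queries come from the lateral (contracting-path)
    features and keys/values from the upsampled features. The bilinear
    branch is kept as a residual connection, as in the abstract.
    Learned Q/K/V projections are replaced by identity maps here."""
    up = bilinear_upsample2x(low)             # (2H, 2W, C)
    H2, W2, C = up.shape
    out = up.copy()                           # residual bilinear branch
    scale = 1.0 / np.sqrt(C)
    for i in range(0, H2, win):               # restrict attention to windows
        for j in range(0, W2, win):
            q = lateral[i:i + win, j:j + win].reshape(-1, C)  # queries
            kv = up[i:i + win, j:j + win].reshape(-1, C)      # keys/values
            attn = softmax(q @ kv.T * scale, axis=-1)
            out[i:i + win, j:j + win] += (attn @ kv).reshape(win, win, C)
    return out

# Usage: upsample a 4x4 feature map to 8x8, conditioned on an 8x8 lateral map.
low = np.random.rand(4, 4, 8)
lateral = np.random.rand(8, 8, 8)
out = window_attention_upsample(low, lateral, win=4)
print(out.shape)  # (8, 8, 8)
```

Restricting attention to `win x win` windows reduces the cost per window from quadratic in the full spatial size to quadratic in the window size, which is the memory/computation trade-off the abstract motivates.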

