Lawin Transformer: Improving Semantic Segmentation Transformer with Multi-Scale Representations via Large Window Attention

01/05/2022
by Haotian Yan, et al.

Multi-scale representations are crucial for semantic segmentation. The community has witnessed the flourishing of semantic segmentation convolutional neural networks (CNNs) that exploit multi-scale contextual information. Motivated by the strength of the vision transformer (ViT) in image classification, several semantic segmentation ViTs have recently been proposed, most of them attaining impressive results but at the cost of computational efficiency. In this paper, we introduce multi-scale representations into the semantic segmentation ViT via a window attention mechanism, improving both performance and efficiency. To this end, we propose large window attention, which allows a local window to query a larger context window at only a small computational overhead. By regulating the ratio of the context area to the query area, large window attention captures contextual information at multiple scales. Moreover, we adopt the spatial pyramid pooling framework in collaboration with large window attention, yielding a novel decoder named large window attention spatial pyramid pooling (LawinASPP) for the semantic segmentation ViT. Our resulting ViT, Lawin Transformer, is composed of an efficient hierarchical vision transformer (HVT) as the encoder and LawinASPP as the decoder. The empirical results demonstrate that Lawin Transformer offers improved efficiency compared to existing methods. Lawin Transformer further sets new state-of-the-art performance on the Cityscapes (84.4% mIoU), ADE20K (56.2% mIoU) and COCO-Stuff datasets. The code will be released at https://github.com/yan-hao-tian/lawin.
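The core mechanism described above can be illustrated in code. Below is a minimal PyTorch sketch of the large-window-attention idea, assuming one plausible realization: each P x P query window cross-attends to an (r*P) x (r*P) context window that is average-pooled back down to P x P tokens, so the attention cost stays close to that of ordinary window attention regardless of the ratio r. The class name, parameter names, and the average-pooling choice are illustrative assumptions, not the authors' exact implementation (see the released code for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LargeWindowAttention(nn.Module):
    """Sketch of large window attention: each P x P query window
    cross-attends to an (r*P) x (r*P) context window that is pooled
    back to P x P tokens, keeping the attention cost near that of
    plain window attention for any ratio r (r=1 recovers it exactly)."""

    def __init__(self, dim, num_heads=4, patch=8, ratio=2):
        super().__init__()
        self.patch = patch  # query window size P (illustrative default)
        self.ratio = ratio  # context/query size ratio r
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):
        # x: (B, C, H, W); H and W are assumed divisible by the patch size
        B, C, H, W = x.shape
        P, r = self.patch, self.ratio
        assert H % P == 0 and W % P == 0

        # Non-overlapping P x P query windows -> (B * n_win, P*P, C)
        q = F.unfold(x, kernel_size=P, stride=P)          # (B, C*P*P, n_win)
        n_win = q.shape[-1]
        q = q.transpose(1, 2).reshape(B * n_win, C, P * P).transpose(1, 2)

        # (r*P) x (r*P) context window centered on each query window
        # (zero-padded at the borders), pooled down to P x P tokens.
        pad = (r - 1) * P // 2
        ctx = F.unfold(x, kernel_size=r * P, stride=P, padding=pad)
        ctx = ctx.transpose(1, 2).reshape(B * n_win, C, r * P, r * P)
        ctx = F.adaptive_avg_pool2d(ctx, P).flatten(2).transpose(1, 2)

        # Each query window attends to its (pooled) larger context.
        out, _ = self.attn(q, ctx, ctx)                   # (B * n_win, P*P, C)

        # Fold the windows back into a feature map.
        out = out.transpose(1, 2).reshape(B, n_win, C * P * P).transpose(1, 2)
        return F.fold(out, (H, W), kernel_size=P, stride=P)

# Usage: one branch of a multi-scale decoder.
x = torch.randn(2, 64, 32, 32)
y = LargeWindowAttention(dim=64, num_heads=4, patch=8, ratio=2)(x)
print(y.shape)  # torch.Size([2, 64, 32, 32])
```

In a LawinASPP-style decoder, several such branches with different ratios (for example r = 2, 4, 8), alongside short-cut and image-pooling branches, would run in parallel and have their outputs concatenated, in the spirit of spatial pyramid pooling; the sketch above covers a single branch only.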


Related research

01/11/2022 · Pyramid Fusion Transformer for Semantic Segmentation
The recently proposed MaskFormer <cit.> gives a refreshed perspective on...

12/06/2022 · IncepFormer: Efficient Inception Transformer with Pyramid Pooling for Semantic Segmentation
Semantic segmentation usually benefits from global contexts, fine locali...

07/29/2021 · A Unified Efficient Pyramid Transformer for Semantic Segmentation
Semantic segmentation is a challenging problem due to difficulties in mo...

05/08/2022 · Transformer Tracking with Cyclic Shifting Window Attention
Transformer architecture has been showing its great strength in visual o...

06/25/2022 · CV 3315 Is All You Need: Semantic Segmentation Competition
This competition focuses on Urban-Sense Segmentation based on the vehicle ...

06/22/2021 · P2T: Pyramid Pooling Transformer for Scene Understanding
This paper jointly resolves two problems in vision transformer: i) the c...

05/21/2020 · Hierarchical Multi-Scale Attention for Semantic Segmentation
Multi-scale inference is commonly used to improve the results of semanti...
