SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers

05/31/2021
by Enze Xie, et al.

We present SegFormer, a simple, efficient yet powerful semantic segmentation framework which unifies Transformers with lightweight multilayer perceptron (MLP) decoders. SegFormer has two appealing features: 1) SegFormer comprises a novel hierarchically structured Transformer encoder which outputs multiscale features. It does not need positional encoding, thereby avoiding the interpolation of positional codes, which leads to decreased performance when the testing resolution differs from training. 2) SegFormer avoids complex decoders. The proposed MLP decoder aggregates information from different layers, and thus combines both local attention and global attention to render powerful representations. We show that this simple and lightweight design is the key to efficient segmentation with Transformers. We scale our approach up to obtain a series of models from SegFormer-B0 to SegFormer-B5, reaching significantly better performance and efficiency than previous counterparts. For example, SegFormer-B4 achieves 50.3% mIoU on ADE20K with 64M parameters, being 5x smaller and 2.2% better than the previous best method. Our best model, SegFormer-B5, achieves 84.0% mIoU on the Cityscapes validation set and shows excellent zero-shot robustness on Cityscapes-C. Code will be released at: github.com/NVlabs/SegFormer.
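To make the decoder idea concrete, here is a minimal PyTorch sketch of an all-MLP decode head in the spirit described above: a per-stage linear projection unifies channel dimensions, all feature maps are upsampled to the highest-resolution scale, concatenated, fused, and classified. The channel widths, embedding dimension, and class count below are illustrative assumptions, not the official NVlabs configuration.

    # Sketch of an all-MLP decode head aggregating multi-scale encoder features.
    # Channel sizes and strides are assumed for illustration only.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AllMLPDecoder(nn.Module):
        def __init__(self, in_channels=(32, 64, 160, 256), embed_dim=256, num_classes=150):
            super().__init__()
            # One linear projection per encoder stage, applied per pixel.
            self.proj = nn.ModuleList([nn.Linear(c, embed_dim) for c in in_channels])
            self.fuse = nn.Sequential(
                nn.Conv2d(embed_dim * len(in_channels), embed_dim, kernel_size=1),
                nn.BatchNorm2d(embed_dim),
                nn.ReLU(inplace=True),
            )
            self.classifier = nn.Conv2d(embed_dim, num_classes, kernel_size=1)

        def forward(self, features):
            # `features`: list of 4 maps from coarse-to-fine strides, e.g. 4/8/16/32.
            target_size = features[0].shape[2:]          # bring everything to 1/4 scale
            upsampled = []
            for f, proj in zip(features, self.proj):
                b, c, h, w = f.shape
                f = proj(f.flatten(2).transpose(1, 2))   # (B, H*W, C) -> (B, H*W, D)
                f = f.transpose(1, 2).reshape(b, -1, h, w)
                upsampled.append(F.interpolate(f, size=target_size,
                                               mode="bilinear", align_corners=False))
            x = self.fuse(torch.cat(upsampled, dim=1))   # concatenate and fuse all scales
            return self.classifier(x)                    # logits at 1/4 input resolution

    # Toy usage with random multi-scale features for a 512x512 input.
    feats = [torch.randn(2, c, 512 // s, 512 // s)
             for c, s in zip((32, 64, 160, 256), (4, 8, 16, 32))]
    print(AllMLPDecoder()(feats).shape)  # torch.Size([2, 150, 128, 128])

Because every per-stage operation is a pointwise linear layer rather than a heavy convolutional context module, the decoder stays lightweight while still mixing information across all encoder scales.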


04/06/2022

PP-LiteSeg: A Superior Real-Time Semantic Segmentation Model

Real-world applications have high demands for semantic segmentation meth...
05/05/2022

Cross-view Transformers for real-time Map-view Semantic Segmentation

We present cross-view transformers, an efficient attention-based model f...
08/03/2022

SSformer: A Lightweight Transformer for Semantic Segmentation

It is well believed that Transformer performs better in semantic segment...
05/14/2022

Transformer Scale Gate for Semantic Segmentation

Effectively encoding multi-scale contextual information is crucial for a...
04/22/2021

Multiscale Vision Transformers

We present Multiscale Vision Transformers (MViT) for video and image rec...
06/11/2021

Conterfactual Generative Zero-Shot Semantic Segmentation

Zero-shot learning is an essential part of computer vision. As a classic...
06/23/2021

Probabilistic Attention for Interactive Segmentation

We provide a probabilistic interpretation of attention and show that the...

Code Repositories

SegFormer

Official implementation of "SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers"


segformer-pytorch

Implementation of Segformer, Attention + MLP neural network for segmentation, in Pytorch


SegFormer

An edited version of SegFormer neural network
