WeakTr: Exploring Plain Vision Transformer for Weakly-supervised Semantic Segmentation

04/03/2023
by Lianghui Zhu, et al.

This paper explores the properties of the plain Vision Transformer (ViT) for Weakly-supervised Semantic Segmentation (WSSS). The class activation map (CAM) is of critical importance for understanding a classification network and launching WSSS. We observe that different attention heads of ViT focus on different image areas. Thus a novel weight-based method is proposed to estimate the importance of the attention heads end-to-end, while the self-attention maps are adaptively fused to produce high-quality CAM results that tend to cover objects more completely. Besides, we propose a ViT-based gradient clipping decoder for online retraining with the CAM results to complete the WSSS task. We name this plain Transformer-based Weakly-supervised learning framework WeakTr. It achieves state-of-the-art WSSS performance on standard benchmarks, i.e., 78.4% mIoU on the val set of PASCAL VOC 2012 and 50.3% mIoU on the val set of COCO 2014. Code is available at https://github.com/hustvl/WeakTr.
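The core idea above, weighting per-head self-attention maps by a learned importance score and fusing them to refine a CAM, can be sketched as follows. This is an illustrative toy, not the authors' implementation: the function names, shapes, and the use of a simple softmax-weighted sum followed by attention-based CAM propagation are all assumptions.

```python
import numpy as np

def fuse_attention_heads(attn, head_logits):
    """Fuse per-head attention maps with learned head-importance weights.

    attn: (H, N, N) row-stochastic attention map per head (assumed shape).
    head_logits: (H,) learnable importance scores (hypothetical parameter).
    Returns the (N, N) weighted fusion of the heads.
    """
    w = np.exp(head_logits - head_logits.max())
    w = w / w.sum()                        # softmax over heads
    return np.tensordot(w, attn, axes=1)   # convex combination of head maps

def refine_cam(coarse_cam, fused_attn):
    """Propagate coarse per-token class scores along fused attention affinities."""
    return fused_attn @ coarse_cam         # (N, C) refined CAM

# Toy usage with random data.
rng = np.random.default_rng(0)
H, N, C = 6, 16, 3                         # heads, tokens, classes
attn = rng.random((H, N, N))
attn /= attn.sum(axis=-1, keepdims=True)   # row-normalize each head
fused = fuse_attention_heads(attn, rng.normal(size=H))
cam = refine_cam(rng.random((N, C)), fused)
print(fused.shape, cam.shape)              # (16, 16) (16, 3)
```

Because each head's map is row-stochastic and the fusion is a convex combination, the fused map is also row-stochastic, so the refinement redistributes rather than inflates class evidence. In the paper the head weights are learned end-to-end with the classification loss; here they are just random logits.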
