k-means Mask Transformer

07/08/2022
by Qihang Yu et al.

The rise of transformers in vision tasks not only advances network backbone designs, but also opens a brand-new page for end-to-end image recognition (e.g., object detection and panoptic segmentation). Originating in Natural Language Processing (NLP), transformer architectures, consisting of self-attention and cross-attention, effectively learn long-range interactions between elements in a sequence. However, we observe that most existing transformer-based vision models simply borrow the idea from NLP, neglecting the crucial difference between languages and images, particularly the extremely large sequence length of spatially flattened pixel features. This subsequently impedes the learning in cross-attention between pixel features and object queries. In this paper, we rethink the relationship between pixels and object queries and propose to reformulate the cross-attention learning as a clustering process. Inspired by the traditional k-means clustering algorithm, we develop a k-means Mask Xformer (kMaX-DeepLab) for segmentation tasks, which not only improves the state-of-the-art, but also enjoys a simple and elegant design. As a result, our kMaX-DeepLab achieves new state-of-the-art performance of 58.0% PQ on the COCO val set and 83.5% mIoU on the Cityscapes val set. We hope our work can shed some light on designing transformers tailored for vision tasks. Code and models are available at https://github.com/google-research/deeplab2
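The clustering view described above can be sketched in a few lines of NumPy. This is only an illustrative sketch of the idea, not the authors' implementation: the affinity computation, the residual update, and the function name are assumptions. The key change from standard cross-attention is the assignment step, which takes an argmax over the (small) set of object queries instead of a softmax over the (very long) flattened pixel axis.

```python
import numpy as np

def kmeans_cross_attention(queries, pixels):
    """One k-means-style cross-attention update (illustrative sketch).

    queries: (N, D) object queries, viewed as cluster centers
    pixels:  (HW, D) spatially flattened pixel features
    Returns the updated queries and the hard assignment map.
    """
    # Affinity between each cluster center and each pixel: (N, HW).
    logits = queries @ pixels.T
    # k-means assignment step: hard-assign every pixel to its closest
    # cluster (argmax over the query axis), replacing the usual softmax
    # over the extremely long pixel axis.
    assign = np.zeros_like(logits)
    assign[logits.argmax(axis=0), np.arange(logits.shape[1])] = 1.0
    # k-means update step: aggregate the pixels assigned to each
    # cluster; here the aggregate is added residually to the queries.
    update = assign @ pixels
    return queries + update, assign
```

In this sketch the assignment matrix doubles as a set of binary segmentation masks, one per query, which is what makes the clustering formulation a natural fit for mask-based segmentation.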


Related research:

- 2D Object Detection with Transformers: A Review (06/07/2023): Astounding performance of Transformers in natural language processing (N...
- ReMaX: Relaxing for Better Training on Efficient Panoptic Segmentation (06/29/2023): This paper presents a new mechanism to facilitate the training of mask t...
- CMT-DeepLab: Clustering Mask Transformers for Panoptic Segmentation (06/17/2022): We propose Clustering Mask Transformer (CMT-DeepLab), a transformer-base...
- BoxeR: Box-Attention for 2D and 3D Transformers (11/25/2021): In this paper, we propose a simple attention mechanism, we call Box-Atte...
- Mean Shift Mask Transformer for Unseen Object Instance Segmentation (11/21/2022): Segmenting unseen objects is a critical task in many different domains. ...
- Contextual Transformer Networks for Visual Recognition (07/26/2021): Transformer with self-attention has led to the revolutionizing of natura...
- MAXIM: Multi-Axis MLP for Image Processing (01/09/2022): Recent progress on Transformers and multi-layer perceptron (MLP) models ...
