BoxeR: Box-Attention for 2D and 3D Transformers

11/25/2021
by Duy-Kien Nguyen, et al.

In this paper, we propose a simple attention mechanism, which we call Box-Attention. It enables spatial interaction between grid features sampled from boxes of interest, and improves the learning capability of transformers for several vision tasks. Specifically, we present BoxeR, short for Box Transformer, which attends to a set of boxes by predicting their transformation from a reference window on an input feature map. BoxeR computes attention weights on these boxes by taking their grid structure into account. Notably, BoxeR-2D naturally reasons about box information within its attention module, making it suitable for end-to-end instance detection and segmentation tasks. By learning invariance to rotation in the box-attention module, BoxeR-3D is capable of generating discriminative information from a bird's-eye-view plane for end-to-end 3D object detection. Our experiments demonstrate that the proposed BoxeR-2D achieves better results on COCO detection, and reaches performance comparable to the well-established and highly-optimized Mask R-CNN on COCO instance segmentation. BoxeR-3D already obtains compelling performance for the vehicle category of Waymo Open, without any class-specific optimization. The code will be released.
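To make the mechanism concrete, below is a minimal PyTorch sketch of the box-attention idea as described in the abstract: each query predicts a transformation (translation and rescaling) of a reference window, a small grid of features is bilinearly sampled from the resulting box, and learned attention weights over the grid cells pool the sampled features. All names here (BoxAttention, grid_size, to_box, etc.) are illustrative assumptions, not the authors' released code, and the sketch omits the paper's multi-head and multi-scale machinery.

```python
# Hypothetical single-head, single-scale sketch of box-attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BoxAttention(nn.Module):
    def __init__(self, dim, grid_size=2):
        super().__init__()
        self.grid_size = grid_size
        # Predict (dx, dy, dw, dh): translation and log-scale of the window.
        self.to_box = nn.Linear(dim, 4)
        # One attention weight per sampled grid cell.
        self.to_attn = nn.Linear(dim, grid_size * grid_size)
        self.proj = nn.Linear(dim, dim)

    def forward(self, query, ref_center, ref_size, feat):
        # query:      (B, Q, C)    query embeddings
        # ref_center: (B, Q, 2)    reference window centers in [-1, 1]
        # ref_size:   (B, Q, 2)    reference window sizes, normalized coords
        # feat:       (B, C, H, W) input feature map
        B, Q, C = query.shape
        g = self.grid_size

        # Transform the reference window into a box of interest.
        dxdy, dwdh = self.to_box(query).split(2, dim=-1)
        center = ref_center + dxdy * ref_size      # translate
        size = ref_size * dwdh.exp()               # rescale

        # Build a g x g sampling grid inside each predicted box.
        lin = torch.linspace(-0.5, 0.5, g, device=feat.device)
        gy, gx = torch.meshgrid(lin, lin, indexing="ij")
        offsets = torch.stack([gx, gy], dim=-1).view(1, 1, g * g, 2)
        points = center.unsqueeze(2) + offsets * size.unsqueeze(2)  # (B, Q, g*g, 2)

        # Bilinearly sample grid features from the predicted boxes.
        sampled = F.grid_sample(feat, points, mode="bilinear",
                                align_corners=False)  # (B, C, Q, g*g)
        sampled = sampled.permute(0, 2, 3, 1)         # (B, Q, g*g, C)

        # Attend over the grid cells of each box and pool.
        attn = self.to_attn(query).softmax(dim=-1)    # (B, Q, g*g)
        out = (attn.unsqueeze(-1) * sampled).sum(dim=2)
        return self.proj(out)
```

In this simplified reading, attention weights are produced directly from the query rather than from query-key dot products, which is what keeps the cost proportional to the (small) number of grid cells per box rather than to the full feature map.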


Related research:

- ISTR: End-to-End Instance Segmentation with Transformers (05/03/2021)
  End-to-end paradigms significantly improve the accuracy of various deep-...
- SOTR: Segmenting Objects with Transformers (08/15/2021)
  Most recent transformer-based models show impressive performance on visi...
- k-means Mask Transformer (07/08/2022)
  The rise of transformers in vision tasks not only advances network backb...
- A Simple Single-Scale Vision Transformer for Object Localization and Instance Segmentation (12/17/2021)
  This work presents a simple vision transformer design as a strong baseli...
- MaX-DeepLab: End-to-End Panoptic Segmentation with Mask Transformers (12/01/2020)
  We present MaX-DeepLab, the first end-to-end model for panoptic segmenta...
- IoU-Enhanced Attention for End-to-End Task Specific Object Detection (09/21/2022)
  Without densely tiled anchor boxes or grid points in the image, sparse R...
- Improved Multiscale Vision Transformers for Classification and Detection (12/02/2021)
  In this paper, we study Multiscale Vision Transformers (MViT) as a unifi...
