Bottleneck Transformers for Visual Recognition

01/27/2021
by   Aravind Srinivas, et al.
UC Berkeley

We present BoTNet, a conceptually simple yet powerful backbone architecture that incorporates self-attention for multiple computer vision tasks including image classification, object detection and instance segmentation. By just replacing the spatial convolutions with global self-attention in the final three bottleneck blocks of a ResNet, with no other changes, our approach improves upon the baselines significantly on instance segmentation and object detection while also reducing the parameters, with minimal overhead in latency. Through the design of BoTNet, we also point out how ResNet bottleneck blocks with self-attention can be viewed as Transformer blocks. Without any bells and whistles, BoTNet achieves 44.4% Mask AP and 49.7% Box AP on the COCO Instance Segmentation benchmark using the Mask R-CNN framework, surpassing the previous best published single-model, single-scale results of ResNeSt evaluated on the COCO validation set. Finally, we present a simple adaptation of the BoTNet design for image classification, resulting in models that achieve a strong performance of 84.7% top-1 accuracy on the ImageNet benchmark while being up to 2.33x faster in compute time than the popular EfficientNet models on TPU-v3 hardware. We hope our simple and effective approach will serve as a strong baseline for future research in self-attention models for vision.
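The core idea of the abstract — swapping the 3x3 spatial convolution inside a ResNet bottleneck block for global multi-head self-attention — can be sketched as follows. This is a simplified illustration, not the authors' code: the class name `BoTBlock` and the channel sizes are assumptions, and the paper's 2D relative position encodings are omitted, with plain `nn.MultiheadAttention` standing in for the attention layer.

```python
import torch
import torch.nn as nn

class BoTBlock(nn.Module):
    """Simplified sketch of a Bottleneck Transformer (BoT) block:
    a ResNet bottleneck whose 3x3 convolution is replaced by global
    multi-head self-attention over all spatial positions. The paper
    additionally uses 2D relative position self-attention, which this
    sketch omits for brevity."""

    def __init__(self, in_ch: int, mid_ch: int, heads: int = 4):
        super().__init__()
        # 1x1 conv to reduce channels (standard bottleneck entry)
        self.reduce = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid_ch),
            nn.ReLU(inplace=True),
        )
        # Global self-attention in place of the 3x3 spatial conv
        self.mhsa = nn.MultiheadAttention(mid_ch, heads, batch_first=True)
        # 1x1 conv to expand channels back (standard bottleneck exit)
        self.expand = nn.Sequential(
            nn.Conv2d(mid_ch, in_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(in_ch),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = x
        y = self.reduce(x)                      # B x C x H x W
        b, c, h, w = y.shape
        seq = y.flatten(2).transpose(1, 2)      # B x HW x C: every position attends to every other
        attn_out, _ = self.mhsa(seq, seq, seq)
        y = attn_out.transpose(1, 2).reshape(b, c, h, w)
        return self.relu(self.expand(y) + identity)

# Shape check at the resolution where the final ResNet stage operates
block = BoTBlock(in_ch=2048, mid_ch=512)
out = block(torch.randn(1, 2048, 14, 14))
print(tuple(out.shape))
```

Because the attention is global, its cost grows quadratically with the number of spatial positions, which is why the paper applies it only in the final, lowest-resolution stage of the ResNet.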


05/30/2021

EPSANet: An Efficient Pyramid Split Attention Block on Convolutional Neural Network

Recently, it has been demonstrated that the performance of a deep convol...
12/17/2021

A Simple Single-Scale Vision Transformer for Object Localization and Instance Segmentation

This work presents a simple vision transformer design as a strong baseli...
10/05/2021

Transformer Assisted Convolutional Network for Cell Instance Segmentation

Region proposal based methods like R-CNN and Faster R-CNN models have pr...
06/06/2022

Separable Self-attention for Mobile Vision Transformers

Mobile vision transformers (MobileViT) can achieve state-of-the-art perf...
04/19/2020

ResNeSt: Split-Attention Networks

While image classification models have recently continued to advance, mo...
03/18/2022

Laneformer: Object-aware Row-Column Transformers for Lane Detection

We present Laneformer, a conceptually simple yet powerful transformer-ba...
05/12/2023

ROI-based Deep Image Compression with Swin Transformers

Encoding the Region Of Interest (ROI) with better quality than the backg...

Code Repositories

bottleneck-transformer-pytorch

Implementation of Bottleneck Transformer in PyTorch


BottleneckTransformers

Bottleneck Transformers for Visual Recognition


Bottleneck-Transformers-for-Visual-Recognition

PyTorch Implementation of BoTNet. Link to paper: https://arxiv.org/abs/2101.11605


bottleneck-transformer-flax

A JAX/Flax implementation of Bottleneck Transformers for Visual Recognition
