QKVA grid: Attention in Image Perspective and Stacked DETR

by Wenyuan Sheng, et al.

We present a new model named Stacked-DETR (SDETR), which inherits the main ideas of the canonical DETR. We improve DETR in two directions: reducing the cost of training and introducing a stacked architecture to enhance performance. For the former, we focus on the inside of the Attention block and propose the QKVA grid, a new perspective for describing the attention process. With it, we can go a step further in understanding how Attention works on image problems and what effect multiple heads have. These two ideas contribute to the design of the single-head encoder layer. For the latter, SDETR achieves a notable improvement (+1.1 AP, +3.4 APs) over DETR. In particular, on small objects SDETR achieves better results than the optimized Faster R-CNN baseline, which was a shortcoming of DETR. Our changes are based on the code of DETR. Training code and pretrained models are available at
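The paper's QKVA grid itself is not reproduced in this abstract; as a point of reference, the canonical single-head scaled dot-product attention that the single-head encoder layer builds on can be sketched as follows (a minimal NumPy illustration; all names and shapes here are illustrative, not the paper's code):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def single_head_attention(X, Wq, Wk, Wv):
    # Project the n flattened image tokens (n x d) into queries, keys, values
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    # A is the n x n attention map: row i weights every position for token i
    A = softmax(Q @ K.T / np.sqrt(d_k))
    return A @ V

rng = np.random.default_rng(0)
n, d = 4, 8  # e.g. 4 spatial tokens with 8-dim features
X = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = single_head_attention(X, Wq, Wk, Wv)
assert out.shape == (n, d)
```

A multi-head variant would run several such projections in parallel on lower-dimensional slices and concatenate the results; the abstract's single-head encoder layer drops that split.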

