Task Specific Attention is one more thing you need for object detection
Various models have been proposed to solve the object detection problem. However, most of them require many hand-designed components to achieve good performance. To mitigate these issues, the Transformer-based DETR and its variant Deformable DETR were proposed. They resolved much of the complexity of designing an object detection head, but it has remained unclear whether Transformer-based models can be considered the state of the art in object detection beyond doubt. Furthermore, since DETR adopts the Transformer only for the detection head while still relying on a CNN backbone, it has been uncertain whether a competitive end-to-end pipeline could be built from a combination of attention modules alone. In this paper, we show that combining several attention modules with our new Task Specific Split Transformer (TSST) is sufficient to produce top COCO results without traditionally hand-designed components. By splitting a general-purpose attention module into two separate task-specific attention modules, the proposed method offers a way to design simpler object detection models than before. Extensive experiments on the COCO benchmark demonstrate the effectiveness of our approach. Code is released at https://github.com/navervision/tsst
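To make the "task-specific split" idea concrete, below is a minimal sketch of how one shared set of object queries might be routed through two separate attention modules, one specialized for classification and one for box regression. This is an illustration of the concept only, not the authors' implementation; the module name `TaskSpecificSplitAttention` and all dimensions and heads are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical sketch (not the paper's code): split a single general-purpose
# attention stage into two task-specific attention modules, one feeding a
# classification head and one feeding a box-regression head.
class TaskSpecificSplitAttention(nn.Module):
    def __init__(self, d_model=256, n_heads=8, num_classes=91):
        super().__init__()
        # Two independent cross-attention modules, one per task.
        self.cls_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.box_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cls_head = nn.Linear(d_model, num_classes)
        self.box_head = nn.Linear(d_model, 4)  # (cx, cy, w, h), normalized

    def forward(self, queries, memory):
        # queries: (B, num_queries, d_model) object queries
        # memory:  (B, HW, d_model) flattened encoder features
        cls_feat, _ = self.cls_attn(queries, memory, memory)
        box_feat, _ = self.box_attn(queries, memory, memory)
        return self.cls_head(cls_feat), self.box_head(box_feat).sigmoid()

# Usage: 2 images, 100 object queries over a 19x19 feature map.
model = TaskSpecificSplitAttention()
q = torch.randn(2, 100, 256)
mem = torch.randn(2, 361, 256)
logits, boxes = model(q, mem)
print(logits.shape, boxes.shape)  # (2, 100, 91) and (2, 100, 4)
```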