YOLOSA: Object detection based on 2D local feature superimposed self-attention

06/23/2022
by   Weisheng Li, et al.

We analyzed the network structures of real-time object detection models and found that the features at the feature-concatenation stage are very rich; applying an attention module there can effectively improve a model's detection accuracy. However, commonly used attention and self-attention modules perform poorly there in both detection accuracy and inference efficiency. We therefore propose a novel self-attention module, called 2D local feature superimposed self-attention, for the feature-concatenation stage of the neck network. This module reflects global features through local features and local receptive fields. We also propose and optimize an efficient decoupled head and AB-OTA, and achieve SOTA results. Average precisions of 49.0% (66.2 FPS), 46.1% (80.6 FPS), and 39.1% (100 FPS) were obtained for the large, medium, and small-scale models built with our proposed improvements. Our models exceed YOLOv5 by 0.8% – 3.1% in average precision.
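The abstract does not spell out the module's internals, but the core idea, approximating global context by computing self-attention inside local 2D receptive fields, can be illustrated with a minimal NumPy sketch. The window-based attention below is a hypothetical stand-in for the paper's module, not its actual implementation; `window`, `local_window_self_attention`, and the softmax scaling are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def local_window_self_attention(feat, window=4):
    """Self-attention computed within non-overlapping 2D windows.

    feat: (H, W, C) feature map; H and W must be divisible by `window`.
    Illustrates the general idea of reflecting context through local
    receptive fields; the exact YOLOSA module may differ.
    """
    H, W, C = feat.shape
    out = np.empty_like(feat)
    for i in range(0, H, window):
        for j in range(0, W, window):
            # Flatten the window into (window*window, C) tokens.
            patch = feat[i:i + window, j:j + window].reshape(-1, C)
            # Scaled dot-product attention among tokens of this window.
            attn = softmax(patch @ patch.T / np.sqrt(C))
            out[i:i + window, j:j + window] = (attn @ patch).reshape(window, window, C)
    return out

feat = np.random.rand(8, 8, 16).astype(np.float32)
refined = local_window_self_attention(feat)
print(refined.shape)  # (8, 8, 16)
```

Because attention is restricted to each window, the cost is linear in the number of windows rather than quadratic in H×W, which is consistent with the paper's emphasis on inference efficiency.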


Related research

06/24/2022 · Excavating RoI Attention for Underwater Object Detection
Self-attention is one of the most successful designs in deep learning, w...

12/14/2020 · Decoupled Self Attention for Accurate One Stage Object Detection
As the scale of object detection dataset is smaller than that of image r...

04/30/2020 · Salient Object Detection Combining a Self-attention Module and a Feature Pyramid Network
Salient object detection has achieved great improvement by using the Ful...

04/06/2021 · Hyperspectral and LiDAR data classification based on linear self-attention
An efficient linear self-attention fusion model is proposed in this pape...

09/15/2023 · M^3Net: Multilevel, Mixed and Multistage Attention Network for Salient Object Detection
Most existing salient object detection methods mostly use U-Net or featu...

12/04/2018 · Factorized Attention: Self-Attention with Linear Complexities
Recent works have been applying self-attention to various fields in comp...

12/10/2022 · CamoFormer: Masked Separable Attention for Camouflaged Object Detection
How to identify and segment camouflaged objects from the background is c...
