MDSSD: Multi-scale Deconvolutional Single Shot Detector for small objects
In order to improve detection accuracy for objects at different scales, most recent works exploit the pyramidal feature hierarchy of ConvNets from bottom to top. Nevertheless, the weak semantic information of the bottom layers makes them perform poorly at detection, especially for small objects, while most fine details are lost in the top layers. In this paper, we design a Multi-scale Deconvolutional Single Shot Detector for small objects (MDSSD for short). To obtain feature maps with enriched representational power, we fuse high-level feature maps carrying semantic information into the low-level feature maps via deconvolutional Fusion Blocks. It is noteworthy that multiple high-level layers at different scales are upsampled simultaneously in our framework. We implement skip connections to form more descriptive feature maps, and predictions are made on these new fusion layers. Our proposed framework achieves 78.6% mAP at 38.5 FPS with only a 300×300 input.
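The core idea of the abstract, adding an upsampled high-level (semantic) feature map to a low-level (detailed) one through a deconvolutional fusion block with a skip connection, can be illustrated with a minimal PyTorch sketch. The channel widths, layer arrangement, and element-wise-sum fusion below are illustrative assumptions rather than the paper's exact block design.

```python
# Minimal sketch of a deconvolutional fusion block, assuming sum-based fusion.
# Layer counts, kernel sizes, and the fusion operation are assumptions.
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    def __init__(self, low_channels, high_channels, out_channels, scale=2):
        super().__init__()
        # Branch for the low-level (high-resolution, detail-rich) feature map.
        self.low_branch = nn.Sequential(
            nn.Conv2d(low_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
        )
        # Branch that upsamples the high-level (semantic) feature map with a
        # learned deconvolution (transposed convolution) by the given scale.
        self.high_branch = nn.Sequential(
            nn.ConvTranspose2d(high_channels, out_channels,
                               kernel_size=scale * 2, stride=scale,
                               padding=scale // 2),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
        )
        # Post-processing after fusion; predictions would be made on this output.
        self.post = nn.Sequential(
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, low_feat, high_feat):
        # Element-wise sum fuses fine detail (low_feat) with semantics (high_feat).
        fused = self.low_branch(low_feat) + self.high_branch(high_feat)
        return self.post(fused)

# Example: fuse an SSD300-style 38x38 low-level map with a 19x19 high-level map.
low = torch.randn(1, 512, 38, 38)
high = torch.randn(1, 1024, 19, 19)
out = FusionBlock(512, 1024, 256, scale=2)(low, high)
print(out.shape)  # torch.Size([1, 256, 38, 38])
```

In the framework described, several such blocks would run in parallel, one per scale, so that multiple high-level layers are upsampled and fused simultaneously.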