SS3D: Single Shot 3D Object Detector
Single-stage deep learning algorithms for 2D object detection were popularized by the Single Shot MultiBox Detector (SSD) [13] and have been widely adopted in embedded applications. PointPillars [1] is a fast 3D object detection algorithm that produces state-of-the-art results and uses an SSD adapted for 3D object detection. The main downside of PointPillars is its two-stage approach: a learned input representation based on fully connected layers, followed by SSD. In this paper we present Single Shot 3D Object Detection (SS3D), a single-stage 3D object detection algorithm that combines a straightforward, statistically computed input representation with a single-shot object detector based on PointPillars. The method can be considered a single-shot deep learning algorithm because computing the input representation is straightforward and incurs little computational cost. We also extend our method to stereo input and show that, aided by an additional semantic segmentation input, it achieves accuracy similar to state-of-the-art stereo-based detectors. Matching the accuracy of two-stage detectors with a single-stage approach is important for 3D object detection, as single-stage approaches are simpler to deploy in real-life applications. With both LiDAR and stereo input, our method outperforms PointPillars, one of the state-of-the-art methods for 3D object detection. With LiDAR input, our input representation improves the AP3D of car objects in the moderate category from 74.99 to 76.84; with stereo input, it improves the AP3D of car objects in the moderate category from 38.13 to 45.13. Our results are also better than those of other popular 3D object detectors such as AVOD [7] and F-PointNet [8].
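To make the contrast with PointPillars' learned encoder concrete, the sketch below shows one plausible way a statistically computed pillar representation could be built. The abstract does not specify which statistics SS3D uses, so the grid extents, pillar size, and chosen per-pillar statistics (point count, mean and max height, mean intensity) are assumptions for illustration only, not the authors' exact method.

```python
import numpy as np

def statistical_pillar_features(points,
                                x_range=(0.0, 69.12), y_range=(-39.68, 39.68),
                                pillar_size=0.16):
    """Aggregate a LiDAR point cloud of shape (N, 4) = (x, y, z, intensity)
    into a fixed-size feature grid using simple per-pillar statistics
    instead of a learned (fully connected / PointNet-style) encoder.
    All parameter values here are illustrative assumptions."""
    nx = int((x_range[1] - x_range[0]) / pillar_size)
    ny = int((y_range[1] - y_range[0]) / pillar_size)

    # Keep only points that fall inside the grid.
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[mask]

    # Map each point to a flat pillar index.
    ix = ((pts[:, 0] - x_range[0]) / pillar_size).astype(np.int64)
    iy = ((pts[:, 1] - y_range[0]) / pillar_size).astype(np.int64)
    flat = ix * ny + iy

    # Per-pillar statistics: point count, mean z, max z, mean intensity.
    count = np.bincount(flat, minlength=nx * ny)
    sum_z = np.bincount(flat, weights=pts[:, 2], minlength=nx * ny)
    sum_i = np.bincount(flat, weights=pts[:, 3], minlength=nx * ny)
    max_z = np.full(nx * ny, -np.inf)
    np.maximum.at(max_z, flat, pts[:, 2])
    max_z[count == 0] = 0.0  # empty pillars get a zero feature

    nonzero = np.maximum(count, 1)
    features = np.stack([
        count.astype(np.float32),
        (sum_z / nonzero).astype(np.float32),
        max_z.astype(np.float32),
        (sum_i / nonzero).astype(np.float32),
    ], axis=0).reshape(4, nx, ny)
    return features  # (C, H, W) pseudo-image for the SSD-style detector
```

The resulting (C, H, W) tensor plays the role of the pseudo-image that PointPillars produces with its learned feature encoder, but it is computed in a single pass of cheap aggregations, which is what allows the overall pipeline to remain single-shot.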