Fast Point R-CNN

08/08/2019
by Yilun Chen, et al.

We present a unified, efficient, and effective framework for point-cloud-based 3D object detection. Our two-stage approach uses both a voxel representation and the raw point cloud to exploit their respective advantages. The first-stage network takes the voxel representation as input and consists only of lightweight convolutional operations, producing a small number of high-quality initial predictions. The coordinates and indexed convolutional features of each point in an initial prediction are then fused with an attention mechanism, preserving both accurate localization and context information. The second stage operates on the interior points with their fused features to further refine the prediction. Evaluated on the KITTI dataset for both 3D and Bird's Eye View (BEV) detection, our method achieves state-of-the-art results at a 15 FPS detection rate.
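The fusion step above can be sketched in a few lines of numpy. This is a hypothetical illustration of attention-weighted fusion of per-point coordinates with backbone features, not the paper's exact operations; every name, shape, and weight here is an assumption for the sketch.

```python
import numpy as np

# Illustrative sketch (not the paper's actual ops): each interior point of a
# proposal has raw (x, y, z) coordinates plus a convolutional feature vector
# indexed from the voxel backbone; the two are fused with attention weights.
rng = np.random.default_rng(0)

N, C = 16, 32                         # points in one proposal, feature channels
coords = rng.normal(size=(N, 3))      # raw coordinates of interior points
conv_feats = rng.normal(size=(N, C))  # features gathered from the backbone
w_coord = rng.normal(size=(3, C)) * 0.1  # hypothetical coordinate-embedding weights

def attention_fuse(coords, conv_feats, w_coord):
    """Fuse coordinate embeddings with conv features via a softmax
    attention over points (a simple stand-in for the paper's mechanism)."""
    coord_emb = np.tanh(coords @ w_coord)            # (N, C) geometric embedding
    scores = np.sum(coord_emb * conv_feats, axis=1)  # (N,) per-point relevance
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                             # attention weights, sum to 1
    # each point keeps its exact location info and gains weighted context
    return np.concatenate([coord_emb, alpha[:, None] * conv_feats], axis=1)

fused = attention_fuse(coords, conv_feats, w_coord)
print(fused.shape)  # (16, 64): per-point descriptor with location + context
```

Concatenating rather than summing keeps the exact coordinate information separate from the attention-scaled context, mirroring the abstract's point that both localization and context are preserved for the second-stage refinement.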


Related research

- Pillar R-CNN for Point Cloud 3D Object Detection (02/26/2023): The performance of point cloud 3D object detection hinges on effectively...
- STD: Sparse-to-Dense 3D Object Detector for Point Cloud (07/22/2019): We present a new two-stage 3D object detection framework, named sparse-t...
- PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection (12/31/2019): We present a novel and high-performance 3D object detection framework, n...
- Patch Refinement – Localized 3D Object Detection (10/09/2019): We introduce Patch Refinement a two-stage model for accurate 3D object d...
- From Voxel to Point: IoU-guided 3D Object Detection for Point Cloud with Voxel-to-Point Decoder (08/08/2021): In this paper, we present an Intersection-over-Union (IoU) guided two-st...
- PV-RCNN++: Point-Voxel Feature Set Abstraction With Local Vector Representation for 3D Object Detection (01/31/2021): 3D object detection is receiving increasing attention from both industry...
