Multi-Frame to Single-Frame: Knowledge Distillation for 3D Object Detection

by Yue Wang et al.

A common dilemma in 3D object detection for autonomous driving is that high-quality, dense point clouds are available at training time but not at test time. We use knowledge distillation to bridge the gap between a model trained on high-quality inputs and its counterpart evaluated on low-quality inputs at inference time. In particular, we design a two-stage training pipeline for point cloud object detection. First, we train an object detection model on dense point clouds, which are generated from multiple frames using extra information available only at training time. Then, we train an identical copy of the model on sparse single-frame point clouds, with a consistency regularizer that aligns the features of the two models. We show that this procedure improves performance on low-quality data at test time without adding any inference overhead.
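The second training stage can be sketched as follows. The abstract does not specify the exact loss form, so the mean-squared-error consistency term, the loss weighting, and the function names below are assumptions for illustration only.

```python
import numpy as np

def consistency_loss(teacher_feats, student_feats):
    """Feature-consistency term between the multi-frame (teacher) model
    and the single-frame (student) model. An L2 penalty is assumed here;
    the paper's abstract does not state the exact form."""
    return float(np.mean((teacher_feats - student_feats) ** 2))

def stage2_loss(detection_loss, teacher_feats, student_feats, weight=1.0):
    """Total stage-2 objective: the student's ordinary detection loss on
    sparse single-frame inputs plus the weighted consistency term.
    `weight` is a hypothetical hyperparameter."""
    return detection_loss + weight * consistency_loss(teacher_feats, student_feats)

# Hypothetical usage: features from the frozen teacher (dense, multi-frame
# input) regularize the student (sparse, single-frame input).
teacher_feats = np.ones((4, 8))   # stand-in for teacher's intermediate features
student_feats = np.zeros((4, 8))  # stand-in for student's intermediate features
loss = stage2_loss(detection_loss=1.0,
                   teacher_feats=teacher_feats,
                   student_feats=student_feats,
                   weight=0.5)
```

At inference only the student runs on single-frame input, which is why the method adds no test-time overhead.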


Related papers:

- Boosting Single-Frame 3D Object Detection by Simulating Multi-Frame Point Clouds
- Object DGCNN: 3D Object Detection using Dynamic Graphs
- Auto4D: Learning to Label 4D Objects from Sequential Point Clouds
- PointDistiller: Structured Knowledge Distillation Towards Efficient and Compact 3D Detection
- Temporal Point Cloud Completion with Pose Disturbance
- MPPNet: Multi-Frame Feature Intertwining with Proxy Points for 3D Temporal Object Detection
- Offboard 3D Object Detection from Point Cloud Sequences