Attention-based Depth Distillation with 3D-Aware Positional Encoding for Monocular 3D Object Detection

11/30/2022
by   Zizhang Wu, et al.

Monocular 3D object detection is a low-cost but challenging task, as it requires generating accurate 3D localization solely from a single image input. Recently developed depth-assisted methods show promising results by using explicit depth maps as intermediate features, which are either precomputed by monocular depth estimation networks or jointly learned alongside 3D object detection. However, inevitable errors from estimated depth priors may cause misalignment between semantic information and 3D localization, resulting in feature smearing and suboptimal predictions. To mitigate this issue, we propose ADD, an Attention-based Depth knowledge Distillation framework with 3D-aware positional encoding. Unlike previous knowledge distillation frameworks that adopt stereo- or LiDAR-based teachers, we build our teacher with an architecture identical to the student's but with extra ground-truth depth as input. Owing to this teacher design, our framework is seamless, free of domain gaps, easy to implement, and compatible with object-wise ground-truth depth. Specifically, we leverage intermediate features and responses for knowledge distillation. Considering long-range 3D dependencies, we propose 3D-aware self-attention and target-aware cross-attention modules for student adaptation. Extensive experiments verify the effectiveness of our framework on the challenging KITTI 3D object detection benchmark. We implement our framework on three representative monocular detectors and achieve state-of-the-art performance with no additional inference computational cost relative to the baseline models. Our code is available at https://github.com/rockywind/ADD.
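The core idea described above can be sketched as a feature-imitation loss between a positionally-encoded, self-attended student feature map and the teacher's features. The sketch below is illustrative only, not the paper's implementation: the function names, the use of shared query/key projections, and the plain L2 imitation loss are assumptions for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(feats, pos_enc):
    """Toy 3D-aware self-attention for student adaptation.

    feats:   (N, d) token features from the student backbone.
    pos_enc: (N, d) 3D-aware positional encoding added to queries/keys,
             so attention weights reflect 3D proximity, not just appearance.
    """
    q = k = feats + pos_enc
    attn = softmax(q @ k.T / np.sqrt(feats.shape[1]))  # (N, N) weights
    return attn @ feats  # attended (adapted) student features

def distill_loss(student_feats, teacher_feats):
    # L2 feature-imitation loss: pull the adapted student features
    # toward the depth-privileged teacher's features.
    return float(np.mean((student_feats - teacher_feats) ** 2))
```

In a full pipeline, the teacher (same architecture, with ground-truth depth as extra input) produces `teacher_feats`, the student's features pass through the attention adapter, and `distill_loss` is added to the detection loss during training; at inference the teacher and the loss are dropped, so no extra cost is incurred.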

Related research

MonoDTR: Monocular 3D Object Detection with Depth-Aware Transformer (03/21/2022)
Monocular 3D object detection is an important yet challenging task in au...

StereoDistill: Pick the Cream from LiDAR for Distilling Stereo-based 3D Object Detection (01/04/2023)
In this paper, we propose a cross-modal distillation method named Stereo...

BEVDistill: Cross-Modal BEV Distillation for Multi-View 3D Object Detection (11/17/2022)
3D object detection from multiple image views is a fundamental and chall...

Bridging Cross-task Protocol Inconsistency for Distillation in Dense Object Detection (08/28/2023)
Knowledge distillation (KD) has shown potential for learning compact mod...

MonoDistill: Learning Spatial Features for Monocular 3D Object Detection (01/26/2022)
3D object detection is a fundamental and challenging task for 3D scene u...

Cross-Modality Knowledge Distillation Network for Monocular 3D Object Detection (11/14/2022)
Leveraging LiDAR-based detectors or real LiDAR point data to guide monoc...

Defocus Blur Detection via Depth Distillation (07/16/2020)
Defocus Blur Detection (DBD) aims to separate in-focus and out-of-focus r...
