QD-BEV: Quantization-aware View-guided Distillation for Multi-view 3D Object Detection

08/21/2023
by Yifan Zhang, et al.

Multi-view 3D detection based on BEV (bird's-eye view) has recently achieved significant improvements. However, the huge memory consumption of state-of-the-art models makes them hard to deploy on vehicles, and their non-trivial latency affects the real-time perception of streaming applications. Despite the wide application of quantization to lighten models, we show in our paper that directly applying quantization to BEV tasks will 1) make the training unstable and 2) lead to intolerable performance degradation. To solve these issues, our method QD-BEV enables a novel view-guided distillation (VGD) objective, which can stabilize the quantization-aware training (QAT) while enhancing the model performance by leveraging both image features and BEV features. Our experiments show that QD-BEV achieves similar or even better accuracy than previous methods, with significant efficiency gains. On the nuScenes dataset, the 4-bit weight and 6-bit activation quantized QD-BEV-Tiny model achieves 37.2 NDS, outperforming BevFormer-Tiny by 1.8. On the Small and Base variants, QD-BEV models also perform superbly, achieving 47.9 NDS (28.2 MB) and 50.9 NDS.
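The abstract combines two ingredients: low-bit quantization (4-bit weights, 6-bit activations) and a distillation objective over both image and BEV features. As a rough illustration only, the sketch below shows symmetric uniform fake quantization of the kind commonly used in QAT, plus a generic weighted feature-distillation loss; QD-BEV's actual quantizer and VGD objective are more involved, and the function names and the `alpha` weighting here are our own assumptions, not the paper's API.

```python
def fake_quantize(values, num_bits, max_abs=None):
    """Symmetric uniform fake quantization (QAT-style): map floats to
    signed `num_bits` integer levels, then dequantize back to floats."""
    if max_abs is None:
        max_abs = max(abs(v) for v in values)
    qmax = 2 ** (num_bits - 1) - 1            # e.g. 7 for 4-bit, 31 for 6-bit
    scale = max_abs / qmax if max_abs > 0 else 1.0
    out = []
    for v in values:
        q = round(v / scale)                  # quantize to an integer level
        q = max(-qmax - 1, min(qmax, q))      # clamp to the signed int range
        out.append(q * scale)                 # dequantize back to float
    return out


def _mse(a, b):
    """Mean squared error between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)


def distillation_loss(student_img, teacher_img, student_bev, teacher_bev,
                      alpha=0.5):
    """Toy distillation objective matching a quantized student to a
    full-precision teacher on both image and BEV features.
    `alpha` (hypothetical) balances the two terms."""
    return alpha * _mse(student_img, teacher_img) \
        + (1 - alpha) * _mse(student_bev, teacher_bev)
```

With 4-bit weights there are at most 16 representable levels, which is why naive training becomes unstable and a stabilizing objective such as VGD is needed.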

Related research:

- Data-Free Quantization through Weight Equalization and Bias Correction (06/11/2019)
- Quantized Feature Distillation for Network Quantization (07/20/2023)
- Norm Tweaking: High-performance Low-bit Quantization of Large Language Models (09/06/2023)
- SQuAT: Sharpness- and Quantization-Aware Training for BERT (10/13/2022)
- PETR: Position Embedding Transformation for Multi-View 3D Object Detection (03/10/2022)
- Variation-aware Vision Transformer Quantization (07/01/2023)
- Analyzing Quantization in TVM (08/19/2023)
