1 Introduction to the Challenge
The Waymo Open Dataset Challenges at CVPR 2020, which ran from March 19, 2020 to May 31, 2020, are among the largest and most exciting competitions on challenging perception tasks in autonomous driving. The Waymo Open Dataset was recently released with high-quality data collected from both LiDAR and camera sensors in real self-driving scenarios, and it enables many exciting new lines of research. In the 3D detection track, the challenge requires the algorithm to detect objects as a set of 3D bounding boxes. The domain adaptation track is similar to the 3D detection track except that its data was collected from a different location.
2 Network
In this section, we present the details of the 3D detector we used in the challenge. We use our AFDet, which is one-stage, anchor-free and NMS-free, as a strong baseline 3D point cloud detector, and we made significant improvements to this baseline during the challenges. The proposed AFDet consists of four modules: a point cloud encoder, a 3D Feature Extractor, a Region Proposal Network (RPN), and an anchor-free detector. In this section, we mainly describe the differences between the original AFDet and the version used in the challenge.
2.1 Point Cloud Encoder
We use a simple point cloud encoder to voxelize the point cloud. Specifically, we convert the raw point cloud into a voxel representation using fixed grid sizes along the x, y and z axes. In each voxel, we calculate the mean of all points inside it and feed the result to the 3D Feature Extractor. The resulting feature map is then reshaped to form a top-down-view pseudo image, as illustrated in Figure 3.
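The mean-voxelization step can be sketched as follows. This is a minimal NumPy illustration under assumed grid sizes and detection range; the helper name `mean_voxelize` and its interface are ours, not the actual Det3D implementation.

```python
import numpy as np

def mean_voxelize(points, grid_size, pc_range):
    """Average all points that fall into the same voxel.

    points:    (N, C) array; the first three columns are x, y, z.
    grid_size: (3,) voxel edge lengths along the x, y, z axes (meters).
    pc_range:  (x_min, y_min, z_min, x_max, y_max, z_max).
    Returns per-voxel mean features (M, C) and integer voxel coordinates (M, 3).
    """
    pc_range = np.asarray(pc_range, dtype=np.float32)
    grid_size = np.asarray(grid_size, dtype=np.float32)

    # Keep only points inside the detection range.
    mask = np.all((points[:, :3] >= pc_range[:3]) & (points[:, :3] < pc_range[3:]), axis=1)
    points = points[mask]

    # Integer voxel coordinate of every point.
    coords = ((points[:, :3] - pc_range[:3]) / grid_size).astype(np.int64)

    # Group points that share a voxel and average their features.
    uniq, inverse, counts = np.unique(coords, axis=0, return_inverse=True, return_counts=True)
    feats = np.zeros((len(uniq), points.shape[1]), dtype=np.float32)
    np.add.at(feats, inverse, points)   # sum features per voxel
    feats /= counts[:, None]            # mean of all points in each voxel
    return feats, uniq
```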
2.2 3D Feature Extractor and RPN
We introduce two different 3D Feature Extractors, FE-v1 and FE-v2, and three different RPNs [18, 21], RPN-v1, RPN-v2 and RPN-v3. Their combinations B1, B2 and B3 are used in the final submissions during the challenge.
SpMiddleFHD is employed as the 3D Feature Extractor in FE-v1. It contains four phases, and each phase contains several submanifold convolutional layers and one sparse convolutional layer that performs downsampling. The feature map is downsampled 4× after FE-v1.
For the 3D Feature Extractor in FE-v2, we use a similar extractor with the auxiliary branch removed. It has four sparse 3D convolution blocks, each of which is composed of submanifold convolutions with a kernel size of 3. The last three blocks use an additional sparse convolution with a stride of 2 to downsample the feature map. The feature map is then reshaped to obtain the Bird's-Eye View (BEV) representation, and eight standard 3×3 convolutions are applied to further abstract the features. The final downsample factor for FE-v2 is 8.
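A minimal sketch of such a sparse block structure is shown below, assuming the spconv library; the channel widths, the number of submanifold convolutions per block and the class names are illustrative assumptions rather than the exact configuration used in the challenge.

```python
import torch.nn as nn
import spconv.pytorch as spconv

def sparse_block(in_ch, out_ch, downsample, key):
    """One FE-v2-style block: two 3x3x3 submanifold convs, optionally followed
    by a stride-2 sparse conv that halves the spatial resolution. Every conv
    is followed by BatchNorm and ReLU, as described in the text."""
    layers = [
        spconv.SubMConv3d(in_ch, out_ch, 3, padding=1, indice_key=key),
        nn.BatchNorm1d(out_ch), nn.ReLU(),
        spconv.SubMConv3d(out_ch, out_ch, 3, padding=1, indice_key=key),
        nn.BatchNorm1d(out_ch), nn.ReLU(),
    ]
    if downsample:
        layers += [
            spconv.SparseConv3d(out_ch, out_ch, 3, stride=2, padding=1),
            nn.BatchNorm1d(out_ch), nn.ReLU(),
        ]
    return spconv.SparseSequential(*layers)

class FEv2Like(nn.Module):
    """Four blocks; the last three downsample, giving an 8x downsample overall
    (illustrative channel widths)."""
    def __init__(self, in_ch=5, chs=(16, 32, 64, 128)):
        super().__init__()
        self.blocks = nn.ModuleList([
            sparse_block(in_ch, chs[0], downsample=False, key="b0"),
            sparse_block(chs[0], chs[1], downsample=True,  key="b1"),
            sparse_block(chs[1], chs[2], downsample=True,  key="b2"),
            sparse_block(chs[2], chs[3], downsample=True,  key="b3"),
        ])

    def forward(self, x):  # x: spconv.SparseConvTensor
        for block in self.blocks:
            x = block(x)
        dense = x.dense()                      # (B, C, D, H, W)
        b, c, d, h, w = dense.shape
        return dense.reshape(b, c * d, h, w)   # reshape to the BEV pseudo image
```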
An RPN consists of several downsample and upsample blocks. In RPN-v1, we use the same structure as in [8, 18]. RPN-v1 has 3 cascaded downsample blocks, each of which downsamples the feature map by 2, so the outputs of the downsample blocks are downsampled by 2, 4 and 8, respectively. Three upsample blocks are then used separately to upsample these feature maps by 1, 2 and 4, and we concatenate the results to obtain the output of RPN-v1. The final feature map of RPN-v1 is therefore downsampled 2×.
RPN-v2 shares the same structure as RPN-v1 except for two differences. First, the first downsample block uses a downsample factor of 1 instead of 2, so the final downsample factor of RPN-v2 is 1. Second, the number of channels in each block is doubled compared with RPN-v1.
We further improve RPN-v1 to obtain RPN-v3. Based on RPN-v1, the first downsample block uses a factor of 1 instead of 2. We then reduce the number of filters in each layer to 3/4 (i.e. from 128 to 96 and from 256 to 192). Inspired by EfficientDet, we add a simplified version of BiFPN with 4 repeated blocks after the downsample blocks in RPN-v3. Unlike the original BiFPN, which uses a different number of filters for each repeat, we use three separate convolutional layers to produce the same number of filters (96) for the output of each downsample block, and then repeat the BiFPN block four times with the same number of filters. Our simplified BiFPN does not use the weighted fusion mechanism. The final downsample factor of RPN-v3 is 1.
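The simplified BiFPN can be sketched in PyTorch as follows: one unweighted repeat over three pyramid levels that have already been projected to 96 channels by the separate convolutions mentioned above. The choice of nearest-neighbor upsampling and max-pool downsampling here is our assumption; only the unweighted (plain sum) fusion follows the description in the text.

```python
import torch.nn as nn
import torch.nn.functional as F

class SimpleBiFPNBlock(nn.Module):
    """One repeat of an unweighted BiFPN over three levels (p1 finest, p3
    coarsest), all with the same channel count. Fusion is a plain sum instead
    of the learned weighted fusion of the original BiFPN."""
    def __init__(self, channels=96):
        super().__init__()
        def conv():
            return nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
        self.td2, self.td1 = conv(), conv()   # top-down path
        self.bu2, self.bu3 = conv(), conv()   # bottom-up path

    def forward(self, p1, p2, p3):
        # Top-down: propagate coarse features to the finer levels.
        p2_td = self.td2(p2 + F.interpolate(p3, size=p2.shape[-2:], mode="nearest"))
        p1_td = self.td1(p1 + F.interpolate(p2_td, size=p1.shape[-2:], mode="nearest"))
        # Bottom-up: propagate fine features back to the coarser levels.
        p2_out = self.bu2(p2 + p2_td + F.max_pool2d(p1_td, kernel_size=2))
        p3_out = self.bu3(p3 + F.max_pool2d(p2_out, kernel_size=2))
        return p1_td, p2_out, p3_out
```

In RPN-v3, four such blocks are chained, each consuming the three outputs of the previous one.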
We use three different networks, B1, B2 and B3, during the challenge. The 3D Feature Extractor and RPN combinations of B1, B2 and B3 can be found in Table 1, and their parameters and detailed illustrations are shown in Figure 4. In the 3D Feature Extractor and the RPN, each convolution is followed by batch normalization and a ReLU non-linearity.
Table 1: 3D Feature Extractor and RPN combinations of B1, B2 and B3, with their downsample factors.
2.3 Anchor Free Base Detector
We briefly introduce the anchor free base detector in this report; for more details, please refer to the AFDet paper. AFDet consists of five sub-heads: the keypoint heatmap head, the local offset head, the z-axis location head, the 3D object size head, and the orientation head. Figure 2 shows the details of the anchor free detector.
Five Sub-Heads. The heatmap head and the offset head predict a keypoint heatmap and a local offset regression map, respectively, with one heatmap channel per keypoint type. The keypoint heatmap helps to locate the object centers (x-y coordinates) in BEV. The offset regression map compensates for the discretization error introduced by voxelization and helps to recover more accurate object locations in BEV. The z-axis location head regresses the z values of the object centers. Additionally, we regress the object sizes directly. For orientation prediction, we use the MultiBin method following [12, 20].
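A compact sketch of the five sub-heads on top of the BEV feature map is given below; the intermediate channel width, the number of orientation bins and the per-bin channel layout are assumptions, not the exact AFDet configuration.

```python
import torch.nn as nn

def head(in_ch, out_ch, mid_ch=64):
    """A small prediction head: 3x3 conv + BN + ReLU, then a 1x1 output conv."""
    return nn.Sequential(
        nn.Conv2d(in_ch, mid_ch, 3, padding=1, bias=False),
        nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
        nn.Conv2d(mid_ch, out_ch, 1))

class AnchorFreeHeads(nn.Module):
    """Five AFDet-style sub-heads applied to the BEV feature map.

    num_classes: number of keypoint (object) types, one heatmap channel each.
    num_bins:    number of MultiBin orientation bins (assumed value)."""
    def __init__(self, in_ch, num_classes=3, num_bins=2):
        super().__init__()
        self.heatmap = head(in_ch, num_classes)        # keypoint heatmap
        self.offset = head(in_ch, 2)                   # local x-y offset in BEV
        self.z = head(in_ch, 1)                        # z-axis location of the center
        self.size = head(in_ch, 3)                     # 3D object size
        self.orientation = head(in_ch, num_bins * 4)   # MultiBin: cls + angle offset per bin

    def forward(self, x):
        return {
            "heatmap": self.heatmap(x).sigmoid(),
            "offset": self.offset(x),
            "z": self.z(x),
            "size": self.size(x),
            "orientation": self.orientation(x),
        }
```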
Loss. The heatmap head is trained with the modified focal loss. For the orientation prediction head, the classification part is trained with softmax while the offset part is trained with an L1 loss. An L1 loss is also employed for training the local offset head, the z-axis location head, and the 3D object size head. The overall training objective is a weighted sum of all sub-task losses, where each sub-task has its own weight. For all regression sub-tasks, including local offset, z-axis location, size and orientation, we only regress objects which are within the detection range.
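Under the description above, a plausible form of the overall objective (the exact weight values are not given here) is:

```latex
\mathcal{L} = \mathcal{L}_{\mathrm{heat}}
            + \lambda_{\mathrm{off}}\,\mathcal{L}_{\mathrm{off}}
            + \lambda_{z}\,\mathcal{L}_{z}
            + \lambda_{\mathrm{size}}\,\mathcal{L}_{\mathrm{size}}
            + \lambda_{\mathrm{ori}}\,\mathcal{L}_{\mathrm{ori}}
```

where the heatmap term is the modified focal loss and the remaining terms are the L1 (and, for the orientation classification part, softmax) losses of the corresponding sub-heads.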
Gather Indices and Decode. From the resulting keypoint heatmap, we can easily decode the center of each object, along with the local offset correcting the discretization error. Other information such as orientation and dimensions can be obtained from the regression results.
3 Accuracy Improvements
The network architecture has been explained in detail in the previous section. In this section, we first introduce our input format and data augmentation strategies. We then explain a novel painting method using 2D perception results, test-time augmentation, and model ensembling.
3.1 Input and Data Augmentation
Following prior work, we accumulate LiDAR sweeps to utilize temporal information and to densify the LiDAR point cloud. To distinguish points from different sweeps, the time difference is attached to each point as an additional attribute. In the Waymo challenge, we combine the past four frames with the current frame as our input point cloud. A detailed ablation study can be found in Table 2.
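A minimal sketch of the accumulation step is given below, assuming the pose of every past frame relative to the current one is available from the ego poses; the frame-index time difference used here is a stand-in for the actual timestamp difference.

```python
import numpy as np

def accumulate_sweeps(current_points, past_frames, num_past=4):
    """Densify the current LiDAR frame with the previous sweeps.

    current_points: (N, C) points of the current frame, in the current vehicle frame.
    past_frames:    list of (points, pose) tuples, newest first; `pose` is the 4x4
                    transform from that frame's vehicle frame into the current one.
    Each point receives an extra channel holding its time difference to the
    current sweep (frame index here, seconds in practice).
    """
    merged = [np.hstack([current_points,
                         np.zeros((len(current_points), 1), np.float32)])]
    for dt, (pts, pose) in enumerate(past_frames[:num_past], start=1):
        xyz1 = np.hstack([pts[:, :3], np.ones((len(pts), 1), np.float32)])
        pts = pts.copy()
        pts[:, :3] = (xyz1 @ pose.T)[:, :3]         # move points into the current frame
        dt_col = np.full((len(pts), 1), float(dt), np.float32)
        merged.append(np.hstack([pts, dt_col]))     # append the time-difference channel
    return np.concatenate(merged, axis=0)
```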
The Waymo Open Dataset provides sweeps from five LiDARs. To fully utilize this information, we combine the first-return and second-return point clouds generated by all five LiDARs. Considering frame densification and painting, each input point thus carries its coordinates together with additional attributes, including the time difference mentioned above.
We use a data augmentation strategy following [18, 8]. First, we generate an annotation database containing labels and the associated point cloud data. During training, we randomly select 6, 8 and 10 ground-truth samples for vehicle, pedestrian and cyclist, respectively, and place them into the current frame. Second, we apply random flipping, global rotation, global scaling and global translation following [21, 18, 8].
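The ground-truth sampling step can be sketched as follows; collision checking between pasted and existing boxes is omitted, and the database format is an assumption for illustration.

```python
import numpy as np

def gt_sample_augment(points, boxes, labels, db, samples_per_class):
    """Paste ground-truth objects from an annotation database into the frame.

    db: dict mapping class name -> list of (box, object_points) pairs built
        offline from the training annotations.
    samples_per_class: e.g. {"vehicle": 6, "pedestrian": 8, "cyclist": 10}.
    """
    rng = np.random.default_rng()
    new_boxes, new_labels, new_points = [], [], []
    for cls, num in samples_per_class.items():
        picks = rng.choice(len(db[cls]), size=num, replace=False)
        for i in picks:
            box, obj_pts = db[cls][i]
            new_boxes.append(box)
            new_labels.append(cls)
            new_points.append(obj_pts)
    points = np.concatenate([points] + new_points, axis=0)
    boxes = np.concatenate([boxes, np.stack(new_boxes)], axis=0)
    labels = list(labels) + new_labels
    return points, boxes, labels
```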
3.2 Painting
Inspired by PointPainting, we leverage camera information by projecting each point into image space and gathering its image-based perception information (i.e. 2D detections and semantic segmentation labels), which is appended ("painted") to the point, as illustrated in Figure 5.
First, we use 2D bounding boxes to paint the points. Specifically, points are projected from the vehicle coordinate frame to the corresponding image frame. If a point falls inside a 2D bounding box, the classification information is appended to the point as extra attributes; otherwise, zeros are filled into the additional channels. The training set is painted with ground-truth 2D bounding boxes, while during inference painting uses the 2D detections produced by a single Cascade R-CNN based detector from our 2D detection track solution.
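The box-based painting can be sketched as follows, assuming a standard pinhole camera model with the optical axis along +z; the one-hot class encoding and the function interface are our assumptions.

```python
import numpy as np

def paint_with_boxes(points, extrinsic, intrinsic, boxes2d, box_classes, num_classes=3):
    """Append one-hot 2D-detection class channels to each LiDAR point.

    points:      (N, C) points in the vehicle frame (x, y, z first).
    extrinsic:   4x4 transform from the vehicle frame to the camera frame.
    intrinsic:   3x3 camera matrix.
    boxes2d:     (M, 4) boxes as (x1, y1, x2, y2) in pixels.
    box_classes: (M,) integer class ids in [0, num_classes).
    Points outside every box (or behind the camera) keep all-zero channels.
    """
    paint = np.zeros((len(points), num_classes), dtype=np.float32)

    # Project the points into the image plane.
    xyz1 = np.hstack([points[:, :3], np.ones((len(points), 1), np.float32)])
    cam = (xyz1 @ extrinsic.T)[:, :3]
    in_front = cam[:, 2] > 0
    uvw = cam @ intrinsic.T
    uv = uvw[:, :2] / np.clip(uvw[:, 2:3], 1e-6, None)

    for (x1, y1, x2, y2), cls in zip(boxes2d, box_classes):
        inside = in_front & (uv[:, 0] >= x1) & (uv[:, 0] <= x2) \
                          & (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
        paint[inside, cls] = 1.0   # mark the class channel for painted points
    return np.hstack([points, paint])
```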
Second, we use semantic segmentation information to paint the point clouds. We start with a WideResNet38 pretrained on ImageNet and finetune the network on KITTI. Only around half of the points in a frame can be painted because the Waymo Open Dataset only provides images from the front and side cameras. The improvement from point painting can be seen in Table 2.
3.3 Further Improvements
Test Time Augmentation (TTA). We perform several different test-time augmentations, including point cloud rotation around the pitch, roll and yaw axes, global point cloud scaling, and point cloud translation along the z-axis, similar to the data augmentation used during training. We adapt Weighted Boxes Fusion (WBF) to a 3D variant to merge the different groups of detections into the final result. We use a small set of angles for pitch and roll rotation, several angles for yaw rotation, scales in [0.95, 1.05] for global scaling, and small offsets for translation along the z-axis. Our experiments show that yaw rotation is the most effective among all the augmentation methods, so we use it in combination with the other augmentation methods.
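The yaw-rotation part of TTA can be sketched as follows; the detector interface and the box parameterization (x, y, z, l, w, h, yaw) are assumptions for illustration, and the resulting groups would afterwards be merged by the 3D WBF variant described next.

```python
import numpy as np

def rotate_yaw(points, boxes, angle):
    """Rotate a point cloud and 3D boxes around the z (yaw) axis by `angle` radians.

    points: (N, C) with x, y, z in the first three columns.
    boxes:  (M, 7) boxes as (x, y, z, l, w, h, yaw).
    """
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s], [s, c]], dtype=np.float32)
    points, boxes = points.copy(), boxes.copy()
    points[:, :2] = points[:, :2] @ rot.T
    boxes[:, :2] = boxes[:, :2] @ rot.T
    boxes[:, 6] += angle
    return points, boxes

def tta_yaw(detector, points, angles):
    """Run the detector on yaw-rotated copies of the input and rotate the
    detections back so the groups can later be merged (e.g. by a 3D WBF)."""
    empty_boxes = np.zeros((0, 7), np.float32)
    groups = []
    for a in angles:
        rotated, _ = rotate_yaw(points, empty_boxes, a)
        boxes, scores = detector(rotated)              # assumed detector interface
        _, boxes = rotate_yaw(np.zeros((0, 3), np.float32), boxes, -a)
        groups.append((boxes, scores))
    return groups
```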
Ensemble Method and Naive Grid Search. As mentioned above, we use a 3D version of WBF to ensemble different models with test-time augmentation. Equal weights are assigned to AFDet-B1, AFDet-B2 and AFDet-B3, and we replace 3D IoU with BEV IoU to speed up the ensembling process. A naive grid search is conducted to select the best thresholds used in the ensembling. We search three variables: the box-association IoU threshold, the pre-WBF box skipping threshold and the post-WBF box skipping threshold, with search intervals of 0.05, 0.05 and 0.01, respectively. The search starts from 0.01 instead of 0.00 to accelerate the search process. All grid searches are performed on 1/10 of the validation set, and we use different thresholds for different classes, as shown in Table 3.
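A naive grid search of this kind can be sketched as below. The search ranges here are placeholders (the actual per-class search spaces are not reproduced in this text), and `wbf_3d` stands for the 3D WBF variant described above, which is assumed to exist with this interface.

```python
import itertools
import numpy as np

def grid_search_thresholds(eval_fn, detections_per_model, wbf_3d):
    """Exhaustively search the three WBF thresholds described above.

    eval_fn:              callable(merged_detections) -> mAPH on the 1/10 validation split.
    detections_per_model: per-model (and per-TTA) detection groups to be merged.
    wbf_3d:               callable implementing the 3D WBF merge (assumed interface).
    """
    iou_range = np.arange(0.05, 1.0, 0.05)    # box-association IoU threshold
    pre_range = np.arange(0.05, 1.0, 0.05)    # pre-WBF box skipping threshold
    post_range = np.arange(0.01, 1.0, 0.01)   # post-WBF box skipping threshold

    best, best_score = None, -1.0
    for iou_thr, pre_thr, post_thr in itertools.product(iou_range, pre_range, post_range):
        merged = wbf_3d(detections_per_model, iou_thr=iou_thr, skip_box_thr=pre_thr)
        merged = [det for det in merged if det.score >= post_thr]   # post-WBF filtering
        score = eval_fn(merged)
        if score > best_score:
            best, best_score = (iou_thr, pre_thr, post_thr), score
    return best, best_score
```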
Table 4: Effect of model ensembling, TTA and threshold grid search on the 1/10 validation set.

| Models | TTA | Grid Search | L2 mAP | L2 mAPH |
|---|---|---|---|---|
| B1 + B2 | ✗ | ✗ | 67.97 | 65.71 |
| B1 + B3 | ✗ | ✗ | 69.72 | 67.16 |
| B1 + B2 + B3 | ✗ | ✗ | 71.08 | 68.96 |
| B1 + B2 + B3 | ✓ | ✗ | 73.65 | 72.14 |
| B1 + B2 + B3 | ✓ | ✓ | 75.21 | 73.63 |
Table 5: Results on the official 3D detection leaderboard.

| Models | Vehicle L2 mAP | Vehicle L2 mAPH | Pedestrian L2 mAP | Pedestrian L2 mAPH | Cyclist L2 mAP | Cyclist L2 mAPH | All L2 mAP | All L2 mAPH |
|---|---|---|---|---|---|---|---|---|
| Simple Baseline v2 | 66.44 | 65.91 | 66.00 | 60.93 | 65.13 | 64.10 | 65.84 | 63.65 |
Table 6: Results on the official domain adaptation leaderboard.

| Models | Vehicle L2 mAP | Vehicle L2 mAPH | Pedestrian L2 mAP | Pedestrian L2 mAPH | Cyclist L2 mAP | Cyclist L2 mAPH | All L2 mAP | All L2 mAPH | Domain Gap (All) L2 mAP | Domain Gap (All) L2 mAPH |
|---|---|---|---|---|---|---|---|---|---|---|
| Simple Baseline v2 | 48.47 | 47.92 | 46.13 | 43.21 | 18.84 | 18.71 | 37.81 | 36.62 | 28.03 | 27.02 |
4 Experiment Settings
We implemented our detector based on the Det3D framework. To reduce GPU memory usage, we train our detector within the range [(-76.8, 76.8), (-51.2, 51.2), (-1, 3)] (in meters) with respect to the x, y and z axes, while enlarging the detection range to [(-80, 80), (-80, 80), (-1, 3)] at inference. The maximum number of objects, the maximum number of points per voxel and the maximum number of voxels are set to fixed values for all of our models. For the offset regression head, we regress a square area around each object center by default. At the inference stage, we use max pooling with stride 1 and apply an AND operation between the feature map before and after the max pooling to obtain the peaks of the keypoint heatmaps, so there is no need to suppress overlapping detections. Fixed weights are used for the different sub-losses. All parameters listed here are default values unless explicitly stated otherwise.
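The NMS-free peak extraction can be sketched as follows; the kernel size and the maximum number of kept objects are illustrative values.

```python
import torch
import torch.nn.functional as F

def heatmap_peaks(heatmap, kernel_size=3, max_objects=100):
    """Extract keypoint peaks from a heatmap without NMS.

    A location is kept only if it survives max pooling unchanged, i.e. it
    equals the maximum of its neighborhood (the "AND" of the map before and
    after pooling). heatmap: (B, K, H, W) sigmoid scores.
    """
    pad = (kernel_size - 1) // 2
    pooled = F.max_pool2d(heatmap, kernel_size, stride=1, padding=pad)
    peaks = heatmap * (pooled == heatmap).float()   # keep only local maxima

    b, k, h, w = peaks.shape
    scores, idx = peaks.view(b, -1).topk(max_objects)
    classes = torch.div(idx, h * w, rounding_mode="floor")
    rem = idx % (h * w)
    ys = torch.div(rem, w, rounding_mode="floor")
    xs = rem % w
    return scores, classes, ys, xs
```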
We use the AdamW optimizer with the one-cycle policy. We set the maximum learning rate, a division factor of 2, momentum ranging from 0.95 to 0.85, and a fixed weight decay of 0.01 to achieve convergence. To quickly verify our ideas, we sample the training set every 20 frames and the validation set every 10 frames according to their timestamps. Unless explicitly indicated otherwise, all models used for the ablation studies, including those in Table 2 and Table 4, are trained on 1/20 of the training data and tested on 1/10 of the validation data. For the final submission, we first train AFDet-B1 and AFDet-B2 for 60 epochs on 1/20 of the training data and finetune them for 10 epochs on the whole trainval data. AFDet-B3 is trained for 60 epochs on 1/20 of the training data and finetuned for only 3 epochs on the whole trainval data. Besides, we adopt half-precision (FP16) training [11, 13] for the B3 model to reduce GPU memory usage and speed up training. We ensemble AFDet-B1, AFDet-B2 and AFDet-B3 for 3D detection, and AFDet-B1 and AFDet-B2 for domain adaptation in our final submission.
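The optimizer configuration can be sketched with PyTorch's built-in one-cycle scheduler; the maximum learning rate below is a placeholder since its exact value is not reproduced in this text, while the division factor, momentum range and weight decay follow the settings above.

```python
import torch

def build_optimizer(model, steps_per_epoch, epochs, max_lr=3e-3):
    """AdamW with a one-cycle learning-rate schedule (max_lr is a placeholder)."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=max_lr, weight_decay=0.01)
    scheduler = torch.optim.lr_scheduler.OneCycleLR(
        optimizer,
        max_lr=max_lr,
        total_steps=steps_per_epoch * epochs,
        div_factor=2.0,        # initial lr = max_lr / 2
        base_momentum=0.85,
        max_momentum=0.95,
    )
    return optimizer, scheduler
```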
Table 7: Class distribution statistics of the 3D detection training/validation sets and the domain adaptation training/validation sets.
5 Results
3D Detection. To study the effect of each module used in our solution, we conduct ablation experiments on the 1/10 validation set, as shown in Table 2 and Table 4. Specifically, Table 2 demonstrates the contributions made by densification and point painting, while Table 4 shows the improvements introduced by ensembling the networks and using TTA and WBF. The final results on the official 3D detection leaderboard are shown in Table 5. As can be seen from Table 5, our method outperformed the 2nd place and 3rd place solutions in terms of mAPH L2.
Domain Adaptation. The Waymo Open Dataset also provides additional data collected from different locations, with only a small subset annotated, to test robustness against domain diversity. We directly apply our models trained for the 3D detection track to the domain adaptation data. For the final submission, AFDet-B1 and AFDet-B2 were applied with test-time augmentation and the results were merged by WBF; the thresholds used for our submission are listed in Table 3. As shown in Table 6, our solution outperforms the other methods in domain adaptation by a large margin.
This result demonstrates the benefits of using an anchor-free model as well as the better generalizability of our method and its robustness against domain diversity. Another reason might be that most other methods finetuned their models using the annotated domain adaptation data, which is highly imbalanced. As shown in Table 7, we compare the statistics computed from the 3D detection and domain adaptation datasets. For example, only 0.07% of the objects in the domain adaptation training set are cyclists, more than 10× fewer than the 0.78% in the 3D detection training set. Fine-tuning on the domain adaptation dataset might therefore compromise the performance of the resulting detector, especially for rare classes such as cyclist. Leveraging image information may also have contributed to the robustness of our solution against domain diversity.
6 Conclusion
We proposed a state-of-the-art 3D detection framework that achieved 1st place on both the 3D detection track and the domain adaptation track of the Waymo Open Dataset Challenges at CVPR 2020.
-  Holger Caesar, Varun Bankiti, Alex H Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuScenes: A multimodal dataset for autonomous driving. arXiv preprint arXiv:1903.11027, 2019.
-  Runzhou Ge, Zhuangzhuang Ding, Yihan Hu, Yu Wang, Sijia Chen, Li Huang, and Yuan Li. AFDet: Anchor free one stage 3d object detection. In CVPR Workshops, 2020.
-  Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? The KITTI vision benchmark suite. In CVPR, 2012.
-  Sylvain Gugger. The 1cycle policy. https://sgugger.github.io/the-1cycle-policy.html, 2018.
-  Chenhang He, Hui Zeng, Jianqiang Huang, Xian-Sheng Hua, and Lei Zhang. Structure aware single-stage 3d object detection from point cloud. In CVPR, 2020.
-  Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
-  Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In NeurIPS, 2012.
-  Alex H Lang, Sourabh Vora, Holger Caesar, Lubing Zhou, Jiong Yang, and Oscar Beijbom. Pointpillars: Fast encoders for object detection from point clouds. In CVPR, 2019.
-  Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In CVPR, 2017.
-  Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In ICLR, 2019.
-  Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, et al. Mixed precision training. arXiv preprint arXiv:1710.03740, 2017.
-  Arsalan Mousavian, Dragomir Anguelov, John Flynn, and Jana Kosecka. 3d bounding box estimation using deep learning and geometry. In CVPR, 2017.
-  NVIDIA. Apex. https://github.com/NVIDIA/apex.
-  Roman Solovyev and Weimin Wang. Weighted boxes fusion: ensembling boxes for object detection models. arXiv preprint arXiv:1910.13302, 2019.
-  Pei Sun, Henrik Kretzschmar, Xerxes Dotiwalla, Aurelien Chouard, Vijaysai Patnaik, Paul Tsui, James Guo, Yin Zhou, Yuning Chai, Benjamin Caine, et al. Scalability in perception for autonomous driving: Waymo open dataset. arXiv preprint arXiv:1912.04838, 2019.
-  Mingxing Tan, Ruoming Pang, and Quoc V Le. Efficientdet: Scalable and efficient object detection. In CVPR, 2020.
-  Sourabh Vora, Alex H Lang, Bassam Helou, and Oscar Beijbom. Pointpainting: Sequential fusion for 3d object detection. In CVPR, 2020.
-  Yan Yan, Yuxing Mao, and Bo Li. Second: Sparsely embedded convolutional detection. Sensors, 18(10):3337, 2018.
-  Bin Yang, Wenjie Luo, and Raquel Urtasun. Pixor: Real-time 3d object detection from point clouds. In CVPR, 2018.
-  Xingyi Zhou, Dequan Wang, and Philipp Krähenbühl. Objects as points. arXiv preprint arXiv:1904.07850, 2019.
-  Yin Zhou and Oncel Tuzel. Voxelnet: End-to-end learning for point cloud based 3d object detection. In CVPR, 2018.
-  Benjin Zhu and Bingqi Ma. Det3d. https://github.com/poodarchu/Det3D.