Part-level Car Parsing and Reconstruction from Single Street View

11/27/2018 · by Qichuan Geng, et al. · Beihang University · Baidu, Inc.

In this paper, we make the first attempt to build a framework to simultaneously estimate the semantic parts, shape, translation, and orientation of cars from a single street view. Our framework contains three major contributions. Firstly, a novel domain adaptation approach based on a class consistency loss is developed to transfer our part segmentation model from synthesized images to real images. Secondly, we propose a novel network structure that leverages part-level features from street views and 3D losses for pose and shape estimation. Thirdly, we construct a high-quality dataset that contains more than 300 different car models with physical dimensions and part-level annotations based on global and local deformations. We have conducted experiments on both synthesized data and real images. Our results show that the domain adaptation approach brings a 35.5 percentage point improvement in mean intersection-over-union (mIoU) compared with a baseline network using domain randomization only. Our network for translation and orientation estimation achieves competitive performance on highly complex street views (e.g., 11 cars per image on average). Moreover, our network is able to reconstruct a list of 3D car models with part-level details from street views, which could benefit various applications such as fine-grained car recognition, vehicle re-identification, and traffic simulation.







1 Introduction

Figure 1: An illustration of pose and shape estimation based on instance masks, landmarks, and semantic parts. Pose and shape results are outputted from [30] and our network. Green and red arrows represent ground truth and predicted axes respectively. (1st row) Image patches cropped for visualization. Red boxes enclose the target cars that are either occluded or low-resolution. (2nd row) Instance masks and estimated car pose and shape. (3rd row) 2D landmarks and estimated car pose and shape. (4th row) Semantic parts and estimated pose and shape.

Given a single image without additional 3D information such as LiDAR scans and depth maps, 3D pose and shape estimation is essentially an ill-posed problem. In the field of autonomous driving, a variety of learning methods based on a single image have been proposed recently to advance the state of the art for car pose and/or shape estimation [46, 45, 25, 38, 26, 33, 6, 24, 3, 41]. Roughly speaking, most existing approaches rely on features extracted or learned from rectangular bounding boxes, instance masks, and landmarks, which work well when the entire car shape is visible. However, these features are less reliable when occlusions or truncations are present, as shown in the right image in Figure 1. Moreover, an instance mask can contain large pose ambiguities caused by object symmetries (e.g., two similar masks could have quite different orientations). On the other hand, landmarks of cars are often not well defined. For instance, car lights can have different shapes even for the same car model across different years. More importantly, it is hard to detect accurate landmarks on low-resolution cars (see the left image in Figure 1).

Notice that it is natural for human visual systems to partition shapes into parts for object understanding, according to the part theory [13] and the recognition-by-components theory [2]. In particular, parts are advantageous for representing objects that are partially occluded, viewed from different angles, or non-rigidly deformed. In recent years, part-based models have been applied in fine-grained recognition [14, 44, 39], object detection and segmentation [38, 5], human pose estimation [37, 11], and face parsing [20, 16, 29]. Moreover, a large-scale dataset of 3D indoor objects with part-level annotations (i.e., PartNet) has been introduced in [23] to enable future research along this direction. However, we found that very limited research has been done on part-level car understanding, which could benefit applications including not only autonomous driving but also fine-grained car recognition, vehicle re-identification, and car damage assessment.

In this paper, we demonstrate how to generate part-level annotations and leverage part features to infer comprehensive and accurate car information in street views. There are two major contributions in this paper.

  1. We make the first attempt to build a two-stage framework to simultaneously estimate shape, translation, orientation, and semantic parts of cars in 3D space from a single street view. For the part segmentation stage, we propose a specific approach to implicitly transfer part features from the synthesized images to the real street views through class consistency loss. For the stage of pose and shape estimation, we propose a novel network that integrates part features and directly minimizes the losses such as car center translation and per-vertex geometric errors.

  2. We are the first to build a high-quality dataset that contains 348 car models with part-level annotations and physical dimensions. We apply global and local deformations to build dense correspondences among point clouds so that we can transfer textures and part annotations across these models. We further synthesized 60K images with randomization of orientation, illumination, occlusion, and texture based on our 3D models. Both our 3D dataset and 2D synthesized images will be made public.

2 Related Work

In this section, we discuss two related research areas. The first is the usage of semantic parts for solving vision-based tasks. The second is pose and/or shape estimation from a single image. Notice that various approaches have also been proposed for pose estimation of general objects (e.g., PoseCNN in [43]). However, many of them focus on small indoor objects such as boxes and cylinders, and on applications such as robotic control. Due to page limitations and the different application domains, in the second part we mainly focus on the estimation of car pose and/or shape.

Semantic Parts Semantic parts can be represented by landmarks, rectangles, and regions. Landmarks and rectangles can be considered compact representations and are commonly used in many tasks. In this paper, we mainly focus on regions annotated at the pixel level, which have emerged in recent years.

Semantic parts can be easily generated for human faces based on facial landmarks and contours, and they are used by face parsing algorithms [20, 16, 29]. In the field of human body parsing, Varol et al. proposed the SURREAL dataset (synthetic humans for real tasks), which contains synthetic 2D/3D human poses, depth maps, part segments, and normal maps [37]. A stacked hourglass network is adopted to segment 14 human parts. Liang et al. proposed a local-global long short-term memory architecture for clothes segmentation in [19]. In [5], Chen et al. provided a new dataset with annotated body parts of animals in PASCAL VOC 2010 [8]. A network is proposed so that body parts can be ignored when they cannot be reliably detected. Liu et al. further extended the PASCAL dataset to the PASCAL semantic part dataset (PASPart), which has part-level annotations for 20 categories [21]. In [31], Song et al. proposed a two-stream FCN to extract 3D geometric features for segmenting car parts in the PASPart dataset. Although a few hundred images in the dataset contain cars, there are two major differences compared with our 2D and 3D datasets. Firstly, each car image in PASCAL contains a very limited number of high-resolution cars (e.g., one car per image), which makes the dataset unsuitable for autonomous driving applications. Secondly, only 2D parts are annotated in PASPart, while 3D poses and shapes are not available. In [23], Mo et al. presented PartNet, a large-scale dataset of 3D indoor objects with 3D part information. PartNet further shows that 3D data with part-level annotations are in high demand. However, outdoor objects such as cars are not included in that dataset.

Car Pose/Shape Estimation Car pose estimation could be done by using LiDAR scans, depth maps, image frames, and fusion of these modalities. In this paper, we focus on the 3D pose estimation using a single street view and skip the works using additional LiDAR scans or depth maps.

In [46], Zhu et al. trained a set of 2D part descriptors corresponding to selected 3D landmarks. These part descriptors are used to estimate 3D car shapes based on global geometric consistency. Zhou et al. proposed a convex relaxation approach to estimate 3D shape given a set of 2D key points [45] and a stacked hourglass network to localize these semantic key points [25]. Wang et al. proposed a network framework for 3D pose estimation aimed at fine-grained car categorization [38], along with fine-grained 3D car pose datasets that contain 2D images and 3D models. Poirson et al. proposed a network for detection and coarse pose estimation in a single shot without using parts or initial bounding boxes [26]. Su et al. proposed an image synthesis pipeline and deep networks for viewpoint estimation [33]. We note that many existing networks are designed for pose estimation of general objects. Although cars can be among these objects, the cars in most such images are clear, non-occluded, and relatively high-resolution, which is quite different from cars captured in street views.

The research on car pose and/or shape estimation from single street views is still limited [6, 24, 3, 41, 17]. Xiang et al. proposed a novel representation, the 3D voxel pattern, to encode appearance, shape, pose, and other properties [41]. In [6], Chen et al. densely search candidate bounding boxes in 3D space and use labeled class/instance semantics, contours, shape, location, and other priors to score the candidate boxes. Chabot et al. [3] proposed the DeepMANTA network to estimate 3D vehicle pose and shape. This network is a landmark-based approach, as it is learned from pre-defined 3D landmarks and 2D landmarks annotated on real street views; the car parts are represented by regions enclosed by the 3D landmarks. Mousavian et al. proposed a deep network with two branches for 3D bounding box estimation [24]: one branch estimates 3D orientation based on a discrete-continuous loss, and the other regresses the 3D dimensions.

The most related work is the 3D-RCNN network [17], which estimates both 3D shapes and orientations. Its key innovation is a differentiable render-and-compare loss that allows supervision from 2D annotations. There are three major differences between 3D-RCNN and our work. Firstly, the 3D translation, which can be the most critical information in autonomous driving, is not estimated or evaluated in 3D-RCNN. Secondly, 3D-RCNN is an approach based on instance masks; semantic parts of cars are not estimated. Thirdly, as the render-and-compare loss in 3D-RCNN uses depth maps when available (e.g., the Kitti dataset [10]), it remains unclear whether the network could achieve the same performance when depth maps are not used. Instead of relying on a large set of depth maps, our network minimizes 3D losses based on our 3D car models and outputs the 3D translation directly.

3 System Overview

As annotating car semantic parts on real street views is a highly time-consuming and error-prone task, the first challenge we need to address is the generation of part-level annotations. We propose a unique pipeline to generate 3D models and 2D synthesized images, as described in Section 4. Our reconstruction framework consists of two stages, as shown in Figure 2. In the first stage, we train a part segmentation network with part-level knowledge transferred from synthesized images to real street views based on the class consistency loss (Section 5). We train our pose and shape network in the second stage. The instance features and part features are concatenated for the estimation of pose and shape, and 3D losses are designed to further refine the estimation results. Part probability maps and their original coordinates are two important ingredients of our part features. Inter-part relations are implicitly extracted from the probability maps by convolutional blocks, while the coordinates further impose geometric information on the parts. The details are presented in Section 6.

Figure 2: An overview of our framework. It consists of two stages, part segmentation and pose/shape estimation, enclosed by green and grey boxes respectively; the concatenation symbol indicates feature concatenation.

4 Data Generation

Our dataset has two unique characteristics compared with many existing car datasets (e.g., ShapeNet [4] and synthesized datasets [28, 27]). Firstly, we annotate the 3D car models with semantic part information. We decompose each car into 70 exterior parts, from large parts such as the front doors and roof, to small parts such as door handles and the car logo. Secondly, all the 3D car models in our dataset are accurately aligned with major physical dimensions, including wheelbase, front/rear overhang, track width, overall width/height/length, and so on.

In order to further reduce the cost of annotating 3D parts, we propose a procedure, shown in Figure 3, to transfer parts and textures from annotated models to un-annotated models, and we then further extend the dataset with more models so that it covers a majority of car shapes on streets.

Template Selection We manually divide a large set of vehicles into categories based on vehicle geometries. For instance, we group most cars, SUVs, and minivans with four doors into one category and then select a commonly seen car as the template for this category. We find that any vehicle within the same category can serve as the template without affecting the remaining steps of the procedure.

Dense Correspondences

We develop an algorithm to align each model in a category to the selected template. Most car models are built with non-uniform mesh grids. Therefore, in our first step, we repair and re-mesh the car models so that the point clouds are uniformly distributed with a point-to-point distance of around one centimeter. Second, we apply a global alignment (i.e., iterative closest point [1]) to align the two point clouds using translation, rotation, and scaling operations. This provides a rough alignment between the two point clouds. Third, we apply embedded deformation [34, 32] to construct a deformation graph and align the model with the template. After these three steps, we are able to build dense correspondences between the template and the point cloud of another model in the category. As a result, we are able to transfer part and texture information from 40 annotated models to more than 300 un-annotated models.
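The rigid step of this pipeline can be sketched as a standard ICP loop: alternate nearest-neighbor matching with a closed-form similarity fit (Umeyama's method, which recovers translation, rotation, and uniform scale). This is only a minimal illustration of the global alignment stage; the function names are our own, and the paper's embedded-deformation refinement is not shown.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_similarity(src, dst):
    """Closed-form similarity transform (s, R, t) minimizing ||s*R*src + t - dst||."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))        # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    var_s = (xs ** 2).sum() / len(src)
    s = np.trace(np.diag(S) @ D) / var_s
    t = mu_d - s * (R @ mu_s)
    return s, R, t

def rigid_icp(source, target, iters=30):
    """Iteratively match each source point to its nearest target point and re-fit."""
    tree = cKDTree(target)
    cur = source.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)              # nearest-neighbor correspondences
        s, R, t = best_similarity(cur, target[idx])
        cur = s * (cur @ R.T) + t
    return cur
```

With a reasonable initial pose (as after the paper's re-meshing step), the loop converges to a rough alignment that the embedded deformation can then refine non-rigidly.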

Figure 3: Illustration of procedure to build dense correspondences.

3D Shape Space We use PCA to find a shape basis for each category of densely corresponded point clouds. New car models can be generated by providing new 22-dimensional PCA parameters. Based on the procedure for estimating dense correspondences, we are also able to transfer part and texture information to the generated models.
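Since all models in a category are densely corresponded, each one can be flattened into a fixed-length vector and a PCA basis fit across the category. A minimal sketch of this shape space (function names are our own; the paper's 22-dimensional parameterization is used as the default):

```python
import numpy as np

def build_shape_basis(models, k=22):
    """models: (m, n*3) matrix, each row a flattened, densely corresponded point cloud."""
    mean = models.mean(axis=0)
    X = models - mean
    # SVD of the centered data yields the principal shape directions.
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return mean, Vt[:k]                 # mean shape and up-to-k x (n*3) basis

def synthesize(mean, basis, params):
    """Generate a new car model from a low-dimensional parameter vector."""
    return mean + params @ basis
```

Projecting a new aligned scan onto the basis gives its shape parameters; since the basis rows share the template's point ordering, part labels and textures transfer to any synthesized model for free.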

Image Synthesis As we focus on segmentation in street views, we need to take a number of factors into consideration, such as illumination, occlusion, texture, and other related objects. Specifically, for each image, we randomly select a number of car models from our 3D dataset, together with other objects such as pedestrians, cyclists, and traffic cones, similar to the approach proposed in [35]. After applying collision detection, we randomly place them on the ground plane. Background images are selected from a large image collection that includes backgrounds cropped from real street views. We generate 5 to 20 lighting sources for each scene and place them at random locations with random orientations. Figure 4 shows examples of our 3D/2D datasets and the distributions of orientations and center distances.
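The placement step above can be sketched as rejection sampling on the ground plane. Here 2D bounding circles stand in for the paper's full 3D collision tests; the function name, ranges, and radii are hypothetical.

```python
import random

def place_objects(radii, x_range=(-10.0, 10.0), z_range=(5.0, 50.0), max_tries=100):
    """Sample ground-plane (x, z) positions so bounding circles never overlap.
    radii: footprint radius in meters for each object to place."""
    placed = []
    for r in radii:
        for _ in range(max_tries):
            x = random.uniform(*x_range)
            z = random.uniform(*z_range)
            # accept only if the new circle is disjoint from every placed circle
            if all((x - px) ** 2 + (z - pz) ** 2 >= (r + pr) ** 2
                   for px, pz, pr in placed):
                placed.append((x, z, r))
                break
    return placed
```

Objects that cannot be placed within `max_tries` attempts are simply skipped, which keeps crowded scenes physically plausible.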

Figure 4: (left) Examples of 3D car models. (right) Examples of synthesized images. (bottom) Distributions of orientations and center distances.

5 Part Segmentation

We first choose the ResNet-38 network [40], pre-trained on ImageNet, as our backbone and train part segmentation on the synthesized images. As there is still a domain discrepancy between synthesized images and real street views, the network performance on real street views is poor. One straightforward solution is to explicitly measure the domain discrepancy, e.g., with maximum mean discrepancy (MMD) or a domain classifier [22, 36, 9]. However, as mentioned in [36], one major drawback of GAN-based approaches is that the networks can be difficult to train.

Based on experiments on our synthesized data, we find that a feature encoder trained for a binary car classifier cannot be used directly as the feature encoder for a part classifier. However, a pixel that has been classified as one of the parts is likely to also be classified as the car class. This suggests that features learned at the part level can often serve as features at a higher hierarchical level, while the converse does not hold.

Although we do not have part-level annotations on real images, we do have real images annotated at a higher hierarchical level, i.e., the car class. Such class-level annotations can be easily obtained from open datasets such as Cityscapes [7] and ApolloScape [15]. As a result, instead of using general approaches such as GAN-based networks, we propose an approach that implicitly transfers part features from the source to the target domain through two specific tasks (i.e., part and instance segmentation), which makes our network easy to train and lightweight. Specifically, we bridge the gap between real and simulated data with a class consistency loss.

Our implicit part transfer approach is shown in Figure 2. res5c from the ResNet-38 network provides the encoded features. Let $I_s$ and $I_t$ denote the input images, $Y_s$ and $Y_t$ the class-level annotations in the source (synthesized) and target (real) domains respectively, and $P_s$ the part-level annotations in the source domain. We minimize the loss:

$$L(\theta_f, \theta_c, \theta_p) = L_{part}(I_s, P_s) + \lambda_s L_{cls}(I_s, Y_s) + \lambda_t L_{cls}(I_t, Y_t),$$

where $\theta_f$ denotes the shared parameters of the feature encoder, $\theta_c$ and $\theta_p$ represent the parameters of the car classifier and the part classifier respectively, $L_{part}$ is the part loss in the source domain, and $L_{cls}(I_s, Y_s)$ and $L_{cls}(I_t, Y_t)$ are the class consistency losses in the source and target domains. $\lambda_s$ and $\lambda_t$ are the weights of the two consistency losses, both set to 1 in our training stage.
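One way to read the class consistency idea above: summing the predicted part probabilities at a pixel yields a car-class probability that can be supervised with the cheap class-level mask, even on real images that lack part labels. A minimal NumPy sketch under that reading (function names, channel layout, and the exact aggregation are our assumptions, not the paper's stated implementation):

```python
import numpy as np

def part_ce(probs, labels):
    """Cross-entropy part loss. probs: (H, W, P+1) softmax over background + P parts;
    labels: (H, W) integer part indices (0 = background). Source domain only."""
    h, w = labels.shape
    return -np.log(probs[np.arange(h)[:, None], np.arange(w), labels] + 1e-9).mean()

def class_consistency(probs, car_mask):
    """Aggregate part probabilities into a car probability (assumption: sum over
    part channels) and score it against the class-level car mask."""
    p_car = probs[..., 1:].sum(axis=-1)          # channel 0 = background
    return -(car_mask * np.log(p_car + 1e-9)
             + (1 - car_mask) * np.log(1 - p_car + 1e-9)).mean()

def total_loss(src_probs, src_parts, src_mask, tgt_probs, tgt_mask, ls=1.0, lt=1.0):
    """Part loss on the source domain plus class consistency in both domains."""
    return (part_ce(src_probs, src_parts)
            + ls * class_consistency(src_probs, src_mask)
            + lt * class_consistency(tgt_probs, tgt_mask))
```

The target-domain term is what implicitly pushes the shared encoder to produce part features that remain consistent with the car class on real street views.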

6 Pose Estimation and Shape Reconstruction

The network architecture is shown in Figure 2, with details in Figure 5. The Mask-RCNN network [12] is used as our backbone. Instance features are the feature maps output by the RoI align layer. Our part features consist of three components: instance probability, part probability, and part coordinates. Instance probability maps are output by the deconvolution layer in the backbone network, and part probability maps are predicted by our part segmentation network. In order to reduce interference from similar parts of surrounding cars, we impose the instance probability map as a soft constraint on the parts, which is done by concatenating these two types of maps. Convolutional blocks are then applied in order to learn inter-part relations. The RoI align layer is modified to record the original coordinates of the semantic parts, which are further normalized with the origin located at the image center. The coordinates provide 2D geometric information for the parts, which is particularly useful for learning 3D pose.
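The coordinate normalization described above can be sketched as a simple remapping of pixel coordinates so the origin sits at the image center. The target range is not recoverable from the text, so a symmetric range of [-1, 1] is assumed here, and the function name is our own.

```python
def normalize_coords(us, vs, width, height):
    """Map pixel coordinates to a centered range (assumed [-1, 1]),
    so (0, 0) lands at the image center regardless of RoI cropping."""
    nus = [(2 * u - width) / width for u in us]
    nvs = [(2 * v - height) / height for v in vs]
    return nus, nvs
```

Recording these values per RoI lets the pose branch see where a part sits in the full image, information that cropped-and-resized features alone discard.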

Figure 5: The details of the network for estimation of translation, rotation, shape, and semantic parts in 3D space. Blue boxes represent network layers and modules, yellow boxes represent data entities, and orange boxes represent losses; the remaining symbols indicate addition and multiplication.

As mentioned in [17], it is fundamentally ill-posed to estimate the 3D distance of the car center from cropped and resized RoI features. As a result, distance is not estimated or evaluated in [17]. Both Deep3DBox [24] and DeepMANTA [3] estimate 3D distance in a post-processing step. In [3], 3D distance is estimated using the classic PnP algorithm [18]. In [24], 3D distance is estimated by SVD decomposition under the assumption that the projection of the 3D bounding box fits tightly to its 2D detection box. However, this assumption requires very accurate 2D detection results, which cannot be guaranteed in general due to occlusions and truncations. As our 3D car models have physical dimensions and our part features contain inter-part relations and part geometric information, our network is designed to estimate the 3D car distance directly.

Similar to [17, 24], we first apply classification by dividing the parameter spaces into multiple bins, and then apply regression within each bin. The direct loss is given by

$$L_{direct} = \|c - c^*\| + \|s - s^*\| + \sum_{\theta}\left(L_{conf}^{\theta} + \|\bar{\theta}_i + \Delta\theta_i - \theta^*\|\right) + L_{conf}^{d} + \|\bar{d}_i + \Delta d_i - d^*\|.$$

In the first term, $c$ and $c^*$ represent the 2D projections of the car center from the network and the ground truth respectively. The second term is the loss of the 22-dimensional shape parameters, where $s$ is the network output and $s^*$ is the ground truth. The third and fourth terms are the losses for the pose angles and the distance of the car center. The confidence loss $L_{conf}$ is the softmax loss over the confidences of the angle or distance bins. $\bar{\theta}_i$ in the third term is the average value of bin $i$, $\Delta\theta_i$ are the network outputs representing the regressed residuals for each bin, and $\theta^*$ is the ground truth angle; $\theta$ ranges over the azimuth, elevation, and tilt angles. In the fourth term, $\bar{d}_i$ is the bin average distance, $\Delta d_i$ is the regressed value for the bin, and $d^*$ is the ground truth distance.
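The bin-then-regress scheme for an angle can be sketched as an encode/decode pair: classification picks the nearest bin, and regression predicts the residual to that bin's representative value. The bin count and bin centers below are illustrative choices, not values from the paper.

```python
import numpy as np

NUM_BINS = 8  # illustrative; the paper does not state its bin count here
CENTERS = np.arange(NUM_BINS) * 2 * np.pi / NUM_BINS   # per-bin representative angles

def encode(angle):
    """Return (bin index, signed residual to that bin's center)."""
    # wrap differences into (-pi, pi] so the nearest bin is found circularly
    diff = (angle - CENTERS + np.pi) % (2 * np.pi) - np.pi
    b = int(np.argmin(np.abs(diff)))
    return b, diff[b]

def decode(b, residual):
    """Invert encode: bin center plus regressed residual, wrapped to [0, 2*pi)."""
    return (CENTERS[b] + residual) % (2 * np.pi)
```

At inference, the bin with the highest confidence is chosen and its regressed residual is added back, exactly the decode step; the same scheme applies to the distance bins.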

As the direct loss alone may not be sufficient for the final pose and shape estimation, we propose 3D losses given by

$$L_{3D} = \frac{1}{N}\sum_{i=1}^{N}\left\|(R v_i + T) - (R^* v_i^* + T^*)\right\|,$$

where $v_i$ and $v_i^*$ represent the 3D coordinates of the $i$-th estimated and ground truth vertex of the car model with $N$ vertices. $v_i$ is computed from the estimated shape parameters and the PCA basis built from the 348 car models. $T$ is the estimated 3D translation and $T^*$ is the ground truth 3D translation. Similar to the allocentric representation in [17], the rotation matrix is decomposed as $R = R_c R_a$ during training, where $R_c$ is the rotation from the camera principal axis to the ray passing through the projection of the car center, and $R_a$ is the allocentric rotation; $R_c^*$ and $R_a^*$ are the corresponding ground-truth matrices. The final loss is the weighted sum of the direct loss and $L_{3D}$.
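Two pieces of this loss can be sketched directly: the rotation that carries the camera principal axis onto the ray through the projected car center (a standard Rodrigues construction), and the mean per-vertex distance between the posed estimated and ground-truth models. Function names are our own, and the sketch assumes row-vector point clouds.

```python
import numpy as np

def rotation_to_ray(ray):
    """Rotation taking the camera principal axis (0, 0, 1) onto the unit ray
    through the projected car center (Rodrigues' formula; undefined for the
    exactly backward-facing ray, which cannot occur for a visible car)."""
    z = np.array([0.0, 0.0, 1.0])
    r = ray / np.linalg.norm(ray)
    v = np.cross(z, r)
    c = z @ r
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

def per_vertex_loss(V, R, T, V_gt, R_gt, T_gt):
    """Mean 3D distance between estimated and ground-truth posed vertices.
    V, V_gt: (N, 3) vertex arrays; R, R_gt: 3x3 rotations; T, T_gt: translations."""
    return np.linalg.norm((V @ R.T + T) - (V_gt @ R_gt.T + T_gt), axis=1).mean()
```

Because the loss is taken over posed vertices, errors in shape, rotation, and translation are penalized jointly in metric units rather than in separate parameter spaces.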

7 Experiments

As our network aims to output more complete car information at once, it requires a more comprehensive dataset for training and testing. The Kitti dataset provides 3D information; however, its roughly 200 instance annotations may not be sufficient to train a deep network to good performance. The Cityscapes dataset contains 25,000 2D images with instance-level annotations; however, 3D information is not available. As a result, we choose the recently released ApolloScape dataset, which contains both 3D information and 2D images with instance-level annotations [15, 30]. We also note that there are 3.8 cars per image on average in the 7,481 Kitti training images, versus 11.1 cars per image on average in the 4,236 training images of the ApolloCar3D dataset. In general, more cars per image implies higher scene complexity. We conduct the following experiments to demonstrate the performance of our work.

7.1 Part-level Segmentation

In the training stage, we randomly selected 5,000 synthesized images with part-level annotations and 5,000 images with car instance annotations from the ApolloScape dataset. In order to evaluate our part transfer approach, we first select 1,700 synthesized images for testing. We further selected 200 real images from the ApolloScape dataset as testing images and manually annotated them at the part level. In these 200 real images, there are around 8 cars per image, and the minimum and maximum car heights are 23 and 500 pixels respectively. We use the pre-trained model of the ResNet-38 network [40] and train our network end-to-end for 100 epochs with all parameters unfrozen. 13 semantic parts are used in training and testing: front light, front part, tail light, rear part, door, roof, roof rack, hood, mirror, side window, front window, rear window, and wheel/tire. The front part is a region including the front bumper, front car logo, grilles, and so on. Similarly, the rear part includes a set of parts in the rear region of a car.

The intersection-over-union (IoU) scores for individual parts are given in Table 1. When we train the network without our transfer approach, the parsing performance on real images is poor. One possible reason is that the lighting conditions in the synthesized data and the real images are still quite different, which enlarges the domain gap. With our transfer approach, the segmentation performance improves by 35.5 percentage points (from 11.7 to 47.2 mIoU). We illustrate some examples of our results in Figure 6.
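For reference, the per-part IoU used throughout Table 1 can be computed as below; this is the standard definition, with background excluded and undefined classes skipped when averaging (the function name is our own).

```python
import numpy as np

def per_part_iou(pred, gt, num_parts):
    """pred, gt: (H, W) integer part labels with 0 = background.
    Returns one IoU per part; mIoU is the nan-mean of the list."""
    ious = []
    for p in range(1, num_parts + 1):
        inter = np.logical_and(pred == p, gt == p).sum()
        union = np.logical_or(pred == p, gt == p).sum()
        ious.append(inter / union if union else float('nan'))
    return ious
```

Classes absent from both prediction and ground truth yield NaN and drop out of the mean, so rare parts such as roof rack do not artificially inflate or deflate the score on images where they never appear.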

We also observe that rear part, roof, and rear window have the highest IoUs (0.739, 0.654, and 0.675), which are relatively close to the IoUs evaluated on synthesized data (0.878, 0.753, and 0.859). The reason is that these three parts appear with very high frequency in the real domain, so the domain knowledge for them is easily transferred. This can be considered a data imbalance problem that could be mitigated by re-weighting and re-sampling strategies, which is one of our future directions.

To our knowledge, our work is the first study of part segmentation of cars in street views, covering both the proposed problem and its solution based on implicit part transfer. As a result, it is difficult to compare with other approaches at the same level. In terms of the part segmentation network itself, as shown in Table 1, we re-trained the part grouping network for human body part segmentation [11] on our dataset for 100 epochs. Without part transfer, this network obtains an mIoU of 11.1, slightly lower than the 11.7 of our segmentation backbone without transfer.

train syn. syn. syn. syn.+PT
test syn. real real real
method ours [11] ours ours
front light 78.1 7.7 8.2 25.1
front part 86.8 7.0 9.7 52.6
tail light 81.5 22.4 18.4 35.9
rear part 87.8 22.8 17.3 73.9
door 86.9 11.8 8.0 52.3
roof 75.3 6.2 15.8 65.4
roof rack 86.2 0.8 3.4 17.4
hood 84.8 9.6 14.2 47.9
mirror 71.9 2.6 3.3 40.4
side window 88.6 15.5 15.2 40.1
front window 86.0 16.5 15.5 54.0
rear window 85.9 19.1 23.1 67.5
wheel/tire 86.1 2.1 0.4 42.1
mIoU 83.5 11.1 11.7 47.2
Table 1: The IoU scores for part-level parsing. The second column contains results tested on the synthesized images. The third and fourth columns are the results on real images from [11] and our backbone network without applying our part transfer approach. The fifth column contains results based on our part transfer approach (syn.+PT).
Figure 6: The examples of part parsing results. The first column contains the input images selected from ApolloScape dataset (cropped for visualization). The second column contains parsing results without part transfer. The third column contains results based on our part transfer approach. The fourth column contains the ground truth annotations.

7.2 Pose and Shape Estimation

The ApolloCar3D dataset [30] contains 5,277 images (4,036 for training, 200 for validation, and 1,041 for testing), which is a part of the ApolloScape dataset [15]. Figure 7 shows some results generated by our pose and shape network.

Figure 7: Examples of results generated by our network for pose estimation and shape reconstruction. The 1st column contains the original input images (cropped for visualization). The remaining columns contains the overlay effects of the 3D models on the input images for 3D-RCNN (2nd column), DeepMANTA (3rd column), and our network (4th column) [30, 17, 3].

As mentioned in [30], average orientation similarity (AOS), 3D bounding box IoU, and average viewpoint precision (AVP) [10, 42] measure very coarse 3D properties. Thus, new evaluation metrics ("A3DP-Abs" and "A3DP-Rel") are proposed in the ApolloCar3D dataset to jointly measure object shape, rotation, and translation based on a set of thresholds. "Abs" and "Rel" indicate absolute and relative translation thresholds respectively, and "c-l" and "c-s" indicate loose and strict criteria respectively (refer to [30] for more details). Table 2 shows ablation studies on the validation set. We remove the part-level representations and 3D losses from our framework to form the baseline model. Notice that part-level representations greatly improve the performance of pose and shape estimation, and the 3D losses further improve the estimation results.


Methods A3DP-Abs A3DP-Rel
mean c-l c-s mean c-l c-s
Baseline 23.07 42.58 32.67 17.23 32.67 22.77
Baseline+Part 29.11 51.49 40.59 23.07 41.58 32.67
Baseline+Part+3D 32.87 50.50 41.58 25.45 41.58 31.68


Table 2: Ablation results on the validation set. "c-l" and "c-s" indicate loose and strict criteria respectively, as defined in [30].

Table 3 shows comparison results with the state-of-the-art algorithms [17, 3, 30] on the testing images. Notice that the original 3D-RCNN [17], which does not estimate 3D translation, was modified in [30] to regress translation, rotation, and shape. The "human" performance in [30] is generated by trained workers and a context-aware 3D solver. Our network significantly outperforms the 3D-RCNN and DeepMANTA algorithms (by 12.57 and 8.91 percentage points respectively in terms of mean A3DP-Abs). Notice that the results of "human", 3D-RCNN, and DeepMANTA are based on ground truth instance masks, while our network uses predicted instance masks.


Methods A3DP-Abs A3DP-Rel
mean c-l c-s mean c-l c-s
Human 38.22 56.44 49.50 33.27 51.49 41.58
3D-RCNN 16.44 29.70 19.80 10.79 17.82 11.88
DeepMANTA 20.10 30.69 23.76 16.04 23.76 19.80
Our network 29.01 50.50 40.59 23.66 41.58 32.67


Table 3: Comparison with "human" performance and the 3D-RCNN and DeepMANTA algorithms [17, 3, 30] on 1,041 testing images. "Human", 3D-RCNN, and DeepMANTA are based on ground truth instance masks, while our network is based on predicted instance masks. "Human" and DeepMANTA also used predicted 2D landmarks.

We further adopt AOS and orientation score (OS) to evaluate car orientation as a complementary comparison. Table 4 shows the comparison with 3D-RCNN. As all occlusion levels and truncations are included, our comparison can be more challenging than the hard level of the Kitti benchmark (where, e.g., maximum truncation is set to 50%) [10]. For cars we also require an overlap of 70% for a detection. In order to obtain a fair comparison, we fix the 2D bounding box detection results for both methods. Our network obtains 4.19 and 5.27 percentage point improvements in terms of AOS and OS respectively.
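The orientation term shared by both metrics can be sketched directly: each matched detection contributes a similarity of (1 + cos Δθ)/2, as in the Kitti AOS definition. The averaging helper below is a simplification for already-matched detections (AOS additionally weights by detection precision across recall levels); function names are our own.

```python
import math

def orientation_similarity(pred, gt):
    """(1 + cos(pred - gt)) / 2: 1 for a perfect orientation, 0 for the opposite one."""
    return (1.0 + math.cos(pred - gt)) / 2.0

def orientation_score(preds, gts):
    """Mean similarity over matched detection/ground-truth angle pairs (radians)."""
    return sum(orientation_similarity(p, g) for p, g in zip(preds, gts)) / len(preds)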


Methods AP AOS OS
3D-RCNN 79.58 73.70 92.61
Our Network 79.58 77.89 97.88


Table 4: Evaluation of car orientation using average precision (AP) for detection, average orientation similarity (AOS), and orientation score (OS). AP for 3D detection is fixed for a fair comparison. All the occlusions and truncations are included.

Notice that our pose and shape network not only outputs translation and orientation but also reconstructs 3D car shapes, in the form of point clouds or triangle meshes, along with the 70 original semantic parts.

8 Conclusions and Future Works

We are the first to propose a unique procedure to create fine-grained car annotations at the part level, together with a novel framework that utilizes part-level representations for both domain transfer and car pose and shape estimation. We demonstrate that models leveraging part-level representations can be more robust and accurate than approaches based only on instance-level features, key points, and 2D bounding boxes, especially when occlusions and truncations are present. In future work, we plan to explore more approaches to improve part-level segmentation, such as using inter-part relations. We would also like to work on different problems, such as fine-grained car recognition, based on part-level information.


  • [1] P. J. Besl and N. D. McKay (1992) Method for registration of 3-d shapes. In Sensor Fusion IV: Control Paradigms and Data Structures, Vol. 1611, pp. 586–607. Cited by: §4.
  • [2] I. Biederman (1987) Recognition-by-components: a theory of human image understanding.. Psychological review 94 (2), pp. 115. Cited by: §1.
  • [3] F. Chabot, M. Chaouch, J. Rabarisoa, C. Teulière, and T. Chateau (2017) Deep manta: a coarse-to-fine many-task network for joint 2d and 3d vehicle analysis from monocular image. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2040–2049. Cited by: Part-level Car Parsing and Reconstruction from a Single Street View, §1, §2, §6, Figure 7, §7.2, Table 3.
  • [4] A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, J. Xiao, L. Yi, and F. Yu (2015) ShapeNet: An Information-Rich 3D Model Repository. Technical report Technical Report arXiv:1512.03012 [cs.GR], Stanford University — Princeton University — Toyota Technological Institute at Chicago. Cited by: §4.
  • [5] X. Chen, R. Mottaghi, X. Liu, S. Fidler, R. Urtasun, and A. Yuille (2014) Detect what you can: detecting and representing objects using holistic models and body parts. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1971–1978. Cited by: §1, §2.
  • [6] X. Chen, K. Kundu, Z. Zhang, H. Ma, S. Fidler, and R. Urtasun (2016) Monocular 3d object detection for autonomous driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2147–2156. Cited by: §1, §2.
  • [7] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele (2016) The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3213–3223. Cited by: §5.
  • [8] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman (2010) The pascal visual object classes (voc) challenge. International journal of computer vision 88 (2), pp. 303–338. Cited by: §2.
  • [9] Y. Ganin and V. Lempitsky (2014) Unsupervised domain adaptation by backpropagation. arXiv preprint arXiv:1409.7495. Cited by: §5.
  • [10] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun (2013) Vision meets robotics: the kitti dataset. The International Journal of Robotics Research 32 (11), pp. 1231–1237. Cited by: §2, §7.2, §7.2.
  • [11] K. Gong, X. Liang, Y. Li, Y. Chen, M. Yang, and L. Lin (2018) Instance-level human parsing via part grouping network. arXiv preprint arXiv:1808.00157. Cited by: §1, §7.1, Table 1.
  • [12] K. He, G. Gkioxari, P. Dollár, and R. Girshick (2017) Mask r-cnn. In Computer Vision (ICCV), 2017 IEEE International Conference on, pp. 2980–2988. Cited by: §6.
  • [13] D. D. Hoffman and W. A. Richards (1984) Parts of recognition. Cognition 18 (1-3), pp. 65–96. Cited by: §1.
  • [14] S. Huang, Z. Xu, D. Tao, and Y. Zhang (2016) Part-stacked cnn for fine-grained visual categorization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1173–1182. Cited by: §1.
  • [15] X. Huang, X. Cheng, Q. Geng, B. Cao, D. Zhou, P. Wang, Y. Lin, and R. Yang (2018) The apolloscape dataset for autonomous driving. arXiv: 1803.06184. Cited by: §5, §7.2, §7.
  • [16] M. M. Kalayeh, B. Gong, and M. Shah (2017) Improving facial attribute prediction using semantic segmentation. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pp. 4227–4235. Cited by: §1, §2.
  • [17] A. Kundu, Y. Li, and J. M. Rehg (2018) 3D-rcnn: instance-level 3d object reconstruction via render-and-compare. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3559–3568. Cited by: Part-level Car Parsing and Reconstruction from a Single Street View, §2, §2, §6, §6, §6, Figure 7, §7.2, Table 3.
  • [18] V. Lepetit, F. Moreno-Noguer, and P. Fua (2009) Epnp: an accurate o (n) solution to the pnp problem. International journal of computer vision 81 (2), pp. 155. Cited by: §6.
  • [19] X. Liang, X. Shen, D. Xiang, J. Feng, L. Lin, and S. Yan (2016) Semantic object parsing with local-global long short-term memory. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3185–3193. Cited by: §2.
  • [20] S. Liu, J. Shi, J. Liang, and M. Yang (2017) Face parsing via recurrent propagation. arXiv preprint arXiv:1708.01936. Cited by: §1, §2.
  • [21] X. Liu, N. Cho, P. Wang, X. Lian, J. Mao, A. Yuille, and S. Lee (2014) PASCAL Semantic Part: dataset and benchmark. Note: [Online; accessed 16-November-2018]. Cited by: §2.
  • [22] M. Long, Y. Cao, J. Wang, and M. I. Jordan (2015) Learning transferable features with deep adaptation networks. arXiv preprint arXiv:1502.02791. Cited by: §5.
  • [23] K. Mo, S. Zhu, A. X. Chang, L. Yi, S. Tripathi, L. J. Guibas, and H. Su (2019) PartNet: a large-scale benchmark for fine-grained and hierarchical part-level 3d object understanding. IEEE Conference on Computer Vision and Pattern Recognition. Cited by: §1, §2.
  • [24] A. Mousavian, D. Anguelov, J. Flynn, and J. Košecká (2017) 3d bounding box estimation using deep learning and geometry. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pp. 5632–5640. Cited by: §1, §2, §6, §6.
  • [25] G. Pavlakos, X. Zhou, A. Chan, K. G. Derpanis, and K. Daniilidis (2017) 6-dof object pose from semantic keypoints. In Robotics and Automation (ICRA), 2017 IEEE International Conference on, pp. 2011–2018. Cited by: §1, §2.
  • [26] P. Poirson, P. Ammirato, C. Fu, W. Liu, J. Kosecka, and A. C. Berg (2016) Fast single shot detection and pose estimation. In 3D Vision (3DV), 2016 Fourth International Conference on, pp. 676–684. Cited by: §1, §2.
  • [27] S. R. Richter, Z. Hayder, and V. Koltun (2017) Playing for benchmarks. In International Conference on Computer Vision (ICCV), Cited by: §4.
  • [28] G. Ros, L. Sellart, J. Materzynska, D. Vazquez, and A. M. Lopez (2016) The synthia dataset: a large collection of synthetic images for semantic segmentation of urban scenes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3234–3243. Cited by: §4.
  • [29] B. M. Smith, L. Zhang, J. Brandt, Z. Lin, and J. Yang (2013) Exemplar-based face parsing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3484–3491. Cited by: §1, §2.
  • [30] X. Song, P. Wang, D. Zhou, R. Zhu, C. Guan, Y. Dai, H. Su, H. Li, and R. Yang (2019) ApolloCar3D: a large 3d car instance understanding benchmark for autonomous driving. IEEE Conference on Computer Vision and Pattern Recognition. Cited by: Part-level Car Parsing and Reconstruction from a Single Street View, Figure 1, Figure 7, §7.2, §7.2, §7.2, Table 2, Table 3, §7.
  • [31] Y. Song, X. Chen, J. Li, and Q. Zhao (2017) Embedding 3d geometric features for rigid object part segmentation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 580–588. Cited by: §2.
  • [32] O. Sorkine and M. Alexa (2007) As-rigid-as-possible surface modeling. In Symposium on Geometry processing, Vol. 4, pp. 109–116. Cited by: §4.
  • [33] H. Su, C. R. Qi, Y. Li, and L. J. Guibas (2015) Render for cnn: viewpoint estimation in images using cnns trained with rendered 3d model views. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2686–2694. Cited by: §1, §2.
  • [34] R. W. Sumner, J. Schmid, and M. Pauly (2007) Embedded deformation for shape manipulation. In ACM Transactions on Graphics (TOG), Vol. 26, pp. 80. Cited by: §4.
  • [35] J. Tremblay, A. Prakash, D. Acuna, M. Brophy, V. Jampani, C. Anil, T. To, E. Cameracci, S. Boochoon, and S. Birchfield (2018) Training deep networks with synthetic data: bridging the reality gap by domain randomization. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshop on Autonomous Driving. Cited by: §4.
  • [36] E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell (2017) Adversarial discriminative domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7167–7176. Cited by: §5.
  • [37] G. Varol, J. Romero, X. Martin, N. Mahmood, M. J. Black, I. Laptev, and C. Schmid (2017) Learning from synthetic humans. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), pp. 4627–4635. Cited by: §1, §2.
  • [38] P. Wang, X. Shen, Z. Lin, S. Cohen, B. Price, and A. L. Yuille (2015) Joint object and part segmentation using deep learned potentials. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1573–1581. Cited by: §1, §1, §2.
  • [39] Y. Wang, X. Tan, Y. Yang, X. Liu, E. Ding, F. Zhou, and L. S. Davis (2018) 3D pose estimation for fine-grained object categories. arXiv preprint arXiv:1806.04314. Cited by: §1.
  • [40] Z. Wu, C. Shen, and A. v. d. Hengel (2016) Wider or deeper: revisiting the resnet model for visual recognition. arXiv preprint arXiv:1611.10080. Cited by: §5, §7.1.
  • [41] Y. Xiang, W. Choi, Y. Lin, and S. Savarese (2015) Data-driven 3d voxel patterns for object category recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1903–1911. Cited by: §1, §2.
  • [42] Y. Xiang, R. Mottaghi, and S. Savarese (2014) Beyond pascal: a benchmark for 3d object detection in the wild. In IEEE Winter Conference on Applications of Computer Vision, pp. 75–82. Cited by: §7.2.
  • [43] Y. Xiang, T. Schmidt, V. Narayanan, and D. Fox (2017) PoseCNN: a convolutional neural network for 6d object pose estimation in cluttered scenes. arXiv preprint arXiv:1711.00199. Cited by: §2.
  • [44] L. Yang, P. Luo, C. Change Loy, and X. Tang (2015) A large-scale car dataset for fine-grained categorization and verification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3973–3981. Cited by: §1.
  • [45] X. Zhou, M. Zhu, S. Leonardos, and K. Daniilidis (2017) Sparse representation for 3d shape estimation: a convex relaxation approach. IEEE transactions on pattern analysis and machine intelligence 39 (8), pp. 1648–1661. Cited by: §1, §2.
  • [46] M. Zhu, X. Zhou, and K. Daniilidis (2015) Single image pop-up from discriminatively learned parts. In Proceedings of the IEEE International Conference on Computer Vision, pp. 927–935. Cited by: §1, §2.

Supplementary Material

1 Semantic Parts

In this section, we provide the complete list of 70 semantic parts annotated in the 3D dataset. There are four major categories: light, body, window, and other parts. Table 5 shows the details of these parts.


category class
light left headlight, left fog light, right headlight, right fog light, left tail light, right tail light
body front left door, front right door, rear left door, rear right door, left side sill, right side sill, roof, hood, tailgate, front bumper, rear bumper, fuel door, left mirror, right mirror, front left fender, front right fender, rear left fender, rear right fender, front left door handle, front right door handle, rear left door handle, rear right door handle, front car logo, rear car logo, A/B pillar, chassis, grilles
window windscreen wiper, rear window wiper, windscreen, rear window, front left door window, rear left side window, rear left quarter glass, rear right side window, front right door window, rear right quarter glass, rear left quarter glass on door, rear right quarter glass on door
others front left wheel/tire, rear left wheel/tire, front right wheel/tire, rear right wheel/tire, antenna, exhaust(pipe), spare tire, roof rack/taxi display, left side step pedal, right side step pedal, rear left fender II, rear left door II, rear left spoiler, rear right spoiler, rear right fender II, rear right door II, rear heat sink, left A pillar II, right A pillar II


Table 5: A complete list of 70 semantic parts.

2 Results of Pose and Shape Estimation

In Figures 8 and 9, we show more results of pose and shape estimation. Compared with the KITTI and Cityscapes datasets, images from the ApolloScape dataset often contain more cars and larger illumination variations.

Figure 8: Results of pose and shape estimation. The first and third images are testing images from the ApolloCar3D dataset (cropped for visualization). The second and fourth images contain the estimated 3D models with semantic parts.
Figure 9: More results of pose and shape estimation. The first column contains the testing images from the ApolloCar3D dataset (cropped for visualization). The second column contains estimated 3D models with semantic parts.