Recent significant success on visual recognition, such as image classification, semantic segmentation, object detection, and instance segmentation, has been achieved with the advent of deep convolutional neural networks (CNNs). Their capability of handling geometric transformations mostly comes from extensive data augmentation and large model capacity [19, 15, 31], leaving them with a limited ability to deal with severe geometric variations, e.g., object scale, viewpoint, and part deformation. To address this, several modules have been proposed to explicitly handle geometric deformations. Formally, they transform the input data by modeling a spatial transformation [16, 3, 20], e.g., an affine transformation, or by learning the offsets of sampling locations in the convolutional operators [42, 4]. However, all of these works only use visible features to handle geometric deformation in the 2D space, while viewpoint variations occur in the 3D space.
To address viewpoint variations, joint object detection and viewpoint estimation using CNNs [36, 35, 26, 6] has recently attracted interest. This involves first estimating the location and category of objects in an image, and then predicting the relative rigid transformation between the camera coordinates in the 3D space and each image coordinate in the 2D space. However, category classification and viewpoint estimation are inherently contradictory, since the former requires a view-invariant feature representation while the latter requires a view-specific one. Therefore, attaching a viewpoint estimation network to a conventional object detector in a multi-task fashion does not benefit either task, as demonstrated in several works [26, 7].
Recent studies on 3D object recognition have shown that object viewpoint information can improve recognition performance. Typically, they first represent a 3D object with a set of 2D rendered images, extract the features of each image from different viewpoints, and then aggregate them for object category classification [34, 1, 37]. By using multiple features from a set of predefined viewpoints, they effectively model shape deformations with respect to the viewpoint. However, they are not applicable in real-world scenarios, because we cannot access the invisible side of an object without a 3D model.
In this paper, we propose cylindrical convolutional networks (CCNs) for extracting view-specific features and using them to estimate object categories and viewpoints simultaneously, unlike conventional methods that share a single feature representation for both object categorization [30, 23, 21] and viewpoint estimation [35, 26, 6]. As illustrated in Fig. 1, the key idea is to extract a view-specific feature conditioned on the object viewpoint (i.e., azimuth) that encodes structural information at each viewpoint, as in 3D object recognition methods [34, 1, 37]. In addition, we present a new differentiable argmax operator, called sinusoidal soft-argmax, that handles the sinusoidal properties of the viewpoint to predict continuous values from the discretized viewpoint bins. We demonstrate the effectiveness of the proposed cylindrical convolutional networks on the joint object detection and viewpoint estimation task, achieving large improvements on the Pascal 3D+ and KITTI datasets.
2 Related Work
2D Geometric Invariance.
Conventional CNNs provide limited performance under geometric variations. To deal with geometric variations within CNNs, spatial transformer networks (STNs) offered a way to provide geometric invariance by warping features through a global transformation. Lin and Lucey proposed inverse compositional STNs that replace the feature warping with transformation parameter propagation, but they have a limited capability of handling local transformations. Therefore, several methods have been introduced that apply convolutional STNs at each location, estimate locally-varying geometric fields, or estimate spatial transformations in a recursive manner. Furthermore, to adaptively determine scales or receptive fields for visual recognition with fine localization, Dai et al. introduced two new modules, namely deformable convolution and deformable ROI pooling, that can model a geometric transformation for each object. As all of these techniques model geometric deformation in the projected 2D image only with visible appearance features, they lack robustness to viewpoint variation and still rely on extensive data augmentation.
Joint Category and Viewpoint Estimation.
Since the viewpoint of a 3D object is a continuous quantity, a natural way to estimate it is to set it up as a regression problem. Wang et al. tried to directly regress the viewpoint with a mean squared loss to manage its periodic characteristic. However, the regression approach cannot represent well the ambiguities that exist between different viewpoints of objects with symmetries or near symmetries. Thus, other works [36, 35] divide the angles into non-overlapping bins and treat viewpoint prediction as a classification problem, while relying on object localization from conventional methods (i.e., Fast R-CNN). Divon and Tal further proposed a unified framework that combines the tasks of object localization, categorization, and viewpoint estimation. However, all of these methods focus on accurate viewpoint prediction, which does not play a role in improving object detection performance.
Another main issue is the scarcity of real images with accurate viewpoint annotations, due to the high cost of manual annotation. Pascal 3D+, the largest 3D image dataset, is still limited in scale compared to object classification datasets (e.g., ImageNet). Therefore, several methods [35, 38, 6] tried to solve this problem by rendering 3D CAD models onto background images, but such images are unrealistic and do not match real image statistics, which can lead to domain discrepancy.
3D Object Recognition.
There have been several attempts to recognize 3D shapes from a collection of their rendered views on 2D images. Su et al. first proposed multi-view CNNs, which project a 3D object into multiple views and extract view-specific features through CNNs, selecting informative views by max-pooling. GIFT also extracted view-specific features, but instead of pooling them, it obtained the similarity between two 3D objects by view-wise matching. Several methods to improve performance have been proposed, by recurrently clustering the views into multiple sets or aggregating local features through bilinear pooling. Kanezaki et al. further proposed RotationNet, which takes multi-view images as an input and jointly estimates the object's category and viewpoint. It treats the viewpoint labels as latent variables, enabling the use of only a partial set of multi-view images for both training and testing.
3 Proposed Method
3.1 Problem Statement and Motivation
Given a single image of objects, our objective is to jointly estimate the object category and viewpoint, modeling the viewpoint variation of each object in the 2D space. The number of object classes is determined by each benchmark, and the viewpoint space is discretized into a fixed number of viewpoint bins. In particular, since the variation of elevation and tilt is small in real scenes, we focus on estimating the azimuth.
Object categorization requires a view-agnostic representation of an input so as to recognize the object category regardless of viewpoint variations. In contrast, viewpoint estimation requires a representation that preserves the shape characteristics of the object in order to distinguish its viewpoint. Conventional CNN-based methods [26, 6] extract a view-agnostic feature, followed by task-specific sub-networks, i.e., object categorization and viewpoint estimation, as shown in Fig. 2 (a). They, however, do not leverage the complementary characteristics of the two tasks, thus showing limited performance. Unlike these methods, some methods in 3D object recognition have shown that view-specific features for each viewpoint can encode structural information [34, 1], and thus use these features to facilitate the object categorization task, as shown in Fig. 2 (b). Since they require multi-view images from pre-defined viewpoints, their applicability is limited to 3D object recognition (i.e., ModelNet40).
To extract the view-specific features from a single image, we present cylindrical convolutional networks that exploit a cylindrical convolutional kernel, where each subset is a view-specific kernel that captures structural information at a particular viewpoint. By feeding the view-specific features to object classifiers, we estimate an object category likelihood at each viewpoint and select the viewpoint kernel that maximizes the object categorization probability.
3.2 Cylindrical Convolutional Networks
Let us denote an intermediate CNN feature map of a Region of Interest (ROI) with a given spatial resolution and number of channels. Conventional viewpoint estimation methods [26, 6] apply a view-agnostic convolutional kernel that preserves position-sensitive information when extracting the feature. Since the structural information of projected images varies with the viewpoint, we instead aim to apply a view-specific convolutional kernel at a predefined set of viewpoints. The most straightforward way to realize this is to define an independent kernel variant for each viewpoint. This strategy, however, cannot consider the structural similarity between nearby viewpoints, and would be inefficient.
We instead model a cylindrical convolutional kernel, as illustrated in Fig. 3. Each kernel extracted along the horizontal axis of the cylinder in a sliding window fashion can be seen as a view-specific kernel, and convolving each of them with the ROI feature yields one view-specific feature per viewpoint bin, where the horizontal offset on the cylindrical kernel corresponds to the viewpoint and the position varies within the window. Different from the view-specific features in Fig. 2 (b), which are extracted from multi-view images, our view-specific features benefit from the structural similarity between nearby viewpoints, since adjacent windows share kernel parameters. Therefore, each view-specific kernel can be trained to discriminate shape variations across viewpoints.
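For concreteness, the sliding-window extraction of view-specific kernels from the cylindrical kernel can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the function name, the assumption that the cylinder width is divisible by the number of viewpoint bins, and the window width are all illustrative.

```python
import numpy as np

def view_specific_kernels(cyl_kernel, num_views, window):
    """Slide a window of width `window` along the periodic horizontal axis
    of a cylindrical kernel to obtain one kernel per viewpoint bin.

    cyl_kernel: array of shape (C, H, Wc), whose Wc columns cover 360 degrees.
    Assumes Wc is divisible by num_views (illustrative simplification).
    Returns: array of shape (num_views, C, H, window).
    """
    C, H, Wc = cyl_kernel.shape
    # Pad horizontally with wrap-around so windows near the seam stay valid.
    padded = np.concatenate([cyl_kernel, cyl_kernel[..., :window - 1]], axis=-1)
    stride = Wc // num_views
    return np.stack(
        [padded[..., v * stride : v * stride + window] for v in range(num_views)],
        axis=0)
```

Note how adjacent viewpoints share kernel columns (the structural-similarity property above): with a stride of one column, the windows for neighboring bins overlap in all but one column.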
3.3 Joint Category and Viewpoint Estimation
In this section, we propose a framework to jointly estimate the object category and viewpoint using the view-specific features. We design convolutional layers to produce a score map over categories and viewpoints. Since each element of the score map represents the probability of the object belonging to a particular category and viewpoint, the category and viewpoint could be predicted by simply finding the maximum score. However, the argmax operation is not differentiable along the viewpoint distribution and only predicts discretized viewpoints. Instead, we propose the sinusoidal soft-argmax function, enabling the network to predict continuous viewpoints with periodic properties. To obtain the probability distribution, we normalize the score map across the viewpoint axis with a softmax operation. In the following, we describe how we estimate object categories and viewpoints.
We compute the final category classification score as a weighted sum of the category likelihood at each viewpoint, weighted by the viewpoint probability distribution, as follows:
where each element represents the final classification score for a category. Since category classification is essentially viewpoint invariant, the gradient from the classification loss will emphasize the correct viewpoint's probability while suppressing the others, as in attention mechanisms. It enables the back-propagation of a supervisory signal along viewpoints.
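The viewpoint-weighted category score described above can be sketched as follows, a minimal NumPy illustration under the assumption of a per-ROI score map of shape (classes, viewpoint bins); function names are illustrative, not from the paper's code.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def category_scores(score_map):
    """score_map: (num_classes, num_views) raw joint scores.
    Normalize across the viewpoint axis to get a viewpoint distribution,
    then take the viewpoint-probability-weighted sum of the per-view
    category likelihoods to obtain one score per category."""
    p_view = softmax(score_map, axis=1)      # distribution over viewpoints
    return (score_map * p_view).sum(axis=1)  # expected category score
```

A sharply peaked viewpoint distribution makes the final score track the best view-specific classifier, which is how the gradient comes to emphasize the correct viewpoint's probability.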
Perhaps the most straightforward way to estimate a viewpoint within CCNs is to choose the best-performing view-specific feature among the predefined viewpoints for identifying the object category. In order to predict a continuous viewpoint with periodic properties, we further introduce a sinusoidal soft-argmax, enabling continuous regression from the discretized bins, as shown in Fig. 4.
Specifically, we make use of two representative indices obtained by applying sinusoidal functions (sine and cosine) to each viewpoint bin center (i.e., 0°, 15°, …). We then take the probability-weighted sum of each representative index, followed by an arctangent function, to predict the object viewpoint for each class as follows:
This allows the network to estimate posterior probabilities, enabling better training of deep networks, while considering the periodic characteristic of viewpoints as regression-based approaches do. The final viewpoint estimate selects the value corresponding to the class predicted by the category classification in (2).
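The sinusoidal soft-argmax can be sketched in NumPy as below: the expected sine and cosine of the bin centers are computed under the viewpoint distribution, and the angle is recovered with atan2, which respects the 360° periodicity (a plain soft-argmax over bin indices would not). Names and the 15° bin spacing are illustrative.

```python
import numpy as np

def sinusoidal_soft_argmax(p_view, bin_centers_deg):
    """Continuous azimuth (degrees) from a probability distribution over
    discretized viewpoint bins. p_view: (num_views,) probabilities;
    bin_centers_deg: (num_views,) bin centers in degrees."""
    theta = np.deg2rad(np.asarray(bin_centers_deg, dtype=float))
    s = float((p_view * np.sin(theta)).sum())  # expected sine
    c = float((p_view * np.cos(theta)).sum())  # expected cosine
    return np.rad2deg(np.arctan2(s, c)) % 360.0
```

For example, mass split evenly between the 15° and 345° bins yields an estimate at 0°, whereas a naive linear soft-argmax over bin centers would wrongly average to 180°.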
Bounding Box Regression.
To estimate a fine-detailed location, we apply additional convolutional layers for bounding box regression to produce bounding box offsets. Each set of 4 values encodes the bounding box transformation parameters from the initial location for one category-viewpoint pair. This leads to a different set of boxes for each category and viewpoint bin, which can be seen as an extended version of class-specific bounding box regression [12, 30].
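The indexing of the per-(category, viewpoint) box offsets can be sketched as follows; the flat memory layout and the function name are assumptions for illustration, not the paper's actual tensor layout.

```python
import numpy as np

def select_box_deltas(deltas, cls_idx, view_idx, num_views):
    """deltas: flat (num_classes * num_views * 4,) regression output.
    Return the 4 box-transformation parameters for one (category,
    viewpoint) pair -- an extension of class-specific box regression,
    assuming a class-major, viewpoint-minor layout."""
    start = 4 * (cls_idx * num_views + view_idx)
    return deltas[start:start + 4]
```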
Our total loss function, defined on each feature, is the summation of the classification loss, the bounding box regression loss, and the viewpoint estimation loss, using the ground-truth object category, bounding box regression target, and viewpoint. The Iverson bracket indicator function evaluates to 1 when its condition is true and 0 otherwise. For background ROIs, there is no ground-truth bounding box or viewpoint, hence the box and viewpoint losses are ignored. We train the viewpoint loss in a semi-supervised manner, using the sets with ground-truth viewpoints for supervised learning. For the datasets without viewpoint annotations, the viewpoint loss is ignored and the viewpoint estimation task is trained in an unsupervised manner. We use cross-entropy for the classification loss, and smooth L1 for both the box regression and viewpoint losses, following conventional works [12, 30].
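The masking logic of the total loss can be sketched as below; this is a simplified per-ROI NumPy illustration (no loss-balancing weights, plain scalar viewpoint residual), with names assumed for illustration.

```python
import numpy as np

def smooth_l1(x):
    """Standard smooth-L1: quadratic below 1, linear above."""
    a = np.abs(np.asarray(x, dtype=float))
    return float(np.where(a < 1.0, 0.5 * a * a, a - 0.5).sum())

def total_loss(cls_log_probs, gt_class, box_pred, box_target,
               view_pred, view_target, is_background, has_view_label):
    """Sum of classification, box-regression, and viewpoint losses.
    Background ROIs contribute only the classification term (the Iverson
    brackets in the text); the viewpoint term additionally requires a
    ground-truth viewpoint label, giving the semi-supervised behavior."""
    loss = -cls_log_probs[gt_class]  # cross-entropy for classification
    if not is_background:
        loss += smooth_l1(np.subtract(box_pred, box_target))
        if has_view_label:
            loss += smooth_l1(view_pred - view_target)
    return loss
```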
3.4 Implementation and Training Details
For the cylindrical kernel, we apply an additional constraint to preserve the reflectional symmetry of 3D objects. We first divide the parameters into four groups, front, rear, left-side, and right-side, and tie the parameters of the left side and the right side to be reflective using a horizontal flip operation, where the parameters of the groups are concatenated horizontally. Therefore, the kernel preserves horizontal reflectional symmetry and saves network memory.
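The symmetry constraint can be sketched as follows: only the front, rear, and one side are free parameters, and the other side is a flipped copy. This is a minimal NumPy illustration with an assumed azimuth layout (front | left | rear | right); the paper's exact group widths are not specified here.

```python
import numpy as np

def build_symmetric_kernel(front, side, rear):
    """Assemble the cylindrical kernel along the azimuth (last) axis as
    front | left-side | rear | right-side, with the right side tied to a
    horizontal flip of the left side, so roughly 3/4 of an unconstrained
    kernel's parameters are free. Each part has shape (C, H, w)."""
    right = np.flip(side, axis=-1)  # horizontal flip ties the two sides
    return np.concatenate([front, side, rear, right], axis=-1)
```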
In order to implement a kernel defined in a 3D space within a 2D space, the periodicity along the azimuth has to be preserved. Therefore, we horizontally pad the parameters from the left end to the right side using a flip operation, and vice versa, where the pad width is determined by the floor function (the greatest integer less than or equal to its input). This allows the kernel to be used as a set of periodic parameters.
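The effect of this padding, letting a view-specific window slide across the 0°/360° seam as if the kernel were a true cylinder, can be sketched as below. For clarity this illustration uses plain wrap-around copies; the paper additionally applies flip operations to stay consistent with the reflection constraint above, and all names are illustrative.

```python
import numpy as np

def pad_azimuth(weights, pad):
    """Pad the kernel along its horizontal (azimuth) axis by copying `pad`
    columns from the opposite end, so a sliding window can cross the
    0/360-degree seam. weights: (C, H, W) -> (C, H, W + 2*pad)."""
    return np.concatenate(
        [weights[..., -pad:], weights, weights[..., :pad]], axis=-1)
```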
We adopt the two-stage object detection framework Faster R-CNN, which first processes the whole image with standard fully convolutional networks [15, 21], followed by a Region Proposal Network (RPN) to produce a set of bounding boxes. We then use an ROI Align layer to extract a fixed-size feature for each Region of Interest (ROI).
In both training and inference, images are resized so that the shorter side is 800 pixels. Anchors of 5 scales and 3 aspect ratios are used with FPN, and 3 scales and 3 aspect ratios without FPN. 2k and 1k region proposals are generated at training and inference, respectively, using a non-maximum suppression threshold of 0.7. We train on 2 GPUs with 4 images per GPU (an effective mini-batch size of 8). The backbones of all models are pretrained on ImageNet classification, and the additional parameters are randomly initialized using He initialization. The learning rate is initialized to 0.02 with FPN and 0.002 without FPN, and decays by a factor of 10 at the 9th and 11th epochs. All models are trained for 12 epochs using SGD with a weight decay of 0.0001 and a momentum of 0.9.
4 Experiments
4.1 Experimental Settings
Our experiments are mainly based on maskrcnn-benchmark using PyTorch. We use the standard configuration of Faster R-CNN based on ResNet-101 as a backbone. We implement two kinds of networks, with and without FPN. For the network without FPN, we remove the last pooling layer to preserve the spatial information of each ROI feature. We set the hyper-parameters following conventional works unless stated otherwise; the remaining hyper-parameters keep the default settings of the codebase.
We evaluate our joint object detection and viewpoint estimation framework on the Pascal 3D+ and KITTI datasets. The Pascal 3D+ dataset consists of images from Pascal VOC 2012 and a subset of images from ImageNet for 12 different categories, annotated with their viewpoints. Note that the bottle category is omitted, since it is often symmetric across different azimuths. The KITTI dataset consists of 7,481 training images and 7,518 test images, annotated with observation angles and 2D locations. For the KITTI dataset, we focus our experiments on the Car category.
Pascal 3D+ dataset.
We train our networks on the train set of Pascal 3D+ with ground-truth viewpoints for supervised learning only, denoted as CCNs, and in a semi-supervised manner with an additional subset of trainval35k of the COCO dataset with overlapping classes, denoted as CCNs*. The evaluation is done on the val set of Pascal 3D+ using the Average Precision (AP) metric and Average Viewpoint Precision (AVP), where we focus on the AVP24 metric. Furthermore, we also evaluate our CCNs on the minival split of the COCO dataset using the COCO-style Average Precision (AP) and Average Recall (AR) metrics on objects of small, medium, and large sizes.
Figure 5: Visualization of learned deep features through Grad-CAM: (from top to bottom) inputs, attention maps trained without CCNs, and with CCNs. Red indicates attentive regions and blue indicates suppressed regions.
KITTI dataset.
In this experiment, we follow the train/val setting of Xiang et al., which guarantees that the images in the training and validation sets come from different videos. For evaluation on the KITTI dataset, we use the Average Precision (AP) metric with an overlap threshold of 0.7 (AP@IoU0.7), and Average Orientation Similarity (AOS). Results are evaluated at three levels of difficulty, Easy, Moderate, and Hard, which are defined according to the minimum bounding box height, occlusion, and truncation grade.
4.2 Ablation Study
Analysis of the CCNs components.
We analyze our CCNs with ablation evaluations with respect to various settings of the number of viewpoint bins and the effectiveness of the proposed view-specific convolutional kernel. In order to evaluate performance independent of factors such as mis-localization, we tackle the problem of joint category classification and viewpoint estimation with ground-truth bounding boxes using ResNet-101. For a fair comparison, view-agnostic convolutional kernels are implemented for joint object category classification and viewpoint estimation, which output a score map following conventional work. To compare the viewpoint estimation accurately, we apply the sinusoidal soft-argmax to regress the continuous viewpoint. We evaluate the top-1 and top-3 error rates for object category classification performance, and use the median error (MedErr) and the accuracy within π/6 for viewpoint estimation performance.
As shown in Table 1, CCNs show better performance in both object category classification and viewpoint estimation compared to the conventional method using a view-agnostic kernel. The result shows that the view-specific kernel effectively leverages the complementary characteristics of the two tasks. Since one setting of the number of viewpoint bins shows the best performance in both category classification and viewpoint estimation, we keep it for the remaining experiments. Note that the number of additional parameters in the cylindrical kernel over the baseline is marginal compared to the total number of network parameters, while the performance is significantly improved.
For the qualitative analysis, we apply Grad-CAM to visualize attention maps based on gradients from the output category predictions. We compare the visualization results of CCNs with the view-specific kernel and the baseline with the view-agnostic kernel. In Fig. 5, the attention map of the CCNs covers the overall region of the target object, while the conventional category classifier tends to focus on a discriminative part of the object. From these observations, we conjecture that the view-specific convolutional kernel leads the network to capture the shape characteristics of the object viewpoint.
Table 2. Object detection performance (AP) on the Pascal 3D+ val set.

| Method | aero | bike | boat | bus | car | chair | table | mbike | sofa | train | tv | mAP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Massa et al. | 77.1 | 70.4 | 51.0 | 77.4 | 63.0 | 24.7 | 44.6 | 76.9 | 51.9 | 76.2 | 64.6 | 61.6 |
| Poirson et al. | 76.6 | 67.7 | 42.7 | 76.1 | 59.7 | 15.5 | 51.7 | 73.6 | 50.6 | 77.7 | 60.7 | 59.3 |
| Faster R-CNN w/ ResNet-101 | 79.8 | 78.6 | 64.4 | 79.6 | 75.9 | 48.2 | 51.9 | 80.5 | 49.8 | 77.9 | 79.2 | 69.6 |
| Faster R-CNN w/ FPN | 82.7 | 78.3 | 71.8 | 78.7 | 76.0 | 50.8 | 53.3 | 83.3 | 50.7 | 82.6 | 77.2 | 71.4 |
| CCNs w/ ResNet-101 | 82.5 | 79.2 | 64.4 | 80.3 | 76.7 | 49.4 | 50.9 | 81.4 | 48.2 | 79.5 | 78.9 | 70.2 |
| CCNs* w/ ResNet-101 | 82.9 | 81.4 | 63.7 | 86.6 | 79.7 | 43.6 | 51.7 | 81.6 | 52.5 | 81.0 | 82.1 | 71.5 |
| CCNs w/ FPN | 82.6 | 80.6 | 69.3 | 84.9 | 78.8 | 50.9 | 50.7 | 83.4 | 50.3 | 82.2 | 80.0 | 72.2 |
| CCNs* w/ FPN | 83.7 | 82.8 | 71.4 | 88.1 | 81.2 | 46.3 | 51.1 | 85.9 | 52.7 | 83.8 | 84.0 | 73.7 |
Table 3. Joint object detection and viewpoint estimation performance (AVP24) on the Pascal 3D+ val set.

| Method | aero | bike | boat | bus | car | chair | table | mbike | sofa | train | tv | mAVP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Su et al. | 21.5 | 22.0 | 4.1 | 38.6 | 25.5 | 7.4 | 11.0 | 24.4 | 15.0 | 28.0 | 19.8 | 19.8 |
| Tulsiani and Malik | 37.0 | 33.4 | 10.0 | 54.1 | 40.0 | 17.5 | 19.9 | 34.3 | 28.9 | 43.9 | 22.7 | 31.1 |
| Massa et al. | 43.2 | 39.4 | 16.8 | 61.0 | 44.2 | 13.5 | 29.4 | 37.5 | 33.5 | 46.6 | 32.5 | 36.1 |
| Poirson et al. | 33.4 | 29.4 | 9.2 | 54.7 | 35.7 | 5.5 | 23.0 | 30.3 | 27.6 | 44.1 | 34.3 | 28.8 |
| Divon and Tal | 46.6 | 41.1 | 23.9 | 72.6 | 53.5 | 22.5 | 42.6 | 42.0 | 44.2 | 54.6 | 44.8 | 44.4 |
| CCNs w/ ResNet-101 | 39.0 | 45.9 | 22.6 | 74.5 | 54.7 | 19.6 | 38.9 | 44.2 | 41.5 | 55.3 | 46.8 | 43.9 |
| CCNs* w/ ResNet-101 | 39.4 | 47.0 | 23.2 | 76.6 | 55.5 | 20.3 | 39.5 | 44.5 | 41.8 | 56.1 | 45.5 | 44.5 |
| CCNs w/ FPN | 45.1 | 47.4 | 23.1 | 77.8 | 55.2 | 19.9 | 39.6 | 45.3 | 43.4 | 58.0 | 47.8 | 45.7 |
| CCNs* w/ FPN | 46.1 | 48.8 | 24.2 | 78.0 | 55.9 | 20.9 | 41.0 | 45.3 | 43.7 | 59.5 | 49.0 | 46.6 |
Pascal 3D+ dataset.
In the following, we evaluate our CCNs and CCNs* in comparison to state-of-the-art methods. We compare against object detection methods such as DPM, R-CNN, and Faster R-CNN with ResNet-101 and FPN. Joint object detection and viewpoint estimation methods are also compared, including hand-crafted approaches such as VDPM and DPM-VOC+VP, methods using off-the-shelf 2D object detectors for viewpoint estimation such as Su et al., Tulsiani and Malik, and Massa et al., and unified methods such as Poirson et al. and Divon and Tal.
As shown in Table 2 and Table 3, our CCNs* with FPN outperformed conventional methods in terms of both object detection (mAP) and joint object detection and viewpoint estimation (mAVP) on the Pascal 3D+ dataset. It is noticeable that conventional methods for joint object detection and viewpoint estimation actually lowered the detection performance, while ours improved it compared to the original Faster R-CNN [30, 21]. Furthermore, our semi-supervised learning scheme using real datasets shows a performance improvement, indicating that (2) provides a supervisory signal for viewpoint estimation. Note that other viewpoint estimation methods used synthetic images with ground-truth viewpoint annotations [35, 26, 6] or keypoint annotations. In Fig. 6, we show examples of our joint object detection and viewpoint estimation on the Pascal 3D+ dataset.
Table 4 validates the effect of CCNs on the standard object detection dataset. Compared to the baseline without using CCNs, object detection performance (AP) has increased by applying view-specific convolutional kernel. Furthermore, the localization performance (AR) has also increased, indicating that our view-specific convolutional kernel can effectively encode structural information of input objects.
We further evaluate our CCNs on the KITTI object detection benchmark. Since the other methods aim to find 3D bounding boxes from a monocular image, we conduct this experiment to validate the effectiveness of CCNs. As shown in Table 5, our CCNs show better results compared to the original Faster R-CNN by adopting the view-specific convolutional kernel. On the other hand, joint training of object detection and viewpoint estimation without CCNs actually lowered the object detection performance. These results share the same properties as previous studies [26, 7], indicating that a proper modeling of the geometric relationship between the two tasks is required. In Fig. 7, we show examples of our joint object detection and viewpoint estimation on the KITTI dataset.
4.4 Discussion
Estimating the viewpoint of deformable categories is an open problem. We thus evaluate our cylindrical convolutional networks for visual recognition on rigid categories only. However, our key idea of using view-specific convolutional kernels can be generalized with a suitable modeling of deformable transformations (e.g., deformable convolution) in the kernel space. We believe that modeling the pose or keypoints of non-rigid categories (e.g., human pose estimation) with our CCNs can be an alternative to this current limitation, and we leave it as future work.
5 Conclusion
We have introduced cylindrical convolutional networks (CCNs) for joint object detection and viewpoint estimation. The key idea is to exploit view-specific convolutional kernels, sampled from a cylindrical convolutional kernel in a sliding window fashion, to predict an object category likelihood at each viewpoint. With this likelihood, we simultaneously estimate the object category and viewpoint using the proposed sinusoidal soft-argmax module, resulting in state-of-the-art performance on the joint object detection and viewpoint estimation task. In the future, we aim to extend the view-specific convolutional kernel to non-rigid categories.
References
- (2016) GIFT: A real-time and scalable 3D shape search engine. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5023–5032.
- (2015) ShapeNet: An information-rich 3D model repository. arXiv preprint.
- (2016) Universal correspondence network. In Advances in Neural Information Processing Systems, pp. 2414–2422.
- (2017) Deformable convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 764–773.
- (2009) ImageNet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255.
- (2018) Viewpoint estimation: insights & model. In Proceedings of the European Conference on Computer Vision, pp. 252–268.
- (2016) A comparative analysis and study of multiview CNN models for joint object categorization and pose estimation. In International Conference on Machine Learning, pp. 888–897.
- (2015) The PASCAL visual object classes challenge: A retrospective. International Journal of Computer Vision 111(1), pp. 98–136.
- (2009) Object detection with discriminatively trained part-based models. IEEE Transactions on Pattern Analysis and Machine Intelligence 32(9), pp. 1627–1645.
- (2012) Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3354–3361.
- (2014) Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 580–587.
- (2015) Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1440–1448.
- (2017) Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2961–2969.
- (2015) Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026–1034.
- (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
- (2015) Spatial transformer networks. In Advances in Neural Information Processing Systems, pp. 2017–2025.
- (2018) RotationNet: Joint object categorization and pose estimation using multiviews from unsupervised viewpoints. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5010–5019.
- (2018) Recurrent transformer networks for semantic correspondence. In Advances in Neural Information Processing Systems, pp. 6126–6136.
- (2012) ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105.
- (2017) Inverse compositional spatial transformer networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2568–2576.
- (2017) Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2117–2125.
- (2014) Microsoft COCO: Common objects in context. In Proceedings of the European Conference on Computer Vision, pp. 740–755.
- (2016) SSD: Single shot multibox detector. In Proceedings of the European Conference on Computer Vision, pp. 21–37.
- (2015) Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440.
- (2018) maskrcnn-benchmark: Fast, modular reference implementation of instance segmentation and object detection algorithms in PyTorch. https://github.com/facebookresearch/maskrcnn-benchmark
- (2016) Crafting a multi-task CNN for viewpoint estimation. In Proceedings of the British Machine Vision Conference, pp. 91.1–91.12.
- (2017) Automatic differentiation in PyTorch.
- (2012) Teaching 3D geometry to deformable part models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3362–3369.
- (2016) Fast single shot detection and pose estimation. In Fourth International Conference on 3D Vision (3DV), pp. 676–684.
- (2015) Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, pp. 91–99.
- (2017) Dynamic routing between capsules. In Advances in Neural Information Processing Systems, pp. 3856–3866.
- (2017) Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 618–626.
- (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
- (2015) Multi-view convolutional neural networks for 3D shape recognition. In Proceedings of the IEEE International Conference on Computer Vision, pp. 945–953.
- (2015) Render for CNN: Viewpoint estimation in images using CNNs trained with rendered 3D model views. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2686–2694.
- (2015) Viewpoints and keypoints. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1510–1519.
- (2017) Dominant set clustering and pooling for multi-view 3D object recognition. In Proceedings of the British Machine Vision Conference, pp. 61.4–61.12.
- (2016) Viewpoint estimation for objects with convolutional neural network trained on synthetic images. In Pacific Rim Conference on Multimedia, pp. 169–179.
- (2015) 3D ShapeNets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1912–1920.
- (2017) Subcategory-aware convolutional neural networks for object proposals and detection. In IEEE Winter Conference on Applications of Computer Vision, pp. 924–933.
- (2014) Beyond PASCAL: A benchmark for 3D object detection in the wild. In IEEE Winter Conference on Applications of Computer Vision, pp. 75–82.
- (2016) LIFT: Learned invariant feature transform. In Proceedings of the European Conference on Computer Vision, pp. 467–483.
- (2018) Multi-view harmonized bilinear network for 3D object recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 186–194.