Code for "Category-Specific Object Reconstruction from a Single Image" CVPR 2015.
Object reconstruction from a single image -- in the wild -- is a problem where we can make progress and get meaningful results today. This is the main message of this paper, which introduces an automated pipeline with pixels as inputs and 3D surfaces of various rigid categories as outputs in images of realistic scenes. At the core of our approach are deformable 3D models that can be learned from 2D annotations available in existing object detection datasets, that can be driven by noisy automatic object segmentations, and which we complement with a bottom-up module for recovering high-frequency shape details. We perform a comprehensive quantitative analysis and ablation study of our approach using the recently introduced PASCAL 3D+ dataset and show very encouraging automatic reconstructions on PASCAL VOC.
Consider the car in Figure 1. As humans, not only can we infer at a glance that the image contains a car, we also construct a rich internal representation of it such as its location and 3D pose. Moreover, we have a guess of its 3D shape, even though we might never have seen this particular car. We can do this because we don’t experience the image of this car tabula rasa, but in the context of our “remembrance of things past”. Previously seen cars enable us to develop a notion of the 3D shape of cars, which we can project to this particular instance. We also specialize our representation to this particular instance (e.g. any custom decorations it might have), signalling that both top-down and bottom-up cues influence our percept.
A key component in such a process would be a mechanism to build 3D shape models from past visual experiences. We have developed an algorithm that can build category-specific shape models from just images with 2D annotations (segmentation masks and a small set of keypoints) present in modern computer vision datasets (e.g. PASCAL VOC). These models are then used to guide the top-down 3D shape reconstruction of novel 2D car images. We complement our top-down shape inference algorithm with a bottom-up module that further refines our shape estimate for a particular instance. Finally, building upon the rapid recent progress in recognition modules [2, 11, 17, 20, 34] (object detection, segmentation and pose estimation), we demonstrate that our learnt models are robust when applied “in the wild”, enabling fully automatic reconstructions with just images as inputs.
The recent method of Vicente et al. reconstructs 3D models from annotations similar to ours but has a different focus: it aims to reconstruct a fully annotated image set while making strong assumptions about the quality of the segmentations it fits to, and is hence inappropriate for reconstruction in an unconstrained setting. Our approach can work in such settings, partly because it uses explicit 3D shape models. Our work also has connections to that of Kemelmacher-Shlizerman et al. [23, 32] which aims to learn morphable models for faces from 2D images, but we focus on richer shapes in unconstrained settings, at the expense of lower resolution reconstructions.
In the history of computer vision, model-based object reconstruction from a single image has reflected varying preferences on model representations. Generalized cylinders resulted in very compact descriptions for certain classes of shapes, and can be used for category level descriptions, but the fitting problem for general shapes is challenging. Polyhedral models [18, 40], which trace back to the early work of Roberts, and CAD models [25, 31] provide crude approximations of shape and, given a set of point correspondences, can be quite effective for determining instance viewpoints. Here we pursue more expressive basis shape models [1, 7, 42] which establish a balance between the two extremes as they can deform, but only along class-specific modes of variation. In contrast to previous work, we fit them to automatic figure-ground object segmentations.
Our paper is organized as follows: in Section 2 we describe our model learning pipeline where we estimate camera viewpoints for all training objects (Section 2.1) followed by our shape model formulation (Section 2.2) to learn 3D models. Section 3 describes our testing pipeline where we use our learnt models to reconstruct novel instances without assuming any annotations. We evaluate our reconstructions under various settings in Section 4 and provide sample reconstructions in the wild.
We are interested in 3D shape models that can be robustly aligned to noisy object segmentations by incorporating top-down class-specific knowledge of how shapes from the class typically project into the image. We want to learn such models from just 2D training images, aided by ground truth segmentations and a few keypoints, similar to . Our approach operates by first estimating the viewpoints of all objects in a class using a structure-from-motion approach, followed by optimizing over a deformation basis of representative 3D shapes that best explain all silhouettes, conditioned on the viewpoints. We describe these two stages of model learning in the following subsections. Figure 2 illustrates our training pipeline.
We use the framework of NRSfM to jointly estimate the camera viewpoints (rotation, translation and scale) for all training instances in each class. Originally proposed for recovering shape and deformations from video [6, 33, 16, 10], NRSfM is a natural choice for viewpoint estimation from sparse correspondences, as intra-class variation may become a confounding factor if not modeled explicitly. However, the performance of such algorithms has only been explored on simple categories, such as SUVs or flower petals and clown fish. Closer to our work, Hejrati and Ramanan used NRSfM on a larger class (cars) but need a predictive detector to fill in missing data (occluded keypoints), which we do not assume to have here.
We closely follow the EM-PPCA formulation of Torresani et al.  and propose a simple extension to the algorithm that incorporates silhouette information in addition to keypoint correspondences to robustly recover cameras and shape bases. Energies similar to ours have been proposed in the shape-from-silhouette literature  and with rigid structure-from-motion  but, to the best of our knowledge, not in conjunction with NRSfM.
NRSfM Model. Given keypoint correspondences $P_{n,k}$ per instance $n$, our adaptation of the NRSfM algorithm in  corresponds to maximizing the likelihood of the following model:

$$P_n = c_n R_n S_n + T_n + N_n, \qquad S_n = \bar{S} + V z_n \qquad (1)$$
$$z_n \sim \mathcal{N}(0, I), \qquad N_n \sim \mathcal{N}(0, \sigma^2 I)$$
subject to
$$R_n R_n^T = I_2, \qquad \mathcal{C}^{mask_n}(P_{n,k}) = 0 \;\; \forall k \qquad (2)$$

Here, $P_n$ is the 2D projection of the 3D shape $S_n$ with white noise $N_n$, and the rigid transformation is given by the orthographic projection matrix $R_n$, scale $c_n$ and 2D translation $T_n$. The shape is parameterized as a factored Gaussian with a mean shape $\bar{S}$, basis vectors $V$ and latent deformation parameters $z_n$. Our key modification is constraint (2), where $\mathcal{C}^{mask_n}$ denotes the Chamfer distance field of the $n^{th}$ instance’s binary mask: it says that all keypoints of instance $n$ should lie inside its binary mask. We observed that this results in more accurate viewpoints as well as more meaningful shape bases learnt from the data.
Learning. The likelihood of the above model is maximized using the EM algorithm. Missing data (occluded keypoints) is dealt with by “filling in” the values using the forward equations after the E-step. The algorithm computes shape parameters $\bar{S}, V$, rigid body transformations $(c_n, R_n, T_n)$ as well as the deformation parameters $z_n$ for each training instance $n$. In practice, we augment the data using horizontally mirrored images to exploit bilateral symmetry in the object classes considered. We also precompute the Chamfer distance fields for the whole set to speed up computation. As shown in Figure 3, NRSfM allows us to reliably predict viewpoint while being robust to intraclass variations.
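Since both the keypoint constraint and our later silhouette energies query the Chamfer distance field of each binary mask, these fields can be precomputed once per instance. A minimal sketch using SciPy's Euclidean distance transform; `chamfer_field` is an illustrative name, not part of our implementation:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_field(mask):
    """Chamfer distance field of a binary mask: distance from every
    pixel to the nearest foreground pixel (zero inside the mask)."""
    # distance_transform_edt measures distance to the nearest zero
    # entry, so invert the mask to get distance to the mask itself.
    return distance_transform_edt(~mask)

mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True
field = chamfer_field(mask)
```

Keypoints whose field value is zero satisfy the mask constraint exactly; nonzero values give a natural penalty (and gradient direction) for points that fall outside.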
Equipped with camera projection parameters and keypoint correspondences (lifted to 3D by NRSfM) on the whole training set, we proceed to build deformable 3D shape models from object silhouettes within a class. 3D shape reconstruction from multiple silhouettes projected from a single object in calibrated settings has been widely studied. Two prominent approaches are visual hulls and variational methods derived from snakes, e.g. [14, 30], which deform a surface mesh iteratively until convergence. Some interesting recent papers have extended variational approaches to handle categories [12, 13] but typically require some form of 3D annotations to bootstrap models. A recently proposed visual-hull based approach requires only 2D annotations, as we do, for class-based reconstruction and was successfully demonstrated on PASCAL VOC, but it does not serve our purposes as it makes strong assumptions about the accuracy of the segmentation and will in fact entirely fill any segmentation with a voxel layer.
Shape Model Formulation.
We model our category shapes as deformable point clouds – one for each subcategory of the class. The underlying intuition is the following: some types of shape variation may be well explained by a parametric model, e.g. a Toyota sedan and a Lexus sedan, but it is unreasonable to expect such a model to capture the variations between sail boats and cruise liners. Models of that kind typically require knowledge of object parts, their spatial arrangements etc. and involve complicated formulations that are difficult to optimize. We instead train separate linear shape models for different subcategories of a class. As in the NRSfM model, we use a linear combination of bases to model these deformations. Note that we learn such models from silhouettes, and this is what enables us to learn deformable models without relying on point correspondences between scanned 3D exemplars.
Our shape model $M = (\bar{S}, V)$ comprises a mean shape $\bar{S}$ and deformation bases $V = \{V_1, \dots, V_K\}$, learnt from a training set $\{(O_i, P_i)\}_{i=1}^{N}$, where $O_i$ is the instance silhouette and $P_i$ is the projection function from world to image coordinates. Note that the $P_i$ we obtain using NRSfM corresponds to orthographic projection, but our algorithm could handle perspective projection as well.
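In NumPy terms, an instance shape is the mean shape plus a weighted sum of basis deformations, and the projection is the scaled orthographic map recovered by NRSfM. A minimal sketch with illustrative names, not taken from our implementation:

```python
import numpy as np

def instance_shape(mean_shape, bases, alphas):
    """Shape for one instance: mean plus a linear combination of
    deformation bases; all arrays have shape (num_points, 3)."""
    S = mean_shape.copy()
    for V, a in zip(bases, alphas):
        S = S + a * V
    return S

def orthographic_project(S, R, c, t):
    """Scaled orthographic projection: keep the first two rows of the
    rotation, scale by c and shift by the 2D translation t."""
    return c * (S @ R[:2].T) + t
```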
Energy Formulation. We formulate our objective function primarily based on image silhouettes: the shape for an instance should always project within its silhouette and should agree with the keypoints (lifted to 3D by NRSfM). We capture these by defining corresponding energy terms as follows (here $\pi(S)$ corresponds to the 2D projection of shape $S$, $\mathcal{C}^{mask}$ refers to the Chamfer distance field of the binary mask of silhouette $O$, and $\Delta_k(p; Q)$ is defined as the squared average distance of point $p$ to its $k$ nearest neighbors in set $Q$).
Silhouette Consistency. Silhouette consistency simply enforces the predicted shape for an instance to project inside its silhouette. This can be achieved by penalizing the points projected outside the instance mask by their distance from the silhouette. In our notation it can be written as follows:

$$E_s(S, O) = \sum_{p} \mathcal{C}^{mask}(\pi(p))$$
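A minimal sketch of this term, assuming points are already projected to image coordinates; rounding to pixel centers and using SciPy's distance transform as the Chamfer field are illustrative simplifications:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def silhouette_consistency(points2d, mask):
    """Sum of Chamfer distances at the projected point locations;
    points inside the silhouette contribute zero."""
    chamfer = distance_transform_edt(~mask)
    xy = np.round(points2d).astype(int)
    return float(sum(chamfer[y, x] for x, y in xy))

mask = np.zeros((10, 10), dtype=bool)
mask[3:7, 3:7] = True
pts = np.array([[4.0, 4.0],   # inside the mask: no penalty
                [1.0, 4.0]])  # two pixels outside: penalized
energy = silhouette_consistency(pts, mask)
```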
Silhouette Coverage. Using silhouette consistency alone would just drive points projected outside in towards the silhouette. This wouldn’t ensure, though, that the object silhouette is “filled” - i.e. there might be overcarving. We deal with it by having an energy term that encourages points on the silhouette to pull nearby projected points towards them. Formally, this can be expressed as:

$$E_c(S, O) = \sum_{p \in O} \Delta_k(p; \pi(S))$$
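This pulling force can be sketched with a k-d tree over the projected points: each silhouette point looks up its nearest projected points and contributes the squared average distance described above. Function name and neighbor count are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def silhouette_coverage(boundary_pts, projected_pts, k=2):
    """For every silhouette point, the squared average distance to its
    k nearest projected shape points; uncovered silhouette regions
    (overcarving) therefore accumulate a large penalty."""
    tree = cKDTree(projected_pts)
    dists, _ = tree.query(boundary_pts, k=k)
    return float(np.sum(np.mean(dists, axis=1) ** 2))
```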
Keypoint Consistency. Our NRSfM algorithm provides us with sparse 3D keypoints along with camera viewpoints. We use these sparse correspondences on the training set to deform the shape to explain these 3D points. The corresponding energy term penalizes deviation of the shape from the 3D keypoints for each instance. Specifically, this can be written as:

$$E_{kp}(S, KP) = \sum_{\kappa \in KP} \Delta_k(\kappa; S)$$
Local Consistency. In addition to the above data terms, we use a simple shape regularizer to restrict arbitrary deformations by imposing a quadratic deformation penalty between every point and its neighbors. We also impose a similar penalty on deformations to ensure local smoothness:

$$E_l(\bar{S}, V) = \sum_{i} \sum_{j \in N(i)} \left( (\|\bar{S}_i - \bar{S}_j\| - \delta)^2 + \sum_{k} \|V_{ki} - V_{kj}\|^2 \right)$$

The parameter $\delta$ represents the mean squared displacement between neighboring points and it encourages all faces to have similar size. Here $V_{ki}$ is the $i^{th}$ point in the $k^{th}$ basis.
Normal Smoothness. Shapes occurring in the natural world tend to be locally smooth. We capture this prior on shapes by placing a cost on the variation of normal directions in a local neighborhood in the shape. Our normal smoothness energy is formulated as

$$E_n(S) = \sum_{i} \sum_{j \in N(i)} \left(1 - \vec{N}_i \cdot \vec{N}_j\right)$$

Here, $\vec{N}_i$ represents the normal for the $i^{th}$ point in shape $S$, which is computed by fitting planes to local point neighborhoods. Our prior essentially states that local point neighborhoods should be flat. Note that this, in conjunction with our previous energies, automatically enforces the commonly used prior that normals should be perpendicular to the viewing direction at the occluding contour.
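Per-point normals by local plane fitting reduce to a smallest-eigenvector computation on each neighborhood's covariance. A NumPy/SciPy sketch; the names and neighborhood size are illustrative choices, not from our implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def point_normals(points, k=6):
    """Estimate one normal per point by fitting a plane to its local
    k-neighborhood: the normal is the eigenvector of the neighborhood
    covariance with the smallest eigenvalue."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        centered = points[nbrs] - points[nbrs].mean(axis=0)
        w, v = np.linalg.eigh(centered.T @ centered)
        normals[i] = v[:, 0]  # eigh sorts eigenvalues ascending
    return normals

def normal_smoothness(normals, neighbor_pairs):
    """Penalty on normal variation between neighboring points; the
    absolute value ignores sign flips coming from the plane fit."""
    return float(sum(1.0 - abs(normals[i] @ normals[j])
                     for i, j in neighbor_pairs))
```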
Our total energy, given in equation 8, sums the above terms over the training set. In addition to the above smoothness priors we also penalize the norm of the deformation parameters $\alpha_i$ to prevent unnaturally large deformations.
Learning. We solve the optimization problem in equation 9 to obtain our shape model $M = (\bar{S}, V)$. The mean shape and deformation basis are inferred via block-coordinate descent on $\bar{S}$ and $V$ using sub-gradient computations over the training set. We restrict $\|V_k\|$ to be a constant to address the scale ambiguity between $V$ and $\alpha$ in our formulation. In order to deal with imperfect segmentations and wrongly estimated keypoints, we use truncated versions of the above energies that reduce the impact of outliers. The mean shapes learnt using our algorithm for 9 rigid categories in PASCAL VOC are shown in Figure 4. Note that in addition to representing the coarse shape details of a category, the model also learns finer structures like chair legs and bicycle handles, which become more prominent with deformations.
Our training objective is highly non-convex and non-smooth, and is thus sensitive to initialization. We follow the suggestion of  and initialize our mean shape with a soft visual hull computed using all training instances. The deformation bases and deformation weights are initialized randomly.
We approach object reconstruction from the big picture downward - like a sculptor first hammering out the big chunks and then chiseling out the details. After detecting and segmenting objects in the scene, we infer their coarse 3D poses and use them to fit our top-down shape models to the noisy segmentation masks. Finally, we recover high frequency shape details from shading cues. We will now explain these components one at a time.
Initialization. During inference, we first detect and segment the object in the image  and then predict viewpoint (rotation matrix) and subcategory for the object using a CNN based system similar to  (augmented to predict subcategories). Our learnt models are at a canonical bounding box scale - all objects are first resized to a particular width during training. Given the predicted bounding box, we scale the learnt mean shape of the predicted subcategory accordingly. Finally, the mean shape is rotated as per the predicted viewpoint and translated to the center of the predicted bounding box.
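The initialization described above amounts to three rigid operations on the subcategory mean shape. A sketch under the assumptions of a canonical training width and an (x, y, w, h) box format, both of which are illustrative rather than taken from our implementation:

```python
import numpy as np

CANONICAL_WIDTH = 100.0  # assumed canonical bounding-box width

def initialize_shape(mean_shape, R, bbox):
    """Scale the learnt mean shape to the detected box, rotate it to
    the predicted viewpoint and translate it to the box center."""
    x, y, w, h = bbox
    S = mean_shape * (w / CANONICAL_WIDTH)  # rescale to detection size
    S = S @ R.T                             # rotate to predicted pose
    S[:, 0] += x + w / 2.0                  # move to box center (x)
    S[:, 1] += y + h / 2.0                  # move to box center (y)
    return S
```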
Shape Inference. After initialization, we solve for the deformation weights $\alpha$ (initialized to zero) as well as all the camera projection parameters (scale, translation and rotation) by optimizing equation (9) for fixed $\bar{S}, V$. Since we do not have access to annotated keypoint locations at test time, the ‘Keypoint Consistency’ energy is ignored during the optimization.
Bottom-up Shape Refinement. The above optimization results in a top-down 3D reconstruction based on the category-level models, inferred object silhouette, viewpoint and our shape priors. We propose an additional processing step to recover high frequency shape information by adapting the intrinsic images algorithm of Barron and Malik [5, 4], SIRFS, which exploits statistical regularities between shapes, reflectance and illumination. Formally, SIRFS is formulated as the following optimization problem:

$$\underset{R, Z, L}{\text{minimize}} \;\; g(R) + f(Z) + h(L) \quad \text{subject to} \quad I = R + S(Z, L)$$

where $R$ is a log-reflectance image, $Z$ is a depth map and $L$ is a spherical-harmonic model of illumination. $S(Z, L)$ is a rendering engine which produces a log shading image of the shape $Z$ with the illumination $L$. $g$, $f$ and $h$ are the loss functions corresponding to reflectance, shape and illumination respectively.
We incorporate our current coarse estimate of shape into SIRFS through an additional loss term:

$$f_o(Z, Z^0) = \sum_{i} \sqrt{(Z_i - Z^0_i)^2 + \epsilon^2}$$

where $Z^0$ is the initial coarse shape and $\epsilon$ a parameter added to make the loss differentiable everywhere. We obtain $Z^0$ for an object by rendering a depth map of our fitted 3D shape model, which guides the optimization of this highly non-convex cost function. The outputs from this bottom-up refinement are reflectance, shape and illumination maps, of which we retain the shape.
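The added loss is a smoothed-L1 (Charbonnier-style) tie between the refined depth map and the rendered coarse depth. A minimal sketch with an illustrative epsilon:

```python
import numpy as np

def coarse_shape_loss(Z, Z0, eps=1e-2):
    """Smoothed-L1 penalty tying the refined depth map Z to the coarse
    depth Z0 rendered from the fitted model; eps keeps the loss
    differentiable where Z equals Z0."""
    return float(np.sum(np.sqrt((Z - Z0) ** 2 + eps ** 2)))
```

Where Z equals Z0 the loss bottoms out smoothly at eps per pixel rather than kinking, so gradient-based optimization of the full SIRFS objective stays well behaved.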
Implementation Details. The gradients involved in our optimization for shape and projection parameters are extremely efficient to compute. We use approximate nearest neighbors computed using a k-d tree to implement the ‘Silhouette Coverage’ gradients and leverage Chamfer distance fields for obtaining ‘Silhouette Consistency’ gradients. Our overall computation takes only about 2 seconds to reconstruct a novel instance using a single CPU core. Our training pipeline is equally efficient, taking only a few minutes to learn a shape model for a given object category.
Experiments were performed to assess two things: 1) the expressiveness of our learned 3D models, evaluated by how well they match the underlying 3D shapes of the training data, and 2) their sensitivity when fit to images using noisy automatic segmentations and pose predictions.
Datasets. For all our experiments, we consider images from the challenging PASCAL VOC 2012 dataset, which contains objects from the 10 rigid object categories listed in Table 1. We use the publicly available ground truth class-specific keypoints and object segmentations. Since ground truth 3D shapes are unavailable for PASCAL VOC and most other detection datasets, we evaluated the expressiveness of our learned 3D models on the next best thing we managed to obtain: the PASCAL3D+ dataset, which has up to 10 3D CAD models for the rigid categories in PASCAL VOC. PASCAL3D+ provides between 4 models per category (for “tvmonitor” and “train”) and 10 (for “car” and “chair”). The different meshes primarily distinguish between subcategories but may also be redundant (e.g., there are more than 3 meshes for sedans in “car”). We obtain our subcategory labels on the training data by merging some of these cases, which also helps us in tackling data sparsity for some subcategories. The subset of PASCAL we considered after filtering occluded instances, which we do not tackle in this paper, had between 70 images for “sofa” and 500 images for classes “aeroplane” and “car”. We will make all our image sets available along with our implementation.
Metrics. We quantify the quality of our 3D models by comparing against the PASCAL 3D+ models using two metrics: 1) the Hausdorff distance normalized by the 3D bounding box size of the ground truth model, and 2) a depth map error to evaluate the quality of the reconstructed visible object surface, measured as the mean absolute distance between reconstructed and ground truth depth:

$$E_Z(Z, \hat{Z}) = \min_{\beta} \frac{1}{n \cdot \gamma} \sum_{x, y} \left| Z(x, y) - \hat{Z}(x, y) - \beta \right|$$

where $Z$ and $\hat{Z}$ represent predicted and ground truth depth maps respectively. Analytically, $\beta$ can be computed as the median of $Z - \hat{Z}$, and $\gamma$ is a normalization factor to account for absolute object size, for which we use the bounding box diagonal. Note that our depth map error is translation and scale invariant.
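The depth metric can be sketched directly from its definition: the median residual is the optimal translation under absolute error, and dividing by the bounding-box diagonal factors out absolute object size:

```python
import numpy as np

def depth_error(Z_pred, Z_gt, diag):
    """Translation- and scale-invariant depth error: mean absolute
    residual after subtracting the median shift, normalized by the
    ground-truth bounding-box diagonal."""
    beta = np.median(Z_pred - Z_gt)  # optimal shift under the L1 loss
    return float(np.mean(np.abs(Z_pred - Z_gt - beta)) / diag)
```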
We learn and fit our 3D models on the same whole dataset (no train/test split), following the setup of Vicente et al. Table 1 compares our reconstructions on PASCAL VOC with those of this recently proposed method, which is specialized for this task (e.g. it is not designed for fitting to noisy data), as well as with a state-of-the-art class-agnostic shape inflation method that also reconstructs from a single silhouette. We demonstrate competitive performance on both benchmarks, with our models showing greater robustness to perspective foreshortening effects on “trains” and “buses”. Category-agnostic methods (Puffball and SIRFS) consistently perform worse on the benchmark by themselves. Certain classes like “boat” and “tvmonitor” are especially hard because of large intraclass variance and data sparsity respectively.
In order to analyze the sensitivity of our models to noisy inputs, we reconstructed held-out test instances using our models given just ground truth bounding boxes. We compare various versions of our method using ground truth (Mask) or imperfect (SDS) segmentations, and keypoints (KP) or our pose predictor (PP) for viewpoint estimation. For pose prediction, we use the CNN-based system of  and augment it to predict subtypes at test time. This is achieved by training the system as described in  with additional subcategory labels obtained from PASCAL 3D+ as described above. To obtain an approximate segmentation from the bounding box, we use the refinement stage of the state-of-the-art joint detection and segmentation system proposed in .
Here, we use a train/test setting where our models are trained on only a subset of the data and used to reconstruct the held-out data from bounding boxes. Table 2 shows that our results degrade gracefully from the fully annotated to the fully automatic setting. Our method is robust to some mis-segmentation owing to our shape model, which prevents shapes from bending unnaturally to explain noisy silhouettes. Our reconstructions degrade slightly with imperfect pose initializations, even though our projection parameter optimization deals with this to some extent. With predicted poses, we observe that sometimes even when our reconstructions look plausible, the errors can be high as the metrics are sensitive to bad alignment. The data sparsity issue is especially visible in the case of sofas, where in the train/test setting of Table 2 the numbers drop significantly with less training data (only 34 instances). Note that we do not evaluate our bottom-up component, as the PASCAL 3D+ meshes provided do not share the same high frequency shape details as the instance. We will show qualitative results in the next subsection.
We qualitatively demonstrate reconstructions on automatically detected and segmented instances with 0.5 IoU overlap with the ground truth in whole images in PASCAL VOC using  in Figure 5. We can see that our method is able to deal with some degree of mis-segmentation. Some of our major failure modes include not being able to capture the correct scale and pose of the object and thus badly fitting to the silhouette in some cases. Our subtype prediction also fails on some instances (e.g. CRT vs flat screen “tvmonitors”) leading to incorrect reconstructions. We include more such images in the supplementary material for the reader to peruse.
We have proposed what may be the first approach to perform fully automatic object reconstruction from a single image on a large and realistic dataset. Critically, our deformable 3D shape model can be bootstrapped from easily acquired ground-truth 2D annotations, thereby bypassing the need for a priori manual mesh design or 3D scanning and making it practical to use these types of models on large real-world datasets (e.g. PASCAL VOC). We report an extensive evaluation of the quality of the learned 3D models on a recent 3D benchmarking dataset for PASCAL VOC, showing competitive results with methods that specialize in shape reconstruction from ground truth segmentation inputs, while demonstrating that our method is equally capable in the wild, on top of automatic object detectors.
Much research lies ahead, both in terms of improving the quality and the robustness of reconstruction at test time (both bottom-up and top-down components), developing benchmarks for joint recognition and reconstruction and relaxing the need for annotations during training: all of these constitute interesting and important directions for future work. More expressive non-linear shape models  may prove helpful, as well as a tighter integration between segmentation and reconstruction.
This work was supported in part by NSF Award IIS-1212798 and ONR MURI-N00014-10-1-0933. Shubham Tulsiani was supported by the Berkeley fellowship and João Carreira was supported by the Portuguese Science Foundation, FCT, under grant SFRH/BPD/84194/2012. We gratefully acknowledge NVIDIA corporation for the donation of Tesla GPUs for this research.