1 Introduction
Understanding the 3D world is at the heart of successful computer vision applications in robotics, rendering and modeling Szeliski (2010). It is especially important to solve this problem using the most convenient visual sensory data: 2D images. In this paper, we propose an end-to-end solution to the challenging problem of predicting the underlying true shape of an object given an arbitrary single image observation of it. This problem definition embodies a fundamental challenge: imagery observations of 3D shapes are interleaved representations of intrinsic properties of the shape itself (e.g., geometry, material), as well as its extrinsic properties that depend on its interaction with the observer and the environment (e.g., orientation, position, and illumination). Physically principled shape understanding should be able to efficiently disentangle such interleaved factors.
This observation leads to the insight that an end-to-end solution to this problem from the perspective of learning agents (neural networks) should involve the following properties: 1) the agent should understand the physical meaning of how a 2D observation is generated from the 3D shape, and 2) the agent should be aware of the outcome of its interaction with the object; more specifically, by moving around the object, the agent should be able to associate the observations with the viewpoint change. If such properties are embodied in a learning agent, it will be able to disentangle the shape from the extrinsic factors because these factors are trivial to understand in the 3D world. To equip the agent with these capabilities, we introduce a built-in camera system that can transform the 3D object into 2D images in-network. Additionally, we architect the network such that the latent representation disentangles the shape from view changes. More specifically, our network takes as input an object image and predicts its volumetric 3D shape so that the perspective transformations of the predicted shape match well with the corresponding 2D observations.
We implement this neural network based on a combination of an image encoder, a volume decoder and a perspective transformer (similar to the spatial transformer introduced by Jaderberg et al. (2015)). During training, the volumetric 3D shape is gradually learned from single-view input and the feedback from other views through backpropagation. Thus, at test time, the 3D shape can be directly generated from a single image. We conduct experimental evaluations using a subset of 3D models from ShapeNetCore Chang et al. (2015). Results from single-class and multi-class training demonstrate excellent performance of our network for volumetric 3D reconstruction. Our main contributions are summarized below.

- We show that neural networks are able to predict 3D shape from a single view without using ground truth 3D volumetric data for training. This is made possible by introducing a 2D silhouette loss function based on perspective transformations.
- We train a single network for multi-class 3D object volumetric reconstruction and show its generalization potential to unseen categories.
- Compared to training with full azimuth angles, we demonstrate comparably good results when training with partial views.
2 Related Work
Representation learning for 3D objects.
Recently, advances have been made in learning deep neural networks for 3D objects using large-scale CAD databases Wu et al. (2015); Chang et al. (2015). Wu et al. (2015) proposed a deep generative model that extends the convolutional deep belief network Lee et al. (2009) to model volumetric 3D shapes. Different from Wu et al. (2015), which uses a volumetric 3D representation, Su et al. (2015) proposed a multi-view convolutional network for 3D shape categorization with a view-pooling mechanism. These methods focus on 3D shape recognition rather than 3D shape reconstruction. Recent works Tatarchenko et al. (2015); Qi et al. (2016); Girdhar et al. (2016); Choy et al. (2016) attempt to learn a joint representation for both 2D images and 3D shapes. Tatarchenko et al. (2015) developed a convolutional network to synthesize unseen 3D views from a single image and demonstrated that the synthesized images can be used to reconstruct 3D shape. Qi et al. (2016) introduced a joint embedding that combines a volumetric representation and a multi-view representation to improve 3D shape recognition performance. Girdhar et al. (2016) proposed a generative model for 3D volumetric data and combined it with a 2D image embedding network for single-view 3D shape generation. Choy et al. (2016) introduced a 3D recurrent neural network (3D-R2N2) based on long short-term memory (LSTM) to predict the 3D shape of an object from a single view or multiple views. Compared to these single-view methods, our 3D reconstruction network is learned end-to-end and can even be trained without ground truth volumes.
Concurrent to our work, Rezende et al. (2016) introduced a general framework to learn 3D structures from 2D observations with a 3D-to-2D projection mechanism. Their projection mechanism either has learnable parameters or adopts a non-differentiable component based on MCMC, while our perspective projection network is both differentiable and parameter-free.
Representation learning by transformations.
Learning from transformed sensory data has gained attention in recent years Memisevic and Hinton (2007); Hinton et al. (2011); Reed et al. (2014); Michalski et al. (2014); Yang et al. (2015); Jaderberg et al. (2015); Yumer and Mitra (2016). Memisevic and Hinton (2007) introduced a gated Boltzmann machine that models the transformations between image pairs using multiplicative interaction. Reed et al. (2014) showed that disentangled hidden-unit representations of Boltzmann machines (disBM) could be learned from transformations on the data manifold. Yang et al. (2015) learned out-of-plane rotation of rendered images to obtain disentangled identity and viewpoint units by curriculum learning. Kulkarni et al. (2015) proposed to learn a semantically interpretable latent representation from 3D rendered images using variational autoencoders Kingma and Welling (2013) by including specific transformations in mini-batches. Complementary to convolutional networks, Jaderberg et al. (2015) introduced a differentiable sampling layer that directly incorporates geometric transformations into representation learning. Concurrent to our work, Wu et al. (2016) proposed a 3D-to-2D projection layer that enables the learning of 3D object structures using 2D keypoints as annotation.
3 Problem Formulation
In this section, we develop neural networks for reconstructing 3D objects. From the perspective of a learning agent (e.g., a neural network), a natural way to understand a 3D object is from its 2D views under transformations. By moving around the 3D object, the agent should be able to recognize its unique features and eventually build a 3D mental model of it as illustrated in Figure 1(a). Assume that \(I^{(k)}\) is the 2D image obtained from the \(k\)-th viewpoint \(\alpha^{(k)}\) by projection (or rendering in graphics). An object in a certain scene is the entanglement of shape, color and texture (its intrinsic properties), and the image is the further entanglement with viewpoint and illumination (extrinsic parameters). The general goal of understanding 3D objects can be viewed as disentangling intrinsic properties and extrinsic parameters from a single image.
In this paper, we focus on 3D shape learning by ignoring color and texture factors, and we further simplify the problem by making the following assumptions: 1) the scene has a clean white background; 2) the illumination is constant natural lighting. We use a volumetric representation of the 3D shape \(V\) where each voxel \(V_i\) is a binary unit. In other words, the voxel equals one, i.e., \(V_i = 1\), if the \(i\)-th voxel space is occupied by the shape; otherwise \(V_i = 0\). Assuming the 2D silhouette \(S^{(k)}\) is obtained from the \(k\)-th image \(I^{(k)}\), we can specify the 3D-to-2D projection as \(S^{(k)} = P(V; \alpha^{(k)})\). Note that 2D silhouette estimation is typically solved by object segmentation in the real world, but it becomes trivial in our case due to the white background.
In the following subsections, we propose a formulation for learning to predict the volumetric 3D shape \(V\) from an image \(I^{(k)}\), with and without 3D volume supervision.
3.1 Learning to Reconstruct Volumetric 3D Shape from a Single View
We consider single-view volumetric 3D reconstruction as a dense prediction problem and develop a convolutional encoder-decoder network for this learning task, denoted by \(f(\cdot)\). The encoder network learns a viewpoint-invariant latent representation \(h(I^{(k)})\), which is then used by the decoder to generate the volume \(\hat{V} = f(I^{(k)})\). In case the ground truth volumetric shapes are available, the problem can be simply formulated as learning volumetric 3D shapes with a regular reconstruction objective in 3D space: \(\mathcal{L}_{\text{vol}}(I^{(k)}) = \| f(I^{(k)}) - V \|_2^2\).
In practice, however, the ground truth volumetric 3D shapes may not be available for training. For example, the agent observes the 2D silhouette via its built-in camera without accessing the volumetric 3D shape. Inspired by the space carving theory Kutulakos and Seitz (2000), we propose a silhouette-based volumetric loss function. In particular, we build on the premise that a 2D silhouette projected from the generated volume under a certain camera viewpoint should match the ground truth 2D silhouette from image observations. In other words, if all the generated silhouettes match well with their corresponding ground truth silhouettes for all viewpoints, then we hypothesize that the generated volume should be as good as one instance of the visual hull equivalence class of the ground truth volume Kutulakos and Seitz (2000). Therefore, we formulate the learning objective for the \(k\)-th image as
\[
\mathcal{L}_{\text{proj}}(I^{(k)}) = \sum_{j=1}^{n} \big\| P\big(f(I^{(k)}); \alpha^{(j)}\big) - S(\alpha^{(j)}) \big\|_2^2, \tag{1}
\]
where \(j\) is the index of output 2D silhouettes, \(n\) is the number of silhouettes used for each input image and \(P(\cdot\,; \alpha^{(j)})\) is the 3D-to-2D projection function. Note that the above training objective Eq. (1) enables training without using ground-truth volumes. The network diagram is illustrated in Figure 1(b). A more general learning objective is given by a combination of both objectives:
\[
\mathcal{L}(I^{(k)}) = \lambda_{\text{proj}} \, \mathcal{L}_{\text{proj}}(I^{(k)}) + \lambda_{\text{vol}} \, \mathcal{L}_{\text{vol}}(I^{(k)}), \tag{2}
\]
where \(\lambda_{\text{proj}}\) and \(\lambda_{\text{vol}}\) are constants that control the trade-off between the two losses.
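To make the objectives concrete, here is a minimal NumPy sketch of Eq. (1) and Eq. (2). The `toy_project` function, which simply max-flattens the volume along one axis, is an illustrative stand-in for the perspective projection operator described later; all function names are assumptions, not the paper's released code.

```python
import numpy as np

def silhouette_loss(pred_vol, gt_silhouettes, project):
    # Sum of squared errors between projected silhouettes and targets (Eq. 1).
    return sum(np.sum((project(pred_vol, j) - s) ** 2)
               for j, s in enumerate(gt_silhouettes))

def combined_loss(pred_vol, gt_vol, gt_silhouettes, project,
                  lam_proj=1.0, lam_vol=1.0):
    # Weighted combination of projection and volume losses (Eq. 2).
    l_proj = silhouette_loss(pred_vol, gt_silhouettes, project)
    l_vol = np.sum((pred_vol - gt_vol) ** 2)
    return lam_proj * l_proj + lam_vol * l_vol

def toy_project(vol, axis):
    # Toy axis-aligned "projection": flatten the binary volume with max.
    return vol.max(axis=axis)
```

Setting `lam_vol=0` recovers the projection-only objective used when no ground-truth volumes are available.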
3.2 Perspective Transformer Networks
As defined previously, the 2D silhouette \(S^{(k)}\) is obtained via perspective projection given the input 3D volume \(V\) and a specific camera viewpoint \(\alpha^{(k)}\). In this work, we implement the perspective projection (see Figure 1(c)) with a 4-by-4 transformation matrix \(\Theta_{4\times4}\), where \(K\) is the camera calibration matrix and \((R, t)\) are the extrinsic parameters:
\[
\Theta_{4\times4} = \begin{bmatrix} K & \mathbf{0} \\ \mathbf{0}^\top & 1 \end{bmatrix} \begin{bmatrix} R & t \\ \mathbf{0}^\top & 1 \end{bmatrix}. \tag{3}
\]
For each point \(p_i^s = (x_i^s, y_i^s, z_i^s)\) in the 3D world frame, we compute the corresponding point \(p_i^t = (x_i^t, y_i^t)\) in the camera frame (plus its disparity \(d_i^t\)) using the inverse of the perspective transformation matrix: \((x_i^s, y_i^s, z_i^s, 1)^\top \sim \Theta_{4\times4}^{-1} (x_i^t, y_i^t, d_i^t, 1)^\top\). Similar to the spatial transformer network introduced in Jaderberg et al. (2015), we implement a novel perspective transformation operator that performs dense sampling from the input volume (in the 3D world frame) to the output volume (in the camera frame). To obtain the 2D silhouette from the 3D volume, we propose a simple approach using a max operator that flattens the 3D spatial output across the disparity dimension. This operation can be treated as an approximation to the ray-tracing algorithm. In the experiments, we assume that the transformation matrix is always given as input, parametrized by the viewpoint \(\alpha\). Again, each 3D point in the input volume \(V \in \mathbb{R}^{H \times W \times D}\) and the corresponding point in the output volume \(U \in \mathbb{R}^{H' \times W' \times D'}\) are linked by the perspective transformation matrix \(\Theta_{4\times4}\). Here, \((W, H, D)\) and \((W', H', D')\) are the width, height and depth of the input and output volume, respectively.
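As a sketch of the geometry above (not the paper's implementation), the 4-by-4 matrix and the inverse mapping from a camera-frame point plus disparity back to the world frame can be written as follows; `perspective_matrix` and `target_to_source` are hypothetical helper names.

```python
import numpy as np

def perspective_matrix(K, R, t):
    # 4x4 transform: intrinsics (padded to 4x4) times extrinsics [R t; 0 1].
    K4 = np.eye(4)
    K4[:3, :3] = K
    E = np.eye(4)
    E[:3, :3] = R
    E[:3, 3] = t
    return K4 @ E

def target_to_source(theta, x_t, y_t, d_t):
    # Map a camera-frame point (plus disparity) back to the world frame with
    # the inverse transform, then normalize the homogeneous coordinates.
    p = np.linalg.inv(theta) @ np.array([x_t, y_t, d_t, 1.0])
    return p[:3] / p[3]
```

In practice this mapping is applied to every voxel center of the output grid to obtain the source sampling coordinates.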
We summarize the dense sampling step and channel-wise flattening step as follows:
\[
U_i = \sum_{n}^{H} \sum_{m}^{W} \sum_{l}^{D} V_{nml} \max(0, 1-|x_i^s - m|) \max(0, 1-|y_i^s - n|) \max(0, 1-|z_i^s - l|), \qquad \hat{S}_{n'm'} = \max_{l'} U_{n'm'l'}. \tag{4}
\]
Here, \(U_i\) is the \(i\)-th voxel value of the output volume corresponding to the source point \((x_i^s, y_i^s, z_i^s)\), where \(i \in \{1, \ldots, W' \times H' \times D'\}\). Note that we use the max operator for projection instead of summation along one dimension since the volume is represented as a binary cube where the solid voxels have value 1 and empty voxels have value 0. Intuitively, we have the following two observations: (1) each empty voxel will not contribute to the foreground pixel of \(\hat{S}\) from any viewpoint; (2) each solid voxel can contribute to the foreground pixel of \(\hat{S}\) only if it is visible from a specific viewpoint.
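The flattening step and the reason for preferring max over sum can be illustrated with a one-line NumPy sketch (`flatten_max` is an illustrative name):

```python
import numpy as np

def flatten_max(vol_cam, disparity_axis=2):
    # A pixel is foreground iff any voxel along its ray is occupied,
    # so we flatten with max rather than sum over the disparity axis.
    return vol_cam.max(axis=disparity_axis)
```

Summing along the ray instead would produce per-pixel occupied-voxel counts rather than a binary silhouette, over-weighting rays that pass through thick parts of the object.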
3.3 Training
As the same volumetric 3D shape is expected to be generated from different images of the object, the encoder network is required to learn a 3D view-invariant latent representation:
\[
h(I^{(1)}) = h(I^{(2)}) = \cdots = h(I^{(n)}). \tag{5}
\]
This subproblem is itself a challenging task in computer vision Yang et al. (2015); Kulkarni et al. (2015). Thus, we adopt a two-stage training procedure: first, we learn the encoder network for a 3D view-invariant latent representation, and then we train the volumetric decoder with the perspective transformer networks. As shown in Yang et al. (2015), a disentangled representation of 2D synthetic images can be learned from consecutive rotations with a recurrent network; we pre-train the encoder of our network using a similar curriculum strategy so that the latent representation only contains the 3D view-invariant identity information of the object. Once we obtain an encoder network that recognizes the identity of single-view images, we next learn the volume generator regularized by the perspective transformer networks. To encourage the volume decoder to learn a consistent 3D volume from different viewpoints, we include the projections from neighboring viewpoints in each mini-batch so that the network has relatively sufficient information to reconstruct the 3D shape.
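The neighboring-viewpoint mini-batch strategy can be sketched as follows; `neighboring_view_batch` is a hypothetical helper, not the released Torch code, and assumes views indexed by consecutive azimuth steps.

```python
import random

def neighboring_view_batch(num_views, k, step=1):
    # Pick a random anchor view and k consecutive azimuth neighbors
    # (wrapping around the full rotation) to supervise one object.
    anchor = random.randrange(num_views)
    return [(anchor + i * step) % num_views for i in range(k)]
```

For the 24-view setup used here, `neighboring_view_batch(24, k)` yields `k` adjacent 15-degree views of the same object per mini-batch entry.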
4 Experiments
ShapeNetCore.
This dataset contains about 51,300 unique 3D models from 55 common object categories Chang et al. (2015). Each 3D model is rendered from 24 azimuth angles (with a step of 15°) at a fixed elevation angle (30°) under the same camera and lighting setup. We then crop and rescale the centering region of each image to 64 × 64 pixels. For each ground truth 3D shape, we create a volume of 32 × 32 × 32 voxels from its canonical orientation (0°).
Network Architecture.
As shown in Figure 2, our encoder-decoder network has three components: a 2D convolutional encoder, a 3D up-convolutional decoder and a perspective transformer network. The 2D convolutional encoder consists of 3 convolution layers, followed by 3 fully-connected layers (the convolution layers have 64, 128 and 256 channels with a fixed filter size of 5 × 5; the three fully-connected layers have 1024, 1024 and 512 neurons, respectively). The 3D up-convolutional decoder consists of one fully-connected layer, followed by 3 convolution layers (the fully-connected layer has 3 × 3 × 3 × 512 neurons; the convolution layers have 256, 96 and 1 channels with filter sizes of 4 × 4 × 4, 5 × 5 × 5 and 6 × 6 × 6, respectively). For the perspective transformer network, we used a perspective transformation to project the 3D volume to the 2D silhouette, where the transformation matrix is parametrized by 16 variables and the sampling grid is set to 32 × 32 × 32. We use the same network architecture for all the experiments.
Implementation Details.
We used the ADAM Kingma and Ba (2014) solver for stochastic optimization in all the experiments. During the pre-training stage (for the encoder), we used mini-batches of size 32, 32, 8, 4, 3 and 2 for training RNN-1, RNN-2, RNN-4, RNN-8, RNN-12 and RNN-16, respectively, as in Yang et al. (2015). We used a learning rate of \(10^{-4}\) for RNN-1, and \(10^{-5}\) for the rest of the recurrent neural networks. During the fine-tuning stage (for the volume decoder), we used a mini-batch of size 6 and a learning rate of \(10^{-4}\). For each object in a mini-batch, we include projections from all 24 views as supervision. The models including the perspective transformer nets are implemented using Torch Collobert et al. (2011). To download the code, please refer to the project webpage: http://goo.gl/YEJ2H6.
Experimental Design.
As mentioned in the formulation, there are several variants of the model depending on the hyperparameters \(\lambda_{\text{proj}}\) and \(\lambda_{\text{vol}}\) of the learning objective. In the experimental section, we denote the models trained with the projection loss only, the volume loss only, and the combined loss as PTN-Proj (PR), CNN-Vol (VO), and PTN-Comb (CO), respectively.
In the experiments, we address the following questions: (1) Will the model trained with the combined loss achieve better single-view 3D reconstruction performance than the model trained with the volume loss only (PTN-Comb vs. CNN-Vol)? (2) What is the performance gap between the models with and without ground-truth volumes (PTN-Comb vs. PTN-Proj)? (3) How do the three models generalize to instances from unseen categories that are not present in the training set? To answer these questions, we trained the three models under two experimental settings: single category and multiple categories.
4.1 Training on a single category



Table 1: Prediction IU on the chair category (chair: full-view training; chair-N: narrow-view training).

Method | chair (training) | chair (test) | chair-N (training) | chair-N (test)
PTN-Proj:single (no vol. supervision) | 0.5712 | 0.5027 | 0.4882 | 0.4583
PTN-Comb:single (vol. supervision) | 0.6435 | 0.5067 | 0.5564 | 0.4429
CNN-Vol:single (vol. supervision) | 0.6390 | 0.4983 | 0.5518 | 0.4380
NN search (vol. supervision) | — | 0.3557 | — | 0.3073
We select the chair category as the training set for the single-category experiment. For model comparisons, we first conduct quantitative evaluations on the 3D volumes generated from the single-view images of the test set. For each instance in the test set, we generate one volume per view image (24 volumes in total). Given a pair of a ground-truth volume and our generated volume (binarized with threshold 0.5), we compute the intersection-over-union (IU) score, and the average IU score is calculated over the 24 volumes of all the instances in the test set. In addition, we provide a baseline method based on nearest-neighbor (NN) search. Specifically, for each test image, we extract the VGG feature from the fc6 layer (a 4096-dimensional vector) Simonyan and Zisserman (2014) and retrieve the nearest training example using Euclidean distance in the feature space. The ground-truth 3D volume corresponding to the nearest training example is naturally regarded as the retrieval result.
As shown in Table 1, the model trained without volume supervision (projection loss) performs as well as the models trained with volume supervision (volume loss) on the chair category (test set). In addition to the comparisons of overall IU, we measured the view-dependent IU for each model. As shown in Figure 4, the average prediction error (mean IU) changes as we gradually move from the first view to the last view (15° to 360°).
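The evaluation metric above can be sketched in NumPy, assuming occupancy grids as arrays and the 0.5 binarization threshold mentioned in the text (`voxel_iou` is an illustrative name):

```python
import numpy as np

def voxel_iou(pred, gt, threshold=0.5):
    # Binarize the predicted occupancy grid, then compute
    # intersection-over-union against the ground-truth volume.
    p = pred >= threshold
    g = gt >= 0.5
    union = np.logical_or(p, g).sum()
    if union == 0:
        return 1.0  # both volumes empty: define IoU as 1
    return np.logical_and(p, g).sum() / union
```

The per-category numbers reported in the tables are averages of this score over all generated volumes of all test instances.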
For visual comparisons, we provide a side-by-side analysis of the three models we trained. As shown in Figure 3, each row shows an independent comparison. The first column is the 2D image used as input to the model. The second and third columns show the ground-truth 3D volume (the same volume rendered from two views for better visualization). Similarly, we list the model trained with the projection loss only (PTN-Proj), the combined loss (PTN-Comb) and the volume loss only (CNN-Vol) from the fourth column up to the ninth column. The volumes predicted by PTN-Proj and PTN-Comb faithfully represent the shape. However, the volumes predicted by CNN-Vol do not form a solid chair shape in some cases.
Training with partial views. We also conduct control experiments where each object is only observable from a narrow range of azimuth angles (e.g., 8 out of 24 views such as 0°, 15°, …, 105°). We include a detailed description in the supplementary materials. As shown in Table 1 (last two columns), the performance of all three models drops a little, but the conclusion is similar: the proposed network (1) learns better 3D shapes with projection regularization and (2) is capable of learning the 3D shape from 2D observations only.
4.2 Training on multiple categories
Table 2: Prediction IU per category in the multi-class experiment.

Test Category | airplane | bench | dresser | car | chair | display | lamp
PTN-Proj:multi | 0.5556 | 0.4924 | 0.6823 | 0.7123 | 0.4494 | 0.5395 | 0.4223
PTN-Comb:multi | 0.5836 | 0.5079 | 0.7109 | 0.7381 | 0.4702 | 0.5473 | 0.4158
CNN-Vol:multi | 0.5747 | 0.5142 | 0.6975 | 0.7348 | 0.4451 | 0.5390 | 0.3865
NN search | 0.5564 | 0.4875 | 0.5713 | 0.6519 | 0.3512 | 0.3958 | 0.2905

Test Category | loudspeaker | rifle | sofa | table | telephone | vessel
PTN-Proj:multi | 0.5868 | 0.5987 | 0.6221 | 0.4938 | 0.7504 | 0.5507
PTN-Comb:multi | 0.5675 | 0.6097 | 0.6534 | 0.5146 | 0.7728 | 0.5399
CNN-Vol:multi | 0.5478 | 0.6031 | 0.6467 | 0.5136 | 0.7692 | 0.5445
NN search | 0.4600 | 0.5133 | 0.5314 | 0.3097 | 0.6696 | 0.4078



We conducted the multi-class experiment using the same setup as in the single-class experiment. The training set includes 13 major categories: airplane, bench, dresser, car, chair, display, lamp, loudspeaker, rifle, sofa, table, telephone and vessel. We held out 20% of the instances from each category as testing data. As shown in Table 2, the quantitative results demonstrate that (1) the model trained with the combined loss is superior to the volume loss in most cases and (2) the model trained with the projection loss performs as well as the volume/combined loss. From the visualization results shown in Figure 5, all three models predict volumes reasonably well; there are only subtle differences in object parts such as the wing of an airplane.
4.3 Out-of-Category Tests
Table 3: Prediction IU in the out-of-category tests.

Method / Test Category | bed | bookshelf | cabinet | motorbike | train
PTN-Proj:single (no vol. supervision) | 0.1801 | 0.1707 | 0.3937 | 0.1189 | 0.1550
PTN-Comb:single (vol. supervision) | 0.1507 | 0.1186 | 0.2626 | 0.0643 | 0.1044
CNN-Vol:single (vol. supervision) | 0.1558 | 0.1183 | 0.2588 | 0.0580 | 0.0956
PTN-Proj:multi (no vol. supervision) | 0.1944 | 0.3448 | 0.6484 | 0.3216 | 0.3670
PTN-Comb:multi (vol. supervision) | 0.1647 | 0.3195 | 0.5257 | 0.1914 | 0.3744
CNN-Vol:multi (vol. supervision) | 0.1586 | 0.3037 | 0.4977 | 0.2253 | 0.3740
Ideally, an intelligent agent should have the ability to generalize knowledge learned from previously seen categories to unseen categories. To this end, we design out-of-category tests for the models trained on a single category and on multiple categories, as described in Section 4.1 and Section 4.2, respectively. We select 5 unseen categories from ShapeNetCore for the out-of-category tests: bed, bookshelf, cabinet, motorbike and train. Here, the two categories cabinet and train are relatively easier than the other categories since there may be instances in the training set with similar shapes (e.g., dresser, vessel, and airplane), while bed, bookshelf and motorbike can be considered completely novel categories in terms of shape.
We summarize the quantitative results in Table 3. Surprisingly, the model trained on multiple categories still achieves a reasonably good overall IU. As shown in Figure 6, the proposed projection loss generalizes better than the models trained with the combined loss or volume loss on train, motorbike and cabinet. The observations from the out-of-category tests suggest that (1) generalization from a single category is very challenging, but training on multiple categories can significantly improve generalization, and (2) the projection regularization helps learn a robust representation that generalizes better to unseen categories.



5 Conclusions
In this paper, we investigate the problem of single-view 3D shape reconstruction from a learning agent's perspective. By formulating the learning procedure as an interaction between the 3D shape and the 2D observation, we propose an encoder-decoder network that uses the projection transformation as regularization. Experimental results demonstrate (1) the excellent performance of the proposed model in reconstructing objects even without ground-truth 3D volumes as supervision and (2) the generalization potential of the proposed model to unseen categories.
Acknowledgments
This work was supported in part by NSF CAREER IIS-1453651, ONR N00014-13-1-0762, a Sloan Research Fellowship, and a gift from Adobe. We acknowledge NVIDIA for the donation of GPUs. We also thank Yuting Zhang, Scott Reed, Junhyuk Oh, Ruben Villegas, Seunghoon Hong, Wenling Shang, Kibok Lee, Lajanugen Logeswaran, Rui Zhang and Yi Zhang for helpful comments and discussions.
References
 Chang et al. [2015] A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, et al. ShapeNet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015.
 Choy et al. [2016] C. B. Choy, D. Xu, J. Gwak, K. Chen, and S. Savarese. 3D-R2N2: A unified approach for single and multi-view 3d object reconstruction. In ECCV, 2016.

 Collobert et al. [2011] R. Collobert, K. Kavukcuoglu, and C. Farabet. Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, 2011.
 Girdhar et al. [2016] R. Girdhar, D. F. Fouhey, M. Rodriguez, and A. Gupta. Learning a predictable and generative vector representation for objects. In ECCV, 2016.
 Hinton et al. [2011] G. E. Hinton, A. Krizhevsky, and S. D. Wang. Transforming autoencoders. In ICANN. Springer, 2011.
 Jaderberg et al. [2015] M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial transformer networks. In NIPS, 2015.
 Kingma and Ba [2014] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
 Kingma and Welling [2013] D. P. Kingma and M. Welling. Autoencoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
 Kulkarni et al. [2015] T. D. Kulkarni, W. Whitney, P. Kohli, and J. B. Tenenbaum. Deep convolutional inverse graphics network. In NIPS, 2015.
 Kutulakos and Seitz [2000] K. N. Kutulakos and S. M. Seitz. A theory of shape by space carving. International Journal of Computer Vision, 38(3):199–218, 2000.
 Lee et al. [2009] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In ICML, 2009.
 Memisevic and Hinton [2007] R. Memisevic and G. Hinton. Unsupervised learning of image transformations. In CVPR, 2007.
 Michalski et al. [2014] V. Michalski, R. Memisevic, and K. Konda. Modeling deep temporal dependencies with recurrent "grammar cells". In NIPS, 2014.
 Qi et al. [2016] C. R. Qi, H. Su, M. Niessner, A. Dai, M. Yan, and L. J. Guibas. Volumetric and multiview cnns for object classification on 3d data. In CVPR, 2016.
 Reed et al. [2014] S. Reed, K. Sohn, Y. Zhang, and H. Lee. Learning to disentangle factors of variation with manifold interaction. In ICML, 2014.
 Rezende et al. [2016] D. J. Rezende, S. Eslami, S. Mohamed, P. Battaglia, M. Jaderberg, and N. Heess. Unsupervised learning of 3d structure from images. In NIPS, 2016.
 Simonyan and Zisserman [2014] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

 Su et al. [2015] H. Su, S. Maji, E. Kalogerakis, and E. Learned-Miller. Multi-view convolutional neural networks for 3d shape recognition. In ICCV, 2015.
 Szeliski [2010] R. Szeliski. Computer Vision: Algorithms and Applications. 2010.
 Tatarchenko et al. [2015] M. Tatarchenko, A. Dosovitskiy, and T. Brox. Single-view to multi-view: Reconstructing unseen views with a convolutional network. arXiv preprint arXiv:1511.06702, 2015.
 Wu et al. [2016] J. Wu, T. Xue, J. J. Lim, Y. Tian, J. B. Tenenbaum, A. Torralba, and W. T. Freeman. Single image 3d interpreter network. In ECCV, 2016.
 Wu et al. [2015] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao. 3d shapenets: A deep representation for volumetric shapes. In CVPR, 2015.
 Yang et al. [2015] J. Yang, S. E. Reed, M.-H. Yang, and H. Lee. Weakly-supervised disentangling with recurrent transformations for 3d view synthesis. In NIPS, 2015.
 Yumer and Mitra [2016] E. Yumer and N. J. Mitra. Learning semantic deformation flows with 3d convolutional networks. In ECCV, 2016.
Appendix A Details regarding perspective transformer network
As defined in the main text, the 2D silhouette \(S^{(k)}\) is obtained via perspective transformation given the input 3D volume \(V\) and a specific camera viewpoint \(\alpha^{(k)}\).
Perspective Projection.
In this work, we implement the perspective projection (see Figure 7) with a 4-by-4 transformation matrix \(\Theta_{4\times4}\), where \(K\) is the camera calibration matrix and \((R, t)\) are the extrinsic parameters:
\[
\Theta_{4\times4} = \begin{bmatrix} K & \mathbf{0} \\ \mathbf{0}^\top & 1 \end{bmatrix} \begin{bmatrix} R & t \\ \mathbf{0}^\top & 1 \end{bmatrix}. \tag{6}
\]
For each point \(p_i^s = (x_i^s, y_i^s, z_i^s)\) in the 3D world frame, we compute the corresponding point \(p_i^t = (x_i^t, y_i^t)\) in the camera frame (plus its disparity \(d_i^t\)) using the inverse perspective transformation matrix.
Similar to the spatial transformer network introduced in Jaderberg et al. (2015), we implement a novel perspective transformation operator that performs dense sampling from the input volume (in the 3D world frame) to the output volume (in the camera frame). In the experiments, we assume that the transformation matrix is always given as input, parametrized by the viewpoint \(\alpha\):
\[
(\tilde{x}_i^s, \tilde{y}_i^s, \tilde{z}_i^s, \tilde{w}_i^s)^\top = \Theta_{4\times4}^{-1} \, (x_i^t, y_i^t, d_i^t, 1)^\top. \tag{7}
\]
In addition, we obtain the normalized coordinates by \(x_i^s = \tilde{x}_i^s / \tilde{w}_i^s\), \(y_i^s = \tilde{y}_i^s / \tilde{w}_i^s\) and \(z_i^s = \tilde{z}_i^s / \tilde{w}_i^s\), where \(d_i^t\) is the disparity.
Differentiable Volume Sampling.
To obtain the 2D silhouette from the 3D volume, we propose a simple approach using a max operator that flattens the 3D spatial output across the disparity dimension. This operation can be treated as an approximation to the ray-tracing algorithm. To make the entire sampling process differentiable, we adopt a sampling strategy similar to the one proposed in Jaderberg et al. (2015). That is, each source point \((x_i^s, y_i^s, z_i^s)\) defines a spatial location where a sampling kernel is applied to get the value at a particular voxel \(U_i\) in the output volume \(U\):
\[
U_i = \sum_{n}^{H} \sum_{m}^{W} \sum_{l}^{D} V_{nml} \, k(x_i^s - m; \Phi_x) \, k(y_i^s - n; \Phi_y) \, k(z_i^s - l; \Phi_z). \tag{8}
\]
Here, \(\Phi_x\), \(\Phi_y\) and \(\Phi_z\) are the parameters of a generic sampling kernel \(k(\cdot)\) which defines the interpolation method. We implement the bilinear sampling kernel \(k(x) = \max(0, 1 - |x|)\) in this work.
Finally, we summarize the dense sampling step and channel-wise flattening step as follows:
\[
U_i = \sum_{n}^{H} \sum_{m}^{W} \sum_{l}^{D} V_{nml} \max(0, 1-|x_i^s - m|) \max(0, 1-|y_i^s - n|) \max(0, 1-|z_i^s - l|), \qquad \hat{S}_{n'm'} = \max_{l'} U_{n'm'l'}. \tag{9}
\]
Note that we use the max operator for projection instead of summation along one dimension since the volume is represented as a binary cube where the solid voxels have value 1 and empty voxels have value 0. Intuitively, we have the following two observations: (1) each empty voxel will not contribute to the foreground pixel of \(\hat{S}\) from any viewpoint; (2) each solid voxel can contribute to the foreground pixel of \(\hat{S}\) only if it is visible from a specific viewpoint.
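The sampling step with the \(\max(0, 1-|x|)\) kernel can be sketched in plain NumPy. This is an unoptimized illustration; the `sample_volume` helper and its `V[z, y, x]` indexing convention are assumptions for the example, not the paper's implementation.

```python
import numpy as np

def tent(x):
    # Bilinear (tent) kernel k(x) = max(0, 1 - |x|).
    return np.maximum(0.0, 1.0 - np.abs(x))

def sample_volume(V, src_pts):
    # Kernel-weighted sum over the input voxels around each source
    # coordinate (x, y, z); only the 8 nearest neighbors have nonzero weight.
    D, H, W = V.shape  # indexed as V[z, y, x] in this sketch
    out = np.zeros(len(src_pts))
    for i, (x, y, z) in enumerate(src_pts):
        x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
        for dz in (0, 1):
            for dy in (0, 1):
                for dx in (0, 1):
                    xi, yi, zi = x0 + dx, y0 + dy, z0 + dz
                    if 0 <= xi < W and 0 <= yi < H and 0 <= zi < D:
                        w = tent(x - xi) * tent(y - yi) * tent(z - zi)
                        out[i] += w * V[zi, yi, xi]
    return out
```

Sampling at integer coordinates recovers the original voxel values, while fractional coordinates interpolate between neighbors, which is what makes the operator differentiable with respect to the sampling locations.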
Appendix B Details regarding learning from partial views
In our experiments, we have access to 2D projections from the entire set of 24 azimuth angles for each object in the training set. A natural but more challenging setting is to learn 3D reconstruction given only partial views of each object. To evaluate the performance gap when using partial views during training, we train the model in two different ways: 1) using a narrow range of azimuths and 2) using sparse azimuths. For the first setting, we constrain the azimuth range to 105° (8 out of 24 views). For the second setting, we provide 8 views that form a full 360° rotation but with a larger step size of 45°.
For both tasks, we conduct the training under the new constraints. More specifically, we pre-train the encoder using the method proposed by Yang et al. (2015) with a similar curriculum learning strategy: RNN-1, RNN-2, RNN-4 and finally RNN-7 (since only 8 views are available during training). For the fine-tuning step, we limit the number of input views based on the constraint. For evaluation on the test set, we use all the views so that the numbers are comparable with the original setting. As shown in Table 4, the performance of all three models drops a little. Overall, the proposed network (1) learns better 3D shapes with projection regularization and (2) is capable of learning the 3D shape from 2D observations only. Note that the partial-view experiments are conducted on a single category only, but we believe the results would be consistent across multiple categories.
Table 4: Prediction IU when training with partial views (chair: full views; chair-N: narrow azimuth range; chair-S: sparse azimuths).

Method | chair (training) | chair (test) | chair-N (training) | chair-N (test) | chair-S (training) | chair-S (test)
PTN-Proj:single | 0.5712 | 0.5027 | 0.4882 | 0.4583 | 0.5201 | 0.4869
PTN-Comb:single | 0.6435 | 0.5067 | 0.5564 | 0.4429 | 0.6037 | 0.4682
CNN-Vol:single | 0.6390 | 0.4983 | 0.5518 | 0.4380 | 0.5712 | 0.4646
NN search (vol. supervision) | — | 0.3557 | — | 0.3073 | — | 0.3084
Appendix C Additional visualization results on 3D volumetric shape reconstruction
As shown in Figures 8, 9, 10 and 11, we provide additional side-by-side analysis of the three models we trained. In each figure, every row is an independent comparison. The first column is the 2D image used as input to the model. The second and third columns show the ground-truth 3D volume (the same volume rendered from two views for better visualization). Similarly, we list the model trained with the projection loss only (PTN-Proj), the combined loss (PTN-Comb) and the volume loss only (CNN-Vol) from the fourth column up to the ninth column.











