1 Introduction
In recent years, learning-based 3D reconstruction approaches have achieved impressive results [45, 51, 12, 13, 17, 24, 39, 59, 74, 46]. By using rich prior knowledge obtained during the training process, they are able to infer a 3D model from as little as a single image. However, most learning-based methods are restricted to synthetic data, mainly because they require accurate 3D ground truth models as supervision for training.
To overcome this barrier, recent works have investigated approaches which require only 2D supervision in the form of depth maps or multi-view images. Most existing approaches achieve this by modifying the rendering process to make it differentiable [4, 53, 73, 41, 81, 40, 44, 31, 21, 47, 34, 15, 70, 69, 57, 54, 11]. While yielding compelling results, they are restricted to specific 3D representations (e.g., voxels or meshes) which suffer from discretization artifacts. Moreover, the computational cost limits them to small resolutions or to deforming a fixed template mesh. At the same time, implicit representations for shape [45, 51, 12] and texture [49, 61] have been proposed which do not require discretization during training and have a constant memory footprint. However, existing approaches using implicit representations require 3D ground truth for training, and it remains unclear how to learn implicit representations from image data alone.
Contribution: In this work, we introduce Differentiable Volumetric Rendering (DVR). Our key insight is that we can derive analytic gradients for the predicted depth map with respect to the network parameters of the implicit shape and texture representation (see Fig. 1). This insight enables us to design a differentiable renderer for implicit shape and texture representations and allows us to learn these representations solely from multi-view images and object masks. Since our method does not have to store volumetric data in the forward pass, its memory footprint is independent of the sampling accuracy of the depth prediction step. We show that our formulation can be used for various tasks such as single- and multi-view reconstruction, and works with synthetic and real data. In contrast to [49], we do not need to condition the texture representation on the geometry, but learn a single model with shared parameters that represents both geometry and texture.
2 Related Work
3D Representations: Learning-based 3D reconstruction approaches can be categorized in terms of the 3D representation they use. Common representations are voxels [13, 8, 76, 56, 19, 67, 59, 77], point clouds [17, 2, 71, 78, 29, 38], meshes [39, 24, 74, 30, 50], and implicit representations [45, 12, 51, 75, 61, 46, 22, 28, 3].
Voxels can be easily processed by standard deep learning architectures, but even when operating on sparse data structures [23, 59, 68], they are limited to relatively small voxel resolutions. While point-based approaches [17, 2, 71, 78, 38] are more memory-efficient, they require intensive post-processing because of missing connectivity information. Mesh-based methods do not require additional post-processing, but most approaches need a deformable template mesh [74] or represent geometry as a collection of 3D patches [24], which leads to self-intersections and 3D meshes that are not watertight. To mitigate these problems, implicit 3D representations have recently gained popularity [45, 12, 51, 75, 61, 46, 22, 28, 48, 49, 3, 66].
By describing 3D geometry and texture implicitly, e.g., as the decision boundary of a binary classifier [45, 12], they do not require discretization and have a fixed memory footprint. In this work, we show that the volumetric rendering step for implicit representations is inherently differentiable. In contrast to previous works, this allows us to learn implicit 3D shape and texture representations using 2D supervision.
3D Reconstruction: Recovering 3D information which is lost during the image capturing process is one of the long-standing goals of computer vision [25]. Classic multi-view stereo (MVS) methods [64, 5, 20, 63, 6, 7, 35, 65, 55] usually match features between neighboring views [5, 20, 63] or reconstruct the 3D shape in a voxel grid [6, 7, 35, 65, 55]. While the former methods produce depth maps which have to be fused in a lossy post-processing step, e.g., using volumetric fusion [14], the latter approaches are limited by the excessive memory requirements of 3D voxel grids. In contrast to these highly engineered approaches, our generic method directly outputs a consistent representation in 3D space which can easily be converted into a watertight mesh while having a constant memory footprint. Recently, learning-based approaches [58, 53, 16, 37, 27, 79, 80] have been proposed that either learn to match image features [37], refine or fuse depth maps [16, 58], optimize parts of the classical MVS pipeline [52], or replace the entire MVS pipeline with neural networks that are trained end-to-end [27, 79, 80]. In contrast to these learning-based approaches, our method can be supervised from 2D images alone and outputs a consistent 3D representation.
Differentiable Rendering: A large number of recent works focus on making the rendering process differentiable. The different approaches can again be categorized by the underlying representation of 3D geometry that they use.
Loper et al. [44] propose OpenDR, which approximates the backward pass of the traditional mesh-based graphics pipeline and has inspired several follow-up works [31, 41, 81, 21, 11]. Liu et al. [41] replace the rasterization step with a soft version to make it differentiable. While yielding compelling results in reconstruction tasks, these approaches are restricted to mesh representations and require a template mesh for training, restricting the topology of the output.
Another line of work operates on 3D voxel grids [52, 73, 43, 47]. Paschalidou et al. [52] and Tulsiani et al. [73] propose a probabilistic ray potential formulation. While providing a solid mathematical framework, all calculations performed during the forward pass need to be saved for backpropagation, restricting these approaches to relatively small-resolution voxel grids.
In concurrent work [42] (first arXiv version published on November 2, 2019), Liu et al. propose to infer implicit representations from multi-view silhouettes by performing max-pooling over the intersections of rays with a sparse number of supporting regions around anchor points. In contrast, we use texture information, enabling us to improve over the visual hull and to reconstruct concave shapes. Moreover, their approach requires saving intermediate evaluations for computing gradients, whereas we show that volumetric rendering is inherently differentiable for implicit representations, so no intermediate results need to be saved for the backward pass.
3 Method
In this section, we describe our Differentiable Volumetric Rendering (DVR) approach. We first define the implicit neural representation which we use for representing 3D shape and texture. Afterwards, we provide a formal description of DVR and all relevant implementation details. An overview of our approach is provided in Fig. 2.
3.1 Shape and Texture Representation
Shape: In contrast to discrete voxel- and point-based representations, we represent the 3D shape of an object implicitly using the occupancy network introduced in [45]:
$f_\theta \colon \mathbb{R}^3 \times \mathcal{Z} \to [0, 1]$   (1)
An occupancy network $f_\theta$ assigns a probability of occupancy to every point $\mathbf{p} \in \mathbb{R}^3$ in 3D space. Optionally, $f_\theta$ can be conditioned on a latent vector $\mathbf{z} \in \mathcal{Z}$, e.g., for inferring 3D shape from a single 2D image. The 3D surface of an object is implicitly determined by the level set $f_\theta(\mathbf{p}) = \tau$ for a threshold parameter $\tau$ and can be extracted at arbitrary resolution using isosurface extraction techniques (see Mescheder et al. [45] for details).
Texture: Similarly, we can describe the texture of a 3D object using a texture field [49]
$\mathbf{t}_\theta \colon \mathbb{R}^3 \times \mathcal{Z} \to \mathbb{R}^3$   (2)
which regresses an RGB color value for every point $\mathbf{p} \in \mathbb{R}^3$ in 3D space. Again, $\mathbf{t}_\theta$ can be conditioned on a latent embedding $\mathbf{z}$ of the object. The texture of an object is given by the values of $\mathbf{t}_\theta$ on the object's surface ($f_\theta(\mathbf{p}) = \tau$). In this work, we implement $f_\theta$ and $\mathbf{t}_\theta$ as a single neural network with two shallow heads.
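To make this concrete, the following is a minimal NumPy sketch of such a combined implicit representation: a shared trunk followed by two shallow heads, one producing an occupancy probability and one an RGB value per 3D point. The class name, layer sizes, and initialization are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(dims):
    """Random weights for a small fully-connected network (hypothetical sizes)."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(dims[:-1], dims[1:])]

class ImplicitField:
    """Toy stand-in for the combined network: a shared trunk followed by
    two shallow heads, one for occupancy f_theta(p) in [0, 1] and one for
    texture t_theta(p) in R^3."""

    def __init__(self, hidden=32):
        self.trunk = init_mlp([3, hidden, hidden])
        self.occ_head = init_mlp([hidden, 1])
        self.tex_head = init_mlp([hidden, 3])

    @staticmethod
    def _forward(layers, x):
        for i, (W, b) in enumerate(layers):
            x = x @ W + b
            if i < len(layers) - 1:
                x = np.maximum(x, 0.0)  # ReLU between layers
        return x

    def __call__(self, p):
        h = np.maximum(self._forward(self.trunk, p), 0.0)  # shared trunk features
        occ = 1.0 / (1.0 + np.exp(-self._forward(self.occ_head, h)))  # sigmoid head
        rgb = 1.0 / (1.0 + np.exp(-self._forward(self.tex_head, h)))
        return occ[..., 0], rgb

field = ImplicitField()
points = rng.standard_normal((5, 3))
occ, rgb = field(points)
print(occ.shape, rgb.shape)  # (5,) (5, 3)
```

Both heads share all trunk parameters, mirroring the idea of a single model representing geometry and texture jointly.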
Supervision: Recent works [45, 12, 51, 49, 61] have shown that it is possible to learn $f_\theta$ and $\mathbf{t}_\theta$ with 3D supervision (i.e., ground truth 3D models). However, ground truth 3D data is often very expensive or even impossible to obtain for real-world datasets. In the next section, we therefore introduce DVR, an alternative approach that enables us to learn both $f_\theta$ and $\mathbf{t}_\theta$ from 2D images alone. For clarity, we drop the condition variable $\mathbf{z}$ in the following.
3.2 Differentiable Volumetric Rendering
Our goal is to learn $f_\theta$ and $\mathbf{t}_\theta$ from 2D image observations. Consider a single image observation $I$. We define a photometric reconstruction loss
$\mathcal{L}(\hat{I}, I) = \sum_{u} \lVert \hat{I}_u - I_u \rVert$   (3)
which we aim to optimize. Here, $I$ denotes the observed image and $\hat{I}$ is the image rendered by our implicit model. (Note that the rendered image $\hat{I}$ depends on $\theta$ through $f_\theta$ and $\mathbf{t}_\theta$; we have dropped this dependency to avoid clutter in the notation.) Moreover, $I_u$ denotes the RGB value of the observation $I$ at pixel $u$ and $\lVert \cdot \rVert$ is a (robust) photo-consistency measure such as the $\ell_1$ norm. To minimize the reconstruction loss wrt. the network parameters $\theta$ using gradient-based optimization techniques, we must be able to (i) render $\hat{I}$ given $f_\theta$ and $\mathbf{t}_\theta$ and (ii) compute gradients of $\mathcal{L}$ wrt. the network parameters $\theta$. Our core contribution is to provide solutions to both problems, leading to an efficient algorithm for learning implicit 3D representations from 2D images.
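As a concrete example of the loss in (3), a minimal sketch with an $\ell_1$ photo-consistency measure, assuming images stored as `(H, W, 3)` arrays (the function name is ours, not the paper's):

```python
import numpy as np

def photometric_loss(I_hat, I):
    """L(I_hat, I) = sum over pixels u of ||I_hat_u - I_u||_1.
    I_hat, I: observed and rendered images of shape (H, W, 3)."""
    return np.abs(I_hat - I).sum()

# Toy check: a constant-gray rendering against a black observation.
I = np.zeros((4, 4, 3))
I_hat = np.full((4, 4, 3), 0.5)
print(photometric_loss(I_hat, I))  # 0.5 * 4*4*3 = 24.0
```

In practice a robust measure (e.g., Huber) could replace the plain $\ell_1$ sum without changing the structure of the optimization.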
Rendering: For a camera located at $\mathbf{r}_0$ we can predict the color $\hat{I}_u$ at pixel $u$ by casting a ray from $\mathbf{r}_0$ through $u$ and determining the first point of intersection $\hat{\mathbf{p}}$ with the isosurface $\{\mathbf{p} \in \mathbb{R}^3 \mid f_\theta(\mathbf{p}) = \tau\}$, as illustrated in Fig. 3. The color value is then given by $\hat{I}_u = \mathbf{t}_\theta(\hat{\mathbf{p}})$. We refer the reader to Section 3.3 for details on the ray casting process.
Gradients: To obtain gradients of $\mathcal{L}$ with respect to the network parameters $\theta$, we first use the multivariate chain rule:
$\frac{\partial \mathcal{L}}{\partial \theta} = \sum_{u} \frac{\partial \mathcal{L}}{\partial \hat{I}_u} \cdot \frac{\partial \hat{I}_u}{\partial \theta}$   (4)
Here, $\frac{\partial \mathbf{g}}{\partial \mathbf{x}}$ denotes the Jacobian matrix of a vector-valued function $\mathbf{g}$ with vector-valued argument $\mathbf{x}$, and $\cdot$ indicates matrix multiplication. By exploiting $\hat{I}_u = \mathbf{t}_\theta(\hat{\mathbf{p}})$, we obtain
$\frac{\partial \hat{I}_u}{\partial \theta} = \frac{\partial \mathbf{t}_\theta(\hat{\mathbf{p}})}{\partial \theta} + \frac{\partial \mathbf{t}_\theta(\hat{\mathbf{p}})}{\partial \hat{\mathbf{p}}} \cdot \frac{\partial \hat{\mathbf{p}}}{\partial \theta}$   (5)
since both $\mathbf{t}_\theta$ and $\hat{\mathbf{p}}$ depend on $\theta$. Because $\hat{\mathbf{p}}$ is defined implicitly, calculating $\frac{\partial \hat{\mathbf{p}}}{\partial \theta}$ is non-trivial. We first exploit that $\hat{\mathbf{p}}$ lies on the ray from $\mathbf{r}_0$ through pixel $u$. This ray can be described by $\mathbf{r}(d) = \mathbf{r}_0 + d\,\mathbf{w}$, where $\mathbf{w}$ is the vector connecting $\mathbf{r}_0$ and $u$ (see Fig. 3). Since $\hat{\mathbf{p}}$ must lie on the ray, there exists a depth value $\hat{d}$ such that $\hat{\mathbf{p}} = \mathbf{r}(\hat{d})$. We call $\hat{d}$ the surface depth. This enables us to rewrite $\frac{\partial \hat{\mathbf{p}}}{\partial \theta}$ as
$\frac{\partial \hat{\mathbf{p}}}{\partial \theta} = \mathbf{w} \, \frac{\partial \hat{d}}{\partial \theta}$   (6)
For computing the gradient of the surface depth $\hat{d}$ with respect to $\theta$ we exploit implicit differentiation [3, 60]. Differentiating the level set equation $f_\theta(\hat{\mathbf{p}}) = \tau$ on both sides wrt. $\theta$, we obtain:
$\frac{\partial f_\theta(\hat{\mathbf{p}})}{\partial \theta} + \frac{\partial f_\theta(\hat{\mathbf{p}})}{\partial \hat{\mathbf{p}}} \cdot \mathbf{w} \, \frac{\partial \hat{d}}{\partial \theta} = 0$   (7)
Rearranging (7), we arrive at the following closed-form expression for the gradient of the surface depth $\hat{d}$:
$\frac{\partial \hat{d}}{\partial \theta} = -\left( \frac{\partial f_\theta(\hat{\mathbf{p}})}{\partial \hat{\mathbf{p}}} \cdot \mathbf{w} \right)^{-1} \frac{\partial f_\theta(\hat{\mathbf{p}})}{\partial \theta}$   (8)
We remark that calculating the gradient of the surface depth $\hat{d}$ wrt. the network parameters $\theta$ only involves calculating the gradient of $f_\theta$ at the surface point $\hat{\mathbf{p}}$, both wrt. the network parameters $\theta$ and wrt. $\hat{\mathbf{p}}$. Thus, in contrast to voxel-based approaches [73, 53], we do not have to store intermediate results (e.g., volumetric data) for computing the gradient of the loss wrt. the parameters, resulting in a memory-efficient algorithm. In the next section, we describe our implementation of DVR which makes use of reverse-mode automatic differentiation to compute the full gradient (4).
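The gradient formula in (8) can be illustrated on a toy analytic soft occupancy: a sphere whose radius plays the role of a single network parameter θ. All names and constants here are assumptions for illustration; the result is checked against a finite-difference derivative of the closed-form surface depth:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

k = 10.0                           # sharpness of the soft occupancy boundary
r0 = np.array([0.0, 0.0, -2.0])    # camera center
w = np.array([0.0, 0.0, 1.0])      # unit viewing direction

def f(p, theta):
    """Soft occupancy of a sphere with radius theta centered at the origin;
    the tau = 0.5 level set is exactly the sphere surface |p| = theta."""
    return sigmoid(k * (theta - np.linalg.norm(p)))

def surface_depth(theta):
    """Closed-form surface depth for this particular ray: d = 2 - theta."""
    return 2.0 - theta

theta = 1.0
d_hat = surface_depth(theta)
p_hat = r0 + d_hat * w             # surface point with f(p_hat) = 0.5

# Hand-computed gradients of f at the surface point for this analytic model.
sig = f(p_hat, theta)
sig_prime = sig * (1.0 - sig)
df_dtheta = sig_prime * k                                  # df/dtheta
df_dp = sig_prime * k * (-p_hat / np.linalg.norm(p_hat))   # df/dp

# Eq. (8): dd/dtheta = -(df/dp . w)^(-1) * df/dtheta
dd_dtheta = -df_dtheta / (df_dp @ w)

# Finite-difference check against the closed-form depth d = 2 - theta.
eps = 1e-5
fd = (surface_depth(theta + eps) - surface_depth(theta - eps)) / (2 * eps)
print(dd_dtheta, fd)  # both ≈ -1.0
```

Note that only quantities at the single surface point enter the computation; no per-sample intermediate values along the ray are required.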
3.3 Implementation
In order to use automatic differentiation, we have to implement the forward and backward pass for the surface depth prediction step. In the following, we describe how both passes are implemented. For more details, we refer the reader to the supplementary material.
Forward Pass: As visualized in Fig. 3, we can determine the surface depth $\hat{d}$ by finding the first occupancy change on the ray $\mathbf{r}(d)$. To detect an occupancy change, we evaluate the occupancy network $f_\theta$ at $n$ equally-spaced samples on the ray. Using a step size of $\Delta s$, we can express the coordinates of these points in world coordinates as
$\mathbf{p}_j = \mathbf{r}_0 + (d_0 + j \, \Delta s) \, \mathbf{w}, \qquad j = 0, \dots, n-1$   (9)
where $d_0$ determines the closest possible surface point. We first find the smallest index $j$ for which $f_\theta$ changes from free space ($f_\theta < \tau$) to occupied space ($f_\theta \geq \tau$):
$j^\ast = \min \left\{ j \;\middle|\; f_\theta(\mathbf{p}_j) < \tau \,\wedge\, f_\theta(\mathbf{p}_{j+1}) \geq \tau \right\}$   (10)
We obtain an approximation to the surface depth $\hat{d}$ by applying the iterative secant method to the interval between the last free-space sample and the first occupied sample. In practice, we compute the surface depth for a whole batch of points in parallel. It is important to note that we do not need to unroll the forward pass or store any intermediate results, as we exploit implicit differentiation to directly obtain the gradient of $\hat{d}$ wrt. $\theta$.
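A sketch of this forward pass on a toy analytic occupancy (a soft unit sphere standing in for the learned network $f_\theta$); the function names, sampling interval, and hyperparameters are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def occupancy(p):
    """Toy analytic occupancy: a soft unit sphere at the origin, standing in
    for the learned network f_theta; tau = 0.5 exactly on the surface |p| = 1."""
    return sigmoid(8.0 * (1.0 - np.linalg.norm(p, axis=-1)))

def surface_depth(r0, w, d0=0.0, d1=4.0, n=64, tau=0.5, secant_steps=8):
    """Evaluate the occupancy at n equally spaced samples on the ray
    r(d) = r0 + d*w, find the first free-to-occupied transition, and
    refine the crossing with the secant method."""
    ds = np.linspace(d0, d1, n)
    occ = occupancy(r0 + ds[:, None] * w)
    inside = occ >= tau
    crossings = np.where(~inside[:-1] & inside[1:])[0]
    if len(crossings) == 0:
        return None                       # ray never enters the object
    j = crossings[0]
    a, b = ds[j], ds[j + 1]               # bracket around the first crossing
    fa = occupancy(r0 + a * w) - tau
    fb = occupancy(r0 + b * w) - tau
    for _ in range(secant_steps):         # secant iteration on f(r(d)) - tau
        if fb == fa:                      # converged (avoid division by zero)
            break
        d = b - fb * (b - a) / (fb - fa)
        fd = occupancy(r0 + d * w) - tau
        a, fa = b, fb
        b, fb = d, fd
    return b

r0 = np.array([0.0, 0.0, -2.0])
w = np.array([0.0, 0.0, 1.0])
d_hat = surface_depth(r0, w)
print(d_hat)  # ≈ 1.0: the ray enters the unit sphere at depth 1
```

Only the final refined depth is kept; none of the intermediate occupancy evaluations need to be stored for the backward pass.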
Backward Pass: The input to the backward pass is the gradient of the loss $\mathcal{L}$ wrt. a single surface depth prediction $\hat{d}$. The output of the backward pass is $\frac{\partial \mathcal{L}}{\partial \hat{d}} \cdot \frac{\partial \hat{d}}{\partial \theta}$, which can be computed using (8). In practice, however, we would like to implement the backward pass not only for a single surface depth $\hat{d}$, but for a whole batch of depth values. We can implement this efficiently by rewriting $\frac{\partial \mathcal{L}}{\partial \hat{d}} \cdot \frac{\partial \hat{d}}{\partial \theta}$ as
$\frac{\partial \mathcal{L}}{\partial \hat{d}} \cdot \frac{\partial \hat{d}}{\partial \theta} = \frac{\partial f_\theta(\hat{\mathbf{p}})}{\partial \theta} \cdot \left( -\frac{\partial \mathcal{L}}{\partial \hat{d}} \left( \frac{\partial f_\theta(\hat{\mathbf{p}})}{\partial \hat{\mathbf{p}}} \cdot \mathbf{w} \right)^{-1} \right)$   (11)
Importantly, the left term in (11) corresponds to a normal backward operation applied to the neural network $f_\theta$, and the right term in (11) is just an (element-wise) scalar multiplication for all elements in the batch. We can hence conveniently compute the backward pass by first multiplying the incoming gradient $\frac{\partial \mathcal{L}}{\partial \hat{d}}$ element-wise with the factor $-\left( \frac{\partial f_\theta(\hat{\mathbf{p}})}{\partial \hat{\mathbf{p}}} \cdot \mathbf{w} \right)^{-1}$ and then backpropagating the result through the network $f_\theta$. Both operations can be efficiently parallelized in common deep learning frameworks.
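This factor-then-backpropagate trick can be checked numerically on the same kind of toy occupancy, here for a small batch of rays. Because the toy model has a single scalar parameter (the sphere radius), the "backward operation through the network" reduces to a multiplication by $\partial f_\theta / \partial \theta$; all ray positions and constants are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

k, theta = 10.0, 1.0               # boundary sharpness and sphere radius (the "parameter")
w = np.array([0.0, 0.0, 1.0])

# Batch of rays starting at different depths on the z-axis; for a sphere of
# radius theta, each surface depth is d_i = -z_i - theta.
z0 = np.array([-2.0, -3.0, -4.0])
d_hat = -z0 - theta
p_hat = np.column_stack([np.zeros_like(z0), np.zeros_like(z0), z0 + d_hat])  # all (0, 0, -theta)

# Gradients of the analytic occupancy f(p) = sigmoid(k * (theta - |p|)).
norm = np.linalg.norm(p_hat, axis=1)
sig = sigmoid(k * (theta - norm))
sig_prime = sig * (1.0 - sig)
df_dtheta = sig_prime * k                         # per-ray df/dtheta
df_dp_dot_w = sig_prime * k * (-(p_hat @ w) / norm)

# Loss L = sum_i d_i^2, so the incoming gradient is dL/dd_i = 2 d_i.
incoming = 2.0 * d_hat

# Eq. (11): scale the incoming gradient element-wise by the factor
# lam_i = -(df/dp . w)^(-1), then "backpropagate" through f (here a plain
# multiplication by df/dtheta) and sum over the batch.
lam = -1.0 / df_dp_dot_w
dL_dtheta = np.sum(incoming * lam * df_dtheta)

# Analytic reference: dd_i/dtheta = -1, so dL/dtheta = sum_i -2 d_i.
print(dL_dtheta, np.sum(-2.0 * d_hat))  # both ≈ -12.0
```

The per-ray factor is a cheap element-wise operation, so the only network-sized computation in the backward pass is the ordinary backpropagation through $f_\theta$.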
3.4 Training
During training, we assume that we are given images together with corresponding camera intrinsics, extrinsics, and object masks. As our experiments show, our method works with as little as one image per object. In addition, our method can also incorporate depth information, if available.
For training $f_\theta$ and $\mathbf{t}_\theta$, we randomly sample an image and points on the image plane. We distinguish three cases: First, consider the set of points that lie inside the object mask and for which the occupancy network predicts a finite surface depth as described in Section 3.3. For these points we can define a loss directly on the rendered image. Second, consider the points which lie outside the object mask. While we cannot define a photometric loss for these points, we can define a loss that encourages the network to remove spurious geometry along the corresponding rays. Finally, consider the set of points which lie inside the object mask, but for which the occupancy network does not predict a finite surface depth. Again, we cannot use a photometric loss for these points, but we can define a loss that encourages the network to produce a finite surface depth.
RGB Loss: For each point in the first set, we detect the predicted surface depth $\hat{d}$ as described in Section 3.3. We define a photo-consistency loss for these points as
$\mathcal{L}_{\text{rgb}}(\theta) = \sum_{u} \rho\left( \mathbf{F}(\hat{I})_u - \mathbf{F}(I)_u \right)$   (12)
where $\mathbf{F}$ computes image features and $\rho$ defines a robust error metric. In practice, we use RGB values and (optionally) image gradients as features and an $\ell_1$ loss for $\rho$.
Depth Loss: When depth supervision is also given, we can directly incorporate an $\ell_1$ loss on the predicted surface depth as
$\mathcal{L}_{\text{depth}}(\theta) = \sum_{u} \lVert \hat{d}_u - d_u \rVert_1$   (13)
where $d_u$ indicates the ground truth depth value of the sampled image point and $\hat{d}_u$ denotes the predicted surface depth for pixel $u$.
Freespace Loss: If a point $u$ lies outside the object mask but the predicted surface depth is finite, the network falsely predicts a surface point $\hat{\mathbf{p}}_u$. Therefore, we penalize this occupancy with
$\mathcal{L}_{\text{freespace}}(\theta) = \text{BCE}\left( f_\theta(\hat{\mathbf{p}}_u), \, 0 \right)$   (14)
where BCE is the binary cross entropy. When no surface depth is predicted, we apply the freespace loss to a randomly sampled point on the ray.
Occupancy Loss: If a point $u$ lies inside the object mask but the predicted surface depth is infinite, the network falsely predicts no surface point on the corresponding ray $\mathbf{r}_u$. To encourage the network to predict occupied space on this ray, we uniformly sample depth values $d_j$ on the ray and define
$\mathcal{L}_{\text{occupancy}}(\theta) = \text{BCE}\left( f_\theta(\mathbf{r}_u(d_j)), \, 1 \right)$   (15)
If we have additional depth supervision, we use the ground truth depth $d_u$ instead of the sampled depth values for the occupancy loss. Intuitively, $\mathcal{L}_{\text{occupancy}}$ encourages the network to occupy space along the respective rays, which can then be used by $\mathcal{L}_{\text{rgb}}$ in (12) and $\mathcal{L}_{\text{depth}}$ in (13) to refine the initial occupancy.
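A small numeric illustration of the two mask-based loss terms; the occupancy values below are hand-picked for illustration, not outputs of a trained network:

```python
import numpy as np

def bce(pred, target, eps=1e-8):
    """Binary cross entropy between a predicted occupancy and a 0/1 target."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return -(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))

# Freespace loss: a pixel outside the mask hit spurious geometry with
# predicted occupancy 0.9 -> push the occupancy towards 0.
l_free = bce(0.9, 0.0)

# Occupancy loss: a pixel inside the mask found no surface; a sampled point
# on its ray has occupancy 0.1 -> push the occupancy towards 1.
l_occ = bce(0.1, 1.0)

print(round(l_free, 4), round(l_occ, 4))  # 2.3026 2.3026
```

Both terms vanish as the predicted occupancies approach their targets, so they only act where the mask and the current prediction disagree.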
3.5 Implementation Details
We implement the combined network with 5 fully-connected ResNet [26] blocks with ReLU activations and a fixed hidden dimension. The output dimension of the last layer is 4: one dimension for the occupancy probability and three dimensions for the texture. For the single-view reconstruction experiments, we encode the input image with a ResNet-18 [26] encoder network which outputs a latent code z. To facilitate training, we start with a coarse ray sampling accuracy which we iteratively refine by repeatedly doubling the number of evaluation points during training. We choose the sampling interval such that it covers the volume of interest for each object. We train with minibatches of images, sampling a fixed number of random pixels per image. We use the Adam optimizer [33] with a learning rate that is decreased step-wise during training.
4 Experiments
We conduct two different types of experiments to validate our approach. First, we investigate how well our approach reconstructs 3D shape and texture from a single RGB image when trained on a large collection of RGB or RGB-D images. Here, we consider both the case where we have access to multi-view supervision and the case where we use only a single RGB-D image per object during training. Next, we apply our approach to the challenging task of multi-view reconstruction, where the goal is to reconstruct complex 3D objects from real-world multi-view imagery.
4.1 Single-View Reconstruction
First, we investigate to which degree our method can infer a 3D shape and texture representation from a single view.
Datasets: To adhere to community standards [45, 74, 13], we use the Choy et al. [13] subset (13 classes) of the ShapeNet dataset [10] with the training, validation, and test split from [45]. While we use the renderings from Choy et al. [13] as input, we additionally render images with depth maps and object masks for each object which we use for supervision. To get a large variety of viewpoints in the supervision, we randomly sample the viewpoint on the northern hemisphere as well as the distance of the camera to the object.
Baselines: We compare against the state-of-the-art 3D-supervised methods 3D-R2N2 [13] (voxel-based), Pixel2Mesh [74] (mesh-based), and ONet [45] (implicit representation), all of which produce watertight meshes as output. We further compare against both the 2D- and the 2.5D-supervised version of Differentiable Ray Consistency (DRC) [73] (voxel-based) and the 2D-supervised Soft Rasterizer (SoftRas) [41] (mesh-based). For 3D-R2N2, we use the pretrained model from [45], which was shown to produce better results than the original model from [13]. For the other baselines we use the pretrained models from the authors. (Unfortunately, we cannot show texture results for DRC and SoftRas as texture prediction is not part of the official code repositories.)
4.1.1 Multi-View Supervision
We first consider the case where we have access to multi-view supervision with multiple images and corresponding object masks per object. In addition, we also investigate the case when ground truth depth maps are given.
Results: We evaluate the results using the Chamfer distance from [45]. In contrast to previous works [45, 41, 73, 13], we compare directly wrt. the ground truth shape models, not the voxelized or watertight versions.
Table 1: Chamfer distance for single-view reconstruction, grouped by supervision type: 2D (DRC (Mask) [73], SoftRas [41], Ours ($\mathcal{L}_{\text{rgb}}$)), 2.5D (DRC (Depth) [73], Ours ($\mathcal{L}_{\text{depth}}$)), and 3D (3D-R2N2 [13], ONet [45], Pixel2Mesh [74]).
category  DRC (Mask) [73]  SoftRas [41]  Ours ($\mathcal{L}_{\text{rgb}}$)  DRC (Depth) [73]  Ours ($\mathcal{L}_{\text{depth}}$)  3D-R2N2 [13]  ONet [45]  Pixel2Mesh [74]
airplane  0.659  0.149  0.162  0.377  0.143  0.215  0.151  0.183 
bench    0.241  0.215    0.165  0.210  0.171  0.191 
cabinet    0.231  0.207    0.181  0.246  0.189  0.194 
car  0.340  0.221  0.212  0.316  0.180  0.250  0.181  0.154 
chair  0.660  0.338  0.281  0.510  0.228  0.282  0.224  0.259 
display    0.284  0.312    0.248  0.323  0.275  0.231 
lamp    0.381  0.469    0.353  0.566  0.380  0.309 
loudspeaker    0.320  0.340    0.292  0.333  0.290  0.284 
rifle    0.155  0.167    0.139  0.199  0.160  0.151 
sofa    0.407  0.234    0.221  0.264  0.217  0.211 
table    0.374  0.233    0.189  0.247  0.185  0.215 
telephone    0.131  0.170    0.132  0.221  0.155  0.145 
vessel    0.233  0.246    0.205  0.248  0.220  0.201 
mean  0.553  0.266  0.250  0.401  0.206  0.277  0.215  0.210 
Figure 4: Qualitative single-view reconstruction results (columns: Input, SoftRas, Ours ($\mathcal{L}_{\text{rgb}}$), Pixel2Mesh, Ours ($\mathcal{L}_{\text{depth}}$)).
In Table 1 and Fig. 4 we show quantitative and qualitative results for our method and various baselines. We can see that our method is able to infer accurate 3D shape and texture representations from single-view images when trained with only multi-view images and object masks as supervision signal. Quantitatively (Table 1), our method performs best among the approaches with 2D supervision and rivals the quality of methods with full 3D supervision. When trained with depth, our method performs comparably to the best methods which use full 3D information. Qualitatively (Fig. 4), we see that in contrast to the mesh-based approaches, our method is not restricted to certain topologies. When trained with the photo-consistency loss $\mathcal{L}_{\text{rgb}}$, our approach predicts accurate texture information in addition to the 3D shape.
4.1.2 Single-View Supervision
Figure 5: Single-view supervision results (columns: Input, Prediction).
The previous experiment indicates that our model is able to infer accurate shape and texture information without 3D supervision. A natural question to ask is how many images are required during training. To this end, we investigate the case when only a single image with depth and camera information is available. Since we represent the 3D shape in a canonical object coordinate system, the hypothesis is that the model can aggregate the information over multiple training instances, although it sees every object only from one perspective. As the same image is used both as input and supervision signal, we now condition on our renderings instead of the ones provided by Choy et al. [13].
Results: Surprisingly, Fig. 5 shows that our method is able to infer appropriate 3D shape and texture when only a single view is available per object, confirming our hypothesis. Quantitatively, the Chamfer distance of the model trained with only a single view per object is comparable to the same model trained with multiple views per object. The reason that the numbers are worse than in Section 4.1.1 is that for our renderings, we do not only sample the viewpoint, but also the distance to the object, resulting in a much harder task (see Fig. 5).
4.2 Multi-View Reconstruction
Finally, we investigate if our method is also applicable to multi-view reconstruction in real-world scenarios. We consider two cases: first, when multi-view images and object masks are given; second, when additional sparse depth maps are given, which can be obtained from classic multi-view stereo algorithms [62].
Dataset: We conduct this experiment on three scans from the challenging real-world DTU dataset [1]. The dataset contains 49 or 64 images with camera information for each object, together with structured-light ground truth data. The presented objects are challenging as their appearance changes across viewpoints due to specularities. Our sampling-based approach allows us to train on the full image resolution. We label the object masks ourselves and always remove the same images with profound changes in lighting conditions, e.g., caused by the appearance of scanner parts in the background.
Baselines: We compare against classical approaches that output 3D meshes. To this end, we run screened Poisson surface reconstruction (sPSR) [32] on the output of the classical MVS algorithms Campbell et al. [9], Furukawa et al. [18], Tola et al. [72], and Colmap [62]. We find that the results on the DTU benchmark for the baselines are highly sensitive to the trim parameter of sPSR and therefore report results for the trim parameters 0 (watertight output), 5 (good qualitative results), and 7 (good quantitative results). For a fair comparison, we use the object masks to remove all points which lie outside the visual hull from the predictions of the baselines before running sPSR (see supplementary material for details). We use the official DTU evaluation script in “surface mode”.
Table 2: Multi-view reconstruction results on DTU.
Method  Trim Param.  Accuracy  Completeness  Chamfer
Tola [72] + sPSR  0  2.409  1.242  1.826 
Furu [18] + sPSR  0  2.146  0.888  1.517 
Colmap [62] + sPSR  0  1.881  0.726  1.303 
Camp [9] + sPSR  0  2.213  0.670  1.441 
Tola [72] + sPSR  5  1.531  1.267  1.399 
Furu [18] + sPSR  5  1.733  0.888  1.311 
Colmap [62] + sPSR  5  1.400  0.782  1.091 
Camp [9] + sPSR  5  1.991  0.670  1.331 
Tola [72] + sPSR  7  0.396  1.424  0.910 
Furu [18] + sPSR  7  0.723  0.955  0.839 
Colmap [62] + sPSR  7  0.446  1.020  0.733 
Camp [9] + sPSR  7  1.466  0.719  1.092 
Ours ($\mathcal{L}_{\text{rgb}}$)    1.702  1.081  1.391 
Ours ($\mathcal{L}_{\text{rgb}}$ + $\mathcal{L}_{\text{depth}}$)    1.001  0.766  0.884
Results: We show qualitative and quantitative results in Fig. 6 and Table 2. Qualitatively, we find that our method can be used for multiview 3D reconstruction, directly resulting in watertight meshes. The ability to accurately model cavities of the objects shows that our model uses texture information to improve over the visual hull (Fig. 7). Quantitatively, Table 2 shows that our approach rivals the results from highly tuned MVS algorithms. We note that the DTU ground truth is itself sparse (Fig. 6(c)) and methods are therefore rewarded for trading off completeness for accuracy, which explains the better quantitative performance of the baselines for higher trim parameters (Fig. 8).
5 Conclusion and Future Work
In this work we have presented Differentiable Volumetric Rendering (DVR). Observing that volumetric rendering is inherently differentiable for implicit representations allows us to formulate an analytic expression for the gradient of the depth with respect to the network parameters. Our experiments show that DVR enables us to learn implicit 3D shape representations from multi-view imagery without 3D supervision, rivalling models that are learned with full 3D supervision. Moreover, we found that our model can also be used for multi-view 3D reconstruction. We believe that DVR is a useful technique which broadens the scope of applications of implicit shape and texture representations.
In the future, we plan to investigate how to circumvent the need for object masks and camera information, e.g., by predicting soft masks and how to estimate not only texture but also more complex material properties.
Acknowledgments
This work was supported by an NVIDIA research gift. The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Michael Niemeyer.
References
 [1] Henrik Aanæs, Rasmus R. Jensen, George Vogiatzis, Engin Tola, and Anders B. Dahl. Large-scale data for multiple-view stereopsis. International Journal of Computer Vision (IJCV), 120(2):153–168, 2016.

 [2] Panos Achlioptas, Olga Diamanti, Ioannis Mitliagkas, and Leonidas J. Guibas. Learning representations and generative models for 3D point clouds. In Proc. of the International Conf. on Machine learning (ICML), 2018.
 [3] Matan Atzmon, Niv Haim, Lior Yariv, Ofer Israelov, Haggai Maron, and Yaron Lipman. Controlling neural level sets. In Advances in Neural Information Processing Systems (NIPS), 2019.
 [4] Bruce G. Baumgart. Geometric Modeling for Computer Vision. Stanford University, 1974.
 [5] Michael Bleyer, Christoph Rhemann, and Carsten Rother. PatchMatch stereo - stereo matching with slanted support windows. In Proc. of the British Machine Vision Conf. (BMVC), 2011.
 [6] Jeremy S. De Bonet and Paul Viola. Poxels: Probabilistic voxelized volume reconstruction. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 1999.
 [7] Adrian Broadhurst, Tom W. Drummond, and Roberto Cipolla. A probabilistic framework for space carving. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2001.
 [8] André Brock, Theodore Lim, James M. Ritchie, and Nick Weston. Generative and discriminative voxel modeling with convolutional neural networks. arXiv.org, 1608.04236, 2016.
 [9] Neill D.F. Campbell, George Vogiatzis, Carlos Hernández, and Roberto Cipolla. Using multiple hypotheses to improve depthmaps for multiview stereo. In Proc. of the European Conf. on Computer Vision (ECCV), 2008.
 [10] Angel X. Chang, Thomas A. Funkhouser, Leonidas J. Guibas, Pat Hanrahan, QiXing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. ShapeNet: An information-rich 3d model repository. arXiv.org, 1512.03012, 2015.

 [11] Wenzheng Chen, Huan Ling, Jun Gao, Edward Smith, Jaakko Lehtinen, Alec Jacobson, and Sanja Fidler. Learning to predict 3d objects with an interpolation-based differentiable renderer. In Advances in Neural Information Processing Systems (NIPS), 2019.
 [12] Zhiqin Chen and Hao Zhang. Learning implicit fields for generative shape modeling. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2019.
 [13] Christopher B. Choy, Danfei Xu, JunYoung Gwak, Kevin Chen, and Silvio Savarese. 3D-R2N2: A unified approach for single and multi-view 3d object reconstruction. In Proc. of the European Conf. on Computer Vision (ECCV), 2016.
 [14] Brian Curless and Marc Levoy. A volumetric method for building complex models from range images. In ACM Trans. on Graphics, 1996.
 [15] Valentin Deschaintre, Miika Aittala, Frédo Durand, George Drettakis, and Adrien Bousseau. Singleimage SVBRDF capture with a renderingaware deep network. In ACM Trans. on Graphics, 2018.
 [16] Simon Donné and Andreas Geiger. Learning nonvolumetric depth fusion using successive reprojections. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2019.
 [17] Haoqiang Fan, Hao Su, and Leonidas J. Guibas. A point set generation network for 3d object reconstruction from a single image. Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2017.
 [18] Yasutaka Furukawa and Jean Ponce. Accurate, dense, and robust multiview stereopsis. IEEE Trans. on Pattern Analysis and Machine Intelligence (PAMI), 32(8):1362–1376, 2010.
 [19] Matheus Gadelha, Subhransu Maji, and Rui Wang. 3d shape induction from 2d views of multiple objects. In Proc. of the International Conf. on 3D Vision (3DV), 2017.
 [20] Silvano Galliani, Katrin Lasinger, and Konrad Schindler. Gipuma: Massively parallel multiview stereo reconstruction. Publikationen der Deutschen Gesellschaft für Photogrammetrie, Fernerkundung und Geoinformation e. V, 25:361–369, 2016.
 [21] Kyle Genova, Forrester Cole, Aaron Maschinot, Aaron Sarna, Daniel Vlasic, and William T. Freeman. Unsupervised training for 3d morphable model regression. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2018.
 [22] Kyle Genova, Forrester Cole, Daniel Vlasic, Aaron Sarna, William T. Freeman, and Thomas Funkhouser. Learning shape templates with structured implicit functions. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2019.

 [23] Ben Graham. Sparse 3d convolutional neural networks. In Proc. of the British Machine Vision Conf. (BMVC), 2015.
 [24] Thibault Groueix, Matthew Fisher, Vladimir G. Kim, Bryan C. Russell, and Mathieu Aubry. AtlasNet: A papier-mâché approach to learning 3d surface generation. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2018.
 [25] Richard Hartley and Andrew Zisserman. Multiple view geometry in computer vision. Cambridge university press, 2003.
 [26] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2016.
 [27] Po-Han Huang, Kevin Matzen, Johannes Kopf, Narendra Ahuja, and Jia-Bin Huang. DeepMVS: Learning multi-view stereopsis. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2018.
 [28] Zeng Huang, Tianye Li, Weikai Chen, Yajie Zhao, Jun Xing, Chloe LeGendre, Linjie Luo, Chongyang Ma, and Hao Li. Deep volumetric video from very sparse multiview performance capture. In Proc. of the European Conf. on Computer Vision (ECCV), 2018.
 [29] Li Jiang, Shaoshuai Shi, Xiaojuan Qi, and Jiaya Jia. GAL: geometric adversarial loss for single-view 3d object reconstruction. In Proc. of the European Conf. on Computer Vision (ECCV), 2018.
 [30] Angjoo Kanazawa, Shubham Tulsiani, Alexei A. Efros, and Jitendra Malik. Learning categoryspecific mesh reconstruction from image collections. In Proc. of the European Conf. on Computer Vision (ECCV), 2018.
 [31] Hiroharu Kato, Yoshitaka Ushiku, and Tatsuya Harada. Neural 3d mesh renderer. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2018.
 [32] Michael M. Kazhdan and Hugues Hoppe. Screened poisson surface reconstruction. ACM Trans. on Graphics, 32(3):29, 2013.
 [33] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proc. of the International Conf. on Learning Representations (ICLR), 2015.
 [34] Abhijit Kundu, Yin Li, and James M. Rehg. 3d-rcnn: Instance-level 3d object reconstruction via render-and-compare. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2018.
 [35] Kiriakos N. Kutulakos and Steven M. Seitz. A theory of shape by space carving. International Journal of Computer Vision (IJCV), 38(3):199–218, 2000.
 [36] Aldo Laurentini. The visual hull concept for silhouette-based image understanding. IEEE Trans. on Pattern Analysis and Machine Intelligence (PAMI), 16(2):150–162, 1994.
 [37] Vincent Leroy, Jean-Sébastien Franco, and Edmond Boyer. Shape reconstruction using volume sweeping and learned photoconsistency. In Proc. of the European Conf. on Computer Vision (ECCV), 2018.
 [38] Kejie Li, Trung Pham, Huangying Zhan, and Ian D. Reid. Efficient dense point cloud object reconstruction using deformation vector fields. In Proc. of the European Conf. on Computer Vision (ECCV), 2018.
 [39] Yiyi Liao, Simon Donné, and Andreas Geiger. Deep marching cubes: Learning explicit surface representations. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2018.
 [40] Guilin Liu, Duygu Ceylan, Ersin Yumer, Jimei Yang, and Jyh-Ming Lien. Material editing using a physically based rendering network. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2017.
 [41] Shichen Liu, Weikai Chen, Tianye Li, and Hao Li. Soft rasterizer: Differentiable rendering for unsupervised single-view mesh reconstruction. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2019.
 [42] Shichen Liu, Shunsuke Saito, Weikai Chen, and Hao Li. Learning to infer implicit surfaces without 3d supervision. In Advances in Neural Information Processing Systems (NIPS), 2019.
 [43] Stephen Lombardi, Tomas Simon, Jason Saragih, Gabriel Schwartz, Andreas Lehrmann, and Yaser Sheikh. Neural volumes: Learning dynamic renderable volumes from images. In ACM Trans. on Graphics, 2019.
 [44] Matthew M. Loper and Michael J. Black. Opendr: An approximate differentiable renderer. In Proc. of the European Conf. on Computer Vision (ECCV), 2014.
 [45] Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3d reconstruction in function space. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2019.
 [46] Mateusz Michalkiewicz, Jhony K. Pontes, Dominic Jack, Mahsa Baktashmotlagh, and Anders Eriksson. Implicit surface representations as layers in neural networks. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2019.
 [47] Thu Nguyen-Phuoc, Chuan Li, Stephen Balaban, and Yong-Liang Yang. Rendernet: A deep convolutional network for differentiable rendering from 3d shapes. In Advances in Neural Information Processing Systems (NIPS), 2018.
 [48] Michael Niemeyer, Lars Mescheder, Michael Oechsle, and Andreas Geiger. Occupancy flow: 4d reconstruction by learning particle dynamics. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2019.
 [49] Michael Oechsle, Lars Mescheder, Michael Niemeyer, Thilo Strauss, and Andreas Geiger. Texture fields: Learning texture representations in function space. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2019.
 [50] Junyi Pan, Xiaoguang Han, Weikai Chen, Jiapeng Tang, and Kui Jia. Deep mesh reconstruction from single rgb images via topology modification networks. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2019.
 [51] Jeong J. Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2019.
 [52] Despoina Paschalidou, Ali Osman Ulusoy, and Andreas Geiger. Superquadrics revisited: Learning 3d shape parsing beyond cuboids. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2019.
 [53] Despoina Paschalidou, Ali Osman Ulusoy, Carolin Schmitt, Luc van Gool, and Andreas Geiger. Raynet: Learning volumetric 3d reconstruction with ray potentials. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2018.
 [54] Felix Petersen, Amit H. Bermano, Oliver Deussen, and Daniel Cohen-Or. Pix2vex: Image-to-geometry reconstruction using a smooth differentiable renderer. arXiv.org, 2019.
 [55] Andrew Prock and Chuck Dyer. Towards real-time voxel coloring. In Proceedings of the DARPA Image Understanding Workshop, 1998.
 [56] Danilo Jimenez Rezende, S. M. Ali Eslami, Shakir Mohamed, Peter Battaglia, Max Jaderberg, and Nicolas Heess. Unsupervised learning of 3d structure from images. In Advances in Neural Information Processing Systems (NIPS), 2016.
 [57] Elad Richardson, Matan Sela, Roy Or-El, and Ron Kimmel. Learning detailed face reconstruction from a single image. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2017.
 [58] Gernot Riegler, Ali Osman Ulusoy, Horst Bischof, and Andreas Geiger. OctNetFusion: Learning depth fusion from data. In Proc. of the International Conf. on 3D Vision (3DV), 2017.
 [59] Gernot Riegler, Ali Osman Ulusoy, and Andreas Geiger. Octnet: Learning deep 3d representations at high resolutions. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2017.
 [60] Walter Rudin. Principles of mathematical analysis, volume 3. McGraw-Hill, New York, 1964.
 [61] Shunsuke Saito, Zeng Huang, Ryota Natsume, Shigeo Morishima, Angjoo Kanazawa, and Hao Li. Pifu: Pixel-aligned implicit function for high-resolution clothed human digitization. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2019.
 [62] Johannes L. Schönberger and Jan-Michael Frahm. Structure-from-motion revisited. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2016.
 [63] Johannes L. Schönberger, Enliang Zheng, Marc Pollefeys, and Jan-Michael Frahm. Pixelwise view selection for unstructured multi-view stereo. In Proc. of the European Conf. on Computer Vision (ECCV), 2016.
 [64] Steven M. Seitz, Brian Curless, James Diebel, Daniel Scharstein, and Richard Szeliski. A comparison and evaluation of multi-view stereo reconstruction algorithms. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2006.
 [65] Steven M. Seitz and Charles R. Dyer. Photorealistic scene reconstruction by voxel coloring. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 1997.
 [66] Vincent Sitzmann, Michael Zollhöfer, and Gordon Wetzstein. Scene representation networks: Continuous 3d-structure-aware neural scene representations. In Advances in Neural Information Processing Systems (NIPS), 2019.
 [67] David Stutz and Andreas Geiger. Learning 3d shape completion from laser scan data with weak supervision. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2018.
 [68] Maxim Tatarchenko, Alexey Dosovitskiy, and Thomas Brox. Octree generating networks: Efficient convolutional architectures for high-resolution 3d outputs. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2017.
 [69] Ayush Tewari, Michael Zollhöfer, Pablo Garrido, Florian Bernard, Hyeongwoo Kim, Patrick Pérez, and Christian Theobalt. Self-supervised multi-level face model learning for monocular reconstruction at over 250 hz. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2018.
 [70] Ayush Tewari, Michael Zollhöfer, Hyeongwoo Kim, Pablo Garrido, Florian Bernard, Patrick Pérez, and Christian Theobalt. Mofa: Model-based deep convolutional face autoencoder for unsupervised monocular reconstruction. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2017.
 [71] Hugues Thomas, Charles R. Qi, Jean-Emmanuel Deschaud, Beatriz Marcotegui, François Goulette, and Leonidas J. Guibas. Kpconv: Flexible and deformable convolution for point clouds. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2019.
 [72] Engin Tola, Christoph Strecha, and Pascal Fua. Efficient large-scale multi-view stereo for ultra high-resolution image sets. Machine Vision and Applications (MVA), 23(5):903–920, 2012.
 [73] Shubham Tulsiani, Tinghui Zhou, Alexei A. Efros, and Jitendra Malik. Multi-view supervision for single-view reconstruction via differentiable ray consistency. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2017.
 [74] Nanyang Wang, Yinda Zhang, Zhuwen Li, Yanwei Fu, Wei Liu, and YuGang Jiang. Pixel2Mesh: Generating 3D mesh models from single RGB images. In Proc. of the European Conf. on Computer Vision (ECCV), 2018.
 [75] Weiyue Wang, Xu Qiangeng, Duygu Ceylan, Radomir Mech, and Ulrich Neumann. DISN: Deep implicit surface network for high-quality single-view 3d reconstruction. In Advances in Neural Information Processing Systems (NIPS), 2019.
 [76] Jiajun Wu, Chengkai Zhang, Tianfan Xue, Bill Freeman, and Josh Tenenbaum. Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling. In Advances in Neural Information Processing Systems (NIPS), 2016.
 [77] Haozhe Xie, Hongxun Yao, Xiaoshuai Sun, Shangchen Zhou, and Shengping Zhang. Pix2vox: Context-aware 3d reconstruction from single and multi-view images. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2019.
 [78] Guandao Yang, Xun Huang, Zekun Hao, Ming-Yu Liu, Serge Belongie, and Bharath Hariharan. Pointflow: 3d point cloud generation with continuous normalizing flows. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2019.
 [79] Yao Yao, Zixin Luo, Shiwei Li, Tian Fang, and Long Quan. Mvsnet: Depth inference for unstructured multi-view stereo. In Proc. of the European Conf. on Computer Vision (ECCV), 2018.
 [80] Yao Yao, Zixin Luo, Shiwei Li, Tianwei Shen, Tian Fang, and Long Quan. Recurrent mvsnet for high-resolution multi-view stereo depth inference. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2019.
 [81] Jacek Zienkiewicz, Andrew J. Davison, and Stefan Leutenegger. Real-time height map fusion using differentiable rendering. In Proc. IEEE International Conf. on Intelligent Robots and Systems (IROS), 2016.