PyTorch3D is FAIR's library of reusable components for deep learning with 3D data
Deep learning has significantly improved 2D image recognition. Extending into 3D may advance many new applications including autonomous vehicles, virtual and augmented reality, authoring 3D content, and even improving 2D recognition. However despite growing interest, 3D deep learning remains relatively underexplored. We believe that some of this disparity is due to the engineering challenges involved in 3D deep learning, such as efficiently processing heterogeneous data and reframing graphics operations to be differentiable. We address these challenges by introducing PyTorch3D, a library of modular, efficient, and differentiable operators for 3D deep learning. It includes a fast, modular differentiable renderer for meshes and point clouds, enabling analysis-by-synthesis approaches. Compared with other differentiable renderers, PyTorch3D is more modular and efficient, allowing users to more easily extend it while also gracefully scaling to large meshes and images. We compare the PyTorch3D operators and renderer with other implementations and demonstrate significant speed and memory improvements. We also use PyTorch3D to improve the state-of-the-art for unsupervised 3D mesh and point cloud prediction from 2D images on ShapeNet. PyTorch3D is open-source and we hope it will help accelerate research in 3D deep learning.
Over the past decade, deep learning has significantly advanced the ability of AI systems to process 2D image data. We can now build high-performing systems for tasks such as object Russakovsky et al. (2015); Krizhevsky et al. (2012); Simonyan and Zisserman (2015); He et al. (2016) and scene Yu et al. (2015); Zhou et al. (2017) classification, object detection Ren et al. (2015), semantic Long et al. (2015) and instance He et al. (2017) segmentation, and human pose estimation Alp Güler et al. (2018). These systems can operate on complex image data and have been deployed in countless real-world settings. Though successful, these methods suffer from a common shortcoming: they process 2D snapshots and ignore the true 3D nature of the world.
Extending deep learning into 3D can unlock many new applications. Recognizing objects in 3D point clouds Qi et al. (2017a, b) can enhance the sensing abilities of autonomous vehicles, or enable new augmented reality experiences. Predicting depth Eigen et al. (2014); Eigen and Fergus (2015), or 3D shape Choy et al. (2016); Fan et al. (2017); Wang et al. (2018); Novotny et al. (2019) can lift 2D images into 3D. Generative models Wu et al. (2016); Yang et al. (2019); Nash et al. (2020) might one day aid artists in authoring 3D content. Image-based tasks like view synthesis can be improved with 3D representations given only 2D supervision Sitzmann et al. (2019); Wiles et al. (2020); Mildenhall et al. (2020). Despite growing interest, 3D deep learning remains relatively underexplored.
We believe that some of this disparity is due to the significant engineering challenges involved in 3D deep learning. One such challenge is heterogeneous data. 2D images are almost universally represented by regular pixel grids. In contrast, 3D data are stored in a variety of structured formats including voxel grids Choy et al. (2016); Tatarchenko et al. (2017), point clouds Qi et al. (2017a); Fan et al. (2017), and meshes Wang et al. (2018); Gkioxari et al. (2019)
which can exhibit per-element heterogeneity. For example, meshes may differ in their number of vertices and faces, and their topology. Such heterogeneity makes it difficult to efficiently implement batched operations on 3D data using the tensor-centric primitives provided by standard deep learning toolkits like PyTorch Paszke et al. (2019)
and TensorFlow Abadi et al. (2016). A second key challenge is differentiability. The computer graphics community has developed many methods for efficiently processing 3D data. However, to be embedded in deep learning pipelines, each operation must be revisited to also efficiently compute gradients. Some operations, such as camera transformations, admit gradients trivially via automatic differentiation. Others, such as mesh rendering, must be reformulated via differentiable relaxation Loper and Black (2014); Kato et al. (2018); Liu et al. (2019); Chen et al. (2019).
We address these challenges by introducing PyTorch3D, a library of modular, efficient, and well-tested operators for 3D deep learning built on PyTorch Paszke et al. (2019). All operators are fast and differentiable, and many are implemented with custom CUDA kernels to improve efficiency and minimize memory usage. We provide reusable data structures for managing batches of point clouds and meshes, allowing all PyTorch3D operators to support batches of heterogeneous data.
One key feature of PyTorch3D is a modular and efficient differentiable rendering engine for meshes and point clouds. Differentiable rendering projects 3D data to 2D images, enabling analysis-by-synthesis Grenander (1976-1981) and inverse rendering Marschner and Greenberg (1998); Patow and Pueyo (2003) approaches where 3D predictions can be made using only image-level supervision Loper and Black (2014). Compared to other recent differentiable renderers Kato et al. (2018); Liu et al. (2019), ours is both more modular and more scalable. We achieve modularity by decomposing the rendering pipeline into stages (rasterization, lighting, shading, blending) which can easily be replaced with new user-defined components, allowing users to adapt our renderer to their needs. We improve efficiency via two-stage rasterization, and by limiting the number of primitives that influence each pixel.
We compare the optimized PyTorch3D operators with naïve PyTorch implementations and with those provided in other open-source packages, demonstrating improvements in speed and memory usage by up to 10×. We also showcase the flexibility of PyTorch3D by experimenting on the task of unsupervised shape prediction on the ShapeNet Chang et al. (2015) dataset. With our modular differentiable renderer and efficient 3D operators we improve the state-of-the-art for unsupervised 3D mesh and point cloud prediction from 2D images while maintaining high computational throughput.
PyTorch3D is open-source111https://pytorch3d.org/ and will evolve over time. We hope that it will be a valuable tool to the community and help accelerate research in 3D deep learning.
3D deep learning libraries. There are a number of toolkits for 3D deep learning. Fey and Lenssen (2019) focuses on learning on graphs, Valentin et al. (2019) provides differentiable graphics operators, Jatavallabhula et al. (2019) collects commonly used 3D functions. However, they do not provide support for heterogeneous batching of 3D data, crucial for large-scale learning, or modularity for differentiable rendering, crucial for exploration. PyTorch3D introduces data structures that support batches of 3D data with varying sizes and topologies. This key abstraction allows our 3D operators, including rendering, to operate on large heterogeneous batches.
Differentiable renderers. OpenDR Loper and Black (2014) and NMR Kato et al. (2018) perform traditional rasterization in the forward pass and compute approximate gradients in the backward pass. More recently, SoftRas Liu et al. (2019) and DIB-R Chen et al. (2019) propose differentiable renderers by viewing rasterization as a probabilistic process where each pixel’s color depends on multiple mesh faces. Differentiable ray tracing methods, such as Redner Li et al. (2018) and Mitsuba2 Nimier-David et al. (2019), give more photorealistic images at the expense of increased compute.
Differentiable point cloud rendering is explored in Insafutdinov and Dosovitskiy (2018)
which uses ray termination probabilities and stores points in a voxel grid which limits resolution. DSS Yifan et al. (2019) renders each point as a disk. SynSin Wiles et al. (2020) also splats a per-point sphere from a soft z-buffer of fixed length. Most recently, Pulsar Lassner (2020) uses an unlimited z-buffer for rendering but uses the first few points for gradient propagation.
Differentiable rendering is an active research area. PyTorch3D introduces a modular renderer, inspired by Liu et al. (2019), by redesigning and exposing intermediates computed during rasterization. Unlike other differentiable renderers, users can easily customize the rendering pipeline with PyTorch shaders.
3D shape prediction. In Section 4 we experiment with unsupervised 3D shape prediction using the differentiable silhouette and textured renderers for meshes and point clouds in PyTorch3D. There is a vast line of work on 3D shape prediction, including two-view and multiview methods Scharstein and Szeliski (2002); Hartley and Zisserman (2003), model-based approaches Fischler and Elschlager (1973); Brooks et al. (1979); Brooks (1983); Lowe et al. (1991); Blanz and Vetter (2003); Loper et al. (2015), and recent supervised deep methods that predict voxels Choy et al. (2016); Tatarchenko et al. (2017), meshes Groueix et al. (2018); Wang et al. (2018); Smith et al. (2019); Gkioxari et al. (2019), point clouds Fan et al. (2017) and implicit functions Mescheder et al. (2019); Park et al. (2019); Saito et al. (2019). Differentiable renderers allow for unsupervised shape prediction via 2D re-projection losses Tulsiani et al. (2017); Kato et al. (2018); Kanazawa et al. (2018); Tulsiani et al. (2018); Liu et al. (2019).
This section describes the core features of PyTorch3D. For a 3D deep learning library to be effective, 3D operators need to be efficient when handling complex 3D data. We benchmark the speed and memory usage of key PyTorch3D operators, comparing to pure PyTorch and existing open-source implementations. We show that PyTorch3D achieves speedups of up to 10×.
3D data structures. Working with minibatches of data is crucial in deep learning both for stable optimization and computational efficiency. However operating on batches of 3D meshes and point clouds is challenging due to heterogeneity: meshes may have varying numbers of vertices and faces, and point clouds may have varying numbers of points. To overcome this challenge, PyTorch3D provides data structures to manage batches of meshes and point clouds which allow conversion between different tensor-based representations (list, packed, padded) needed for various operations.
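To make the three tensor layouts concrete, here is a minimal pure-PyTorch sketch of the list → padded and list → packed conversions for vertex coordinates. The helper names `list_to_padded` and `list_to_packed` are illustrative; PyTorch3D's actual data structures expose richer functionality than this sketch.

```python
import torch

def list_to_padded(verts_list, pad_value=0.0):
    # Pad variable-length (V_i, 3) tensors into one (N, max_V, 3) tensor.
    max_v = max(v.shape[0] for v in verts_list)
    padded = verts_list[0].new_full((len(verts_list), max_v, 3), pad_value)
    for i, v in enumerate(verts_list):
        padded[i, : v.shape[0]] = v
    return padded

def list_to_packed(verts_list):
    # Concatenate into one (sum V_i, 3) tensor, plus an index tensor
    # mapping each packed row back to its element in the batch.
    packed = torch.cat(verts_list, dim=0)
    batch_idx = torch.cat(
        [torch.full((v.shape[0],), i, dtype=torch.int64)
         for i, v in enumerate(verts_list)]
    )
    return packed, batch_idx

# Two meshes with different numbers of vertices (a heterogeneous batch).
verts_list = [torch.rand(4, 3), torch.rand(7, 3)]
padded = list_to_padded(verts_list)       # shape (2, 7, 3)
packed, idx = list_to_packed(verts_list)  # shapes (11, 3) and (11,)
```

Padded tensors suit batched matrix ops, while packed tensors avoid wasted compute on padding; the index tensor makes conversion between the two cheap.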
Implementation Details. We benchmark with meshes from ShapeNetCoreV1 Chang et al. (2015) using homogeneous and heterogeneous batches. We form point clouds by sampling uniformly from mesh surfaces. All results are averaged over 5 random batches and 10 runs per batch, and are run on a V100 GPU.
We report time and memory usage for a representative set of popular 3D operators, namely Chamfer loss, graph convolution and K nearest neighbors. Other 3D operators in PyTorch3D follow similar trends. We compare to PyTorch and state-of-the-art open-source libraries.
Chamfer loss is a common metric that quantifies agreement between point clouds P and Q. Formally,

L_cham(P, Q) = Σ_{(p,q)∈Λ_{P,Q}} ‖p − q‖² + Σ_{(q,p)∈Λ_{Q,P}} ‖q − p‖²

where Λ_{P,Q} is the set of pairs (p, q) such that q ∈ Q is the nearest neighbor of p ∈ P. A (homogeneously) batched implementation is straightforward in PyTorch, but is inefficient since it requires forming a pairwise distance matrix with B × N × M elements (where B is the batch size and N, M are the point cloud sizes). PyTorch3D avoids this inefficiency (and supports heterogeneity) by using our efficient KNN to compute neighbors. Figure 0(a) compares ours against the naïve approach for varying point cloud sizes. The naïve approach runs out of memory for large point clouds, while ours scales gracefully and substantially reduces time and memory use.
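For reference, the naïve homogeneously batched approach can be written directly in PyTorch; the (B, N, M) pairwise distance tensor is exactly the memory bottleneck described above. This is a minimal sketch of the baseline, not PyTorch3D's KNN-based implementation.

```python
import torch

def chamfer_naive(p, q):
    # p: (B, N, 3), q: (B, M, 3).
    # Forms the full (B, N, M) pairwise squared-distance tensor --
    # the memory cost that the optimized KNN-based version avoids.
    d2 = ((p[:, :, None, :] - q[:, None, :, :]) ** 2).sum(-1)
    # Sum of squared distances to each point's nearest neighbor, both directions.
    return d2.min(dim=2).values.sum(dim=1) + d2.min(dim=1).values.sum(dim=1)

p = torch.rand(2, 128, 3)
q = torch.rand(2, 256, 3)
loss = chamfer_naive(p, q)  # one scalar per batch element, shape (2,)
```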
Graph convolution is widely used for learning on meshes. Given feature vectors f_v for each vertex v, it computes new features f'_v = W0 f_v + Σ_{u∈N(v)} W1 f_u, where N(v) are the neighbors of v in the mesh and W0, W1 are learned weight matrices. PyTorch3D implements graph convolution via a fused CUDA kernel for gather + scatter_add. Figure 0(b) shows that this significantly improves speed and memory use compared against a pure PyTorch implementation.
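A pure-PyTorch version of the gather + scatter_add pattern can be sketched as follows, assuming the common formulation f'_v = W0 f_v + Σ_{u∈N(v)} W1 f_u. The function name and edge-list format are illustrative, not PyTorch3D's exact API.

```python
import torch

def graph_conv(feats, edges, w0, w1):
    # feats: (V, D) vertex features; edges: (E, 2) directed edges (src, dst).
    # Computes f'_v = W0 f_v + sum over neighbors u of W1 f_u.
    src, dst = edges[:, 0], edges[:, 1]
    gathered = (feats @ w1.T)[src]         # gather transformed neighbor features
    out = feats @ w0.T                     # self term
    out = out.index_add(0, dst, gathered)  # scatter_add into destination vertices
    return out

# Tiny example: two vertices that are each other's neighbors, identity weights.
feats = torch.tensor([[1.0, 0.0], [0.0, 1.0]])
edges = torch.tensor([[0, 1], [1, 0]])
out = graph_conv(feats, edges, torch.eye(2), torch.eye(2))
```

With identity weights each vertex simply adds its neighbor's features to its own, which makes the gather/scatter bookkeeping easy to check by hand.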
K nearest neighbors for D-dimensional points are used in Chamfer loss, normal estimation, and other point cloud operations. We implement exact KNN with custom CUDA kernels that natively handle heterogeneous batches. Our implementation is tuned for the small values of K and D typical of 3D problems, and uses template metaprogramming to individually optimize each (K, D) pair. We compare against Faiss Johnson et al. (2017), a fast GPU library for KNN that targets a different portion of the design space: it does not handle batching, is optimized for high-dimensional descriptors, and scales to billions of points. Figures 0(c) and 0(d) show that we outperform Faiss by a wide margin for batched 3D problems.
A renderer inputs scene information (camera, geometry, materials, lights, textures) and outputs an image. A differentiable renderer can also propagate gradients backward from rendered images to scene information Loper and Black (2014), allowing rendering to be embedded into deep learning pipelines Kato et al. (2018); Liu et al. (2019).
PyTorch3D includes a differentiable renderer that operates on heterogeneous batches of triangle meshes. Our renderer follows three core design principles: differentiability, meaning that it computes gradients with respect to all inputs; efficiency, meaning that it runs quickly and scales to large meshes and images; and modularity, meaning that users can easily replace components of the renderer to customize its functionality to their use case and experiment with alternate formulations.
As shown in Figure 1(a), our renderer has two main components: the rasterizer selects the faces affecting each pixel, and the shader computes pixel colors. Through careful design of these components, we improve efficiency and modularity compared to prior differentiable renderers Kato et al. (2018); Liu et al. (2019); Chen et al. (2019).
Rasterizer. The rasterizer first uses a camera to transform meshes from world to view coordinates. Cameras are Python objects and compute gradients via autograd; this aids modularity, as users can easily implement new camera models other than our provided orthographic and perspective cameras.
Next, the core rasterization algorithm finds triangles that intersect each pixel. In traditional rasterization, each pixel is influenced only by its nearest face along the z-axis. As shown in Figure 1(b), this can cause step changes in pixel color as faces move along the z-axis (due to occlusion) and in the xy-plane (due to face boundaries). Following Liu et al. (2019) we soften these nondifferentiabilities by blending the influence of multiple faces for each pixel, and decaying a face's influence toward its boundary.
Our rasterizer departs from Liu et al. (2019) in three ways to improve efficiency and modularity. First, in Liu et al. (2019), pixels are influenced by every face they intersect in the xy-plane; in contrast we constrain pixels to be influenced by only the K nearest faces along the z-axis, computed using per-pixel priority queues. Similar to traditional z-buffering, this lets us quickly discard many faces for each pixel, improving efficiency. We show in Section 4 that this modification does not harm downstream task performance. Second, Liu et al. (2019) naïvely compares each pixel with each face. We improve efficiency using a two-pass approach similar to Laine and Karras (2011), first working on image tiles to eliminate faces before moving to pixels. Third, Liu et al. (2019) fuses rasterization and shading into a monolithic CUDA kernel. We decouple these, and as shown in Figure 1(a) our rasterizer returns Fragment data about the K nearest faces to each pixel: face ID, barycentric coordinates of the pixel in the face, and (signed) pixel-to-face distances along the z-axis and in the xy-plane. This allows shaders to be implemented separately from the rasterizer, significantly improving modularity. This change also improves efficiency, as cached Fragment data can be used to avoid costly recomputation of face-pixel intersections in the backward pass.
Shaders consume the Fragment data produced by the rasterizer, and compute pixel values of the rendered image. They typically involve two stages: first computing K values for each pixel (one for each of the K faces identified by the Fragment data), then blending them to give a final pixel value.
Shaders are Python objects, and Fragment data are stored in PyTorch tensors.
Shaders can thus work with data using standard PyTorch operators, and compute gradients via autograd.
This design is highly modular, as users can easily implement new shaders to customize the renderer.
For example, Algorithm 1 implements the silhouette renderer from Liu et al. (2019) using a two-line shader: dists is a tensor of shape (N, H, W, K) giving signed distances in the xy-plane from each pixel to its K nearest faces (part of the Fragment data), and σ is a hyperparameter. This is simpler than Liu et al. (2019), where silhouette rendering is one path in a monolithic CUDA kernel and gradients are manually computed. Similarly, Algorithm 2 implements the softmax blending algorithm from Liu et al. (2019) for textured rendering. zbuf and dists are part of the Fragment data, and colors is the output from the shader. σ, γ, znear, and zfar are hyperparameters defined by the user.
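The two-line silhouette shader described above can be sketched in PyTorch as follows, assuming SoftRas-style sigmoid blending of the signed xy-distances; the function name and default σ are illustrative rather than PyTorch3D's exact code.

```python
import torch

def silhouette_shader(dists, sigma=1e-4):
    # dists: (N, H, W, K) signed xy-plane distances from each pixel to its
    # K nearest faces (negative inside a face), from the rasterizer's Fragments.
    prob = torch.sigmoid(-dists / sigma)         # per-face coverage probability
    # A pixel belongs to the silhouette unless it lies outside all K faces.
    return 1.0 - torch.prod(1.0 - prob, dim=-1)  # (N, H, W) soft silhouette

inside = torch.full((1, 4, 4, 3), -1.0)   # pixels inside all K faces
outside = torch.full((1, 4, 4, 3), 1.0)   # pixels outside all K faces
```

Because the shader is ordinary PyTorch, autograd propagates gradients through it to the Fragment distances without any hand-written backward pass.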
Shaders can implement complex effects using the Fragment data from the rasterizer. Face IDs can be used to fetch per-face data like normals, colors, or texture coordinates; barycentric coordinates can be used to interpolate data over the face; and distances can be used to blend the influence of faces in different ways. Crucially, all texturing, lighting, and blending logic can be written using PyTorch operators, and differentiated using autograd. We provide a variety of shaders implementing silhouette rendering, flat, Gouraud Gouraud (1971), and Phong Phong (1975) shading with per-vertex colors or texture coordinates, and which blend colors using hard assignment (similar to Kato et al. (2018)) or softmax blending (like Liu et al. (2019)).
Performance. In Figure 3 we benchmark the speed and memory usage of our renderer against SoftRas Liu et al. (2019). We implement shaders to reproduce their silhouette rendering and textured mesh rendering using per-vertex textures and Gouraud shading. Ours is significantly faster, especially for large meshes, higher-resolution images, and heterogeneous batches: for textured rendering of heterogeneous batches of meshes with mean 50k faces each, our renderer is several times faster than Liu et al. (2019). Our renderer uses more GPU memory than Liu et al. (2019) since we explicitly store Fragment data. However our absolute memory use (about 2GB for textured rendering) is small compared to modern GPU capacity (32GB for V100); we believe our improved modularity offsets our memory use.
PyTorch3D also provides an efficient and modular point cloud renderer following the same design as the mesh renderer. It is similarly factored into a rasterizer that finds the K nearest points to each pixel along the z-direction, and shaders written in PyTorch that consume Fragment data from the rasterizer to compute pixel colors. We provide shaders for silhouette and textured point cloud rendering, and users can easily implement custom shaders to customize the rendering pipeline. Like the mesh renderer, the point cloud renderer natively supports heterogeneous batches of points.
Our point cloud renderer uses a similar strategy as our mesh renderer for overcoming the non-differentiabilities discussed in Figure 1(b). Each point is splatted to a circular region in screen space whose opacity decreases away from the region's center. The value of each pixel is computed by blending information for the K nearest points along the z-axis whose splatted regions overlap the pixel.
In our experiments we consider two blending methods: Alpha-compositing and Normalized weighted sums (Norm). Suppose a pixel is overlapped by the splats from K points with opacities α₁, …, α_K sorted in increasing z-order, and the points are associated with feature vectors f₁, …, f_K. Features might be boolean (for silhouette rendering), RGB colors (for textured rendering), or neural features Insafutdinov and Dosovitskiy (2018); Wiles et al. (2020). The blending methods compute features for the pixel:

Alpha: f = Σ_k α_k (∏_{j<k} (1 − α_j)) f_k        Norm: f = (Σ_k α_k f_k) / (Σ_k α_k)
Alpha-compositing uses the depth ordering of points so that nearer points contribute more, while Norm ignores the depth order. Both blending functions are differentiable and can propagate gradients from pixel features backward to both point features and opacities. They can be implemented with a few lines of PyTorch code similar to Algorithms 1 and 2.
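The two blending functions can be sketched for a single pixel as follows; this minimal sketch operates on one pixel's K overlapping points, whereas the real renderer applies the same logic over whole image batches.

```python
import torch

def alpha_composite(alphas, feats):
    # alphas: (K,) opacities in increasing z-order; feats: (K, C) features.
    # Nearer points contribute more via the transmittance cumulative product.
    trans = torch.cumprod(1.0 - alphas, dim=0)
    trans = torch.cat([alphas.new_ones(1), trans[:-1]])  # prod over j < k
    return (trans * alphas)[:, None].mul(feats).sum(dim=0)

def norm_weighted_sum(alphas, feats, eps=1e-8):
    # Depth-agnostic: weights are normalized opacities, regardless of z-order.
    w = alphas / (alphas.sum() + eps)
    return (w[:, None] * feats).sum(dim=0)

# A fully opaque near point followed by a half-opaque far point.
alphas = torch.tensor([1.0, 0.5])
feats = torch.tensor([[1.0, 0.0], [0.0, 1.0]])
```

With these inputs, alpha-compositing returns only the front point's feature (the opaque splat blocks everything behind it), while the normalized sum mixes both.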
We benchmark our point cloud renderer by sampling points from the surface of random ShapeNet meshes, then rendering silhouettes using our two blending functions. We vary the point cloud size, the number of points per pixel (K), and the image size (64, 256). Results are shown in Figure 4.
Our renderer is efficient: rendering a batch of 8 point clouds to a batch of images with K points per pixel takes about 75ms and uses just over 1GB of GPU memory, making it feasible to use the renderer as a differentiable layer when training neural networks. Comparing Figures 3(a) and 3(b) shows similar performance when rendering homogeneous and heterogeneous batches of comparable size. Comparing Figures 3(b) and 3(c) shows that both blending methods have similar memory usage, but Norm is up to 25% faster for large K since it omits the inner cumulative product. Point cloud rendering is generally more efficient than mesh rendering since it requires fewer computations per primitive during rasterization.
Extending supervised learning into 3D is challenging due to the difficulty of obtaining 3D annotations. Extracting 3D via weakly supervised or unsupervised approaches can unlock exciting applications, such as novel view synthesis, 3D content creation for AR/VR and more. Differentiable rendering makes 3D inference via 2D supervision possible. In this section, we experiment with unsupervised 3D shape prediction using PyTorch3D. At test time, models predict an object's 3D shape (point cloud or mesh) from a single RGB image. During training they receive no 3D supervision, instead relying on re-projection losses via differentiable rendering. We compare to SoftRas Liu et al. (2019) and demonstrate superior shape prediction and speed. The efficiency of our renderer allows us to scale to larger images and more complex meshes, setting a new state-of-the-art for unsupervised 3D shape prediction.
Dataset. We experiment on ShapeNetCoreV1 Chang et al. (2015), using the rendered images and train/test splits from Choy et al. (2016). Images are 137×137, and portray instances from 13 object categories from various viewpoints. There are roughly 840K train and 210K test images; we reserve a fraction of the train images for validation.
Metrics. We follow Wang et al. (2018); Gkioxari et al. (2019) for evaluating 3D meshes. We sample 10k points uniformly at random from the surface of predicted and ground-truth meshes. These are compared using Chamfer distance, normal consistency, and F1 score at various distance thresholds τ. Refer to Gkioxari et al. (2019) for more details. To fairly extend evaluation to point cloud models, we predict 10k points per cloud.
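For concreteness, the point-based F1 evaluation described above can be sketched as follows. This is a hypothetical helper: the threshold value and function name are illustrative, and the real protocol samples 10k points per shape.

```python
import torch

def f1_score(pred_pts, gt_pts, tau=0.1, eps=1e-8):
    # pred_pts: (N, 3), gt_pts: (M, 3) points sampled from the two surfaces.
    d = torch.cdist(pred_pts, gt_pts)  # (N, M) pairwise Euclidean distances
    # Precision: fraction of predicted points within tau of some GT point.
    precision = (d.min(dim=1).values < tau).float().mean()
    # Recall: fraction of GT points within tau of some predicted point.
    recall = (d.min(dim=0).values < tau).float().mean()
    return 2 * precision * recall / (precision + recall + eps)

pts = torch.rand(100, 3)
score = f1_score(pts, pts)  # identical point sets give F1 near 1
```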
In this task, we predict 3D object meshes from 2D image inputs with 2D silhouette supervision. Following Liu et al. (2019), we use a 2-view training setup: for each object image in the minibatch we include its corresponding view under a random known transformation. At test time, all models take as input a single image and directly predict the 3D object mesh in camera coordinates. Inspired by recent advances in supervised shape prediction Wang et al. (2018); Gkioxari et al. (2019), we explore the following model designs:
Sphere FC. Closely following Liu et al. (2019), this model learns to deform an initial sphere template with 642 vertices and 1280 faces. The input image is encoded via a CNN backbone followed by two fully connected layers, each of 1024 dimensions, which predict offsets for each vertex.
Sphere GCN. Inspired by recent advances in Wang et al. (2018); Gkioxari et al. (2019), this model uses graph convolutions. Each vertex pools its features from the output of the backbone indexed by its 2D projection. A set of graph convs on the mesh graph predicts vertex offsets. Similar to Sphere FC, this model deforms a sphere mesh. Unlike Sphere FC, image-to-feature alignment is preserved. PyTorch3D's scale efficiency allows us to train a High Res variant which deforms an even larger sphere (2562 verts, 5120 faces).
Voxel GCN. The above models predict shapes homeomorphic to spheres, and are not able to capture varying shape topologies. Recently, Mesh R-CNN Gkioxari et al. (2019) shows that coarse voxel predictions capture instance-specific object topologies. These low fidelity voxel predictions perform poorly as they can’t reconstruct fine structures or smooth surfaces, but when refined with 3D mesh supervision, they become state-of-the-art. To demonstrate PyTorch3D’s flexibility with varying shape topologies and heterogeneous batches, we train a model which makes voxel predictions and refines them via a sequence of graph convs. At train time, this model uses coarse voxel supervision similar to Gkioxari et al. (2019), but unlike Gkioxari et al. (2019) it does not use 3D mesh supervision.
All models minimize the objective L = L_sil + λ_lap L_lap + λ_edge L_edge. L_sil is the negative intersection over union between the rendered and ground truth 2D silhouettes, as in Liu et al. (2019). L_lap and L_edge are Laplacian and edge length mesh regularizers, respectively, ensuring that meshes are smooth. We set λ_lap = 19 and λ_edge = 0.2.
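The negative-IoU silhouette term can be sketched in a few lines of PyTorch. This is a minimal sketch: the function name and the soft intersection/union formulation follow SoftRas-style losses and are assumptions, not necessarily the exact implementation used here.

```python
import torch

def silhouette_iou_loss(pred, gt, eps=1e-6):
    # pred, gt: (N, H, W) soft silhouettes with values in [0, 1].
    # Soft intersection and union via elementwise product and inclusion-exclusion.
    inter = (pred * gt).sum(dim=(1, 2))
    union = (pred + gt - pred * gt).sum(dim=(1, 2))
    return -(inter / (union + eps)).mean()  # negative IoU, averaged over batch

pred = torch.ones(1, 8, 8)
gt = torch.ones(1, 8, 8)
loss = silhouette_iou_loss(pred, gt)  # perfect overlap gives loss near -1
```

Because every operation is differentiable, gradients flow from the loss through the rendered silhouette back to the predicted vertex positions.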
All model variants use a ResNet50 He et al. (2016) backbone, initialized with ImageNet weights. We follow the training schedule from Gkioxari et al. (2019); we use Adam with a constant learning rate for 25 epochs. We use a batch size of 64 across 8 V100 GPUs (8 images per GPU). Sphere GCN and Voxel GCN use a sequence of 3 graph convs, each with 512 dimensions. The voxel head in Voxel GCN predicts voxels via a 4-layer CNN, identical to Gkioxari et al. (2019). For all models, the inputs are 137×137 images from Choy et al. (2016). We report performance using our mesh renderer and compare to SoftRas Liu et al. (2019).
Table 1 shows our results. We compare to the supervised state-of-the-art Mesh R-CNN Gkioxari et al. (2019) and its Voxel Only variant, the latter being a direct comparison to Voxel GCN as both use the same 3D supervision – coarse voxels but no meshes. We render at 64×64, similar to SoftRas Liu et al. (2019), and push to even higher rendering resolution at 128×128. Figure 4(a) compares the models qualitatively.
|Model|Voxel superv.|Mesh superv.|Render size|Renderer|Chamfer|Normal|F1@0.1|F1@0.3|F1@0.5|Verts|Faces|
|High Res Sphere GCN|✗|✗|128|PyTorch3D|0.281|0.696|26.7|73.8|87.8|2562|5120|
|Voxel Only Gkioxari et al. (2019)|✓|✗|n/a|n/a|0.916|0.595|7.70|33.1|54.9|–|–|
|Mesh R-CNN Gkioxari et al. (2019)|✓|✓|n/a|n/a|0.171|0.713|35.1|82.6|93.2|–|–|
From Table 1 we observe: (a) Compared to SoftRas, PyTorch3D achieves on-par or better performance across models. This validates the design of the PyTorch3D renderer and shows that rendering the K closest faces, instead of all faces, does not hurt performance. (b) Even though Sphere FC and Sphere GCN deform the same sphere, Sphere GCN is superior across renderers. Figure 4(a) backs this claim qualitatively. Unlike Sphere GCN, Sphere FC is sensitive to rendering size (row 4 vs 6). (c) High Res Sphere GCN significantly outperforms Sphere GCN both quantitatively (row 10 vs 11) and qualitatively (Figure 4(a)), showing the advantages of PyTorch3D's scale efficiency. (d) Voxel GCN significantly outperforms Voxel Only, both trained with the same 3D supervision. As mentioned in Gkioxari et al. (2019), Voxel Only performs poorly, since voxel predictions are coarse and fail to capture fine shapes. Voxel GCN improves all metrics and reconstructs fine structures with complex topologies as shown in Figure 4(a). Here, PyTorch3D's efficiency results in a 2× training speedup compared to SoftRas.
Varying K. As described in Section 3, the PyTorch3D renderer exposes the K nearest faces per pixel. We experiment with different values of K for Sphere GCN and Voxel GCN at 128×128 resolution in Table 3. We observe that K=50 results in the best performance for both models (Chamfer 0.293 for Sphere GCN and 0.277 for Voxel GCN). More interestingly, a small value of K (=20) works well for smaller meshes (Sphere GCN) but results in a performance drop for larger meshes (Voxel GCN) at the same output image size. This is expected since, for the same K, fewer faces per pixel are rendered in proportion to the size of a larger mesh. Finally, increasing K does not improve models further. This empirically validates our design of rendering a fixed finite number of faces per pixel.
Textured rendering. In addition to shapes, we reconstruct object textures by extending the above models to predict per-vertex values using textured rendering. The models are trained with an additional loss between the rendered and the ground truth image. Table 2 shows our analysis. Simultaneous shape and texture prediction is harder, yet our models achieve high reconstruction quality for both shapes (compared to Table 1) and textures. Figure 4(b) shows qualitative results.
To show the effectiveness of the point cloud renderer, we train unsupervised point cloud models. Our model, called Point Align, deforms 10k points sampled from a sphere by pooling backbone features and predicting per-point offsets. Table 4 shows our analysis. Point Align slightly improves shape metrics compared to meshes (Table 4 vs Table 1). As with meshes, a finite K (=100) performs best while increasing K does not improve models further. We reconstruct texture by predicting additional values per point via textured rendering (Table 6). Figure 6 shows qualitative results. Finally, we compare Point Align to the supervised PSG Fan et al. (2017) baseline in Table 6; we significantly improve F1 and show slightly worse Chamfer, which is expected as PSG directly minimizes Chamfer with 3D supervision, while our model is not trained to minimize any 3D metric and uses no 3D supervision.
In this paper we introduce PyTorch3D, a library for 3D research which is fully differentiable to enable easy inclusion into existing deep learning pipelines, modular for fast experimentation and extension, optimized and efficient to allow scaling to large 3D data sizes, and with heterogeneous batching capabilities to support the variable shape topologies encountered in real world use cases.
Similar to PyTorch or TensorFlow, which provide fundamental tools for deep learning, PyTorch3D is a library of building blocks optimized for 3D deep learning. Frameworks like PyTorch and PyTorch3D provide a platform for solving a plethora of AI problems including semantic or synthesis tasks. Data-driven solutions to such problems necessitate the community’s caution regarding their potential impact, especially when these models are being deployed in the real world. In this work, our goal is to provide students, researchers and engineers with the best possible tools to accelerate research at the intersection of 3D and deep learning. To this end, we are committed to support and develop PyTorch3D in adherence with the needs of the academic, research and engineering community.
The batches of 3D data used in the benchmarks in Section 3 were sampled from ShapeNetCoreV1 using a uniform sampling strategy. For meshes, homogeneous batches (σ = 0, where σ denotes the variance in size for elements within the batch) consist of one mesh with the specified number of faces, repeated N times, with N being the batch size. For heterogeneous batches (σ > 0), we sample N values from a uniform distribution with the specified mean and variance in the number of faces per mesh. For each value, we find the mesh in the dataset whose number of faces is closest to the desired value.
For point cloud operators, we first sample a random mesh from the dataset and then uniformly sample points from the surface of the mesh. For homogeneous batches, the same number of points is sampled times. For heterogeneous batches, values for the number of points in each point cloud is sampled from a uniform distribution with the specified and .
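The batch construction above can be sketched as follows. This is an illustrative sketch, not the benchmark script; the helper names (sample_batch_sizes, closest_meshes) are our own, and the uniform distribution is taken over [μ − σ, μ + σ] as an assumption.

```python
import torch

def sample_batch_sizes(mu, sigma, batch_size, generator=None):
    """Draw per-element sizes (faces or points) for a batch.

    Sizes are sampled uniformly from [mu - sigma, mu + sigma]; sigma = 0
    reduces to a homogeneous batch where every element has exactly mu
    faces/points.
    """
    sizes = torch.randint(mu - sigma, mu + sigma + 1, (batch_size,),
                          generator=generator)
    return sizes

def closest_meshes(dataset_face_counts, target_sizes):
    """For each target size, pick the index of the dataset mesh whose
    face count is closest to the desired value."""
    counts = torch.as_tensor(dataset_face_counts)
    return (counts[None, :] - target_sizes[:, None]).abs().argmin(dim=1)
```

For point clouds, the same size-sampling step is followed by uniform surface sampling from a randomly chosen mesh rather than a nearest-size lookup.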
In Section 4, we experiment with unsupervised 3D shape prediction from a single image using the PyTorch3D renderers. Predicting a 3D shape from an input image is ambiguous, as infinitely many 3D shapes can explain a 2D image. In the absence of ground truth supervision, we resolve this ambiguity by assuming two views of the object at train time. This setup is also assumed in SoftRas Liu et al. (2019).
The 2-view training setup is shown in Figure 7(a). When constructing minibatches, for every input image sampled, we assume an additional view under a known rotation R and translation t. The predicted shape from the input is then transformed by (R, t) and its rendered output is compared against the ground truth silhouette of the second view. At test time, the model takes as input a single image and predicts the object's 3D shape in camera coordinates, as shown in Figure 7(b). In the case of textured rendering, the textured views are additionally compared at train time. We do the same for point clouds.
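A minimal sketch of one 2-view training step follows. The helper names are hypothetical, `render_silhouette` stands in for a differentiable silhouette renderer, and the negative soft-IoU form of the silhouette loss is an assumption (it is the form used by SoftRas-style pipelines).

```python
import torch

def two_view_silhouette_loss(verts, faces, R, t,
                             render_silhouette, gt_sil_1, gt_sil_2):
    """One training step of the 2-view setup (illustrative helper names).

    R (3x3) and t (3,) are the known relative rotation/translation from
    the input view to the second view.
    """
    def sil_loss(pred, gt):
        # negative soft-IoU between rendered and ground truth silhouettes
        inter = (pred * gt).sum()
        union = (pred + gt - pred * gt).sum()
        return 1.0 - inter / union.clamp(min=1e-8)

    sil_1 = render_silhouette(verts, faces)   # predicted shape, input view
    verts_2 = verts @ R.T + t                 # transform shape into view 2
    sil_2 = render_silhouette(verts_2, faces)
    return sil_loss(sil_1, gt_sil_1) + sil_loss(sil_2, gt_sil_2)
```

At test time only the first branch runs, with no loss computation.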
In the case of meshes, we experiment with three architectures for the 3D shape network, namely Sphere FC, Sphere GCN and Voxel GCN. The first two deform an initial sphere template and use no 3D shape supervision. The third refines shape topologies predicted by the voxel head and tests the effectiveness of differentiable rendering for the varying shapes and connectivities predicted by a neural network. These topologies are different from, and more diverse than, human-defined or simple genus-0 shapes, allowing us to stress-test our renderer. Voxel GCN uses coarse voxel supervision to train the voxel head, which learns to predict instance-specific, yet coarse, shapes. Note that Voxel GCN is identical to Mesh R-CNN Gkioxari et al. (2019), but unlike Mesh R-CNN it uses no mesh supervision. The details of the architectures for all three models are shown in Figure 8. For each model, we show the network modules as well as the shapes of the intermediate batched tensor outputs. Note that for Voxel GCN, the shapes of the meshes vary within the batch due to the heterogeneity of the voxel predictions.
where Ŝ and S are the predicted and ground truth silhouettes. To enforce smoothness in the predicted shapes we also use shape regularizers. We use an edge length regularizer L_edge that minimizes the length of the edges in the predicted mesh, identical to Mesh R-CNN Gkioxari et al. (2019). We also use a Laplacian regularizer L_lap, defined as follows
where L is the Laplacian matrix of the shape and V are the vertices of the shape. We use PyTorch3D's implementation of this loss, which handles heterogeneous batches efficiently. At train time, the loss is averaged across vertices and elements in the batch. This is unlike SoftRas Liu et al. (2019), where it is summed across vertices and across elements in the batch.
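The two regularizers can be sketched in plain PyTorch as follows. This is a minimal single-mesh sketch assuming the uniform Laplacian (neighbor mean minus the vertex), with per-vertex averaging; PyTorch3D's batched, heterogeneous-aware versions live in pytorch3d.loss (mesh_laplacian_smoothing, mesh_edge_loss).

```python
import torch

def uniform_laplacian_loss(verts, edges):
    """Uniform Laplacian smoothness term, averaged over vertices.

    For each vertex v, L(v) is the mean of its neighbors minus v itself;
    the loss is the mean norm of L(v) over all vertices.
    """
    V = verts.shape[0]
    # dense adjacency built from the undirected edge list (fine for a sketch)
    adj = torch.zeros(V, V)
    adj[edges[:, 0], edges[:, 1]] = 1.0
    adj[edges[:, 1], edges[:, 0]] = 1.0
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    lap = adj @ verts / deg - verts          # (V, 3) Laplacian coordinates
    return lap.norm(dim=1).mean()

def edge_length_loss(verts, edges):
    """Mean squared edge length (the edge regularizer)."""
    v0, v1 = verts[edges[:, 0]], verts[edges[:, 1]]
    return ((v0 - v1) ** 2).sum(dim=1).mean()
```

Averaging (rather than summing) over vertices and batch elements keeps the regularizer scale independent of mesh size, which matters for the heterogeneous batches produced by the voxel head.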
More discussion on Table 1 and Figure 4(a) Table 1 compares performance for different shape networks with PyTorch3D and SoftRas under two rendering resolutions. We replicate the original experiments in SoftRas Liu et al. (2019), which use a Sphere FC model to deform a sphere of 642 vertices and 1280 faces and render at 64×64. While PyTorch3D performs better under that setting, the absolute performance of both SoftRas (chamfer 1.475) and PyTorch3D (chamfer 0.989) remains poor. Moving to higher rendering resolutions improves both models (chamfer 0.346 with SoftRas vs 0.313 with PyTorch3D). Sphere GCN, a more geometry-aware network architecture, performs much better for all rendering resolutions and all rendering engines (chamfer 0.301 with SoftRas vs chamfer 0.293 with PyTorch3D at 128×128). This is also evident in Figure 4(a), where we show predictions for all models at a 128×128 rendering resolution. Sphere FC is able to capture a global pose and appearance but fails to capture instance-specific shape details. Sphere GCN, which deforms the same sized sphere template, is able to reconstruct instance-specific shapes, such as chair legs and tables, much more accurately. We take advantage of PyTorch3D's scale efficiency and use a larger sphere, where we immediately see improved reconstruction quality (chamfer 0.281 with PyTorch3D). However, Sphere FC and Sphere GCN can only make predictions homeomorphic to spheres. This forces them to produce multiple face intersections and irregularly shaped faces in order to capture complex shape topologies. Our Voxel GCN variant tests the ability of our renderer to make predictions of any genus by refining shape topologies predicted by a neural network. The reconstruction quality improves for Voxel GCN, which captures holes and complex shapes while maintaining regularly shaped faces. More importantly, Voxel GCN improves on the Voxel Only baseline, which also predicts voxels but performs no further refinement.
Even though both models are trained with the same supervision, Voxel GCN achieves a chamfer of 0.267 compared to 0.916 for Voxel Only.
More visualizations We show additional mesh predictions via silhouette rendering in Figure 9 and via textured rendering in Figure 10. Our texture model is simple; we predict texture values per vertex. The shaders interpolate the vertex textures into face textures according to their shading algorithm, and the blending function accumulates the face textures for each pixel. More sophisticated texture models could involve directly sampling vertex textures from the input image or using GANs for natural-looking texture predictions. Despite its simplicity, our texture model does well at reconstructing textures. Textures are more accurate for Voxel GCN, mainly because of the regularly shaped and sized faces in the predicted mesh. This is in contrast to Sphere GCN, where faces can intersect and vary greatly in size in an effort to capture the complex object shape, which in turn affects the interpolated face textures. In addition, simple shading functions, such as flat shading, visibly lead to worse textures, as expected. More sophisticated shaders, like Phong and Gouraud, lead to better texture reconstructions.
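The core of per-vertex texturing is barycentric interpolation of vertex colors at each covered pixel, which can be sketched as follows (the function name and tensor layouts are illustrative; the rasterizer is assumed to supply the per-pixel face index and barycentric coordinates):

```python
import torch

def interpolate_vertex_textures(vert_tex, faces, face_idx, bary):
    """Interpolate per-vertex colors to per-pixel colors.

    vert_tex: (V, C) color at each vertex.
    faces:    (F, 3) vertex indices per triangle.
    face_idx: (P,)   which face covers each pixel.
    bary:     (P, 3) barycentric coordinates of each pixel in its face.
    Returns (P, C) interpolated pixel colors.
    """
    tri_tex = vert_tex[faces[face_idx]]            # (P, 3, C)
    return (bary[..., None] * tri_tex).sum(dim=1)  # (P, C)
```

Irregular, intersecting faces (as in Sphere GCN) distort the barycentric weights relative to the true surface, which is one way to see why face regularity helps texture quality.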
Expanding on the brief description of the unsupervised point cloud prediction model in the main paper, here we provide more details and analyze our results further. Our training and inference procedures are similar to meshes; we follow the same 2-view train setup, while at test time our model only takes as input a single RGB image (Figure 7). Our Point Align model is similar to Sphere GCN. We start from an initial point cloud of 10k points sampled randomly and uniformly from the surface of a sphere. Each point samples features from the backbone using Vert Align. A sequence of fully connected layers replaces Graph Conv (point clouds do not have connectivity patterns), resulting in per-point offset predictions (and texture values, in the case of textured models). The network architecture is shown in Figure 11.
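The offset-prediction head described above can be sketched as a small MLP (a hypothetical module; the class name, layer widths, and depth are our own choices, and the per-point backbone features are assumed to come from a Vert Align-style sampling step):

```python
import torch
import torch.nn as nn

class PointOffsetHead(nn.Module):
    """Point-Align-style head: per-point features in, per-point offsets out.

    A textured variant would add extra output channels for texture values.
    """
    def __init__(self, feat_dim, hidden=128):
        super().__init__()
        # fully connected layers replace Graph Conv, since point clouds
        # carry no connectivity to convolve over
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, points, feats):
        # points: (N, 3) sphere samples; feats: (N, feat_dim) image features
        offsets = self.mlp(torch.cat([points, feats], dim=-1))
        return points + offsets
```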
Losses Point Align is trained solely with the silhouette loss for silhouette rendering, plus an additional loss between the rendered and ground truth image for textured rendering. There are no shape regularizers.
Blending In the case of textured rendering, we experiment with two blending (or compositing) functions, Alpha and Norm.
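The two compositing schemes can be sketched as follows, assuming the point rasterizer supplies, per pixel, the weights and features of the K nearest points sorted near to far (tensor layouts are illustrative; PyTorch3D provides batched versions of both compositors):

```python
import torch

def alpha_composite(weights, features):
    """Front-to-back alpha compositing of the K points covering each pixel.

    weights:  (K, H, W) opacity of each point, ordered near to far.
    features: (C, K, H, W) per-point features (e.g. colors).
    Returns (C, H, W) composited features.
    """
    # transmittance: product of (1 - w) over all strictly closer points
    transmit = torch.cumprod(1.0 - weights, dim=0)
    transmit = torch.cat([torch.ones_like(weights[:1]), transmit[:-1]], dim=0)
    alpha = weights * transmit                      # (K, H, W)
    return (features * alpha[None]).sum(dim=1)      # (C, H, W)

def norm_composite(weights, features):
    """Normalized weighted sum: order-independent blending of the K points."""
    norm = weights.sum(dim=0, keepdim=True).clamp(min=1e-8)
    return (features * (weights / norm)[None]).sum(dim=1)
```

Alpha compositing is depth-order dependent (nearer points occlude farther ones), while the Norm compositor treats all K points symmetrically.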
More discussion on Table 4 Our point cloud evaluation in Table 4 is directly comparable to that of meshes in Table 1; we use the same number of points, 10k, to compute chamfer and the other shape metrics for meshes (by sampling points from the mesh surface) and point clouds (directly using the points in the cloud). From the comparison with meshes, we observe that our unsupervised point cloud model leads to slightly better reconstruction quality than meshes (chamfer 0.272 for Point Align vs 0.281 for High Res Sphere GCN). This shows that our point cloud renderer is effective for learning shapes. The quality of our reconstructions is shown in Figure 12, which provides more shape and texture predictions by our Point Align model with the alpha and norm compositors.
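For reference, the Chamfer metric used throughout these comparisons can be sketched as follows (an unbatched sketch using squared distances; the exact normalization in the paper's evaluation protocol may differ):

```python
import torch

def chamfer_distance(x, y):
    """Symmetric Chamfer distance between point sets x (N, 3) and y (M, 3).

    Mean squared nearest-neighbor distance from x to y, plus the same
    from y to x.
    """
    d2 = torch.cdist(x, y) ** 2              # (N, M) pairwise squared dists
    return d2.min(dim=1).values.mean() + d2.min(dim=0).values.mean()
```

For meshes the two 10k-point sets are sampled from the predicted and ground truth surfaces; for point clouds the predicted points are used directly.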